Page 11 - Developer

information in Brawley and Tatarchuk's original ShaderX article [see Reference 4]. However, the general idea is that when you render the surface you wish to apply parallax mapping to, for each pixel you cast a line segment along the view direction at progressive depths into the depth map. When the line segment has penetrated the depth map, you use that penetration point to determine the color seen from that view direction.

Stereo reprojection, at its heart, is essentially the same process. We have a color image and its corresponding depth information (albeit using a perspective projection rather than an orthographic one), and we have a new view of the same scene we wish to render. We could just use the same algorithm by progressively casting line segments per pixel to generate the new view. However, because of the nature of stereo reprojection, we can simplify things greatly. Figure 4 shows the situation for a desired location in the right eye view (in green). We create sample points along a ray from the right viewpoint through the view plane (in this case, the convergence plane) location for our current pixel. Since the view separation is only in the x-direction, we end up only casting the ray along x and z; the y-value of the ray will always be 0.

For the sake of clarity, we start our sampling in Figure 4 at the convergence plane, though we will usually want to start between the eye point and the convergence plane to get some negative parallax. For each sample point (in yellow), we cast a new ray from the left eye, determine the view plane position for that ray, and look up the depth by mapping that view plane position to texture space. If this depth is less than the depth of the sample point, we stop and use an interpolation of the sample point's depth and the left eye depth to get our final depth value, which we use to reproject from the right viewpoint to get our final color.

However, because we are only searching in the xz directions, we can think of this in a different way (see Figure 5). In this case, we know the corresponding view plane position in the left view will have the same y value as our desired view plane location, so we'll just search a range of candidate screen positions P* along a line of constant y. Here we start with our pixel's position from the right eye viewpoint (in green) and iterate in the -x direction (to yellow and orange). So, rather than casting a ray and then computing a view plane position to look up a depth value, we'll just use the depth at the current P* to reproject into the right eye view. If its position is close to our desired location then we'll stop, and use that depth as our final depth to look up the color value for our pixel.

This raises the question of how we perform our reprojection: namely, how do we use a view plane position P_left in the left view and the depth to determine the corresponding view plane position P_right in the right view? We can simplify the problem down to one of finding the displacement along the convergence plane between the left view and the right view (see Figure 6). This is a case of computing a ratio of similar triangles: the distance between P_left and P_right is to the distance between the two view positions as the distance from the convergence plane is to the distance from the view positions. This gives us the final equations in Figure 6.

Since this is using a ratio of distances, it will work just as well whether we're in NDC space or texture space, but it assumes that the depth we have is linear. However, if we use standard perspective projection, the depth

LISTING 1: The core of the fragment shader (using GLSL).

    uniform sampler2D colorTexture;
    uniform sampler2D depthTexture;
    uniform vec4 stereoParams;
    varying vec2 uv;

    void main()
    {
        vec2 outTexCoords = uv - vec2(stereoParams.x, 0.0);
        vec2 reprojTexCoords = uv;
        float dir = sign(stereoParams.y);
        float origX = uv.x;
        float shift = stereoParams.z;
        float step = 0.0625*stereoParams.w;
        int index = 0;

        while (index <= 16)
        {
            reprojTexCoords.x = origX + shift;
            float L = getLinearDepth(depthTexture, reprojTexCoords);
            float depthAdjust = stereoParams.x + stereoParams.y / L;
            float test = shift + depthAdjust;
            if (dir*test >= 0.0)
            {
                outTexCoords.x = origX - depthAdjust;
                break;
            }
            shift = shift + step;
            index = index + 1;
        }
        gl_FragColor = texture2D(colorTexture, outTexCoords);
    }
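The pixel-match search of Listing 1 can be sketched on the CPU to make its control flow easier to follow. The Python translation below is our own: the names, the `get_linear_depth` callback (a stand-in for the shader's depth-texture lookup along the constant-y scanline), and the reading of the parameter packing (disparity modeled as a + b/depth, matching `depthAdjust` in the shader) are assumptions inferred from the listing, not confirmed by the article.

```python
def reproject_x(orig_x, get_linear_depth, stereo_params, max_steps=16):
    """CPU sketch of the Listing 1 search loop (our naming).

    stereo_params = (a, b, shift0, step_scale), mirroring
    stereoParams.xyzw in the shader: disparity is a + b / depth.
    get_linear_depth(x) stands in for sampling the depth texture
    along the line of constant y.
    """
    a, b, shift0, step_scale = stereo_params
    out_x = orig_x - a                 # fallback, as in the shader
    shift = shift0
    step = 0.0625 * step_scale         # 1/16th of the search range
    direction = 1.0 if b >= 0.0 else -1.0   # sign(stereoParams.y)

    for _ in range(max_steps + 1):     # while (index <= 16)
        depth = get_linear_depth(orig_x + shift)
        depth_adjust = a + b / depth
        # Stop once the candidate's reprojection crosses our pixel.
        if direction * (shift + depth_adjust) >= 0.0:
            out_x = orig_x - depth_adjust
            break
        shift += step
    return out_x


# Sanity check: a constant-depth scene with a + b/depth == 0 sits at
# zero parallax, so the search should return the pixel's own x.
flat = reproject_x(0.5, lambda x: 2.0, (-0.01, 0.02, -0.05, 1.0))
```

With those parameters, `depth_adjust` is zero everywhere, so `flat` comes back as the original 0.5; a depth-varying `get_linear_depth` exercises the early-out the same way the shader does.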
FIGURE 4: Stereo reprojection using ray-casting.

FIGURE 5: Stereo reprojection using pixel match.
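The similar-triangles ratio behind the Figure 6 equations can be checked numerically. This is a minimal sketch, assuming an eye separation `eye_sep` and convergence distance `conv_dist` (our symbols, not the article's) and a linear depth value, as the text requires:

```python
def stereo_displacement(depth, eye_sep, conv_dist):
    """Displacement between P_left and P_right along the convergence
    plane, by similar triangles: displacement is to eye_sep as the
    distance from the convergence plane (depth - conv_dist) is to the
    distance from the view positions (depth)."""
    return eye_sep * (depth - conv_dist) / depth


# Points on the convergence plane have zero parallax...
on_plane = stereo_displacement(2.0, 0.065, 2.0)       # -> 0.0
# ...and the displacement approaches the full eye separation
# as depth goes to infinity.
far_away = stereo_displacement(1.0e9, 0.065, 2.0)
```

Expanding the ratio gives eye_sep - eye_sep*conv_dist/depth, which is consistent with the a + b/depth form of `depthAdjust` in Listing 1.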

                                                                                            WWW.GDMAG.COM  9