Figure 6: Computing reprojection displacement.
values in the buffer are nonlinear. Under normal circumstances, All 4 One's engine provides a second floating-point depth buffer which is linear (it's used for computing SSAO, among other things), but because the stereo buffers require more VRAM, activating stereo disables SSAO and removes this extra depth buffer. We solved this by using a scratch area of VRAM to compute the floating-point buffer just before applying the reprojection. Note that you do need a certain amount of resolution in your depth; otherwise you may get strange striations across your stereo images as you suddenly step from one depth range to another.
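As a rough illustration of that linearization step, here is a minimal sketch of recovering linear view-space depth from a nonlinear buffer. It assumes a conventional perspective projection writing depth into [0, 1]; the near and far distances are placeholder values, not All 4 One's.

#include <stdio.h>

/* Placeholder clip-plane distances; a real engine would use its own values. */
#define NEAR_PLANE 0.25f
#define FAR_PLANE  1000.0f

/* Convert a nonlinear [0,1] depth-buffer value back to linear view-space
 * depth, assuming a standard perspective projection. With a nonlinear
 * buffer most precision sits near the camera, so distant depths quantize
 * coarsely; coarse depth steps are one source of the striation artifacts
 * mentioned above. */
static float linearize_depth(float d)
{
    return (NEAR_PLANE * FAR_PLANE) /
           (FAR_PLANE - d * (FAR_PLANE - NEAR_PLANE));
}

int main(void)
{
    /* A few sample buffer values show how unevenly depth is distributed. */
    const float samples[] = { 0.0f, 0.5f, 0.9f, 0.99f, 0.999f };
    for (int i = 0; i < 5; ++i)
        printf("buffer %.3f -> view-space z %.2f\n",
               samples[i], linearize_depth(samples[i]));
    return 0;
}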
Summarizing the algorithm:

For each P_right (i.e., the pixel we're considering in the fragment shader):
    Iterate along values of P* for the left view
        Reproject to get a candidate location
        If it is past P_right, we have a hit
        Use the color values for our current and previous P* to set our final color
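As a concrete illustration of that summary, here is a CPU-side sketch of the per-pixel search. It is only a sketch, not the shipped shader: the disparity model (zero parallax at a convergence distance, approaching the full separation s at infinity), the search direction convention, and the names search_scanline, disparity_texels, and convergence are assumptions standing in for the details given with Figure 6.

/* Assumed disparity model: zero parallax at the convergence distance,
 * approaching the full separation s (in texels) at infinity, and going
 * negative in front of the convergence plane. */
static float disparity_texels(float z, float s, float convergence)
{
    return s * (1.0f - convergence / z);
}

static int clamp_int(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* For the right-eye pixel x_right on scanline y, walk candidate texels P*
 * of the left-eye (source) image across [x_right + lo, x_right + hi],
 * splitting that range into num_samples steps. Each candidate is
 * reprojected into the right eye using its linear depth; once it lands at
 * or past x_right we have a hit. The previous candidate is also reported
 * so the caller can blend colors. */
static int search_scanline(const float *linear_depth, int width, int y,
                           int x_right, float s, float convergence,
                           float lo, float hi, int num_samples,
                           int *prev_out)
{
    float step = num_samples > 1 ? (hi - lo) / (float)(num_samples - 1) : 0.0f;
    int prev = clamp_int(x_right + (int)lo, 0, width - 1);

    for (int i = 0; i < num_samples; ++i) {
        int x = clamp_int(x_right + (int)(lo + step * (float)i), 0, width - 1);
        float z = linear_depth[y * width + x];
        float reprojected = (float)x + disparity_texels(z, s, convergence);

        if (reprojected >= (float)x_right) {
            *prev_out = prev;   /* previous candidate, kept for the blend   */
            return x;           /* hit: candidate lands at or past P_right  */
        }
        prev = x;
    }
    *prev_out = prev;
    return prev;                /* no crossing found: use the last sample   */
}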
This search strategy raises a few questions. First, what is the range of the search for P*, and what step size should we use? As z approaches infinity, the magnitude of the displacement between P_left and P_right approaches s, which means the maximum possible value of separation for positive parallax will be s. So for my initial algorithm I chose a range of texels in the source view from P_right - s to P_right + s, and stepped one texel at a time. This provided a certain amount of negative parallax without unduly increasing the search. However, by stepping one texel at a time, the number of texels we search increases as s increases. Since s is one of the knobs we can turn to improve our stereo result, we will either have poor performance or be limited to fairly flat-looking images.

We derived a better approach from parallax mapping. In my final revision, I chose a constant number of iterations and split the search space into a fixed number of samples. In the parallax mapping case, the samples are along the search ray; in our case, they are along the scanline. One approach (see Reference 5) uses a single sample at the farthest depth, but I didn't get very good results with it. Instead, 16 samples presented the best trade-off between quality and speed. Reducing the amount of negative parallax by searching only from P_right - s/2 to P_right + s also helped the quality.
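In terms of the hypothetical search_scanline sketch above, the two strategies differ only in how the range is sampled; the calls below plug in the numbers from the text, with everything else remaining assumed.

/* Hypothetical driver contrasting the two sampling strategies described
 * in the text, expressed with the search_scanline sketch above. */
static void resolve_right_eye_pixel(const float *linear_depth, int width,
                                    int y, int x_right,
                                    float s, float convergence)
{
    int prev_initial, prev_final;

    /* Initial version: one-texel steps over [P_right - s, P_right + s];
     * the sample count, and therefore the cost, grows with s. */
    int hit_initial = search_scanline(linear_depth, width, y, x_right,
                                      s, convergence,
                                      -s, +s, (int)(2.0f * s) + 1,
                                      &prev_initial);

    /* Final version: a constant 16 samples over [P_right - s/2, P_right + s];
     * the cost no longer depends on s, and less negative parallax is searched. */
    int hit_final = search_scanline(linear_depth, width, y, x_right,
                                    s, convergence,
                                    -0.5f * s, +s, 16, &prev_final);

    (void)hit_initial; (void)prev_initial;   /* color lookup not shown here */
    (void)hit_final;   (void)prev_final;
}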
As far as the final color value goes, for the initial algorithm I used a linear interpolation of the current and previous texel colors, based on the previous and current screen-space positions. This made for muddy edges, particularly when an edge switches from a near object to a distant object, or vice versa. In parallax mapping, as mentioned, you would normally take a blend of the ray sample point's depth and the depth from the depth map lookup, and then use that to reproject from the right-eye view position. This final reprojected point is used to look up the color value, in this case in the left-eye image. However, I discovered a paper by van de Hoef and Zalmstra (see Reference 5), two students from the University of Utrecht, who were also trying to create fast reprojection. They skipped the blend and just used the depth map lookup alone for the final reprojection. This is faster than the lerp, matches reasonably well,

Figure 7: Right-eye views of Ratchet, showing a) ideal stereo, b) reprojecting with the wrong search direction, and c) reprojecting with the correct direction.