INNER PRODUCT // BRUCE DAWSON
you can drive a virtual truck through them.

The best starting point for comparing two floats to see if they are "close enough" is to take the larger of the two floats, multiply it by some small number, and use the result as the epsilon. The smallest number you should multiply a float by to get your epsilon value is powf(2, -23), because that is the minimum guaranteed precision for a normalized float. Using a number much smaller than that is, for floats, just an expensive equality test. Larger epsilons are fine, but if you find that your relative epsilon needs to be thousands of times larger, then you might want to investigate to find out what is going on, since that level of instability can be hard to work with.

It turns out that powf(2, -23) has a standard name: FLT_EPSILON. The constant FLT_EPSILON is the smallest amount that you can increment 1.0f by, and its value is also the smallest relative epsilon you should use with floats. Small multiples of FLT_EPSILON give progressively greater tolerance in the comparison, so you can pass, say, 10 * FLT_EPSILON to allow for more error. See Listing 3 for some sample comparison code.
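Listing 3 itself is not reproduced on this page, but a relative comparison along these lines might look roughly like the following sketch. The name AlmostEqualRelative and its signature are illustrative assumptions, not necessarily the article's actual listing.

#include <cfloat>   // FLT_EPSILON
#include <cmath>    // std::fabs

// Sketch of a relative comparison: the tolerance scales with the larger input.
bool AlmostEqualRelative(float A, float B, float maxRelDiff = FLT_EPSILON)
{
    float diff = std::fabs(A - B);
    float largest = std::fabs(A) > std::fabs(B) ? std::fabs(A) : std::fabs(B);

    // "Close enough" if the difference is no more than maxRelDiff
    // times the larger of the two values.
    return diff <= largest * maxRelDiff;
}

Passing a small multiple of FLT_EPSILON as maxRelDiff loosens the comparison in the way described above.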
When doing floating-point comparisons, you might find it handy to make use of the integer representation of floats. Since adjacent same-signed floats have adjacent integer representations, if you subtract the integer representations of two (same-signed) floats, your result is their distance from each other: the number of floats in between, plus one. If two floats are adjacent, we say that they differ by one Unit in the Last Place (ULP). By making use of a union of an int32_t and a float we can implement this technique in a way that stays within the aliasing rules of all the compilers I am familiar with (see Listing 4 for an example). The check for the signs being different is important because subtracting the integer representations of floats is problematic when the signs don't match.

LISTING 4  Using Units in the Last Place (ULPs) to compare floats: relative float comparison code using the integer representation of floats.

#include <cstdint>   // int32_t
#include <cstdlib>   // abs

union Float_t
{
    Float_t(float num = 0.0f) : f(num) {}
    bool Negative() const { return i < 0; }

    int32_t i;
    float f;
};

bool AlmostEqualUlps(float A, float B, int maxUlpsDiff = 1)
{
    Float_t uA(A);
    Float_t uB(B);

    // Different signs means they do not match.
    if (uA.Negative() != uB.Negative())
    {
        // Check for equality to make sure +0==-0
        if (A == B)
            return true;
        return false;
    }

    // Find the difference in ULPs.
    int ulpsDiff = abs(uA.i - uB.i);
    if (ulpsDiff <= maxUlpsDiff)
        return true;

    return false;
}
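As a quick illustration of how the listing might be used (the values here are purely for demonstration and assume the Listing 4 definitions above):

// Summing 0.1f ten times accumulates a little rounding error,
// so the result is very close to, but probably not exactly, 1.0f.
float sum = 0.0f;
for (int i = 0; i < 10; ++i)
    sum += 0.1f;

bool closeEnough = AlmostEqualUlps(sum, 1.0f, 4);   // true if within 4 ULPs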
Unfortunately, the ULPs-based relative comparison technique can sometimes be quite expensive. Transferring data from the float unit to the integer unit can break pipelining and cause other significant performance costs. However, on processors with good store forwarding, this technique may perform well, and you can implement it in vector units that do float and vector operations in the same registers.

I like this technique when doing accuracy investigations because it lets me make definitive statements such as "The answer was only off by one ULP" or, equivalently, "The floats were adjacent." These statements have more intuitive meaning than error ratios have. The semantics of the two techniques are similar, but not identical. When your numbers are near a power of two the meaning of an ULP suddenly changes: the distance between floats suddenly doubles (see Figure 2).

FIGURE 2  Measuring inaccuracy in ULPs.
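That doubling is easy to observe directly. The following small program (an illustration, not code from the article) uses std::nextafter to print the gap between 2.0f and its nearest neighbor on each side:

#include <cmath>    // std::nextafter
#include <cstdio>   // std::printf

int main()
{
    // One ULP just below and just above the power of two.
    float below = 2.0f - std::nextafter(2.0f, 0.0f);
    float above = std::nextafter(2.0f, 4.0f) - 2.0f;

    // The spacing doubles at the power of two: 2^-23 below, 2^-22 above.
    std::printf("gap below 2.0f = %g\n", below);   // 1.19209e-07
    std::printf("gap above 2.0f = %g\n", above);   // 2.38419e-07
    return 0;
}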
THE MASK OF ZERO
» Maybe the Babylonians should not have invented zero. It just causes problems for floats. Relative error works incredibly badly when the numbers being compared are near zero. It isn't even well defined. If one of the numbers is zero and the other isn't, then the relative error is either infinite or 100%, depending on how you measure it. If the two numbers straddle zero, then the relative error is even worse. (It's negative, I guess; whatever that means.)

Since relative epsilon doesn't work around zero, you need to use an absolute epsilon to handle this case instead. This is simply a small number such that if the difference between the two floats is less than this number then you say that they are equal.

Finding the right value can be quite tricky, to say the least. There is no natural, obvious default value. If you do a bunch of test calculations in your game world, then you may find that an absolute epsilon of 0.00001 is appropriate. However, if you were to change your units from, say, meters to millimeters, then all your numbers would get a thousand times larger and you would need to change your absolute epsilon to 0.01. It's slightly terrifying that this possibly essential value will vary depending on what units you have chosen. I hate to say "Experiment and see what works for your particular domain," but I don't have any better advice. You can do a complicated mathematical analysis of your algorithms in order to evaluate their stability, but I don't think that's particularly practical.
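A minimal sketch of such an absolute comparison is below; the function name and the default tolerance are illustrative only, since, as noted above, the right value depends entirely on your units and your domain.

#include <cmath>    // std::fabs

// Absolute-epsilon comparison for values near zero.
// 0.00001f is just the example value from the text, not a universal default.
bool AlmostEqualAbsolute(float A, float B, float maxAbsDiff = 0.00001f)
{
    return std::fabs(A - B) <= maxAbsDiff;
}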
PI, PLEASE
» One fascinating example of the problem of comparisons around zero is for sin(pi). Mathematics teaches us that the result of this

float(pi)                         +3.141592741012573
+ sin(float(pi))                  -0.000000087422776
= a better approximation of pi    +3.141592653589797