Comparing floating point numbers (double, float) in .NET directly for equality is not safe. A double value in a variable may appear to change over time by a very small amount. For example, if you set an object's num field (a double) to 0.2, then after that object has sat in memory for a while you may find that num has become 0.1999999999999, so num == 0.2 will be false. My solution to this problem is to create a property that rounds the number:
double Num
{
    get { return Math.Round(num, 1); }
}
After the getter of Num is called and its result is returned, can that returned number change back to 0.19 at the time of the comparison (Num == 0.2)? It is not likely, but is it guaranteed not to?
No, it is not guaranteed.
From MSDN - Math.Round:
The behavior of this method follows IEEE Standard 754, section 4. This kind of rounding is sometimes called rounding to nearest, or banker's rounding. It minimizes rounding errors that result from consistently rounding a midpoint value in a single direction.
(emphasis mine)
Point is - it minimizes, not ensures.
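To see the midpoint rule in action, here is a short sketch. The integer midpoints below are exactly representable as doubles, so their rounded results are well defined; for fractional inputs like 0.2, rounding still operates on the nearest representable double, not on the decimal value you typed.

```csharp
using System;

class BankersRoundingDemo
{
    static void Main()
    {
        // Math.Round defaults to MidpointRounding.ToEven (banker's rounding):
        // midpoints round to the nearest even integer, not always up.
        Console.WriteLine(Math.Round(2.5)); // 2
        Console.WriteLine(Math.Round(3.5)); // 4

        // Rounding does not repair representation error: the double nearest
        // to 0.2 is still not exactly two tenths. The comparison below is
        // true only because both sides resolve to the same nearest double.
        double num = 0.2;
        Console.WriteLine(Math.Round(num, 1) == 0.2); // True
    }
}
```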
When comparing floating point types, you should always test against an epsilon - a tolerance below which you treat the two values as equal.
Example adapted from here:
double dValue = 0.2;
var diff = Math.Abs(num - dValue);
if (diff < 0.0000001) // need some minimum threshold to compare floating points
{
    // treat as equal
}
Recommended reading: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Believe it or not, this is intended behaviour, and it conforms to the IEEE 754 standard.
It's not possible to represent an arbitrary everyday value, such as a very large number or a small fraction, with complete fidelity in a finite binary representation. The floating point types in .NET, such as float and double, do their best to minimize error when you assign values to them, so when you assigned 0.2 to the variable, the runtime chose the representable value with the smallest error.
It's not that the number somehow degrades in memory - this is a deliberate trade-off. If you are comparing floating point numbers, you should always allow an acceptable region either side of your comparison. Your stored representation of 0.2 is accurate to a great many decimal places. Is that good enough for your application? The discrepancy looks glaring to the eye, but it is actually a very small error. When comparing doubles and floats (to integers or to each other), always decide what precision is acceptable and accept a range either side of your expected result.
You can also choose other types, such as decimal, which has excellent precision for decimal fractions - but it is also much larger and slower than float and double.
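As a sketch of the difference, the classic 0.1 + 0.2 case behaves differently in the two type families, because decimal stores base-10 digits exactly (within its 28-29 significant digit range) while double approximates them in binary:

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // double: 0.1, 0.2 and 0.3 are all binary approximations,
        // and their representation errors do not cancel out.
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False

        // decimal: these values are exact base-10 fractions,
        // so the arithmetic is exact too.
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```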
Variables don't change by themselves. If a == b at one point in time then a == b for ever more until you modify a or b.
You may well have a problem related to representability in floating point data types, but it's not clear what the problem is. What is clear is that your current solution is almost certainly not a good idea.
Use code like this to test double for equality:
public static bool AreEqual(double d1, double d2, double delta)
{
    // Treat the values as equal if they differ by less than the tolerance.
    return Math.Abs(d1 - d2) < delta;
}
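A usage sketch of the helper above; the 1e-9 tolerance here is an arbitrary choice for illustration, so pick one that suits the scale of your data:

```csharp
using System;

class Program
{
    public static bool AreEqual(double d1, double d2, double delta)
    {
        return Math.Abs(d1 - d2) < delta;
    }

    static void Main()
    {
        double sum = 0.1 + 0.2; // accumulates binary representation error
        Console.WriteLine(sum == 0.3);               // False
        Console.WriteLine(AreEqual(sum, 0.3, 1e-9)); // True
    }
}
```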