A few minutes ago "Jon Skeet" found something - see this.
This issue, and many issues like it, made me think about the idea of an Undefined Operation or Undefined Result.
Should these kinds of things, which may produce different results in different implementations of the libraries (like .NET or Mono), result in an Undefined Operation or Undefined Result? Maybe by throwing an exception or setting a field in the struct or class?!
What are the pros and cons of having an Undefined Operation or Undefined Result?
Also look at this - I think some of the things mentioned there produce unexpected results that should be handled, or that the programmer should at least be made aware of.
I mean, it somehow takes a big amount of knowledge of "X does this in this way" and "how X implements that other thing"! Does this really help programming and developers?
Do you mean like throwing a NotSupportedException? By tautology, if you define that your operation will throw an exception or return a special value, your behavior is defined. If the specification says the behavior is undefined, then the implementation is free to do anything the implementer wishes - enforcing that they throw an UndefinedOperation (or similar behavior) means your operation is now defined!
For the case you linked:
It's already specified that comparing a number to NaN returns false (1 < NaN, 1 > NaN, and 1 == NaN are all false). Given that information, when asked "what's biggest - 1 or NaN?" I would answer "there is no maximum, hence the answer is not a number" (like Math.Max does). But if you phrase it as "given these values - 1, 2, NaN - which is the biggest number?" I would have to say "2" - because it's the biggest number (NaN is not a number!)
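As a minimal C# sketch of those semantics (standard double comparisons plus Math.Max, nothing specific to the linked code):

    using System;

    class NaNSemantics
    {
        static void Main()
        {
            double nan = double.NaN;

            // Every ordered or equality comparison against NaN is false.
            Console.WriteLine(1.0 < nan);   // False
            Console.WriteLine(1.0 > nan);   // False
            Console.WriteLine(1.0 == nan);  // False

            // Math.Max propagates NaN: "there is no maximum, hence not a number".
            Console.WriteLine(Math.Max(1.0, nan)); // NaN
        }
    }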
So in this case the issue isn't that the operation is undefined at all - the problem is that the specification is confusing! I agree that there are confusing issues with NaN (and countless other things), but those are places to argue that the specification needs cleaning or changing. In my opinion, you should petition for a spec change, or possibly a compiler warning - not an exception or special return value.
I don't see how that other question applies. The operations behave as documented and have very useful semantics, even if initially seeming incorrect. I found the answer by Philip Rieck to be the most clear in the explanation of why it occurs. In either case, it's not undefined.
If there are "hints" or "warnings" that the compiler wishes to issue, then it can do so at will -- see the current VS2010 offering of Code Analysis and what it will complain about. (This is in addition to normal compiler warnings.)
While this Code Analysis could be augmented with documentation tags and/or custom attributes, such approaches may be unwise because they only work if "the issue" is known at design time and encoded into the data. Changing the IL/structures in any way to accommodate this would go down a similarly ill-conceived road.
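As a rough, hypothetical illustration of an attribute-driven design-time hint -- here using the built-in ObsoleteAttribute, which the C# compiler already turns into a warning (a custom attribute would additionally need a Code Analysis rule to act on it); the Compute/ComputeV2 names are invented for the example:

    using System;

    static class Legacy
    {
        // The metadata below only helps because the problem was known at
        // design time and encoded into the code; the compiler surfaces it
        // as warning CS0618 at every call site.
        [Obsolete("Hypothetical example: prefer ComputeV2, which handles NaN explicitly.")]
        public static double Compute(double x) => x * 2;

        public static double ComputeV2(double x) => double.IsNaN(x) ? 0.0 : x * 2;
    }

    class Program
    {
        static void Main()
        {
            Console.WriteLine(Legacy.Compute(3.0)); // compiles with warning CS0618
        }
    }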
Just like everything else in code -- if the believed (or provided) semantics are wrong, then all bets are off. The only way to catch this issue in general is testing, testing and more testing -- in a perfect world it would be prevented and never have to be caught ;-) It's no different from confusing Math.Log with "log to the base 2" (it's not -- it's the natural logarithm).
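For instance, using the standard Math.Log overloads:

    using System;

    class LogBases
    {
        static void Main()
        {
            // Math.Log(x) is the natural logarithm (base e), not base 2.
            Console.WriteLine(Math.Log(8.0));      // ~2.079 (ln 8)
            Console.WriteLine(Math.Log(8.0, 2.0)); // 3 (log base 2)
            Console.WriteLine(Math.Log10(1000.0)); // 3 (log base 10)
        }
    }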
Well, in the case you cited, there is a well-defined standard, IEEE 754, and Math.Max adheres to it, while the implementation of Enumerable.Max(this IEnumerable&lt;double&gt;) does not. As for implementing code based upon a standard, the onus is always on the implementer.
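Assuming the linked finding is the usual Math.Max vs. Enumerable.Max NaN discrepancy, the contrast looks roughly like this (the Enumerable.Max result shown is what the .NET Framework LINQ implementation of the time produced; other implementations may differ):

    using System;
    using System.Linq;

    class MaxDiscrepancy
    {
        static void Main()
        {
            double[] values = { 1.0, double.NaN };

            // Math.Max is documented to return NaN if either argument is NaN.
            Console.WriteLine(Math.Max(values[0], values[1])); // NaN

            // Enumerable.Max (as implemented in the .NET Framework) effectively
            // skips NaN and returns the largest real value instead.
            Console.WriteLine(values.Max()); // 1
        }
    }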