
Should you correct compiler warnings about type conversions using explicit typecasts?

In my current project, the compiler shows hundreds of warnings about type conversions. There is a lot of code like this

iVar = fVar1*fVar2/fVar3;
// or even
iVar = fVar1*fVar2/fVar3+.5f;

which intentionally assigns float values to int.

Of course, I could fix these warnings using

iVar = (int)(...);

but that looks kind of ugly.

Would you rather live with the ugliness or live with the warnings?

Or is there even a clean solution?


Having hundreds of warnings that aren't an issue is dangerous: one day a warning that flags a real issue will appear and drown in the noise.

Keep the code free of warnings.

If you know what you're doing, add the casts or conversions.


Yes.

You should always fix the compiler warnings. Several reasons:

*) A warning may point at an actual error that needs a real fix rather than just a cast. You won't know until you look.

*) Actual coding errors that manifest as warnings can get lost in the noise generated by hundreds of benign ones.

*) It makes clear to other coders that you really did mean to use that variable of a different type/sign there, and that it is deliberate.

*) It makes it clear and explicit that the type and/or signedness is being changed. If your variable names do not contain an indication of the type and signedness, it may not be obvious that this is occurring (see the sketch after this list).
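As a minimal sketch of that last point (my illustration, not part of the original answer), here is how an implicit signedness change can silently alter a comparison, and how the explicit cast documents that the conversion is deliberate:

#include <stdio.h>

int main(void)
{
    int      balance = -5;
    unsigned count   = 3;

    /* Without a cast, balance would be implicitly converted to unsigned for
       the comparison; with 32-bit ints it becomes 4294967291u, so the branch
       is taken even though -5 > 3 is false. The explicit cast makes the
       conversion visible and deliberate. */
    if ((unsigned)balance > count)
        printf("signedness conversion changed the comparison\n");

    return 0;
}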


I compile with "warnings treated as errors". Warnings are often indicators that you wrote code that won't behave the way you intended.

A cast in this case makes it obvious that you're changing the type, which means a change in precision (and a possible performance impact, if you happen to work on extremely time-critical code). It's always a good policy to write code that shows all its explicit and implicit effects in the most apparent way, so that you still know what your code does after digging it out several months later, or when a team member has to work with it.


Sometimes compilers spew out warnings for issues that I regard as unproblematic. In that case the solution may be to switch off those particular warnings, but you must exercise caution and be sure that you aren't also hiding significant warnings.

For implicit type conversion warnings, you want them some of the time but not all the time. Typically you want to ignore int-to-float conversions but hear about others. Ideally the compiler would allow you to configure warning reporting at that level of granularity.
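For what it's worth, GCC and Clang do offer some of this granularity through per-warning flags and pragmas. As a sketch (assuming a GCC-compatible compiler; other compilers spell these differently), the question's intentional float-to-int truncation could be exempted locally while the warning stays on elsewhere:

/* Variable names reuse the question's snippet. */
int truncate_product(float fVar1, float fVar2, float fVar3)
{
    int iVar;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wfloat-conversion"   /* silence float->int here only */
    iVar = fVar1 * fVar2 / fVar3;                     /* intentional truncation */
#pragma GCC diagnostic pop
    return iVar;
}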


It's a good idea to enable warnings for any narrowing implicit type conversion, that is, any conversion for which the converted-to type cannot hold all the values of the original type. This includes e.g. float to int conversions, and conversions in either direction between signed int and unsigned int. These conversions have the potential for overflow and/or loss of information, so they should always be made explicit.
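A minimal sketch of the two narrowing cases named above (the concrete values assume the common 32-bit int and unsigned):

#include <stdio.h>

int main(void)
{
    float    f = 3.9f;
    int      i = (int)f;          /* float -> int truncates toward zero: i == 3 */

    int      n = -1;
    unsigned u = (unsigned)n;     /* signed -> unsigned wraps modulo 2^N:
                                     u == 4294967295 with 32-bit unsigned */

    printf("i = %d, u = %u\n", i, u);
    return 0;
}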


In the case of floating-point-to-integer conversions, I think it's better to write and use functions rather than typecasts. The functions themselves may use typecasts, but I would consider a call to a function like int_round_mid_up(x) to be better than (int)(x+0.5), especially since the function can be written, once, to properly handle positive and negative numbers, including the otherwise-treacherous 0.499999999999999944 [which, when added to 0.5, yields 1.0]. The typecast-based expression (int)(x+0.5), by contrast, will incorrectly round -2.4 to -1.

IMHO, there is no good way for a language to define floating-point-to-integer typecasts: some use various forms of rounding, some truncation, and some may use flooring. Since there's no clear "best" way to do the conversion, it would be better to require the programmer to specify what's needed than to have the language select a method, and programmers should thus avoid such typecasts outside of very narrow contexts.
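The int_round_mid_up above is named but not shown; the following is one plausible sketch (an assumption, not the answer's actual code) that rounds to nearest with ties going up and avoids computing x + 0.5 directly:

#include <math.h>

/* Round to nearest int, ties toward +infinity. Because it never computes
   x + 0.5, 0.49999999999999994 stays 0 and -2.4 correctly becomes -2.
   Assumes the result fits in int and that |x| < 2^52, so the fractional
   part x - floor(x) is exact. */
int int_round_mid_up(double x)
{
    double f = floor(x);          /* largest integer <= x */
    if (x - f >= 0.5)             /* fractional part at or above one half */
        f += 1.0;
    return (int)f;
}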

With regard to conversions among integer types, warnings about narrowing conversions are generally good: they should be globally enabled but locally stifled via explicit typecasts. For most kinds of numeric work, conversions from double to float generally shouldn't generate warnings (or, worse, errors), though unfortunately not all languages or compilers allow those warnings to be stifled independently of those for other types.
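For example, with narrowing warnings globally enabled, an intended truncation can be stifled locally with a cast (a sketch; the function name and values are just an illustration):

#include <stdint.h>

uint16_t checksum_low(uint32_t sum)
{
    /* The cast acknowledges the narrowing: a sum of 70000 would wrap to
       4464 (70000 mod 65536), and the warning is silenced only here. */
    return (uint16_t)sum;
}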

A category of conversions that should be set to generate warnings on compilers that support it (though alas many don't) is conversion from a narrower expression which might yield a truncated result to a type which would be able to hold the result if it were not truncated, e.g.

uint64_t ul = ui1 - ui2; // With ui1 and ui2 being uint32_t
double d1 = f1 / f2; // With f1 and f2 being float

If I wanted the behavior yielded by the above expressions, I would write them as:

uint64_t ul = (uint32_t)(ui1 - ui2); // deliberately keep the 32-bit wraparound
double d1 = (float)(f1 / f2); // deliberately keep the float-precision division

since otherwise a programmer seeing the above without the typecasts may be inclined to rewrite them as:

uint64_t ul = (uint64_t)ui1 - ui2; // widen first; no 32-bit wraparound
double d1 = (double)f1 / f2; // divide in double precision

which would yield behavior that, while more typically desirable, might be contrary to what the program actually needs.
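A minimal sketch contrasting the two readings, using the answer's variables with example values:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t ui1 = 1, ui2 = 2;

    uint64_t wrapped = (uint32_t)(ui1 - ui2);  /* 32-bit wraparound kept: 4294967295 */
    uint64_t widened = (uint64_t)ui1 - ui2;    /* widened first, wraps in 64 bits:
                                                  18446744073709551615 */

    printf("wrapped = %" PRIu64 "\nwidened = %" PRIu64 "\n", wrapped, widened);
    return 0;
}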
