Differences between llvm and g++ on OSX 10.7

I upgraded to OSX 10.7 Lion this weekend, and now I'm trying to get all my unit and regression tests to pass... but there are quite a few problems. Several of my regression tests are now producing numerical results which differ (e.g. in the 3rd decimal place). This is a surprise because I get consistent results between OSX 10.6 and Linux, and for various compilers (we already apply some tricks to keep the numerics stable enough to be comparable)... But it seems that OSX 10.7 is producing significantly different results. We could of course raise the threshold to get all these tests to pass, but I'd rather not, because that weakens the test.
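
For context, the comparison the regression tests do is roughly like the sketch below (a minimal illustration only; the helper name and the tolerance values are made up, not our actual harness):

    #include <cmath>
    #include <algorithm>

    // Pass if the difference is tiny in absolute terms (values near zero)
    // or small relative to the magnitude of the larger value.
    bool approximatelyEqual(double expected, double actual,
                            double absTol = 1e-9, double relTol = 1e-6) {
        const double diff = std::fabs(expected - actual);
        return diff <= absTol ||
               diff <= relTol * std::max(std::fabs(expected), std::fabs(actual));
    }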

By default "g++" is now aliased to "llvm-g++-4.2". Can somebody explain to me what kinds of differences to expect in my results for g++ vs llvm? If I want to preserve my regression results, do I basically have to choose between llvm and -ffast-math?
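
In case it matters, this is roughly how I check which front end is actually doing the compiling (a small sketch; it assumes the usual predefined macros, i.e. llvm-gcc defines __llvm__ while clang additionally defines __clang__):

    #include <cstdio>

    int main() {
    #if defined(__clang__)
        std::printf("clang front end\n");
    #elif defined(__llvm__)
        std::printf("GCC front end with LLVM code generation (llvm-gcc)\n");
    #elif defined(__GNUC__)
        std::printf("plain GCC\n");
    #else
        std::printf("unknown compiler\n");
    #endif
        return 0;
    }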


Basic floating-point computation shouldn't differ substantially between llvm-gcc-4.2 and gcc-4.2: with default compiler flags, a straightforward floating-point operation generates a functionally identical code sequence under both compilers.

You mention -ffast-math; LLVM generally performs relatively few additional optimizations when -ffast-math is enabled, so if you're depending on the compiler to apply certain transformations, that could plausibly account for substantial differences.
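
As a toy illustration (not your code, and only one of the transformations -ffast-math can permit), re-association alone can change low-order digits, because floating-point addition is not associative:

    #include <cstdio>

    int main() {
        double a = 1.0e20, b = -1.0e20, c = 1.0;
        double left  = (a + b) + c;  // cancellation happens first, then c is added: 1.0
        double right = a + (b + c);  // c is absorbed into b before the cancellation: 0.0
        std::printf("left = %g, right = %g\n", left, right);
        return 0;
    }

Whether a compiler actually re-associates a given expression depends on the flags and optimization level, so treat this as an example of the class of difference rather than a reproduction of yours.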

Beyond that, it's really hard to say without an actual testcase.
