For my work it's particularly interesting to do integer calculations, which obviously are not what GPUs were made for. My question is: do modern GPUs support efficient integer operations? I realize this should be easy to figure out for myself, but I find conflicting answers (for example yes vs. no), so I thought it best to ask.
Also, are there any libraries/techniques for arbitrary precision integers on GPUs?
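To make concrete what I mean, the core primitive such techniques would need is multi-word arithmetic with explicit carry propagation. Here is a rough, purely illustrative CUDA sketch of multi-word addition (the kernel name `bigint_add`, `NWORDS`, and the memory layout are all made up here, not taken from any existing library):

```
// Hedged sketch: one building block of multiple-precision arithmetic
// on the GPU -- adding two N-word integers with explicit carry
// propagation. All names and the data layout are invented for
// illustration.
#include <stdint.h>

#define NWORDS 8  // 8 x 32-bit words = 256-bit integers

// Each thread adds one pair of big integers, stored least-significant
// word first, NWORDS consecutive words per number.
__global__ void bigint_add(const uint32_t *a, const uint32_t *b,
                           uint32_t *out, int count) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= count) return;

    const uint32_t *x = a + idx * NWORDS;
    const uint32_t *y = b + idx * NWORDS;
    uint32_t *r = out + idx * NWORDS;

    uint32_t carry = 0;
    for (int i = 0; i < NWORDS; ++i) {
        uint32_t s  = x[i] + y[i];   // wraps modulo 2^32
        uint32_t c1 = (s < x[i]);    // carry out of x[i] + y[i]
        uint32_t t  = s + carry;
        uint32_t c2 = (t < s);       // carry out of adding the old carry
        r[i] = t;
        carry = c1 | c2;             // at most one of c1, c2 can be set
    }
    // A real library would also return the final carry (overflow).
}
```

Doing this efficiently (e.g., spreading the words of one number across threads rather than looping serially per thread) is exactly the kind of thing I'd hope an existing library handles.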
First, you need to consider the hardware you're using: GPU performance differs widely from one manufacturer to another.
Second, it also depends on the operations in question: additions, for example, may be faster than multiplications.
In my case, I'm only using NVIDIA devices. For this kind of hardware, the official documentation reports equivalent throughput for 32-bit integers and 32-bit single-precision floats on the newer architecture (Fermi). The previous architecture (Tesla) offered equivalent performance for 32-bit integers and floats only for additions and logical operations.
But once again, this may not be true depending on the device and instructions you use.
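If in doubt, the most reliable check is to measure on your own device. Below is a minimal, hypothetical CUDA microbenchmark sketch (the kernel name `int_ops`, the launch configuration, and the iteration count are all invented for illustration) that times a loop of dependent 32-bit integer multiply-adds; swapping the loop body to float arithmetic gives a rough point of comparison:

```
// Minimal, illustrative CUDA microbenchmark sketch. Real measurements
// need more care (warm-up runs, occupancy, checking the generated SASS
// to make sure the loop was not optimized away).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void int_ops(int *out, int iters) {
    int a = threadIdx.x + 1;
    int b = blockIdx.x + 3;
    // Dependent operations so the compiler cannot collapse the loop.
    for (int i = 0; i < iters; ++i) {
        a = a * b + i;   // 32-bit integer multiply-add
    }
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

int main() {
    const int threads = 256, blocks = 64, iters = 1 << 20;
    int *d_out;
    cudaMalloc(&d_out, threads * blocks * sizeof(int));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    int_ops<<<blocks, threads>>>(d_out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("elapsed: %.3f ms\n", ms);

    cudaFree(d_out);
    return 0;
}
```

On a Fermi-class part you would expect the integer and float variants of such a loop to land close together; on a Tesla-class part the 32-bit integer multiply path may come out slower, consistent with the documentation cited above.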