I have C# code, used in XNA, that blends a pixel against a background based on opacity. How can I improve its performance?
public static uint GetPixelForOpacity(uint pixelBackground, uint pixelForeground, uint pixelCanvasAlpha)
{
    byte surfaceR = (byte)((pixelForeground & 0x00FF0000) >> 16);
    byte surfaceG = (byte)((pixelForeground & 0x0000FF00) >> 8);
    byte surfaceB = (byte)((pixelForeground & 0x000000FF));
    byte sourceR = (byte)((pixelBackground & 0x00FF0000) >> 16);
    byte sourceG = (byte)((pixelBackground & 0x0000FF00) >> 8);
    byte sourceB = (byte)((pixelBackground & 0x000000FF));
    uint newR = sourceR * pixelCanvasAlpha / 256 + surfaceR * (255 - pixelCanvasAlpha) / 256;
    uint newG = sourceG * pixelCanvasAlpha / 256 + surfaceG * (255 - pixelCanvasAlpha) / 256;
    uint newB = sourceB * pixelCanvasAlpha / 256 + surfaceB * (255 - pixelCanvasAlpha) / 256;
    return (uint)255 << 24 | newR << 16 | newG << 8 | newB;
}
- Use premultiplied alpha.
- Video cards are made exactly to accelerate operations like this. Put your pixel data in a Texture2D and draw it to the screen with an appropriate blend mode; this will be much, much faster. If you don't want the result on the screen right away, redirect your rendering to a render target instead (see the sketch after this list).
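A minimal sketch of the GPU approach, assuming XNA 4.0, code running inside a Game class (for the GraphicsDevice property), and a SpriteBatch created in LoadContent; the texture, render-target, and pixel-array names are illustrative:

// Upload the foreground pixels once. Note that SurfaceFormat.Color stores R in the
// low byte, so data packed as 0x00RRGGBB may need its channels swapped first.
Texture2D foreground = new Texture2D(GraphicsDevice, width, height);
foreground.SetData(foregroundPixels);           // uint[] of width * height pixels

// Optional: render into an off-screen target instead of the back buffer.
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, width, height);
GraphicsDevice.SetRenderTarget(target);

// BlendState.AlphaBlend expects premultiplied alpha;
// use BlendState.NonPremultiplied for straight alpha.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
spriteBatch.Draw(foreground, Vector2.Zero, Color.White);
spriteBatch.End();

GraphicsDevice.SetRenderTarget(null);           // back to the back buffer

A RenderTarget2D is itself a Texture2D in XNA 4.0, so the blended result can be reused in later draws.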
Replace pixelCanvasAlpha / 256 with pixelCanvasAlpha >> 8. Don't divide by a power of two with integer types; a shift is always faster. Wrapping the arithmetic in unchecked should also help.
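A sketch of the same blend rewritten with shifts inside an unchecked block, assuming the 0x00RRGGBB packing from the question (the method name is illustrative):

public static uint GetPixelForOpacityShift(uint pixelBackground, uint pixelForeground, uint pixelCanvasAlpha)
{
    unchecked
    {
        uint invAlpha = 255 - pixelCanvasAlpha;
        uint sourceR = (pixelBackground >> 16) & 0xFF;
        uint sourceG = (pixelBackground >> 8) & 0xFF;
        uint sourceB = pixelBackground & 0xFF;
        uint surfaceR = (pixelForeground >> 16) & 0xFF;
        uint surfaceG = (pixelForeground >> 8) & 0xFF;
        uint surfaceB = pixelForeground & 0xFF;
        // >> 8 replaces the / 256 from the original; for unsigned values the results are identical.
        uint newR = (sourceR * pixelCanvasAlpha >> 8) + (surfaceR * invAlpha >> 8);
        uint newG = (sourceG * pixelCanvasAlpha >> 8) + (surfaceG * invAlpha >> 8);
        uint newB = (sourceB * pixelCanvasAlpha >> 8) + (surfaceB * invAlpha >> 8);
        return 0xFF000000u | (newR << 16) | (newG << 8) | newB;
    }
}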
I doubt it, but a bitwise NOT could be faster than 255 - n:
uint newR = (sourceR * pixelCanvasAlpha + surfaceR * (uint)(byte)~pixelCanvasAlpha) >> 8;
or
uint newR = (sourceR * pixelCanvasAlpha + surfaceR * (~pixelCanvasAlpha & 0x000000FF)) >> 8;
The only way to know is to benchmark.
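A minimal benchmarking sketch with System.Diagnostics.Stopwatch; the loop count, the sample pixel values, and the name of the bitwise-NOT variant (GetPixelForOpacityNot) are illustrative:

// using System.Diagnostics;
Stopwatch sw = Stopwatch.StartNew();
uint sink = 0;                                   // keeps the JIT from eliding the loop
for (uint i = 0; i < 10000000; i++)
{
    sink ^= GetPixelForOpacity(0xFF336699, 0xFF884422, i & 0xFF);
}
sw.Stop();
Console.WriteLine("255 - n variant: " + sw.ElapsedMilliseconds + " ms (checksum " + sink + ")");
// Repeat the same loop with GetPixelForOpacityNot (the ~ version) and compare the times.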