YUV420 to RGB conversion


I converted an RGB matrix to a YUV matrix using these formulas:

Y  =      (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
Cr = V =  (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
Cb = U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
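
For reference, a minimal C++ sketch of that forward conversion (the clamp helper and function name are mine, not part of my actual code; it assumes 8-bit channels):

#include <algorithm>
#include <cstdint>

// Sketch: convert one RGB pixel to studio-range YCbCr using the
// coefficients above. Results are rounded and clamped to [0, 255].
static uint8_t clamp8(double v) {
    return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v)) + 0.5);
}

void rgbToYuv(uint8_t r, uint8_t g, uint8_t b,
              uint8_t &y, uint8_t &u, uint8_t &v) {
    y = clamp8( 0.257 * r + 0.504 * g + 0.098 * b + 16);   // Y
    u = clamp8(-0.148 * r - 0.291 * g + 0.439 * b + 128);  // Cb = U
    v = clamp8( 0.439 * r - 0.368 * g - 0.071 * b + 128);  // Cr = V
}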

I then did a 4:2:0 chroma subsample on the matrix. I think I did this correctly: I took 2x2 submatrices from the YUV matrix, ordered the four values from least to greatest, and took the average of the two middle values.
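
A small sketch of that 2x2 reduction as described (sort the four chroma samples and average the two in the middle, i.e. the median of the block; the function name is only illustrative):

#include <algorithm>
#include <cstdint>

// Sketch: reduce a 2x2 block of chroma samples to one value by sorting
// the four samples and averaging the middle two (their median).
uint8_t subsample2x2(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
    uint8_t block[4] = {a, b, c, d};
    std::sort(block, block + 4);
    return static_cast<uint8_t>((block[1] + block[2]) / 2);
}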

I then used this formula, from Wikipedia, to access the Y, U, and V planes:

size.total = size.width * size.height;
y = yuv[position.y * size.width + position.x];
u = yuv[(position.y / 2) * (size.width / 2) + (position.x / 2) + size.total];
v = yuv[(position.y / 2) * (size.width / 2) + (position.x / 2) + size.total + (size.total / 4)];

I'm using OpenCV, so I tried to interpret this as best I can:

y = src.data[(i*channels)+(j*step)];
u = src.data[(j%4)*step + ((i%2)*channels+1) + max];
v = src.data[(j%4)*step + ((i%2)*channels+2) + max + (max%4)];

src is the YUV subsampled matrix. Did I interpret that formula correctly?
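
For comparison, a literal translation of that Wikipedia indexing into a small helper (a sketch only, assuming an 8-bit planar I420 buffer of size width * height * 3 / 2 laid out as the Y plane, then U, then V; the names are illustrative):

#include <cstdint>

// Sketch: sample Y, U and V for the pixel at (row, col) from a planar
// I420 buffer: full-resolution Y plane followed by quarter-resolution
// U and V planes.
void sampleYuv420(const uint8_t* yuv, int width, int height,
                  int row, int col, int& y, int& u, int& v) {
    int total = width * height;
    y = yuv[row * width + col];
    u = yuv[total + (row / 2) * (width / 2) + (col / 2)];
    v = yuv[total + total / 4 + (row / 2) * (width / 2) + (col / 2)];
}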

Here is how I converted the colours back to RGB:

bgr.data[(i*channels)+(j*step)] = (1.164 * (y - 16)) + (2.018 * (u - 128)); // B
bgr.data[(i*channels+1)+(j*step)] = (1.164 * (y - 16)) - (0.813 * (v - 128)) - (0.391 * (u - 128)); // G
bgr.data[(i*channels+2)+(j*step)] = (1.164 * (y - 16)) + (1.596 * (v - 128));   // R
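
The same arithmetic written as a small helper with explicit saturation (a sketch only; cv::saturate_cast<uchar> rounds and clamps each result to [0, 255] before it is stored in the 8-bit buffer, and the helper takes a pointer to the three B, G, R bytes of one pixel):

#include <opencv2/core/core.hpp>

// Sketch: the same back-conversion, but saturating each result before
// writing it into 8-bit data.
static void storeBgr(uchar* bgr, double y, double u, double v) {
    bgr[0] = cv::saturate_cast<uchar>(1.164 * (y - 16) + 2.018 * (u - 128));                      // B
    bgr[1] = cv::saturate_cast<uchar>(1.164 * (y - 16) - 0.813 * (v - 128) - 0.391 * (u - 128));  // G
    bgr[2] = cv::saturate_cast<uchar>(1.164 * (y - 16) + 1.596 * (v - 128));                      // R
}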

The problem is my image does not return to its original colours.

Here are the images for reference: http://i.stack.imgur.com/vQkpT.jpg (Subsampled) http://i.stack.imgur.com/Oucc5.jpg (Output)

I see now that I should be converting from YUV444 to RGB, but I don't quite understand what the clip function does in the sample I found on the wiki.

C = Y' − 16
D = U − 128
E = V − 128

R = clip(( 298 * C           + 409 * E + 128) >> 8)
G = clip(( 298 * C - 100 * D - 208 * E + 128) >> 8)
B = clip(( 298 * C + 516 * D           + 128) >> 8)

Does the >> mean I should shift bits?

I'd appreciate any help/comments! Thanks

Update

I tried doing the YUV444 conversion, but it just made my image appear in shades of green.

        y = src.data[(i*channels)+(j*step)];
        u = src.data[(j%4)*step + ((i%2)*channels+1) + max];
        v = src.data[(j%4)*step + ((i%2)*channels+2) + max + (max%4)];

        c = y - 16;
        d = u - 128;
        e = v - 128;

        bgr.data[(i*channels+2)+(j*step)] = clip((298*c + 409*e + 128)/256);
        bgr.data[(i*channels+1)+(j*step)] = clip((298*c - 100*d - 208*e + 128)/256);
        bgr.data[(i*channels)+(j*step)] = clip((298*c + 516*d + 128)/256);

And my clip function: int clip(double value) { return (value > 255) ? 255 : (value < 0) ? 0 : value; }


I had the same problem when decoding WebM frames to RGB. I finally found the solution after hours of searching.

Take the SCALEYUV function from here: http://www.telegraphics.com.au/svn/webpformat/trunk/webpformat.h

Then to decode the RGB data from YUV, see this file: http://www.telegraphics.com.au/svn/webpformat/trunk/decode.c

Search for "py = img->planes[0];"; there are two algorithms there for converting the data. I only tried the simpler one (after "// then fall back to cheaper method.").

Comments in the code also refer to this page: http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html#RTFToC30

Works great for me.


You won't get the original image back exactly, since subsampling the UV channels throws information away.
You also don't say whether the result is completely wrong (i.e. an error) or just not a perfect match.

R = clip(( 298 * C           + 409 * E + 128) >> 8)
G = clip(( 298 * C - 100 * D - 208 * E + 128) >> 8)
B = clip(( 298 * C + 516 * D           + 128) >> 8)

The >> 8 is a bit shift, equivalent to dividing by 256. This is just to let you do all the arithmetic in integers rather than floating point, for speed.
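
Concretely, those integer coefficients are just the floating-point ones from the question scaled by 256 (298 ≈ 1.164 × 256, 409 ≈ 1.596 × 256, 100 ≈ 0.391 × 256, 208 ≈ 0.813 × 256, 516 ≈ 2.018 × 256); the + 128 rounds, and >> 8 scales the result back down. A small illustration (example values are arbitrary):

// Sketch: >> 8 shifts right by 8 bits; for a non-negative value this is
// the same as integer division by 256.
int c = 100 - 16;   // example Y' = 100
int e = 200 - 128;  // example V  = 200
int r_shift = (298 * c + 409 * e + 128) >> 8;   // 54608 >> 8  = 213
int r_div   = (298 * c + 409 * e + 128) / 256;  // 54608 / 256 = 213
// The two agree for non-negative intermediates; for negative ones they can
// differ by 1 in rounding, but clip() clamps those to 0 anyway.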


I was experimenting with the formulas on the wiki and found that this mixed formula:

byte c = (byte) (y - 16);
byte d = (byte) (u - 128);
byte e = (byte) (v - 128);

byte r = (byte) (c + (1.370705 * (e))); 
byte g = (byte) (c - (0.698001 * (d)) - (0.337633 * (e)));
byte b = (byte) (c + (1.732446 * (d)));

produces "better" errors for my images: it simply turns some black pixels pure green (i.e. RGB = 0x00FF00), which is easier to detect and correct...

wiki source: https://en.wikipedia.org/wiki/YUV#Y.27UV420p_.28and_Y.27V12_or_YV12.29_to_RGB888_conversion
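
One caveat with the snippet above: the intermediate results are cast straight to byte, which wraps around on overflow instead of saturating. A clamped variant of the same mixed formula (a sketch only, keeping the coefficients exactly as above; the helper names are mine):

// Sketch: the same mixed formula, computed in doubles and saturated to
// [0, 255] instead of being truncated by a byte cast.
static unsigned char clamp255(double v) {
    return (unsigned char)(v < 0.0 ? 0.0 : (v > 255.0 ? 255.0 : v));
}

void mixedYuvToRgb(unsigned char y, unsigned char u, unsigned char v,
                   unsigned char* r, unsigned char* g, unsigned char* b) {
    double c = y - 16.0;
    double d = u - 128.0;
    double e = v - 128.0;
    *r = clamp255(c + 1.370705 * e);
    *g = clamp255(c - 0.698001 * d - 0.337633 * e);
    *b = clamp255(c + 1.732446 * d);
}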
