Y'UV420p (and Y'V12 or YV12) to RGB888 conversion - android

I am trying to show a YUV video file in Android, and I have a few YUV video files that I am using.
One of them, video1 (160x120 resolution), was captured from my server as raw H.264 data and converted to a YUV file using OpenH264.
YUV Player Deluxe plays these files perfectly well.
When I try to play the same file in Android, the colour components are not reproduced properly. The image appears almost black and white, with a few traces of colour scattered across the frame.
To display the video in Android, I read the YUV file frame by frame, where each frame is (w*h*1.5) bytes, and convert each frame to an RGB array.
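For reference, the per-frame read loop might look roughly like this (a minimal sketch, not the original code; yuvFile, width, height and imageView are assumed names):

    int frameSize = width * height * 3 / 2; // Y plane plus the two quarter-size chroma planes
    byte[] frame = new byte[frameSize];
    DataInputStream in = new DataInputStream(new FileInputStream(yuvFile));
    while (in.available() >= frameSize) {
        in.readFully(frame); // one complete YUV420 frame
        int[] argb = convertYUV420_NV21toARGB8888(frame, width, height);
        // in a real app, post to the UI thread and pace the frames
        imageView.setImageBitmap(Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888));
    }
    in.close();

The conversion itself is done with the method below: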
public static int[] convertYUV420_NV21toARGB8888(byte[] data, int width, int height) {
    int size = width * height;
    int offset = size;
    int[] pixels = new int[size];
    int u, v, y1, y2, y3, y4;

    // i walks along Y (and the output pixels)
    // k walks along the U and V samples
    for (int i = 0, k = 0; i < size; i += 2, k += 2) {
        y1 = data[i] & 0xff;
        y2 = data[i + 1] & 0xff;
        y3 = data[width + i] & 0xff;
        y4 = data[width + i + 1] & 0xff;

        v = data[offset + k] & 0xff;
        u = data[offset + k + 1] & 0xff;
        v = v - 128;
        u = u - 128;

        pixels[i] = convertYUVtoARGB(y1, u, v);
        pixels[i + 1] = convertYUVtoARGB(y2, u, v);
        pixels[width + i] = convertYUVtoARGB(y3, u, v);
        pixels[width + i + 1] = convertYUVtoARGB(y4, u, v);

        if (i != 0 && (i + 2) % width == 0)
            i += width; // jump over the odd row that was already filled
    }
    return pixels;
}
private static int convertYUVtoARGB(int y, int u, int v) {
    int r = y + (int) (1.772f * v);
    int g = y - (int) (0.344f * v + 0.714f * u);
    int b = y + (int) (1.402f * u);
    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;
    return 0xff000000 | (r << 16) | (g << 8) | b;
}
Using the rgb[] obtained from the method convertYUV420_NV21toARGB8888 above, I constructed a bitmap to display in an ImageView.
I tried various other routines to convert yuv[] to rgb[], but the result is the same. I also tried Android's YuvImage API, and the result is still the same.
I know that the code above converts Y'UV420sp (NV21) to ARGB8888, but I don't understand the difference between plain YUV-to-RGB conversion and Y'UV420sp (NV21)-to-ARGB8888 conversion (just below the previous link).
Could anyone please help me out?

Your main problem, as you suspected, is that you treat the input data as NV21 rather than planar YUV. The difference between the two formats is that in NV21 the chroma (U/V) samples are interleaved in a single plane (i.e. VUVUVUVUVU...), while in planar YUV they sit in separate planes (i.e. UUUUU...VVVVV...) in the opposite order. So this part of your code should look like:

    u = data[offset + k] & 0xff;
    v = data[offset + k + size/4] & 0xff;

and k in the loop should increase by 1 (not by 2).
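Putting it together, the chroma fetch for planar YUV420p (I420) data would look roughly like this (a sketch based on the fix above; the rest of the loop body stays exactly as in the question):

    for (int i = 0, k = 0; i < size; i += 2, k += 1) { // k now advances by 1 per 2x2 block
        // ... fetch y1..y4 as in the question ...
        u = data[offset + k] & 0xff;            // U plane comes right after the Y plane
        v = data[offset + size / 4 + k] & 0xff; // V plane starts size/4 bytes later
        u = u - 128;
        v = v - 128;
        // ... compute the four ARGB pixels and do the row skip as in the question ...
    }

Note that YV12 reverses the plane order (V plane before U plane), so for YV12 the two reads swap.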

Related

Media Codec - YUV_YV12 to YUV420SP/NV21

I'm trying to develop an Android application that encodes an OpenCV array of Mats (1200x1200, 4-channel images) to an MP4 video using Android's MediaCodec. The problem I'm facing is that on the emulator (which uses the YUV420P colour format; I'm converting with OpenCV's COLOR_BGRA2YUV_I420) the video output colours match the array of images, but on real Android devices the colour output is completely different. I debugged and found out that my Android devices use the YUV420SP colour format. Since OpenCV has no built-in function to convert RGBA/BGRA to YUV420SP, I convert the image to YUV_YV12 and then to YUV420SP/NV21 using the code below:
public byte[] YV12toNV21(final byte[] input, final int width, final int height) {
    // Note: the output must be a fresh array. Writing into `input` itself
    // would overwrite chroma samples before they are read back.
    byte[] output = new byte[input.length];
    final int size = width * height;
    final int quarter = size / 4;
    final int vPosition = size;           // in YV12 the V plane starts right after Y
    final int uPosition = size + quarter; // the U plane follows V

    System.arraycopy(input, 0, output, 0, size); // Y plane is unchanged
    for (int i = 0; i < quarter; i++) {
        output[size + i * 2] = input[vPosition + i];     // for NV21, V first
        output[size + i * 2 + 1] = input[uPosition + i]; // for NV21, U second
    }
    return output;
}
But still I'm facing the same issue.
This is the original RGBA picture: (image omitted)
This is the output of the video: (image omitted)
UPDATE: After swapping U and V: (image omitted)

How to crop a byte[] array containing Y' (luma) without converting to Bitmap

EDIT: Solved! See below.
I need to crop my image (YUV_422_888 colour space), which I obtain from the onImageAvailable listener of Camera2. I don't want or need to convert it to a Bitmap, as that hurts performance a lot, and I'm actually interested in the luma, not the RGB information (the luma is contained in Plane 0 of the Image).
I came up with the following solution:
1. Get the Y' information contained in Plane 0 of the Image object made available by Camera2 in the listener.
2. Convert the Y' plane into a byte[] array in.
3. Convert the byte[] array into a 2D byte[][] array in order to crop.
4. Use some for loops to crop at the desired left, right, top and bottom coordinates.
5. Fold the 2D byte[][] array back into a 1D byte[] array out, containing the cropped luma Y' information.
Step 4 unfortunately yields a corrupt image. What am I doing wrong?
In the onImageAvailableListener of Camera2 (note that although I compute a bitmap, it's only to see what's happening; I'm not interested in the Bitmap/RGB data):
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer(); // grab just the Y' plane
buffer.rewind();
byte[] data = new byte[buffer.capacity()];
buffer.get(data);
Bitmap bitmap = cropByteArray(data, image.getWidth(), image.getHeight()); // just for preview/sanity check purposes; the bitmap is **corrupt**
runOnUiThread(new bitmapRunnable(bitmap) {
    @Override
    public void run() {
        image_view_preview.setImageBitmap(this.bitmap);
    }
});
The cropByteArray function needs fixing. It outputs a bitmap that is corrupt, and should output an out byte[] array similar to in, but containing only the cropped area:
public Bitmap cropByteArray(byte[] in, int inw, int inh) {
    int l = 100; // left crop start
    int r = 400; // right crop end
    int t = 400; // top crop start
    int b = 700; // bottom crop end
    int outw = r - l;
    int outh = b - t;
    byte[][] in2d = new byte[inw][inh]; // input width and height are 1080 x 1920
    byte[] out = new byte[outw * outh];
    int[] pixels = new int[outw * outh];
    int i = 0;
    for (int col = 0; col < inw; col++) {
        for (int row = 0; row < inh; row++) {
            in2d[col][row] = in[i++];
        }
    }
    i = 0;
    for (int col = l; col < r; col++) {
        for (int row = t; row < b; row++) {
            //out[i++] = in2d[col][row]; // out is the desired output of the function, but for now we output a bitmap instead
            int grey = in2d[col][row] & 0xff;
            pixels[i++] = 0xFF000000 | (grey * 0x00010101);
        }
    }
    return Bitmap.createBitmap(pixels, inw, inh, Bitmap.Config.ARGB_8888);
}
EDIT: Solved, thanks to the suggestion by Eddy Talvala. The following code yields the Y' plane (luma plane 0 from the ImageReader) cropped to the desired coordinates. The cropped data is in the out byte array; the bitmap is generated just for confirmation. I am also attaching the handy YUVtoGrayscale() function below.
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int stride = planes[0].getRowStride();
buffer.rewind();
byte[] Y = new byte[buffer.capacity()];
buffer.get(Y);

int t = 200; int l = 600;
int out_h = 600; int out_w = 600;
byte[] out = new byte[out_w * out_h];
int firstRowOffset = stride * t + l;
for (int row = 0; row < out_h; row++) {
    buffer.position(firstRowOffset + row * stride);
    buffer.get(out, row * out_w, out_w);
}
Bitmap bitmap = YUVtoGrayscale(out, out_w, out_h);
Bitmap bitmap = YUVtoGrayscale(out, out_w, out_h);
Here goes the YUVtoGrayscale().
public Bitmap YUVtoGrayscale(byte[] yuv, int width, int height) {
    int[] pixels = new int[yuv.length];
    for (int i = 0; i < yuv.length; i++) {
        int grey = yuv[i] & 0xff;
        pixels[i] = 0xFF000000 | (grey * 0x00010101);
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
There are some remaining issues. I am using the front camera, and although the preview orientation is correct inside the TextureView, the image returned by the ImageReader is rotated clockwise and flipped vertically (a person lying on their right cheek in the preview comes out lying on their left cheek because of the flip) on my device, which has a sensor orientation of 270 degrees. Is there an accepted solution to have both the preview and the saved photos in the same, correct orientation using Camera2?
Cheers.
It'd be helpful if you described how the image is corrupt - do you see a valid but distorted image, is it total garbage, or is it just black?
But I'm guessing you're not paying attention to the row stride of the Y plane (https://developer.android.com/reference/android/media/Image.Plane.html#getRowStride() ), which would typically result in a skewed image (vertical lines become angled lines).
When accessing the Y plane, the byte index of pixel (x,y) is:
y * rowStride + x
not
y * width + x
because the row stride may be larger than the width.
I'd also avoid copying so much; you really don't need the 2D array, and a large byte[] for the whole image also wastes memory.
You can instead position() to the start of each output row, and then read only the bytes you need, straight into your destination byte[] out, with ByteBuffer.get(byte[], offset, length).
That'd look something like
int stride = planes[0].getRowStride();
ByteBuffer img = planes[0].getBuffer();
int firstRowOffset = stride * t + l;
for (int row = 0; row < outh; row++) {
    img.position(firstRowOffset + row * stride);
    img.get(out, row * outw, outw);
}

Custom byteArray data to WebRTC videoTrack

I need to use WebRTC for Android to send specifically cropped (face) video to the videoChannel. I was able to manipulate the Camera1Session class of WebRTC to get the face cropped. Right now I am setting it on an ImageView.
listenForBytebufferFrames() of Camera1Session.java
private void listenForBytebufferFrames() {
    this.camera.setPreviewCallbackWithBuffer(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera callbackCamera) {
            Camera1Session.this.checkIsOnCameraThread();
            if (callbackCamera != Camera1Session.this.camera) {
                Logging.e("Camera1Session", "Callback from a different camera. This should never happen.");
            } else if (Camera1Session.this.state != Camera1Session.SessionState.RUNNING) {
                Logging.d("Camera1Session", "Bytebuffer frame captured but camera is no longer running.");
            } else {
                mFrameProcessor.setNextFrame(data, callbackCamera);
                long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
                if (!Camera1Session.this.firstFrameReported) {
                    int startTimeMs = (int) TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - Camera1Session.this.constructionTimeNs);
                    Camera1Session.camera1StartTimeMsHistogram.addSample(startTimeMs);
                    Camera1Session.this.firstFrameReported = true;
                }
                ByteBuffer byteBuffer1 = ByteBuffer.wrap(data);
                Frame outputFrame = new Frame.Builder()
                        .setImageData(byteBuffer1,
                                Camera1Session.this.captureFormat.width,
                                Camera1Session.this.captureFormat.height,
                                ImageFormat.NV21)
                        .setTimestampMillis(mFrameProcessor.mPendingTimeMillis)
                        .setId(mFrameProcessor.mPendingFrameId)
                        .setRotation(3)
                        .build();
                int w = outputFrame.getMetadata().getWidth();
                int h = outputFrame.getMetadata().getHeight();
                SparseArray<Face> detectedFaces = mDetector.detect(outputFrame);
                if (detectedFaces.size() > 0) {
                    Face face = detectedFaces.valueAt(0);
                    ByteBuffer byteBufferRaw = outputFrame.getGrayscaleImageData();
                    byte[] byteBuffer = byteBufferRaw.array();
                    YuvImage yuvimage = new YuvImage(byteBuffer, ImageFormat.NV21, w, h, null);
                    ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    // my crop logic to get face coordinates
                    yuvimage.compressToJpeg(new Rect(left, top, right, bottom), 80, baos);
                    final byte[] jpegArray = baos.toByteArray();
                    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
                    Activity currentActivity = getActivity();
                    if (currentActivity instanceof CallActivity) {
                        ((CallActivity) currentActivity).setBitmapToImageView(bitmap); // face is set on the ImageView just fine
                    }
                    Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data, Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height, Camera1Session.this.getFrameOrientation(), captureTimeNs);
                    Camera1Session.this.camera.addCallbackBuffer(data);
                } else {
                    Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data, Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height, Camera1Session.this.getFrameOrientation(), captureTimeNs);
                    Camera1Session.this.camera.addCallbackBuffer(data);
                }
            }
        }
    });
}
jpegArray is the final byteArray that I need to stream via WebRTC, which I tried with something like this:
Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, jpegArray, (int) face.getWidth(), (int) face.getHeight(), Camera1Session.this.getFrameOrientation(), captureTimeNs);
Camera1Session.this.camera.addCallbackBuffer(jpegArray);
Setting them up like this gives me the following error:
../../webrtc/sdk/android/src/jni/androidvideotracksource.cc line 82
Check failed: length >= width * height + 2 * uv_width * ((height + 1) / 2) (2630 vs. 460800)
I assume this is because androidvideotracksource does not get the byte array length it expects, since the frame is now cropped: 460800 bytes is a full 640x480 YUV420 frame (640 * 480 * 3/2), while 2630 is just the length of the compressed JPEG.
Could someone point me in the right direction on how to achieve this? Is this the correct way/place to manipulate the data and feed it into the videoTrack?
Edit: a Bitmap built from the byte[] data does not give me a camera preview on the ImageView, unlike byte[] jpegArray. Maybe because they are packed differently?
Could you use WebRTC's DataChannel to exchange the custom data (the cropped face "image", in your case) and do the corresponding processing at the receiving end with a third-party library, e.g. OpenGL? The reason I suggest this is that the WebRTC video feed received from the channel is a real-time stream, not a byte array. By its inherent architecture, WebRTC video is not meant to crop video. If we want to crop or augment the video, we have to use some AR library to do that job.
We can always leverage WebRTC's data channel to exchange custom data. Using the video channel for this is not recommended, because it is a real-time stream, not a byte array.
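For illustration, exchanging the cropped face bytes over a data channel would look roughly like this (a sketch against the org.webrtc Android API; peerConnection and jpegArray are assumed to already exist):

    DataChannel.Init init = new DataChannel.Init();
    init.ordered = true;
    DataChannel faceChannel = peerConnection.createDataChannel("faceFrames", init);
    // ... later, whenever a cropped face frame is ready:
    ByteBuffer payload = ByteBuffer.wrap(jpegArray);
    faceChannel.send(new DataChannel.Buffer(payload, true)); // true = binary message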
WebRTC in particular, and video streaming in general, presumes that the video has fixed dimensions. If you want to crop the detected face, your options are either to pad the cropped image with e.g. black pixels (WebRTC does not use transparency) and crop the video on the receiver side, or, if you don't have control over the receiver, to resize the cropped region to fill the expected width * height frame (keeping the expected aspect ratio).
Note that the JPEG compress/decompress that you use to crop the original is far from efficient. Some other options can be found in Image crop and resize in Android.
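To illustrate the padding option, a cropped face Bitmap could be letterboxed into a constant-size frame roughly like this (a sketch, not from the answer; padToFrame is a hypothetical helper built on android.graphics):

    private Bitmap padToFrame(Bitmap cropped, int frameWidth, int frameHeight) {
        Bitmap frame = Bitmap.createBitmap(frameWidth, frameHeight, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(frame);
        canvas.drawColor(Color.BLACK); // WebRTC has no transparency, so pad with black
        // scale to fit while keeping the crop's aspect ratio
        float scale = Math.min((float) frameWidth / cropped.getWidth(),
                               (float) frameHeight / cropped.getHeight());
        Matrix m = new Matrix();
        m.postScale(scale, scale);
        m.postTranslate((frameWidth - cropped.getWidth() * scale) / 2f,
                        (frameHeight - cropped.getHeight() * scale) / 2f);
        canvas.drawBitmap(cropped, m, null);
        return frame;
    }

The receiver can then crop the black borders away, or simply display the letterboxed frame.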
Okay, this was definitely a problem of how the original byte[] data and the byte[] jpegArray were packed. Changing the packing, and scaling it as AlexCohn suggested, worked for me. I found help in another StackOverflow post on how to pack it. This is the code:
private byte[] getNV21(int left, int top, int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, left, top, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}

private void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // alpha is not used
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;
            // well-known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y and an interleaved VU plane, each sampled by
            // a factor of 2: for every 4 Y pixels there is 1 V and 1 U. Note the
            // sampling is every other pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
I pass this byte[] data to onByteBufferFrameCaptured and to the callback buffer:
Camera1Session.this.events.onByteBufferFrameCaptured(
Camera1Session.this,
data,
w,
h,
Camera1Session.this.getFrameOrientation(),
captureTimeNs);
Camera1Session.this.camera.addCallbackBuffer(data);
Prior to this, I had to scale the bitmap, which is pretty straightforward:
int width = bitmapToScale.getWidth();
int height = bitmapToScale.getHeight();
Matrix matrix = new Matrix();
// use float division here; with int operands the scale factors would truncate to 0 or 1
matrix.postScale((float) newWidth / width, (float) newHeight / height);
Bitmap scaledBitmap = Bitmap.createBitmap(bitmapToScale, 0, 0, width, height, matrix, true);

NV21 to Bitmap on Android: very dark image, grayscale, or yellow tint?

I have been looking at converting the NV21 byte[] that I get from onPreviewFrame(). I have searched the forums and Google for various solutions. I have tried RenderScripts and some other code examples. Some of them give me an image with a yellow tint; some give me an image with red and blue flipped (after I flip them back in the code, I get the yellow tint back); some give me strange colour artifacts throughout the image (almost like a negative); some give me a grayscale image; some give me an image so dark you can't really make anything out.
Since I am the one typing the question, I realize I must be the idiot in the room, so we will start with this post. This particular solution gives me a very dark image, but I am not cool enough to be able to comment on it yet. Has anyone tried this solution, or does anyone have one that produces an image with the same quality as the original NV21 format?
I need either a valid ARGB byte[] or a valid Bitmap; I can modify my project to deal with either. Just for reference, I have tried these (and a few others that are really just carbon copies of these):
One solution I tried
Another solution I tried
If you are trying to convert YUV from the camera to a Bitmap, here is something you can try:
// import android.renderscript.*
// RenderScript mRS;
// ScriptIntrinsicYuvToRGB mYuvToRGB;
// Allocation yuvPreviewAlloc;
// Allocation rgbOutputAlloc;
// Create RenderScript context, ScriptIntrinsicYuvToRGB and Allocations and keep reusing them.
if (NotInitialized) {
    mRS = RenderScript.create(this);
    mYuvToRGB = ScriptIntrinsicYuvToRGB.create(mRS, Element.YUV(mRS));
    // Create an RS Allocation to hold the NV21 data.
    Type.Builder tYuv = new Type.Builder(mRS, Element.YUV(mRS));
    tYuv.setX(width).setY(height).setYuvFormat(android.graphics.ImageFormat.NV21);
    yuvPreviewAlloc = Allocation.createTyped(mRS, tYuv.create(), Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_INPUT);
    // Create an RS Allocation to hold the RGBA data.
    Type.Builder tRgb = new Type.Builder(mRS, Element.RGBA_8888(mRS));
    tRgb.setX(width).setY(height);
    rgbOutputAlloc = Allocation.createTyped(mRS, tRgb.create(), Allocation.USAGE_SCRIPT);
    // Set the input of ScriptIntrinsicYuvToRGB.
    mYuvToRGB.setInput(yuvPreviewAlloc);
}
// Use rsPreviewSurface as one of the output surface from Camera API.
// You can refer to https://github.com/googlesamples/android-HdrViewfinder/blob/master/Application/src/main/java/com/example/android/hdrviewfinder/HdrViewfinderActivity.java#L504
Surface rsPreviewSurface = yuvPreviewAlloc.getSurface();
...
// Whenever a new frame is available
// Update the yuv Allocation with a new Camera buffer without any copy.
// You can refer to https://github.com/googlesamples/android-HdrViewfinder/blob/master/Application/src/main/java/com/example/android/hdrviewfinder/ViewfinderProcessor.java#L109
yuvPreviewAlloc.ioReceive();
// The actual Yuv to Rgb conversion.
mYuvToRGB.forEach(rgbOutputAlloc);
// Copy the rgb Allocation to a Bitmap.
rgbOutputAlloc.copyTo(mBitmap);
// continue processing mBitmap.
...
When using ScriptIntrinsics, I highly recommend updating to at least JellyBean 4.3 or higher (API 18). Things are much easier to use than in JB 4.2 (API 17).
ScriptIntrinsicYuvToRGB is not as complicated as it seems.
In particular, you don't need Type.Builder objects.
The camera preview format must be NV21!
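For reference, with the old android.hardware.Camera API the preview format is set roughly like this (a sketch; camera is the open Camera instance):

    Camera.Parameters params = camera.getParameters();
    params.setPreviewFormat(ImageFormat.NV21); // NV21 is the default, universally supported preview format
    camera.setParameters(params);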
In the onCreate()... method, create the RenderScript object and the intrinsic:
mRS = RenderScript.create(this);
mYuvToRGB = ScriptIntrinsicYuvToRGB.create(mRS, Element.U8_4(mRS));
With your cameraPreviewWidth and cameraPreviewHeight, calculate the length of the camera data byte array:
int yuvDatalength = cameraPreviewWidth * cameraPreviewHeight * 3 / 2; // 12 bits per pixel
You need a bitmap for output:
mBitmap = Bitmap.createBitmap(cameraPreviewWidth, cameraPreviewHeight, Bitmap.Config.ARGB_8888);
Then create the input and output allocations (this is where API 18+ differs):
yuvPreviewAlloc = Allocation.createSized(mRS, Element.U8(mRS), yuvDatalength);
rgbOutputAlloc = Allocation.createFromBitmap(mRS, mBitmap); // it's that simple!
and set the script's input to the input allocation:
mYuvToRGB.setInput(yuvPreviewAlloc); // this has to be done only once!
In the camera loop (whenever a new frame is available), copy the NV21 byte array (data[]) to yuvPreviewAlloc, execute the script, and copy the result to the bitmap:
yuvPreviewAlloc.copyFrom(data); // or yuvPreviewAlloc.copyFromUnchecked(data);
mYuvToRGB.forEach(rgbOutputAlloc);
rgbOutputAlloc.copyTo(mBitmap);
For example: on a Nexus 7 (2013, JellyBean 4.3), a full-HD (1920x1080) camera preview conversion takes about 7 ms.
I was able to get a different method working (one that was previously linked) by using the code here. But that was giving the red/blue colour flip, so I just rearranged the U and V lines and all was OK. This is not as fast as a RenderScript, though; it would be good to have a RenderScript that functioned properly. Here is the code:
static public void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                u = (0xff & yuv420sp[uvp++]) - 128; // just changed the order:
                v = (0xff & yuv420sp[uvp++]) - 128; // it was originally v then u
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
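Typical usage is then something like this (a sketch; data is the NV21 array from onPreviewFrame()):

    int[] rgb = new int[previewWidth * previewHeight];
    decodeYUV420SP(rgb, data, previewWidth, previewHeight);
    Bitmap bmp = Bitmap.createBitmap(rgb, previewWidth, previewHeight, Bitmap.Config.ARGB_8888);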
Anyone have a RenderScript that doesn't have colour tint and/or flip problems?

MediaCodec: different colours on Genymotion and Huddle 2

My aim:
Use filters (cropping, black and white, edge detection) on an MP4 video from the SD card using RenderScript.
Attempted Solutions
- Use MediaCodec to output to a surface directly.
The rendered colours were correct, but I could not find a way to process each frame individually to apply filters with RenderScript.
- Copy the decoded buffer out of MediaCodec and convert it to RGB using ScriptIntrinsicYuvToRGB.
I cannot use ScriptIntrinsicYuvToRGB, because it assumes the incoming YUV data is laid out differently from my data. (How do I correctly convert YUV colours to RGB in Android?)
- Copy the decoded buffer out of MediaCodec and convert it to RGB using custom code.
My current solution converts the YUV to RGB using code very similar to https://stackoverflow.com/a/12702836/601147 (thanks @Derzu):
/**
 * Converts YUV420 NV21 to RGB8888.
 *
 * @param data byte array in YUV420 NV21 format.
 * @param width pixel width
 * @param height pixel height
 * @return an RGB8888 pixel int array, where each int is one ARGB pixel.
 */
public int[] convertYUV420_NV21toRGB8888(byte[] data, int width, int height) {
    int size = width * height;
    int offset = size;
    int[] pixels = new int[size];
    int u, v, y1, y2, y3, y4;

    // i walks along the Y samples and the output pixels
    // k walks along the U and V samples
    for (int i = 0, k = 0; i < size; i += 2, k += 2) {
        y1 = data[i] & 0xff;
        y2 = data[i + 1] & 0xff;
        y3 = data[width + i] & 0xff;
        y4 = data[width + i + 1] & 0xff;

        u = data[offset + k] & 0xff;
        v = data[offset + k + 1] & 0xff;
        u = u - 128;
        v = v - 128;

        // pixels[i] = convertYUVtoRGB(y1, v, u);
        // pixels[i+1] = convertYUVtoRGB(y2, v, u);
        // pixels[width+i ] = convertYUVtoRGB(y3, v, u);
        // pixels[width+i+1] = convertYUVtoRGB(y4, v, u);
        pixels[i] = convertYUVtoRGB(y1, u, v);
        pixels[i + 1] = convertYUVtoRGB(y2, u, v);
        pixels[width + i] = convertYUVtoRGB(y3, u, v);
        pixels[width + i + 1] = convertYUVtoRGB(y4, u, v);

        if (i != 0 && (i + 2) % width == 0)
            i += width;
    }
    return pixels;
}

private int convertYUVtoRGB(int y, int u, int v) {
    int r, g, b;
    r = y + (int) (1.402f * v);
    g = y - (int) (0.344f * u + 0.714f * v);
    b = y + (int) (1.772f * u);
    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;
    // note: this packs B into the high colour byte and R into the low one (ABGR rather than ARGB)
    return 0xff000000 | (b << 16) | (g << 8) | r;
}
The only difference I made was to flip the U/V arguments from
pixels[i] = convertYUVtoRGB(y1, v, u);
to
pixels[i] = convertYUVtoRGB(y1, u, v);
because the latter one works better.
My problem
The colours look OK-ish on my Huddle 2, but they are totally wrong on the Genymotion JellyBean emulator.
Huddle 2: (screenshot omitted)
Genymotion: (screenshot omitted)
Can you please offer any help or suggestions?
Thank you very much.
You need to check which colour format the codec actually uses - this code assumes that the output is NV12, but it looks like your output is I420 or something similar (planar, not semiplanar). Have a look at http://bigflake.com/mediacodec/, in particular https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java (the checkFrame method).
The key element is this:
int colorFormat = format.getInteger(MediaFormat.KEY_COLOR_FORMAT);
The checkFrame method also shows how to deal with the data if it is planar instead of semiplanar, which should work for your case.
Do keep in mind that devices are also allowed to output proprietary formats that you can't interpret as simply as this - you need to be ready to handle that case in some way (e.g. telling the user that your app can't handle it, falling back to other software-based implementations, etc.).
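To illustrate, branching on the reported colour format might look roughly like this (a sketch; decoder is the MediaCodec instance, and convertYUV420PlanarToRGB8888 is a hypothetical planar variant of the converter above):

    MediaFormat format = decoder.getOutputFormat();
    int colorFormat = format.getInteger(MediaFormat.KEY_COLOR_FORMAT);
    switch (colorFormat) {
        case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar:
            // I420: a full U plane follows Y, then a full V plane
            pixels = convertYUV420PlanarToRGB8888(data, width, height); // hypothetical planar variant
            break;
        case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar:
            // NV12: interleaved U/V pairs follow Y (U first)
            pixels = convertYUV420_NV21toRGB8888(data, width, height);
            break;
        default:
            // proprietary/vendor format: report it as unsupported or fall back to a software path
            throw new UnsupportedOperationException("Unhandled color format " + colorFormat);
    }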
