Converting YUV -> RGB (image processing) -> YUV during onPreviewFrame in Android?

I am capturing images using a SurfaceView and getting the raw YUV preview data in public void onPreviewFrame(byte[] data, Camera camera).
I have to perform some image preprocessing in onPreviewFrame, so I need to convert the YUV preview data to RGB, do the image preprocessing, and then convert back to YUV data.
I have used both of the following functions for decoding and encoding the YUV data:
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    if (data != null) {
        Log.i("DEBUG", "data Not Null");
        // Preprocessing
        Log.i("DEBUG", "Try For Image Processing");
        Camera.Parameters mParameters = camera.getParameters();
        Size mSize = mParameters.getPreviewSize();
        int mWidth = mSize.width;
        int mHeight = mSize.height;
        int[] mIntArray = new int[mWidth * mHeight];
        // Decode Yuv data to integer array
        decodeYUV420SP(mIntArray, data, mWidth, mHeight);
        // Converting int mIntArray to Bitmap and
        // then image preprocessing
        // and back to mIntArray.
        // Encode intArray to Yuv data
        encodeYUV420SP(data, mIntArray, mWidth, mHeight);
    }
}
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0)
                r = 0;
            else if (r > 262143)
                r = 262143;
            if (g < 0)
                g = 0;
            else if (g > 262143)
                g = 262143;
            if (b < 0)
                b = 0;
            else if (b > 262143)
                b = 262143;
            // rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) &
            // 0xff00) | ((b >> 10) & 0xff);
            // rgba, divide 2^10 ( >> 10)
            rgba[yp] = ((r << 14) & 0xff000000) | ((g << 6) & 0xff0000)
                    | ((b >> 2) | 0xff00);
        }
    }
}
static public void encodeYUV420SP_original(byte[] yuv420sp, int[] rgba,
        int width, int height) {
    final int frameSize = width * height;
    int[] U, V;
    U = new int[frameSize];
    V = new int[frameSize];
    final int uvwidth = width / 2;
    int r, g, b, y, u, v;
    for (int j = 0; j < height; j++) {
        int index = width * j;
        for (int i = 0; i < width; i++) {
            r = (rgba[index] & 0xff000000) >> 24;
            g = (rgba[index] & 0xff0000) >> 16;
            b = (rgba[index] & 0xff00) >> 8;
            // rgb to yuv
            // NOTE: due to operator precedence, ">> 8 + 16" parses as ">> 24",
            // not "(... >> 8) + 16"; this is one likely source of corruption.
            y = (66 * r + 129 * g + 25 * b + 128) >> 8 + 16;
            u = (-38 * r - 74 * g + 112 * b + 128) >> 8 + 128;
            v = (112 * r - 94 * g - 18 * b + 128) >> 8 + 128;
            // clip y
            yuv420sp[index++] = (byte) ((y < 0) ? 0 : ((y > 255) ? 255 : y));
            U[index] = u;
            V[index++] = v;
        }
    }
    // (snippet appears truncated here: U and V are collected but never
    // written into yuv420sp)
}
The problem is that the encoding and decoding of the YUV data may have some mistake, because even if I skip the preprocessing step, the re-encoded YUV data differ from the original data from the PreviewCallback.
Please help me resolve this issue. I have to use this code for OCR scanning, so I need to implement this kind of logic.
If there is any other way of doing the same thing, please let me know.
Thanks in advance. :)

Although the documentation suggests that you can set which format the image data should arrive in from the camera, in practice you often have a choice of one: NV21, a YUV format. For lots of information on this format see http://www.fourcc.org/yuv.php#NV21 and for information on the theory behind converting it to RGB see http://www.fourcc.org/fccyvrgb.php. There is a picture-based explanation at Extract black and white image from android camera's NV21 format. There is an Android-specific section on a Wikipedia page about the subject (thanks @AlexCohn): YUV#Y'UV420sp (NV21) to RGB conversion (Android).
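To make the layout concrete, here is a small sketch of my own (not from the linked pages) of how you would index into an NV21 buffer; data, width, height, row and col are assumed to be the preview byte array, the frame dimensions and a pixel position:

// NV21 layout for a width x height frame:
//   bytes [0 .. width*height)                 : Y plane, one byte per pixel
//   bytes [width*height .. width*height*3/2)  : interleaved V,U pairs, one pair per 2x2 pixel block
int frameSize = width * height;
int yValue = data[row * width + col] & 0xff;                // luma for (col, row)
int uvBase = frameSize + (row / 2) * width + (col / 2) * 2; // start of this block's V,U pair
int vValue = data[uvBase] & 0xff;                           // V comes first in NV21
int uValue = data[uvBase + 1] & 0xff;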
However, once you've set up your onPreviewFrame routine, the mechanics of going from the byte array it sends you to useful data is somewhat, ummmm, unclear. From API 8 onwards, the following solution is available, to get to a ByteArrayOutputStream holding a JPEG of the image (compressToJpeg is the only conversion option offered by YuvImage):
// pWidth and pHeight define the size of the preview frame
ByteArrayOutputStream out = new ByteArrayOutputStream();
// Alter the second parameter of this to the actual format you are receiving
YuvImage yuv = new YuvImage(data, ImageFormat.NV21, pWidth, pHeight, null);
// bWidth and bHeight define the size of the bitmap you wish to fill with the preview image
yuv.compressToJpeg(new Rect(0, 0, bWidth, bHeight), 50, out);
This JPEG may then need to be converted into the format you want. If you want a Bitmap:
byte[] bytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
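Pulled together as one helper, a minimal sketch of the same calls shown above (the method name is mine):

// Sketch: NV21 preview bytes -> Bitmap via YuvImage's JPEG path.
static Bitmap nv21ToBitmap(byte[] data, int width, int height) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, width, height, null);
    yuv.compressToJpeg(new Rect(0, 0, width, height), 50, out);
    byte[] bytes = out.toByteArray();
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
}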
If, for whatever reason, you are unable to do this, you can do the conversion manually. Some problems to be overcome in doing this:
The data arrives in a byte array. By definition, bytes are signed numbers, meaning that they go from -128 to 127. However, the data is actually unsigned bytes (0 to 255). If this isn't dealt with, the outcome is doomed to have some odd clipping effects.
The data is in a very specific order (as per the previously mentioned web pages) and each pixel needs to be extracted carefully.
Each pixel needs to be put into the right place on a bitmap, say. This also requires a rather messy (in my view) approach of building a buffer of the data and then filling a bitmap from it.
In principle, the values should be stored [16..240], but it appears that they are stored [0..255] in the data sent to onPreviewFrame.
Just about every web page on the matter proposes different coefficients, even allowing for [16..240] vs [0..255] options.
If you've actually got NV12 (another variant on YUV420), then you will need to swap the reads for U and V.
I present a solution (which seems to work), with requests for corrections, improvements and ways of making the whole thing less costly to run. I have set it out to hopefully make clear what is happening, rather than to optimise it for speed. It creates a bitmap the size of the preview image:
The data variable is coming from the call to onPreviewFrame
// Define whether expecting [16..240] or [0..255]
boolean dataIs16To240 = false;
// the bitmap we want to fill with the image
Bitmap bitmap = Bitmap.createBitmap(imageWidth, imageHeight, Bitmap.Config.ARGB_8888);
int numPixels = imageWidth * imageHeight;
// the buffer we fill up which we then fill the bitmap with
IntBuffer intBuffer = IntBuffer.allocate(imageWidth * imageHeight);
// If you're reusing a buffer, the next line is imperative to refill from the start;
// if not, it's good practice
intBuffer.position(0);
// Set the alpha for the image: 0 is transparent, 255 fully opaque
final byte alpha = (byte) 255;
// Holding variables for the loop calculation
int R = 0;
int G = 0;
int B = 0;
// Get each pixel, one at a time
for (int y = 0; y < imageHeight; y++) {
    for (int x = 0; x < imageWidth; x++) {
        // Get the Y value, stored in the first block of data
        // The logical "AND 0xff" is needed to deal with the signed issue
        float Yf = (float) (data[y * imageWidth + x] & 0xff);
        // Get U and V values, stored after Y values, one per 2x2 block
        // of pixels, interleaved. Prepare them as floats with correct range
        // ready for calculation later.
        int xby2 = x / 2;
        int yby2 = y / 2;
        // make this V for NV12/420SP
        float U = (float) (data[numPixels + 2 * xby2 + yby2 * imageWidth] & 0xff) - 128.0f;
        // make this U for NV12/420SP
        float V = (float) (data[numPixels + 2 * xby2 + 1 + yby2 * imageWidth] & 0xff) - 128.0f;
        if (dataIs16To240) {
            // Correct Y to allow for the fact that it is [16..235] and not [0..255]
            Yf = 1.164f * (Yf - 16.0f);
            // Do the YUV -> RGB conversion
            // These seem to work, but other variations are quoted
            // out there.
            R = (int) (Yf + 1.596f * V);
            G = (int) (Yf - 0.813f * V - 0.391f * U);
            B = (int) (Yf + 2.018f * U);
        } else {
            // No need to correct Y
            // These are the coefficients proposed by @AlexCohn
            // for [0..255], as per the Wikipedia page referenced above
            R = (int) (Yf + 1.370705f * V);
            G = (int) (Yf - 0.698001f * V - 0.337633f * U);
            B = (int) (Yf + 1.732446f * U);
        }
        // Clip rgb values to 0-255
        R = R < 0 ? 0 : R > 255 ? 255 : R;
        G = G < 0 ? 0 : G > 255 ? 255 : G;
        B = B < 0 ? 0 : B > 255 ? 255 : B;
        // Put that pixel in the buffer
        intBuffer.put(alpha * 16777216 + R * 65536 + G * 256 + B);
    }
}
// Get buffer ready to be read
intBuffer.flip();
// Push the pixel information from the buffer onto the bitmap.
bitmap.copyPixelsFromBuffer(intBuffer);
As @Timmmm points out below, you could do the conversion in int by multiplying the scaling factors by 1000 (i.e. 1.164 becomes 1164) and then dividing the end results by 1000.
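For illustration, the inner calculation of the [16..240] branch might then look like this (a sketch along the lines of that suggestion, not @Timmmm's exact code; it assumes y, u and v are ints, with u and v already centered by subtracting 128):

// Fixed-point variant: coefficients scaled by 1000, results divided by 1000.
int yScaled = 1164 * (y - 16);
int r = (yScaled + 1596 * v) / 1000;
int g = (yScaled - 813 * v - 391 * u) / 1000;
int b = (yScaled + 2018 * u) / 1000;
// then clamp r, g, b to [0, 255] as before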

Why not specify that camera preview should provide RGB images?
i.e. Camera.Parameters.setPreviewFormat(ImageFormat.RGB_565);
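In practice, RGB preview is rarely supported, so you would typically check first; a minimal sketch:

Camera.Parameters params = camera.getParameters();
List<Integer> supported = params.getSupportedPreviewFormats();
if (supported != null && supported.contains(ImageFormat.RGB_565)) {
    params.setPreviewFormat(ImageFormat.RGB_565);
    camera.setParameters(params);
} else {
    // Fall back to the default NV21 and convert manually.
}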

You can use RenderScript -> ScriptIntrinsicYuvToRGB
Kotlin Sample
val rs = RenderScript.create(CONTEXT_HERE)
val yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs))
val yuvType = Type.Builder(rs, Element.U8(rs)).setX(byteArray.size)
val inData = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT)
val rgbaType = Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height)
val outData = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT)
inData.copyFrom(byteArray)
yuvToRgbIntrinsic.setInput(inData)
yuvToRgbIntrinsic.forEach(outData)
val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
outData.copyTo(bitmap)
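Note that creating the RenderScript context, the intrinsic and the allocations is relatively expensive; if you convert every preview frame, you would normally create them once and only repeat the copyFrom / setInput / forEach / copyTo calls per frame.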

After some tests on a Samsung S4 Mini, the fastest code is (120% faster than Neil's [floats!] and 30% faster than the original Hitesh's):
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    // define variables before loops (+ 20-30% faster algorithm o0`)
    int r, g, b, y1192, y, i, uvp, u, v;
    for (int j = 0, yp = 0; j < height; j++) {
        uvp = frameSize + (j >> 1) * width;
        u = 0;
        v = 0;
        for (i = 0; i < width; i++, yp++) {
            y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            y1192 = 1192 * y;
            r = (y1192 + 1634 * v);
            g = (y1192 - 833 * v - 400 * u);
            b = (y1192 + 2066 * u);
            // Java's functions are faster than 'IFs'
            r = Math.max(0, Math.min(r, 262143));
            g = Math.max(0, Math.min(g, 262143));
            b = Math.max(0, Math.min(b, 262143));
            // rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) &
            // 0xff00) | ((b >> 10) & 0xff);
            // rgba, divide 2^10 ( >> 10)
            rgba[yp] = ((r << 14) & 0xff000000) | ((g << 6) & 0xff0000)
                    | ((b >> 2) | 0xff00);
        }
    }
}
Speed is comparable to YuvImage.compressToJpeg() with a ByteArrayOutputStream as output (30-50 ms for a 640x480 image).
Result: a Samsung S4 Mini (2x1.7 GHz) can't compress to JPEG / convert YUV to RGB in real time (640x480 @ 30fps).

The Java implementation is about 10 times slower than the C version, so I suggest you use the GPUImage library or move this part of the code to native code.
There is an Android version of GPUImage:
https://github.com/CyberAgent/android-gpuimage
You can include this library if you use Gradle, and call the method:
GPUImageNativeLibrary.YUVtoRBGA(inputArray, WIDTH, HEIGHT, outputArray);
I compared the time: for an NV21 image of 960x540, the above Java code takes 200 ms+, while the GPUImage version takes just 10-20 ms.

You can use the ColorHelper library for this (a C# library):
using ColorHelper;
YUV yuv = new YUV(0.1, 0.1, 0.2);
RGB rgb = ColorConverter.YuvToRgb(yuv);
Links:
Github
Nuget

A fixup of the above code snippet:
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    int r, g, b, y1192, y, i, uvp, u, v;
    for (int j = 0, yp = 0; j < height; j++) {
        uvp = frameSize + (j >> 1) * width;
        u = 0;
        v = 0;
        for (i = 0; i < width; i++, yp++) {
            y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                // the answer above is wrong at the following lines: just swap u and v
                u = (0xff & yuv420sp[uvp++]) - 128;
                v = (0xff & yuv420sp[uvp++]) - 128;
            }
            y1192 = 1192 * y;
            r = (y1192 + 1634 * v);
            g = (y1192 - 833 * v - 400 * u);
            b = (y1192 + 2066 * u);
            r = Math.max(0, Math.min(r, 262143));
            g = Math.max(0, Math.min(g, 262143));
            b = Math.max(0, Math.min(b, 262143));
            // combine ARGB (note: mask the blue channel with & 0xff)
            rgba[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00)
                    | ((b >> 10) & 0xff);
        }
    }
}

Try RenderScript ScriptIntrinsicYuvToRGB, which comes with Jelly Bean 4.2 (API 17+).
https://developer.android.com/reference/android/renderscript/ScriptIntrinsicYuvToRGB.html
On Nexus 7 (2013, JellyBean 4.3) a 1920x1080 image conversion (full HD camera preview) takes about 7 ms.

You can get the bitmap directly from the TextureView, which is really fast:
Bitmap bitmap = textureView.getBitmap();
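Note that getBitmap() allocates a new Bitmap on every call; if you grab frames continuously, the getBitmap(Bitmap) overload, which fills a bitmap you already allocated, avoids that per-frame allocation.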

After reading many suggested links, articles, etc., I found the following great Android example app, which captures the YUV image from the camera and converts it into an RGB Bitmap:
https://github.com/android/camera-samples/tree/main/CameraXTfLite
Nice things about this:
It uses the aforementioned RenderScript framework and the code can be easily reused - check out the YuvToRgbConverter.kt class.
According to their documentation, this code achieves "~30 FPS @ 640x480 on a Pixel 3 phone".
After switching to this code (especially the YUV to RGB conversion part) my framerate doubled! I am not quite reaching 30 FPS overall, since I am doing a bit more after capturing the image, but the speed-up is remarkable!

Related

How to correctly pass YUV_420_888 Image Buffer from Java through JNI to OpenCV, accounting for stride/padding

I'm trying to convert the image data from an Android device from YUV_420_888 to an RGB matrix on the C++ side. On some devices, this is working flawlessly. On a Note 10, the image comes out looking like this:
My guess here is that the stride is causing this issue. How do I remove this extra data and then pass the correct buffer through JNI?
Here is the Java code:
IntBuffer rgb = image.getPlanes()[0].getBuffer().asIntBuffer();
NativeLib.passImageBuffer(rgb);
And here is the C++ code:
cv::Mat outputRGB;
cv::cvtColor(cv::Mat(height+height/2, width, CV_8UC1, inputRGB), outputRGB, CV_YUV2BGR_NV21);
I've tried some different image formats on the C++ side, but they all come back with the same band on the side of the screen.
I've implemented this answer, in order to remove the extra padding, but the image that is passed ends up being completely green. Do some corresponding edits need to be made to the C++ code? I've tried using a 3-channel format, but that crashes at runtime. Since passing the buffer works with the 1-channel matrix on phones that have 8 bits per pixel, shouldn't it be possible to do that with the Note 10?
Image.Plane Y = image.getPlanes()[0];
Image.Plane U = image.getPlanes()[1];
Image.Plane V = image.getPlanes()[2];

int[] rgbBytes = new int[image.getHeight() * image.getWidth() * 4];
int idx = 0;

ByteBuffer yBuffer = Y.getBuffer();
int yPixelStride = Y.getPixelStride();
int yRowStride = Y.getRowStride();

ByteBuffer uBuffer = U.getBuffer();
int uPixelStride = U.getPixelStride();
int uRowStride = U.getRowStride();

ByteBuffer vBuffer = V.getBuffer();
int vPixelStride = V.getPixelStride();
int vRowStride = V.getRowStride();

ByteBuffer rgbBuffer = ByteBuffer.allocateDirect(rgb.limit());

for (int row = 0; row < image.getHeight(); row++) {
    for (int col = 0; col < image.getWidth(); col++) {
        int y = yBuffer.get(col * yPixelStride + row * yRowStride) & 0xff;
        int u = uBuffer.get(col / 2 * uPixelStride + row / 2 * uRowStride) & 0xff;
        int v = vBuffer.get(col / 2 * vPixelStride + row / 2 * vRowStride) & 0xff;

        int y1 = ((19077 << 8) * y) >> 16;
        int r = (y1 + (((26149 << 8) * v) >> 16) - 14234) >> 6;
        int g = (y1 - (((6419 << 8) * u) >> 16) - (((13320 << 8) * v) >> 16) + 8708) >> 6;
        int b = (y1 + (((33050 << 8) * u) >> 16) - 17685) >> 6;

        if (r < 0) r = 0;
        if (g < 0) g = 0;
        if (b < 0) b = 0;
        if (r > 255) r = 255;
        if (g > 255) g = 255;
        if (b > 255) b = 255;

        byte pixel = (byte) (0xff000000 + b + 256 * (g + 256 * r));
        rgbBuffer.put(pixel);
    }
}
Look at this repo:
https://github.com/quickbirdstudios/yuvToMat/
It supports different formats (YUV420, NV12) and a variety of pixel and row strides.

Android ImageReader get NV21 format?

I do not have a background in imaging or graphics, so please bear with me :)
I am using JavaCV in one of my projects. In the examples, a Frame is constructed which has a buffer of a certain size.
When using the public void onPreviewFrame(byte[] data, Camera camera) function in Android, copying this data byte array is no problem if you declare the Frame as new Frame(frameWidth, frameHeight, Frame.DEPTH_UBYTE, 2); where frameWidth and frameHeight are declared as
Camera.Size previewSize = cameraParam.getPreviewSize();
int frameWidth = previewSize.width;
int frameHeight = previewSize.height;
Recently, Android added a method to capture your screen. Naturally, I wanted to grab those images and also convert them to Frames. I modified the example code from Google to use the ImageReader.
This ImageReader is constructed as ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, PixelFormat.RGBA_8888, 2);. So currently it uses the RGBA_8888 pixel format. I use the following code to copy the bytes to the Frame, which is instantiated as new Frame(DISPLAY_WIDTH, DISPLAY_HEIGHT, Frame.DEPTH_UBYTE, 2);:
ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
mImage.close();
((ByteBuffer) frame.image[0].position(0)).put(bytes);
But this gives me a java.nio.BufferOverflowException. I printed the sizes of both buffers and the Frame's buffer size is 691200 whereas the bytes array above is of size 1413056. Figuring out how this latter number is constructed failed because I ran into this native call. So clearly, this won't work out.
After quite a bit of digging I found out that the NV21 image format is "the default format for Camera preview images, when not otherwise set with setPreviewFormat(int)", but the ImageReader class does not support the NV21 format (see the format parameter). So that's tough luck. In the documentation it also reads that
"For the android.hardware.camera2 API, the YUV_420_888 format is recommended for YUV output instead."
So I tried creating an ImageReader like this ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, ImageFormat.YUV_420_888, 2);, but this gives me java.lang.UnsupportedOperationException: The producer output buffer format 0x1 doesn't match the ImageReader's configured buffer format 0x23. so that won't work either.
As a last resort, I tried to convert RGBA_8888 to YUV myself using e.g. this post, but I fail to understand how I can obtain an int[] rgba as per the answer.
So, TL;DR how can I obtain NV21 image data like you get in Android's public void onPreviewFrame(byte[] data, Camera camera) camera function to instantiate my Frame and work with it using Android's ImageReader (and Media Projection)?
Edit (25-10-2016)
I have created the following conversion runnable to go from RGBA to NV21 format:
private class updateImage implements Runnable {
    private final Image mImage;

    public updateImage(Image image) {
        mImage = image;
    }

    @Override
    public void run() {
        int mWidth = mImage.getWidth();
        int mHeight = mImage.getHeight();
        // Four bytes per pixel: width * height * 4.
        byte[] rgbaBytes = new byte[mWidth * mHeight * 4];
        // put the data into the rgbaBytes array.
        mImage.getPlanes()[0].getBuffer().get(rgbaBytes);
        mImage.close(); // Access to the image is no longer needed, release it.
        // Create a yuv byte array: width * height * 3/2 (12 bits per pixel).
        byte[] yuv = new byte[mWidth * mHeight * 3 / 2];
        RGBtoNV21(yuv, rgbaBytes, mWidth, mHeight);
        ((ByteBuffer) yuvImage.image[0].position(0)).put(yuv);
    }

    void RGBtoNV21(byte[] yuv420sp, byte[] argb, int width, int height) {
        final int frameSize = width * height;
        int yIndex = 0;
        int uvIndex = frameSize;
        int A, R, G, B, Y, U, V;
        int index = 0;
        int rgbIndex = 0;
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                R = argb[rgbIndex++];
                G = argb[rgbIndex++];
                B = argb[rgbIndex++];
                A = argb[rgbIndex++]; // Ignored right now.
                // RGB to YUV conversion according to
                // https://en.wikipedia.org/wiki/YUV#Y.E2.80.B2UV444_to_RGB888_conversion
                Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
                U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
                V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
                // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor
                // of 2 meaning for every 4 Y pixels there are 1 V and 1 U.
                // Note the sampling is every other pixel AND every other scanline.
                yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                if (i % 2 == 0 && index % 2 == 0) {
                    yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                    yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                }
                index++;
            }
        }
    }
}
The yuvImage object is initialized as yuvImage = new Frame(DISPLAY_WIDTH, DISPLAY_HEIGHT, Frame.DEPTH_UBYTE, 2);, the DISPLAY_WIDTH and DISPLAY_HEIGHT are just two integers specifying the display size.
This is the code where a background handler handles onImageAvailable:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        mBackgroundHandler.post(new updateImage(reader.acquireNextImage()));
    }
};
...
mImageReader = ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, PixelFormat.RGBA_8888, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
The methods work and I at least don't get any errors, but the output image is malformed. What is going wrong in my conversion? An example image that is being created:
Edit (15-11-2016)
I have modified the RGBtoNV21 function to be the following:
void RGBtoNV21(byte[] yuv420sp, int width, int height) {
    try {
        final int frameSize = width * height;
        int yIndex = 0;
        int uvIndex = frameSize;

        int pixelStride = mImage.getPlanes()[0].getPixelStride();
        int rowStride = mImage.getPlanes()[0].getRowStride();
        int rowPadding = rowStride - pixelStride * width;
        ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();

        Bitmap bitmap = Bitmap.createBitmap(getResources().getDisplayMetrics(), width, height, Bitmap.Config.ARGB_8888);

        int A, R, G, B, Y, U, V;
        int offset = 0;
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                // Useful link: https://stackoverflow.com/questions/26673127/android-imagereader-acquirelatestimage-returns-invalid-jpg
                R = (buffer.get(offset) & 0xff) << 16;     // R
                G = (buffer.get(offset + 1) & 0xff) << 8;  // G
                B = (buffer.get(offset + 2) & 0xff);       // B
                A = (buffer.get(offset + 3) & 0xff) << 24; // A
                offset += pixelStride;

                int pixel = 0;
                pixel |= R; // R
                pixel |= G; // G
                pixel |= B; // B
                pixel |= A; // A
                bitmap.setPixel(j, i, pixel);

                // RGB to YUV conversion according to
                // https://en.wikipedia.org/wiki/YUV#Y.E2.80.B2UV444_to_RGB888_conversion
                // Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
                // U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
                // V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
                // Note: R, G and B here still carry the <<16 / <<8 shifts applied
                // above when they are fed into the formulas below.
                Y = (int) Math.round(R * .299000 + G * .587000 + B * .114000);
                U = (int) Math.round(R * -.168736 + G * -.331264 + B * .500000 + 128);
                V = (int) Math.round(R * .500000 + G * -.418688 + B * -.081312 + 128);

                // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor
                // of 2 meaning for every 4 Y pixels there are 1 V and 1 U.
                // Note the sampling is every other pixel AND every other scanline.
                yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                if (i % 2 == 0 && j % 2 == 0) {
                    yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                    yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                }
            }
            offset += rowPadding;
        }

        File file = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES).getAbsolutePath(), "/Awesomebitmap.png");
        FileOutputStream fos = new FileOutputStream(file);
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
    } catch (Exception e) {
        Timber.e(e, "Converting image to NV21 went wrong.");
    }
}
Now the image is no longer malformed, but the chroma is off.
The right side is the bitmap being created in that loop; the left side is the NV21 saved to an image. So the RGB pixels are processed correctly. Clearly the chroma is off, but the RGB to YUV conversion should be the same as the one depicted on Wikipedia. What could be wrong here?
Generally speaking, the point of ImageReader is to give you raw access to the pixels sent to the Surface with minimal overhead, so attempting to have it perform color conversions doesn't make sense.
For the Camera you get to pick one of two output formats (NV21 or YV12), so pick YV12. That's your raw YUV data. For screen capture the output will always be RGB, so you need to pick RGBA_8888 (format 0x1) for your ImageReader, rather than YUV_420_888 (format 0x23). If you need YUV for that, you will have to do the conversion yourself. The ImageReader gives you a series of Plane objects, not a byte[], so you will need to adapt to that.
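To illustrate the "adapt to the Plane objects" part, here is a minimal sketch of my own (assuming YUV_420_888 input and even dimensions, not taken from any of the linked answers) that repacks the three planes into a tightly packed NV21 byte[], honoring rowStride and pixelStride, which removes the padding band:

import android.media.Image;
import java.nio.ByteBuffer;

// Sketch: repack a YUV_420_888 Image into tightly packed NV21.
static byte[] yuv420888ToNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];

    // Copy luma row by row, skipping any padding at the end of each row.
    ByteBuffer yBuf = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    int pos = 0;
    for (int row = 0; row < height; row++) {
        yBuf.position(row * yRowStride);
        yBuf.get(nv21, pos, width);
        pos += width;
    }

    // Interleave chroma as V,U (NV21 order), one pair per 2x2 luma block.
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    int uRowStride = uPlane.getRowStride();
    int uPixelStride = uPlane.getPixelStride();
    int vRowStride = vPlane.getRowStride();
    int vPixelStride = vPlane.getPixelStride();
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            nv21[pos++] = vBuf.get(row * vRowStride + col * vPixelStride); // V first
            nv21[pos++] = uBuf.get(row * uRowStride + col * uPixelStride);
        }
    }
    return nv21;
}

The resulting array can then be handed to cv::cvtColor with CV_YUV2BGR_NV21 on a (height * 3/2) x width single-channel Mat, as in the C++ snippet above.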

Android: Preview frame, converted from YCbCr_420_SP (NV21) format to RGB, renders correct picture but in green

I post this question as I didn't find answers solving my issues in the following posts:
Converting preview frame to bitmap
Android decodeYUV420SP results in green images?
Displaying YUV Image in Android
I got my data and cameraResolution (after selecting the best preview size) from:
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    if (cameraResolution != null && thePreviewHandler != null) {
        Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                cameraResolution.y, data);
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
Later, for the conversion to RGB, I use:
int[] pixels = new int[yuvData.length];
// int size = dataWidth * dataHeight;
// int[] pixels = new int[size]; replacing yuvData.length by size is not working either
pixels = decodeYUV420SP(pixels, yuvData, dataWidth, dataHeight);
Bitmap bitmap = Bitmap.createBitmap(dataWidth, dataHeight, Bitmap.Config.RGB_565);
bitmap.setPixels(pixels, 0, dataWidth, 0, 0, dataWidth, dataHeight);
I tried two methods, one with RenderScript and one with decodeYUV420SP:
public Bitmap convertYUV420_NV21toRGB8888_RenderScript(byte[] data, int W, int H, CaptureActivityOCR fragment) {
    // https://stackoverflow.com/questions/20358803/how-to-use-scriptintrinsicyuvtorgb-converting-byte-yuv-to-byte-rgba
    RenderScript rs;
    ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic;
    rs = RenderScript.create(fragment.getActivity());
    yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs)); // Create an intrinsic for converting YUV to RGB.
    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(data.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT); // an Allocation will be populated with empty data when it is first created
    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(W).setY(H);
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT); // an Allocation will be populated with empty data when it is first created
    in.copyFrom(data); // Populate Allocations with data.
    yuvToRgbIntrinsic.setInput(in); // Set the input yuv allocation, must be U8(RenderScript).
    yuvToRgbIntrinsic.forEach(out); // Launch the appropriate kernels: convert the image to RGB.
    Bitmap bmpout = Bitmap.createBitmap(W, H, Bitmap.Config.ARGB_8888);
    out.copyTo(bmpout); // Copy data out of Allocation objects.
    return bmpout;
}
or
int[] decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    Log.e("camera", " decodeYUV420SP ");
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
    return rgb;
}
But still I don't know why I get a mostly good image, but in green. (I have another similar method in black and white which is working.)
In addition to the links above, I tried all the posts linked to this subject on SO, so if someone could help with another tip, please? I may be confused about the size to apply.
Somewhere after the onPreviewFrame method and before the conversion-to-RGB method, I use another method, with the instructions below, as I need to rotate the data received. I am wondering if this is not the origin of the issue:
byte[] rotatedData = new byte[data.length];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++)
        rotatedData[x * height + height - y - 1] = data[x + y * width];
}
int tmp = width;
width = height;
height = tmp;
Please help me?
I think your rotation routine is wrong.
It works for luma (Y), hence you get good results for the black and white picture, but not for chroma. If width and height are the dimensions of your picture, you do not rotate the chroma values at all.
So you will need to add a second loop for chroma, moving pairs of bytes (V and U), as sketched below.
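A minimal sketch of what that second loop could look like (my own illustration, assuming NV21 data with even dimensions, untested against the poster's setup):

// Rotate an NV21 frame 90 degrees clockwise, including the chroma plane.
static byte[] rotateNV21Cw90(byte[] input, int width, int height) {
    byte[] output = new byte[input.length];
    int frameSize = width * height;
    // Luma: the same mapping the poster already uses.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            output[x * height + height - y - 1] = input[y * width + x];
        }
    }
    // Chroma: move whole V,U pairs. The chroma plane holds one pair per
    // 2x2 block of luma pixels, with a row stride of `width` bytes.
    for (int y = 0; y < height / 2; y++) {
        for (int x = 0; x < width / 2; x++) {
            int inIndex = frameSize + y * width + x * 2;
            int outIndex = frameSize + x * height + (height / 2 - y - 1) * 2;
            output[outIndex] = input[inIndex];         // V
            output[outIndex + 1] = input[inIndex + 1]; // U
        }
    }
    return output;
}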
If you tried it without this rotation and you still have a greenish picture, the problem might also be in your decodeYUV420SP function.
There is not one universal formula for YUV to RGBA conversion; it must match the opposite one.
Look here http://www.codeproject.com/Articles/402391/RGB-to-YUV-conversion-with-different-chroma-sampli and here http://www.fourcc.org/fccyvrgb.php
This one works for me
B = 1.164(Y - 16) + 2.018(U - 128)
G = 1.164(Y - 16) - 0.813(V - 128) - 0.391(U - 128)
R = 1.164(Y - 16) + 1.596(V - 128)
It seems to be the same as the one you are using, only the float version, so you may try the others.
I suggest rotating the bitmap after the RenderScript conversion:
if (bmpout == null) {
    bmpout = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
}
rgbaAllocation.copyTo(bmpout);
if (matrixPostRotate90 == null) {
    matrixPostRotate90 = new Matrix();
    matrixPostRotate90.postRotate(90);
}
Bitmap rotatedBitmap = Bitmap.createBitmap(bmpout, 0, 0, w, h, matrixPostRotate90, true);

Android decodeYUV420SP results in green images?

Ok so my question is pretty much identical to this:
Converting preview frame to bitmap
However his answer is no good, and trying to use it doesn't solve my problem.
So what I'm trying to do at the moment is send each frame as a bitmap to a method that detects whether there are any faces. But first I need to create a bitmap, which means I have to use the decodeYUV420SP method, which doesn't seem to be working properly: all my images come out as a green and yellow tie-dye looking image. Here is my code:
This is from onPreviewFrame:
Parameters parameters = cam.getParameters();
Integer width = parameters.getPreviewSize().width;
Integer height = parameters.getPreviewSize().height;
Log.i("preview size: ", String.valueOf(width) + "x" + String.valueOf(height));
int[] mIntArray = new int[width * height];
// Decode Yuv data to integer array
decodeYUV420SP(mIntArray, data, width, height);
// Initialize the bitmap, with the replaced color
Bitmap bmp = Bitmap.createBitmap(mIntArray, width, height, Bitmap.Config.ARGB_8888);
saveImage(bmp);
This is the decodeYUV method:
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0)
                r = 0;
            else if (r > 262143)
                r = 262143;
            if (g < 0)
                g = 0;
            else if (g > 262143)
                g = 262143;
            if (b < 0)
                b = 0;
            else if (b > 262143)
                b = 262143;
            // rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) &
            // 0xff00) | ((b >> 10) & 0xff);
            // rgba, divide 2^10 ( >> 10)
            rgba[yp] = ((r << 14) & 0xff000000) | ((g << 6) & 0xff0000)
                    | ((b >> 2) | 0xff00);
        }
    }
}
and this is the method I'm calling to save the bitmaps to see what they look like:
private void saveImage(Bitmap bmp) {
    File myDir = new File("/sdcard/saved_images");
    myDir.mkdirs();
    Random generator = new Random();
    int n = 10000;
    n = generator.nextInt(n);
    String fname = "Image-" + n + ".jpg";
    File file = new File(myDir, fname);
    if (file.exists()) file.delete();
    try {
        FileOutputStream out = new FileOutputStream(file);
        bmp.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Here is a resulting image:
https://docs.google.com/drawings/d/1kyIvb4oHHInW_c71mjfFSVCxVopBgBWX3k1OR_nMgRA/edit
The key point here is that there are a (large) number of different YUV encodings, and an even larger list of names used for them. A lot of information about all the different variants (and their names) is given by fourcc, although 420SP isn't mentioned explicitly. Looking here, it looks like:
420P is the same as YV12. 'P' appears to stand for planar: there are three 'planes' of data one after the other: Y, U and then V. (Or, in YV21, which is also a 420P encoding, Y, V and then U.)
420SP is the same as NV12 (which is the same as NV21 but with U and V swapped around). 'SP' appears to stand for 'semi-planar', and so '420SP' could, technically, refer to either NV21 or NV12.
In this case, therefore, you are decoding NV12 (as opposed to NV21), and so the order of U and V is swapped around compared to the code you quote in your answer. In case it helps, I have provided some code here.
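Concretely, the only change needed in the decode loop is the order of the two interleaved reads; a sketch:

// NV21: V comes first in each interleaved pair
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;

// NV12: U comes first, so swap the two reads
u = (0xff & yuv420sp[uvp++]) - 128;
v = (0xff & yuv420sp[uvp++]) - 128;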
Ok so the problem was that the decodeYUV method, which I got from a different Stack Overflow post here:
Converting YUV->RGB(Image processing)->YUV during onPreviewFrame in android?
didn't quite work. But I replaced it with what I think must be the original decodeYUV method from here:
http://code.google.com/p/android/issues/detail?id=823

Android YUV Image format

In our application, we need to transfer video, so we are using the Camera class to capture the buffer and send it to the destination.
I have set the format to YV12 as a Camera parameter to receive the buffer.
For a 500x300 buffer, we receive a buffer of 230400 bytes.
I want to know, is this the expected buffer size?
I believe the size would be:
Y plane = width * height = 500 * 300 = 150000
U plane = (width / 2) * (height / 2) = 37500
V plane = (width / 2) * (height / 2) = 37500
Total = 225000
Can anyone explain to me how I can get the stride values of each component, if I need them? Is there any way to get them?
I can show you how you can get int rgb[] from this:
public int[] decodeYUV420SP(byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    int rgb[] = new int[width * height];
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0)
                r = 0;
            else if (r > 262143)
                r = 262143;
            if (g < 0)
                g = 0;
            else if (g > 262143)
                g = 262143;
            if (b < 0)
                b = 0;
            else if (b > 262143)
                b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
    return rgb;
}
I guess the Android documentation already explains it:
http://developer.android.com/reference/android/graphics/ImageFormat.html#YV12
I think this is simple. Check out the YuvImage class from Android. You can construct a YUV image from the byte[] data coming from the camera preview.
You can write it like this:
// width and height you get from the camera properties: the width and height of the camera preview
YuvImage image = new YuvImage(data, ImageFormat.NV21, width, height, null);
byte[] newData = image.getYuvData();
// or if you want the format: int format = image.getYuvFormat();
It's quite an old question, but I've struggled with the same issue for a few days. So I decided to write some comments to help others.
The YV12 described on the Android developer site (here) seems to be not a kind of YV12 but IMC1. The page says that both the y-stride and the uv-stride should be aligned to 16 bytes.
And also this page says that:
For YV12, the image buffer that is received is not necessarily tightly packed, as there may be padding at the end of each row of pixel data, as described in YV12.
Based on the above comments, I calculated it using the Python command line:
>>> w = 500
>>> h = 300
>>> y_stride = (500 + 15) / 16 * 16.0
>>> y_stride
512.0
>>> y_size = y_stride * h
>>> y_size
153600.0
>>> uv_stride = (500 / 2 + 15) / 16 * 16.0
>>> uv_stride
256.0
>>> u_size = uv_stride * h / 2
>>> v_size = uv_stride * h / 2
>>> size = y_size + u_size + v_size
>>> size
230400.0
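The same computation as a Java sketch, following the stride formulas documented for ImageFormat.YV12 (I'm assuming the documented 16-byte alignment is what the device applies):

// YV12 buffer size per the documented 16-byte stride alignment.
static int yv12BufferSize(int width, int height) {
    int yStride = (int) Math.ceil(width / 16.0) * 16;
    int uvStride = (int) Math.ceil((yStride / 2) / 16.0) * 16;
    int ySize = yStride * height;
    int uvSize = uvStride * height / 2;
    return ySize + uvSize * 2;  // 500x300 -> 230400, matching the observed buffer
}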
