Scaling YUV420 Image using libyuv produces weird output - android

Possibly a duplicate of this question, with major parts picked from there. I've tried whatever solutions were provided there; they don't work for me.
Background
I'm capturing an image in the YUV_420_888 image format returned from ARCore's frame.acquireCameraImage() method. Since I've set the camera configuration to 1920*1080 resolution, I need to scale it down to 224*224 to pass it to my tensorflow-lite implementation. I do that using the LibYuv library through the Android NDK.
Implementation
Prepare the image frames
//Figure out the source image dimensions
int y_size = srcWidth * srcHeight;
//Get dimensions of the desired output image
int out_size = destWidth * destHeight;
//Generate input frame
i420_input_frame.width = srcWidth;
i420_input_frame.height = srcHeight;
i420_input_frame.data = (uint8_t*) yuvArray;
i420_input_frame.y = i420_input_frame.data;
i420_input_frame.u = i420_input_frame.y + y_size;
i420_input_frame.v = i420_input_frame.u + (y_size / 4);
//Generate output frame
free(i420_output_frame.data);
i420_output_frame.width = destWidth;
i420_output_frame.height = destHeight;
i420_output_frame.data = new unsigned char[out_size * 3 / 2];
i420_output_frame.y = i420_output_frame.data;
i420_output_frame.u = i420_output_frame.y + out_size;
i420_output_frame.v = i420_output_frame.u + (out_size / 4);
I scale my image using Libyuv's I420Scale method
libyuv::FilterMode mode = libyuv::FilterModeEnum::kFilterBox;
jint result = libyuv::I420Scale(i420_input_frame.y, i420_input_frame.width,
i420_input_frame.u, i420_input_frame.width / 2,
i420_input_frame.v, i420_input_frame.width / 2,
i420_input_frame.width, i420_input_frame.height,
i420_output_frame.y, i420_output_frame.width,
i420_output_frame.u, i420_output_frame.width / 2,
i420_output_frame.v, i420_output_frame.width / 2,
i420_output_frame.width, i420_output_frame.height,
mode);
and return it to Java:
//Create a new byte array to return to the caller in Java
jbyteArray outputArray = env -> NewByteArray(out_size * 3 / 2);
env -> SetByteArrayRegion(outputArray, 0, out_size, (jbyte*) i420_output_frame.y);
env -> SetByteArrayRegion(outputArray, out_size, out_size / 4, (jbyte*) i420_output_frame.u);
env -> SetByteArrayRegion(outputArray, out_size + (out_size / 4), out_size / 4, (jbyte*) i420_output_frame.v);
Actual image :
What it looks like post scaling :
What it looks like if I create an Image from the i420_input_frame without scaling :
Since the scaling messes up the colors badly, TensorFlow fails to recognize objects properly. (It recognizes them properly in their sample application.) What am I doing wrong that messes up the colors so much?

Either I was doing something wrong (which I couldn't fix) or libyuv does not handle colors properly when dealing with YUV images from Android.
Refer to the bug posted on the libyuv tracker: https://bugs.chromium.org/p/libyuv/issues/detail?id=815&can=1&q=&sort=-id
They suggested I use the Android420ToI420() method first and then apply whatever transformations I need. I ended up using Android420ToI420() first, then scaling, then conversion to RGB. In the end, the output was slightly better than the cup image posted above, but the distorted colors were still present. I ended up using OpenCV to shrink the image and convert it to RGBA or RGB formats.
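For reference, a minimal sketch of the suggested libyuv path (Android420ToI420 followed by I420Scale); the function name, buffer names, and the idea of pulling the row strides and the chroma pixel stride straight from the Image planes are assumptions for illustration, not the exact code I used:
#include <cstdint>
#include <vector>
#include "libyuv.h"

// Sketch: repack an Android YUV_420_888 frame (planar or semi-planar chroma)
// into tightly packed I420, then scale it down to 224x224.
bool convertAndScaleTo224(const uint8_t* srcY, int yRowStride,
                          const uint8_t* srcU, const uint8_t* srcV,
                          int uvRowStride, int uvPixelStride,
                          int srcW, int srcH,
                          uint8_t* dst /* 224*224*3/2 bytes */) {
    std::vector<uint8_t> i420(srcW * srcH * 3 / 2);
    uint8_t* y = i420.data();
    uint8_t* u = y + srcW * srcH;
    uint8_t* v = u + (srcW / 2) * (srcH / 2);
    // Android420ToI420 takes the chroma pixel stride, so it handles both
    // planar (pixel stride 1) and interleaved (pixel stride 2) layouts.
    if (libyuv::Android420ToI420(srcY, yRowStride,
                                 srcU, uvRowStride,
                                 srcV, uvRowStride,
                                 uvPixelStride,
                                 y, srcW,
                                 u, srcW / 2,
                                 v, srcW / 2,
                                 srcW, srcH) != 0) {
        return false;
    }
    const int dw = 224, dh = 224;
    uint8_t* dy = dst;
    uint8_t* du = dy + dw * dh;
    uint8_t* dv = du + (dw / 2) * (dh / 2);
    return libyuv::I420Scale(y, srcW, u, srcW / 2, v, srcW / 2, srcW, srcH,
                             dy, dw, du, dw / 2, dv, dw / 2, dw, dh,
                             libyuv::kFilterBox) == 0;
}
The OpenCV code I eventually settled on instead: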
// The camera image received is in YUV YCbCr Format at preview dimensions
// so we will scale it down to 224x224 size using OpenCV
// Y plane (0) non-interleaved => stride == 1; U/V plane interleaved => stride == 2
// Refer : https://developer.android.com/reference/android/graphics/ImageFormat.html#YUV_420_888
val cameraPlaneY = cameraImage.planes[0].buffer
val cameraPlaneUV = cameraImage.planes[1].buffer
// Create a new Mat with OpenCV. One for each plane - Y and UV
val y_mat = Mat(cameraImage.height, cameraImage.width, CvType.CV_8UC1, cameraPlaneY)
val uv_mat = Mat(cameraImage.height / 2, cameraImage.width / 2, CvType.CV_8UC2, cameraPlaneUV)
var mat224 = Mat()
var cvFrameRGBA = Mat()
// Retrieve an RGBA frame from the produced YUV
Imgproc.cvtColorTwoPlane(y_mat, uv_mat, cvFrameRGBA, Imgproc.COLOR_YUV2BGRA_NV21)
//Then use this frame to retrieve all RGB channel data
//Iterate over all pixels and retrieve information of RGB channels
for(rows in 0 until cvFrameRGBA.rows())
for(cols in 0 until cvFrameRGBA.cols()) {
val imageData = cvFrameRGBA.get(rows, cols)
// Type of Mat is 24
// Channels is 4
// Depth is 0
rgbBytes.put(imageData[0].toByte())
rgbBytes.put(imageData[1].toByte())
rgbBytes.put(imageData[2].toByte())
}

The color problem is caused by working with a different YUV format than you think. The YUV format that the camera framework uses is YUV NV21. This format (NV21) is the standard picture format for Android camera preview: a YUV 4:2:0 image with 8-bit Y samples, followed by an interleaved V/U plane with 8-bit 2x2 subsampled chroma samples.
If your colors are inverted, it means that:
You are working with YUV NV12 (the U plane is V and the V plane is U).
One of your color planes is doing something weird.
To work properly with libyuv, I suggest you first convert your camera output to YUV I420 using a transformI420 method (a wrapper around libyuv::ConvertToI420), passing the source format as a parameter:
return libyuv::ConvertToI420(src, srcSize, //src data
dstY, dstWidth, //dst planes
dstU, dstWidth / 2,
dstV, dstWidth / 2,
cropLeft, cropTop, //crop start
srcWidth, srcHeight, //src dimensions
cropRight - cropLeft, cropBottom - cropTop, //dst dimensions
rotationMode,
libyuv::FOURCC_NV21); //libyuv::FOURCC_NV12
After doing this conversion you will be able to work properly with libyuv, using I420Scale, I420Rotate and so on. Your scale method should look like:
JNIEXPORT jint JNICALL
Java_com_aa_project_images_yuv_myJNIcl_scaleI420(JNIEnv *env, jclass type,
jobject srcBufferY,
jobject srcBufferU,
jobject srcBufferV,
jint srcWidth, jint srcHeight,
jobject dstBufferY,
jobject dstBufferU,
jobject dstBufferV,
jint dstWidth, jint dstHeight,
jint filterMode) {
const uint8_t *srcY = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferY));
const uint8_t *srcU = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferU));
const uint8_t *srcV = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferV));
uint8_t *dstY = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferY));
uint8_t *dstU = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferU));
uint8_t *dstV = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferV));
return libyuv::I420Scale(srcY, srcWidth,
srcU, srcWidth / 2,
srcV, srcWidth / 2,
srcWidth, srcHeight,
dstY, dstWidth,
dstU, dstWidth / 2,
dstV, dstWidth / 2,
dstWidth, dstHeight,
static_cast<libyuv::FilterMode>(filterMode));
}
If you want to convert this image to a JPEG after all the processing, you can use the I420ToNV21 method and then use Android's native conversion from YUV to JPEG. You can also use libjpeg-turbo, which is a complementary library for this kind of situation.
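A minimal sketch of that last step on the native side; the helper name and buffer layout are assumptions, but I420ToNV21 is the libyuv call that packs the planar chroma back into the interleaved VU plane that NV21 (and android.graphics.YuvImage.compressToJpeg) expects:
#include <cstdint>
#include "libyuv.h"

// Repack I420 (Y + planar U + planar V) into NV21 (Y + interleaved V/U).
void i420ToNv21(const uint8_t* srcY, const uint8_t* srcU, const uint8_t* srcV,
                int width, int height, uint8_t* dstNv21 /* width*height*3/2 */) {
    uint8_t* dstY  = dstNv21;
    uint8_t* dstVU = dstNv21 + width * height;
    libyuv::I420ToNV21(srcY, width,
                       srcU, width / 2,
                       srcV, width / 2,
                       dstY, width,
                       dstVU, width,   // the interleaved VU plane spans the full width
                       width, height);
}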

Related

Android NDK converting YUV420 AIMAGE to RGB

I'm working on streaming camera frames using the NDK Camera2 API. The format of my AImageReader is YUV420; however, I would like to convert it to RGB. I looked up some Java examples that do the same, but for some reason it doesn't work well and the image is distorted.
The frame resolution is 640x480.
Can someone tell me what I am doing wrong here?
void NativeCamera::previewImageCallback(void *context, AImageReader *reader)
{
Log::Debug(TAG) << "previewImageCallback" << Log::Endl;
AImage *previewImage = nullptr;
auto status = AImageReader_acquireLatestImage(reader, &previewImage);
if(status !=AMEDIA_OK)
{
return;
}
std::thread processor([=]()
{
uint8_t *dataY = nullptr;
uint8_t *dataU = nullptr;
uint8_t *dataV = nullptr;
int lenY = 0;
int lenU = 0;
int lenV = 0;
AImage_getPlaneData(previewImage, 0, (uint8_t**)&dataY, &lenY);
AImage_getPlaneData(previewImage, 1, (uint8_t**)&dataU, &lenU);
AImage_getPlaneData(previewImage, 2, (uint8_t**)&dataV, &lenV);
uchar buff[lenY+lenU+lenV];
memcpy(buff+0,dataY,lenY);
memcpy(buff+lenY,dataV,lenV);
memcpy(buff+lenY+lenV,dataU,lenU);
cv::Mat yuvMat(480+240,640,CV_8UC1,&buff);
cv::Mat rgbMat;
cv::cvtColor(yuvMat,rgbMat,cv::COLOR_YUV2RGB_NV21,3);
//colorBuffer defined elsewhere
memcpy((char*)colorBuffer,rgbMat.data,640*480*3);
That happens because the YUV420p format has 6 bytes per 4 pixels, but RGB has 12 bytes per 4 pixels. So you have to repack your array first; cv::cvtColor does not do it for you. Check here about YUV420p packing.
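A minimal sketch of that repacking, assuming the U and V planes really are planar (pixel stride 1) and have no row padding; the plane pointers and lengths are the ones obtained from AImage_getPlaneData above, and the helper name is illustrative:
#include <algorithm>
#include <cstdint>
#include <vector>
#include <opencv2/imgproc.hpp>

// Build a proper NV21 buffer (Y plane followed by interleaved V,U samples)
// before handing it to cv::cvtColor with COLOR_YUV2RGB_NV21.
cv::Mat planesToRgb(const uint8_t* dataY, int lenY,
                    const uint8_t* dataU, const uint8_t* dataV, int lenChroma,
                    int width, int height) {
    std::vector<uint8_t> nv21(width * height * 3 / 2);
    std::copy(dataY, dataY + lenY, nv21.begin());
    uint8_t* vu = nv21.data() + width * height;
    for (int i = 0; i < lenChroma; ++i) {  // interleave: V first, then U
        vu[2 * i]     = dataV[i];
        vu[2 * i + 1] = dataU[i];
    }
    cv::Mat yuvMat(height * 3 / 2, width, CV_8UC1, nv21.data());
    cv::Mat rgb;
    cv::cvtColor(yuvMat, rgb, cv::COLOR_YUV2RGB_NV21);
    return rgb;  // cvtColor allocates its own output, so it outlives nv21
}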

Problems when scaling a YUV image using libyuv library

I'm developing a camera app based on the Camera2 API and I have found several problems using libyuv.
I want to convert YUV_420_888 images retrieved from an ImageReader, but I'm having some problems with scaling in a reprocessable surface.
In essence: Images come out with tones of green instead of having the corresponding tones (I'm exporting the .yuv files and checking them using http://rawpixels.net/).
You can see an input example here:
And what I get after I perform scaling:
I think I am doing something wrong with the strides, or providing an invalid YUV format (maybe I have to transform the image to another format?). However, I can't figure out where the error is, since I don't know how to correlate the green color with the scaling algorithm.
This is the conversion code I am using; you can ignore the return NULL as there is further processing that is not related to the problem.
#include <jni.h>
#include <stdint.h>
#include <android/log.h>
#include <inc/libyuv/scale.h>
#include <inc/libyuv.h>
#include <stdio.h>
#define LOG_TAG "libyuv-jni"
#define unused(x) UNUSED_ ## x __attribute__((__unused__))
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
struct YuvFrame {
int width;
int height;
uint8_t *data;
uint8_t *y;
uint8_t *u;
uint8_t *v;
};
static struct YuvFrame i420_input_frame;
static struct YuvFrame i420_output_frame;
extern "C" {
JNIEXPORT jbyteArray JNICALL
Java_com_android_camera3_camera_hardware_session_output_photo_yuv_YuvJniInterface_scale420YuvByteArray(
JNIEnv *env, jclass /*clazz*/, jbyteArray yuvByteArray_, jint src_width, jint src_height,
jint out_width, jint out_height) {
jbyte *yuvByteArray = env->GetByteArrayElements(yuvByteArray_, NULL);
//Get input and output length
int input_size = env->GetArrayLength(yuvByteArray_);
int out_size = out_height * out_width;
//Generate input frame
i420_input_frame.width = src_width;
i420_input_frame.height = src_height;
i420_input_frame.data = (uint8_t *) yuvByteArray;
i420_input_frame.y = i420_input_frame.data;
i420_input_frame.u = i420_input_frame.y + input_size;
i420_input_frame.v = i420_input_frame.u + input_size / 4;
//Generate output frame
free(i420_output_frame.data);
i420_output_frame.width = out_width;
i420_output_frame.height = out_height;
i420_output_frame.data = new unsigned char[out_size * 3 / 2];
i420_output_frame.y = i420_output_frame.data;
i420_output_frame.u = i420_output_frame.y + out_size;
i420_output_frame.v = i420_output_frame.u + out_size / 4;
libyuv::FilterMode mode = libyuv::FilterModeEnum::kFilterBilinear;
int result = I420Scale(i420_input_frame.y, i420_input_frame.width,
i420_input_frame.u, i420_input_frame.width / 2,
i420_input_frame.v, i420_input_frame.width / 2,
i420_input_frame.width, i420_input_frame.height,
i420_output_frame.y, i420_output_frame.width,
i420_output_frame.u, i420_output_frame.width / 2,
i420_output_frame.v, i420_output_frame.width / 2,
i420_output_frame.width, i420_output_frame.height,
mode);
LOGD("Image result %d", result);
env->ReleaseByteArrayElements(yuvByteArray_, yuvByteArray, 0);
return NULL;
}
You can try this code, which uses y_size instead of the full size of your array.
...
//Get input and output length
int input_size = env->GetArrayLength(yuvByteArray_);
int y_size = src_width * src_height;
int out_size = out_height * out_width;
//Generate input frame
i420_input_frame.width = src_width;
i420_input_frame.height = src_height;
i420_input_frame.data = (uint8_t *) yuvByteArray;
i420_input_frame.y = i420_input_frame.data;
i420_input_frame.u = i420_input_frame.y + y_size;
i420_input_frame.v = i420_input_frame.u + y_size / 4;
//Generate output frame
free(i420_output_frame.data);
i420_output_frame.width = out_width;
i420_output_frame.height = out_height;
i420_output_frame.data = new unsigned char[out_size * 3 / 2];
i420_output_frame.y = i420_output_frame.data;
i420_output_frame.u = i420_output_frame.y + out_size;
i420_output_frame.v = i420_output_frame.u + out_size / 4;
...
Your code is probably based on https://github.com/begeekmyfriend/yasea/blob/master/library/src/main/libenc/jni/libenc.cc, and according to that code you have to use y_size.
You have an issue with the input size of the frame:
It should be:
int input_array_size = env->GetArrayLength(yuvByteArray_);
int input_size = input_array_size * 2 / 3; //This is the frame size
For example, if you have a frame that is 6x4:
Channel Y size: 6*4 = 24
1 2 3 4 5 6
_ _ _ _ _ _
|_|_|_|_|_|_| 1
|_|_|_|_|_|_| 2
|_|_|_|_|_|_| 3
|_|_|_|_|_|_| 4
Channel U size: 3*2 = 6
1 2 3
_ _ _ _ _ _
| | | |
|_ _|_ _|_ _| 1
| | | |
|_ _|_ _|_ _| 2
Channel V size: 3*2 = 6
1 2 3
_ _ _ _ _ _
| | | |
|_ _|_ _|_ _| 1
| | | |
|_ _|_ _|_ _| 2
Array size = 6*4 + 3*2 + 3*2 = 36
But the actual frame size = channel Y size = 36 * 2 / 3 = 24
gmetax is almost correct.
You are using the size of the entire array where you should be using the size of the Y component, which is src_width * src_height.
gmetax's answer is wrong in that he has put y_size in place of out_size when defining the output frame. The correct code snippet, I believe, would look like:
//Get input and output length
int input_size = env->GetArrayLength(yuvByteArray_);
int y_size = src_width * src_height;
int out_size = out_height * out_width;
//Generate input frame
i420_input_frame.width = src_width;
i420_input_frame.height = src_height;
i420_input_frame.data = (uint8_t *) yuvByteArray;
i420_input_frame.y = i420_input_frame.data;
i420_input_frame.u = i420_input_frame.y + y_size;
i420_input_frame.v = i420_input_frame.u + y_size / 4;
//Generate output frame
free(i420_output_frame.data);
i420_output_frame.width = out_width;
i420_output_frame.height = out_height;
i420_output_frame.data = new unsigned char[out_size * 3 / 2];
i420_output_frame.y = i420_output_frame.data;
i420_output_frame.u = i420_output_frame.y + out_size;
i420_output_frame.v = i420_output_frame.u + out_size / 4;
You are trying to scale your YUV422 image as if it were YUV420, so no wonder the colors are all messed up. First of all, you need to figure out the exact format of your YUV input buffer. From the documentation of YUV_422_888 it looks like it may represent planar as well as interleaved formats (if the pixel stride is not 1). From your results it looks like your source is planar and the processing of the Y plane is OK, but your error is in handling the U and V planes. To get the scaling right:
You have to figure out whether your U and V planes are interleaved or planar. Most likely they are planar as well.
Use ScalePlane from libyuv to scale U and V separately (see the sketch below). Perhaps if you step into I420Scale you will see that it calls ScalePlane for the individual planes. Do the same, but use the correct line sizes for your U and V planes (each is twice as large as what I420Scale expects).
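A minimal sketch of that per-plane scaling with ScalePlane; it assumes fully planar input with no row padding, so the line sizes are simply the plane widths (all names here are illustrative):
#include <cstdint>
#include "libyuv/scale.h"

// Scale Y, U and V independently. For interleaved chroma you would first have
// to de-interleave into separate planes (or use ConvertToI420 / Android420ToI420).
void scalePlanarYuv(const uint8_t* srcY, const uint8_t* srcU, const uint8_t* srcV,
                    int srcW, int srcH,
                    uint8_t* dstY, uint8_t* dstU, uint8_t* dstV,
                    int dstW, int dstH) {
    libyuv::ScalePlane(srcY, srcW,     srcW,     srcH,
                       dstY, dstW,     dstW,     dstH,     libyuv::kFilterBilinear);
    libyuv::ScalePlane(srcU, srcW / 2, srcW / 2, srcH / 2,
                       dstU, dstW / 2, dstW / 2, dstH / 2, libyuv::kFilterBilinear);
    libyuv::ScalePlane(srcV, srcW / 2, srcW / 2, srcH / 2,
                       dstV, dstW / 2, dstW / 2, dstH / 2, libyuv::kFilterBilinear);
}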
Some tips on how to figure out whether you have planar or interleaved U and V: try skipping the scaling and just saving the image, to ensure that you get a correct result (identical to the source). Then try to zero out the U plane or the V plane and see what you get. If U and V are planar and you memset the U plane to zero, you should see the entire picture change color. If they are interleaved, you'll get half of the picture changing and the other half staying the same. In the same way you can check your assumptions about the sizes, line sizes, and offsets of your planes. Once you are sure about your YUV format and layout, you can scale the individual planes if your input is planar; if you have interleaved input, you first need to de-interleave the planes and then scale them.
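A quick sketch of that diagnostic, assuming an I420-style buffer laid out as Y followed by U followed by V with no padding:
#include <cstdint>
#include <cstring>

// Zero out what you believe is the U plane, then render or save the frame.
// Truly planar chroma: the whole picture shifts color.
// Interleaved chroma (NV12/NV21-style): only part of the chroma samples are hit,
// so only part of the picture changes while the rest stays the same.
void zeroAssumedUPlane(uint8_t* frame, int width, int height) {
    const int y_size = width * height;
    memset(frame + y_size, 0, y_size / 4);
}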
Alternatively, you can use libswscale from ffmpeg/libav and try different formats to find the correct one, and then use libyuv.
The green images were caused by one of the planes being full of 0's, which means that one of the planes was empty. This happened because my input was YUV NV21 rather than YUV I420: the images from the Android camera framework come as NV21 YUVs.
We need to convert them to YUV I420 to work properly with libyuv. After that we can start using the multiple operations that the library offers, like rotate, scale, etc.
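A minimal sketch of that first conversion step, using libyuv's NV21ToI420 (the buffer names are assumptions; ConvertToI420 with FOURCC_NV21, shown earlier on this page, is an equivalent route):
#include <cstdint>
#include "libyuv.h"

// Unpack NV21 (Y plane + interleaved VU plane) into planar I420 so that
// I420Scale, I420Rotate and friends can be used afterwards.
void nv21ToI420(const uint8_t* nv21, int width, int height, uint8_t* i420) {
    const uint8_t* srcY  = nv21;
    const uint8_t* srcVU = nv21 + width * height;
    uint8_t* dstY = i420;
    uint8_t* dstU = dstY + width * height;
    uint8_t* dstV = dstU + (width / 2) * (height / 2);
    libyuv::NV21ToI420(srcY, width,
                       srcVU, width,
                       dstY, width,
                       dstU, width / 2,
                       dstV, width / 2,
                       width, height);
}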
Here is a snippet of how the scaling method looks:
JNIEXPORT jint JNICALL
Java_com_aa_project_images_yuv_myJNIcl_scaleI420(JNIEnv *env, jclass type,
jobject srcBufferY,
jobject srcBufferU,
jobject srcBufferV,
jint srcWidth, jint srcHeight,
jobject dstBufferY,
jobject dstBufferU,
jobject dstBufferV,
jint dstWidth, jint dstHeight,
jint filterMode) {
const uint8_t *srcY = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferY));
const uint8_t *srcU = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferU));
const uint8_t *srcV = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferV));
uint8_t *dstY = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferY));
uint8_t *dstU = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferU));
uint8_t *dstV = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferV));
return libyuv::I420Scale(srcY, srcWidth,
srcU, srcWidth / 2,
srcV, srcWidth / 2,
srcWidth, srcHeight,
dstY, dstWidth,
dstU, dstWidth / 2,
dstV, dstWidth / 2,
dstWidth, dstHeight,
static_cast<libyuv::FilterMode>(filterMode));
}

YUV_420_888 interpretation on Samsung Galaxy S7 (Camera2)

I wrote a conversion from YUV_420_888 to Bitmap, considering the following logic (as I understand it):
To summarize the approach: the kernel's coordinates x and y are congruent both with the x and y of the non-padded part of the Y plane (the 2D allocation) and with the x and y of the output Bitmap. The U and V planes, however, have a different structure than the Y plane, because they use 1 byte to cover 4 pixels, may have a PixelStride greater than one, and may also have padding that differs from that of the Y plane. Therefore, in order to access the U's and V's efficiently from the kernel, I put them into 1D allocations and created an index “uvIndex” that gives the position of the corresponding U and V within that 1D allocation, for given (x,y) coordinates in the (non-padded) Y plane (and, so, the output Bitmap).
In order to keep the rs-Kernel lean, I excluded the padding area in the yPlane by capping the x-range via LaunchOptions (this reflects the RowStride of the y-plane which thus can be ignored WITHIN the kernel). So we just need to consider the uvPixelStride and uvRowStride within the uvIndex, i.e. the index used in order to access to the u- and v-values.
This is my code:
Renderscript Kernel, named yuv420888.rs
#pragma version(1)
#pragma rs java_package_name(com.xxxyyy.testcamera2);
#pragma rs_fp_relaxed
int32_t width;
int32_t height;
uint picWidth, uvPixelStride, uvRowStride ;
rs_allocation ypsIn,uIn,vIn;
// The LaunchOptions ensure that the Kernel does not enter the padding zone of Y, so yRowStride can be ignored WITHIN the Kernel.
uchar4 __attribute__((kernel)) doConvert(uint32_t x, uint32_t y) {
// index for accessing the uIn's and vIn's
uint uvIndex= uvPixelStride * (x/2) + uvRowStride*(y/2);
// get the y,u,v values
uchar yps= rsGetElementAt_uchar(ypsIn, x, y);
uchar u= rsGetElementAt_uchar(uIn, uvIndex);
uchar v= rsGetElementAt_uchar(vIn, uvIndex);
// calc argb
int4 argb;
argb.r = yps + v * 1436 / 1024 - 179;
argb.g = yps -u * 46549 / 131072 + 44 -v * 93604 / 131072 + 91;
argb.b = yps +u * 1814 / 1024 - 227;
argb.a = 255;
uchar4 out = convert_uchar4(clamp(argb, 0, 255));
return out;
}
Java side:
private Bitmap YUV_420_888_toRGB(Image image, int width, int height){
// Get the three image planes
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
byte[] y = new byte[buffer.remaining()];
buffer.get(y);
buffer = planes[1].getBuffer();
byte[] u = new byte[buffer.remaining()];
buffer.get(u);
buffer = planes[2].getBuffer();
byte[] v = new byte[buffer.remaining()];
buffer.get(v);
// get the relevant RowStrides and PixelStrides
// (we know from documentation that PixelStride is 1 for y)
int yRowStride= planes[0].getRowStride();
int uvRowStride= planes[1].getRowStride(); // we know from documentation that RowStride is the same for u and v.
int uvPixelStride= planes[1].getPixelStride(); // we know from documentation that PixelStride is the same for u and v.
// rs creation just for demo. Create rs just once in onCreate and use it again.
RenderScript rs = RenderScript.create(this);
//RenderScript rs = MainActivity.rs;
ScriptC_yuv420888 mYuv420=new ScriptC_yuv420888 (rs);
// Y,U,V are defined as global allocations, the out-Allocation is the Bitmap.
// Note also that uAlloc and vAlloc are 1-dimensional while yAlloc is 2-dimensional.
Type.Builder typeUcharY = new Type.Builder(rs, Element.U8(rs));
//using safe height
typeUcharY.setX(yRowStride).setY(y.length / yRowStride);
Allocation yAlloc = Allocation.createTyped(rs, typeUcharY.create());
yAlloc.copyFrom(y);
mYuv420.set_ypsIn(yAlloc);
Type.Builder typeUcharUV = new Type.Builder(rs, Element.U8(rs));
// note that the size of the u's and v's are as follows:
// ( (width/2)*PixelStride + padding ) * (height/2)
// = (RowStride ) * (height/2)
// but I noted that on the S7 it is 1 less...
typeUcharUV.setX(u.length);
Allocation uAlloc = Allocation.createTyped(rs, typeUcharUV.create());
uAlloc.copyFrom(u);
mYuv420.set_uIn(uAlloc);
Allocation vAlloc = Allocation.createTyped(rs, typeUcharUV.create());
vAlloc.copyFrom(v);
mYuv420.set_vIn(vAlloc);
// handover parameters
mYuv420.set_picWidth(width);
mYuv420.set_uvRowStride (uvRowStride);
mYuv420.set_uvPixelStride (uvPixelStride);
Bitmap outBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Allocation outAlloc = Allocation.createFromBitmap(rs, outBitmap, Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
Script.LaunchOptions lo = new Script.LaunchOptions();
lo.setX(0, width); // by this we ignore the y’s padding zone, i.e. the right side of x between width and yRowStride
//using safe height
lo.setY(0, y.length / yRowStride);
mYuv420.forEach_doConvert(outAlloc,lo);
outAlloc.copyTo(outBitmap);
return outBitmap;
}
Testing on a Nexus 7 (API 22) this returns nice color Bitmaps. This device, however, has trivial pixel strides (=1) and no padding (i.e. rowstride=width). Testing on the brand-new Samsung S7 (API 23) I get pictures whose colors are not correct - except the green ones. But the picture does not show a general bias towards green; it just seems that non-green colors are not reproduced correctly. Note that the S7 applies a u/v pixel stride of 2, and no padding.
Since the most crucial line of code is within the RS code (the access of the u/v planes, uint uvIndex= (...)), I think the problem could be there, probably with incorrect consideration of the pixel strides. Does anyone see the solution? Thanks.
UPDATE: I checked everything, and I am pretty sure that the code regarding the access of y,u,v is correct. So the problem must be with the u and v values themselves. Non-green colors have a purple tilt, and looking at the u,v values they seem to be in a rather narrow range of about 110-150. Is it really possible that we need to cope with device-specific YUV -> RGB conversions...?! Did I miss anything?
UPDATE 2: have corrected code, it works now, thanks to Eddy's Feedback.
Look at
floor((float) uvPixelStride*(x)/2)
which calculates your U,V row offset (uv_row_offset) from the Y x-coordinate.
if uvPixelStride = 2, then as x increases:
x = 0, uv_row_offset = 0
x = 1, uv_row_offset = 1
x = 2, uv_row_offset = 2
x = 3, uv_row_offset = 3
and this is incorrect. There's no valid U/V pixel value at uv_row_offset = 1 or 3, since uvPixelStride = 2.
You want
uvPixelStride * floor(x/2)
(assuming you don't trust yourself to remember the critical round-down behavior of integer divide, if you do then):
uvPixelStride * (x/2)
should be enough
With that, your mapping becomes:
x = 0, uv_row_offset = 0
x = 1, uv_row_offset = 0
x = 2, uv_row_offset = 2
x = 3, uv_row_offset = 2
See if that fixes the color errors. In practice, the incorrect addressing here would mean every other color sample would be from the wrong color plane, since it's likely that the underlying YUV data is semiplanar (so the U plane starts at V plane + 1 byte, with the two planes interleaved)
For people who encounter the error
android.support.v8.renderscript.RSIllegalArgumentException: Array too small for allocation type
use buffer.capacity() instead of buffer.remaining(), and if you have already performed some operations on the image, you'll need to call the rewind() method on the buffer.
Furthermore for anyone else getting
android.support.v8.renderscript.RSIllegalArgumentException: Array too small for allocation type
I fixed it by changing yAlloc.copyFrom(y); to yAlloc.copy1DRangeFrom(0, y.length, y);
Posting a full solution to convert YUV -> BGR (it can be adapted for other formats too) and also rotate the image to upright using RenderScript. An Allocation is used as input and a byte array is used as output. It was tested on Android 8+, including Samsung devices.
Java
/**
* Renderscript-based process to convert YUV_420_888 to BGR_888 and rotation to upright.
*/
public class ImageProcessor {
protected final String TAG = this.getClass().getSimpleName();
private Allocation mInputAllocation;
private Allocation mOutAllocLand;
private Allocation mOutAllocPort;
private Handler mProcessingHandler;
private ScriptC_yuv_bgr mConvertScript;
private byte[] frameBGR;
public ProcessingTask mTask;
private ImageListener listener;
private Supplier<Integer> rotation;
public ImageProcessor(RenderScript rs, Size dimensions, ImageListener listener, Supplier<Integer> rotation) {
this.listener = listener;
this.rotation = rotation;
int w = dimensions.getWidth();
int h = dimensions.getHeight();
Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.YUV(rs));
yuvTypeBuilder.setX(w);
yuvTypeBuilder.setY(h);
yuvTypeBuilder.setYuvFormat(ImageFormat.YUV_420_888);
mInputAllocation = Allocation.createTyped(rs, yuvTypeBuilder.create(),
Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
//keep 2 allocations to handle different image rotations
mOutAllocLand = createOutBGRAlloc(rs, w, h);
mOutAllocPort = createOutBGRAlloc(rs, h, w);
frameBGR = new byte[w*h*3];
HandlerThread processingThread = new HandlerThread(this.getClass().getSimpleName());
processingThread.start();
mProcessingHandler = new Handler(processingThread.getLooper());
mConvertScript = new ScriptC_yuv_bgr(rs);
mConvertScript.set_inWidth(w);
mConvertScript.set_inHeight(h);
mTask = new ProcessingTask(mInputAllocation);
}
private Allocation createOutBGRAlloc(RenderScript rs, int width, int height) {
//Stored as Vec4, it's impossible to store as Vec3, buffer size will be for Vec4 anyway
//using RGB_888 as alternative for BGR_888, can be just U8_3 type
Type.Builder rgbTypeBuilderPort = new Type.Builder(rs, Element.RGB_888(rs));
rgbTypeBuilderPort.setX(width);
rgbTypeBuilderPort.setY(height);
Allocation allocation = Allocation.createTyped(
rs, rgbTypeBuilderPort.create(), Allocation.USAGE_SCRIPT
);
//Use auto-padding to be able to copy to x*h*3 bytes array
allocation.setAutoPadding(true);
return allocation;
}
public Surface getInputSurface() {
return mInputAllocation.getSurface();
}
/**
* Simple class to keep track of incoming frame count,
* and to process the newest one in the processing thread
*/
class ProcessingTask implements Runnable, Allocation.OnBufferAvailableListener {
private int mPendingFrames = 0;
private Allocation mInputAllocation;
public ProcessingTask(Allocation input) {
mInputAllocation = input;
mInputAllocation.setOnBufferAvailableListener(this);
}
@Override
public void onBufferAvailable(Allocation a) {
synchronized(this) {
mPendingFrames++;
mProcessingHandler.post(this);
}
}
@Override
public void run() {
// Find out how many frames have arrived
int pendingFrames;
synchronized(this) {
pendingFrames = mPendingFrames;
mPendingFrames = 0;
// Discard extra messages in case processing is slower than frame rate
mProcessingHandler.removeCallbacks(this);
}
// Get to newest input
for (int i = 0; i < pendingFrames; i++) {
mInputAllocation.ioReceive();
}
int rot = rotation.get();
mConvertScript.set_currentYUVFrame(mInputAllocation);
mConvertScript.set_rotation(rot);
Allocation allocOut = rot==90 || rot== 270 ? mOutAllocPort : mOutAllocLand;
// Run processing
// ain allocation isn't really used, global frame param is used to get data from
mConvertScript.forEach_yuv_bgr(allocOut);
//Save to byte array, BGR 24bit
allocOut.copyTo(frameBGR);
int w = allocOut.getType().getX();
int h = allocOut.getType().getY();
if (listener != null) {
listener.onImageAvailable(frameBGR, w, h);
}
}
}
public interface ImageListener {
/**
* Called when there is available image, image is in upright position.
*
* @param bgr BGR 24bit bytes
* @param width image width
* @param height image height
*/
void onImageAvailable(byte[] bgr, int width, int height);
}
}
RS
#pragma version(1)
#pragma rs java_package_name(com.affectiva.camera)
#pragma rs_fp_relaxed
//Script converts YUV to BGR (uchar3)
//current YUV frame to read pixels from
rs_allocation currentYUVFrame;
//input image rotation: 0,90,180,270 clockwise
uint32_t rotation;
uint32_t inWidth;
uint32_t inHeight;
//method returns uchar3 BGR which will be set to x,y in output allocation
uchar3 __attribute__((kernel)) yuv_bgr(uint32_t x, uint32_t y) {
// Read in pixel values from latest frame - YUV color space
uchar3 inPixel;
uint32_t xRot = x;
uint32_t yRot = y;
//Do not rotate if 0
if (rotation==90) {
//rotate 270 clockwise
xRot = y;
yRot = inHeight - 1 - x;
} else if (rotation==180) {
xRot = inWidth - 1 - x;
yRot = inHeight - 1 - y;
} else if (rotation==270) {
//rotate 90 clockwise
xRot = inWidth - 1 - y;
yRot = x;
}
inPixel.r = rsGetElementAtYuv_uchar_Y(currentYUVFrame, xRot, yRot);
inPixel.g = rsGetElementAtYuv_uchar_U(currentYUVFrame, xRot, yRot);
inPixel.b = rsGetElementAtYuv_uchar_V(currentYUVFrame, xRot, yRot);
// Convert YUV to RGB, JFIF transform with fixed-point math
// R = Y + 1.402 * (V - 128)
// G = Y - 0.34414 * (U - 128) - 0.71414 * (V - 128)
// B = Y + 1.772 * (U - 128)
int3 bgr;
//get red pixel and assign to b
bgr.b = inPixel.r +
inPixel.b * 1436 / 1024 - 179;
bgr.g = inPixel.r -
inPixel.g * 46549 / 131072 + 44 -
inPixel.b * 93604 / 131072 + 91;
//get blue pixel and assign to red
bgr.r = inPixel.r +
inPixel.g * 1814 / 1024 - 227;
// Write out
return convert_uchar3(clamp(bgr, 0, 255));
}
On a Samsung Galaxy Tab 5 (Tablet), android version 5.1.1 (22), with alleged YUV_420_888 format, the following renderscript math works well and produces correct colors:
uchar yValue = rsGetElementAt_uchar(gCurrentFrame, x + y * yRowStride);
uchar vValue = rsGetElementAt_uchar(gCurrentFrame, ( (x/2) + (y/4) * yRowStride ) + (xSize * ySize) );
uchar uValue = rsGetElementAt_uchar(gCurrentFrame, ( (x/2) + (y/4) * yRowStride ) + (xSize * ySize) + (xSize * ySize) / 4);
I do not understand why the horizontal value (i.e., y) is scaled by a factor of four instead of two, but it works well. I also needed to avoid use of rsGetElementAtYuv_uchar_Y|U|V. I believe the associated allocation stride value is set to zero instead of something proper. Use of rsGetElementAt_uchar() is a reasonable work-around.
On a Samsung Galaxy S5 (smartphone), Android version 5.0 (21), with alleged YUV_420_888 format, I cannot recover the u and v values; they come through as all zeros. This results in a green-looking image. Luminance is OK, but the image is vertically flipped.
This code requires the use of the RenderScript compatibility library (android.support.v8.renderscript.*).
In order to get the compatibility library to work with Android API 23, I updated to gradle-plugin 2.1.0 and Build-Tools 23.0.3 as per Miao Wang's answer at How to create Renderscript scripts on Android Studio, and make them run?
If you follow his answer and the error "Gradle version 2.10 is required" appears, do NOT change
classpath 'com.android.tools.build:gradle:2.1.0'
Instead, update the distributionUrl field of the Project\gradle\wrapper\gradle-wrapper.properties file to
distributionUrl=https\://services.gradle.org/distributions/gradle-2.10-all.zip
and change File > Settings > Builds,Execution,Deployment > Build Tools > Gradle >Gradle to Use default gradle wrapper as per "Gradle Version 2.10 is required." Error.
Re: RSIllegalArgumentException
In my case this happened because buffer.remaining() was not a multiple of the stride: the length of the last line was less than the stride (i.e. it only extended to where the actual data was).
An FYI in case someone else gets this: I was also getting "android.support.v8.renderscript.RSIllegalArgumentException: Array too small for allocation type" when trying out the code. In my case it turned out that when allocating the buffer for Y, I had to rewind the buffer because it was left at the wrong end and wasn't copying the data. Calling buffer.rewind() before allocating the new byte array makes it work fine now.

Rendering issue on Project Tango using OpenCV image processing

I came across a problem rendering the camera image after doing some processing on its YUV buffer.
I am using the example video-overlay-jni-example and in the method OnFrameAvailable I am creating a new frame buffer using the cv::Mat...
Here is how I create a new frame buffer:
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
After process, I copy the frame.data to the yuv_temp_buffer_ in order to render it on the texture: memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
And this works fine...
The problem starts when I try to execute an OpenCV method findChessboardCorners... using the frame that I've created before.
The method findChessboardCorners takes about 90 ms to execute (11 fps); however, it seems to be rendering at a slower rate. (It appears to render at ~0.5 fps on the screen.)
Here is the code of the OnFrameAvailable method:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
if (yuv_drawable_ == NULL){
return;
}
if (yuv_drawable_->GetTextureId() == 0) {
LOGE("AugmentedRealityApp::yuv texture id not valid");
return;
}
if (buffer->format != TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP) {
LOGE("AugmentedRealityApp::yuv texture format is not supported by this app");
return;
}
// The memory needs to be allocated after we get the first frame because we
// need to know the size of the image.
if (!is_yuv_texture_available_) {
yuv_width_ = buffer->width;
yuv_height_ = buffer->height;
uv_buffer_offset_ = yuv_width_ * yuv_height_;
yuv_size_ = yuv_width_ * yuv_height_ + yuv_width_ * yuv_height_ / 2;
// Reserve and resize the buffer size for RGB and YUV data.
yuv_buffer_.resize(yuv_size_);
yuv_temp_buffer_.resize(yuv_size_);
rgb_buffer_.resize(yuv_width_ * yuv_height_ * 3);
AllocateTexture(yuv_drawable_->GetTextureId(), yuv_width_, yuv_height_);
is_yuv_texture_available_ = true;
}
std::lock_guard<std::mutex> lock(yuv_buffer_mutex_);
memcpy(&yuv_temp_buffer_[0], buffer->data, yuv_size_);
///
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
if (!stam.isCalibrated()) {
Profiler profiler;
profiler.startSampling();
stam.initFromChessboard(frame, cv::Size(9, 6), 100);
profiler.endSampling();
profiler.print("initFromChessboard", -1);
}
///
memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
swap_buffer_signal_ = true;
}
Here is the code of the method initFromChessBoard:
bool STAM::initFromChessboard(const cv::Mat& image, const cv::Size& chessBoardSize, int squareSize)
{
cv::Mat rvec = cv::Mat(cv::Size(3, 1), CV_64F);
cv::Mat tvec = cv::Mat(cv::Size(3, 1), CV_64F);
std::vector<cv::Point2d> imagePoints, imageBoardPoints;
std::vector<cv::Point3d> boardPoints;
for (int i = 0; i < chessBoardSize.height; i++)
{
for (int j = 0; j < chessBoardSize.width; j++)
{
boardPoints.push_back(cv::Point3d(j*squareSize, i*squareSize, 0.0));
}
}
//getting only the Y channel (many of the functions like face detect and align only needs the grayscale image)
cv::Mat gray(image.rows, image.cols, CV_8UC1);
gray.data = image.data;
bool found = findChessboardCorners(gray, chessBoardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);
#ifdef WINDOWS_VS
printf("Number of chessboard points: %d\n", imagePoints.size());
#elif ANDROID
LOGE("Number of chessboard points: %d", imagePoints.size());
#endif
for (int i = 0; i < imagePoints.size(); i++) {
cv::circle(image, imagePoints[i], 6, cv::Scalar(149, 43, 0), -1);
}
}
Is anyone having the same problem after processing something in the YUV buffer and rendering it to the texture?
I did a test on another device (not the Project Tango) using the Camera2 API, and the rendering on the screen appears to run at the same rate as the OpenCV processing itself.
I appreciate any help.
I had a similar problem. My app slowed down after using the copied YUV buffer and doing some image processing with OpenCV. I would recommend using the tango_support library to access the YUV image buffer, by doing the following:
In your config function:
int AugmentedRealityApp::TangoSetupConfig() {
TangoSupport_createImageBufferManager(TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP, 1280, 720, &yuv_manager_);
}
In your callback function:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
TangoSupport_updateImageBuffer(yuv_manager_, buffer);
}
In your render thread:
void AugmentedRealityApp::Render() {
TangoImageBuffer* yuv = new TangoImageBuffer();
TangoSupport_getLatestImageBuffer(yuv_manager_, &yuv);
cv::Mat yuv_frame, rgb_img, gray_img;
yuv_frame.create(720*3/2, 1280, CV_8UC1);
memcpy(yuv_frame.data, yuv->data, 720*3/2*1280); // yuv image
cv::cvtColor(yuv_frame, rgb_img, CV_YUV2RGB_NV21); // rgb image
cvtColor(rgb_img, gray_img, CV_RGB2GRAY); // gray image
}
You can share the yuv_manager_ with other objects/threads so you can access the YUV image buffer wherever you want.

how to convert from yv12 to yuv420p

On camera preview frames I get data in the YV12 format on the Android side. I need to convert it to YUV420P on the JNI side. How can I do that? As I have read from many sources, in the YUV420P format the Y samples appear first, followed by the U samples, which are in turn followed by the V samples. The YV12 format is the same as YUV420P except that the U and V samples appear in reverse order, meaning the Y samples are followed by V and then U samples. Keeping that in mind, I have used the following swapping code to produce YUV420P data from the YV12 data before encoding.
avpicture_fill((AVPicture*)outframe, (uint8_t*)camData, codecCtx->pix_fmt, codecCtx->width, codecCtx->height);
uint8_t * buf_store = outframe->data[1];
outframe->data[1]=outframe->data[2];
outframe->data[2]=buf_store;
But it does not seem to be working. How should I adjust my code?
Don't use avpicture_fill. I have implemented it in my application like this and it's working fine:
picture->linesize[0] = frameWidth;
picture->linesize[1] = frameWidth/2;
picture->linesize[2] = frameWidth/2;
picture->data[0] = camData;
picture->data[1] = camData + picture->linesize[0]*frameHeight+picture->linesize[1]*frameHeight/2;
picture->data[2] = camData + picture->linesize[0]*frameHeight;
Maybe you need this method:
//yv12 to yuv420p
public static void swapYV12toI420(final byte[] yv12bytes, final byte[] i420bytes, int width, int height) {
int size = width * height;
int part = size / 4;
System.arraycopy(yv12bytes, 0, i420bytes, 0, size);
System.arraycopy(yv12bytes, size + part, i420bytes, size, part);
System.arraycopy(yv12bytes, size, i420bytes, size + part, part);
}
For every YV12 data package you receive, swap it to I420.
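Since the question asks for the conversion on the JNI side, here is the same swap as a minimal C++ sketch (the function and buffer names are illustrative):
#include <cstdint>
#include <cstring>

// YV12 layout: [Y (w*h)] [V (w*h/4)] [U (w*h/4)]
// I420 layout: [Y (w*h)] [U (w*h/4)] [V (w*h/4)]
void yv12ToI420(const uint8_t* yv12, uint8_t* i420, int width, int height) {
    const int y_size = width * height;
    const int c_size = y_size / 4;
    memcpy(i420, yv12, y_size);                              // Y is unchanged
    memcpy(i420 + y_size, yv12 + y_size + c_size, c_size);   // U comes from YV12's trailing plane
    memcpy(i420 + y_size + c_size, yv12 + y_size, c_size);   // V comes from YV12's middle plane
}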
