I need to crop every frame delivered to the camera's onPreviewFrame callback. I want to crop it by operating directly on the byte array, without converting it to a bitmap, because this runs per frame and must stay fast and cheap. After the crop operation I will use the output byte array directly, so I don't need any bitmap conversion.
int frameWidth;
int frameHeight;
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
// Crop the data array directly.
cropData(data, frameWidth, frameHeight, startCropX, startCropY, outputWidth, outputHeight);
}
private static byte[] cropNV21(byte[] img, int imgWidth, @NonNull Rect cropRect) {
// 1.5 means 1.0 for Y plus 0.25 each for U and V
int croppedImgSize = (int)Math.floor(cropRect.width() * cropRect.height() * 1.5);
byte[] croppedImg = new byte[croppedImgSize];
// Start points of UV plane
int imgYPlaneSize = (int)Math.ceil(img.length / 1.5);
int croppedImgYPlaneSize = cropRect.width() * cropRect.height();
// Y plane copy
for (int w = 0; w < cropRect.height(); w++) {
int imgPos = (cropRect.top + w) * imgWidth + cropRect.left;
int croppedImgPos = w * cropRect.width();
System.arraycopy(img, imgPos, croppedImg, croppedImgPos, cropRect.width());
}
// UV plane copy
// U and V are subsampled 2x2, so each interleaved V/U row is the same width as a Y row
// (half V, half U), and there are height/2 such rows
for (int w = 0; w < (int)Math.floor(cropRect.height() / 2.0); w++) {
int imgPos = imgYPlaneSize + (cropRect.top / 2 + w) * imgWidth + cropRect.left;
int croppedImgPos = croppedImgYPlaneSize + (w * cropRect.width());
System.arraycopy(img, imgPos, croppedImg, croppedImgPos, cropRect.width());
}
return croppedImg;
}
NV21 structure (diagram omitted): a full-resolution Y plane followed by a half-height plane of interleaved V/U samples.
P.S.: Also, if the starting position of your crop rectangle is odd, U and V will be swapped. To handle this case, I snap the origin to even coordinates:
if (cropRect.left % 2 == 1) {
cropRect.left -= 1;
}
if (cropRect.top % 2 == 1) {
cropRect.top -= 1;
}
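Putting it together, here is a minimal usage sketch for the original onPreviewFrame callback. It assumes the preview format is NV21, that frameWidth/frameHeight come from getPreviewSize(), and that startCropX, startCropY, outputWidth and outputHeight are the fields from the question (with even output dimensions); the crop origin is snapped to even coordinates as described above:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // Snap the crop origin to even coordinates so the interleaved V/U pairs stay aligned.
    int left = startCropX & ~1;
    int top = startCropY & ~1;
    Rect cropRect = new Rect(left, top, left + outputWidth, top + outputHeight);
    byte[] cropped = cropNV21(data, frameWidth, cropRect);
    // 'cropped' is an NV21 buffer of outputWidth x outputHeight; use it directly.
}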
I have to provide a YUV (NV21) byte array to a recognition solution and, to reduce processing time, I'd like to downscale the preview frame.
From solutions gathered here and there on SO, I managed to convert at a 1:1 ratio and I get recognition hits. But if I scale the intermediate bitmap down, I get no result, even if I scale it down to just 95%.
Any help would be appreciated.
My current approach: every 400 ms or so I take the preview frame and convert it asynchronously. I convert it to ARGB using RenderScript, scale it down, and then convert it back to NV21.
// Camera callback
@Override
public void onPreviewFrame(byte[] frame, Camera camera) {
if (camera != null) {
// Debounce
if ((System.currentTimeMillis() - mStart) > 400) {
mStart = System.currentTimeMillis();
Camera.Size size = camera.getParameters().getPreviewSize();
new FrameScaleAsyncTask(frame, size.width, size.height).execute();
}
}
if (mCamera != null) {
mCamera.addCallbackBuffer(mBuffer);
}
}
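For context, a minimal sketch of the buffer setup this callback assumes (mBuffer and mCamera are the fields from the snippet; the callback has to be registered with setPreviewCallbackWithBuffer for addCallbackBuffer to have any effect):
// One reusable buffer sized for an NV21 preview frame.
Camera.Size previewSize = mCamera.getParameters().getPreviewSize();
int bufferSize = previewSize.width * previewSize.height
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
mBuffer = new byte[bufferSize];
mCamera.addCallbackBuffer(mBuffer);
mCamera.setPreviewCallbackWithBuffer(this);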
// In FrameScaleAsyncTask
@Override
protected Void doInBackground(Void... params) {
// Create YUV type for in-allocation
Type yuvType = new Type.Builder(mRenderScript, Element.U8(mRenderScript))
.setX(mFrame.length)
.create();
mAllocationIn = Allocation.createTyped(mRenderScript, yuvType, Allocation.USAGE_SCRIPT);
// Create ARGB-8888 type for out-allocation
Type rgbType = new Type.Builder(mRenderScript, Element.RGBA_8888(mRenderScript))
.setX(mWidth)
.setY(mHeight)
.create();
mAllocationOut = Allocation.createTyped(mRenderScript, rgbType, Allocation.USAGE_SCRIPT);
// Copy frame data into in-allocation
mAllocationIn.copyFrom(mFrame);
// Set script input and fire !
mScript.setInput(mAllocationIn);
mScript.forEach(mAllocationOut);
// Create a bitmap of camera preview size (see camera setup) and copy out-allocation to it
Bitmap bitmap = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
mAllocationOut.copyTo(bitmap);
// Scale bitmap down
double scaleRatio = 1;
Bitmap scaledBitmap = Bitmap.createScaledBitmap(
bitmap,
(int) (bitmap.getWidth() * scaleRatio),
(int) (bitmap.getHeight() * scaleRatio),
false
);
bitmap.recycle();
int size = scaledBitmap.getRowBytes() * scaledBitmap.getHeight();
int scaledWidth = scaledBitmap.getWidth();
int scaledHeight = scaledBitmap.getHeight();
int[] pixels = new int[scaledWidth * scaledHeight];
// Put bitmap pixels into an int array
scaledBitmap.getPixels(pixels, 0, scaledWidth, 0, 0, scaledWidth, scaledHeight);
mFrame = new byte[pixels.length * 3 / 2];
ImageHelper.encodeYUV420SPAlt(mFrame, pixels, scaledWidth, scaledHeight);
return null;
}
The RGB to YUV algorithm (see : this answer ):
public static void encodeYUV420SPAlt(byte[] yuv420sp, int[] argb, int width, int height) {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int a, R, G, B, Y, U, V;
int index = 0;
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
R = (argb[index] & 0xff0000) >> 16;
G = (argb[index] & 0xff00) >> 8;
B = (argb[index] & 0xff) >> 0;
// well known RGB to YUV algorithm
Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
// NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
// meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
// pixel AND every other scanline.
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (j % 2 == 0 && index % 2 == 0) {
yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
}
index++;
}
}
}
I finally ended up resizing my image (as an OpenCV Mat) directly in C++. This was much easier and faster.
#include <opencv2/imgproc.hpp>

// Resize the input Mat to the corrected dimensions.
cv::Size size(correctedWidth, correctedHeight);
cv::Mat dst;
cv::resize(image, dst, size);
I have a YUV_420_888 image I got from the camera. I want to crop a rectangle out of the grayscale (Y) plane of this image to feed to an image processing algorithm. This is what I have so far:
public static byte[] YUV_420_888toCroppedY(Image image, Rect cropRect) {
byte[] yData;
ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
int ySize = yBuffer.remaining();
yData = new byte[ySize];
yBuffer.get(yData, 0, ySize);
if (cropRect != null) {
int cropArea = (cropRect.right - cropRect.left) * (cropRect.bottom - cropRect.top);
byte[] croppedY = new byte[cropArea];
int cropIndex = 0;
// from the top of the rectangle, to the bottom, sequentially add rows to the output array, croppedY
for (int y = cropRect.top; y < cropRect.top + cropRect.height(); y++) {
// (2x+W) * y + x
int rowStart = (2*cropRect.left + cropRect.width()) * y + cropRect.left;
// (2x+W) * y + x + W
int rowEnd = (2*cropRect.left + cropRect.width()) * y + cropRect.left + cropRect.width();
for (int x = rowStart; x < rowEnd; x++) {
croppedY[cropIndex] = yData[x];
cropIndex++;
}
}
return croppedY;
}
return yData;
}
This function runs without error but the image I get out of it is garbage - it looks something like this:
I'm not sure how to solve this problem or what I'm doing wrong.
Your rowStart/rowEnd calculations are wrong.
You need to calculate the row start location based on the source image dimensions, not on your crop window dimensions. And I'm not sure where you get the factor of 2 from; there's 1 byte per pixel in the Y channel of the image.
They should be roughly:
int yRowStride = image.getPlanes()[0].getRowStride();
..
int rowStart = y * yRowStride + cropRect.left;
int rowEnd = y * yRowStride + cropRect.left + cropRect.width();
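A minimal sketch of the corrected Y-plane crop, assuming the Y plane's pixel stride is 1 (typical for YUV_420_888) and using the plane's row stride for the source row offset:
public static byte[] cropY(Image image, Rect cropRect) {
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuffer = yPlane.getBuffer();
    int rowStride = yPlane.getRowStride(); // bytes per source row, may include padding

    byte[] yData = new byte[yBuffer.remaining()];
    yBuffer.get(yData);

    byte[] cropped = new byte[cropRect.width() * cropRect.height()];
    for (int row = 0; row < cropRect.height(); row++) {
        int srcPos = (cropRect.top + row) * rowStride + cropRect.left;
        System.arraycopy(yData, srcPos, cropped, row * cropRect.width(), cropRect.width());
    }
    return cropped;
}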
I do not have a background in imaging or graphics, so please bear with me :)
I am using JavaCV in one of my projects. In the examples, a Frame is constructed which has a buffer of a certain size.
When using the public void onPreviewFrame(byte[] data, Camera camera) function in Android, copying this data byte array is no problem if you declare the Frame as new Frame(frameWidth, frameHeight, Frame.DEPTH_UBYTE, 2); where frameWidth and frameHeight are declared as
Camera.Size previewSize = cameraParam.getPreviewSize();
int frameWidth = previewSize.width;
int frameHeight = previewSize.height;
Recently, Android added a method to capture your screen. Naturally, I wanted to grab those images and also convert them to Frames. I modified the example code from Google to use the ImageReader.
This ImageReader is constructed as ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, PixelFormat.RGBA_8888, 2);. So currently it uses the RGBA_8888 pixel format. I use the following code to copy the bytes to the Frame, which is instantiated as new Frame(DISPLAY_WIDTH, DISPLAY_HEIGHT, Frame.DEPTH_UBYTE, 2);:
ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
mImage.close();
((ByteBuffer) frame.image[0].position(0)).put(bytes);
But this gives me a java.nio.BufferOverflowException. I printed the sizes of both buffers and the Frame's buffer size is 691200 whereas the bytes array above is of size 1413056. Figuring out how this latter number is constructed failed because I ran into this native call. So clearly, this won't work out.
After quite a bit of digging I found out that NV21 is "the default format for Camera preview images, when not otherwise set with setPreviewFormat(int)", but the ImageReader class does not support the NV21 format (see the format parameter). So that's tough luck. The documentation also states:
"For the android.hardware.camera2 API, the YUV_420_888 format is recommended for YUV output instead."
So I tried creating an ImageReader like this: ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, ImageFormat.YUV_420_888, 2);, but this gives me java.lang.UnsupportedOperationException: The producer output buffer format 0x1 doesn't match the ImageReader's configured buffer format 0x23, so that won't work either.
As a last resort, I tried to convert RGBA_8888 to YUV myself using e.g. this post, but I fail to understand how I can obtain an int[] rgba as per the answer.
So, TL;DR how can I obtain NV21 image data like you get in Android's public void onPreviewFrame(byte[] data, Camera camera) camera function to instantiate my Frame and work with it using Android's ImageReader (and Media Projection)?
Edit (25-10-2016)
I have created the following conversion runnable to go from RGBA to NV21 format:
private class updateImage implements Runnable {
private final Image mImage;
public updateImage(Image image) {
mImage = image;
}
@Override
public void run() {
int mWidth = mImage.getWidth();
int mHeight = mImage.getHeight();
// Four bytes per pixel: width * height * 4.
byte[] rgbaBytes = new byte[mWidth * mHeight * 4];
// put the data into the rgbaBytes array.
mImage.getPlanes()[0].getBuffer().get(rgbaBytes);
mImage.close(); // Access to the image is no longer needed, release it.
// Create a YUV byte array: width * height * 1.5 (NV21 uses 12 bits per pixel).
byte[] yuv = new byte[mWidth * mHeight * 3 / 2];
RGBtoNV21(yuv, rgbaBytes, mWidth, mHeight);
((ByteBuffer) yuvImage.image[0].position(0)).put(yuv);
}
void RGBtoNV21(byte[] yuv420sp, byte[] argb, int width, int height) {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int A, R, G, B, Y, U, V;
int index = 0;
int rgbIndex = 0;
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++) {
R = argb[rgbIndex++];
G = argb[rgbIndex++];
B = argb[rgbIndex++];
A = argb[rgbIndex++]; // Ignored right now.
// RGB to YUV conversion according to
// https://en.wikipedia.org/wiki/YUV#Y.E2.80.B2UV444_to_RGB888_conversion
Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
// NV21 has a plane of Y and interleaved planes of VU each sampled by a factor
// of 2 meaning for every 4 Y pixels there are 1 V and 1 U.
// Note the sampling is every other pixel AND every other scanline.
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (i % 2 == 0 && index % 2 == 0) {
yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
}
index++;
}
}
}
}
The yuvImage object is initialized as yuvImage = new Frame(DISPLAY_WIDTH, DISPLAY_HEIGHT, Frame.DEPTH_UBYTE, 2);, the DISPLAY_WIDTH and DISPLAY_HEIGHT are just two integers specifying the display size.
This is the code where a background handler handles the onImageReady:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
= new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
mBackgroundHandler.post(new updateImage(reader.acquireNextImage()));
}
};
...
mImageReader = ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, PixelFormat.RGBA_8888, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
The methods work and I at least don't get any errors, but the output image is malformed. What is going wrong in my conversion? An example image that is being created:
Edit (15-11-2016)
I have modified the RGBtoNV21 function to be the following:
void RGBtoNV21(byte[] yuv420sp, int width, int height) {
try {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int pixelStride = mImage.getPlanes()[0].getPixelStride();
int rowStride = mImage.getPlanes()[0].getRowStride();
int rowPadding = rowStride - pixelStride * width;
ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
Bitmap bitmap = Bitmap.createBitmap(getResources().getDisplayMetrics(), width, height, Bitmap.Config.ARGB_8888);
int A, R, G, B, Y, U, V;
int offset = 0;
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++) {
// Useful link: https://stackoverflow.com/questions/26673127/android-imagereader-acquirelatestimage-returns-invalid-jpg
R = (buffer.get(offset) & 0xff) << 16; // R
G = (buffer.get(offset + 1) & 0xff) << 8; // G
B = (buffer.get(offset + 2) & 0xff); // B
A = (buffer.get(offset + 3) & 0xff) << 24; // A
offset += pixelStride;
int pixel = 0;
pixel |= R; // R
pixel |= G; // G
pixel |= B; // B
pixel |= A; // A
bitmap.setPixel(j, i, pixel);
// RGB to YUV conversion according to
// https://en.wikipedia.org/wiki/YUV#Y.E2.80.B2UV444_to_RGB888_conversion
// Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
// U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
// V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
Y = (int) Math.round(R * .299000 + G * .587000 + B * .114000);
U = (int) Math.round(R * -.168736 + G * -.331264 + B * .500000 + 128);
V = (int) Math.round(R * .500000 + G * -.418688 + B * -.081312 + 128);
// NV21 has a plane of Y and interleaved planes of VU each sampled by a factor
// of 2 meaning for every 4 Y pixels there are 1 V and 1 U.
// Note the sampling is every other pixel AND every other scanline.
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (i % 2 == 0 && j % 2 == 0) {
yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
}
}
offset += rowPadding;
}
File file = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES).getAbsolutePath(), "/Awesomebitmap.png");
FileOutputStream fos = new FileOutputStream(file);
bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
} catch (Exception e) {
Timber.e(e, "Converting image to NV21 went wrong.");
}
}
Now the image is no longer malformed, but the chroma is off.
The right side is the bitmap being created in that loop; the left side is the NV21 data saved to an image. So the RGB pixels are processed correctly. Clearly the chroma is off, but the RGB to YUV conversion should be the same as the one described on Wikipedia. What could be wrong here?
Generally speaking, the point of ImageReader is to give you raw access to the pixels sent to the Surface with minimal overhead, so attempting to have it perform color conversions doesn't make sense.
For the Camera you get to pick one of two output formats (NV21 or YV12), so pick YV12. That's your raw YUV data. For screen capture the output will always be RGB, so you need to pick RGBA_8888 (format 0x1) for your ImageReader, rather than YUV_420_888 (format 0x23). If you need YUV for that, you will have to do the conversion yourself. The ImageReader gives you a series of Plane objects, not a byte[], so you will need to adapt to that.
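As a rough sketch of that adaptation for an RGBA_8888 ImageReader (assuming width and height are the reader's dimensions and that the pixel stride is 4, as it is for RGBA_8888), the Plane's row stride has to be respected when packing the pixels into a byte[]:
Image image = reader.acquireLatestImage();
Image.Plane plane = image.getPlanes()[0];
ByteBuffer buffer = plane.getBuffer();
int pixelStride = plane.getPixelStride(); // 4 for RGBA_8888
int rowStride = plane.getRowStride();     // may be larger than width * pixelStride

byte[] packed = new byte[width * height * 4];
for (int row = 0; row < height; row++) {
    buffer.position(row * rowStride);
    buffer.get(packed, row * width * 4, width * 4);
}
image.close();
// 'packed' now holds tightly packed RGBA pixels that an RGB-to-NV21 routine can consume.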
My problem is: I've set up a camera in Android and receive the preview data via an onPreviewFrame listener, which passes me a byte[] array containing the image data in the default Android YUV format (the device does not support the R5G6B5 format). Each pixel consists of 12 bits, which makes things a little tricky. Now what I want to do is convert the YUV data into ARGB data in order to do image processing with it. This has to be done with RenderScript in order to maintain high performance.
My idea was to pass two pixels in one element (which would be 24 bits = 3 bytes) and then return two ARGB pixels. The problem is that in RenderScript a u8_3 (a 3-dimensional 8-bit vector) is stored in 32 bits, which means the last 8 bits are unused. But when copying the image data into the allocation, all 32 bits are used, so the last 8 bits get lost. Even if I used 32-bit input data, the last 8 bits would be useless, because they are only 2/3 of a pixel. When defining an element consisting of a 3-byte array, it actually has a real size of 3 bytes, but then the Allocation.copyFrom() method doesn't fill the in-allocation with data, complaining that it doesn't have the right data type to be filled with a byte[].
The RenderScript documentation states that there is a ScriptIntrinsicYuvToRGB which should do exactly that in API Level 17. But in fact the class doesn't exist. I've downloaded API Level 17, even though it seems not to be downloadable any more. Does anyone have any information about it? Has anyone ever tried out a ScriptIntrinsic?
So, in conclusion, my question is: how do I convert the camera data into ARGB data quickly and hardware-accelerated?
Here is how to do it in the Dalvik VM (I found the code somewhere online; it works):
@SuppressWarnings("unused")
private void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0)
r = 0;
else if (r > 262143)
r = 262143;
if (g < 0)
g = 0;
else if (g > 262143)
g = 262143;
if (b < 0)
b = 0;
else if (b > 262143)
b = 262143;
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
}
I'm sure you will find the LivePreview test application interesting ... it's part of the Android source code in the latest Jelly Bean (MR1). It implements a camera preview and uses ScriptIntrinsicYuvToRgb to convert the preview data with Renderscript. You can browse the source online here:
LivePreview
I was not able to get ScriptIntrinsicYuvToRGB running, so I decided to write my own RenderScript solution.
Here's the ready-made script (named yuv.rs):
#pragma version(1)
#pragma rs java_package_name(com.package.name)
rs_allocation gIn;
int width;
int height;
int frameSize;
void yuvToRgb(const uchar *v_in, uchar4 *v_out, const void *usrData, uint32_t x, uint32_t y) {
uchar yp = rsGetElementAtYuv_uchar_Y(gIn, x, y) & 0xFF;
int index = frameSize + (x & (~1)) + (( y>>1) * width );
int v = (int)( rsGetElementAt_uchar(gIn, index) & 0xFF ) -128;
int u = (int)( rsGetElementAt_uchar(gIn, index+1) & 0xFF ) -128;
int r = (int) (1.164f * yp + 1.596f * v );
int g = (int) (1.164f * yp - 0.813f * v - 0.391f * u);
int b = (int) (1.164f * yp + 2.018f * u );
r = r>255? 255 : r<0 ? 0 : r;
g = g>255? 255 : g<0 ? 0 : g;
b = b>255? 255 : b<0 ? 0 : b;
uchar4 res4;
res4.r = (uchar)r;
res4.g = (uchar)g;
res4.b = (uchar)b;
res4.a = 0xFF;
*v_out = res4;
}
Don't forget to set camera preview format to NV21:
Parameters cameraParameters = camera.getParameters();
cameraParameters.setPreviewFormat(ImageFormat.NV21);
// Other camera init stuff: preview size, framerate, etc.
camera.setParameters(cameraParameters);
Allocations initialization and script usage:
// Somewhere in initialization section
// w and h are variables for selected camera preview size
rs = RenderScript.create(this);
Type.Builder tbIn = new Type.Builder(rs, Element.U8(rs));
tbIn.setX(w);
tbIn.setY(h);
tbIn.setYuvFormat(ImageFormat.NV21);
Type.Builder tbOut = new Type.Builder(rs, Element.RGBA_8888(rs));
tbOut.setX(w);
tbOut.setY(h);
inData = Allocation.createTyped(rs, tbIn.create(), Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED);
outData = Allocation.createTyped(rs, tbOut.create(), Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED);
outputBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
yuvScript = new ScriptC_yuv(rs);
yuvScript.set_gIn(inData);
yuvScript.set_width(w);
yuvScript.set_height(h);
yuvScript.set_frameSize(previewSize);
//.....
Camera callback method:
public void onPreviewFrame(byte[] data, Camera camera) {
// In your camera callback, data contains the NV21 preview frame
inData.copyFrom(data);
yuvScript.forEach_yuvToRgb(inData, outData);
outData.copyTo(outputBitmap);
// draw your bitmap where you want to
// .....
}
For anyone who didn't know, RenderScript is now in the Android Support Library, including intrinsics.
http://android-developers.blogspot.com.au/2013/09/renderscript-in-android-support-library.html
http://android-developers.blogspot.com.au/2013/08/renderscript-intrinsics.html
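For completeness, a minimal sketch of using the intrinsic itself to convert an NV21 preview frame to an ARGB bitmap. Here yuvData, width, height and context are assumed to come from your onPreviewFrame callback, getPreviewSize() and your Activity; with the support library the classes come from android.support.v8.renderscript instead of android.renderscript:
RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

// NV21 input is a flat byte allocation of width * height * 3 / 2 bytes.
Type yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuvData.length).create();
Allocation in = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);

Type rgbType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create();
Allocation out = Allocation.createTyped(rs, rgbType, Allocation.USAGE_SCRIPT);

// Per frame: copy the NV21 bytes in, run the intrinsic, copy the result into a bitmap.
in.copyFrom(yuvData);
yuvToRgb.setInput(in);
yuvToRgb.forEach(out);

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
out.copyTo(bitmap);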
We now have the renderscript-intrinsics-replacement-toolkit to do it. First, build and import the renderscript module into your project and add it as a dependency of your app module. Then go to Toolkit.kt and add the following:
fun toNv21(image: Image): ByteArray? {
val nv21 = ByteArray((image.width * image.height * 1.5f).toInt())
return if (!nativeYuv420toNv21(
nativeHandle,
image.width,
image.height,
image.planes[0].buffer, // Y buffer
image.planes[1].buffer, // U buffer
image.planes[2].buffer, // V buffer
image.planes[0].pixelStride, // Y pixel stride
image.planes[1].pixelStride, // U/V pixel stride
image.planes[0].rowStride, // Y row stride
image.planes[1].rowStride, // U/V row stride
nv21
)
) {
null
} else nv21
}
private external fun nativeYuv420toNv21(
nativeHandle: Long,
imageWidth: Int,
imageHeight: Int,
yByteBuffer: ByteBuffer,
uByteBuffer: ByteBuffer,
vByteBuffer: ByteBuffer,
yPixelStride: Int,
uvPixelStride: Int,
yRowStride: Int,
uvRowStride: Int,
nv21Output: ByteArray
): Boolean
Now, go to JniEntryPoints.cpp and add the following:
extern "C" JNIEXPORT jboolean JNICALL Java_com_google_android_renderscript_Toolkit_nativeYuv420toNv21(
JNIEnv *env, jobject/*thiz*/, jlong native_handle,
jint image_width, jint image_height, jobject y_byte_buffer,
jobject u_byte_buffer, jobject v_byte_buffer, jint y_pixel_stride,
jint uv_pixel_stride, jint y_row_stride, jint uv_row_stride,
jbyteArray nv21_array) {
auto y_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(y_byte_buffer));
auto u_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(u_byte_buffer));
auto v_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(v_byte_buffer));
jbyte* nv21 = env->GetByteArrayElements(nv21_array, nullptr);
if (nv21 == nullptr || y_buffer == nullptr || u_buffer == nullptr
|| v_buffer == nullptr) {
// Log this.
return false;
}
RenderScriptToolkit* toolkit = reinterpret_cast<RenderScriptToolkit*>(native_handle);
toolkit->yuv420toNv21(image_width, image_height, y_buffer, u_buffer, v_buffer,
y_pixel_stride, uv_pixel_stride, y_row_stride, uv_row_stride,
nv21);
env->ReleaseByteArrayElements(nv21_array, nv21, 0);
return true;
}
Go to YuvToRgb.cpp and add the following:
void RenderScriptToolkit::yuv420toNv21(int image_width, int image_height, const int8_t* y_buffer,
const int8_t* u_buffer, const int8_t* v_buffer, int y_pixel_stride,
int uv_pixel_stride, int y_row_stride, int uv_row_stride,
int8_t *nv21) {
// Copy Y channel.
for(int y = 0; y < image_height; ++y) {
int destOffset = image_width * y;
int yOffset = y * y_row_stride;
memcpy(nv21 + destOffset, y_buffer + yOffset, image_width);
}
if (v_buffer - u_buffer == sizeof(int8_t)) {
// format = nv21
// TODO: If the format is VUVUVU & pixel stride == 1 we could simplify the copy
// with memcpy. In Android Camera2 I have mostly come across UVUVUV packaging
// though.
}
// Copy UV Channel.
int idUV = image_width * image_height;
int uv_width = image_width / 2;
int uv_height = image_height / 2;
for(int y = 0; y < uv_height; ++y) {
int uvOffset = y * uv_row_stride;
for (int x = 0; x < uv_width; ++x) {
int bufferIndex = uvOffset + (x * uv_pixel_stride);
// V channel.
nv21[idUV++] = v_buffer[bufferIndex];
// U channel.
nv21[idUV++] = u_buffer[bufferIndex];
}
}
}
Finally, go to RenderscriptToolkit.h and add the following:
/**
* https://blog.minhazav.dev/how-to-use-renderscript-to-convert-YUV_420_888-yuv-image-to-bitmap/#tobitmapimage-image-method
* @param image_width width of the image you want to convert to byte array
* @param image_height height of the image you want to convert to byte array
* @param y_buffer Y buffer
* @param u_buffer U buffer
* @param v_buffer V buffer
* @param y_pixel_stride Y pixel stride
* @param uv_pixel_stride UV pixel stride
* @param y_row_stride Y row stride
* @param uv_row_stride UV row stride
* @param nv21 the output byte array
*/
void yuv420toNv21(int image_width, int image_height, const int8_t* y_buffer,
const int8_t* u_buffer, const int8_t* v_buffer, int y_pixel_stride,
int uv_pixel_stride, int y_row_stride, int uv_row_stride,
int8_t *nv21);
You are now ready to harness the full power of renderscript. Below, I am providing an example with the ARCore Camera Image object (replace the first line with whatever code gives you your camera image):
val cameraImage = arFrame.frame.acquireCameraImage()
val width = cameraImage.width
val height = cameraImage.height
val byteArray = Toolkit.toNv21(cameraImage)
byteArray?.let {
    Toolkit.yuvToRgbBitmap(
        byteArray,
        width,
        height,
        YuvFormat.NV21
    ).let { bitmap ->
        saveBitmapToDevice(
            name,
            session,
            bitmap,
            context
        )
    }
}
I am doing histogram equalization on an image. I first get the RGB image and convert it to YUV. I run the histogram equalization algorithm on the Y channel of YUV and then convert back to RGB. Is it just me, or does the image look weird? Am I doing this correctly? This image is pretty bright; other images come out a little red.
Here are the before/after images:
The algorithm (the commented-out values are ones I used previously for the conversion; both yield pretty much the same results):
public static void createContrast(Bitmap src) {
int width = src.getWidth();
int height = src.getHeight();
Bitmap processedImage = Bitmap.createBitmap(width, height, src.getConfig());
int A = 0,R,G,B;
int pixel;
float[][] Y = new float[width][height];
float[][] U = new float[width][height];
float[][] V = new float [width][height];
int [] histogram = new int[256];
Arrays.fill(histogram, 0);
int [] cdf = new int[256];
Arrays.fill(cdf, 0);
float min = 257;
float max = 0;
for(int x = 0; x < width; ++x) {
for(int y = 0; y < height; ++y) {
pixel = src.getPixel(x, y);
//Log.i("TEST","("+x+","+y+")");
A = Color.alpha(pixel);
R = Color.red(pixel);
G = Color.green(pixel);
B = Color.blue(pixel);
/*Log.i("TESTEST","R: "+R);
Log.i("TESTEST","G: "+G);
Log.i("TESTEST","B: "+B);*/
// convert to YUV
/*Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
U[x][y] = 0.492f * (B-Y[x][y]);
V[x][y] = 0.877f * (R-Y[x][y]);*/
Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
U[x][y] = 0.565f * (B-Y[x][y]);
V[x][y] = 0.713f * (R-Y[x][y]);
// create a histogram
histogram[(int) Y[x][y]]+=1;
// get min and max values
if (Y[x][y] < min){
min = Y[x][y];
}
if (Y[x][y] > max){
max = Y[x][y];
}
}
}
cdf[0] = histogram[0];
for (int i=1;i<=255;i++){
cdf[i] = cdf[i-1] + histogram[i];
//Log.i("TESTEST","cdf of: "+i+" = "+cdf[i]);
}
float minCDF = cdf[(int)min];
float denominator = width*height - minCDF;
//Log.i("TEST","Histeq Histeq Histeq Histeq Histeq Histeq");
for(int x = 0; x < width; ++x) {
for(int y = 0; y < height; ++y) {
//Log.i("TEST","("+x+","+y+")");
pixel = src.getPixel(x, y);
A = Color.alpha(pixel);
Y[x][y] = ((cdf[ (int) Y[x][y]] - minCDF)/(denominator)) * 255;
/*R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
G = minMaxCalc (Y[x][y] - 0.395f * U[x][y] - 0.581f * V[x][y]);
B = minMaxCalc (Y[x][y] + 2.032f * U[x][y]);*/
R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
G = minMaxCalc (Y[x][y] - 0.344f * U[x][y] - 0.714f * V[x][y]);
B = minMaxCalc (Y[x][y] + 1.77f * U[x][y]);
//Log.i("TESTEST","A: "+A);
/*Log.i("TESTEST","R: "+R);
Log.i("TESTEST","G: "+G);
Log.i("TESTEST","B: "+B);*/
processedImage.setPixel(x, y, Color.argb(A, R, G, B));
}
}
}
My next step is to graph the histograms before and after. I just want to get an opinion here.
The question is a little bit old, but let me answer.
The reason is the way histogram equalization works: the algorithm tries to use the full 0-255 range instead of the given image's own range.
So if you give it a dark image, it will push the relatively brighter pixels toward white and the relatively darker ones toward black.
If you give it a bright image, it will be darkened for the same reason.
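As a quick illustration with hypothetical numbers: the mapping used in the code above is Y' = 255 * (cdf[Y] - minCDF) / (N - minCDF), where N is the pixel count. For a dark 100x100 image (N = 10,000) whose brightest value is Y = 100, cdf[100] = 10,000, so that pixel is pushed all the way up to 255; a value whose cdf is 5,000 lands near 127. The whole range gets stretched to 0-255, which is why dark images come out brighter and bright images come out darker.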