I want to do some image processing on a YUV_420_888 Image and need a grayscale version of it. From what I've read about the YUV format, it should be enough to extract the Y plane of the Image. In Android I do that with this workflow to convert the Y plane into a byte array:
Image.Plane yPlane = img.getPlanes()[0]; // the Y plane is always plane 0
ByteBuffer byteBuffer = yPlane.getBuffer();
byte[] data = new byte[byteBuffer.remaining()];
byteBuffer.get(data);
Since I want to compare the image I get this way with another grayscale image (or at least with the result of the image processing), here is my question: is the grayscale image I get by extracting the Y plane essentially the same as an RGB image converted to grayscale? Or do I need some additional processing steps?
Yes, the data you get from the Y plane should be effectively the same as what you would get by converting an RGB image to grayscale.
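One caveat: the Y plane's row stride may be larger than the image width, so a blind buffer copy can include padding bytes between rows. A minimal sketch, assuming a YUV_420_888 Image (the helper name yPlaneToGrayscale is just for illustration), that packs the Y plane into a tight width*height luminance array:

// Sketch: pack the Y plane of a YUV_420_888 Image into a tight
// width*height grayscale array, skipping any row-stride padding.
private static byte[] yPlaneToGrayscale(Image img) {
    Image.Plane yPlane = img.getPlanes()[0];
    ByteBuffer buffer = yPlane.getBuffer();
    int width = img.getWidth();
    int height = img.getHeight();
    int rowStride = yPlane.getRowStride(); // may be larger than width on some devices

    byte[] data = new byte[width * height];
    if (rowStride == width) {
        buffer.get(data); // no padding, one bulk copy is enough
    } else {
        for (int row = 0; row < height; row++) {
            buffer.position(row * rowStride);
            buffer.get(data, row * width, width);
        }
    }
    return data;
}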
No — I am using an IR sensor, from which I get a YUV_420_888 image that is already grayscale. But the following code, which I used to convert it to bytes, gave me an error. Following your answer, I took only the Y plane, and the result was a green screen.
ByteBuffer[] buffer = new ByteBuffer[1];
Image image = reader.acquireNextImage();
buffer[0] = image.getPlanes()[0].getBuffer().duplicate(); // Y plane only
//buffer[1] = image.getPlanes()[1].getBuffer().duplicate();
int buffer0_size = buffer[0].remaining();
//int buffer1_size = buffer[1].remaining();
buffer[0].clear(); // rewind to position 0 before the bulk copy below
//buffer[1].clear();
byte[] buffer0_byte = new byte[buffer0_size];
//byte[] buffer1_byte = new byte[buffer1_size];
buffer[0].get(buffer0_byte, 0, buffer0_size);
//buffer[1].get(buffer1_byte, 0, buffer1_size);
byte[] byte2 = buffer0_byte;
//byte2=buffer0_byte;
//byte2[1]=buffer1_byte;
image.close();
mArrayImageBuffer.add(byte2);
After dequeuing, the bytes go to this function:
public static byte[] convertYUV420ToNV12(byte[] byteBuffers) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    try {
        outputStream.write(byteBuffers);
        //outputStream.write(byteBuffers[1]);
    } catch (IOException e) {
        e.printStackTrace();
    }
    //outputStream.write(buffer2_byte);
    byte[] rez = outputStream.toByteArray();
    return rez;
}
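A likely cause of the green screen: if only the Y plane goes downstream, whatever consumes the NV12 buffer still reads chroma bytes, and zeroed chroma decodes with a heavy green cast. A minimal sketch (the helper name is illustrative, and it assumes tightly packed Y data) that appends neutral chroma (0x80) so the frame renders as true grayscale:

// Sketch: build a full NV12-sized buffer from a Y-only plane by
// appending neutral chroma (0x80), so the frame renders gray, not green.
public static byte[] yToNV12WithNeutralChroma(byte[] yData, int width, int height) {
    int chromaSize = width * height / 2; // interleaved UV half-plane
    byte[] nv12 = new byte[yData.length + chromaSize];
    System.arraycopy(yData, 0, nv12, 0, yData.length);
    // java.util.Arrays: fill the chroma region with the neutral value 128
    Arrays.fill(nv12, yData.length, nv12.length, (byte) 0x80);
    return nv12;
}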
I am currently working on a project that uses OpenCV as a background service to detect faces while the app is playing videos.
I've managed to run OpenCV as a service, and I am using an ImageReader instance to capture the images:
private ImageReader mImageReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.YUV_420_888, 1);
What I am trying to do is get the detected face image and send it to the backend. The image from the ImageReader is converted to a Mat, so I have access to both the Mat type and the Image type.
I've managed to convert the acquired image to a YuvImage and then to JPEG (a byte array) by using the toYuvImage and toJpegImage methods from this link: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/#how-to-convert-yuv_420_888-image-to-jpeg-format
After converting the image to an array of bytes, I convert it to base64 to send it over HTTP. The problem is that when I set imageQuality to 100 in toJpegImage, the resulting base64 image looks corrupted, but when I set the value to something lower, like 15 or 10, the image output renders correctly while the quality is bad. I am not sure whether this problem is related to the resolution.
byte[] jpegDataTest = ImageUtil.toJpegImage(detectionImage,15);
String base64New = Base64.encodeToString(jpegDataTest, Base64.DEFAULT);
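One possibility I have not ruled out (purely an assumption): Base64.DEFAULT inserts line breaks into the output, which some HTTP backends mishandle, and the quality-100 payload is much larger, so it contains many more of them. A minimal variant using NO_WRAP instead:

// Assumption: line breaks from Base64.DEFAULT may be what corrupts the
// transfer; NO_WRAP emits one unbroken string.
byte[] jpegDataTest = ImageUtil.toJpegImage(detectionImage, 100);
String base64New = Base64.encodeToString(jpegDataTest, Base64.NO_WRAP);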
PS: I am converting the image each time a face is detected, inside a for loop:
for(Rect rect : faceDetections.toArray()){}
Compress quality set to 100: https://i.postimg.cc/YqSmFxrT/quality100.jpg
Compress quality set to 15:
public static byte[] toJpegImage(Image image, int imageQuality) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }
    YuvImage yuvImage = toYuvImage(image);
    int width = image.getWidth();
    int height = image.getHeight();
    // Convert to jpeg
    byte[] jpegImage = null;
    try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), imageQuality, out);
        jpegImage = out.toByteArray();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return jpegImage;
}
private static byte[] YUV_420_888toNV21(Image image) {
    // Note: this assumes the chroma planes are already interleaved with a
    // pixel stride of 2 (common on many devices, but not guaranteed).
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer(); // V plane; with interleaving it also covers U
    int ySize = yBuffer.remaining();
    int vuSize = vuBuffer.remaining();
    nv21 = new byte[ySize + vuSize];
    yBuffer.get(nv21, 0, ySize);
    vuBuffer.get(nv21, ySize, vuSize);
    return nv21;
}
I am trying to obtain the RGB value of a pixel from the camera.
I keep getting null values.
Is there another way to capture the camera image into a Bitmap? I've looked into several options, but most of them generated a NullPointerException.
It also outputs SkImageDecoder::Factory returned null.
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Image image = reader.acquireNextImage();
        // mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
        try {
            Image image = reader.acquireNextImage();
            final Image.Plane[] planes = image.getPlanes();
            final Buffer buffer = planes[0].getBuffer();
            Log.d("BUFFER", String.valueOf(buffer));
            int offset = 0;

            byte[] bytes = new byte[buffer.remaining()];
            Log.d("BYTES", String.valueOf(bytes));

            Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length); // NULL err
            Log.d("R1", "bitmap created");

            int r1, g1, b1;
            int p = 50;
            r1 = (p >> 16) & 0xff;
            g1 = (p >> 8) & 0xff;
            b1 = p & 0xff;
            Log.d("R1", String.valueOf(r1));
            Log.d("G1", String.valueOf(g1));
            Log.d("B1", String.valueOf(b1));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
};
What format is your ImageReader using? If it's JPEG, then your approach should generally work, but you're not actually copying buffer into bytes anywhere.
You're creating the bytes array, and then passing that empty array into decodeByteArray. You need something like ByteBuffer.get to actually copy data into bytes.
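For example, a minimal sketch of the missing copy for the JPEG case:

// Copy the compressed bytes out of the plane's buffer, then decode.
ByteBuffer buffer = planes[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes); // this copy is the step the original code skips
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);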
If the ImageReader is YUV or RAW, then this won't work; those Images are raw arrays of image data, with no headers or the like for BitmapFactory to know what to do with them. You'd have to inspect the pixel values directly, since the contents aren't compressed in any way.
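For the YUV case, inspecting the pixel values directly can look like this sketch, which reads the luminance of a single pixel (the plane's row and pixel strides matter here):

// Sketch: read the luma of pixel (x, y) straight from the Y plane of a
// YUV_420_888 Image, honoring the plane's strides.
Image.Plane yPlane = image.getPlanes()[0];
ByteBuffer yBuf = yPlane.getBuffer();
int luma = yBuf.get(y * yPlane.getRowStride() + x * yPlane.getPixelStride()) & 0xFF;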
I am trying to convert an image from YUV_420_888 to RGB, and I have some trouble with the output image. In ImageReader I get the image in YUV_420_888 format (using the Camera2 API for the image preview).
imageReader = ImageReader.newInstance(1920,1080,ImageFormat.YUV_420_888,10);
The Android SDK documentation for the YuvImage class says that YuvImage supports only NV21 and YUY2.
As we can see, the difference between NV21 and YUV420 is not big, so I try to convert the data to NV21.
YUV420:
and NV21:
In onImageAvailable I get each of the planes separately and put them in the correct place (as in the diagrams above):
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
ByteBuffer bufferY = image.getPlanes()[0].getBuffer();
byte[] data0 = new byte[bufferY.remaining()];
bufferY.get(data0);
ByteBuffer bufferU = image.getPlanes()[1].getBuffer();
byte[] data1 = new byte[bufferU.remaining()];
bufferU.get(data1);
ByteBuffer bufferV = image.getPlanes()[2].getBuffer();
byte[] data2 = new byte[bufferV.remaining()];
bufferV.get(data2);
...
outputStream.write(data0);
for (int i = 0; i < bufferV.remaining(); i++) {
    outputStream.write(data1[i]);
    outputStream.write(data2[i]);
}
After that I create the YuvImage, convert it to a Bitmap, view it, and save it:
final YuvImage yuvImage = new YuvImage(outputStream.toByteArray(), ImageFormat.NV21, 1920, 1080, null);
ByteArrayOutputStream outBitmap = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, 1920, 1080), 95, outBitmap);
byte[] imageBytes = outBitmap.toByteArray();
final Bitmap imageBitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
mImageView.setImageBitmap(imageBitmap);
...
imageBitmap.compress(Bitmap.CompressFormat.JPEG, 95, out);
But my saved image is green and pink:
What did I miss?
I have implemented the YUV_420 logic (exactly as shown in the diagram above) in RenderScript; see the full code here:
Conversion YUV_420 _888 to Bitmap, complete code
It produces perfect bitmaps on API 22, but on API 21 it shows the "green idyll", so I can confirm the results you found. As Silvaren already mentioned, the reason seems to be an Android bug in API 21. Looking at my RenderScript code, it is clear that if the U and V information is missing (i.e. zero), the G(reen) ARGB component becomes huge during the conversion.
I see similar green pictures on my Galaxy S5 (still API 21), here even upside down ;-). I suspect that most API 21 devices do not yet use Camera2 for their built-in camera apps. There is a free app called "Manual Camera Compatibility" that lets you test this; with it I can see that the S5 on API 21 indeed still does not use Camera2. Fortunately so.
There are two main problems with your conversion attempt:
We cannot assume that the U and V planes are isolated; they might contain interleaved data (e.g. U-PLANE = {U1, V1, U2, V2, ...}). In fact, it might even be an NV21-style interleaving already. The key is to look at each plane's row stride and pixel stride, and to check what we can assume about the YUV_420_888 format; see the stride-aware sketch after this list.
The fact that you've commented that most of the U and V plane data is zeros indicates that you are hitting an Android bug in the generation of YUV_420_888 images. This means that even with a correct conversion the image would still look green if you are affected by the bug, which was only fixed in Android 5.1.1 and up, so besides fixing the code it is worth checking which Android version you are running.
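Here is the stride-aware sketch referred to above. It is not production code, but it makes no assumption about how the chroma planes are laid out, walking U and V per pixel via each plane's row stride and pixel stride:

// Sketch: YUV_420_888 -> NV21 without assuming any plane interleaving.
private static byte[] yuv420888ToNV21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    // Y plane: copy row by row in case rowStride > width.
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuf = yPlane.getBuffer();
    for (int row = 0; row < height; row++) {
        yBuf.position(row * yPlane.getRowStride());
        yBuf.get(nv21, row * width, width);
    }

    // Chroma planes: NV21 expects interleaved VU, with V first.
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    int pos = width * height;
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            nv21[pos++] = vBuf.get(row * vPlane.getRowStride() + col * vPlane.getPixelStride());
            nv21[pos++] = uBuf.get(row * uPlane.getRowStride() + col * uPlane.getPixelStride());
        }
    }
    return nv21;
}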
bufferV.get(data2) advances the position of the ByteBuffer, so afterwards bufferV.remaining() is 0 and the loop for (int i=0; i<bufferV.remaining(); i++) runs zero iterations. You can easily rewrite it as:
for (int i = 0; i < data1.length; i++) {
    outputStream.write(data1[i]);
    outputStream.write(data2[i]);
}
I got an image in ImageFormat.YUV_420_888, managed to save it to a JPEG file, and could view it correctly on Windows.
I am sharing the code here:
private final Image mImage;
private final File mFile;
private final int mImageFormat;
ByteArrayOutputStream outputbytes = new ByteArrayOutputStream();
ByteBuffer bufferY = mImage.getPlanes()[0].getBuffer();
byte[] data0 = new byte[bufferY.remaining()];
bufferY.get(data0);
ByteBuffer bufferU = mImage.getPlanes()[1].getBuffer();
byte[] data1 = new byte[bufferU.remaining()];
bufferU.get(data1);
ByteBuffer bufferV = mImage.getPlanes()[2].getBuffer();
byte[] data2 = new byte[bufferV.remaining()];
bufferV.get(data2);
try {
    outputbytes.write(data0);
    // NV21 expects the chroma as interleaved VU, so the V plane (data2)
    // is written before the U plane (data1).
    outputbytes.write(data2);
    outputbytes.write(data1);
    final YuvImage yuvImage = new YuvImage(outputbytes.toByteArray(), ImageFormat.NV21, mImage.getWidth(), mImage.getHeight(), null);
    ByteArrayOutputStream outBitmap = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, mImage.getWidth(), mImage.getHeight()), 95, outBitmap);
    FileOutputStream outputfile = new FileOutputStream(mFile);
    outputfile.write(outBitmap.toByteArray());
} catch (IOException e) {
    e.printStackTrace();
} finally {
    mImage.close();
}
I need to process Images for my application. I get the Images from an ImageReader:
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image mImage = reader.acquireNextImage();
        // mImage to Mat here
        mImage.close();
    }
}, null);
Now I need to convert those Images to Mat.
I know that I could go through the Bitmap class, but I don't know how to convert an Image into a Bitmap either.
I think I found a possible answer.
I give my ImageReader a single-plane format like JPEG:
reader = ImageReader.newInstance(previewSize.getWidth(),previewSize.getHeight(), ImageFormat.JPEG, 2);
Then I do this:
ByteBuffer bb = image.getPlanes()[0].getBuffer();
byte[] buf = new byte[bb.remaining()];
bb.get(buf);
// The buffer holds compressed JPEG bytes, so decode them into the Mat
// instead of copying them in as raw pixel data.
imageGrab = Imgcodecs.imdecode(new MatOfByte(buf), Imgcodecs.IMREAD_UNCHANGED);
Try this
// image to byte array
ByteBuffer bb = image.getPlanes()[0].getBuffer();
byte[] data = new byte[bb.remaining()];
bb.get(data);
// byte array to mat
Mat mat = Imgcodecs.imdecode(new MatOfByte(data), Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
I have a Bitmap that I want to send to the server by encoding it to base64, but I do not want to compress the image as either PNG or JPEG.
What I was previously doing was:
ByteArrayOutputStream byteArrayBitmapStream = new ByteArrayOutputStream();
bitmapPicture.compress(Bitmap.CompressFormat.PNG, COMPRESSION_QUALITY, byteArrayBitmapStream);
byte[] b = byteArrayBitmapStream.toByteArray();
//then simple encoding to base64 and off to server
encodedImage = Base64.encodeToString(b, Base64.NO_WRAP);
Now I don't want to use any compression or any format: just a plain byte[] from the Bitmap that I can encode and send to the server.
Any pointers?
You can use copyPixelsToBuffer() to move the pixel data to a Buffer, or you can use getPixels() and then convert the integers to bytes with bit-shifting.
copyPixelsToBuffer() is probably what you'll want to use, so here is an example of how you can use it:
//b is the Bitmap
//calculate how many bytes our image consists of.
int bytes = b.getByteCount();
//or we can calculate bytes this way. Use a different value than 4 if you don't use 32bit images.
//int bytes = b.getWidth()*b.getHeight()*4;
ByteBuffer buffer = ByteBuffer.allocate(bytes); //Create a new buffer
b.copyPixelsToBuffer(buffer); //Move the byte data to the buffer
byte[] array = buffer.array(); //Get the underlying array containing the data.
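For completeness, the receiving side can rebuild the Bitmap with copyPixelsFromBuffer(); a sketch, assuming the width, height, and config are transmitted alongside the pixels (raw bytes carry no header):

// Sketch: reconstruct a Bitmap from raw pixel bytes produced by
// copyPixelsToBuffer(). Width, height, and config must be known.
static Bitmap fromRawBytes(byte[] pixels, int width, int height) {
    Bitmap restored = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    restored.copyPixelsFromBuffer(ByteBuffer.wrap(pixels));
    return restored;
}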
Instead of the following line in @jave's answer:
int bytes = b.getByteCount();
Use the following line and function:
int bytes = byteSizeOf(b);
protected int byteSizeOf(Bitmap data) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB_MR1) {
        return data.getRowBytes() * data.getHeight();
    } else if (Build.VERSION.SDK_INT < Build.VERSION_CODES.KITKAT) {
        return data.getByteCount();
    } else {
        return data.getAllocationByteCount();
    }
}
BitmapCompat.getAllocationByteCount(bitmap) is helpful for finding the required size of the ByteBuffer.
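A short sketch of using it to size the buffer (assuming the support/androidx BitmapCompat):

// Allocate exactly as many bytes as the Bitmap occupies, then copy.
int size = BitmapCompat.getAllocationByteCount(bitmap);
ByteBuffer buffer = ByteBuffer.allocate(size);
bitmap.copyPixelsToBuffer(buffer);
byte[] array = buffer.array();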