Converting YUV_420_888 To Base64 Android corrupted image - android

I am currently working on a project which uses OpenCV in background mode to detect faces while the app is playing videos.
I've managed to run OpenCV as a service, and I am using an ImageReader instance to capture the images:
private ImageReader mImageReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.YUV_420_888, 1);
What I am trying to do is get the detected face image and send it to the backend. The image from the ImageReader is converted to a Mat, so I have access to both the Mat type and the Image type.
I've managed to convert the acquired image to a YuvImage and then to JPEG (a byte array) by using the toYuvImage and toJpegImage methods from this link: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/#how-to-convert-yuv_420_888-image-to-jpeg-format
After converting the image to an array of bytes, I'm also trying to convert it to Base64 to send it over HTTP. The problem is that when I set imageQuality to 100 in toJpegImage, the resulting Base64 image looks corrupted, but when I set the value to something lower like 15 or 10, the image output (resolution) is better but the quality is bad. I am not sure if this problem is related to the resolution.
byte[] jpegDataTest = ImageUtil.toJpegImage(detectionImage,15);
String base64New = Base64.encodeToString(jpegDataTest, Base64.DEFAULT);
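One thing worth checking, as an assumption on my part rather than something from the question: android.util.Base64.DEFAULT inserts line terminators into the encoded output, which some HTTP backends mangle; NO_WRAP produces a single-line string:

// Hedged sketch: NO_WRAP omits the line breaks that DEFAULT adds,
// which is usually what an HTTP/JSON payload expects.
String base64New = Base64.encodeToString(jpegDataTest, Base64.NO_WRAP);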
PS: I am converting the image each time a face is detected, inside a for loop:
for(Rect rect : faceDetections.toArray()){}
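For illustration, cropping just the detected face region out of the frame's Mat before encoding might look like the sketch below; detectionMat is a hypothetical name for the Mat built from the ImageReader frame, and the use of OpenCV's Imgcodecs instead of YuvImage is my assumption, not code from the question:

for (Rect rect : faceDetections.toArray()) {
    // Submat is a view into the face region (no pixel copy).
    Mat face = detectionMat.submat(rect);
    // Encode the crop to JPEG with OpenCV's codec.
    MatOfByte jpegBuf = new MatOfByte();
    Imgcodecs.imencode(".jpg", face, jpegBuf);
    String base64Face = Base64.encodeToString(jpegBuf.toArray(), Base64.NO_WRAP);
}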
Compress quality set to 100: https://i.postimg.cc/YqSmFxrT/quality100.jpg
Compress quality set to 15:
public static byte[] toJpegImage(Image image, int imageQuality) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }
    YuvImage yuvImage = toYuvImage(image);
    int width = image.getWidth();
    int height = image.getHeight();

    // Convert to JPEG
    byte[] jpegImage = null;
    try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), imageQuality, out);
        jpegImage = out.toByteArray();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return jpegImage;
}
private static byte[] YUV_420_888toNV21(Image image) {
    byte[] nv21;
    // Fast path: assumes the U/V planes are interleaved, so the V plane's
    // buffer already contains the VU byte sequence that NV21 expects.
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int vuSize = vuBuffer.remaining();

    nv21 = new byte[ySize + vuSize];

    yBuffer.get(nv21, 0, ySize);
    vuBuffer.get(nv21, ySize, vuSize);

    return nv21;
}
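Note that this shortcut only produces valid NV21 when the chroma planes really are interleaved. A defensive check, sketched here as a suggestion rather than part of the original post, could guard the fast path:

// The V plane's pixel stride is 2 when U and V are interleaved (VUVU...),
// which is what the direct buffer copy above relies on.
if (image.getPlanes()[2].getPixelStride() != 2) {
    // Fall back to a per-pixel copy, such as the image2byteArray()
    // example further down this page.
}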

Related

PixelCopy on pre-Nougat devices [duplicate]

In Android, I get an Image object from this camera tutorial: https://inducesmile.com/android/android-camera2-api-example-tutorial/. But I now want to loop through the pixel values. Does anyone know how I can do that? Do I need to convert it to something else first, and if so, how?
Thanks
If you want to loop through all the pixels, you first need to convert the Image to a Bitmap object. Since, from what I see in the source code, the reader returns an Image, you can directly convert its bytes to a Bitmap:
// This works when the ImageReader format is JPEG: plane 0 then holds the
// complete JPEG stream, which BitmapFactory can decode directly.
Image image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes);
Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
Then once you get the bitmap object, you can now iterate through all of the pixels.
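For example, iterating over the decoded Bitmap's pixels could look like this sketch (standard android.graphics APIs):

int width = bitmapImage.getWidth();
int height = bitmapImage.getHeight();
int[] pixels = new int[width * height];
// Bulk-copy all pixels, then unpack each packed ARGB int.
bitmapImage.getPixels(pixels, 0, width, 0, 0, width, height);
for (int p : pixels) {
    int r = Color.red(p);
    int g = Color.green(p);
    int b = Color.blue(p);
}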
YuvToRgbConverter is useful for conversion from Image to Bitmap.
https://github.com/android/camera-samples/blob/master/Camera2Basic/utils/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt
Usage sample:
val bmp = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
yuvToRgbConverter.yuvToRgb(image, bmp)
Actually you have two questions in one:
1) How do you loop through android.media.Image pixels?
2) How do you convert android.media.Image to a Bitmap?
The first is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and U+V components are in different planes. In many image-processing cases you need only the Y plane, that is, the gray part of the image. To get it I suggest code like this:
Image.Plane[] planes = image.getPlanes();
ByteBuffer yBuffer = planes[0].getBuffer();
// Size the array from the buffer itself: the row stride alone is only a
// single row, and rows may carry padding beyond the image width.
byte[] yImage = new byte[yBuffer.remaining()];
yBuffer.get(yImage);
The yImage byte array now holds the gray pixels of the frame (rows may include padding if the row stride is larger than the width).
In the same manner you can get the U+V parts too. Note that they can be U first and V after, or V first and U after, and they may be interleaved (which is the common case with the Camera2 API), so you get UVUV....
For debug purposes, I often write the frame to a file and try to open it with the Vooya app (Linux) to check the format.
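A minimal dump of the Y plane for that kind of inspection might look like this sketch (assuming a context and the yImage array from above):

// Write the raw luma bytes to app storage; open the file in a raw-YUV
// viewer with the frame's width/height to verify the layout.
File dump = new File(context.getFilesDir(), "frame_y.raw");
try (FileOutputStream fos = new FileOutputStream(dump)) {
    fos.write(yImage);
} catch (IOException e) {
    e.printStackTrace();
}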
The 2nd question is a little bit more complex.
To get a Bitmap object I found some example code in the TensorFlow project here. The most interesting function for you is convertImageToBitmap, which will return RGB values.
To convert them to a real Bitmap do the following:
Bitmap rgbFrameBitmap;
int[] cachedRgbBytes;
cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
rgbFrameBitmap.setPixels(cachedRgbBytes,0,image.getWidth(), 0, 0,image.getWidth(), image.getHeight());
Note: there are more options for converting YUV to RGB frames, so if you only need the pixel values, a Bitmap may not be the best choice, as it may consume more memory than you need just to get the RGB values.
Java Conversion Method
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
        .build();

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        // call toBitmap function
        Bitmap bitmap = toBitmap(image);
        image.close();
    }
});
private Bitmap bitmapBuffer;

private Bitmap toBitmap(@NonNull ImageProxy image) {
    if (bitmapBuffer == null) {
        bitmapBuffer = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    }
    bitmapBuffer.copyPixelsFromBuffer(image.getPlanes()[0].getBuffer());
    return bitmapBuffer;
}
https://docs.oracle.com/javase/1.5.0/docs/api/java/nio/ByteBuffer.html#get%28byte[]%29
According to the Java docs: the buffer.get method transfers bytes from this buffer into the given destination array. An invocation of this method of the form src.get(a) behaves in exactly the same way as the invocation
src.get(a, 0, a.length)
I assume you have a YUV (YUV_420_888) Image provided by the camera. Using the interesting tutorial "How to use YUV (YUV_420_888) Image in Android", I can propose the following solution to convert the Image to a Bitmap.
Use this to convert the YUV Image to a Bitmap:
private Bitmap yuv420ToBitmap(Image image, Context context) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    // Refer to the logic in the section below on how to convert a YUV_420_888
    // image to a single-channel flat 1D array. For the sake of this example
    // I'll abstract it as a method.
    byte[] yuvByteArray = image2byteArray(image);

    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuvByteArray.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(image.getWidth())
            .setY(image.getHeight());
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    // The allocations above "should" be cached if you are going to perform
    // repeated conversion of YUV_420_888 to Bitmap.
    in.copyFrom(yuvByteArray);
    script.setInput(in);
    script.forEach(out);

    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    out.copyTo(bitmap);
    return bitmap;
}
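A usage sketch, assuming the call site is an ImageReader callback (the names here are my own, not from the tutorial):

Image image = reader.acquireLatestImage();
if (image != null) {
    Bitmap bitmap = yuv420ToBitmap(image, context);
    // Always close the Image, or the reader stops delivering frames.
    image.close();
}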
and a supporting function to convert the three-plane YUV image to a one-dimensional byte array:
private byte[] image2byteArray(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }

    int width = image.getWidth();
    int height = image.getHeight();

    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];

    ByteBuffer yBuffer = yPlane.getBuffer();
    ByteBuffer uBuffer = uPlane.getBuffer();
    ByteBuffer vBuffer = vPlane.getBuffer();

    // Full size Y channel and quarter size U+V channels.
    int numPixels = (int) (width * height * 1.5f);
    byte[] nv21 = new byte[numPixels];
    int index = 0;

    // Copy Y channel.
    int yRowStride = yPlane.getRowStride();
    int yPixelStride = yPlane.getPixelStride();
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            nv21[index++] = yBuffer.get(y * yRowStride + x * yPixelStride);
        }
    }

    // Copy VU data; NV21 format is expected to have YYYYVU packaging.
    // The U/V planes are guaranteed to have the same row stride and pixel stride.
    int uvRowStride = uPlane.getRowStride();
    int uvPixelStride = uPlane.getPixelStride();
    int uvWidth = width / 2;
    int uvHeight = height / 2;
    for (int y = 0; y < uvHeight; ++y) {
        for (int x = 0; x < uvWidth; ++x) {
            int bufferIndex = (y * uvRowStride) + (x * uvPixelStride);
            // V channel.
            nv21[index++] = vBuffer.get(bufferIndex);
            // U channel.
            nv21[index++] = uBuffer.get(bufferIndex);
        }
    }
    return nv21;
}
Start with the ImageProxy from the analyzer:
@Override
public void analyze(@NonNull ImageProxy imageProxy) {
    Image mediaImage = imageProxy.getImage();
    if (mediaImage != null) {
        toBitmap(mediaImage);
    }
    imageProxy.close();
}
Then convert it to a bitmap:
private Bitmap toBitmap(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }

    byte[] nv21b = yuv420ThreePlanesToNV21BA(image.getPlanes(), image.getWidth(), image.getHeight());
    YuvImage yuvImage = new YuvImage(nv21b, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0,
                    yuvImage.getWidth(),
                    yuvImage.getHeight()),
            mQuality, baos);
    mFrameBuffer = baos;

    // Decode the JPEG bytes if you actually need the Bitmap back.
    byte[] imageBytes = baos.toByteArray();
    Bitmap bm = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    return bm;
}
Here's the static function that worked for me
public static byte[] yuv420ThreePlanesToNV21BA(Plane[] yuv420888planes, int width, int height) {
    int imageSize = width * height;
    byte[] out = new byte[imageSize + 2 * (imageSize / 4)];

    if (areUVPlanesNV21(yuv420888planes, width, height)) {
        // Copy the Y values.
        yuv420888planes[0].getBuffer().get(out, 0, imageSize);

        ByteBuffer uBuffer = yuv420888planes[1].getBuffer();
        ByteBuffer vBuffer = yuv420888planes[2].getBuffer();
        // Get the first V value from the V buffer, since the U buffer does not contain it.
        vBuffer.get(out, imageSize, 1);
        // Copy the first U value and the remaining VU values from the U buffer.
        uBuffer.get(out, imageSize + 1, 2 * imageSize / 4 - 1);
    } else {
        // Fall back to copying the UV values one by one, which is slower but also works.
        // Unpack Y.
        unpackPlane(yuv420888planes[0], width, height, out, 0, 1);
        // Unpack U.
        unpackPlane(yuv420888planes[1], width, height, out, imageSize + 1, 2);
        // Unpack V.
        unpackPlane(yuv420888planes[2], width, height, out, imageSize, 2);
    }
    return out;
}
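The two helpers this relies on are not shown above. The sketches below are adapted from Google's ML Kit quickstart BitmapUtils, where this NV21 routine originally appears; treat them as one possible implementation rather than the answerer's exact code:

// Checks whether the U/V buffers already interleave like NV21: the V buffer,
// advanced by one byte, should be byte-identical to the U buffer.
private static boolean areUVPlanesNV21(Plane[] planes, int width, int height) {
    int imageSize = width * height;
    ByteBuffer uBuffer = planes[1].getBuffer();
    ByteBuffer vBuffer = planes[2].getBuffer();
    // Back up buffer properties.
    int vBufferPosition = vBuffer.position();
    int uBufferLimit = uBuffer.limit();
    // Advance the V buffer by 1 byte, since the U buffer will not contain the first V value.
    vBuffer.position(vBufferPosition + 1);
    // Chop off the last byte of the U buffer, since the V buffer will not contain the last U value.
    uBuffer.limit(uBufferLimit - 1);
    // The planes are NV21-compatible if the overlapping regions are equal.
    boolean areNV21 = (vBuffer.remaining() == (2 * imageSize / 4 - 2)) && (vBuffer.compareTo(uBuffer) == 0);
    // Restore the buffers to their initial state.
    vBuffer.position(vBufferPosition);
    uBuffer.limit(uBufferLimit);
    return areNV21;
}

// Copies one plane into `out`, writing bytes `pixelStride` apart starting at
// `offset`, while honoring the plane's own row and pixel strides.
private static void unpackPlane(Plane plane, int width, int height, byte[] out, int offset, int pixelStride) {
    ByteBuffer buffer = plane.getBuffer();
    buffer.rewind();
    // Compute the size of the current plane, assuming it has the same aspect ratio as the full image.
    int numRow = (buffer.limit() + plane.getRowStride() - 1) / plane.getRowStride();
    if (numRow == 0) {
        return;
    }
    int scaleFactor = height / numRow;
    int numCol = width / scaleFactor;
    // Extract the data into the output buffer.
    int outputPos = offset;
    int rowStart = 0;
    for (int row = 0; row < numRow; row++) {
        int inputPos = rowStart;
        for (int col = 0; col < numCol; col++) {
            out[outputPos] = buffer.get(inputPos);
            outputPos += pixelStride;
            inputPos += plane.getPixelStride();
        }
        rowStart += plane.getRowStride();
    }
}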
bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.image);
1- Store the path to the image file as a string variable. To decode the content of an image file, you need the file path stored within your code as a string. Use the following syntax as a guide:
String picPath = "/mnt/sdcard/Pictures/mypic.jpg";
2- Create a Bitmap object using BitmapFactory:
Bitmap picBitmap = BitmapFactory.decodeFile(picPath);

Can not get camera output, face detection android

I am trying face detection and adding a mask (graphic overlay) using the Google Vision API. The problem is that I could not get the output from the camera after detecting and adding the mask. So far I have tried this solution from GitHub: https://github.com/googlesamples/android-vision/issues/24. Based on this issue I have added a custom detector class,
Mobile Vision API - concatenate new detector object to continue frame processing, and added this to my detector class: How to create Bitmap from grayscaled byte buffer image?
MyDetectorClass
class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    public SparseArray<Face> detect(Frame frame) {
        // *** add your custom frame processing code here
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        YuvImage yuvimage = new YuvImage(bytes, ImageFormat.NV21, w, h, null);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        yuvimage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // where 100 is the quality of the generated JPEG
        byte[] jpegArray = baos.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        Log.e("got bitmap", "bitmap val " + bitmap);
        return mDelegate.detect(frame);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
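For context, wiring such a wrapper into the Mobile Vision pipeline usually looks something like the sketch below (standard play-services-vision API; FaceTrackerFactory, the preview size, and the camera choice are placeholders of mine):

FaceDetector realDetector = new FaceDetector.Builder(context)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .build();
MyFaceDetector myFaceDetector = new MyFaceDetector(realDetector);
// FaceTrackerFactory is a hypothetical Tracker<Face> factory of your own.
myFaceDetector.setProcessor(new MultiProcessor.Builder<>(new FaceTrackerFactory()).build());
CameraSource cameraSource = new CameraSource.Builder(context, myFaceDetector)
        .setRequestedPreviewSize(640, 480)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .build();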
I am getting a rotated bitmap, without the mask (graphic overlay) I have added. How can I get the camera output with the mask?
Thanks in advance.
The simple answer is: you can't.
Why? The Android camera outputs frames in an NV21 ByteBuffer, and you must generate your masks based on the landmark points in a separate Bitmap, then join them.
Sorry, but that's how the Android Camera API works. Nothing can be done; you must do it manually.
Also, I wouldn't grab the camera preview, convert it to a YuvImage, and then to a Bitmap. That process consumes a lot of resources and makes the preview very slow. Instead I would use this method, which is a lot faster and rotates your preview internally so you don't lose time doing it:
outputFrame = new Frame.Builder()
        .setImageData(mPendingFrameData, mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.NV21)
        .setId(mPendingFrameId)
        .setTimestampMillis(mPendingTimeMillis)
        .setRotation(mRotation)
        .build();
mDetector.receiveFrame(outputFrame);
All the code can be found in CameraSource.java

Convert YUV Image into greyscale Image - Same Result as RGB to Grayscale?

I want to do some image processing on a YUV_420_888 image and need a grayscale version of it. From what I read about the YUV format, it should be enough to extract the Y plane of the image. In Android I'll try that with this workflow to convert the Y plane into a byte array:
Image.Plane Y = img.getPlanes()[0];
ByteBuffer byteBuffer = Y.getBuffer();
byte[] data = new byte[byteBuffer.remaining()];
byteBuffer.get(data);
Since I want to compare the image I get now with another grayscale image (or at least a result of the image processing), I have a question: is the grayscale image I get by extracting the Y plane nearly the same as an RGB image turned into grayscale? Or do I have to do some additional processing steps for that?
Yes, the data you get from the Y plane should be the same as if you went through an RGB image and converted it to grayscale.
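If it helps the comparison, the Y bytes can be expanded into a gray ARGB Bitmap; this sketch reuses the data array from the question and assumes a tightly packed plane (row stride equal to width) plus known width/height, which is not guaranteed on every device:

// Each luma byte becomes an opaque gray pixel (r = g = b = Y).
int[] pixels = new int[width * height];
for (int i = 0; i < pixels.length; i++) {
    int y = data[i] & 0xFF; // mask to treat the byte as unsigned
    pixels[i] = Color.rgb(y, y, y);
}
Bitmap grayBitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);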
No, I am using an IR sensor from which I am getting a YUV_420_888 image that is already grayscale. But to convert it to bytes I used the following function, which gave me an error. As per your answer, I took only the Y plane, and as a result it gave me a green screen.
ByteBuffer[] buffer = new ByteBuffer[1];
Image image = reader.acquireNextImage();
buffer[0] = image.getPlanes()[0].getBuffer().duplicate();
//buffer[1] = image.getPlanes()[1].getBuffer().duplicate();
int buffer0_size = buffer[0].remaining();
//int buffer1_size = buffer[1].remaining();
buffer[0].clear();
//buffer[1].clear();
byte[] buffer0_byte = new byte[buffer0_size];
//byte[] buffer1_byte = new byte[buffer1_size];
buffer[0].get(buffer0_byte, 0, buffer0_size);
//buffer[1].get(buffer1_byte, 0, buffer1_size);
byte[] byte2 = buffer0_byte;
//byte2=buffer0_byte;
//byte2[1]=buffer1_byte;
image.close();
mArrayImageBuffer.add(byte2);
After dequeuing, the bytes go to this function:
public static byte[] convertYUV420ToNV12(byte[] byteBuffers) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    try {
        outputStream.write(byteBuffers);
        //outputStream.write(byteBuffers[1]);
    } catch (IOException e) {
        e.printStackTrace();
    }
    // outputStream.write(buffer2_byte);
    byte[] rez = outputStream.toByteArray();
    return rez;
}
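One likely cause of the green screen, offered here as a guess rather than a confirmed fix: a Y-only buffer viewed as NV12 reads the missing chroma bytes as 0, which renders as green. Padding the chroma half of the buffer with the neutral value 128 renders proper gray:

public static byte[] yToNV12Gray(byte[] yBytes, int width, int height) {
    byte[] nv12 = new byte[width * height * 3 / 2];
    // Luma copied as-is, chroma forced to 128 (no color).
    System.arraycopy(yBytes, 0, nv12, 0, width * height);
    Arrays.fill(nv12, width * height, nv12.length, (byte) 128);
    return nv12;
}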

Finding RGB values of bitmap from camera2 API in Android

I am trying to obtain the RGB value of a pixel from the camera.
I keep getting null values.
Is there another way to capture the image from the camera into a bitmap? I've looked into several options, but most of them generated a NullPointerException.
It also outputs SkImageDecoder::Factory returned null.
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Image image = reader.acquireNextImage();
        // mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
        try {
            Image image = reader.acquireNextImage();
            final Image.Plane[] planes = image.getPlanes();
            final Buffer buffer = planes[0].getBuffer();
            Log.d("BUFFER", String.valueOf(buffer));
            int offset = 0;
            //
            byte[] bytes = new byte[buffer.remaining()];
            Log.d("BYTES", String.valueOf(bytes));
            //
            Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length); // NULL err
            Log.d("R1", "bitmap created");
            //
            int r1, g1, b1;
            int p = 50;
            r1 = (p >> 16) & 0xff;
            g1 = (p >> 8) & 0xff;
            b1 = p & 0xff;
            Log.d("R1", String.valueOf(r1));
            Log.d("G1", String.valueOf(g1));
            Log.d("B1", String.valueOf(b1));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
};
What format is your ImageReader using? If it's JPEG, then your approach should generally work, but you're not actually copying buffer into bytes anywhere.
You're creating the bytes array, and then passing that empty array into decodeByteArray. You need something like ByteBuffer.get to actually copy data into bytes.
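A minimal fix along those lines might look like this (same variables as the question's snippet, with the buffer declared as a ByteBuffer so get() is available):

ByteBuffer buffer = planes[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes); // actually copies the JPEG data into the array
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);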
If the ImageReader is YUV or RAW, then this won't work; those Images are raw arrays of image data with no headers, so BitmapFactory has no way to know what to do with them. You'd have to inspect the pixel values directly, since the contents aren't compressed or wrapped in any container format.

Android - Get byte[] from Surface / virtualDisplay

I am trying to screen-cast my Android device's screen to a web browser using the MediaProjection API and WebRTC.
The MediaProjection API renders its output to a Surface and returns a VirtualDisplay. I have done this much. I looked at the WebRTC library for Android; it is made to receive input only from the device camera. I am trying to read and modify the WebRTC code to stream whatever is shown on the Surface.
My question is: how can I regularly receive byte[] data from a Surface, like the Camera.PreviewCallback function? What other options do I have?
Here is how I solved my problem. I used the ImageReader class, like so:
imageReader = ImageReader.newInstance(displayWidth, displayHeight, PixelFormat.RGBA_8888, 2);
mediaProjection.createVirtualDisplay("screencapture",
        displayWidth, displayHeight, density,
        flags, imageReader.getSurface(), null, handler);
imageReader.setOnImageAvailableListener(new ImageAvailableListener(), null);
private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        Bitmap bitmap = null;
        ByteArrayOutputStream stream = null;
        try {
            image = imageReader.acquireLatestImage();
            if (image != null) {
                Image.Plane[] planes = image.getPlanes();
                ByteBuffer buffer = planes[0].getBuffer();
                int pixelStride = planes[0].getPixelStride();
                int rowStride = planes[0].getRowStride();
                int rowPadding = rowStride - pixelStride * displayWidth;

                // Create bitmap (the extra width absorbs the row padding).
                bitmap = Bitmap.createBitmap(displayWidth + rowPadding / pixelStride,
                        displayHeight, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(buffer);

                stream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 50, stream);
                StringBuilder sb = new StringBuilder();
                sb.append("data:image/jpeg;base64,"); // JPEG data, so the data URI should say jpeg, not png
                sb.append(StringUtils.newStringUtf8(Base64.encode(stream.toByteArray(), Base64.DEFAULT)));
                WebrtcClient.sendProjection(sb.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Release resources so the reader can keep delivering frames.
            if (image != null) {
                image.close();
            }
            if (bitmap != null) {
                bitmap.recycle();
            }
        }
    }
}
I am converting the byte[] to a Base64 string and sending it through the WebRTC data channel.
