I am currently working on a project that uses OpenCV in the background to detect faces while the app is playing videos.
I've managed to run OpenCV as a service, and I am using an ImageReader instance to capture the images:
private ImageReader mImageReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.YUV_420_888, 1);
What I am trying to do is get the detected face image and send it to the backend. The image from the ImageReader is converted to a Mat, so I have access to both the Mat type and the Image type.
I've managed to convert the acquired image to a YuvImage and then to JPEG (a byte array) by using the toYuvImage and toJpegImage methods from this link: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/#how-to-convert-yuv_420_888-image-to-jpeg-format
After converting the image to a byte array, I'm also converting it to Base64 to send it over HTTP. The problem is that when I set imageQuality to 100 in toJpegImage, the resulting Base64 image looks corrupted, but when I set the value to something lower like 15 or 10, the image output (resolution) is better but the quality is bad. I am not sure whether this problem is related to the resolution.
byte[] jpegDataTest = ImageUtil.toJpegImage(detectionImage,15);
String base64New = Base64.encodeToString(jpegDataTest, Base64.DEFAULT);
PS: I am converting the image each time a face is detected, inside a for loop:
for (Rect rect : faceDetections.toArray()) { }
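For reference, this is roughly how I plan to crop and encode each detected face (a sketch, not my working code; frameMat stands for the Mat converted from the ImageReader image, and it uses OpenCV's Imgcodecs plus android.util.Base64):
private String encodeFaceRegion(Mat frameMat, Rect faceRect) {
    Mat face = new Mat(frameMat, faceRect);          // crop the detected face region
    MatOfByte jpegBuf = new MatOfByte();
    MatOfInt params = new MatOfInt(Imgcodecs.IMWRITE_JPEG_QUALITY, 90);
    Imgcodecs.imencode(".jpg", face, jpegBuf, params);
    // NO_WRAP avoids the line breaks that Base64.DEFAULT inserts.
    return Base64.encodeToString(jpegBuf.toArray(), Base64.NO_WRAP);
}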
Compress quality set to 100: https://i.postimg.cc/YqSmFxrT/quality100.jpg
Compress quality set to 15
public static byte[] toJpegImage(Image image, int imageQuality) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }
    YuvImage yuvImage = toYuvImage(image);
    int width = image.getWidth();
    int height = image.getHeight();

    // Convert to jpeg
    byte[] jpegImage = null;
    try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), imageQuality, out);
        jpegImage = out.toByteArray();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return jpegImage;
}
private static byte[] YUV_420_888toNV21(Image image) {
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int vuSize = vuBuffer.remaining();

    nv21 = new byte[ySize + vuSize];
    yBuffer.get(nv21, 0, ySize);
    vuBuffer.get(nv21, ySize, vuSize);
    return nv21;
}
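The toYuvImage helper referenced above isn't shown here; based on the linked blog post it essentially wraps the NV21 bytes in a YuvImage, roughly like this:
private static YuvImage toYuvImage(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }
    // Wrap the NV21 bytes so they can later be compressed with compressToJpeg().
    byte[] nv21 = YUV_420_888toNV21(image);
    return new YuvImage(nv21, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
}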
In Android, I get an Image object from this camera tutorial: https://inducesmile.com/android/android-camera2-api-example-tutorial/. But now I want to loop through the pixel values. Does anyone know how I can do that? Do I need to convert it to something else first, and if so, how?
Thanks
If you want to loop through all of the pixels, you first need to convert it to a Bitmap object. Since the source code you linked returns an Image, you can convert its bytes directly to a Bitmap.
Image image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes);
Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
Once you have the Bitmap object, you can iterate through all of its pixels.
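For example, something like this (getPixel returns an ARGB int):
// Iterate over every pixel of the decoded bitmap and read its RGB channels.
for (int y = 0; y < bitmapImage.getHeight(); y++) {
    for (int x = 0; x < bitmapImage.getWidth(); x++) {
        int pixel = bitmapImage.getPixel(x, y);
        int r = (pixel >> 16) & 0xFF;
        int g = (pixel >> 8) & 0xFF;
        int b = pixel & 0xFF;
        // ... process r, g, b ...
    }
}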
YuvToRgbConverter is useful for conversion from Image to Bitmap.
https://github.com/android/camera-samples/blob/master/Camera2Basic/utils/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt
Usage sample.
val bmp = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
yuvToRgbConverter.yuvToRgb(image, bmp)
Actually, you have two questions in one:
1) How do you loop through android.media.Image pixels?
2) How do you convert android.media.Image to a Bitmap?
The first is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and U+V components are in different planes. In many image-processing cases you only need the Y plane, that is, the gray part of the image. To get it I suggest code like this:
Image.Plane[] planes = image.getPlanes();
int yRowStride = planes[0].getRowStride();
ByteBuffer yBuffer = planes[0].getBuffer();
byte[] yImage = new byte[yBuffer.remaining()];   // the full Y plane, not just one row
yBuffer.get(yImage);
The yImage byte array now holds the gray pixels of the frame, one byte per pixel, laid out row by row with yRowStride bytes per row (rows may be padded beyond the image width).
In the same manner you can get the U+V parts too. Note that they can be U first and V after, or V first and then U, and they may be interleaved (that is the common case with the Camera2 API), so you get UVUV....
For debug purposes, I often write the frame to a file and try to open it with the Vooya app (Linux) to check the format.
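To pick out a single gray value at (x, y) from yImage, take the row stride into account, roughly like this:
// Read the gray value of pixel (x, y) from the Y plane; rows may be padded,
// so index with the row stride (the Y plane's pixel stride is 1).
int gray = yImage[y * yRowStride + x] & 0xFF;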
The second question is a little bit more complex.
To get a Bitmap object, I found a code example in the TensorFlow project here. The most interesting function for you is "convertImageToBitmap", which will return RGB values.
To convert them to a real Bitmap, do the following:
Bitmap rgbFrameBitmap;
int[] cachedRgbBytes;
cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
rgbFrameBitmap.setPixels(cachedRgbBytes,0,image.getWidth(), 0, 0,image.getWidth(), image.getHeight());
Note: there are more options for converting YUV to RGB frames, so if you only need the pixel values, a Bitmap may not be the best choice, as it may consume more memory than you need just to get the RGB values.
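For example, if you only need a few pixel values, you can do the conversion yourself per pixel; a sketch using the common video-range BT.601 integer approximation (the exact coefficients depend on the source's color range):
// Convert one YUV pixel to a packed ARGB int without allocating a Bitmap.
private static int yuvToArgb(int y, int u, int v) {
    int c = y - 16, d = u - 128, e = v - 128;
    int r = clamp((298 * c + 409 * e + 128) >> 8);
    int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
    int b = clamp((298 * c + 516 * d + 128) >> 8);
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

private static int clamp(int x) {
    return x < 0 ? 0 : (x > 255 ? 255 : x);
}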
Java Conversion Method
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
.build();
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        // call toBitmap function
        Bitmap bitmap = toBitmap(image);
        image.close();
    }
});
private Bitmap bitmapBuffer;

private Bitmap toBitmap(@NonNull ImageProxy image) {
    if (bitmapBuffer == null) {
        bitmapBuffer = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    }
    bitmapBuffer.copyPixelsFromBuffer(image.getPlanes()[0].getBuffer());
    return bitmapBuffer;
}
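One caveat: with OUTPUT_IMAGE_FORMAT_RGBA_8888 the single plane can contain row padding on some devices, so copying straight into a width-sized bitmap may skew the image. A sketch that accounts for the stride (names are illustrative, not from the original answer):
private Bitmap toBitmapWithPadding(@NonNull ImageProxy image) {
    ImageProxy.PlaneProxy plane = image.getPlanes()[0];
    int pixelStride = plane.getPixelStride();      // 4 bytes per RGBA pixel
    int rowStride = plane.getRowStride();
    int rowPadding = rowStride - pixelStride * image.getWidth();
    // Create a bitmap wide enough to hold the padded rows, then crop the padding away.
    Bitmap padded = Bitmap.createBitmap(
            image.getWidth() + rowPadding / pixelStride,
            image.getHeight(), Bitmap.Config.ARGB_8888);
    padded.copyPixelsFromBuffer(plane.getBuffer());
    return Bitmap.createBitmap(padded, 0, 0, image.getWidth(), image.getHeight());
}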
https://docs.oracle.com/javase/1.5.0/docs/api/java/nio/ByteBuffer.html#get%28byte[]%29
According to the Java docs, the buffer.get method transfers bytes from this buffer into the given destination array. An invocation of this method of the form src.get(a) behaves in exactly the same way as the invocation
src.get(a, 0, a.length)
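In other words, something like this reads a whole plane into an array (image being the android.media.Image in question):
// Copy everything between the buffer's position and limit into a byte array.
ByteBuffer src = image.getPlanes()[0].getBuffer();
byte[] a = new byte[src.remaining()];
src.get(a);              // equivalent to src.get(a, 0, a.length)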
I assume you have a YUV (YUV_420_888) Image provided by the camera. Using the interesting "How to use YUV (YUV_420_888) Image in Android" tutorial, I can propose the following solution to convert the Image to a Bitmap.
Use this to convert the YUV Image to a Bitmap:
private Bitmap yuv420ToBitmap(Image image, Context context) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    // Refer to the logic in the section below on how to convert a YUV_420_888 image
    // to a single-channel flat 1D array. For the sake of this example I'll abstract it
    // as a method.
    byte[] yuvByteArray = image2byteArray(image);

    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuvByteArray.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(image.getWidth())
            .setY(image.getHeight());
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    // The allocations above "should" be cached if you are going to perform
    // repeated conversions of YUV_420_888 to Bitmap.
    in.copyFrom(yuvByteArray);
    script.setInput(in);
    script.forEach(out);

    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    out.copyTo(bitmap);
    return bitmap;
}
and a supporting function to convert the 3-plane YUV image into a one-dimensional byte array:
private byte[] image2byteArray(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }

    int width = image.getWidth();
    int height = image.getHeight();

    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];

    ByteBuffer yBuffer = yPlane.getBuffer();
    ByteBuffer uBuffer = uPlane.getBuffer();
    ByteBuffer vBuffer = vPlane.getBuffer();

    // Full size Y channel and quarter size U+V channels.
    int numPixels = (int) (width * height * 1.5f);
    byte[] nv21 = new byte[numPixels];
    int index = 0;

    // Copy the Y channel.
    int yRowStride = yPlane.getRowStride();
    int yPixelStride = yPlane.getPixelStride();
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            nv21[index++] = yBuffer.get(y * yRowStride + x * yPixelStride);
        }
    }

    // Copy the VU data; the NV21 format is expected to have YYYY...VU packaging.
    // The U/V planes are guaranteed to have the same row stride and pixel stride.
    int uvRowStride = uPlane.getRowStride();
    int uvPixelStride = uPlane.getPixelStride();
    int uvWidth = width / 2;
    int uvHeight = height / 2;
    for (int y = 0; y < uvHeight; ++y) {
        for (int x = 0; x < uvWidth; ++x) {
            int bufferIndex = (y * uvRowStride) + (x * uvPixelStride);
            // V channel.
            nv21[index++] = vBuffer.get(bufferIndex);
            // U channel.
            nv21[index++] = uBuffer.get(bufferIndex);
        }
    }
    return nv21;
}
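Usage would look roughly like this, for example from an OnImageAvailableListener (context here is any valid Context):
// Assumed usage: grab the latest frame, convert it, and release the Image.
Image image = reader.acquireLatestImage();
if (image != null) {
    Bitmap bitmap = yuv420ToBitmap(image, context);
    // ... use the bitmap ...
    image.close();
}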
Start with the ImageProxy from the analyzer:
@Override
public void analyze(@NonNull ImageProxy imageProxy) {
    Image mediaImage = imageProxy.getImage();
    if (mediaImage != null) {
        toBitmap(mediaImage);
    }
    imageProxy.close();
}
Then convert to a bitmap
private Bitmap toBitmap(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }

    byte[] nv21b = yuv420ThreePlanesToNV21BA(image.getPlanes(), image.getWidth(), image.getHeight());

    YuvImage yuvImage = new YuvImage(nv21b, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), mQuality, baos);
    mFrameBuffer = baos;
    //byte[] imageBytes = baos.toByteArray();
    //Bitmap bm = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    return null;
}
Here's the static function that worked for me
public static byte[] yuv420ThreePlanesToNV21BA(Plane[] yuv420888planes, int width, int height) {
    int imageSize = width * height;
    byte[] out = new byte[imageSize + 2 * (imageSize / 4)];

    if (areUVPlanesNV21(yuv420888planes, width, height)) {
        // Copy the Y values.
        yuv420888planes[0].getBuffer().get(out, 0, imageSize);

        ByteBuffer uBuffer = yuv420888planes[1].getBuffer();
        ByteBuffer vBuffer = yuv420888planes[2].getBuffer();
        // Get the first V value from the V buffer, since the U buffer does not contain it.
        vBuffer.get(out, imageSize, 1);
        // Copy the first U value and the remaining VU values from the U buffer.
        uBuffer.get(out, imageSize + 1, 2 * imageSize / 4 - 1);
    } else {
        // Fallback to copying the UV values one by one, which is slower but also works.
        // Unpack Y.
        unpackPlane(yuv420888planes[0], width, height, out, 0, 1);
        // Unpack U.
        unpackPlane(yuv420888planes[1], width, height, out, imageSize + 1, 2);
        // Unpack V.
        unpackPlane(yuv420888planes[2], width, height, out, imageSize, 2);
    }
    return out;
}
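The helpers areUVPlanesNV21 (which checks whether the U and V buffers are already laid out in interleaved NV21 order) and unpackPlane are not shown above. Here is a sketch of unpackPlane consistent with how it is called: it copies one plane into out, starting at offset and writing every pixelStride-th byte, so the chroma planes can be interleaved.
private static void unpackPlane(Plane plane, int width, int height,
                                byte[] out, int offset, int pixelStride) {
    ByteBuffer buffer = plane.getBuffer();
    buffer.rewind();

    // Compute the size of the current plane; chroma planes are subsampled 2x2.
    int numRows = (buffer.limit() + plane.getRowStride() - 1) / plane.getRowStride();
    if (numRows == 0) {
        return;
    }
    int scaleFactor = height / numRows;
    int numCols = width / scaleFactor;

    // Extract the plane's data into the output buffer.
    int outputPos = offset;
    int rowStart = 0;
    for (int row = 0; row < numRows; row++) {
        int inputPos = rowStart;
        for (int col = 0; col < numCols; col++) {
            out[outputPos] = buffer.get(inputPos);
            outputPos += pixelStride;
            inputPos += plane.getPixelStride();
        }
        rowStart += plane.getRowStride();
    }
}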
bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.image);
1-Store the path to the image file as a string variable. To decode the content of an image file, you need the file path stored within your code as a string. Use the following syntax as a guide:
String picPath = "/mnt/sdcard/Pictures/mypic.jpg";
2-Create a Bitmap Object And Use BitmapFactory:
Bitmap picBitmap = BitmapFactory.decodeFile(picPath);
I want to do some image processing on a YUV_420_888 image and need a grayscale version of it. From what I've read about YUV images, it should be enough to extract the Y plane. In Android, I try that with this workflow to convert the Y plane into a byte array:
Image.Plane Y = img.getPlanes()[0];
ByteBuffer byteBuffer = Y.getBuffer();
byte[] data = new byte[byteBuffer.remaining()];
byteBuffer.get(data);
Since I want to compare the image I get this way with another grayscale image (or at least with the result of the image processing), my question is: is the grayscale image I get by extracting the Y plane nearly the same as an RGB image converted to grayscale? Or do I have to do some additional processing steps for that?
Yes, the data you get from the Y plane should be essentially the same as if you converted an RGB image to grayscale: Y is the luma component, a weighted sum of the color channels (roughly 0.299 R + 0.587 G + 0.114 B for BT.601).
No, I am using an IR sensor from which I am getting a YUV_420_888 image that is already grayscale. But to convert it to bytes I used the following function, which gave me an error. As per your answer, I took only the Y plane, and the result was a green screen.
ByteBuffer[] buffer = new ByteBuffer[1];
Image image = reader.acquireNextImage();
buffer[0] = image.getPlanes()[0].getBuffer().duplicate();
//buffer[1] = image.getPlanes()[1].getBuffer().duplicate();
int buffer0_size = buffer[0].remaining();
//int buffer1_size = buffer[1].remaining();
buffer[0].clear();
//buffer[1].clear();
byte[] buffer0_byte = new byte[buffer0_size];
//byte[] buffer1_byte = new byte[buffer1_size];
buffer[0].get(buffer0_byte, 0, buffer0_size);
//buffer[1].get(buffer1_byte, 0, buffer1_size);
byte[] byte2 = buffer0_byte;
//byte2=buffer0_byte;
//byte2[1]=buffer1_byte;
image.close();
mArrayImageBuffer.add(byte2);
After dequeuing, the bytes go to this function:
public static byte[] convertYUV420ToNV12(byte[] byteBuffers) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    try {
        outputStream.write(byteBuffers);
        //outputStream.write(byteBuffers[1]);
    } catch (IOException e) {
        e.printStackTrace();
    }
    // outputStream.write(buffer2_byte);
    byte[] rez = outputStream.toByteArray();
    return rez;
}
I'm trying to create a program that processes images directly from the Camera2 preview, and I keep running into a problem when it comes to actually processing the incoming images.
In my OnImageAvailableListener.onImageAvailable() callback, I'm getting an ImageReader object, from which I call acquireNextImage() and pass that Image object into my helper function. From there, I convert it into a byte array and attempt to do the processing. But every time I get to the part where I convert it to a Bitmap, BitmapFactory.decodeByteArray returns null, even though the byte array is a properly formatted JPEG.
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader imageReader) {
        Image image = imageReader.acquireNextImage();
        ProcessBarcode(image);
        image.close();
    }
};
private void ProcessBarcode(Image image) {
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    int bufferSize = buffer.remaining();
    byte[] bytes = new byte[bufferSize];
    buffer.get(bytes);

    FileOutputStream output = null;
    try {
        output = new FileOutputStream(mFile);
        output.write(bytes);
        output.close();
    } catch (IOException e) {
        // Do something clever
    }

    // This call FAILS
    //Bitmap b = BitmapFactory.decodeByteArray(bytes, 0, buffer.remaining(), o);

    // But this call WORKS?
    Bitmap b = BitmapFactory.decodeFile(mFile.getAbsolutePath());

    detector = new BarcodeDetector.Builder(getActivity())
            .setBarcodeFormats(Barcode.EAN_13 | Barcode.ISBN)
            .build();

    if (b != null) {
        Frame isbnFrame = new Frame.Builder().setBitmap(b).build();
        SparseArray<Barcode> barcodes = detector.detect(isbnFrame);
        if (barcodes.size() != 0) {
            Log.d("Barcode decoded: ", barcodes.valueAt(0).displayValue);
        } else {
            Log.d(TAG, "No barcode detected");
        }
    } else {
        Log.d(TAG, "No bitmap detected");
    }
}
The ImageReader is set up like:
mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(), ImageFormat.JPEG, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
Basically, what I see is that if I use the byte array directly without first saving it to internal memory, the camera preview is fast and snappy, although the Bitmap is always null, so I'm not actually performing any processing. If I first save the byte array to memory, then I get maybe 2-3 fps, but the processing works as I intend.
After the call to buffer.get(bytes), buffer.remaining() will return 0, since you just read through the whole ByteBuffer. remaining() tells you how many bytes are left between your current position and the limit of the buffer, and calls to get() move the position forward by the number of bytes read.
Therefore, when you do decodeByteArray(bytes, 0, buffer.remaining(), o), you don't actually decode any bytes.
If you try decodeByteArray(bytes, 0, bytes.length, o), does it work?
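That is, something along these lines (a sketch based on the ProcessBarcode code above):
// Read the JPEG bytes, then decode using the array's own length instead of
// the (now exhausted) buffer's remaining() count.
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
Bitmap b = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);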
I am trying to screen-cast my Android device's screen to a web browser using the projection API and WebRTC.
The projection API renders its output to a Surface and returns a VirtualDisplay. I have gotten this far. I looked at the WebRTC library for Android; it is built to receive input only from the device camera. I am trying to read and modify the WebRTC code to stream whatever is shown on the Surface.
My question is: how can I receive byte[] data from a Surface regularly, like the Camera.PreviewCallback function? What other options do I have?
Here is how I solved my problem. I used the ImageReader class like this:
imageReader = ImageReader.newInstance(displayWidth, displayHeight, PixelFormat.RGBA_8888, 2);
mediaProjection.createVirtualDisplay("screencapture",
displayWidth, displayHeight, density,
flags, imageReader.getSurface(), null, handler);
imageReader.setOnImageAvailableListener(new ImageAvailableListener(), null);
private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        Bitmap bitmap = null;
        ByteArrayOutputStream stream = null;
        try {
            image = imageReader.acquireLatestImage();
            if (image != null) {
                Image.Plane[] planes = image.getPlanes();
                ByteBuffer buffer = planes[0].getBuffer();
                int pixelStride = planes[0].getPixelStride();
                int rowStride = planes[0].getRowStride();
                int rowPadding = rowStride - pixelStride * displayWidth;

                // create bitmap
                bitmap = Bitmap.createBitmap(displayWidth + rowPadding / pixelStride,
                        displayHeight, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(buffer);

                stream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 50, stream);
                StringBuilder sb = new StringBuilder();
                sb.append("data:image/jpeg;base64,"); // MIME type matching the JPEG compression above
                sb.append(StringUtils.newStringUtf8(Base64.encode(stream.toByteArray(), Base64.DEFAULT)));
                WebrtcClient.sendProjection(sb.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Release resources so the ImageReader can deliver the next frame.
            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (bitmap != null) {
                bitmap.recycle();
            }
            if (image != null) {
                image.close();
            }
        }
    }
}
I am converting the byte[] to a Base64 string and sending it through a WebRTC data channel.