I am trying to convert an image from YUV_420_888 to RGB, and I have some trouble with the output image. In my ImageReader I receive the image in YUV_420_888 format (using the Camera2 API to get this preview image).
imageReader = ImageReader.newInstance(1920,1080,ImageFormat.YUV_420_888,10);
The Android SDK documentation for the YuvImage class states that YuvImage supports only NV21 and YUY2.
Since the difference between NV21 and YUV420 is not big, I tried to convert the data to NV21.
YUV420 plane layout (diagram):
and NV21 (diagram):
In onImageAvailable I get each plane separately and put them in the correct place (as in the diagram):
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
ByteBuffer bufferY = image.getPlanes()[0].getBuffer();
byte[] data0 = new byte[bufferY.remaining()];
bufferY.get(data0);
ByteBuffer bufferU = image.getPlanes()[1].getBuffer();
byte[] data1 = new byte[bufferU.remaining()];
bufferU.get(data1);
ByteBuffer bufferV = image.getPlanes()[2].getBuffer();
byte[] data2 = new byte[bufferV.remaining()];
bufferV.get(data2);
...
outputStream.write(data0);
for (int i=0;i<bufferV.remaining();i++) {
outputStream.write(data1[i]);
outputStream.write(data2[i]);
}
After that I create a YuvImage, convert it to a Bitmap, view it and save it:
final YuvImage yuvImage = new YuvImage(outputStream.toByteArray(), ImageFormat.NV21, 1920,1080, null);
ByteArrayOutputStream outBitmap = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0,1920, 1080), 95, outBitmap);
byte[] imageBytes = outBitmap.toByteArray();
final Bitmap imageBitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
mImageView.setImageBitmap(imageBitmap);
...
imageBitmap.compress(Bitmap.CompressFormat.JPEG, 95, out);
But my saved image is green and pink:
What did I miss?
I have implemented the YUV_420 logic (exactly as shown in the above diagram) in RenderScript, see full code here:
Conversion YUV_420_888 to Bitmap, complete code
It produces perfect bitmaps for API 22, but on API 21 it shows the "green idyll". So I can confirm the results you found. As already mentioned by Silvaren above, the reason seems to be an Android bug in API 21. Looking at my RenderScript code it is clear that if the U and V information is missing (i.e. zero), the G(reen) ARGB component becomes huge during the conversion.
I see similar green pictures on my Galaxy S5 (still API 21), here even upside down ;-). I suspect that most API 21 devices do not yet use Camera2 for their built-in camera apps. There is a free app called "Manual Camera Compatibility" that lets you test this, and it shows that the S5 on API 21 indeed does not use Camera2 yet... fortunately.
There are two main problems with your conversion attempt:
We cannot assume that the U and V planes are isolated; they might contain interleaved data (e.g. U plane = {U1, V1, U2, V2, ...}). In fact, it might even be an NV21-style interleaving already. The key is to look at each plane's row stride and pixel stride and to check what we can assume about the YUV_420_888 format (a stride-aware sketch follows the corrected loop below).
The fact that you've commented that most of the U and V plane data is zeros indicates that you are hitting an Android bug in the generation of YUV_420_888 images. This means that even with a correct conversion, the image would still look green if you are affected by the bug, which was only fixed in Android 5.1.1, so it is worth checking which version you are running besides fixing the code.
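For instance, a minimal runtime guard (a sketch; the 5.1.1 fix corresponds to API 22, and TAG is an assumed constant):
if (android.os.Build.VERSION.SDK_INT < android.os.Build.VERSION_CODES.LOLLIPOP_MR1) {
    // Pre-5.1.1: YUV_420_888 chroma planes may come back as zeros (the "green image" bug)
    Log.w(TAG, "API < 22: YUV_420_888 chroma data may be affected by the zero-planes bug");
}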
bufferV.get(data2) advances the position of the ByteBuffer, so bufferV.remaining() is 0 afterwards. That's why the loop for (int i=0;i<bufferV.remaining();i++) runs for 0 iterations. You can easily rewrite it as
for (int i = 0; i < data1.length; i++) {
    outputStream.write(data1[i]);
    outputStream.write(data2[i]);
}
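As a rough illustration of the stride-aware approach from point 1 above, here is a minimal sketch of a YUV_420_888-to-NV21 copy that honors each plane's row stride and pixel stride (the method name is mine; it assumes a valid Image with even width and height):
private static byte[] yuv420888ToNv21(Image image) {
    int w = image.getWidth(), h = image.getHeight();
    byte[] nv21 = new byte[w * h * 3 / 2];
    // Y plane: copy row by row, skipping any padding at the end of each row
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuf = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    int pos = 0;
    for (int row = 0; row < h; row++) {
        yBuf.position(row * yRowStride);
        yBuf.get(nv21, pos, w);
        pos += w;
    }
    // Chroma planes: read V and U sample by sample via their strides and
    // write the interleaved V,U byte pairs that NV21 expects
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    int uRowStride = uPlane.getRowStride(), uPixStride = uPlane.getPixelStride();
    int vRowStride = vPlane.getRowStride(), vPixStride = vPlane.getPixelStride();
    for (int row = 0; row < h / 2; row++) {
        for (int col = 0; col < w / 2; col++) {
            nv21[pos++] = vBuf.get(row * vRowStride + col * vPixStride);
            nv21[pos++] = uBuf.get(row * uRowStride + col * uPixStride);
        }
    }
    return nv21;
}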
I got an image in ImageFormat.YUV_420_888, successfully saved it to a JPEG file, and could view it correctly on Windows. I am sharing the code here:
private final Image mImage;
private final File mFile;
private final int mImageFormat;
ByteArrayOutputStream outputbytes = new ByteArrayOutputStream();
ByteBuffer bufferY = mImage.getPlanes()[0].getBuffer();
byte[] data0 = new byte[bufferY.remaining()];
bufferY.get(data0);
ByteBuffer bufferU = mImage.getPlanes()[1].getBuffer();
byte[] data1 = new byte[bufferU.remaining()];
bufferU.get(data1);
ByteBuffer bufferV = mImage.getPlanes()[2].getBuffer();
byte[] data2 = new byte[bufferV.remaining()];
bufferV.get(data2);
try
{
    // NV21 stores the full Y plane followed by interleaved V,U samples, so the
    // V bytes (plane 2) are written before the U bytes (plane 1). Note that this
    // simple concatenation only yields valid NV21 when the chroma planes are
    // already interleaved in memory (pixel stride 2), which is common but not
    // guaranteed by the YUV_420_888 contract.
    outputbytes.write(data0);
    outputbytes.write(data2);
    outputbytes.write(data1);
    final YuvImage yuvImage = new YuvImage(outputbytes.toByteArray(), ImageFormat.NV21, mImage.getWidth(), mImage.getHeight(), null);
    ByteArrayOutputStream outBitmap = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, mImage.getWidth(), mImage.getHeight()), 95, outBitmap);
    FileOutputStream outputfile = new FileOutputStream(mFile);
    outputfile.write(outBitmap.toByteArray());
}
catch (IOException e)
{
    e.printStackTrace();
}
finally
{
    mImage.close();
}
Related
I am currently working on a project which uses OpenCV in background mode to detect faces while the app is playing videos.
I've managed to run OpenCV as a service, and I am using an ImageReader instance to capture the images:
private ImageReader mImageReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.YUV_420_888, 1);
What I am trying to do is get the detected face image and send it to the backend. The image from the ImageReader is converted to a Mat, so I have access to both the Mat type and the Image type.
I've managed to convert the acquired image to a YuvImage and then to JPEG (a byte array) by using the toYuvImage and toJpegImage methods from this link: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/#how-to-convert-yuv_420_888-image-to-jpeg-format
After converting the image to a byte array, I also convert it to Base64 to send it over HTTP. The problem is that when I set imageQuality to 100 in toJpegImage, the resulting Base64 image looks corrupted, but when I use a lower value like 15 or 10, the output looks structurally fine while the quality is bad. I am not sure whether this problem is related to the resolution.
byte[] jpegDataTest = ImageUtil.toJpegImage(detectionImage,15);
String base64New = Base64.encodeToString(jpegDataTest, Base64.DEFAULT);
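One thing worth checking (an assumption on my part, not a confirmed diagnosis): Base64.DEFAULT inserts line breaks into the encoded output, which some HTTP/JSON backends mishandle, whereas Base64.NO_WRAP produces a single-line string:
// Hypothetical variant: NO_WRAP omits the line terminators that DEFAULT inserts
String base64New = Base64.encodeToString(jpegDataTest, Base64.NO_WRAP);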
PS: I am converting the image each time a face is detected, inside a for loop:
for(Rect rect : faceDetections.toArray()){}
Compress quality set to 100: https://i.postimg.cc/YqSmFxrT/quality100.jpg
Compress quality set to 15:
public static byte[] toJpegImage(Image image, int imageQuality) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }
    YuvImage yuvImage = toYuvImage(image);
    int width = image.getWidth();
    int height = image.getHeight();
    // Convert to jpeg
    byte[] jpegImage = null;
    try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), imageQuality, out);
        jpegImage = out.toByteArray();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return jpegImage;
}
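Note that toJpegImage relies on a toYuvImage helper that is not reproduced here; judging from the linked post, it presumably just wraps the NV21 bytes produced by the YUV_420_888toNV21 method below, roughly:
// Sketch of the referenced helper: wrap the NV21 bytes in a YuvImage
private static YuvImage toYuvImage(Image image) {
    byte[] nv21 = YUV_420_888toNV21(image);
    return new YuvImage(nv21, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
}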
private static byte[] YUV_420_888toNV21(Image image) {
    // Note: this shortcut assumes the chroma planes are interleaved in memory
    // (pixel stride 2, with the V plane starting one byte before the U plane),
    // which is common but not guaranteed by the YUV_420_888 contract.
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer();
    int ySize = yBuffer.remaining();
    int vuSize = vuBuffer.remaining();
    nv21 = new byte[ySize + vuSize];
    yBuffer.get(nv21, 0, ySize);
    vuBuffer.get(nv21, ySize, vuSize);
    return nv21;
}
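For completeness, a hypothetical usage sketch inside an OnImageAvailableListener (reader is the ImageReader; the Image must be closed promptly because the reader has a fixed buffer count):
// Usage sketch: convert the latest frame, then release it back to the reader
Image image = reader.acquireLatestImage();
if (image != null) {
    byte[] jpeg = toJpegImage(image, 85);
    image.close(); // release the buffer before sending the JPEG
    // ... send jpeg to the backend
}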
I'm using the Camera2 API to do a still image capture and save it to a JPEG file. The problem is that the size of the file is always >900kb, even if I set the image dimensions to the smallest available and set the JPEG quality low.
This is how I'm saving the file in the ImageAvailableListener. It's a Xamarin project, so the code is in C#.
image = reader.AcquireLatestImage();
ByteBuffer buffer = image.GetPlanes()[0].Buffer;
byte[] bytes = new byte[buffer.Remaining()];
buffer.Get(bytes);
output = new FileOutputStream(File);
output.Write(bytes);
output.Close();
The file should be ~20kb, so why can't I get file sizes lower than 900kb?
You can also reduce the capture image quality. Note that CaptureRequest.JPEG_THUMBNAIL_QUALITY (as in the original snippet) only affects the embedded EXIF thumbnail; the quality of the final image is controlled by CaptureRequest.JPEG_QUALITY:
mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.set(CaptureRequest.JPEG_QUALITY, (byte) 70); // set your own quality (1-100)
I figured it out. Needed to create a bitmap to apply compression:
image = reader.AcquireLatestImage();
ByteBuffer buffer = image.GetPlanes()[0].Buffer;
byte[] bytes = new byte[buffer.Remaining()];
buffer.Get(bytes);
// need to get the bitmap in order to compress
Bitmap bitmap = BitmapFactory.DecodeByteArray(bytes, 0, bytes.Length);
using (System.IO.MemoryStream stream = new System.IO.MemoryStream())
{
    bitmap.Compress(Bitmap.CompressFormat.Jpeg, 85, stream);
    Save(stream.GetBuffer());
}
I want to do some image processing on a YUV_420_888 image and need a greyscale version of it. From what I read about the YUV format, it should be enough to extract the Y plane of the image. In Android I try that with this workflow to convert the Y plane into a byte array:
Image.Plane Y = img.getPlanes()[0];
ByteBuffer byteBuffer = Y.getBuffer();
byte[] data = new byte[byteBuffer.remaining()];
byteBuffer.get(data);
Since I want to compare the image I get this way with another grayscale image (or at least a result of the image processing), my question is: is the grayscale image obtained by extracting the Y plane essentially the same as an RGB image converted to grayscale, or do I have to do some additional processing steps?
Yes, the data you get from the Y plane should be the same as what you would get by converting an RGB image to grayscale.
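The Y plane is the luma channel, i.e. essentially the same weighted sum of R, G and B (per BT.601, Y ≈ 0.299R + 0.587G + 0.114B) that typical RGB-to-grayscale conversions compute. As a sketch, the data array from the snippet above could be rendered as a grayscale Bitmap like this (assuming the row stride equals the width; otherwise copy row by row):
// Sketch: expand Y-plane bytes into an ARGB_8888 grayscale bitmap
int[] pixels = new int[width * height];
for (int i = 0; i < width * height; i++) {
    int y = data[i] & 0xFF; // luma byte, treated as unsigned
    pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y;
}
Bitmap gray = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);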
No, I am using an IR sensor from which I am getting a YUV_420_888 image that is already grayscale. But to convert it to bytes I used the following code, which gave me an error. As per your answer, I took only the Y plane, but the result was a green screen.
ByteBuffer[] buffer = new ByteBuffer[1];
Image image = reader.acquireNextImage();
buffer[0] = image.getPlanes()[0].getBuffer().duplicate();
//buffer[1] = image.getPlanes()[1].getBuffer().duplicate();
int buffer0_size = buffer[0].remaining();
//int buffer1_size = buffer[1].remaining();
buffer[0].clear();
//buffer[1].clear();
byte[] buffer0_byte = new byte[buffer0_size];
//byte[] buffer1_byte = new byte[buffer1_size];
buffer[0].get(buffer0_byte, 0, buffer0_size);
//buffer[1].get(buffer1_byte, 0, buffer1_size);
byte[] byte2 = buffer0_byte;
//byte2=buffer0_byte;
//byte2[1]=buffer1_byte;
image.close();
mArrayImageBuffer.add(byte2);
After dequeuing, the bytes go to this function:
public static byte[] convertYUV420ToNV12(byte[] byteBuffers) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    try {
        outputStream.write(byteBuffers);
        //outputStream.write(byteBuffers[1]);
    } catch (IOException e) {
        e.printStackTrace();
    }
    // outputStream.write(buffer2_byte);
    byte[] rez = outputStream.toByteArray();
    return rez;
}
I have a list of bitmap files on my SD card. Now I want to create a video from them using MediaCodec. I have checked the MediaCodec documentation but could not find a way to do this, and I don't want to use FFmpeg. I have tried the code below. Any help would be appreciated!
protected void MergeVideo() throws IOException {
    MediaCodec mMediaCodec;
    MediaFormat mMediaFormat;
    ByteBuffer[] mInputBuffers;
    mMediaCodec = MediaCodec.createEncoderByType("video/avc");
    mMediaFormat = MediaFormat.createVideoFormat("video/avc", 320, 240);
    mMediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 125000);
    mMediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
    mMediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);
    mMediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
    mMediaCodec.configure(mMediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    mMediaCodec.start();
    mInputBuffers = mMediaCodec.getInputBuffers();
    //for (int i = 0; i < 50; i++) {
    int i = 0;
    int j = String.valueOf(i).length() < 1 ? Integer.parseInt("0" + i) : i;
    File imagesFile = new File(Environment.getExternalStorageDirectory() + "/VIDEOFRAME/", "frame-" + j + ".png");
    Bitmap bitmap = BitmapFactory.decodeFile(imagesFile.getAbsolutePath());
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, byteArrayOutputStream); // image is the bitmap
    byte[] input = byteArrayOutputStream.toByteArray();
    int inputBufferIndex = mMediaCodec.dequeueInputBuffer(-1);
    if (inputBufferIndex >= 0) {
        ByteBuffer inputBuffer = mInputBuffers[inputBufferIndex];
        inputBuffer.clear();
        inputBuffer.put(input);
        mMediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, 0, 0);
    }
}
You're missing a few pieces. The answer to this question has some of the information you need, but it was written for someone specifically wanting support in API 16. If you're willing to target API 18 and later, your life will be easier.
The biggest problem with what you have is that MediaCodec input from a ByteBuffer is always in uncompressed YUV format, but you seem to be passing compressed PNG images in. You will need to convert the bitmap to YUV. The exact layout and best method for doing this varies between devices (some use planar, some use semi-planar), but you can find code for doing so. Or just look at the way frames are generated in the buffer-to-buffer parts of EncodeDecodeTest.
Alternatively, use Surface input to the MediaCodec. Attach a Canvas to the input surface and draw the bitmap on it. The EncodeAndMuxTest does essentially this, but with OpenGL ES.
One potential issue is that you're passing in 0 for the frame timestamps. You should pass a real (generated) timestamp in, so that the value gets forwarded to MediaMuxer along with the encoded frame.
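For example (a sketch; frameIndex and FRAME_RATE are illustrative names), a monotonically increasing microsecond timestamp could be passed instead of 0:
// Sketch: generate a real presentation timestamp (microseconds) per frame
long presentationTimeUs = frameIndex * 1000000L / FRAME_RATE;
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, presentationTimeUs, 0);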
On very recent devices (API 21+), MediaRecorder can accept Surface input. This may be easier to work with than MediaCodec.
I am working on an Android app that displays photos which are downloaded from Flickr. I obtain a bitmap object from a byte array, which in turn is read from the relevant Flickr URL, as follows:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inDither = true;
opt.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, opt);
I then draw the bitmap onto a canvas in the onDraw method of a View object:
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG | Paint.FILTER_BITMAP_FLAG);
canvas.drawBitmap(bitmap, 0, 0, paint);
The problem is that the resulting picture is pixelated and I can't figure out why; I have tried a number of variations of the opt and paint objects with no luck. The difference between the picture displayed in my app and the picture at the original URL is roughly demonstrated by the following:
Bad image, see pixelation in top left corner http://homepages.inf.ed.ac.uk/s0677975/bad.jpg
Good picture, this is the expected result http://homepages.inf.ed.ac.uk/s0677975/good.jpg
Look e.g. at the clouds in the top-left corner to see the difference.
Note that JPEG pictures which are loaded from the project resources and drawn in a similar way display just fine, i.e. have no pixelation.
Can anybody give me a hint as to why this is happening?
To elaborate a little, the byte array is obtained from Flickr as follows; this is based on code from the Photostream app by Romain Guy:
InputStream in = new BufferedInputStream(url.openStream(), IO_BUFFER_SIZE);
final ByteArrayOutputStream dataStream = new ByteArrayOutputStream();
BufferedOutputStream out = new BufferedOutputStream(dataStream, IO_BUFFER_SIZE);
copy(in, out);
out.flush();
final byte[] data = dataStream.toByteArray();
PS: I also posted a variant of this question on the android.developer Google group.
Thanks a lot for your suggestion -- now I am really puzzled! I did as you suggested and found that the image resulting directly from the downloaded byte array is indeed pixelated. However, this is downloaded from exactly the same URL which, when accessed on my computer, is NOT pixelated. Here is the corresponding Flickr URL:
http://farm3.static.flickr.com/2678/4315351421_54e8cdb8e5.jpg
Even stranger, when I run the same app in the simulator rather than on my phone (a HTC Hero), there is no pixelation.
How on earth is this possible?
Below is the code I use for loading a bitmap from a URL -- it is based on the Photostream app by Romain Guy, and it incorporates Will's suggestion to write the raw byte array to file:
Bitmap loadPhotoBitmap(URL url) {
    Bitmap bitmap = null;
    InputStream in = null;
    BufferedOutputStream out = null;
    BufferedOutputStream bfs = null; // declared outside try so the finally block can close it
    try {
        FileOutputStream fos = new FileOutputStream("/sdcard/photo-tmp.jpg");
        bfs = new BufferedOutputStream(fos);
        in = new BufferedInputStream(url.openStream(), IO_BUFFER_SIZE);
        final ByteArrayOutputStream dataStream = new ByteArrayOutputStream();
        out = new BufferedOutputStream(dataStream, IO_BUFFER_SIZE);
        copy(in, out);
        out.flush();
        final byte[] data = dataStream.toByteArray();
        bfs.write(data, 0, data.length);
        bfs.flush();
        BitmapFactory.Options opt = new BitmapFactory.Options();
        bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, opt);
    } catch (IOException e) {
        android.util.Log.e(LOG_TAG, "Could not load photo: " + this, e);
    } finally {
        closeStream(in);
        closeStream(out);
        closeStream(bfs);
    }
    return bitmap;
}
private static void copy(InputStream in, OutputStream out) throws IOException {
    byte[] b = new byte[IO_BUFFER_SIZE];
    int read;
    while ((read = in.read(b)) != -1) {
        out.write(b, 0, read);
    }
}
private static void closeStream(Closeable stream) {
    if (stream != null) {
        try {
            stream.close();
        } catch (IOException e) {
            android.util.Log.e(LOG_TAG, "Could not close stream", e);
        }
    }
}
Am I going crazy here?
Best,
Michael.
Ok, so I finally get it: it appears that my mobile network does image compression to save bandwidth.
Hence a picture downloaded from my phone is of lower quality than the same picture downloaded from my computer.
That's a bummer for my app, but I don't suppose there is anything I can do about it. Sigh.
Thanks again for your input though!
Best,
Michael.
Write the raw bytes fetched from the URL to /sdcard/tmp.jpg, and view on your PC.
JPEG images are compressed in 8x8 (or 16x16) tiles. The 'pixelation' as you describe it is actually in these tiles, suggesting that the 'bad' image is a JPEG that is more aggressively compressed than the other.
So I'd anticipate that the actual issue is that the image being downloaded is a very low-quality version, e.g. one intended for thumbnailing/preview use-cases.
Some versions of Android have a bug in the Bitmap class and convert the Bitmap to RGB_565 upon some operations. This would manifest itself in artifacts similar to those in your picture; it would also explain the banding of the blue sky.
Also, keep in mind that Android attempts to "optimize" images by converting them to RGB_565 upon loading, and even when compiling resource files. Take a look at:
http://android.nakatome.net/2010/04/bitmap-basics.html