In Android, I get an Image object from this camera tutorial: https://inducesmile.com/android/android-camera2-api-example-tutorial/. Now I want to loop through the pixel values. Does anyone know how I can do that? Do I need to convert it to something else first, and if so, how?
Thanks
If you want to loop through all the pixels, you first need to convert the Image to a Bitmap object. Since the source code shows the reader returning an Image, you can convert the bytes to a Bitmap directly. (Note that this direct decode only works when the ImageReader is configured for ImageFormat.JPEG, as in that tutorial; a YUV_420_888 frame needs an actual YUV conversion like those in the other answers.)
Image image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes);
Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
Then once you have the Bitmap object, you can iterate through all of its pixels.
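Once you have the Bitmap, each getPixel(x, y) call returns a packed ARGB int. Unpacking the channels is plain bit arithmetic (this is what android.graphics.Color's alpha/red/green/blue helpers do); the class name below is just for illustration:

```java
public class ArgbUnpack {
    // Extract the four 8-bit channels from a packed ARGB pixel.
    static int alpha(int p) { return (p >>> 24) & 0xFF; }
    static int red(int p)   { return (p >>> 16) & 0xFF; }
    static int green(int p) { return (p >>> 8)  & 0xFF; }
    static int blue(int p)  { return p & 0xFF; }

    public static void main(String[] args) {
        int pixel = 0xFF336699; // opaque, R = 0x33, G = 0x66, B = 0x99
        System.out.println(alpha(pixel) + " " + red(pixel) + " "
                + green(pixel) + " " + blue(pixel));
    }
}
```

On Android you would loop over bitmap.getPixel(x, y), or, much faster, pull the whole frame out once with bitmap.getPixels(int[], ...) and unpack from the int array.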
YuvToRgbConverter is useful for conversion from Image to Bitmap.
https://github.com/android/camera-samples/blob/master/Camera2Basic/utils/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt
Usage sample:
val bmp = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
yuvToRgbConverter.yuvToRgb(image, bmp)
Actually you have two questions in one:
1) How do you loop through android.media.Image pixels?
2) How do you convert android.media.Image to Bitmap?
The first is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and U+V components are in different planes. In many image-processing cases you need only the Y plane, that is, the grayscale part of the image. To get it I suggest code like this:
Image.Plane[] planes = image.getPlanes();
int yRowStride = planes[0].getRowStride(); // bytes per row; may exceed the image width
ByteBuffer yBuffer = planes[0].getBuffer();
byte[] yImage = new byte[yBuffer.remaining()];
yBuffer.get(yImage);
The yImage byte array now holds the gray (luminance) pixels of the frame, one row every yRowStride bytes.
In the same manner you can get the U+V parts too. Note that they can come U first and V after, or V first and then U, and they may be interleaved (the common case with the Camera2 API), so you get UVUV....
For debug purposes, I often write the frame to a file and open it with the Vooya app (Linux) to check the format.
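One pitfall when looping over those luminance bytes: Java bytes are signed, so any value above 127 reads back negative unless you mask it with & 0xFF. A minimal illustration (the class name is just for the example):

```java
public class LumaMask {
    // Convert a signed Java byte to its unsigned 0..255 luminance value.
    static int toLuma(byte b) {
        return b & 0xFF;
    }

    public static void main(String[] args) {
        byte raw = (byte) 200;           // stored as -56 in a signed byte
        System.out.println(raw);         // -56
        System.out.println(toLuma(raw)); // 200
    }
}
```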
The second question is a little more complex.
To get a Bitmap object, I found some example code in the TensorFlow project here. The most interesting function for you is convertImageToBitmap, which returns RGB values.
To convert them to an actual Bitmap, do the following:
Bitmap rgbFrameBitmap;
int[] cachedRgbBytes;
byte[][] cachedYuvBytes; // reusable scratch buffers, as in the TensorFlow example
cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
rgbFrameBitmap.setPixels(cachedRgbBytes, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
Note: there are more options for converting YUV to RGB frames, so if you only need the pixel values, a Bitmap may not be the best choice; it can consume more memory than you need just to get the RGB values.
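If you do go the raw route, the per-pixel YUV-to-RGB math is small enough to write yourself. A plain-Java sketch using approximate BT.601 coefficients (the exact constants vary slightly between standards, and the class name is illustrative):

```java
public class YuvToRgb {
    // Approximate BT.601 YUV -> RGB for one pixel; inputs and outputs are 0..255.
    static int[] yuvToRgb(int y, int u, int v) {
        int r = clamp((int) (y + 1.402 * (v - 128)));
        int g = clamp((int) (y - 0.344 * (u - 128) - 0.714 * (v - 128)));
        int b = clamp((int) (y + 1.772 * (u - 128)));
        return new int[] { r, g, b };
    }

    // Keep channel values in the valid 0..255 range.
    static int clamp(int c) {
        return Math.max(0, Math.min(255, c));
    }

    public static void main(String[] args) {
        // A neutral pixel (U = V = 128) maps to pure gray.
        int[] rgb = yuvToRgb(90, 128, 128);
        System.out.println(rgb[0] + " " + rgb[1] + " " + rgb[2]); // 90 90 90
    }
}
```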
Java Conversion Method
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
        .build();

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        // call the toBitmap function
        Bitmap bitmap = toBitmap(image);
        image.close();
    }
});
private Bitmap bitmapBuffer;

private Bitmap toBitmap(@NonNull ImageProxy image) {
    if (bitmapBuffer == null) {
        bitmapBuffer = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    }
    bitmapBuffer.copyPixelsFromBuffer(image.getPlanes()[0].getBuffer());
    return bitmapBuffer;
}
https://docs.oracle.com/javase/1.5.0/docs/api/java/nio/ByteBuffer.html#get%28byte[]%29
According to the Java docs, the buffer.get method "transfers bytes from this buffer into the given destination array. An invocation of this method of the form src.get(a) behaves in exactly the same way as the invocation src.get(a, 0, a.length)".
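A self-contained illustration of that behavior (the drain helper is just for the example). Note that sizing the destination with remaining() is safer than capacity() when the buffer may already have been partially read:

```java
import java.nio.ByteBuffer;

public class BufferGetDemo {
    // Drain every remaining byte of a buffer into a new array, as buffer.get(bytes) does.
    static byte[] drain(ByteBuffer src) {
        byte[] dst = new byte[src.remaining()]; // remaining() rather than capacity()
        src.get(dst);                           // same as src.get(dst, 0, dst.length)
        return dst;
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.wrap(new byte[] { 10, 20, 30, 40 });
        byte[] dst = drain(src);
        System.out.println(dst.length + " " + src.remaining()); // 4 0 -- position advanced to the limit
    }
}
```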
I assume you have a YUV_420_888 Image provided by the camera. Using this interesting "How to use YUV (YUV_420_888) Image in Android" tutorial, I can propose the following solution for converting an Image to a Bitmap.
Use this to convert a YUV Image to a Bitmap:
private Bitmap yuv420ToBitmap(Image image, Context context) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    // Refer to the logic in the section below on how to convert a YUV_420_888 image
    // to a single-channel flat 1D array. For the sake of this example I'll abstract it
    // as a method.
    byte[] yuvByteArray = image2byteArray(image);

    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuvByteArray.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(image.getWidth())
            .setY(image.getHeight());
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    // The allocations above "should" be cached if you are going to perform
    // repeated conversions of YUV_420_888 to Bitmap.
    in.copyFrom(yuvByteArray);
    script.setInput(in);
    script.forEach(out);

    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    out.copyTo(bitmap);
    return bitmap;
}
and a supporting function to convert a three-plane YUV image into a one-dimensional byte array:
private byte[] image2byteArray(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }

    int width = image.getWidth();
    int height = image.getHeight();

    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];

    ByteBuffer yBuffer = yPlane.getBuffer();
    ByteBuffer uBuffer = uPlane.getBuffer();
    ByteBuffer vBuffer = vPlane.getBuffer();

    // Full-size Y channel and quarter-size U+V channels.
    int numPixels = (int) (width * height * 1.5f);
    byte[] nv21 = new byte[numPixels];
    int index = 0;

    // Copy the Y channel.
    int yRowStride = yPlane.getRowStride();
    int yPixelStride = yPlane.getPixelStride();
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            nv21[index++] = yBuffer.get(y * yRowStride + x * yPixelStride);
        }
    }

    // Copy the VU data; the NV21 format expects YYYY...VUVU... packing.
    // The U/V planes are guaranteed to have the same row stride and pixel stride.
    int uvRowStride = uPlane.getRowStride();
    int uvPixelStride = uPlane.getPixelStride();
    int uvWidth = width / 2;
    int uvHeight = height / 2;
    for (int y = 0; y < uvHeight; ++y) {
        for (int x = 0; x < uvWidth; ++x) {
            int bufferIndex = (y * uvRowStride) + (x * uvPixelStride);
            // V channel.
            nv21[index++] = vBuffer.get(bufferIndex);
            // U channel.
            nv21[index++] = uBuffer.get(bufferIndex);
        }
    }
    return nv21;
}
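The sizing and indexing arithmetic behind this packing can be checked without Android at all: an NV21 buffer is width * height Y bytes followed by interleaved V/U bytes at quarter resolution. A small sketch, assuming a tightly packed buffer with no row padding (class and method names are illustrative):

```java
public class Nv21Layout {
    // Total NV21 buffer size: full-resolution Y plus two quarter-resolution chroma planes.
    static int bufferSize(int width, int height) {
        return width * height + 2 * (width * height / 4);
    }

    // Offset of the V byte for the 2x2 block containing pixel (x, y),
    // assuming tightly packed NV21 (no row padding).
    static int vIndex(int x, int y, int width, int height) {
        return width * height + (y / 2) * width + (x / 2) * 2;
    }

    public static void main(String[] args) {
        System.out.println(bufferSize(640, 480));   // 460800 = 640 * 480 * 1.5
        System.out.println(vIndex(0, 0, 640, 480)); // 307200 -- first byte after the Y plane
    }
}
```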
Start with the ImageProxy from the analyzer:
@Override
public void analyze(@NonNull ImageProxy imageProxy) {
    Image mediaImage = imageProxy.getImage();
    if (mediaImage != null) {
        toBitmap(mediaImage);
    }
    imageProxy.close();
}
Then convert it to a bitmap:
private Bitmap toBitmap(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }

    byte[] nv21b = yuv420ThreePlanesToNV21BA(image.getPlanes(), image.getWidth(), image.getHeight());
    YuvImage yuvImage = new YuvImage(nv21b, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), mQuality, baos);
    mFrameBuffer = baos; // mQuality and mFrameBuffer are fields of the enclosing class

    // To get an actual Bitmap instead, decode the JPEG bytes:
    //byte[] imageBytes = baos.toByteArray();
    //Bitmap bm = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    return null;
}
Here's the static function that worked for me
public static byte[] yuv420ThreePlanesToNV21BA(Plane[] yuv420888planes, int width, int height) {
    int imageSize = width * height;
    byte[] out = new byte[imageSize + 2 * (imageSize / 4)];

    if (areUVPlanesNV21(yuv420888planes, width, height)) {
        // Copy the Y values.
        yuv420888planes[0].getBuffer().get(out, 0, imageSize);

        ByteBuffer uBuffer = yuv420888planes[1].getBuffer();
        ByteBuffer vBuffer = yuv420888planes[2].getBuffer();
        // Get the first V value from the V buffer, since the U buffer does not contain it.
        vBuffer.get(out, imageSize, 1);
        // Copy the first U value and the remaining VU values from the U buffer.
        uBuffer.get(out, imageSize + 1, 2 * imageSize / 4 - 1);
    } else {
        // Fall back to copying the UV values one by one, which is slower but also works.
        // Unpack Y.
        unpackPlane(yuv420888planes[0], width, height, out, 0, 1);
        // Unpack U.
        unpackPlane(yuv420888planes[1], width, height, out, imageSize + 1, 2);
        // Unpack V.
        unpackPlane(yuv420888planes[2], width, height, out, imageSize, 2);
    }
    return out;
}
(The areUVPlanesNV21 and unpackPlane helper functions are not shown here.)
bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.image);
1- Store the path to the image file as a string variable. To decode the content of an image file, you need the file path stored within your code as a string. Use the following syntax as a guide:
String picPath = "/mnt/sdcard/Pictures/mypic.jpg";
2- Create a Bitmap object using BitmapFactory:
Bitmap picBitmap = BitmapFactory.decodeFile(picPath);
I am doing image processing which requires converting an RGB bitmap image to the YCbCr color space. I retrieve the RGB value for each pixel and apply the conversion matrix to it.
public void convertRGB(View v) {
    if (imageLoaded) {
        int width = inputBM.getWidth();
        int height = inputBM.getHeight();
        int pixel;
        int alpha, red, green, blue;
        int Y, Cb, Cr;
        outputBM = Bitmap.createBitmap(width, height, inputBM.getConfig());
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                pixel = inputBM.getPixel(x, y);
                alpha = Color.alpha(pixel);
                red = Color.red(pixel);
                green = Color.green(pixel);
                blue = Color.blue(pixel);
                Y = (int) (0.299 * red + 0.587 * green + 0.114 * blue);
                Cb = (int) (128 - 0.169 * red - 0.331 * green + 0.500 * blue);
                Cr = (int) (128 + 0.500 * red - 0.419 * green - 0.081 * blue);
                int p = (Y << 24) | (Cb << 16) | (Cr << 8);
                outputBM.setPixel(x, y, p);
            }
        }
        comImgView.setImageBitmap(outputBM);
    }
}
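For reference, the class below is just a plain-Java restatement of the conversion matrix above, applied to a single pure-red pixel in isolation:

```java
public class RgbToYcbcr {
    // Same coefficients as the conversion above; inputs and outputs are 0..255.
    static int[] rgbToYcbcr(int red, int green, int blue) {
        int y  = (int) (0.299 * red + 0.587 * green + 0.114 * blue);
        int cb = (int) (128 - 0.169 * red - 0.331 * green + 0.500 * blue);
        int cr = (int) (128 + 0.500 * red - 0.419 * green - 0.081 * blue);
        return new int[] { y, cb, cr };
    }

    public static void main(String[] args) {
        int[] ycc = rgbToYcbcr(255, 0, 0); // pure red
        System.out.println(ycc[0] + " " + ycc[1] + " " + ycc[2]); // 76 84 255
    }
}
```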
The problem is that the output color is different from the original. I tried to use BufferedImage, but it does not work on Android.
Original:
After Conversion:
May I know what the correct way to handle a YCbCr image in Android Java is?
Try using the code below:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(your_yuv_data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
byte[] imageBytes = out.toByteArray();
Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
iv.setImageBitmap(image);
Check documentation for detailed description for YuvImage Class.
I am getting an Image in the YUV_420_888 format as the result of a capture using the Camera2 APIs. I need to convert the image to the RGB format, but the colors of the resulting image are wrong.
This is the function that performs the conversion, using OpenCV:
@TargetApi(Build.VERSION_CODES.KITKAT)
public static Bitmap createBitmapFromYUV420(Image image) {
    Image.Plane[] planes = image.getPlanes();
    byte[] imageData = new byte[image.getWidth() * image.getHeight()
            * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8];

    ByteBuffer buffer = planes[0].getBuffer();
    int lastIndex = buffer.remaining();
    buffer.get(imageData, 0, lastIndex);
    int pixelStride = planes[1].getPixelStride();

    for (int i = 1; i < planes.length; i++) {
        buffer = planes[i].getBuffer();
        byte[] planeData = new byte[buffer.remaining()];
        buffer.get(planeData);
        for (int j = 0; j < planeData.length; j += pixelStride) {
            imageData[lastIndex++] = planeData[j];
        }
    }

    Mat yuvMat = new Mat(image.getHeight() + image.getHeight() / 2, image.getWidth(), CvType.CV_8UC1);
    yuvMat.put(0, 0, imageData);
    Mat rgbMat = new Mat();
    Imgproc.cvtColor(yuvMat, rgbMat, Imgproc.COLOR_YUV420p2RGBA);
    Bitmap bitmap = Bitmap.createBitmap(rgbMat.cols(), rgbMat.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(rgbMat, bitmap);
    return bitmap;
}
I think that the way the bytes from the 3 planes are appended into the byte array is correct, so perhaps the error is somewhere else?
SOLVED
Apparently, there is a bug in Android API 21 that causes the U and V buffers to be filled with zeros except for a few bytes, resulting in a green image. The issue was fixed in API 22.
I use the saveFrame method in grafika, but I found there is an orientation issue.
The left one is from saveFrame and the right one is what I see.
The code as following:
/**
 * Saves the EGL surface to a file.
 * <p>
 * Expects that this object's EGL surface is current.
 */
public void saveFrame(File file) throws IOException {
    if (!mEglCore.isCurrent(mEGLSurface)) {
        throw new RuntimeException("Expected EGL context/surface is not current");
    }

    // glReadPixels fills in a "direct" ByteBuffer with what is essentially big-endian RGBA
    // data (i.e. a byte of red, followed by a byte of green...). While the Bitmap
    // constructor that takes an int[] wants little-endian ARGB (blue/red swapped), the
    // Bitmap "copy pixels" method wants the same format GL provides.
    //
    // Ideally we'd have some way to re-use the ByteBuffer, especially if we're calling
    // here often.
    //
    // Making this even more interesting is the upside-down nature of GL, which means
    // our output will look upside down relative to what appears on screen if the
    // typical GL conventions are used.

    String filename = file.toString();
    int width = getWidth();
    int height = getHeight();
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    GlUtil.checkGlError("glReadPixels");
    buf.rewind();

    BufferedOutputStream bos = null;
    try {
        bos = new BufferedOutputStream(new FileOutputStream(filename));
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);
        bmp.compress(Bitmap.CompressFormat.PNG, 90, bos);
        bmp.recycle();
    } finally {
        if (bos != null) bos.close();
    }
    Log.d(TAG, "Saved " + width + "x" + height + " frame as '" + filename + "'");
}
So how do I deal with the orientation issue?
Use the following code to solve the problem:
IntBuffer ib = IntBuffer.allocate(width * height);
IntBuffer ibt = IntBuffer.allocate(width * height);
gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);

// Convert the upside-down, mirror-reversed image to a right-side-up normal image.
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        ibt.put((height - i - 1) * width + j, ib.get(i * width + j));
    }
}

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(ibt);
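The row-flip logic can be verified on a tiny buffer in plain Java, no GL required (names are illustrative):

```java
public class FlipVertical {
    // Reverse the row order of a row-major pixel buffer (GL's bottom-up -> top-down).
    static int[] flip(int[] src, int width, int height) {
        int[] dst = new int[src.length];
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                dst[(height - i - 1) * width + j] = src[i * width + j];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        // 2x2 image with rows [1, 2] and [3, 4]; flipping swaps the rows.
        int[] flipped = flip(new int[] { 1, 2, 3, 4 }, 2, 2);
        System.out.println(java.util.Arrays.toString(flipped)); // [3, 4, 1, 2]
    }
}
```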
I am trying to draw the 'Y' component as greyscale from the image I get from the Camera via onPreviewFrame.
I am using the version of Canvas.drawBitmap that takes an array of 'colors' as a parameter. The Android docs don't mention what format the Color is in, so I'm assuming ARGB 8888.
I do get an image showing up, but it is showing up with an odd Yellow tint.
Here is my code below:
public void onPreviewFrame(byte[] bytes, Camera camera) {
    Canvas canvas = null;
    try {
        synchronized (mSurfaceHolder) {
            canvas = mSurfaceHolder.lockCanvas();
            Size size = camera.getParameters().getPreviewSize();
            int width = size.width;
            int height = size.height;
            if (mHeight * mWidth != height * width) {
                mColors = new int[width * height];
                mHeight = height;
                mWidth = width;
                Log.i(TAG, "preview size = " + width + " x " + height);
            }
            for (int x = 0; x < width; x++) {
                for (int y = 0; y < height; y++) {
                    int yval = bytes[x + y * width];
                    mColors[x + y * width] = (0xFF << 24) | (yval << 16) | (yval << 8) | yval;
                }
            }
            canvas.drawBitmap(mColors, 0, width, 0.f, 0.f, width, height, false, null);
        }
    } finally {
        if (canvas != null) {
            mSurfaceHolder.unlockCanvasAndPost(canvas);
        }
    }
}
I've also tried using another version of Canvas.drawBitmap that takes a Bitmap as a parameter. I constructed the Bitmap similarly from the same array and told it to use ARGB explicitly, but it still ended up tinted yellow!
What am I doing wrong here?
It's a different approach, but the following works and doesn't suffer from the color problems of your solution:
YuvImage yuv = new YuvImage(data, previewFormat, size.width, size.height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 50, out);
byte[] bytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
canvas.drawBitmap(bitmap, null, new Rect(0, 0, size.width, size.height), null);
Note: This probably needs to allocate data for every frame, which your solution doesn't.