In my Android application, I use the Camera2 API to let the user take a snapshot. The code is mostly straight out of the standard Camera2 sample application.
When the image is available, it is obtained by calling the acquireNextImage() method:
public void onImageAvailable(ImageReader reader) {
    mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
}
In my case, when I query the width and height of the Image object, it reports 4160x3120. In reality, however, it is 3120x4160. The real size can be seen when I dump the buffer into a jpg file:
ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
// Dump "bytes" to file
For my purposes, I need the correct width and height. I am wondering if the width and height are getting swapped because the sensor orientation is 90 degrees.
If so, I can simply swap the dimensions if I detect that the sensor orientation is 90 or 270. I already have this value:
mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
I feel this is a general problem and not specific to my code. Regards.
Edit:
Turns out the reported image size is correct. JPEG image metadata stores a field called "Orientation," and most image viewers know how to interpret this value.
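For anyone who wants to check that flag themselves, here is a minimal sketch of reading it with the framework ExifInterface, assuming the JPEG has already been written to mFile (isRotated90or270 is a hypothetical helper):
import android.media.ExifInterface;
import java.io.IOException;

// Returns true if the EXIF Orientation flag means viewers will display
// the image with width and height visually swapped.
static boolean isRotated90or270(String jpegPath) throws IOException {
    ExifInterface exif = new ExifInterface(jpegPath);
    int orientation = exif.getAttributeInt(
            ExifInterface.TAG_ORIENTATION,
            ExifInterface.ORIENTATION_UNDEFINED);
    return orientation == ExifInterface.ORIENTATION_ROTATE_90
            || orientation == ExifInterface.ORIENTATION_ROTATE_270;
}
It could be called as isRotated90or270(mFile.getAbsolutePath()) after ImageSaver finishes writing.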
I'm currently using an OnImageCaptured callback to get my image instead of saving it to the device. I'm having trouble understanding when it's necessary to rotate an image that comes from an ImageProxy.
I use the following method to convert the data from an ImageProxy to a Bitmap:
...
val buffer: ByteBuffer = imageProxy.planes[0].buffer // Only first plane because of JPEG format.
val bytes = ByteArray(buffer.remaining())
buffer.get(bytes)
return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
The resulting bitmap is sometimes rotated, and sometimes not, depending on the device the picture is taken from. ImageProxy.getImageInfo().getRotationDegrees() returns the correct rotation, but I don't know when it's necessary to apply it, since sometimes it's applied in the bitmap, and sometimes not.
The ImageCapture.OnCapturedImageListener documentation also says:
The image is provided as captured by the underlying ImageReader without rotation applied. rotationDegrees describes the magnitude of clockwise rotation, which if applied to the image will make it match the currently configured target rotation.
which leads me to think that I'm getting the bitmap incorrectly, because sometimes it has the rotation applied. Is there something I'm missing here?
Well, as it turns out, the only necessary information is the EXIF metadata. rotationDegrees contains the final orientation the image should be in, starting from the base orientation, but the EXIF metadata only shows the rotation I still had to apply to get the final result. So rotating according to TAG_ORIENTATION solved the issue.
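For reference, a minimal Java sketch of that fix, assuming the raw JPEG bytes from the ImageProxy and the androidx.exifinterface library (decodeRespectingExif is a hypothetical helper):
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Matrix;
import androidx.exifinterface.media.ExifInterface;
import java.io.ByteArrayInputStream;
import java.io.IOException;

static Bitmap decodeRespectingExif(byte[] jpegBytes) throws IOException {
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
    ExifInterface exif = new ExifInterface(new ByteArrayInputStream(jpegBytes));
    int degrees;
    switch (exif.getAttributeInt(ExifInterface.TAG_ORIENTATION,
            ExifInterface.ORIENTATION_NORMAL)) {
        case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
        case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
        case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
        default:                                   degrees = 0;   break;
    }
    if (degrees == 0) return bitmap;
    // Bake the EXIF rotation into the pixels so the Bitmap is upright.
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    return Bitmap.createBitmap(bitmap, 0, 0,
            bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}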
UPDATE: This was an issue with the CameraX library itself. It was fixed in 1.0.0-beta02, so now the exif metadata and rotationDegrees contain the same information.
I have created a camera screen based on the Google camera2 sample; the code is almost identical. The camera takes a photo and saves it on the device in JPEG format, but I see some weird behavior.
For example, taking a photo on the emulator rotates the image 90 degrees (the saved image, not the preview), while on my Huawei the image is not rotated.
What is weird is that the screen rotation and sensor orientation values are identical on both the emulator and the Huawei.
So how exactly is the JPEG orientation set?
Also, while exploring CaptureRequest.JPEG_ORIENTATION:
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(rotation))
I have noticed that this request has no effect on the emulator at all.
I have tried to get the JPEG orientation from ExifInterface after the bitmap is saved, but on both the emulator and the Huawei the value is ORIENTATION_UNDEFINED. Maybe the Exif tags are lost while converting the Image (from the ImageReader) to a File?
Maybe I need to set the ExifInterface values manually while taking the image, but if the values are identical, what is the difference?
How are we supposed to control the JPEG orientation?
Using this method to get the orientation (from the Google camera2 sample), the result is 90 degrees on both the emulator and the Huawei:
private int getOrientation(int rotation) {
    return (ORIENTATIONS.get(rotation) + mSensorOrientation + 270) % 360;
}
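For context, ORIENTATIONS in that formula is a SparseIntArray mapping display rotation to degrees; if your copy matches the Camera2Basic sample, it is defined like this:
import android.util.SparseIntArray;
import android.view.Surface;

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 90);
    ORIENTATIONS.append(Surface.ROTATION_90, 0);
    ORIENTATIONS.append(Surface.ROTATION_180, 270);
    ORIENTATIONS.append(Surface.ROTATION_270, 180);
}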
Using this method to get a Bitmap from the ImageReader:
public static Bitmap getBitmapFromReader(ImageReader reader) {
    Bitmap bitmap = null;
    Image image = null;
    try {
        image = reader.acquireLatestImage();
        Image.Plane[] planes = image.getPlanes();
        // For JPEG output the whole compressed image is in plane 0.
        ByteBuffer buffer = planes[0].getBuffer();
        buffer.rewind();
        byte[] data = new byte[buffer.capacity()];
        buffer.get(data);
        bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
    } catch (Exception e) {
        e.printStackTrace();
    }
    if (image != null) {
        image.close();
    }
    return bitmap;
}
The emulator is a very bad starting point for working with the Camera2 API; essentially, it has LEGACY Camera2 support, with some quirks.
That said, JPEG orientation is a very delicate topic on Android. The official docs explain that the rotation request may be applied to the image itself, or to the EXIF flag only, and some devices (which Huawei did you test?) don't comply at all.
Also note that BitmapFactory.decodeByteArray() has always ignored the EXIF flag.
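As for controlling the orientation explicitly, the reference documentation for CaptureRequest.JPEG_ORIENTATION computes the value like this (sketch adapted from those docs; deviceOrientation is assumed to come from an OrientationEventListener):
import android.hardware.camera2.CameraCharacteristics;
import android.view.OrientationEventListener;

static int getJpegOrientation(CameraCharacteristics c, int deviceOrientation) {
    if (deviceOrientation == OrientationEventListener.ORIENTATION_UNKNOWN) return 0;
    int sensorOrientation = c.get(CameraCharacteristics.SENSOR_ORIENTATION);
    // Round device orientation to a multiple of 90.
    deviceOrientation = (deviceOrientation + 45) / 90 * 90;
    // Reverse device orientation for front-facing cameras.
    boolean facingFront = c.get(CameraCharacteristics.LENS_FACING)
            == CameraCharacteristics.LENS_FACING_FRONT;
    if (facingFront) deviceOrientation = -deviceOrientation;
    // Desired JPEG orientation relative to the camera sensor, making the
    // image upright relative to how the device is currently held.
    return (sensorOrientation + deviceOrientation + 360) % 360;
}
Whether the HAL rotates the pixels or only writes the EXIF flag is still device-dependent, as noted above.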
I'm trying to create an app that processes camera images in real time and displays them on screen. I'm using the camera2 API. I have created a native library to process the images using OpenCV.
So far I have managed to set up an ImageReader that receives images in YUV_420_888 format, like this:
mImageReader = ImageReader.newInstance(
        mPreviewSize.getWidth(),
        mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888,
        4);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mImageReaderHandler);
From there I'm able to get the image planes (Y, U and V), get their ByteBuffer objects and pass them to my native function. This happens in the mOnImageAvailableListener:
Image image = reader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();
Image.Plane YPlane = planes[0];
Image.Plane UPlane = planes[1];
Image.Plane VPlane = planes[2];
ByteBuffer YPlaneBuffer = YPlane.getBuffer();
ByteBuffer UPlaneBuffer = UPlane.getBuffer();
ByteBuffer VPlaneBuffer = VPlane.getBuffer();
myNativeMethod(YPlaneBuffer, UPlaneBuffer, VPlaneBuffer, image.getWidth(), image.getHeight());
image.close();
On the native side I'm able to get the data pointers from the buffers, create a cv::Mat from the data and perform the image processing.
Now the next step would be to show my processed output, but I'm unsure how to show my processed image. Any help would be greatly appreciated.
Generally speaking, you need to send the processed image data to an Android view.
The most performant option is to get an android.view.Surface object to draw into; you can get one from a SurfaceView (via SurfaceHolder) or a TextureView (via SurfaceTexture). Then you can pass that Surface through JNI to your native code and use the NDK methods there (a Java-side sketch of the hand-off follows these steps):
ANativeWindow_fromSurface to get an ANativeWindow
The various ANativeWindow methods to set the output buffer size and format, and then draw your processed data into it.
Use setBuffersGeometry() to configure the output size, then lock() to get an ANativeWindow_Buffer. Write your image data to ANativeWindow_Buffer.bits, and then send the buffer off with unlockAndPost().
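On the Java side, the hand-off can be as small as the following sketch (the class, the nativeSetSurface method, and the "native-lib" library name are hypothetical; the ANativeWindow calls above live behind the native method):
import android.view.Surface;
import android.view.SurfaceHolder;

public class NativeRenderer implements SurfaceHolder.Callback {
    static { System.loadLibrary("native-lib"); } // assumed .so name

    // Implemented in native code: ANativeWindow_fromSurface, then
    // setBuffersGeometry, then lock()/unlockAndPost() per processed frame.
    public native void nativeSetSurface(Surface surface);

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        nativeSetSurface(holder.getSurface()); // hand the Surface to native code
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        nativeSetSurface(null); // let native code release its ANativeWindow
    }
}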
Generally, you should probably stick to RGBA_8888 as the most compatible format; technically only it and two other RGB variants are officially supported. So if your processed image is in YUV, you'd need to convert it to RGBA first.
You'll also need to ensure that the aspect ratio of your output view matches that of the dimensions you set; by default, Android's Views will just scale those internal buffers to the size of the output View, possibly stretching it in the process.
You can also set the format to one of Android's internal YUV formats, but this is not guaranteed to work!
I've tried the ANativeWindow approach, but it's a pain to set up and I haven't managed to do it correctly. In the end I just gave up and imported the OpenCV4Android library, which simplifies things by converting camera data to an RGBA Mat behind the scenes.
I'm using the camera2 api to capture a burst of images. To ensure fastest capture speed, I am currently using yuv420888.
(jpeg results in approximately 3 fps capture while yuv results in approximately 30fps)
So what I'm asking is: how can I access the YUV values for each pixel in the image?
i.e.
Image image = reader.acquireNextImage();
Pixel pixel = image.getPixel(x,y);
pixel.y = ...
pixel.u = ...
pixel.v = ...
Also, if another format would be faster, please let me know.
If you look at the Image class you will see the immediate answer is simply the .getPlanes() method.
Of course, for YUV_420_888 this will yield three planes of YUV data, and you will have to do a bit of work with them to get the value of any given pixel, because the U and V channels have been downsampled and may be interleaved in how they are stored in the Image.Plane objects. But that is beyond the scope of this question.
Also, you are correct that YUV will be the fastest available output from your camera. JPEG requires extra encoding time, which slows down the pipeline, and RAW images take a long time to read out simply because they are so large. YUV (of whatever type) is the format most camera pipelines work in internally, so it is the 'native' output, and thus the fastest.
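To make that "bit of work" concrete, here is one sketch of a getPixel-style lookup using the plane strides (getYuv is a hypothetical helper; it assumes a YUV_420_888 Image with the standard 2x2 chroma subsampling):
import android.media.Image;

static int[] getYuv(Image image, int x, int y) {
    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];

    // Y is full resolution; rows may be padded, hence rowStride.
    int yValue = yPlane.getBuffer()
            .get(y * yPlane.getRowStride() + x * yPlane.getPixelStride()) & 0xFF;

    // U and V are subsampled by 2 in both dimensions; pixelStride may be 2
    // when the chroma samples are interleaved in memory.
    int cx = x / 2, cy = y / 2;
    int uValue = uPlane.getBuffer()
            .get(cy * uPlane.getRowStride() + cx * uPlane.getPixelStride()) & 0xFF;
    int vValue = vPlane.getBuffer()
            .get(cy * vPlane.getRowStride() + cx * vPlane.getPixelStride()) & 0xFF;

    return new int[] { yValue, uValue, vValue };
}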
I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:
byte[] pic;
int pic_size;
Bitmap picframe;

public void onPreviewFrame(byte[] frame, Camera c)
{
    pic_size = mCamera.getParameters().getPreviewSize().height * mCamera.getParameters().getPreviewSize().width;
    pic = new byte[pic_size];
    for (int i = 0; i < pic_size; i++)
    {
        pic[i] = frame[i];
    }
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
The first [width*height] values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array either, so how would I specify the width and height?
You can get extensive guidance from the OpenCV4Android SDK. Look into their available examples, specifically the "Tutorial 1 Basic - 0 Android Camera" example.
But, as in my case, for intensive image processing this will become slower than acceptable for a real-time image processing application.
A good replacement for their onPreviewFrame byte-array conversion is YuvImage: wrap the frame, create a Rect the same size as the image, and pass it, together with a ByteArrayOutputStream and a compression quality, to compressToJpeg():
YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
Rect imageSizeRectangle = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);
byte[] imageData = baos.toByteArray();
Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
Rendering these preview frames on a surface, and the best practices involved, is a whole other topic. =)
This very old post has caught my attention now.
The API available in 2011 was much more limited. Today one can use a SurfaceTexture (see example) to preview the camera stream after (some) manipulation.
This is not an easy task to achieve with the current Android tools/APIs. In general, real-time image processing is better done at the NDK level. Just to show black and white, though, you can still do it in Java: the byte array containing the frame data is in YUV format, where the Y plane comes first. So if you take just the Y plane (the first width x height bytes), you already have the black-and-white image.
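For example, a minimal sketch of turning that Y plane into a Bitmap you can hand to an ImageView (yPlaneToGrayscale is a hypothetical helper; it assumes the default NV21 preview format, where the first width*height bytes are luminance):
import android.graphics.Bitmap;

static Bitmap yPlaneToGrayscale(byte[] frame, int width, int height) {
    int[] pixels = new int[width * height];
    for (int i = 0; i < pixels.length; i++) {
        int y = frame[i] & 0xFF;                           // luminance, 0..255
        pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque gray in ARGB
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
The result can then be shown with imageView.setImageBitmap(...) on the UI thread.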
I did achieve this through extensive work and trials. You can view the app on Google Play:
https://play.google.com/store/apps/details?id=com.nm.camerafx