I created a camera screen based on the Google Camera2 sample; the code is almost identical. The camera takes a photo and saves it on the device in JPEG format, but I see some weird behavior.
For example, a photo taken on the emulator is rotated 90 degrees (the saved image, not the preview), while on my Huawei the image is not rotated.
What's strange is that the screen rotation and sensor orientation values are identical on both the emulator and the Huawei.
So how exactly is the JPEG orientation set?
Also, while exploring CaptureRequest.JPEG_ORIENTATION:
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(rotation))
I noticed that this call has no effect on the emulator at all.
I tried to read the JPEG orientation with ExifInterface after the bitmap was saved, but on both the emulator and the Huawei the value is ORIENTATION_UNDEFINED. Maybe the EXIF tags are dropped while converting the Image (from the ImageReader) to a File?
Maybe I need to set the EXIF values manually while taking the image, but if the values are identical, what is the difference?
How are we supposed to control the JPEG orientation?
Using this method to get the orientation (from the Google Camera2 sample), the result is 90 degrees on both the emulator and the Huawei:
private int getOrientation(int rotation) {
    return (ORIENTATIONS.get(rotation) + mSensorOrientation + 270) % 360;
}
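For reference, that formula can be checked outside Android. The sketch below hard-codes the ORIENTATIONS table from the Camera2Basic sample (display rotation index 0..3 mapped to an offset, as the sample defines it) in a plain array; the class name is mine, not part of the sample.

```java
// Sketch reproducing the Camera2Basic orientation math outside Android.
// ORIENTATIONS[i] is the offset for Surface.ROTATION_0..ROTATION_270 (i = 0..3),
// exactly as the sample's SparseIntArray defines it.
public class JpegOrientationMath {
    private static final int[] ORIENTATIONS = {90, 0, 270, 180};

    public static int getOrientation(int displayRotation, int sensorOrientation) {
        return (ORIENTATIONS[displayRotation] + sensorOrientation + 270) % 360;
    }

    public static void main(String[] args) {
        // Display at ROTATION_0, sensor mounted at 90 degrees (common on phones):
        System.out.println(getOrientation(0, 90)); // prints 90
    }
}
```

With a sensor orientation of 90 and the display at ROTATION_0 this yields (90 + 90 + 270) % 360 = 90, matching the value reported above for both devices.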
And this method to get a Bitmap from the ImageReader:
public static Bitmap getBitmapFromReader(ImageReader reader) {
    Bitmap bitmap = null;
    Image image = null;
    try {
        image = reader.acquireLatestImage(); // may return null
        if (image != null) {
            // JPEG images have a single plane containing the compressed bytes
            ByteBuffer buffer = image.getPlanes()[0].getBuffer();
            buffer.rewind();
            byte[] data = new byte[buffer.remaining()];
            buffer.get(data);
            bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (image != null) {
            image.close();
        }
    }
    return bitmap;
}
The emulator is a very bad starting point for working with the Camera2 API. Essentially, it has LEGACY Camera2 support, with some quirks.
That said, JPEG orientation is a very delicate topic on Android. The official docs explain that the rotation request may be applied to the image data itself, or only to the EXIF orientation flag, and some devices (which Huawei did you test?) don't comply at all.
Also note that BitmapFactory.decodeByteArray() has ignored the EXIF flag since the very beginning.
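One consequence: if you decode the ImageReader's JPEG buffer into a Bitmap and re-compress it, any EXIF tags are lost, which would explain the ORIENTATION_UNDEFINED readings above. A minimal sketch of the alternative, writing the buffer to disk unchanged (class and method names are mine, for illustration):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch: write the ImageReader's JPEG buffer to a file byte-for-byte.
// Because the bytes are never re-encoded, any EXIF orientation tag the
// camera HAL embedded survives; going through BitmapFactory + compress()
// would throw that metadata away.
public class RawJpegWriter {
    public static void save(byte[] jpegBytes, File out) throws IOException {
        try (FileOutputStream fos = new FileOutputStream(out)) {
            fos.write(jpegBytes); // exact copy, EXIF included
        }
    }
}
```

Reading the orientation back with ExifInterface on such a file should then reflect whatever the device actually wrote (which, per the answer above, may still be nothing on some devices).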
Related
I'm trying to capture a YUV image, but it comes out stretched. I'm using Android Camera2 following https://github.com/googlearchive/android-Camera2Basic, with Android SDK 28.
I'm seeing weird behavior when I capture a camera frame in YUV 2048x1536 from the setOnImageAvailableListener() callback using the ImageReader. The capture is stretched:
YUV 2048x1536 stretched
What I do:
pictureImageReader.setOnImageAvailableListener(
        reader -> {
            Image image = reader.acquireLatestImage();
            if (image != null) {
                ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                byte[] bytes = new byte[buffer.capacity()];
                buffer.get(bytes);
                Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
                image.close();
            }
        }, mBackgroundHandler);
To convert the Image to a Bitmap I used the approach from "How to convert android.media.Image to bitmap object?":
Image image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes);
Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
However, if I change the resolution, the capture is fine (YUV 960x720):
YUV 960x720 ok
I don't know why the capture at YUV 2048x1536 is stretched while the one at YUV 960x720 is not; the only change is the resolution.
Thanks
Does the device actually list 2048x1536 as a supported size in its stream configuration map? Sometimes camera devices will accept a resolution they don't actually list, but then can't properly output it.
If that size is listed, then this may just be a bug on that device's camera implementation, unfortunately.
(Also, it looks like you're capturing JPEG images, not uncompressed YUV, though that doesn't necessarily matter here.)
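To perform the check the answer suggests, you would query the device's stream configuration map, e.g. `characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputSizes(ImageFormat.YUV_420_888)`, and only request sizes that appear in the result. A sketch of the membership check, with sizes modeled as `int[]{width, height}` so the logic runs anywhere (the class and helper are illustrative, not part of any API):

```java
import java.util.List;

// Sketch: verify a requested capture size is actually listed by the device.
// On Android the list would come from StreamConfigurationMap.getOutputSizes();
// here it is modeled as a List of {width, height} pairs.
public class SizeCheck {
    public static boolean isListed(List<int[]> supported, int width, int height) {
        for (int[] s : supported) {
            if (s[0] == width && s[1] == height) {
                return true; // safe to request this size
            }
        }
        return false; // not advertised: output may be stretched or corrupted
    }
}
```

If 2048x1536 is absent from the listed sizes, the stretching above is the expected failure mode rather than a bug in your code.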
In my Android application, I use Camera2 API to allow the user to take a snapshot. The code mostly is straight out of standard Camera2 sample application.
When the image is available, it is obtained by calling acquireNextImage method:
public void onImageAvailable(ImageReader reader) {
    mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
}
In my case, when I obtain the width and height of the Image object, it reports it as 4160x3120. However, in reality, it is 3120x4160. The real size can be seen when I dump the buffer into a jpg file:
ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
// Dump "bytes" to file
For my purposes, I need the correct width and height. I'm wondering whether the width and height are swapped because the sensor orientation is 90 degrees.
If so, I can simply swap the dimensions when I detect that the sensor orientation is 90 or 270. I already have this value:
mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
I feel this is a general problem and not specific to my code. Regards.
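The swap described above is simple to express as a pure helper. A hedged sketch (class and method names are mine, for illustration only):

```java
// Sketch: swap the reported width/height when the sensor is mounted at
// 90 or 270 degrees, as proposed in the question.
// Returns {width, height} in display-oriented terms.
public class SensorSize {
    public static int[] orientedSize(int width, int height, int sensorOrientation) {
        if (sensorOrientation == 90 || sensorOrientation == 270) {
            return new int[]{height, width};
        }
        return new int[]{width, height};
    }
}
```

Note, per the edit below, that the reported size is in fact correct for the JPEG data as stored; the swap is only needed if you want dimensions as the viewer will display them after honoring EXIF orientation.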
Edit:
Turns out the image size reported is correct. JPEG metadata stores a field called "Orientation", and most image viewers know how to interpret it.
I have this code to preview an image loaded from the phone. The problem is that when I use the camera to take an image and load it into my app, it gets rotated 90 degrees. What could be causing this behavior? This is my code:
private void addImageToGrid(String selectedImageUri) {
    Bitmap bitmap = getBitmapFromPath(selectedImageUri);
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inSampleSize = 8;
    int bitmapWidth = 90;  // bitmap.getScaledWidth(density);
    int bitmapHeight = 80; // bitmap.getScaledHeight(density);
    ImageItem imageItem = new ImageItem();
    imageItem.setImage(Bitmap.createScaledBitmap(bitmap, bitmapWidth, bitmapHeight, false));
    imageItem.setUri(selectedImageUri);
    gridViewAdapter.add(imageItem);
}
While some Android devices (or camera apps) really produce a bitmap matching the device orientation (landscape vs. portrait), others always produce a landscape bitmap and only record the orientation in the EXIF data embedded in the JPEG file. It then depends on the viewer whether the image is displayed correctly.
Because some devices auto-rotate the camera image while others always use portrait or always landscape, you have to check whether the selected image is portrait or landscape by comparing its width and height.
So first check the bitmap's orientation and apply a rotation accordingly.
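A minimal sketch of the EXIF-based rotation the answer describes. The integer constants mirror android.media.ExifInterface (ORIENTATION_ROTATE_90 = 6, ORIENTATION_ROTATE_180 = 3, ORIENTATION_ROTATE_270 = 8); on Android you would read the tag with `new ExifInterface(imagePath).getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL)` and apply the degrees via `Matrix.postRotate()` and `Bitmap.createBitmap()`. The class name is mine.

```java
// Sketch: map an EXIF orientation tag value to rotation degrees.
// On Android, apply the result like:
//   Matrix m = new Matrix();
//   m.postRotate(degrees);
//   Bitmap fixed = Bitmap.createBitmap(src, 0, 0,
//           src.getWidth(), src.getHeight(), m, true);
public class ExifRotation {
    public static int exifToDegrees(int exifOrientation) {
        switch (exifOrientation) {
            case 6:  return 90;   // ExifInterface.ORIENTATION_ROTATE_90
            case 3:  return 180;  // ExifInterface.ORIENTATION_ROTATE_180
            case 8:  return 270;  // ExifInterface.ORIENTATION_ROTATE_270
            default: return 0;    // normal or undefined: no rotation needed
        }
    }
}
```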
I'm using Android's Camera and SurfaceView to show the preview image to the user before taking a picture. The preview appears fine on screen, but when I take the picture the resulting JPEG is corrupted (horizontal lines).
As a starting point I used Mark Murphy's Camera/Picture example which exhibits the same issue on the G2.
The camera parameters:
preview size: 800x480
picture format: JPEG
Both parameters are supported according to getSupportedPreviewSizes() and getSupportedPictureFormats().
The Nexus One, which has the same size screen, works correctly with the same parameters.
The G2 works correctly when setting the preview size to 640x480
My question: Has anyone else run into this issue before (corrupted image despite using supported settings)? How frequent is it? How did you work around it?
I had the same issue, and I managed to fix mine using
setPictureFormat(ImageFormat.JPEG);
in the camera parameter settings.
I recently found out that the camera's default picture output was YUV. That's why the preview looks perfect, but once the picture is taken you get a green, pink, or otherwise corrupted image output.
These are the ImageFormat values a camera might support:
JPEG = 256;
NV16 = 16;
NV21 = 17;
RAW10 = 37;
RAW_SENSOR = 32;
RGB_565 = 4;
UNKNOWN = 0;
YUV_420_888 = 35;
YUY2 = 20;
YV12 = 842094169;
So besides the supported size, in some cases we may need to check the picture format as well.
http://i.stack.imgur.com/PlHFH.png
These are sample images taken using ImageFormat YUV and then saved with JPEG compression.
But if you don't want to change the ImageFormat and would rather keep the YUV settings, you can save the image with this workaround:
Camera.Parameters parameters = camera.getParameters();
Camera.Size size = parameters.getPreviewSize();
YuvImage image = new YuvImage(data, parameters.getPreviewFormat(),
        size.width, size.height, null);
File file = new File(Environment.getExternalStoragePublicDirectory(
        Environment.DIRECTORY_PICTURES), "YuvImage.jpg");
FileOutputStream filecon = new FileOutputStream(file);
image.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 90,
        filecon);
filecon.close();
I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:
byte[] pic;
int pic_size;
Bitmap picframe;

public void onPreviewFrame(byte[] frame, Camera c)
{
    pic_size = mCamera.getParameters().getPreviewSize().height * mCamera.getParameters().getPreviewSize().width;
    pic = new byte[pic_size];
    for (int i = 0; i < pic_size; i++)
    {
        pic[i] = frame[i];
    }
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
The first [width*height] values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array, so how would I specify the width and height?
You can get extensive guidance from the OpenCV4Android SDK. Look into their available examples, specifically Tutorial 1 Basic. 0 Android Camera
But, as it was in my case, for intensive image processing this gets slower than acceptable for a real-time image-processing application.
A good replacement for their onPreviewFrame byte-array conversion is a YuvImage:
YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
Create a rectangle the same size as the image.
Create a ByteArrayOutputStream and pass this, the rectangle and the compression value to compressToJpeg():
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);
byte[] imageData = baos.toByteArray();
Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
Rendering these previewFrames on a surface and the best practices involved is a new dimension. =)
This very old post has caught my attention now.
The API available in 2011 was much more limited. Today one can use SurfaceTexture (see example) to preview the camera stream after (some) manipulation.
This is not an easy task with the current Android tools and APIs. In general, real-time image processing is better done at the NDK level. But to just show black and white, you can still do it in Java: the byte array containing the frame data is in YUV format, where the Y plane comes first, so taking just the Y plane (the first width x height bytes) already gives you the greyscale image.
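The Y-plane extraction described above can be sketched as a pure conversion to ARGB pixels; on Android you would hand the result to `Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888)`. The class name is mine, for illustration.

```java
// Sketch: turn the first width*height luminance bytes of an NV21 preview
// frame into opaque grey ARGB pixels (R = G = B = Y).
public class GrayscaleFrame {
    public static int[] yPlaneToArgb(byte[] frame, int width, int height) {
        int[] pixels = new int[width * height];
        for (int i = 0; i < pixels.length; i++) {
            int y = frame[i] & 0xFF; // luminance byte as unsigned 0..255
            pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y;
        }
        return pixels;
    }
}
```

This avoids the YuvImage-to-JPEG-to-Bitmap round trip entirely, which matters if you need every preview frame in real time.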
I achieved this through extensive work and trials. You can view the app on Google Play:
https://play.google.com/store/apps/details?id=com.nm.camerafx