How do I know if an image is rotated on the buffer? - android

I'm currently using an OnImageCaptured callback to get my image instead of saving it to the device. I'm having trouble understanding when it's necessary to rotate an image when it comes from an ImageProxy.
I use the following method to convert the data from an ImageProxy to a Bitmap:
...
val buffer: ByteBuffer = imageProxy.planes[0].buffer // Only first plane because of JPEG format.
val bytes = ByteArray(buffer.remaining())
buffer.get(bytes)
return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
The resulting bitmap is sometimes rotated and sometimes not, depending on the device the picture is taken on. ImageProxy.getImageInfo().getRotationDegrees() returns the correct rotation, but I don't know when it's necessary to apply it, since sometimes the rotation is already applied to the bitmap and sometimes it isn't.
The ImageCapture.OnCapturedImageListener documentation also says:
The image is provided as captured by the underlying ImageReader without rotation applied. rotationDegrees describes the magnitude of clockwise rotation, which if applied to the image will make it match the currently configured target rotation.
which leads me to think that I'm getting the bitmap incorrectly, because sometimes it has the rotation applied. Is there something I'm missing here?

Well, as it turns out, the only necessary information is the EXIF metadata. rotationDegrees contains the final orientation the image should end up in, starting from the base orientation, while the EXIF metadata only records the rotation needed to reach that final result. So rotating according to TAG_ORIENTATION solved the issue.
UPDATE: This was an issue with the CameraX library itself. It was fixed in 1.0.0-beta02, so now the EXIF metadata and rotationDegrees contain the same information.
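For illustration, a minimal sketch of applying rotationDegrees to the decoded bitmap with a Matrix, assuming the buffer arrives unrotated as the documentation quoted above describes (imageProxyToUprightBitmap is a hypothetical helper name, not from the original answer):

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.Matrix
import androidx.camera.core.ImageProxy

// Sketch: decode the JPEG plane, then apply the rotation CameraX reports.
// Assumes the JPEG bytes themselves carry no rotation (post-beta02 behavior).
fun imageProxyToUprightBitmap(imageProxy: ImageProxy): Bitmap {
    val buffer = imageProxy.planes[0].buffer // Only first plane because of JPEG format.
    val bytes = ByteArray(buffer.remaining())
    buffer.get(bytes)
    val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
    val degrees = imageProxy.imageInfo.rotationDegrees
    if (degrees == 0) return bitmap
    val matrix = Matrix().apply { postRotate(degrees.toFloat()) }
    return Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
}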

Related

Android BitmapFactory.decodeByteArray returns photo width/height swapped compared to iOS UIImage(data:)

I get a base64 string from the server, then I decode it into an image. On the Android side the image is rotated 270 degrees and its width/height are swapped compared to the iOS side. Any idea? Thank you. Here is the code:
Android code:
val decodedByteArray: ByteArray = Base64.decode(base64Str, Base64.DEFAULT)
val bitmap = BitmapFactory.decodeByteArray(decodedByteArray, 0, decodedByteArray.size)
Timber.i("image size: ${bitmap.width}, ${bitmap.height}")
iOS code:
if let photoData = Data(base64Encoded: base64Str, options: .ignoreUnknownCharacters),
   let photo = UIImage(data: photoData) {
    printLog("photo size: \(photo.size)")
    cell.ivPhoto.image = photo
}
Check whether your image has a rotation angle in its EXIF tag.
BitmapFactory does not perform any EXIF rotations/transformations at all.
Use the ExifInterface class to read that information. Remember to use the AndroidX variant instead of the deprecated framework one.
If that is the case, then you must rotate the image according to the EXIF angle.
This operation requires creating a new bitmap, so to avoid a potential OutOfMemoryError, use some of the tips provided in this link.
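A minimal sketch of that flow, assuming the AndroidX ExifInterface dependency (androidx.exifinterface) and the question's base64 input; decodeBase64Upright is a hypothetical helper name:

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.Matrix
import android.util.Base64
import androidx.exifinterface.media.ExifInterface
import java.io.ByteArrayInputStream

// Sketch: read the EXIF orientation from the decoded bytes, then rotate to match.
fun decodeBase64Upright(base64Str: String): Bitmap {
    val bytes = Base64.decode(base64Str, Base64.DEFAULT)
    val exif = ExifInterface(ByteArrayInputStream(bytes))
    val degrees = when (exif.getAttributeInt(
            ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL)) {
        ExifInterface.ORIENTATION_ROTATE_90 -> 90f
        ExifInterface.ORIENTATION_ROTATE_180 -> 180f
        ExifInterface.ORIENTATION_ROTATE_270 -> 270f
        else -> 0f
    }
    val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
    if (degrees == 0f) return bitmap
    val matrix = Matrix().apply { postRotate(degrees) }
    // Creating a new bitmap doubles peak memory; see the linked tips to avoid OutOfMemoryError.
    return Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
}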
Check the image's base64 data with any base64-to-image viewer. If it looks the same as the uploaded image, then it may be the Android side rotating the image when it converts the base64 data to an image. In that case, when you convert base64 to an image you can rotate it to a specific angle or to portrait.
Android's BitmapFactory does not rotate anything.
It is the iOS code that rotates the image into its original position using the orientation information.
BitmapFactory, for example, does not look at the orientation information in a JPEG. The image can arrive in any position, and the orientation information is needed to put it in the right orientation.
So in this case Android/BitmapFactory does not rotate or exchange anything.

How to obtain the correct dimensions from Camera2 and ImageReader?

In my Android application, I use the Camera2 API to allow the user to take a snapshot. The code is mostly straight out of the standard Camera2 sample application.
When the image is available, it is obtained by calling acquireNextImage method:
public void onImageAvailable(ImageReader reader) {
    mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
}
In my case, when I obtain the width and height of the Image object, it reports 4160x3120. However, in reality it is 3120x4160. The real size can be seen when I dump the buffer into a JPEG file:
ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
// Dump "bytes" to file
For my purpose, I need the correct width and height. Wondering if the width and height are getting swapped because the sensor orientation is 90 degrees.
If so, I can simply swap the dimensions if I detect that the sensor orientation is 90 or 270. I already have this value:
mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
I feel this is a general problem and not specific to my code. Regards.
Edit:
Turns out the image size reported is correct. JPEG image metadata stores a field called "Orientation". Most image viewers know how to interpret this value.
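A hedged sketch of checking that field straight from the captured buffer, using the AndroidX ExifInterface (readJpegOrientation is a hypothetical helper name):

import androidx.exifinterface.media.ExifInterface
import java.io.ByteArrayInputStream
import java.nio.ByteBuffer

// Sketch: read the JPEG's Orientation field directly from the ImageReader buffer.
// A value of ORIENTATION_ROTATE_90 or ORIENTATION_ROTATE_270 means viewers will
// display the 4160x3120 buffer as 3120x4160.
fun readJpegOrientation(buffer: ByteBuffer): Int {
    val bytes = ByteArray(buffer.remaining())
    buffer.get(bytes)
    return ExifInterface(ByteArrayInputStream(bytes))
        .getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_UNDEFINED)
}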

camera matrix original image

As far as I know, when you use the camera it crops some part of the image; the application cuts out the part of the photo that goes beyond the rectangle.
Is there any way to get the original, full-sized image received directly from the camera's sensor matrix?
Root access on my device is available.
I did a small demo years ago:
https://sourceforge.net/p/javaocr/code/HEAD/tree/trunk/demos/camera-utils/src/main/java/net/sf/javaocr/demos/android/utils/camera/CameraManager.java#l8
The basic idea is to set up a callback; your raw image data is then delivered via a byte array (getPreviewFrame() / onPreviewFrame) - no root access is necessary.
This data actually arrives as a memory-mapped buffer directly from the address space of the camera app.
Since this byte array does not provide any meta information, you have to get all the parameters from the Camera object yourself.
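A minimal Kotlin sketch of that setup with the deprecated android.hardware.Camera API (the one the linked demo uses); startRawPreview is a hypothetical helper, and a preview surface still has to be configured separately:

import android.hardware.Camera

// Sketch: raw frames are delivered as byte arrays (typically NV21); the byte array
// carries no metadata, so size and format come from the Camera's parameters.
@Suppress("DEPRECATION")
fun startRawPreview(camera: Camera) {
    val size = camera.parameters.previewSize
    val format = camera.parameters.previewFormat // an ImageFormat constant, e.g. NV21
    camera.setPreviewCallback { data, _ ->
        // data holds one unprocessed size.width x size.height frame in `format`.
    }
    camera.startPreview()
}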

Detect whether a loaded image was taken directly from the camera when using smartphones

I am using this HTML tag:
<input type = "file" />
On Android and on many phones I have the ability to get the file directly by taking a picture and saving it.
How can I know (in JavaScript code) how I got the picture (directly from the camera, or from a file already on my phone)?
I did some investigating and found exif.js (http://www.nihilogic.dk/labs/exif/exif.js), but I didn't succeed in using it for dynamically loaded images, the way the site http://exif-viewer.com/ does.
I need some source code examples to understand how EXIF works on dynamically loaded images.
Thanks :)
I found the solution myself, so I want to share it:
What I needed was to translate the binary data into EXIF data, so in exif.js I added the following:
jQuery.fn.getExif = function() {
    var exif;
    var bin;
    var bf;
    // Strip the data-URI prefix and decode the base64 payload into a binary string.
    bin = atob(this.attr("src").split(',')[1]);
    if (bin) {
        bf = new BinaryFile(bin);
    }
    if (bf) {
        exif = EXIF.readFromBinaryFile(bf);
    }
    if (exif) {
        this.attr("exifdata", exif);
    }
    return exif;
};
and use the above in code - just get any EXIF value I want.
The main issue is that the image should be rotated according to EXIF (if, for example, the orientation is 90 degrees clockwise, I should rotate 90 degrees counterclockwise to fix the orientation). That is no problem on most devices,
but a problem persists on several devices, such as the iPad.
The iPad (or Safari - I don't know exactly where the problem lies) does me a favour and auto-rotates the image when I load it from a file, so it is always displayed correctly.
Now how can I know when to rotate the image and when not to rotate it?
Thanks :)

Android Take Photo successfully

This might sound like a strange/silly question. But hear me out.
Android applications are, at least on the T-Mobile G1, limited to 16 MB of heap.
And it takes 4 bytes per pixel to store an image (in Bitmap form):
public void onPictureTaken(byte[] _data, Camera _camera) {
    Bitmap temp = BitmapFactory.decodeByteArray(_data, 0, _data.length);
}
So one image at 6 megapixels takes up 24 MB of heap. (Cue memory overflow.)
Now I am very much aware of the ability to decode with parameters, to effectively reduce the size of the image. I even have a method which will scale it down to a desired size.
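A minimal sketch of such a subsampled decode (decodeScaled is a hypothetical name; inSampleSize = 4 decodes each dimension at a quarter size, so the 24 MB figure drops to roughly 1.5 MB):

import android.graphics.Bitmap
import android.graphics.BitmapFactory

// Sketch: subsampled decode. Each dimension is divided by inSampleSize,
// so memory drops by inSampleSize squared (24 MB / 16 ≈ 1.5 MB here).
fun decodeScaled(data: ByteArray): Bitmap {
    val opts = BitmapFactory.Options().apply { inSampleSize = 4 }
    return BitmapFactory.decodeByteArray(data, 0, data.size, opts)
}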
But what about the scenario where I want to use the camera as a quality camera?
I have no idea how to get this image into the database. As soon as I decode, it errors.
Note: I need(?) to convert it to Bitmap so that I can rotate it before storing it.
So to sum it up:
Limited to 16MB of heap
Image takes up 24MB of heap
Not enough space to take and manipulate an image
This doesn't address the problem, but I recommend it as a starting point for others who are just loading images from a location:
Displaying Bitmaps on android
I can only think of a couple of things that might help, none of them optimal:
Do your rotations server-side.
Store the data from the capture directly to the SD card without decoding it, then rotate it chunk by chunk using the file system, then send that to your DB. There are lots of examples on the web (if your angles are simple: 90, 180, etc.), though this would be time-consuming since IO operations against SD cards are not exactly fast.
When you decode, drop the alpha channel. This may not solve your issue, though, and if you are using a matrix to rotate the image then you would need a target/source anyway:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565;
// Decode the raw camera bytes into a bitmap with no alpha channel
bmp = BitmapFactory.decodeByteArray(raw, 0, raw.length, opt);
There may be a better way to do this, but since your device is so limited in heap, I can't think of any.
It would be nice if there were an optional file-based matrix method (which in general is what I am suggesting as option 2) or some kind of "paging" system for Android, but that's the best I can come up with.
First save it to the filesystem, then do your operations with the file from the filesystem...
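A hedged sketch of that file-first route: write the raw JPEG bytes untouched, then record the desired rotation in the file's EXIF Orientation tag instead of decoding and rotating pixels in memory (saveRotatedByMetadata is a hypothetical helper using the AndroidX ExifInterface; it assumes downstream viewers honour the Orientation tag):

import androidx.exifinterface.media.ExifInterface
import java.io.File

// Sketch: persist the capture without ever creating a Bitmap, then store the
// rotation as metadata. No 24 MB decode is needed.
fun saveRotatedByMetadata(data: ByteArray, file: File) {
    file.writeBytes(data) // raw JPEG straight from onPictureTaken
    val exif = ExifInterface(file.absolutePath)
    exif.setAttribute(ExifInterface.TAG_ORIENTATION,
        ExifInterface.ORIENTATION_ROTATE_90.toString()) // e.g. 90 degrees clockwise
    exif.saveAttributes()
}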
