Android Image Rotated and Resized

I looked everywhere for a solution but couldn't find anything useful. I'm using a SurfaceView inside a FrameLayout for a custom in-app camera. I have a button that calls mCamera.takePicture(); in the callback, I save the picture as-is. The problem is that the saved picture is rotated, flipped, and scaled at a weird ratio.
I know this is happening mostly because I set the camera parameters' preview size to an optimal size and rotated the display using mCamera.setDisplayOrientation(). The preview looks beautiful, but when I take a picture and save it, that's where it gets messed up.
Camera.Parameters cp = mCamera.getParameters();
// pick the supported preview size that best fits the surface dimensions
Camera.Size theSize = getOptimalPreviewSize(opt, w, h);
cp.setPreviewSize(theSize.width, theSize.height);
// rotates the on-screen preview only; the captured JPEG is unaffected
mCamera.setDisplayOrientation(90);
mCamera.setParameters(cp);
The above code runs in the surfaceChanged() callback.
How do I make the saved picture match what's shown in the preview?
[Screenshot of the camera preview: size fine, rotation fine.]
[Screenshot after the picture has been taken: note the monitor, the collection of CDs, and the plug; everything is squashed and the monitor looks like a wave.]

The image returned in onPictureTaken() is not affected by Camera.setDisplayOrientation(). It arrives JPEG-compressed, so you can set the JPEG rotation via the Exif header without decoding it. This is what Camera.Parameters.setRotation() often does, although on some devices it actually performs the rotation in hardware. This is the most efficient method, but some viewers ignore the Exif tag and will still show a rotated image.
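For illustration, a minimal Java sketch of the header-only approach, assuming the androidx ExifInterface library and a JPEG already saved to jpegPath (a placeholder name):

import androidx.exifinterface.media.ExifInterface;
import java.io.IOException;

// Mark the saved JPEG as rotated 90 degrees without re-encoding the pixel data.
void tagRotation90(String jpegPath) throws IOException {
    ExifInterface exif = new ExifInterface(jpegPath);
    exif.setAttribute(ExifInterface.TAG_ORIENTATION,
            String.valueOf(ExifInterface.ORIENTATION_ROTATE_90));
    exif.saveAttributes(); // rewrites only the Exif header
}

Viewers that honor Exif will then display the image upright; as noted, some will not.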
Alternatively, you can use lossless JPEG rotation, as done by jpegtran.
On SourceForge there is an open-source Java class for this, LLJTran; an Android port is available on GitHub.

Related

How can I crop the live feed from the camera on each frame on Android?

I have a live video preview from the camera which I want to crop.
But the cropping window changes frame by frame (both its size and its position).
How could I achieve this on Android (Flutter, Kotlin, Java, NativeScript... it doesn't matter) so that I can show the live cropping result in a view and also save the result to a file?
I don't want finished code; I just don't know which libraries and APIs to use (links to documentation welcome) and how the concept should be approached for this problem.
How do I crop the live video from the camera frame by frame, preview it, and save it to a file?
I created (partially) what I want in JavaScript, just to show what I mean. In JavaScript I use an HTML video tag (which can take the feed from the webcam), then I create a canvas and can read each frame of the video tag as data and select exactly what I want from frame to frame.
let a = 10;
processor.computeFrame = function computeFrame() {
  // moving crop window, shifted from left to right frame by frame
  let frame = ctx.getImageData(a, 0, this.width * 0.5, this.height);
  ctx2.putImageData(frame, 0, 0); // draw the crop back into the view (second canvas)
  a += 0.1; // move the window
};
How can I solve this in Android?
I don't expect fully working code... if you have any good articles, tutorials or documentation where I should start reading to achieve this, that would be perfect!
Interesting articles:
https://engineering.depop.com/android-square-video-cropping-59b5edd69bce
https://www.programmersought.com/article/3222812179/
If you just want to crop the preview, you can use a TextureView and update its transform matrix to match your desired crop rectangle.
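A sketch of such a transform, assuming the crop region is a RectF in view coordinates (cropRect, viewWidth, and viewHeight are placeholder names):

// map cropRect onto the full view: scale it up, then shift its
// top-left corner to the view origin
Matrix matrix = new Matrix();
float scaleX = viewWidth / cropRect.width();
float scaleY = viewHeight / cropRect.height();
matrix.setScale(scaleX, scaleY);
matrix.postTranslate(-cropRect.left * scaleX, -cropRect.top * scaleY);
textureView.setTransform(matrix);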
If all you need is an occasional screen grab at the same resolution as the preview, you can use TextureView.getBitmap() to read the cropped view into a Bitmap and then save it.
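For example (a sketch; file is a placeholder destination):

Bitmap shot = textureView.getBitmap(); // captures the view content, as suggested above
try (FileOutputStream out = new FileOutputStream(file)) {
    shot.compress(Bitmap.CompressFormat.JPEG, 90, out);
}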
If you need a higher-resolution or higher-quality capture, you'll need a separate output from the camera. Options are either capturing JPEG, decoding it, cropping the resulting Bitmap, and re-encoding it (assuming you want to save a JPEG, anyway), or capturing uncompressed YUV, cropping it, and then converting it to a Bitmap.
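The JPEG path could look roughly like this (jpegBytes would come from the capture callback; the crop bounds are placeholders):

// decode the captured JPEG, cut out the crop region, and re-encode it
Bitmap full = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
Bitmap cropped = Bitmap.createBitmap(full, cropX, cropY, cropWidth, cropHeight);
try (FileOutputStream out = new FileOutputStream(file)) {
    cropped.compress(Bitmap.CompressFormat.JPEG, 95, out);
}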
The CameraX Android support library might save you a lot of time in setting up the high-quality path, though I think you still need to do the cropping yourself for the output of the ImageCapture use case.
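A hedged sketch of that CameraX path (photoFile and context are placeholders; the use case must be bound to a lifecycle before capturing):

ImageCapture imageCapture = new ImageCapture.Builder().build();
// ... bind imageCapture to a lifecycle via a ProcessCameraProvider ...
ImageCapture.OutputFileOptions options =
        new ImageCapture.OutputFileOptions.Builder(photoFile).build();
imageCapture.takePicture(options, ContextCompat.getMainExecutor(context),
        new ImageCapture.OnImageSavedCallback() {
            @Override
            public void onImageSaved(ImageCapture.OutputFileResults results) {
                // decode photoFile, crop it, and re-encode, as in the JPEG path above
            }
            @Override
            public void onError(ImageCaptureException exception) { /* handle */ }
        });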

How do I make the Camera2 API behave like the standard camera?

If I use the standard camera via an Intent to capture an image:
Open the camera:
val takePicture = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
if (takePicture.resolveActivity(packageManager) != null) {
    startActivityForResult(takePicture, TAKE_PICTURE)
}
Camera preview
Result displayed in an ImageView
If I use the Camera2 API from the Camera2Basic sample (https://github.com/googlesamples/android-Camera2Basic):
The resulting image is not displayed in portrait.
Camera2 preview
Result displayed in an ImageView
My code for displaying the image:
val bitmap = BitmapFactory.decodeFile(file.toString())
imagePreview.setImageBitmap(bitmap)
How do I set up the Camera2 API so that the result displays the same as with the standard camera?
In the Camera2Basic example, ImageSaver does not rotate the captured JPEG according to the device orientation. Instead, Camera2BasicFragment.captureStillPicture() sets CaptureRequest.JPEG_ORIENTATION, which is only a recommendation to the camera firmware:
Camera devices may either encode this value into the JPEG EXIF header, or rotate the image data to match this orientation. When the image data is rotated, the thumbnail data will also be rotated.
Most often this recommendation 'only' sets the header, but some devices miss even that; see a recent article on this feature and its reliability.
Note that the EXIF orientation tag is not respected by all viewer software, which is why the stock camera applications often rotate the actual JPEG data to the default orientation.
Your code that loads the captured picture into the ImageView currently ignores this tag. You can use ExifInterface.getAttributeInt(ExifInterface.TAG_ORIENTATION, ...) to extract the orientation from the file or input stream; or, if you capture an image and immediately display it, you can get the device orientation directly from the sensor. Then decide whether the camera stored the image as portrait (i.e. width smaller than height) or as landscape; in the latter case it's your duty to rotate it for display. Don't rotate the bitmap itself according to this orientation; instead, call imagePreview.setImageMatrix() to display the image correctly.
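A rough sketch of that, in Java for consistency with the other snippets here (view fitting and the other Exif orientation values are omitted for brevity):

ExifInterface exif = new ExifInterface(file.getAbsolutePath());
int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION,
        ExifInterface.ORIENTATION_NORMAL);
Matrix matrix = new Matrix();
if (orientation == ExifInterface.ORIENTATION_ROTATE_90) {
    // rotate for display around the bitmap center; add scaling to fit the view
    matrix.postRotate(90, bitmap.getWidth() / 2f, bitmap.getHeight() / 2f);
}
imagePreview.setScaleType(ImageView.ScaleType.MATRIX);
imagePreview.setImageMatrix(matrix);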
By the way, please don't decode the JPEG into a full-scale bitmap in memory if you only need to pass it to your ImageView: this may consume too much RAM. The easiest one-liner is to call setImageURI() instead.
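That is, roughly:

imagePreview.setImageURI(Uri.fromFile(file)); // instead of decodeFile() + setImageBitmap()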

Android camera ensure fast response onPictureTaken

I have created a custom camera activity (pretty much following the Android tutorial).
I implemented a SurfaceView for the preview and an ImageView to display the image received in onPictureTaken().
The picture takes a while to show up and differs from the preview in aspect ratio, size, lighting and white balance (probably because I didn't set the parameters properly). The quality of the picture doesn't really matter to me; I just want the app to be responsive, i.e. the still picture shows up immediately and is identical to the preview.
So what I ended up doing was removing the ImageView and keeping only the SurfaceView. When the camera button is clicked, I call stopPreview(), roughly as sketched below. However, now I don't know how to save the SurfaceView content to a bitmap/file. From what I've read, there isn't really a way to get a SurfaceView to return a bitmap.
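The click handler is just something like this (captureButton and mCamera being my fields):

// freeze the preview; the surface keeps showing the last frame
captureButton.setOnClickListener(v -> mCamera.stopPreview());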
What would be a better way to implement this camera? To reiterate: I need to get a bitmap that is identical to the preview, and have that still image shown to the user immediately.

How is the camera preview connected with the final image output?

I've always been under the impression that the preview and the final output are not connected in any way, meaning that I can set the preview to some arbitrary dimensions and the final JPG will be whatever resolution I set in the parameters. But I just ran into a very odd situation where the image data coming back in the byte[] of the JPG callback differs depending on what dimensions I set my preview to.
Can someone enlighten me on what actual relationship the preview has on the final JPG? (or point me to documentation on said relationship).
TIA
[Edit]
As per ravi's answer, this was my assumption as well; however, I see no alternative but to surmise that they are, in fact, directly connected, based on the evidence. I'll post code if necessary (though there's a lot of it), but here's what I'm doing.
I have a preview screen where the user takes a photo of themselves. I then display the captured picture (from the JPG callback's data) in a subsequent drawing view and let them trace a shape over their photo. I then pass the points of their polygon to a class that cuts that shape out of the original image and gives back the cut image.
All of this works, BUT depending on how I present the PREVIEW, the polygon-cutting class crashes with an array index out of bounds as it tries to access pixels on the final image that simply don't exist. This effect is produced EXCLUSIVELY by altering the shape of the preview View's dimensions. I'm not altering ANYTHING else in the code, and yet, just by mis-shaping my preview view, I can reproduce this error 100% of the time.
I can't see an explanation other than that the preview and the final image are directly connected somehow, since I never operate on the preview's data; I only display it in a SurfaceView and then deal exclusively with the data from the JPG callback after the user has taken their photo.
There is no relation between the preview resolution and the final image that is captured.
They are completely independent (at least for still image capture); the preview resolution and aspect ratio are not tied to the final image resolution and aspect ratio in any way.
In the camera application I have written, the preview is always VGA, but the image I capture varies from 5 MP to VGA (depending on the device's capability).
Perhaps if you can explain the situation further, it would be more helpful.
We are currently developing a camera application and face very similar problems. In our case, we want to display a 16:9 preview while capturing a 4:3 picture. On most devices this works without any problems, but on some (e.g. Galaxy Nexus, LG Optimus 3D) the output picture depends on the preview you've chosen: on those devices the resulting pictures are distorted whenever the preview ratio differs from the picture ratio.
We tried to fix this by changing the preview resolution to a matching one just before capturing the image, roughly as below. But this does not work on some devices: an error occurs when the preview is started again after capturing has finished.
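The attempt was essentially this (a sketch; matchWidth and matchHeight stand for a preview size matching the picture's aspect ratio, picked from getSupportedPreviewSizes()):

mCamera.stopPreview();
Camera.Parameters params = mCamera.getParameters();
params.setPreviewSize(matchWidth, matchHeight); // switch to the picture's aspect ratio
mCamera.setParameters(params);
mCamera.startPreview(); // this restart is what fails on some devices
mCamera.takePicture(null, null, jpegCallback);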
We also tried to fix it by enlarging the SurfaceView to full-screen width and "beyond full-screen" height, to make a 16:9 preview out of a 4:3 one. But this does not work either, because a SurfaceView cannot be taller than the screen height.
So there IS a connection on SOME devices, and we really want to know how to fix or work around this.

How does Android handle differences between the preview size/ratio and the actual SurfaceView size?

I'm writing a small Android app where a user can place an image inside the live preview of the camera and take a picture of this. The app then combines the two images appropriately; all of this is working fine.
I understand you can get/set the preview size using Camera.getParameters(); I assume this is related to the size of the real-time "camera feed".
However, the size of my SurfaceView where the camera preview is shown differs from the reported (and used) preview sizes. For example, in the emulator my available SurfaceView happens to be 360x215, while the preview size is 320x240. Still, the entire SurfaceView is filled with the preview.
But the picture that's generated in the end is (also?) 320x240. How does Android compensate for these differences in size and aspect ratio? Is the image truncated?
Or am I simply misunderstanding what the preview size is about: is it related to the size of the generated pictures, or to the real-time preview that's projected onto the SurfaceView? Are there any non-trivial Camera examples that deal with this?
I need to know what transformation takes place so that I can, eventually, copy/scale my image correctly into the photo; hence these questions.
I am trying to figure this out myself. Here is what I have found so far.
The SurfaceView has an internal surface, called mSurface in the sources, which is actually used as the camera feed and the encoder feed. So this buffer has to be the actual size at which you want to record.
You can set the size of this surface independently of the SurfaceView's on-screen size using the setFixedSize() method, as sketched below.
Say you want to perform an HD recording, so the buffer needs to be 1280x720, but your SurfaceView can't be that big (assuming you are running on a phone with a WVGA screen). So you set the SurfaceView to a smaller resolution that maintains the same aspect ratio.
Android then resizes the HD buffer to the SurfaceView's resolution for display; no cropping is done, it is simply scaled down.
At this point the internal surface and the preview size you set on the camera have the same resolution, and hence the resulting video recording will also be at that resolution.
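A small sketch of that decoupling (assuming a plain SurfaceView and HD buffers):

SurfaceHolder holder = surfaceView.getHolder();
// buffer (camera/encoder) resolution; the view scales it to its own on-screen size
holder.setFixedSize(1280, 720);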
That being said, I am still struggling to get my VGA recorder to work on a Nexus S; it works on an LG Maha device. :)
