I am making an app using the Camera API. Although I can capture images, they are not oriented properly. I already tried using the screen orientation to correctly orient the images, but it doesn't work on all devices, especially with the front camera. I am new to Android development; any help would be appreciated. Thank you.
You can get the rotation of the captured image using ExifInterface and create another bitmap with the rotation fixed - here you can find an example of usage.
Moreover, I recommend using the inBitmap option so the existing bitmap's memory is reused when creating the rotated one.
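A minimal sketch of that approach (photoPath is a hypothetical path to the saved JPEG):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Matrix;
import android.media.ExifInterface;
import java.io.IOException;

static Bitmap loadUpright(String photoPath) throws IOException {
    // Read the EXIF orientation tag the camera wrote into the JPEG.
    ExifInterface exif = new ExifInterface(photoPath);
    int orientation = exif.getAttributeInt(
            ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);

    int degrees = 0;
    switch (orientation) {
        case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
        case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
        case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
    }

    Bitmap source = BitmapFactory.decodeFile(photoPath);
    if (degrees == 0) {
        return source; // already upright
    }
    // Create a new bitmap with the rotation baked in.
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    Bitmap rotated = Bitmap.createBitmap(
            source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
    source.recycle();
    return rotated;
}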
The Camera1 and Camera2 APIs are tricky ones. It is useful to know how they work inside, but there are plenty of ready-made solutions. I can recommend the following:
https://camerakit.io - has fixes for rotation issues built in, but is currently in beta; supports Camera2 features.
https://github.com/RedApparat/Fotoapparat - based on Camera1.
https://github.com/natario1/CameraView - based on Camera1, can capture video.
I have tried writing my own code for accessing the camera via the camera2 API on Android instead of using Google's example. On one hand, I've wasted way too much time understanding what exactly is going on, but on the other hand, I have noticed something quite weird:
I want the camera to produce vertical images. However, despite the fact that the ImageReader is initialized with height larger than width, the Image that I get in the onCaptureCompleted has the same size, except it is rotated 90 degrees. I was struggling trying to understand my mistake, so I went exploring Google's code. And what I have found is that their images are rotated 90 degrees as well! They compensate for that by setting a JPEG_ORIENTATION key in the CaptureRequestBuilder (if you comment that single line, the images that get saved will be rotated). This is unrelated to device orientation - in my case, screen rotation is disabled for the app entirely.
The problem is that for the purposes of the app I am making I need non-compressed, precise data from the camera, and since JPEG a) compresses images and b) does so lossily, I cannot use it. Instead I use the YUV_420_888 format, which I later convert to a Bitmap. But while the JPEG_ORIENTATION key can fix the orientation for JPEG images, it seems to do nothing for YUV ones. So how do I get the images to be correctly rotated?
One obvious solution is to rotate the resulting Bitmap, but I'm unsure what angle I should rotate it by on different devices. And more importantly, what causes such strange behavior?
Update: rotating the Bitmap and scaling it to the proper size takes way too much time for the preview (the context is as follows: I need both high-res images from the camera to process and a downscaled version of the same images for the preview. Let's just say I'm making something similar to QR code recognition). I have even tried using RenderScript to manipulate the image efficiently, but this is still too slow. Also, I've read here that when I set multiple output surfaces simultaneously, the same resolution will be used for all of them, which is quite bad for me.
Android stores every image, no matter whether it was taken in landscape or portrait, in landscape mode. It also stores metadata that tells you whether the image should be displayed in portrait or landscape.
If you don't turn the image according to the metadata, you will end up with every image in landscape. I had that problem too (but I wanted compression, so my solution doesn't work for you).
You need to read the metadata and turn the image accordingly.
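For YUV buffers there is no EXIF header to read, though, so with camera2 the angle is usually computed from the sensor's mounting orientation instead. A sketch following the formula given in the JPEG_ORIENTATION documentation (deviceOrientation is assumed to come from an OrientationEventListener; with screen rotation disabled you can also just pass 0):

import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraMetadata;
import android.view.OrientationEventListener;

// Degrees the frame must be rotated clockwise to appear upright.
static int getRotationDegrees(CameraCharacteristics c, int deviceOrientation) {
    if (deviceOrientation == OrientationEventListener.ORIENTATION_UNKNOWN) return 0;
    int sensorOrientation = c.get(CameraCharacteristics.SENSOR_ORIENTATION);
    // Round the device orientation to the nearest multiple of 90 degrees.
    deviceOrientation = (deviceOrientation + 45) / 90 * 90;
    // Front-facing cameras are mirrored, so the sign flips.
    boolean facingFront = c.get(CameraCharacteristics.LENS_FACING)
            == CameraMetadata.LENS_FACING_FRONT;
    if (facingFront) deviceOrientation = -deviceOrientation;
    return (sensorOrientation + deviceOrientation + 360) % 360;
}

This also explains the 90-degree offset in the question: on most phones the sensor is mounted with a SENSOR_ORIENTATION of 90, so the raw buffer always comes out in landscape.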
I hope this helps at least a bit.
In my app I'm trying to use ArCore as sort of a "camera assistant" in a custom camera view.
To be clear - I want to display the AR content for the user in the camera view, but have them capture images that don't contain the AR models.
From what I understand, in order to capture an image with ArCore I'll have to use the Camera2 API which is enabled by configuring the session to use the "shared Camera".
However, I can't seem to configure the camera to use any high-end resolutions (I'm using pixel 3 so I should be able to go as high as 12MP).
In the "shared camera example", they toggle between Camera2 and ArCore (a shame there's no API for CameraX) and it has several problems:
In the ArCore mode the image is blurry (I assume that's because the depth sensor is disabled as stated in their documentation)
In the Camera2 mode I can't enhance the resolution at all.
I can't use the Camera2 API to capture an image while displaying models from ArCore.
Is this requirement at all possible at the moment?
I have not yet worked with the shared camera in ARCore, but I can say a few things regarding the main point of your question.
In ARCore you can configure both CPU image size and GPU image size. You can do that by checking all available camera configurations (available through Session.getSupportedCameraConfigs(CameraConfigFilter cameraConfigFilter)) and selecting your preferred one by passing it back to the ARCore Session. On each CameraConfig you can check which CPU image size and GPU texture size you will get.
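A sketch of that selection (picking the config with the largest CPU image is only an illustration; as noted below, a large CPU image hurts performance):

import android.util.Size;
import com.google.ar.core.CameraConfig;
import com.google.ar.core.CameraConfigFilter;
import com.google.ar.core.Session;
import java.util.List;

// Picks the supported CameraConfig with the largest CPU image (illustrative only).
static void pickLargestCpuImage(Session session) {
    CameraConfigFilter filter = new CameraConfigFilter(session);
    List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);

    CameraConfig chosen = configs.get(0);
    for (CameraConfig config : configs) {
        Size cpu = config.getImageSize();   // CPU-accessible image size
        Size best = chosen.getImageSize();  // getTextureSize() gives the GPU side
        if (cpu.getWidth() * cpu.getHeight() > best.getWidth() * best.getHeight()) {
            chosen = config;
        }
    }
    // Must be applied while the session is paused; it takes effect on resume.
    session.setCameraConfig(chosen);
}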
Probably you are currently using (maybe by default?) a CameraConfig with the lowest CPU image size, 640x480 pixels if I remember correctly, so yes, it definitely looks blurry when rendered (but that has nothing to do with the depth sensor).
Sounds like you could just select a higher CPU image and you're good to go... but unfortunately that's not the case because that configuration applies to every frame. Getting higher resolution CPU images will result in much lower performance. When I tested this I got about 3-4 frames per second on my test device, definitely not ideal.
So now what? I think you have 2 options:
Pause the ARCore session, switch to a higher CPU image for 1 frame, get the image and switch back to the "normal" configuration.
Probably you are already getting a nice GPU image, maybe not the best due to the camera preview, but hopefully good enough? Not sure how you are rendering it, but with some OpenGL skills you can copy that texture. Not directly, of course, because of the whole GL_TEXTURE_EXTERNAL_OES thing... but rendering it onto another framebuffer and then reading the texture attached to it could work; a sketch of that copy follows below. Of course you might need to deal with texture coordinates yourself (full image vs visible area), but that's another topic.
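A sketch of the copy, assuming an EGL context is current; width, height and oesTextureId describe the camera texture, and drawExternalTexture(...) is a hypothetical helper that draws a full-screen quad with a samplerExternalOES fragment shader:

import android.opengl.GLES20;
import java.nio.ByteBuffer;

// Copy the camera's GL_TEXTURE_EXTERNAL_OES texture into a readable RGBA texture.
int[] ids = new int[2];
GLES20.glGenFramebuffers(1, ids, 0);
GLES20.glGenTextures(1, ids, 1);
int fbo = ids[0];
int copyTex = ids[1];

// Allocate a plain 2D texture to receive the copy.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, copyTex);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Attach it to a framebuffer and render the OES texture into it.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, copyTex, 0);
GLES20.glViewport(0, 0, width, height);
drawExternalTexture(oesTextureId); // hypothetical full-screen-quad helper

// Optionally read the pixels back to the CPU.
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

Note that glReadPixels returns the image bottom-up, so you may need a vertical flip on top of the texture-coordinate handling mentioned above.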
Regarding CameraX, note that it wraps the Camera2 API in order to provide camera use cases so that app developers don't have to worry about the camera lifecycle. As I understand it, CameraX would not be suitable for ARCore, as I imagine they need full control of the camera.
I hope that helps a bit!
libstreaming works fine - in landscape mode. Unfortunately, my app will have to run in portrait mode. No problem for the preview window - I can set it upright with
session.setPreviewOrientation(90);
But the receiver of the stream will still have the video sideways. Is there a solution for that?
In Android's MediaRecorder, there is a method
setOrientationHint(int degrees)
that will rotate the streamed/recorded video. But I did not find anything like that in libstreaming...
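For reference, the MediaRecorder call is just a hint (a sketch; it writes a rotation flag into the recorded file's container rather than rotating the actual frames, which is presumably why a raw RTP stream like libstreaming's has no equivalent):

import android.media.MediaRecorder;

MediaRecorder recorder = new MediaRecorder();
// ... configure source, output format and encoders ...
// Must be called before prepare(); tells players to display the video rotated.
recorder.setOrientationHint(90);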
I have encountered this problem before. There are 3 possible solutions that I implemented.
Convert each YUV frame that comes out of the onPreviewFrame callback to a bitmap, rotate the bitmap, and finally convert the bitmap back to a YUV frame. The disadvantage of this solution is that a lot of video frames get dropped (in my case, from 24 down to 4-5 FPS).
Rotate each YUV frame 90/270 degrees clockwise (based on the orientation of the camera) using code from here; a pure-Java sketch of this appears after this list. The disadvantages of this solution are that images can be distorted and video frames might be dropped as well.
Use an open-source library from Google called libyuv. If you have knowledge of JNI, it will be easy for you. BTW, you can see demos of this lib here and here. Because the rotation is handled at the native layer, this is an efficient solution and the best one so far.
Hope this info is helpful to anyone who is encountering this problem.
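A minimal pure-Java sketch of option 2, assuming the preview format is NV21 (the default for onPreviewFrame); libyuv does the same job natively and much faster:

// Rotates an NV21 frame 90 degrees clockwise; the result is height x width.
static byte[] rotateNV21Cw90(byte[] input, int width, int height) {
    byte[] output = new byte[input.length];
    int i = 0;
    // Rotate the Y (luma) plane.
    for (int x = 0; x < width; x++) {
        for (int y = height - 1; y >= 0; y--) {
            output[i++] = input[y * width + x];
        }
    }
    // Rotate the interleaved VU (chroma) plane, which is 2x2 subsampled.
    int size = width * height;
    i = size;
    for (int x = 0; x < width; x += 2) {
        for (int y = height / 2 - 1; y >= 0; y--) {
            output[i++] = input[size + y * width + x];
            output[i++] = input[size + y * width + x + 1];
        }
    }
    return output;
}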
Change the video quality from (320,240) to (1280,720). Then, once you start streaming, it should be changed and rotated. BTW, how did you implement your receiver?
How do I make the front camera NOT produce a mirror image? I can't find the interface that controls how the image is mirrored.
I am using Android 4.2.
There's a post here that describes how to mirror the image; it uses a Matrix to flip it. However, I'd try it out on a variety of devices first - I hadn't heard that about the front camera and would be surprised if it weren't a device-specific thing.
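The Matrix trick boils down to a horizontal flip; a sketch applied to a decoded capture (source is a hypothetical Bitmap of the captured image):

import android.graphics.Bitmap;
import android.graphics.Matrix;

// A negative x-scale mirrors the bitmap horizontally, undoing the front camera's mirroring.
Matrix matrix = new Matrix();
matrix.preScale(-1.0f, 1.0f);
Bitmap flipped = Bitmap.createBitmap(
        source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);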
I'm using this camera code to ask the camera to rotate the captured image data:
Camera.Parameters params = camera.getParameters();
// "rotation" is the string key used before setRotation() existed.
params.set("rotation", 90);
camera.setParameters(params);
This seems to work on all phones except the Droid. Has anyone else seen this? The image data is always landscape; however, the native camera app on the Droid produces portrait images OK.
I wonder if the Droid will only respect the new Camera.Parameters.setRotation() method, but this seems to only be available in API level 5?
setRotation also didn't seem to work for me on the Nexus One, but I did get image rotation to work by following the example of the Android Camera app itself.
The source code is available here:
https://android.googlesource.com/platform/packages/apps/Camera
Start with Camera.java, but you'll also be looking at ImageManager.java, Util.java, and other files.
The basic idea is: you listen for orientation changes and capture what the orientation is at the time you snap the picture. Then, when you get the picture bytes in the callback, you manipulate the bitmap, performing a rotation on it, and convert the rotated bitmap back to a JPEG. When you are done, you'll find you've had to copy a shocking amount of code from the Camera app just for this rotation.
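A compressed sketch of that flow; captureOrientation and outputFile are hypothetical fields, the former kept up to date by an OrientationEventListener and sampled at shutter time, as the Camera app does:

Camera.PictureCallback callback = new Camera.PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // Decode the JPEG bytes and bake in the remembered rotation.
        Bitmap source = BitmapFactory.decodeByteArray(data, 0, data.length);
        Matrix matrix = new Matrix();
        matrix.postRotate(captureOrientation); // value captured at shutter time
        Bitmap rotated = Bitmap.createBitmap(
                source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
        try {
            // Re-encode the rotated bitmap back to JPEG.
            FileOutputStream out = new FileOutputStream(outputFile);
            rotated.compress(Bitmap.CompressFormat.JPEG, 90, out);
            out.close();
        } catch (IOException e) {
            // handle the write failure
        }
    }
};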
The rotation may just be stored in the JPEG EXIF header, as explained in the setRotation documentation. On the Droid that is actually the case. You can use JPEG header reading tools like jhead to verify this. You can also use the ExifInterface API to read the orientation tag in your program.
The Droid runs Android 2.0 (well, now 2.0.1) which are API levels 5 and 6, respectively.
So it's quite possible that the Droid only respects the (more sensible) 2.0+ API for rotation.
However, I guess your concern is compatibility across a range of device types and OS versions, so I imagine you would have to invoke the 2.0+ API via reflection after detecting the OS version (using android.os.Build.VERSION_CODES).
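A sketch of that reflection guard (the version check and fallback are assumptions; exact per-device behavior is untested):

import android.hardware.Camera;
import android.os.Build;
import java.lang.reflect.Method;

Camera.Parameters params = camera.getParameters();
// Build.VERSION.SDK_INT itself only exists from API 4, so parse the older SDK string.
if (Integer.parseInt(Build.VERSION.SDK) >= 5) {
    try {
        // Call the 2.0+ setRotation(int) without linking against it at compile time.
        Method setRotation = Camera.Parameters.class.getMethod("setRotation", int.class);
        setRotation.invoke(params, 90);
    } catch (Exception e) {
        params.set("rotation", 90); // reflection failed; use the legacy parameter
    }
} else {
    params.set("rotation", 90); // pre-2.0 devices only understand the string key
}
camera.setParameters(params);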