I am trying to implement a burst-mode camera in my app that can take multiple pictures at a rate of 5-10 (or more) snaps per second.
FYI, I have already seen the previous questions here, here and here - I tried those approaches and failed to reach the required speed. Those questions are also old, and there are no comprehensive answers addressing all the concerns, such as how to manage the heap.
I would really appreciate it if someone could help with useful pointers, best practices, or maybe an SSCCE.
Update:
I successfully tried pulling preview frames at 15+ snaps/sec, but the problem is that the preview size is limited. On the Nexus 5 I can only get 1920x1080, which is ~2 MP, whereas the full-resolution picture possible on that device is 8 MP :-(
I think a big part of the problem is the question of how burst mode works on current phones. A couple of blogs point out that Google has confirmed they will be adding a burst-mode API.
I suspect current implementations work by setting the exposure time to the minimum and either calling takePicture() in a loop or using Camera.PreviewCallback.
I played around with the latter for some computer vision projects and happened to look into writing a burst mode camera using this API. You could store the buffers you receive from Camera.PreviewCallback in memory and process them on a background thread.
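As a rough sketch of that idea, assuming the legacy android.hardware.Camera API (the buffer count and the processing step are placeholders, not something from the original question):

```java
// Uses android.hardware.Camera, android.graphics.ImageFormat, android.os.Handler/HandlerThread.
Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
Camera.Size previewSize = params.getPreviewSize();
int bufferSize = previewSize.width * previewSize.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;

HandlerThread worker = new HandlerThread("frame-worker");
worker.start();
final Handler workerHandler = new Handler(worker.getLooper());

// Pre-allocate a few buffers so the camera never has to wait on garbage collection.
for (int i = 0; i < 3; i++) {
    camera.addCallbackBuffer(new byte[bufferSize]);
}

camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(final byte[] data, final Camera cam) {
        workerHandler.post(new Runnable() {
            @Override
            public void run() {
                // Compress or store the NV21 frame here (e.g. YuvImage.compressToJpeg),
                // then return the buffer so the camera can reuse it.
                cam.addCallbackBuffer(data);
            }
        });
    }
});
// A preview surface/texture must still be set before startPreview(); omitted here.
camera.startPreview();
```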
If I remember correctly, the resolution was lower than the actual camera resolution, so this may not be ideal.
Short of device-specific APIs offered by manufacturers, the only way to get a "burst mode" that has a shot at working across devices is to use the preview frames as the images. takePicture() gives no guarantee about when you will be able to call it again.
In my app I'm trying to use ARCore as a sort of "camera assistant" in a custom camera view.
To be clear: I want to show the user the live camera image and let them capture images that don't contain the AR models.
From what I understand, in order to capture an image with ARCore I'll have to use the Camera2 API, which is enabled by configuring the session to use the "shared camera".
However, I can't seem to configure the camera to use any high-end resolutions (I'm using a Pixel 3, so I should be able to go as high as 12 MP).
In the "shared camera example", they toggle between Camera2 and ArCore (a shame there's no API for CameraX) and it has several problems:
In the ArCore mode the image is blurry (I assume that's because the depth sensor is disabled as stated in their documentation)
In the Camera2 mode I can't enhance the resolution at all.
I can't use the Camera2 API to capture an image while displaying models from ArCore.
Is this requirement at all possible at the moment?
I have not yet worked with the shared camera in ARCore, but I can say a few things regarding the main point of your question.
In ARCore you can configure both CPU image size and GPU image size. You can do that by checking all available camera configurations (available through Session.getSupportedCameraConfigs(CameraConfigFilter cameraConfigFilter)) and selecting your preferred one by passing it back to the ARCore Session. On each CameraConfig you can check which CPU image size and GPU texture size you will get.
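For illustration, a minimal sketch of that selection, assuming the standard ARCore Session/CameraConfig API (session is your existing com.google.ar.core.Session, and choosing the largest CPU image size is just one possible preference):

```java
// Uses com.google.ar.core.{Session, CameraConfig, CameraConfigFilter},
// com.google.ar.core.exceptions.CameraNotAvailableException and android.util.Size.
CameraConfigFilter filter = new CameraConfigFilter(session);
List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);

CameraConfig chosen = configs.get(0);
for (CameraConfig config : configs) {
    Size cpu = config.getImageSize();     // CPU image size (what frame.acquireCameraImage() gives you)
    Size gpu = config.getTextureSize();   // GPU texture size used for rendering (shown for comparison)
    if (cpu.getWidth() * cpu.getHeight()
            > chosen.getImageSize().getWidth() * chosen.getImageSize().getHeight()) {
        chosen = config;
    }
}

// The config must be applied while the session is paused.
session.pause();
session.setCameraConfig(chosen);
try {
    session.resume();
} catch (CameraNotAvailableException e) {
    // Handle the camera not being available.
}
```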
You are probably currently using (maybe by default?) a CameraConfig with the lowest CPU image size, 640x480 pixels if I remember correctly, so yes, it will definitely look blurry when rendered (but this has nothing to do with the depth sensor).
It sounds like you could just select a higher CPU image size and be good to go... but unfortunately that's not the case, because that configuration applies to every frame. Getting higher-resolution CPU images results in much lower performance. When I tested this I got about 3-4 frames per second on my test device, which is definitely not ideal.
So now what? I think you have two options:
1. Pause the ARCore session, switch to a config with a higher CPU image size for one frame, grab the image, and switch back to the "normal" configuration.
2. You are probably already getting a nice GPU image, maybe not the best because it is the camera preview, but hopefully good enough. I'm not sure how you are rendering it, but with some OpenGL skills you can copy that texture. Not directly, of course, because of the whole GL_TEXTURE_EXTERNAL_OES thing... but rendering it onto another framebuffer and then reading the texture attached to it could work. You might need to handle the texture coordinates yourself (full image vs. visible area), but that's another topic.
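If you go with option 2, a very rough GLES 2.0 sketch of the framebuffer copy might look like this; width, height, cameraOesTextureId and drawQuad() are placeholders for your own rendering code, not ARCore APIs:

```java
// Uses android.opengl.GLES20, java.nio.ByteBuffer, java.nio.ByteOrder.
// width/height: desired copy size; cameraOesTextureId: the external camera texture id.
int[] fbo = new int[1];
int[] copyTex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, copyTex, 0);

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, copyTex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, copyTex[0], 0);

GLES20.glViewport(0, 0, width, height);
drawQuad(cameraOesTextureId);  // placeholder: draws a full-screen quad sampling GL_TEXTURE_EXTERNAL_OES

// Read the rendered pixels back to the CPU (RGBA, rows come back bottom-up).
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
```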
Regarding CameraX, note that it wraps the Camera2 API in order to provide camera use cases so that app developers don't have to worry about the camera lifecycle. As I understand it, it would not be suitable for ARCore to use CameraX, as I imagine they need full control of the camera.
I hope that helps a bit!
I do not know much about the Camera API, but I need to grab frames from video capture at more than 30 fps with a good-quality camera (a Samsung S9).
Can anybody suggest code for this?
I tried to find suitable code for this but failed.
Thanks in Advance
Mohit
You are in luck, since not so long ago a component of Android Jetpack called CameraX was released. Sadly, it is still in the alpha stage, meaning you should avoid using it in production since it might have breaking changes in the future. This component is built on top of the Camera2 API, which is a low-level API for working with the camera.
If you plan to ship your app to production, I highly recommend using the Camera2 API; it is low-level, but you have full control over the camera.
Here is an example to get you started.
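Separately from that example, here is a rough, minimal Camera2 sketch that streams YUV frames into an ImageReader; the camera id, resolution, and error handling are placeholders, so treat it as a starting point rather than production code:

```java
import android.content.Context;
import android.graphics.ImageFormat;
import android.hardware.camera2.*;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.os.HandlerThread;

import java.util.Collections;

// Minimal sketch: open the first camera and stream YUV frames to an ImageReader.
public class FrameStreamer {
    private CameraDevice cameraDevice;
    private ImageReader reader;
    private Handler handler;

    public void start(Context context) throws CameraAccessException {
        HandlerThread thread = new HandlerThread("camera");
        thread.start();
        handler = new Handler(thread.getLooper());

        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0];   // placeholder: pick the camera you need

        reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 4);
        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image != null) {
                // Process the frame here; close it quickly so the stream keeps running at full rate.
                image.close();
            }
        }, handler);

        // Requires the CAMERA runtime permission to already be granted.
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice device) { cameraDevice = device; createSession(); }
            @Override public void onDisconnected(CameraDevice device) { device.close(); }
            @Override public void onError(CameraDevice device, int error) { device.close(); }
        }, handler);
    }

    private void createSession() {
        try {
            cameraDevice.createCaptureSession(
                    Collections.singletonList(reader.getSurface()),
                    new CameraCaptureSession.StateCallback() {
                        @Override public void onConfigured(CameraCaptureSession session) {
                            try {
                                CaptureRequest.Builder builder =
                                        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
                                builder.addTarget(reader.getSurface());
                                session.setRepeatingRequest(builder.build(), null, handler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }
                        @Override public void onConfigureFailed(CameraCaptureSession session) { }
                    }, handler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
}
```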
I am using the project android-camera2-secret-picture-taker to capture images without opening a camera view, but the captured images are very bad, like this
Any help to make this better?
thanks
[Edit]
I tried other phones and it works fine; I get these bad images only on the Huawei Y6II, and I don't know why. The phone's camera is 13 MP and works fine with the native camera app.
Did you issue only a single capture request to the camera device? (No free-running preview or such).
Generally, the auto-exposure, focus, and white-balance routines take a second or so of streaming before they stabilize to good values.
Even if you don't want a preview on screen, you need to stream 10-30 frames of data from the camera before you save a final image. Or, to be more robust, set a repeating request targeting some low-resolution SurfaceTexture, and wait until the CaptureResult CONTROL_AE_STATE / AWB_STATE fields reach CONVERGED, and the AF_STATE field is what you want as well (depending on which AF mode you're using). Then capture your image.
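A hedged sketch of that convergence check with Camera2 (the session/repeating-request setup is assumed to already exist, and captureStillPicture() is a placeholder for your own single high-resolution capture):

```java
// Attach this callback to the low-resolution repeating request; fire the still capture
// only once AE/AWB have converged and AF is in an acceptable state for your AF mode.
CameraCaptureSession.CaptureCallback precaptureCallback = new CameraCaptureSession.CaptureCallback() {
    private boolean captured = false;

    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request,
                                   TotalCaptureResult result) {
        Integer ae = result.get(CaptureResult.CONTROL_AE_STATE);
        Integer awb = result.get(CaptureResult.CONTROL_AWB_STATE);
        Integer af = result.get(CaptureResult.CONTROL_AF_STATE);

        boolean aeOk = ae == null || ae == CaptureResult.CONTROL_AE_STATE_CONVERGED;
        boolean awbOk = awb == null || awb == CaptureResult.CONTROL_AWB_STATE_CONVERGED;
        // These AF states suit CONTINUOUS_PICTURE; adjust for the AF mode you actually use.
        boolean afOk = af == null
                || af == CaptureResult.CONTROL_AF_STATE_PASSIVE_FOCUSED
                || af == CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED;

        if (!captured && aeOk && awbOk && afOk) {
            captured = true;
            captureStillPicture();   // placeholder: issue your single high-resolution capture here
        }
    }
};
```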
This is a wild guess, but hey, it's worth a try.
If you used a code snippet from the web that suggests getting the list of supported image sizes and just picking the first one, well, that has backfired for me on Huawei devices (more than one model), because Huawei seems to provide the list in ascending order of resolution (i.e. smallest first), whereas most other devices I've seen provide it in descending order (i.e. largest first).
So if this is a resolution issue, it might be worth a check.
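If that turns out to be the issue, a small order-independent pick with Camera2 (which is what that project uses) could look like this; characteristics is assumed to be the CameraCharacteristics of the camera you opened:

```java
// Explicitly pick the largest JPEG output size rather than trusting the array order.
StreamConfigurationMap map =
        characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size largest = Collections.max(
        Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
        new Comparator<Size>() {
            @Override
            public int compare(Size a, Size b) {
                return Long.signum((long) a.getWidth() * a.getHeight()
                        - (long) b.getWidth() * b.getHeight());
            }
        });
```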
For my master's thesis I am writing an application for the Motorola Xoom tablet (Android 3.1) that can scan multiple QR codes in real time with its camera and display additional information on the screen over the recognised QR codes.
The recognition is done with the ZXing Android app (http://code.google.com/p/zxing/); I basically just changed the code of the ZXing app so that it can recognise multiple QR codes at the same time and scan continuously, without freezing after a successful scan like the original app does. So my app is basically the ZXing app with continuous scanning of multiple QR codes.
But I'm facing a problem:
The recognition rate of QR codes with the built-in camera is not very good. The ZXing app uses the pictures it gets from the camera preview, but these pictures do not have very good quality. Is there any way to make the camera preview deliver better-quality pictures?
P.S. I also tried taking real snapshots with camera.takePicture() to get better quality, but it takes too long to take the picture, so the real-time experience for the user is lost.
Any help is highly appreciated!
Thanks.
Well, the question would be... why is the image quality that bad? Do the images have a low resolution? Is the preview out of focus? I've worked with the ZXing Android app before and I know it has a mechanism to keep the camera auto-focusing on the live scene.
If the auto-focus mechanism is running, then you are possibly decoding some images that are out of focus. Rationally, it would make sense to decode only when the camera is in focus, but that would delay the decoding process, since it would have to wait for focusing before the image-processing phase. However, I wouldn't worry too much about this, for several reasons: 1) the auto-focus is very quick, so there will be very few blurry images (if any at all); 2) the camera keeps focus for long enough to allow a couple of decodings; 3) QR codes typically do not require perfect images to be detected and decoded - they were designed that way.
If this is a problem for you, then disable the continuous auto-focus and set the focus-mode parameter to whatever suits you.
If the problem comes from low-resolution frames, then increase the preview resolution... but QR codes were also designed to be identified even at small resolutions. Also, keep in mind that increasing the resolution will also increase the decoding time.
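For what it's worth, a small sketch of the parameters side of both suggestions, using the legacy android.hardware.Camera API that ZXing used at the time (the focus mode and size choice are examples only; stopping ZXing's own autofocus loop happens in its scanning code and is not shown here):

```java
// camera is your open android.hardware.Camera instance.
Camera.Parameters params = camera.getParameters();

// Pick whatever focus mode suits your setup (e.g. one-shot auto focus).
if (params.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
    params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
}

// Ask for the largest preview size the device supports (larger frames = longer decode time).
Camera.Size best = null;
for (Camera.Size size : params.getSupportedPreviewSizes()) {
    if (best == null || size.width * size.height > best.width * best.height) {
        best = size;
    }
}
params.setPreviewSize(best.width, best.height);

camera.setParameters(params);
```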
I want to write an activity that:
Shows the camera preview (viewfinder), and has a "capture" button.
When the "capture" button is pressed, takes a picture and returns it to the calling activity (setResult() & finish()).
Are there any complete examples out there that work on every device? A link to a simple open-source application that takes pictures would be the ideal answer.
My research so far:
This is a common scenario, and there are many questions and tutorials on this.
There are two main approaches:
1. Use the android.provider.MediaStore.ACTION_IMAGE_CAPTURE intent action. See this question.
2. Use the Camera API directly. See this example or this question (with lots of references).
Approach 1 would have been perfect, but the issue is that the intent is implemented differently on each device. On some devices it works well. However, on some devices you can take a picture but it is never returned to your app. On some devices nothing happens when you launch the intent. Typically it also saves the picture to the SD card, and requires the SD card to be present. The user interaction is also different on every device.
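For reference, the device-independent core of approach 1 looks roughly like this (the request code is arbitrary, and EXTRA_OUTPUT handling, where much of the device-specific behaviour shows up, is omitted):

```java
// Inside an Activity: fire the capture intent and receive the result.
static final int REQUEST_IMAGE_CAPTURE = 1;

void dispatchTakePictureIntent() {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (intent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(intent, REQUEST_IMAGE_CAPTURE);
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK && data != null) {
        // Without EXTRA_OUTPUT, many devices only return a small thumbnail here.
        Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
    }
}
```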
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
I would have used ZXing as an example application (I work with it a lot), but it only uses the preview (viewfinder), and doesn't take any pictures. I also found that on some devices, ZXing did not automatically adjust the white balance when the lighting conditions changed, while the native camera app did it properly (not sure if this can be fixed).
Update:
For a while I used the Camera API directly. This gives more control (custom UI, etc.), but I would not recommend it to anyone. It would work on 90% of devices, but every now and again a new device would be released with a different problem.
Some of the problems I've encountered:
Handling autofocus
Handling flash
Supporting devices with a front camera, back camera or both
Each device has a different combination of screen resolution, preview resolutions (which don't always match the screen resolution) and picture resolutions.
So in general, I would not recommend going this route at all unless there is no other way. After two years I dumped my custom code and switched back to the Intent-based approach. Since then I've had much less trouble. The issues I had with the Intent-based approach in the past were probably just my own incompetence.
If you really need to go this route, I've heard it's much easier if you only support devices with Android 4.0+.
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
Either there is a bug in the examples or there is a compatibility issue with the devices.
The example that CommonsWare gave works well when used as-is, but here are the issues I ran into when modifying it for my use case:
1. Never take a second picture before the first one has completed, i.e. before PictureCallback.onPictureTaken() has been called. The CommonsWare example uses the inPreview flag for this purpose (see the sketch after this list).
2. Make sure that your SurfaceView is full-screen. If you want a smaller preview you might need to change the preview size selection logic, otherwise the preview might not fit into the SurfaceView on some devices. Some devices only support a full-screen preview size, so keeping it full-screen is the simplest solution.
3. To add more components to the preview screen, FrameLayout works well in my experience. I started by using a LinearLayout to add text above the preview, but that broke rule #2. When using a FrameLayout to add components on top of the preview, you don't have any issues with the preview resolution.
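A compressed sketch of rule #1 (the inPreview flag follows the CommonsWare sample's naming, camera is the open legacy Camera instance, and everything else is simplified):

```java
private boolean inPreview = true;   // true while the preview is running and a capture may start

void onCaptureClicked() {
    if (!inPreview) {
        return;   // a picture is already in flight; ignore the tap
    }
    inPreview = false;
    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera cam) {
            // Persist the JPEG, then restart the preview before allowing the next shot.
            cam.startPreview();
            inPreview = true;
        }
    });
}
```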
I also posted a minor issue relating to Camera.open() on GitHub.
"the recommended way to access the camera is to open Camera on a separate thread". Otherwise, Camera.open() can take a while and might bog down the UI thread.
"Callbacks will be invoked on the event thread open(int) was called from". That's why to achieve best performance with camera preview callbacks (e.g. to encode them in a low-latency video for live communication), I recommend to open camera in a new HandlerThread, as shown here.