I've run into an issue with slow focusing on the Nexus 6.
I'm developing a camera application using the camera2 API.
For the application's needs, we create a preview request with 2 surfaces:
- SurfaceView (viewfinder)
- YUV ImageReader surface (to use the data in a histogram calculation)
And here is the critical point: if I add only the viewfinder surface, focusing works normally. But with both surfaces, focusing becomes very slow, with visible steps in the lens movement!
The code is quite standard, written according to Google's documentation:
mImageReaderPreviewYUV = ImageReader.newInstance(previewWidth, previewHeight, ImageFormat.YUV_420_888, 2);
previewRequestBuilder = camDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(getCameraSurface()); // Add surface of SurfaceView
previewRequestBuilder.addTarget(mImageReaderPreviewYUV.getSurface()); // Add ImageReader's surface
mCaptureSession.setRepeatingRequest(previewRequestBuilder.build(), captureCallback, null);
Does the system logcat show any warnings about buffers not being available?
Is the preview frame rate slow, or is it smooth (~30fps) with only the focusing behaving oddly?
If the former, you may not be returning Image objects to the ImageReader (by closing them once you're done with them) at 30fps, so the camera device is starved for buffers to fill and cannot maintain a 30fps preview.
To test this, implement a minimal ImageReader.OnImageAvailableListener whose onImageAvailable(ImageReader reader) method just returns the image immediately:
public class TestImageListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Acquire the next frame and release it immediately, so the
        // camera is never starved of buffers.
        Image img = reader.acquireNextImage();
        img.close();
    }
}
...
mImageReaderPreviewYUV.setOnImageAvailableListener(new TestImageListener(), null);
If this gives you a fluid preview, then your image processing is too slow.
As a solution, increase the number of buffers in your ImageReader and then use reader.acquireLatestImage() to drop older buffers and only process the newest Image each time you calculate your histogram.
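For example, a minimal sketch of that fix, reusing the names from the question (the histogram method and mBackgroundHandler are assumed placeholders):

// Give the camera a few spare buffers so it is never starved while one frame is processed.
mImageReaderPreviewYUV = ImageReader.newInstance(previewWidth, previewHeight, ImageFormat.YUV_420_888, 4);
mImageReaderPreviewYUV.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // acquireLatestImage() discards older queued frames, so only the
        // newest frame pays the histogram cost.
        Image img = reader.acquireLatestImage();
        if (img == null) {
            return; // No frame pending.
        }
        try {
            calculateHistogram(img); // Hypothetical processing hook.
        } finally {
            img.close(); // Always hand the buffer back to the camera.
        }
    }
}, mBackgroundHandler);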
I had the same issue on the N6, and I think it works more smoothly now - add the ImageReader surface before the camera surface:
previewRequestBuilder = camDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(mImageReaderPreviewYUV.getSurface()); // Add ImageReader's surface first
previewRequestBuilder.addTarget(getCameraSurface()); // Add surface of SurfaceView
I also tested my camera app on an N4 running 5.0.1, and both orderings work perfectly there.
In my project, I need to capture frames from the camera stream continuously. Here is the current code snippet I use.
To set up the ImageReader, I set maxImages to 20. That is, every time the callback is triggered, there should be 20 frames in the ImageReader queue.
imageReader = ImageReader.newInstance(
optimumSize.getWidth(),
optimumSize.getHeight(),
ImageFormat.YUV_420_888,
20
);
Then, to access each image of these 20 frames, I used the following snippet.
imageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Drain the queue: acquire, process, and close each pending frame.
        Image image = reader.acquireNextImage();
        while (image != null) {
            // some processing here.....
            image.close();
            image = reader.acquireNextImage();
        }
    }
}, processingHandler);
The key obstacle here is being able to access each of the 20 frames in a callback for further image processing. However, the aforementioned code seems to have a problem (I can only access the latest image in the underlying queue). In fact, I only need to access a small patch (50 x 50 pixels) in each frame, specified by the user.
The reason for doing this is that I need 20 continuous frames of data at a sampling frequency of ~60Hz. This seems really hard to achieve if we can only access a single frame in each callback, which tops out at about 30fps.
Any suggestions would be super welcome! Thanks!
Setting maxImages to 20 just means the queue will allow you to acquire 20 Images at the same time; it does not mean the onImageAvailable callback will fire only once 20 images are queued. The callback fires as soon as a single image is present.
Most camera devices run at 30fps max, so it's not surprising that's the speed you're seeing. Some cameras do have 60fps modes, but you have to explicitly switch to a CONTROL_AE_TARGET_FPS_RANGE of (60,60) to get that, and only if the device's CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES values include that range.
60fps may also be resolution-limited (check the StreamConfigurationMap for minimum frame durations to find what resolutions can support 60fps, if you want to double-check).
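A rough sketch of both checks (untested; characteristics is assumed to be the CameraCharacteristics for your camera, and previewRequestBuilder your repeating-request builder):

// 1. Does the device advertise a fixed 60fps AE range?
Range<Integer>[] fpsRanges = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
Range<Integer> sixty = new Range<>(60, 60);
boolean supports60 = false;
for (Range<Integer> r : fpsRanges) {
    if (r.equals(sixty)) {
        supports60 = true;
        break;
    }
}
if (supports60) {
    previewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, sixty);
}

// 2. Can the chosen YUV resolution actually run at 60fps? A 60fps stream
// needs a minimum frame duration of at most 1/60 s (~16.7 ms).
StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
long minFrameDurationNs = map.getOutputMinFrameDuration(ImageFormat.YUV_420_888, optimumSize);
boolean sizeSupports60 = minFrameDurationNs <= 1_000_000_000L / 60;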
I'm using CameraX, and I'd like to play an animation over a canvas at the same fps that CameraX uses to show the Preview.
Question 1:
How can I play a 60-frame animation in 2 seconds while CameraX runs at 30 fps (for example), if that's possible at all?
Question 2:
How can I get CameraX's fps?
Regarding the first question, add a SurfaceTextureListener to your TextureView and listen for onSurfaceTextureUpdated. You can use this method as a reference point to know when to render your animation.
About the second question: as far as I know, there is no API in CameraX to get the FPS, due to the variable nature of a camera preview's frame rate. Only video recording has a fixed FPS value; for camera preview the FPS is variable. On the other hand, the camera2 API can be configured with an FPS range (min-max), so I'd guess the CameraX API will eventually offer something similar.
You can alternatively compute the FPS dynamically from the onSurfaceTextureUpdated method; on the internet you will find many pages showing how to calculate an FPS.
Here is a small example. The code is untested but should guide you: call the method from onSurfaceTextureUpdated, and after a few iterations you should get the active FPS.
private long lastFpsTime = 0L;
private float fps;

private void computeFPS()
{
    if (this.lastFpsTime != 0L)
    {
        // Instantaneous FPS: 1000 ms divided by the time since the last frame.
        this.fps = 1000.0f / (SystemClock.elapsedRealtime() - this.lastFpsTime);
    }
    this.lastFpsTime = SystemClock.elapsedRealtime();
}
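For instance, wired into the listener mentioned above (a sketch; mTextureView is an assumed name for the view backing the preview):

mTextureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) { }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) { return true; }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
        // Fires once per displayed preview frame, so after a few frames
        // the fps field settles on the live preview rate.
        computeFPS();
    }
});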
I am using the Camera2 API to create a camera component that can scan barcodes and can take pictures during scanning. It is kind of working, but the preview is flickering - it looks as if previous frames, and sometimes green frames, are interrupting the realtime preview.
My code is based on Google's Camera2Basic. I'm just adding one more ImageReader and its surface as an additional output and target for the CaptureRequest.Builder. One of the readers uses JPEG and the other YUV. The flickering disappears when I remove the JPEG reader's surface from the outputs (not passing it into createCaptureSession).
There's quite a lot of code, so I created a gist: click - I tried to strip out the completely irrelevant code.
Is the device you're testing on a LEGACY-level device?
If so, any captures targeting a JPEG output may be much slower since they can run a precapture sequence, and may briefly pause preview as well.
But it should not cause green frames, unless there's a device-level bug.
If anyone ever struggles with this: there is a table in the docs showing that if 3 targets are specified, the YUV ImageReader can only use images up to the preview size (maximum 1920x1080). Reducing the YUV size helped!
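In code, that cap might look roughly like this (an untested sketch; characteristics and mYuvReader are assumed names) - pick the largest YUV output size that still fits within 1920x1080:

StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size yuvSize = null;
for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    // Stay at or below the PREVIEW-size limit from the guaranteed-configurations table.
    if (s.getWidth() <= 1920 && s.getHeight() <= 1080
            && (yuvSize == null || s.getWidth() * s.getHeight() > yuvSize.getWidth() * yuvSize.getHeight())) {
        yuvSize = s;
    }
}
mYuvReader = ImageReader.newInstance(yuvSize.getWidth(), yuvSize.getHeight(), ImageFormat.YUV_420_888, 2);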
Yes, you can, assuming that you configure your preview to feed the ImageReader with YUV frames (you could also use JPEG there; check what your device supports), like so:
mImageReaderPreview = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.YUV_420_888, 1);
You can then process those frames inside your OnImageAvailableListener:
@Override
public void onImageAvailable(ImageReader reader) {
    Image mImage = reader.acquireNextImage();
    if (mImage == null) {
        return;
    }
    try {
        // Do some custom processing like YUV to RGB conversion, cropping, etc.
        mFrameProcessor.setNextFrame(mImage);
    } catch (IllegalStateException e) {
        Log.e("TAG", e.getMessage());
    } finally {
        mImage.close(); // Always return the buffer to the reader.
    }
}
I was using a tutorial for the camera2 API on Android, and one of the steps was to resize the TextureView's surface to an acceptable size by doing the following:
SurfaceTexture surfaceTexture = mTextureView.getSurfaceTexture();
surfaceTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
Surface previewSurface = new Surface(surfaceTexture);
previewBuilder = CD.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewBuilder.addTarget(previewSurface);
The mPreviewSize variable is of type Size; it was determined beforehand by cycling through the acceptable sizes and selecting the optimal one for your screen size. The problem is that I'm using a SurfaceView, and I'm trying to resize the Surface object inside it. I tried this, but it didn't work:
SurfaceHolder SH= gameSurface.getHolder();
SH.setFixedSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
Surface Sur = SH.getSurface();
previewBuilder = CD.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewBuilder.addTarget(Sur);
In debug mode I can see that mPreviewSize is correct (i.e., it is set to an acceptable size), but I get an error saying that I'm trying to use an unacceptable size; the size it reports is not the same as mPreviewSize, which means the resizing isn't working. Any ideas?
You probably need to wait to receive the surfaceChanged callback from the SurfaceView, before trying to use the Surface to create a camera capture session.
setFixedSize doesn't necessarily take effect immediately.
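A sketch of that ordering (names are assumed; the point is to create the session only after surfaceChanged reports the requested size):

gameSurface.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // Request the camera preview size; it takes effect asynchronously.
        holder.setFixedSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        if (width == mPreviewSize.getWidth() && height == mPreviewSize.getHeight()) {
            // The buffer now matches the requested size, so it is safe to
            // build the capture session around holder.getSurface().
            startCameraSession(holder.getSurface()); // Hypothetical helper.
        }
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) { }
});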
In my Android application, I need to get each frame returned by android.hardware.camera2, do some processing on its data, and only then display it on the SurfaceTexture.
This question is similar to mine, but it didn't help me:
Camera preview image data processing with Android L and Camera2 API
I've tried to get the frame from here (as suggested in the answer to the question):
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.d("Img", "onImageAvailable");
        mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
    }
};
This was not useful, as the callback is called only after the user captures a still image. I don't need it only on capture; I need every frame that is sent to the camera preview surface.
I wonder, maybe the frame can be taken here (from the texture):
public void onSurfaceTextureUpdated(SurfaceTexture texture) {
Log.d("Img", "onSurfaceTextureUpdated");
}
If yes, how?
I'm using this sample from google, as a basis:
https://github.com/googlesamples/android-Camera2Basic
Yes, you definitely can get the buffer from camera callback. You can provide your own texture and update it when you wish, and even modify the pixel data for this buffer.
You should push the 'original' SurfaceTexture (specified in createCaptureSession()) off screen, otherwise it will interfere with your filtered/modified buffers.
The main caveat of this approach is that it is now your responsibility to produce the pseudo-preview buffers in a timely manner.
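If you only need to read each frame's pixels (rather than modify what is shown on screen), a simpler sketch, reusing names in the style of the Camera2Basic sample (previewSurface, mBackgroundHandler), is to add a YUV ImageReader as a second target of the repeating preview request, so onImageAvailable fires for every preview frame instead of only on still capture:

ImageReader reader = ImageReader.newInstance(previewWidth, previewHeight, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image != null) {
            // Per-frame processing goes here.
            image.close();
        }
    }
}, mBackgroundHandler);

CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(previewSurface);      // the TextureView's surface
builder.addTarget(reader.getSurface()); // per-frame YUV copies
// Both surfaces must also be passed to createCaptureSession() beforehand.
mCaptureSession.setRepeatingRequest(builder.build(), null, mBackgroundHandler);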
I want to do some image processing too. I've been studying the code at github.com/googlesamples/android-Camera2Basic, and I believe mCaptureSession directs the camera's pipeline either to the preview texture or to the still capture, but not to both at the same time. The preview texture is 'refreshed' by mCaptureSession.setRepeatingRequest, and mOnImageAvailableListener is called when 'capture' is fired in captureStillPicture(). But if you disable the 'preview texture' and set a repeating request with the same builder the 'preview texture' used, in order to trigger mOnImageAvailableListener, it just won't work. Has anyone else been working on this? Any enlightenment?