I need to speed up capture with the camera2 API. I tried building the "android-Camera2Basic" project from the Google samples. This is the default capture request from the example:
    if (null == activity || null == mCameraDevice) {
        return;
    }
    // This is the CaptureRequest.Builder that we use to take a picture.
    final CaptureRequest.Builder captureBuilder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
    captureBuilder.addTarget(mImageReader.getSurface());

    // Use the same AE and AF modes as the preview.
    captureBuilder.set(CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
    captureBuilder.set(CaptureRequest.CONTROL_AE_MODE,
            CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

    // Orientation
    int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));

    CameraCaptureSession.CaptureCallback CaptureCallback
            = new CameraCaptureSession.CaptureCallback() {

        @Override
        public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
                                       TotalCaptureResult result) {
            showToast("Saved: " + mFile);
            unlockFocus();
        }
    };

    mCaptureSession.stopRepeating();
    mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
It takes 200-300 ms from sending the request with

    mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);

to getting the result in

    onImageAvailable(ImageReader reader)

Is it possible to reduce this time? I tried setting different parameters on the capture request, such as TEMPLATE_ZERO_SHUTTER_LAG, NOISE_REDUCTION_MODE_OFF, EDGE_MODE_OFF, etc., but none of them had any effect.
If I try to capture a burst, then all images except the first arrive very quickly, within 30-40 ms of each other. How can I reduce the capture time for the first image?
Replying to your comment, but making it into a proper answer:
If you check those slides from the Samsung developer conference, slide #22 shows the camera2 model. As you can see, there are several queues:
Pending Request queue
In flight capture queue
output image queue to the Surface showing the camera preview
and the callback to onCaptureCompleted
That explains why the first capture is slow but the next images in burst mode come very quickly: requests and processing are pipelined, so the first capture takes ~300 ms to travel all the way back to the callback, but the next one is already "right behind it".
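As a minimal illustration (a sketch, reusing the captureBuilder and CaptureCallback from the question's code), submitting several requests at once keeps the pipeline full, so only the first shot pays the fill cost:

    // Submit several queued requests at once; only the first pays the full
    // pipeline latency, the rest arrive ~30-40 ms apart.
    List<CaptureRequest> burst = new ArrayList<>();
    for (int i = 0; i < 5; i++) {
        burst.add(captureBuilder.build());
    }
    mCaptureSession.stopRepeating();
    mCaptureSession.captureBurst(burst, CaptureCallback, null);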
If you're interested in the new API (and who wouldn't be, camera2 is amazing), you can also check the full video from the Samsung developer conference on YouTube, and the official docs. Lots of good info in both.
I'm working on a camera application with Android camera2. Many black stripes appear in the image when the camera is turned on.
I think this is an FPS issue. When a photo is taken, the stripes disappear and a beautiful image is obtained. When I query the FPS ranges supported by the camera with Range<Integer>[] ranges = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);, the values are much higher than they should be: the only range is [5000-60000]. And I couldn't set the FPS range with previewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, getRange()); because of an unsupported FPS range error. Here is my camera session code:
    private void createCameraSession() {
        try {
            previewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            Surface previewSurface = new Surface(surfaceTexture);
            previewRequestBuilder.addTarget(previewSurface);

            mCameraDevice.createCaptureSession(Arrays.asList(previewSurface, mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {

                @Override
                public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                    try {
                        mCameraCaptureSession = cameraCaptureSession;
                        previewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                        previewRequestBuilder.set(CaptureRequest.CONTROL_AE_ANTIBANDING_MODE, CameraMetadata.CONTROL_AE_ANTIBANDING_MODE_50HZ);
                        previewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                        previewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
                        previewRequestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE, CameraMetadata.STATISTICS_FACE_DETECT_MODE_SIMPLE);
                        previewRequestBuilder.set(CaptureRequest.CONTROL_AE_LOCK, false);
                        zoom.setZoom(previewRequestBuilder, zoomFactor);
                        CaptureRequest previewRequest = previewRequestBuilder.build();
                        Log.d("TAG", "preview build is done");
                        mCameraCaptureSession.setRepeatingRequest(previewRequest, null, mHandler);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                }
            }, mHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
I've tried many combinations of AE and AWB modes, but it didn't work; the problem persists. Is there another way to set the target FPS range of the camera or the Surface? My device runs Android 5.1, so the Surface.setFrameRate() function isn't available. Can anybody help me find a solution?
First, Android 5.1 had a bug in reporting frame rate ranges - you'll need to divide the numbers from the available target FPS ranges by 1000 when setting the frame rate (so that'd be [5, 60] fps).
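A minimal sketch of that workaround, assuming the characteristics and previewRequestBuilder objects from your code (the >1000 check is just a heuristic for the buggy reporting):

    // Work around 5.1 HALs that report FPS ranges multiplied by 1000,
    // e.g. [5000, 60000] instead of [5, 60].
    Range<Integer>[] ranges = characteristics.get(
            CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
    Range<Integer> fps = ranges[0];
    if (fps.getUpper() > 1000) {
        fps = new Range<>(fps.getLower() / 1000, fps.getUpper() / 1000);
    }
    previewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, fps);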
Second, this is very likely a problem with your lights. Most light sources flicker at some rate (60 or 50 Hz for old-fashioned light bulbs and fluorescent lights, and all sorts of frequencies between 60 and 200 Hz for LEDs), and the camera auto-exposure has to account for that or banding will happen.
Normally, antibanding algorithms will detect the flicker or banding and adjust exposure time to remove it. But your old device may not be able to handle modern LED light flicker.
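One thing worth trying (a sketch, not guaranteed to help on a LEGACY-level device, and again assuming your previewRequestBuilder) is letting the HAL detect the flicker frequency itself instead of forcing 50 Hz:

    // Let auto-exposure pick the antibanding frequency.
    previewRequestBuilder.set(CaptureRequest.CONTROL_AE_ANTIBANDING_MODE,
            CameraMetadata.CONTROL_AE_ANTIBANDING_MODE_AUTO);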
Try the device outdoors and see whether banding appears in sunlight, to figure out if the lights are the problem.
Unfortunately, if the lights are your problem, you may have trouble finding a way around this. You could tweak exposure compensation, but that will also make the image darker or brighter, and banding may remain. And I doubt your device supports manual exposure control; even if it did, you'd have to write your own auto-exposure routine, which is complicated to do well.
I am working on a video recording app in which I want to record in portrait. Everything seems fine except that the video is saved in landscape mode. I based my implementation on this project: https://github.com/HofmaDresu/AndroidCamera2Sample, but still the video is saved in landscape.
    void PrepareMediaRecorder()
    {
        if (mediaRecorder == null)
        {
            mediaRecorder = new MediaRecorder();
        }
        else
        {
            mediaRecorder.Reset();
        }

        var map = (StreamConfigurationMap)characteristics.Get(CameraCharacteristics.ScalerStreamConfigurationMap);
        if (map == null)
        {
            return;
        }

        videoFileName = GetVideoFilePath();

        mediaRecorder.SetAudioSource(AudioSource.Mic);
        mediaRecorder.SetVideoSource(VideoSource.Surface);
        mediaRecorder.SetOutputFormat(OutputFormat.Mpeg4);
        mediaRecorder.SetOutputFile(videoFileName);
        mediaRecorder.SetVideoEncodingBitRate(10000000);
        mediaRecorder.SetVideoFrameRate(30);

        var videoSize = ChooseVideoSize(map.GetOutputSizes(Java.Lang.Class.FromType(typeof(MediaRecorder))));
        mediaRecorder.SetVideoEncoder(VideoEncoder.H264);
        mediaRecorder.SetAudioEncoder(AudioEncoder.Aac);
        mediaRecorder.SetVideoSize(videoSize.Width, videoSize.Height);

        int rotation = (int)Activity.WindowManager.DefaultDisplay.Rotation;
        mediaRecorder.SetOrientationHint(GetOrientation(rotation));
        mediaRecorder.Prepare();
    }
Assuming a high-quality video player shows the video in portrait (if not, your GetOrientation method probably has an error in it) but other players you care about are still stuck in landscape:
You'll have to rotate the frames yourself. Unfortunately, this is messy, since there's no automatic control for this on the media encoder APIs that I know of.
One option is to receive frames from the camera via an ImageReader, do the rotation in Java or in native code via JNI, and then send the frames to the encoder, either through an ImageWriter into a MediaRecorder or MediaCodec Surface, or through MediaCodec's ByteBuffer interface.
Or you could send the frames to the GPU via a SurfaceTexture, rotate in a fragment shader, and then write out to a Surface tied to a MediaRecorder/MediaCodec again.
Both of these require a lot of boilerplate code and an understanding of lower-level details, unfortunately; a rough sketch of the first option follows.
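Here is the plumbing of the ImageReader route in plain Android Java (your project is Xamarin, so the C# bindings differ slightly); rotateYuv() is a placeholder for your own rotation code, and encoderSurface stands for the MediaRecorder/MediaCodec input surface:

    // ImageReader receives camera frames; ImageWriter feeds the encoder surface.
    ImageReader reader = ImageReader.newInstance(width, height,
            ImageFormat.YUV_420_888, 4);
    final ImageWriter writer = ImageWriter.newInstance(encoderSurface, 4);
    reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader r) {
            Image src = r.acquireNextImage();
            Image dst = writer.dequeueInputImage();
            rotateYuv(src, dst); // your rotation, in Java or via JNI
            src.close();
            writer.queueInputImage(dst); // hands the rotated frame to the encoder
        }
    }, handler);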
I will explain my case.
I'm trying to build an application that takes an image every 5 seconds, alternating between one without flash and one with flash: one without flash, one with flash, one without flash, one with flash... indefinitely.
With my code I can do this on some devices, but the same code doesn't work as intended on others. For example:
BQ Aquaris X5 Plus: the no-flash image is correct, but the flash image is just white.
BQ Aquaris E5: won't fire the flash.
How is this possible? All the devices I have tried report the LEGACY hardware support level for the camera2 API.
These are some important methods in my code (I can't post all of it due to the character limit). I started from the Google example.
This setAutoFlash does what I described above:
    private void setAutoFlash(CaptureRequest.Builder requestBuilder) {
        if (mFlashSupported) {
            if (phototaken) {
                requestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                        CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
                requestBuilder.set(CaptureRequest.FLASH_MODE,
                        CaptureRequest.FLASH_MODE_OFF);
            } else {
                requestBuilder.set(CaptureRequest.FLASH_MODE,
                        CaptureRequest.FLASH_MODE_SINGLE);
            }
        }
    }
This other version works on some devices, including the BQ Aquaris E5, but doesn't fire the flash on the BQ Aquaris X5 Plus:
    private void setAutoFlash(CaptureRequest.Builder requestBuilder) {
        if (mFlashSupported) {
            if (phototaken) {
                requestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                        CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
                requestBuilder.set(CaptureRequest.FLASH_MODE,
                        CaptureRequest.FLASH_MODE_OFF);
            } else {
                requestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                        CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH);
                requestBuilder.set(CaptureRequest.FLASH_MODE,
                        CaptureRequest.FLASH_MODE_OFF);
            }
        }
    }
And here is my captureStillPicture:
    private void captureStillPicture() {
        try {
            final Activity activity = getActivity();
            if (null == activity || null == mCameraDevice) {
                return;
            }
            // This is the CaptureRequest.Builder that we use to take a picture.
            final CaptureRequest.Builder captureBuilder =
                    mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            captureBuilder.addTarget(mImageReader.getSurface());

            // Use the same AE and AF modes as the preview.
            captureBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
            setAutoFlash(captureBuilder);

            // Orientation
            int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
            captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(rotation));

            CameraCaptureSession.CaptureCallback CaptureCallback
                    = new CameraCaptureSession.CaptureCallback() {

                @Override
                public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                               @NonNull CaptureRequest request,
                                               @NonNull TotalCaptureResult result) {
                    showToast("Saved: " + mFile);
                    Log.d(TAG, mFile.toString());
                    unlockFocus();
                }
            };

            mCaptureSession.stopRepeating();
            mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
            phototaken = !phototaken;
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
The question is, what am I doing wrong that makes it fail on some devices? Any help would be great.
There are two levels of control for the flash - manual, and controlled by the auto-exposure routine. You're currently mixing them together.
If you want to fire the flash manually, then you need to set AE_MODE to either AE_MODE_OFF or AE_MODE_ON; not any of the FLASH modes. Then, FLASH_MODE will control whether the flash will be in torch mode, off, or fire once for a given request.
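For example, a minimal manual-control sketch for a single request (using your requestBuilder):

    // Manual control: AE stays in plain ON mode, flash fired explicitly.
    requestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
            CaptureRequest.CONTROL_AE_MODE_ON);
    requestBuilder.set(CaptureRequest.FLASH_MODE,
            CaptureRequest.FLASH_MODE_SINGLE);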
Since you always leave AE_MODE in one of the FLASH states, what you do with FLASH_MODE should not matter, barring a bug in some specific device.
If you want to guarantee that the flash fires on every other picture, use AE_MODE_ON_ALWAYS_FLASH for the force-flash photos and AE_MODE_ON for the no-flash photos; don't touch FLASH_MODE.
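In your alternating setup, that would look something like this sketch (assuming your phototaken flag means "this shot should use flash" when true):

    if (phototaken) {
        // Force-fire the flash via auto-exposure.
        requestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH);
    } else {
        // Plain auto-exposure, no flash.
        requestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                CaptureRequest.CONTROL_AE_MODE_ON);
    }
    // Don't set FLASH_MODE at all; AE owns the flash in these modes.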
Right now, with AUTO_FLASH, it's up to the device whether to fire a flash or not, so you'll see different behavior from different devices and lighting conditions - some will fire, some won't.
The other key thing you're not doing is running a precapture sequence; this is essential for flash pictures, because it allows the device to fire the preflash to determine correct flash power, focus, and white balance.
To run precapture, set the AE_MODE as desired, and then set AE_PRECAPTURE_TRIGGER to START for one request. This will transition AE_STATE to PRECAPTURE, and it'll stay there for some number of frames; once AE_STATE is no longer PRECAPTURE, you can issue the actual image capture request. Make sure you keep the AE_MODE consistent throughout this.
The sample app Camera2Basic implements the precapture sequence, so take a look there; it also has some optimizations that skip precapture in case the scene is not dark enough to need flash, but since you want to force-fire flash, that's not relevant to you.
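A condensed sketch of the trigger flow, assuming Camera2Basic-style fields (mCaptureSession, mPreviewRequestBuilder) and your captureStillPicture() method; error handling omitted:

    // Listens to AE_STATE on preview results; fires the still capture once
    // the precapture metering sequence has finished.
    private final CameraCaptureSession.CaptureCallback mPrecaptureCallback =
            new CameraCaptureSession.CaptureCallback() {
        private boolean mSawPrecapture = false;

        @Override
        public void onCaptureCompleted(CameraCaptureSession session,
                CaptureRequest request, TotalCaptureResult result) {
            Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
            if (aeState == null) return;
            if (aeState == CaptureResult.CONTROL_AE_STATE_PRECAPTURE) {
                mSawPrecapture = true;
            } else if (mSawPrecapture) {
                mSawPrecapture = false;
                captureStillPicture(); // the actual capture, same AE_MODE
            }
        }
    };

    private void triggerPrecapture() throws CameraAccessException {
        // Keep AE_MODE fixed and fire the precapture trigger for one request.
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH);
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
                CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START);
        mCaptureSession.capture(mPreviewRequestBuilder.build(), mPrecaptureCallback, null);

        // Keep the preview running with the same callback (and the trigger reset
        // to IDLE) so it observes AE_STATE on subsequent frames.
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
                CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_IDLE);
        mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(),
                mPrecaptureCallback, null);
    }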
So I have played around all weekend with the camera2 API. Now I'm at a point where I begin to understand how things are wired together.
While testing the API to implement a video recording app, I hit a wall though.
I started by changing the Android Camera2Video sample to my needs. What bugged me is that after each recording the camera session is recreated. Even worse, when a recording is started, the preview session is destroyed first and a recording session is created; after the recording is done, that session is destroyed and a new preview session is created.
The documentation clearly states:
Creating a session is an expensive operation and can take several hundred milliseconds... CameraCaptureSession Documentation
The result looks pretty ugly and the screen stutters when I hit record and stop. I wanted to improve this behavior so I fiddled around with the code.
What I do now is create my one and only CameraSession, to which I add my preview surface (a TextureView) and also the Surface from an already created MediaRecorder, obtained by calling its getSurface method. This works fine for the first video, but when I try to capture a second video I get an IllegalArgumentException: Bad argument passed to camera service. I think this is because the MediaRecorder surface that I pass to the CameraSession upon its creation is somehow destroyed or changed when I reset the MediaRecorder to prepare a new recording.
My question now is: is there any way around this problem? setInputSurface(Surface surface) might be one, but its API level is too high, so I didn't test it.
Here is a quick overview over the relevant code pieces:
Set up the MediaRecorder:
    private void setUpMediaRecorder() throws IOException {
        if (mMediaRecorder == null) {
            mMediaRecorder = new MediaRecorder();
        }
        mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        // Encoding parameters must be set after setOutputFormat().
        mMediaRecorder.setVideoEncodingBitRate(5000000);
        mMediaRecorder.setVideoFrameRate(24);
        mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
        mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        mMediaRecorder.setOrientationHint(SENSOR_ORIENTATION_DEFAULT_DEGREES);
        mNextVideoAbsolutePath = getVideoFilePath();
        mMediaRecorder.setOutputFile(mNextVideoAbsolutePath);
        mMediaRecorder.prepare();
    }
Create the almighty recording session:
    try {
        SurfaceTexture texture = mTextureView.getSurfaceTexture();
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
        List<Surface> surfaces = new ArrayList<>();

        // Set up Surface for the camera preview
        mPreviewSurface = new Surface(texture);
        surfaces.add(mPreviewSurface);

        // Set up Surface for the MediaRecorder
        mRecorderSurface = mMediaRecorder.getSurface();
        surfaces.add(mRecorderSurface);

        // Create the capture session
        mCameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {

            @Override
            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                mCameraSession = cameraCaptureSession;
                // Now that the session is created, start using it for the preview
                showPreview();
            }

            @Override
            public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                ....
            }
        }, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    void showPreview() {
        try {
            mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            mPreviewBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
            mPreviewBuilder.addTarget(mPreviewSurface);
            mCameraSession.setRepeatingRequest(mPreviewBuilder.build(), null, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
Start recording a video:
    mVideoBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
    mVideoBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_VIDEO);
    mVideoBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
    mVideoBuilder.addTarget(mPreviewSurface);
    mVideoBuilder.addTarget(mRecorderSurface);

    // Set the repeating request for the recording
    mCameraSession.setRepeatingRequest(mVideoBuilder.build(), null, mBackgroundHandler);

    // Start recording
    mMediaRecorder.start();
Stop recording:
    mMediaRecorder.stop();
    mMediaRecorder.reset();
    showPreview();
    setUpMediaRecorder(); // this is key to not getting an error from the MediaRecorder
All of this works perfectly, and video recording starts and stops without any hiccups! It's awesome, but when I go back to step 3 (after step 4), I get the aforementioned IllegalArgumentException: Bad argument passed to camera service. I keep banging my head against the wall, but I cannot find a way around this problem.
Any help is greatly appreciated!
Thanks!
Check out MediaRecorder#setInputSurface(android.view.Surface):
Configures the recorder to use a persistent surface when using SURFACE video source.
I stumbled upon it when trying to figure out how to reuse the MediaRecorder capture surface too. This way, you can set your persistent surface as one of the output surfaces of your capture session, and you won't have to recreate the capture session just to change the MediaRecorder surface generated by a new prepare() call.
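A rough sketch of what that looks like (API 23+; the persistentSurface name is mine, and the rest assumes the session setup from your question):

    // Create one surface that outlives individual MediaRecorder instances.
    Surface persistentSurface = MediaCodec.createPersistentInputSurface();

    // Use it (instead of mMediaRecorder.getSurface()) when creating the session:
    List<Surface> surfaces = Arrays.asList(mPreviewSurface, persistentSurface);

    // For every new recording, point the freshly reset recorder at it:
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
    // ... the other setters from setUpMediaRecorder() ...
    mMediaRecorder.setInputSurface(persistentSurface); // must come before prepare()
    mMediaRecorder.prepare();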
The Google Nexus and Pixel camera apps are able to start and stop video recording without any stutter in the preview, so it's definitely possible to do this somehow.
In my Android application, I need to get each frame returned by android.hardware.camera2, do some processing on its data, and only then display it on the SurfaceTexture.
This question is similar to mine, but it didn't help me:
Camera preview image data processing with Android L and Camera2 API
I've tried to get the frame from here (as suggested in the answer to that question):
    private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
            = new ImageReader.OnImageAvailableListener() {

        @Override
        public void onImageAvailable(ImageReader reader) {
            Log.d("Img", "onImageAvailable");
            mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
        }
    };
This was not useful, as the callback is called only after the user captures an image. I don't need the frame only on capture; I need every frame that is sent to the camera preview surface.
I wonder, can the frame be taken here instead (from the texture)?
    public void onSurfaceTextureUpdated(SurfaceTexture texture) {
        Log.d("Img", "onSurfaceTextureUpdated");
    }
If yes, how?
I'm using this sample from google, as a basis:
https://github.com/googlesamples/android-Camera2Basic
Yes, you can definitely get the buffer from the camera callback. You can provide your own texture and update it when you wish, and even modify the pixel data in the buffer.
You should push the 'original' SurfaceTexture (specified in createCaptureSession()) off screen, otherwise it will interfere with your filtered/modified buffers.
The main caveat of this approach is that it is now your responsibility to produce pseudo-preview buffers in a timely manner.
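If what you ultimately need is CPU access to every preview frame, another common approach (a sketch; the size and format must be ones your device supports, and mPreviewRequestBuilder/mBackgroundHandler come from the Camera2Basic code you started with) is to add an ImageReader as a second target of the repeating request:

    // Receives every frame produced by the repeating (preview) request.
    ImageReader previewReader = ImageReader.newInstance(width, height,
            ImageFormat.YUV_420_888, 2);
    previewReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Image image = reader.acquireLatestImage();
            if (image == null) return;
            // ... process the YUV planes here ...
            image.close();
        }
    }, mBackgroundHandler);

    // Include the reader's surface both in createCaptureSession() and here:
    mPreviewRequestBuilder.addTarget(previewReader.getSurface());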
I want to do some image processing too. I've been studying the code at github.com/googlesamples/android-Camera2Basic, and I believe that mCaptureSession redirects the camera's pipeline either to the preview texture or to the capture itself, but not to both at the same time. The preview texture is 'refreshed' by mCaptureSession.setRepeatingRequest, and mOnImageAvailableListener is called when 'capture' is fired in captureStillPicture(). But if you disable the 'preview texture' and set the repeating request with the same builder the 'preview texture' had, trying to trigger mOnImageAvailableListener, it just won't work. Has anyone else been working on this? Any enlightenment?