I'm working on an Android app that processes the input image from the camera and displays it to the user. This is fairly simple: I register a PreviewCallback on the Camera object with setPreviewCallbackWithBuffer. This is easy and works smoothly with the old camera API:
public void onPreviewFrame(byte[] data, Camera cam) {
// custom image data processing
}
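For reference, the registration with a pre-allocated buffer looks roughly like this (a minimal sketch; the buffer size assumes the default NV21 preview format):
// Minimal sketch: register a callback with a pre-allocated buffer.
Camera.Size size = camera.getParameters().getPreviewSize();
byte[] buffer = new byte[size.width * size.height * 3 / 2]; // NV21 is 12 bits/pixel
camera.addCallbackBuffer(buffer);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // custom image data processing
        cam.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});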
I'm trying to port my app to take advantage of the new Camera2 API, and I'm not sure how exactly to do that. I followed the Camera2Video sample in the L Preview samples, which records a video. However, the sample never touches the image data directly, so I don't understand where exactly I should get the image pixel data or how to process it.
Could anybody help me, or suggest how to get the functionality of PreviewCallback in Android L, or how to process preview data from the camera before displaying it on the screen? (There is no preview callback on the camera object.)
Thank you!
Combining a few answers into a more digestible one, because @VP's answer, while technically clear, is difficult to understand if it's your first time moving from Camera to Camera2:
Using https://github.com/googlesamples/android-Camera2Basic as a starting point, modify the following:
In createCameraPreviewSession(), init a new Surface from mImageReader:
Surface mImageSurface = mImageReader.getSurface();
Add that new surface as an output target of your CaptureRequest.Builder variable. In the Camera2Basic sample, the variable is mPreviewRequestBuilder:
mPreviewRequestBuilder.addTarget(mImageSurface);
Here's the snippet with the new lines (see my @AngeloS comments):
private void createCameraPreviewSession() {
try {
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
// We configure the size of default buffer to be the size of camera preview we want.
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
// This is the output Surface we need to start preview.
Surface surface = new Surface(texture);
//@AngeloS - Our new output surface for preview frame data
Surface mImageSurface = mImageReader.getSurface();
// We set up a CaptureRequest.Builder with the output Surface.
mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
//@AngeloS - Add the new target to our CaptureRequest.Builder
mPreviewRequestBuilder.addTarget(mImageSurface);
mPreviewRequestBuilder.addTarget(surface);
...
Next, in setUpCameraOutputs(), change the format from ImageFormat.JPEG to ImageFormat.YUV_420_888 when you init your ImageReader. (PS: I also recommend lowering your preview size for smoother operation - one nice feature of Camera2.)
mImageReader = ImageReader.newInstance(largest.getWidth() / 16, largest.getHeight() / 16, ImageFormat.YUV_420_888, 2);
Finally, in the onImageAvailable() method of your ImageReader.OnImageAvailableListener, be sure to use @Kamala's suggestion, because the preview will stop after a few frames if you don't close the image:
@Override
public void onImageAvailable(ImageReader reader) {
Log.d(TAG, "I'm an image frame!");
Image image = reader.acquireNextImage();
...
if (image != null)
image.close();
}
Since the Camera2 API is very different from the old Camera API, it might help to go through the documentation.
A good starting point is the Camera2Basic sample. It demonstrates how to use the Camera2 API, configure an ImageReader to get JPEG images, and register an ImageReader.OnImageAvailableListener to receive those images.
To receive preview frames, you need to add your ImageReader's surface as a target to the CaptureRequest.Builder used for setRepeatingRequest.
Also, you should set the ImageReader's format to YUV_420_888, which will give you 30 fps at 8 MP (the documentation guarantees 30 fps at 8 MP on the Nexus 5).
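Roughly, the wiring looks like this (a sketch; previewSurface stands for the TextureView's Surface, and mImageReader and mBackgroundHandler are assumed to be set up as in the sample):
try {
    // Route preview frames to both the screen and the ImageReader.
    CaptureRequest.Builder builder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(previewSurface);            // the TextureView's Surface
    builder.addTarget(mImageReader.getSurface()); // the YUV_420_888 reader
    mCaptureSession.setRepeatingRequest(builder.build(), null, mBackgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}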
In your ImageReader.OnImageAvailableListener, close the image after reading it, as shown below (this releases the buffer for the next capture). You will also have to handle the exceptions that can occur around the close.
Image image = imageReader.acquireNextImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
image.close();
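Since acquireNextImage() throws an IllegalStateException once too many images are open, a try/finally keeps the close reliable (a sketch, not from the original answer):
Image image = imageReader.acquireNextImage();
if (image != null) {
    try {
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        // process bytes here
    } finally {
        image.close(); // always release the buffer back to the reader
    }
}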
I needed the same thing, so I used their example and added a call to a new function when the camera is in the preview state.
private CameraCaptureSession.CaptureCallback mCaptureCallback
= new CameraCaptureSession.CaptureCallback() {

private void process(CaptureResult result) {
switch (mState) {
case STATE_PREVIEW: {
if (buttonPressed) {
savePreviewShot();
}
break;
}
...
savePreviewShot() is simply a recycled version of the original captureStillPicture(), adapted to use the preview template.
private void savePreviewShot(){
try {
final Activity activity = getActivity();
if (null == activity || null == mCameraDevice) {
return;
}
// This is the CaptureRequest.Builder that we use to take a picture.
final CaptureRequest.Builder captureBuilder =
mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
captureBuilder.addTarget(mImageReader.getSurface());
// Orientation
int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));
CameraCaptureSession.CaptureCallback CaptureCallback
= new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
TotalCaptureResult result) {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd_HH:mm:ss:SSS");
Date resultdate = new Date(System.currentTimeMillis());
String mFileName = sdf.format(resultdate);
mFile = new File(getActivity().getExternalFilesDir(null), "pic "+mFileName+" preview.jpg");
Log.i("Saved file", ""+mFile.toString());
unlockFocus();
}
};
mCaptureSession.stopRepeating();
mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
} catch (Exception e) {
e.printStackTrace();
}
}
It's better to init the ImageReader with a max image buffer of 2, then use reader.acquireLatestImage() inside onImageAvailable().
acquireLatestImage() acquires the latest Image from the ImageReader's queue, dropping older ones. It is recommended over acquireNextImage() for most use cases, as it is better suited to real-time processing. Note that the max image buffer should be at least 2.
And remember to close() your image after processing.
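Put together, a minimal sketch (width, height and mBackgroundHandler are assumed):
// maxImages of at least 2 lets acquireLatestImage() drop the stale frame.
mImageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireLatestImage(); // newest frame; older ones are dropped
    if (image == null) return;
    try {
        // real-time processing here
    } finally {
        image.close();
    }
}, mBackgroundHandler);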
I have an Android application that takes a photo and then displays the image. On the device I originally developed the app on, the image capture behaves as expected. However, on some other devices the image comes out rotated 90 degrees. I have been able to determine that this is not an issue with the image preview: the image itself is rotated. The code for the image capture is here:
public void takePicture(){
if(null == cameraDevice) {
return;
}
try {
System.out.println("Taking Picture");
getCameraCharacteristics();
ImageReader reader = ImageReader.newInstance(1920, 1440, ImageFormat.JPEG, 1);
//ImageReader reader = ImageReader.newInstance(camera_width, camera_height, ImageFormat.RAW_SENSOR, 1);
List<Surface> outputSurfaces = buildOutputSurfaces(reader);
final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.addTarget(reader.getSurface());
captureBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
// Orientation
int rotation = parent.getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));
ImageReader.OnImageAvailableListener readerListener = reader1 -> getImageFromBuffer(reader1);
reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
final CameraCaptureSession.CaptureCallback captureListener = new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
super.onCaptureCompleted(session, request, result);
createCameraPreview();
}
};
cameraDevice.createCaptureSession(outputSurfaces, new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession session) {
try {
session.capture(captureBuilder.build(), captureListener, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(CameraCaptureSession session) {
}
}, mBackgroundHandler);
}
catch (CameraAccessException e) {
e.printStackTrace();
}
}
Regardless of device, the value for rotation is always 0. I have tried manually setting the JPEG_ORIENTATION to different values, but it does not seem to make a difference.
I have seen other StackOverflow questions with similar issues, but the fixes in those questions did not seem to make a difference here.
Can anyone suggest what might be causing this?
EDIT: to add some more detail about the app's requirements. The issue isn't just with displaying the image but with handling it afterwards. The user has to select a point in the image, and then the pair of point and image is sent to a server for processing. As a result, I need the orientation of the underlying image to be consistent between devices; it's not enough to simply compensate when displaying the image.
Unfortunately, I can't switch my application over to using a camera intent for image capture, as the application needs to be able to observe behaviour during photo capture and provide continuous feedback.
Use Glide to load and display your taken picture:
Glide.with(context).load(imageUri).into(imageView)
Demo:
https://youtu.be/tPwr2yYxlA4
Helpful reading:
Captured image will be displayed horizontally:
https://stackoverflow.com/a/47630783/3466808
Okay, I found a solution to this issue from a blog post here. Essentially, rather than relying on setting the JPEG rotation in the capture builder, you compute it yourself, incorporating the sensor's mounting orientation (CameraCharacteristics.SENSOR_ORIENTATION) to determine how many degrees the image has to be rotated by.
// Orientation
int deviceRotation = parent.getWindowManager().getDefaultDisplay().getRotation();
int surfaceRotation = ORIENTATIONS.get(deviceRotation);
jpegOrientation = (surfaceRotation + sensorOrientation + 270) % 360;
I then decode the image into a bitmap, rotate it by the computed value, and encode it back into a byte array.
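That last step looks roughly like this (a sketch; jpegBytes stands in for the captured JPEG data):
// Decode, rotate by the computed jpegOrientation, and re-encode.
Bitmap source = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
Matrix matrix = new Matrix();
matrix.postRotate(jpegOrientation);
Bitmap rotated = Bitmap.createBitmap(source, 0, 0,
        source.getWidth(), source.getHeight(), matrix, true);
ByteArrayOutputStream out = new ByteArrayOutputStream();
rotated.compress(Bitmap.CompressFormat.JPEG, 100, out);
byte[] rotatedBytes = out.toByteArray();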
I'm using Android's Camera2 API and would like to perform some image processing on camera preview frames and then display the changes back on the preview (a TextureView).
Starting from the common camera2video example, I've set up an ImageReader in my openCamera().
mImageReader = ImageReader.newInstance(mVideoSize.getWidth(),
mVideoSize.getHeight(), ImageFormat.YUV_420_888, mMaxBufferedImages);
mImageReader.setOnImageAvailableListener(mImageAvailable, mBackgroundHandler);
In my startPreview(), I've setup the Surfaces to receive frames from the CaptureRequest.
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
List<Surface> surfaces = new ArrayList<>();
// Here is where we connect the mPreviewSurface to the mTextureView.
mPreviewSurface = new Surface(texture);
surfaces.add(mPreviewSurface);
mPreviewBuilder.addTarget(mPreviewSurface);
// Connect our Image Reader to the Camera to get the preview frames.
Surface readerSurface = mImageReader.getSurface();
surfaces.add(readerSurface);
mPreviewBuilder.addTarget(readerSurface);
Then I'll modify the image data in the OnImageAvailableListener() callback.
ImageReader.OnImageAvailableListener mImageAvailable = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
try {
Image image = reader.acquireLatestImage();
if (image == null)
return;
final Image.Plane[] planes = image.getPlanes();
// Do something to the pixels.
// Black out part of the image.
ByteBuffer y_data_buffer = planes[0].getBuffer();
byte[] y_data = new byte[y_data_buffer.remaining()];
y_data_buffer.get(y_data);
// Zero out the luma values (note: this indexing assumes the Y plane's
// rowStride equals the image width, which is not guaranteed on every device).
for (int row = 0; row < image.getHeight() / 2; row++) {
for (int col = 0; col < image.getWidth() / 2; col++) {
y_data[row * image.getWidth() + col] = 0;
}
}
image.close();
} catch (IllegalStateException e) {
Log.d(TAG, "mImageAvailable() Too many images acquired");
}
}
};
As I understand it, I am now sending images to two Surface instances: one for the mTextureView and one for my ImageReader.
How can I get my mTextureView to use the same Surface as the ImageReader, or should I be manipulating the image data directly from the mTextureView's Surface?
Thanks
If you only want to display the modified output, then I'm not sure why you have two outputs configured (the TextureView and the ImageReader).
Generally, if you want something like
camera -> in-app edits -> display
You have several options, depending on the kinds of edits you want, and various tradeoffs between ease of coding, performance, and so on.
One of the most efficient options is to do your edits as an OpenGL shader.
In that case, a GLSurfaceView is probably the simplest option.
Create a SurfaceTexture object with a texture ID that's unused in the GLSurfaceView's EGL context, and pass a Surface created from the SurfaceTexture to the camera session and requests.
Then in the GLSurfaceView's drawing method, call SurfaceTexture's updateTexImage() method, and then use the texture ID to render your output as you'd like it.
That does require a lot of OpenGL code, so if you're not familiar with it, that can be challenging.
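The initial wiring is roughly this (a sketch; it must run with a current GL context, and previewWidth/previewHeight are placeholders):
// Create an OES texture and wrap it in a SurfaceTexture the camera can write to.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture cameraTexture = new SurfaceTexture(tex[0]);
cameraTexture.setDefaultBufferSize(previewWidth, previewHeight);
Surface cameraSurface = new Surface(cameraTexture); // add as a CaptureRequest target
// In onDrawFrame(): call cameraTexture.updateTexImage(), then sample the
// external texture in a fragment shader to apply your edits.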
You can also use RenderScript for a similar effect; there you'll have an output SurfaceView or TextureView, and a RenderScript script that reads from an input Allocation fed by the camera and writes to an output Allocation bound to the View; you can create such Allocations from a Surface.
The Google HdrViewfinderDemo camera2 sample app uses this approach. It's a lot less boilerplate.
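The camera-facing side of that approach looks roughly like this (a sketch; width and height are placeholders):
// Sketch: a Surface-backed YUV input Allocation the camera can write into.
RenderScript rs = RenderScript.create(context);
Type yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(width).setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888)
        .create();
Allocation input = Allocation.createTyped(rs, yuvType,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
Surface camTarget = input.getSurface(); // add this as a CaptureRequest target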
Third, you can just use an ImageReader like you're doing now, but you'll have to do a lot of conversion yourself to write it to the screen. The simplest (but slowest) option is to get a Canvas from a SurfaceView or an ImageView and write pixels to it one by one. Or you can do that via the NDK's ANativeWindow, which is faster but requires writing JNI code and still requires you to do the YUV-to-RGB conversion yourself (or use undocumented APIs to push YUV into the ANativeWindow and hope it works).
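For that simple-but-slow Canvas route, the drawing end is roughly this (a sketch; rgbBitmap is assumed to be a frame already converted from YUV):
// Blit a converted RGB bitmap onto a SurfaceView.
SurfaceHolder holder = surfaceView.getHolder();
Canvas canvas = holder.lockCanvas();
if (canvas != null) {
    try {
        canvas.drawBitmap(rgbBitmap, 0, 0, null);
    } finally {
        holder.unlockCanvasAndPost(canvas);
    }
}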
This is how I instantiate the ImageReader.
Size[] sizes = configs.getOutputSizes(ImageFormat.YUV_420_888);
mImageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, null);
Surface rgbCaptureSurface = mImageReader.getSurface();
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(rgbCaptureSurface);
//surfaces.add(surface);
mPreviewRequestBuilder
= mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
//mPreviewRequestBuilder.addTarget(surface);
mPreviewRequestBuilder.addTarget(rgbCaptureSurface);
mCameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession cameraCaptureSession) {
// The camera is already closed
if (null == mCameraDevice) {
return;
}
// When the session is ready, we start displaying the preview.
mCaptureSession = cameraCaptureSession;
try {
// Auto focus should be continuous for camera preview.
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_VIDEO);
// Flash is automatically enabled when necessary.
//setAutoFlash(mPreviewRequestBuilder);
// Finally, we start displaying the camera preview.
mPreviewRequest = mPreviewRequestBuilder.build();
mCaptureSession.setRepeatingRequest(mPreviewRequest,
mCaptureCallback, null);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
Reading is done like this:
public void onImageAvailable(ImageReader reader) {
Image image;
while (true) {
image = reader.acquireLatestImage();
if (image == null) return;
Image.Plane Y = image.getPlanes()[0];
Image.Plane U = image.getPlanes()[1];
Image.Plane V = image.getPlanes()[2];
int Yb = Y.getBuffer().remaining();
int Ub = U.getBuffer().remaining();
int Vb = V.getBuffer().remaining();
byte[] data = new byte[Yb + Ub + Vb];
Y.getBuffer().get(data, 0, Yb);
U.getBuffer().get(data, Yb, Ub);
V.getBuffer().get(data, Yb + Ub, Vb);
// ... process data ...
image.close(); // release the buffer back to the ImageReader
}
}
I tried several different ImageFormats. I'm testing on an LG G3 running API 21, and the problem occurs. On a Nexus 4 running API 22 I do not have the problem.
I upgraded to API 23 and the same code worked fine. I also tested on API 22 and it worked there as well.
Same as: Using Camera2 API with ImageReader
Your observation is correct: API 21 does not properly support Camera2. This has been found by several people independently here on SO; see e.g. Camera2 API21 not working.
So it is reasonable not to start using Camera2 before API 22. It is hard to understand why the documentation hasn't been amended in the meantime.
Personally, I am continuing my Camera2 studies, but I am still reluctant to use Camera2 in my app now. I first want to test it on many, many devices, and for the near future I don't expect new devices to stop supporting the old "Camera1" API.
I'm trying to capture image data from the camera using the camera2 API. I've mostly used code taken from the android Capture2RAW example. Only a few images come through (i.e. calls to onImageAvailable) before stopping completely. I've tried capturing using the RAW_SENSOR and JPEG formats at different sizes with the same results. What am I doing wrong?
this.mImageReader = ImageReader.newInstance(width, height, ImageFormat.RAW_SENSOR, /*maxImages*/ 1);
Surface surface = this.mImageReader.getSurface();
final List<Surface> surfaces = Arrays.asList(surface);
this.mCamera.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
// Callback methods here
}, null);
CaptureRequest.Builder captureRequestBuilder;
captureRequestBuilder = this.mCamera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
captureRequestBuilder.addTarget(surface);
this.mCaptureRequest = captureRequestBuilder.build();
this.mCaptureSession.setRepeatingRequest(mCaptureRequest, null, null);
Fixed it. The Images produced by the ImageReader need to be closed; otherwise they quickly fill up the reader's buffer queue and capture stalls.
@Override
public void onImageAvailable(ImageReader reader) {
Image image = reader.acquireLatestImage();
if (image == null) return;
// Process the image
image.close(); // release the buffer so capture can continue
}
I am developing an Android app in which I'm using an ImageReader to get images from a Surface. The Surface's data comes from a VirtualDisplay created when recording the screen on Lollipop. The problem is that images arrive at a very low rate (1 fps; that's how often the OnImageAvailableListener.onImageAvailable() function is invoked). When I tried using MediaEncoder with this Surface as an input surface, the output video looked smooth at 30 fps.
Is there any suggestion for how to read the surface's image data at a higher fps?
mImageReader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
mImageReader.setOnImageAvailableListener(onImageListener, null);
mVirtualDisplay = mMediaProjection.createVirtualDisplay("VideoCap",
mDisplayWidth, mDisplayHeight, mScreenDensity,
DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
mImageReader.getSurface(), null /*Callbacks*/, null /*Handler*/);
//
//
OnImageAvailableListener onImageListener = new OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
if(reader != mImageReader)
return;
Image image = reader.acquireLatestImage();
if(image == null)
return;
// do some stuff
image.close();
}
};
FPS increased dramatically when I switched to another format:
mImageReader = ImageReader.newInstance(mVideoSize.getWidth(), mVideoSize.getHeight(), ImageFormat.YUV_420_888, 2);
Hope this will help you.
I found that in addition to selecting the YUV format, I also had to select the smallest image size available for the device in order to get a significant speed increase.
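Selecting that smallest size can look like this (a sketch; characteristics is the device's CameraCharacteristics):
// Pick the smallest YUV_420_888 output size the device supports.
StreamConfigurationMap map = characteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size smallest = Collections.min(
        Arrays.asList(map.getOutputSizes(ImageFormat.YUV_420_888)),
        (a, b) -> Long.signum((long) a.getWidth() * a.getHeight()
                - (long) b.getWidth() * b.getHeight()));
mImageReader = ImageReader.newInstance(smallest.getWidth(), smallest.getHeight(),
        ImageFormat.YUV_420_888, 2);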