Android Video Processing - how to connect the ImageReader Surface to the preview?

I'm using Android's Camera2 API and would like to perform some image processing on camera preview frames and then display the changes back on the preview (TextureView).
Starting from the common camera2video example, I've set up an ImageReader in my openCamera().
mImageReader = ImageReader.newInstance(mVideoSize.getWidth(),
mVideoSize.getHeight(), ImageFormat.YUV_420_888, mMaxBufferedImages);
mImageReader.setOnImageAvailableListener(mImageAvailable, mBackgroundHandler);
In my startPreview(), I've set up the Surfaces to receive frames from the CaptureRequest.
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
List<Surface> surfaces = new ArrayList<>();
// Here is where we connect the mPreviewSurface to the mTextureView.
mPreviewSurface = new Surface(texture);
surfaces.add(mPreviewSurface);
mPreviewBuilder.addTarget(mPreviewSurface);
// Connect our Image Reader to the Camera to get the preview frames.
Surface readerSurface = mImageReader.getSurface();
surfaces.add(readerSurface);
mPreviewBuilder.addTarget(readerSurface);
Then I'll modify the image data in the OnImageAvailableListener() callback.
ImageReader.OnImageAvailableListener mImageAvailable = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        try {
            Image image = reader.acquireLatestImage();
            if (image == null)
                return;
            final Image.Plane[] planes = image.getPlanes();
            // Do something to the pixels.
            // Black out part of the image.
            ByteBuffer y_data_buffer = planes[0].getBuffer();
            byte[] y_data = new byte[y_data_buffer.remaining()];
            y_data_buffer.get(y_data);
            byte y_value;
            for (int row = 0; row < image.getHeight() / 2; row++) {
                for (int col = 0; col < image.getWidth() / 2; col++) {
                    y_value = y_data[row * image.getWidth() + col];
                    y_value = 0;
                    y_data[row * image.getWidth() + col] = y_value;
                }
            }
            image.close();
        } catch (IllegalStateException e) {
            Log.d(TAG, "mImageAvailable() Too many images acquired");
        }
    }
};
As I understand it, I am now sending frames to two Surface instances: one for mTextureView and the other for my ImageReader.
How can I get my mTextureView to use the same Surface as the ImageReader, or should I be manipulating the image data directly from the mTextureView's Surface?
Thanks

If you only want to display the modified output, then I'm not sure why you have two outputs configured (the TextureView and the ImageReader).
Generally, if you want something like
camera -> in-app edits -> display
You have several options, depending on the kinds of edits you want, and various tradeoffs between ease of coding, performance, and so on.
One of the most efficient options is to do your edits as an OpenGL shader.
In that case, a GLSurfaceView is probably the simplest option.
Create a SurfaceTexture object with a texture ID that's unused in the GLSurfaceView's EGL context, and pass a Surface created from the SurfaceTexture to the camera session and requests.
Then in the GLSurfaceView's drawing method (onDrawFrame), call the SurfaceTexture's updateTexImage() method, and then use the texture ID to render your output as you'd like it.
That does require a lot of OpenGL code, so if you're not familiar with it, that can be challenging.
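For a rough idea of the shape of that code, here is a minimal sketch, assuming a GLSurfaceView set to RENDERMODE_WHEN_DIRTY and a hypothetical drawTexturedQuad() helper that draws a full-screen quad with a samplerExternalOES fragment shader (the shader is where your per-pixel edits would go):
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class CameraEditRenderer implements GLSurfaceView.Renderer {
    private SurfaceTexture mCameraTexture;
    private int mTexId;
    private final float[] mTexMatrix = new float[16];

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Allocate an external-OES texture for the camera frames.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        mTexId = tex[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTexId);
        mCameraTexture = new SurfaceTexture(mTexId);
        // Hand new Surface(mCameraTexture) to the camera session and request
        // (e.g. via a callback back into your Activity/Fragment).
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Latch the most recent camera frame into the external texture.
        mCameraTexture.updateTexImage();
        mCameraTexture.getTransformMatrix(mTexMatrix);
        // Hypothetical helper: draw a full-screen quad sampling the external
        // texture, applying your edits in the fragment shader.
        drawTexturedQuad(mTexId, mTexMatrix);
    }

    private void drawTexturedQuad(int texId, float[] texMatrix) {
        // Shader and vertex setup omitted for brevity.
    }
}
With RENDERMODE_WHEN_DIRTY you would call glSurfaceView.requestRender() from the SurfaceTexture's OnFrameAvailableListener so that each camera frame triggers exactly one draw.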
You can also use RenderScript for a similar effect; there you'll have an output SurfaceView or TextureView, and then a RenderScript script that reads from an input Allocation from the Camera and writes to an output Allocation to the View; you can create such Allocations from a Surface.
The Google HdrViewfinderDemo camera2 sample app uses this approach. It's a lot less boilerplate.
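Very roughly, the Allocation plumbing looks like the sketch below; it's modeled loosely on HdrViewfinderDemo, and width, height, viewSurface, script and forEach_process() are illustrative names (script would be the ScriptC_* instance generated from your own .rs kernel), not part of any API:
RenderScript rs = RenderScript.create(context);

// Input: a YUV Allocation the camera writes into through its Surface.
Type yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(width).setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888)
        .create();
Allocation inputAlloc = Allocation.createTyped(rs, yuvType,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
Surface cameraTarget = inputAlloc.getSurface();   // add this as a target of the preview request

// Output: an RGBA Allocation that feeds the view's Surface.
Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(width).setY(height)
        .create();
Allocation outputAlloc = Allocation.createTyped(rs, rgbaType,
        Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
outputAlloc.setSurface(viewSurface);              // Surface from your SurfaceView/TextureView

inputAlloc.setOnBufferAvailableListener(a -> {
    inputAlloc.ioReceive();                           // latch the newest camera frame
    script.forEach_process(inputAlloc, outputAlloc);  // run your RenderScript kernel
    outputAlloc.ioSend();                             // push the result to the view
});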
Third, you can just use an ImageReader like you're doing now, but you'll have to do a lot of conversion yourself to write it to the screen. The simplest (but slowest) option is to get a Canvas from a SurfaceView or an ImageView, and just write pixels to it one by one. Or you can do that via the ANativeWindow NDK APIs, which is faster but requires writing JNI code, and still requires you to do the YUV->RGB conversion yourself (or use undocumented APIs to push YUV into the ANativeWindow and hope it works).
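As a bare-bones sketch of that slow Canvas route, assuming a plain SurfaceView named mSurfaceView and a hypothetical yuvToArgb() helper that converts the Image's planes into packed ARGB ints:
ImageReader.OnImageAvailableListener listener = reader -> {
    Image image = reader.acquireLatestImage();
    if (image == null) {
        return;
    }
    try {
        // Hypothetical helper: YUV_420_888 planes -> int[] of packed ARGB pixels.
        int[] argb = yuvToArgb(image);
        Bitmap bitmap = Bitmap.createBitmap(argb,
                image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
        Canvas canvas = mSurfaceView.getHolder().lockCanvas();
        if (canvas != null) {
            canvas.drawBitmap(bitmap, 0, 0, null);
            mSurfaceView.getHolder().unlockCanvasAndPost(canvas);
        }
    } finally {
        image.close();  // always hand the buffer back to the ImageReader
    }
};
This copies and converts every frame on the CPU, which is why it's the slowest of the three options.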

Related

Camera2 ImageReader freezes repeating capture request

I'm trying to capture image data from the camera using the camera2 API. I've mostly used code taken from the android Capture2RAW example. Only a few images come through (i.e. calls to onImageAvailable) before stopping completely. I've tried capturing using the RAW_SENSOR and JPEG formats at different sizes with the same results. What am I doing wrong?
this.mImageReader = ImageReader.newInstance(width, height, ImageFormat.RAW_SENSOR, /*maxImages*/ 1);
Surface surface = this.mImageReader.getSurface();
final List<Surface> surfaces = Arrays.asList(surface);
this.mCamera.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
// Callback methods here
}, null);
CaptureRequest.Builder captureRequestBuilder;
captureRequestBuilder = this.mCamera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
captureRequestBuilder.addTarget(surface);
this.mCaptureRequest = captureRequestBuilder.build();
this.mCaptureSession.setRepeatingRequest(mCaptureRequest, null, null);
Fixed it. The Images produced by the ImageReader need to be closed, otherwise they quickly fill up memory.
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    // Process the image
    image.close();
}

Unable to blit from External Texture to EGLSurface in android

When I try to render the texture and transformation matrix to the EGLSurface, nothing is displayed in the view.
As a follow-up to this issue, I have slightly modified the code, following grafika/fadden's continuous capture sample code.
Here is my code:
Here is the draw method, which runs on the RenderThread.
This draw method is invoked properly whenever data is produced at the producer end by the native code.
public void drawFrame() {
    mOffScreenSurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);
    mSurfaceWindowUser.makeCurrent();
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mSurfaceWindowUser.swapBuffers();
}
The run() method of the RenderThread:
public void run() {
    Looper.prepare();
    mHandler = new RenderHandler(this);
    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mOffScreenSurface = new OffscreenSurface(mEglCore, 640, 480);
    mOffScreenSurface.makeCurrent();
    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    // This surface I send to native code, where I use an ANativeWindow
    // reference and copy the data using the post method. {producer}
    mCameraSurface = new Surface(mCameraTexture);
    mCameraTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture surfaceTexture) {
            Log.d(TAG, "Long breath.. data is pumbed by Native Layer producer..");
            mHandler.frameReceivedFromProducer();
        }
    });
    mSurfaceWindowUser = new WindowSurface(mEglCore, mSurfaceUser, false); // mSurfaceUser is the Surface received from the MainActivity TextureView.
}
To confirm that the producer on the native side is producing data: if I pass the user surface directly, without any EGL configuration, the frames are rendered to the screen.
At the native level:
geometryResult = ANativeWindow_setBuffersGeometry(userNaiveWindow, 640, 480, WINDOW_FORMAT_RGBA_8888);
To render a frame I use ANativeWindow_lock() and ANativeWindow_unlockAndPost() to write the frame directly into the buffer.
I cannot figure out what could be wrong or where I should dig further.
Thanks, fadden, for your help.

ImageReader in Android needs too long time for one frame to be available

I am developing an Android app in which I'm using an ImageReader to get images from a Surface. The Surface's data comes from a VirtualDisplay used for screen recording on Lollipop. The problem is that images become available at a very low rate (about 1 fps), i.e. the OnImageAvailableListener.onImageAvailable() callback is only invoked about once per second. When I tried to use MediaEncoder with this surface as an input surface, the output video looked smooth at close to 30 fps.
Is there any suggestion for how to read the image data of the surface at a higher fps?
mImageReader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
mImageReader.setOnImageAvailableListener(onImageListener, null);
mVirtualDisplay = mMediaProjection.createVirtualDisplay("VideoCap",
        mDisplayWidth, mDisplayHeight, mScreenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mImageReader.getSurface(), null /*Callbacks*/, null /*Handler*/);
//
//
OnImageAvailableListener onImageListener = new OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        if (reader != mImageReader)
            return;
        Image image = reader.acquireLatestImage();
        if (image == null)
            return;
        // do some stuff
        image.close();
    }
};
FPS increased dramatically when I switched to another format:
mImageReader = ImageReader.newInstance(mVideoSize.getWidth(), mVideoSize.getHeight(), ImageFormat.YUV_420_888, 2);
Hope this will help you.
I found that in addition to selecting the YUV format, I also had to select the smallest image size available for the device in order to get a significant speed increase.
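For illustration, a sketch of that combination (the YUV format plus a reduced capture size); the divisor of 2 and the mMediaProjection, mScreenDensity and mBackgroundHandler fields are assumptions, and whether a given device accepts YUV_420_888 on a VirtualDisplay surface can vary:
// Capture at a fraction of the native resolution to cut per-frame cost.
int captureWidth = mDisplayWidth / 2;
int captureHeight = mDisplayHeight / 2;

mImageReader = ImageReader.newInstance(captureWidth, captureHeight,
        ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(onImageListener, mBackgroundHandler);

mVirtualDisplay = mMediaProjection.createVirtualDisplay("VideoCap",
        captureWidth, captureHeight, mScreenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mImageReader.getSurface(), null /*Callbacks*/, null /*Handler*/);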

Camera preview image data processing with Android L and Camera2 API

I'm working on an Android app that processes the input image from the camera and displays it to the user. This is fairly simple: I register a PreviewCallback on the camera object with setPreviewCallbackWithBuffer.
This is easy and works smoothly with the old camera API
public void onPreviewFrame(byte[] data, Camera cam) {
// custom image data processing
}
I'm trying to port my app to take advantage of the new Camera2 API and I'm not sure how exactly I should do that. I followed the Camera2Video sample in the L Preview samples, which records a video. However, there is no direct image data transfer in the sample, so I don't understand where exactly I should get the image pixel data and how to process it.
Could anybody help me or suggest how one can get the functionality of PreviewCallback in Android L, or how it's possible to process preview data from the camera before displaying it on the screen? (There is no preview callback on the camera object.)
Thank you!
Combining a few answers into a more digestible one, because @VP's answer, while technically clear, is difficult to understand if it's your first time moving from Camera to Camera2:
Using https://github.com/googlesamples/android-Camera2Basic as a starting point, modify the following:
In createCameraPreviewSession() init a new Surface from mImageReader
Surface mImageSurface = mImageReader.getSurface();
Add that new surface as an output target of your CaptureRequest.Builder variable. Using the Camera2Basic sample, the variable will be mPreviewRequestBuilder
mPreviewRequestBuilder.addTarget(mImageSurface);
Here's the snippet with the new lines (see my @AngeloS comments):
private void createCameraPreviewSession() {
    try {
        SurfaceTexture texture = mTextureView.getSurfaceTexture();
        assert texture != null;

        // We configure the size of default buffer to be the size of camera preview we want.
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

        // This is the output Surface we need to start preview.
        Surface surface = new Surface(texture);

        //@AngeloS - Our new output surface for preview frame data
        Surface mImageSurface = mImageReader.getSurface();

        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);

        //@AngeloS - Add the new target to our CaptureRequest.Builder
        mPreviewRequestBuilder.addTarget(mImageSurface);
        mPreviewRequestBuilder.addTarget(surface);
        ...
Next, in setUpCameraOutputs(), change the format from ImageFormat.JPEG to ImageFormat.YUV_420_888 when you init your ImageReader. (PS, I also recommend dropping your preview size for smoother operation - one nice feature of Camera2)
mImageReader = ImageReader.newInstance(largest.getWidth() / 16, largest.getHeight() / 16, ImageFormat.YUV_420_888, 2);
Finally, in your onImageAvailable() method of ImageReader.OnImageAvailableListener, be sure to use @Kamala's suggestion, because the preview will stop after a few frames if you don't close it
@Override
public void onImageAvailable(ImageReader reader) {
    Log.d(TAG, "I'm an image frame!");
    Image image = reader.acquireNextImage();
    ...
    if (image != null)
        image.close();
}
Since the Camera2 API is very different from the current Camera API, it might help to go through the documentation.
A good starting point is camera2basic example. It demonstrates how to use Camera2 API and configure ImageReader to get JPEG images and register ImageReader.OnImageAvailableListener to receive those images
To receive preview frames, you need to add your ImageReader's surface to setRepeatingRequest's CaptureRequest.Builder.
Also, you should set ImageReader's format to YUV_420_888, which will give you 30fps at 8MP (The documentation guarantees 30fps at 8MP for Nexus 5).
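A minimal sketch of that wiring, assuming mPreviewSurface (from the TextureView), mImageReader and mBackgroundHandler are already set up as in Camera2Basic:
mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(mPreviewSurface);              // TextureView preview
mPreviewRequestBuilder.addTarget(mImageReader.getSurface());    // per-frame YUV callback

mCameraDevice.createCaptureSession(
        Arrays.asList(mPreviewSurface, mImageReader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                mCaptureSession = session;
                try {
                    session.setRepeatingRequest(mPreviewRequestBuilder.build(),
                            null, mBackgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
                Log.e(TAG, "Capture session configuration failed");
            }
        }, mBackgroundHandler);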
In the ImageReader.OnImageAvailableListener class, close the image after reading it, as shown below (this will release the buffer for the next capture). You will have to handle any exception thrown on close.
Image image = imageReader.acquireNextImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
image.close();
I needed the same thing, so I used their example and added a call to a new function when the camera is in preview state.
private CameraCaptureSession.CaptureCallback mCaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    private void process(CaptureResult result) {
        switch (mState) {
            case STATE_PREVIEW: {
                if (buttonPressed) {
                    savePreviewShot();
                }
                break;
            }
        }
    }
    ...
};
The savePreviewShot() is simply a recycled version of the original captureStillPicture() adapted to use the preview template.
private void savePreviewShot() {
    try {
        final Activity activity = getActivity();
        if (null == activity || null == mCameraDevice) {
            return;
        }
        // This is the CaptureRequest.Builder that we use to take a picture.
        final CaptureRequest.Builder captureBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        captureBuilder.addTarget(mImageReader.getSurface());

        // Orientation
        int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
        captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));

        CameraCaptureSession.CaptureCallback CaptureCallback
                = new CameraCaptureSession.CaptureCallback() {
            @Override
            public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
                                           TotalCaptureResult result) {
                SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd_HH:mm:ss:SSS");
                Date resultdate = new Date(System.currentTimeMillis());
                String mFileName = sdf.format(resultdate);
                mFile = new File(getActivity().getExternalFilesDir(null), "pic " + mFileName + " preview.jpg");
                Log.i("Saved file", "" + mFile.toString());
                unlockFocus();
            }
        };

        mCaptureSession.stopRepeating();
        mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
It's better to initialize the ImageReader with a max image buffer of 2, then use reader.acquireLatestImage() inside onImageAvailable(), because acquireLatestImage() acquires the latest Image from the ImageReader's queue, dropping older ones. It is recommended over acquireNextImage() for most use cases, as it is better suited to real-time processing. Note that the max image buffer should be at least 2.
And remember to close() your Image after processing.
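A short sketch of that pattern, where processFrame() stands in for whatever per-frame work you do:
mImageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, /*maxImages*/ 2);
mImageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireLatestImage();  // newest frame; older ones are dropped
    if (image == null) {
        return;  // no frame currently available
    }
    try {
        processFrame(image);  // hypothetical per-frame processing
    } finally {
        image.close();        // always release the buffer back to the reader
    }
}, mBackgroundHandler);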

YUV_NV21_TO_RGB not working?

I am starting to develop an app which monitors the camera preview, does some image processing on it and displays it on a canvas. Just as a diagnostic I have the following code:
camera = Camera.open();
ImageFormat imf = new ImageFormat();
Camera.Parameters param = camera.getParameters();
param.setPreviewSize(128, 128);
preview_format = param.getPreviewFormat();
Camera.Size sz = param.getPreviewSize();
myimage = new int[sz.width*sz.height];
At run time it reports that preview_format is 17 which I understand is "NV21".
Later I have:
camera.setPreviewCallback(new PreviewCallback()
{
    public void onPreviewFrame(byte[] _data, Camera _camera)
    {
        YUV_NV21_TO_RGB(myimage, _data, 128, 128);
    }
});
The function YUV_NV21_TO_RGB was taken from here.
Meanwhile in another thread I have:
canvas.drawBitmap(
myimage, // the int array
0, // where to start in the array
128, // the stride ???
200, // x coord of where to display
200, // y coord of where to display
128, // wid
128, // ht
false, // alpha used?
null); // the paint used
The resulting image can be seen amongst other diagnostics in the square below. The stripes change as I move the phone around and appear to correspond in some way to what the camera is pointing at, but clearly it has been mangled. I tried using an alternative function found here, and another from Wikipedia, but with seemingly identical results. Any ideas?
EDIT: One thought I had was that perhaps NV21 may not completely specify the format - maybe its a class of formats, where you need to go on and specify the bits per pixel or similar.
EDIT: An extra clue - if I cover the camera completely, the square goes entirely pure green.
Your preview size is not 128 by 128 because you fail to set it. You set it on the Camera.Parameters instance but you don't apply it to the camera.
You need to add the following line:
camera.setParameters(param);
And it's probably safer to read the values back from the camera after applying the parameters:
preview_format = camera.getParameters().getPreviewFormat();
Camera.Size sz = camera.getParameters().getPreviewSize();
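Putting the fix together, a minimal sketch of the corrected setup (keeping the question's field names):
camera = Camera.open();
Camera.Parameters param = camera.getParameters();
param.setPreviewSize(128, 128);
camera.setParameters(param);                 // actually apply the requested size

// Re-read the values the camera really accepted.
Camera.Parameters applied = camera.getParameters();
preview_format = applied.getPreviewFormat(); // should still be NV21 (17) on most devices
Camera.Size sz = applied.getPreviewSize();   // may differ from the requested size
myimage = new int[sz.width * sz.height];
Note that setParameters() will throw if 128x128 is not in getSupportedPreviewSizes(), so in practice you would pick the closest supported size.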
