I am currently developing an application which uses Camera2.
I display the preview on a TextureView that is scaled and translated (I only need to display part of the image). My problem is that I need to analyze the entire image.
Here is what I have in my CameraDevice.StateCallback:
@Override
public void onOpened(CameraDevice camera) {
    mCameraDevice = camera;
    SurfaceTexture texture = mTextureView.getSurfaceTexture();
    texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    Surface surface = new Surface(texture);
    try {
        mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    try {
        mPreviewBuilder.addTarget(surface);
        mCameraDevice.createCaptureSession(Arrays.asList(surface), mPreviewStateCallback, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
and in my SurfaceTextureListener:
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    Thread thread = new Thread(new Runnable() {
        @Override
        public void run() {
            my_analyze(mTextureView.getBitmap());
        }
    });
    thread.start();
}
The bitmap is only what I see in the TextureView (which is logical), but I want the entire image.
Is this possible?
Thanks,
NiCLO
You can send the frames to a SurfaceTexture you create, rather than one that's part of TextureView, then get the pixels by rendering them to a GLES pbuffer and reading them back with glReadPixels().
If you can work in YUV rather than RGB, you can get to the data faster and more simply by directing the Camera2 output to an ImageReader.
Grafika has some useful examples, e.g. "texture from camera".
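A minimal sketch of the ImageReader route, assuming you work in YUV (previewWidth, previewHeight, and mBackgroundHandler are placeholders for values from your own setup):
ImageReader reader = ImageReader.newInstance(previewWidth, previewHeight,
        ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) {
        return;
    }
    ByteBuffer yPlane = image.getPlanes()[0].getBuffer(); // luminance plane
    // ... analyze the full, untransformed frame here ...
    image.close(); // always close the Image, or the reader stalls
}, mBackgroundHandler);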
For me, the following implementation works fine:
Bitmap bitmap = mTextureView.getBitmap(mWidth, mHeight);
int[] argb = new int[mWidth * mHeight];
// get the ARGB pixels, then process them with an 8UC4 OpenCV conversion
bitmap.getPixels(argb, 0, mWidth, 0, 0, mWidth, mHeight);
// native method (NDK or CMake)
processFrame8UC4(argb, mWidth, mHeight);
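For completeness, the native method would be declared along these lines (a hypothetical JNI-style signature, not taken from the linked answer):
// Hypothetical declaration matching the call above; the actual
// implementation lives in native code (NDK or CMake).
public native void processFrame8UC4(int[] argb, int width, int height);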
A complete implementation for Camera API2 and NDK (OpenCV) here: https://stackoverflow.com/a/49331546/471690
I want to take a screenshot of the camera preview every second.
I show my camera preview using a SurfaceView. I need to grab the preview (a screenshot) every second, but without taking a photo.
I know about the mCamera.setPreviewCallbackWithBuffer method, but I only receive a single frame from it (see the note after the code). To make it update every second, I would need to start a MediaRecorder and record video. But for video I need to set an output file, which means it can use a lot of memory.
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Parameters parameters = camera.getParameters();
        int width = parameters.getPreviewSize().width;
        int height = parameters.getPreviewSize().height;
        YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 50, out);
        byte[] bytes = out.toByteArray();
        final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    }
});
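For reference, setPreviewCallbackWithBuffer only delivers a frame when a buffer is available, so each buffer must be handed back with addCallbackBuffer to keep frames coming. A minimal sketch of that round trip (the buffer size assumes the default NV21 preview format):
// Supply one buffer sized for the preview format before starting the preview.
Camera.Parameters params = mCamera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
mCamera.addCallbackBuffer(new byte[bufSize]);

mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process the frame ...
        // Hand the buffer back, or no further frames will be delivered.
        camera.addCallbackBuffer(data);
    }
});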
How can I do this without setting an output file or taking a photo every second?
If you are using a SurfaceView to display your camera preview, you may try calling Camera#takePicture every second.
To schedule it approximately every second, you can use the postDelayed method of any view. For example:
private Runnable capturePreview = new Runnable() {
    @Override
    public void run() {
        camera.takePicture(null, null, callback);
        // Run again after approximately 1 second.
        surfaceView.postDelayed(this, 1000);
    }
};
private Camera.PictureCallback callback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
        // Do whatever you need with your bitmap.
        // Consider freeing the memory afterwards.
        bitmap.recycle();
    }
};
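Note that on most devices the preview stops once takePicture() completes, so you will likely need to restart it before the next scheduled capture. A sketch of that adjustment (the startPreview() line is my addition, not part of the original callback):
@Override
public void onPictureTaken(byte[] data, Camera camera) {
    camera.startPreview(); // preview halts after a capture; resume it
    Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
    // ...
}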
And you can start it whenever the preview is ready by calling:
surfaceView.postDelayed(capturePreview, 1000);
And stop it whenever the preview is no longer displayed:
surfaceView.removeCallbacks(capturePreview);
If you are using TextureView you can simply use getBitmap() which allows you to easily grab its current contents.
Therefore the code above becomes something like:
private Runnable capturePreview = new Runnable() {
    @Override
    public void run() {
        Bitmap preview = textureView.getBitmap();
        // Do whatever you need with the bitmap.
        // Run again after approximately 1 second.
        textureView.postDelayed(this, 1000);
    }
};
And again start:
textureView.postDelayed(capturePreview, 1000);
and stop:
textureView.removeCallbacks(capturePreview);
I use a GLSurfaceView to display the camera preview data. To capture a frame, I call createBitmapFromGLSurface, which is meant to grab the pixels and save them to a Bitmap.
However, I always get a completely black picture after saving the bitmap to a file. Where am I wrong?
Following is my code snippet.
@Override
public void onDrawFrame(GL10 gl) {
    if (mIsNeedCaptureFrame) {
        mIsNeedCaptureFrame = false;
        createBitmapFromGLSurface(width, height);
    }
}
private void createBitmapFromGLSurface(int w, int h) {
    ByteBuffer buf = ByteBuffer.allocateDirect(w * h * 4);
    buf.position(0);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, w, h,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();
    Bitmap bmp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bmp.copyPixelsFromBuffer(buf);
    Log.i(TAG, "createBitmapFromGLSurface w:" + w + ",h:" + h);
    mFrameCapturedCallback.onFrameCaptured(bmp);
}
Update:
public void captureFrame(FrameCapturedCallback callback) {
    mIsNeedCaptureFrame = true;
    mCallback = callback;
}
public void takeScreenshot() {
    final int width = mIncomingWidth;
    final int height = mIncomingHeight;
    EglCore eglCore = new EglCore(EGL14.eglGetCurrentContext(), EglCore.FLAG_RECORDABLE);
    OffscreenSurface surface = new OffscreenSurface(eglCore, width, height);
    surface.makeCurrent();
    Bitmap bitmap = surface.captureFrame();
    for (int x = 0, y = 0; x < 100; x++, y++) {
        Log.i(TAG, "getPixel:" + bitmap.getPixel(x, y));
    }
    surface.release();
    eglCore.release();
    mCallback.onFrameCaptured(bitmap);
}
@Override
public void onDrawFrame(GL10 gl) {
    mSurfaceTexture.updateTexImage();
    if (mIsNeedCaptureFrame) {
        mIsNeedCaptureFrame = false;
        takeScreenshot();
        return;
    }
    ...
}
The logs are following:
getPixel:0
getPixel:0
getPixel:0
...
This won't work.
To understand why, bear in mind that a SurfaceView Surface is a queue of buffers with a producer-consumer relationship. When displaying camera preview data, the Camera is the producer, and the system compositor (SurfaceFlinger) is the consumer. Only the producer can send data to the Surface -- there can be only one producer at a time -- and only the consumer can examine buffers from the Surface.
If you were drawing on the Surface yourself, so your app would be the producer, you would be able to see what you've drawn, but only while in onDrawFrame(). When it returns, GLSurfaceView calls eglSwapBuffers(), and the frame you've drawn is sent off to the consumer. (Technically, because the buffers are in a pool and are re-used, you can read frames outside onDrawFrame(); but what you'd be reading would be stale data from 1-2 frames back, not the one you just drew.)
What you're doing here is reading data from an EGLSurface that has never been drawn on and isn't connected to the SurfaceView. That's why it's always reading black. The Camera doesn't "draw" the preview, it just takes a buffer of YUV data and shoves it into the BufferQueue.
If you want to show the preview and capture frames using GLSurfaceView, see the "show + capture camera" example in Grafika. You can replace the MediaCodec code with a glReadPixels() (see EglSurfaceBase.saveFrame(), which looks very much like what you have).
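A minimal sketch of that idea, capturing inside onDrawFrame() right after rendering and before the buffer swap (drawFrame, mWidth, mHeight, and mCallback stand in for your own rendering code and fields):
@Override
public void onDrawFrame(GL10 gl) {
    mSurfaceTexture.updateTexImage();
    drawFrame(); // render the camera texture first (placeholder for your draw code)

    if (mIsNeedCaptureFrame) {
        mIsNeedCaptureFrame = false;
        // Read back the pixels just drawn, before eglSwapBuffers() sends them off.
        ByteBuffer buf = ByteBuffer.allocateDirect(mWidth * mHeight * 4);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        GLES20.glReadPixels(0, 0, mWidth, mHeight,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        buf.rewind();
        Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf); // note: rows come back bottom-up (GL origin), so flip if needed
        mCallback.onFrameCaptured(bmp);
    }
}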
Is there any equivalent of Camera.PreviewCallback in Camera2 (API 21+) that is better than mapping the output onto a SurfaceTexture and pulling a Bitmap? I need to be able to pull the preview data off the camera as YUV.
You can start from the Camera2Basic sample code from Google.
You need to add the surface of the ImageReader as a target to the preview capture request:
// set up a CaptureRequest.Builder with the output Surface
mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(surface);
mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
After that, you can retrieve the image in the ImageReader.OnImageAvailableListener:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        try {
            image = reader.acquireLatestImage();
            if (image != null) {
                ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                Bitmap bitmap = fromByteBuffer(buffer);
            }
        } catch (Exception e) {
            Log.w(LOG_TAG, e.getMessage());
        } finally {
            if (image != null) {
                image.close(); // release the Image so the reader can deliver more frames
            }
        }
    }
};
To get a Bitmap from the ByteBuffer:
Bitmap fromByteBuffer(ByteBuffer buffer) {
    byte[] bytes = new byte[buffer.capacity()];
    buffer.get(bytes, 0, bytes.length);
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
}
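Keep in mind that BitmapFactory.decodeByteArray only decodes compressed data, so this snippet assumes the ImageReader was created with a compressed format, e.g. (width and height are placeholders):
// Only valid if the reader delivers compressed JPEG frames:
ImageReader mImageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 2);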
Yes, use the ImageReader class.
Create an ImageReader using the format ImageFormat.YUV_420_888 and your desired size (make sure you select a size that's supported by the camera device you're using).
Then use ImageReader.getSurface() for a Surface to provide to CameraDevice.createCaptureSession(), along with your other preview outputs, if any.
Finally, add the Surface provided by the ImageReader as a target of your preview request before setting it as the repeating request on the capture session, as sketched below.
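Putting those steps together, a rough sketch (mPreviewSurface, mSessionCallback, mSession, and mBackgroundHandler are placeholders for your own objects):
try {
    mImageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
    mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);

    // Include the reader's Surface among the session outputs...
    mCameraDevice.createCaptureSession(
            Arrays.asList(mPreviewSurface, mImageReader.getSurface()),
            mSessionCallback, mBackgroundHandler);

    // ...and, once the session is configured, add it as a target
    // of the repeating preview request.
    CaptureRequest.Builder builder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(mPreviewSurface);
    builder.addTarget(mImageReader.getSurface());
    mSession.setRepeatingRequest(builder.build(), null, mBackgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}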
I am working with face detection in Android and I want achieve the following:
1. Use face detection listener in Android for detecting faces on camera frame.
2. If a face is detected on the camera frame, then extract the face and save it to external storage.
After going through existing questions, I have found that there is no direct way to convert a detected face to a bitmap and store it on disk. So now I want to capture and save the entire camera frame in which the face was detected, and I have not been able to do so.
The current code structure is as follows:
FaceDetectionListener faceDetectionListener = new FaceDetectionListener() {
    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length == 0) {
        } else {
            displayMessage("Face detected!");
            // CODE TO SAVE CURRENT FRAME AS IMAGE
            finish();
        }
    }
};
I tried to achieve this by calling takePicture in the above method, but I was unable to save the frame using that approach. Kindly suggest a way in which I can save the camera frame.
I could not figure out a direct way to save the camera frame from within FaceDetectionListener. Instead, I changed the way I handle the camera preview data: I implemented the Camera.PreviewCallback interface and put the logic in its onPreviewFrame method. The outline of the implementation is as follows:
class SaveFaceFrames extends Activity implements Camera.PreviewCallback, Camera.FaceDetectionListener {

    boolean lock = false;

    public void onPreviewFrame(byte[] data, Camera camera) {
        ...
        if (lock) {
            Camera.Parameters parameters = camera.getParameters();
            Camera.Size size = parameters.getPreviewSize();
            YuvImage image = new YuvImage(data, parameters.getPreviewFormat(), size.width, size.height, null);
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            image.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
            lock = false;
        }
    }

    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        ...
        if (!lock) {
            if (faces.length != 0) lock = true;
        }
    }
}
This is not an ideal solution, but it worked in my case. There are also third-party libraries that can be used in these scenarios; one I have used that works very well is Qualcomm's Snapdragon SDK. I hope someone finds this useful.
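To complete the second step from the question (saving the frame), a minimal sketch of writing the bitmap decoded above to storage (file name and quality are arbitrary; add error handling and storage permissions as your API level requires):
// Write the captured frame as a JPEG next to the app's other files.
File file = new File(getExternalFilesDir(null), "face_frame.jpg");
try (FileOutputStream fos = new FileOutputStream(file)) {
    bmp.compress(Bitmap.CompressFormat.JPEG, 90, fos);
} catch (IOException e) {
    e.printStackTrace();
}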
The code below is executed as the JPEG picture callback after takePicture is called. If I save data to disk, it is a 1280x960 JPEG. I've tried to change the picture size, but that's not possible: no smaller size is supported, and JPEG is the only available picture format.
PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        FileOutputStream out = null;
        Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length);
        Bitmap sbm = Bitmap.createScaledBitmap(bm, 640, 480, false);
    }
};
data.length is about 500k, as expected. After executing BitmapFactory.decodeByteArray(), bm has a height and width of -1, so it appears the operation is failing.
It's unclear to me whether Bitmap can handle JPEG data. I would think not, but I have seen some code examples that seem to indicate it can.
Does data need to be in bitmap format before decoding and scaling?
If so, how do I do this?
Thanks!
In your surfaceCreated, you can set the camera's picture size, as shown in the code below:
public void surfaceCreated(SurfaceHolder holder) {
    camera = Camera.open();
    try {
        camera.setPreviewDisplay(holder);
        Camera.Parameters p = camera.getParameters();
        p.set("jpeg-quality", 70);
        p.setPictureFormat(PixelFormat.JPEG);
        p.setPictureSize(640, 480);
        camera.setParameters(p);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
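For what it's worth, BitmapFactory.decodeByteArray does decode JPEG data directly, so no prior conversion is needed. If the full-size decode is failing (often for lack of memory), a sketch of a more memory-friendly decode-and-scale (the sample factor of 2 is an arbitrary example):
// First pass: read only the JPEG dimensions, without allocating pixels.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inJustDecodeBounds = true;
BitmapFactory.decodeByteArray(data, 0, data.length, opts);

// Second pass: decode at roughly half resolution to save memory,
// then scale to the exact target size.
opts.inJustDecodeBounds = false;
opts.inSampleSize = 2; // arbitrary example factor
Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length, opts);
if (bm != null) {
    Bitmap sbm = Bitmap.createScaledBitmap(bm, 640, 480, false);
}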