I am creating an Android app that applies image-processing techniques to camera frames, and it needs to be fast. This is pseudo-code of how the entire system works:
1. loop while not finished
1.1 get image frame
1.2 process image for object detection
2. end loop
I actually have questions on the basics of the Camera class:
Is grabbing the camera image with a preview faster than with no preview at all? By a preview I mean using a SurfaceView to display the camera image.
Can the image data array be obtained, say from the takePicture() method, without any preview?
My real question is: what is the best way to obtain the image data (say, a byte[] array) quickly and repeatedly, grabbing the next frame once the previous one has been processed (as in the loop above)?
I planned to use the takePicture() method to get the image data, but I would like your opinion on whether this is the only way or whether there are better ones.
You can set up a SurfaceView as the Camera's preview display and get the data of every preview frame using a PreviewCallback. This is better than using takePicture() if you don't need the high resolution that takePicture() captures. In other words: if you want to capture images of lower quality at a faster rate, use PreviewCallback; if you want to capture images of higher quality at a very slow rate, use takePicture().
As for your other question, I don't think you can take pictures without using a preview display, but I could be wrong.
class MainActivity extends Activity implements Camera.PreviewCallback, SurfaceHolder.Callback {
    private Camera camera;
    ...
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();
        try {
            camera.setPreviewDisplay(holder);  // attach the SurfaceView's holder
        } catch (IOException e) { /* handle */ }
        camera.setPreviewCallback(this);       // deliver every preview frame to onPreviewFrame()
        camera.startPreview();                 // no frames arrive until preview is started
        ...
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // image data contained in data... do as you wish
    }
}
Related
I am using the Camera2 API to create a camera component that can scan barcodes and also take pictures during scanning. It mostly works, but the preview flickers: stale frames, and sometimes green frames, interrupt the real-time preview.
My code is based on Google's Camera2Basic. I'm just adding one more ImageReader and its surface as an additional output and target for the CaptureRequest.Builder. One of the readers uses JPEG and the other YUV. The flickering disappears when I remove the JPEG reader's surface from the outputs (i.e., don't pass it into createCaptureSession).
There's quite a lot of code, so I created a gist: click. I tried to strip out everything irrelevant.
Is the device you're testing on a LEGACY-level device?
If so, any capture targeting a JPEG output may be much slower, since it can run a precapture sequence, and it may briefly pause the preview as well.
But it should not cause green frames, unless there's a device-level bug.
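For reference, the hardware level can be read from the camera's characteristics. A minimal sketch, assuming cameraId is the id you are opening:
CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
try {
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
    Integer level = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
    if (level != null && level == CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
        // LEGACY device: JPEG captures may slow down or briefly pause the preview
    }
} catch (CameraAccessException e) {
    e.printStackTrace();
}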
If anyone else ever struggles with this: there is a table in the docs showing that when three targets are specified, the YUV ImageReader can only use sizes up to the preview size (at most 1920x1080). Reducing its size accordingly helped!
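A sketch of picking such a size, assuming chars holds the CameraCharacteristics of the opened camera:
StreamConfigurationMap map = chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size yuvSize = null;
// Choose the largest YUV_420_888 output size that still fits within 1920x1080
for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    if (s.getWidth() <= 1920 && s.getHeight() <= 1080
            && (yuvSize == null
                || (long) s.getWidth() * s.getHeight() > (long) yuvSize.getWidth() * yuvSize.getHeight())) {
        yuvSize = s;
    }
}
ImageReader yuvReader = ImageReader.newInstance(
        yuvSize.getWidth(), yuvSize.getHeight(), ImageFormat.YUV_420_888, 2);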
Yes, you can. Assuming you configure your preview to feed the ImageReader with YUV frames (you could also request JPEG there; check which suits you), like so:
mImageReaderPreview = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.YUV_420_888, 1);
you can then process those frames inside your OnImageAvailableListener:
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireNextImage();
    if (image == null) {
        return;
    }
    try {
        // Do some custom processing like YUV to RGB conversion, cropping, etc.
        mFrameProcessor.setNextFrame(image);
    } catch (IllegalStateException e) {
        Log.e("TAG", e.getMessage());
    } finally {
        image.close();  // always release the Image back to the reader
    }
}
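The reader's surface also has to be wired into the session and the repeating request. A rough sketch in the style of Camera2Basic; mCameraDevice, previewSurface, mSessionStateCallback, and mBackgroundHandler are assumed to already exist in your code:
CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(previewSurface);                    // the on-screen preview surface
builder.addTarget(mImageReaderPreview.getSurface());  // the YUV reader receives the same frames
mCameraDevice.createCaptureSession(
        Arrays.asList(previewSurface, mImageReaderPreview.getSurface()),
        mSessionStateCallback, mBackgroundHandler);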
With the old Camera API, I used the code below:
mCameraInstance.camera.addCallbackBuffer(imageBuffer);
mCameraInstance.camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        processData(data);
        if (mCameraInstance != null) {
            mCameraInstance.camera.addCallbackBuffer(imageBuffer);
        }
    }
});
Using the code above I can get the byte data of every frame. Is there any way to achieve the same functionality with the Android camera2 API?
The simplest way would probably be to use an ImageReader hooked to the camera device. Attach an ImageReader.OnImageAvailableListener and get new images as they arrive. You can get the Plane[] image data and process it according to the format.
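A minimal sketch of that approach; processData() stands in for your existing processing method, mBackgroundHandler is an assumed background-thread handler, and width/height should come from the supported YUV sizes (the reader's surface must also be added as a target of your repeating request):
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        try {
            // Plane 0 of YUV_420_888 is the luminance (Y) data
            ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
            byte[] data = new byte[yBuffer.remaining()];
            yBuffer.get(data);
            processData(data);  // note: luminance only, not a full NV21 frame
        } finally {
            image.close();  // always close, or the reader will stall
        }
    }
}, mBackgroundHandler);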
Question
I want to do something similar to what's done on the Camera2Basic sample, that is:
Previewing images from the camera using a TextureView
Processing images from the camera using a ImageReader
With a few differences regarding item 2:
I'm only interested in the gray channel (brightness) of the images to be processed. Their dimensions should be around 1000 x 1000 pixels (not the highest resolution available).
When an image to be processed is available, a generic process(Image) method will be called instead of saving images to disk. What this method does is out of scope for this question, but it takes around 50 ms to return.
The image data should be processed continuously (around 10 FPS, though speed is not critical) rather than only occasionally.
How can I accomplish this using the Camera2 API?
Observations
I've changed the way I create the ImageReader instance, selecting smaller dimensions and a different format (YUV_420_888 instead of JPEG). The Y plane will be accessed in order to get the brightness data, as sketched below. Is there a more efficient format, given that I'm simply ignoring the U and V planes?
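A sketch of copying out the Y plane; the row stride may be larger than the image width, so rows are copied one at a time (in YUV_420_888 the Y plane's pixel stride is guaranteed to be 1):
Image.Plane yPlane = image.getPlanes()[0];
ByteBuffer buffer = yPlane.getBuffer();
int rowStride = yPlane.getRowStride();
int width = image.getWidth();
int height = image.getHeight();
byte[] luma = new byte[width * height];
for (int row = 0; row < height; row++) {
    buffer.position(row * rowStride);       // skip any padding at the end of each row
    buffer.get(luma, row * width, width);
}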
Both TextureView and ImageReader surfaces should be filled periodically, but at different rates. Since there can be only one repeating CameraRequest on a CameraCaptureSession (which can be set by calling setRepeatingRequest()), am I supposed to manually call capture() periodically (e.g. call setRepeatingRequest() with the preview request and call capture() periodically with the process request)?
Can the performance be improved by sending reprocessed requests to obtain the images to be processed from the preview images? If so, how can I do it?
I don't know how to help you with the gray channel specifically; I suggest you study the planes of the YUV image format and extract it from there.
Also check all the values you can set in the CaptureRequest.Builder; maybe you can achieve your objective using SENSOR_TEST_PATTERN_MODE, COLOR_CORRECTION_MODE, or BLACK_LEVEL_LOCK. You can find all the info in the Android documentation.
As for processing just one of every 10 frames, simply discard the others in your process() method with something like:
if (result.getFrameNumber() % 10 != 0) return;
Finally, remember to close every image you receive in your ImageReader's OnImageAvailableListener, to avoid memory leaks and improve performance :P
@Override
public void onImageAvailable(ImageReader imageReader) {
    Image image = null;
    try {
        image = imageReader.acquireNextImage();
        // Do whatever you want with your Image
    } catch (IllegalStateException iae) {
        // acquireNextImage() throws if too many images are still open
    } finally {
        if (image != null) {
            image.close();
        }
    }
}
Hope that helps; let me know if I can help you with anything else!
Environment:
Nexus 7 Jelly Bean 4.1.2
Problem:
I'm trying to make a Motion Detection application that works with RTSP using VideoView.
I wish that there was something like an onNewFrameListener
videoView.onNewFrame(Frame frame)
I've tried to get access to the raw frames of an RTSP stream via VideoView but couldn't find any support for that in the Android SDK.
I found out that VideoView encapsulates Android's MediaPlayer class.
So I dived into the media_jni library to try to find a way to access the raw frames, but I couldn't find the byte buffer, or whatever it is that represents a frame.
Question:
Does anyone have an idea where this buffer is or how I can get access to it?
Or any other idea for implementing motion detection over a VideoView?
Even if it means I need to recompile AOSP.
You can extend VideoView and override its draw(Canvas canvas) method:
1. Set your own bitmap on the canvas received through draw().
2. Call super.draw(), which will render the frame into your bitmap.
3. Access the frame pixels from the bitmap.
class MotionDetectorVideoView extends VideoView {
    public Bitmap mFrameBitmap;
    ...
    @Override
    public void draw(Canvas canvas) {
        // Set your own member bitmap on the canvas...
        canvas.setBitmap(mFrameBitmap);
        super.draw(canvas);
        // Do whatever you want with mFrameBitmap. It now contains the frame.
        ...
        // Allocate `buffer` big enough to hold the whole frame.
        mFrameBitmap.copyPixelsToBuffer(buffer);
        ...
    }
}
I don't know whether this will work. Avoid doing heavy calculation in draw(); hand the frame off to a worker thread instead.
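For example, a rough sketch of the hand-off, where process() is a hypothetical method of yours:
// Copy the frame so draw() can return immediately, then process it off the UI thread
final Bitmap frameCopy = mFrameBitmap.copy(Bitmap.Config.ARGB_8888, false);
new Thread(new Runnable() {
    @Override
    public void run() {
        process(frameCopy);  // hypothetical motion-detection routine
    }
}).start();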
In your case I would use the camera preview instead of the VideoView, if you are working with live motion rather than recorded video. You can use a camera preview callback to catch every frame captured by your camera. The callback interface declares:
onPreviewFrame(byte[] data, Camera camera)
Called as preview frames are displayed.
I think this could be useful for you.
http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html
Let me know if that is what you are looking for.
Good luck.
I'm building an Android application that takes frames from the camera, processes them, and then displays them on a SurfaceView, as well as drawing on the SurfaceView via a Canvas, drawBitmap(), and so on.
Just to check: are SurfaceView, Bitmap, and Canvas the best way to do this? I'm after speed.
Assuming the answer to the above is yes, the question is: where should I place the following call?
camera_object.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ...
    }
});
Should I place it in onCreate(), or in surfaceCreated() or surfaceChanged()?
I declared my MainActivity class as follows:
public class MainActivity extends Activity implements SurfaceHolder.Callback, Camera.PreviewCallback
{
and in that class Eclipse forces me to create an override method for onPreviewFrame in the MainActivity class, as follows:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
}
but it never gets called. Should I try to use this method? Is it better to use it, or is it just an Eclipse thing?
Please advise
Are you calling setPreviewDisplay(), startPreview() and setPreviewCallback(this) from the app? Without those you will not get any calls to onPreviewFrame(). In fact, if you are using a SurfaceView, the preview buffers delivered to the callback are copies of the buffers being displayed on the screen, so if you wanted to display the processed buffers you would have to draw them into another view yourself, which would be inefficient. I would suggest you use a SurfaceTexture instead and use the onFrameAvailable callback to get the frames, then draw and display them manually. An example of this can be found in the PanoramaActivity code of the default Android Camera app.
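For the plain SurfaceView route, the usual wiring looks roughly like this (a sketch using the names from your question; error handling mostly omitted):
@Override
public void surfaceCreated(SurfaceHolder holder) {
    camera_object = Camera.open();
    try {
        camera_object.setPreviewDisplay(holder);  // must happen before startPreview()
    } catch (IOException e) {
        e.printStackTrace();
    }
    camera_object.setPreviewCallback(this);       // your onPreviewFrame() will now be called
    camera_object.startPreview();                 // frames start flowing only after this
}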
Without camera_object.setPreviewDisplay(surface_holder); you cannot receive camera callbacks. Also don't forget:
surface_view.setVisibility(View.VISIBLE);
surface_holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
You can hide the camera preview under another view; on 3.0 and higher you can even push the surface off the screen (display it below the bottom of the screen). I am not sure whether the latter trick works on 2.3.6.