How to process individual camera frames while using Android mobile vision library - android

I am trying to make a camera app that detects faces using the Google Mobile Vision API with a custom camera instance, NOT the "CameraSource" class from the Google API, because I am also processing the frames to detect colors, and CameraSource does not give me access to the camera frames.
After searching for this issue, the only results I've found are about using Mobile Vision with its CameraSource, not with any custom Camera1 API.
I've tried to override the frame processing and then run the detection on the resulting images, like this:
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Log.d("onPreviewFrame", "" + data.length);
        Camera.Parameters parameters = camera.getParameters();
        int width = parameters.getPreviewSize().width;
        int height = parameters.getPreviewSize().height;
        ByteArrayOutputStream outstr = new ByteArrayOutputStream();
        Rect rect = new Rect(0, 0, width, height);
        YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
        yuvimage.compressToJpeg(rect, 20, outstr);
        Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
        detector = new FaceDetector.Builder(getApplicationContext())
                .setTrackingEnabled(true)
                .setClassificationType(FaceDetector.ALL_LANDMARKS)
                .setMode(FaceDetector.FAST_MODE)
                .build();
        detector.setProcessor(
                new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                        .build());
        if (detector.isOperational()) {
            frame = new Frame.Builder().setBitmap(bmp).build();
            mFaces = detector.detect(frame);
            // detector.release();
        }
    }
});
So is there any way I can hook Mobile Vision up to my own camera instance, so that I can process the frames myself and still use it to detect faces?
You can see what I've done so far here:
https://github.com/etman55/FaceDetectionSampleApp
**NEW UPDATE**
After finding an open-source version of the CameraSource class I solved most of my problems, but now, when trying to detect faces, the detector receives the frames correctly yet can't detect anything. You can see my last commit in the GitHub repo.

I can provide you with some very useful tips.
Building a new FaceDetector for each frame the camera provides is a very bad idea, and also unnecessary: you only have to create it once, outside the camera frame callback.
It is also not necessary to take the YUV_420_SP (NV21) frame, wrap it in a YuvImage, convert that to a Bitmap, and then create a Frame.Builder() from the Bitmap. If you take a look at the Frame.Builder documentation you can see that it accepts NV21 directly from the camera preview.
Like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    detector.detect(new Frame.Builder()
            .setImageData(ByteBuffer.wrap(data), previewW, previewH, ImageFormat.NV21)
            .build());
}
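Putting those two tips together, a minimal sketch might look like the following. The previewW/previewH fields, the setUpDetector() helper and the ROTATION_90 value are illustrative assumptions (they must match your real preview size and camera orientation), not part of the original code:
import android.content.Context;
import android.graphics.ImageFormat;
import android.hardware.Camera;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import java.nio.ByteBuffer;

// Hedged sketch: the detector is built once (e.g. in onCreate) and reused for every frame.
FaceDetector detector;   // field, created once

void setUpDetector(Context context) {
    detector = new FaceDetector.Builder(context)
            .setTrackingEnabled(true)
            .setMode(FaceDetector.FAST_MODE)
            .build();
}

Camera.PreviewCallback previewCallback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (!detector.isOperational()) return;   // native libraries may still be downloading
        Frame frame = new Frame.Builder()
                .setImageData(ByteBuffer.wrap(data), previewW, previewH, ImageFormat.NV21)
                .setRotation(Frame.ROTATION_90)  // assumption: portrait activity, back camera
                .build();
        SparseArray<Face> faces = detector.detect(frame);
        // ... hand the faces to your tracker / overlay ...
    }
};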

And the Kotlin version:
import com.google.android.gms.vision.Frame as GoogleVisionFrame
import io.fotoapparat.preview.Frame as FotoapparatFrame

fun recogniseFrame(frame: FotoapparatFrame) = detector.detect(buildDetectorFrame(frame))
    .asSequence()
    .firstOrNull { it.displayValue.isNotEmpty() }
    ?.displayValue

private fun buildDetectorFrame(frame: FotoapparatFrame) =
    GoogleVisionFrame.Builder()
        .setRotation(frame.rotation.toGoogleVisionRotation())
        .setImageData(
            ByteBuffer.wrap(frame.image),
            frame.size.width,
            frame.size.height,
            ImageFormat.NV21
        )
        .build()

Related

Android Camera2 API Showing Processed Preview Image

The new Camera2 API is very different from the old one. The part of the pipeline where the manipulated camera frames are shown to the user confuses me. I know there is a very good explanation of camera preview image data processing with Android L and the Camera2 API, but showing the frames is still not clear. My question is: what is the way to show frames on screen that come from the ImageReader's callback function, after some processing, while preserving efficiency and speed in the Camera2 API pipeline?
Example flow:
camera.addTarget(imageReader.getSurface()) -> do some processing in the ImageReader's callback -> (show that processed image on screen?)
Workaround idea: send bitmaps to an ImageView every time a new frame is processed.
Edit after clarification of the question; original answer at bottom
It depends on where you're doing your processing.
If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
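As a rough illustration of that wiring, here is a minimal sketch (the RGBA element type, sizes and variable names are assumptions for illustration; error handling omitted):
import android.content.Context;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.Type;
import android.view.Surface;

// Hedged sketch: build an output Allocation backed by a display Surface. A processing
// script writes into it, and ioSend() pushes each finished buffer to the screen.
public final class DisplayAllocationFactory {
    public static Allocation create(Context context, int width, int height, Surface viewSurface) {
        RenderScript rs = RenderScript.create(context);
        Type outType = new Type.Builder(rs, Element.RGBA_8888(rs))
                .setX(width)
                .setY(height)
                .create();
        Allocation out = Allocation.createTyped(rs, outType,
                Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
        out.setSurface(viewSurface); // Surface obtained from your SurfaceView/TextureView
        return out;
    }
}

// Usage (after your script has written its output into the allocation):
//   displayAllocation.ioSend();   // latest buffer is sent to the connected Surface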
If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, with the Surface as the native_window argument. Then you can render your final output to that EGLSurface and when you call eglSwapBuffers, the buffer will be sent to the screen.
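For reference, a hedged sketch of that EGL hookup (configuration is minimal and error checking is omitted; the Surface argument is assumed to come from your SurfaceView/TextureView):
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;

// Hedged sketch: bind an Android Surface to an EGLSurface so rendered output reaches the
// screen when eglSwapBuffers() is called.
public final class EglWindow {
    public static EGLSurface bind(Surface viewSurface) {
        EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        int[] configAttribs = {
                EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

        int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        EGLContext context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

        // The view's Surface is passed as the native_window argument.
        EGLSurface eglSurface = EGL14.eglCreateWindowSurface(display, configs[0], viewSurface,
                new int[]{ EGL14.EGL_NONE }, 0);
        EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);
        return eglSurface;
        // After rendering the processed frame: EGL14.eglSwapBuffers(display, eglSurface);
    }
}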
If you're doing native processing, you can use the NDK ANativeWindow methods to write to a Surface you pass from Java and convert to an ANativeWindow.
If you're doing Java-level processing, that's really slow and you probably don't want to. But you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
Or as you say, draw to an ImageView every frame, but that'll be slow.
Original answer:
If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as an int[] of RGB values to create a Bitmap from; unfortunately there's not yet a convenient API for this (a sketch of one such conversion follows below).
If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing or just save a DNG.
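For the YUV_420_888 case above, one hedged sketch of such a conversion repacks the planes into an NV21 byte[], so the existing YuvImage/compressToJpeg/BitmapFactory path can be reused. It assumes both chroma planes share the same row and pixel stride, which is typical but not guaranteed:
import android.media.Image;
import java.nio.ByteBuffer;

// Hedged sketch: repack a YUV_420_888 Image into an NV21 byte[].
public static byte[] yuv420888ToNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    // Y plane: copy row by row, because rowStride can be wider than the image width.
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuf = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    int pos = 0;
    for (int row = 0; row < height; row++) {
        yBuf.position(row * yRowStride);
        yBuf.get(nv21, pos, width);
        pos += width;
    }

    // Chroma planes: NV21 expects interleaved V then U at quarter resolution.
    ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
    int uvRowStride = image.getPlanes()[1].getRowStride();
    int uvPixelStride = image.getPlanes()[1].getPixelStride();
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int uvIndex = row * uvRowStride + col * uvPixelStride;
            nv21[pos++] = vBuf.get(uvIndex);
            nv21[pos++] = uBuf.get(uvIndex);
        }
    }
    return nv21;
}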
I had the same need, and wanted a quick and dirty manipulation for a demo. I was not worried about efficient processing for a final product. This was easily achieved using the following java solution.
My original code connecting the camera2 preview to a TextureView was commented out and replaced with a surface obtained from an ImageReader:
// Get the surface of the TextureView on the layout
//SurfaceTexture texture = mTextureView.getSurfaceTexture();
//if (null == texture) {
// return;
//}
//texture.setDefaultBufferSize(mPreviewWidth, mPreviewHeight);
//Surface surface = new Surface(texture);
// Capture the preview to the memory reader instead of a UI element
mPreviewReader = ImageReader.newInstance(mPreviewWidth, mPreviewHeight, ImageFormat.JPEG, 1);
Surface surface = mPreviewReader.getSurface();
// This part stays the same regardless of where we render
mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mCaptureRequestBuilder.addTarget(surface);
mCameraDevice.createCaptureSession(...
Then I registered a listener for the image:
mPreviewReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image != null) {
            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer buffer = plane.getBuffer();
            byte[] bytes = new byte[buffer.capacity()];
            buffer.get(bytes);
            Bitmap preview = BitmapFactory.decodeByteArray(bytes, 0, buffer.capacity());
            image.close();
            if (preview != null) {
                // This gets the canvas for the same mTextureView we would have connected to the
                // Camera2 preview directly above.
                Canvas canvas = mTextureView.lockCanvas();
                if (canvas != null) {
                    float[] colorTransform = {
                            0, 0, 0, 0, 0,
                            .35f, .45f, .25f, 0, 0,
                            0, 0, 0, 0, 0,
                            0, 0, 0, 1, 0};
                    ColorMatrix colorMatrix = new ColorMatrix();
                    colorMatrix.set(colorTransform); // Apply the monochrome green
                    ColorMatrixColorFilter colorFilter = new ColorMatrixColorFilter(colorMatrix);
                    Paint paint = new Paint();
                    paint.setColorFilter(colorFilter);
                    canvas.drawBitmap(preview, 0, 0, paint);
                    mTextureView.unlockCanvasAndPost(canvas);
                }
            }
        }
    }
}, mBackgroundPreviewHandler);

Null bitmap in Android Surfaceview

I'm attempting to get a bitmap from a camera preview in Android, then examine the bitmap and draw something to the screen based on what the camera is seeing. This all has to be done live due to the nature of the project I'm working on.
At the moment I'm using a surfaceview to display the live preview and I'm getting the bitmap using the following code I found on a separate question on here.
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    snipeCamera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int width = parameters.getPreviewSize().width;
            int height = parameters.getPreviewSize().height;
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            Rect rect = new Rect(0, 0, width, height);
            YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
            yuvimage.compressToJpeg(rect, 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
            bit = bmp.copy(Bitmap.Config.ARGB_8888, true);
        }
    });
}
bit is declared as:
public static Bitmap bit;
Whenever I try to access this bitmap anywhere else I get a NullPointerException. I'm guessing it has something to do with the fact that it's being set inside setPreviewCallback, but I don't know enough about Android to fix it. Is there something I can do to get access to this bitmap? Or is there another way I can work with a live bitmap of what the camera is seeing?
Is there something I can do to get access to this bitmap?
You already have access to your Bitmap. You are getting it from decodeByteArray().
(note that I am assuming that the code leading up to and including decodeByteArray() is correct — there could be additional problems lurking in there)
You just need to consume the Bitmap in your onPreviewFrame() method. If your work to do is quick (sub-millisecond), probably just do that work right there. If your work to do is not-so-quick, you'll now need to work out background threading plans, arranging to update your UI with the results of that work on the main application thread, and related issues.
If, to consume the Bitmap, you need access to other objects in your camera UI, just make sure that the class implementing surfaceChanged() has references to those other objects. Your onPreviewFrame() method is in an anonymous inner class inside that class, and onPreviewFrame() has access to everything that the enclosing class has.
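For the not-so-quick case, one hedged sketch of such a background-threading arrangement reuses snipeCamera from the question; processFrame() and showResult() are placeholder names for your own work, not part of the question:
import android.graphics.Bitmap;
import android.hardware.Camera;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;

// Hedged sketch: heavy work on a background HandlerThread, results posted to the main thread.
HandlerThread workerThread = new HandlerThread("frame-worker");
workerThread.start();
final Handler workerHandler = new Handler(workerThread.getLooper());
final Handler mainHandler = new Handler(Looper.getMainLooper());

snipeCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(final byte[] data, Camera camera) {
        workerHandler.post(new Runnable() {
            @Override
            public void run() {
                final Bitmap result = processFrame(data);   // slow work off the UI thread
                mainHandler.post(new Runnable() {
                    @Override
                    public void run() {
                        showResult(result);                  // touch views only on the main thread
                    }
                });
            }
        });
    }
});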

How to retrieve a new picture taken by the camera as OpenCV Mat on Android?

I am trying to take a picture with an Android device. The picture must be converted to a Mat, which is the input for a computation whose results I would like to provide through an API.
In which format does Android provide the byte[] data in its callback, and how do I convert it to an OpenCV Mat in the BGR color format?
The first problem, "How to take the picture without a SurfaceView", is solved: I used a SurfaceTexture, which does not have to be visible.
mCamera = Camera.open();
mCamera.setPreviewTexture(new SurfaceTexture(10));
So I was able to start the preview and take a picture. But in which format is the byte[] data and how to convert it to an OpenCV BGR Mat?
mCamera.startPreview();
mCamera.takePicture(null, null, null, new PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        Log.e(MainActivity.APP_ID, "picture-taken");
        android.hardware.Camera.Size pictureSize = camera.getParameters().getPictureSize();
        Mat mat = new Mat(new Size(pictureSize.width, pictureSize.height), CvType.CV_8U);
        mat.put(0, 0, data);
        mat.reshape(0, pictureSize.height);
        // Imgproc.cvtColor(mat, mat, Imgproc.COLOR_YUV420sp2RGBA);
        ......
As tokan pointed out in a comment on the question, this solution works great:
android.hardware.Camera.Size pictureSize = camera.getParameters().getPictureSize();
Mat mat = new Mat(new Size(pictureSize.width, pictureSize.height), CvType.CV_8U);
mat.put(0, 0, data);
Mat img = Imgcodecs.imdecode(mat, Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
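Note that this works because the data delivered to the JPEG callback of takePicture() is already JPEG-encoded, so imdecode() can parse it. If you instead grab bytes from onPreviewFrame() (NV21 by default), a hedged sketch of the direct conversion, reusing data and camera from that callback, would be:
// Hedged sketch: convert an NV21 preview frame straight to a BGR Mat with OpenCV.
Camera.Size previewSize = camera.getParameters().getPreviewSize();
Mat yuv = new Mat(previewSize.height + previewSize.height / 2, previewSize.width, CvType.CV_8UC1);
yuv.put(0, 0, data);
Mat bgr = new Mat();
Imgproc.cvtColor(yuv, bgr, Imgproc.COLOR_YUV2BGR_NV21);  // BGR Mat ready for further processing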

How to get higher camera preview fps like snapchat?

On Android,
Anyone have any idea what trick snapchat pulls to get such high fps on their camera preview? I have tried various methods:
using a textureview instead of surface view
forcing hardware acceleration
using lower resolutions
using different preview formats (YV12 , NV21 drops frames)
changing focusing mode
None of these have gotten me anywhere near the constant 30fps (or maybe even higher) that Snapchat seems to get. I can just about reach the same fps as the stock Google camera app, but that isn't great, and mine displays at a much lower resolution.
EDIT:
The method used is the same as that used by the official android video recording app. The preview there is of the same image quality and is locked to 30fps.
Try this, it works:
public void takeSnapPhoto() {
    camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int format = parameters.getPreviewFormat();
            // YUV formats require more conversion
            if (format == ImageFormat.NV21 || format == ImageFormat.YUY2 || format == ImageFormat.NV16) {
                int w = parameters.getPreviewSize().width;
                int h = parameters.getPreviewSize().height;
                // Get the YUV image
                YuvImage yuv_image = new YuvImage(data, format, w, h, null);
                // Convert YUV to JPEG
                Rect rect = new Rect(0, 0, w, h);
                ByteArrayOutputStream output_stream = new ByteArrayOutputStream();
                yuv_image.compressToJpeg(rect, 100, output_stream);
                byte[] byt = output_stream.toByteArray();
                FileOutputStream outStream = null;
                try {
                    // Write to SD card
                    File file = createFileInSDCard(FOLDER_PATH, "Image_" + System.currentTimeMillis() + ".jpg");
                    //Uri uriSavedImage = Uri.fromFile(file);
                    outStream = new FileOutputStream(file);
                    outStream.write(byt);
                    outStream.close();
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                }
            }
        }
    });
}
I believe they used the Android NDK. You can find more information on the Android developer site.
Using pure C/C++ is faster than Java code for performance-critical tasks such as image and video processing.
You can also try to improve the performance by compiling the application with another compiler, like the Intel compiler.

Capturing camera frame in android after face detection

I am working with face detection in Android and I want to achieve the following:
1. Use face detection listener in Android for detecting faces on camera frame.
2. If a face is detected on the camera frame, then extract the face and save it to external storage.
After surfing through existing questions, I have found that there is no direct way to convert a detected face to a bitmap and store it on disk. So now I want to capture and save the entire camera frame in which the face has been detected, and I have not been able to do so.
The current code structure is as follows:
FaceDetectionListener faceDetectionListener = new FaceDetectionListener() {
    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length == 0) {
        } else {
            displayMessage("Face detected!");
            // CODE TO SAVE CURRENT FRAME AS IMAGE
            finish();
        }
    }
};
I tried to achieve this by calling takePicture in the above method but I was unable to save the frame using that approach. Kindly suggest a way in which I can save the camera frame.
I could not figure out a direct way to save the camera frame from within FaceDetectionListener. Therefore, for my application, I changed the way I was handling the camera preview data. I used the PreviewCallback interface of the Camera class and implemented the logic in the onPreviewFrame method of that interface. The outline of the implementation is as follows:
class SaveFaceFrames extends Activity implements Camera.PreviewCallback, Camera.FaceDetectionListener {

    boolean lock = false;

    public void onPreviewFrame(byte[] data, Camera camera) {
        ...
        if (lock) {
            Camera.Parameters parameters = camera.getParameters();
            Camera.Size size = parameters.getPreviewSize();
            YuvImage image = new YuvImage(data, parameters.getPreviewFormat(), size.width, size.height, null);
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            image.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
            lock = false;
        }
    }

    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        ...
        if (!lock) {
            if (faces.length != 0) lock = true;
        }
    }
}
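Since the question also asks about saving the frame to storage, here is a minimal sketch of what could follow the decodeByteArray() call above; the directory and file name are assumptions for illustration:
// Hedged sketch: persist the decoded frame (bmp) as a JPEG in app-specific external storage.
File outFile = new File(getExternalFilesDir(Environment.DIRECTORY_PICTURES),
        "face_frame_" + System.currentTimeMillis() + ".jpg");
FileOutputStream fos = null;
try {
    fos = new FileOutputStream(outFile);
    bmp.compress(Bitmap.CompressFormat.JPEG, 100, fos);  // write the frame that contained the face
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (fos != null) {
        try { fos.close(); } catch (IOException ignored) {}
    }
}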
This is not an ideal solution, but it worked in my case. There are third party libraries which can be used in these scenarios. One library which I have used and works very well is Qualcomm's Snapdragon SDK. I hope someone finds this useful.
