Get detected face bitmap - android

I'm experimenting with the following Google sample: https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker
The sample is using the Play Service new Face detection APIs, and draws a square on detected faces on the camera video stream.
I'm trying to figure out whether it is possible to save the frames that have detected faces in them. From following the code, the face detector's processor looks like a good place to perform the saving, but it only supplies the detection metadata and not the actual frame.
Your guidance will be appreciated.

You can get it in the following way:
Bitmap source = ((BitmapDrawable) yourImageView.getDrawable()).getBitmap();
// ... run face detection on `source` ...
// Face coordinates are floats, so cast them to int for Bitmap.createBitmap()
Bitmap faceBitmap = Bitmap.createBitmap(source,
        (int) face.getPosition().x,
        (int) face.getPosition().y,
        (int) face.getWidth(),
        (int) face.getHeight());
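Note that Bitmap.createBitmap() throws an IllegalArgumentException if the crop region extends past the source image, and the reported face box can do exactly that near the edges. A small defensive sketch (the clamping is my addition, not part of the original answer):

int x = Math.max((int) face.getPosition().x, 0);
int y = Math.max((int) face.getPosition().y, 0);
int w = Math.min((int) face.getWidth(), source.getWidth() - x);
int h = Math.min((int) face.getHeight(), source.getHeight() - y);
// Clamp the face box to the bitmap's bounds before cropping.
Bitmap faceBitmap = Bitmap.createBitmap(source, x, y, w, h);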

Yes, it is possible. I answered a question about getting frames from CameraSource here. The trickiest parts are accessing the CameraSource frames and converting the Frame datatype to a Bitmap. Once you have the frames as Bitmaps, you can pass them to your FaceGraphic class and save them in its draw() method, because draw() is called only when faces are detected.
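For the conversion step, here is a minimal sketch. It assumes the Frame carries NV21 data (which is what CameraSource produces) and routes the buffer through YuvImage and a JPEG round trip:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import com.google.android.gms.vision.Frame;
import java.io.ByteArrayOutputStream;

public static Bitmap frameToBitmap(Frame frame) {
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    // Assumption: for camera frames this buffer holds the full NV21 data.
    YuvImage yuv = new YuvImage(frame.getGrayscaleImageData().array(),
            ImageFormat.NV21, width, height, null);
    // Compress the YUV frame to JPEG, then decode it back into a Bitmap.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
    byte[] jpeg = out.toByteArray();
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}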

Related

Is it possible to grab one pixel from CameraPreview?

I followed the Android Studio tutorial to get the CameraPreview to work (Camera API Android Developer Guide). This works fine for me, and I can view the camera stream in my FrameLayout.
But I would like to get the RGB values of a specific pixel in the preview every time it changes. I did not find a method that gives me the preview image as a bitmap, and I was not able to understand the usage of the onPreviewFrame method:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {}
How can I get the RGB values of a CameraPreview pixel?
If you are using the Camera2 API, you can implement the ImageReader.OnImageAvailableListener interface in your application. You then override the onImageAvailable function, which gets an ImageReader as an argument, and access the image just captured with imageReader.acquireNextImage().
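A minimal sketch of that wiring, assuming you already have a configured Camera2 session and that width, height, and backgroundHandler come from your own setup code:

ImageReader reader = ImageReader.newInstance(width, height,
        ImageFormat.YUV_420_888, /* maxImages */ 2);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader imageReader) {
        Image image = imageReader.acquireNextImage();
        if (image == null) return;
        // image.getPlanes() exposes the Y, U and V planes of the frame.
        image.close(); // always release the buffer back to the reader
    }
}, backgroundHandler);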
With either API, you need to handle processing YUV data yourself, unfortunately.
Camera devices natively produce YUV data, not RGB, so the API doesn't spend extra resources to auto-convert the data. The main easy exception is piping data to the GPU, where the GPU driver auto-converts YUV to RGB for you within your pixel shader.
But if you're just in regular app code, you need to parse the data.
For the deprecated android.hardware.Camera API, the output is NV21 by default, and you can usually select YV12 as another option.
The wikipedia article on YUV is relatively helpful: https://en.wikipedia.org/wiki/YUV
But it does have the wrong conversion coefficients for YUV->RGB conversion; they should be:
R = Y + 1.402 (Cr - 128)
G = Y - 0.34414 (Cb - 128) - 0.71414 (Cr - 128)
B = Y + 1.772 (Cb - 128)
(Cb = U, Cr = V)
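Applied to the deprecated API's default NV21 preview format, a minimal sketch of reading one pixel's RGB value (the pixel coordinates and clamp helper are just for illustration):

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    int px = 100, py = 100; // the pixel you are interested in

    // NV21 layout: full-resolution Y plane, then interleaved V/U at half resolution.
    int y = data[py * size.width + px] & 0xFF;
    int uvIndex = size.width * size.height + (py / 2) * size.width + (px / 2) * 2;
    int v = data[uvIndex] & 0xFF;
    int u = data[uvIndex + 1] & 0xFF;

    // Apply the coefficients above, clamping each channel to [0, 255].
    int r = clamp((int) (y + 1.402f * (v - 128)));
    int g = clamp((int) (y - 0.34414f * (u - 128) - 0.71414f * (v - 128)));
    int b = clamp((int) (y + 1.772f * (u - 128)));
}

private static int clamp(int c) {
    return Math.max(0, Math.min(255, c));
}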
You can also take a look at this stackoverflow post:
Extract black and white image from android camera's NV21 format
which has code that looks to be correct for the conversion.

Take photo and record video of real-time face detection preview

I have used JavaCV (and OpenCV too) to implement a live face detection preview on Android. It works OK. Now I want to take a picture or record a video from the live preview with the face detection included (I mean when I take a picture, the picture will contain the person and a rectangle around his/her face). I have researched a lot but got no results. Can anyone help me, please?
What you're looking for is the imwrite() method.
Since your question isn't clear on the use-case, I'll give a generic algorithm, as shown:
imwrite writes a specified Mat object to a file. It accepts 2 arguments, a file name and a Mat object, for example: imwrite("output.jpg", img);
Here's the logic you can follow:
Receive an input frame (Mat input) from the video and run face detection on it using your existing method.
Draw a rectangle on an output image (Mat output).
Use imwrite as: imwrite("face.jpg", output);
In case you want to record all the frames with a face in them, replace "face.jpg" with a string variable that is updated with each loop iteration and run imwrite in a loop, as in the sketch after this list.
If you wish to record a video, have a look at the VideoWriter class.
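A rough sketch of that loop, assuming OpenCV 3.x Java bindings; detectFaces() is a hypothetical stand-in for your existing detection method:

import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

int savedCount = 0;

void processFrame(Mat input) {
    Mat output = input.clone();
    for (Rect face : detectFaces(input)) {      // detectFaces(): your detector
        Imgproc.rectangle(output, face.tl(), face.br(),
                new Scalar(0, 255, 0), 3);      // green box around each face
    }
    // A new file name per frame records every annotated frame.
    Imgcodecs.imwrite("/sdcard/face_" + (savedCount++) + ".jpg", output);
}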

Cannot understand the details of FaceDetector object while debugging using Vision API Android

I want to use Vision API in android to detect the face and the landmarks over the face.
I followed the Vision API sample:
https://github.com/googlesamples/android-vision/tree/master/visionSamples/photo-demo/
My issues are:
1) I cannot understand the details of this object while debugging:
FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setProminentFaceOnly(true)
        .build();
[image that shows the details of 'detector']
I cannot understand 'zzbbc', 'zzbbd', etc.
2)
Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Face> faces = detector.detect(frame);
Here the size of faces is returned as zero.
No exception is thrown, and I can see the image, but the rectangle and dots are not drawn.
Can anyone please help me out with this issue?
zzbbc, zzbbd, etc. are internal details of the implementation that aren't meant to be inspected. You don't need to know what these are to use the API.
In this case, no faces were detected. Note that the "prominentFaceOnly" setting means the detector only looks for a single large face (i.e., one filling greater than a third of the screen width). If the faces in your photo are smaller than that, they will not be detected.
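For example, to pick up smaller faces, build the detector without the prominent-face restriction:

FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setProminentFaceOnly(false) // detect small faces too
        .build();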

Editing android VideoView frames

Environment:
Nexus 7 Jelly Bean 4.1.2
Problem:
I'm trying to make a Motion Detection application that works with RTSP using VideoView.
I wish there were something like an onNewFrameListener:
videoView.onNewFrame(Frame frame)
I've tried to get access to the raw frames of an RTSP stream via VideoView but couldn't find any support for that in the Android SDK.
I found out that VideoView encapsulates Android's MediaPlayer class.
So I dived into the media_jni lib to try to find a way to access the raw frames, but couldn't find the byte buffer or whatever represents a frame.
Question:
Does anyone have an idea where or how I can find this buffer and get access to it?
Or any other idea for implementing motion detection over a VideoView?
Even if it means I need to recompile the AOSP.
You can extend VideoView and override its draw(Canvas canvas) method:
Set your own bitmap on the canvas received through draw().
Call super.draw(), which will get the frame drawn onto your bitmap.
Access the frame pixels from the bitmap.
class MotionDetectorVideoView extends VideoView {
    public Bitmap mFrameBitmap;
    ...
    @Override
    public void draw(Canvas canvas) {
        // Redirect drawing into your own member bitmap.
        canvas.setBitmap(mFrameBitmap);
        super.draw(canvas);
        // Do whatever you want with mFrameBitmap. It now contains the frame.
        ...
        // Allocate `buffer` big enough to hold the whole frame.
        mFrameBitmap.copyPixelsToBuffer(buffer);
        ...
    }
}
I don't know whether this will work. Avoid doing heavy calculation in draw(); hand the frame off to a worker thread instead.
In your case I would use the camera preview instead of the VideoView, since you are working with live motion rather than recorded videos. You can use a camera preview callback to catch every frame captured by your camera. The callback to implement is:
onPreviewFrame(byte[] data, Camera camera)
Called as preview frames are displayed.
I think this could be useful for you.
http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html
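A rough sketch of frame differencing through that callback, assuming an open Camera instance and the default NV21 preview format; the threshold is an arbitrary illustration:

camera.setPreviewCallback(new Camera.PreviewCallback() {
    private byte[] previous;

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        int lumaLength = size.width * size.height; // Y plane of the NV21 frame
        if (previous != null) {
            long diff = 0;
            for (int i = 0; i < lumaLength; i++) {
                diff += Math.abs((data[i] & 0xFF) - (previous[i] & 0xFF));
            }
            boolean motion = diff / lumaLength > 10; // arbitrary threshold
        }
        previous = data.clone();
    }
});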
Let me know if that is what you are searching for.
Good luck.

Grabbing consecutive frames in android using opencv

I am trying to grab consecutive frames from Android using the OpenCV VideoCapture class. Actually I want to implement optical flow on Android, for which I need 2 frames. I implemented optical flow in C first, where I grabbed the frames using cvQueryFrame, and everything worked fine. But in Android, when I call
if (capture.grab()) {
    if (capture.retrieve(mRgba))
        Log.i(TAG, "first frame retrieved");
}
if (capture.grab()) {
    if (capture.retrieve(mRgba2))
        Log.i(TAG, "2nd frame retrieved");
}
and then subtract the matrices using Imgproc.subtract(mRgba, mRgba2, output) and display the output, it gives me a black image, indicating that mRgba and mRgba2 are frames with the same data. Can anyone help me grab two different frames? According to the OpenCV documentation, mRgba and mRgba2 should be different.
This question is an exact duplicate of
read successive frames OpenCV using cvQueryframe
You have to copy the image to another memory block, because the capture always returns the same pointer.
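A minimal sketch of that fix, assuming OpenCV's Java bindings and the variables from the question; copyTo() gives each frame its own pixel buffer:

Mat first = new Mat();
Mat second = new Mat();
Mat output = new Mat();

if (capture.grab() && capture.retrieve(mRgba)) {
    mRgba.copyTo(first);   // deep copy; no longer shares the capture's buffer
}
if (capture.grab() && capture.retrieve(mRgba2)) {
    mRgba2.copyTo(second);
}
Core.subtract(first, second, output); // now a real frame-to-frame difference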
