Improve the quality of Android camera frames with OpenCV - Android

I'm using OpenCV for Android.
I have an app that can recognise a logo in the camera frames. The recognition is not as good as I expected, and I think it's because the quality of my frames is poor.
My activity implements CvCameraViewListener2 and shows a preview of the frames:
<org.opencv.android.NativeCameraView
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:id="@+id/tutorial1_activity_native_surface_view"
    opencv:show_fps="true"
    opencv:camera_id="any" />
I capture frames with this method:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) { // called for every frame
    compteurFPS++;
    if (compteurFPS % 5 == 0) { // every 5th frame
        frame = inputFrame.gray();
        compareImage();
    }
    return inputFrame.rgba(); // display the frame
}
The thing is, my frames are of poor quality, so the recognition works but not as well as I want.
For the recognition I use the ORB method to compare two Mat objects, but I don't think my issue is there.
Can someone tell me how to improve my frames?
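For what it's worth, one thing that is sometimes worth trying is preprocessing the grayscale frame before matching. The sketch below is only illustrative (it reuses the question's compteurFPS, frame and compareImage() names) and assumes that boosting contrast and suppressing noise will help ORB find more stable keypoints:
// Needs: import org.opencv.imgproc.Imgproc; import org.opencv.core.Size;
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    compteurFPS++;
    if (compteurFPS % 5 == 0) {
        Mat gray = inputFrame.gray();
        Imgproc.equalizeHist(gray, gray);                    // spread out the intensity histogram
        Imgproc.GaussianBlur(gray, gray, new Size(3, 3), 0); // light blur to suppress sensor noise
        frame = gray;
        compareImage();                                      // same ORB comparison as before
    }
    return inputFrame.rgba();
}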

Related

Capture video frames with OpenCV in Android

I am trying to process Android video in real time with opencv-android. So far I am able to access the video with OpenCV and display it on an org.opencv.android.JavaCameraView (I referred to this link). I haven't been able to access the camera's video feed frame by frame. I need to get each and every frame in order to apply some OpenCV algorithms to them for object tracking. Please suggest a method to access and process the frames with OpenCV. (Redirect this if it's already been asked.)
Here is how to do it:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Do your processing here, for example:
    Mat image = inputFrame.rgba();
    Mat ret_mat = new Mat();
    Core.add(image, new Scalar(40, 40, 40, 0), ret_mat); // change the brightness of the video frame
    return ret_mat;
}
OpenCV functions operate on Mat objects, which represent the matrix of pixels of your image/frame. The code above makes each frame brighter by 40 units.
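As a further illustration (a sketch only, not part of the original answer), any other OpenCV routine can be plugged into the same spot; here each frame is converted to grayscale and run through a Canny edge detector, with purely illustrative threshold values:
// Needs: import org.opencv.imgproc.Imgproc;
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat gray = inputFrame.gray();        // grayscale version of the current frame
    Mat edges = new Mat();
    Imgproc.Canny(gray, edges, 80, 160); // 80/160 are example thresholds, tune them for your scene
    return edges;                        // the edge map is what gets displayed
}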

Get detected face bitmap

I'm experimenting with the following Google sample: https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker
The sample uses the new Play Services face detection APIs and draws a square on detected faces in the camera video stream.
I'm trying to figure out whether it is possible to save the frames that have detected faces in them. From following the code, it seems that the face detector's processor is a good place to perform the 'saving', but it only supplies the detection metadata and not the actual frame.
Your guidance will be appreciated.
You can get it in the following way:
Bitmap source = ((BitmapDrawable) yourImageView.getDrawable()).getBitmap();
// detect faces
Bitmap faceBitmap = Bitmap.createBitmap(source,
        (int) face.getPosition().x,
        (int) face.getPosition().y,
        (int) face.getWidth(),
        (int) face.getHeight());
Yes, it is possible. I answered a question about getting frames from CameraSource here. The trickiest parts are accessing the CameraSource frames and converting the Frame datatype to a Bitmap. Once you have the frames as Bitmaps, you can pass them to your FaceGraphic class and save them in its draw() method, because draw() is called only when faces are detected.
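One common way to do that Frame-to-Bitmap conversion is to go through YuvImage and a JPEG compression step. This is only a rough sketch and assumes the frame data is NV21, which is what CameraSource normally delivers:
// Needs: android.graphics.{Bitmap, BitmapFactory, ImageFormat, Rect, YuvImage},
// java.io.ByteArrayOutputStream, java.nio.ByteBuffer, com.google.android.gms.vision.Frame
public static Bitmap frameToBitmap(Frame frame) {
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    ByteBuffer buffer = frame.getGrayscaleImageData(); // despite the name, this holds the NV21 bytes
    byte[] nv21 = new byte[buffer.remaining()];
    buffer.get(nv21);
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out); // 90 = JPEG quality
    byte[] jpeg = out.toByteArray();
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}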

How to show real-time video from the camera in a SurfaceView with a delay of about 5 seconds?

I need to show a camera preview in a SurfaceView with a delay of about 5 seconds.
So I think I need to somehow capture frames from the camera before they go to the SurfaceView and put them into a buffer, and then, once the buffer is full, take the stored frames from the buffer and show them on the SurfaceView.
But I don't know how to get the frames before they are drawn on the SurfaceView.
I only know how to get frames from the PreviewCallback's onPreviewFrame(byte[] data, Camera camera) method:
PreviewCallback previewCb = new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        // byte[] data is the frame
    }
};
But I don't know how to get frames from the camera directly, store them in a buffer, and then draw the buffered frames on the SurfaceView.
Any help is very much appreciated.
Finally, I ended up working with OpenCV. There is a nice example in the samples folder named "tutorial-1-camerapreview".
There I have this method:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    return inputFrame.rgba();
}
So I can do what I want with the frames inside this method and then just return them.
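For the delay itself, one option (a rough sketch, with an illustrative buffer size that assumes roughly 30 fps) is to keep a FIFO queue of copied frames inside onCameraFrame and only start returning them once about 5 seconds' worth have accumulated:
// Needs: import java.util.ArrayDeque; import java.util.Deque;
private final Deque<Mat> delayBuffer = new ArrayDeque<>();
private static final int DELAY_FRAMES = 150; // ~5 s at ~30 fps, adjust for your device

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    delayBuffer.addLast(rgba.clone()); // clone: the bridge reuses its Mat between frames
    if (delayBuffer.size() < DELAY_FRAMES) {
        // Buffer not full yet: show a black frame of the same size in the meantime.
        return Mat.zeros(rgba.size(), rgba.type());
    }
    return delayBuffer.removeFirst();
}
Note that a few hundred full-size RGBA Mats use a lot of memory, so in practice you would probably downscale the clones and release each delayed Mat after it has been drawn.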
Also, maybe this will be useful to someone:
I have found a nice example of how to get the frames from onPreviewFrame, convert them from YUV into RGB format using JNI (because it's faster, I think), and then draw the frames on a custom SurfaceView.
Example 1.
Example 2.
Hope it will be useful to someone.

OpenCV Android CameraBridgeViewBase how to grab a frame, process it, and draw it, without being interrupted by frame grabs

I am getting started with OpenCV for Android and I am using the CameraBridgeViewBase class to grab frames. I then hand each frame to a worker thread for processing, but I noticed that if my processing takes too long, another frame is grabbed and interrupts my worker thread. How does one get around this? Can you stop the frame grabbing for a period of time? I couldn't find a solution anywhere online!
Cheers,
Kevin
My suggestion is to process the frame directly in
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    // process the frame here, before you return it
    return inputFrame.rgba();
}
Your FPS will drop, but the frames will be processed in the same order they are captured.
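If you do want to keep a worker thread, another possible workaround (only a sketch, not from the answer above) is to drop frames while the worker is still busy, so overlapping processing is never started:
// Needs: import java.util.concurrent.atomic.AtomicBoolean;
private final AtomicBoolean processing = new AtomicBoolean(false);

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    if (processing.compareAndSet(false, true)) { // only start work if the previous frame is done
        final Mat copy = rgba.clone(); // the bridge reuses its Mat, so take a copy
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    processFrame(copy); // processFrame() stands in for your long-running work
                } finally {
                    copy.release();
                    processing.set(false);
                }
            }
        }).start();
    }
    return rgba; // the preview keeps running at full speed
}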

Grabbing consecutive frames in Android using OpenCV

I am trying to grab consecutive frames on Android using the OpenCV VideoCapture class. Actually, I want to implement optical flow on Android, for which I need 2 frames. I implemented optical flow in C first, where I grabbed the frames using cvQueryFrame, and everything worked fine. But on Android, when I call
if (capture.grab()) {
    if (capture.retrieve(mRgba))
        Log.i(TAG, "first frame retrieved");
}
if (capture.grab()) {
    if (capture.retrieve(mRgba2))
        Log.i(TAG, "2nd frame retrieved");
}
and then subtract the matrices using Core.subtract(mRgba, mRgba2, output) and display the output, I get a black image, which indicates that mRgba and mRgba2 are frames with the same data. Can anyone tell me how to grab two different images? According to the OpenCV documentation, mRgba and mRgba2 should be different.
This question is an exact duplicate of
read successive frames OpenCV using cvQueryframe
You have to copy the image to another memory block, because the capture always returns the same pointer.
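In OpenCV's Java API that copy can be made with Mat.clone() (or copyTo()). A minimal sketch of the grab/retrieve sequence with an explicit copy, using the capture object from the question, might look like this:
Mat mRgba = new Mat();
Mat mRgba2 = new Mat();
if (capture.grab() && capture.retrieve(mRgba)) {
    // Copy the first frame into its own memory before grabbing the next one,
    // otherwise both Mats end up referring to the same camera buffer.
    mRgba = mRgba.clone();
}
if (capture.grab() && capture.retrieve(mRgba2)) {
    Mat diff = new Mat();
    Core.subtract(mRgba, mRgba2, diff); // non-zero wherever the scene changed between frames
}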
