I am trying to process Android camera video in real time with opencv-android. So far I can access the video with OpenCV and display it on an org.opencv.android.JavaCameraView (I referred to this link). However, I haven't been able to access the video feed frame by frame. I need to get each and every frame in order to apply some OpenCV algorithms to them for object tracking. Please suggest a method to access and process the frames with OpenCV. (Redirect this if it's already been asked.)
Here is how to do it:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Do your processing here, for example:
    Mat image = inputFrame.rgba();
    Mat retMat = new Mat();
    Core.add(image, new Scalar(40, 40, 40, 0), retMat); // change the brightness of the video frame
    return retMat;
}
OpenCV functions operate on Mat objects, which represent the matrix of pixels of your image/frame. The code above makes each frame brighter by 40 units.
I need to show a camera preview in a SurfaceView with a delay of about 5 seconds.
So I think I need to somehow capture frames from the camera before they go to the SurfaceView and put them into a buffer; then, when the buffer is full, take the stored frames out of the buffer and show them on the SurfaceView.
But I don't know how to get the frames before they are drawn on the SurfaceView.
I only know how to get frames from a PreviewCallback, via its onPreviewFrame(byte[] data, Camera camera) method:
PreviewCallback previewCb = new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        // byte[] data is the frame
    }
};
But I don't know how to get the frames from the camera directly, store them in a buffer, and then push them from the buffer to the SurfaceView.
Any help is very much appreciated.
Finally, I ended up working with OpenCV. There is a nice example in the samples folder named "tutorial-1-camerapreview".
There I have this method:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    return inputFrame.rgba();
}
So I can do whatever I want with the frames inside this method and then just return them.
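For the delay itself, here is a minimal sketch of the buffering idea (my own illustration, not part of the sample): each frame is cloned into a queue, and once roughly five seconds' worth of frames has accumulated, the oldest frame is returned instead of the live one. DELAY_FRAMES is an assumed constant you would tune to your actual preview frame rate, and keep in mind that holding this many full-resolution Mats costs a lot of memory.

import java.util.ArrayDeque;
import java.util.Deque;
import org.opencv.core.Mat;

// Fields of the CvCameraViewListener2 implementation:
private final Deque<Mat> frameBuffer = new ArrayDeque<>();
private static final int DELAY_FRAMES = 150; // assumed: ~5 s at ~30 fps

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    // clone() is required because OpenCV reuses the frame's underlying buffer
    frameBuffer.addLast(inputFrame.rgba().clone());
    if (frameBuffer.size() > DELAY_FRAMES) {
        return frameBuffer.removeFirst(); // oldest frame, i.e. the delayed preview
    }
    return inputFrame.rgba(); // buffer still filling; show the live frame
}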
Also, maybe it will be useful to someone: I have found a nice example of how to get frames from onPreviewFrame, convert them from YUV into RGB format using JNI (because it's faster, I think), and then draw the frames onto a custom SurfaceView (a rough sketch of the conversion follows the links below).
Example 1.
Example 2.
Hope it will be useful to someone.
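As that rough sketch, here is the same NV21-to-RGB conversion done in plain Java with OpenCV instead of JNI (slower, but it shows the idea; in real code, cache the preview size instead of querying it on every frame):

import android.hardware.Camera;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    // NV21 is a full-resolution Y plane followed by interleaved V/U samples,
    // i.e. height * 1.5 rows of single-byte values
    Mat yuv = new Mat(size.height + size.height / 2, size.width, CvType.CV_8UC1);
    yuv.put(0, 0, data);
    Mat rgb = new Mat();
    Imgproc.cvtColor(yuv, rgb, Imgproc.COLOR_YUV2RGB_NV21);
    // rgb is now an ordinary 3-channel Mat, ready for processing or drawing
}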
I have read in several places (source, source) that OpenCV uses the BGR color format by default.
But I am writing a class to detect a blob of a certain color (red) in an image (following the Color Blob Detection sample). So in the onCameraFrame(CvCameraViewFrame inputFrame) function, we return the value inputFrame.rgba(). According to the documentation,
rgba() This method returns RGBA Mat with frame
So I assumed that my rgbaFrame, which is the variable storing the value of inputFrame.rgba() in my program, contains the Mat in RGBA format.
But when I ran the app, the red in the original image appeared bluish in the rgbaFrame Mat I wrote to the external SD card. So apparently the Mat is in BGR format, because red appears blue. (This is discussed in the comments of this question.)
So I changed my cvtColor function from
Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV_FULL);
to
Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_BGR2HSV_FULL);
But nothing changed when I ran the program. Red in the original image still appears blue in the captured frame.
So now I am looking for a way to convert RGB to BGR, to see whether that helps solve my problem, but I have failed to find one.
How do I convert BGR to RGB? Please share any other suggestions you have for me.
Original screenshot from which the camera frame is captured:
rgbaFrame.jpg (saved with Highgui.imwrite("/mnt/sdcard/DCIM/rgbaFrame.jpg", rgbaFrame);)
OpenCV uses BGR by default; however, the Android frame.rgba() implementation returns RGBA (likely for compatibility with ImageView and other Android components). The OpenCV function imwrite still expects BGR, so if you save the image without converting it first, the blue and red channels come out swapped: the frame's Mat has its red channel at index 0 (RGB order), while imwrite writes index 0 as blue (BGR order); likewise, the frame's blue channel at index 2 gets written as red. Call cvtColor with COLOR_RGB2BGR before saving to a file.
/**
 * Callback method that is called on every frame of the CameraBridgeViewBase class of OpenCV.
 */
override fun onCameraFrame(inputFrame: CameraBridgeViewBase.CvCameraViewFrame?): Mat {
    inputFrame?.let { currentFrame ->
        val currentFrameMat = currentFrame.rgba()
        // save the RGB2BGR-converted version
        val convertedMat = Mat()
        Imgproc.cvtColor(currentFrameMat, convertedMat, Imgproc.COLOR_RGB2BGR)
        Imgcodecs.imwrite(imageFilePath, convertedMat)
        return currentFrameMat
    }
    return Mat()
}
I am trying to apply face detection on camera preview frames. I am using OpenGL and OpenCV to process these camera frames at run-time.
@Override
public void onDrawFrame(GL10 unused) {
    if (VERBOSE) {
        Log.d(TAG, "onDrawFrame tex=" + mTextureId);
    }
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);
    // TODO: need to implement
    //JniCppManager.processFrame();
    drawFrame(mTextureId, mSTMatrix);
}
I am trying to write a C++ implementation of processFrame(). How can I get a Mat object in C++ from the transformation matrix? Could anyone give me some pointers to the solution?
Your pipeline is currently:
Camera (produces frame)
SurfaceTexture (receives frame, converts to GLES "external" texture)
[missing stuff]
Array of RGB bytes passed to C++
What you need to do for [missing stuff] is render the pixels to an off-screen pbuffer and read them back with glReadPixels(). You can do this from code written in Java or native; for the former you'd want to read them into a "direct" ByteBuffer so you can easily access the pixels from native code. The EGL context used by GLES is held in thread-local storage, so the native code running on the GLSurfaceView render thread will be able to access it.
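A minimal sketch of that readback step (the method name and size parameters are mine; it assumes the frame was just rendered into the current off-screen surface on the GL thread):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.opengl.GLES20;

// Call on the GL thread, right after drawing the frame into the pbuffer surface.
ByteBuffer readFrameRgba(int width, int height) {
    // A direct buffer lets native code reach the pixels without an extra copy
    ByteBuffer pixelBuf = ByteBuffer.allocateDirect(width * height * 4);
    pixelBuf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
    // Rows come back bottom-up; on the C++ side the buffer can be wrapped as
    // cv::Mat(height, width, CV_8UC4, env->GetDirectBufferAddress(javaBuf))
    return pixelBuf;
}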
An example of this can be found in the bigflake ExtractMpegFramesTest, which differs primarily in that it's grabbing frames from a video rather than a Camera.
For API 19+, if you can process frames in YV12 or NV21 rather than RGB, you can feed the Camera to an ImageReader and get access to the data without having to copy/convert it.
I'm using OpenCV for Android.
I have an app that can recognise a logo in the camera frames. The recognition is not as good as I expected, and I think it's because the quality of my frames is bad.
My activity implements CvCameraViewListener2 and shows a preview of the frames:
<org.opencv.android.NativeCameraView
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:id="@+id/tutorial1_activity_native_surface_view"
    opencv:show_fps="true"
    opencv:camera_id="any" />
I capture frames with this method:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) { // for every frame
    compteurFPS++;
    if (compteurFPS % 5 == 0) { // every 5 frames
        frame = inputFrame.gray();
        compareImage();
    }
    return inputFrame.rgba(); // display the frame
}
The thing is, my frames are bad, so the recognition works but not the way I want.
For the recognition I use the ORB method, comparing two Mat objects, but I don't think my issue is there.
Can someone tell me how to improve my frames?
Currently trying:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    Imgproc.Canny(mRgba, markers, 80, 90);
    Mat threeChannel = new Mat();
    Imgproc.cvtColor(mRgba, threeChannel, Imgproc.COLOR_BGR2GRAY);
    Imgproc.watershed(threeChannel, markers);
    return threeChannel;
}
However, it fails with
CvException [org.opencv.core.CvException: /home/reports/ci/slave/50-SDK/opencv/modules/imgproc/src/segmentation.cpp:147: error: (-210) Only 8-bit, 3-channel input images are supported in function void cvWatershed(const CvArr*, CvArr*)
Could you advise how to appropriately use the markers from Canny/Sobel edge detection to feed the watershed algorithm? Android specifics would be greatly helpful, as this is my first Android project.
The error states that the input image for watershed() must be an 8-bit, 3-channel image. After calling cvtColor(), print the number of channels of threeChannel. Don't be surprised if it outputs 1.
Pass mRgba directly to watershed() and see what happens. One of my previous answers has working code using watershed; you can use that for testing.
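For instance, a quick way to do the channel check suggested above (the log tag is arbitrary):

// After the cvtColor() call in onCameraFrame():
Log.d("Watershed", "threeChannel channels = " + threeChannel.channels()); // prints 1
Log.d("Watershed", "mRgba channels = " + mRgba.channels());               // prints 4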
You just need to convert your image from 4 channels to 3 channels.
For example:
Imgproc.cvtColor(mat, mat, Imgproc.COLOR_BGRA2BGR);
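Putting both answers together, here is a hedged sketch of a setup whose types watershed() will accept. The marker-seeding step (Otsu threshold plus findContours) is my own illustration, not from either answer: watershed() needs a CV_32SC1 marker image whose blobs carry distinct integer labels, which a Canny edge map is not.

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();

    // watershed() requires exactly 3 channels, so drop the alpha channel
    Mat bgr = new Mat();
    Imgproc.cvtColor(rgba, bgr, Imgproc.COLOR_RGBA2BGR);

    // Seed the markers: binarize, find blobs, label each blob with a unique id
    Mat gray = new Mat();
    Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);
    Mat binary = new Mat();
    Imgproc.threshold(gray, binary, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);

    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(binary, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    Mat markers = Mat.zeros(binary.size(), CvType.CV_32SC1);
    for (int i = 0; i < contours.size(); i++) {
        Imgproc.drawContours(markers, contours, i, new Scalar(i + 1), -1); // fill blob i with label i+1
    }

    // Types now match what cvWatershed() expects: 8UC3 image, 32SC1 markers
    Imgproc.watershed(bgr, markers);
    return bgr;
}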