I am trying to apply face detection on camera preview frames. I am using OpenGL and OpenCV to process these camera frames at run-time.
@Override
public void onDrawFrame(GL10 unused) {
    if (VERBOSE) {
        Log.d(TAG, "onDrawFrame tex=" + mTextureId);
    }
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);
    // TODO: need to implement
    //JniCppManager.processFrame();
    drawFrame(mTextureId, mSTMatrix);
}
I am trying to write a C++ implementation of processFrame(). How can I get a Mat object in C++ from the transformation matrix? Could anyone provide me with some pointers toward a solution?
Your pipeline is currently:
Camera (produces frame)
SurfaceTexture (receives frame, converts it to a GLES "external" texture)
[missing stuff]
Array of RGB bytes passed to C++
What you need to do for [missing stuff] is render the pixels to an off-screen pbuffer and read them back with glReadPixels(). You can do this from code written in Java or native; for the former you'd want to read them into a "direct" ByteBuffer so you can easily access the pixels from native code. The EGL context used by GLES is held in thread-local storage, so the native code running on the GLSurfaceView render thread will be able to access it.
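For the read-back step, a minimal sketch might look like the following (assuming an off-screen EGL surface is already current on this thread and width/height match it; names are illustrative, not from the original answer):

// Read the current framebuffer into a direct ByteBuffer; native code
// can then reach the pixels via GetDirectBufferAddress() without a copy.
ByteBuffer pixelBuf = ByteBuffer.allocateDirect(width * height * 4);
pixelBuf.order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);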
An example of this can be found in the bigflake ExtractMpegFramesTest, which differs primarily in that it's grabbing frames from a video rather than a Camera.
For API 19+, if you can process frames in YV12 or NV21 rather than RGB, you can feed the Camera to an ImageReader and get access to the data without having to copy/convert it.
I followed the Android Studio tutorial to get the CameraPreview to work (Camera API, Android developer guide). This works fine for me and I can view the camera stream in my FrameLayout.
But I would like to get the RGB values of a specific pixel in the preview every time it changes. I did not find a method that gives me the preview image as a Bitmap, and I was not able to understand the usage of the onPreviewFrame method:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {}
How can I get the RGB values of a camera preview pixel?
If you are using the camera2 API, you can implement the ImageReader.OnImageAvailableListener interface in your application. After that, you override the onImageAvailable function, which gets an ImageReader as its argument. Then you can access the image just recorded with imageReader.acquireNextImage().
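As a rough sketch of that setup (not from the original answer; the size, maxImages count, and handler are placeholder assumptions):

// YUV_420_888 is the YUV format camera2 guarantees on all devices.
ImageReader reader = ImageReader.newInstance(1280, 720,
        ImageFormat.YUV_420_888, /*maxImages=*/ 2);
reader.setOnImageAvailableListener(r -> {
    try (Image image = r.acquireNextImage()) {
        // Plane 0 is luma (Y); planes 1 and 2 hold chroma.
        // Mind getRowStride()/getPixelStride() when indexing.
        ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
        // ... read the pixel values you need here ...
    }
}, backgroundHandler); // a Handler on a background thread
// reader.getSurface() is then added as an output target of the capture session.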
With either API, you need to handle processing YUV data yourself, unfortunately.
Camera devices natively produce YUV data, not RGB, so the API doesn't spend extra resources to auto-convert the data. The main easy exception is piping data to the GPU, where the GPU driver auto-converts YUV to RGB for you within your pixel shader.
But if you're just in regular app code, you need to parse the data.
For the deprecated android.hardware.Camera API, the output is NV21 by default, and you can usually select YV12 as another option.
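For example, requesting YV12 from the old API might look like this (a sketch; YV12 support is device-dependent, so checking getSupportedPreviewFormats() first is safer):

Camera.Parameters params = camera.getParameters();
params.setPreviewFormat(ImageFormat.YV12); // NV21 is the default
camera.setParameters(params);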
The wikipedia article on YUV is relatively helpful: https://en.wikipedia.org/wiki/YUV
But it does have the wrong conversion coefficients for YUV->RGB conversion; they should be:
R = Y + 1.402 * (Cr - 128)
G = Y - 0.34414 * (Cb - 128) - 0.71414 * (Cr - 128)
B = Y + 1.772 * (Cb - 128)
(Cb = U, Cr = V)
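A direct Java transcription of these formulas for a single pixel might look like this (a sketch; clamping is added because the intermediate values can fall outside [0, 255]):

static int yuvToArgb(int y, int cb, int cr) {
    int r = clamp((int) (y + 1.402f * (cr - 128)));
    int g = clamp((int) (y - 0.34414f * (cb - 128) - 0.71414f * (cr - 128)));
    int b = clamp((int) (y + 1.772f * (cb - 128)));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

static int clamp(int v) {
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}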
You can also take a look at this stackoverflow post:
Extract black and white image from android camera's NV21 format
which has code that looks to be correct for the conversion.
ImageReader and SurfaceTexture are asynchronous from the app side: SurfaceTexture.OnFrameAvailableListener and ImageReader.OnImageAvailableListener fire at different times.
I am building an AR app. I calculate object motion from the ImageReader image and output the object-motion information; on the other side, I call updateTexImage to render the background. The problem is that the object motion visibly lags behind the background rendering.
The workflow is below:
Camera2 -> ImageReader -> calculate object motion -> render a virtual object with the object-motion information
Camera2 -> SurfaceTexture -> render the background with updateTexImage
Both updateTexImage and the virtual-object rendering are called in Renderer.onDrawFrame.
So the question is: how do I sync the ImageReader and SurfaceTexture outputs of Camera2?
The easiest option is not to use two data paths, and instead either do image analysis on the SurfaceTexture buffer (either in EGL or read back from GPU to CPU for analysis), or use the ImageReader buffer to draw everything with.
If that's not feasible, you need to look at the timestamps (https://developer.android.com/reference/android/graphics/SurfaceTexture.html#getTimestamp() and https://developer.android.com/reference/android/media/Image.html#getTimestamp()). For the same capture, the two paths will have the same timestamp, so you can queue up and synchronize your final drawing by matching them up.
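A sketch of that matching logic (assuming a pendingImages queue that the ImageReader callback feeds; the names are illustrative):

// On the GL thread, after updateTexImage():
long texTimestamp = mSurfaceTexture.getTimestamp();
Image image;
while ((image = pendingImages.peek()) != null
        && image.getTimestamp() < texTimestamp) {
    pendingImages.poll().close(); // stale capture, drop it
}
if (image != null && image.getTimestamp() == texTimestamp) {
    // Same capture: render the virtual object computed from this image
    // together with the background frame just latched.
}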
I need to show a camera preview in a SurfaceView with a delay of about 5 seconds.
So I think I need to somehow capture frames from the camera before they go to the SurfaceView and put them into a buffer; then, when the buffer is full, take the stored frames out of the buffer and show them in the SurfaceView.
But I don't know how to get frames before they are drawn on the SurfaceView.
I only know how to get frames from the PreviewCallback, via the onPreviewFrame(byte[] data, Camera camera) method:
PreviewCallback previewCb = new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        // byte[] data is the frame
    }
};
But I don't know how to get frames from the camera directly, store them in a buffer, and then push the buffered frames to the SurfaceView.
Any help is much appreciated.
Finally, I ended up working with OpenCV. There is a nice example in the samples folder named "tutorial-1-camerapreview".
There I have the method:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    return inputFrame.rgba(); // the current frame as an RGBA Mat
}
So I can do what I want with the frames inside this method and then just return them.
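For the original 5-second-delay question, one hedged sketch of what that could look like: buffer the frames in a queue and return the oldest one once enough have accumulated. (This is my illustration, not the tutorial's code; beware memory use, since 150 full-resolution RGBA Mats is a lot, so downscaling may be necessary.)

private final Deque<Mat> frameBuffer = new ArrayDeque<>();
private static final int DELAY_FRAMES = 150; // roughly 5 s at 30 fps

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    frameBuffer.addLast(rgba.clone()); // clone: OpenCV reuses the buffer
    if (frameBuffer.size() > DELAY_FRAMES) {
        return frameBuffer.pollFirst(); // show the oldest buffered frame
    }
    return Mat.zeros(rgba.size(), rgba.type()); // black until the buffer fills
}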
Also, maybe it will be useful to someone:
I found a nice example of how to get frames from onPreviewFrame, convert them from the YUV to the RGB format using JNI (because it's faster, I think), and then draw the frames to a custom SurfaceView.
Example 1.
Example 2.
Hope it will be useful to someone.
I would like to make "instant" screenshots of a Presentation object in Android. My Presentation normally renders to a virtual display (PRIVATE) which is backed by the surface from a MediaRecorder that is set up to record video. Recording the presentation as a video works great.
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(getScratchFile().getAbsolutePath());
mMediaRecorder.setVideoEncodingBitRate(bitrateInBitsPerSecond);
mMediaRecorder.setVideoFrameRate(30);
mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);

mVirtualDisplay = mDisplayManager.createVirtualDisplay(
        DISPLAY_NAME,                // string name, required
        mVideoSize.getWidth(),
        mVideoSize.getHeight(),
        160,                         // screen densityDpi, not sure what it means in this context
        mMediaRecorder.getSurface(), // the media recorder must already be prepare()'d
        DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY); // only we can use it

mPresentation = new MyPresentation(getActivity(), mVirtualDisplay.getDisplay());
How can I get a screenshot of the mMediaRecorder.getSurface() at any time, including when the MediaRecorder is set up but not recording?
I've tried a number of methods related to view.getDrawingCache() on the Presentation root View object, and I only get clear/black output. The presentation itself contains TextureView objects, which I'm guessing are breaking this strategy.
I also tried using an ImageReader as the preview display of the MediaRecorder, but it never receives any images in the callback:
mMediaRecorder.setPreviewDisplay(mImageReader.getSurface());
I'd really like some way to mirror the Surface which backs the presentation into an ImageReader and use it as a consumer; I just can't see how to "mirror" one Surface as a producer into another "consumer" class. It seems there should be an easy way with SurfaceFlinger.
Surfaces are the producer end of a producer-consumer data structure. A producer can't pull data back out of the pipe, so attempting to read frames back from a Surface isn't possible.
When feeding MediaCodec or MediaRecorder, the consumer end is in a different process (mediaserver) that manages the media hardware. For a SurfaceView, the consumer is in SurfaceFlinger. For a TextureView, both ends are in your app, which is why you can easily get a frame from TextureView (call getBitmap()).
To intercept the incoming data you'd need to have both producer and consumer in the same process. The SurfaceTexture class (also known as "GLConsumer") provides this -- it's a consumer that converts the frames it receives into GLES textures.
So the idea would be to create a SurfaceTexture, create a new Surface from that (you'll note that Surface's only public constructor takes a SurfaceTexture), and pass that Surface as the virtual display output Surface. Then as frames come in you "forward" them to the MediaRecorder by rendering the textures with OpenGL ES.
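A sketch of that setup (names are illustrative; it assumes a GL context is current when the texture is created and when frames are latched):

// Create an "external" GLES texture and wrap it in a SurfaceTexture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture consumer = new SurfaceTexture(tex[0]);
consumer.setDefaultBufferSize(width, height);
Surface producer = new Surface(consumer); // pass this to createVirtualDisplay()

consumer.setOnFrameAvailableListener(st -> {
    // Signal the GL thread; there it would call st.updateTexImage() and
    // then render the external texture into the EGL surface wrapping
    // mMediaRecorder.getSurface() (and/or read it back for a screenshot).
});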
This is not entirely straightforward, especially if you haven't worked with OpenGL ES before. Various examples can be found in Grafika (e.g. "texture from camera" and "record GL app").
I don't know if setPreviewDisplay() is expected to work for anything other than Camera.
On Android, I'm trying to perform some OpenGL processing on camera frames, show those frames in the camera preview, and then encode the frames into a video file. I'm trying to do this with OpenGL, using the GLSurfaceView and GLSurfaceView.Renderer, and with FFmpeg for video encoding.
I've successfully processed the image frames using a shader. Now I need to encode the processed frames to video. The GLSurfaceView.Renderer provides the onDrawFrame(GL10 ..) method. It's in this method that I'm attempting to read the image frames using just glReadPixels() and then place the frames on a queue for encoding to video. On its own, glReadPixels() is much too slow; my frame rate is in the single digits. I'm attempting to speed this up using Pixel Buffer Objects. This is not working: after plugging in the PBO, the frame rate is unchanged. This is my first time using OpenGL and I do not know where to begin looking for the problem. Am I doing this right? Can anyone give me some direction? Thanks in advance.
public class MainRenderer implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener {
    ...

    public void onDrawFrame(GL10 gl10) {
        // Create a buffer to hold the image frame
        ByteBuffer byte_buffer = ByteBuffer.allocateDirect(this.width * this.height * 4);
        byte_buffer.order(ByteOrder.nativeOrder());
        // Generate a buffer object name
        IntBuffer image_buffers = IntBuffer.allocate(1);
        GLES20.glGenBuffers(1, image_buffers);
        // Create and initialize the buffer's data store
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, image_buffers.get(0));
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, byte_buffer.limit(), byte_buffer, GLES20.GL_STATIC_DRAW);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, image_buffers.get(0));
        // Read the pixel data into the buffer
        gl10.glReadPixels(0, 0, this.width, this.height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, byte_buffer);
        // Encode the frame to video
        enQueueForEncoding(byte_buffer);
        // Unbind the buffer
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    }

    ...
}
I have never tried something like that before (OpenGL + video encoding), but I can tell you that reading from device memory is SLOW. Try double buffering; it may help, since the GPU can keep rendering into the second buffer while the DMA controller reads the first one back.
Load a profiler (check your device's GPU vendor); it may give you some idea. Another thing that may help is setting the internal pbuffer format to something smaller: try lower bit depths and dropping a channel (alpha).
EDIT: If you feel like it, you can encode the video on the GPU; that will boost your application both memory- and processing-wise.
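If GPU-side encoding here means something like MediaCodec's Surface input, a sketch of that path (API 18+, H.264 assumed; this is my illustration, not part of the original answer) could be:

MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // render GL frames into this
encoder.start();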
As I remember, glBufferData() does not map your internal buffer into GPU memory; it just copies the data from your memory into the buffer (initializes it). Note also that glReadPixels() is affected by the GL_PIXEL_PACK_BUFFER binding, not GL_ARRAY_BUFFER, so the code above still performs a blocking read into client memory.
To get access to the memory allocated by glBufferData(), you should use glMapBufferRange(). That function returns a Java Buffer object which you can read.
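On GLES 3.0 this can be combined into an asynchronous read-back. A sketch of what a working PBO path might look like (my illustration, not the poster's code; the pack-buffer binding is what lets glReadPixels return without a CPU-side copy):

// One-time setup: a pixel-pack PBO sized for one RGBA frame.
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, width * height * 4,
        null, GLES30.GL_DYNAMIC_READ);

// Per frame: with a pack PBO bound, glReadPixels takes an offset
// instead of a client pointer, so it does not block on the transfer.
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glReadPixels(0, 0, width, height,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);

// Later (ideally the next frame, so the transfer has completed):
ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
        GLES30.GL_MAP_READ_BIT);
// ... hand pixels off to the encoding queue here ...
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);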