CPU processing on camera image - android

Currently I'm showing a camera preview on screen by giving the camera a preview texture - camera.setPreviewTexture(...) - and rendering it with OpenGL.
I have a native library which takes a byte[] image as input and returns a byte[] - the result image derived from the input. I want to call it, and then draw both the input image and the result on screen, one on top of the other.
I know that in OpenGL, getting texture data back to the CPU means reading it with glReadPixels, and after processing I would have to upload the result into a texture again - which would have a big performance impact if done every frame.
I thought about using camera.setPreviewCallback(...): there I would get the frame, call the processing method and send the result to my SurfaceView, while in parallel continuing to use the preview-texture technique for drawing on screen. But then I'm worried about synchronizing the frames I get in the preview callback with those I get in the texture.
Am I missing anything, or is there no easy way to solve this?

One approach that may be useful is to direct the output of the Camera to an ImageReader, which provides a Surface. Each frame sent to the Surface is made available as YUV data without a copy, which makes it faster than some of the alternatives. The variations in color formats (stride, alignment, interleave) are handled by ImageReader.
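A rough sketch of the ImageReader setup (this assumes the camera2 API, since ImageReader only exposes a Surface; previewWidth, previewHeight, and backgroundHandler are placeholders):
// Sketch only: receiving camera frames as YUV for CPU access.
// A camera2 capture session is assumed to be configured elsewhere to target reader.getSurface().
ImageReader reader = ImageReader.newInstance(previewWidth, previewHeight,
        ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    Image.Plane y = image.getPlanes()[0];  // luma
    Image.Plane u = image.getPlanes()[1];  // chroma (mind rowStride/pixelStride)
    Image.Plane v = image.getPlanes()[2];
    // ... hand the plane buffers to the native processing library here ...
    image.close();  // always return the buffer to the queue
}, backgroundHandler);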
Since you want the camera image to be presented simultaneously with the processing output, you can't simply send frames down two independent paths; they would be very difficult to keep in sync.
When the frame is ready, you will need to do a color-space conversion and upload the pixels with glTexImage2D(). This will likely be the performance-limiting factor.
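The upload itself is short; a sketch assuming the converted RGBA pixels sit in a direct ByteBuffer called rgbaBuffer and texId is an existing GL_TEXTURE_2D texture:
// Upload converted RGBA pixels into an existing 2D texture (sketch; width/height are frame dimensions).
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaBuffer);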
From the comments it sounds like you're familiar with image filtering using a fragment shader; for anyone else who finds this, you can see an example here.
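For reference, a minimal filtering shader might look like this sketch (a grayscale filter, assuming the camera frame is bound as an external OES texture and the vertex shader supplies vTexCoord):
// Minimal grayscale fragment shader for a camera (external OES) texture.
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    vec4 c = texture2D(sTexture, vTexCoord);\n" +
        "    float gray = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
        "    gl_FragColor = vec4(vec3(gray), 1.0);\n" +
        "}\n";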

Related

Android development, object detection display

In Android development, I have obtained the YUV video stream as a byte array, and I want to perform real-time video recognition. I have tried this: first convert the byte array into a Bitmap with the yuvByteArrayToBitmap method, then run the yolov5ncnn.Detect method, which returns data such as the coordinates of the detected targets. Finally, the showObjects method uses that data to display the video stream on an ImageView and draw the detection boxes for the corresponding targets. Object detection does return rectangular box results, but the whole process is choppy and not smooth.
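A conversion like the one described above is typically implemented along these lines (a sketch assuming NV21 data from the camera); note that it goes through a JPEG encode on every frame, which is itself expensive:
// Sketch of a typical YUV (NV21) to Bitmap conversion; the actual yuvByteArrayToBitmap may differ.
private Bitmap yuvByteArrayToBitmap(byte[] nv21, int width, int height) {
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);  // JPEG round trip every frame
    byte[] jpeg = out.toByteArray();
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}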
I have tried commenting out the detection code and only displaying the video stream on the ImageView, but it is still very choppy, so I suspect displaying the video stream through an ImageView is the slow part. I heard that it is possible to use a Surface and SurfaceTexture. Is it possible to modify each frame in the onSurfaceTextureUpdated method so that it outputs an image with the detection rectangle drawn on it? I am not familiar with how to implement this.
Now I want to run object detection smoothly and draw the rectangles of detected objects on the UI. I already have the YUV byte array and can get the rectangle coordinates from the detection algorithm, but I don't know how to display everything on the UI smoothly.
Hope to get your help, thank you all.

Use MediaCodec to record 720p video but fps of encoded video is too low

I managed to write a video recording demo; my implementation is the same as Grafika's ContinuousCaptureActivity.
In ContinuousCaptureActivity.java, the author creates the EGL objects in surfaceCreated, which runs on the UI thread, and also calls drawFrame on the UI thread. drawFrame does two things: it draws the frame to the screen and pushes the data to the encoder.
See the code here: ContinuousCaptureActivity
Because I set the encoding video size to 1280x720, which is large, the camera preview is not smooth and the fps of the output video is low.
I plan to create a new thread to do the encoding work, but I do not know how to handle multithreading with OpenGL ES. Can anyone give some advice?
Added: I found that drawFrame of Texture2dProgram uses GLES20.glDrawArrays. Would GLES20.glDrawElements give better performance?
First, 1280x720 shouldn't be an issue for mainstream devices. Some super-cheap low-end devices might struggle, but there isn't really anything you can do if the hardware simply can't handle 1280x720x30fps.
The most common reasons I've seen for low FPS at 720p are failure to configure the Camera with a reasonable fps value (using setPreviewFpsRange() with a value from getSupportedPreviewFpsRange()), and failing to call setRecordingHint(true) (or the Camera2 equivalent). The latter can take you from 15fps to 30fps, but may affect the aspect ratio of the preview.
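As a sketch, assuming the old Camera API and an already-opened camera instance named camera:
// Configure the preview for a smooth frame rate (error handling omitted).
Camera.Parameters params = camera.getParameters();
List<int[]> ranges = params.getSupportedPreviewFpsRange();
int[] best = ranges.get(ranges.size() - 1);  // typically the highest supported range
params.setPreviewFpsRange(best[Camera.Parameters.PREVIEW_FPS_MIN_INDEX],
        best[Camera.Parameters.PREVIEW_FPS_MAX_INDEX]);
params.setRecordingHint(true);  // can raise the preview from ~15 to 30 fps on some devices
camera.setParameters(params);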
The video encoding is performed in a separate process, called mediaserver, which manages all interaction with the video encoder hardware. There are already multiple threads in play, so adding another won't help.
The GLES code is drawing two textured triangles. Switching the draw-call API won't make a meaningful difference.
If you think there is a performance bottleneck, you need to use tools like systrace to narrow it down.
I finally found a way to make drawing the frame to the screen faster, which reduces the time spent processing each frame.
The details are as follows:
mPreviewWidth = mCamera.getParameters().getPreviewSize().width;
mPreviewHeight = mCamera.getParameters().getPreviewSize().height;
holder.setFixedSize(mPreviewWidth, mPreviewHeight);
Add this code at https://github.com/google/grafika/blob/master/src/com/android/grafika/ContinuousCaptureActivity.java#L352,
then use GLES20.glViewport(0, 0, mPreviewWidth, mPreviewHeight); in place of the call at https://github.com/google/grafika/blob/master/src/com/android/grafika/ContinuousCaptureActivity.java#L436
This modification greatly reduces the amount of data drawn per frame.
However, it can make the preview image less smooth when using a TextureView; calling setScaleX(1.00001f); setScaleY(1.00001f); works around that.

Efficient path for displaying customized video in Android

In Android, I need an efficient way of modifying the camera stream before displaying it on the screen. This post discusses a couple of ways of doing so and I was able to implement the first one:
Get frame buffer from onPreviewFrame
Convert frame to YUV
Modify frame
Convert modified frame to jpeg
Display the frame in an ImageView placed over the SurfaceView used for the preview
That worked, but it brought the 30 fps I was getting with a regular camera preview down to 5 fps or so. Converting frames back and forth between different color spaces is also power-hungry, which I don't want.
Are there examples on how to get access to the raw frames directly and not have to go through so many conversions? Is using OpenGL the right way of doing this? It must be a very common thing to do but I can't find good examples.
Note: I'd rather avoid using the Camera2 APIs for backward compatibility sake.
The most efficient form of your CPU-based pipeline would look something like this:
Receive frames from the Camera on a Surface, rather than as byte[]. With Camera2 you can send the frames directly to an ImageReader; that will get you CPU access to the raw YUV data from the Camera without copying it or converting it. (I'm not sure how to rig this up with the old Camera API, as it wants either a SurfaceTexture or SurfaceHolder, and ImageReader doesn't provide those. You can run the frames through a SurfaceTexture and get RGB values from glReadPixels(), but I don't know if that'll buy you anything.)
Perform your modifications on the YUV data.
Convert the YUV data to RGB.
Either convert the RGB data into a Bitmap or a GLES texture. glTexImage2D will be more efficient, but OpenGL ES comes with a steep learning curve. Most of the pieces you need are in Grafika (e.g. the texture upload benchmark) if you decide to go that route.
Render the image. Depending on what you did in the previous step, you'll either render the Bitmap through a Canvas onto a custom View, or render the texture with GLES on a SurfaceView or TextureView.
I think the most significant speedup will be from eliminating the JPEG compression and uncompression, so you should probably start there. Convert the output of your frame editor to a Bitmap and just draw it on the Canvas of a TextureView or custom View, rather than converting to JPEG and using ImageView. If that doesn't get you the speedup you want, figure out what's slowing you down and work on that piece of the pipeline.
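A minimal sketch of that direct-draw path, assuming a plain TextureView (not attached to any other producer) and a processedBitmap produced by your frame editor:
// Draw a processed Bitmap straight onto a TextureView, skipping the JPEG round trip.
Canvas canvas = textureView.lockCanvas();
if (canvas != null) {
    canvas.drawBitmap(processedBitmap, null,
            new Rect(0, 0, canvas.getWidth(), canvas.getHeight()), null);
    textureView.unlockCanvasAndPost(canvas);
}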
If you're restricted to the old camera API, then using a SurfaceTexture and doing your processing in a GPU shader may be most efficient.
This assumes whatever modifications you want to do can be expressed reasonably as a GL fragment shader, and that you're familiar enough with OpenGL to set up all the boilerplate necessary to render a single quadrilateral into a frame buffer, using the texture from a SurfaceTexture.
You can then read back the results with glReadPixels from the final rendering output, and save that as a JPEG.
Note that the shader will provide you with RGB data, not YUV, so if you really need YUV, you'll have to convert back to a YUV colorspace before processing.
If you can use camera2, as fadden says, ImageReader or Allocation (for Java/JNI-based processing or Renderscript, respectively) become options as well.
And if you're only using the JPEG to get to a Bitmap to place on an ImageView, and not because you want to save it, then again as fadden says you can skip the encode/decode step and draw to a view directly. For example, if using the Camera->SurfaceTexture->GL path, you can just use a GLSurfaceView as the output destination and render directly into it, if that's all you need to do with the data.

How to take a snapshot of a SurfaceView?

I am working on H.264 video rendering in an Android application using SurfaceView. One feature is taking a snapshot while the video is rendering on the SurfaceView. Whenever I take a snapshot, I only get a transparent/black screen. I use the getDrawingCache() method to capture the screen, but it only returns null. I use the code below to capture the screen.
SurfaceView mSurfaceView = new SurfaceView(this); // member variable
if (mSurfaceView != null) {
    mSurfaceView.setDrawingCacheEnabled(true); // enabled after the video renders on the SurfaceView
    Bitmap bm = mSurfaceView.getDrawingCache(); // returns null
}
Unless you're rendering H.264 video frames in software with Canvas onto a View, the drawing-cache approach won't work (see e.g. this answer).
You cannot read pixels from the Surface part of the SurfaceView. The basic problem is that a Surface is a queue of buffers with a producer-consumer interface, and your app is on the producer side. The consumer, usually the system compositor (SurfaceFlinger), is able to capture a screen shot because it's on the other end of the pipe.
To grab snapshots while rendering video you can render video frames to a SurfaceTexture, which provides both producer and consumer within your app process. You can then render the texture for display with GLES, optionally grabbing pixels with glReadPixels() for the snapshot.
The Grafika app demonstrates various pieces, though none of the activities specifically solves your problem. For example, "continuous capture" directs the camera preview to a SurfaceTexture and then renders it twice (once for display, once for video encoding), which is similar to what you want to do. The GLES utility classes include a saveFrame() function that shows how to use glReadPixels() to create a bitmap.
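The read-back itself is short; here is a sketch along the lines of Grafika's saveFrame(), to be called after drawing the frame and before swapping buffers (width and height are the surface dimensions):
// Read the current GLES framebuffer into a Bitmap.
ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
buf.order(ByteOrder.LITTLE_ENDIAN);
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
buf.rewind();
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(buf);  // GL's origin is bottom-left, so flip vertically before saving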
See also the Android System-Level Graphics Architecture document.

Android Camera API: picture taken is larger after autoFocus than without autoFocus

I have tried two different approaches for capturing an image from the Android camera hardware when the user clicks a capture button. One: call autoFocus and, after the autoFocusCallback completes with a success response, capture the image. Two: capture the image without calling autoFocus at all. In both cases I noticed that the byte array passed to onPictureTaken has a different length: the one produced after autoFocus completes successfully is usually at least 50K bytes larger than the one where autoFocus is skipped entirely. Why is that? Could somebody shed some light?
What I don't understand is this: when autoFocus completes successfully, shouldn't the picture simply have better quality? Quality is just the values of the bits in the bytes representing the RGB channels of each pixel; the overall number of pixels, and therefore the total number of bytes representing those channels, should be the same regardless of what values are stored in them. Yet apparently a sharper image after autoFocus contains more bytes of data than a regular image.
I've been researching this for over a month now and would really appreciate a quick answer.
All image/video capture drivers capture in YUV formats, usually either YUV420 or YUV422. Refer to this link for more information on YUV formats: http://www.fourcc.org/yuv.php
As you mention, pictures taken after the autofocus call are much sharper (crisper edges and better contrast), and that sharpness is missing in images captured without autofocus.
As you know, JPEG compression is used to compress the image data, and the compression works on macroblocks (square blocks of the image). An image with sharper edges and more detail needs more coefficients to encode than a blurred image, where most neighbouring pixels look as though they have been averaged out. That is why the autofocused image is bound to contain more data: it simply has more detail.
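As a quick illustration (a hypothetical standalone demo, not part of the capture path): compressing two Bitmaps with identical dimensions but different amounts of detail yields very different JPEG sizes.
// Same pixel count, very different JPEG sizes.
Bitmap noisy = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
Bitmap flat = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
flat.eraseColor(Color.GRAY);                       // featureless image
java.util.Random rnd = new java.util.Random();
for (int y = 0; y < 480; y++) {                    // high-detail (noise) image
    for (int x = 0; x < 640; x++) {
        noisy.setPixel(x, y, Color.rgb(rnd.nextInt(256), rnd.nextInt(256), rnd.nextInt(256)));
    }
}
ByteArrayOutputStream noisyOut = new ByteArrayOutputStream();
ByteArrayOutputStream flatOut = new ByteArrayOutputStream();
noisy.compress(Bitmap.CompressFormat.JPEG, 90, noisyOut);
flat.compress(Bitmap.CompressFormat.JPEG, 90, flatOut);
Log.d("JpegSize", "noisy=" + noisyOut.size() + " bytes, flat=" + flatOut.size() + " bytes");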
