How to save a picture (with GLSL effects applied) captured by the camera in Android?

I have applied some effects to the camera preview with OpenGL ES 2.0 shaders.
Next, I want to save these processed pictures (grayscale, negative, ...).
I call glReadPixels() in onDrawFrame(), create a bitmap from the pixels I read out of the OpenGL framebuffer, and then save it to device storage.
However, this way I only get a "snapshot" of the camera effect preview. In other words, the saved image resolution (e.g. 800*480) is not the same as that of the image delivered by Camera.PictureCallback (e.g. 1920*1080).
I know the preview size can be changed with setPreviewSize(), but it still can't be made equal to the picture size.
So, is it possible to use a GLSL shader to post-process the image obtained from Camera.PictureCallback directly? Or is there another way to achieve the same goal?
Any suggestion will be greatly appreciated.
Thanks.

Justin, this was my question about setPreviewTexture(), not a suggestion. If you push the pixel data as received from the onPreviewFrame() callback, it will naturally be limited by the supported preview sizes.
You can use the same logic to push the pixels from onPictureTaken() to a texture, but you will need to decode them from JPEG into RGB first.
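A minimal sketch of that idea, assuming a GL context already exists and that glSurfaceView and textureId are a hypothetical GLSurfaceView and an already-created GL_TEXTURE_2D name (neither is from the original answer): decode the full-resolution JPEG from onPictureTaken() into a Bitmap and upload it with GLUtils.texImage2D(), so the same effect shader can process the picture-sized image.

    // Sketch only: decode the full-resolution JPEG and upload it as a GL texture
    // so the existing GLSL effect can be applied to it.
    Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera camera) {
            // Decode JPEG -> Bitmap (full picture size, e.g. 1920x1080).
            final Bitmap picture = BitmapFactory.decodeByteArray(data, 0, data.length);

            // Texture uploads must happen on the GL thread.
            glSurfaceView.queueEvent(new Runnable() {
                @Override
                public void run() {
                    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
                    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, picture, 0);
                    picture.recycle();
                    // Now render a full-screen quad with the effect shader into an
                    // FBO of the same size and read the result back with glReadPixels().
                }
            });
            camera.startPreview();
        }
    };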

Related

Drop Frames from the camera preview

I am using the Camera (API 1) and I understand that I can query the supported frame-rate ranges and set one of them. However, I want the preview to run at a very low frame rate (i.e. 5 frames per second).
I can't set that, as it is below every supported range. Is there a way I can drop certain frames from the preview? With setPreviewCallbackWithBuffer() I do get the frame, but at that point it has already been displayed. Is there any way I can just "skip frames" from the preview?
Thank you
No, you cannot intervene in the frames on the preview surface. You could do some tricks if you use a SurfaceTexture for the preview, but if you need a very low frame rate, you can simply draw the frames (preferably, to an OpenGL texture) yourself. You will get the YUV frames in the onPreviewFrame() callback, and pass them on for display whenever you want. You can use a shader to display the YUV frames without wasting CPU on converting them to RGB.
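For illustration, a minimal sketch of dropping frames in the callback, assuming a hypothetical renderFrame() method that hands the YUV buffer to your own renderer (the names are placeholders, not from the original answer):

    // Sketch: forward only every 6th preview frame (~5 fps from a 30 fps preview).
    private static final int SKIP = 6;
    private int frameCount = 0;

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (frameCount++ % SKIP == 0) {
            // Hypothetical renderer call: draw this YUV frame (e.g. via a GL shader).
            renderFrame(data, previewWidth, previewHeight);
        }
        // Return the buffer for reuse (when using setPreviewCallbackWithBuffer).
        camera.addCallbackBuffer(data);
    }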
Usually, we want to skip preview frames because we want to run some CV algorithms on them, and often we want to display the frames modified by such CV processing, e.g. with bounding boxes around detected objects. Even if you keep the box coordinates aside and want to display the preview frame 'as is', using your own renderer has the advantage that there will be no time lag between the picture and the overlay.

CPU processing on camera image

Currently I'm showing a preview of the camera on the screen by giving the camera a preview texture - camera.setPreviewTexture(...) (doing it with OpenGL, of course).
I have a native library which takes a byte[] image and returns a byte[] result image derived from the input. I want to call it, and then draw both the input image and the result to the screen - one on top of the other.
I know that in OpenGL, in order to get texture data back to the CPU, it must be read with glReadPixels(), and after processing I would have to load the result back into a texture - doing that every frame will have a big impact on performance.
I thought about using camera.setPreviewCallback(...): there I get the frame (call the processing method and transfer the result to my SurfaceView), while in parallel I keep using the preview-texture technique for drawing to the screen. But then I'm worried about synchronizing the frames I get in the preview callback with those I get in the texture.
Am I missing anything? Or is there no easy way to solve this?
One approach that may be useful is to direct the output of the Camera to an ImageReader, which provides a Surface. Each frame sent to the Surface is made available as YUV data without a copy, which makes it faster than some of the alternatives. The variations in color formats (stride, alignment, interleave) are handled by ImageReader.
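For reference, a minimal sketch of the ImageReader side, assuming a camera2-based setup that adds the reader's Surface as an output target (the session and request plumbing is omitted; backgroundHandler is a hypothetical Handler on a background thread):

    // Sketch: receive YUV frames from the camera without an extra copy.
    ImageReader reader = ImageReader.newInstance(width, height,
            ImageFormat.YUV_420_888, /*maxImages=*/ 2);

    reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader r) {
            Image image = r.acquireLatestImage();
            if (image == null) return;
            // Planes 0/1/2 are Y/U/V; row stride and pixel stride describe the layout.
            Image.Plane yPlane = image.getPlanes()[0];
            ByteBuffer yBuffer = yPlane.getBuffer();
            // ... hand the YUV data to the native library here ...
            image.close(); // always release the image promptly
        }
    }, backgroundHandler);

    // reader.getSurface() is the Surface to add as a camera output target.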
Since you want the camera image to be presented simultaneously with the processing output, you can't send frames down two independent paths.
When the frame is ready, you will need to do a color-space conversion and upload the pixels with glTexImage2D(). This will likely be the performance-limiting factor.
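The upload itself is only a couple of GLES calls; a sketch, assuming the converted frame is already RGBA bytes in a direct ByteBuffer named rgbaBuffer and textureId is an existing texture name (both names are placeholders):

    // Sketch: upload one converted RGBA frame into an existing GL_TEXTURE_2D.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            width, height, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaBuffer);
    // If the size never changes, glTexSubImage2D() avoids reallocating the texture.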
From the comments it sounds like you're familiar with image filtering using a fragment shader; for anyone else who finds this, you can see an example here.

Efficient path for displaying customized video in Android

In Android, I need an efficient way of modifying the camera stream before displaying it on the screen. This post discusses a couple of ways of doing so and I was able to implement the first one:
Get the frame buffer from onPreviewFrame
Convert the frame to YUV
Modify the frame
Convert the modified frame to JPEG
Display the frame in an ImageView placed over the SurfaceView used for the preview
That worked, but it brought the 30 fps I was getting with a regular camera preview down to about 5 fps. Converting frames back and forth between different image spaces is also power hungry, which I don't want.
Are there examples on how to get access to the raw frames directly and not have to go through so many conversions? Is using OpenGL the right way of doing this? It must be a very common thing to do but I can't find good examples.
Note: I'd rather avoid using the Camera2 APIs for backward compatibility's sake.
The most efficient form of your CPU-based pipeline would look something like this:
1. Receive frames from the Camera on a Surface, rather than as byte[]. With Camera2 you can send the frames directly to an ImageReader; that will get you CPU access to the raw YUV data from the Camera without copying it or converting it. (I'm not sure how to rig this up with the old Camera API, as it wants either a SurfaceTexture or SurfaceHolder, and ImageReader doesn't provide those. You can run the frames through a SurfaceTexture and get RGB values from glReadPixels(), but I don't know if that'll buy you anything.)
2. Perform your modifications on the YUV data.
3. Convert the YUV data to RGB (a sketch of one such conversion follows this list).
4. Either convert the RGB data into a Bitmap or a GLES texture. glTexImage2D will be more efficient, but OpenGL ES comes with a steep learning curve. Most of the pieces you need are in Grafika (e.g. the texture upload benchmark) if you decide to go that route.
5. Render the image. Depending on what you did in step #4, you'll either render the Bitmap through a Canvas on a custom View, or render the texture with GLES on a SurfaceView or TextureView.
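As a concrete illustration of step 3, a plain-Java sketch for the NV21 layout that the old Camera API delivers by default; it uses the common integer-range BT.601 approximation, and production code would use RenderScript, libyuv, or a shader instead:

    // Sketch: convert an NV21 preview frame to packed ARGB pixels.
    static int[] nv21ToArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        int frameSize = width * height;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int yIndex = y * width + x;
                // NV21: full Y plane, then interleaved V/U at quarter resolution.
                int uvIndex = frameSize + (y >> 1) * width + (x & ~1);
                int Y = nv21[yIndex] & 0xFF;
                int V = (nv21[uvIndex] & 0xFF) - 128;
                int U = (nv21[uvIndex + 1] & 0xFF) - 128;
                int r = (int) (Y + 1.402f * V);
                int g = (int) (Y - 0.344f * U - 0.714f * V);
                int b = (int) (Y + 1.772f * U);
                r = Math.max(0, Math.min(255, r));
                g = Math.max(0, Math.min(255, g));
                b = Math.max(0, Math.min(255, b));
                argb[yIndex] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }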
I think the most significant speedup will be from eliminating the JPEG compression and decompression, so you should probably start there. Convert the output of your frame editor to a Bitmap and just draw it on the Canvas of a TextureView or custom View, rather than converting to JPEG and using ImageView. If that doesn't get you the speedup you want, figure out what's slowing you down and work on that piece of the pipeline.
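A sketch of that "draw the Bitmap directly" idea, using TextureView's standard lockCanvas()/unlockCanvasAndPost() calls; editedBitmap is a hypothetical result from your frame editor:

    // Sketch: draw the edited frame straight to a TextureView, skipping JPEG entirely.
    // Note: this TextureView must not also be the camera's preview target.
    Canvas canvas = textureView.lockCanvas();
    if (canvas != null) {
        try {
            canvas.drawBitmap(editedBitmap, 0, 0, null);
        } finally {
            textureView.unlockCanvasAndPost(canvas);
        }
    }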
If you're restricted to the old camera API, then using a SurfaceTexture and doing your processing in a GPU shader may be most efficient.
This assumes whatever modifications you want to do can be expressed reasonably as a GL fragment shader, and that you're familiar enough with OpenGL to set up all the boilerplate necessary to render a single quadrilateral into a frame buffer, using the texture from a SurfaceTexture.
You can then read back the results with glReadPixels from the final rendering output, and save that as a JPEG.
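A sketch of that readback, assuming the effect has just been rendered into a framebuffer of the given size; outputFile and TAG are placeholders, and the vertical flip is needed because GL's origin is at the bottom-left:

    // Sketch: read back the rendered RGBA pixels and save them as a JPEG.
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.copyPixelsFromBuffer(buf);

    // GL images are bottom-up; flip vertically before saving.
    Matrix flip = new Matrix();
    flip.postScale(1f, -1f);
    Bitmap upright = Bitmap.createBitmap(bmp, 0, 0, width, height, flip, false);

    try (FileOutputStream out = new FileOutputStream(outputFile)) {
        upright.compress(Bitmap.CompressFormat.JPEG, 90, out);
    } catch (IOException e) {
        Log.e(TAG, "Failed to save JPEG", e);
    }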
Note that the shader will provide you with RGB data, not YUV, so if you really need YUV, you'll have to convert back to a YUV colorspace before processing.
If you can use camera2, as fadden says, ImageReader or Allocation (for Java/JNI-based processing or Renderscript, respectively) become options as well.
And if you're only using the JPEG to get a Bitmap to place in an ImageView, and not because you want to save it, then again, as fadden says, you can skip the encode/decode step and draw to a view directly. For example, if you're using the Camera->SurfaceTexture->GL path, you can just use a GLSurfaceView as the output destination and render directly into it, if that's all you need to do with the data.

Skewing the SurfaceView of the Camera Preview

Here is some background information to help explain the situation. I've been tasked with building a whiteboard app. This app would require a device's camera to display the whiteboard in a live stream. The device could be positioned at an angle to the whiteboard and yet still display a "flat" image - pretty much like taking a picture at an angle and then skewing the image so it looks flat, as if you had taken the picture directly in front of it.
The question I have is whether it is possible to skew the SurfaceView of the camera preview so that I can record a video of the skewed image rather than the image itself?
If you send it to a TextureView, rather than a SurfaceView, you can apply a transformation matrix. You can see a trivial example in Grafika's PlayMovieActivity, where adjustAspectRatio() applies a matrix to set the aspect ratio of the video.
If you're not familiar with matrix transformations, take a look at the answers here.
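A minimal sketch of applying such a transform, assuming the preview is shown on a TextureView of size w x h; the destination corner coordinates here are placeholders for whatever perspective correction you compute:

    // Sketch: skew the displayed preview by transforming the TextureView.
    Matrix transform = new Matrix();

    // Map the four view corners onto where the whiteboard should appear.
    float[] src = {0, 0,  w, 0,  w, h,  0, h};
    float[] dst = {0, 0,  w, 40, w, h - 40,  0, h}; // placeholder values
    transform.setPolyToPoly(src, 0, dst, 0, 4);

    textureView.setTransform(transform);
    // Note: this changes only what is displayed; the recorded video is unaffected.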
This assumes that you have control over the player, and can send it a "skew this much" value along with the video. To modify the actual video you'll need to apply the transform to the video frames as they're on their way to the encoder. One way to do this would be to send the preview to a SurfaceTexture, draw that on a GLES quad with the appropriate transformation, and capture the GLES rendering with a MediaCodec encoder.
It'll be easier to capture it straight and skew it on playback.

Android Camera APIs picture-taken is larger after autoFocus than without autoFocus

I have tried two different approaches for capturing an image from the Android camera hardware when the user taps a capture-picture button. In the first, I call autoFocus and, after the AutoFocusCallback completes with a success response, capture the image. In the second, I capture the image without calling autoFocus at all. In both cases, I noticed that the resulting byte array passed to onPictureTaken has a different length: the one produced after autoFocus completes successfully is usually at least 50K bytes larger than the one produced when the autoFocus call is skipped entirely. Why is that? Could somebody shed some light on this?
What I don't understand is this: when autoFocus completes successfully, the picture should simply have better quality, and quality is just the values of the bits in the bytes representing the RGB channels of each pixel. The overall number of pixels, and therefore the total number of bytes representing the RGB channels, should be the same regardless of what bit values are loaded into those bytes. But apparently more bytes of data are included for a sharper image after autoFocus than for a regular image.
Been researching for over a month now. Would really appreciate a quick answer.
All the image/video capture drivers use YUV formats for capture, in most cases either YUV420 or YUV422. Refer to this link for more information on YUV formats: http://www.fourcc.org/yuv.php
As you mentioned, the pictures taken after the autoFocus call are much sharper (crisper edges and better contrast), and that sharpness is missing in images captured without auto focus.
JPEG compression is used to compress the image data, and the compression operates on macroblocks (square blocks of the image). An image with sharper edges and more detail needs more coefficients to encode than a blurred image, since in a blurred image most neighbouring pixels look as if they have been averaged out. That is why the auto-focused image is bound to have more data: it has more detail.
