Skewing the SurfaceView of the Camera Preview - android

Here is some background information to help explain the situation. I've been tasked with building a whiteboard app. The app would require a device's camera to display the whiteboard in a live stream. The device could be positioned at an angle to the whiteboard and yet still display a "flat" image - much like taking a picture at an angle and then skewing the image so it looks flat, as if you had taken the picture from directly in front of it.
My question is whether it is possible to skew the SurfaceView of the camera preview so that I can record a video of the skewed image rather than the original image.

If you send it to a TextureView, rather than a SurfaceView, you can apply a transformation matrix. You can see a trivial example in Grafika's PlayMovieActivity, where adjustAspectRatio() applies a matrix to set the aspect ratio of the video.
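For illustration, here is a minimal sketch of applying a skew with TextureView.setTransform(); the helper name and the skew factor are placeholders, and the TextureView is assumed to already be showing the camera preview:

    // Hypothetical helper: skew the TextureView contents horizontally.
    private void applySkew(TextureView textureView, float skewX) {
        android.graphics.Matrix matrix = new android.graphics.Matrix();
        float cx = textureView.getWidth() / 2f;
        float cy = textureView.getHeight() / 2f;
        // Skew around the center so the image doesn't slide off-screen.
        matrix.setSkew(skewX, 0f, cx, cy);
        textureView.setTransform(matrix);
    }

A real whiteboard correction would be a perspective warp rather than a simple skew; Matrix.setPolyToPoly() may be used to build such a matrix from the four corner positions of the board.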
If you're not familiar with matrix transformations, take a look at the answers here.
This assumes that you have control over the player, and can send it a "skew this much" value along with the video. To modify the actual video you'll need to apply the transform to the video frames as they're on their way to the encoder. One way to do this would be to send the preview to a SurfaceTexture, draw that on a GLES quad with the appropriate transformation, and capture the GLES rendering with a MediaCodec encoder.
It'll be easier to capture it straight and skew it on playback.

Related

Drop Frames from the camera preview

I am using the Camera 1 API, and I understand that I can get the range of supported frame rates and set a rate within it. However, I want the preview to run at a very low frame rate (i.e. 5 frames per second).
I can't set that, because it is below every supported range. Is there a way I can drop certain frames from the preview? With setPreviewCallbackWithBuffer() I get the frame, but at that point it has already been displayed. Is there any way I can just "skip frames" in the preview?
Thank you
No, you cannot intervene in the frames on the preview surface. You could do some tricks if you use a SurfaceTexture for the preview, but if you need a very low frame rate, you can simply draw the frames yourself (preferably to an OpenGL texture). You will get the YUV frames in the onPreviewFrame() callback and pass them on for display whenever you want. You can use a shader to display the YUV frames without wasting CPU on converting them to RGB.
Usually, we want to skip preview frames because we want to run some CV algorithms on them, and often we want to display the frames modified by that CV processing, e.g. with bounding boxes around detected objects. Even if you keep the box coordinates separate and want to display the preview frame 'as is', using your own renderer has the advantage that there is no time lag between the picture and the overlay.
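A rough sketch of the frame-skipping idea with the Camera 1 API; SKIP and renderYuv() are placeholders for your own target rate and YUV renderer, and callback buffers are assumed to have been handed to the camera with addCallbackBuffer() beforehand:

    private static final int SKIP = 6;   // keep roughly every 6th frame (~5 fps at a 30 fps preview)
    private int frameCount = 0;

    camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            if (frameCount++ % SKIP == 0) {
                renderYuv(data);            // display only the frames you want
            }
            camera.addCallbackBuffer(data); // always return the buffer to the camera
        }
    });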

Android custom camera scope (TextureView)

I have a fragment in which I'm using a TextureView, and I'm using the following link as a reference:
https://github.com/googlesamples/android-Camera2Basic
Is there a way to modify the scope of the camera to take a "landscape" picture (it's not really landscape, just a different scope), even though I'm in portrait mode?
I'm attaching a photo of what I'm trying to achieve. I have a round white frame, I want the scope of the camera/TextureView to be within that frame, and I want to add a button that captures exactly what's in that frame (with rounded corners). Is that possible?
It is feasible.
You can use OpenGL to achieve it.
First, draw the camera frame to an external GL texture, then draw the frame from that texture to the screen.
Crop the frame while drawing it to the screen by modifying the texture coordinates, and you will get your effect.
The approach is described here:
Crop video before encoding with MediaCodec for Grafika's "Continuous Capture" Activity
The rounded-corner effect can also be implemented by modifying the texture coordinates; the approach is described here:
How to make TextureView play video with round corners and bubble effect
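As a rough illustration of the cropping idea (not taken from the linked answers): instead of mapping the quad to the full 0..1 texture range, remap its texture coordinates to the sub-rectangle you want to show. The crop values below are only examples:

    // Full frame uses texture coordinates (0,0)..(1,1).
    // To show only a centered crop, remap them to the sub-rectangle you want.
    float cropLeft = 0.25f, cropRight = 0.75f;
    float cropTop = 0.30f, cropBottom = 0.70f;

    float[] texCoords = {
        cropLeft,  cropBottom,   // bottom-left
        cropRight, cropBottom,   // bottom-right
        cropLeft,  cropTop,      // top-left
        cropRight, cropTop,      // top-right
    };
    // Pass texCoords to the texture-coordinate attribute used when drawing the quad
    // (e.g. via a FloatBuffer given to glVertexAttribPointer).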

Display camera preview in a circle

I want to display a camera preview in a circular shape using the camera2 API, but I don't want the image to be captured in a circular shape.
The captured image would be a face (later I want to implement face detection and auto capture). I did look at a few questions already asked, but none of them use the new camera2 APIs, and most of them talk about an overlay image cropped with a transparent circle. That will not work in a case where I need to auto-detect a face (as the face may appear outside the cropped circular image).
Is there any way I can implement this? I did try an example with a TextureView set inside a LinearLayout with fixed width and height, but the preview appeared a bit squeezed and in a square shape.
I don't see why face detection matters here - if you enable the camera API's face detector, it'll run on the full image no matter what you do in drawing it inside a circle.
You can either use a circle overlay on top of a correctly-shaped TextureView or SurfaceView, or do your own OpenGL rendering of a circle with the camera preview as an EGL texture.
For the latter, you'll probably want a GLSurfaceView for the OpenGL drawing context, and a SurfaceTexture to send the camera data to and expose it as an EGL texture.
JPEGs captured will still be full-FOV, and the camera API will know nothing about your circular preview drawing, so face detection and everything else will work on the full field of view.
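For the OpenGL route, here is a sketch of a fragment shader (held as a Java string) that samples the camera's external texture and blanks everything outside a circle. It assumes the preview is bound as a GL_TEXTURE_EXTERNAL_OES texture with coordinates passed in as vTexCoord; aspect-ratio correction is omitted, so on a non-square quad the "circle" would come out as an ellipse:

    private static final String CIRCLE_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n"
            + "precision mediump float;\n"
            + "varying vec2 vTexCoord;\n"
            + "uniform samplerExternalOES sTexture;\n"
            + "void main() {\n"
            + "    vec2 d = vTexCoord - vec2(0.5);\n"
            + "    if (dot(d, d) > 0.25) {\n"          // outside radius 0.5
            + "        gl_FragColor = vec4(0.0);\n"    // blank it out
            + "    } else {\n"
            + "        gl_FragColor = texture2D(sTexture, vTexCoord);\n"
            + "    }\n"
            + "}\n";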

CPU processing on camera image

Currently I'm showing a camera preview on screen by providing a preview texture with camera.setPreviewTexture(...) (using OpenGL, of course).
I have a native library which takes a byte[] as an input image and returns a byte[] - the result image derived from the input. I want to call it, and then draw both the input image and the result to the screen, one on top of the other.
I know that in OpenGL, in order to get texture data back to the CPU, it must be read with glReadPixels(), and after processing I would have to load the result back into a texture - doing that every frame will have a big impact on performance.
I thought about using camera.setPreviewCallback(...): there I get the frame (calling the processing method and transferring the result to my SurfaceView), while in parallel continuing to use the preview-texture technique for drawing on the screen. But then I'm worried about synchronizing the frames I get in the preview callback with those I get in the texture.
Am I missing anything? Or is there no easy way to solve this issue?
One approach that may be useful is to direct the output of the Camera to an ImageReader, which provides a Surface. Each frame sent to the Surface is made available as YUV data without a copy, which makes it faster than some of the alternatives. The variations in color formats (stride, alignment, interleave) are handled by ImageReader.
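A minimal sketch of the ImageReader side; it assumes the camera2 API (where reader.getSurface() is added as a target of the capture request), and processFrame(), backgroundHandler, width and height are placeholders:

    ImageReader reader = ImageReader.newInstance(width, height,
            ImageFormat.YUV_420_888, /*maxImages=*/ 3);

    reader.setOnImageAvailableListener(r -> {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        try {
            // Three planes: Y, U, V. Row strides and pixel strides vary by device.
            Image.Plane[] planes = image.getPlanes();
            ByteBuffer y = planes[0].getBuffer();
            ByteBuffer u = planes[1].getBuffer();
            ByteBuffer v = planes[2].getBuffer();
            processFrame(y, u, v, image.getWidth(), image.getHeight()); // your native call
        } finally {
            image.close(); // release the buffer back to the reader
        }
    }, backgroundHandler);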
Since you want the camera image to be presented simultaneously with the processing output, you can't send frames down two independent paths.
When the frame is ready, you will need to do a color-space conversion and upload the pixels with glTexImage2D(). This will likely be the performance-limiting factor.
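A sketch of that upload step, assuming the processed frame has already been converted to RGBA bytes (rgba, width, height and textureId are placeholders):

    // Direct buffers avoid an extra copy when GL reads the data.
    ByteBuffer pixels = ByteBuffer.allocateDirect(rgba.length)
            .order(ByteOrder.nativeOrder());
    pixels.put(rgba).position(0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);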
From the comments it sounds like you're familiar with image filtering using a fragment shader; for anyone else who finds this, you can see an example here.

How to save picture (applied glsl effects) captured by camera in Android?

I have applied some effects to camera preview by OpenGL ES 2.0 shaders.
Next, I want to save these effect pictures (grayscale, negative ...)
I call glReadPixels() in onDrawFrame(), create a bitmap based on the pixels I read from opengl frame buffer and then save it to device storage.
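(For reference, a rough sketch of that readback path inside onDrawFrame(), where width and height are the GL surface dimensions:)

    // Read the current GL framebuffer and wrap it in a Bitmap.
    // Note: GL's origin is bottom-left, so the result is vertically flipped.
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buf);
    // ...flip vertically and compress/save to storage...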
However, in this way, I only "snapshot" the camera effect preview. In other words, the saved image resolution (ex: 800*480) is not the same as the image taken by Camera.PictureCallback (ex: 1920 * 1080).
I know the preview size can be changed with setPreviewSize(), but in the end it still can't match the picture size.
So, is it possible to use a GLSL shader to post-process the image obtained from Camera.PictureCallback directly? Or is there another way to achieve the same goal?
Any suggestion will be greatly appreciated.
Thanks.
Justin, this was my question about setPreviewTexture(), not a suggestion. If you send the pixel data as received from the onPreviewFrame() callback, it will naturally be limited to the supported preview sizes.
You can use the same logic to push the pixels from onPictureTaken() to a texture, but you should first decode them from JPEG into RGB.
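A minimal sketch of that path, assuming onPictureTaken() delivers JPEG bytes (jpegData) and a GL texture id (textureId) is already available on the GL thread:

    // Decode the JPEG from onPictureTaken() into a Bitmap, then push it to a texture.
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
    // Run the same shader pass over this texture and read the result back with glReadPixels().

To get the result at full picture resolution, you would render into an offscreen framebuffer (FBO) sized to the captured picture rather than to the preview-sized display surface.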
