I have a fragment in which I'm using a TextureView, and I'm using the following link as a reference:
https://github.com/googlesamples/android-Camera2Basic
Is there a way to modify the scope of the camera so it captures a "landscape" picture (it's not really landscape, just a different field of view), even though I'm in portrait mode?
I'm attaching a photo of what I'm trying to achieve. I have a round white frame; I want the scope of the camera/TextureView to be confined to that frame, and I want to add a button that captures exactly what's in that frame (with the rounded corners). Is that possible?
It is feasible; you can achieve it with OpenGL.
First, draw the camera frame to an external GL texture, then draw the frame from that GL texture to the screen.
Crop the frame during the draw-to-screen pass by modifying the texture coordinates, and you will get your effect.
The approach is described here:
Crop video before encoding with MediaCodec for Grafika's "Continuous Capture" Activity
The rounded-corner effect can also be implemented by modifying the texture coordinates. The approach is described here:
How to make TextureView play video with round corners and bubble effect
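As a minimal sketch of the crop step, assuming a Grafika-style render pass (the shader and crop values below are illustrative, not taken from Camera2Basic): the camera frame arrives as an external OES texture, and shrinking the texture-coordinate range from the full 0..1 quad is what crops it.

    // Fragment shader for sampling the camera's external texture.
    private static final String FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "void main() {\n" +
            "    gl_FragColor = texture2D(sTexture, vTexCoord);\n" +
            "}\n";

    // Full-frame coordinates would span (0,0)-(1,1); narrowing the range
    // crops. This keeps the middle 50% of each axis (triangle-strip order:
    // bottom left, bottom right, top left, top right):
    private static final float[] CROPPED_TEX_COORDS = {
            0.25f, 0.25f,
            0.75f, 0.25f,
            0.25f, 0.75f,
            0.75f, 0.75f,
    };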
Related
I want to display a camera preview in a circular shape using the camera2 API, but I don't want the image to be captured in a circular shape.
The captured image would be a face (later I want to implement face detection and auto-capture). I did have a look at a few questions already asked, but none of them use the new camera2 APIs, and most of them talk about having an overlay image cropped with a transparent circle. That will not work in a case where I need to auto-detect a face (as the face may appear outside the cropped circular image).
Is there any way I can implement this? I did try an example with a TextureView set in a LinearLayout with fixed width and height, but the preview appeared a bit squeezed and in a square shape.
I don't see why face detection matters here: if you enable the camera API's face detector, it runs on the full image no matter how you draw the preview inside a circle.
You can either use a circle overlay on top of a correctly-shaped TextureView or SurfaceView, or do your own OpenGL rendering of a circle with the camera preview as an EGL texture.
For the latter, you'll probably want a GLSurfaceView for the OpenGL drawing context, and a SurfaceTexture to send camera data to and expose it as an EGL texture.
JPEGs captured will still be full-FOV, and the camera API will know nothing about your circular preview drawing, so face detection and everything else will work on the full field of view.
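For the GLES route, a minimal sketch of the fragment shader (uniform and varying names here are illustrative): sample the camera's SurfaceTexture as an external texture and discard fragments outside the circle. Since this only affects drawing, capture and face detection still see the full frame.

    private static final String CIRCLE_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform samplerExternalOES sCamera;\n" +
            "void main() {\n" +
            "    vec2 d = vTexCoord - vec2(0.5);\n" +
            "    if (dot(d, d) > 0.25) discard;  // outside radius 0.5\n" +
            "    gl_FragColor = texture2D(sCamera, vTexCoord);\n" +
            "}\n";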
I have a camera preview displayed on a TextureView, which works pretty well. But I can't mask the TextureView with a circular mask. As soon as I use masking, nothing gets displayed.
Is this not possible? Or is there another way?
You should use the SurfaceTexture API:

1. Render camera frames to a SurfaceTexture you create yourself.
2. Draw the frame from the SurfaceTexture to the screen. During the drawing, use a suitable OpenGL vertex/texture-coordinate array to implement the round corners.

Here is a very helpful article with source code on GitHub:
https://medium.com/@fabrantes/rounded-video-corners-on-android-3467841cc1b
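A minimal sketch of step 1, assuming a GL context is current on this thread and that previewWidth, previewHeight, and glView are placeholders from your own setup:

    // Create a GL texture of the external (OES) type for the camera to fill.
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

    // Wrap it in a SurfaceTexture and size it to the preview resolution.
    SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
    surfaceTexture.setDefaultBufferSize(previewWidth, previewHeight);
    surfaceTexture.setOnFrameAvailableListener(st -> glView.requestRender());

    // Hand the matching Surface to the camera as its preview target.
    Surface previewSurface = new Surface(surfaceTexture);

On each render pass, call surfaceTexture.updateTexImage() on the GL thread before drawing the rounded-corner quad.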
I have two surface views in a FrameLayout, which also contains a LinearLayout with some buttons. One of the buttons should be able to capture and save an image of the two SurfaceViews. One SurfaceView is a camera preview and the other is an OpenGL surface with a square in it. How would you go about taking the picture and saving it?
You can't read data back from a SurfaceView Surface. See e.g. this answer.
The way that you "capture" it is by rendering it to something you can read the pixels from. In your case, you'd grab a frame from the camera, render that to an offscreen pbuffer, then render the square with OpenGL ES onto the same pbuffer, and then grab that with glReadPixels(). Essentially you perform the Surface composition yourself.
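A sketch of the final read-back, loosely following Grafika's EglSurfaceBase.saveFrame() (width and height are assumed to be the pbuffer dimensions, and the pbuffer's EGL context must be current):

    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();

    // RGBA bytes match ARGB_8888's in-memory layout, so copy directly.
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.copyPixelsFromBuffer(buf);

    // GL rows run bottom-up; flip vertically before saving.
    android.graphics.Matrix flip = new android.graphics.Matrix();
    flip.postScale(1f, -1f);
    bmp = Bitmap.createBitmap(bmp, 0, 0, width, height, flip, false);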
Here is some background information to help explain the situation. I've been tasked with building a whiteboard app. The app requires a device's camera to display the whiteboard in a live stream. The device could be positioned at an angle to the whiteboard and yet still display a "flat" image, much like taking a picture at an angle and then skewing the image to be flat, as if you had taken the picture from directly in front of it.
My question is whether it is possible to skew the SurfaceView of the camera preview so that I can record a video of the skewed image rather than the original one.
If you send it to a TextureView, rather than a SurfaceView, you can apply a transformation matrix. You can see a trivial example in Grafika's PlayMovieActivity, where adjustAspectRatio() applies a matrix to set the aspect ratio of the video.
If you're not familiar with matrix transformations, take a look at the answers here.
This assumes that you have control over the player, and can send it a "skew this much" value along with the video. To modify the actual video you'll need to apply the transform to the video frames as they're on their way to the encoder. One way to do this would be to send the preview to a SurfaceTexture, draw that on a GLES quad with the appropriate transformation, and capture the GLES rendering with a MediaCodec encoder.
It'll be easier to capture it straight and skew it on playback.
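A minimal sketch of skewing on playback, using TextureView.setTransform() with an affine skew (the 0.2 factor is just a placeholder; you'd derive the real value from your camera angle):

    android.graphics.Matrix m = new android.graphics.Matrix();
    // Skew horizontally around the view's center.
    m.setSkew(0.2f, 0f,
            textureView.getWidth() / 2f, textureView.getHeight() / 2f);
    textureView.setTransform(m);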
I have a special design requirement for the app I'm developing right now.
Right now, I have a third-party private video library which plays a video stream. The design of this screen includes a translucent panel overlaid on top of the video, blurring the portion of the video that lies behind.
Normally, in order to blur the background, you are supposed to take a screenshot of the view behind, blur it, and use it as an image for the foreground view.
In this case, the video keeps on playing, so the blurred image changes every frame. How would you implement this then?
A possible solution would be to create a thread that takes screenshots, crops them, and sets them as the background. Even better if that view is a SurfaceView, I guess. But I'm wondering what the best approach would be in this case. Would a thread that is continually taking screenshots create a huge performance impact? Is it possible to feed a SurfaceView's buffer with these images?
Thanks!
A SurfaceView surface is a consumer of graphics buffers. You can't have two producers for one consumer, which means you can't send the video to it and draw on it at the same time.
You can have multiple layers; the SurfaceView surface is on a separate layer behind the View UI layer. So you could play the video to the SurfaceView's surface, and draw your blur rectangle on the SurfaceView's view. (Normally the SurfaceView's view is completely transparent, and is just used as a place-holder for layout purposes.)
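As a minimal sketch of this layering option (the view ids and panel size are assumptions, and note that a plain overlay gives you a translucent scrim rather than a true blur, since the video pixels never pass through the View system):

    // The SurfaceView's surface is a separate layer composited by the
    // system; the panel lives in the View UI layer above it.
    FrameLayout root = findViewById(R.id.root);        // assumed container
    SurfaceView videoView = findViewById(R.id.video);  // fed by the video library

    View panel = new View(this);
    panel.setBackgroundColor(0x80FFFFFF);              // 50%-alpha white scrim
    FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(
            FrameLayout.LayoutParams.MATCH_PARENT, 400 /* px, placeholder */);
    lp.gravity = Gravity.BOTTOM;
    root.addView(panel, lp);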
Another option would be to render the video frame to a SurfaceTexture. You would then render that texture to the SurfaceView surface with GLES, and render the blur rectangle on top. You can find an example of treating live camera input as a GLES texture in Grafika ("texture from camera" activity). This has the additional advantage that, since you're not interacting with the View system -- the SurfaceView surface is composited by the system, not the app -- you can do it all on an independent thread.
In any event, rendering, grabbing a screenshot, and re-rendering is going to be slower than the options described above.
For more details about why things work the way they do, see the Android System-Level Graphics architecture doc.