I have two surface views in a frame layout, which also contains a linear layout with some buttons. One of the buttons should capture and save an image of the two surface views. One surface view is a camera preview and the other is an OpenGL surface with a square in it. How would you go about taking the picture and saving it?
You can't read data back from a SurfaceView Surface. See e.g. this answer.
The way that you "capture" it is by rendering it to something you can read the pixels from. In your case, you'd grab a frame from the camera, render that to an offscreen pbuffer, then render the square with OpenGL ES onto the same pbuffer, and then grab that with glReadPixels(). Essentially you perform the Surface composition yourself.
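Here is a minimal sketch of the readback step, assuming an EGL pbuffer surface of size width x height is current on the calling thread and both layers (camera frame and GLES square) have already been rendered into it; the class and method names are just for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.opengl.GLES20;

public class FrameCapture {
    // Reads back the currently bound framebuffer (here: the pbuffer both
    // layers were rendered into) and wraps the pixels in a Bitmap.
    public static Bitmap captureFrame(int width, int height) {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        buf.rewind();

        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);

        // glReadPixels returns rows bottom-up; flip vertically before saving.
        Matrix flip = new Matrix();
        flip.preScale(1, -1);
        return Bitmap.createBitmap(bmp, 0, 0, width, height, flip, false);
    }
}
```

You can then compress the Bitmap to a file with Bitmap.compress().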
I have a fragment in which I'm using a TextureView, and I'm using the following link as a reference:
https://github.com/googlesamples/android-Camera2Basic
Is there a way to modify the scope of the camera to take a "landscape" (it's not really a landscape, just a different scope) picture, even though I'm on portrait mode?
I'm attaching a photo of what I'm trying to achieve. I have a round white frame; I want the scope of the camera/TextureView to be in that frame, and I want to add a button that captures exactly what's in that frame (with rounded corners). Is that possible?
It is feasible. You can use OpenGL to achieve it.
First, draw the camera frame to an external GL texture, then draw the frame from the GL texture to the screen.
Crop the frame while drawing it to the screen by modifying the texture coordinates and you will get your effect; a short sketch follows the links below.
An example of how to do this is here:
Crop video before encoding with MediaCodec for Grafika's "Continuous Capture" Activity
The rounded-corner effect can also be implemented by modifying the texture coordinates. An example is here:
How to make TextureView play video with round corners and bubble effect
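To illustrate the cropping idea: instead of sampling the full [0,1] range of the camera texture, hand the quad texture coordinates for just the sub-rectangle you want to keep. A minimal sketch, where the crop fractions are placeholders:

```java
// Crop by texture coordinate: sample only a sub-rectangle of the camera
// texture. The fractions below are placeholders for your frame region.
float left = 0.25f, bottom = 0.25f, right = 0.75f, top = 0.75f;

// Triangle-strip order: lower-left, lower-right, upper-left, upper-right.
float[] croppedTexCoords = {
        left,  bottom,
        right, bottom,
        left,  top,
        right, top,
};
// Pass croppedTexCoords to the shader in place of the full-frame
// {0,0, 1,0, 0,1, 1,1} coordinates; the quad's vertex positions stay the same.
```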
I am trying to draw a 3D object onto camera preview frames (Android). Should I use two surface views, one for the camera preview and another, a GLSurfaceView, for drawing? The views should be synchronized, and the display frame rate should be good enough to provide a good user experience. Most of the tutorials talk about using multiple views. The alternative idea is to get a texture from the camera preview and merge it with the 3D object to be drawn, so as to produce the appropriate 2D raster image.
Which method would be better for performance gains?
P.S.: I will be using the Java APIs for OpenGL ES 2.0.
Since two surface views increase the number of API calls per frame and require transparency, they will be slower.
You don't need two surface views for your purpose.
1. Disable depth writes.
2. Render the camera preview on a 2D quad filling the screen.
3. Enable depth writes.
4. Render the 3D object.
This will make sure your 3D objects are rendered over the camera preview.
You can also achieve this with two surface views and transparency, but it will be slower.
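A minimal sketch of the single-SurfaceView render order described above, assuming GLES20 and hypothetical drawCameraQuad() / drawModel() helpers:

```java
// One frame of the render loop. drawCameraQuad() and drawModel()
// are placeholders for your own drawing code.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

GLES20.glDepthMask(false);   // 1. disable depth writes
drawCameraQuad();            // 2. full-screen quad textured with the camera frame
GLES20.glDepthMask(true);    // 3. re-enable depth writes
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
drawModel();                 // 4. the 3D object, drawn over the preview
```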
I tried to apply a 3D transformation (such as setRotationX()) to a SurfaceView used for camera previewing, but only the frame changes; the content doesn't.
A SurfaceView has two parts, the Surface and the View. The Surface is a separate layer that is rendered and composited independently. The View part is, by default, a transparent rectangle that creates a "hole" in the View layer, so that you can see through the Views to the Surface behind it.
The transformation you mention (setRotationX()) is a View method, but the camera preview is sent to the Surface. That's why the frame changed but the preview itself didn't.
You can send your preview to a TextureView, which can take an arbitrary transformation matrix (setTransform()), by using the Camera.setPreviewTexture() method. Or you can send it through a SurfaceTexture to an OpenGL ES texture, which can be rendered on the SurfaceView's Surface, using whatever GLES transformations you want. For an example of the latter, see Grafika's "texture from Camera" Activity.
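As a rough illustration of the TextureView route (the rotation values are arbitrary):

```java
// setTransform() takes a 2D Matrix that is applied to the content itself.
android.graphics.Matrix m = new android.graphics.Matrix();
m.postRotate(30f, textureView.getWidth() / 2f, textureView.getHeight() / 2f);
textureView.setTransform(m);

// Because a TextureView is rendered as an ordinary View, 3D View
// transforms such as setRotationX() also affect its content:
textureView.setRotationX(45f);
```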
I want to achieve this kind of feature.
My initial camera preview:
Now I want to break this camera preview into two parts:
What I have tried:
Create a surface view to hold the camera preview. [Done]
Shift half of the surface view off the screen. [Done] Now half of the surface view is off the screen and only half is visible.
The problem is that the camera writes its complete preview to only the visible portion of the surface view, so the preview gets shrunk into half of the screen.
Can anybody help me how can I achieve this?
Send the camera preview to a SurfaceTexture, then draw two rects with GLES, one with the left part of the preview, one with the right. Use a single SurfaceView for display.
You can find sample code in Grafika's "texture from Camera" Activity, which manipulates the camera output in various ways. Note in particular that the "zoom" feature works by displaying a progressively smaller area of the preview while keeping the output rect the same size.
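A sketch of the split itself, showing only the texture coordinates (the vertex positions that place the two quads, with a gap between them, are omitted):

```java
// Map each half of the camera texture onto its own quad.
// Triangle-strip order: lower-left, lower-right, upper-left, upper-right.
float[] leftHalfTexCoords = {
        0.0f, 0.0f,
        0.5f, 0.0f,
        0.0f, 1.0f,
        0.5f, 1.0f,
};
float[] rightHalfTexCoords = {
        0.5f, 0.0f,
        1.0f, 0.0f,
        0.5f, 1.0f,
        1.0f, 1.0f,
};
// Draw one quad with leftHalfTexCoords and the other with rightHalfTexCoords,
// both sampling the same external camera texture; position the quads'
// vertices wherever the two halves should appear on screen.
```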
I have a special design requirement for the app I'm developing right now.
Right now, I have a third-party private video library which plays a video stream. The design of this screen includes a translucent panel overlaid on top of the video, blurring the portion of the video that lies behind.
Normally in order to blur the background, you are supposed to take a screenshot of the view behind, blur it and use it as an image for the foreground view.
In this case, the video keeps on playing, so the blurred image changes every frame. How would you implement this then?
A possible solution would be to create a thread that takes screenshots, crops them, and sets them as the background. Even better if that view is a SurfaceView, I guess. But I'm wondering what the best approach would be in this case. Would a thread that is continually taking screenshots create a huge performance impact? Is it possible to feed a SurfaceView buffer with these images?
Thanks!
A SurfaceView surface is a consumer of graphics buffers. You can't have two producers for one consumer, which means you can't send the video to it and draw on it at the same time.
You can have multiple layers; the SurfaceView surface is on a separate layer behind the View UI layer. So you could play the video to the SurfaceView's surface, and draw your blur rectangle on the SurfaceView's view. (Normally the SurfaceView's view is completely transparent, and is just used as a place-holder for layout purposes.)
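A minimal sketch of that approach (note a translucent fill only dims or tints the region; it is not a true gaussian blur of the live video):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.SurfaceView;

// Draws a translucent panel on the SurfaceView's View part, which is
// composited above the video playing on the Surface behind it.
public class OverlaySurfaceView extends SurfaceView {
    private final Paint scrim = new Paint();

    public OverlaySurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        setWillNotDraw(false);       // SurfaceView skips onDraw() by default
        scrim.setColor(0x80FFFFFF);  // 50% translucent white (placeholder)
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Placeholder region: the lower third of the view.
        canvas.drawRect(0, getHeight() * 2f / 3f, getWidth(), getHeight(), scrim);
    }
}
```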
Another option would be to render the video frame to a SurfaceTexture. You would then render that texture to the SurfaceView surface with GLES, and render the blur rectangle on top. You can find an example of treating live camera input as a GLES texture in Grafika ("texture from camera" activity). This has the additional advantage that, since you're not interacting with the View system -- the SurfaceView surface is composited by the system, not the app -- you can do it all on an independent thread.
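A rough sketch of the frame-driven loop for that approach; glHandler, drawVideoFrame(), drawBlurPanel(), and swapBuffers() are hypothetical stand-ins for your EGL/GLES plumbing:

```java
// Each new video frame triggers a redraw on the GL thread.
surfaceTexture.setOnFrameAvailableListener(st -> glHandler.post(() -> {
    st.updateTexImage();   // latch the newest video frame into the OES texture
    drawVideoFrame();      // draw the frame to the SurfaceView's EGL surface
    drawBlurPanel();       // draw the blurred/translucent panel on top
    swapBuffers();         // EGL14.eglSwapBuffers(...) under the hood
}));
```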
In any event, rendering, grabbing a screenshot, and re-rendering is going to be slower than the options described above.
For more details about why things work the way they do, see the Android System-Level Graphics architecture doc.