I currently have two different surfaces (one from a SurfaceView and another created by MediaCodec).
What are the different ways available to copy from one surface to another?
In the Android graphics architecture, a Surface is the producer side of a buffer queue; the buffers of graphical data it carries (e.g. video frames) are drained by a consumer on the other end.
A typical consumer does not provide access to the buffers that it holds. An exception is the special type ImageReader, which allows direct application access to image data rendered into its Surface.
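For illustration, a rough sketch of the ImageReader route (the RGBA format and backgroundHandler are assumptions, not requirements):

    // android.media.ImageReader / android.media.Image / android.graphics.PixelFormat
    ImageReader reader = ImageReader.newInstance(width, height,
            PixelFormat.RGBA_8888, 2);              // up to 2 images in flight
    Surface producerSurface = reader.getSurface();  // hand this to the producer side
    reader.setOnImageAvailableListener(r -> {
        try (Image image = r.acquireLatestImage()) {
            if (image != null) {
                ByteBuffer pixels = image.getPlanes()[0].getBuffer();
                // copy/process the raw pixel data here (mind the row stride)
            }
        }
    }, backgroundHandler);                          // backgroundHandler: a Handler you own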
A less efficient way is to copy the contents of a SurfaceView into a Bitmap using PixelCopy, whereas TextureView lets you obtain a Bitmap directly via getBitmap().
You can then draw the Bitmap onto another Surface using that Surface's Canvas.
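A hedged sketch of that path (PixelCopy requires API 24+; destinationSurface stands in for your second Surface, and note that a MediaCodec input Surface may not support software lockCanvas):

    Bitmap bitmap = Bitmap.createBitmap(surfaceView.getWidth(),
            surfaceView.getHeight(), Bitmap.Config.ARGB_8888);
    PixelCopy.request(surfaceView, bitmap, copyResult -> {
        if (copyResult == PixelCopy.SUCCESS) {
            Canvas canvas = destinationSurface.lockCanvas(null);
            try {
                canvas.drawBitmap(bitmap, 0f, 0f, null);
            } finally {
                destinationSurface.unlockCanvasAndPost(canvas);
            }
        }
    }, new Handler(Looper.getMainLooper()));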
Links:
https://source.android.com/devices/graphics/arch-sh
https://developer.android.com/reference/android/media/ImageReader
https://developer.android.com/reference/android/view/PixelCopy
https://developer.android.com/reference/android/view/Surface#lockCanvas(android.graphics.Rect)
https://developer.android.com/reference/android/graphics/Canvas#drawBitmap(android.graphics.Bitmap,%20android.graphics.Rect,%20android.graphics.Rect,%20android.graphics.Paint)
Related
I am trying to draw a 3D object onto camera preview frames (Android). Should I use two surface views, one for the camera preview and another GLSurfaceView for drawing? The views should be synchronized, and the display frame rate should be good enough to provide a good user experience. Most of the tutorials talk about using multiple views. The alternative idea is to get a texture from the camera preview and merge it with the 3D object to be drawn, so as to get the appropriate 2D raster image.
Which method would be better for performance gains?
P.S.: I will be using the Java APIs for OpenGL ES 2.0.
Since two surface views increase the number of API calls per frame and require transparency, they will be slower.
You don't need two surface views for your purpose.
1. Disable depth writes.
2. Render the camera preview on a 2D quad filling the screen.
3. Enable depth writes.
4. Render the 3D object.
This will make sure your 3D objects are rendered over the camera preview.
You can also achieve this with two surface views and transparency, but it will be slower.
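For reference, a minimal sketch of that per-frame ordering in Java GLES 2.0 (drawCameraQuad() and drawModel() are hypothetical helpers that bind your own shaders and geometry):

    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    GLES20.glDepthMask(false);       // 1. disable depth writes
    drawCameraQuad(cameraTexId);     // 2. full-screen quad sampling the preview texture

    GLES20.glDepthMask(true);        // 3. re-enable depth writes
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    drawModel();                     // 4. the 3D object now renders over the preview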
Most of the code for generating videos using MediaCodec I've seen so far either uses pure OpenGL or locks the Canvas of the MediaCodec-generated Surface and edits it. Can I do it with a mix of both?
For example, if I generate my frames in the latter way, is it possible to apply a fragment shader on the MediaCodec-generated Surface before or after editing the Surface's Canvas?
A Surface is the producer end of a producer-consumer pair. Only one producer can be connected at a time, so you can't use GLES and Canvas on the same Surface without disconnecting one and attaching the other.
Last I checked (Lollipop) there was no way to disconnect a Canvas. So switching back and forth is not possible.
What you would need to do is:
1. Create a Canvas backed by a Bitmap.
2. Render onto that Canvas.
3. Upload the rendered Bitmap to GLES with glTexImage2D().
4. Blit the bitmap with GLES, using your desired fragment shader.
The overhead associated with the upload is unavoidable, but remember that you can draw the Bitmap at a smaller resolution and let GLES scale it up. Because you're drawing on a Bitmap rather than a Surface, it's not necessary to redraw the entire screen for every update, so there is some opportunity to reduce Canvas rendering overhead.
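A rough Java sketch of those four steps (drawFrame() and fullScreenBlit() are hypothetical helpers; an EGL context is assumed current on this thread):

    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);          // 1. Canvas backed by a Bitmap
    drawFrame(canvas);                           // 2. your Canvas rendering

    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);  // 3. upload

    fullScreenBlit(tex[0]);                      // 4. blit with your fragment shader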
All of the above holds regardless of what the Surface is connected to -- could be MediaCodec, SurfaceView, SurfaceTexture, etc.
I have two surfaces views in a frame layout, which also contains a linear layout with some buttons. One of the buttons should be able to capture and save an image of the two surfaceviews. One surfaceview is a camera preview and the other is an opengl surface with a square in it. How would you go about taking the picture and saving it?
You can't read data back from a SurfaceView Surface. See e.g. this answer.
The way that you "capture" it is by rendering it to something you can read the pixels from. In your case, you'd grab a frame from the camera, render that to an offscreen pbuffer, then render the square with OpenGL ES onto the same pbuffer, and then grab that with glReadPixels(). Essentially you perform the Surface composition yourself.
I am working on H.264 video rendering in an Android application using SurfaceView. It has a feature to take a snapshot while rendering video on the surface view. Whenever I take a snapshot, I get only a transparent/black screen. I use the getDrawingCache() method to capture the screen, but it returns only a null value. I use the code below to capture the screen.
SurfaceView mSurfaceView = new SurfaceView(this); // member variable

if (mSurfaceView != null) {
    mSurfaceView.setDrawingCacheEnabled(true); // enable the drawing cache after video renders on the SurfaceView
    Bitmap bm = mSurfaceView.getDrawingCache(); // returns null
}
Unless you're rendering H.264 video frames in software with Canvas onto a View, the drawing-cache approach won't work (see e.g. this answer).
You cannot read pixels from the Surface part of the SurfaceView. The basic problem is that a Surface is a queue of buffers with a producer-consumer interface, and your app is on the producer side. The consumer, usually the system compositor (SurfaceFlinger), is able to capture a screen shot because it's on the other end of the pipe.
To grab snapshots while rendering video you can render video frames to a SurfaceTexture, which provides both producer and consumer within your app process. You can then render the texture for display with GLES, optionally grabbing pixels with glReadPixels() for the snapshot.
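A rough sketch of that wiring (mediaCodec and the render-thread signaling are assumptions; updateTexImage() must be called on the thread that owns the GLES context):

    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

    SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
    Surface decoderSurface = new Surface(surfaceTexture);
    // give decoderSurface to the video decoder, e.g.
    // mediaCodec.configure(format, decoderSurface, null, 0);

    surfaceTexture.setOnFrameAvailableListener(st -> {
        // signal your GLES thread; there, call st.updateTexImage() to latch the
        // frame, render tex[0] for display, and glReadPixels() for a snapshot
    });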
The Grafika app demonstrates various pieces, though none of the activities specifically solves your problem. For example, "continuous capture" directs the camera preview to a SurfaceTexture and then renders it twice (once for display, once for video encoding), which is similar to what you want to do. The GLES utility classes include a saveFrame() function that shows how to use glReadPixels() to create a bitmap.
See also the Android System-Level Graphics Architecture document.
I have the task to record user activity in a webview, in other words I need to create an mp4 video file while the user navigates in a webview. Pretty challenging :)
I found that Android 4.3 expanded MediaCodec to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering.
I even found an example where you can record a game written in OpenGL: http://bigflake.com/mediacodec/
My question is: how could I record WebView activity? I assume that if I could draw the WebView content to an OpenGL texture, then everything would be fine. But I don't know how to do this.
Can anybody help me on this?
Why not try WebView.onDraw first, instead of using OpenGL? The latter approach may be more complicated, and not supported by all devices.
Once you are able to obtain the screenshots, you can create the video from the image sequence, a separate task where MediaCodec should help.
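For the screenshot step, a minimal sketch that draws the WebView into a Bitmap-backed Canvas (View.draw() drives onDraw internally; webView is your WebView instance):

    Bitmap frame = Bitmap.createBitmap(webView.getWidth(),
            webView.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(frame);
    webView.draw(canvas);    // renders the current WebView content into 'frame'
    // feed 'frame' (and subsequent frames) to your encoder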
"I assume that If I could draw the webview content to opengl texture".
It is possible.
The SurfaceTexture is basically your entry point into the OpenGL layer. It is initialized with an OpenGL texture id and performs all of its rendering onto that texture.
The steps to render your view to OpenGL:
1. Initialize an OpenGL texture.
2. Within an OpenGL context, construct a SurfaceTexture with the texture id. Use SurfaceTexture.setDefaultBufferSize(int width, int height) to make sure you have enough space on the texture for the view to render.
3. Create a Surface constructed with the above SurfaceTexture.
4. Within the View's onDraw, use the Canvas returned by Surface.lockCanvas to do the view drawing. You can obviously do this with any View, and not just WebView. Plus Canvas has a whole bunch of drawing methods, allowing you to do funky, funky things.
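A condensed sketch of steps 1-4 under those assumptions (viewWidth, viewHeight, and webView are placeholders for your own objects; a GLES context must be current for steps 1-2):

    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);                            // step 1
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

    SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]); // step 2
    surfaceTexture.setDefaultBufferSize(viewWidth, viewHeight);

    Surface surface = new Surface(surfaceTexture);              // step 3

    Canvas canvas = surface.lockCanvas(null);                   // step 4
    try {
        webView.draw(canvas);           // works with any View, not just WebView
    } finally {
        surface.unlockCanvasAndPost(canvas);
    }
    // later, on the GLES thread: surfaceTexture.updateTexImage() to latch
    // the frame into tex[0]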
The source code can be found here: https://github.com/ArtemBogush/AndroidViewToGLRendering and some explanations here: http://www.felixjones.co.uk/neo%20website/Android_View/