How to take a snapshot of a SurfaceView? - android

I am working on H.264 video rendering in an Android application using SurfaceView. One feature is taking a snapshot while the video is rendering on the surface view. Whenever I take a snapshot, I get only a transparent/black screen. I use the getDrawingCache() method to capture the screen, but it only returns null. I use the code below to capture the screen.
SurfaceView mSUrfaceView = new SurfaceView(this); // Member variable
if (mSUrfaceView != null) {
    mSUrfaceView.setDrawingCacheEnabled(true); // After the video renders on the SurfaceView, I enable the drawing cache
}
Bitmap bm = mSUrfaceView.getDrawingCache(); // Returns null

Unless you're rendering H.264 video frames in software with Canvas onto a View, the drawing-cache approach won't work (see e.g. this answer).
You cannot read pixels from the Surface part of the SurfaceView. The basic problem is that a Surface is a queue of buffers with a producer-consumer interface, and your app is on the producer side. The consumer, usually the system compositor (SurfaceFlinger), is able to capture a screen shot because it's on the other end of the pipe.
To grab snapshots while rendering video you can render video frames to a SurfaceTexture, which provides both producer and consumer within your app process. You can then render the texture for display with GLES, optionally grabbing pixels with glReadPixels() for the snapshot.
The Grafika app demonstrates various pieces, though none of the activities specifically solves your problem. For example, "continuous capture" directs the camera preview to a SurfaceTexture and then renders it twice (once for display, once for video encoding), which is similar to what you want to do. The GLES utility classes include a saveFrame() function that shows how to use glReadPixels() to create a bitmap.
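For reference, here is a minimal sketch of the glReadPixels() step, along the lines of Grafika's saveFrame(); it assumes an EGL context is current, the frame you want has just been rendered, and width/height match your render surface:
ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
buf.order(ByteOrder.LITTLE_ENDIAN);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
buf.rewind();
Bitmap snapshot = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
snapshot.copyPixelsFromBuffer(buf);
// GL's origin is bottom-left, so the result is vertically flipped; flip it with
// a Matrix (or render upside down) before saving if orientation matters.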
See also the Android System-Level Graphics Architecture document.

Related

Copy from one surface to another surface

I currently have two different surfaces (one from SurfaceView and another surface created from MediaCodec).
What are the different ways available to copy from one surface to another?
In the Android graphics architecture a Surface is the producer-side interface to a queue of buffers containing the graphical data (e.g. video frames); what matters here is the consumer sitting on the other end of that queue.
A typical consumer does not provide access to the buffers that it holds. An exception is the special type ImageReader that allows direct application access to image data rendered into its Surface.
A less efficient option is to copy the contents of a SurfaceView into a Bitmap using PixelCopy, whereas a TextureView lets you get a Bitmap directly (TextureView.getBitmap()).
You can then draw the Bitmap image onto another Surface using its Canvas.
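As a rough sketch (assuming API 24+ for PixelCopy, that surfaceView and targetSurface are placeholders you already hold, and that targetSurface is not already connected to another producer such as a MediaCodec input):
Bitmap bitmap = Bitmap.createBitmap(surfaceView.getWidth(), surfaceView.getHeight(), Bitmap.Config.ARGB_8888);
PixelCopy.request(surfaceView, bitmap, copyResult -> {
    if (copyResult == PixelCopy.SUCCESS) {
        // Draw the captured pixels into the other Surface via its Canvas.
        Canvas canvas = targetSurface.lockCanvas(null);
        canvas.drawBitmap(bitmap, 0, 0, null);
        targetSurface.unlockCanvasAndPost(canvas);
    }
}, new Handler(Looper.getMainLooper()));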
Links:
https://source.android.com/devices/graphics/arch-sh
https://developer.android.com/reference/android/media/ImageReader
https://developer.android.com/reference/android/view/PixelCopy
https://developer.android.com/reference/android/view/Surface#lockCanvas(android.graphics.Rect)
https://developer.android.com/reference/android/graphics/Canvas#drawBitmap(android.graphics.Bitmap,%20android.graphics.Rect,%20android.graphics.Rect,%20android.graphics.Paint)

CPU processing on camera image

Currently I'm showing a camera preview on the screen by providing a camera preview texture via camera.setPreviewTexture(...) (using OpenGL, of course).
I have a native library that takes a byte[] image and returns a byte[] result image derived from the input. I want to call it, and then draw both the input image and the result to the screen, one on top of the other.
I know that in OpenGL, to get texture data back to the CPU it must be read with glReadPixels(), and after processing I would have to load the result back into a texture, which would have a big performance impact if done every frame.
I thought about using camera.setPreviewCallback(...), getting the frame there (calling the processing method and transferring the result to my SurfaceView), while continuing to use the preview-texture technique for drawing to the screen, but then I'm worried about synchronizing the frames I get in the preview callback with those I get in the texture.
Am I missing anything? Or is there no easy way to solve this?
One approach that may be useful is to direct the output of the Camera to an ImageReader, which provides a Surface. Each frame sent to the Surface is made available as YUV data without a copy, which makes it faster than some of the alternatives. The variations in color formats (stride, alignment, interleave) are handled by ImageReader.
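A sketch of the ImageReader side (wiring reader.getSurface() to the camera as an output target is assumed to be done with the camera2 API; sizes, maxImages, and handler are placeholders):
ImageReader reader = ImageReader.newInstance(previewWidth, previewHeight, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    // The Y/U/V planes are accessible without a copy; mind each plane's
    // row stride and pixel stride when handing the data to native code.
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuf = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    // ... pass the planes to the native processing library here ...
    image.close(); // always release the buffer back to the queue
}, handler);
// reader.getSurface() is the Surface you register as the camera output.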
Since you want the camera image to be presented simultaneously with the processing output, you can't send frames down two independent paths.
When the frame is ready, you will need to do a color-space conversion and upload the pixels with glTexImage2D(). This will likely be the performance-limiting factor.
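The upload itself might look like this (assuming the processed frame is an RGBA ByteBuffer named pixels and textureId is an existing GL_TEXTURE_2D texture):
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
// Once the texture has been allocated, glTexSubImage2D() is usually the
// cheaper call for per-frame updates.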
From the comments it sounds like you're familiar with image filtering using a fragment shader; for anyone else who finds this, you can see an example here.

Render Android MediaCodec output on two views for VR Headset compatibility

What I know so far is that I need to use a SurfaceTexture that can be rendered on two TextureViews simultaneously.
So it will be:
MediaCodec -> SurfaceTexture -> 2x TextureViews
But how do I get a SurfaceTexture programmatically to be used in the MediaCodec? As far as I know a new SurfaceTexture is created for every TextureView, so if I have two TextureViews in my activity, I will get two SurfaceTextures!? That's one too many... ;)
Or is there any other way to render the MediaCodec Output to a screen twice?
Do you actually require two TextureViews, or is that just for convenience?
You could, for example, have a single SurfaceView or TextureView that covers the entire screen, and then just render on the left and right sides with GLES. With the video output in a SurfaceTexture, you can render it however you like. The "texture from camera" activity in Grafika demonstrates various ways to manipulate the image from a video source.
If you really want two TextureViews, you can have them. Use a single EGL context for the SurfaceTexture and both TextureViews, and just switch between EGL surfaces with eglMakeCurrent() when it's time to render.
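A sketch of that rendering loop (assuming you manage your own EGLDisplay/EGLContext, eglSurfaceA and eglSurfaceB were created from the two TextureViews' SurfaceTextures with eglCreateWindowSurface(), and drawFrame() is your own GLES drawing routine):
EGL14.eglMakeCurrent(eglDisplay, eglSurfaceA, eglSurfaceA, eglContext);
drawFrame(); // draw the decoded video texture for the first view
EGL14.eglSwapBuffers(eglDisplay, eglSurfaceA);

EGL14.eglMakeCurrent(eglDisplay, eglSurfaceB, eglSurfaceB, eglContext);
drawFrame(); // same frame again for the second view
EGL14.eglSwapBuffers(eglDisplay, eglSurfaceB);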
In any event, you should be creating your own SurfaceTexture to receive the video, not using one that comes from a TextureView -- see e.g. this bit of code.
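A minimal sketch of that setup (the texture id must be generated inside a current EGL context; format is your MediaFormat, and error handling is elided):
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
surfaceTexture.setOnFrameAvailableListener(st -> { /* schedule a render pass */ });
Surface decoderSurface = new Surface(surfaceTexture);

MediaCodec decoder = MediaCodec.createDecoderByType("video/avc"); // throws IOException
decoder.configure(format, decoderSurface, null, 0);
decoder.start();
// In the render loop: surfaceTexture.updateTexImage(), then draw the external
// texture wherever you need it (once per output surface).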

Blurred panel over a video player

I have a special design requirement for the app I'm developing right now.
Right now, I have a third-party private video library which plays a video stream. The design of this screen includes a translucent panel overlaid on top of the video, blurring the portion of the video that lies behind.
Normally in order to blur the background, you are supposed to take a screenshot of the view behind, blur it and use it as an image for the foreground view.
In this case, the video keeps on playing, so the blurred image changes every frame. How would you implement this then?
A possible solution would be to create a thread that takes screenshots, crops them, and sets them as the background. Even better if that view is a SurfaceView, I guess. But I'm wondering what the best approach would be in this case. Would a thread that is continually taking screenshots create a huge performance impact? Is it possible to feed a SurfaceView's buffer with these images?
Thanks!
A SurfaceView surface is a consumer of graphics buffers. You can't have two producers for one consumer, which means you can't send the video to it and draw on it at the same time.
You can have multiple layers; the SurfaceView surface is on a separate layer behind the View UI layer. So you could play the video to the SurfaceView's surface, and draw your blur rectangle on the SurfaceView's view. (Normally the SurfaceView's view is completely transparent, and is just used as a place-holder for layout purposes.)
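As an illustration of that layering, a hypothetical overlay could be as simple as a custom View placed above the SurfaceView in the layout (a real implementation would draw a blurred image rather than a flat translucent panel):
public class PanelOverlayView extends View {
    private final Paint paint = new Paint();

    public PanelOverlayView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setColor(0x80FFFFFF); // translucent panel colour
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // The video keeps playing on the SurfaceView layer behind this one.
        canvas.drawRect(0, getHeight() / 2f, getWidth(), getHeight(), paint);
    }
}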
Another option would be to render the video frame to a SurfaceTexture. You would then render that texture to the SurfaceView surface with GLES, and render the blur rectangle on top. You can find an example of treating live camera input as a GLES texture in Grafika ("texture from camera" activity). This has the additional advantage that, since you're not interacting with the View system -- the SurfaceView surface is composited by the system, not the app -- you can do it all on an independent thread.
In any event, rendering, grabbing a screenshot, and re-rendering is going to be slower than the options described above.
For more details about why things work the way they do, see the Android System-Level Graphics architecture doc.

How to record webview activity screen using Android MediaCodec?

I have the task to record user activity in a webview, in other words I need to create an mp4 video file while the user navigates in a webview. Pretty challenging :)
I found that Android 4.3 expanded MediaCodec to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering.
I even found an example where you can record a game written in OpenGL: http://bigflake.com/mediacodec/
My question is: how could I record WebView activity? I assume that if I could draw the WebView content to an OpenGL texture, then everything would be fine. But I don't know how to do this.
Can anybody help me on this?
Why not try WebView.onDraw first, instead of using OpenGL? The latter approach may be more complicated, and not supported by all devices.
Once you are able to obtain the screenshots, you can then create the video (creating a video from an image sequence on Android), a separate task where MediaCodec should help.
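A rough sketch of the onDraw-based capture (run on the UI thread once the WebView is laid out; the frame Bitmap is then handed to whatever encoding step you use):
Bitmap frame = Bitmap.createBitmap(webView.getWidth(), webView.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(frame);
webView.draw(canvas); // renders the WebView's current content into the Bitmap
// feed 'frame' to the video-encoding step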
"I assume that If I could draw the webview content to opengl texture".
It is possible.
The SurfaceTexture is basically your entry point into the OpenGL layer. It is initialized with an OpenGL texture id, and performs all of its rendering onto that texture.
The steps to render your view to OpenGL (a minimal sketch follows the list):
1. Initialize an OpenGL texture.
2. Within an OpenGL context, construct a SurfaceTexture with the texture id. Use SurfaceTexture.setDefaultBufferSize(int width, int height) to make sure you have enough space on the texture for the view to render.
3. Create a Surface constructed with the above SurfaceTexture.
4. Within the View's onDraw, use the Canvas returned by Surface.lockCanvas to do the view drawing. You can obviously do this with any View, and not just WebView. Plus Canvas has a whole bunch of drawing methods, allowing you to do funky, funky things.
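A minimal sketch of steps 1-4 (assumes a current GL context; viewWidth, viewHeight, and view are placeholders):
// Steps 1-3: texture -> SurfaceTexture -> Surface.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
surfaceTexture.setDefaultBufferSize(viewWidth, viewHeight);
Surface surface = new Surface(surfaceTexture);

// Step 4: draw the View into the Surface instead of its normal parent.
Canvas canvas = surface.lockCanvas(null);
view.draw(canvas); // e.g. the WebView
surface.unlockCanvasAndPost(canvas);

// Later, on the GL thread, surfaceTexture.updateTexImage() makes the drawn
// frame available as the external texture.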
The source code can be found here: https://github.com/ArtemBogush/AndroidViewToGLRendering And you can find some explanations here: http://www.felixjones.co.uk/neo%20website/Android_View/
