How to capture a screenshot programmatically with Lollipop - android

The Media Projection API is new in Lollipop, and allows an app to capture the device's screen in real time, e.g. for streaming to video. I was hoping it could also be used to capture a single still screenshot, but so far I have not been successful. Of course, the first frame of a captured video could work, but I'm aiming for a perfect, lossless screenshot matching the pixel resolution of the device. A still from a captured video cannot provide that.
I've tried a lot of things, but the closest I came to a solution was to first launch an invisible activity. This activity then follows the API example for starting screen capture, which can include asking the user's permission. Once screen capture is enabled, the screen image is live in a SurfaceView. However, I cannot find a way to capture a bitmap from the SurfaceView. There are lots of questions and discussions about this, but no solutions seem to work, and there is some evidence that it is impossible.
Any ideas?

You can't capture the contents of a SurfaceView.
What you can do is replace the SurfaceView with a Surface object that has an in-process consumer, such as SurfaceTexture. In the android-ScreenCapture example linked from the question, mMediaProjection.createVirtualDisplay() wants a Surface to send images to. If you create a SurfaceTexture, and use that to construct a Surface, the images generated by the MediaProjection will be available from an OpenGL ES texture.
If GLES isn't your thing, the ImageReader class can be used. It also provides a Surface that can be passed to createVirtualDisplay(), and it makes the pixels easy to access from software.
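A minimal sketch of the ImageReader route, assuming you have already completed the permission flow and hold a valid MediaProjection plus the display's size and density:

    import android.graphics.Bitmap;
    import android.graphics.PixelFormat;
    import android.hardware.display.DisplayManager;
    import android.media.Image;
    import android.media.ImageReader;
    import android.media.projection.MediaProjection;
    import android.os.Handler;

    void captureOneFrame(MediaProjection projection, int width, int height,
                         int dpi, Handler handler) {
        final ImageReader reader =
                ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
        projection.createVirtualDisplay("screenshot", width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                reader.getSurface(), null, handler);
        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image == null) return;
            // RGBA_8888 has one plane; the row stride may be wider than the image.
            Image.Plane plane = image.getPlanes()[0];
            int pixelStride = plane.getPixelStride();
            int rowPadding = plane.getRowStride() - pixelStride * width;
            Bitmap bitmap = Bitmap.createBitmap(width + rowPadding / pixelStride,
                    height, Bitmap.Config.ARGB_8888);
            bitmap.copyPixelsFromBuffer(plane.getBuffer());
            image.close();
            // Crop the padding column off the right edge if rowPadding != 0,
            // then save or display the bitmap.
        }, handler);
    }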

Related

(Camera2 API) Can I run 2 ImageReader instances of different configs at the same time?

I am modifying the TF Lite sample app for object detection (Java). It has a live video feed that draws boxes around common objects, consuming ImageReader frames at 640×480.
I want to use these bounding boxes to crop the detected items, but I want to crop them from a high-quality image. I think the 5T is capable of 4K.
So, is it possible to run 2 instances of ImageReader: one low-quality video feed (used by TF Lite) and one for capturing full-quality still images? I also can't pin the second one to any Surface for user preview; the picture has to be captured in the background.
This Medium article (https://link.medium.com/2oaIYoY58db) says: "Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this is called the active configuration."
I'm new to Android, so I couldn't make much sense of this.
Thanks for your time!
PS: as far as I know, this isn't possible with CameraX yet.
As the cited article explains, you can use a lower-resolution preview stream and periodically capture higher-resolution still images. Depending on the hardware, switching configurations may take noticeable time or be nearly instantaneous.
In your case, I would run a preview capture session at maximum resolution and shrink (resize) the frames to feed into TFLite as needed, as sketched below.
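A rough sketch of the "shrink to detect, crop from full size" idea. It assumes "full" is a Bitmap already converted from the camera's full-resolution output (YUV-to-Bitmap conversion omitted), and "detector" is a hypothetical TFLite wrapper reporting boxes in 640×480 coordinates:

    import android.graphics.Bitmap;
    import android.graphics.Rect;
    import android.graphics.RectF;
    import java.util.List;

    void detectAndCrop(Bitmap full) {
        // Downscale the full-resolution frame for the detector.
        Bitmap small = Bitmap.createScaledBitmap(full, 640, 480, true);
        List<RectF> boxes = detector.detect(small);  // hypothetical wrapper
        float sx = full.getWidth() / 640f;
        float sy = full.getHeight() / 480f;
        for (RectF box : boxes) {
            // Scale each box back up and crop from the full-resolution frame.
            Rect crop = new Rect(Math.round(box.left * sx), Math.round(box.top * sy),
                    Math.round(box.right * sx), Math.round(box.bottom * sy));
            Bitmap item = Bitmap.createBitmap(full, crop.left, crop.top,
                    crop.width(), crop.height());
            // "item" is the high-quality crop of the detected object.
        }
    }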

Android - Capturing video frame from GLSurfaceView and SurfaceView

To play media using Android MediaPlayer or MediaCodec, most of the time you use a SurfaceView or a GLSurfaceView. (There is another way to achieve this using TextureView, but let's not talk about it here, since it's a somewhat different type of view.)
And as far as I know, capturing a video frame from a SurfaceView is not possible: you don't have access to the hardware overlay.
How about GLSurfaceView? Since we have access to the YUV pixels (we do, right?), is it possible?
Can anyone point me to sample code that does this?
I don't think the explanation below can work, because it assumes the color format is RGBA, while in the case above I think it's YUV:
When using GLES20.glReadPixels on Android, the data it returns is not exactly the same as the live preview
Thank you and have a great day.
You are correct in that you cannot read back from a Surface. It's the producer side of a producer-consumer pair. GLSurfaceView is just a bunch of code wrapped around a SurfaceView that (in theory) makes working with GLES easier.
So you have to send the preview somewhere else. One approach is to send it to a SurfaceTexture, which converts every frame sent to its Surface into a GLES texture. The texture can then be rendered twice, once for display and once to an offscreen pbuffer that can be saved as a bitmap (just like this question).
I'm not sure why you don't want to talk about TextureView. It's a View that uses SurfaceTexture under the hood, and it provides a getBitmap() call that does exactly what you want.
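If TextureView is an option, grabbing the current frame is one call. A minimal sketch, inside an Activity, where R.id.texture_view is a placeholder for whatever your layout defines:

    // Grab the current video frame from a TextureView.
    TextureView textureView = (TextureView) findViewById(R.id.texture_view);
    if (textureView.isAvailable()) {
        Bitmap frame = textureView.getBitmap();  // current frame, view-sized
        // ... save or process "frame" ...
    }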

Android camera preview on SurfaceTexture

This is my scenario: I am trying to take a picture with the front camera when someone enters an incorrect password on the lock screen. Basically, I need to be able to take a picture with the front camera without showing a preview.
After much googling, I figured out that the way to do it is with OpenGL and a SurfaceTexture: you direct the camera preview to a SurfaceTexture, and later extract the picture from that texture somehow. I found this out from the following resources:
https://stackoverflow.com/a/10776349/902572 (suggestion 1)
http://www.freelancer.com/projects/Android-opengl/Android-OpenGL-App-Access-Raw.html, which is the same as (1)
https://groups.google.com/forum/#!topic/android-developers/U5RXFGpAHPE (See Romain's post on 12/22/11)
I understand what is to be done, but I have been unable to put it into working code, as I am new to OpenGL.
The CameraToMpegTest example has most of what you need, though it goes well beyond your use case (it's recording a series of preview frames as a video).
The rest of what you need is in ExtractMpegFramesTest. In particular, you want to render to an offscreen pbuffer (rather than a MediaCodec encoder input surface), and you can save the pbuffer contents as a PNG file using saveFrame().
The above are written as small "headless" test cases. You can see similar code in a full app in Grafika.
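For reference, the core of saveFrame() looks roughly like this (glossing over the vertical flip between GL and Bitmap row order, which the test handles while rendering):

    import android.graphics.Bitmap;
    import android.opengl.GLES20;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Read the current EGL surface (e.g. the offscreen pbuffer) with
    // glReadPixels and compress it to PNG.
    void saveFrame(String filename, int width, int height) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        buf.rewind();
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);
        try (FileOutputStream out = new FileOutputStream(filename)) {
            bmp.compress(Bitmap.CompressFormat.PNG, 90, out);
        }
        bmp.recycle();
    }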

How to pass Camera preview to the Surface created by MediaCodec.createInputSurface()?

Ideally I'd like to accomplish two goals:
Pass the Camera preview data to a MediaCodec encoder via a Surface. I can create the Surface using MediaCodec.createInputSurface(), but Camera.setPreviewDisplay() takes a SurfaceHolder, not a Surface.
In addition to passing the Camera preview data to the encoder, I'd also like to display the preview on-screen (so the user can actually see what they are encoding). If the encoder wasn't involved then I'd use a SurfaceView, but that doesn't appear to work in this scenario since SurfaceView creates its own Surface and I think I need to use the one created by MediaCodec.
I've searched online quite a bit for a solution and haven't found one. Some examples on bigflake.com seem like a step in the right direction but they take an approach that adds a bunch of EGL/SurfaceTexture overhead that I'd like to avoid. I'm hoping there is a simpler example or solution where I can get the Camera and MediaCodec talking more directly without involving EGL or textures.
As of Android 4.3 (API 18), the bigflake CameraToMpegTest approach is the correct way.
The EGL/SurfaceTexture overhead is currently unavoidable, especially for what you want to do in goal #2. The idea is:
Configure the Camera to send the output to a SurfaceTexture. This makes the Camera output available to GLES as an "external texture".
Render the SurfaceTexture to the Surface returned by MediaCodec#createInputSurface(). That feeds the video encoder.
Render the SurfaceTexture a second time, to a GLSurfaceView. That puts it on the display for real-time preview.
The only data copying that happens is performed by the GLES driver, so you're doing hardware-accelerated blits, which will be fast.
The only tricky bit is you want the external texture to be available to two different EGL contexts (one for the MediaCodec, one for the GLSurfaceView). You can see an example of creating a shared context in the "Android Breakout game recorder patch" sample on bigflake -- it renders the game twice, once to the screen, once to a MediaCodec encoder.
Update: This is implemented in Grafika ("Show + capture camera").
Update: The multi-context approach in "show + capture camera" is somewhat flawed. The "continuous capture" Activity uses a plain SurfaceView, and is able to do both screen rendering and video recording with a single EGL context. This is the recommended approach.
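For a sense of the single-context version, here is a compact sketch of the per-frame work, modeled on Grafika's "continuous capture". WindowSurface (mDisplaySurface, mEncoderSurface) and FullFrameRect (mFullFrameBlit) are Grafika helper classes, not framework APIs:

    private void drawFrame() {
        mCameraTexture.updateTexImage();            // latch latest camera frame
        mCameraTexture.getTransformMatrix(mTmpMatrix);

        mDisplaySurface.makeCurrent();              // draw to the screen
        mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
        mDisplaySurface.swapBuffers();

        mEncoderSurface.makeCurrent();              // draw to the video encoder
        mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
        mEncoderSurface.setPresentationTime(mCameraTexture.getTimestamp());
        mEncoderSurface.swapBuffers();
    }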

Android Capture Surface Flinger

Many devices do not store the final display data in the framebuffer, so screen-capture methods that read the framebuffer will not work on those devices.
How can I get the final composited data from SurfaceFlinger?
If we can capture from SurfaceFlinger, we can retrieve the video and camera preview even when there is no framebuffer.
You don't want or need the final composited video data. To record the camera preview, you can just feed it into a MediaCodec (requires Android 4.1, API 16). In Android 4.3 (API 18) this got significantly easier with some tweaks to MediaCodec and the introduction of the MediaMuxer class. See this page for examples, especially CameraToMpegTest.
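For a sense of what that looks like, a minimal sketch of the API 18 setup (the drain loop, track addition, and release are omitted; the path and encoder parameters are placeholders):

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.view.Surface;
    import java.io.IOException;

    // An AVC encoder fed through a Surface, with a MediaMuxer for MP4 output.
    Surface startEncoder() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 6000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface input = encoder.createInputSurface(); // point the camera preview here
        encoder.start();

        MediaMuxer muxer = new MediaMuxer("/sdcard/preview.mp4",
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        // Drain encoder output into the muxer on another thread (omitted).
        return input;
    }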
It is possible to capture the composited screen; for example, the system UI does it to grab screenshots for the recent apps menu, and DDMS/ADT can capture screen shots for debugging. You need the appropriate permission to do this, though, and normal apps don't have it. It's restricted to make certain phishing schemes harder.
In no event are you able to capture DRM-protected video. Not even SurfaceFlinger gets to see that.
From the shell, you can use the screencap command (see source code).
