Many devices do not store the final display data in a framebuffer, so framebuffer-based screen capture methods will not work on those devices.
I want to know: how can I get the final composition data from SurfaceFlinger?
If we can capture from SurfaceFlinger, we can retrieve the video and camera preview even when there is no framebuffer.
You don't want or need the final composited video data. To record the camera preview, you can just feed it into a MediaCodec (requires Android 4.1, API 16). In Android 4.3 (API 18) this got significantly easier with some tweaks to MediaCodec and the introduction of the MediaMuxer class. See this page for examples, especially CameraToMpegTest.
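For a rough idea of the API 18 setup, here is a minimal sketch of the encoder/muxer wiring. The GLES code that actually renders camera frames into the encoder's input surface is omitted (CameraToMpegTest shows the full pipeline), and the class and field names are just illustrative:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.view.Surface;
    import java.io.IOException;

    public class RecorderSketch {
        MediaCodec encoder;
        MediaMuxer muxer;
        Surface inputSurface;   // render camera frames into this via GLES

        void prepare(String outputPath, int width, int height) throws IOException {
            MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

            encoder = MediaCodec.createEncoderByType("video/avc");
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            inputSurface = encoder.createInputSurface();   // API 18
            encoder.start();

            // MediaMuxer (API 18) writes the encoded frames into an .mp4 file.
            muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            // Drain the encoder with dequeueOutputBuffer() and feed the output to
            // muxer.writeSampleData(); start the muxer once the encoder reports
            // INFO_OUTPUT_FORMAT_CHANGED. See CameraToMpegTest for the full loop.
        }
    }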
It is possible to capture the composited screen; for example, the system UI does it to grab screenshots for the recent apps menu, and DDMS/ADT can capture screen shots for debugging. You need the appropriate permission to do this, though, and normal apps don't have it. It's restricted to make certain phishing schemes harder.
In no event are you able to capture DRM-protected video. Not even SurfaceFlinger gets to see that.
From the shell, you can use the screencap command (see source code).
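For example, to grab a lossless PNG over adb:

    adb shell screencap -p /sdcard/screenshot.png
    adb pull /sdcard/screenshot.png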
I am modifying (Java) the TF Lite sample app for object detection. It has a live video feed that shows boxes around common objects. It takes in ImageReader frames at 640×480.
I want to use these bounds to crop the items, but I want to crop them from a high-quality image. I think the 5T is capable of 4K.
So, is it possible to run 2 instances of ImageReader: one low-quality video feed (used by TF Lite), and one for capturing full-quality still images? I also can't tie the 2nd one to any Surface for user preview; the picture has to be captured in the background.
In this medium article (https://link.medium.com/2oaIYoY58db) it says "Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this is called the active configuration."
I'm new to Android, so I couldn't make much sense of this.
Thanks for your time!
PS: as far as I know, this isn't possible with CameraX, yet.
As the cited article explains, you can use a lower-resolution preview stream and periodically capture higher-res still images. Depending on the hardware, this 'switch' may take time or be really quick.
In your case, I would run a preview capture session at maximum resolution and shrink (resize) the frames to feed into TFLite as needed.
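A minimal sketch of that approach, assuming you already have each full-resolution frame as a Bitmap (class and constant names here are illustrative): shrink a copy for the detector, then map the detection box back to the full-resolution frame for the crop:

    import android.graphics.Bitmap;
    import android.graphics.RectF;

    public class FrameScaler {
        // Detector input size assumed to be 640x480, per the question.
        static final int DETECT_W = 640;
        static final int DETECT_H = 480;

        // Shrink the full-resolution frame before handing it to TFLite.
        static Bitmap toDetectorInput(Bitmap fullRes) {
            return Bitmap.createScaledBitmap(fullRes, DETECT_W, DETECT_H, true);
        }

        // Map a detection box from detector coordinates back to the
        // full-resolution frame, then crop the high-quality region.
        static Bitmap cropDetection(Bitmap fullRes, RectF box) {
            float sx = (float) fullRes.getWidth() / DETECT_W;
            float sy = (float) fullRes.getHeight() / DETECT_H;
            int left = Math.max(0, (int) (box.left * sx));
            int top = Math.max(0, (int) (box.top * sy));
            int w = Math.min(fullRes.getWidth() - left, (int) (box.width() * sx));
            int h = Math.min(fullRes.getHeight() - top, (int) (box.height() * sy));
            return Bitmap.createBitmap(fullRes, left, top, w, h);
        }
    }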
The Media Projection package is new in Lollipop, and allows an app to capture the device's screen in real time for streaming to video. I was hoping this could also be used to capture a single still screenshot, but so far I have not been successful. Of course, the first frame of a captured video could work, but I'm aiming for a perfect, lossless screenshot matching the pixel resolution of the device. A still from a captured video cannot provide that.
I've tried a lot of things, but the closest I came to a solution was to first launch an invisible activity. This activity then follows the API example for starting screen capture, which can include asking the user's permission. Once screen capture is enabled, the screen image is live in a SurfaceView. However, I cannot find a way to capture a bitmap from the SurfaceView. There are lots of questions and discussions about this, but no solutions seem to work, and there is some evidence that it is impossible.
Any ideas?
You can't capture the contents of a SurfaceView.
What you can do is replace the SurfaceView with a Surface object that has an in-process consumer, such as SurfaceTexture. In the android-ScreenCapture example linked from the question, mMediaProjection.createVirtualDisplay() wants a Surface to send images to. If you create a SurfaceTexture, and use that to construct a Surface, the images generated by the MediaProjection will be available from an OpenGL ES texture.
If GLES isn't your thing, the ImageReader class can be used. It also provides a Surface that can be passed to createVirtualDisplay(), but it's easier to access the pixels from software.
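As a rough sketch of the ImageReader route (names and the one-shot polling are illustrative; real code should wait on an OnImageAvailableListener):

    import android.graphics.Bitmap;
    import android.graphics.PixelFormat;
    import android.hardware.display.DisplayManager;
    import android.media.Image;
    import android.media.ImageReader;
    import android.media.projection.MediaProjection;
    import java.nio.ByteBuffer;

    public class ScreenGrabber {
        // "projection" obtained from MediaProjectionManager after user consent;
        // width/height/dpi taken from the current display metrics.
        Bitmap grabOneFrame(MediaProjection projection, int width, int height, int dpi) {
            ImageReader reader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
            projection.createVirtualDisplay("grab", width, height, dpi,
                    DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                    reader.getSurface(), null, null);

            // In real code, wait for OnImageAvailableListener instead of polling.
            Image image = reader.acquireLatestImage();
            if (image == null) return null;

            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer buffer = plane.getBuffer();
            // The row stride may exceed width * 4; allocate accordingly, then crop.
            int rowPadding = plane.getRowStride() - plane.getPixelStride() * width;
            Bitmap padded = Bitmap.createBitmap(width + rowPadding / plane.getPixelStride(),
                    height, Bitmap.Config.ARGB_8888);
            padded.copyPixelsFromBuffer(buffer);
            image.close();
            return Bitmap.createBitmap(padded, 0, 0, width, height);
        }
    }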
One of the features of Android 4.4 (Kit Kat) is that it provides a way for developers to capture an MP4 video of the screen using adb shell screenrecord. Does Android 4.4 provide any new API's for applications to capture and encode video, or does it just provide the screenrecord utility/binary?
I ask because I would like to do some screen capture work in an application I'm writing. Before anyone asks, yes, the application would have framebuffer access. However, the only Android-provided capturing/encoding API that I've seen (MediaRecorder) seems to be limited to recording video from the device's camera.
The only screen capture solutions I've seen mentioned on StackOverflow seem to revolve around taking screenshots at a regular interval or using JNI to encode the framebuffer with a ported version of ffmpeg. Are there more elegant, native solutions?
The screenrecord utility uses private APIs, so you can't do exactly what it does.
The way it works is to create a virtual display, route the virtual display to a video encoder, and then save the output to a file. You can do essentially the same thing, but because you're not running as the "shell" user you'd only be able to see the layers you created. The relevant APIs are designed around creating a Presentation, which may not be exactly what you want.
See the source code for a CTS test with a trivial example (just uses an ImageView).
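In outline, the app-visible version of that pipeline looks something like this sketch (API 19+; the encoder setup via MediaCodec.createInputSurface() is assumed to be done as in EncodeAndMuxTest, and names are illustrative):

    import android.app.Presentation;
    import android.content.Context;
    import android.hardware.display.DisplayManager;
    import android.hardware.display.VirtualDisplay;
    import android.view.Surface;
    import android.widget.ImageView;

    public class AppRecorder {
        // encoderSurface would come from MediaCodec.createInputSurface() (API 18).
        void record(Context context, Surface encoderSurface, int width, int height, int dpi) {
            DisplayManager dm = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
            VirtualDisplay vd = dm.createVirtualDisplay("app-recording",
                    width, height, dpi, encoderSurface,
                    DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);   // API 19

            // Show your own UI on the virtual display; only layers you create
            // are visible to a non-shell app.
            Presentation presentation = new Presentation(context, vd.getDisplay());
            presentation.setContentView(new ImageView(context));  // as in the CTS test
            presentation.show();
        }
    }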
Of course, if you happen to be a GLES application, you can just record the output directly (see e.g. EncodeAndMuxTest and the "Record GL app" activity in Grafika).
Well, AFAIK, there is no API support equivalent to capturing what's going on on the screen.
So I read here that it's not possible to capture preview frames without a valid Surface. However, I saw that the IP Webcam app can do this, and I want to know how.
Can that app do it on versions below v2.3? If so, how?
Furthermore, the bug isn't marked as fixed, so I'm wondering if the restriction was ever lifted.
Also, if I don't want to save the video stream from the Preview, but rather stream it over the network, is that possible with MediaRecorder? All the examples I see save to a file, but I reckon the IP Webcam app uses the Preview. Or maybe it writes to a pipe?
When using Android, you must have a valid Surface object to take pictures or video; the preview also requires a Surface. I would guess that IP Webcam uses native calls (C or C++) to the layers below Dalvik, bypassing the Java layer. That way it can access the hardware more directly. If you have the skills, you should be able to do this using the Android NDK.
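On the pipe idea from the question: MediaRecorder.setOutputFile() does accept a FileDescriptor, so one commonly suggested approach is to hand it the write end of a pipe and read the encoded stream from the other end. A minimal sketch, assuming the recorder is already configured (video source, output format, encoders) before this is called; note that the MP4 container is awkward to write to a non-seekable pipe, so streaming setups usually pick a streamable format or repackage the data:

    import android.media.MediaRecorder;
    import android.os.ParcelFileDescriptor;
    import java.io.IOException;

    public class PipeRecorder {
        // Returns the read end of the pipe; a background thread can read the
        // encoded stream from it and push it over the network.
        ParcelFileDescriptor startToPipe(MediaRecorder recorder) throws IOException {
            ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
            recorder.setOutputFile(pipe[1].getFileDescriptor());  // write end
            recorder.prepare();
            recorder.start();
            return pipe[0];  // read end
        }
    }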
I am trying to display a filtered camera preview, using the onPreviewFrame() callback.
The problem is that when I remove this line:
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
to hide the preview, the app crashes. The log reads:
08-19 15:57:51.042: ERROR/CameraService(59): registerBuffers failed with status -38
What does this mean? Is this documented anywhere?
I am using the CameraPreview from the SDK APIDemos: http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/CameraPreview.html
SURFACE_TYPE_PUSH_BUFFERS allocates several buffers for the SurfaceView. Components lock (fill with data) and push (display) these buffers deep inside the OS code. In particular, OpenMAX (the camera hardware device interface) uses "graphic buffers" = "push buffers" to fill and display data. To be specific, the camera hardware can fill a push buffer directly and the graphics hardware can display a push buffer directly (they share these buffers). Conclusion: the OS forces you to create a SurfaceView with push buffers so that it can use those buffers for the camera device.
In Google's Camera guide you can find a brief mention of this. According to the guide, SURFACE_TYPE_PUSH_BUFFERS is a deprecated setting that is still required on pre-3.0 devices.
Look in the code example in the "Creating a preview class" section, at the bottom of the constructor it says:
// deprecated setting, but required on Android versions prior to 3.0
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
It's a good idea to read the camera guide carefully if you haven't done so, it contains some important stuff that's not in the API documentation of the Camera classes.
What does this mean?
It means that you did not properly configure the SurfaceView via the SurfaceHolder.
Is this documented anywhere?
What is "this"? Here is the documentation for SurfaceView, SurfaceHolder, SURFACE_TYPE_PUSH_BUFFERS, and Camera.
If your real question is "where is it documented that Camera requires SURFACE_TYPE_PUSH_BUFFERS", I suspect that is undocumented. You use SURFACE_TYPE_PUSH_BUFFERS for camera preview and video playback, and perhaps other situations as well.
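For reference, a minimal pre-3.0 preview setup along the lines of the ApiDemos sample looks roughly like this (an untested sketch; names are illustrative):

    import android.content.Context;
    import android.hardware.Camera;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;
    import java.io.IOException;

    public class Preview extends SurfaceView implements SurfaceHolder.Callback {
        Camera mCamera;
        SurfaceHolder mHolder;

        public Preview(Context context) {
            super(context);
            mHolder = getHolder();
            mHolder.addCallback(this);
            // Deprecated as of 3.0, but required on older devices so the
            // camera hardware can fill the preview buffers directly.
            mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        }

        public void surfaceCreated(SurfaceHolder holder) {
            mCamera = Camera.open();
            try {
                mCamera.setPreviewDisplay(holder);
            } catch (IOException e) {
                mCamera.release();
                mCamera = null;
            }
        }

        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
            if (mCamera != null) mCamera.startPreview();
        }

        public void surfaceDestroyed(SurfaceHolder holder) {
            if (mCamera != null) {
                mCamera.stopPreview();
                mCamera.release();
                mCamera = null;
            }
        }
    }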