Is it possible to use SurfaceComposerClient to get screenshots, the way MediaCodec does with createInputSurface()?
I can't use MediaCodec for that because I need raw video, not encoded data.
Since 4.3, it seems that ScreenshotClient can't take multiple screenshots.
Yes, assuming you're running as shell or root, and you don't mind using non-public native APIs (i.e. you don't care if your app breaks every time a new version of the OS rolls out).
The canonical example is screenrecord, introduced in Android 4.4. It creates a virtual display and directs the output to a Surface. For normal operation a MediaCodec input surface receives the output. For the "bugreport" mode introduced in screenrecord v1.1, the output goes to a GLConsumer (roughly equivalent to a SurfaceTexture), which is rendered to a Surface with overlaid text.
There's a bug in Android 4.3 (see issues 59649 or 60638 on the Android Open Source Project Issue Tracker) which means ScreenshotClient can't be used to take more than one screenshot.
I am trying to mirror-cast from my own app to a Fire TV Stick that is connected to the television. It has an option to mirror the display. My phone can connect to the Fire TV Stick this way, but I would like to mirror something with a smaller resolution, and even if I change my phone's resolution using adb, I think it sends the native resolution anyway.
I looked into MediaRouter and MediaRouteProvider. I also downloaded the Media Router sample whose snippets are used in the documentation. The sample ran but didn't work. This API is super complex and has so many things in it. I am not sure how to build a simple app that casts video (and later the phone's screen) to another device, either to the Amazon Fire TV Stick's display mirroring or at least to a client app I will also write.
I couldn't find compact enough samples to do what I want. Do you have any idea where there is a sample that works and is not a massive amount of code?
I couldn't make it work following the documentation.
Instead of finding something in the API to do the mirroring for me, I was able to just read pixel data from the MediaProjection and VirtualDisplay and send it over sockets.
It wasn't easy: I had to take the GLES11Ext.GL_TEXTURE_EXTERNAL_OES texture from the SurfaceTexture, render it into my own offscreen GL_TEXTURE_2D, and then read that back using glReadPixels with the texture attached to a framebuffer.
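In case it helps anyone, the readback step looked roughly like this (a minimal sketch, assuming an EGL context is current and the OES frame has already been rendered into the offscreen texture; texId, width, and height are placeholders):

    // Attach the offscreen GL_TEXTURE_2D to a framebuffer so its contents
    // can be read back with glReadPixels (android.opengl.GLES20).
    int[] fbo = new int[1];
    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
            GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, texId, 0);

    // Raw RGBA pixels; this is the buffer that gets written to the socket.
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE, pixels);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);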
What I'm trying to achieve: access both front and back cameras at the same time.
What I've researched: I know the Android camera API doesn't support using multiple instances of Camera, and you have to release one camera before using the other. I've read dozens of questions about this, and I know it's possible on some devices (like the Samsung Galaxy S4, or other newer Samsung devices).
I've also found out that it's possible to have access to both of them in Android KitKat on SOME devices.
I also know that on API >= 21, using the camera2 API, it's possible to access both of them at the same time because it's thread-safe.
What I've got so far: an implementation that accesses the cameras one at a time in order to provide picture-in-picture.
I know it's not possible to implement simultaneous dual cameras on every device; I just want a way to make it available on the devices that support it.
How can I test to see if the device is capable of accessing both of them?
I've also searched for a library that allows such a thing, but I didn't find anything. Is there such a library?
I would like to make this feature available on as many devices as possible; on the others, I'll leave the feature in its current state (one camera at a time).
Can anyone please help me, at least with some advice?
Thanks!
The Android camera APIs generally allow multiple cameras to be used at the same time, but most devices do not have enough hardware resources to support that in practice - for example, there's often only one camera image processor shared by both cameras.
There's no query in the Android APIs that will tell you up front whether you can use multiple cameras at the same time.
The only way to tell is to try to open a second camera when you already have one open. If you can open the second camera, then you can do picture-in-picture, etc. If you get an exception trying to open the second camera, then that particular device doesn't support having both cameras open.
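A minimal probe along those lines, using the legacy android.hardware.Camera API (treating camera ids 0 and 1 as back and front is an assumption; map ids with Camera.getCameraInfo() in real code):

    // Try to open a second camera while still holding the first one.
    private static boolean supportsDualCameras() {
        Camera back = null;
        Camera front = null;
        try {
            back = Camera.open(0);   // assumed back camera
            front = Camera.open(1);  // assumed front camera
            return true;             // both opened: simultaneous use works
        } catch (RuntimeException e) {
            return false;            // second open failed on this device
        } finally {
            if (front != null) front.release();
            if (back != null) back.release();
        }
    }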
It is possible using the Android Camera2 API, but as indicated above, most devices don't have the hardware support. If you have a Nexus 5X, Nexus 6, or Nexus 6P it will work, and you can test with this BothCameras app. I've implemented blitting to allow video recording as well (in addition to still pictures) using the hardware H.264 encoder.
You cannot access both cameras on all Android phones due to hardware limitations. The best alternative is to use the two cameras one at a time: you can keep a single Camera object and switch the camera facing to take the second photo.
I have done this in one of my applications.
https://play.google.com/store/apps/details?id=com.ushaapps.bothie
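The switch itself is just a release-then-open, along these lines (a sketch against the legacy android.hardware.Camera API; the helper name is mine):

    // Only one camera may be held at a time, so release the current one
    // before opening the other.
    private Camera switchCamera(Camera current, int newCameraId) {
        if (current != null) {
            current.stopPreview();
            current.release();
        }
        return Camera.open(newCameraId);
    }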
I've decided to mention that in some cases just opening two cameras with the Camera2 API is not enough to determine support.
Some devices don't throw an error during opening: the second camera opens correctly, but the first one starts invoking the onCaptureFailed callback.
So the most accurate check is to start both cameras, wait for frames from each of them, and verify that there are no capture-failure errors.
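A sketch of what that check can hook into, assuming both CameraDevices are already open with repeating requests running (session setup omitted; android.hardware.camera2 APIs):

    // Attach one of these to each camera's repeating request and only
    // declare dual-camera support once both cameras have delivered frames.
    CameraCaptureSession.CaptureCallback probe =
            new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session,
                CaptureRequest request, TotalCaptureResult result) {
            // A real frame arrived from this camera.
        }

        @Override
        public void onCaptureFailed(CameraCaptureSession session,
                CaptureRequest request, CaptureFailure failure) {
            // This camera is being starved; treat dual mode as unsupported.
        }
    };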
One of the features of Android 4.4 (KitKat) is that it provides a way for developers to capture an MP4 video of the screen using adb shell screenrecord. Does Android 4.4 provide any new APIs for applications to capture and encode video, or does it just provide the screenrecord utility/binary?
I ask because I would like to do some screen capture work in an application I'm writing. Before anyone asks, yes, the application would have framebuffer access. However, the only Android-provided capturing/encoding API that I've seen (MediaRecorder) seems to be limited to recording video from the device's camera.
The only screen-capture solutions I've seen mentioned on Stack Overflow seem to revolve around taking screenshots at a regular interval or using JNI to encode the framebuffer with a ported version of ffmpeg. Are there more elegant, native solutions?
The screenrecord utility uses private APIs, so you can't do exactly what it does.
The way it works is to create a virtual display, route the virtual display to a video encoder, and then save the output to a file. You can do essentially the same thing, but because you're not running as the "shell" user you'd only be able to see the layers you created. The relevant APIs are designed around creating a Presentation, which may not be exactly what you want.
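A rough sketch of that public-API path (API 19+): an app-owned virtual display feeding a video encoder's input surface. WIDTH, HEIGHT, and DPI are placeholders, and error handling and the encoder drain loop are omitted:

    // Configure an H.264 encoder that takes its input from a Surface.
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", WIDTH, HEIGHT);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface inputSurface = encoder.createInputSurface();
    encoder.start();

    // Route an app-owned virtual display into the encoder. Only layers
    // your app puts on this display (e.g. via a Presentation) are captured.
    DisplayManager dm =
            (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
    VirtualDisplay virtualDisplay = dm.createVirtualDisplay("recording",
            WIDTH, HEIGHT, DPI, inputSurface,
            DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);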
See the source code for a CTS test with a trivial example (just uses an ImageView).
Of course, if you happen to be a GLES application, you can just record the output directly (see e.g. EncodeAndMuxTest and the "Record GL app" activity in Grafika).
Well, AFAIK, I don't see any API support equivalent to capturing what's going on on the screen.
Many devices do not store the final display data in a framebuffer, so framebuffer-based screen-capture methods will not work on those devices.
I want to know: how can I get the final composition data from SurfaceFlinger?
If we can capture from SurfaceFlinger, it would help us retrieve the video and camera preview even when there is no framebuffer.
You don't want or need the final composited video data. To record the camera preview, you can just feed it into a MediaCodec (requires Android 4.1, API 16). In Android 4.3 (API 18) this got significantly easier with some tweaks to MediaCodec and the introduction of the MediaMuxer class. See this page for examples, especially CameraToMpegTest.
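The encoder-to-muxer plumbing boils down to a drain loop like this (a sketch in the spirit of CameraToMpegTest, API 18+; encoder is a started MediaCodec whose input surface receives the preview, recording is a placeholder flag, and error/end-of-stream handling is omitted):

    MediaMuxer muxer = new MediaMuxer("/sdcard/camera.mp4",
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int track = -1;
    while (recording) {
        int index = encoder.dequeueOutputBuffer(info, 10000);
        if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            // The muxer can only be started once the encoder reports its
            // actual output format (with codec-specific data attached).
            track = muxer.addTrack(encoder.getOutputFormat());
            muxer.start();
        } else if (index >= 0) {
            if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0
                    && info.size > 0) {
                ByteBuffer buf = encoder.getOutputBuffers()[index];
                buf.position(info.offset);
                buf.limit(info.offset + info.size);
                muxer.writeSampleData(track, buf, info);
            }
            encoder.releaseOutputBuffer(index, false);
        }
    }
    muxer.stop();
    muxer.release();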
It is possible to capture the composited screen; for example, the system UI does it to grab screenshots for the recent apps menu, and DDMS/ADT can capture screen shots for debugging. You need the appropriate permission to do this, though, and normal apps don't have it. It's restricted to make certain phishing schemes harder.
In no event are you able to capture DRM-protected video. Not even SurfaceFlinger gets to see that.
From the shell, you can use the screencap command (see source code), e.g. adb shell screencap -p /sdcard/screen.png.
So I read here that it's not possible to capture preview frames without a valid Surface. However, I saw that the IP Webcam app can do this, and I want to know how.
Can that app do it on versions below v2.3? If so, how?
Furthermore, the bug isn't marked as fixed, so I'm wondering if the restriction was ever lifted.
Also, if I don't want to save the video stream from the preview, but rather stream it over the network, is that possible with MediaRecorder? All the examples I see save to a file, but I reckon the IP Webcam app uses the preview. Or maybe it writes to a pipe?
When using Android, you must have a valid Surface object to take pictures or video; the preview also requires a Surface object. I would guess that IP Webcam uses native calls (C or C++) into the layers below Dalvik, bypassing the Java layer(s). That way it can access the hardware more directly. If you have the skills, you should be able to do this using the Android NDK.