In a fragment I have a TextureView displaying the camera preview, and on top of it I draw several other views.
My goal is to record a short video of what the user sees (all views) or save several screenshots to compile later into a video.
Since I don't want the permission prompt (and its associated Intent) to show up, I don't want to use MediaProjection.
I've tried many things, but they either don't work or capture all views except the TextureView, which turns out black in the result. Note that I don't wish to use MediaRecorder either, because it would only let me record the TextureView, and I want all of the content to be captured.
I understand that the TextureView's content comes from a SurfaceTexture rendered on the GPU rather than through the normal View drawing pass, and that this is the reason it comes out black.
I have actually managed to get screenshots with the PixelCopy API, particularly this call, but its minimum SDK version is 26 and I need a solution that works down to SDK 24, otherwise it would be an option for me... Also, the ideal outcome would be a video directly, not frames to compile into a video later.
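For reference, what I'm doing looks roughly like this (a minimal sketch; saveFrame() is just a placeholder for whatever is done with the frame):

    import android.app.Activity;
    import android.graphics.Bitmap;
    import android.os.Handler;
    import android.os.Looper;
    import android.view.PixelCopy;
    import android.view.View;

    // Minimal sketch (API 26+): copies the whole window, including the TextureView.
    void captureScreen(Activity activity, View rootView) {
        final Bitmap bitmap = Bitmap.createBitmap(
                rootView.getWidth(), rootView.getHeight(), Bitmap.Config.ARGB_8888);
        PixelCopy.request(activity.getWindow(), bitmap,
                new PixelCopy.OnPixelCopyFinishedListener() {
                    @Override
                    public void onPixelCopyFinished(int copyResult) {
                        if (copyResult == PixelCopy.SUCCESS) {
                            saveFrame(bitmap); // placeholder: write/collect the frame
                        }
                    }
                }, new Handler(Looper.getMainLooper()));
    }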
So, can anyone point out a better way of doing this? I'm currently not seeing any alternatives...
Again, I want to give the user a small video of the entire screen display (all views).
Thanks a lot in advance!
I am currently working on an application in Android Studio that messes with the colors of the live camera feed from a phone's camera. For example, I may want to filter out all the reds, or maybe I want to make the displayed camera image black-and-white.
However, I haven't really found much on how to do this. I've found tutorials on using both the deprecated Camera class and the android.hardware.camera2 package. My preferred sample code was for camera2, found directly here (takes you directly to the Java class files, not the whole project).
So does anyone know how to use camera2 to do what I want? Do I need to use the deprecated Camera class instead? My idea is that I need an activity whose main job is displaying images, while behind the scenes the phone camera is running, sending each image (in whatever format, e.g. a Bitmap) to have its colors messed with (by some code I will write), which then sends the image on to be displayed in the main activity.
So that is three main pieces: (1) camera to Bitmap, to get what is currently seen by the phone camera and store it in code; (2) mess with the colors of the Bitmap to distort the current view in my desired way; and (3) a way of taking the resulting distorted view and displaying it on the screen. Of course, as mentioned, it's the first and last of these three that I really need help with.
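To make the three pieces concrete, here is a rough sketch of the shape I have in mind, assuming the camera2 preview is already streaming into a TextureView (grabbing frames with TextureView.getBitmap() is just my guess at the simplest route, probably not the fastest):

    import android.graphics.Bitmap;
    import android.graphics.Color;
    import android.view.TextureView;
    import android.widget.ImageView;

    public class ColorFilterPipeline {
        // (1) Camera to Bitmap: grab the current preview frame.
        // Assumes the camera2 session is already streaming into this TextureView.
        public Bitmap grabFrame(TextureView preview) {
            return preview.getBitmap(); // may be null before the first frame arrives
        }

        // (2) Mess with the colors: e.g. drop the red channel entirely.
        public Bitmap removeReds(Bitmap src) {
            Bitmap out = src.copy(Bitmap.Config.ARGB_8888, true);
            int w = out.getWidth(), h = out.getHeight();
            int[] pixels = new int[w * h];
            out.getPixels(pixels, 0, w, 0, 0, w, h);
            for (int i = 0; i < pixels.length; i++) {
                int p = pixels[i];
                pixels[i] = Color.argb(Color.alpha(p), 0, Color.green(p), Color.blue(p));
            }
            out.setPixels(pixels, 0, w, 0, 0, w, h);
            return out;
        }

        // (3) Display the distorted frame.
        public void show(ImageView target, Bitmap frame) {
            target.setImageBitmap(frame);
        }
    }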
Please let me know what other details will be helpful to know about.
In one of my applications I need to record a video of my own screen.
I need to let the user take a video of the app, from within the app. How is that possible?
First question: is it possible at all? If yes, then how? Any useful links or some help would be appreciated.
Thanks
Pragna Bhatt
Yes, it is possible, but with certain limitations. I have done it in one of my projects.
Android 4.4 adds support for screen recording. See here.
If you are targeting lower versions, you can still achieve it, but it will be slow; there is no direct or easy way to do it. What you do is create a drawable from your view/layout, convert that drawable to YUV format, and send it to the camera (look for a library that lets you feed a custom YUV image to the camera); the camera will play it back like a movie, which you can save to storage. Use threads to increase the frame rate (newer multi-core devices will reach a higher frame rate).
Only create the drawable (from the view) when there is a change in the view or its children; you can use a GlobalLayoutListener for that. Otherwise, keep sending the same YUV image to the camera.
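A rough sketch of the capture step (the YUV conversion and the camera feeding depend on the library you pick, so they are only marked as comments; class and method names are placeholders):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.View;
    import android.view.ViewTreeObserver;

    public class ViewCapture {
        private Bitmap lastFrame; // reused until the layout actually changes

        // Draw the current state of the view hierarchy into a Bitmap.
        public Bitmap capture(View root) {
            Bitmap bitmap = Bitmap.createBitmap(root.getWidth(), root.getHeight(),
                    Bitmap.Config.ARGB_8888);
            root.draw(new Canvas(bitmap));
            return bitmap;
        }

        // Only re-capture when something in the hierarchy changes.
        public void watch(final View root) {
            root.getViewTreeObserver().addOnGlobalLayoutListener(
                    new ViewTreeObserver.OnGlobalLayoutListener() {
                        @Override
                        public void onGlobalLayout() {
                            lastFrame = capture(root);
                            // convert lastFrame to YUV here and hand it to the
                            // camera/encoder library; otherwise keep resending it
                        }
                    });
        }
    }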
Limitations:
You cannot create a video of more than one activity (at a time), or of the transitions between them, because you are creating images from a view. (Work on it yourself; maybe you'll find a way.)
You cannot increase the frame rate past a certain point, because it depends on the hardware of your device.
(I tried to stuff the question with keywords in case someone else has this issue - I couldn't find much help.)
I have a custom View in Android that contains an LED bargraph that displays levels received via socket communication. It's basically just a clipped image. The higher the level, the less clipped the image is.
When I update the level and then invalidate the View, some devices seem to "collect" multiple updates and render them in chunks. The screen visibly hesitates for say 1/10th of a second, then rapidly paints multiple frames, and then hesitates again. It looks like it's overwhelmed and dropping frames.
However, when changing another UI control on the screen, the LED bargraph paints much more frequently and smoothly. I'm thinking Android is trying to help me by "collecting" multiple invalidations and then doing them all at once. Perhaps by manipulating controls, I'm "increasing" my frame rate simply by giving it "more to do" so it delays less between actual paints.
Unlike animation (with smooth transitions) I want to show the absolute latest value as quickly as possible. My data samples aren't faster than 10-20fps anyway.
Is there an easy way to "force" a paint at certain points, or is this a limit of how Views work? Should I be implementing this in a SurfaceView instead? (I have not played with that yet... want advice first.) Thanks in advance for suggestions.
(Later that same day...)
Update: I found a page in the Docs that does suggest implementing my widget as a SurfaceView is the way to go:
http://developer.android.com/guide/topics/graphics/2d-graphics.html
(An hour after that...)
SurfaceView seems overkill for what I want to do. The best-practice method is to "own" the whole canvas, but I have already developed the rest of my controls and layouts and they work well. It must be possible to get some better performance with what I have, especially since interacting with the UI makes the redraw speed satisfactory.
It turns out SurfaceView was the way to go. I was benchmarking on an older phone, which didn't help. (The frame rate using a standard View was fine on an ASUS eeePad.) I had to throw away some code, but the end result is smoother and faster with SurfaceView. Further, I was able to re-use more code than I expected, and I actually dramatically simplified my multitouch handling code (since everything I want to touch is in the same SurfaceView).
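For anyone curious, the structure I ended up with looks roughly like this (stripped down; the socket plumbing and the actual LED drawing are omitted, and drawBargraph() is a placeholder):

    import android.content.Context;
    import android.graphics.Canvas;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    public class BargraphSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
        private volatile float level;   // latest sample from the socket thread
        private volatile boolean running;

        public BargraphSurfaceView(Context context) {
            super(context);
            getHolder().addCallback(this);
        }

        public void setLevel(float newLevel) {
            level = newLevel;           // no invalidate(); the loop below repaints
        }

        @Override
        public void surfaceCreated(final SurfaceHolder holder) {
            running = true;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    // Repaints as fast as the device allows, always with the
                    // latest level, instead of waiting for invalidate().
                    while (running) {
                        Canvas canvas = holder.lockCanvas();
                        if (canvas != null) {
                            try {
                                drawBargraph(canvas, level);
                            } finally {
                                holder.unlockCanvasAndPost(canvas);
                            }
                        }
                    }
                }
            }).start();
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            running = false; // stops the render loop
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

        private void drawBargraph(Canvas canvas, float level) {
            // placeholder: clip and draw the LED image according to level
        }
    }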
FYI: I'm still only getting about 15fps on Droid X, but half of the CPU load appears to be data packet processing. The eeePad is doing almost 40fps now -- and my data rate is only 20 samples/sec.
So... a win I guess. I want the Droid X to run better, but it flies on a real tablet.
I'm trying to figure out if Android can handle two video players occupying the same screen space, preferably with the one on top having alpha-channel regions that are transparent to the one behind.
I know how to implement this code-wise; I'm curious whether anyone knows if this is physically possible before I bother throwing coding time at it.
TIA
AFAIK, no, at least before Android 4.0. You can't have two SurfaceViews overlap.
Now, it is conceivable that this is possible with TextureView with Android 4.0, though I am far from confident of that.
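If you experiment with the TextureView idea on 4.0+, the wiring for the top player might look roughly like this (an untested sketch; note that setAlpha() fades the whole view and does not give you per-pixel transparent regions):

    import android.graphics.SurfaceTexture;
    import android.media.MediaPlayer;
    import android.view.Surface;
    import android.view.TextureView;

    // Untested sketch: two TextureViews stacked in a FrameLayout, each fed by its
    // own MediaPlayer. This sets up the top one.
    public class OverlayPlayerSetup implements TextureView.SurfaceTextureListener {
        private final MediaPlayer player = new MediaPlayer();

        public void attach(TextureView topView) {
            topView.setAlpha(0.5f);                  // whole-view transparency only
            topView.setSurfaceTextureListener(this);
        }

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture texture, int w, int h) {
            player.setSurface(new Surface(texture)); // route decoded frames to the view
            player.start();                          // assumes setDataSource()/prepare() done
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
            player.release();
            return true;
        }

        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture t, int w, int h) { }
        @Override public void onSurfaceTextureUpdated(SurfaceTexture t) { }
    }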
Another option:
Player 1: a stock MediaPlayer that renders to a SurfaceView.
Player 2: yet another player that can render to a GLSurfaceView or a Bitmap. This one must be custom-built to decode frames and write to a GLSurfaceView's context or to a native Bitmap via JNI.
I want to write an activity that:
Shows the camera preview (viewfinder), and has a "capture" button.
When the "capture" button is pressed, takes a picture and returns it to the calling activity (setResult() & finish()).
Are there any complete examples out there that work on every device? A link to a simple open-source application that takes pictures would be the ideal answer.
My research so far:
This is a common scenario, and there are many questions and tutorials on this.
There are two main approaches:
Use the android.provider.MediaStore.ACTION_IMAGE_CAPTURE intent. See this question
Use the Camera API directly. See this example or this question (with lots of references).
Approach 1 would have been perfect, but the issue is that the intent is implemented differently on each device. On some devices it works well. However, on some devices you can take a picture but it is never returned to your app. On some devices nothing happens when you launch the intent. Typically it also saves the picture to the SD card, and requires the SD card to be present. The user interaction is also different on every device.
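For reference, the basic flow of approach 1 is just this (a minimal sketch inside an Activity; the device quirks described above happen on top of it):

    import android.content.Intent;
    import android.graphics.Bitmap;
    import android.provider.MediaStore;

    // Inside an Activity:
    static final int REQUEST_IMAGE_CAPTURE = 1; // arbitrary request code

    void dispatchTakePictureIntent() {
        Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
            startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK && data != null) {
            // Many devices return only a thumbnail in "data"; a full-size image
            // needs EXTRA_OUTPUT, which is exactly where the device quirks start.
            Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
        }
    }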
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and to completely freeze another device. On yet another device the capture worked, but the preview stayed black.
I would have used ZXing as an example application (I work with it a lot), but it only uses the preview (viewfinder), and doesn't take any pictures. I also found that on some devices, ZXing did not automatically adjust the white balance when the lighting conditions changed, while the native camera app did it properly (not sure if this can be fixed).
Update:
For a while I used the camera API directly. This gives more control (custom UI, etc.), but I would not recommend it to anyone. It would work on 90% of devices, but every now and again a new device would be released with a different problem.
Some of the problems I've encountered:
Handling autofocus
Handling flash
Supporting devices with a front camera, back camera or both
Each device has a different combination of screen resolution, preview resolutions (which don't always match the screen resolution) and picture resolutions; see the size-selection sketch after this list.
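For example, the preview-size problem alone needs something like this (a simplified version of the aspect-ratio matching I ended up with, using the old Camera API):

    import android.hardware.Camera;
    import java.util.List;

    // Pick the supported preview size whose aspect ratio is closest to the
    // target surface. A sketch, not bulletproof: real code also weighs absolute size.
    public static Camera.Size bestPreviewSize(Camera.Parameters params, int width, int height) {
        double targetRatio = (double) width / height;
        Camera.Size best = null;
        double bestDiff = Double.MAX_VALUE;
        List<Camera.Size> sizes = params.getSupportedPreviewSizes();
        for (Camera.Size size : sizes) {
            double diff = Math.abs((double) size.width / size.height - targetRatio);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = size;
            }
        }
        return best;
    }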
So in general, I'd not recommend going this route at all unless there is no other way. After two years I dumped my custom code and switched back to the Intent-based approach. Since then I've had much less trouble. The issues I had with the Intent-based approach in the past were probably just my own incompetence.
If you really need to go this route, I've heard it's much easier if you only support devices with Android 4.0+.
"With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and to completely freeze another device. On yet another device the capture worked, but the preview stayed black."
Either there is a bug in the examples or there is a compatibility issue with the devices.
The example that CommonsWare gave works well as-is, but here are the issues I ran into when modifying it for my use case:
Never take a second picture before the first one has completed, in other words before PictureCallback.onPictureTaken() has been called. The CommonsWare example uses the inPreview flag for this purpose.
Make sure that your SurfaceView is full-screen. If you want a smaller preview you might need to change the preview size selection logic, otherwise the preview might not fit into the SurfaceView on some devices. Some devices only support a full-screen preview size, so keeping it full-screen is the simplest solution.
To add more components to the preview screen, a FrameLayout works well in my experience. I started by using a LinearLayout to add text above the preview, but that broke the previous rule about keeping the SurfaceView full-screen. When using a FrameLayout to add components on top of the preview, you don't have any issues with the preview resolution.
I also posted a minor issue relating to Camera.open() on GitHub.
"the recommended way to access the camera is to open Camera on a separate thread". Otherwise, Camera.open() can take a while and might bog down the UI thread.
"Callbacks will be invoked on the event thread open(int) was called from". That's why to achieve best performance with camera preview callbacks (e.g. to encode them in a low-latency video for live communication), I recommend to open camera in a new HandlerThread, as shown here.