Currently my application uses the android.provider.MediaStore.ACTION_VIDEO_CAPTURE intent to capture a video, but it seems I can't programmatically set a maximum resolution (for example 720p).
Are there any methods/libraries that mimic this behavior but also allow resolution control? Or should I build the capture myself using MediaRecorder, SurfaceView, etc.?
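For reference, the stock intent only exposes MediaStore.EXTRA_VIDEO_QUALITY, which accepts 0 (low) or 1 (high) and nothing in between; a minimal sketch (REQUEST_VIDEO_CAPTURE is just an illustrative request code of your own):

// Launch the built-in camera app to record a video.
Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
// The only quality knob the intent offers: 0 = low, 1 = high.
// There is no extra for a concrete resolution such as 720p.
intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
startActivityForResult(intent, REQUEST_VIDEO_CAPTURE);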
If anyone is wondering, I've switched to https://github.com/JeroenMols/LandscapeVideoCamera/
It only took changing a couple of lines of code to get it working. The downside is that it supports only landscape mode, but maybe that's a plus, since fewer people will record vertical videos.
I am working on an Android TV app project. I need to split one video across 2, 3, or X screens in equal parts. Each screen has an Android TV stick plugged into it running my app.
For example:
If we have 2 screens each one will show 50% of the video played.
If we have 3 screens each one will show 33.33% of the video played.
If we have 4 screens each one will show 25% of the video played.
Here is an image to give a better understanding of what I expect:
The video is played simultaneously on each screen of the wall, and on this point I have already thought about it: one screen will be the NTP (Network Time Protocol) master and the other screen(s) will be the slave(s), to keep the players synchronized.
My first idea is to have the complete video in each app, play it, and keep visible only the part that I need. How can I achieve that? Is it possible?
Thank you in advance for your help.
I'm not clear how you'll handle the height (e.g., if you have a 1080p video but span it across four screens, you're going to have to cut off 3/4 of the pixels to "zoom in" on it across the screens), but some thoughts:
If you don't have to worry about HDCP, an HDMI splitter might work. If not, but it's for a one-off event (e.g., setting up a kiosk for a trade show), then it's probably least risky and easiest to create separate video files with them actually split how you'd want. If this has to be more flexible/robust, then it's going to be a bit of a journey with some options.
Simplest
You should be able to set up a SurfaceView as large as you need with the offsets adjusted for each device. For example, screen 2 might have a SurfaceView set with a width of #_of_screens * 1920 (or whatever the appropriate resolution is) and an X starting position of -1920. The caveat is that I don't know how large of a SurfaceView this could support. For example, this might work great for just two screens but not work for ten screens.
You can try using VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING to scale the video output based on how big you need it to display.
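If it helps, here's a rough sketch of that combination, assuming a wall of 1920x1080 screens and that the SurfaceView sits inside a FrameLayout; numScreens, screenIndex, R.id.video_surface, and R.raw.wall_video are all placeholder names for your own setup:

// Hypothetical wall: numScreens panels of 1920x1080, this device shows slice screenIndex (0-based).
int numScreens = 3;
int screenIndex = 1;
int screenWidth = 1920;
int screenHeight = 1080;

SurfaceView surfaceView = (SurfaceView) findViewById(R.id.video_surface);

// Make the SurfaceView as wide as the entire wall and shift it left so only
// this device's slice lands on screen (assumes a FrameLayout parent).
FrameLayout.LayoutParams lp =
        new FrameLayout.LayoutParams(numScreens * screenWidth, screenHeight);
lp.leftMargin = -screenIndex * screenWidth;
surfaceView.setLayoutParams(lp);

// Play the full video into that oversized surface, cropping/scaling to fill it.
MediaPlayer player = MediaPlayer.create(this, R.raw.wall_video);
player.setDisplay(surfaceView.getHolder());
player.setVideoScalingMode(MediaPlayer.VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING);
player.start();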
For powerful devices
If the devices you're working with are powerful enough, you may be able to render to a SurfaceTexture off screen then copy the portion of the texture to a GLSurfaceView. If this is DRMed content, you'll also have to check for the EGL_EXT_protected_content extension.
For Android 10+
If the devices are running Android 10 or above, SurfaceControl may work for you. You can use a SurfaceControl.Transaction to manipulate the SurfaceControl, including the way the buffer coordinates are mapped. The basic code ends up looking like this:
new SurfaceControl.Transaction()
.setGeometry(surfaceControl, sourceRect, destRect, Surface.ROTATION_0)
.apply();
There's also a SurfaceControl sample in the ExoPlayer v2 demos: https://github.com/google/ExoPlayer/tree/release-v2/demos/surface
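To make the source/destination mapping concrete, here is a rough sketch, assuming API 29+, a SurfaceView-backed player, and that each device knows its 0-based screenIndex and the total numScreens (those names are placeholders):

// This device's SurfaceControl (available on API 29+).
SurfaceControl surfaceControl = surfaceView.getSurfaceControl();

int videoWidth = 1920;   // assumed dimensions of the decoded video
int videoHeight = 1080;
int sliceWidth = videoWidth / numScreens;

// Source: this device's vertical slice of the video buffer.
Rect sourceRect = new Rect(screenIndex * sliceWidth, 0,
        (screenIndex + 1) * sliceWidth, videoHeight);
// Destination: the full output surface on this device.
Rect destRect = new Rect(0, 0, surfaceView.getWidth(), surfaceView.getHeight());

new SurfaceControl.Transaction()
        .setGeometry(surfaceControl, sourceRect, destRect, Surface.ROTATION_0)
        .apply();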
I am using the project android-camera2-secret-picture-taker to capture an image without opening the camera view, but the captured images are very bad, like this
Any help to make this better?
Thanks
[Edit]
I tried other phones and it works fine; I get these bad images on the Huawei Y6II only and I don't know why. The phone's camera is 13 MP and works fine with the native camera app.
Did you issue only a single capture request to the camera device? (No free-running preview or such).
Generally, the auto-exposure, focus, and white-balance routines take a second or so of streaming before they stabilize to good values.
Even if you don't want a preview on screen, you need to request 10-30 frames of data from the camera to start before you save a final image. Or to be more robust, set a repeating request targeting some low-resolution SurfaceTexture, and wait until the CaptureResult CONTROL_AE_STATE / AWB_STATE fields reach CONVERGED, and the AF_STATE field is what you want as well (depends on what AF mode you're using). Then capture your image.
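A condensed sketch of that "wait for 3A convergence" approach, assuming you already have an open CameraDevice and a configured CameraCaptureSession with a low-resolution SurfaceTexture target (dummySurface) and a JPEG ImageReader target (jpegSurface); those names are placeholders and error handling is mostly omitted:

void captureWhenConverged(final CameraDevice cameraDevice,
                          CameraCaptureSession session,
                          Surface dummySurface,          // low-res SurfaceTexture target
                          final Surface jpegSurface,     // ImageReader JPEG target
                          Handler backgroundHandler) throws CameraAccessException {
    // Free-running preview request so the 3A routines can run.
    CaptureRequest.Builder previewBuilder =
            cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    previewBuilder.addTarget(dummySurface);

    session.setRepeatingRequest(previewBuilder.build(),
            new CameraCaptureSession.CaptureCallback() {
                private boolean captured = false;

                @Override
                public void onCaptureCompleted(CameraCaptureSession s,
                                               CaptureRequest request,
                                               TotalCaptureResult result) {
                    Integer ae = result.get(CaptureResult.CONTROL_AE_STATE);
                    Integer awb = result.get(CaptureResult.CONTROL_AWB_STATE);
                    // Wait until auto-exposure and auto-white-balance report CONVERGED.
                    boolean converged = ae != null && awb != null
                            && ae == CaptureResult.CONTROL_AE_STATE_CONVERGED
                            && awb == CaptureResult.CONTROL_AWB_STATE_CONVERGED;
                    if (converged && !captured) {
                        captured = true;
                        try {
                            // Now take the real still capture.
                            CaptureRequest.Builder still = cameraDevice.createCaptureRequest(
                                    CameraDevice.TEMPLATE_STILL_CAPTURE);
                            still.addTarget(jpegSurface);
                            s.capture(still.build(), null, null);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }
                }
            }, backgroundHandler);
}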
This is a wildly blind guess, but hey, worth a try.
If you used some code snippet from the web that suggests getting the list of supported image sizes and just picking the first one - well, this has backfired for me on Huawei devices (more than one model), because Huawei seems to provide the list in ascending order of resolution (i.e. smallest first), whereas most other devices I've seen do it in descending order (i.e. largest first).
So if this is a resolution issue, it might be worth a check.
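A small sketch of the safer approach: never rely on list order, pick the largest (or a specific) size explicitly. This assumes the camera2 API and that characteristics is the CameraCharacteristics you already queried:

StreamConfigurationMap map = characteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size[] jpegSizes = map.getOutputSizes(ImageFormat.JPEG);

// Sort by area and take the largest instead of trusting jpegSizes[0].
Size largest = Collections.max(Arrays.asList(jpegSizes), new Comparator<Size>() {
    @Override
    public int compare(Size a, Size b) {
        return Long.signum((long) a.getWidth() * a.getHeight()
                - (long) b.getWidth() * b.getHeight());
    }
});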
I am trying to capture a high resolution frame (1280x720) from the camera in a pair of Google Glass using OpenCV 2.4.10 for Android. I have implemented the CameraBridgeViewBase.CvCameraViewListener2 in my Activity and try to grab the frame in the onCameraFrame method. So far everything works well, and i get a 512x288 Mat object.
My problem is that the 512x288 resolution is not high enough for what I need. So I tried to set up my project the same way as they do in Sample 3 that ships with OpenCV: http://goo.gl/iDyqQj. The problem is that it only works for resolutions below 512x288; as soon as I increase the resolution above this level it defaults back to being 512x288 (without any notice).
I found some suggestions, http://goo.gl/X2wtM4, that OpenCV is restricting the frame size to a maximum of the screen resolution. But the Google Glass screen should have a 640x360 resolution? I tried to do as described in the answer, but when I override calculateCameraFrameSize and return a Size-object larger than 512x288, I get a distorted frame (but with the larger dimensions, see below).
Does anyone have a suggestion on how to capture a frame at a higher resolution on the Google Glass using OpenCV?
So I found a solution. There seem to be two separate problems. As I suspected in my question, you need to override calculateCameraFrameSize in JavaCameraView to be able to fetch frames at a higher resolution than the device's screen in onCameraFrame. This is apparently a design choice by OpenCV and has been since version 2.4.5. That is why I could not get a frame with higher resolution.
Even though I now can get a frame with higher resolution, it is still distorted for most preview sizes. This is a bug in the GDK that seems to have been known for quite some time (since XE10, if I understood correctly) but still is not fixed. Fortunately there is a workaround! The issue is avoided by manually setting the FPS of the preview using setPreviewFpsRange after you acquire the Camera.
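// Lock the preview FPS range; values are frames-per-second multiplied by 1000 (30000 = 30 fps).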
Camera.Parameters params = camera.getParameters();
params.setPreviewFpsRange(30000, 30000);
camera.setParameters(params);
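For the first issue, here is a sketch of what the override could look like, assuming OpenCV 2.4.x where CameraBridgeViewBase caps the frame size at the surface/screen size; the subclass name and the hard-coded 1280x720 target are just illustrative:

public class HighResJavaCameraView extends JavaCameraView {

    public HighResJavaCameraView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected Size calculateCameraFrameSize(List<?> supportedSizes,
                                            ListItemAccessor accessor,
                                            int surfaceWidth, int surfaceHeight) {
        // Pick 1280x720 if the camera supports it, ignoring the screen-size cap.
        for (Object size : supportedSizes) {
            if (accessor.getWidth(size) == 1280 && accessor.getHeight(size) == 720) {
                return new Size(1280, 720);
            }
        }
        // Fall back to the default (screen-limited) behaviour.
        return super.calculateCameraFrameSize(supportedSizes, accessor,
                surfaceWidth, surfaceHeight);
    }
}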
Can the Android camera be programmatically set to send grayscale images?
I'm not asking about conversion after the images have been received, I'm asking for a way to programmatically set it like we do with fps, resolution etc.
Are you talking about the built-in camera app (via an intent) or a camera inside your app?
If it's the built-in camera app, I don't think it's possible, and even if it were, you couldn't assume it would work the same on all devices, since many of them have customized camera apps.
As for the camera within your app, yes, you can do it, and there are examples out there of how to show things based on the camera's content, like this one
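As a sketch of one way to get grayscale data in your own app without any real conversion step: the legacy Camera API delivers NV21 preview frames, and the first width*height bytes of an NV21 buffer are the luma (Y) plane, i.e. one grayscale byte per pixel (mCamera here is a placeholder for an already-opened android.hardware.Camera):

mCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        int pixelCount = size.width * size.height;
        // Luma plane only; the interleaved chroma bytes that follow are ignored.
        byte[] gray = Arrays.copyOfRange(data, 0, pixelCount);
        // ... process the grayscale buffer ...
    }
});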
I've been looking at different ways of grabbing a YUV frame from a video stream, but most of what I've seen relies on getting the width and height from previewSize. However, a cell phone can shoot video at 720p even though a lot of phones can only display the preview at a lower resolution (i.e. 800x480), so is it possible to grab a frame that's closer to 1920x1080 (if video is being shot at 720p)? Or am I forced to use the preview resolution (800x480 on some phones)?
Thanks
Yes, you can. *
* Conditions apply -
You need access to the middle layer, the media framework to be more precise
No, it cannot be done through the application alone
Now, if you want to do it at the media framework level, here are the steps -
Assuming you are using Froyo or above, the default media framework used is Stagefright
In Stagefright, go to the method onVideoEvent; after a buffer is read from mVideoSource, use mVideoBuffer to access the video frame at its original resolution
Linking this with your application -
You will need a button in the application to trigger the screen capture
Once the user presses this button, read the video frame from the location mentioned above and return this buffer to the Java layer
From there you can use a JPEG encoder to convert the raw video frame to an image.
EDIT:
Re-reading your question, you were asking about screen capture during recording, i.e. the camera path. Even for this there is no way to achieve it in the application alone; you would have to do something similar, but you would need access to CameraSource in the Stagefright framework.