Load and play ultra high resolution videos on Android

Does anyone know how to programmatically load a high resolution video on Android, such as 3000 x 3000, and display only a portion of it, such as 1000 x 1000?
I tried using the official Android SDK MediaPlayer with a TextureView, but this approach seems to have media size limitations: the video plays, but the TextureView stays black.
I appreciate the help.

Media size limitations exist for a reason. A 3000 x 3000 video is huge for a small device like a phone. Keep in mind that you cannot decode only a small portion of a video frame; that is not how video works. You need to decode the whole large frame (which is reconstructed from an I-frame and the following P-frames), then crop the part you are interested in and present it on your TextureView, and all of this in real time. Think about device memory and CPU. In my opinion it is not possible with such limited resources.

Related

Play and split video on a wall of screens

I am working on an Android TV app project. I need to split one video across 2, 3 or X screens in equal parts. Each screen has an Android TV stick plugged into it, running my app.
For example:
If we have 2 screens each one will show 50% of the video played.
If we have 3 screens each one will show 33.33% of the video played.
If we have 4 screens each one will show 25% of the video played.
Here is an image to give a better understanding of what I expect:
The video is played simultaneously on each screen of the wall. I have already thought about that point: one screen will be the NTP (Network Time Protocol) master and the other screen(s) will be the slave(s), to keep the players synchronized.
My first idea is to have the complete video in each app, play it, and keep only the part I need visible. How can I achieve that? Is it possible?
Thank you in advance for your help.
I'm not clear how you'll handle the height (e.g., if you have a 1080p video but span it across four screens, you're going to have to cut off 3/4 of the pixels to "zoom in" on it across the screens), but some thoughts:
If you don't have to worry about HDCP, an HDMI splitter might work. If not, but it's for a one-off event (e.g., setting up a kiosk for a trade show), then it's probably least risky and easiest to create separate video files with them actually split how you'd want. If this has to be more flexible/robust, then it's going to be a bit of a journey with some options.
Simplest
You should be able to set up a SurfaceView as large as you need with the offsets adjusted for each device. For example, screen 2 might have a SurfaceView set with a width of #_of_screens * 1920 (or whatever the appropriate resolution is) and an X starting position of -1920. The caveat is that I don't know how large of a SurfaceView this could support. For example, this might work great for just two screens but not work for ten screens.
You can try using VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING to scale the video output based on how big you need it to display.
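A minimal sketch of that idea, assuming MediaPlayer renders into a SurfaceView that sits inside a FrameLayout; the screen index, screen count, 1920x1080 panel size, view ID, and mediaPlayer variable are illustrative assumptions, not part of the original answer:

// Sketch: oversize the SurfaceView to span the whole wall and shift it left
// so this device only shows its own slice. All values below are assumptions.
int screenIndex = 1;          // which panel this device is (0-based)
int screenCount = 4;          // total panels in the wall
int panelWidth  = 1920;       // native width of one panel
int panelHeight = 1080;

SurfaceView surfaceView = findViewById(R.id.video_surface);   // your SurfaceView
FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(
        screenCount * panelWidth,   // SurfaceView as wide as the whole wall
        panelHeight);
lp.leftMargin = -screenIndex * panelWidth;  // shift left so only our slice is on screen
surfaceView.setLayoutParams(lp);

// Optionally let MediaPlayer crop/scale the video to fill that oversized surface.
mediaPlayer.setVideoScalingMode(MediaPlayer.VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING);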
For powerful devices
If the devices you're working with are powerful enough, you may be able to render to a SurfaceTexture off screen then copy the portion of the texture to a GLSurfaceView. If this is DRMed content, you'll also have to check for the EGL_EXT_protected_content extension.
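For the DRM check mentioned above, a small sketch; the helper name is made up, and it assumes it runs on the GL thread with a current EGL context:

// Hypothetical helper: returns true if the current EGL display advertises
// EGL_EXT_protected_content, which is needed to render DRM-protected buffers.
private static boolean supportsProtectedContent() {
    android.opengl.EGLDisplay display = android.opengl.EGL14.eglGetCurrentDisplay();
    String extensions = android.opengl.EGL14.eglQueryString(
            display, android.opengl.EGL14.EGL_EXTENSIONS);
    return extensions != null && extensions.contains("EGL_EXT_protected_content");
}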
For Android 10+
If the devices are running Android 10 or above, SurfaceControl may work for you. You can use a SurfaceControl.Transaction to manipulate the SurfaceControl, including the way the buffer coordinates are mapped. The basic code ends up looking like this:
Rect sourceRect = new Rect(1920, 0, 3840, 1080); // example: the slice of the video this screen shows
Rect destRect = new Rect(0, 0, 1920, 1080);      // example: this device's full output surface
new SurfaceControl.Transaction()
        .setGeometry(surfaceControl, sourceRect, destRect, Surface.ROTATION_0)
        .apply();
There's also a SurfaceControl sample in the ExoPlayer v2 demos: https://github.com/google/ExoPlayer/tree/release-v2/demos/surface

Exceeding maximum texture size when playing video with OpenGL on Android

I am using the GL_OES_EGL_image_external extension to play a video with OpenGL. The problem is that on some devices the video dimensions are exceeding the maximum texture size of OpenGL. Is there any way how I can dynamically deal with this issue, e.g. downscale the frames on the fly or do I have to reduce the video size beforehand?
If you are really hitting the max texture size in OpenGL ES (FWIW I believe this is about 2048x2048 with recent devices) then you could do a few things:
You could call setVideoScalingMode(VIDEO_SCALING_MODE_SCALE_TO_FIT) on your MediaPlayer. I believe this will scale the video resolution to the size of the SurfaceTexture/Surface it is attached to.
You could alternatively have four videos playing and render them to separate TEXTURE_EXTERNAL_OES textures, then render these four textures separately in GL. However, that could kill your performance.
If I saw the error message and some context of the code I could maybe provide some more information.
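As a quick sanity check for the first suggestion, you can query the actual limit and request scaling roughly like this (a sketch; the mediaPlayer variable is assumed, and the query must run on the GL thread with a current context):

// Query the real limit instead of guessing (call from the GL thread).
int[] maxTextureSize = new int[1];
android.opengl.GLES20.glGetIntegerv(
        android.opengl.GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
android.util.Log.d("VideoGL", "GL_MAX_TEXTURE_SIZE = " + maxTextureSize[0]);

// Ask MediaPlayer to scale the video to fit the attached Surface.
mediaPlayer.setVideoScalingMode(android.media.MediaPlayer.VIDEO_SCALING_MODE_SCALE_TO_FIT);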

Android MediaMuxer & MediaCodec too slow for saving video

My Android app does live video processing using OpenGL. I'm trying to save it to video using MediaMuxer and MediaCodec.
The performance is not good enough. Each cycle the screen is updated and saved to file. The screen looks smooth, but the video file is horrible: major motion blur when things change quickly, and the frame rate appears to be 1/2 or 1/3 of what it should be.
It seems to be a limitation due to clamping of settings internally. I can't get it to spit out a video with a bit rate greater than 288 kbps. I don't think the problem is raw performance, because there is no difference in frame rate between 1024x1024, 480x480, and 240x240; if the encoder were having trouble keeping up, it should at least improve when the number of pixels drops by a factor of more than 10.
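For context, the bit rate and frame rate are only requests passed to the encoder through a MediaFormat, and the device encoder is free to clamp them; a typical configuration looks roughly like this (all values illustrative):

// Sketch of a typical AVC encoder setup; the encoder may silently clamp these values.
android.media.MediaFormat format =
        android.media.MediaFormat.createVideoFormat("video/avc", 1024, 1024);
format.setInteger(android.media.MediaFormat.KEY_BIT_RATE, 4000000);   // 4 Mbps requested, not guaranteed
format.setInteger(android.media.MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(android.media.MediaFormat.KEY_I_FRAME_INTERVAL, 1);
format.setInteger(android.media.MediaFormat.KEY_COLOR_FORMAT,
        android.media.MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
// createEncoderByType may throw IOException.
android.media.MediaCodec encoder = android.media.MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, android.media.MediaCodec.CONFIGURE_FLAG_ENCODE);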
The app is here : https://play.google.com/store/apps/details?id=com.matthewjmouellette.snapdat.
I would love to post a code sample, but my program is 10K lines of code, with a lot of relevant code just for this problem.
Any help would be greatly appreciated.
EDIT:
I've tried 10+ different things and I'm out of ideas right now. I wish I could just save the video uncompressed; the storage should be able to keep up with a small enough image and a medium frame rate.
It seems the encoding method just doesn't work for my video. The frames differ too much for the encoder's trick of "moving" parts of a previous frame to work well; I need full frames throughout. I'm thinking something along the lines of M-JPEG would work really well. JPEGs tend to take about 1/10th the size of a bitmap, so it should allow a reasonable file size with almost no CPU load, since this is image compression rather than video compression. I wish I had a good library for this.
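A minimal sketch of that M-JPEG-style idea using only built-in classes (glReadPixels into a Bitmap, then Bitmap.compress to JPEG); the frame size and output stream are assumptions, and the image will need a vertical flip since GL reads pixels bottom-up:

// Read the current GL frame and append it as one JPEG "frame" (M-JPEG style).
int width = 1024, height = 1024;                       // assumed frame size
java.nio.ByteBuffer pixels = java.nio.ByteBuffer.allocateDirect(width * height * 4);
android.opengl.GLES20.glReadPixels(0, 0, width, height,
        android.opengl.GLES20.GL_RGBA, android.opengl.GLES20.GL_UNSIGNED_BYTE, pixels);
pixels.rewind();

android.graphics.Bitmap frame = android.graphics.Bitmap.createBitmap(
        width, height, android.graphics.Bitmap.Config.ARGB_8888);
frame.copyPixelsFromBuffer(pixels);                    // note: upside-down; flip before or while encoding
frame.compress(android.graphics.Bitmap.CompressFormat.JPEG, 80, outputStream);  // roughly 1/10 the raw size
frame.recycle();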

Video camera interface between beaglebone and android

I am trying to develop an application in which a Beaglebone platform captures video images from a camera connected to it and then sends them (through an internet socket) to an Android application, so that the application shows the video images.
I have read that openCV may be a very good option to capture the images from a camera, but then I am not sure how the images can be sent through a socket.
On the other end, I think the video images received by the Android application could be treated as simple images. With this in mind, I think I can refresh the image every second or so.
I am not sure if I am in the right way for the implementation, so I really appreciate any suggestion and help you could provide.
Thanks in advance, Gus.
The folks at OpenROV have done something like what you've described. Instead of using a custom Android app, which is certainly possible, they've simply used a web browser to display the captured images.
https://github.com/OpenROV/openrov-software
This application uses OpenCV to perform the capture and analysis, a Node.js application to transmit the data over socket.io to the web browser, and a web client to display the video. A description of how this architecture works is given here:
http://www.youtube.com/watch?v=uvnAYDxbDUo
You can also look at running something like mjpg-streamer:
http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1051&context=cpesp
Note that displaying the video stream as a set of images can have a big performance impact. For example, if you are not careful how you encode each frame, you can more than double the traffic between the two systems. ARGB takes 32 bits to encode a pixel while YUV takes 12 bits, so even accounting for frame compression you are still doubling the storage per frame. Also, rendering ARGB is much, much slower than rendering YUV, as most Android phones have hardware-optimized YUV rendering (the GPU can blit YUV directly into display memory). In addition, rendering separate frames usually makes one take the easy way and draw a Bitmap on a Canvas, which works if you are content with something on the order of 10-15 fps, but can never get to 60 fps, and can reach a peak (not sustained) of 30 fps only on very few phones.
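For reference, the "easy way" mentioned above (decode each received JPEG frame and draw it on a Canvas) looks roughly like this; the socket-reading helper and the surfaceHolder variable are assumptions:

// Sketch: decode one received JPEG frame and blit it to a SurfaceView's Canvas.
byte[] jpegFrame = readNextFrameFromSocket();          // hypothetical helper that reads one JPEG from the socket
android.graphics.Bitmap bitmap =
        android.graphics.BitmapFactory.decodeByteArray(jpegFrame, 0, jpegFrame.length);
android.graphics.Canvas canvas = surfaceHolder.lockCanvas();   // surfaceHolder: the SurfaceView's holder
if (bitmap != null && canvas != null) {
    canvas.drawBitmap(bitmap, 0, 0, null);
}
if (canvas != null) {
    surfaceHolder.unlockCanvasAndPost(canvas);
}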
If you have a hardware MPEG encoder on the Beaglebone board, you should use it to encode and stream the video. This would allow you to directly pass the MPEG stream to the standard Android media player for rendering. Of course, using the standard media player will not allow you to process the video stream in real time, so depending on your scenario this might not be an option for you.

grab frame from video in Android

I've been looking at different ways of grabbing a YUV frame from a video stream, but most of what I've seen relies on getting the width and height from previewSize. However, a cell phone can shoot video at 720p while a lot of phones can only display it at a lower resolution (i.e. 800x480), so is it possible to grab a screenshot that's closer to 1920x1080 (if video is being shot at 720p)? Or am I forced to use the preview resolution (800x400 on some phones)?
Thanks
Yes, you can. *
* Conditions Apply -
You need access to the middle layer, the media framework to be more precise
No, it cannot be done through the application alone
Now, if you want to do it at the media framework level, here are the steps:
Assuming you are using Froyo or above, the default media framework is StageFright
In StageFright, go to the method onVideoEvent; after a buffer is read from mVideoSource, use mVideoBuffer to access the video frame at its original resolution
Linking this with your application -
You will need a button in the application to indicate screen capture
Once the user presses this button, read the video frame from the location mentioned above and return this buffer to the Java layer
From here you can use a JPEG encoder to convert the raw video frame to an image.
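If the buffer comes back to Java as NV21 data, the built-in YuvImage class can do that JPEG conversion; a small sketch (the frame dimensions, nv21Frame array, and output stream are assumptions):

// Sketch: compress a raw NV21 frame straight to JPEG, no external library needed.
int frameWidth = 1920, frameHeight = 1080;             // assumed full video resolution
android.graphics.YuvImage yuv = new android.graphics.YuvImage(
        nv21Frame, android.graphics.ImageFormat.NV21, frameWidth, frameHeight, null);
android.graphics.Rect fullFrame = new android.graphics.Rect(0, 0, frameWidth, frameHeight);
yuv.compressToJpeg(fullFrame, 90, jpegOutputStream);   // quality 90; stream is assumed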
EDIT:
Re-reading your question, you were asking about capture during recording, i.e. the camera path. Even for this there is no way to achieve it in the application alone; you will have to do something similar, but you will need access to CameraSource in the StageFright framework.
