I want to use GLSurfaceView to render video on Gingerbread (API 10) so that I can call glReadPixels to get the frame data, but MediaPlayer.setSurface() requires API 14. Is there any way to get this done, perhaps with a support library or native code?
Ultimately I want to capture video frames to JPEG images while the video is rendered fullscreen. The cartoon video is in BGRA format (I hope!).
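For reference, the readback step itself looks roughly like the sketch below. It only covers the glReadPixels-to-JPEG part and assumes the video frame has already been drawn into the GLSurfaceView's framebuffer by whatever path ends up working on API 10; the method would be called on the GL thread, e.g. at the end of Renderer.onDrawFrame().

    // Sketch only: copies the current framebuffer into a Bitmap and compresses it to JPEG.
    // glReadPixels always returns RGBA here, regardless of the source video's pixel format.
    private void saveFrame(int width, int height, File file) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        buf.rewind();

        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);   // note: image is vertically flipped (GL origin is bottom-left)

        FileOutputStream fos = new FileOutputStream(file);
        try {
            bmp.compress(Bitmap.CompressFormat.JPEG, 90, fos);
        } finally {
            fos.close();
            bmp.recycle();
        }
    }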
Related
I have a set of video animations with an alpha channel. I would like the user of the Android app to record a video while an animation overlays the camera view at the same time. I have searched the Internet but haven't found any solution. Do you have any ideas on how this can be achieved?
For Preview:
You could try using OpenCV to read the video frame by frame, overlay it on the camera frame, and display the result on a TextureView.
[EDIT]: This doesn't work, since OpenCV on Android can't extract frames from a video file; it has no native FFmpeg support.
For Recording:
You cannot record video this way; your best bet is to use FFmpeg to overlay the animation video on the recorded video (see the sketch after the links below).
OpenCV4Android: http://docs.opencv.org/2.4/doc/tutorials/introduction/android_binary_package/dev_with_OCV_on_Android.html
Read a video frame by frame: http://answers.opencv.org/question/5768/how-can-i-get-one-single-frame-from-a-video-file/
Blending frames in OpenCV: http://docs.opencv.org/2.4/doc/tutorials/core/adding_images/adding_images.html
FFmpeg on Android: https://android-arsenal.com/details/1/931
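To make the recording suggestion concrete, this is roughly the kind of FFmpeg invocation meant here. The file names and overlay position are placeholders, and the arguments would be passed to whichever ffmpeg binary or wrapper you bundle (for example the library linked above). The overlay filter respects the alpha channel of the second input, so the animation shows through over the camera footage.

    // Illustrative argument list only (paths are placeholders).
    String[] ffmpegArgs = {
            "-i", "/sdcard/camera_recording.mp4",   // recorded camera video
            "-i", "/sdcard/animation_alpha.webm",   // animation with an alpha channel
            "-filter_complex", "[0:v][1:v]overlay=0:0",
            "-c:a", "copy",                         // keep the camera audio as-is
            "/sdcard/camera_with_overlay.mp4"
    };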
I play a video with MediaPlayer and modify it using a SurfaceTexture and OpenGL ES 2.0.
The documentation says that
surfaceTexture.updateTexImage();
will "update the texture image to the most recent frame from the image stream".
So if I call updateTexImage() twice, the texture image will not necessarily be the 2nd frame of the video?
If that is the case, then I guess there is no way to control the speed of the video with MediaPlayer and OpenGL?
Yes, if you call updateTexImage() twice, the texture may not hold the 2nd frame of the video.
There is no way to make the video play faster (increase the fps) than the input provides. However, by timing your updateTexImage() calls you can slow things down (reduce the displayed fps) by skipping frames.
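As a rough sketch of that pacing idea (assuming a GLSurfaceView in RENDERMODE_WHEN_DIRTY; glSurfaceView, surfaceTexture and drawTexturedQuad() stand in for your own objects): frames that arrive faster than the chosen interval are simply never latched, so the displayed frame rate drops while the player's own clock is unaffected.

    private static final long MIN_FRAME_INTERVAL_MS = 100;  // cap the displayed rate at ~10 fps
    private volatile boolean frameAvailable;
    private long lastUpdateMs;

    // SurfaceTexture.OnFrameAvailableListener (called on an arbitrary thread)
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        frameAvailable = true;
        glSurfaceView.requestRender();
    }

    // GLSurfaceView.Renderer, running on the GL thread
    @Override
    public void onDrawFrame(GL10 unused) {
        long now = SystemClock.elapsedRealtime();
        if (frameAvailable && now - lastUpdateMs >= MIN_FRAME_INTERVAL_MS) {
            surfaceTexture.updateTexImage();   // latches only the most recent frame; older ones are dropped
            lastUpdateMs = now;
            frameAvailable = false;
        }
        drawTexturedQuad();                    // placeholder for your existing OES-texture draw call
    }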
I am doing a project on image processing stuff. I receive a raw h264 video stream in real time and decode it using MediaCodec. I have successfully displayed the decoded video on a TextureView or SurfaceView. Now I want to process each frame, do something to it using OpenCV4Android and show the updated video frame on the screen. I know OpenCV has a sample project that demonstrates how to process video frames from the phone camera, but I wonder how to do it if I have another video source.
Also I have some questions on TextureView:
What does onSurfaceTextureUpdated() from SurfaceTextureListener do? If I call getBitmap() in that callback, does that mean I get each frame of the video? And what about SurfaceTexture.OnFrameAvailableListener?
Is it possible to use a hidden TextureView as an intermediary, extract its frames for processing, and render them back to another surface, say an OpenGL ES texture, for display?
The various examples in Grafika that use Camera as input can also work with input from a video stream. Either way you send the video frame to a Surface.
If you want to work with a frame of video in software, rather than on the GPU, things get more difficult. You either have to receive the frame on a Surface and copy it to a memory buffer, probably performing an RGB-to-YUV color conversion in the process, or you have to get the YUV buffer output from MediaCodec. The latter is tricky because a few different formats are possible, including Qualcomm's proprietary tiled format.
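A sketch of that second, ByteBuffer-output route looks like this (pre-API-21 style; decoder, format, and processFrame() are placeholders for however you feed the H.264 stream in and hand frames to OpenCV):

    decoder.configure(format, null /* no Surface: output goes to ByteBuffers */, null, 0);
    decoder.start();
    ByteBuffer[] outputBuffers = decoder.getOutputBuffers();
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();

    int index = decoder.dequeueOutputBuffer(info, 10000 /* timeout in microseconds */);
    if (index >= 0) {
        ByteBuffer yuv = outputBuffers[index];      // raw YUV; the layout is device-dependent
        // Check decoder.getOutputFormat().getInteger(MediaFormat.KEY_COLOR_FORMAT):
        // it may be COLOR_FormatYUV420Planar, ...SemiPlanar, or a vendor format
        // (e.g. Qualcomm's tiled layout), and the conversion code must handle it.
        processFrame(yuv, info.offset, info.size);  // hypothetical hook into your OpenCV code
        decoder.releaseOutputBuffer(index, false /* don't render */);
    } else if (index == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        outputBuffers = decoder.getOutputBuffers();
    }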
With regard to TextureView:
onSurfaceTextureUpdated() is called whenever TextureView receives a new frame. You can use getBitmap() to get every frame of the video, but you need to pace the video playback to match your filter speed -- TextureView will drop frames if you fall behind.
You could create a "hidden TextureView" by putting other View elements on top of it, but that would be silly. TextureView uses a SurfaceTexture to convert the video frames to OpenGL ES textures, then renders them as part of drawing the View UI. The bitmap data is retrieved with glReadPixels(). You can just use these elements directly. The bigflake ExtractMpegFramesTest demonstrates this.
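For completeness, the onSurfaceTextureUpdated() / getBitmap() route described above looks roughly like this (textureView and processWithOpenCv() are placeholders). Keep in mind that each getBitmap() call is a GPU-to-CPU copy, so slow processing here means dropped frames unless playback is paced.

    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
            // start the MediaPlayer / MediaCodec output on new Surface(st) here
        }
        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) { }
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }

        @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {
            // Called once for every frame delivered to the TextureView.
            Bitmap frame = textureView.getBitmap();   // copy of the latest frame
            processWithOpenCv(frame);                 // hypothetical processing hook
        }
    });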
So, in my application I am able to apply effects (like a blur or Gaussian filter) to video that comes from the camera, using the GPUImage library.
Basically, the library takes the input from the camera, gets the raw byte data, converts it from YUV to RGBA format, then applies the effects to this image and displays it on the Surface of a GLSurfaceView using OpenGL. Finally, to the user it looks like a video with effects applied.
Now I want to record the frames of that Surface as a video using the MediaCodec API.
But this discussion says that we cannot pass a predefined Surface to MediaCodec.
I have seen samples at bigflake where the Surface is created with MediaCodec.createInputSurface(), but in my case the Surface comes from the GLSurfaceView.
So, how can I record the frames of a Surface as a video?
I will record the audio in parallel, merge that video and audio using FFmpeg, and present the result to the user as a video with effects applied.
You can see a complete example of this in Grafika.
In particular, the "Show + capture camera" activity records camera output to .mp4. It also demonstrates applying some simple image processing techniques in the GL shader. It uses a GLSurfaceView and a convoluted dance to keep the recording going across orientation changes.
Also possibly of interest, the "Record GL app with FBO" activity records OpenGL ES rendering in a couple of different ways. It uses a plain SurfaceView and is much more straightforward.
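If it helps, the core encoder setup behind those recordings follows the standard createInputSurface() pattern; a trimmed sketch (the constants are typical values, not requirements) looks like this:

    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface inputSurface = encoder.createInputSurface();   // API 18+
    encoder.start();
    // Anything you render into inputSurface with OpenGL ES (via an EGL window surface)
    // gets encoded; drain the encoder's output into a MediaMuxer to produce the .mp4
    // (see Grafika's VideoEncoderCore for the full drain loop).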
What I need to do is decode video frames and render them onto a trapezoidal surface. I'm using Android 2.2 as my development platform.
I'm not using the MediaPlayer service, since I need access to the decoded frames.
Here's what I have so far:
I am using the Stagefright framework to extract decoded video frames.
Each frame is then converted from YUV420 to RGB format.
The converted frames are then copied to a texture and rendered to an OpenGL surface.
Note that I am using Processing and not using OpenGL calls directly.
So now my problems are:
I can only decode MP4 files with Stagefright.
The rendering is too slow, around 100 ms for a 320x420 frame.
There is no audio yet; I can only render video, and I still don't know how to synchronize the playback of the audio frames.
So for my questions...
How can I support other video formats? Should I stick with Stagefright or switch to FFmpeg?
How can I improve the performance? I need to be able to support at least 720p.
Should I use OpenGL calls directly instead of Processing? Will this improve the performance?
How can I sync the audio frames during playback?
Adding other video formats and codecs to Stagefright
If you have parsers for the "other" video formats, then you need to implement a Stagefright media extractor plug-in and integrate it into AwesomePlayer. Similarly, if you have OMX components for the required video codecs, you need to integrate them into the OMXCodec class.
Using FFmpeg components in Stagefright, or using an FFmpeg-based player instead of Stagefright, does not seem trivial.
However, if the required formats are already available in OpenCORE, then you can modify the Android stack so that OpenCORE gets chosen for those formats. You would need to port the logic for getting the YUV data to OpenCORE
(and get your hands dirty with MIOs).
Playback performance
SurfaceFlinger, which is used for normal playback, uses overlays for rendering. It usually provides around 4-8 video buffers (from what I have seen so far). So check how many buffers you are getting in your OpenGL rendering path; increasing the number of buffers will definitely improve the performance.
Also, check the time taken for the YUV to RGB conversion. You can optimize it, or use an open-source library, to improve performance.
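As a rough baseline of what that conversion costs per pixel, a plain Java I420-to-ARGB loop looks like the sketch below (assuming planar Y, then U, then V, with no row padding); a NEON or GPU-shader version of the same arithmetic is what you would optimize toward.

    static void yuv420ToArgb(byte[] yuv, int[] argb, int width, int height) {
        int frameSize = width * height;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int yy = yuv[y * width + x] & 0xFF;
                int u  = (yuv[frameSize + (y >> 1) * (width >> 1) + (x >> 1)] & 0xFF) - 128;
                int v  = (yuv[frameSize + frameSize / 4 + (y >> 1) * (width >> 1) + (x >> 1)] & 0xFF) - 128;

                int r = clamp(yy + (int) (1.402f * v));
                int g = clamp(yy - (int) (0.344f * u + 0.714f * v));
                int b = clamp(yy + (int) (1.772f * u));
                argb[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
    }

    static int clamp(int c) { return c < 0 ? 0 : (c > 255 ? 255 : c); }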
Usually OpenGL is not used for video rendering (it is better known for graphics), so I am not sure about the performance.
Audio Video Sync
The audio time is used as the reference. In Stagefright, AwesomePlayer uses AudioPlayer to play out the audio, and this player implements an interface that provides timing data. AwesomePlayer uses this for rendering video: basically, a video frame is rendered when its presentation time matches that of the audio sample currently being played.
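In sketch form, the decision per video frame is just a clock comparison (the names here are placeholders; in a real player the audio position comes from the audio sink):

    long audioNowUs = audioPlayer.getCurrentPositionUs();      // reference clock from the audio path
    long videoPtsUs = nextVideoFrame.getPresentationTimeUs();  // timestamp of the pending frame

    if (videoPtsUs <= audioNowUs) {
        renderFrame(nextVideoFrame);          // due (or late): render it now
    } else {
        sleepMicros(videoPtsUs - audioNowUs); // early: wait until the audio clock catches up
        renderFrame(nextVideoFrame);
    }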
Shash