I play a video with MediaPlayer and modify it using a SurfaceTexture and OpenGL ES 2.0.
The documentation says that
surfaceTexture.updateTexImage();
will "update the texture image to the most recent frame from the image stream".
So if I call updateTexImage() twice, the texture image will not necessarily be the 2nd frame of the video?
If that's the case, then I guess there is no way to control the speed of the video with MediaPlayer and OpenGL?
Yes, if you call updateTexImage() twice, the result may not be the 2nd frame of the video.
There is no way to make the video play faster (increase the fps) than the input provides. However, by timing your calls to updateTexImage() you can slow it down (reduce the fps), skipping frames.
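For illustration, here is a minimal sketch of that throttling idea, assuming a dedicated GL render thread and an OnFrameAvailableListener; the field and method names are mine, not from the Android API:

```java
// Sketch: slow playback down by latching frames at a reduced rate.
// mSurfaceTexture is assumed to exist; updateTexImage() must be called
// on the thread that owns the GL context.
private final Object mLock = new Object();
private boolean mFrameAvailable = false;

@Override
public void onFrameAvailable(SurfaceTexture st) {
    synchronized (mLock) {
        mFrameAvailable = true;
        mLock.notifyAll();
    }
}

// Called on the GL thread once per frame we actually want to show.
private void awaitAndLatchFrame() throws InterruptedException {
    synchronized (mLock) {
        while (!mFrameAvailable) {
            mLock.wait();
        }
        mFrameAvailable = false;
    }
    mSurfaceTexture.updateTexImage();  // latches the most recent frame
    Thread.sleep(66);                  // ~15fps pacing; frames that arrive
                                       // during the sleep are skipped
}
```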
I modify a video with GLSL shaders, using a SurfaceTexture and OpenGL ES 2.0. I can also encode the resulting video with MediaCodec.
The problem is that the only way I've found to decode the video is with MediaPlayer and SurfaceTexture, but MediaPlayer doesn't have a frame-by-frame decoding option. So right now it's like live encoding/decoding; there is no pause.
I've also tried using seekTo / pause / start, but it would never update the texture.
So would it be possible to decode step by step instead, to follow the encoding process? I'm afraid my current method is not very accurate.
Thanks in advance!
Yes, instead of using MediaPlayer, you need to use MediaExtractor and MediaCodec to decode it (into the same SurfaceTexture that you're already using with MediaPlayer).
An example of this would be ExtractMpegFramesTest at http://bigflake.com/mediacodec/, and possibly also DecodeEditEncodeTest (or, for an Android 5.0+ async version of it, see https://github.com/mstorsjo/android-decodeencodetest).
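As a hedged sketch of what that decode loop looks like (synchronous MediaCodec API, API 21+ buffer accessors; selectVideoTrack() is a hypothetical helper that finds the track whose MIME type starts with "video/"; error handling omitted):

```java
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(videoPath);
int trackIndex = selectVideoTrack(extractor);   // hypothetical helper
extractor.selectTrack(trackIndex);
MediaFormat format = extractor.getTrackFormat(trackIndex);

MediaCodec decoder = MediaCodec.createDecoderByType(
        format.getString(MediaFormat.KEY_MIME));
decoder.configure(format, surface, null, 0);    // surface = new Surface(surfaceTexture)
decoder.start();

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean inputDone = false, outputDone = false;
while (!outputDone) {
    if (!inputDone) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer buf = decoder.getInputBuffer(inIndex);
            int size = extractor.readSampleData(buf, 0);
            if (size < 0) {
                decoder.queueInputBuffer(inIndex, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                inputDone = true;
            } else {
                decoder.queueInputBuffer(inIndex, 0, size,
                        extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
    }
    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        boolean render = info.size != 0;
        decoder.releaseOutputBuffer(outIndex, render);
        if (render) {
            // A frame is now pending on the SurfaceTexture: wait for
            // onFrameAvailable, call updateTexImage() on the GL thread,
            // draw/process, then feed the encoder before looping.
        }
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            outputDone = true;
        }
    }
}
decoder.stop(); decoder.release(); extractor.release();
```

Because you drive the loop yourself, decoding advances exactly one frame at a time, which is what lets it follow the encoding process.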
EDIT: Wrong, MediaPlayer's stream cannot be used frame by frame; it seems to only work at "real" speed.
I've actually managed to do it with MediaPlayer, following this answer:
Stack Overflow - SurfaceTexture.OnFrameAvailableListener stops being called
Using counters, you can speed up or slow down the video stream and synchronize it with the preview or the encoding (a sketch of the counter idea follows below).
But if you want to do a real seek to a particular frame, then mstorsjo's solution is much better. In my case, I just wanted to make sure the encoding process was not going faster or slower than the video input stream.
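For what it's worth, an illustrative sketch of the counter idea; all names (mEncoding, mMediaPlayer, mSurfaceTexture, drawAndEncodeFrame) are hypothetical, and the referenced answer has the full details:

```java
// Let the decoder run only slightly ahead of the encoder by comparing
// frame counters; pause/resume MediaPlayer to keep them in step.
private final AtomicInteger mFramesProduced = new AtomicInteger();
private final AtomicInteger mFramesEncoded = new AtomicInteger();
private static final int MAX_LAG = 2;

@Override
public void onFrameAvailable(SurfaceTexture st) {
    mFramesProduced.incrementAndGet();
}

// Encoder loop, running on the GL thread:
void encodeLoop() throws InterruptedException {
    while (mEncoding) {
        int lag = mFramesProduced.get() - mFramesEncoded.get();
        if (lag > MAX_LAG && mMediaPlayer.isPlaying()) {
            mMediaPlayer.pause();              // decoder too far ahead
        } else if (lag <= 1 && !mMediaPlayer.isPlaying()) {
            mMediaPlayer.start();              // encoder caught up
        }
        if (lag > 0) {
            mSurfaceTexture.updateTexImage();  // latch a decoded frame
            drawAndEncodeFrame();              // hypothetical GL draw + encode
            mFramesEncoded.incrementAndGet();
        } else {
            Thread.sleep(5);                   // wait for the next frame
        }
    }
}
```

Keep in mind that updateTexImage() always latches the most recent frame, so if the encoder falls far behind, intermediate frames are silently dropped rather than queued.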
I am working on an image processing project. I receive a raw H.264 video stream in real time and decode it using MediaCodec. I have successfully displayed the decoded video on a TextureView or SurfaceView. Now I want to process each frame, do something to it using OpenCV4Android, and show the updated video frame on the screen. I know OpenCV has a sample project that demonstrates how to process video frames from the phone camera, but I wonder how to do it with another video source.
Also, I have some questions about TextureView:
What does onSurfaceTextureUpdated() from SurfaceTextureListener do? If I call getBitmap() in that callback, does that mean I get each frame of the video? And what about SurfaceTexture.OnFrameAvailableListener?
Is it possible to use a hidden TextureView as an intermediate, extract its frames for processing, and render them back to another surface, say, an OpenGL ES texture, for displaying?
The various examples in Grafika that use Camera as input can also work with input from a video stream. Either way, you send the video frames to a Surface.
If you want to work with a frame of video in software, rather than on the GPU, things get more difficult. You either have to receive the frame on a Surface and copy it to a memory buffer, probably performing an RGB-to-YUV color conversion in the process, or you have to get the YUV buffer output from MediaCodec. The latter is tricky because a few different formats are possible, including Qualcomm's proprietary tiled format.
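One hedged option for the software path on API 21+ is routing the decoder's output through an ImageReader, which hands you YUV planes without writing the color conversion yourself; this assumes the codec supports flexible YUV output, and width, height, format, decoder, and handler come from the surrounding code:

```java
// Give the decoder an ImageReader's Surface instead of a display Surface;
// frames arrive as YUV_420_888 Images with per-plane strides.
ImageReader reader = ImageReader.newInstance(
        width, height, ImageFormat.YUV_420_888, /*maxImages=*/ 2);
decoder.configure(format, reader.getSurface(), null, 0);

reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    Image.Plane[] planes = image.getPlanes();  // Y, U, V with row/pixel strides
    // ... wrap the planes for OpenCV (e.g. build a Mat from the Y plane) ...
    image.close();                             // must release promptly
}, handler);
```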
With regard to TextureView:
onSurfaceTextureUpdated() is called whenever TextureView receives a new frame. You can use getBitmap() to get every frame of the video, but you need to pace the video playback to match your filter speed -- TextureView will drop frames if you fall behind (see the sketch below).
You could create a "hidden TextureView" by putting other View elements on top of it, but that would be silly. TextureView uses a SurfaceTexture to convert the video frames to OpenGL ES textures, then renders them as part of drawing the View UI. The bitmap data is retrieved with glReadPixels(). You can just use these elements directly. The bigflake ExtractMpegFramesTest demonstrates this.
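A minimal sketch of the getBitmap() route mentioned above; processFrame() is a hypothetical OpenCV processing step:

```java
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture st) {
        // Invoked once per frame that reaches the TextureView.
        Bitmap frame = textureView.getBitmap();  // copies the current frame
        processFrame(frame);                     // hypothetical OpenCV step
    }
    @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {}
    @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
    @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
});
```

Remember the caveat above: if processFrame() is slower than the source, TextureView silently drops frames.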
I am using Android MediaPlayer to play a video file on a TextureView. I count the number of times the onSurfaceTextureUpdated callback is invoked, and on some devices it is less than the number of frames in the video, while on others it reaches the total number of frames.
Why is this happening? Isn't the SurfaceTexture supposed to update for every frame? Could it be a TextureView or codec implementation issue?
This is a memory issue.
Try finishing other activities and releasing unneeded data before playing the video on the TextureView.
I want to use GLSurfaceView to render video on Gingerbread (API 10), so that I can use glReadPixels() to get the frame data. But setSurface() in MediaPlayer requires API 14. Is there any way I can get this done, perhaps with support libraries or native code?
Ultimately I want to capture video frames to JPEG images while the video is rendered fullscreen. The cartoon video is in BGRA format (I hope!).
I'm currently working with the Android Jelly Bean MediaCodec API to develop a simple video player.
I extract the tracks and play audio and video in separate threads. The problem is that the video track always plays too fast.
Where could the problem be hidden?
Both audio and video are treated almost the same way, except audio is played via AudioTrack and video is rendered to the surface.
If you render frames at maximum speed you'll hit 60fps on most devices. You need to pace them according to the presentation time stamps provided by the encoder.
For example, if the input is a format supported by Android (e.g. a typical .mp4 file), you can use the MediaExtractor class to extract each frame. The time stamp can be retrieved with getSampleTime(). You want to delay rendering by the difference between timestamps on consecutive frames -- don't assume that the first frame will have a timestamp of zero.
Also, don't assume that video frames appear at a constant rate (e.g. 30fps). For some sources the frames will arrive unevenly.
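A hedged sketch of that pacing loop; extractor, hasMoreFrames(), and renderFrame() stand in for the surrounding player code:

```java
// Delay each frame by the timestamp delta from the previous one, as read
// from MediaExtractor.getSampleTime(). Variable frame durations are
// handled automatically since we never assume a fixed rate.
long prevPtsUs = -1;
while (hasMoreFrames()) {
    long ptsUs = extractor.getSampleTime();     // microseconds
    if (prevPtsUs >= 0) {
        long deltaMs = (ptsUs - prevPtsUs) / 1000;
        if (deltaMs > 0) {
            Thread.sleep(deltaMs);              // crude; real code should also
        }                                       // subtract decode/render time
    }
    prevPtsUs = ptsUs;
    renderFrame();                              // decode + releaseOutputBuffer(..., true)
    extractor.advance();
}
```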
See the "Play video (TextureView)" example in Grafika, particularly the SpeedControlCallback class. The gen-eight-rects.mp4 video uses variable frame durations to exercise it. If you check the "Play at 60fps" box, the presentation time stamps are ignored.