Android - Encode a video while decoding it

I modify a video with GLSL shaders, using a SurfaceTexture and OpenGL ES 2.0. I can also encode the resulting video with MediaCodec.
The problem is that the only way I've found to decode the video is with MediaPlayer and a SurfaceTexture, but MediaPlayer doesn't offer frame-by-frame decoding. So right now it behaves like live encoding/decoding, with no way to pause.
I've also tried using seekTo / pause / start, but the texture never updates.
So would it be possible to decode step by step instead, so the decoder follows the encoding process? I'm afraid my current method is not very accurate.
Thanks in advance!

Yes, instead of using MediaPlayer, you need to use MediaExtractor and MediaCodec to decode it (into the same SurfaceTexture that you're already using with MediaPlayer).
An example of this would be ExtractMpegFramesTest at http://bigflake.com/mediacodec/, and possibly also DecodeEditEncodeTest (or, for an async version targeting Android 5.0 and above, see https://github.com/mstorsjo/android-decodeencodetest).
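For illustration, here is a minimal sketch of that approach, assuming a videoPath string and a videoSurface built from the SurfaceTexture you already have (names are placeholders; error handling and the EGL/GL setup are omitted):

import java.io.IOException;
import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.view.Surface;

void decodeFrameByFrame(String videoPath, Surface videoSurface) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(videoPath);

    // Select the first video track.
    MediaFormat format = null;
    for (int i = 0; i < extractor.getTrackCount(); i++) {
        MediaFormat f = extractor.getTrackFormat(i);
        if (f.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
            extractor.selectTrack(i);
            format = f;
            break;
        }
    }

    MediaCodec decoder = MediaCodec.createDecoderByType(
            format.getString(MediaFormat.KEY_MIME));
    decoder.configure(format, videoSurface, null, 0);  // render into the SurfaceTexture
    decoder.start();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false, outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            int inIndex = decoder.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                ByteBuffer inBuf = decoder.getInputBuffer(inIndex);  // API 21+
                int size = extractor.readSampleData(inBuf, 0);
                if (size < 0) {
                    decoder.queueInputBuffer(inIndex, 0, 0, 0,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    decoder.queueInputBuffer(inIndex, 0, size,
                            extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }
        int outIndex = decoder.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            // render=true pushes exactly one frame to the SurfaceTexture;
            // you control the pacing, so the encoder can keep up.
            boolean render = (info.size != 0);
            decoder.releaseOutputBuffer(outIndex, render);
            // ... wait for onFrameAvailable(), updateTexImage(), draw, encode ...
            outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
        }
    }
    decoder.stop();
    decoder.release();
    extractor.release();
}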

EDIT: Wrong, MediaPlayer's stream cannot be consumed frame by frame; it seems to work only at "real" speed.
I've actually managed to do it with MediaPlayer, following this answer:
stackoverflow - SurfaceTexture.OnFrameAvailableListener stops being called
Using counters, you can speed up or slow down the video stream and synchronize it with the preview or the encoding (a sketch follows below).
But if you want to do a real seek to a particular frame, then mstorsjo's solution is far better. In my case, I just wanted to make sure the encoding process was not running faster or slower than the video input stream.
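For what it's worth, a rough sketch of that counter idea, assuming a lock shared between the GL/encoder thread and the OnFrameAvailableListener (drawFrameWithShaders() and drainEncoder() are placeholders for your own GL draw and MediaCodec drain code):

private final Object lock = new Object();
private int framesAvailable = 0;

@Override
public void onFrameAvailable(SurfaceTexture st) {
    synchronized (lock) {
        framesAvailable++;      // the decoder produced one more frame
        lock.notifyAll();
    }
}

// On the GL/encoder thread: consume exactly one frame per pass, blocking
// until the decoder has actually produced it.
void awaitAndEncodeFrame() throws InterruptedException {
    synchronized (lock) {
        while (framesAvailable == 0) {
            lock.wait();
        }
        framesAvailable--;
    }
    surfaceTexture.updateTexImage();  // must run on the GL thread
    drawFrameWithShaders();
    drainEncoder();
}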

Related

Is it possible to render frames in ExoPlayer?

I am pulling H.264 and AAC frames, and at the moment I am feeding them to MediaCodec, decoding and rendering them myself, but the code is getting too complicated and I need to cover all the corner cases. I was wondering if it's possible to set up an ExoPlayer instance and feed the frames to it as a source.
I can only find that it supports normal files and streams, but not separate frames. Do I need to mux the frames myself, and if so, is there an easy way to do it?
If you mean that you are extracting frames from a video file or a live stream and then want to work on or display them individually, you may find that OpenCV suits your use case.
You can fairly simply open a stream or file, go frame by frame and do what you want with the resulting decoded bitmap.
This answer has a Python and Android example that might be useful: https://stackoverflow.com/a/58921325/334402
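A minimal sketch of that frame-by-frame loop with OpenCV's Java bindings (assuming the videoio module can open your source; on Android this may require an FFmpeg-enabled OpenCV build):

import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

VideoCapture cap = new VideoCapture("input.mp4");
Mat frame = new Mat();
while (cap.read(frame)) {   // read() decodes exactly one frame per call
    // process or display the decoded frame here
}
cap.release();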

Video in Android : change visual properties (e.g. saturation, brightness)

Assuming we have a Surface in Android that displays a video (e.g. H.264) with a MediaPlayer:
1) Is it possible to change the saturation, contrast and brightness of the video displayed on the surface, in real time? For images there is setColorFilter; is there anything similar in Android for processing video frames?
Alternative question (if no. 1 is too difficult):
2) If we would like to export this video with, e.g., increased saturation, we should use a codec, e.g. MediaCodec. What technology (method, class, library, etc.) should we use before the codec/save step to apply the saturation change?
For display only, one easy approach is to use a GLSurfaceView, a SurfaceTexture to render the video frames, and a MediaPlayer. Prokash's answer links to an open source library that shows how to accomplish that. There are a number of other examples around if you search those terms together. Taking that route, you draw video frames to an OpenGL texture and create OpenGL shaders to manipulate how the texture is rendered. (I would suggest asking Prokash for further details and accepting his answer if this is enough to fill your requirements.)
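As an illustration of the shader part, here is a minimal fragment shader (held as a Java string) that scales saturation; uSaturation is an assumed uniform you would set each frame, with 1.0 leaving the frame unchanged:

// The external-OES sampler is required because SurfaceTexture frames
// arrive as GL_TEXTURE_EXTERNAL_OES textures.
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES uTexture;\n" +
        "uniform float uSaturation;\n" +
        "void main() {\n" +
        "    vec4 c = texture2D(uTexture, vTexCoord);\n" +
        "    float gray = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
        "    gl_FragColor = vec4(mix(vec3(gray), c.rgb, uSaturation), c.a);\n" +
        "}\n";

Brightness and contrast can be handled the same way, e.g. by adding an offset to the color or scaling it around 0.5 before writing gl_FragColor.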
Similarly, you could use the OpenGL tools with MediaCodec and MediaExtractor to decode video frames. The MediaCodec would be configured to output to a SurfaceTexture, so you would not need to do much more than code some boilerplate to get the output buffers rendered. The filtering process would be the same as with a MediaPlayer. There are a number of examples using MediaCodec as a decoder available, e.g. here and here. It should be fairly straightforward to substitute the TextureView or SurfaceView used in those examples with the GLSurfaceView of Prokash's example.
The advantage of this approach is that you have access to all the separate tracks in the media file. Because of that, you should be able to filter the video track with OpenGL and straight copy other tracks for export. You would use a MediaCodec in encode mode with the Surface from the GLSurfaceView as input and a MediaMuxer to put it all back together. You can see several relevant examples at BigFlake.
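A rough sketch of the export side, assuming the filtered GL frames are rendered into the encoder's input Surface (the output path and the numbers are placeholders):

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.view.Surface;

MediaFormat outFormat = MediaFormat.createVideoFormat("video/avc", 1280, 720);
outFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
outFormat.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
outFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
outFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(outFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface();  // draw the filtered GL frames here
encoder.start();

MediaMuxer muxer = new MediaMuxer("/sdcard/out.mp4",
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// On MediaCodec.INFO_OUTPUT_FORMAT_CHANGED: muxer.addTrack(...), muxer.start(),
// then muxer.writeSampleData(...) for every drained encoder buffer.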
You can use a MediaCodec without a Surface to access decoded byte data directly and manipulate it that way. This example illustrates that approach. You can manipulate the data and send it to an encoder for export or render it as you see fit. There is some extra complexity in dealing with the raw byte data. Note that I like this example because it illustrates dealing with the audio and video tracks separately.
You can also use FFmpeg, either in native code or via one of the Java wrappers out there. This option is geared more towards export than immediate playback. See here or here for some libraries that attempt to make FFmpeg available from Java. They are basically wrappers around the command-line interface. You would need to do some extra work to manage playback via FFmpeg, but it is definitely doable.
If you have questions, feel free to ask, and I will try to expound upon whatever option makes the most sense for your use case.
If you are using a player that supports video filters, then you can do that.
An example of such a player is VLC, which is built around FFmpeg [1].
VLC is pretty easy to compile for Android. Then all you need is libvlc (the aar file) and you can build your own app. See the compile instructions here.
You will also need to write your own filter module; just duplicate an existing one and modify it. Needless to say, VLC offers strong transcoding and streaming capabilities.
As powerful as VLC for Android is, it has one huge drawback: video filters cannot work with hardware decoding (on Android only). This means the entire video processing runs on the CPU.
Your other option is to use GLSL / OpenGL over surfaces such as GLSurfaceView and TextureView, which guarantees GPU acceleration.

MediaCodecMuxer encode video too slow

I am using MediaCodec and MediaMuxer to encode videos, but the process is too slow: sometimes a 60-second video takes more than 90 seconds to encode. The encoding approach comes from ExtractDecodeEditEncodeMuxTest (BigFlake), and I ported that example to the JNI layer. I don't know whether the encoding is slow because my code uses reflection to call the Java API, or because of the swap process between the GL display and the MediaCodec input surface. I use eglCreateWindowSurface to create the GL surface; I wonder if I could use eglCreatePbufferSurface to create an off-screen surface instead, and whether that might speed up the encoding.
Can anyone give some advice? Thanks!
I sped it up by encoding audio and video on different threads, and sped up the audio encoding by enlarging the audio write buffer.
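A sketch of that split, where videoEncoderLoop and audioEncoderLoop are hypothetical Runnables wrapping your existing MediaCodec drain loops:

Thread videoThread = new Thread(videoEncoderLoop, "video-encoder");
Thread audioThread = new Thread(audioEncoderLoop, "audio-encoder");
videoThread.start();
audioThread.start();
videoThread.join();
audioThread.join();
// Note: MediaMuxer.writeSampleData() is not thread-safe, so if both loops
// feed one muxer, guard the write calls with a shared lock.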

Mediamuxer produces corrupted video when samples are written in batch

I'm trying to use Android's MediaMuxer and MediaCodec to produce MP4 videos.
If I drain frames from the codec directly to the muxer by calling writeSampleData(), everything works fine and the correct video is produced.
But if I try to first store these frames on an array and decide later to send them to the muxer, I'm unable to produce a working video, even if the presentation timestamps are correct.
For some reason, it seems that the MediaMuxer output depends not only on the presentation timestamps but also on the actual time at which writeSampleData() is called, although my understanding is that correct timestamps should be enough.
Can anyone shed some light on this issue?
Thanks mstorsjo and fadden. I actually had a combination of errors that kept me from understanding what was really going on. Both your questions led me to the correct code and to the conviction that using writeSampleData() is not time-sensitive.
Yes, I was getting the wrong buffers at first. The problem was not initially noticeable because the muxer was writing the frames before the buffers got rewritten. When I introduced the delays and decided to duplicate the buffers' contents, I hit another issue (basically a race condition) and concluded that this was not the case.
What this code does (for the SmartPolicing project) is capture video and audio to create an MP4 file. I could have used MediaRecorder (that was the initial solution), but we also wanted to intercept the frames and stream the video over the web, so we dropped MediaRecorder and created a custom solution.
Now it is running smoothly. Thanks a lot, guys.
Are you sure you actually store the complete data for the frames to be written, not only the buffer indices?
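In other words, the bytes must be copied out before releaseOutputBuffer() is called, because the codec recycles its buffers immediately. A sketch of the copy, assuming a pendingSamples queue and a small Sample holder class of your own:

ByteBuffer out = codec.getOutputBuffer(outIndex);   // API 21+
byte[] copy = new byte[info.size];
out.position(info.offset);
out.limit(info.offset + info.size);
out.get(copy);                                      // deep copy of the sample data

MediaCodec.BufferInfo infoCopy = new MediaCodec.BufferInfo();
infoCopy.set(0, copy.length, info.presentationTimeUs, info.flags);
pendingSamples.add(new Sample(ByteBuffer.wrap(copy), infoCopy)); // hypothetical holder
codec.releaseOutputBuffer(outIndex, false);
// Later: muxer.writeSampleData(trackIndex, sample.buffer, sample.info);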

Android MediaRecorder API keeps cropping the video bitrate

I've been working with the MediaRecorder API for a while. I thought all the problems were behind me, but I guess I was wrong.
I'm using the MediaRecorder API to record video to a file.
When I use setProfile with a high-quality profile I get good quality, but when I try to set the parameters manually (as in the code below), the quality is bad, since for some reason the bitrate gets clipped.
I want to get 720p at 1 fps.
I keep getting the following warning:
WARN/AuthorDriver(268): Video encoding bit rate is set to 480000 bps
The code I'm running:
m_MediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
m_MediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
m_MediaRecorder.setVideoSize(1280, 720);
m_MediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
m_MediaRecorder.setVideoFrameRate(1);
m_MediaRecorder.setVideoEncodingBitRate(8000000);
Any idea?
Thanks a lot.
Found the solution, though it's very weird: setting the bit rate before setting the video encoder somehow solved the problem.
The only question is whether it is a bug in Google's code or something else I don't understand.
Original:
m_MediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
m_MediaRecorder.setVideoFrameRate(1);
m_MediaRecorder.setVideoEncodingBitRate(8000000);
Solution:
m_MediaRecorder.setVideoEncodingBitRate(8000000);
m_MediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
m_MediaRecorder.setVideoFrameRate(1);
The documentation for setVideoEncodingBitRate() says:
Sets the video encoding bit rate for recording. Call this method before prepare(). Prepare() may perform additional checks on the parameter to make sure whether the specified bit rate is applicable, and sometimes the passed bitRate will be clipped internally to ensure the video recording can proceed smoothly based on the capabilities of the platform.
Because the MediaRecorder API is dealing with a hardware encoding chip of some sort, that is different from device to device, it can't always give you every combination of codec, frame size, frame rate and encoding bitrate you ask for.
Your needs are somewhat unusual, in that you are trying to record at 1 fps. If you are developing your app for Honeycomb, there is a "time lapse" API for MediaRecorder along with an associated setCaptureRate() call that might be useful.
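For reference, a minimal sketch of the time-lapse route (API 11+; CamcorderProfile support varies by device, so treat this as a starting point):

m_MediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
// Time-lapse profile: the file is encoded at the profile's normal frame rate.
m_MediaRecorder.setProfile(CamcorderProfile.get(
        CamcorderProfile.QUALITY_TIME_LAPSE_720P));
m_MediaRecorder.setCaptureRate(1.0);  // capture one frame per second

Note that a time-lapse recording plays back at the profile's frame rate, so the result is a sped-up video rather than a true 1 fps stream; whether that fits depends on why you need 1 fps.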
