I am using MediaCodec with MediaMuxer to encode videos, but the process is too slow: a 60-second video sometimes takes more than 90 seconds to encode. My encoding approach comes from ExtractDecodeEditEncodeMuxTest (bigflake), which I have ported to the JNI layer. I don't know whether the encoding is so slow because my code uses reflection to call the Java API, or because of the swap between the GL display and the MediaCodec input surface. I use eglCreateWindowSurface to create the GL surface; could using eglCreatePbufferSurface to create an off-screen surface instead speed up the encoding?
Can anyone give some advice? Thanks!
I sped it up by encoding audio and video on different threads, and sped up the audio encoding by enlarging the audio write buffer.
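In case it helps others, the threading idea looks roughly like this; a minimal sketch, assuming each MediaCodec is drained by its own loop (the EncoderDrainLoop class and its wiring are illustrative, not taken from the original code):

    import android.media.MediaCodec;

    // Sketch: drain each encoder on its own thread so a slow audio write
    // never blocks the video path.
    final class EncoderDrainLoop implements Runnable {
        private final MediaCodec codec;
        private volatile boolean done;

        EncoderDrainLoop(MediaCodec codec) { this.codec = codec; }

        @Override
        public void run() {
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            while (!done) {
                int index = codec.dequeueOutputBuffer(info, 10_000 /* us */);
                if (index >= 0) {
                    // hand the encoded buffer to the muxer here, then release it
                    codec.releaseOutputBuffer(index, false);
                    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                        done = true;
                    }
                }
            }
        }
    }

    // Usage: one drain loop per codec.
    // new Thread(new EncoderDrainLoop(videoEncoder)).start();
    // new Thread(new EncoderDrainLoop(audioEncoder)).start();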
I am pulling H.264 and AAC frames, and at the moment I am feeding them to MediaCodec and decoding and rendering them myself, but the code is getting too complicated and I need to cover all the edge cases. I was wondering whether it is possible to set up an ExoPlayer instance and feed the frames to it as a source.
I can only find that it supports regular files and streams, not separate frames. Do I need to mux the frames myself, and if so, is there an easy way to do it?
If you mean that you are extracting frames from a video file or a live stream and then want to work on them or display them individually, you may find that OpenCV suits your use case.
You can fairly simply open a stream or file, go frame by frame, and do what you want with the resulting decoded bitmap.
This answer has a Python and Android example that might be useful: https://stackoverflow.com/a/58921325/334402
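To make this concrete, here is a minimal sketch using OpenCV's Java bindings; the file name is a placeholder, and on Android you would initialize OpenCV through the Android SDK's loader rather than System.loadLibrary:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.videoio.VideoCapture;

    public class FrameGrabber {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            VideoCapture capture = new VideoCapture("input.mp4"); // placeholder path
            Mat frame = new Mat();
            int count = 0;
            while (capture.read(frame)) { // returns false when the stream ends
                // work on the decoded frame (a BGR Mat) here
                count++;
            }
            capture.release();
            System.out.println("Decoded " + count + " frames");
        }
    }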
I am working on a native app (no Java API calls) in which I need to encode the camera's live feed and dump it, while avoiding any memcpy into the encoder's input buffer.
Previously I was able to capture YUV data from the camera using AImageReader and save it, and also to encode it by passing the saved YUV to the encoder's input buffer. Now I want to avoid saving it and then passing it on.
Is there any way to achieve this using only the AMediaCodec API available in the Android NDK?
There is no guarantee that a built-in encoder will accept one of the image formats you receive from the camera, but if it does, there is no need to 'save' the frame. The tricky part is that the camera and the encoder both work asynchronously, so the frames you receive from AImageReader must be queued to be consumed by AMediaCodec. If you don't memcpy these frames into the queue, your camera may stall when no free buffers remain.
But it may be easier and more efficient to wire the encoder to a surface via AMediaCodec_createInputSurface() instead of relying on buffers.
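The pattern is easiest to show with the Java API that AMediaCodec_createInputSurface() mirrors; a hedged sketch with placeholder format values (the NDK version follows the same configure-then-create-surface order):

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;

    // Sketch: let the encoder own its input surface so camera frames reach it
    // without any intermediate buffer copies. Resolution/bitrate are placeholders.
    Surface prepareEncoderSurface() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 6_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // createInputSurface() must be called between configure() and start()
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();
        // Hand inputSurface to the camera as an output target; frames then flow
        // to the encoder directly.
        return inputSurface;
    }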
Currently I'm trying to concatenate multiple video files, and the easiest way to do that is with MP4Parser. However, MP4Parser requires the videos to have the same dimensions and frame rates, because it only manipulates the containers.
fadden has said that MP4 supports variable-frame-rate video and that MediaCodec can generate it, so I'm thinking of using the MediaCodec suite for this task.
Assuming that I have 3 videos to concatenate, I'm thinking of having 3 MediaExtractor instances and 3 MediaCodec decoders, one for each video, and a single MediaCodec encoder that will put the decoded buffers into the final video file.
The extractors and decoders will be run separately, one after the other, and they will feed into the same encoder. However, I'm concerned about the encoder's EOS flag.
Can I hold that flag off until the third decoder has finished? Should I also use a circular buffer for this task?
Yes, you can send the EOS flag whenever you want, as long as you do not send a new frame to the encoder after it. In fact, you should not send the EOS flag while you still want to feed more video frames (see the sketch after the list below).
A few things you might want to know:
It is safer to configure the second decoder after you release the first one.
Some devices may not allow you to allocate multiple decoders, especially when decoding high-resolution video.
You should add a bias to the presentation times of the second and third videos (obviously).
Yes, MediaCodec supports variable frame rate, but I am not sure whether it supports variable dimensions. You may need to do some resizing or cropping yourself (via OpenGL rendering).
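Here is a minimal sketch of that sequencing, assuming the decoders render into the one encoder's input surface; decodeClipToEncoderSurface() is a hypothetical helper that runs one extractor/decoder pair to completion, releases it, and returns the last presentation timestamp it rendered, in microseconds:

    // Sketch: three clips, one encoder. Bias the presentation times of each
    // clip and signal EOS exactly once, after the last one.
    String[] clips = {"a.mp4", "b.mp4", "c.mp4"}; // placeholder paths
    long ptsOffsetUs = 0;
    for (String clip : clips) {
        // decodeClipToEncoderSurface() is hypothetical: it should release its
        // decoder/extractor before returning, so the next one can be configured.
        long lastPtsUs = decodeClipToEncoderSurface(clip, ptsOffsetUs);
        ptsOffsetUs = lastPtsUs + 33_333; // advance by roughly one 30fps frame
    }
    encoder.signalEndOfInputStream(); // surface-input EOS, only after the final clip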
I modify a video with GLSL shaders, using a SurfaceTexture and OpenGL ES 2.0. I can also encode the resulting video with MediaCodec.
The problem is that the only way I've found to decode the video is with MediaPlayer and the SurfaceTexture, but MediaPlayer doesn't offer frame-by-frame decoding. So right now it's like live encoding/decoding; there is no pause.
I've also tried to use seekTo / pause / start, but the texture would never update.
So would it be possible to decode step by step instead, to follow the encoding process? I'm afraid that my current method is not very accurate.
Thanks in advance!
Yes: instead of using MediaPlayer, you need to use MediaExtractor and MediaCodec to decode the video (into the same SurfaceTexture that you're already using with MediaPlayer).
Examples of this are ExtractMpegFramesTest at http://bigflake.com/mediacodec/, and possibly also DecodeEditEncodeTest (or, for an async version of it targeting Android 5.0 and above, see https://github.com/mstorsjo/android-decodeencodetest).
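Condensed, the decode loop in those tests looks roughly like this; a sketch with error handling and proper track selection omitted, where outputSurface wraps the SurfaceTexture you already have:

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import android.media.MediaFormat;

    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(videoPath); // placeholder path
    int track = 0; // assume track 0 is the video track; select it properly in real code
    extractor.selectTrack(track);
    MediaFormat format = extractor.getTrackFormat(track);

    MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
    decoder.configure(format, outputSurface, null, 0); // outputSurface = new Surface(surfaceTexture)
    decoder.start();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false, outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            int inIndex = decoder.dequeueInputBuffer(10_000);
            if (inIndex >= 0) {
                int size = extractor.readSampleData(decoder.getInputBuffer(inIndex), 0);
                if (size < 0) {
                    decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }
        int outIndex = decoder.dequeueOutputBuffer(info, 10_000);
        if (outIndex >= 0) {
            // true = render this frame into the SurfaceTexture; you decide when
            // to loop again, which is what makes the decoding step-by-step
            decoder.releaseOutputBuffer(outIndex, info.size > 0);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) outputDone = true;
        }
    }
    decoder.stop();
    decoder.release();
    extractor.release();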
EDIT: I was wrong; MediaPlayer's stream cannot be consumed frame by frame. It seems to work only at "real" speed.
I actually managed to do it with MediaPlayer, following this answer:
stackoverflow - SurfaceTexture.OnFrameAvailableListener stops being called
Using counters, you can speed up or slow down the video stream and synchronize it with the preview or the encoding.
But if you want to do a real seek to a particular frame, then mstorsjo's solution is much better. In my case, I just wanted to make sure the encoding process was not running faster or slower than the video input stream.
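The synchronization in that linked answer boils down to a guarded wait on onFrameAvailable, much like the awaitNewImage() helper in ExtractMpegFramesTest; a minimal sketch:

    import android.graphics.SurfaceTexture;

    final Object frameLock = new Object();
    final boolean[] frameAvailable = {false};

    surfaceTexture.setOnFrameAvailableListener(st -> {
        synchronized (frameLock) {
            frameAvailable[0] = true;
            frameLock.notifyAll();
        }
    });

    // On the thread that owns the GL context: block until the next frame lands.
    synchronized (frameLock) {
        try {
            while (!frameAvailable[0]) {
                frameLock.wait(2_500);
                // timeout: a stalled producer would otherwise hang us forever
                if (!frameAvailable[0]) throw new RuntimeException("frame wait timed out");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        frameAvailable[0] = false;
    }
    surfaceTexture.updateTexImage(); // must be called on the thread with the GL context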
I am working on an implementation of one of the Android test cases for recording a preview texture with the new MediaCodec and MediaMuxer APIs of Android 4.3.
I've managed to record the preview stream at a frame rate of about 30fps by setting the recordingHint in the camera parameters.
However, I ran into a delay/lag problem and don't really know how to fix it. When recording the camera preview with fairly standard quality settings (1280x720, bitrate of ~8,000,000), the preview and the encoded material suffer from occasional lags. To be more specific: this lag occurs about every 2-3 seconds and lasts about 300-600ms.
By tracing the delay I was able to figure out that it comes from the following line of code in the "drainEncoder" method:
mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
This line is called in a loop when the encoder has data available for muxing. Currently I don't record audio, so only the H.264 stream is converted into MP4 format by the MediaMuxer.
I don't know if this has something to do with the delay, but it always occurs when the loop needs two iterations to dequeue all the available data from the encoder (to be even more specific, it always occurs in the first of those two iterations). In most cases one iteration is enough.
Since there is not much information online about these new APIs, any help is very appreciated!
I suspect you're getting bitten by the MediaMuxer disk write. The best way to be sure is to run systrace during recording and see what's actually happening during the pause. (systrace docs, explanation, bigflake example -- as of right now only the latter is updated for Android 4.3)
If that's the case, you may be able to mitigate the problem by running the MediaMuxer instance on a separate thread, feeding the H.264 data to it through a synchronized queue.
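A sketch of that threaded-muxer idea; note that the encoded bytes must be copied out of the codec's buffer before it is released back to the encoder (the queue depth and timeouts are placeholder choices):

    import android.media.MediaCodec;
    import android.media.MediaMuxer;
    import java.nio.ByteBuffer;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Sketch: decouple the encoder drain loop from disk writes. Samples go
    // into a bounded queue; a dedicated thread owns the MediaMuxer and absorbs
    // slow flash writes without stalling the camera/encoder.
    final class MuxerThread extends Thread {
        private static final class Sample {
            final ByteBuffer data;
            final MediaCodec.BufferInfo info;
            Sample(ByteBuffer data, MediaCodec.BufferInfo info) { this.data = data; this.info = info; }
        }

        private final BlockingQueue<Sample> queue = new ArrayBlockingQueue<>(30);
        private final MediaMuxer muxer;
        private final int trackIndex;
        private volatile boolean done;

        MuxerThread(MediaMuxer muxer, int trackIndex) {
            this.muxer = muxer;
            this.trackIndex = trackIndex;
        }

        // Called from drainEncoder(): copy the sample so the codec buffer can
        // be released immediately afterwards.
        void submit(ByteBuffer encodedData, MediaCodec.BufferInfo info) throws InterruptedException {
            ByteBuffer copy = ByteBuffer.allocate(info.size);
            encodedData.position(info.offset);
            encodedData.limit(info.offset + info.size);
            copy.put(encodedData);
            copy.flip();
            MediaCodec.BufferInfo infoCopy = new MediaCodec.BufferInfo();
            infoCopy.set(0, info.size, info.presentationTimeUs, info.flags);
            queue.put(new Sample(copy, infoCopy)); // blocks only when 30 samples are pending
        }

        void finish() { done = true; }

        @Override public void run() {
            try {
                while (!done || !queue.isEmpty()) {
                    Sample s = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (s != null) muxer.writeSampleData(trackIndex, s.data, s.info);
                }
            } catch (InterruptedException ignored) {
                // exit and clean up
            } finally {
                muxer.stop();
                muxer.release();
            }
        }
    }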
Do these pauses happen regularly, every 5 seconds? The CameraToMpegTest example configures the encoder to output an I-frame every 5 seconds (with an expected frame rate of 30fps), which results in a full-sized frame being output rather than tiny deltas.
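If the pauses do line up with the key-frame interval, that interval comes from the KEY_I_FRAME_INTERVAL value in the encoder's MediaFormat; CameraToMpegTest sets it like this, where format is the video MediaFormat passed to configure():

    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5); // one I-frame every 5 seconds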
As @fadden points out, this is a disk-write issue that occurs mostly on devices with slower flash write speeds, or if you try to write to the SD card.
I have written up a solution for buffering MediaMuxer's writes in a similar question here.