I'm working through some samples of MediaCodec usage from the BigFlake and Grafika examples.
I can't find any example or way to create a video without a GL surface as the buffer.
My app uses a custom byte array and sends it to the FFmpeg library, which creates and encodes the video file.
But I'd like to add support for Android 4.3/4.4 using MediaCodec.
Is there any way to send a byte[] array to the encoder?
For example, suppose I'd like to use onPreviewFrame(byte[] frame) from the camera.
I don't want to send it to the GL surface, just encode it as is.
Any idea?
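For context, here is roughly what I have in mind (only a sketch, assuming the device encoder accepts COLOR_FormatYUV420SemiPlanar and that the incoming bytes are already laid out that way, which is not guaranteed on every device):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

import java.nio.ByteBuffer;

public class ByteArrayEncoder {
    private MediaCodec encoder;
    private long frameIndex = 0;

    // Assumes the encoder accepts COLOR_FormatYUV420SemiPlanar; a real device
    // may require a different color format, stride or padding.
    public void start(int width, int height) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
    }

    // Called with the byte[] from onPreviewFrame (the camera delivers NV21,
    // so a conversion to the encoder's format is usually needed first).
    public void queueFrame(byte[] frame) {
        int inIndex = encoder.dequeueInputBuffer(10000);
        if (inIndex < 0) {
            return; // no free input buffer right now; this sketch just drops the frame
        }
        ByteBuffer input = encoder.getInputBuffers()[inIndex]; // API 18/19 style buffer array
        input.clear();
        input.put(frame);
        long ptsUs = frameIndex++ * 1000000L / 30; // assume 30 fps timestamps
        encoder.queueInputBuffer(inIndex, 0, frame.length, ptsUs, 0);
        // The encoded output is then drained with dequeueOutputBuffer() elsewhere.
    }
}
```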
Related
I am pulling H.264 and AAC frames, and at the moment I am feeding them to MediaCodec, decoding and rendering them myself, but the code is getting too complicated and I need to cover all cases. I was wondering whether it's possible to set up an ExoPlayer instance and feed the frames to it as a source.
I can only find that it supports normal files and streams, but not separate frames. Do I need to mux the frames myself, and if so, is there an easy way to do it?
If you mean that you are extracting frames from a video file or a live stream, and then want to work on them individually or display them individually, you may find that OpenCV would suit your use case.
You can fairly simply open a stream or file, go frame by frame and do what you want with the resulting decoded bitmap.
This answer has a Python and Android example that might be useful: https://stackoverflow.com/a/58921325/334402
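For illustration, a minimal sketch of that frame-by-frame loop with the OpenCV Java bindings might look like this (the file name is a placeholder, and whether VideoCapture can open a given file or stream depends on the OpenCV build):

```java
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class FrameByFrame {
    public static void main(String[] args) {
        // The OpenCV native library must be loaded first, e.g.
        // System.loadLibrary(Core.NATIVE_LIBRARY_NAME) on desktop or OpenCVLoader on Android.
        VideoCapture capture = new VideoCapture("input.mp4"); // placeholder file/stream
        if (!capture.isOpened()) {
            System.err.println("Could not open the video source");
            return;
        }
        Mat frame = new Mat();
        int count = 0;
        while (capture.read(frame)) { // read() decodes the next frame into 'frame'
            count++;
            // Work on 'frame' here: inspect it, convert it, display it, etc.
        }
        capture.release();
        System.out.println("Decoded " + count + " frames");
    }
}
```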
I'm trying to run some deep learning experiments on Android with video samples, and I've gotten stuck on remuxing videos. I have a couple of questions to organize the information in my head :) I have read some pages: https://vec.io/posts/android-hardware-decoding-with-mediacodec and https://bigflake.com/mediacodec/#ExtractMpegFramesTest but I still have a mess.
My questions:
Can I read a video with MediaExtractor and then pass the data to MediaMuxer to save the video in another file, without using MediaCodec?
If I want to modify frames before saving, can I do that without using a Surface, just by modifying the ByteBuffer? I assume that I need to decode data from MediaExtractor, then modify the content, then encode it to MediaMuxer.
Is a sample the same as a frame in the context of the method MediaExtractor::readSampleData?
Do I need to decode each sample?
This is a brief description of what each class does:
MediaExtractor: Extracts encoded video/audio data
MediaCodec: Depending on how it's configured, it can be a decoder or an encoder.
MediaMuxer: Muxes streams of data into an output file.
This is how your pipeline should generally look:
MediaExtractor -> MediaCodec(As Decoder) -> Your editing -> MediaCodec(As Encoder) -> MediaMuxer
To answer your questions:
MediaExtractor will give you encoded data; if you want to do anything with it, you will have to decode it using a MediaCodec.
It might be possible to do so without a Surface, but it will be pretty limited. Surfaces are the way to go. You can find more info here:
Editing frames and encoding with MediaCodec
A sample can be a video frame or an audio sample.
Yes, you do need to decode samples to edit them.
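As an aside on the first question: for a pure copy with no frame editing, the extractor and muxer can be wired together directly. A minimal remux sketch, assuming a single video track, MP4 output, and placeholder paths and buffer size:

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.media.MediaMuxer;

import java.nio.ByteBuffer;

public class RemuxSketch {
    public static void remux(String inputPath, String outputPath) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(inputPath);

        MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        // Assume the first video track is the one we want; a real app would also copy audio.
        int videoTrack = -1;
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            if (format.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
                extractor.selectTrack(i);
                videoTrack = muxer.addTrack(format);
                break;
            }
        }
        if (videoTrack < 0) throw new IllegalStateException("no video track found");

        muxer.start();
        ByteBuffer buffer = ByteBuffer.allocate(1 << 20); // 1 MiB, a guessed upper bound per sample
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();

        while (true) {
            info.size = extractor.readSampleData(buffer, 0);   // one encoded sample (one video frame)
            if (info.size < 0) break;                          // end of stream
            info.offset = 0;
            info.presentationTimeUs = extractor.getSampleTime();
            info.flags = (extractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0
                    ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0;
            muxer.writeSampleData(videoTrack, buffer, info);   // still encoded; no MediaCodec involved
            extractor.advance();
        }

        muxer.stop();
        muxer.release();
        extractor.release();
    }
}
```

To edit frames, the decode/modify/encode steps from the pipeline above would sit between readSampleData and writeSampleData.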
I am working on a native app (no Java API calls) in which I need to encode the camera's live feed and dump it, while avoiding any memcpy into the encoder's input buffer.
Previously I was able to capture YUV data from the camera using AImageReader and save it, and I was also able to encode it by passing the saved YUV to the encoder's input buffer. But now I want to avoid saving it and then passing it to the encoder.
Is there any way to achieve this using only the AMediaCodec API available in the Android NDK?
There is no guarantee that a built-in encoder will accept one of the image formats that you receive from the camera, but if it does, there is no need to 'save' the frame. The tricky part is that both the camera and the encoder work asynchronously, so the frames that you receive from AImageReader must be queued to be consumed by AMediaCodec. If you don't want to memcpy these frames to the queue, your camera may stumble when there are no free buffers.
But it may be easier and more efficient to wire the encoder to a surface via AMediaCodec_createInputSurface() instead of relying on buffers.
As I described in a previous post, I'm working on an Android mobile app oriented to real-time augmented visualization of a drone's camera view (specifically I'm working on a DJI Phantom 3 Professional with its SDK), using the Wikitude framework for the AR part. Thanks to Alex's response, I implemented my own Wikitude Input Plugin in combination with DJI's Video Stream Decoding.
I have some issues now. First of all, DJI's Video Stream Decoding demo uses FFmpeg for video frame parsing and MediaCodec for hardware decoding. So it parses video frames, decodes the raw video stream data from the DJI camera and outputs the YUV data. You advised me to "get the raw video data from the dji sdk and pass it to the Wikitude SDK": since the Wikitude Input Plugin needs YUV 420 data arranged to be compliant with the NV21 standard in order to provide the custom camera, I should pass it the YUV output of MediaCodec, right?
About this point, I tried to retrieve ByteBuffers from the MediaCodec output (this is possible by setting the Surface parameter to null in the configure() method, which has the effect of invoking a callback and passing the output to an external listener), but I'm having some issues with the colours in the visualization: the decoded video's colours are not right (blue and red seem to be reversed, and there is too much noise when the camera moves). Please note that, when I pass a non-null Surface, after the instruction codec.releaseOutputBuffer(outIndex, true), MediaCodec renders the frames on it and shows the video stream properly, but I need to pass the video stream to the Wikitude Plugin and so I must set the surface to null.
I tried setting different MediaFormat.KEY_COLOR_FORMAT values, but none of them works properly. How can I solve this?
When decoding into bytebuffers with MediaCodec, you can't decide what color format the buffer uses; the decoder decides, and you have to deal with it. Each decoder can use a different format; some of them can be a standard format like COLOR_FormatYUV420Planar (corresponding to I420) or COLOR_FormatYUV420SemiPlanar (corresponding to NV12 - not NV21), while others can use completely proprietary formats.
See e.g. https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#401 for an example on what formats the decoder can return that are supported, and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#963 for a reference showing that it is ok for decoders to return private formats.
You can have a look at e.g. http://git.videolan.org/?p=vlc.git;a=blob;f=modules/codec/omxil/qcom.c;h=301e9150ae66075ca264e83566504802ed57578c;hb=bdc690e9c0e2516c00a6d3733a77a87a25d9b6e3 for an example on how to interpret one common proprietary color format.
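As a rough illustration: if getOutputFormat() happens to report COLOR_FormatYUV420SemiPlanar, the "reversed blue and red" is just the U/V interleave order, and swapping each chroma pair in place gives NV21 for Wikitude. This sketch assumes a tightly packed buffer and ignores the stride, slice-height and crop values that many real decoders add, and it obviously doesn't help with proprietary formats:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

public final class DecoderColorHelper {
    // Reads the color format the decoder actually chose (valid once the codec has
    // reported INFO_OUTPUT_FORMAT_CHANGED or produced its first output buffer).
    public static int reportedColorFormat(MediaCodec decoder) {
        return decoder.getOutputFormat().getInteger(MediaFormat.KEY_COLOR_FORMAT);
    }

    public static boolean isSemiPlanar(int colorFormat) {
        return colorFormat == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar;
    }

    // If the decoder produced NV12 (COLOR_FormatYUV420SemiPlanar), swap each
    // interleaved U/V pair in place to get NV21. 'yuv' is the byte[] copied out of
    // the decoder's output ByteBuffer; width/height are assumed to match the buffer
    // exactly, with no row padding.
    public static void nv12ToNv21InPlace(byte[] yuv, int width, int height) {
        int ySize = width * height;
        for (int i = ySize; i + 1 < yuv.length; i += 2) {
            byte u = yuv[i];
            yuv[i] = yuv[i + 1]; // NV21 stores V before U
            yuv[i + 1] = u;
        }
    }
}
```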
As I understand it, there are two ways of capturing video in Android:
1) using SurfaceView API
2) using MediaRecorder API
I want to capture the H.264-encoded frames using Android's (3.0+) default encoder and send them over the network using RTP.
While using preview callbacks with the SurfaceView and SurfaceHolder classes, we are able to get the raw frames shown as the preview to the user. We were getting the frames in the onPreviewFrame method of the PreviewCallback class.
But those frames are not H.264 encoded.
So I tried the MediaRecorder API to set H.264 encoding and SurfaceView to get the preview frames.
In this case, the preview callbacks are not getting called.
Can you please let me know how to achieve this? Our main aim is to get the H.264-encoded frames (encoded using Android's default codec).
Ref: 1) https://stackoverflow.com/a/8655244/698316
2) Similar issue: http://permalink.gmane.org/gmane.comp.handhelds.android.devel/214422
Can you suggest a way to capture the H.264-encoded frames using Android's default H.264 codec support?
See Spydroid http://code.google.com/p/spydroid-ipcamera/
Basically you let the video encoder write a .mp4 with H.264 to a special file descriptor that calls your code on write. Then strip off the MP4 header and turn the H.264 NALUs into RTP packets.
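A rough sketch of that file-descriptor trick, assuming a local socket pair as the "special file descriptor" (the names here are placeholders, and the MP4-header stripping and RTP packetization that Spydroid does are omitted):

```java
import android.media.MediaRecorder;
import android.net.LocalServerSocket;
import android.net.LocalSocket;
import android.net.LocalSocketAddress;

import java.io.InputStream;

public class RecorderPipe {
    private LocalServerSocket server;
    private LocalSocket receiver;
    private LocalSocket sender;
    private MediaRecorder recorder;

    public InputStream start() throws Exception {
        // A local socket pair acts as the "special file descriptor":
        // MediaRecorder writes into one end, we read the byte stream from the other.
        server = new LocalServerSocket("h264-pipe");            // placeholder socket name
        sender = new LocalSocket();
        sender.connect(new LocalSocketAddress("h264-pipe"));
        receiver = server.accept();

        recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setOutputFile(sender.getFileDescriptor());     // write into the socket, not a file
        recorder.prepare();
        recorder.start();

        // The returned stream carries the MP4 byte stream as it is written; the reader
        // must skip the MP4 header, find the H.264 NALUs and wrap them in RTP packets.
        return receiver.getInputStream();
    }
}
```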