How to retrieve NV21 data from a DJI camera (Phantom 3 Professional drone) - Android

As I described in a previous post, I'm working on an Android mobile app for real-time augmented visualization of a drone's camera view (specifically, I'm working with a DJI Phantom 3 Professional and its SDK), using the Wikitude framework for the AR part. Thanks to Alex's response, I implemented my own Wikitude Input Plugin in combination with DJI's Video Stream Decoding sample.
I have some issues now. First of all, DJI's Video Stream Decoding demo uses FFmpeg for video frame parsing and MediaCodec for hardware decoding: it parses the video frames, decodes the raw video stream data from the DJI camera, and outputs YUV data. You advised me to "get the raw video data from the dji sdk and pass it to the Wikitude SDK": since the Wikitude Input Plugin needs YUV 420 data arranged to be compliant with the NV21 standard in order to provide the custom camera, I should pass it the YUV output of the MediaCodec, right?
On this point, I tried to retrieve ByteBuffers from the MediaCodec output (this is possible by setting the Surface parameter to null in the configure() method, which causes the decoder to invoke a callback and pass the buffers out to an external listener), but I'm having colour issues in the rendered output: the decoded video's colours are wrong (blue and red seem to be swapped, and there is too much noise when the camera moves). (Note that when I pass a non-null Surface, MediaCodec renders the frames on it after the call codec.releaseOutputBuffer(outIndex, true) and shows the video stream properly, but I need to pass the video stream to the Wikitude Plugin, so I must set the Surface to null.)
I tried setting different values for MediaFormat.KEY_COLOR_FORMAT, but none of them works properly. How can I solve this?

When decoding into ByteBuffers with MediaCodec, you can't choose the colour format of the buffer; the decoder decides, and you have to deal with it. Each decoder can use a different format: some use a standard format like COLOR_FormatYUV420Planar (corresponding to I420) or COLOR_FormatYUV420SemiPlanar (corresponding to NV12 - not NV21), while others can use completely proprietary formats.
See e.g. https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#401 for an example of which standard formats a decoder may return, and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#963 for a reference showing that decoders are allowed to return proprietary formats.
You can have a look at e.g. http://git.videolan.org/?p=vlc.git;a=blob;f=modules/codec/omxil/qcom.c;h=301e9150ae66075ca264e83566504802ed57578c;hb=bdc690e9c0e2516c00a6d3733a77a87a25d9b6e3 for an example on how to interpret one common proprietary color format.
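Incidentally, swapped blue and red is the classic symptom of U and V being read in the wrong order (NV12 interpreted as NV21, or I420 as YV12). If your decoder reports one of the two standard formats, you can reorder the chroma yourself. Below is a minimal sketch of such a conversion to NV21; it assumes the width/height come from the decoder's output MediaFormat and that there is no row padding or cropping (real decoders often add both, via the stride and slice-height keys):

    import android.media.MediaCodecInfo;

    // Sketch: normalize a decoder output buffer to NV21 (Y plane followed by
    // interleaved VU). Handles only the two standard formats; proprietary
    // formats need vendor-specific code like the VLC example linked above.
    static byte[] toNv21(byte[] data, int colorFormat, int width, int height) {
        int ySize = width * height;
        byte[] nv21 = new byte[ySize + ySize / 2];
        // The Y plane is laid out identically in I420, NV12 and NV21.
        System.arraycopy(data, 0, nv21, 0, ySize);
        for (int i = 0; i < ySize / 4; i++) {
            if (colorFormat == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar) {
                // I420: Y plane, then U plane, then V plane.
                nv21[ySize + 2 * i] = data[ySize + ySize / 4 + i]; // V
                nv21[ySize + 2 * i + 1] = data[ySize + i];         // U
            } else if (colorFormat == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar) {
                // NV12: Y plane, then interleaved UV -> swap each pair to VU.
                nv21[ySize + 2 * i] = data[ySize + 2 * i + 1];     // V
                nv21[ySize + 2 * i + 1] = data[ySize + 2 * i];     // U
            }
        }
        return nv21;
    }

Query the actual format with codec.getOutputFormat().getInteger(MediaFormat.KEY_COLOR_FORMAT) after you receive INFO_OUTPUT_FORMAT_CHANGED; if it is neither of the two standard values, you are in proprietary-format territory.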

Related

How to feed camera's raw data directly to encoder in android NDK

I am working on a native app (no Java API calls) in which I need to encode the camera's live feed and dump it, while avoiding any memcpy into the encoder's input buffer.
Previously I was able to capture YUV data from the camera using AImageReader and save it, and also to encode it by passing the saved YUV to the encoder's input buffer. But now I want to avoid saving it and then passing it on.
Is there any way to achieve this using only the AMediaCodec API available in the Android NDK?
There is no guarantee that a built-in encoder will accept one of the image formats that you receive from the camera, but if it does, there is no need to 'save' the frame. The tricky part is that the camera and the encoder both work asynchronously, so the frames you receive from AImageReader must be queued to be consumed by AMediaCodec. If you don't want to memcpy these frames into the queue, your camera may stall when no free buffers are left.
But it may be easier and more efficient to wire the encoder to a surface via AMediaCodec_createInputSurface() instead of relying on buffers.
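For reference, the Java-side analog of that call is MediaCodec#createInputSurface(). The sketch below (Java rather than NDK; the resolution, bitrate and frame rate are made-up example values) shows the shape of the Surface-input approach, where frames rendered into the encoder's input Surface never pass through app-managed buffers:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.view.Surface;

    // Sketch: configure an H.264 encoder for Surface input.
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1920, 1080);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 6000000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    // Must be called between configure() and start(); hand this Surface to
    // the camera session as an output target.
    Surface inputSurface = encoder.createInputSurface();
    encoder.start();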

I want to attach Pre-Rolls to videos taken on android devices

I'm using mp4parser, and the videos need to be of the same kind.
I was thinking of using Android's MediaCodec to decode & re-encode the pre-roll video to match the encoding output of the cameras (front & back).
Any suggestion on how this can be done (how to get the specific device encoding params)?
If you want to find out what encoding your Android camera is using, try using this: https://developer.android.com/reference/android/media/CamcorderProfile
This should suffice for detecting the video encoding parameters, including: the file output format, video codec format, video bit rate in bits per second, video frame rate in frames per second, video frame width and height, audio codec format, audio bit rate in bits per second, audio sample rate, and the number of audio channels for recording.
Pulled a lot of the above information from here as well: https://developer.android.com/guide/topics/media/camera#capture-video
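As a rough sketch of reading those parameters and turning them into an encoder format (note the MIME type is hard-coded here as an assumption; CamcorderProfile reports the codec as a MediaRecorder constant, not a MIME string, so verify the mapping for your target devices):

    import android.media.CamcorderProfile;
    import android.media.MediaFormat;

    // Read the device's high-quality recording profile and build a matching
    // MediaFormat for encoding the pre-roll.
    CamcorderProfile profile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
    MediaFormat videoFormat = MediaFormat.createVideoFormat(
            "video/avc", profile.videoFrameWidth, profile.videoFrameHeight); // MIME assumed
    videoFormat.setInteger(MediaFormat.KEY_BIT_RATE, profile.videoBitRate);
    videoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, profile.videoFrameRate);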
As for transcoding videos that are already in the users roll, I found this useful transcoder that was written in pure java using the Android MediaCodec API and can be found here: https://github.com/ypresto/android-transcoder
Also, as rupps mentioned below, you can use FFmpeg, which has proven to work countless times on Android. However, the reason I linked the other transcoder first is that, as the author states, "FFmpeg is the most famous solution for transcoding. But using FFmpeg binary on Android can cause GPL and/or patent issues. Also using native code for Android development can be troublesome because of cross-compiling, architecture compatibility, build time and binary size." So use whichever one you believe better suits you. Here is the link for FFmpeg for Android:
https://github.com/WritingMinds/ffmpeg-android
If you don't want to use a transcoder that someone else made, then I recommend making your own transcoder using the MediaCodec API, which can be found here: https://developer.android.com/reference/android/media/MediaCodec
If you want magic, try this library.
https://github.com/INDExOS/media-for-mobile/
Take a look at the MediaComposer class.
Here's also a code snippet on how it's done.
// factory, progressListener, mediaFileInfo, audioFormat (the source file's
// audio format) and the width/height/bitrate parameters are assumed to be
// set up earlier, as in the m4m samples.
org.m4m.MediaComposer mediaComposer = new org.m4m.MediaComposer(factory, progressListener);
mediaComposer.addSourceFile(mediaUri1);
int orientation = mediaFileInfo.getRotation();
mediaComposer.setTargetFile(dstMediaPath, orientation);

// set video encoder
VideoFormatAndroid videoFormat = new VideoFormatAndroid(videoMimeType, width, height);
videoFormat.setVideoBitRateInKBytes(videoBitRateInKBytes);
videoFormat.setVideoFrameRate(videoFrameRate);
videoFormat.setVideoIFrameInterval(videoIFrameInterval);
mediaComposer.setTargetVideoFormat(videoFormat);

// set audio encoder
AudioFormatAndroid aFormat = new AudioFormatAndroid(audioMimeType,
        audioFormat.getAudioSampleRateInHz(), audioFormat.getAudioChannelCount());
aFormat.setAudioBitrateInBytes(audioBitRate);
aFormat.setAudioProfile(MediaCodecInfo.CodecProfileLevel.AACObjectLC);
mediaComposer.setTargetAudioFormat(aFormat);

mediaComposer.start();

Android - using MediaCodec with byte array only

I'm working through some samples of MediaCodec usage from the BigFlake and Grafika examples.
I can't find any example of, or any way of, creating video without a GL surface as a buffer.
My app uses a custom byte array and sends it to the FFmpeg library, which creates and encodes the video file,
but I'd like to add support for Android 4.3/4.4 via MediaCodec.
Is there any way to send a byte[] array to the encoder?
For example, if I'd like to use onPreviewFrame(byte[] frame) from the camera,
I don't want to send it to the GL surface, just encode it as is.
Any idea?
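For reference, ByteBuffer-mode input (the non-Surface path, as used in the bigflake buffer-to-buffer tests) looks roughly like the sketch below; the preview frame must already match the encoder's expected colour format:

    import java.nio.ByteBuffer;
    import android.media.MediaCodec;

    // Sketch: assumes 'encoder' was configured with CONFIGURE_FLAG_ENCODE and
    // started; call this from onPreviewFrame(byte[] frame, Camera camera).
    void encodeFrame(MediaCodec encoder, byte[] frame, long presentationTimeUs) {
        int inIndex = encoder.dequeueInputBuffer(10000); // timeout in microseconds
        if (inIndex >= 0) {
            // Pre-API-21 style buffer access, matching the 4.3/4.4 target.
            ByteBuffer in = encoder.getInputBuffers()[inIndex];
            in.clear();
            in.put(frame); // layout must match the encoder's KEY_COLOR_FORMAT
            encoder.queueInputBuffer(inIndex, 0, frame.length, presentationTimeUs, 0);
        }
    }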

Decoding Raw H264 stream in android?

I have a project where I have been asked to display a video stream in Android. The stream is raw H.264, and I am connecting to a server and will receive a byte stream from it.
Basically I'm wondering, is there a way to send raw bytes to a decoder in Android and display them on a surface?
I have been successful in decoding H.264 wrapped in an MP4 container using the new MediaCodec and MediaExtractor APIs in Android 4.1; unfortunately I have not found a way to decode a raw H.264 file or stream using these APIs.
I understand that one way is to compile and use FFmpeg, but I'd rather use a built-in method that can use HW acceleration. I also understand RTSP streaming is supported in Android, but this is not an option. Android version is not an issue.
I can't provide any code for this unfortunately, but I'll do my best to explain it based on how I got it to work.
So here is my overview of how I got raw H.264 encoded video to work using the MediaCodec class.
Using the link above, there is an example of getting the decoder set up and how to use it; you will need to set it up for decoding H.264 AVC.
H.264 is made up of NAL units, each starting with a start prefix of three bytes with the values 0x00, 0x00, 0x01, and each unit has a different type depending on the value of the 4th byte, right after these 3 starting bytes. One NAL unit is NOT one frame of video; each frame is made up of a number of NAL units.
Basically I wrote a method that finds each individual unit and passes it to the decoder (one NAL unit being the starting prefix and any bytes thereafter, up until the next starting prefix).
Now if you have the decoder setup for decoding H.264 AVC and have an InputBuffer from the decoder then you are ready to go. You need to fill this InputBuffer with a NAL Unit and pass it back to the decoder and continue doing this for the length of the stream.
But, to make this work I had to pass the decoder a SPS (Sequence Parameter Set) NAL Unit first. This unit has a byte value of 0x67 after the starting prefix (the 4th byte), on some devices the decoder would crash unless it received this Unit first.
Basically until you find this unit, ignore all other NAL Units and keep parsing the stream until you get this unit, then you can pass all other units to the decoder.
Some devices didn't need the SPS first and some did, but you are better off passing it in first.
Now if you had a surface that you passed to the decoder when you configured it then once it gets enough NAL units for a frame it should display it on the surface.
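A minimal sketch of that parsing loop follows (assuming the whole chunk is already in memory; real code must also handle NAL units split across network reads, and this uses the pre-API-21 buffer arrays):

    import java.nio.ByteBuffer;
    import android.media.MediaCodec;

    // Find the next 0x00 0x00 0x01 start prefix at or after 'from', or -1.
    static int findStartPrefix(byte[] buf, int from) {
        for (int i = from; i + 2 < buf.length; i++) {
            if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) return i;
        }
        return -1;
    }

    // Slice the stream into NAL units (prefix included) and queue each one
    // into a decoder configured for "video/avc". Sketch only: no error
    // handling, and it assumes the SPS has already been seen or leads the stream.
    void feedNalUnits(MediaCodec decoder, byte[] stream, long ptsUs) {
        int start = findStartPrefix(stream, 0);
        while (start >= 0) {
            int next = findStartPrefix(stream, start + 3);
            int end = (next >= 0) ? next : stream.length;
            int inIndex = decoder.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                ByteBuffer in = decoder.getInputBuffers()[inIndex];
                in.clear();
                in.put(stream, start, end - start); // one complete NAL unit
                decoder.queueInputBuffer(inIndex, 0, end - start, ptsUs, 0);
            }
            start = next;
        }
    }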
You can download the raw H.264 from the server, offer it via a local HTTP server running on the phone, and then let VLC for Android play it back from that HTTP server. You should use VLC's http/h264:// scheme to force the demuxer to raw H.264 (if you don't force the demuxer, VLC may not be able to recognize the stream, even when the MIME type returned by the HTTP server is set correctly). See
https://github.com/rauljim/tgs-android/blob/integrate_record/src/com/tudelft/triblerdroid/first/VideoPlayerActivity.java#L211
for an example on how to create an Intent that will launch VLC.
Note: raw H.264 apparently has no timing info, so VLC will play it back as fast as possible.
Embedding it in MPEG-TS first would be better; I haven't found an Android lib that will do that yet.
Here are the resources I've found helpful in a similar project:
This video has been super insightful for understanding, at a high level, how MediaCodec handles raw H.264 streams.
This thread goes into a bit more detail on handling the SPS/PPS NALUs specifically. As was mentioned above, you need to separate individual NAL units using the start prefix, and then hand the resulting data to the MediaCodec.
This repo (libstreaming) is a great example of decoding an H.264 stream in Android using RTSP/RTP for transmission.

How to capture H.264 encoded frame in android?

As I understand it, there are two ways of capturing video in Android:
1) using the SurfaceView API
2) using the MediaRecorder API
I want to capture H.264 encoded frames using Android's (3.0+) default encoder, to send them over the network using RTP.
While using preview callbacks with the SurfaceView and SurfaceHolder classes, we are able to get the raw frames shown as a preview to the user; we were getting the frames in the "onPreviewFrame" method of the "PreviewCallback" class.
But those frames are not H.264 encoded.
So, I tried the "MediaRecorder" API to set H.264 encoding, with "SurfaceView" to get the preview frames.
In this case, the preview callbacks do not get called.
Can you please let me know how to achieve this? Our main aim is to get the H.264 encoded frames (which have been encoded using Android's default codec).
Ref: 1) https://stackoverflow.com/a/8655244/698316
2) Similar issue: http://permalink.gmane.org/gmane.comp.handhelds.android.devel/214422
Can you suggest a way to capture the H.264 encoded frames using Android's default H.264 codec support?
See Spydroid http://code.google.com/p/spydroid-ipcamera/
Basically, you let the video encoder write an .mp4 with H.264 to a special file descriptor that calls your code on write, then strip off the MP4 header and turn the H.264 NALUs into RTP packets.
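The plumbing for that trick looks roughly like the sketch below. The hard parts, skipping the container header and packetizing the NALUs as RTP, are only indicated by a comment; see Spydroid for a full implementation, which also has to cope with MP4 headers being finalized at the end of recording:

    import java.io.InputStream;
    import android.media.MediaRecorder;
    import android.os.ParcelFileDescriptor;

    // Point MediaRecorder at the write end of a pipe instead of a real file.
    ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
    MediaRecorder recorder = new MediaRecorder();
    recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    recorder.setOutputFile(pipe[1].getFileDescriptor());
    recorder.prepare();
    recorder.start();

    // Read the muxed bytes as they are written, skip the container header,
    // locate the H.264 NALUs, and hand them to your RTP packetizer.
    InputStream encoded = new ParcelFileDescriptor.AutoCloseInputStream(pipe[0]);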
