I want to attach pre-rolls to videos taken on Android devices

I'm using mp4parser, and the videos need to share the same encoding.
I was thinking of using Android's MediaCodec to decode and re-encode the pre-roll video so that it matches the encoding output of the device's cameras (front and back).
Any suggestions on how this can be done (specifically, how to get the device's encoding parameters)?

If you want to find out what encoding your Android camera is using, try CamcorderProfile: https://developer.android.com/reference/android/media/CamcorderProfile
This should be enough to detect the video encoding, including: the file output format, video codec, video bit rate in bits per second, video frame rate in frames per second, video frame width and height, audio codec, audio bit rate in bits per second, audio sample rate, and number of audio channels for recording.
Much of the above information also comes from here: https://developer.android.com/guide/topics/media/camera#capture-video
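For example, here is a minimal sketch of reading those parameters from CamcorderProfile (the camera id and quality level are just examples; the field names are from the CamcorderProfile class):
CamcorderProfile profile = CamcorderProfile.get(0, CamcorderProfile.QUALITY_HIGH); // 0 = back camera
int fileFormat   = profile.fileFormat;      // e.g. MediaRecorder.OutputFormat.MPEG_4
int videoCodec   = profile.videoCodec;      // e.g. MediaRecorder.VideoEncoder.H264
int videoBitRate = profile.videoBitRate;    // bits per second
int frameRate    = profile.videoFrameRate;  // frames per second
int width        = profile.videoFrameWidth;
int height       = profile.videoFrameHeight;
int audioCodec   = profile.audioCodec;      // e.g. MediaRecorder.AudioEncoder.AAC
int audioBitRate = profile.audioBitRate;    // bits per second
int audioSampleRate = profile.audioSampleRate;
int audioChannels   = profile.audioChannels;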
As for transcoding videos that are already in the user's camera roll, I found a useful transcoder written in pure Java using the Android MediaCodec API: https://github.com/ypresto/android-transcoder
Also, as rupps mentioned below, you can use FFmpeg, which has been proven to work on Android countless times. The reason I linked the other transcoder first is that, as its author states, "FFmpeg is the most famous solution for transcoding. But using FFmpeg binary on Android can cause GPL and/or patent issues. Also using native code for Android development can be troublesome because of cross-compiling, architecture compatibility, build time and binary size." So use whichever one better suits you. Here is the link for FFmpeg for Android:
https://github.com/WritingMinds/ffmpeg-android
If you don't want to use a transcoder that someone else made, then I recommend writing your own with the MediaCodec API: https://developer.android.com/reference/android/media/MediaCodec
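If you do go the do-it-yourself route, a rough sketch of configuring a MediaCodec H.264 encoder from the CamcorderProfile values above might look like this (variable names come from the previous snippet; the decode loop and error handling are omitted):
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_BIT_RATE, videoBitRate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // render the decoded pre-roll frames here
encoder.start();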

If you want magic, try this library.
https://github.com/INDExOS/media-for-mobile/
Take a look at the MediaComposer class.
Here's also a code snippet showing how it's done:
org.m4m.MediaComposer mediaComposer = new org.m4m.MediaComposer(factory, progressListener);
mediaComposer.addSourceFile(mediaUri1);
int orientation = mediaFileInfo.getRotation();
mediaComposer.setTargetFile(dstMediaPath, orientation);
// set video encoder
VideoFormatAndroid videoFormat = new VideoFormatAndroid(videoMimeType, width, height);
videoFormat.setVideoBitRateInKBytes(videoBitRateInKBytes);
videoFormat.setVideoFrameRate(videoFrameRate);
videoFormat.setVideoIFrameInterval(videoIFrameInterval);
mediaComposer.setTargetVideoFormat(videoFormat);
// set audio encoder (audioFormat here is the source clip's audio format info)
AudioFormatAndroid aFormat = new AudioFormatAndroid(audioMimeType, audioFormat.getAudioSampleRateInHz(), audioFormat.getAudioChannelCount());
aFormat.setAudioBitrateInBytes(audioBitRate);
aFormat.setAudioProfile(MediaCodecInfo.CodecProfileLevel.AACObjectLC);
mediaComposer.setTargetAudioFormat(aFormat);
mediaComposer.start();

Related

How to combine 2 .mp4 videos on an Android device?

The goal is to combine two .mp4 videos so that one plays after the other. There are countless references to FFmpeg, which is not a great option since it involves the NDK and makes the project heavy for a fairly small feature.
I came to know that MediaCodec has improved a lot. I need a guide to walk me through it, but I couldn't find any.
I am also looking to combine an .mp3 and a single photo into an .mp4.
Please help.
For the second part I tried the JavaCV library, but it was a dead end.
The code is as follows:
FFmpegFrameGrabber videoFrames = FFmpegFrameGrabber.createDefault(videoSource);
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(videoFile, 200, 200);
recorder.setFrameRate(10);
recorder.start();
videoFrames.start();
// grab a single frame and record it repeatedly for numSeconds at 10 fps
Frame frame = videoFrames.grab();
for (int i = 0; i < (numSeconds * 10); i++) {
    recorder.record(frame);
}
recorder.stop();
It takes 12 seconds just to produce a 5-frame video; the fps is 25.
If you want to merge two video files, the easy way is mp4parser (it has everything you need). The other way is to extract the audio and video tracks with MediaExtractor, then use MediaMuxer to merge the two files into one. Also, make sure the files are identical in terms of resolution, framerate and bitrate, otherwise the merge may not be seamless. In that case you may need to re-encode the files; you can do that with FFmpeg, or with Android's own tools such as MediaCodec.
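For reference, a minimal sketch of the mp4parser append approach (class names are from mp4parser's authoring API; the file paths are placeholders, both inputs are assumed to share codec settings, and exception handling is omitted):
Movie video1 = MovieCreator.build("/sdcard/first.mp4");
Movie video2 = MovieCreator.build("/sdcard/second.mp4");
List<Track> videoTracks = new ArrayList<>();
List<Track> audioTracks = new ArrayList<>();
for (Movie m : new Movie[]{video1, video2}) {
    for (Track t : m.getTracks()) {
        if (t.getHandler().equals("vide")) videoTracks.add(t);
        if (t.getHandler().equals("soun")) audioTracks.add(t);
    }
}
Movie result = new Movie();
result.addTrack(new AppendTrack(videoTracks.toArray(new Track[0])));
result.addTrack(new AppendTrack(audioTracks.toArray(new Track[0])));
Container out = new DefaultMp4Builder().build(result);
FileChannel fc = new FileOutputStream("/sdcard/combined.mp4").getChannel();
out.writeContainer(fc);
fc.close();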
UPDATE:
Creating a video from an audio track and a single frame should be easiest in FFmpeg. If you need a lighter solution, you have to encode your image with MediaCodec and extract the audio from the .mp3 first. After that, you have to mux the resulting video and audio samples into a video file. You can find excellent MediaCodec examples in Grafika and on bigflake.
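A rough sketch of that final muxing step with MediaMuxer (it assumes the image has already been encoded to H.264 by MediaCodec and the audio has been converted to AAC, since MediaMuxer's MP4 output generally expects AAC audio; variable names are placeholders):
MediaMuxer muxer = new MediaMuxer("/sdcard/out.mp4", MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int videoTrack = muxer.addTrack(videoEncoderOutputFormat); // MediaFormat from the video encoder's format change
int audioTrack = muxer.addTrack(aacAudioFormat);           // AAC MediaFormat for the audio
muxer.start();
// for every encoded sample:
//   muxer.writeSampleData(videoTrack /* or audioTrack */, encodedBuffer, bufferInfo);
// with bufferInfo.presentationTimeUs set correctly for each sample
muxer.stop();
muxer.release();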

How to retrieve NV21 data from the camera of a DJI Phantom 3 Professional drone

As I described in a previous post, I'm working on an Android mobile app for real-time augmented visualization of a drone's camera view (specifically a DJI Phantom 3 Professional with its SDK), using the Wikitude framework for the AR part. Thanks to Alex's response, I implemented my own Wikitude Input Plugin in combination with DJI's Video Stream Decoding sample.
I have some issues now. First of all, DJI's Video Stream Decoding demo uses FFmpeg for video frame parsing and MediaCodec for hardware decoding, so it parses the video frames, decodes the raw video stream data from the DJI camera, and outputs YUV data. You advised me to "get the raw video data from the dji sdk and pass it to the Wikitude SDK": since the Wikitude Input Plugin needs YUV 420 data arranged according to the NV21 standard in order to provide the custom camera, I should pass it the YUV output of the MediaCodec, right?
On this point, I tried to retrieve byte buffers from the MediaCodec output (which is possible by setting the Surface parameter to null in configure(), so that a callback is invoked and the data is passed to an external listener), but I'm having colour problems in the visualization: the decoded video's colours are wrong (blue and red seem to be swapped, and there is a lot of noise when the camera moves). Note that when I pass a non-null Surface, after codec.releaseOutputBuffer(outIndex, true) MediaCodec renders the frames on it and shows the video stream properly, but I need to pass the stream to the Wikitude Plugin, so I must set the Surface to null.
I tried setting different MediaFormat.KEY_COLOR_FORMAT values, but none of them works properly. How can I solve this?
When decoding into bytebuffers with MediaCodec, you can't decide what color format the buffer uses; the decoder decides, and you have to deal with it. Each decoder can use a different format; some of them can be a standard format like COLOR_FormatYUV420Planar (corresponding to I420) or COLOR_FormatYUV420SemiPlanar (corresponding to NV12 - not NV21), while others can use completely proprietary formats.
See e.g. https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#401 for an example of which standard formats a decoder may return, and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#963 for a reference showing that it is OK for decoders to return private formats.
You can have a look at e.g. http://git.videolan.org/?p=vlc.git;a=blob;f=modules/codec/omxil/qcom.c;h=301e9150ae66075ca264e83566504802ed57578c;hb=bdc690e9c0e2516c00a6d3733a77a87a25d9b6e3 for an example of how to interpret one common proprietary color format.
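For what it's worth, here is a small sketch of checking which format the decoder actually selected for ByteBuffer output (variable names are placeholders; if it reports NV12, you would still need to swap the interleaved U/V bytes to get NV21 for Wikitude):
int outIndex = codec.dequeueOutputBuffer(bufferInfo, 10000);
if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    MediaFormat outFormat = codec.getOutputFormat();
    int colorFormat = outFormat.getInteger(MediaFormat.KEY_COLOR_FORMAT);
    // 19 = COLOR_FormatYUV420Planar (I420), 21 = COLOR_FormatYUV420SemiPlanar (NV12);
    // anything else is likely a vendor-specific layout you have to handle yourself
    Log.d("Decoder", "output color format: " + colorFormat);
} else if (outIndex >= 0) {
    // read the YUV data from codec.getOutputBuffer(outIndex), convert to NV21 if
    // needed, hand it to the Wikitude plugin, then release the buffer
    codec.releaseOutputBuffer(outIndex, false);
}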

Trying to capture a video on Android with timestamps

I'm trying to implement an Android app that records a video while also writing a file containing the time at which each frame was captured. I tried using MediaRecorder and got to the point where the video is saved, but I couldn't find a way to get the timestamps. I tried something like:
while (previous video file length != current video file length)
write current time to text file;
but this did not seem to work, as the file length doesn't seem to be updated frequently enough (or am I wrong?).
I then tried OpenCV and managed to capture the images frame by frame (so getting timestamps was easy), but I couldn't find a way to join all the frames into one video. I saw answers referring to the NDK and FFmpeg, but I feel there should be an easier solution (perhaps similar to the one I tried above?).
You could use MediaCodec to capture the video, but that gets complicated quickly, especially if you want audio as well. You can recover the timestamps from a video file by walking through it with MediaExtractor: just call getSampleTime() on each frame to get the presentation time in microseconds.
This question is old, but I just needed the same thing. With the camera2 API you can stream to multiple targets, so you can stream from one and the same physical camera to a MediaRecorder and, at the same time, to an ImageReader. The ImageReader has an ImageReader.OnImageAvailableListener whose onImageAvailable callback you can use to query the frames for their timestamps:
image = reader.acquireLatestImage();
long cameraTime = image.getTimestamp();
I'm not sure whether streaming to multiple targets requires certain hardware capabilities (such as multiple image processors), though. I guess modern devices have two or three of them.
EDIT: It looks like every camera2-capable device can stream to up to three targets.
EDIT: Apparently, you can use an ImageReader to capture the individual frames and their timestamps, and then use an ImageWriter to feed those images to downstream consumers such as a MediaCodec, which records the video for you.
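A condensed sketch of the two-target setup (the sizes, the ImageReader format and the callback/handler names are assumptions; error handling and the repeating capture request are only indicated in comments):
ImageReader reader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888, 4);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image != null) {
        long timestampNs = image.getTimestamp(); // same timebase as the capture results
        // write timestampNs to your timestamp file here
        image.close();
    }
}, backgroundHandler);
List<Surface> targets = Arrays.asList(mediaRecorder.getSurface(), reader.getSurface());
cameraDevice.createCaptureSession(targets, sessionStateCallback, backgroundHandler);
// in sessionStateCallback.onConfigured(): build a TEMPLATE_RECORD CaptureRequest,
// addTarget() both surfaces, then call session.setRepeatingRequest(...)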
Expanding on fadden's answer, the easiest way to go is to record using MediaRecorder (the Camera2 sample helped me a lot) and afterwards extract the timestamps via MediaExtractor. I want to add a working MediaExtractor code sample:
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(pathToVideo);
// select the video track first, otherwise getSampleTime() returns -1
for (int i = 0; i < extractor.getTrackCount(); i++) {
    if (extractor.getTrackFormat(i).getString(MediaFormat.KEY_MIME).startsWith("video/")) {
        extractor.selectTrack(i);
        break;
    }
}
do {
    long sampleMillis = extractor.getSampleTime() / 1000;
    // do something with sampleMillis
} while (extractor.advance() && extractor.getSampleTime() != -1);
extractor.release();

Hardware-accelerated video decode for H.264 on Android prior to Jelly Bean

I am working on a video conferencing project. We were using a software codec for encoding and decoding video frames, which works fine for lower resolutions (up to 320p). We plan to support higher resolutions as well, up to 720p. I came to know that hardware acceleration does this job fairly well.
Since the hardware codec API MediaCodec is only available from Jelly Bean onward, I have used it for encoding and decoding and it works fine. But my application has to be supported from 2.3, so I need hardware-accelerated video decoding of 720p H.264 frames at 30 fps.
While researching, I came across the idea of using OMXCodec by modifying the Stagefright framework. I had read that the hardware decoder for H.264 is available from 2.1 and the encoder from 3.0. I have gone through many articles and questions on this site and confirmed that I can go ahead.
I had read about the Stagefright architecture here - architecture - and here - stagefright how it works.
And I read about OMXCodec here - use-android-hardware-decoder-with-omxcodec-in-ndk.
I'm having trouble getting started and have some confusion about the implementation. I would like to have some info about it:
For using OMXCodec in my code, should I build my project with the whole Android source tree, or can I do it by adding some files from the AOSP source (if so, which ones)?
What steps should I follow from scratch to achieve this?
Can someone give me a guideline on this?
Thanks...
The best example describing the integration of OMXCodec in the native layer is the stagefright command-line utility, which can be found in GingerBread itself. That example shows how an OMXCodec is created.
Some points to note:
The input to OMXCodec should be modeled as a MediaSource, so you should ensure that your application handles this requirement. An example of creating a MediaSource-based source can be found in the record utility file as DummySource.
The input to the decoder, i.e. the MediaSource, provides the data through its read method, so your application should provide an individual frame for every read call.
The decoder can be created with a NativeWindow for output buffer allocation. In that case, if you wish to access the buffer from the CPU, you should probably refer to this query for more details.

How to modify FFmpegFrameRecorder.java in javacv to support audio sample rate change?

I want to know if the FFmpeg port from JavaCV supports audio sample rate changes, and if so, I would need some guidelines for changing FFmpegFrameRecorder.java to support it.
Yes, it does. Care should be taken in FFmpegFrameRecorder.java that the SwrContext object is correctly configured and that the number of samples provided to avcodec_fill_audio_frame(..) matches the codec's input frame size.
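From the caller's side, the target rate is requested through the recorder's own setters; a minimal usage sketch (output path and values are placeholders), with the SwrContext/frame-size handling described above assumed to be in place inside FFmpegFrameRecorder:
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder("/sdcard/out.mp4", 640, 480, 2);
recorder.setFormat("mp4");
recorder.setSampleRate(44100);    // desired output sample rate
recorder.setAudioChannels(2);
recorder.setAudioBitrate(128000);
recorder.start();
// ... record frames and audio samples, then recorder.stop() and recorder.release()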
