Trying to capture a video on Android with timestamps

I'm trying to implement an Android app that records a video, while also writing a file containing the times at which each frame was taken. I tried using MediaRecorder, got to a point where I can get the video saved, but I couldn't find a way to get the timestamps. I tried doing something like:
while (previous video file length != current video file length)
write current time to text file;
but this did not seem to work, as file length doesn't seem to be updated frequently enough (or am I wrong?).
I then tried using OpenCV and managed to capture the images frame by frame (and therefore getting timestamps was easy), but I couldn't find a way to join all the frames into one video. I saw answers referring to using the NDK and FFmpeg, but I feel like there should be an easier solution (perhaps similar to the one I tried at the top?).

You could use MediaCodec to capture the video, but that gets complicated quickly, especially if you want audio as well. You can recover timestamps from a video file by walking through it with MediaExtractor. Just call getSampleTime() on each frame to get the presentation time in microseconds.

This question is old, but I just needed the same thing. With the camera2 API you can stream to multiple targets, i.e. from one and the same physical camera to a MediaRecorder and at the same time to an ImageReader. The ImageReader provides an ImageReader.OnImageAvailableListener whose onImageAvailable callback you can use to query each frame for its timestamp:
Image image = reader.acquireLatestImage();
long cameraTime = image.getTimestamp(); // timestamp in nanoseconds
image.close(); // release the buffer so the reader can reuse it
Not sure whether streaming to multiple targets requires certain hardware capabilities, though (such as multiple image processors). I guess modern devices have two or three of them.
EDIT: It looks like every camera2-capable device can stream to up to 3 targets.
EDIT: Apparently, you can use an ImageReader for capturing the individual frames and their timestamps, and then use an ImageWriter to provide those images to downstream consumers such as a MediaCodec, which records the video for you.
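For illustration, a minimal sketch of the listener side, not taken from the answer above (width, height, the maxImages value of 2 and backgroundHandler are assumptions; the reader's surface is added as a second target of the capture session):
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image != null) {
        long cameraTimeNs = image.getTimestamp(); // sensor timestamp in nanoseconds
        // append cameraTimeNs to your timestamp file here
        image.close(); // release the buffer so the reader can reuse it
    }
}, backgroundHandler);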

Expanding on fadden's answer, the easiest way to go is to record using MediaRecorder (the Camera2 example helped me a lot) and afterwards extract the timestamps via MediaExtractor. Here is a working MediaExtractor code sample:
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(pathToVideo); // may throw IOException
extractor.selectTrack(0);             // select the video track (in general, pick the track whose MIME type starts with "video/")
do {
    long sampleMillis = extractor.getSampleTime() / 1000; // presentation time in milliseconds
    // do something with sampleMillis
} while (extractor.advance() && extractor.getSampleTime() != -1);
extractor.release();

Related

Exoplayer 2: Play video in reverse

My Android app plays videos in ExoPlayer 2, and now I'd like to play a video backwards.
I searched around a lot and found only the idea to convert it to a gif and this from WeiChungChang.
Is there a more straightforward solution? Another player or a library that implements this for me is probably too much to ask, but converting it to a reversed GIF gave me a lot of memory problems, and I don't know what to do with the WeiChungChang idea. Playing only MP4 in reverse would be enough, though.
Videos are frequently encoded such that the encoding of a given frame depends on one or more frames before it, and sometimes also on one or more frames after it.
In other words, to reconstruct a frame correctly you may need to refer to one or more previous and one or more subsequent frames.
This allows a video encoder to reduce the file or transmission size by fully encoding only the reference frames (sometimes called I-frames), and for the frames before and/or after the reference frames storing only the delta to those reference frames.
Playing a video backwards is not a common player function and the player would typically have to decode the video as usual (i.e. forwards) to get the frames and then play them in the reverse order.
You could extend ExoPlayer to do this yourself, but it may be easier to manipulate the video on the server side first, if possible - there are tools which will reverse a video so that your players can then play it as normal, for example https://www.videoreverser.com, https://www.kapwing.com/tools/reverse-video etc.
If you need to reverse it on the device for your use case, then you could use ffmpeg on the device to achieve this - see an example ffmpeg command to do this here:
https://video.stackexchange.com/a/17739
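For reference, a reversing command typically looks like this (a sketch, not the linked answer verbatim; note that the reverse and areverse filters buffer the whole clip in memory, so this only suits short clips):
ffmpeg -i input.mp4 -vf reverse -af areverse reversed.mp4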
If you are using ffmpeg it is generally easiest to use it via a wrapper on Android, such as this one, which will also allow you to test the command before you add it to your app:
https://github.com/WritingMinds/ffmpeg-android-java
Note that video manipulation is time- and processor-hungry, so this may be slow and consume more battery than you want on a mobile device if the video is long.

I want to attach Pre-Rolls to videos taken on android devices

I'm using mp4parser and the videos need to be of the same kind.
I was thinking of using Android's MediaCodec to decode & encode the pre-roll video to match the encoding output of the cameras (front & back).
Any suggestions on how this can be done (how to get the specific device encoding params)?
If you want to find out what encoding your Android camera is using, try using this: https://developer.android.com/reference/android/media/CamcorderProfile
This should suffice for detecting the video encoding parameters, including: the file output format, video codec format, video bit rate in bits per second, video frame rate in frames per second, video frame width and height, audio codec format, audio bit rate in bits per second, audio sample rate, and number of audio channels for recording.
Pulled a lot of the above information from here as well: https://developer.android.com/guide/topics/media/camera#capture-video
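As an illustration, a minimal sketch of querying those parameters (cameraId and the QUALITY_HIGH level are assumptions; on a real device check CamcorderProfile.hasProfile(cameraId, quality) first):
CamcorderProfile profile = CamcorderProfile.get(cameraId, CamcorderProfile.QUALITY_HIGH);
int fileFormat = profile.fileFormat;       // e.g. MediaRecorder.OutputFormat.MPEG_4
int videoCodec = profile.videoCodec;       // e.g. MediaRecorder.VideoEncoder.H264
int videoBps   = profile.videoBitRate;     // video bit rate in bits per second
int frameRate  = profile.videoFrameRate;   // video frame rate in frames per second
int width      = profile.videoFrameWidth;
int height     = profile.videoFrameHeight;
int audioCodec = profile.audioCodec;       // e.g. MediaRecorder.AudioEncoder.AAC
int audioBps   = profile.audioBitRate;     // audio bit rate in bits per second
int sampleRate = profile.audioSampleRate;  // audio sample rate in Hz
int channels   = profile.audioChannels;    // number of audio channels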
As for transcoding videos that are already in the user's camera roll, I found this useful transcoder, written in pure Java using the Android MediaCodec API, which can be found here: https://github.com/ypresto/android-transcoder
Also, as rupps mentioned below, you can use ffmpeg, which is proven to work countless times on Android. However, the reason for me linking the other transcoder first is that, as the author states, "FFmpeg is the most famous solution for transcoding. But using FFmpeg binary on Android can cause GPL and/or patent issues. Also using native code for Android development can be troublesome because of cross-compiling, architecture compatibility, build time and binary size." So use whichever one you believe better suits you. Here is the link for ffmpeg for Android:
https://github.com/WritingMinds/ffmpeg-android
If you don't want to use a transcoder that someone else made, then I recommend making your own transcoder using the MediaCodec API, which can be found here: https://developer.android.com/reference/android/media/MediaCodec
If you want magic, try this library.
https://github.com/INDExOS/media-for-mobile/
Take a look at the MediaComposer class.
Here's also a code snippet on how it's done.
org.m4m.MediaComposer mediaComposer = new org.m4m.MediaComposer(factory, progressListener);
mediaComposer.addSourceFile(mediaUri1);
int orientation = mediaFileInfo.getRotation();
mediaComposer.setTargetFile(dstMediaPath, orientation);
// set video encoder
VideoFormatAndroid videoFormat = new VideoFormatAndroid(videoMimeType, width, height);
videoFormat.setVideoBitRateInKBytes(videoBitRateInKBytes);
videoFormat.setVideoFrameRate(videoFrameRate);
videoFormat.setVideoIFrameInterval(videoIFrameInterval);
mediaComposer.setTargetVideoFormat(videoFormat);
// set audio encoder
AudioFormatAndroid aFormat = new AudioFormatAndroid(audioMimeType, audioFormat.getAudioSampleRateInHz(), audioFormat.getAudioChannelCount());
aFormat.setAudioBitrateInBytes(audioBitRate);
aFormat.setAudioProfile(MediaCodecInfo.CodecProfileLevel.AACObjectLC);
mediaComposer.setTargetAudioFormat(aFormat);
mediaComposer.start();

Mediamuxer produces corrupted video when samples are written in batch

I'm trying to use Android's MediaMuxer and MediaCodec to produce MP4 videos.
If I drain frames from the codec directly to the muxer by calling writeSampleData(), everything works fine and the correct video is produced.
But if I try to first store these frames in an array and send them to the muxer later, I'm unable to produce a working video, even though the presentation timestamps are correct.
For some reason, it seems that the MediaMuxer output depends not only on the presentation timestamps, but also on the actual time writeSampleData() is called, although it's my understanding that having the correct timestamps should be enough.
Can anyone shed some light on this issue?
Thanks mstorsjo and fadden. I actually had a combination of errors which kept me from understanding what was really going on. Both your questions led me to the correct code and to the conviction that calling writeSampleData() is not time-sensitive.
Yes, I was getting the wrong buffers at first. The problem was not initially noticeable because the muxer was writing the frames before the buffers got overwritten. When I introduced the delays and started duplicating the buffers' contents, I hit another issue (basically a race condition) and concluded that this was not the case.
What this code does (for the SmartPolicing project) is capture video and audio to create an MP4 file. I could have used MediaRecorder (this was the initial solution), but we also wanted to intercept the frames and stream the video over the web, so we dropped MediaRecorder and created a custom solution.
Now it is running smoothly. Thanks a lot, guys.
Are you sure you actually store the complete data for the frames to be written, not only the buffer indices?
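For readers hitting the same problem, a minimal sketch (names such as codec, outputIndex, bufferInfo and pendingSamples are assumptions) of deep-copying an encoder output sample before deferring the muxer write. MediaCodec reuses its output buffers after releaseOutputBuffer(), so keeping only the index or the original ByteBuffer reference leads to corrupted output:
ByteBuffer encoded = codec.getOutputBuffer(outputIndex);
encoded.position(bufferInfo.offset);
encoded.limit(bufferInfo.offset + bufferInfo.size);

ByteBuffer copy = ByteBuffer.allocate(bufferInfo.size);
copy.put(encoded);  // deep-copy the encoded bytes
copy.flip();

MediaCodec.BufferInfo infoCopy = new MediaCodec.BufferInfo();
infoCopy.set(0, bufferInfo.size, bufferInfo.presentationTimeUs, bufferInfo.flags);

pendingSamples.add(new android.util.Pair<>(copy, infoCopy)); // write to the muxer later
codec.releaseOutputBuffer(outputIndex, false);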

mediacodec vs mediaplayer and mediarecorder

I'm a bit confused about how to play and record video/audio in Android. I don't really understand in what situations one should use these classes:
- To play: MediaPlayer vs MediaExtractor + MediaCodec
- To record: MediaRecorder vs MediaCodec + MediaMuxer
When do I have to use one or the others?
Sorry if it's a repeated question, I think it should be a common one but I haven't found any.
If the high-level interfaces (MediaPlayer, MediaRecorder) can do what you want (play back video from a format that the system supports to the display, or record video from the camera into a file), you should probably just use them; it will be much, much simpler.
If you want to do something more custom, and you notice that the part of the chain you want to modify is hidden inside the high-level classes, you'll want to move on to the lower-level ones. For example, if you only want to extract packets of data from a file but not decode and display/play them back immediately, you'll want to use MediaExtractor. If you want to receive packets from some other source that the system itself doesn't support, you'll want to use MediaCodec without MediaExtractor. Likewise, if you want to record something other than the camera, or write the output somewhere other than a file that MediaRecorder supports, you'll want to use MediaCodec directly instead of MediaRecorder.
Also note that the high-level classes improve and become more flexible with newer API versions, allowing you to do things that previously required manual use of the lower-level classes. E.g. in Android 5.0, MediaRecorder gained the ability to record from a custom Surface, allowing you to record a video of something you render yourself, not just the camera. This was already possible since 4.3 by using the lower-level classes.
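A minimal sketch of that Android 5.0 feature (outputPath is an assumption; the rest is the standard MediaRecorder API):
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE); // API 21+: record frames you render yourself
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setVideoSize(1280, 720);
recorder.setVideoFrameRate(30);
recorder.setVideoEncodingBitRate(8_000_000);
recorder.setOutputFile(outputPath);
recorder.prepare();
Surface inputSurface = recorder.getSurface(); // valid after prepare(); render into it (e.g. via EGL)
recorder.start();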

Muxing camera preview h264 encoded elementary stream with MediaMuxer

I am working on an implementation of one of the Android test cases regarding previewTexture recording with the new MediaCodec and MediaMuxer APIs of Android 4.3.
I've managed to record the preview stream with a frame rate of about 30fps by setting the recordingHint in the camera parameters.
However, I ran into a delay/lag problem and don't really know how to fix it. When recording the camera preview with quite standard quality settings (1280x720, bit rate of ~8,000,000), the preview and the encoded material suffer from occasional lag. To be more specific: this lag occurs about every 2-3 seconds and takes about 300-600ms.
By tracing the delay I was able to figure out that it comes from the following line of code in the "drainEncoder" method:
mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
This line is called in a loop if the encoder has data available for muxing. Currently I don't record audio, so only the H.264 stream is converted to MP4 format by the MediaMuxer.
I don't know if this has something to do with the delay, but it always occurs when the loop needs two iterations to dequeue all available data from the encoder (to be even more specific, it always occurs in the first of these two iterations). In most cases one iteration is enough to drain the encoder.
Since there is not much information online about these new APIs, any help is very appreciated!
I suspect you're getting bitten by the MediaMuxer disk write. The best way to be sure is to run systrace during recording and see what's actually happening during the pause. (systrace docs, explanation, bigflake example -- as of right now only the latter is updated for Android 4.3)
If that's the case, you may be able to mitigate the problem by running the MediaMuxer instance on a separate thread, feeding the H.264 data to it through a synchronized queue.
Do these pauses happen regularly, every 5 seconds? The CameraToMpegTest example configures the encoder to output an I-frame every 5 seconds (with an expected frame rate of 30fps), which results in a full-sized frame being output rather than tiny deltas.
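A minimal sketch of the separate-thread idea (mMuxer and mTrackIndex come from the question; the queue, the deep-copied samples, and the zero-size end-of-stream sentinel are assumptions):
BlockingQueue<Pair<ByteBuffer, MediaCodec.BufferInfo>> queue = new LinkedBlockingQueue<>();

Thread writerThread = new Thread(() -> {
    try {
        while (true) {
            Pair<ByteBuffer, MediaCodec.BufferInfo> sample = queue.take();
            if (sample.second.size == 0) break; // zero-size sample marks end of stream
            mMuxer.writeSampleData(mTrackIndex, sample.first, sample.second);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
writerThread.start();

// In drainEncoder(), deep-copy the encoded buffer and its BufferInfo and call queue.put(...)
// instead of calling writeSampleData() directly.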
As @fadden points out, this is a disk-write issue that occurs mostly on devices with slower flash write speeds or if you try to write to the SD card.
I have written a solution for buffering MediaMuxer's writes in a similar question here.
