ffmpeg video streaming deep understanding - Android

The main question is whether it is possible to somehow get around the frame index (timestamp) check that ffmpeg performs when writing a frame to a file.
I will now explain my exact problem so you can better understand what I need, or perhaps think of an alternative solution.
Problem no. 1: I am getting video streams from two independent cameras and, for my own reasons, I want to save them in the same video file: first the frames from the first camera, then the frames from the second. When writing the frames from the second camera, av_write_frame returns the error code -22 (AVERROR(EINVAL)) and fails to add the frame. That's because the muxing context expects timestamps that continue increasing from the previously written frame (the last frame from camera 1), but it receives a frame whose timestamp restarts at 0 - the first frame from the second camera.
Problem no. 2: Consider the following problem independently of the first one.
I am trying to save a video stream to a file, but it plays back at double the real speed. Because I couldn't find any working way to slow the playback down, I thought of writing every frame twice into the video file. But that makes no difference to the playback speed.
I also tried a different approach to the frame-rate problem, but it failed as well (question here).
Any kind of working solution would be highly appreciated.
It's also important that I can't use console commands; I need C code, since I have to integrate this functionality into an automated Android application.

Related

How to get frames from video files and process them in real time

I'm writing an application that adds effects by editing frames from video files (moving pixels, removing them, adding them... not just overlaying effects on top). Is there a way to get the frames and render the edited frames to the screen in real time?
Many thanks in advance.

Copy consecutive frames from video

I'm trying to copy a part of a video and save it as a GIF to disk. The video can be local or remote, and the copy should be 2 s max. I don't need to save every single frame, only every other frame (12-15 fps). I have the "frames to GIF" part working, but the "get the frames" part is not great.
Here is what I tried so far:
- MediaMetadataRetriever: too slow (~1 s per frame on a Nexus 4), and only works with local files
- FFmpegMediaMetadataRetriever: same latency, but works with remote videos
- TextureView.getBitmap(): using a ScheduledExecutorService, I grab the Bitmap every 60 ms (while playing...). It works well for small sizes like getBitmap(100, 100), but for bigger ones (> 400) the whole process becomes really slow. And the docs say "Do not invoke this method from a drawing method" anyway.
It seems that the best solution would be to access every frame while decoding, and save them. I tried OpenCV for Android but couldn't find an API to grab a frame at a specific time.
Now, I'm looking into those samples to understand how to use MediaCodec, but while running ExtractMpegFramesTest.java, I can't seem to extract any frame ("no output from decoder available").
Am I on the right track? Any other suggestion?
edit: got further with ExtractMpegFramesTest.java, thanks to this post.
edit 2: just to clarify, what I'm trying to achieve here is to play a video and press a button to start capturing the frames.

Problems with MediaExtractor

I am trying to get specific frames at specific times as images from a movie using MediaExtractor and MediaCodec. I can do it successfully if:
- I use extractor.seekTo(time, MediaExtractor.SEEK_TO_PREVIOUS_SYNC); however, this only gives the nearest sync frame, not the target frame.
- I sequentially extract all frames using extractor.advance(), but I need only the target frame, not all of them.
So, I try the following:
extractor.seekTo(time, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
while(extractor.getSampleTime()<time /*target time*/) extractor.advance();
This reaches the correct frame, but for some reason the image is corrupted. It looks like the correct image (the one I get in the successful cases), but with some pixelation and a strange haze.
The while loop is the only difference between the successful cases and the corrupted ones. What should I do to advance the MediaExtractor to a specific time (not just a sync time) without getting a corrupted image?
Thanks to fadden's comment, I understand that I have to keep feeding the decoder, since only the I-frame carries the full picture while the P- and B-frames carry differences (this is how compression is achieved). So I need to start at an I-frame (which turned out to be the same as a sync frame) and keep feeding the subsequent frames to the decoder to receive the full image.

Getting video frames from a video in Android

I'm writing an app that captures video from the camera of the Android device. I'm trying to get all the frames of a video, without repeating frames, and store them in some kind of list. By "without repeating frames", I mean this:
If I call MediaMetadataRetriever.getFrameAtTime(time, OPTION), two calls to this method can return the same frame if time hasn't changed enough. I want to increment time by enough before the next call to getFrameAtTime() that I don't get the same frame again.
Obviously, I also want to make sure I don't miss any frames.
One way to do this is to get the video's frames per second and increment time by one frame interval. But how would I get the frames per second of the video I captured?
Or how else would I accomplish this?
You can register a Camera.PreviewCallback handler. This interface declares the onPreviewFrame() method, which is called exactly as you want: once for every distinct video frame. Its parameter is the raw camera image.

Using mp4 files/jpeg sequences/gif animations in Android? Which is better, and how can these be used efficiently?

I have been posting quite a few questions recently without getting a response. I hope this one does.
I've decided to go with mp4, as its compression is much better than GIF sequences. Movie.decodeStream(InputStream) does get me the stream, but when Movie.duration() is called, a NullPointerException is thrown. Searching around the web, I found that duration() works if the movie is stored frame by frame (with a duration for each frame).
So, is the mp4 sequence a bad way to read the stream and get the duration?
Is there any way to convert the mp4 into a "frame by frame" sequence?
What is required is an easy way to play some 30 frame animations on screen.
The problems with each solution I've found: JPEG sequences consume a lot of memory, GIF sequences have jagged edges and bigger file sizes, and mp4 sequences can't be used for frame-by-frame playback.
- Since the animations are 800*600, I guess JPEG sequences are out of the question, but please do tell if there is a good option for this.
- At present I'm using GIF sequences at half that size. The edges are jagged and display very badly on different devices.
- Mp4 sequences are just too good. But I need control over the frames so that I can play them forward and in reverse, and display the first and last frames while the animation is at rest (the animations are triggered on touch).
