I'm writing an app that captures a video from the camera of the Android device. I'm trying to get all the frames of a video without repeating frames and store them in some kind of list. By "without repeating frames", I mean this:
If I call MediaMetadataRetriever.getFrameAtTime(time, OPTION), two calls to this method can return the same frame if time hasn't advanced enough between them. I want to increment time enough before the next call to getFrameAtTime() that I don't get the same frame again.
Obviously, I also want to make sure I don't miss any frames.
One way to do this is to get the video's frames per second and increment time by one frame interval (1/fps) on each call. But how would I get the frames per second of the video I captured?
Or how else would I accomplish this?
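For what it's worth, the frame rate can often be read from the video track's MediaFormat via MediaExtractor (MediaFormat.KEY_FRAME_RATE, when the track advertises it). Once fps and duration are known, the timestamp stepping is plain arithmetic; here is a sketch of just that part, with the Android calls left out (fps and durationUs are assumed to come from the track's format and metadata):

```java
import java.util.ArrayList;
import java.util.List;

// Generate the presentation timestamps (in microseconds) to pass to
// getFrameAtTime(), stepping by one frame interval so that consecutive
// calls land on distinct frames.
class FrameTimestamps {
    static List<Long> timestampsUs(int fps, long durationUs) {
        long intervalUs = 1_000_000L / fps;   // one frame interval
        List<Long> times = new ArrayList<>();
        for (long t = 0; t < durationUs; t += intervalUs) {
            times.add(t);
        }
        return times;
    }
}
```

When passing these timestamps to getFrameAtTime(), OPTION_CLOSEST (rather than the sync-frame options) makes it less likely that two nearby requests snap to the same keyframe, though it is also the slowest option.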
You can register a Camera.PreviewCallback handler. This interface declares the onPreviewFrame() method, which is called exactly as you want: once for every distinct preview frame. The parameter is the raw camera image.
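A minimal sketch of the registration, using the old android.hardware.Camera API (deprecated in favor of camera2, but it is the API that declares PreviewCallback); the buffer is the raw preview image, NV21-encoded by default:

```java
import android.hardware.Camera;
import java.util.ArrayList;
import java.util.List;

// Collect one entry per distinct preview frame via Camera.PreviewCallback.
class FrameCollector {
    private final List<byte[]> frames = new ArrayList<>();

    void start(Camera camera) {
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // Copy the buffer: the camera may reuse it for the next frame.
                frames.add(data.clone());
            }
        });
        camera.startPreview();
    }
}
```

Note that holding every raw frame in memory fills the heap quickly at preview resolutions; in practice you would convert or write frames out as they arrive.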
Related
I am writing some code that adds a watermark to an already existing video using OpenGL.
I took most of the code from ContinuousCaptureActivity in Grafika - https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/ContinuousCaptureActivity.java
Instead of using the camera to render onto a SurfaceTexture, I use the MoviePlayer class, also present in Grafika. And instead of rendering random boxes, I render the watermark.
I run MoviePlayer at full speed, i.e., reading from the source and rendering onto the surface as soon as each frame is decoded. This finishes a 30 s video in 2 s or less.
Now the issue is with the onFrameAvailable callback: it is called only once for every 4 or 5 frames rendered by the MoviePlayer class, which makes me lose frames in the output video. If I make the MoviePlayer thread sleep until the corresponding onFrameAvailable is called, everything is fine and no frames are missed. However, processing my 30 s video then takes around 5 s.
My question is how do I make SurfaceTexture faster? Or is there some completely different approach that I have to look into?
Note that I do not need to render anything on the screen.
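For reference, Grafika's ExtractMpegFramesTest handles exactly this handshake with a guarded wait (its awaitNewImage()) rather than a fixed sleep: the decoder thread blocks only until onFrameAvailable actually fires, never longer. A minimal sketch of that synchronization, with the SurfaceTexture and EGL calls left out:

```java
// Wait/notify handshake between a decoder thread and the
// onFrameAvailable callback, in the style of Grafika's awaitNewImage().
class FrameSync {
    private final Object lock = new Object();
    private boolean frameAvailable = false;

    // Called from the SurfaceTexture.OnFrameAvailableListener.
    void onFrameAvailable() {
        synchronized (lock) {
            frameAvailable = true;
            lock.notifyAll();
        }
    }

    // Called by the decoder thread after releasing an output buffer to the
    // surface; returns true if a frame arrived before the timeout.
    boolean awaitNewImage(long timeoutMs) {
        synchronized (lock) {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (!frameAvailable) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return false;           // timed out, no new frame
                }
                try {
                    lock.wait(remaining);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            frameAvailable = false;         // consume the frame
            return true;
        }
    }
}
```

One thing worth checking in this setup: onFrameAvailable is delivered on the Looper thread that owns the SurfaceTexture, and a SurfaceTexture only keeps the latest frame, so if the decoder pushes several frames before the callback thread gets scheduled, the intermediate ones are silently dropped; that matches the "one callback per 4-5 frames" symptom.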
I'm trying to copy a part of a video, and save it as a GIF into the disk. The video can be local or remote, and the copy should be 2s max. I don't need to save every single frame, but every other frame (12-15 fps). I have the "frames to gif" part working, but the "get the frames" part is not great.
Here is what I tried so far:
- MediaMetadataRetriever: too slow (~1 s per frame on a Nexus 4), and it only works with local files
- FFmpegMediaMetadataRetriever: same latency, but works with remote video
- TextureView.getBitmap(): I'm using a ScheduledExecutorService, and every 60 ms I grab the Bitmap (while the video is playing). It works well at small sizes like getBitmap(100, 100), but for bigger ones (> 400) the whole process becomes really slow. And the doc says "Do not invoke this method from a drawing method" anyway.
It seems that the best solution would be to access every frame while decoding, and save them. I tried OpenCV for Android but couldn't find an API to grab a frame at a specific time.
Now, I'm looking into those samples to understand how to use MediaCodec, but while running ExtractMpegFramesTest.java, I can't seem to extract any frame ("no output from decoder available").
Am I on the right track? Any other suggestion?
edit: went further with ExtractMpegFramesTest.java, thanks for this post.
edit 2: just to clarify, what I'm trying to achieve here is to play a video, and press a button to start capturing the frames.
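Once the ExtractMpegFramesTest-style decode loop is producing frames, downsampling the source rate (typically 25-30 fps) to the 12-15 fps the GIF needs is just a per-frame keep/skip decision. A sketch of that selection, assuming the source and target rates are known (the decode and bitmap-save steps are omitted):

```java
import java.util.ArrayList;
import java.util.List;

// Decide which decoded frames to keep when downsampling sourceFps to
// targetFps: keep frame i whenever its integer "target slot" advances.
// Returns the indices of the kept frames for a clip of the given length.
class FrameDownsampler {
    static List<Integer> keptIndices(int sourceFps, int targetFps, double seconds) {
        int total = (int) Math.round(sourceFps * seconds);
        List<Integer> kept = new ArrayList<>();
        int lastSlot = -1;
        for (int i = 0; i < total; i++) {
            int slot = i * targetFps / sourceFps;   // integer division
            if (slot != lastSlot) {                 // slot advanced: keep it
                kept.add(i);
                lastSlot = slot;
            }
        }
        return kept;
    }
}
```

For a 2 s clip at 30 fps downsampled to 15 fps, this keeps every other frame (30 frames total), which is the "every other frame" behavior described above.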
I have a class set up that now gives me a preview on my SurfaceView and allows me to record a video file. I would like to increase the FPS to create a slow-motion video and decrease the FPS to create a time-lapse video. I have tried using recorder.setVideoFrameRate() and made sure I call it in the right sequence, but I keep getting IllegalStateExceptions whenever I try to alter any settings on the recorder.
Can anyone point me to a tutorial or an example on how to effectively control the settings for the camera?
Cheers.
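MediaRecorder is a strict state machine: setVideoFrameRate() throws IllegalStateException unless it is called after setOutputFormat() and before prepare(). For time lapse, setCaptureRate() controls how often frames are captured while setVideoFrameRate() remains the playback rate of the output file. A sketch of one ordering that respects the state machine, using the old Camera API with error handling omitted (the rates and encoder choices are illustrative):

```java
import android.hardware.Camera;
import android.media.MediaRecorder;

// Order matters: camera -> source -> output format -> rates -> encoder
// -> output file -> prepare. Changing settings out of this order is what
// triggers IllegalStateException.
class TimelapseRecorder {
    MediaRecorder configure(Camera camera, String outputPath) throws Exception {
        camera.unlock();                     // hand the camera to MediaRecorder
        MediaRecorder recorder = new MediaRecorder();
        recorder.setCamera(camera);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoFrameRate(30);      // playback rate of the output file
        recorder.setCaptureRate(2.0);        // capture 2 fps -> 15x time lapse
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        return recorder;                     // caller invokes start()/stop()
    }
}
```

Note that no audio source is set here; time-lapse recordings normally drop audio, and the capture rate only applies to video. For true slow motion (high capture fps), hardware support varies and is exposed separately on newer devices.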
The main question is whether it is possible to somehow bypass the frame-index check that ffmpeg does when writing a frame to a file.
Now I will explain my exact problem so you can understand better what I need, or maybe think of an alternative solution.
Problem no. 1: I am getting video streams from two independent cameras, and for some reason I want to save them in the same video file: first the frames from the first camera, then the frames from the second. When writing the frames from the second camera, av_write_frame returns the error code -22 and fails to add the frame. That's because the writing context expects a frame index following the index of the previously written frame (the last frame from camera 1), but it receives a frame with index 0, the first frame from the second camera.
Problem no. 2: Consider the following problem independently of the first one.
I am trying to save a video stream to a file, but the frame rate is double the real speed. Since I couldn't find any working way to slow the frame rate down, I thought I'd write every frame twice in the video file. But that makes no difference to the frame rate.
I also tried a different approach to the frame-rate problem, but it also failed (question here).
Any kind of working solution would be highly appreciated.
Also, it's important that I can't use console commands; I need C code, since these functionalities must be integrated into an automated Android application.
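For what it's worth, -22 is AVERROR(EINVAL): av_write_frame rejects packets whose timestamps do not increase monotonically. Rather than bypassing the check, the usual fix is to rewrite the timestamps before muxing: offset the second camera's pts so it continues where the first stream left off (problem 1), and scale each pts instead of duplicating frames (problem 2; two copies of a frame with the original spacing still play at the original rate). A plain-C sketch of just the arithmetic; the AVPacket plumbing (av_rescale_q into the output stream's time base, setting pkt->pts, pkt->dts, and pkt->duration) is assumed from the ffmpeg API:

```c
#include <assert.h>
#include <stdint.h>

/* Problem 1: rebase a timestamp from the second camera so it continues
 * after the first. offset = last pts written from camera 1 plus one frame
 * duration, all expressed in the output stream's time base. */
int64_t rebase_pts(int64_t pts, int64_t last_pts_cam1, int64_t frame_duration)
{
    return pts + last_pts_cam1 + frame_duration;
}

/* Problem 2: to halve the playback speed, stretch the timestamps instead
 * of writing frames twice -- scale pts (and packet duration) by 2. */
int64_t slow_down_pts(int64_t pts)
{
    return pts * 2;
}
```

With this rebasing applied to every packet from the second camera, the muxer sees a single monotonically increasing timeline and av_write_frame no longer returns -22.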
I have an Android application in mind that would require taking as many camera pictures as possible in, say, 1 or 2 seconds.
I've thought of two possibilities:
1) Take various pictures for 2 seconds.
2) Record a video for 2 seconds and extract the frame images.
Which option do you suggest?
Do you think it would be possible to have at least 5 images per second with current hardware?
I suggest the second. If the camera's shutter speed is low, the app can't take multiple photos in one or two seconds, so it's better to capture video and extract frames. The problem is that the video quality may be lower than still-image quality.
The fastest method is to use the video function (multiple frames per second).
But if you want high-quality pictures, it depends on your device:
This should help:
Android camera takePicture() method execution time
http://www.workreloaded.com/2011/06/how-to-use-the-android-camera/
I'm quite sure frames extracted from video will be blurry, at least when objects or the phone are moving. Just take a video and pause it to check.