I want to implement video recording, but instead of saving the video I want to save the individual frames from that recording. How can I do this?
If the aim is to create JPEGs from the video, the Camera class will not be of much help here. It captures video or JPEGs; it does not do any conversion.
If you want to convert a full video into a set of images, the simplest way is to use ffmpeg. Install it, and after capturing the video run a command to convert it to images:
ffmpeg -i input -s widthxheight -f image2 out%05d.jpg
That will create out00000.jpg, out00001.jpg, and so on from the video. You can use the -r option to control the output frame rate, e.g. to keep only one frame per second, or every tenth frame, and so on.
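For example, this variant (input.mp4 is a placeholder name) writes roughly one frame per second instead of every frame:
ffmpeg -i input.mp4 -r 1 -f image2 out%05d.jpg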
I would like to set the starting point of the video file a little bit later, e.g. 0.01 seconds, when I open the detail page of the video file on the smartphone site, using ffmpeg. The command I tried is below.
for i in /xxxxxx/xxxxxxx/*.mp4; do ffmpeg -i "$i" -pix_fmt yuv420p -movflags +faststart -ss 0.01 "/xxxxxx/xxxxxxx/$(basename "$i" .mp4).mp4"; done
The reason is that when I open the detail page of the video file on the smartphone site (iOS and Android), the video does not start playing by default (I cannot make it start automatically); it just shows a circled play button on a white background. (The PC site starts the video automatically by default.)
So I would like the smartphone detail page to show an actual picture from the video file by delaying the starting point.
My understanding is that the frame at 0 seconds is the white screen, and the actual picture starts to be displayed continuously after that.
The command above does not move the starting point, even though I don't get any errors.
Can anyone help me with the right command for this case? Or is there another way to handle it?
Use the poster attribute of the HTML5 <video> element to choose an image that represents the video before playback begins.
<video src="video.webm" poster="image.jpg">
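If you also need to generate that poster image, you can grab a single frame from slightly past the start with ffmpeg (the 0.5-second offset and file names here are just examples):
ffmpeg -ss 0.5 -i video.webm -frames:v 1 image.jpg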
MY GOALS
1. Decode an MP4 Video.
2. Decode camera frames and edit them with RenderScript (apply effects).
3. Display the camera data inside an oval on top of the background video from Goal 1.
4. Encode the frames and use the MediaMuxer to save the video.
MY PROBLEM
I can successfully do goals 1-3, but I am stuck on goal 4. When I use a second MediaCodec for encoding frames (the first MediaCodec is used in Goal 1), my whole app freezes and has to be force closed.
Can Android actually handle two MediaCodec instances simultaneously?
Can anyone offer any help on this please?
Thanks
Here is my scenario:
Download an avi movie from the web
Open a bitmap resource
Overlay this bitmap at the bottom of the movie, on all frames, in the background
Save the video to external storage
The video is usually 15 seconds long
Is this possible to achieve using MediaMuxer? Any info on the matter is gladly received.
I've been looking at http://bigflake.com/mediacodec/#DecodeEditEncodeTest (thanks @fadden) and it says there:
"Decoding the frame and copying it into a ByteBuffer with
glReadPixels() takes about 8ms on the Nexus 5, easily fast enough to
keep pace with 30fps input, but the additional steps required to save
it to disk as a PNG are expensive (about half a second)"
So almost one second per frame is not acceptable. One approach I'm considering would be to save each frame as a PNG, open it, draw the bitmap overlay on it, and then save it again. However, this would take an enormous amount of time.
I wonder if there is a way to do things like this:
Open video file from external storage
Start decoding it
Each decoded frame will be altered with the bitmap overlay in memory
The frame is sent to an encoder.
On iOS I saw that there is a way to take the original audio + original video + an image, add them to a container, and then just encode the whole thing...
Should I switch to ffmpeg? How stable and compatible is ffmpeg? Am I risking compatibility issues with Android 4.0+ devices? Is there a way to use ffmpeg to accomplish this? I am new to this domain and still doing research.
Years later edit:
Years have passed since I asked this question, and ffmpeg isn't really easy to add to commercial software in terms of licensing. How has this evolved? Are newer versions of Android more capable of this with the default SDK?
Some more time later edit:
I got some negative votes for posting this info as an answer, so I'll edit the original question instead. Here is a great library which, from my testing, does apply a watermark to video, reports progress through a callback (making it a lot easier to show progress to the user), and uses only the default Android SDK: https://github.com/MasayukiSuda/Mp4Composer-android
This library generates an MP4 movie using the Android MediaCodec API and can apply filters, scaling, and rotation.
Sample code could look like this:
new Mp4Composer(sourcePath, destinationPath)
        .filter(new GlWatermarkFilter(watermarkBitmap))
        .listener(new Mp4Composer.Listener() {
            @Override
            public void onProgress(double progress) { }

            @Override
            public void onCompleted() {
                runOnUiThread(() -> {
                    // e.g. show a Snackbar to tell the user the video is ready
                });
            }

            @Override
            public void onCanceled() { }

            @Override
            public void onFailed(Exception exception) { }
        })
        .start();
Testing on the emulator, it seems to work fine on Android 8+, while on older versions it generates a black video file. When testing on a real device, however, it seems to work.
I don't know much about MediaMuxer, but ffmpeg does support overlay functionality. ffmpeg has various filters, one of which is the overlay filter. As I understand it, you want to overlay an image (e.g. a PNG) on the video, and ffmpeg is surely a useful framework for this job. You can set the output format, and you can set the coordinates of the image to be overlaid.
E.g.
ffmpeg -i input.avi -i logo.png -filter_complex 'overlay=10:main_h-overlay_h-10' output.avi
The above command overlays logo.png on the input.avi video file in the bottom-left corner.
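To place the logo in a different corner you only need to change the x:y expressions of the filter; for example, a bottom-right placement with the same 10-pixel margins would be:
ffmpeg -i input.avi -i logo.png -filter_complex 'overlay=main_w-overlay_w-10:main_h-overlay_h-10' output.avi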
More information about the filters is available at following website,
https://www.ffmpeg.org/ffmpeg-filters.html#overlay-1
If this is a solution to your problem, you will need the C code equivalent of the above command. You should also evaluate ffmpeg's performance, because it is a pure software framework.
Hope I have understood your question correctly and this helps.
If you need to do this without ffmpeg on an Android device:
Start from: https://github.com/google/grafika
The answer to your question lies between the "Play video" (PlayMovieActivity.java) and "Record GL app" (RecordFBOActivity.java) examples.
Steps:
Set up mInputWindowSurface as the video encoder's input surface.
Decode a frame from the video stream as an external video texture, using MoviePlayer.
Draw this video texture onto the Surface.
Draw the watermark on the same Surface, over the video texture.
Notify the MediaCodec that the surface is ready for encoding:
mVideoEncoder.frameAvailableSoon();
mInputWindowSurface.setPresentationTime(timeStampNanos);
and then go back to step 2.
Don't forget to adjust the decoding speed: just remove the SpeedControlCallback, which in the example is set to decode 60 FPS video. A rough sketch of steps 2-5 as a single per-frame method is shown below.
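A minimal sketch of that per-frame loop, assuming grafika's WindowSurface and TextureMovieEncoder2 classes (the two drawing helpers are hypothetical placeholders, not grafika API):

// Called once per decoded frame. mInputWindowSurface wraps the encoder's
// input surface; mVideoEncoder is a grafika TextureMovieEncoder2.
private void renderFrame(long timeStampNanos) {
    mInputWindowSurface.makeCurrent();        // draw onto the encoder's surface
    drawVideoTexture();                       // step 3: blit the external video texture (hypothetical helper)
    drawWatermark();                          // step 4: overlay the watermark on top (hypothetical helper)
    mVideoEncoder.frameAvailableSoon();       // step 5: tell the encoder a frame is coming
    mInputWindowSurface.setPresentationTime(timeStampNanos);
    mInputWindowSurface.swapBuffers();        // submits the frame to MediaCodec
}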
Advantages of this approach:
MediaCodec uses a hardware decoder/encoder for video processing.
You can change the bit rate of the resulting video.
You can try INDE Media Pack - https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
It has transcoding/remuxing functionality in its MediaComposer class and several sample effects, such as JpegSubstituteEffect, which shows how to substitute a video frame with a picture from a JPG file, and TextOverlayEffect, which overlays text on a video frame. It could easily be extended into a watermark effect.
This is what worked for me:
ffmpeg -i input.avi -i logo.png -filter_complex 'overlay=10:main_h-overlay_h-10' -strict -2 output.avi
ffmpeg recommended using -strict -2 in order to allow the use of experimental codecs. Without it, the command from the accepted answer above fails to work.
I am using ffmpeg to create a video from an image sequence taken from the Android Camera's PreviewCallback method onPreviewFrame...
The images are written to a pipe that is connected to ffmpeg's stdin using the command:
ffmpeg -f image2pipe -vcodec mjpeg -i - -f flv -vcodec libx264 <output_file>
The problem is that the output video is very short compared to the actual recording time, and all of the frames are shown very rapidly...
But when the frame size is set to the lowest supported preview size, the video appears to be in sync with the actual recording time...
As far as I can tell, this seems to be an issue related to the frame rate of the input image sequence versus that of the output video...
But the main problem is that the frames generated by onPreviewFrame arrive at a variable rate...
Is there any way to construct a smooth video from an image sequence with a variable frame rate...?
Also, the image sequence is muxed with audio from the microphone, which also appears to be out of sync with the video...
Could the video generated using the above process and audio from the microphone be muxed in perfect synchronization...?
I know how to use ffmpeg to convert an image sequence to a video.
What I want to do is start converting images to video before I have all the images ready, i.e. as soon as I start to output images, ffmpeg starts the conversion and stops when the images stop coming. Is there any way to achieve this?
Edit: I'm trying this on Android.
If you want to store the video on the SD card, you should start with the FFmpegFrameRecorder class from JavaCV. You can google it easily. It will allow you to add single frames and build a video bit by bit.
If you need to keep your video in memory, you will have to write your own frame recorder, which is not trivial but doable, and I can help you a bit.
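A minimal sketch of the SD-card case, assuming the JavaCV FFmpegFrameRecorder API (exact package names differ between JavaCV versions, and the output path, frame size, and frame rate here are placeholders):

import org.bytedeco.javacv.AndroidFrameConverter;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;

// Record a sequence of android.graphics.Bitmap frames into an MP4 file.
// start(), record() and stop() throw FFmpegFrameRecorder.Exception,
// so call this from code that handles it.
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder("/sdcard/out.mp4", 640, 480);
recorder.setFormat("mp4");
recorder.setFrameRate(30);
recorder.start();

AndroidFrameConverter converter = new AndroidFrameConverter();
for (Bitmap bitmap : frames) {       // frames: your captured images
    Frame frame = converter.convert(bitmap);
    recorder.record(frame);          // appends one video frame
}

recorder.stop();
recorder.release();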