Is there any way to get the total number of frames from a video in Android? I'm using the com.github.wseemann:FFmpegMediaMetadataRetriever:1.0.14 library to get a frame at a specific time. In order to extract frames and save them as images at specific time intervals, I need the frame count.
I found a solution for getting the frame count and frame rate using the FFmpeg library. If anyone is struggling with frame extraction, you can refer to https://github.com/wseemann/FFmpegMediaMetadataRetriever. Below is my code to get the total number of frames.
FFmpegMediaMetadataRetriever mmr = new FFmpegMediaMetadataRetriever();
try {
    // path of the video you want frames from
    mmr.setDataSource(absolutePath);
} catch (Exception e) {
    System.out.println("Exception= " + e);
}
long duration = mmr.getMetadata().getLong("duration");       // duration in milliseconds
double frameRate = mmr.getMetadata().getDouble("framerate"); // frames per second
// duration is in ms, so convert to seconds before multiplying by the frame rate
int numberOfFrames = (int) (duration / 1000.0 * frameRate);
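Since the duration key is reported in milliseconds while getFrameAtTime() expects microseconds, the unit conversions are easy to get wrong. A minimal plain-Java sketch of the arithmetic (the class and method names are my own, not from the library):

```java
public class FrameCount {
    // Estimate total frames from duration (ms) and frame rate (fps).
    static int totalFrames(long durationMs, double frameRate) {
        return (int) (durationMs / 1000.0 * frameRate);
    }

    // Timestamp in microseconds of the n-th frame, suitable for getFrameAtTime().
    static long frameTimeUs(int frameIndex, double frameRate) {
        return (long) (frameIndex * 1_000_000.0 / frameRate);
    }

    public static void main(String[] args) {
        // e.g. a 10-second clip at 30 fps
        System.out.println(totalFrames(10_000, 30.0)); // 300
        System.out.println(frameTimeUs(30, 30.0));     // 1000000
    }
}
```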
I'm trying to get the last frame of an mp4 video using MediaMetadataRetriever, but it always returns the first frame for short videos (around 3 seconds long); it works fine for long videos. FFmpegMediaMetadataRetriever gives the same result.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(video);
String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
Bitmap frameAtTime = retriever.getFrameAtTime(Long.parseLong(time)*1000, MediaMetadataRetriever.OPTION_CLOSEST);
mImage.setImageBitmap(frameAtTime);
Any suggestions would be appreciated.
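For what it's worth, one common workaround (an assumption, not something the API guarantees) is to request a timestamp slightly before the reported duration, since duration * 1000 can land past the last frame's presentation time on short clips. The arithmetic, sketched in plain Java with a hypothetical 100 ms margin:

```java
public class LastFrameTime {
    // Target timestamp in microseconds for the last frame, backing off
    // marginMs from the reported end of the clip.
    static long lastFrameUs(long durationMs, long marginMs) {
        long t = (durationMs - marginMs) * 1000L;
        return Math.max(t, 0L); // clamp for very short clips
    }

    public static void main(String[] args) {
        System.out.println(lastFrameUs(3000, 100)); // 2900000
        System.out.println(lastFrameUs(50, 100));   // 0
    }
}
```

The resulting value would be passed to getFrameAtTime(..., OPTION_CLOSEST) in place of duration * 1000.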
I would like to perform face detection/tracking on a video file (e.g. an MP4 from the user's gallery) using the Android Vision FaceDetector API. I can see many examples of using the CameraSource class to perform face tracking on the stream coming directly from the camera (e.g. on the android-vision github), but nothing on video files.
I tried looking at the source code for CameraSource through Android Studio, but it is obfuscated, and I couldn't find the original online. I imagine there are many commonalities between using the camera and using a file. Presumably I just play the video file on a Surface and then pass that to a pipeline.
Alternatively, I can see that Frame.Builder has the functions setImageData and setTimestampMillis. If I were able to read the video in as a ByteBuffer, how would I pass that to the FaceDetector API? This question seems similar, but has no answers. Alternatively, I could decode the video into Bitmap frames and pass them to setBitmap.
Ideally I don't want to render the video to the screen, and the processing should happen as fast as the FaceDetector API is capable of.
Alternatively I can see that Frame.Builder has functions setImageData and setTimestampMillis. If I was able to read in the video as ByteBuffer, how would I pass that to the FaceDetector API?
Simply call SparseArray<Face> faces = detector.detect(frame); where detector has to be created like this:
FaceDetector detector = new FaceDetector.Builder(context)
.setProminentFaceOnly(true)
.build();
If processing time is not an issue, using MediaMetadataRetriever.getFrameAtTime solves the question. As Anton suggested, you can also use FaceDetector.detect:
Bitmap bitmap;
Frame frame;
SparseArray<Face> faces;
MediaMetadataRetriever mMMR = new MediaMetadataRetriever();
mMMR.setDataSource(videoPath);
String timeMs = mMMR.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION); // video time in ms
long totalVideoTime = 1000L * Long.parseLong(timeMs); // total video time, in microseconds (long avoids overflow for videos longer than ~35 min)
for (long time_us = 1; time_us < totalVideoTime; time_us += deltaT) {
bitmap = mMMR.getFrameAtTime(time_us, MediaMetadataRetriever.OPTION_CLOSEST_SYNC); // extract a bitmap element from the closest key frame from the specified time_us
if (bitmap==null) break;
frame = new Frame.Builder().setBitmap(bitmap).build(); // generates a "Frame" object, which can be fed to a face detector
faces = detector.detect(frame); // detect the faces (detector is a FaceDetector)
// TODO ... do something with "faces"
}
where deltaT = 1000000/fps, and fps is the desired number of frames per second. For example, to extract 4 frames every second, deltaT = 250000.
(Note that faces is overwritten on every iteration, so you should do something with the results (store/report them) inside the loop.)
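The step computation from the note above can be sketched in plain Java:

```java
public class SampleStep {
    // Microsecond step between extracted frames for a desired sampling rate.
    // 1 second = 1,000,000 microseconds, the unit getFrameAtTime expects.
    static int deltaT(int fps) {
        return 1_000_000 / fps;
    }

    public static void main(String[] args) {
        System.out.println(deltaT(4));  // 250000 -> 4 frames per second
        System.out.println(deltaT(24)); // 41666  -> 24 frames per second
    }
}
```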
I want to create a GIF from mp4 video. So I need to extract the frames from video first. Here is the code I use to extract frames:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(mFilePath);
Bitmap bitmap = retriever.getFrameAtTime(i,
MediaMetadataRetriever.OPTION_CLOSEST);
Note that the variable i is a time in microseconds. Since I want to get 24 frames/second, I call retriever.getFrameAtTime() with i = 42000, 84000, ... (microseconds).
The problem is: when I assemble the extracted frames into a video, I see only 4-5 distinct frames; in other words, the result is not smooth. It seems that MediaMetadataRetriever often returns the same frame for different timestamps. Please help me!
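As a side note, an exact 24 fps step is 1000000/24 ≈ 41667 µs rather than 42000, and computing each timestamp as i * 1000000 / fps avoids the rounding drift a fixed step accumulates over a long clip. A plain-Java sketch of the timestamp schedule (the helper is my own, for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class GifTimestamps {
    // Per-frame timestamps in microseconds for a clip of durationMs at fps.
    // Each timestamp is derived from the frame index, so rounding error
    // does not accumulate across the clip.
    static List<Long> timestampsUs(long durationMs, int fps) {
        List<Long> out = new ArrayList<>();
        long durationUs = durationMs * 1000L;
        for (long i = 0; i * 1_000_000L / fps < durationUs; i++) {
            out.add(i * 1_000_000L / fps);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Long> ts = timestampsUs(1000, 24); // one second at 24 fps
        System.out.println(ts.size()); // 24
        System.out.println(ts.get(1)); // 41666
    }
}
```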
I am trying to get a frame out of a video stream in H.264 format on Android. I have a callback function that receives (byte[] video, int size), where video is composed of NALU packets and size is the size of the payload (I've been logging them, and both video.length and size are 1024). I'm using jcodec to try to decode a frame:
ByteBuffer videoBuffer = ByteBuffer.wrap(video);
H264Decoder decoder = new H264Decoder();
Picture out = Picture.create(1920, 1088, ColorSpace.YUV420);
Picture real = decoder.decodeFrame(videoBuffer, out.getData());
Bitmap outputImage = AndroidUtil.toBitmap(real);
However, video is not the entire video, it's packets of the video. How should I collect these packets in order to decode them? How do I need to modify the code in order to actually get the full current frame?
You need to parse out each NALU segment from the payload you are receiving. Please see my previous answer:
MediaCodec crash on high quality stream
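To illustrate the idea (a sketch of my own, not the code from that answer): in an Annex-B stream, each NAL unit is preceded by a 00 00 01 start code, optionally with an extra leading 00, so the 1024-byte callback payloads can be buffered into one array and then split on those markers before being handed to the decoder:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NaluSplitter {
    // Split an Annex-B H.264 byte stream into NAL unit payloads by
    // scanning for 00 00 01 start codes (a 4-byte 00 00 00 01 code is
    // the same marker with one extra leading zero).
    static List<byte[]> split(byte[] stream) {
        List<Integer> starts = new ArrayList<>();
        for (int i = 0; i + 2 < stream.length; i++) {
            if (stream[i] == 0 && stream[i + 1] == 0 && stream[i + 2] == 1) {
                starts.add(i + 3); // payload begins after the 3-byte code
                i += 2;
            }
        }
        List<byte[]> units = new ArrayList<>();
        for (int s = 0; s < starts.size(); s++) {
            int from = starts.get(s);
            int to = (s + 1 < starts.size()) ? starts.get(s + 1) - 3 : stream.length;
            // drop the leading zero of a 4-byte start code / trailing padding
            while (to > from && stream[to - 1] == 0) to--;
            units.add(Arrays.copyOfRange(stream, from, to));
        }
        return units;
    }
}
```

A complete access unit (all NAL units for one frame, plus SPS/PPS before the first keyframe) is what the decoder needs, rather than a single 1024-byte packet.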
I have successfully gotten a thumbnail for a video with code like this:
Bitmap bitmap = ThumbnailUtils.createVideoThumbnail(mediaFile.getAbsolutePath(),
MediaStore.Video.Thumbnails.MINI_KIND);
But this code only works for some videos, because Android's built-in video decoder cannot handle many formats.
If I just want a thumbnail for a video, is there a good solution? Not needing the NDK would be preferable, but if an NDK solution is more efficient or easier, that is also OK.
You can use:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
retriever.setDataSource(filePath);
    // getFrameAtTime expects microseconds, so the frame at the 5th second is at 5 * 1000000
    bitmap = retriever.getFrameAtTime(5000000);
} catch (Exception ex) {
// Assume this is a corrupt video file
}
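Since getFrameAtTime() takes microseconds, a tiny helper makes the unit explicit (a sketch; the names are my own):

```java
import java.util.concurrent.TimeUnit;

public class VideoTime {
    // Convert a position in seconds to the microseconds getFrameAtTime expects.
    static long secondsToUs(long seconds) {
        return TimeUnit.SECONDS.toMicros(seconds);
    }

    public static void main(String[] args) {
        System.out.println(secondsToUs(5)); // 5000000
    }
}
```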