I want to create a GIF from an MP4 video, so I need to extract the frames from the video first. Here is the code I use to extract frames:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(mFilePath);
Bitmap bitmap = retriever.getFrameAtTime(i,
        MediaMetadataRetriever.OPTION_CLOSEST);
Note that the variable i is the time in microseconds. Since I want 24 frames/second, I call retriever.getFrameAtTime() with i = 42000, 84000, ... (microseconds).
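Put together, the whole extraction pass looks roughly like this (a sketch of what I described above; the GIF-encoding step and error handling are omitted):

import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the extraction loop described above (GIF encoding omitted).
List<Bitmap> extractFrames(String mFilePath) {
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(mFilePath);

    long durationUs = 1000L * Long.parseLong(
            retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));
    long frameIntervalUs = 42_000; // one frame every ~42 ms, i.e. ~24 fps

    List<Bitmap> frames = new ArrayList<>();
    for (long t = 0; t < durationUs; t += frameIntervalUs) {
        Bitmap bitmap = retriever.getFrameAtTime(t, MediaMetadataRetriever.OPTION_CLOSEST);
        if (bitmap != null) {
            frames.add(bitmap);
        }
    }
    retriever.release();
    return frames;
}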
The problem is that when I assemble the extracted frames into a video, I see only 4-5 distinct frames; in other words, the result is not smooth. It seems that MediaMetadataRetriever often returns the same frame for different requested times. Please help me!
I'm trying to get the last frame of an MP4 video using MediaMetadataRetriever, but it always returns the first frame for short videos (around 3 s long); it works fine for long videos. FFmpegMediaMetadataRetriever gives the same result.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(video);
String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
Bitmap frameAtTime = retriever.getFrameAtTime(Long.parseLong(time)*1000, MediaMetadataRetriever.OPTION_CLOSEST);
mImage.setImageBitmap(frameAtTime);
Any suggestions would be appreciated.
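One thing worth trying (just an assumption on my part, not a confirmed fix): request a timestamp slightly before the reported duration, and fall back to the nearest sync frame if that returns null:

// Assumption: on very short clips the requested time may land past the last
// sample, so back off a little from the reported duration before seeking.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(video);
long durationUs = 1000L * Long.parseLong(
        retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));
long nearEndUs = Math.max(0, durationUs - 100_000); // ~0.1 s before the end
Bitmap frameAtTime = retriever.getFrameAtTime(nearEndUs, MediaMetadataRetriever.OPTION_CLOSEST);
if (frameAtTime == null) {
    frameAtTime = retriever.getFrameAtTime(nearEndUs, MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
}
retriever.release();
mImage.setImageBitmap(frameAtTime);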
I am writing an app to grab every frame from a video so that I can do some CV processing.
According to the description in the Android API docs, I should set MediaPlayer's surface to ImageReader.getSurface() so that I can get every video frame in the OnImageAvailableListener callback. This really does work on some devices and with some videos.
However, on my Nexus 5 (API 24-25) I get almost nothing but green pixels when an image becomes available.
I have checked the byte[] data in the image's YUV planes, and something must be wrong with the bytes I read from the video: most of them are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried JavaCV, but its grabber is too slow.)
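For reference, the setup I describe above looks roughly like this (a sketch; videoPath, the dimensions and backgroundHandler are placeholders, and error handling is omitted):

import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.media.MediaPlayer;
import android.os.Handler;
import java.io.IOException;

// Sketch: route MediaPlayer's output into an ImageReader so each decoded
// frame arrives in the OnImageAvailableListener callback.
void playIntoImageReader(String videoPath, int width, int height,
                         Handler backgroundHandler) throws IOException {
    ImageReader imageReader = ImageReader.newInstance(width, height,
            ImageFormat.YUV_420_888, 2);
    imageReader.setOnImageAvailableListener(reader -> {
        Image image = reader.acquireLatestImage();
        if (image != null) {
            // inspect the Y/U/V planes here ...
            image.close();
        }
    }, backgroundHandler);

    MediaPlayer player = new MediaPlayer();
    player.setDataSource(videoPath);
    player.setSurface(imageReader.getSurface());
    player.prepare();
    player.start();
}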
I fixed my problem!
When using Image, we should use getCropRect() to get the valid area of the Image.
For example, I get an image width of 1088 when I decode a 1920*1080 frame; using image.getCropRect() gives the real size of the image, which is 1920x1080.
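Roughly, the listener should handle it like this (a sketch; the actual YUV-to-RGB copy is left out):

import android.graphics.Rect;
import android.media.Image;
import android.media.ImageReader;

// Sketch: the decoder may pad the buffer, so only the pixels inside
// getCropRect() are valid.
ImageReader.OnImageAvailableListener listener = reader -> {
    Image image = reader.acquireLatestImage();
    if (image == null) return;

    Rect crop = image.getCropRect();       // the valid region, e.g. 1920x1080
    int validWidth = crop.width();
    int validHeight = crop.height();

    Image.Plane yPlane = image.getPlanes()[0];
    int rowStride = yPlane.getRowStride(); // can be wider than validWidth
    // Copy only validWidth bytes per row (starting at crop.left/crop.top),
    // skipping the rowStride - validWidth padding bytes at the end of each row.
    // The same idea applies to the U and V planes, with their own strides.

    image.close();
};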
I would like to perform face detection / tracking on a video file (e.g. an MP4 from the user's gallery) using the Android Vision FaceDetector API. I can see many examples of using the CameraSource class to perform face tracking on the stream coming directly from the camera (e.g. on the android-vision github), but nothing on video files.
I tried looking at the source code for CameraSource through Android Studio, but it is obfuscated, and I couldn't find the original online. I imagine there are many commonalities between using the camera and using a file. Presumably I just play the video file on a Surface and then pass that to a pipeline.
Alternatively, I can see that Frame.Builder has the functions setImageData and setTimestampMillis. If I were able to read the video in as a ByteBuffer, how would I pass that to the FaceDetector API? I guess this question is similar, but it has no answers. Similarly, I could decode the video into Bitmap frames and pass those to setBitmap.
Ideally I don't want to render the video to the screen, and the processing should happen as fast as the FaceDetector API is capable of.
Alternatively I can see that Frame.Builder has functions setImageData and setTimestampMillis. If I was able to read in the video as ByteBuffer, how would I pass that to the FaceDetector API?
Simply call SparseArray<Face> faces = detector.detect(frame); where detector has to be created like this:
FaceDetector detector = new FaceDetector.Builder(context)
        .setProminentFaceOnly(true)
        .build();
If processing time is not an issue, using MediaMetadataRetriever.getFrameAtTime solves the problem. As Anton suggested, you can also use FaceDetector.detect:
Bitmap bitmap;
Frame frame;
SparseArray<Face> faces;
MediaMetadataRetriever mMMR = new MediaMetadataRetriever();
mMMR.setDataSource(videoPath);
String timeMs = mMMR.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION); // video time in ms
int totalVideoTime = 1000 * Integer.valueOf(timeMs); // total video time, in microseconds
for (int time_us = 1; time_us < totalVideoTime; time_us += deltaT) {
    bitmap = mMMR.getFrameAtTime(time_us, MediaMetadataRetriever.OPTION_CLOSEST_SYNC); // extract a bitmap from the key frame closest to the specified time_us
    if (bitmap == null) break;
    frame = new Frame.Builder().setBitmap(bitmap).build(); // generates a "Frame" object, which can be fed to a face detector
    faces = detector.detect(frame); // detect the faces (detector is a FaceDetector)
    // TODO ... do something with "faces"
}
where deltaT = 1000000/fps, and fps is the desired number of frames per second. For example, if you want to extract 4 frames every second, deltaT = 250000.
(Note that faces will be overwritten on every iteration, so you should do something with the results (store/report them) inside the loop.)
I am trying to get a frame out of an H.264 video stream on Android. I have a callback function that receives (byte[] video, int size), where video is composed of NALU packets and the size int seems to be the size of the payload (I've been logging them, and both video.length and size are always 1024). I'm using jcodec to try to decode a frame:
ByteBuffer videoBuffer = ByteBuffer.wrap(video);
H264Decoder decoder = new H264Decoder();
Picture out = Picture.create(1920, 1088, ColorSpace.YUV420);
Picture real = decoder.decodeFrame(videoBuffer, out.getData());
Bitmap outputImage = AndroidUtil.toBitmap(real);
However, video is not the entire video; it's only packets of it. How should I collect these packets in order to decode them, and how do I need to modify the code to actually get the full current frame?
You need to parse out each NALU segment from the payload you are receiving. Please see my previous answer:
MediaCodec crash on high quality stream
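The general idea is to buffer the callback payloads and split them on the Annex-B start codes (00 00 01 or 00 00 00 01). Something like this hypothetical helper (not taken from the linked answer) shows the shape of it:

import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: buffers the 1024-byte callback payloads and returns
// complete Annex-B NAL units as they become available.
class NalAccumulator {
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    List<byte[]> push(byte[] video, int size) {
        pending.write(video, 0, size);
        byte[] buf = pending.toByteArray();

        // Find every start-code position (00 00 01 or 00 00 00 01).
        List<Integer> starts = new ArrayList<>();
        int i = 0;
        while (i + 3 < buf.length) {
            if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 0 && buf[i + 3] == 1) {
                starts.add(i);
                i += 4;
            } else if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
                starts.add(i);
                i += 3;
            } else {
                i++;
            }
        }

        // Everything between two start codes is one complete NAL unit;
        // whatever follows the last start code may still be partial, so keep it.
        List<byte[]> units = new ArrayList<>();
        for (int s = 0; s + 1 < starts.size(); s++) {
            int from = starts.get(s);
            int to = starts.get(s + 1);
            byte[] unit = new byte[to - from];
            System.arraycopy(buf, from, unit, 0, unit.length);
            units.add(unit);
        }

        pending.reset();
        int keepFrom = starts.isEmpty() ? 0 : starts.get(starts.size() - 1);
        pending.write(buf, keepFrom, buf.length - keepFrom);
        return units;
    }
}

Once you have complete NAL units (including the SPS/PPS that precede the first IDR frame), you can feed whole frames to the decoder instead of raw 1024-byte chunks.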
I've implemented an Android app that implements the CvCameraListener interface. In the onCameraFrame(Mat inputFrame) method I process the captured inputFrame from the camera.
Now to my problem: is there a way I can use a saved video file on my phone as the input instead of getting the frames directly from the camera? In other words, I would like to read a video file frame by frame in Mat format.
Is there a possible way to do that?
Thanks for your answers
This is not tested, and I don't have much experience with OpenCV on Android, but you can try something like this:
// FD: a file descriptor or path to the video file
long timeUs = 0;                  // time of the frame you want, in microseconds
Bitmap myBitmapFrame = null;
Mat myCVMat = new Mat();
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(FD);
    myBitmapFrame = retriever.getFrameAtTime(timeUs);
} catch (Exception e) {
    e.printStackTrace();
} finally {
    retriever.release();
}
if (myBitmapFrame != null) {
    Utils.bitmapToMat(myBitmapFrame, myCVMat);
}
You may have to implement some callback system, since you can work with OpenCV only after it has been initialized. Also, you can convert a frame number into a time-code.
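For that conversion, something along these lines should do (the frame rate is assumed to be known here, e.g. 30 fps):

// Convert a frame index into the microsecond timestamp getFrameAtTime() expects.
int fps = 30;                                // assumed/known frame rate
long frameIndex = 120;                       // the frame you want
long frameTimeUs = frameIndex * 1_000_000L / fps; // = 4,000,000 us for frame 120 at 30 fps
Bitmap bmp = retriever.getFrameAtTime(frameTimeUs, MediaMetadataRetriever.OPTION_CLOSEST);
if (bmp != null) {
    Utils.bitmapToMat(bmp, myCVMat);
}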
Good Luck and Happy Coding. :)