I just want to get frames (bitmaps) from a video on Android.
Something like this method:
MediaCodecUtil.getFrameAt(long timeUs, Object otherParams);
I have read some blog posts, and it seems MediaCodec is a good choice for this.
But how do I do it? Can anyone help?
I believe the easiest way is to use MediaPlayer and set its output Surface to an ImageReader's Surface.
MediaPlayer will parse the video file, seek to the relevant frame, and decode it for you.
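A minimal sketch of that wiring (untested; the resolution, format, `videoPath`, and `backgroundHandler` are assumptions you must adapt to your device and file):

```java
// Route MediaPlayer's video output into an ImageReader so every decoded
// frame arrives as an Image in the listener callback.
ImageReader reader = ImageReader.newInstance(
        1920, 1080, ImageFormat.YUV_420_888, /* maxImages = */ 2);
reader.setOnImageAvailableListener(r -> {
    try (Image image = r.acquireLatestImage()) {
        if (image != null) {
            // Read the YUV planes here; respect image.getCropRect().
        }
    }
}, backgroundHandler); // a Handler on a background thread (assumed to exist)

MediaPlayer player = new MediaPlayer();
player.setDataSource(videoPath);          // videoPath: your video file (assumed)
player.setSurface(reader.getSurface());
player.prepare();
player.seekTo(/* msec = */ 1000, MediaPlayer.SEEK_CLOSEST); // API 26+
player.start();
```

Note that some devices deliver padded buffers, so always crop via `image.getCropRect()` before interpreting the pixel data.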
Related
I am trying to get the depth image during playback of an mp4 file in ARCore.
First, I recorded live camera frames (depth, color) to an mp4 file.
This mp4 file has 3 tracks: 1280*720 (color), 640*480 (color), and 640*480 (depth).
Next, I start playback of this mp4 file using ARCore's session.setPlaybackDataset() function.
Then I tried to get the color image and depth image using the functions below.
textureImage = frame.acquireCameraImage();
depthImage = frame.acquireDepthImage();
In this case, textureImage's size is 640*480, but depthImage's size is 640*360 (cropped).
I want the non-cropped 640*480 depth image instead.
I tried to find a way to change the frame size before starting playback, but I could not find any solution.
How can I get a non-cropped depth image? My test device is a Samsung Galaxy S20+.
Please help me.
I'm looking for the fastest way to decode, edit, and encode video on Android devices, so I chose MediaCodec with Surface input and output.
This is my idea:
1. Decode the mp4 file with MediaCodec; the output is a SurfaceTexture.
2. Edit the frame with OpenGL; the output is a texture.
3. Encode with MediaCodec; the input is a Surface.
The problem is:
Decoding and editing are much faster than encoding, so by the time I have decoded and edited 50 frames, the encoder may have consumed only 10 of them. But since I feed the encoder through its input Surface, I don't know whether it has consumed all the previous frames, so the other 40 frames are lost.
Is there any way to know the Surface's consumption state, so I can control the decoding speed? Or any other idea?
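One common workaround is back-pressure: let the encoder's progress pace the decoder. With asynchronous MediaCodec, the encoder's onOutputBufferAvailable callback can release a permit, and the decode loop acquires a permit before rendering each frame to the encoder's input Surface. The MediaCodec wiring itself is assumed here; the runnable sketch below only demonstrates the throttling primitive with a simulated slow consumer.

```java
import java.util.concurrent.Semaphore;

// Sketch of back-pressure between a fast producer (decode + edit) and a slow
// consumer (encode). MAX_IN_FLIGHT bounds how many frames may be queued on
// the encoder's input Surface at once.
public class FramePacing {
    static final int MAX_IN_FLIGHT = 3; // assumption: a small queue depth
    static final Semaphore permits = new Semaphore(MAX_IN_FLIGHT);

    // Call before releaseOutputBuffer(..., /* render = */ true) on the
    // decoder: blocks until the encoder has drained enough frames.
    static void beforeRenderFrame() throws InterruptedException {
        permits.acquire();
    }

    // Call from the encoder's onOutputBufferAvailable callback.
    static void onFrameEncoded() {
        permits.release();
    }

    public static void main(String[] args) throws InterruptedException {
        for (int frame = 0; frame < 10; frame++) {
            beforeRenderFrame();   // throttles once MAX_IN_FLIGHT frames are pending
            new Thread(() -> {     // simulate the slow encoder draining a frame
                try { Thread.sleep(5); } catch (InterruptedException ignored) {}
                onFrameEncoded();
            }).start();
        }
        System.out.println("all frames paced");
    }
}
```

With this in place the decoder can never run more than MAX_IN_FLIGHT frames ahead, so no frames are dropped on the encoder's input Surface.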
I'm using a VideoSurfaceView (extends GLSurfaceView) to render filtered video. I do this by changing the fragment shader according to my needs. Now I would like to save/render the video after the changes to a file of the same format (e.g. mp4 - h264), but I couldn't find out how to do it.
I am using this library - https://github.com/krazykira/VidEffects.
Any experts here?
I would try to use the MediaProjection API. Here is a Google sample for this API. Note: this might not work on the newer emulators.
I am writing an app to grab every frame from a video, so that I can do some CV processing.
According to the Android API docs, I should set MediaPlayer's surface to ImageReader.getSurface(), so that I can get every video frame in the OnImageAvailableListener callback. And it really works on some devices and some videos.
However, on my Nexus 5 (API 24-25), I get almost all green pixels when an image becomes available.
I checked the byte[] in the Image's YUV planes, and discovered that something must be wrong with the bytes I read from the video! Most of the bytes are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I tried javacv, but its grabber is too slow.)
I fixed my problem!
When using Image, we should use getCropRect() to get the valid area of the Image.
For example, I get image.getWidth() == 1088 when I decode a 1920*1080 frame; image.getCropRect() gives the valid region of the image, which is 1920x1080.
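A plausible explanation for the 1088 (an assumption, not confirmed by the post): hardware decoders typically pad buffer dimensions up to an alignment boundary, often a multiple of 16, and 1080 padded to a multiple of 16 is 1088. The crop rect tells you which part of the padded buffer is real video. A small illustration of that arithmetic:

```java
// Hypothetical illustration: decoders often round buffer dimensions up to a
// hardware alignment (commonly 16), so the buffer can be larger than the
// video. getCropRect() recovers the valid region.
public class CropRectDemo {
    // Round a dimension up to the next multiple of `alignment`.
    static int align(int size, int alignment) {
        return ((size + alignment - 1) / alignment) * alignment;
    }

    public static void main(String[] args) {
        int videoWidth = 1920, videoHeight = 1080;
        int bufWidth = align(videoWidth, 16);   // 1920 (already aligned)
        int bufHeight = align(videoHeight, 16); // 1088 (padded)
        System.out.println(bufWidth + "x" + bufHeight);
        // The crop rect would then be (0, 0, 1920, 1080):
        // only read pixels inside that region.
    }
}
```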
I've implemented an android app that implements the CvCameraListener interface. In the onCameraFrame(Mat inputFrame) method I process the captured inputFrame from the camera.
Now to my problem: Is there a way to use a saved video file on my phone as the input instead of getting the frames directly from the camera? That is, I would like to read the video file frame by frame, with each frame in Mat format.
Is there a possible way to do that?
Thanks for your answers
I haven't tested this and I don't have much experience with OpenCV on Android, but you can try something like this:
// FD: a file descriptor or path to the video.
Bitmap myBitmapFrame;
Mat myCVMat = new Mat();
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(FD);
    // timeUs is a hypothetical timestamp in microseconds.
    myBitmapFrame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
    Utils.bitmapToMat(myBitmapFrame, myCVMat);
} catch (Exception e) {
    // handle failure (invalid source, no frame available, ...)
} finally {
    retriever.release();
}
You may have to implement some callback system, since you can work with OpenCV only after it has been initialized. Also, you can convert a frame number to a time-code.
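The frame-number-to-time-code conversion mentioned above can be sketched like this (a hypothetical helper, assuming a constant frame rate; getFrameAtTime() expects microseconds):

```java
// Hypothetical helper: convert a frame index to the microsecond timestamp
// expected by MediaMetadataRetriever.getFrameAtTime(). Assumes constant fps.
public class FrameTime {
    static long frameToTimeUs(int frameIndex, double fps) {
        return Math.round(frameIndex * 1_000_000.0 / fps);
    }

    public static void main(String[] args) {
        System.out.println(frameToTimeUs(30, 30.0)); // frame 30 at 30 fps -> 1000000
    }
}
```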
Good Luck and Happy Coding. :)