I am trying to extract all frames from a video.
With the following code I wanted to fetch the first 30 frames of a video, but I got only the first frame 30 times.
private ArrayList<Bitmap> getFrames(String path) {
    try {
        ArrayList<Bitmap> bArray = new ArrayList<Bitmap>();
        bArray.clear();
        MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
        mRetriever.setDataSource("/sdcard/myvideo.mp4");
        for (int i = 0; i < 30; i++) {
            bArray.add(mRetriever.getFrameAtTime(1000 * i,
                    MediaMetadataRetriever.OPTION_CLOSEST_SYNC));
        }
        return bArray;
    } catch (Exception e) {
        return null;
    }
}
Now, how can I get all frames from a video?
Video support in the Android SDK is limited, and frame extraction for H.264-encoded videos is only possible for key frames. To extract an arbitrary frame, you'll need a library like FFmpegMediaMetadataRetriever, which uses native code to extract data from the video. It is very fast, comes with precompiled binaries (for ARM and x86) so you don't need to delve into C++ and makefiles, is licensed under Apache 2.0, and comes with a demo Android app.
There is also a pure Java library, JCodec, but it's slower, and when I used it last year the colors of the extracted frames were distorted.
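For completeness, here is a rough sketch of pulling one frame per second with FFmpegMediaMetadataRetriever; it assumes the library's METADATA_KEY_DURATION (reported in milliseconds) and OPTION_CLOSEST behave like their framework counterparts, and the file path is just a placeholder:
FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
retriever.setDataSource("/sdcard/myvideo.mp4"); // placeholder path
// duration is reported in milliseconds
long durationMs = Long.parseLong(
        retriever.extractMetadata(FFmpegMediaMetadataRetriever.METADATA_KEY_DURATION));
List<Bitmap> frames = new ArrayList<Bitmap>();
// getFrameAtTime expects microseconds; grab one frame per second
for (long timeUs = 0; timeUs < durationMs * 1000; timeUs += 1000000) {
    frames.add(retriever.getFrameAtTime(timeUs, FFmpegMediaMetadataRetriever.OPTION_CLOSEST));
}
retriever.release();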
You have to pass the video path to this method. Perfectly working code, hope it's helpful!
Gradle:
implementation 'com.github.wseemann:FFmpegMediaMetadataRetriever-core:1.0.15'
public void VideoToGif(String uri) {
    Uri videoFileUri = Uri.parse(uri);
    FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
    retriever.setDataSource(uri);
    List<Bitmap> rev = new ArrayList<Bitmap>();
    // MediaPlayer is only used here to read the duration (in milliseconds)
    MediaPlayer mp = MediaPlayer.create(GitToImage.this, videoFileUri);
    int millis = mp.getDuration();
    mp.release();
    System.out.println("starting point");
    // getFrameAtTime expects microseconds, so convert the duration and step in 200 ms increments
    for (int i = 100000; i <= millis * 1000; i += 100000 * 2) {
        Bitmap bitmap = retriever.getFrameAtTime(i, FFmpegMediaMetadataRetriever.OPTION_CLOSEST);
        rev.add(bitmap);
    }
    retriever.release();
    GiftoImage((ArrayList) rev);
}
getFrameAtTime expects the position in microseconds, but you are incrementing by only 1000 µs (one millisecond) in your for loop, so OPTION_CLOSEST_SYNC keeps returning the same key frame.
for (int i = 1000000; i < millis * 1000; i += 1000000) { // step by 1 second = 1,000,000 µs; millis is the duration in ms
    bArray.add(mRetriever.getFrameAtTime(i,
            MediaMetadataRetriever.OPTION_CLOSEST_SYNC));
}
Change it like above. The above is a sample for achieving what you want. I also answered it here.
Starting with Android 9.0 (API level 28), MediaMetadataRetriever has a getFrameAtIndex(int frameIndex) method, which accepts the zero-based index of the frame you want and returns a Bitmap.
See https://developer.android.com/reference/android/media/MediaMetadataRetriever.html#getFrameAtIndex(int)
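A minimal sketch of using it to pull every frame (assuming API 28+, where METADATA_KEY_VIDEO_FRAME_COUNT reports the total frame count; the path is a placeholder):
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource("/sdcard/myvideo.mp4"); // placeholder path
// total number of video frames (API 28+)
int frameCount = Integer.parseInt(
        retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_FRAME_COUNT));
List<Bitmap> frames = new ArrayList<Bitmap>();
for (int i = 0; i < frameCount; i++) {
    frames.add(retriever.getFrameAtIndex(i)); // decodes frame i, not just key frames
}
retriever.release();
Note that keeping every decoded frame as a Bitmap will exhaust memory for anything but very short clips, so in practice you would process or persist each frame inside the loop.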
Is there any way to read frames from an mp4 video using JavaCV in parallel?
I know that we could grab frames using FFmpegFrameGrabber, but is there any other, more efficient method, like using FrameGrabber.Array? I tried the code below but it's not working.
frames = new Frame[grabber.getLengthInFrames()];
frameGrabbers = new FFmpegFrameGrabber[grabber.getLengthInFrames()];
/*
for (FFmpegFrameGrabber grabber : frameGrabbers) {
    grabber = new FFmpegFrameGrabber(path);
}
*/
for (int i = 0; i < grabber.getLengthInFrames(); i++) {
    frameGrabbers[i] = new FFmpegFrameGrabber(path);
}
grabberArray = grabber.createArray(frameGrabbers);
grabberArray.start();
frames = grabberArray.grab();
grabberArray.release();
The app crashes when I call grabberArray.start().
Thanks.
I am wondering if it is possible to reuse a single MediaMetadataRetriever object for the purpose of getting metadata from multiple files?
If yes, should I call the release() method after each file, or just set a different data source and call release() after all files have been processed?
The API reference is not precise about that :/
thanks :)
Yes, you can reuse the object. Your code would look something like this:
MediaMetadataRetriever mmr = new MediaMetadataRetriever();
for (int i = 0; i < files.length; i++) {
    mmr.setDataSource(files[i]);
    mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_ALBUM);
    mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_ARTIST);
    Bitmap b = mmr.getFrameAtTime(2000000, MediaMetadataRetriever.OPTION_CLOSEST); // frame at 2 seconds
    byte[] artwork = mmr.getEmbeddedPicture();
}
mmr.release(); // all done, release the object
I've compiled an FFmpeg library for use on Android with libx264, using the NDK.
I want to encode an MPEG video file; however, the application fails when opening the encoder codec, in avcodec_open2.
The FFMPEG logs I receive from avcodec_open2 are below with the function returning -22.
Picture size %ux%u is invalid.
ignoring invalid width/height values
Specified pix_fmt is not supported
On Windows this code works fine; it's only on Android that there is a failure. Any ideas why this would fail on Android?
if (!(codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO)))
{
    return -1;
}
//Allocate context based on codec
if (!(context = avcodec_alloc_context3(codec)))
{
    return -2;
}
//Setup Context
// put sample parameters
context->bit_rate = 4000000;
// resolution must be a multiple of two
context->width = 1280;
context->height = 720;
// frames per second
context->time_base = (AVRational){1, 25};
context->inter_quant_bias = 96;
context->gop_size = 10;
context->max_b_frames = 1;
//IDs
context->pix_fmt = AV_PIX_FMT_YUV420P;
context->codec_id = AV_CODEC_ID_MPEG1VIDEO;
context->codec_type = AVMEDIA_TYPE_VIDEO;
if (AV_CODEC_ID_MPEG1VIDEO == AV_CODEC_ID_H264)
{
    av_opt_set(context->priv_data, "preset", "slow", 0);
}
if ((result = avcodec_open2(context, codec, NULL)) < 0)
{
    //Failed opening Codec!
}
This problem was caused by building FFMPEG with outdated source code.
I got the most recent source from https://www.ffmpeg.org/ and compiled it in the same way, and the new library works fine.
Note: I hadn't considered the full implications regarding licenses of using libx264. I've since dropped it.
For my application I created a video from a set of images by using JavaCV/OpenCV on Android, but that video plays without sound. I want to add my recorded audio (an MP3 file) to that generated video. How can I achieve it?
This is the code I use to create the video from images:
String path = SCREENSHOT_FOLDER2;
File folder = new File(path);
listOfFiles = folder.listFiles();
if (listOfFiles.length > 0) {
    iplimage = new opencv_core.IplImage[listOfFiles.length];
    for (int j = 0; j < listOfFiles.length; j++) {
        String files = "";
        if (listOfFiles[j].isFile()) {
            files = listOfFiles[j].getName();
        }
        String[] tokens = files.split("\\.(?=[^\\.]+$)");
        String name = tokens[0];
        System.out.println(" j " + name);
        iplimage[j] = cvLoadImage("/mnt/sdcard/images/" + name + ".png");
    }
}
File videopath = new File(SCREENSHOT_FOLDER3);
videopath.mkdirs();
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
        SCREENSHOT_FOLDER3 + "video" + System.currentTimeMillis() + ".mp4", 320, 480);
try {
    recorder.setCodecID(CODEC_ID_MPEG4);      // CODEC_ID_MPEG4 or CODEC_ID_MPEG1VIDEO
    recorder.setBitrate(sampleVideoBitRate);
    recorder.setFrameRate(10);
    recorder.setPixelFormat(PIX_FMT_YUV420P); // PIX_FMT_YUV420P
    recorder.start();
    int x = 0;
    int y = 0;
    for (int i = 0; i < 300 && x < iplimage.length; i++) {
        recorder.record(iplimage[x]);
        if (i > (y + 10)) {
            y = y + 1;
            x++;
        }
    }
    recorder.stop();
} catch (Exception e) {
    e.printStackTrace();
}
Now, how do I integrate the audio file (.mp3) into this code?
OpenCV, and consequently JavaCV, has no support for audio.
You have to go with a different library for it. Look at the Android support for video/audio, third-party libs, or any other way you may find useful.
But don't just expect OpenCV to help you, because its support for audio is 0%.
I would not say that JavaCV has no support for audio, as it integrates a lot of libraries that OpenCV does not (ffmpeg, for example). Check this long thread for that issue.
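For instance, here is a minimal sketch of muxing an MP3 track with the current JavaCV API (org.bytedeco.javacv). The file names, dimensions and frame rate are placeholders, this API differs from the older one used in the question, and the image-recording loop from the question is omitted:
// Read the audio from the MP3 with an FFmpegFrameGrabber
FFmpegFrameGrabber audioGrabber = new FFmpegFrameGrabber("/mnt/sdcard/audio.mp3");
audioGrabber.start();
// Recorder configured with the grabber's audio channel count
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
        "/mnt/sdcard/video_with_audio.mp4", 320, 480, audioGrabber.getAudioChannels());
recorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4);
recorder.setFrameRate(10);
recorder.setSampleRate(audioGrabber.getSampleRate());
recorder.start();
// ... record the image frames here, as in the question ...
// Copy the audio frames into the same recording
Frame audioFrame;
while ((audioFrame = audioGrabber.grabFrame()) != null) {
    recorder.record(audioFrame);
}
recorder.stop();
audioGrabber.stop();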
I am working on an Android application in which a video is dynamically generated by compositing a sequence of animation frames. I tried to use the Android MediaRecorder API for this but have not found a way to get it to accept a non-camera source as input. I have been attempting to use an FFmpeg port (based on the Rockplayer build) but am running into difficulties with missing functions, since I am using it as an encoder, not a decoder.
The iPhone version of this app uses AVAssetWriter from the AVFoundation framework.
Is there an easier way to do this or am I stuck slugging it out with FFMPEG?
This may help (see the note on resolution, though):
How to encode using the FFMpeg in Android (using H263)
I'm not sure if they did a custom build of ffmpeg or not; if so, they may be able to offer advice on porting a more feature-complete version.
-Anthony
OpenCV has a ViewBase class which takes the input from the camera as a frame and represents the frame as a bitmap. You can extend the ViewBase class and adapt it for your own use, even though installing OpenCV on Android isn't very easy.
When you extend SampleCvViewBase you get the following function to work with. It's pretty much hard work, but the best I can think of.
@Override
protected Bitmap processFrame(VideoCapture capture) {
    capture.retrieve(picture, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
    if (Utils.matToBitmap(picture, bmp))
        return bmp;
    bmp.recycle();
    return null;
}
You can use a pure Java open source library called JCodec ( http://jcodec.org ).
It contains a simple yet working H.264 encoder and MP4 muxer. The class below uses JCodec's low-level API and should be what you need (CORRECTED):
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);
        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);
        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);
        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);
        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);
        // Create an instance of encoder
        encoder = new H264Encoder();
        // Encoder extra data ( SPS, PPS ) to be stored in a special place of
        // MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }
        // Perform conversion
        for (int i = 0; i < 3; i++)
            Arrays.fill(toEncode.getData()[i], 0);
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);
        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);
        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);
        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }

    public static void main(String[] args) throws IOException {
        SequenceEncoder encoder = new SequenceEncoder(new File("video.mp4"));
        for (int i = 1; i < 100; i++) {
            BufferedImage bi = ImageIO.read(new File(String.format("folder/img%08d.png", i)));
            encoder.encodeImage(bi);
        }
        encoder.finish();
    }
}
You can get the JCodec jar from the project web-site.