How to read a video file and split it into frames - Android

I have this question: how can I load, in Android, a video file stored on my device, and how can I split it into frames?
I'm using IntelliJ, and I want to split the video into frames in order to process them with some image-processing techniques (using the OpenCV for Android library).

You don't strictly need to use OpenCV for this. You can use the MediaMetadataRetriever class provided by the SDK. It provides methods to extract metadata and frames from all kinds of media files. You can try something like:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(file.getAbsolutePath());
// OPTION_CLOSEST returns the frame nearest to the given timestamp,
// rather than only the nearest sync (key) frame
imgView.setImageBitmap(retriever.getFrameAtTime(TIME_OFFSET, MediaMetadataRetriever.OPTION_CLOSEST));
where TIME_OFFSET is in microseconds.
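For example, to grab roughly one frame per second across the whole clip, you can loop over the duration reported by the metadata. A minimal sketch (the one-second step and the file variable are just for illustration):

import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import java.util.ArrayList;
import java.util.List;

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(file.getAbsolutePath());

// The duration metadata is reported in milliseconds
long durationMs = Long.parseLong(
        retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));

List<Bitmap> frames = new ArrayList<>();
for (long ms = 0; ms < durationMs; ms += 1000) {
    // getFrameAtTime() expects microseconds, hence ms * 1000
    Bitmap frame = retriever.getFrameAtTime(ms * 1000, MediaMetadataRetriever.OPTION_CLOSEST);
    if (frame != null) {
        frames.add(frame);
    }
}
retriever.release();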

Grabbing a video frame in OpenCV is pretty easy. There are lots of examples on the OpenCV site. The crucial thing, however, is to set up OpenCV on Android. You can follow this link on getting started with OpenCV on Android:
http://opencv.org/android
Once you have OpenCV installed on Android, you can easily load a video file, grab frames into a Mat structure and then do some processing on them.
Here is a sample. It will need some modification to run on Android; I think you will need to use the NDK for this.
#include <iostream>
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    const char *my_file = "C:\\vid_an2\\desp_me.avi";
    std::cout << "Video file: " << my_file << std::endl;

    cv::VideoCapture input_video;
    if (input_video.open(my_file))
    {
        std::cout << "Video file open" << std::endl;
    }
    else
    {
        std::cout << "Not able to open video file" << std::endl;
        return -1;
    }

    cv::namedWindow("My_Win", 1);
    cv::namedWindow("Segmented", 1); // second window from the original sample (unused below)

    cv::Mat cap_img;
    for (;;)
    {
        input_video >> cap_img;   // grab the next frame
        if (cap_img.empty())      // stop at the end of the video
            break;
        cv::imshow("My_Win", cap_img);
        cv::waitKey(0);           // wait for a key press before advancing
    }
    return 0;
}
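Alternatively, if you would rather avoid the NDK, recent OpenCV4Android versions also expose VideoCapture through the Java bindings. A minimal sketch, assuming OpenCV has already been initialized on the device; the file path is hypothetical, and note the codec caveat discussed further down (without an FFmpeg backend, OpenCV on Android typically only reads MJPEG-encoded AVI files):

import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

// Hypothetical path, for illustration only
VideoCapture capture = new VideoCapture("/sdcard/my_video.avi");
if (!capture.isOpened()) {
    Log.e("VideoProcessing", "Could not open video file");
    return;
}
Mat frame = new Mat();
while (capture.read(frame)) {  // read() returns false at the end of the video
    // process each frame (Mat) with OpenCV here
}
capture.release();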

Related

Use FFmpeg in Xamarin Android

I'm using Xamarin.Android to create an Android application. To work on the output of recorded audio in Android (changing its pitch and other things with the NWaves package), I need to convert it to the .wav format. I tried many audio converters, but all of them threw exceptions; it should work with FFmpeg, though. I used this code, but it doesn't generate any file:
List<string> cmd = new List<string>();
cmd.Add("ffmpeg");
cmd.Add("-i");
cmd.Add("/storage/emulated/0/Android/data/com.companyname.pushersvc/demo.3GP");
cmd.Add("/storage/emulated/0/Android/data/com.companyname.pushersvc/test.wav");
string cmdParams = string.Join(" ", cmd);

await FFMpeg.Xamarin.FFMpegLibrary.Run(
    Application.Context,
    cmdParams
);
return new FileInfo(path); // "path" points at the expected output file (defined elsewhere)
I read this and the FFmpeg docs, but all of them deal with converting to .mp4.
I also read the FFmpeg documentation on audio conversion, which the code above follows, but it is not working.

How To: Android OpenCV VideoCapture with File

I am trying to pass a video to the OpenCV VideoCapture class. However, when I call the VideoCapture.isOpened() method, it always returns false. I have tried two locations:
saving the video file to internal memory, context.getFilesDir() => /data/data/package_name/files/VideoToAnalyze/Recording.mp4,
and also Environment.getExternalStorageDirectory() => sdcard/appName/Recording.mp4.
Nothing seems to work here. My question is: how do I pass a video file (or what is the correct file path) to an OpenCV VideoCapture object? I've posted some code below as an example. Note that I don't get an error; the file is always found/exists, but when I call isOpened() I always get false.
UPDATE:
It looks like everyone on the web is saying that OpenCV (I'm using 3.1.0) lacks an FFmpeg backend and thus cannot process videos. Does anyone know this for a fact? And is there a workaround? Almost all other alternatives for processing videos frame by frame are deathly slow.
String x = getApplicationContext().getFilesDir().getAbsolutePath();
File dir = new File(x + "/VideoToAnalyze");
File videoFile = null;
if (dir.isDirectory()) {
    videoFile = new File(dir.getAbsolutePath() + "/Recording1.mp4");
} else {
    // handle error
}
if (videoFile.exists()) {
    String absPath = videoFile.getAbsolutePath();
    VideoCapture vc = new VideoCapture();
    try {
        vc.open(absPath);
    } catch (Exception e) {
        // handle error
    }
    if (!vc.isOpened()) {
        // this code is always hit
        Log.v("VideoCapture", "failed");
    } else {
        Log.v("VideoCapture", "opened");
        // ...
    }
}
It's an old question, but nevertheless I had the same issue.
OpenCV for Android only supports the MJPEG codec in an AVI container. See this.
Just to close this: I downloaded JavaCV, included the .so files in the Android project, and then used FFmpegFrameGrabber.
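For reference, the JavaCV workaround looks roughly like this. A minimal sketch using the bytedeco JavaCV API (the file path is hypothetical):

import android.graphics.Bitmap;
import org.bytedeco.javacv.AndroidFrameConverter;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;

void extractFrames() throws Exception {
    // Hypothetical path, for illustration only
    FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("/sdcard/appName/Recording.mp4");
    grabber.start();
    AndroidFrameConverter converter = new AndroidFrameConverter();
    Frame frame;
    while ((frame = grabber.grabImage()) != null) {  // grabImage() skips audio frames
        Bitmap bitmap = converter.convert(frame);
        // process the bitmap here
    }
    grabber.stop();
    grabber.release();
}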

Create video file from images in Android

I have read many articles and found a lot of info about creating a video from a sequence of images. They all recommend using ffmpeg. The thing is that this is pretty complicated. Is there a simple way to do this without ffmpeg? I need the resulting video to be readable by a regular video player on the device.
Not sure what you mean by complicated. If you are not very comfortable with the native layer, then you might use JavaCV. It provides Java wrappers for ffmpeg, among other open-source libraries, and works very well.
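As an illustration, encoding a sequence of Bitmaps with JavaCV's FFmpegFrameRecorder might look like this. A sketch using the bytedeco JavaCV API; the resolution and frame rate are illustrative values:

import android.graphics.Bitmap;
import java.io.File;
import java.util.List;
import org.bytedeco.javacv.AndroidFrameConverter;
import org.bytedeco.javacv.FFmpegFrameRecorder;

void encodeVideo(List<Bitmap> frames, File output) throws Exception {
    // 640x480 at 25 fps, for illustration only
    FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(output, 640, 480);
    recorder.setFormat("mp4");
    recorder.setFrameRate(25);
    recorder.start();
    AndroidFrameConverter converter = new AndroidFrameConverter();
    for (Bitmap bitmap : frames) {
        recorder.record(converter.convert(bitmap));
    }
    recorder.stop();
    recorder.release();
}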
Possibly, you want to make use of the Movie class. The reference is here:
http://developer.android.com/reference/android/graphics/Movie.html
And a sample is here:
https://code.google.com/p/animated-gifs-in-android/
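Keep in mind that Movie only decodes and plays animated GIFs; it does not create video files. A minimal sketch of drawing one, assuming a GIF placed at res/raw/sample:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Movie;
import android.os.SystemClock;
import android.view.View;

public class GifView extends View {
    private final Movie movie;
    private long startTime;

    public GifView(Context context) {
        super(context);
        // Movie rendering does not support hardware acceleration
        setLayerType(LAYER_TYPE_SOFTWARE, null);
        movie = Movie.decodeStream(getResources().openRawResource(R.raw.sample));
    }

    @Override
    protected void onDraw(Canvas canvas) {
        long now = SystemClock.uptimeMillis();
        if (startTime == 0) startTime = now;
        int duration = movie.duration() == 0 ? 1000 : movie.duration();
        movie.setTime((int) ((now - startTime) % duration));
        movie.draw(canvas, 0, 0);
        invalidate(); // schedule the next frame
    }
}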
You can use the JCodec library. It now supports Android too.
You need to download the library and add it to your project.
Here is an example of using the library:
SequenceEncoder se = null;
try {
    // encode to jcodec_enc.mp4 on external storage
    se = new SequenceEncoder(new File(Environment.getExternalStorageDirectory(),
            "jcodec_enc.mp4"));
    // yourDirectory is the folder that holds the source images
    File[] files = yourDirectory.listFiles();
    for (int i = 0; i < files.length; i++) {
        if (!files[i].exists())
            break;
        Bitmap frame = BitmapFactory.decodeFile(files[i].getAbsolutePath());
        se.encodeImage(frame);
    }
    se.finish();
} catch (IOException e) {
    Log.e(TAG, "IO", e);
}

cvCaptureFromAVI problems - OpenCV Android

I need to capture frames one by one from a video stored on the SD card of my Android device (in this case the emulator). I am using Android and OpenCV through the NDK. I pushed the file "SinglePerson.avi" onto the sdcard manually through the file explorer of DDMS (Eclipse) and used the code below to read the file:
JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial4_Sample4Mixed_VideoProcessing(JNIEnv*, jobject)
{
    LOGI("INSIDE VideoProcessing");
    CvCapture* capture = cvCaptureFromAVI("/mnt/sdcard/SinglePerson.avi");
    IplImage* img = 0;
    if (!cvGrabFrame(capture)) {       // capture a frame
        LOGI("Inside the if");
        printf("Could not grab a frame\n\7");
        exit(0);
    }
    img = cvRetrieveFrame(capture);    // retrieve the captured frame
    cvReleaseCapture(&capture);
}
The problem is that cvGrabFrame(capture) always returns false.
Any suggestions on how to correctly open the video and grab frames?
Thanks in advance.
Some versions of OpenCV (in the opencv2 package) are built without video support. If that is your case, you have to enable "-D WITH_FFMPEG=ON" in the package's Makefile and recompile.
Look at the "Displaying AVI Video using OpenCV" tutorial:
"You may need to ensure that ffmpeg has been successfully installed in order to allow video encoding and video decoding in different formats. Not having the ffmpeg functionality may cause problems when trying to run this simple example and produce compilation errors."
Also check that the path passed to cvCaptureFromAVI is correct.
Hope this will help!
The behavior you are observing is probably due to cvCaptureFromAVI() failing. You need to start coding safely and check the return of the calls you make:
CvCapture* capture = cvCaptureFromAVI("/mnt/sdcard/SinglePerson.avi");
if (!capture)
{
    printf("!!! Failed to open video\n\7");
    exit(0);
}
This function usually fails for one of two reasons:
it is unable to access the file (due to wrong filesystem permissions);
codecs are missing on the system (or the video format is not supported by OpenCV).
If you are new to OpenCV, I suggest you test your OpenCV code on a desktop (PC) first.

Android - Include native StageFright features in my own project

I am currently developing an application that needs to record audio, encode it as AAC, stream it, and do the same in reverse: receive a stream, decode AAC and play the audio.
I successfully recorded AAC (wrapped in an MP4 container) using the MediaRecorder, and successfully up-streamed audio using the AudioRecord class. But I need to be able to encode the audio as I stream it, and none of these classes seem to help me do that.
I researched a bit and found that most people who have this problem end up using a native library like ffmpeg.
But I was wondering: since Android already includes Stagefright, which has native code that can do encoding and decoding (for example, AAC encoding and AAC decoding), is there a way to use this native code in my application? How can I do that?
It would be great if I only needed to implement some JNI classes with their native code. Plus, since it is an Android library, there would be no licensing problems whatsoever (correct me if I'm wrong).
Yes, you can use libstagefright; it's very powerful.
Since stagefright is not exposed through the NDK, you will have to do extra work.
There are two ways:
(1) Build your project using the full Android source tree. This way takes a few days to set up; once ready, it's very easy, and you can take full advantage of stagefright.
(2) Copy the include files into your project; they are inside this folder:
android-4.0.4_r1.1/frameworks/base/include/media/stagefright
Then you will have to export the library functions by dynamically loading libstagefright.so, and you can link it with your JNI project.
Encoding/decoding using stagefright is very straightforward; a few hundred lines will do.
I used stagefright to capture screenshots to create a video, which will be available in our Android VNC server, to be released soon.
The following is a snippet. I think it's better than using ffmpeg to encode a movie. You can add an audio source as well.
class ImageSource : public MediaSource {
public:
    ImageSource(int width, int height, int colorFormat)
        : mWidth(width),
          mHeight(height),
          mColorFormat(colorFormat)
    {
    }

    virtual status_t read(
            MediaBuffer **buffer, const MediaSource::ReadOptions *options) {
        // here you can fill the buffer with your pixels
    }

    ...
};

int width = 720;
int height = 480;
sp<MediaSource> img_source = new ImageSource(width, height, colorFormat);

sp<MetaData> enc_meta = new MetaData;
// enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_H263);
// enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_MPEG4);
enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
enc_meta->setInt32(kKeyWidth, width);
enc_meta->setInt32(kKeyHeight, height);
enc_meta->setInt32(kKeySampleRate, kFramerate);
enc_meta->setInt32(kKeyBitRate, kVideoBitRate);
enc_meta->setInt32(kKeyStride, width);
enc_meta->setInt32(kKeySliceHeight, height);
enc_meta->setInt32(kKeyIFramesInterval, kIFramesIntervalSec);
enc_meta->setInt32(kKeyColorFormat, colorFormat);

sp<MediaSource> encoder =
    OMXCodec::Create(client.interface(), enc_meta, true, img_source);

sp<MPEG4Writer> writer = new MPEG4Writer("/sdcard/screenshot.mp4");
writer->addSource(encoder);

// you can add an audio source here if you want to encode audio as well
//
// sp<MediaSource> audioEncoder =
//     OMXCodec::Create(client.interface(), encMetaAudio, true, audioSource);
// writer->addSource(audioEncoder);

writer->setMaxFileDuration(kDurationUs);
CHECK_EQ(OK, writer->start());
while (!writer->reachedEOS()) {
    fprintf(stderr, ".");
    usleep(100000);
}
status_t err = writer->stop();
