I have a set of videos stored in a folder on the Android file system.
I would like to read each one frame by frame so that I can perform some OpenCV functions on the frames and then display the result in a Bitmap.
I'm not sure how to do this correctly; any help would be appreciated.
You can take a look at JavaCV.
"JavaCV first provides wrappers to commonly used libraries by researchers in the field of computer vision: OpenCV, FFmpeg, libdc1394, PGR FlyCapture, OpenKinect, videoInput, and ARToolKitPlus"
To read a video frame by frame, you'd do something like the following:
FrameGrabber videoGrabber = new FFmpegFrameGrabber(videoFilePath);
try {
    videoGrabber.setFormat("mp4"); // the container format of your video
    videoGrabber.start();
} catch (com.googlecode.javacv.FrameGrabber.Exception e) {
    Log.e("javacv", "Failed to start grabber: " + e);
    return -1;
}

Frame vFrame = null;
do {
    try {
        vFrame = videoGrabber.grabFrame();
        if (vFrame != null) {
            // do your magic here
        }
    } catch (com.googlecode.javacv.FrameGrabber.Exception e) {
        Log.e("javacv", "video grabFrame failed: " + e);
    }
} while (vFrame != null);

try {
    videoGrabber.stop();
} catch (com.googlecode.javacv.FrameGrabber.Exception e) {
    Log.e("javacv", "failed to stop video grabber", e);
    return -1;
}
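If you also want each grabbed frame as a Bitmap (as asked above), newer JavaCV builds (the org.bytedeco.javacv packages, not the com.googlecode.javacv ones used here) ship an AndroidFrameConverter. A minimal sketch for the "do your magic here" spot, assuming a bytedeco build and a hypothetical imageView:

    // Sketch: turning a grabbed Frame into an Android Bitmap.
    // AndroidFrameConverter comes from org.bytedeco.javacv.
    AndroidFrameConverter converter = new AndroidFrameConverter();
    Bitmap bitmap = converter.convert(vFrame);
    imageView.setImageBitmap(bitmap); // imageView is a hypothetical ImageView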
Hope that helps. Good luck!
I know it's late, but anyone who needs this can still use it:
take @Pawan Kumar's code above and add the read permission to your manifest file: <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
and you will get it working.
I don't know about Android specifically, but generally you would use VideoCapture::open to open your video and then VideoCapture::grab to fetch the next frame. See the OpenCV documentation for more information on this.
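In the Java bindings, that pattern looks roughly like the sketch below; the org.opencv.videoio package and the file path are assumptions on my part:

    // Sketch of the VideoCapture::open / VideoCapture::grab pattern (Java bindings).
    VideoCapture capture = new VideoCapture();
    capture.open("/path/to/video.mp4"); // placeholder path
    if (capture.isOpened()) {
        Mat frame = new Mat();
        while (capture.grab()) {     // advance to the next frame
            capture.retrieve(frame); // decode it into the Mat
            // process the frame here
        }
        capture.release();
    }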
Update:
It seems like camera access is not officially supported for Android at the moment; see this issue on the OpenCV GitHub: https://github.com/opencv/opencv/issues/11952
You can either try the unofficial branch linked in the issue: https://github.com/komakai/opencv/tree/android-ndk-camera
or use another library to read in the frames and then create an OpenCV image from the data buffer, as in this question.
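For that last option, here is a sketch of building a Mat from a raw buffer; the RGBA layout, the dimensions, and the decoder helper are assumptions:

    // Sketch: wrapping a raw RGBA frame from some other decoder in an OpenCV Mat.
    byte[] data = getFrameFromYourDecoder(); // hypothetical source of frame bytes
    Mat mat = new Mat(height, width, CvType.CV_8UC4); // height/width of the frame
    mat.put(0, 0, data); // copies the pixel data into the Mat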
I am trying to compress videos in my project, so I am using SiliCompressor. But when I pass it the destination path, my application hangs and does nothing. It does create a folder in my storage and stores a video file there, but when I try to play it, it gives the error "Failed to play video", and the file is only 24 bytes. Please take a look and tell me what I have done wrong.
Here is my code.
File destinationPath = new File("/storage/emulated/0/DCIM/Camera/myvideo");
destinationPath.mkdir();
File file = new File(destinationPath.getAbsolutePath());
Toast.makeText(Post.this, "folder: " + file, Toast.LENGTH_SHORT).show();

try {
    filePath = SiliCompressor.with(Post.this).compressVideo(videouri, file.toString());
    video.setVideoURI(Uri.parse(filePath));
    Toast.makeText(Post.this, "Completed", Toast.LENGTH_SHORT).show();
} catch (URISyntaxException e) {
    Log.d("EXCEPTION", e.toString());
    Toast.makeText(Post.this, e.getMessage(), Toast.LENGTH_SHORT).show();
    e.printStackTrace();
}
Try running the compression code in an AsyncTask so it doesn't block the UI thread; see the sketch below.
Here you can find demo app code for video compression.
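A minimal AsyncTask sketch, reusing the names from the question above (Post.this, video, videouri); the exact compressVideo signature varies between SiliCompressor versions:

    // Sketch: SiliCompressor moved off the UI thread with an AsyncTask.
    private class VideoCompressTask extends AsyncTask<String, Void, String> {
        @Override
        protected String doInBackground(String... args) {
            try {
                // args[0] = source video, args[1] = destination directory
                return SiliCompressor.with(Post.this).compressVideo(args[0], args[1]);
            } catch (URISyntaxException e) {
                Log.e("Compress", "compression failed", e);
                return null;
            }
        }

        @Override
        protected void onPostExecute(String compressedPath) {
            if (compressedPath != null) {
                video.setVideoURI(Uri.parse(compressedPath));
                Toast.makeText(Post.this, "Completed", Toast.LENGTH_SHORT).show();
            }
        }
    }

You would kick it off with something like new VideoCompressTask().execute(videouri.toString(), file.toString());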
I tried SiliCompressor for video compression and it works very well; in this version, audio and video both work, but the resolution is not maintained.
https://github.com/Tourenathan-G5organisation/SiliCompressor/tree/v2.2.2
Note: in the latest version (2.2.3) of SiliCompressor, audio stops working after video compression.
OR
I made this one, and it works well:
https://github.com/iamkdblue/CompressVideo
I hope it helps you.
I am trying to pass a video to the OpenCV VideoCapture class, but when I call the VideoCapture.isOpened() method it always returns false. I have tried two locations:
saving the video file to internal memory, context.getFilesDir() => /data/data/package_name/files/VideoToAnalyze/Recording.mp4,
and also Environment.getExternalStorageDirectory() => sdcard/appName/Recording.mp4.
Nothing seems to work here. My question is: how do I pass a video file (or what is the correct file path) to an OpenCV VideoCapture object? I've posted some code below as an example. Note that I don't get an error; the file is always found and exists, but when I call isOpened() I always get false.
UPDATE:
It looks like everyone on the web is saying that OpenCV (I'm using 3.1.0) lacks an FFmpeg backend on Android and thus cannot process video files. Does anyone know this for a fact? And is there a workaround? Almost all the alternatives for processing video frame by frame are painfully slow.
String x = getApplicationContext().getFilesDir().getAbsolutePath();
File dir = new File(x + "/VideoToAnalyze");
if (dir.isDirectory()) {
    videoFile = new File(dir.getAbsolutePath() + "/Recording1.mp4");
} else {
    // handle error
}

if (videoFile.exists()) {
    String absPath = videoFile.getAbsolutePath();
    VideoCapture vc = new VideoCapture();
    try {
        vc.open(absPath);
    } catch (Exception e) {
        // handle error
    }
    if (!vc.isOpened()) {
        // this code is always hit
        Log.v("VideoCapture", "failed");
    } else {
        Log.v("VideoCapture", "opened");
        .....
It's an old question, but I ran into the same issue.
OpenCV for Android only supports the MJPEG codec in an AVI container. See this.
Just to close this out: I downloaded JavaCV, included the .so files in the Android project, and then used FFmpegFrameGrabber.
Just got a Project Tango Development Kit tablet and have worked through some of the demos and examples.
Some older blog posts use the log files from a "Tango Mapper" application that should be preloaded on the device.
Interactive Visualization of Google Project Tango Data with ParaView
Ologic Announces integration between ROS and Project Tango
Google Tango and ROS integration at Bosch
Mapping Hints and Tips
Unfortunately, the "Tango Mapper" application did not come preloaded on my device and I can't seem to find it on the Play Store.
Is there some other method to simply export or retrieve the PointCloud data for downstream rendering?
[Model number: yellowstone, Tango Core Version: 1.1:2014.11.14-bernoulli-release]
Not sure if you ever solved this, but I was able to find the APK, along with a method to export using an updated Tango tablet version. I successfully exported the point cloud data using the method described in this blog:
http://www.kitware.com/blog/home/post/838
Edit
Procedure: download the APK, or use the source code found in the GitHub project folder.
Once that is done, boot up the app as you normally would. There will be two sliders: Record and Auto. If you enable Record, the app waits until you hit the snapshot button to record the point cloud data you are currently viewing.
If you enable Auto, it continuously records the point cloud data and creates files as it tracks your movement. Keep in mind: the larger the file, the longer it takes to save as a zip.
Once done, slide Record off and it will prompt you to save and send.
I find it easiest to save to Google Drive, as the other methods sometimes fail to send.
Then download the free ParaView app from http://www.paraview.org/download/ and load up your point cloud data.
It should be two files: one with your pose data and the other with the point cloud. (You can load each dataset individually using the collapse arrow you see before importing.)
That's it: you will be able to see your data and even play back an animation of your recording session, thanks to the pose data collected.
(I only wrote this out because you were looking for an easier way to export data; this is probably the easiest. You could also take said data and begin to reconstruct the room based on the pose data collected.)
All credit for the source code and tutorial goes to the Kitware blog.
If the links are broken, DM me and I will send the file to you.
The APK is found here:
APK DOWNLOAD
They also list their source code at the bottom of the blog. It is based on the Tango Explorer found in the app store.
Tango Mapper is an internal tool and is currently not public to developers. I think the best way to log the point cloud data is to use the C or Java example code provided, and make some small modifications to log the data to a file.
c example: https://github.com/googlesamples/tango-examples-c
java example: https://github.com/googlesamples/tango-examples-java
Sparse mapping: https://www.youtube.com/watch?v=x5C_HNnW_3Q
More indoor mapping: https://www.youtube.com/watch?v=3BNOsxMZD14
It appears that more than a few contributors to the Tango project were hired or bought by Google. For example, most of the links to code and/or articles by Hidof are MIA; only a Facebook page with few clues remains. The Internet Archive's Wayback Machine has a few snapshots of their website for the curious.
Go take a look at the Java Point Cloud sample on GitHub. The function you want to look at is onXyzIjAvailable in PointCloudActivity. Extracting a few relevant lines:
public void onXyzIjAvailable(final TangoXyzIjData xyzIj) {
    ....
    // each point is three 4-byte floats: x, y, z
    byte[] buffer = new byte[xyzIj.xyzCount * 3 * 4];
    FileInputStream fileStream = new FileInputStream(
            xyzIj.xyzParcelFileDescriptor.getFileDescriptor());
    try {
        fileStream.read(buffer,
                xyzIj.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
At this point buffer contains the point cloud data. I would strongly recommend you ship this off the device via a binary service call, as making the poor thing convert it to JSON or XML would be slower than you would like.
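If you do need the individual points on-device, the buffer is just xyzCount packed (x, y, z) float triples; a minimal sketch, where the byte order is an assumption you should verify:

    // Sketch: decoding the buffer as xyzCount packed (x, y, z) float triples.
    // LITTLE_ENDIAN is an assumption about the Tango buffer; verify on-device.
    FloatBuffer points = ByteBuffer.wrap(buffer)
            .order(ByteOrder.LITTLE_ENDIAN)
            .asFloatBuffer();
    for (int i = 0; i < xyzIj.xyzCount; i++) {
        float x = points.get(3 * i);
        float y = points.get(3 * i + 1);
        float z = points.get(3 * i + 2);
        // accumulate or stream (x, y, z) for export
    }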
Thank you Mark for your advice. I am a novice programmer and it is my first time working with Java...
I am interested in exporting the Tango-acquired point cloud data to a file and I would like to ask for your feedback on my approach (I created a Save button; on click, the data should be saved to a file on external storage). Please find below the code for the part that should save the xyzIj data:
@Override
public void onClick(View v) {
    switch (v.getId()) {
        ...
        case R.id.save_button:
            // xyzIj and "TangoData" are placeholders: pass in the latest
            // TangoXyzIjData delivered by onXyzIjAvailable and your directory name
            savePointCloud(xyzIj, "TangoData");
            break;
        default:
            Log.w(TAG, "Unrecognized button click.");
    }
}

private void savePointCloud(final TangoXyzIjData xyzIj, String dirName) {
    File directory = getAlbumStorageDir(dirName);
    byte[] buffer = new byte[xyzIj.xyzCount * 3 * 4];
    try {
        FileOutputStream out = new FileOutputStream(new File(directory, "text.txt"));
        FileInputStream fileStream = new FileInputStream(
                xyzIj.xyzParcelFileDescriptor.getFileDescriptor());
        int read;
        while ((read = fileStream.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        out.close();
        fileStream.close();
        System.out.println("Printed to file");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
public File getAlbumStorageDir(String dirName) {
    if (!isExternalStorageWritable()) {
        return null;
    } else {
        // Get the directory inside the user's public downloads directory.
        File file = new File(Environment.getExternalStoragePublicDirectory(
                Environment.DIRECTORY_DOWNLOADS), dirName);
        // mkdirs() returns false when the directory already exists,
        // so only treat it as an error when the directory is also missing
        if (!file.mkdirs() && !file.exists()) {
            Log.e(TAG, "Directory not created");
            return null;
        }
        return file;
    }
}
public boolean isExternalStorageWritable() {
    String state = Environment.getExternalStorageState();
    // writable only when mounted read/write (MEDIA_MOUNTED_READ_ONLY is not enough)
    if (Environment.MEDIA_MOUNTED.equals(state)) {
        return true;
    } else {
        Log.e(TAG, "External storage is not mounted READ/WRITE.");
        return false;
    }
}
I have read many articles and posts about creating a video from a sequence of images. They all recommend using ffmpeg, but that is pretty complicated. Is there a simple way to do this without ffmpeg? I need the resulting video to be playable by a regular video player on the device.
Not sure what you mean by complicated. If you are not very comfortable with the native layer, then you might use JavaCV. It provides Java wrappers for ffmpeg, among other open-source libraries, and works very well.
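For example, something along these lines with FFmpegFrameRecorder; a sketch only, since package names and converter classes differ between JavaCV releases, and the output path, size, and frame rate here are placeholders:

    // Sketch: images -> MP4 with JavaCV's FFmpegFrameRecorder (org.bytedeco.javacv assumed).
    File out = new File(Environment.getExternalStorageDirectory(), "images.mp4");
    FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(out, 640, 480);
    recorder.setFormat("mp4");
    recorder.setFrameRate(25);
    recorder.start();
    AndroidFrameConverter converter = new AndroidFrameConverter();
    for (File image : imageDir.listFiles()) { // imageDir is your folder of frames
        Bitmap bitmap = BitmapFactory.decodeFile(image.getAbsolutePath());
        recorder.record(converter.convert(bitmap));
    }
    recorder.stop();
    recorder.release();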
Possibly you want to make use of the Movie class. The reference is here:
http://developer.android.com/reference/android/graphics/Movie.html
And, a sample example is here:
https://code.google.com/p/animated-gifs-in-android/
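A minimal sketch of Movie in use, assuming a GIF resource (R.raw.animation is a placeholder); note that Movie mainly decodes animated GIFs and needs software rendering:

    // Sketch: playing an animated GIF with android.graphics.Movie in a custom View.
    public class MovieView extends View {
        private final Movie movie;
        private final long start = SystemClock.uptimeMillis();

        public MovieView(Context context) {
            super(context);
            // R.raw.animation is a placeholder resource
            movie = Movie.decodeStream(getResources().openRawResource(R.raw.animation));
            setLayerType(LAYER_TYPE_SOFTWARE, null); // Movie needs software rendering
        }

        @Override
        protected void onDraw(Canvas canvas) {
            int duration = Math.max(movie.duration(), 1); // guard against 0-length movies
            movie.setTime((int) ((SystemClock.uptimeMillis() - start) % duration));
            movie.draw(canvas, 0, 0);
            invalidate(); // schedule the next frame
        }
    }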
You can use the JCodec library; it now supports Android too.
You need to download the library and add it to your project.
Here is an example of using it:
SequenceEncoder se = null;
try {
    se = new SequenceEncoder(new File(Environment.getExternalStorageDirectory(),
            "jcodec_enc.mp4"));
    File[] files = yourDirectory.listFiles();
    for (int i = 0; i < files.length; i++) {
        if (!files[i].exists())
            break;
        Bitmap frame = BitmapFactory.decodeFile(files[i].getAbsolutePath());
        se.encodeImage(frame);
    }
    se.finish();
} catch (IOException e) {
    Log.e(TAG, "IO", e);
}
I'm trying to send H.264/AAC video from Android's MediaRecorder through a local socket. The goal is to send the video to a Wowza server through RTMP or RTSP, but it's giving me a lot of trouble, so for now I'm just trying to write the data to a file from the LocalServerSocket.
Here is some code. Sorry it's not really clean, but I spent hours testing many things and my project is a mess right now.
In the Camera activity, the output file setup:
LocalSocket outSocket = new LocalSocket();
try {
    outSocket.connect(new LocalSocketAddress(LOCAL_SOCKET));
} catch (Exception e) {
    Log.i(LOG_TAG, "Error connecting socket: " + e);
}
mMediaRecorder.setOutputFile(outSocket.getFileDescriptor());
The LocalServerSocket implementation:
try {
    mLocalServerSocket = new LocalServerSocket(mName);
} catch (Exception e) {
    Log.e(LOG_TAG, "Error creating server socket: " + e);
    return;
}

while (true) {
    File out = null;
    FileOutputStream fop = null;
    try {
        mLocalClientSocket = mLocalServerSocket.accept();
        InputStream in = mLocalClientSocket.getInputStream();
        out = new File(mContext.getExternalFilesDir(null), "testfile.mp4");
        fop = new FileOutputStream(out);
        int len = 0;
        byte[] buffer = new byte[1024];
        while ((len = in.read(buffer)) >= 0) {
            Log.i(LOG_TAG, "Writing " + len + " bytes");
            fop.write(buffer, 0, len);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            fop.close();
            mLocalClientSocket.close();
        } catch (Exception e2) {}
    }
}
The problem is that the resulting file is not readable by any media player. Do you think this is because of an encoding issue? This code should generate a valid binary file, if I understand correctly?!
Thanks in advance, cheers.
OK, I've found why the files couldn't play. MP4 and 3GPP files contain a header with the bytes:
ftyp3gp4 3gp43gp6 wide mdat
in HEX
0000001866747970336770340000030033677034336770360000000877696465000392D86D6461740000
The 4 bytes before the 'mdat' tag represent the position of another tag, 'moov', situated at the end of the file. That position is usually written once the recording is over, but since MediaRecorder can't seek on a socket, it can't set these bytes to the correct value in our case.
My problem now is to find a way to make such a file streamable, since it has to be played before the recording is over.
You could try using MP4Box to restructure your file. The moov box holds the indexes for each audio and video sample; if it sits at the end of the file, streaming is difficult.
This might help:
http://boliston.wordpress.com/tag/moov-box/
Or this:
mp4box -inter 0.5 some_file.mp4
(I don't have a chance to try it at the moment.)
If you need this to work inside your app, I am not aware of any efforts to port MP4Box to Android.
I tried to do the same today, but MP4 is not easy to stream (as said before, some parts are written at the end of the file). I won't say it's impossible, but it seems at least quite hard.
So a workaround for newer Android APIs (4.3+) could be this one:
Set the camera preview to a SurfaceTexture: camera.setPreviewTexture.
Record this texture using OpenGL and MediaCodec + MediaMuxer.
The drawback of this solution is that the preview size of a camera might be smaller than the video size. This means that, depending on your device, you can't record at the highest resolution. Hint: some cameras claim not to support higher preview sizes but actually do, so you can try to configure the camera to set the preview size to the video size, as sketched below. If you do so, catch the RuntimeException from camera.setParameters and, if it fails, fall back to the supported preview sizes.
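A minimal sketch of that hint, assuming the old android.hardware.Camera API and hypothetical videoWidth/videoHeight values:

    Camera.Parameters params = camera.getParameters();
    params.setPreviewSize(videoWidth, videoHeight); // videoWidth/videoHeight are assumed
    try {
        camera.setParameters(params);
    } catch (RuntimeException e) {
        // the camera really doesn't support it; fall back to a supported size
        Camera.Size fallback = params.getSupportedPreviewSizes().get(0);
        params.setPreviewSize(fallback.width, fallback.height);
        camera.setParameters(params);
    }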
Some links on how to record from a SurfaceTexture:
Bigflake: great examples for MediaCodec stuff.
The VideoRecorder class from Lablet.
Also possibly useful: spydroid-ipcamera streams the data from the MediaRecorder socket as RTP streams, but I have found no way to feed that to MediaCodec. (I already got stuck reading the correct NAL unit sizes the way they do...)