I'm working on an image processing project for the Parrot AR.Drone using OpenCV4Android, and I'm quite new to the whole thing.
Does anyone have an idea how to read the video stream from the AR.Drone with OpenCV? The samples only show how to get video input from a webcam.
The video is encoded in H.264, and the drone adds a proprietary header (called PaVE) to every video frame; apparently that's why Android fails to load the video stream.
Thanks.
You need a PaVE parser that will strip the PaVE headers off the H.264 frames before you can decode them and feed them to OpenCV.
There are some PaVE parsers around; maybe you can use one as-is, or adapt it for your use (a rough sketch of the idea follows the list below).
The official AR.Drone SDK (downloadable here: https://projects.ardrone.org/) includes C code for decoding PaVE; see the video_com_stage.c, video_stage_tcp.c, video_stage_decoder.c and video_stage_ffmpeg_decoder.c files in its ARDroneLib/Soft/lib/ardrone_tool/Video folder.
JavaScript (part of the node-ar-drone project): https://github.com/felixge/node-ar-drone/blob/master/lib/video/PaVEParser.js
C GStreamer module: https://projects.ardrone.org/boards/1/topics/show/4282
ROS driver (ardrone_autonomy, for Willow Garage's ROS framework): https://github.com/AutonomyLab/ardrone_autonomy
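A minimal Java-side sketch of the idea (not production code) might look like the following. The field offsets follow the PaVE layout in the SDK's video_encapsulation.h (verify them against your SDK version), and the drone's default video address/port (192.168.1.1, TCP 5555) are assumed:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

// Reads the drone's TCP video stream, drops each PaVE header and returns the
// raw H.264 payload that follows it, which a decoder (or OpenCV) can consume.
public class PaveStripper {

    private final DataInputStream in;

    public PaveStripper(InputStream stream) {
        this.in = new DataInputStream(stream);
    }

    // Returns the next H.264 frame payload; throws EOFException at end of stream.
    public byte[] nextFrame() throws IOException {
        byte[] fixed = new byte[12];                 // signature .. payload_size
        in.readFully(fixed);
        if (fixed[0] != 'P' || fixed[1] != 'a' || fixed[2] != 'V' || fixed[3] != 'E') {
            throw new IOException("Lost sync: PaVE signature not found");
        }
        int headerSize = (fixed[6] & 0xFF) | ((fixed[7] & 0xFF) << 8);              // uint16 LE
        int payloadSize = (fixed[8] & 0xFF) | ((fixed[9] & 0xFF) << 8)
                | ((fixed[10] & 0xFF) << 16) | ((fixed[11] & 0xFF) << 24);          // uint32 LE
        in.readFully(new byte[headerSize - fixed.length]);   // discard the rest of the PaVE header
        byte[] payload = new byte[payloadSize];
        in.readFully(payload);                               // one encoded H.264 frame
        return payload;
    }

    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("192.168.1.1", 5555);     // assumed default drone address/port
        PaveStripper stripper = new PaveStripper(socket.getInputStream());
        byte[] frame = stripper.nextFrame();
        System.out.println("Got an H.264 frame of " + frame.length + " bytes");
    }
}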
I have been banging my head against the wall on this for a while now. I need to trim part of an MP4 file using ffmpeg on Android. I have ffmpeg compiled and linked, and I can confirm that it works as expected in my NDK calls. What I am running into is getting an MP4 file to ffmpeg over a protocol it can actually work with.
Right now, I have a Uri for the file, from which I get a ParcelFileDescriptor. I pass its file descriptor to an NDK call that does all the ffmpeg processing using the pipe protocol. Issue: ffmpeg cannot read an MP4 over the pipe protocol, because it needs to seek back to the start once it finds the moov atom.
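Roughly, the setup looks like this on the Java side (the JNI library name and the native method here are placeholders for illustration, not an existing API):

import android.content.Context;
import android.net.Uri;
import android.os.ParcelFileDescriptor;
import java.io.IOException;

public class RemuxHelper {
    static {
        System.loadLibrary("ffmpegjni");   // placeholder name for the JNI wrapper around ffmpeg
    }

    // Native side opens the descriptor via the pipe protocol ("pipe:<fd>") or a custom AVIOContext.
    private static native int remuxFromFd(int fd, String outputPath);

    public static int remux(Context context, Uri videoUri, String outputPath) throws IOException {
        ParcelFileDescriptor pfd = context.getContentResolver().openFileDescriptor(videoUri, "r");
        try {
            // getFd() gives the raw int descriptor; for content:// Uris this behaves
            // like a stream and cannot be seeked, which is the limitation described here.
            return remuxFromFd(pfd.getFd(), outputPath);
        } finally {
            pfd.close();
        }
    }
}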
All I'm doing is remuxing the videos. I'm not doing any heavy work or more complicated ffmpeg calls.
Attempted solution: set up custom AVIO callbacks that open the descriptor as a file stream and handle it that way. Issue: the file descriptor from Java is not seekable; it's more of a stream.
Possible solution: preprocess the videos to have the moov atom at the front. Issue: not allowed; the files come from a source I cannot control.
Possible solution: Run one call to find all the file information, then another to actually remux the file. Issue: I don't know what I need to save from the first parse through to make this possible. Just the moov atom? Can I just replace the io object in the first call's inputFormatContext with a new one in the second call? Can I pass it two distinct file descriptors, both to the same file, and not have to make two ndk calls?
Any help or ideas you can offer are greatly appreciated.
My application deals with video sharing. I need to reduce the size of the videos (compress them) so that I can minimize the data uploaded/downloaded. I went through many threads, and most of them suggest FFmpeg. I was able to integrate it with my application and it works exactly how I want it to. But now I've come to know that it's a commercial library, and I only have a 15-day trial period. :(
Any other alternatives? Free/open-source libraries that satisfy my requirements?
FFmpeg itself is open source under the LGPL: https://www.ffmpeg.org/legal.html
However, it includes tons of different codecs, and some of the external libraries it can link against might not be open source.
If you are on Android, it already has the MediaCodec class, which can handle encoding and decoding video. If you search for that, you will find what you need. Very similar topic: Video compression on Android using the new MediaCodec library
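To give a rough idea of what that involves, here is a minimal sketch of configuring a MediaCodec H.264 encoder with a reduced target bitrate, which is the core of compressing video this way. The resolution and bitrate values are only examples, and feeding decoded frames in and writing the output container (e.g. with MediaMuxer) are omitted:

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

public class EncoderSetup {
    // Creates an H.264 encoder targeting ~1 Mbit/s at 640x480; lowering the
    // bitrate and/or resolution here is what shrinks the output file.
    public static MediaCodec createEncoder() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 1000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    }
}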
I am trying to build a video recording system on Android 4.2.2. I've done the encoding part, which uses OMX. Now I am working on the muxer part; since the video bitstream can come out a little different if I use FFmpeg, I want to use the exact same muxer as the original system.
So I want to extract the muxer part of StagefrightRecorder, compile it into a .so file, and then call it via JNI from my application. But there is a lot of stuff in StagefrightRecorder, and I am confused.
Can this approach work? Can I just extract the code relevant to MPEG4Writer? Can anyone give me any pointers?
Thanks!
If you are compiling within the context of the framework, you could simply include the relevant header files and create the MPEG4Writer object directly. A very good example of this is the command-line utility recordVideo, as can be observed in this file.
If you wish to write a separate application, then you need to link against libstagefright.so and include the relevant header files and their paths.
Note: if you wish to work with the standard MPEG4Writer, its source (i.e. the source feeding the MPEG4Writer, which would be an encoder) should be modeled as a MediaSource. The writer pulls the metadata and the actual bitstream through the read method, so it is recommended to employ a standard built-in object such as OMXCodec or ACodec for the encoder.
I want to create a video using FFmpeg by taking byte[] data from the Android Camera. The problem is that I don't have much knowledge about FFmpeg, so I need some documentation on it. I would appreciate it if anyone could provide a useful tutorial / sample code / example of FFmpeg: how it works and how it can be used to create video programmatically. Thanks.
FFmpeg is a C library, so you will have to use the NDK to build it and bridge it to your Android app with a JNI interface. As far as I know, it isn't possible to record video directly using FFmpeg. However, you can use OpenCV to capture the video stream and then decode/encode it with FFmpeg if you decide to go this route. Once again, all of this must be done in C/C++, and the result can be passed back to the Java side via JNI (using the NDK) once you have finished processing it with FFmpeg.
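To make the JNI bridge concrete, here is a rough Java-side sketch: preview frames arrive as byte[] (NV21 by default) in onPreviewFrame and are handed to native functions that would wrap FFmpeg's encoder. The library name and the three native methods are placeholders, not an existing API:

import android.hardware.Camera;

public class CameraToFfmpeg implements Camera.PreviewCallback {
    static {
        System.loadLibrary("ffmpegbridge");   // placeholder name for the NDK library
    }

    private native void nativeInitEncoder(String outputPath, int width, int height);
    private native void nativeEncodeFrame(byte[] nv21Frame);
    private native void nativeFinishEncoder();

    public void start(Camera camera, String outputPath) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        nativeInitEncoder(outputPath, size.width, size.height);
        camera.setPreviewCallback(this);
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        nativeEncodeFrame(data);   // native side would convert NV21 -> YUV420P and feed FFmpeg
    }

    public void stop(Camera camera) {
        camera.setPreviewCallback(null);
        nativeFinishEncoder();
    }
}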
Here is the link to the OpenCV library for Android:
http://opencv.org/downloads.html
Once downloaded, there are sample projects that show you how to record video using the native Android camera as well as using OpenCV features.
I am working on an Android application which is supposed to play videos over HTTP on Android devices. Before we set up a server to host the video files, I just wanted a few things clarified:
As per the developer documentation, Android supports the .mp4 and .3gp container formats for video. If we use H.263 (video) and AAC LC (audio) as the codecs for our media files, will we be able to play the video by passing the URL to the MediaPlayer class?
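(By passing the URL I mean nothing more than the following minimal sketch; the URL is a placeholder.)

import android.media.MediaPlayer;
import java.io.IOException;

public class StreamPlayer {
    // Starts asynchronous playback of a progressive-download video over HTTP.
    public static MediaPlayer play(String url) throws IOException {
        MediaPlayer player = new MediaPlayer();
        player.setDataSource(url);      // e.g. "http://example.com/video.mp4"
        player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            @Override
            public void onPrepared(MediaPlayer mp) {
                mp.start();
            }
        });
        player.prepareAsync();          // don't block the UI thread on network I/O
        return player;
    }
}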
I did a little experiment and passed the URL of one of the video files (.mp4) to the MediaPlayer class, and got the following error:
Command PLAYER_INIT completed with an error or info PVMFErrContentInvalidForProgressivePlayback
From the docs, I came to know that for progressive playback, the video's index (e.g. the moov atom) should be at the start of the file.
Questions:
1. How do we make our videos Android-ready?
2. What are the different considerations that we need to make?
Please help.
Thanks.
You can actually achieve this with JCodec (http://jcodec.org), a pure Java implementation of the ISO BMFF (MP4) container format. For this, use the following code:
// Parse the movie header (moov) of the original file...
MovieBox movie = MP4Util.createRefMovie(new File("bad.mp4"));
// ...and write a "flattened" copy whose moov box is placed before the media data.
new Flattern().flattern(movie, new File("good.mp4"));
The side effect of 'Flattern' is to create a web-optimized movie file that has its header BEFORE the data.
You can also use similar functionality from command line:
java -cp jcodec-0.1.3-uberjar.jar org.jcodec.movtool.WebOptimize <movie>
The JCodec library can be downloaded from the project website.
I cross-posted this question to the android-developers Google group. Mark answered it there. Thanks, Mark!
See this thread.