I have been banging my head against the wall on this for a while now. I need to trim part of an mp4 file, using ffmpeg on android. I have ffmpeg compiled, linked, and I can confirm that it works as expected in my ndk calls. What I am running into is getting an mp4 file to ffmpeg in a protocol it can actually work with.
Right now, I have a Uri for the file that I get a ParcelFileDescriptor from. I pass its file descriptor to an ndk call that does all the ffmpeg processing using the pipe protocol. Issue: ffmpeg cannot read an mp4 over the pipe protocol, because it needs to seek back to the start once it finds the moov atom.
All I'm doing is remuxing the videos; I'm not doing any heavy processing or more complicated ffmpeg calls.
Attempted solution: Set up custom AVIO callbacks that open the descriptor as a file stream and handle it that way (a sketch of that wiring is below). Issue: the file descriptor from Java is not seekable; it behaves more like a stream.
Possible solution: Preprocess the videos so the moov atom is at the front. Issue: Not possible; the files come from a source I cannot control.
Possible solution: Run one call to gather all the file information, then another to actually remux the file. Issue: I don't know what I need to save from the first pass to make this possible. Just the moov atom? Can I just replace the IO object in the first call's inputFormatContext with a new one in the second call? Can I pass it two distinct file descriptors, both pointing to the same file, and avoid making two ndk calls?
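For reference, the custom AVIO wiring from the attempted solution above looks roughly like this in my native code; as far as I can tell it only helps when the descriptor actually supports lseek(), which is exactly where it falls apart for me (the function names here are my own, everything else is the regular avformat/avio API):

extern "C" {
#include <libavformat/avformat.h>
}
#include <stdint.h>
#include <unistd.h>

// Read callback: pull bytes straight from the descriptor.
static int fd_read(void *opaque, uint8_t *buf, int buf_size) {
    int fd = (int)(intptr_t)opaque;
    ssize_t n = read(fd, buf, buf_size);
    return n > 0 ? (int)n : AVERROR_EOF;
}

// Seek callback: only meaningful if the descriptor is backed by a real file.
static int64_t fd_seek(void *opaque, int64_t offset, int whence) {
    int fd = (int)(intptr_t)opaque;
    if (whence == AVSEEK_SIZE) {                  // avformat asking for the total size
        off_t cur = lseek(fd, 0, SEEK_CUR);
        off_t end = lseek(fd, 0, SEEK_END);
        lseek(fd, cur, SEEK_SET);
        return end;
    }
    return lseek(fd, (off_t)offset, whence & ~AVSEEK_FORCE);
}

// Open an AVFormatContext over the descriptor instead of a URL.
static AVFormatContext *open_input_from_fd(int fd) {
    const int kBufSize = 64 * 1024;
    unsigned char *buf = (unsigned char *)av_malloc(kBufSize);
    AVIOContext *io = avio_alloc_context(buf, kBufSize, 0 /* read-only */,
                                         (void *)(intptr_t)fd, fd_read, NULL, fd_seek);
    AVFormatContext *ctx = avformat_alloc_context();
    ctx->pb = io;
    ctx->flags |= AVFMT_FLAG_CUSTOM_IO;
    if (avformat_open_input(&ctx, "", NULL, NULL) < 0)
        return NULL;                              // this is where it fails when the fd cannot seek
    return ctx;
}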
Any help or ideas you can offer is greatly appreciated.
In my case I wanted to compile ffmpeg for Android using the ffmpeg-android git repo. To understand my problem you should know the basics of building ffmpeg.
I only work with audio files, but this question may give you starting points for other kinds of problems.
I modified FFmpeg's ./configure command to avoid GPL components and, importantly, added --disable-everything.
This disables everything. I needed to overlay two audio files.
But I ran into some problems, and because I had to search for several days to find all the pieces I had to combine, I started this Q&A.
I added to ./configure:
--enable-filter=amix,aresample
Rule 1
You should find out which filters are strictly necessary. In my case I needed aresample as well, for writing the output file.
Rule 2
Be sure which formats you want to use.
In my case I wanted to process and/or output mp3 and aac files.
Search the internet for information on the formats you need.
Good search terms: ffmpeg {aac, mp3, etc.} encoder
ffmpeg {aac, mp3, etc.} decoder
ffmpeg {aac, mp3, etc.} muxer
ffmpeg {aac, mp3, etc.} demuxer
For aac I needed --enable-muxer=adts and --enable-demuxer=adts,aac and --enable-bsf=adtstoasc
Rule 3
Enable the protocols (URI schemes) of your input files. If you want to use inputs such as /media/audios/myAudio.mp3, you have to add the file protocol: --enable-protocol=file
Rule 4
Additional libraries:
For mp3 files use --enable-libmp3lame
Search the internet for more information.
Rule 5
If you combine a low-quality audio file with a high-quality one, the output may end up as bad as the low-quality file. You should specify the bitrate explicitly when calling ffmpeg. Search the internet for more information.
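For example (the file names and the 192k value are just placeholders), something along these lines keeps the output at a fixed bitrate:
ffmpeg -i good.mp3 -i bad.mp3 -filter_complex amix=inputs=2 -b:a 192k output.mp3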
Rule 6
Search the internet. You may not find the solution within seconds, but if you keep at it you will probably find it. Inspect the compilation console output as well and check that every decoder and encoder you wanted is really there; FFmpeg prints this information. Maybe you used the name of a file format whose encoder actually has a different name, or the encoder has a different name than the decoder.
Rule 7
You can get a list of all supported encoders, decoders, muxers, and so on by calling:
./configure --list-{what you want to list, in the plural}
Example: ./configure --list-demuxers
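Putting the rules together, a configure call for an mp3/aac overlay case like mine might look roughly like this (the exact encoder/decoder names here are examples; verify them with the --list-* options from Rule 7):
./configure --disable-everything \
  --enable-filter=amix,aresample \
  --enable-libmp3lame --enable-encoder=libmp3lame \
  --enable-decoder=mp3,aac \
  --enable-muxer=mp3,adts --enable-demuxer=mp3,aac \
  --enable-bsf=adtstoasc \
  --enable-protocol=file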
I am trying to build a video recording system on Android 4.2.2. I've done the encoding part, which uses OMX. Now I am working on the muxer part; since the video bitstream can come out a little different if I use FFmpeg, I wish to use the exact same muxer tool as the original system.
So I want to extract the muxer part of StagefrightRecorder, compile it into a .so file, and then call it via JNI in my application. But there is a lot going on in StagefrightRecorder, and I am confused.
Can this approach work? Can I just extract the code relevant to MPEG4Writer? Can anyone give me any pointers?
Thanks!
If you are compiling within the context of the framework, you can simply include the relevant header files and create the MPEG4Writer object directly. A very good example of this is the command-line utility recordVideo, as can be seen in this file.
If you wish to write a separate application, then you need to link against libstagefright.so and add the relevant header files to your include path.
Note: if you wish to work with the standard MPEG4Writer, its source, i.e. the encoder feeding the writer, should be modeled as a MediaSource. The writer pulls the metadata and the actual bitstream through the read method, so it is recommended to employ a standard built-in object such as OMXCodec or ACodec for the encoder.
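As a rough sketch of the usage (assuming you build against the AOSP headers under frameworks/av/include and link libstagefright.so; the helper function below is illustrative, not part of the framework):

#include <media/stagefright/MPEG4Writer.h>
#include <media/stagefright/MediaSource.h>

using namespace android;

// Illustrative helper: mux whatever the given encoder MediaSource produces
// into an MP4 written to the supplied file descriptor.
status_t writeMp4(int fd, const sp<MediaSource> &encoderSource) {
    sp<MPEG4Writer> writer = new MPEG4Writer(fd);     // fd: writable, seekable output file
    status_t err = writer->addSource(encoderSource);  // the writer will pull frames via read()
    if (err != OK) return err;
    err = writer->start();                            // spawns the writer thread
    if (err != OK) return err;
    // ... keep recording for as long as needed ...
    return writer->stop();                            // finalizes the moov box and the file
}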
I'm working on an image-processing project for the Parrot AR.Drone, using OpenCV4Android. I'm new to the whole thing.
Does anyone have an idea how to read the video stream from the AR.Drone using OpenCV? The samples only show how to get video input from a webcam.
The video is encoded in H.264, and the drone adds a proprietary header (called PaVE) to every video frame; apparently that's why Android fails to load the video stream.
Thanks
You need a PaVE parser that will strip the PaVE headers off the H.264 frames before you can decode them and feed them to OpenCV.
There are some PaVE parsers around. Maybe you can use one as-is, or adapt it for your use.
The official AR.Drone SDK (downloadable here: https://projects.ardrone.org/) includes C code for decoding PaVE; see the video_com_stage.c, video_stage_tcp.c, video_stage_decoder.c and video_stage_ffmpeg_decoder.c files in its ARDroneLib/Soft/lib/ardrone_tool/Video folder
Javascript (part of the node-ar-drone project): https://github.com/felixge/node-ar-drone/blob/master/lib/video/PaVEParser.js
C gstreamer module: https://projects.ardrone.org/boards/1/topics/show/4282
ROS drivers (by Willow Garage, who also created OpenCV): https://github.com/AutonomyLab/ardrone_autonomy
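If you end up writing your own, the stripping loop itself is small. A rough sketch (the field offsets follow the SDK's parrot_video_encapsulation_t struct: a "PaVE" signature, version and codec bytes, then little-endian header_size and payload_size; verify them against your SDK version):

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <vector>

// Read exactly len bytes from a socket/file descriptor.
static bool read_fully(int fd, uint8_t *dst, size_t len) {
    while (len > 0) {
        ssize_t n = read(fd, dst, len);
        if (n <= 0) return false;
        dst += n;
        len -= (size_t)n;
    }
    return true;
}

// Reads one PaVE-framed unit from the drone's video socket and returns the
// raw H.264 payload, which can then be handed to a decoder.
bool next_h264_frame(int sock, std::vector<uint8_t> &frame) {
    uint8_t hdr[12];
    if (!read_fully(sock, hdr, sizeof(hdr))) return false;
    if (memcmp(hdr, "PaVE", 4) != 0) return false;       // lost sync with the stream
    uint16_t header_size  = hdr[6] | (hdr[7] << 8);
    uint32_t payload_size = hdr[8] | (hdr[9] << 8) | (hdr[10] << 16) | ((uint32_t)hdr[11] << 24);
    std::vector<uint8_t> rest(header_size > 12 ? header_size - 12 : 0);
    if (!rest.empty() && !read_fully(sock, rest.data(), rest.size())) return false;
    frame.resize(payload_size);
    return read_fully(sock, frame.data(), frame.size());
}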
Requirement
Android: open a .wav file from the SD card, play it, add some effect (like echo, pitch shift, etc.), and save the file with the effect applied. Simple :(
What I know
I can open and play a file using SoundPool or MediaPlayer.
I can apply some effects while playing with both: with MediaPlayer I can set an environmental reverb effect, and with SoundPool I can set the playback rate, which is kind of like pitch shifting. I have these working right now.
But neither of these classes has any method to save the played file, so I can only play; I cannot save the music with the effect.
What I want to know
Are there any other classes of interest besides MediaPlayer and SoundPool? Never mind about saving; just mention the class and I will do the research on saving files with it.
Are there any third-party libraries where I can add effects and save? I'd be happy if they are open source and free, but mention them even if they are proprietary.
Are there any other areas I could look into? Does OpenAL support voice filtering along with voice positioning? Will it work on Android?
I'm ready to do the dirty work; please just point me down the path.
EDIT: Did some more searching and came across AudioTrack, but it doesn't support saving to a file either, so no luck there.
EDIT: OK, what if I do it myself? Get the raw bytes from a wav file and work on those. I recorded a wav file using AudioRecord. Is there any resource describing low-level audio processing (I mean at the byte level)?
EDIT: Well, bounty time is up, and I am giving the bounty to the only answer I got. After 7 days, what I understood is:
We can't save what we play using MediaPlayer, AudioTrack, etc.
There are no ready-made audio processing libraries available to use.
You can get the raw wav data and do the audio processing yourself. The answer gave a good wrapper class for reading/writing wav files, and good Java code for reading wav files and changing their pitch is here.
The WavFile class http://www.labbookpages.co.uk/audio/javaWavFiles.html claims to read and write wav files and allow per-sample manipulation through arrays of sample values. It's also reasonably small: about 23 kB of source code in total.
I did struggle for a while to build an Android app with the WavFile class included. This turned out to be because both WavFile and ReadExample (from the above link) were intended as standalone Java programs, so they include a main(String[] args) method. Eclipse sees this, thinks the class is a standalone runnable program, and, when I click the run button, tries to execute just that one class with the Java runtime on the development machine instead of launching the whole app to my phone. When I take care to run the whole app via the little drop-down menu on the run button, I don't have any trouble: the WavFile class and examples drop straight in, give zero warnings in the IDE, and work as advertised on my phone.
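Whichever wrapper you use to get at the samples, the effects themselves are plain arithmetic on the sample arrays. As a rough illustration (shown here in C++, but the same loop translates directly once WavFile hands you the sample values), a basic echo is just a delayed, attenuated copy of the signal added back in:

#include <stdint.h>
#include <vector>

// Illustration only: add a simple echo to mono 16-bit PCM samples.
// delaySamples = sampleRate * delaySeconds; decay should be between 0 and 1.
std::vector<int16_t> addEcho(const std::vector<int16_t> &in,
                             size_t delaySamples, float decay) {
    std::vector<int16_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        int32_t s = in[i];
        if (i >= delaySamples)
            s += (int32_t)(out[i - delaySamples] * decay);  // add the delayed, decayed copy
        if (s >  32767) s =  32767;   // clamp to the 16-bit range
        if (s < -32768) s = -32768;   // to avoid wrap-around distortion
        out[i] = (int16_t)s;
    }
    return out;
}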
I am trying to build the FFmpeg library to use in my Android app with the NDK. The reason is that I am using Android's native video capture feature, because I really don't want to write my own video recorder. However, native video capture only allows either high-quality or low-quality encoding; I want something in between, and I believe the solution is to use the FFmpeg library to re-encode the high-quality video into something lighter.
So far I have been able to build the FFmpeg library according to this guide: http://www.roman10.net/how-to-build-ffmpeg-for-android/ and with a few tweaks I have been able to get it to work.
However, everything that I've found seems to be about writing your own encoder, which seems like overkill to me. All I really want to do is send a string in command-line format to FFmpeg's main() function and re-encode my video. However, I can't figure out how to build FFmpeg so that I have access to the main method. I found this post: Compile ffmpeg.c and call its main() via JNI, which links to a project doing more or less what I want, but for the life of me I cannot figure out what is going on. It also seems like he is compiling more than I want, and I would really like to keep my application as lightweight as possible.
Some additional direction would be extremely helpful. Thank you.
With the Android NDK there is no main() in your application in the typical sense, so you cannot do what you want directly. However, you can still call FFmpeg's main() yourself and provide all the necessary parameters to it. Here are two possibilities for getting the parameters:
An Android Activity receives an Intent after creation. You can pass the parameters through the Intent when starting your activity and then extract them like this:
Intent CommandLine = this.getIntent();
Uri uri = CommandLine.getData();
Alternatively, you can read the parameters from a file you create somewhere on the SD card and pass them to FFmpeg.
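On the native side, you then turn those strings into an argv array and hand it to FFmpeg's entry point. A rough sketch (this assumes you compile ffmpeg.c into your shared library and rename its main() to something like ffmpeg_main; the Java class and method names are placeholders):

#include <jni.h>
#include <string>
#include <vector>

// Assumed to be ffmpeg.c's main(), renamed when that file is compiled
// into this shared library (hypothetical name).
extern "C" int ffmpeg_main(int argc, char **argv);

extern "C" JNIEXPORT jint JNICALL
Java_com_example_FFmpegBridge_run(JNIEnv *env, jclass, jobjectArray jargs) {
    int argc = env->GetArrayLength(jargs);
    std::vector<std::string> storage(argc);
    std::vector<char *> argv(argc);
    for (int i = 0; i < argc; ++i) {
        jstring js = (jstring)env->GetObjectArrayElement(jargs, i);
        const char *utf = env->GetStringUTFChars(js, NULL);
        storage[i] = utf;                                  // keep our own copy
        argv[i] = const_cast<char *>(storage[i].c_str());
        env->ReleaseStringUTFChars(js, utf);
        env->DeleteLocalRef(js);
    }
    // e.g. {"ffmpeg", "-i", "/sdcard/in.mp4", "-b:v", "1M", "/sdcard/out.mp4"}
    return ffmpeg_main(argc, argv.data());
}

Bear in mind that ffmpeg's main() calls exit() when it finishes or fails, which would kill the whole app process, so most projects that go this route also patch that part.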