In my case I wanted to compile ffmpeg for Android using the ffmpeg-android git repo. To understand my problem you should know the basics of building ffmpeg.
I only work with audio files, but this question may give you starting points for other kinds of problems.
I modified the ./configure command of FFmpeg to avoid GPL components and, importantly, added --disable-everything.
This disables everything, and I needed to overlay two audio files, so I ran into some problems. Because I had to search for several days to find the many solutions I needed to combine, I started this Q&A.
I added to ./configure:
--enable-filter=amix,aresample
Rule 1
Inform yourself about the filters you really need. In my case I needed aresample as well, for writing the output file.
Rule 2
Be sure what formats you want to use.
In my case I wanted to process and/or output mp3 and aac files.
Inform yourself by searching the internet for what you need.
Good search terms: ffmpeg {aac, mp3, etc.} encoder / decoder / muxer / demuxer
For aac I needed --enable-muxer=adts, --enable-demuxer=adts,aac and --enable-bsf=adtstoasc.
Rule 3
Enable the protocols (URI schemes) of your input files. If your inputs come from /media/audios/myAudio.mp3 you have to add the file protocol: --enable-protocol=file
Rule 4
Additional libraries:
For mp3 files use --enable-libmp3lame
Search the internet for more information.
Rule 5
If you combine a low-quality audio file with a good-quality one, the output may turn out as bad as the low-quality file. You should specify the bitrate explicitly when calling ffmpeg. Search the internet for more information.
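For example, a hedged sketch of what such a call could look like (file names and the 192k bitrate are placeholders; amix mixes the two inputs and -b:a forces the output bitrate so the mix is not encoded at the lower input's rate):

```sh
ffmpeg -i good.mp3 -i bad.mp3 -filter_complex amix=inputs=2 -b:a 192k mixed.mp3
```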
Rule 6
Search the internet. You may not find the solution within seconds, but if you keep going you will often find it. Inspect the compilation console output as well and check that every decoder and encoder you wanted is really there; FFmpeg prints this information. Maybe you used a file-format name that differs from the encoder's name, or the encoder has a different name than the decoder.
Rule 7
You can get a list of all supported encoders, decoders, muxers, and so on by calling
./configure --list-{what you want to list, in plural}
Example: ./configure --list-demuxers
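Putting the rules together, a minimal, hypothetical flag set for my mp3/aac amix use case might be assembled like this. The component names below are assumptions taken from the rules above; verify each one with ./configure --list-..., and note that a real Android build additionally needs cross-compile flags (--target-os, --cross-prefix, etc.), which are omitted here:

```shell
# Build up the flag list rule by rule (component names are
# assumptions -- check them with ./configure --list-...).
FLAGS="--disable-everything"
FLAGS="$FLAGS --enable-filter=amix,aresample"                  # Rule 1: filters
FLAGS="$FLAGS --enable-decoder=mp3,aac"                        # Rule 2: formats
FLAGS="$FLAGS --enable-encoder=libmp3lame,aac"
FLAGS="$FLAGS --enable-demuxer=mp3,aac --enable-muxer=mp3,adts"
FLAGS="$FLAGS --enable-bsf=adtstoasc"
FLAGS="$FLAGS --enable-protocol=file"                          # Rule 3: file inputs
FLAGS="$FLAGS --enable-libmp3lame"                             # Rule 4: extra libs
echo ./configure $FLAGS
```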
Related
First, I'm new to this importing thing.
I've compiled the latest version of ffmpeg with NDK r14b,
and managed to get some libraries after compiling (like libavutil.so, libavcodec.so, libavfilter.so, etc.),
but I'm a bit confused about importing them into my Android project.
I want to make an audio editing app that can merge several mp3 files at specific times and adjust the volume (increase or decrease the sound) of an mp3 file.
I've read about ffmpeg recently, since most people recommend this library.
For merging :
-https://superuser.com/questions/1092291/merge-many-audio-files-with-specific-positions/1092346#1092346
For adjusting volume:
-Decode MP3, then increase the audio volume, and then encode the new audio
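The decode–adjust–encode round trip described in that link is what ffmpeg's volume audio filter does in a single call. A hedged sketch (file names and the factors are placeholders):

```sh
ffmpeg -i input.mp3 -filter:a "volume=1.5" louder.mp3
ffmpeg -i input.mp3 -filter:a "volume=0.5" quieter.mp3
```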
What I want to ask is:
-From guardian's ffmpeg project and the WritingMinds ffmpeg lib (can't post the links since I don't have enough rep, and these 2 are often mentioned on Stack Overflow): do these 2 libraries have what I need? I'm a bit confused about this. An explanation would help.
-From http://ffmpeg-android.blogspot.co.id/ I can't run the ndk-build command at the last part. How should I use it?
Thank you in advance!
I have been banging my head against the wall on this for a while now. I need to trim part of an mp4 file, using ffmpeg on android. I have ffmpeg compiled, linked, and I can confirm that it works as expected in my ndk calls. What I am running into is getting an mp4 file to ffmpeg in a protocol it can actually work with.
Right now, I have a Uri for the file that I get a ParcelFileDescriptor from. I pass its file descriptor to an ndk call that does all the ffmpeg processing using the pipe protocol. Issue: ffmpeg cannot read an mp4 over the pipe protocol, because it needs to seek back to the start once it finds the moov atom.
All I'm doing is remuxing the videos; I'm not doing any heavy work or more complicated ffmpeg calls.
Attempted solution: set up custom AVIO callbacks that open the descriptor as a file stream and handle it that way. Issue: the file descriptor from Java is not seekable; it's more of a stream.
Possible solution: Preprocess videos to have the moov atom at the front. Issue: Not allowed, the files come from somewhere I cannot control.
Possible solution: Run one call to find all the file information, then another to actually remux the file. Issue: I don't know what I need to save from the first parse through to make this possible. Just the moov atom? Can I just replace the io object in the first call's inputFormatContext with a new one in the second call? Can I pass it two distinct file descriptors, both to the same file, and not have to make two ndk calls?
Any help or ideas you can offer is greatly appreciated.
I want to decode the PCM data from an MP3 file using MediaExtractor and MediaCodec. Can anyone post some samples?
There are CTS tests that do this, for a number of different file types. I'd look at some of the older versions of DecoderTest (e.g. https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/DecoderTest.java) because the later versions gained a lot of complexity that improved test coverage but made it harder to understand.
Requirement
On Android: open a .wav file from the SD card, play it, add some effect (like echo, pitch shift, etc.), and save the file with the effect. Simple :(
What I know
I can open and play a file using SoundPool or MediaPlayer.
I can apply some effects while playing with both: with MediaPlayer I can set an Environmental Reverb effect, and with SoundPool I can set the playing rate, which is kind of like pitch shift. I have successfully implemented these.
But neither of these classes has any method to save the played file. So I can only play; I cannot save the music with the effect.
What I want to know
Are there any other classes of interest, other than MediaPlayer or SoundPool? Never mind about saving; just mention the class and I will do the research on saving files with it.
Are there any 3rd-party libraries where I can add effects and save? Happy if it is open source and free, but mention them even if they are proprietary.
Any other areas I can look into? Does OpenAL support voice filtering along with voice positioning? Will it work on Android?
I'm ready to do the dirty work. Please just show me the path.
EDIT: Did some more searching and came across AudioTrack. But it also doesn't support saving to a file, so no luck there either.
EDIT: OK, what if I do it myself? Get the raw bytes from a wav file and work on those. I recorded a wav file using AudioRecord and got a wav file. Is there any resource describing low-level audio processing (I mean at the byte level)?
EDIT: Well, bounty time is up, and I am giving the bounty to the only answer I got. After 7 days, what I understood is:
We can't save what we play using MediaPlayer, AudioTrack, etc.
There are no ready-made audio processing libraries available to use.
You can get raw wav data and do the audio processing yourself. The answer gave a good wrapper class for reading/writing wav files. A good piece of Java code to read and change the pitch of wav files is here.
The WavFile class at http://www.labbookpages.co.uk/audio/javaWavFiles.html claims to read and write wav files and allows per-sample manipulation through arrays of sample values. It's certainly reasonably small, 23 KB of source code in total.
I did struggle for a while to build an Android app with the WavFile class included. This turned out to be because both WavFile and ReadExample (from the above link) were intended as standalone Java programs, so each includes a main(String[] args) method. Eclipse sees this and thinks the class is a standalone runnable program, so when I click the run button it tries to execute just that one class with the Java on the development machine instead of launching the whole app to my phone. When I take care to run the whole app with the little drop-down menu on the run button, I don't have any trouble: the WavFile class and examples drop straight in, give zero warnings in the IDE, and work as advertised on my phone.
We need an Android app that can encode a folder of images to a video. I have been looking for solutions a while now, but cannot find anything good. The Android API does not support it. We are trying ffmpeg, but cannot get it to work. We need a working solution, using ffmpeg is not mandatory. A full Android Java solution is also a possibility, since this would work on all Android devices, possibly at the cost of some performance.
The app also needs to be able to add an audio track to the movie if the user chooses to do this.
Any help would be appreciated.
Kind regards,
AƤron
From the FFmpeg FAQ entry "How do I encode single pictures into movies?":
First, rename your pictures to follow a numerical sequence. For example, img1.jpg, img2.jpg, img3.jpg,... Then you may run:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
Adding an audio track should just involve adding another input (e.g., -i audio.mp3), but older versions could also require explicit -map options.
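Concretely, a hedged sketch of that audio case (file names are placeholders; -shortest stops encoding when the shorter of the two streams ends):

```sh
ffmpeg -f image2 -i img%d.jpg -i audio.mp3 -shortest /tmp/a.mpg
```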