I am developing a karaoke app and I have been dealing with audio on Android. I would like an opinion on how I can save a single song made from a recorded voice and another song playing in the background.
Thanks
You can try the FFmpeg library for Android; consider using an Android wrapper. There are several SO answers that may help:
Audio song mixer in android programmatically
How to overlay two audio files using ffmpeg
FFmpeg on Android
You may want to check the amerge and amix FFmpeg filters, depending on your needs.
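For mixing a vocal take over a backing track into one file, amix is the filter that sums the inputs. As a rough sketch (the file paths are placeholders, and how you hand these arguments to FFmpeg depends on which wrapper or bundled binary you choose):

```java
// Sketch only: builds FFmpeg arguments for mixing two inputs with amix.
// Paths are placeholders; passing the arguments to FFmpeg is left to the
// wrapper you pick.
public final class MixCommand {
    public static String[] build(String voicePath, String backingPath, String outputPath) {
        return new String[] {
            "-i", voicePath,          // recorded vocals
            "-i", backingPath,        // background song
            // amix sums the two inputs into one stream; duration=longest keeps
            // going until the longer input ends. Use amerge instead if you want
            // the inputs kept on separate channels rather than summed.
            "-filter_complex", "amix=inputs=2:duration=longest",
            "-c:a", "aac",
            outputPath
        };
    }
}
```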
SoundPool is a good option if your requirements can be met simply by playing two audio streams together.
SoundPool can also manage the number of audio streams being rendered
at once. When the SoundPool object is constructed, the maxStreams
parameter sets the maximum number of streams that can be played at a
time from this single SoundPool.
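As a minimal sketch of that constructor parameter (the stream count and audio attributes below are illustrative, not prescriptive):

```java
import android.media.AudioAttributes;
import android.media.SoundPool;

// Minimal sketch: build a SoundPool that allows at most two simultaneous
// streams (e.g. a backing track plus a voice sample). Values are illustrative.
public final class SoundPoolFactory {
    public static SoundPool createTwoStreamPool() {
        AudioAttributes attrs = new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build();
        return new SoundPool.Builder()
                .setMaxStreams(2)          // the maxStreams limit described above
                .setAudioAttributes(attrs)
                .build();
    }
}
```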
Also note that Android Lollipop now supports multi-channel audio stream mixing:
Multi-channel audio stream mixing means professional audio
applications can now mix up to eight channels including 5.1 and 7.1
channels.
Multi-channel support
If your hardware and driver supports multichannel audio via HDMI, you
can output the audio stream directly to the audio hardware.
Unfortunately, I don't think there's an on-device way to combine two audio files into one. The best you can do is play both at the same time, from what I've read.
See the SoundPool documentation.
Related
I am looking for a way to mix an audio stream into an already-playing mixed audio stream. For example, when a sound is halfway through playing, I want to add another sound to play along with it without interrupting the first one. I would also like the ability to withdraw a playing sound stream from the mixed stream. Going through Android's relevant documentation, I think the only possible solution is to use native OpenSL ES via JNI to develop my own library, where I can programmatically mix in or take out an audio stream from the mixed audio streams. I would like to hear if anyone has a way to achieve this with less effort.
Thank you
Chris
Have you considered SoundPool?
http://developer.android.com/reference/android/media/SoundPool.html
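As a rough sketch of how SoundPool could cover that use case (R.raw.loop and R.raw.overlay are hypothetical resources, and in a real app you would wait for the OnLoadCompleteListener before playing):

```java
import android.content.Context;
import android.media.SoundPool;

// Rough sketch: load two sounds, start one, mix the second in later, and
// stop it again without touching the first. R.raw.loop and R.raw.overlay
// are hypothetical resources in your app.
public final class SoundMixExample {
    private final SoundPool pool = new SoundPool.Builder().setMaxStreams(4).build();
    private int loopId, overlayId;
    private int overlayStreamId;

    public void load(Context context) {
        // Loading is asynchronous; wait for OnLoadCompleteListener before playing.
        loopId = pool.load(context, R.raw.loop, 1);
        overlayId = pool.load(context, R.raw.overlay, 1);
    }

    public void startLoop() {
        // play(soundId, leftVol, rightVol, priority, loop, rate); -1 loops forever
        pool.play(loopId, 1f, 1f, 1, -1, 1f);
    }

    public void mixInOverlay() {
        overlayStreamId = pool.play(overlayId, 1f, 1f, 1, 0, 1f);
    }

    public void withdrawOverlay() {
        pool.stop(overlayStreamId);   // the first sound keeps playing
    }
}
```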
I have already developed a streaming audio application using the MediaPlayer API. All the features work fine, except that it takes a long time to start playback (the buffering time is long).
I want to add recording of the live audio stream (saving the live stream data to disk, not recording from the mic). As MediaPlayer does not provide any API to access the raw data stream, I am planning to build a custom audio player.
I want to control the buffering time, have access to the raw audio stream, and be able to play all the audio formats that Android supports natively. Which API (libpd, OpenSL ES, or AudioTrack) would be suitable for building a custom audio player on Android?
In my experience, OpenSL ES would be the choice; here is a link that explains how to do audio streaming that you may find useful. The bufferframes parameter determines how many samples you collect before playing, so smaller bufferframes means a faster response time, but you have to balance that against your device's processing capabilities.
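If you end up prototyping at the Java level with AudioTrack instead, the same tradeoff shows up as the buffer size you pass to the constructor. A small illustrative sketch (this is not the OpenSL ES route recommended above; the values are examples):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Illustration of the buffer-size/latency tradeoff at the Java level.
public final class BufferSizing {
    public static AudioTrack createStreamingTrack(int sampleRate) {
        int minBytes = AudioTrack.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);
        // A larger buffer (e.g. 2x the minimum) is more robust against
        // underruns, but increases the delay before audio is heard.
        int bufferBytes = minBytes * 2;
        return new AudioTrack(
                AudioManager.STREAM_MUSIC,
                sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                bufferBytes,
                AudioTrack.MODE_STREAM);
    }
}
```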
You can record with libpd (pd-for-android) too.
The entire recording process is managed by libpd.
Check the ScenePlayer project; it uses libpd and lets you record audio into a folder on the SD card:
https://github.com/libpd/pd-for-android/tree/master/ScenePlayer
I have seen how the output mix can be shared between players, but I am wondering how you would play multiple MP3 files simultaneously without using multiple players.
Obviously I could decode the MP3s with a third-party library (e.g. FFmpeg), mix the buffer streams myself, and pass the result into a player, but this seems over the top for my needs.
Decoding with OpenSL ES seems to require an FD/URI-to-BufferQueue player object, so that would not change the number of players.
Are there techniques I am missing?
The only constraints are that the solution must use C++ and OpenSL ES.
I'm building a buffering engine to play streams from a URL. I need to buffer both MP3 and AAC (on devices that support it), so I can't pass the URL directly to MediaPlayer. I tried this method: I have two synchronized threads, one that creates files with data from the buffer and a second that plays the files created. The problem is that when MediaPlayer switches from one file to another, there is a small gap. How can I remove it? It is very annoying.
Maybe my method is wrong; if so, can anyone provide a working method that doesn't chop the sound?
Thank you very much in advance.
It seems you are trying to implement gapless playback. (Right?)
To do this, you need to define the level of gapless playback you want to achieve: should it work across file formats/codecs and audio attributes such as sample rate, number of channels, etc.?
With your approach, you will surely see gaps across different streams (file formats, compression, audio attributes).
To achieve true gapless playback at the application level (my approach), you need to do the following:
Implement a custom stack that takes the input files, decodes them, and produces PCM samples. This stack will have parsers (MP3, AAC) and decoders (MP3, AAC, ...).
Pass the PCM samples through a resampler to produce PCM samples with the same sample rate.
Add buffering modules at the input (file) and output (resampled PCM data).
Use the AudioTrack class of the Android SDK for playout (see the sketch right after this list).
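A minimal playout sketch for that last step, assuming the custom stack delivers interleaved 16-bit PCM at 44.1 kHz stereo (the PcmSource interface below stands in for the output buffering module and is not a real API):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Minimal playout sketch for the AudioTrack step above.
public final class PcmPlayout {
    private static final int SAMPLE_RATE = 44100;

    public void run(PcmSource source) {
        int minBytes = AudioTrack.getMinBufferSize(
                SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(
                AudioManager.STREAM_MUSIC,
                SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBytes * 4,             // headroom against underruns
                AudioTrack.MODE_STREAM);
        track.play();

        short[] pcm = new short[4096];
        int read;
        // Keep writing decoded, resampled samples; because it is one continuous
        // AudioTrack, file boundaries in the playlist produce no audible gap.
        while ((read = source.readPcm(pcm)) > 0) {
            track.write(pcm, 0, read);
        }
        track.stop();
        track.release();
    }

    /** Placeholder for the output side of the buffering module. */
    public interface PcmSource {
        int readPcm(short[] out); // returns number of shorts written, 0 at end
    }
}
```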
If you stick to one file format, codec, and set of audio attributes, then at the application level you can concatenate all the files in the playlist and provide the result to MediaPlayer for playback. (Since audio streams are relatively small, this solution can be practical. The only obstacle would be the streams' attributes; if the audio OMX components within the Android multimedia stack support dynamic reconfiguration, this should be no issue at all.)
Shash
I am new to Android and am presently working on a voice recording application. I want to know which format is best for saving the audio file on Android (i.e. RAW-AMR, 3GP, or MP4), so that we can hear the playback loudly on the device.
Is there any alternative way to increase the audio volume through voice processing on Android?
Thanks in advance.
Question: Which bear is best? Answer: Black Bear
Seriously though, you would need to state your criteria for the audio file for us to make a codec recommendation. Does it need to be portable? Best compression? Highest fidelity?
The codec that you choose has no effect on the loudness of the audio that will be played over the device, so this should not factor into your criteria.
Is there an alternative way to increase audio?
Yes, if you are recording audio from the microphone then you can amplify the audio data before you save it to a file.
Let an audio sample from the microphone be represented by the function:
f(t)
Amplification is achieved by multiplying the audio sample by some factor A
A * f(t)
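As a rough sketch of that multiplication on the 16-bit PCM samples AudioRecord produces (the gain value is only an example, and the clamping keeps A * f(t) from overflowing the sample range):

```java
// Rough sketch: amplify a buffer of 16-bit PCM samples by a gain factor A,
// clamping to the short range to avoid overflow distortion.
public final class Amplify {
    public static void applyGain(short[] samples, float gain) {
        for (int i = 0; i < samples.length; i++) {
            int amplified = Math.round(samples[i] * gain);   // A * f(t)
            if (amplified > Short.MAX_VALUE) amplified = Short.MAX_VALUE;
            if (amplified < Short.MIN_VALUE) amplified = Short.MIN_VALUE;
            samples[i] = (short) amplified;
        }
    }
}
```

Calling applyGain(buffer, 2.0f) on each recorded buffer before writing it to the file roughly doubles the amplitude; too large a gain will clip and distort.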
You can use the AGC (Automatic Gain Control) module from WebRTC to increase the sound level.
I haven't found a simple Java API for it yet, but you can use the C++ API via JNI.
Have a look here: WebRTC AGC (Automatic Gain Control).
I want to know which format is best for saving the audio file on Android.
To save voice audio on Android (or any other platform), take a look at Opus. It's a free, state-of-the-art audio codec that also supports voice mode.
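If you record through MediaRecorder, Android 10 (API 29) and later can write Opus in an Ogg container directly. A minimal sketch, assuming API 29+ and the RECORD_AUDIO permission (the output path and bitrate are only examples):

```java
import android.media.MediaRecorder;
import java.io.IOException;

// Minimal sketch: record the mic to Opus in an Ogg container (API 29+).
public final class OpusVoiceRecorder {
    private MediaRecorder recorder;

    public void start(String outputPath) throws IOException {
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.OGG);   // API 29+
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.OPUS);  // API 29+
        recorder.setAudioSamplingRate(48000);      // Opus natively runs at 48 kHz
        recorder.setAudioEncodingBitRate(32000);   // plenty for voice
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start();
    }

    public void stop() {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}
```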