3GP/AMR mix/merge tracks - android

Is there an easy way to merge two 3GP (AMR) audio files into a single audio file?
I need them to play simultaneously, layered on top of each other, not one after the other. I am doing this on Android. I have heard that for some audio formats you can simply add the sample values together (taking care that the result doesn't go too high or too low). Is this true for the 3GP/AMR format on Android?

Android only allows playback and recording of 3GP/AMR files. To mix audio you need the decoded PCM data. This means you have to decode both streams, mix them (which is indeed adding the samples and normalizing), and then play the result back.
The bad part: there is no way to get access to the built-in AMR decoder in a way that lets you decode without playback.
So ... no easy way.
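For illustration, the mixing step itself is simple once you somehow have PCM. A minimal sketch, assuming both streams have already been decoded to 16-bit PCM buffers with the same sample rate and channel count:

// Mix two 16-bit PCM buffers by summing samples and clamping to the
// short range so the result cannot wrap around (hard clipping).
public static short[] mixPcm(short[] a, short[] b) {
    int length = Math.min(a.length, b.length);
    short[] mixed = new short[length];
    for (int i = 0; i < length; i++) {
        int sum = a[i] + b[i];                 // widen to int so the sum cannot overflow
        if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
        if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
        mixed[i] = (short) sum;
    }
    return mixed;
}

If the hard clipping distorts too much, dividing each sum by the number of tracks is the simplest form of normalization.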

Related

Decoding only some PCM bytes at a time from an mp3 file

How do I decode something on the order of 1000 bytes of PCM audio from an MP3 file without decoding the whole thing?
I need to mix four to six tracks down to one so that they play simultaneously on an AudioTrack in my Android app.
This can be done if I can get a stream of PCM samples: simply add the decoded tracks together (perhaps adjusting for clipping and volume) and write the result to an AudioTrack buffer.
That part is simple.
But how do I decode the individual MP3 files into input streams I can get byte arrays from? I've found something called JLayer, but it's not quite clear to me how to do this.
I'd rather avoid doing it in C++ (I'm a bit rusty, and my team doesn't like it), though I can if necessary. In that case I'd need a short example of how to get, say, 240 decoded bytes from a file via mpg123 or a similar library.
Any help is appreciated.
The smallest you can do is 576 samples, which is the smallest MP3 frame size. However, most MP3 streams use the bit reservoir, meaning you likely have to decode frames around the frame you want as well.
Complicating things further, bare MP3 streams don't have any internal timestamping, so if you want to drop accurately into the middle of a file, you have to decode up until that point. (MP3 frame headers don't contain byte lengths, so you can't just skim frame headers accurately.) You can try to needle-drop into the middle of the file based on byte length, but this isn't an accurate way of seeking and can be off by several seconds, even for CBR. For VBR, it's all over the place.
It sounds like all you really need is a streaming decoder, so that decoding happens as playback occurs. I'm no Android developer, but it seems you can just use AudioTrack from the framework in streaming mode (https://developer.android.com/reference/android/media/AudioTrack.html) and MediaCodec to do the actual decoding (https://developer.android.com/reference/android/media/MediaCodec.html). Android devices support MP3, so you don't need anything else.
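For illustration, a rough sketch of that pipeline (untested, no error handling, assumes API 21+ and that track 0 is the audio track; the file path is a placeholder):

import java.nio.ByteBuffer;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;

public static void decodeAndPlay(String path) throws Exception {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(path);                      // e.g. a local MP3 file
    MediaFormat format = extractor.getTrackFormat(0);   // assumes track 0 is the audio track
    extractor.selectTrack(0);

    MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
    codec.configure(format, null, null, 0);
    codec.start();

    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    int channelConfig = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT) == 1
            ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO;
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            channelConfig, AudioFormat.ENCODING_PCM_16BIT,
            AudioTrack.getMinBufferSize(sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT),
            AudioTrack.MODE_STREAM);
    track.play();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false, outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                ByteBuffer inBuf = codec.getInputBuffer(inIndex);
                int size = extractor.readSampleData(inBuf, 0);
                if (size < 0) {                         // no more compressed data
                    codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }
        int outIndex = codec.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            byte[] pcm = new byte[info.size];
            codec.getOutputBuffer(outIndex).get(pcm);   // one chunk of decoded 16-bit PCM
            track.write(pcm, 0, pcm.length);            // mixing of several tracks would happen here
            codec.releaseOutputBuffer(outIndex, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) outputDone = true;
        }
    }
    codec.stop(); codec.release(); extractor.release(); track.release();
}

To mix several files, you would run one extractor/decoder per file and sum the PCM chunks before writing to the AudioTrack.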

android get frequencies peaks from mp3

I'm creating an Android audio player and I want to add the ability to draw a live chart of frequency amplitudes (I don't know exactly what this is called). I know how to do it with an FFT, but an FFT can only be applied to raw data, and my player takes MP3s. So how do I extract the frequencies and their amplitudes from an MP3 file?
I see only one possible option: write my own native library that decompresses the MP3 file (AFAIK Android has no tools for decompressing MP3s) and then build the spectrogram with an FFT.
But this method has one essential downside: it takes a lot of time. Converting the MP3 to WAV and applying an FFT to the full raw data is slow. It would obviously be better to do it on the fly during playback, but I don't know how to do that.
Are there any other ways to achieve my goal?
P.S. I need something like this
This can be achieved with an MP3 decoder, for example mpg123. It can be built for Android and gives you access to the raw song data. I've also found this project, which uses different MP3 decoders.
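Once you have a chunk of raw PCM for the part currently playing, the spectrum itself is just a transform of that chunk. A minimal sketch (a naive O(n^2) DFT for clarity; a real visualizer would use an FFT library instead):

// Magnitude spectrum of one chunk of 16-bit mono PCM samples.
// Bin k corresponds to the frequency k * sampleRate / n in Hz.
public static double[] magnitudeSpectrum(short[] pcm) {
    int n = pcm.length;
    double[] magnitudes = new double[n / 2];
    for (int k = 0; k < n / 2; k++) {          // one output value per frequency bin
        double re = 0, im = 0;
        for (int t = 0; t < n; t++) {
            double angle = 2 * Math.PI * k * t / n;
            re += pcm[t] * Math.cos(angle);
            im -= pcm[t] * Math.sin(angle);
        }
        magnitudes[k] = Math.sqrt(re * re + im * im) / n;
    }
    return magnitudes;
}

Feeding this a small window (e.g. 1024 samples) per frame of the chart keeps the work cheap enough to do during playback.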

Android AudioRecord and MediaRecorder

I'm developing an audio processing application where I need to record audio and then process it to obtain features of that recording. However, I also want the audio in a playable format so I can play it afterwards with MediaPlayer.
I've seen that for recording audio you intend to process, it's better to use AudioRecord, because it gives you the raw audio. But then I can't write the data to a file in a playable format (is there any library to do this on Android?).
I used this method to record raw data and then write it into a file:
http://andrewbrobinson.com/2011/11/27/capturing-raw-audio-data-in-android/
But when I try to play this file on the device, it is not playable.
If I use MediaRecorder instead, I don't know how to decode the data to extract the features. I've been looking at MediaExtractor, but it seems that MediaExtractor doesn't decode the frames.
So... what's the best way to do this? I imagine this is common in audio processing applications, but I wasn't able to find a way to manage it.
Thanks for your replies.
Using AudioRecord is the right way to go if you need to do any kind of processing. To play it back, you have a couple options. If you're only going to be playing it back in your app, you can use AudioTrack instead of MediaPlayer to play raw PCM streams.
If you want it to be playable with other applications, you'll need to convert it to something else first. WAV is normally the simplest, since you just need to add the header. You can also find libraries for converting to other formats, like JOrbis for OGG, or JLayer for MP3, etc.
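As a rough illustration of the first option, playing back a raw PCM file with AudioTrack in streaming mode could look like this (a sketch; the sample rate, channel mask, and encoding here are assumptions and must match whatever AudioRecord used):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public static void playRawPcm(File rawFile) throws IOException {
    int sampleRate = 44100;                     // must match the recording settings
    int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            bufferSize, AudioTrack.MODE_STREAM);
    track.play();
    byte[] chunk = new byte[bufferSize];
    FileInputStream in = new FileInputStream(rawFile);
    int read;
    while ((read = in.read(chunk)) > 0) {
        track.write(chunk, 0, read);            // blocks until the chunk is queued
    }
    in.close();
    track.stop();
    track.release();
}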
For the best-quality result you should use the AudioRecord class instead of MediaRecorder.
Please have a look at the link below:
Need a simple example for audio recording
Also have a look at this link: http://i-liger.com/article/android-wav-audio-recording
If you use an AudioRecord object to get the raw audio signal, the next step of saving it as a playable file is not difficult: you just need to add a WAV header before the audio data you captured, and you get a .wav file that you can play on most mobile phones.
A .wav file is a file in the RIFF format. The RIFF header for a WAV file is 44 bytes long and contains the sample rate, sample width, and channel count. You can get the detailed information from here.
I wrote a sample function for this on Android phones and it worked.
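For reference, a sketch of what building that 44-byte header can look like (not the poster's original function; assumes uncompressed 16-bit PCM, and all multi-byte fields are little-endian):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Build a 44-byte RIFF/WAVE header for raw PCM data.
public static byte[] wavHeader(int pcmByteLength, int sampleRate, int channels, int bitsPerSample) {
    int byteRate = sampleRate * channels * bitsPerSample / 8;
    int blockAlign = channels * bitsPerSample / 8;
    ByteBuffer header = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    header.put("RIFF".getBytes());
    header.putInt(36 + pcmByteLength);        // RIFF chunk size = file size minus 8 bytes
    header.put("WAVE".getBytes());
    header.put("fmt ".getBytes());
    header.putInt(16);                        // size of the fmt sub-chunk
    header.putShort((short) 1);               // audio format 1 = uncompressed PCM
    header.putShort((short) channels);
    header.putInt(sampleRate);
    header.putInt(byteRate);
    header.putShort((short) blockAlign);
    header.putShort((short) bitsPerSample);
    header.put("data".getBytes());
    header.putInt(pcmByteLength);             // size of the PCM payload that follows
    return header.array();
}

Write these 44 bytes to the file first, then the captured PCM bytes, and the result is a playable .wav file.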

Gap buffering in Android devices

I'm building a buffering engine to play streams from a URL. I need to buffer both MP3 and AAC (on devices that support it), so I can't pass the URL directly to MediaPlayer. I tried this approach: I have two synchronized threads, one that creates files from the buffered data and a second that plays the files as they are created. The problem is that when MediaPlayer switches from one file to the next, there is a small gap. How can I remove it? It's very annoying...
Maybe my method is wrong; if so, can anyone suggest a working approach that doesn't chop the sound?
Thank you very much in advance.
It seems you are trying to implement gapless playback, right?
First you need to define the level of gapless playback you want to achieve: should it work across file formats/codecs and across audio attributes such as sample rate, number of channels, etc.?
With your approach, you'll surely see gaps between streams that differ in file format, compression, or audio attributes.
To achieve true gapless playback at the application level (my approach), you need to do the following:
Implement a custom stack that takes the input files, decodes them, and produces PCM samples. This stack needs parsers (MP3, AAC) and decoders (MP3, AAC, ...).
Pass the PCM samples through a resampler to produce PCM at a common sample rate (a sketch follows this list).
Add buffering modules at the input (file) and output (resampled PCM data) sides.
Use the AudioTrack class from the Android SDK for playout.
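For illustration, the resampling step could look roughly like this (a naive linear-interpolation sketch for mono 16-bit PCM; a real pipeline would use a proper polyphase resampler):

// Resample a mono 16-bit PCM buffer from inRate to outRate using
// linear interpolation between neighbouring input samples.
public static short[] resampleLinear(short[] in, int inRate, int outRate) {
    int outLength = (int) ((long) in.length * outRate / inRate);
    short[] out = new short[outLength];
    for (int i = 0; i < outLength; i++) {
        double srcPos = (double) i * inRate / outRate;   // fractional position in the input
        int idx = (int) srcPos;
        double frac = srcPos - idx;
        int next = Math.min(idx + 1, in.length - 1);
        out[i] = (short) Math.round(in[idx] * (1 - frac) + in[next] * frac);
    }
    return out;
}

Once every stream is at the same sample rate and channel count, you can write them back to back into a single AudioTrack without reconfiguring it, which is what removes the gap.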
If you stick to one file format, codec, and set of audio attributes, then at the application level you can concatenate all the files in the playlist and hand the result to MediaPlayer for playback. (Since audio streams are fairly small, this can be practical. The only obstacle would be differing stream attributes; if the audio OMX components in the Android multimedia stack support dynamic reconfiguration, this should be no issue at all.)
Shash

Programmatically Writing PCM WAV Data in Android

I'm looking for a way to programmatically save an array of shorts as PCM data. I know that this should be possible, but I haven't found a very easy way to do this on Android.
Essentially, I'm taking voltage data, and I want to save it in PCM format. My function looks something like this:
public void audifySignal(short[] signal) {
    // Create a WAV file from the incoming signal
}
Any suggestions or references would be awesome. It seems like the audio APIs built into Android are geared more toward recording directly from the mic than toward lower-level signal-processing work (at least when it comes to saving raw data to a file). I'd also like to avoid having to manually write the PCM file headers and whatnot...
Thanks!
Sam, I dunno about Android-specific libraries, but I'll go ahead and say this:
Raw PCM data is pretty straightforward: it's generally just sequential sample data. It may help to understand the WAV format in order to understand what PCM is and how it works.
WAV is fairly widely used as a container for uncompressed audio. Gaining an understanding of how the WAV file contains the data will cast a fair bit of light on how raw digital audio works in general.
This page helped me a fair bit:
http://www.sonicspot.com/guide/wavefiles.html
Interestingly, you can fire more or less ANY data at a sound card and it'll play it. It'll probably sound crazy to us humans, as the sound card doesn't care whether it's garbled or not.
Whether it sounds pleasing to the ear depends on whether you've provided the correct sample size, number of channels, and frequency, and PCM data that actually conforms to them.
You see, you can't "detect" the sample size, the number of channels, or the correct frequency from the raw PCM data itself. You have to store this crucial information ALONG with the PCM data so that other pieces of software can tell the sound card how to handle your PCM data.
That's where the WAV container format comes in.
There are other formats but WAV is pretty commonplace and it's therefore a good place to start.
Cheers
Tristen
You can use Android's AudioTrack to play raw PCM data, but it won't generate a WAV file or anything like that for you.
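For completeness, playing a short[] signal directly with AudioTrack in static mode could look like this (a sketch; the sample rate is an assumption and must match the signal, and this only plays the data, it does not write a .wav file):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public static void playSignal(short[] signal) {
    int sampleRate = 8000;                      // assumption: must match how the signal was generated
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            signal.length * 2,                  // buffer size in bytes (2 bytes per short)
            AudioTrack.MODE_STATIC);
    track.write(signal, 0, signal.length);      // in static mode, load the whole buffer before play()
    track.play();
}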
