I'm creating an Android audio player and I want to add the possibility of drawing a live chart of frequency amplitudes (I don't know exactly what it is called). I know how to do it with FFT, but FFT can only be applied to raw data, and my player gets MP3s. So how do I extract the frequencies and their amplitudes from an MP3 file?
I see only one possible solution: write my own native library that decompresses the MP3 file (AFAIK Android has no tools for decompressing MP3s) and then builds the spectrogram using FFT.
But this method has one essential minus: it takes a large amount of time. Converting the MP3 to WAV and applying FFT to the full raw data is slow. It would obviously be better to do it on the fly during playback, but I don't know how to do that.
Are there any other ways to achieve my goal?
P.S. I need something like this
This goal can be achieved by using MP3 decoders, for example mpg123. It can be built for Android and gives you access to the raw song data. I've also found this project, which uses several different MP3 decoders.
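For the FFT side of the question, here is a minimal sketch in Java of turning one block of decoded samples into per-bin amplitudes. It assumes the decoder hands you 16-bit PCM in short[] blocks whose length is a power of two (e.g. 1024); bin k then corresponds to k * sampleRate / n Hz. The radix-2 FFT is written inline so the sketch is self-contained.

public class SpectrumHelper {

    /**
     * Computes the magnitude spectrum of one block of PCM samples.
     * pcm.length must be a power of two. Returns pcm.length / 2
     * magnitudes, one per frequency bin.
     */
    public static double[] magnitudes(short[] pcm) {
        int n = pcm.length;
        double[] re = new double[n];
        double[] im = new double[n];
        for (int i = 0; i < n; i++) {
            re[i] = pcm[i] / 32768.0; // normalize 16-bit samples to [-1, 1)
        }
        fft(re, im);
        double[] mag = new double[n / 2];
        for (int i = 0; i < n / 2; i++) {
            mag[i] = Math.hypot(re[i], im[i]);
        }
        return mag;
    }

    /** In-place iterative radix-2 Cooley-Tukey FFT. */
    private static void fft(double[] re, double[] im) {
        int n = re.length;
        // Bit-reversal permutation.
        for (int i = 1, j = 0; i < n; i++) {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) {
                j ^= bit;
            }
            j |= bit;
            if (i < j) {
                double t = re[i]; re[i] = re[j]; re[j] = t;
                t = im[i]; im[i] = im[j]; im[j] = t;
            }
        }
        // Butterfly passes.
        for (int len = 2; len <= n; len <<= 1) {
            double ang = -2 * Math.PI / len;
            double wRe = Math.cos(ang), wIm = Math.sin(ang);
            for (int i = 0; i < n; i += len) {
                double curRe = 1, curIm = 0;
                for (int k = 0; k < len / 2; k++) {
                    int a = i + k, b = i + k + len / 2;
                    double tRe = re[b] * curRe - im[b] * curIm;
                    double tIm = re[b] * curIm + im[b] * curRe;
                    re[b] = re[a] - tRe; im[b] = im[a] - tIm;
                    re[a] += tRe;        im[a] += tIm;
                    double nRe = curRe * wRe - curIm * wIm;
                    curIm = curRe * wIm + curIm * wRe;
                    curRe = nRe;
                }
            }
        }
    }
}

Feeding each decoded block through magnitudes() during playback gives you exactly the live amplitude-per-frequency data the chart needs.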
I want to write an app on Android to record the snoring sounds of a sleeper and analyze them afterwards (i.e., not in real time) for signs of a medical condition called obstructive sleep apnea.
The Android devices I've experimented with have voice recorders that produce a file format called .3ga. I want to programmatically read in the audio file and look at the amplitude of each individual time sample. Then I can analyze that for patterns. Would this be easier if I converted the file to a different format, e.g., MP3, and if so, how can I do that programmatically?
I did a Google search on this, and most of the hits seemed to be related to audio recording or playback, which are unrelated to what I'm trying to do. I haven't coded anything yet because I don't know how to get started.
You are looking to do sample-based analysis on a raw audio signal, but the formats you mention are compressed. You will need to either deal with raw samples directly, or decompress the audio and then analyze.
Since you said you can do this work after-the-fact, why not upload to a server and analyze there?
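If you would rather decode on the device, one possible route (my suggestion, not something the answer above prescribes) is Android's MediaExtractor plus MediaCodec, which can decode a compressed recording such as a .3ga/AMR file into raw PCM samples for analysis. A rough sketch, assuming API 21+ for the getInputBuffer/getOutputBuffer calls and that track 0 is the audio track:

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.List;

public class PcmDecoder {

    /** Decodes the first audio track of the given file into 16-bit PCM chunks. */
    public static List<short[]> decodeToPcm(String path) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);
        MediaFormat format = extractor.getTrackFormat(0); // assume track 0 is audio
        extractor.selectTrack(0);

        MediaCodec codec = MediaCodec.createDecoderByType(
                format.getString(MediaFormat.KEY_MIME));
        codec.configure(format, null, null, 0);
        codec.start();

        List<short[]> chunks = new ArrayList<>();
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false, outputDone = false;

        while (!outputDone) {
            if (!inputDone) {
                int inIndex = codec.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {
                    ByteBuffer inBuf = codec.getInputBuffer(inIndex);
                    int size = extractor.readSampleData(inBuf, 0);
                    if (size < 0) {
                        // No more compressed data: signal end of stream.
                        codec.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        codec.queueInputBuffer(inIndex, 0, size,
                                extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int outIndex = codec.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {
                // Copy the decoded PCM out of the codec's buffer.
                ByteBuffer outBuf = codec.getOutputBuffer(outIndex);
                short[] samples = new short[info.size / 2];
                outBuf.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);
                chunks.add(samples);
                codec.releaseOutputBuffer(outIndex, false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    outputDone = true;
                }
            }
        }
        codec.stop();
        codec.release();
        extractor.release();
        return chunks;
    }
}

Each short in the result is one time sample, so you can inspect amplitudes directly; there is no need to convert to MP3 first (MP3 is also compressed, so it would not help).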
I am developing a recording app that includes a pause/play option.
I tried both MediaRecorder and AudioRecord.
With AudioRecord, the recorded audio takes up a lot of space: if I record one minute of audio it consumes 40 to 50 MB, and it is really painful to combine the pieces by converting them to a .raw file and sending that to a PHP server.
So I tried MediaRecorder. It produces much smaller files, but I am not able to combine them the way I did with AudioRecord.
Next I tried the Android NDK; even the setup process is really painful.
Now my question is: which is the best way to combine recorded audio files?
Using the Android NDK.
Reading the byte data from the audio and combining it. If I use this, there is a problem with the headers of recording formats such as AMR and WAV.
Also, if I try this I am not able to get the javax.sound package, so I tried plugins, but no luck.
Please suggest the best way to do this. I have also tried all of the following links:
Audio Link 1
Audio Link 2
Audio Link 3
Audio Link 4
A good tutorial, samples, or links would be appreciated. Thanks.
For something like this your best bet would be to develop native C++ code using the NDK.
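That said, if you record with AudioRecord and wrap the PCM in WAV yourself, the combining can stay in plain Java: append the sample data of one file to the other and patch the size fields in the header. A sketch, under the assumption that both files are canonical 44-byte-header WAVs with identical sample rate, channel count, and bit depth:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

public class WavJoiner {

    private static final int HEADER_SIZE = 44; // canonical PCM WAV header

    /**
     * Appends the sample data of 'second' onto 'first', writing the result
     * to 'out'. Both inputs must share sample rate, channels and bit depth.
     */
    public static void join(File first, File second, File out) throws IOException {
        FileOutputStream fos = new FileOutputStream(out);
        copy(first, fos, 0);            // header + samples of the first file
        copy(second, fos, HEADER_SIZE); // samples only of the second file
        fos.close();

        // Patch the two size fields in the header of the combined file.
        long total = out.length();
        RandomAccessFile raf = new RandomAccessFile(out, "rw");
        raf.seek(4);
        writeIntLE(raf, (int) (total - 8));           // RIFF chunk size
        raf.seek(40);
        writeIntLE(raf, (int) (total - HEADER_SIZE)); // data chunk size
        raf.close();
    }

    private static void copy(File src, FileOutputStream dst, int skip) throws IOException {
        FileInputStream fis = new FileInputStream(src);
        byte[] header = new byte[skip];
        int read = 0;
        while (read < skip) {
            read += fis.read(header, read, skip - read); // discard the header
        }
        byte[] buf = new byte[8192];
        int n;
        while ((n = fis.read(buf)) > 0) {
            dst.write(buf, 0, n);
        }
        fis.close();
    }

    private static void writeIntLE(RandomAccessFile raf, int v) throws IOException {
        raf.write(v & 0xff);
        raf.write((v >> 8) & 0xff);
        raf.write((v >> 16) & 0xff);
        raf.write((v >> 24) & 0xff);
    }
}

This avoids the NDK entirely, at the cost of WAV's larger file size; compressed formats like AMR cannot be joined this naively because their framing differs.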
I'm building a buffering engine to play streams from a URL. I need to buffer both MP3 and AAC (on devices that support it), so I can't pass the URL directly to MediaPlayer. I tried this method: I have two synchronized threads, one that creates files from the buffered data and a second that plays the files as they are created. The problem is that when MediaPlayer switches from one file to the next, there is a little gap. How can I remove it? It's very annoying...
Maybe my method is wrong; if so, can anyone suggest a working method that doesn't chop the sound?
Thank you very much in advance.
It seems you are trying to implement gapless playback. (Right?)
To do this you need to define the level of gapless playback you want to achieve: should it work across file formats / codecs, and across audio attributes like sample rate, number of channels, etc.?
With your approach, you will surely see gaps across different streams (file formats, compression, audio attributes).
To achieve true gapless playback at the application level (my approach), you need to do the following (a sketch of the playout step appears after this list):
Implement a custom stack that takes the input files, decodes them, and produces PCM samples. This stack will have parsers (MP3, AAC) and decoders (MP3, AAC, ...).
Pass the PCM samples through a resampler to produce PCM samples with the same sample rate.
Add buffering modules at the input (file) and at the output (resampled PCM data).
Use the AudioTrack class of the Android SDK for playout.
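For that last step, a minimal sketch of continuous AudioTrack playout is below; the stereo configuration and 16-bit encoding are assumptions for illustration. Because every decoded chunk is written to the same AudioTrack instance, the file boundary never reaches the hardware as a stop/start, which is what causes the gap with MediaPlayer.

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class PcmPlayer {

    private final AudioTrack track;

    public PcmPlayer(int sampleRate) {
        int bufSize = AudioTrack.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);
        track = new AudioTrack(
                AudioManager.STREAM_MUSIC,
                sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                bufSize * 4,              // a few buffers of headroom
                AudioTrack.MODE_STREAM);  // streaming mode: write as you decode
        track.play();
    }

    /** Feed one decoded (and resampled) PCM chunk; blocks until written. */
    public void write(short[] pcm) {
        track.write(pcm, 0, pcm.length);
    }

    public void release() {
        track.stop();
        track.release();
    }
}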
If you stick to one file format, codec, and set of audio attributes, then at the application level you can concatenate all the files in the playlist and provide the result to MediaPlayer for playback. (Since audio streams are small, this solution can be practical. The only obstacle would be the stream attributes; if the audio OMX components within the Android multimedia stack support dynamic reconfiguration, then this should be no issue at all.)
Shash
Is there an easy way to merge two 3GP (AMR) audio files into a single audio file?
I need them to be synchronous/on top of each other, not one after the other. I am using Android to do this. I have heard that for some audio formats you can simply add the bytes together (being careful that the result doesn't go too high or too low). Is this true of the 3GP/AMR format on Android?
Android only allows playback and recording of 3GP/AMR files. To mix audio you need the decoded PCM data, which means you have to decode both streams, mix them (this is indeed adding + normalizing), and then play back the result.
The bad side: there is no way to access the built-in AMR decoder in a way that lets you decode without playing back.
So ... no easy way.
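For completeness, the mixing step itself is simple once you do have PCM. A sketch of the "adding + normalizing" mentioned above, with clamping as the guard against results that are too high or too low:

public class PcmMixer {

    /**
     * Mixes two 16-bit PCM buffers by adding the samples and clamping
     * the sum to the 16-bit range to avoid overflow.
     */
    public static short[] mix(short[] a, short[] b) {
        short[] out = new short[Math.min(a.length, b.length)];
        for (int i = 0; i < out.length; i++) {
            int sum = a[i] + b[i];
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }
}

The hard part on Android (at least without the NDK) remains getting the decoded PCM out of the AMR files in the first place, as the answer says.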
I'm looking for a way to programmatically save an array of shorts as PCM data. I know that this should be possible, but I haven't found a very easy way to do this on Android.
Essentially, I'm taking voltage data, and I want to save it in PCM format. My function looks something like this:
public void audifySignal(short[] signal) {
// Create a WAV file from the incoming signal
}
Any suggestions would be awesome, or even references. It seems like the audio APIs built into Android are geared more toward recording directly from the mic, and not so much toward lower-level signal-processing work (at least when it comes to saving raw data to a file). I'd also like to avoid having to manually write the PCM file headers and whatnot...
Thanks!
Sam, I dunno about Android-specific libraries, but I'll go ahead and say this:
Raw PCM data is pretty straightforward: it's generally just sequential sample values. It may help to understand the WAV format first in order to understand what PCM is and how it works.
WAV is fairly widely used as a container for uncompressed audio. Gaining an understanding of how the WAV file contains the data will cast a fair bit of light on how raw digital audio works in general.
This page helped me a fair bit:
http://www.sonicspot.com/guide/wavefiles.html
Interestingly, you can more or less fire ANY data at a sound card and it'll play it. It'll probably sound crazy to us humans, since the sound card doesn't care whether it sounds garbled or not.
Whether it sounds pleasing to the ear will depend on whether you've provided the correct sample size, number of channels, and frequency, along with PCM data that conforms to them.
You see, you can't "detect" the sample size, the number of channels, or the correct frequency from the raw PCM data itself. You have to store that crucial information ALONG with the PCM data so that other pieces of software can tell the sound card how to handle your PCM data.
That's where the WAV container format comes in.
There are other formats but WAV is pretty commonplace and it's therefore a good place to start.
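To tie this back to the question: writing the canonical 44-byte WAV header by hand is only about twenty lines, so audifySignal can be implemented without any library. A sketch, assuming mono 16-bit samples; the 44100 Hz sample rate in the usage note is just a placeholder:

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavWriter {

    /** Saves mono 16-bit PCM samples as a canonical WAV file. */
    public static void save(short[] signal, int sampleRate, String path)
            throws IOException {
        int dataSize = signal.length * 2;          // bytes of sample data
        ByteBuffer buf = ByteBuffer.allocate(44 + dataSize)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        // RIFF header
        buf.put(new byte[]{'R', 'I', 'F', 'F'});
        buf.putInt(36 + dataSize);                 // remaining chunk size
        buf.put(new byte[]{'W', 'A', 'V', 'E'});
        // fmt sub-chunk
        buf.put(new byte[]{'f', 'm', 't', ' '});
        buf.putInt(16);                            // fmt sub-chunk size for PCM
        buf.putShort((short) 1);                   // audio format 1 = PCM
        buf.putShort((short) 1);                   // channels: mono
        buf.putInt(sampleRate);
        buf.putInt(sampleRate * 2);                // byte rate = rate * block align
        buf.putShort((short) 2);                   // block align = channels * 2
        buf.putShort((short) 16);                  // bits per sample
        // data sub-chunk
        buf.put(new byte[]{'d', 'a', 't', 'a'});
        buf.putInt(dataSize);
        for (short s : signal) {
            buf.putShort(s);
        }
        FileOutputStream fos = new FileOutputStream(path);
        fos.write(buf.array());
        fos.close();
    }
}

The question's method then reduces to a one-liner, e.g. WavWriter.save(signal, 44100, "/sdcard/out.wav"); (the rate and path are hypothetical).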
Cheers
Tristen
You can use Android's AudioTrack to play back the raw PCM data, but it is not a way to generate a WAV file or the like.