I am currently working with AudioTrack. I am loading an mp3 file and playing it. However, depending on the device, the music plays either at half the normal rate or at the normal rate.
My code:
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRate,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        initBuffer,
        AudioTrack.MODE_STREAM);
sampleRate is the sample rate reported by the track's MediaFormat, and initBuffer is the buffer size from AudioTrack.getMinBufferSize().
I have tried changing the sample rate, but it makes no difference; the buffer size also has no impact.
In fact, switching to CHANNEL_CONFIGURATION_STEREO does make the music play at the normal rate on the devices that were slow, but it also makes the ones that were working fine play at twice the normal speed. My problem is that I want all devices to play at normal speed.
Any suggestions?
I have read this thread: Android AudioTrack slow playback, but it doesn't tell me how to find out which devices should play in mono or stereo.
Devices at normal speed: Urbano 4.2.2, Galaxy S4 4.3
Devices at half speed: Galaxy S4 4.2.2, Xperia Z 4.2.2
BTW, I cannot use MediaPlayer for playback. The AudioTrack is included in a custom player and I need to write audio data as I extract it. MediaPlayer won't do the trick.
Thanks.
I experienced this when trying to decode a mono file using MediaCodec. On some devices the codec would output a stereo stream, while on others it was mono. When setting up the AudioTrack, I used the MediaFormat returned by MediaExtractor, which was mono. On devices where the codec produced a stereo stream, the AudioTrack would be fed twice as many samples. The solution is to listen for the output format change from MediaCodec (INFO_OUTPUT_FORMAT_CHANGED) and adjust the AudioTrack to the codec's actual output MediaFormat.
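A minimal sketch of that approach, assuming a typical decode loop (codec, bufferInfo, track, and TIMEOUT_US are placeholder names):

int outIndex = codec.dequeueOutputBuffer(bufferInfo, TIMEOUT_US);
if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    // The codec's actual output format may differ from the extractor's format.
    MediaFormat newFormat = codec.getOutputFormat();
    int sampleRate = newFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    int channelCount = newFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
    int channelConfig = (channelCount == 1)
            ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO;
    int minBuf = AudioTrack.getMinBufferSize(sampleRate, channelConfig,
            AudioFormat.ENCODING_PCM_16BIT);
    track.release();  // rebuild the track with the channel count the codec actually produces
    track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfig,
            AudioFormat.ENCODING_PCM_16BIT, minBuf, AudioTrack.MODE_STREAM);
    track.play();
}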
AudioTrack only accepts PCM audio data; you cannot play an mp3 with AudioTrack directly. You have to decode the mp3 to PCM first, or use something else (e.g. MediaPlayer) instead of AudioTrack.
I finally solved my problem. It's not the best fix, but it works at least.
For those who are interested: I read the BufferInfo size and, based on it, decide at which playback rate to play. When playback is slow, the size is twice as big as for normal-speed playback. Just a guess, but MediaCodec might be duplicating the data for a stereo configuration.
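A rough sketch of that heuristic, where expectedMonoChunkSize is a placeholder for the chunk size the extractor's mono format implies (the format-change approach above is more robust):

// If the decoder hands back twice the bytes a mono chunk should hold,
// assume it upmixed to stereo and compensate the playback rate.
if (bufferInfo.size >= 2 * expectedMonoChunkSize) {
    track.setPlaybackRate(sampleRate * 2);  // crude compensation for the doubled sample count
}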
This one is a little tricky but here is what you can do using adb and ALSA.
Android internally uses ALSA.
Your device should have ALSA; try:
root#user:/$ adb shell cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version 1.0.25.
Just take a look at Reading ALSA Device names.
Every device (subdivision of a card) has a capture and a playback component.
Like /proc/asound/card0/pcm1p and /proc/asound/card0/pcm1c, where the card is 0 and the device is 1.
pcm1p is your playback device and pcm1c is your capture device (for recording).
Access your device using adb:
root#user:/$ adb shell
shell#android:/$:
Identifying your device:
As you'll see, /proc/asound/pcm gives you the full list:
shell#android:/$: cat /proc/asound/pcm
00-00: MultiMedia1 (*) : : playback 1 : capture 1
00-01: MultiMedia2 (*) : : playback 1 : capture 1
00-02: CS-Voice (*) : : playback 1 : capture 1
From the above I find that 00-00: MultiMedia1 (*), i.e. card 0, device 0, is used for multimedia playback.
Getting the playback parameters:
Play your loaded mp3 file using your standard music player application. While the song is playing, issue the following command for card 0, device 0 (p = playback), subdevice 0:
shell#android:/$: cat /proc/asound/card0/pcm0p/sub0/hw_params
access: RW_INTERLEAVED
format: S16_LE
subformat: STD
channels: 2
rate: 44100 (44100/1)
period_size: 1920
buffer_size: 3840
So try using the same values when you call AudioTrack track = new AudioTrack(...); a sketch follows below.
The above values are only visible while the device is open (i.e., playing some audio).
If you issue this against the wrong device (say, pcm1p), you will see the following:
shell#android:/$: cat /proc/asound/card0/pcm1p/sub0/hw_params
closed
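Mapping the hw_params above back to Java, a minimal sketch (stereo, S16_LE, 44.1 kHz; the buffer size should still come from getMinBufferSize() rather than the raw ALSA buffer_size):

int sampleRate = 44100;                              // rate: 44100
int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;  // channels: 2
int encoding = AudioFormat.ENCODING_PCM_16BIT;       // format: S16_LE
int bufSize = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRate, channelConfig, encoding, bufSize, AudioTrack.MODE_STREAM);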
Note: Step 1 doesn't require the phone to be rooted; see How to Setup ADB.
Alternatives:
Have you tried the AudioTrack APIs like getChannelConfiguration() or getChannelCount()? Just asking.
Have you tried looking at the MP3 file's properties? The metadata has information on the audio parameters.
Related
I'm making an app which uses the MediaCodec APIs.
The app runs on two phones. The first phone reads the video from the sdcard, uses the MediaCodec encoder to encode the frames in AVC format, and then streams the frames to the other device. The second device runs a MediaCodec decoder, which decodes the frames and renders them on a Surface.
The code runs fine, but after some time, as the frames get larger, the first device is sometimes unable to stream the video and the encoder stalls, reporting the following log:
E/OMX-VENC-720p( 212): Poll timedout, pipeline stalled due to client/firmware ETB: 496, EBD: 491, FTB: 492, FBD: 492
So I want to implement frame skipping on the encoder side.
What's the best way to skip frames and not stream them to the other device?
PS. On a separate note, if anyone can suggest another way of streaming video to the other device, that would be really nice.
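One simple approach, sketched below under assumptions not stated in the thread (frameIndex, skipEveryN, frameData, presentationTimeUs, and TIMEOUT_US are hypothetical names), is to drop frames before they ever reach the encoder's input buffers, so nothing downstream sees them:

if (frameIndex++ % skipEveryN != 0) {  // drop every skipEveryN-th source frame
    int inIndex = encoder.dequeueInputBuffer(TIMEOUT_US);
    if (inIndex >= 0) {
        ByteBuffer inBuf = encoder.getInputBuffers()[inIndex];
        inBuf.clear();
        inBuf.put(frameData);
        // keep the original presentation timestamp so playback timing stays correct
        encoder.queueInputBuffer(inIndex, 0, frameData.length, presentationTimeUs, 0);
    }
}

Dropping at the encoder input is safe for AVC: the encoder simply encodes fewer frames, whereas dropping already-encoded packets would break P-frame references at the decoder.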
Please try the Intel INDE Media Pack; there are tutorials at https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials. It has Camera, File, and Game streaming components, which enable streaming with the help of Wowza, plus a set of samples demonstrating how to use it as both a server and a client.
I'm building an audio capture application on a rooted MK809/Android 4.1.1. There is no internal mic, so I am trying to use a USB one, which is correctly detected as "USB Audio Device" under Settings/Sound/Sound Devices Manager/Sound Input Devices when connected.
What is this device's AudioSource value to pass into the AudioRecord constructor (first argument)? I tried every one in MediaRecorder.AudioSource; none worked. I am only interested in reading the capture buffer, not saving into a file.
Answering my own question: the following values did work: DEFAULT, MIC, CAMCORDER, and probably others too, as it is the only input device.
I was trying to use a sample rate of 48000 (which works on Windows) and AudioRecord creation failed with:
ERROR/AudioRecord(1615): Could not get audio input for record source 1
ERROR/AudioRecord-JNI(1615): Error creating AudioRecord instance: initialization check failed.
ERROR/AudioRecord-Java(1615): [ android.media.AudioRecord ] Error code -20 when initializing native AudioRecord object.
Somewhat misleading, considering that a call to getMinBufferSize() with the same set of arguments does not return an error as it is supposed to, so I assumed 48000 was a valid sample rate for the device. Setting it to 44100 (guaranteed to be supported) fixed the problem.
USB audio input devices do work on Android, Jelly Bean at least. Hope this helps someone.
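A defensive pattern that follows from this, as a sketch (not from the answer above): probe candidate rates and verify the constructed AudioRecord's state, since getMinBufferSize() alone can mislead:

AudioRecord record = null;
for (int rate : new int[] {48000, 44100, 16000, 8000}) {
    int minBuf = AudioRecord.getMinBufferSize(rate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    if (minBuf <= 0) continue;  // rate rejected outright
    AudioRecord candidate = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);
    if (candidate.getState() == AudioRecord.STATE_INITIALIZED) {
        record = candidate;  // this rate actually works on this device
        break;
    }
    candidate.release();  // passed getMinBufferSize() but failed native init, as above
}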
FWIW, this is implementation specific (it can differ between different platform vendors and OEMs).
On the devices I've worked on, the USB accessory's mic would be chosen if the AudioSource is DEFAULT, MIC or VOICE_RECOGNITION, and the only sample rates supported in the audio HAL for USB audio recording were 8, 16 and 48 kHz (although the AudioFlinger is able to resample to other rates within a certain range).
I'm writing an audio streaming app that buffers AAC file chunks, decodes those chunks to PCM byte arrays, and writes the PCM audio data to AudioTrack. Occasionally, I get the following error when I try to skip to a different song, call AudioTrack.pause(), or call AudioTrack.flush():
obtainbuffer timed out -- is cpu pegged?
When this happens, a split second of audio continues to play. I've tried reading a set of AAC files from the sdcard and got the same result. The behavior I'm expecting is that the audio stops immediately. Does anyone know why this happens? I wonder if it's an audio latency issue with Android 2.3.
edit: The AAC audio contains an ADTS header. The header plus the audio payload constitute what I'm calling an ADTS frame. These frames are fed to the decoder one at a time. The resulting PCM byte array returned from the C layer to the Java layer is fed to Android's AudioTrack API.
edit 2: I got my Nexus 7 (Android 4.1) today and loaded the same app onto it. It didn't have any of these problems at all.
It is quite possibly the sample rate: one of your devices might support the sample rate you used while the other does not. Please check it. I had the same issue, and it was the sample rate; use 44.1 kHz (44100) and try again.
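One quick way to check what a device prefers, as a sketch (this reports the native output rate of the stream, which is the safest rate to feed AudioTrack):

int nativeRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
Log.d("AudioCheck", "native output sample rate: " + nativeRate + " Hz");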
In my application I issue the following statement:
toneGenerator.startTone(ToneGenerator.TONE_PROP_ACK, 600);
This works very well on a cheap LG LS670 running Android 2.3.3, but doesn't sound at all on any of the other phones I have, ranging from Android 2.2.1 to Android 2.3.4.
So I know the OS version doesn't play a role here (I also verified in the documentation that it has been supported since API 1).
Also, both Ringer volume and Media volume are set to maximum and toneGenerator is initialized with:
toneGenerator = new ToneGenerator(ToneGenerator.TONE_DTMF_1, 100);
And I verified that Settings.System.DTMF_TONE_WHEN_DIALING is set to 1.
Baffled by this inconsistent behavior across different phones, I examined the system logs, and the only suspicious difference I have been able to find is that the phones that fail to sound TONE_PROP_ACK have this line in their log:
AudioFlinger setParameters(): io 25, keyvalue routing=0, tid 155, calling tid 121
What is the purpose of AudioFlinger, and what could be its connection to muting TONE_PROP_ACK?
Any idea how to fix my code so that TONE_PROP_ACK always sounds, regardless of phone model?
One workaround is to generate the tone in something like Audacity and play it through SoundPool or the API of your choice.
According to the Android docs ToneGenerator.TONE_PROP_ACK is:
1200Hz, 100ms ON, 100ms OFF 2 bursts
If you choose SoundPool, I suggest saving in ogg file format and looping the tone until complete. This will provide seamless audio from a very small clip without using a lot of resources.
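A rough sketch of that approach, assuming a hypothetical res/raw/ack_burst.ogg containing a single 1200 Hz, 100 ms-on/100 ms-off burst:

SoundPool pool = new SoundPool(1, AudioManager.STREAM_MUSIC, 0);
pool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool soundPool, int sampleId, int status) {
        if (status == 0) {
            // loop = 1 repeats the clip once, giving the two bursts of TONE_PROP_ACK
            soundPool.play(sampleId, 1f, 1f, 1, 1, 1f);
        }
    }
});
pool.load(context, R.raw.ack_burst, 1);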
The parsing/decoding is handled by Stagefright, which is used by the media player service. The decoded data is written to an AudioTrack through an audio sink, and the tracks are then mixed by the AudioFlinger's mixer thread(s) and written to an output stream (the audio hardware). The output stream object fills up its own buffer(s) and then writes the data to the PCM output device file (which may or may not be an ALSA driver).
I am developing a karaoke app and have been dealing with audio on Android. I'd like opinions on how I can save a single track mixed from a recorded voice and a song playing in the background.
Thanks
You can try the FFmpeg library for Android; consider using an Android wrapper. There are several SO answers that can help:
Audio song mixer in android programmatically
How to overlay two audio files using ffmpeg
FFmpeg on Android
You may want to check the amerge and amix ffmpeg filters, depending on your needs; a sample invocation follows.
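For instance, a plausible amix invocation to blend a recorded voice with a backing track (file names are placeholders):

ffmpeg -i voice.wav -i song.mp3 -filter_complex amix=inputs=2:duration=longest mixed.mp3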
SoundPool is a good option if the requirements can be met just by playing two audio streams together (see the sketch after the quote below).
SoundPool can also manage the number of audio streams being rendered at once. When the SoundPool object is constructed, the maxStreams parameter sets the maximum number of streams that can be played at a time from this single SoundPool.
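A sketch of the two-stream case (resource names are hypothetical; SoundPool.load() is asynchronous, hence the listener):

final SoundPool pool = new SoundPool(2, AudioManager.STREAM_MUSIC, 0);  // up to 2 simultaneous streams
final int[] ids = new int[2];
pool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
    int loaded = 0;
    @Override
    public void onLoadComplete(SoundPool sp, int sampleId, int status) {
        if (status == 0 && ++loaded == 2) {        // start only once both clips are ready
            sp.play(ids[0], 1f, 1f, 1, 0, 1f);     // voice at full volume
            sp.play(ids[1], 0.7f, 0.7f, 1, 0, 1f); // backing track a bit lower
        }
    }
});
ids[0] = pool.load(context, R.raw.voice_take, 1);
ids[1] = pool.load(context, R.raw.backing_track, 1);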
Also note that Android Lollipop now supports multi-channel audio stream mixing:
Multi-channel audio stream mixing means professional audio applications can now mix up to eight channels, including 5.1 and 7.1 channels.
Multi-channel support
If your hardware and driver support multichannel audio via HDMI, you can output the audio stream directly to the audio hardware.
Unfortunately, I don't think there's an "on-device" way to combine two audio files into one. The best you can do is play both at the same time, from what I've read.
See the SoundPool documentation.