I'm using a socket as a proxy to the MediaPlayer so I can download and decrypt MP3 audio before writing it to the socket. This is similar to the example shown in the NPR news app; however, I'm using this for all Android versions 2.1 - 4 at the moment.
NPR StreamProxy code - http://code.google.com/p/npr-android-app/source/browse/Npr/src/org/npr/android/news/StreamProxy.java
My issue is that playback starts quickly on 2.1 - 2.3, but on Android 4.0 (ICS) the MediaPlayer buffers too much data before firing the onPrepared listener.
An example amount of data written to the Socket OutputStream before onPrepared():
On SGS2 with 2.3.4 - onPrepared() after ~ 133920 bytes
On Nexus S with 4.0.4 - onPrepared() after ~ 961930 bytes
This also occurs on the Galaxy Nexus.
Weirdly, the 4.0 emulator doesn't buffer as much data as 4.0 devices do. Has anyone experienced a similar issue with the MediaPlayer on ICS?
EDIT
Here's how the proxy is writing to the socket. In this example it's from a CipherInputStream loaded from a file, but the same occurs when it's loaded from the HttpResponse.
final Socket client = (setup above)

// encrypted file input stream
final CipherInputStream inputStream = getInputStream(file);

// setup the socket output stream
final OutputStream output = client.getOutputStream();

// Writing the header
final String httpHeader = buildHttpHeader(file.length());
final byte[] buffer = httpHeader.getBytes("UTF-8");
output.write(buffer, 0, buffer.length);

int writtenBytes = 0;
int readBytes;
final byte[] buff = new byte[1024 * 12]; // 12 KB

while (mIsRunning && (readBytes = inputStream.read(buff)) != -1) {
    output.write(buff, 0, readBytes);
    writtenBytes += readBytes;
}

output.flush();
output.close();
The HTTP headers that are written to the MediaPlayer before the audio:
private String buildHttpHeader(final long contentLength) {
    final StringBuilder sb = new StringBuilder();

    sb.append("HTTP/1.1 200 OK\r\n");
    sb.append("Content-Length: ").append(contentLength).append("\r\n");
    sb.append("Accept-Ranges: bytes\r\n");
    sb.append("Content-Type: audio/mpeg\r\n");
    sb.append("Connection: close\r\n");
    sb.append("\r\n");

    return sb.toString();
}
I've looked around for alternate implementations, but as I have encrypted audio and the MediaPlayer does not support InputStreams as a data source, my only option (I think...) is to use a proxy such as this.
Again, this is working fairly well on Android 2.1 - 2.3, but in ICS the MediaPlayer buffers a huge amount of data before playing.
EDIT 2:
Further testing shows that this is also an issue on the SGS2 once it is upgraded to Android 4.0.3, so it seems the MediaPlayer's buffering implementation changed significantly in 4.0. This is frustrating, as the API provides no way to alter the behaviour.
EDIT 3:
Android bug created. Please add comments and star it there as well:
http://code.google.com/p/android/issues/detail?id=29870
EDIT 4:
My playback code is fairly standard; I call start() on the MediaPlayer in my onPrepared() method.
mCurrentPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
mCurrentPlayer.setDataSource(url);
mCurrentPlayer.prepareAsync();
I have tried it using just prepare() and also ajacian81's recommended way, but to no avail.
I should add that recently a Google employee got back to me about my question and confirmed that the buffer size was intentionally increased in ICS (for HD content). A request has been made to the API developers to add the ability to set a buffer size on MediaPlayer.
Though I think this API change request had been around before I came along, so I wouldn't advise anyone to hold their breath.
Would it be possible to see the code where you're start()ing the MediaPlayer?
Are you using the STREAM_MUSIC audio stream type?
player.setAudioStreamType(AudioManager.STREAM_MUSIC);
Have you also experimented between player.prepareAsync(); and player.prepare();?
I remember there was a similar issue last year, where the solution was to call start() and then pause(), and then call start() again in onPrepared():
player.setAudioStreamType(AudioManager.STREAM_MUSIC);
player.setDataSource(src);
player.prepare();
player.start();
player.pause();
player.setOnPreparedListener(new OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer mp) {
        player.start();
    }
});
Unlikely to be the fix in this case, but while you're spinning your wheels this might be worth a shot.
For me the solution was to use MediaCodec with AudioTrack; I found all I needed to know here: http://www.piterwilson.com/blog/2014/03/15/mediacodec-mediaextractor-and-audiotrack-to-the-rescue/
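In case it helps anyone, here is a rough sketch of the MediaExtractor + MediaCodec + AudioTrack approach that post describes (API 16+). The file path, track selection and error handling are simplified assumptions, not the blog's exact code:

// Rough sketch: decode a local (already-decrypted) audio file with MediaCodec
// and push the PCM into a streaming AudioTrack. Error handling is omitted.
private void playWithMediaCodec(String path) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(path);                    // assumed local file path

    MediaFormat format = extractor.getTrackFormat(0); // assume track 0 is the audio track
    String mime = format.getString(MediaFormat.KEY_MIME);
    extractor.selectTrack(0);

    MediaCodec codec = MediaCodec.createDecoderByType(mime);
    codec.configure(format, null, null, 0);
    codec.start();

    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    int channelConfig = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT) == 1
            ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO;
    int minBuf = AudioTrack.getMinBufferSize(sampleRate, channelConfig,
            AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            channelConfig, AudioFormat.ENCODING_PCM_16BIT, minBuf,
            AudioTrack.MODE_STREAM);
    track.play();

    ByteBuffer[] inputBuffers = codec.getInputBuffers();
    ByteBuffer[] outputBuffers = codec.getOutputBuffers();
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false;
    boolean outputDone = false;

    while (!outputDone) {
        if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                ByteBuffer inBuf = inputBuffers[inIndex];
                inBuf.clear();
                int size = extractor.readSampleData(inBuf, 0);
                if (size < 0) {
                    codec.queueInputBuffer(inIndex, 0, 0, 0,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(inIndex, 0, size,
                            extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }

        int outIndex = codec.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            ByteBuffer outBuf = outputBuffers[outIndex];
            byte[] pcm = new byte[info.size];
            outBuf.position(info.offset);
            outBuf.limit(info.offset + info.size);
            outBuf.get(pcm);
            track.write(pcm, 0, pcm.length);          // blocking write keeps the track fed
            codec.releaseOutputBuffer(outIndex, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                outputDone = true;
            }
        } else if (outIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            outputBuffers = codec.getOutputBuffers();
        }
    }

    codec.stop();
    codec.release();
    extractor.release();
    track.stop();
    track.release();
}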
Related
I would like to modify Android OS (an official image from AOSP) to add preprocessing to the playback sound of a normal phone call.
I've already achieved this filtering for app audio playback (by modifying HAL and audioflinger).
I'm OK with targeting only a specific device (Nexus 5X). Also, I only need to filter playback - I don't care about recording (uplink).
UPDATE #1:
To make it clear - I'm OK with modifying Qualcomm-specific drivers, or whatever part that it is that runs on Nexus 5X and can help me modify in-call playback.
UPDATE #2:
I'm attempting to create a Java layer app that routes the phone playback to the music stream in real time.
I've already succeeded in installing it as a system app and getting the permissions required to initialize AudioRecord with AudioSource.VOICE_DOWNLINK. However, the recording gives blank samples; it doesn't record the voice call.
This is the code inside my worker thread:
// Start recording
int recBufferSize = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_DOWNLINK, 44100,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, recBufferSize);

// Start playback
int playBufferSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        playBufferSize, AudioTrack.MODE_STREAM);

mRecord.startRecording();
mTrack.play();

int bufSize = 1024;
short[] buffer = new short[bufSize];
int res;

while (!interrupted()) {
    // Pull recording buffers and play back
    res = mRecord.read(buffer, 0, bufSize, AudioRecord.READ_NON_BLOCKING);
    if (res > 0) { // only write what was actually read
        mTrack.write(buffer, 0, res, AudioTrack.WRITE_BLOCKING);
    }
}

// Stop recording
mRecord.stop();
mRecord.release();
mRecord = null;

// Stop playback
mTrack.stop();
mTrack.release();
mTrack = null;
I'm running on a Nexus 5X with my own AOSP custom ROM, Android 7.1.1. I need to find the place that will allow call recording to work - probably somewhere in hardware/qcom/audio/hal in the platform code.
I've also been looking at the function voice_check_and_set_incall_rec_usecase in hardware/qcom/audio/hal/voice.c. However, I wasn't able to make sense of it (or of how to make it work the way I want it to).
UPDATE #3:
I've opened a more-specific question about using AudioSource.VOICE_DOWNLINK, which might draw the right attention and will eventually help me solve this question's problem as well.
There are several possible issues that come to mind. A blank buffer might indicate that you have the wrong source selected. Also, according to https://developer.android.com/reference/android/media/AudioRecord.html#AudioRecord(int,%20int,%20int,%20int,%20int) you won't always get an exception even if something is wrong with the configuration, so you might want to confirm (e.g. via getState()) that your object has been initialized properly.
If all else fails, you could also call mRecord.setPreferredDevice(...) with the phone's built-in earpiece to route it directly to the input of your recorder. Yeah, it's kinda dirty and hacky, but it perhaps suits the purpose.
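Note that setPreferredDevice() takes an AudioDeviceInfo object rather than a type constant, so the lookup would go roughly like this (a sketch assuming API 23+ and that a Context is available; whether the platform actually honours it is device-dependent):

// Sketch: find the built-in earpiece among the reported devices and prefer it.
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
for (AudioDeviceInfo device : am.getDevices(AudioManager.GET_DEVICES_ALL)) {
    if (device.getType() == AudioDeviceInfo.TYPE_BUILTIN_EARPIECE) {
        mRecord.setPreferredDevice(device);
        break;
    }
}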
The other thing that puzzled me is that instead of using the builder class you've tried to configure the object directly via its constructor. Is there a specific reason why you don't want to use AudioRecord.Builder instead (there's even a nice example at https://developer.android.com/reference/android/media/AudioRecord.Builder.html)?
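For reference, a minimal Builder-based version of the question's configuration might look like this (a sketch, API 23+; the doubled buffer size is just an assumption):

// Sketch: equivalent AudioRecord configuration via the Builder.
AudioRecord record = new AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.VOICE_DOWNLINK)
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
                .build())
        .setBufferSizeInBytes(2 * AudioRecord.getMinBufferSize(44100,
                AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT))
        .build();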
I am transcoding videos based on the example given by Google (https://android.googlesource.com/platform/cts/+/master/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java)
Basically, transcoding of MP4 files works, but on some phones I get weird results. If, for example, I transcode a video with audio on an HTC One, the code won't give any errors, but the file cannot be played afterwards on the phone. If I have a 10-second video, it jumps to almost the last second and you only hear some crackling noise. If you play the video with VLC, the audio track is completely muted.
I did not alter the code in terms of encoding/decoding, and the same code gives correct results on a Nexus 5 or Moto X, for example.
Does anybody have an idea why it might fail on that specific device?
Best regards and thank you,
Florian
I made it work on Android 4.4.2 devices by making the following changes:
Set the AAC profile to AACObjectLC instead of AACObjectHE:
private static final int OUTPUT_AUDIO_AAC_PROFILE = MediaCodecInfo.CodecProfileLevel.AACObjectLC;
During creation of the output audio format, use the sample rate and channel count of the input format instead of fixed values:
MediaFormat outputAudioFormat = MediaFormat.createAudioFormat(
        OUTPUT_AUDIO_MIME_TYPE,
        inputFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
        inputFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT));
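For completeness, the profile constant from the first point is then applied to this output format before configuring the encoder, roughly as the CTS sample does (the bit-rate constant here is an assumption):

// Apply the AAC profile (and a bit rate) to the output format before
// configuring the audio encoder. OUTPUT_AUDIO_BIT_RATE is an assumed constant.
outputAudioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, OUTPUT_AUDIO_AAC_PROFILE);
outputAudioFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_AUDIO_BIT_RATE);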
Put a check just before muxing the audio track to control the presentation timestamps (to avoid the "timestampUs X < lastTimestampUs X for Audio track" error):
if (audioPresentationTimeUsLast == 0) { // defined at the beginning of the method
    audioPresentationTimeUsLast = audioEncoderOutputBufferInfo.presentationTimeUs;
} else {
    if (audioPresentationTimeUsLast > audioEncoderOutputBufferInfo.presentationTimeUs) {
        audioEncoderOutputBufferInfo.presentationTimeUs = audioPresentationTimeUsLast + 1;
    }
    audioPresentationTimeUsLast = audioEncoderOutputBufferInfo.presentationTimeUs;
}

// Write data
if (audioEncoderOutputBufferInfo.size != 0) {
    muxer.writeSampleData(outputAudioTrack, encoderOutputBuffer, audioEncoderOutputBufferInfo);
}
Hope this helps...
If the original CTS tests fail, you need to go to the device vendor and ask for fixes.
I have an application that plays MP3 files available at a public URL. Unfortunately the server does not support streaming, but Android makes the user experience quite acceptable anyway.
It all works fine on all platforms except Jelly Bean. When requesting the MP3, JB requests a Range header 10 times. Only after the 10th attempt does it seem to revert to the old behavior. It looks like this already-reported issue.
I found another SO thread where the recommended solution is to use a Transfer-Encoding: chunked header. But just below it there is a comment saying that this doesn't work.
For the moment I have no control over delivering the above response headers, but until I am able to do that I thought I would look for an alternative on the client side. (Even then, I can only return a Content-Range that spans indexes from 0 to Content-Length - 1, e.g. Content-Range: bytes 0-3123456/3123457.)
What I tried to do is implement pseudo-streaming on the client side by:
Open an input stream to the MP3.
Decode the incoming bytes using JLayer. I found the decoding code at this link.
Send the decoded byte arrays to an AudioTrack that is already playing in MODE_STREAM (see the sketch below).
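For step 3, a minimal MODE_STREAM AudioTrack set up for the 44.1 kHz stereo 16-bit PCM that JLayer produces might look roughly like this (the buffer multiplier is an assumption):

// Sketch: streaming AudioTrack for 44.1 kHz stereo 16-bit PCM.
int minBufferSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBufferSize * 4, // extra headroom to absorb decode/network jitter
        AudioTrack.MODE_STREAM);
audioTrack.play(); // starts consuming as soon as data is written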
The piece of code that does the decoding can be found there; I have only modified it so that it receives an InputStream:
public byte[] decode(InputStream inputStream, int startMs, int maxMs) throws IOException {
    ByteArrayOutputStream outStream = new ByteArrayOutputStream(1024);

    float totalMs = 0;
    boolean seeking = true;

    try {
        Bitstream bitstream = new Bitstream(inputStream);
        Decoder decoder = new Decoder();

        boolean done = false;
        while (!done) {
            Header frameHeader = bitstream.readFrame();
            if (frameHeader == null) {
                done = true;
            } else {
                totalMs += frameHeader.ms_per_frame();

                if (totalMs >= startMs) {
                    seeking = false;
                }

                if (!seeking) {
                    // logger.debug("Handling header: " + frameHeader.layer_string());
                    SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);

                    if (output.getSampleFrequency() != 44100 || output.getChannelCount() != 2) {
                        throw new IllegalArgumentException("mono or non-44100 MP3 not supported");
                    }

                    short[] pcm = output.getBuffer();
                    for (short s : pcm) {
                        outStream.write(s & 0xff);
                        outStream.write((s >> 8) & 0xff);
                    }
                }

                if (totalMs >= (startMs + maxMs)) {
                    done = true;
                }
            }
            bitstream.closeFrame();
        }

        return outStream.toByteArray();
    } catch (BitstreamException e) {
        throw new IOException("Bitstream error: " + e);
    } catch (DecoderException e) {
        throw new IOException("Decoder error: " + e);
    }
}
I am requesting the decoded bytes in time chunks: starting with (0, 5000) so that I have a bigger array to play at first, and then requesting byte arrays that each span one second: (5000, 1000), (6000, 1000), (7000, 1000), etc.
The decoding is fast enough and is done in another thread; once a decoded byte array is available, I use a blocking queue to hand it to the AudioTrack, which is playing in yet another thread.
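Roughly, that arrangement looks like the following sketch (decodeNextChunk() is a hypothetical stand-in for the chunked decode() calls described above, and audioTrack is the MODE_STREAM track from step 3, both assumed to be accessible as fields):

// Sketch: one thread produces decoded PCM chunks, another drains them into the
// AudioTrack. The queue capacity of 16 chunks is an arbitrary assumption.
final BlockingQueue<byte[]> pcmQueue = new LinkedBlockingQueue<byte[]>(16);

Thread decoderThread = new Thread(new Runnable() {
    public void run() {
        byte[] chunk;
        while ((chunk = decodeNextChunk()) != null) { // hypothetical helper
            try {
                pcmQueue.put(chunk); // blocks if the player falls far behind
            } catch (InterruptedException e) {
                return;
            }
        }
    }
});

Thread playerThread = new Thread(new Runnable() {
    public void run() {
        try {
            while (true) {
                byte[] pcm = pcmQueue.take();         // blocks until a chunk arrives
                audioTrack.write(pcm, 0, pcm.length); // blocking write feeds the track
            }
        } catch (InterruptedException e) {
            // stop playback
        }
    }
});

decoderThread.start();
playerThread.start();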
The problem is that the playback is not smooth, as the chunks are not continuous in the track (each chunk is continuous on its own, but adding them to the AudioTrack results in sloppy playback).
To wrap up:
If you have bumped into this JellyBean issue, how did you solve it?
If any of you have tried my approach, what am I doing wrong in the above code? If this is the solution you used, I can publish the rest of the code.
Thanks!
It looks like you are trying to develop your own kind of streaming. This can lead to blocky or interrupted playback, because you have to keep the information pipeline continuously filled without running out of bytes to read.
Basically, you will have to account for all the situations that a normal streaming client takes care of. For instance, sometimes blocks may be dropped or lost in transmission; sometimes the audio playback may catch up to the download; the CPU may start lagging, which affects playback; and so on.
Something to research if you want to continue down this path is the sliding window technique; it is essentially an abstract technique for keeping the network connection active and fluid. You should be able to find several examples through Google; here is a place to start: http://en.wikipedia.org/wiki/Sliding_window_protocol
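As a very rough illustration of the "keep the pipe filled" idea (not a full sliding-window implementation), you could hold back the first few seconds of decoded PCM and only start the track once that threshold is reached. The three-second threshold and the nextDecodedChunk() helper here are assumptions:

// Sketch: accumulate ~3 seconds of 44.1 kHz stereo 16-bit PCM before starting
// playback, so short network or decoding stalls don't immediately starve the track.
final int PREBUFFER_BYTES = 44100 * 2 /* channels */ * 2 /* bytes */ * 3 /* seconds */;
ByteArrayOutputStream prebuffer = new ByteArrayOutputStream();
boolean started = false;

byte[] chunk;
while ((chunk = nextDecodedChunk()) != null) { // hypothetical source of decoded PCM
    if (!started) {
        prebuffer.write(chunk, 0, chunk.length);
        if (prebuffer.size() >= PREBUFFER_BYTES) {
            audioTrack.play();
            byte[] head = prebuffer.toByteArray();
            audioTrack.write(head, 0, head.length); // blocks as the track drains it
            started = true;
        }
    } else {
        audioTrack.write(chunk, 0, chunk.length);
    }
}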
Edit: One workaround that may help you until this is fixed would be to include the source code for MediaPlayer.java and AudioManager.java from SDK <16 into your project and see if that resolves the problem. If you do not have the source code you can download it with the SDK Manager.
AudioTrack is blocking by nature; from the docs: "Will block until all data has been written to the audio mixer." I'm not sure if you're reading from the file and writing to the AudioTrack in the same thread; if so, I'd suggest you spin up a separate thread for the AudioTrack.
I've got an AudioTrack in my application, which is set to Stream mode. I want to write audio which I receive over a wireless connection. The AudioTrack is declared like this:
mPlayer = new AudioTrack(STREAM_TYPE,
        FREQUENCY,
        CHANNEL_CONFIG_OUT,
        AUDIO_ENCODING,
        PLAYER_CAPACITY,
        PLAY_MODE);
Where the parameters are defined like:
private static final int FREQUENCY = 8000,
        CHANNEL_CONFIG_OUT = AudioFormat.CHANNEL_OUT_MONO,
        AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT,
        PLAYER_CAPACITY = 2048,
        STREAM_TYPE = AudioManager.STREAM_MUSIC,
        PLAY_MODE = AudioTrack.MODE_STREAM;
However, when I write data to the AudioTrack with write(), the playback is choppy... The call
byte[] audio = packet.getData();
mPlayer.write(audio, 0, audio.length);
is made whenever a packet is received over the network connection. Does anybody have an idea why it sounds choppy? Maybe it has something to do with the WiFi connection itself? I don't think so, as the audio doesn't sound horrible in the other direction, when I send data from the Android phone to another source over UDP. The sound is then complete and not choppy at all... So does anybody have an idea why this is happening?
Do you know how many bytes per second you are receiving, the average time between packets, and the maximum time between packets? If not, can you add code to calculate it?
You need to be averaging 8000 samples/second * 2 bytes/sample = 16,000 bytes per second in order to keep the stream filled.
A gap of more than 2048 bytes / (16000 bytes/second) = 128 milliseconds between incoming packets will cause your stream to run dry and the audio to stutter.
One way to prevent it is to increase the buffer size (PLAYER_CAPACITY). A larger buffer will be more able to handle variation in the incoming packet size and rate. The cost of the extra stability is a larger delay in starting playback while you wait for the buffer to initially fill.
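To make the measurement suggestion above concrete, here is a small sketch of the kind of bookkeeping meant, run wherever the packets arrive (the method name and log tag are placeholders):

// Sketch: track the average incoming byte rate and the largest gap between packets.
private long firstPacketMs = 0, lastPacketMs = 0, maxGapMs = 0, totalBytes = 0;

private void onPacketReceived(byte[] audio) {
    long now = System.currentTimeMillis();
    if (firstPacketMs == 0) {
        firstPacketMs = now;
    } else {
        maxGapMs = Math.max(maxGapMs, now - lastPacketMs);
    }
    lastPacketMs = now;
    totalBytes += audio.length;

    long elapsedMs = now - firstPacketMs;
    if (elapsedMs > 0) {
        long bytesPerSecond = totalBytes * 1000 / elapsedMs;
        Log.d("AudioStats", "avg " + bytesPerSecond + " B/s (need 16000), max gap "
                + maxGapMs + " ms (expect stutter above ~128 ms)");
    }
}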
I have partially solved it by placing the mPlayer.write(audio, 0, audio.length); call in its own Thread. This does take away some of the choppiness (due to the fact that write() is a blocking call), but it still sounds choppy after a good second or two. It also still has a significant delay of 2-3 seconds.
new Thread() {
    public void run() {
        byte[] audio = packet.getData();
        mPlayer.write(audio, 0, audio.length);
    }
}.start();
Just a little anonymous Thread that does the writing now...
Anybody have an idea on how to solve this issue?
Edit:
After some further checking and debugging, I've noticed that this is an issue with obtainBuffer.
I've looked at the Java code of AudioTrack and the C++ code of AudioTrack, and I've noticed that it can only appear in the C++ code.
if (__builtin_expect(result!=NO_ERROR, false)) {
    LOGW("obtainBuffer timed out (is the CPU pegged?) "
         "user=%08x, server=%08x", u, s);
    mAudioTrack->start(); // FIXME: Wake up audioflinger
    timeout = 1;
}
I've noticed that there is a FIXME in this piece of code. :< But anyway, could anybody explain how this C++ code works? I've had some experience with it, but it was never as complicated as this...
Edit 2:
I've tried something somewhat different now: I buffer the data I receive, and then once the buffer has filled with some data, it is written to the player. However, the player keeps up with consuming it for a few cycles, then the obtainBuffer timed out (is the CPU pegged?) warning kicks in, and no data at all is written to the player until it is kick-started back to life... After that, it continually gets data written to it until the buffer is emptied.
Another slight difference is that I stream a file to the player now. That is, I read it in chunks, then write those chunks to the buffer. This simulates the packets being received over WiFi...
I am beginning to wonder if this is just an OS issue that Android has, and it isn't something I can solve on my own... Anybody got any ideas on that?
Edit 3:
I've done more testing, but this doesn't help me any further. This test shows me that I only get lag when I try to write to the AudioTrack for the first time. This takes somewhere between 1 and 3 seconds to complete. I did this by using the following bit of code:
long beforeTime = Utilities.getCurrentTimeMillis(), afterTime = 0;
mPlayer.write(data, 0, data.length);
afterTime = Utilities.getCurrentTimeMillis();
Log.e("WriteToPlayerThread", "Writing a package took " + (afterTime - beforeTime) + " milliseconds");
However, I get the following results:
Logcat Image http://img810.imageshack.us/img810/3453/logcatimage.png
These show that the lag initially occurs at the beginning, after which the AudioTrack keeps getting data continuously... I really need to get this one fixed...
I need to determine whether a MediaPlayer is using the OpenCORE media framework, so that I can disable seeking for my streams. The OpenCORE framework appears to fail silently when seeking, which I have a hard time believing was allowed into production, but that seems to be the case nonetheless.
I wish it were as simple as checking the SDK version, but Droid phones on API 8 still seem to use OpenCORE, so that doesn't seem to be a good option. Any ideas?
EDIT:
After the response from Jesus, I came up with this code. It seems to work well in my tests so far. If anybody doesn't think it is a sound method for deciding whether streams are seekable, let me know.
if (Build.VERSION.SDK_INT < 8) { // 2.1 or earlier: OpenCORE only, no stream seeking
    mStreamSeekable = false;
} else { // 2.2: check to see if Stagefright is enabled
    mStreamSeekable = false;
    try {
        FileInputStream buildIs = new FileInputStream(new File("/system/build.prop"));
        if (CloudUtils.inputStreamToString(buildIs).contains("media.stagefright.enable-player=true")) {
            mStreamSeekable = true;
        }
    } catch (IOException e) { // problem finding the build file
        e.printStackTrace();
    }
}
That method does not work on the Samsung Galaxy S, which says Stagefright is enabled but does not use it, at least not for streaming. A more reliable check is to open a local socket, connect the MediaPlayer to it, and see what it reports as its User-Agent.
For instance, this is what I see on my Samsung Galaxy S and the 2.2 emulator:
Galaxy S:
User-Agent: CORE/6.506.4.1 OpenCORE/2.02 (Linux;Android 2.2)
Emulator:
User-Agent: stagefright/1.0 (Linux;Android 2.2)
In one thread, do something like this:
volatile int socketPort;

ServerSocket serverSocket = new ServerSocket(0);
socketPort = serverSocket.getLocalPort();
Socket socket = serverSocket.accept();

InputStream is = socket.getInputStream();
byte[] temp = new byte[2048];
int bsize = -1;
while (bsize <= 0) {
    bsize = is.read(temp);
}

String res = new String(temp, 0, bsize);
if (res.indexOf("User-Agent: stagefright") >= 0) {
    // Found Stagefright
}

socket.close();
serverSocket.close();
And like this in another thread (this makes the blocking accept() call above return):
MediaPlayer mp = new MediaPlayer();
mp.setDataSource(String.format("http://127.0.0.1:%d/", socketPort));
mp.prepare();
mp.start();
With Android 2.3.5, the media.stagefright.enable-player property no longer exists in /system/build.prop.
You might be able to detect if stagefright is enabled for streaming by searching for
media.stagefright.enable-http=true
instead of
media.stagefright.enable-player=true
To get that information, you can read the file /system/build.prop on your device. In this file there is a parameter named media.stagefright.enable-player. If that parameter is set to true, then Stagefright is active; otherwise your device is using OpenCORE.
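A minimal sketch of that check, reading the file directly (the same idea as the code in the question above; swap in media.stagefright.enable-http for the 2.3.5 case mentioned in the other answer):

// Sketch: read /system/build.prop and look for the Stagefright switch.
boolean stagefrightEnabled = false;
BufferedReader reader = null;
try {
    reader = new BufferedReader(new FileReader("/system/build.prop"));
    String line;
    while ((line = reader.readLine()) != null) {
        if (line.trim().equals("media.stagefright.enable-player=true")) {
            stagefrightEnabled = true;
            break;
        }
    }
} catch (IOException e) {
    // Could not read the file; fall back to assuming OpenCORE.
} finally {
    if (reader != null) {
        try { reader.close(); } catch (IOException ignored) { }
    }
}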