Android Jelly Bean network media issue

I have an application that plays MP3 files available at a public URL. Unfortunately the server does not support streaming, but Android still makes the user experience quite acceptable.
It works fine on all platforms except Jelly Bean. When requesting the MP3, Jelly Bean sends the request with a Range header ten times; only after the tenth attempt does it seem to fall back to the old behavior. It looks like this already-reported issue.
I found another SO thread where the recommended solution is to use a Transfer-Encoding: chunked header, but a comment just below it says that this doesn't work.
For the moment I have no control over the response headers, so until I can change them I thought I'd look for an alternative on the client side. (Even then, I can only return a Content-Range that spans indexes 0 to Content-Length - 1, e.g. Content-Range: bytes 0-3123456/3123457.)
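For reference, the exchange Jelly Bean seems to expect is an ordinary partial-content response along these lines (illustrative only; my server currently answers every request with a plain 200 and no Content-Range):

GET /track.mp3 HTTP/1.1
Range: bytes=0-

HTTP/1.1 206 Partial Content
Accept-Ranges: bytes
Content-Range: bytes 0-3123456/3123457
Content-Length: 3123457
Content-Type: audio/mpeg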
What I tried is to implement pseudo-streaming on the client side by:
Open an input stream to the MP3.
Decode the incoming bytes using JLayer. I found the decoding code at this link.
Send the decoded byte arrays to an AudioTrack that is already playing in stream mode.
The piece of code that does the decoding can be found there; I have only modified it so that it receives an InputStream:
public byte[] decode(InputStream inputStream, int startMs, int maxMs) throws IOException {
    ByteArrayOutputStream outStream = new ByteArrayOutputStream(1024);
    float totalMs = 0;
    boolean seeking = true;
    try {
        Bitstream bitstream = new Bitstream(inputStream);
        Decoder decoder = new Decoder();
        boolean done = false;
        while (!done) {
            Header frameHeader = bitstream.readFrame();
            if (frameHeader == null) {
                done = true;
            } else {
                totalMs += frameHeader.ms_per_frame();
                if (totalMs >= startMs) {
                    seeking = false;
                }
                if (!seeking) {
                    // logger.debug("Handling header: " + frameHeader.layer_string());
                    SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
                    if (output.getSampleFrequency() != 44100 || output.getChannelCount() != 2) {
                        throw new IllegalArgumentException("mono or non-44100 MP3 not supported");
                    }
                    short[] pcm = output.getBuffer();
                    for (short s : pcm) {
                        outStream.write(s & 0xff);
                        outStream.write((s >> 8) & 0xff);
                    }
                }
                if (totalMs >= (startMs + maxMs)) {
                    done = true;
                }
            }
            bitstream.closeFrame();
        }
        return outStream.toByteArray();
    } catch (BitstreamException e) {
        throw new IOException("Bitstream error: " + e);
    } catch (DecoderException e) {
        throw new IOException("Decoder error: " + e);
    }
}
I request the decoded bytes in time chunks: starting with (0, 5000) so I have a bigger array to play at first, then requesting the following byte arrays that each span one second: (5000, 1000), (6000, 1000), (7000, 1000), etc.
The decoding is fast enough and is done on a separate thread; once a decoded byte array is available, I push it onto a blocking queue, and another thread takes it off the queue and writes it to the AudioTrack, which is already playing.
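Roughly, the two threads look like this (a simplified sketch of what I described rather than my exact code; mp3Url, TAG and the queue capacity are illustrative, and decode() is the method above, called here so that each call simply decodes the next second from wherever the stream left off):

// Shared between the decoder thread and the playback thread.
private final BlockingQueue<byte[]> pcmQueue = new LinkedBlockingQueue<byte[]>(8);

// Decoder thread: queue a ~5 second chunk first, then one second at a time.
new Thread(new Runnable() {
    public void run() {
        try {
            InputStream in = new URL(mp3Url).openStream();
            pcmQueue.put(decode(in, 0, 5000));
            byte[] chunk;
            while ((chunk = decode(in, 0, 1000)).length > 0) {
                pcmQueue.put(chunk);
            }
        } catch (Exception e) {
            Log.e(TAG, "Decoding failed", e);
        }
    }
}).start();

// Playback thread: drain the queue into a streaming AudioTrack.
new Thread(new Runnable() {
    public void run() {
        int minSize = AudioTrack.getMinBufferSize(44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                minSize * 4, AudioTrack.MODE_STREAM);
        track.play();
        try {
            while (true) {
                byte[] chunk = pcmQueue.take();      // blocks until the decoder has data
                track.write(chunk, 0, chunk.length); // blocks until written to the mixer
            }
        } catch (InterruptedException e) {
            track.release();
        }
    }
}).start();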
The problem is that playback is not smooth: the chunks are not continuous within the track (each chunk is continuous on its own, but appending them to the AudioTrack results in choppy playback).
To wrap up:
If you have bumped into this JellyBean issue, how did you solve it?
If any of you have tried my approach, what am I doing wrong in the code above? If this is the solution you used, I can publish the rest of the code.
Thanks!

It looks like you are trying to develop your own form of streaming. This can lead to choppy or interrupted playback because you have to keep data flowing continuously without ever running out of bytes to read.
Basically, you will have to account for all the situations that a normal streaming client takes care of: some blocks may be dropped or lost in transmission, audio playback may catch up with the download, the CPU may start lagging and affect playback, and so on.
Something to research if you want to continue down this path is a sliding-window implementation; it is essentially an abstract technique for keeping the network connection active and the data flowing. You should be able to find several examples through Google; here is a place to start: http://en.wikipedia.org/wiki/Sliding_window_protocol
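You don't necessarily need the full protocol to get going, though; the client-side half of the idea is simply to pre-buffer to a high-water mark before you start draining, so the player never runs dry. A minimal sketch (my own illustration, not taken from your code; the one-second threshold assumes 44.1 kHz stereo 16-bit PCM):

// Minimal pre-buffering gate: playback waits until roughly one second of PCM
// has been queued before it starts draining the queue.
class PrebufferGate {
    private static final int START_THRESHOLD_BYTES = 44100 * 2 * 2; // ~1 s of stereo 16-bit PCM
    private int queued = 0;

    synchronized void onChunkQueued(int bytes) {
        queued += bytes;
        if (queued >= START_THRESHOLD_BYTES) {
            notifyAll();
        }
    }

    synchronized void awaitStart() throws InterruptedException {
        while (queued < START_THRESHOLD_BYTES) {
            wait();
        }
    }
}

The decoder thread would call onChunkQueued() after each put onto the queue, and the playback thread would call awaitStart() before its write loop; the same idea extends to a low-water mark for pausing playback when the buffer is about to run dry.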
Edit: One workaround that may help you until this is fixed would be to include the source code for MediaPlayer.java and AudioManager.java from an SDK older than 16 in your project and see if that resolves the problem. If you do not have the source code, you can download it with the SDK Manager.

AudioTrack.write() is blocking by nature; from the docs: "Will block until all data has been written to the audio mixer." I'm not sure whether you're reading from the stream and writing to the AudioTrack on the same thread; if so, I'd suggest you spin up a dedicated thread for the AudioTrack.

Related

Underrun in Oboe/AAudio playback stream

I'm working on an Android app that talks to a device which is basically a USB microphone. I need to read the input data and process it. Sometimes I need to send data to the device (4 shorts * the number of channels, which is usually 2), and this data does not depend on the input.
I'm using Oboe, and all the phones I use for testing use AAudio underneath.
The reading part works, but when I try to write data to the output stream, I get the following warning in logcat and nothing is written to the output:
W/AudioTrack: releaseBuffer() track 0x78e80a0400 disabled due to previous underrun, restarting
Here's my callback:
oboe::DataCallbackResult
OboeEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
// check if there's data to write, agcData is a buffer previously allocated
// and h2iaudio::getAgc() returns true if data's available
if (h2iaudio::getAgc(this->agcData)) {
// padding the buffer
short* padPos = this->agcData + 4 * playStream->getChannelCount();
memset(padPos, 0,
static_cast<size_t>((numFrames - 4) * playStream->getBytesPerFrame()));
// write the data
oboe::ResultWithValue<int32_t> result =
this->playStream->write(this->agcData, numFrames, 1);
if (result != oboe::Result::OK){
LOGE("Failed to create stream. Error: %s",
oboe::convertToText(result.error()));
return oboe::DataCallbackResult::Stop;
}
}else{
// if there's nothing to write, write silence
memset(this->agcData, 0,
static_cast<size_t>(numFrames * playStream->getBytesPerFrame()));
}
// data processing here
h2iaudio::processData(static_cast<short*>(audioData),
static_cast<size_t>(numFrames * oboeStream->getChannelCount()),
oboeStream->getSampleRate());
return oboe::DataCallbackResult::Continue;
}
//...
oboe::AudioStreamBuilder *OboeEngine::setupRecordingStreamParameters(
oboe::AudioStreamBuilder *builder) {
builder->setCallback(this)
->setDeviceId(this->recordingDeviceId)
->setDirection(oboe::Direction::Input)
->setSampleRate(this->sampleRate)
->setChannelCount(this->inputChannelCount)
->setFramesPerCallback(1024);
return setupCommonStreamParameters(builder);
}
As seen in setupRecordingStreamParameters, I'm registering the callback to the input stream. In all the Oboe examples, the callback is registered on the output stream, and the reading is blocking. Does this have an importance? If not, how many frames do I need to write to the stream to avoid underruns?
EDIT
In the meantime, I found the source of the underruns. The output stream was not reading the same number of frames as the input stream (which in hindsight seems logical), so writing the number of frames given by playStream->getFramesPerBurst() fixed my issue. Here's my new callback:
oboe::DataCallbackResult
OboeEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
int framesToWrite = playStream->getFramesPerBurst();
memset(agcData, 0, static_cast<size_t>(framesToWrite *
this->playStream->getChannelCount()));
h2iaudio::getAgc(agcData);
oboe::ResultWithValue<int32_t> result =
this->playStream->write(agcData, framesToWrite, 0);
if (result != oboe::Result::OK) {
LOGE("Failed to write AGC data. Error: %s",
oboe::convertToText(result.error()));
}
// data processing here
h2iaudio::processData(static_cast<short*>(audioData),
static_cast<size_t>(numFrames * oboeStream->getChannelCount()),
oboeStream->getSampleRate());
return oboe::DataCallbackResult::Continue;
}
It works this way; I'll change which stream has the callback attached if I notice any performance issues, but for now I'll keep it like this.
Sometimes I need to send data to the device
You always need to write data to the output. Generally you need to write at least numFrames, maybe more. If you don't have any valid data to send then write zeros.
Warning: in your else block you are calling memset() but not writing to the stream.
->setFramesPerCallback(1024);
Do you need 1024 specifically? Is that for an FFT? If not then AAudio can optimize the callbacks better if the FramesPerCallback is not specified.
In all the Oboe examples, the callback is registered on the output stream,
and the reading is blocking. Does this have an importance?
Actually the read is NON-blocking. Whatever stream does not have the callback should be non-blocking. Use a timeoutNanos=0.
It is important to use the output stream for the callback if you want low latency. That is because the output stream can only provide low latency mode with callbacks and not with direct write()s. But an input stream can provide low latency with both callback and with read()s.
Once the streams are stabilized, you can read or write the same number of frames in each callback. But before they are stable, you may need to read or write extra frames.
With an output callback you should drain the input for a while so that it is running close to empty.
With an input callback you should fill the output for a while so that it is running close to full.
write(this->agcData, numFrames, 1);
Your 1 nanosecond timeout is very small. But Oboe will still block. You should use a timeoutNanos of 0 for non-blocking mode.
According to the Oboe documentation, during the onAudioReady callback you have to write exactly numFrames frames directly into the buffer pointed to by audioData. And you should not call Oboe's write() function; instead, fill that buffer yourself.
I'm not sure how your getAgc() function works, but maybe you can pass it the audioData pointer as an argument to avoid copying the data again from one buffer to another.
If you really need the onAudioReady callback to request the same number of frames every time, then you have to set that number while building the AudioStream using:
oboe::AudioStreamBuilder::setFramesPerCallback(int framesPerCallback)
Look here at the things you should not do during an onAudioReady callback, and you will find that calling Oboe's write() function is among them:
https://google.github.io/oboe/reference/classoboe_1_1_audio_stream_callback.html

MediaFormat.getByteBuffer("csd-0") returning null on some devices

I am trying to decode AAC-encoded files in my application and need to initialise the MediaFormat object used to configure my MediaCodec object. This is the code for setting up the variables for the MediaFormat object:
MediaExtractor mediaExtractor = new MediaExtractor();
try {
    mediaExtractor.setDataSource(audioFilePath);
} catch (IOException e) {
    return false;
}
Log.d(TAG, "Number of tracks in the file are:" + mediaExtractor.getTrackCount());
MediaFormat mediaFormat = mediaExtractor.getTrackFormat(0);
Log.d(TAG, "mediaFormat:" + mediaFormat.toString());
mSampleRate = mediaFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);
Log.d(TAG, "mSampleRate: " + mSampleRate);
mChannels = mediaFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
Log.d(TAG, "mChannels number of channels: " + mChannels);
// Reading the duration from the file and converting from micro seconds to milliseconds.
mDuration = (int) (mediaFormat.getLong(MediaFormat.KEY_DURATION) / 1000);
Log.d(TAG, "duration: " + mDuration);
// Getting the csd-0 info from the file ..
mCSDBuffer = mediaFormat.getByteBuffer("csd-0");
The problem I am facing is that the statement mCSDBuffer = mediaFormat.getByteBuffer("csd-0") returns null for the same file on some devices. The application is in production and I see this error on armeabi-v7a/armeabi devices with Android API levels 17, 18 and 19; most of these errors are on Samsung devices. Any direction on this?
If the csd-0 buffer is null, I would still expect the stream to decode correctly when passed into MediaCodec. Does it, if you simply don't set csd-0 when it is null? In general, you should be able to decode the MediaExtractor output if you just pipe it straight to MediaCodec.
The actual format of the data output from MediaExtractor is not very strictly specified though, so in practice it is known that some manufacturers (mainly Samsung) change this in a way that only their own decoder handles. See e.g. https://code.google.com/p/android/issues/detail?id=74356 for another case of the same.
Ideally, the Android CTS tests would be made stricter to make sure that MediaExtractor behaves consistently, allowing its use in a more generic context, or use another decoder than MediaCodec. (E.g. with the current Samsung issues, you can't use MediaExtractor on one device, send the extracted data over a network to another device and decode it there.)
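If you do find that your decoder needs csd-0 and the extractor won't provide it, for plain AAC-LC you can reconstruct the two-byte AudioSpecificConfig yourself from the sample rate and channel count you already read from the track format. A sketch (my own illustration rather than anything the platform does for you; bit layout per ISO/IEC 14496-3, assuming AAC-LC):

// Build a two-byte AudioSpecificConfig for AAC-LC and attach it as csd-0.
private static final int[] SAMPLE_RATES = {
        96000, 88200, 64000, 48000, 44100, 32000,
        24000, 22050, 16000, 12000, 11025, 8000, 7350 };

static ByteBuffer buildAacCsd0(int sampleRate, int channelCount) {
    int freqIndex = -1;
    for (int i = 0; i < SAMPLE_RATES.length; i++) {
        if (SAMPLE_RATES[i] == sampleRate) {
            freqIndex = i;
            break;
        }
    }
    if (freqIndex < 0) {
        throw new IllegalArgumentException("Unsupported sample rate: " + sampleRate);
    }
    int profile = 2; // AAC LC (audioObjectType)
    byte[] csd = new byte[2];
    csd[0] = (byte) ((profile << 3) | (freqIndex >> 1));
    csd[1] = (byte) (((freqIndex & 0x01) << 7) | (channelCount << 3));
    return ByteBuffer.wrap(csd);
}

// ... after reading mSampleRate and mChannels as in the question:
if (mediaFormat.getByteBuffer("csd-0") == null) {
    mediaFormat.setByteBuffer("csd-0", buildAacCsd0(mSampleRate, mChannels));
}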

Android: How to use MediaMuxer with video/mp4v-es instead of video/avc?

I want to be able to use mp4v-es instead of avc on some devices. The encoder runs fine using avc, but when I replace it with mp4v-es, the muxer reports:
E/MPEG4Writer(12517): Missing codec specific data
as in MediaMuxer error "Failed to stop the muxer", and the video cannot be played. The difference is that I am adding the correct track/format to the muxer, without receiving any error:
...else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
MediaFormat newFormat = encoder.getOutputFormat();
mTrackIndex[encID] = mMuxer.addTrack(newFormat);
Is there any difference in handling mp4v-es compared to avc? One mention: I just skip "bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG" when it occurs, as it was not needed for avc. Thanks.
Just as Ganesh pointed out, unfortunately it does seem that this isn't possible right now, without modifying the platform source.
There are actually two ways that the codec specific data can be passed to the internal MPEG4Writer class, but neither of them works without modifications.
As Ganesh found, the logic for remapping MediaFormat keys to the internal format seems to be missing handling of codec specific data for any other video codec than H264. A tested modification that fixes this issue is as follows:
diff --git a/media/libstagefright/Utils.cpp b/media/libstagefright/Utils.cpp
index 25afc5b..304fe59 100644
--- a/media/libstagefright/Utils.cpp
+++ b/media/libstagefright/Utils.cpp
@@ -549,14 +549,14 @@ void convertMessageToMetaData(const sp<AMessage> &msg, sp<MetaData> &meta) {
// reassemble the csd data into its original form
sp<ABuffer> csd0;
if (msg->findBuffer("csd-0", &csd0)) {
- if (mime.startsWith("video/")) { // do we need to be stricter than this?
+ if (mime == MEDIA_MIMETYPE_VIDEO_AVC) {
sp<ABuffer> csd1;
if (msg->findBuffer("csd-1", &csd1)) {
char avcc[1024]; // that oughta be enough, right?
size_t outsize = reassembleAVCC(csd0, csd1, avcc);
meta->setData(kKeyAVCC, kKeyAVCC, avcc, outsize);
}
- } else if (mime.startsWith("audio/")) {
+ } else if (mime == MEDIA_MIMETYPE_AUDIO_AAC || mime == MEDIA_MIMETYPE_VIDEO_MPEG4) {
int csd0size = csd0->size();
char esds[csd0size + 31];
reassembleESDS(csd0, esds);
Secondly, instead of passing the codec specific data as csd-0 in MediaFormat, you could in principle pass the same buffer (with the MediaCodec.BUFFER_FLAG_CODEC_CONFIG flag set) to MediaMuxer.writeSampleData. This approach doesn't work currently since this method doesn't check for the codec config flag at all - it could be fixed with this modification:
diff --git a/media/libstagefright/MediaMuxer.cpp b/media/libstagefright/MediaMuxer.cpp
index c7c6f34..d612e01 100644
--- a/media/libstagefright/MediaMuxer.cpp
+++ b/media/libstagefright/MediaMuxer.cpp
@@ -193,6 +193,9 @@ status_t MediaMuxer::writeSampleData(const sp<ABuffer> &buffer, size_t trackInde
if (flags & MediaCodec::BUFFER_FLAG_SYNCFRAME) {
sampleMetaData->setInt32(kKeyIsSyncFrame, true);
}
+ if (flags & MediaCodec::BUFFER_FLAG_CODECCONFIG) {
+ sampleMetaData->setInt32(kKeyIsCodecConfig, true);
+ }
sp<MediaAdapter> currentTrack = mTrackList[trackIndex];
// This pushBuffer will wait until the mediaBuffer is consumed.
As far as I can see, there's no way to mux MPEG4 video with MediaMuxer right now using the public API, without modifying the platform source. Given the issue in Utils.cpp above, you can't mux any video format that requires codec specific data, except for H264. If VP8 is an option, you can mux that into webm files (together with Vorbis audio), but hardware encoders for VP8 are probably much less common than hardware encoders for MPEG4.
I presume you have the ability to modify the Stagefright sources, so I have a proposed solution for your problem, but one which requires a customization.
Background:
When an encoder completes encoding, the first buffer it returns will carry the csd information, usually tagged with the OMX_BUFFERFLAG_CODECCONFIG flag. When such a buffer is returned to MediaCodec, it is stored as csd-0 in MediaCodec::amendOutputFormatWithCodecSpecificData.
Now, when this format is given to MediaMuxer, it is processed as part of addTrack, in which convertMessageToMetaData is invoked. If you look at that implementation, you can see that only AVC is handled for video, and everything else defaults to the audio path for ESDS creation.
EDIT:
Here, my recommendation is to modify that line as below and try your experiment:
}
if (mime.startsWith("audio/") || (!strcmp(mime.c_str(), MEDIA_MIMETYPE_VIDEO_MPEG4))) {
With this change, I feel it should work for the MPEG4 video track as well. The change converts the else if into an if, because the previous check for video will also try to process the data, but only handles AVC.

Android - MediaPlayer Buffer Size in ICS 4.0

I'm using a socket as a proxy to the MediaPlayer so I can download and decrypt MP3 audio before writing it to the socket. This is similar to the example shown in the NPR news app; however, I'm using this for all Android versions 2.1 - 4 at the moment.
NPR StreamProxy code - http://code.google.com/p/npr-android-app/source/browse/Npr/src/org/npr/android/news/StreamProxy.java
My issue is that playback is fast for 2.1 - 2.3, but in Android 4.0 ICS the MediaPlayer buffers too much data before firing the onPrepared listener.
An example amount of data written to the Socket OutputStream before onPrepared():
On SGS2 with 2.3.4 - onPrepared() after ~ 133920 bytes
On Nexus S with 4.0.4 - onPrepared() after ~ 961930 bytes
This also occurs on the Galaxy Nexus.
Weirdly the 4.0 emulator doesn't buffer as much data as 4.0 devices. Anyone experience a similar issue with the MediaPlayer on ICS?
EDIT
Here's how the proxy is writing to the socket. In this example it's from a CipherInputStream loaded from a file, but the same occurs when it's loaded from the HttpResponse.
final Socket client = (setup above)
// encrypted file input stream
final CipherInputStream inputStream = getInputStream(file);
// setup the socket output stream
final OutputStream output = client.getOutputStream();
// Writing the header
final String httpHeader = buildHttpHeader(file.length());
final byte[] buffer = httpHeader.getBytes("UTF-8");
output.write(buffer, 0, buffer.length);
int writtenBytes = 0;
int readBytes;
final byte[] buff = new byte[1024 * 12]; // 12 KB
while (mIsRunning && (readBytes = inputStream.read(buff)) != -1) {
    output.write(buff, 0, readBytes);
    writtenBytes += readBytes;
}
output.flush();
output.close();
The HTTP Headers that are written to the MediaPlayer before the audio..
private String buildHttpHeader(final int contentLength) {
    final StringBuilder sb = new StringBuilder();
    sb.append("HTTP/1.1 200 OK\r\n");
    sb.append("Content-Length: ").append(contentLength).append("\r\n");
    sb.append("Accept-Ranges: bytes\r\n");
    sb.append("Content-Type: audio/mpeg\r\n");
    sb.append("Connection: close\r\n");
    sb.append("\r\n");
    return sb.toString();
}
I've looked around for alternative implementations, but as I have encrypted audio and the MediaPlayer does not support InputStreams as a data source, my only option (I think..) is to use a proxy such as this.
Again, this works fairly well on Android 2.1 - 2.3, but in ICS the MediaPlayer buffers a huge amount of this data before playing.
EDIT 2 :
Further testing is showing that this is also an issue on the SGS2 once upgraded to Android 4.0.3. So it seems like the MediaPlayer's buffering implementation has changed significantly in 4.0. This is frustrating as the API provides no way to alter the behaviour.
EDIT 3 :
Android bug created. Please add comments and star there as well
http://code.google.com/p/android/issues/detail?id=29870
EDIT 4 :
My playback code is fairly standard.. I have the start() call on the MediaPlayer in my onPrepared() method.
mCurrentPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
mCurrentPlayer.setDataSource(url);
mCurrentPlayer.prepareAsync();
Have tried it using just prepare() and also ajacian81's recommended way but to no avail.
I should add that a Google employee recently got back to me about my question and confirmed that the buffer size was intentionally increased in ICS (for HD content). A request has been made to the API developers to add the ability to set a buffer size on MediaPlayer.
Though I think this API change request had been around before I came along, so I wouldn't advise anyone to hold their breath.
Would it be possible to see the code where you're start()ing the MediaPlayer?
Are you using the STREAM_MUSIC audio stream type?
player.setAudioStreamType(AudioManager.STREAM_MUSIC);
Have you also experimented between player.prepareAsync(); and player.prepare();?
There was a similar issue last year, I remember, where the solution was to call start(), then pause(), and then start() again in onPrepared():
player.setAudioStreamType(AudioManager.STREAM_MUSIC);
player.setDataSource(src);
player.prepare();
player.start();
player.pause();
player.setOnPreparedListener(new OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer mp) {
        player.start();
    }
});
Unlikely to be the fix in this case, but while you're spinning your wheels this might be worth a shot.
For me the solution was to use MediaCodec with AudioTrack; I found all I needed to know here: http://www.piterwilson.com/blog/2014/03/15/mediacodec-mediaextractor-and-audiotrack-to-the-rescue/
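The core of that approach is roughly the following loop (a condensed sketch rather than the blog's exact code; it uses the pre-API-21 buffer arrays and omits error handling and seeking):

void playWithMediaCodec(String path) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(path);
    MediaFormat format = extractor.getTrackFormat(0);
    String mime = format.getString(MediaFormat.KEY_MIME);
    extractor.selectTrack(0);

    MediaCodec codec = MediaCodec.createDecoderByType(mime);
    codec.configure(format, null, null, 0);
    codec.start();

    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    int channelConfig = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT) == 1
            ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO;
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfig,
            AudioFormat.ENCODING_PCM_16BIT,
            AudioTrack.getMinBufferSize(sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT),
            AudioTrack.MODE_STREAM);
    track.play();

    ByteBuffer[] inputBuffers = codec.getInputBuffers();
    ByteBuffer[] outputBuffers = codec.getOutputBuffers();
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false, outputDone = false;

    while (!outputDone) {
        if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                int size = extractor.readSampleData(inputBuffers[inIndex], 0);
                if (size < 0) {
                    codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }

        int outIndex = codec.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            ByteBuffer out = outputBuffers[outIndex];
            out.position(info.offset);
            out.limit(info.offset + info.size);
            byte[] pcm = new byte[info.size];
            out.get(pcm);
            out.clear();
            track.write(pcm, 0, pcm.length);   // decoded PCM goes straight to the AudioTrack
            codec.releaseOutputBuffer(outIndex, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                outputDone = true;
            }
        } else if (outIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            outputBuffers = codec.getOutputBuffers();
        }
    }

    codec.stop();
    codec.release();
    track.release();
    extractor.release();
}

For encrypted audio, you would feed the decrypted bytes into the codec's input buffers yourself instead of going through MediaExtractor.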

AudioTrack: Playing sound coming in over WiFi

I've got an AudioTrack in my application, which is set to Stream mode. I want to write audio which I receive over a wireless connection. The AudioTrack is declared like this:
mPlayer = new AudioTrack(STREAM_TYPE,
FREQUENCY,
CHANNEL_CONFIG_OUT,
AUDIO_ENCODING,
PLAYER_CAPACITY,
PLAY_MODE);
Where the parameters are defined like:
private static final int FREQUENCY = 8000,
CHANNEL_CONFIG_OUT = AudioFormat.CHANNEL_OUT_MONO,
AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT,
PLAYER_CAPACITY = 2048,
STREAM_TYPE = AudioManager.STREAM_MUSIC,
PLAY_MODE = AudioTrack.MODE_STREAM;
However, when I write data to the AudioTrack with write(), playback is choppy... The call
byte[] audio = packet.getData();
mPlayer.write(audio, 0, audio.length);
is made whenever a packet is received over the network connection. Does anybody have an idea why it sounds choppy? Maybe it has something to do with the WiFi connection itself? I don't think so, because the sound doesn't sound bad the other way around, when I send data from the Android phone to another source over UDP; then the sound is complete and not choppy at all... So does anybody have an idea why this is happening?
Do you know how many bytes per second you are receiving, how the average time between packets compares, and the maximum time between packets? If not, can you add code to calculate it?
You need to be averaging 8000 samples/second * 2 bytes/sample = 16,000 bytes per second in order to keep the stream filled.
A gap of more than 2048 bytes / (16000 bytes/second) = 128 milliseconds between incoming packets will cause your stream to run dry and the audio to stutter.
One way to prevent it is to increase the buffer size (PLAYER_CAPACITY). A larger buffer will be more able to handle variation in the incoming packet size and rate. The cost of the extra stability is a larger delay in starting playback while you wait for the buffer to initially fill.
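In code, that sizing looks something like this (a sketch using your own constants; the half-second figure is a judgment call between stability and startup delay):

// 8000 Hz * 2 bytes/sample * 1 channel = 16,000 bytes of PCM per second.
int bytesPerSecond = FREQUENCY * 2 /* bytes per 16-bit sample */ * 1 /* mono */;

// Ask Android for its minimum, then keep roughly half a second of headroom,
// whichever is larger, instead of the fixed 2048 bytes.
int minSize = AudioTrack.getMinBufferSize(FREQUENCY, CHANNEL_CONFIG_OUT, AUDIO_ENCODING);
int bufferSize = Math.max(minSize, bytesPerSecond / 2); // ~500 ms of audio

mPlayer = new AudioTrack(STREAM_TYPE, FREQUENCY, CHANNEL_CONFIG_OUT,
        AUDIO_ENCODING, bufferSize, PLAY_MODE);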
I have partially solved it by placing the mPlayer.write(audio, 0, audio.length); call in its own thread. This does take away some of the choppiness (due to the fact that write is a blocking call), but it still sounds choppy after a good second or two. It still has a significant delay of 2-3 seconds.
new Thread() {
    public void run() {
        byte[] audio = packet.getData();
        mPlayer.write(audio, 0, audio.length);
    }
}.start();
Just a little anonymous Thread that does the writing now...
Anybody have an idea on how to solve this issue?
Edit:
After some further checking and debugging, I've noticed that this is an issue with obtainBuffer.
I've looked at the Java code of AudioTrack and the C++ code of AudioTrack, and I've noticed that the warning can only come from the C++ code.
if (__builtin_expect(result != NO_ERROR, false)) {
    LOGW("obtainBuffer timed out (is the CPU pegged?) "
         "user=%08x, server=%08x", u, s);
    mAudioTrack->start(); // FIXME: Wake up audioflinger
    timeout = 1;
}
I've noticed that there is a FIXME in this piece of code. :< But anyway, could anybody explain how this C++ code works? I've had some experience with it, but it was never as complicated as this...
Edit 2:
I've tried something slightly different now: I buffer the data I receive, and once the buffer holds some data, it is written to the player. However, the player keeps up with consuming it for a few cycles, then the obtainBuffer timed out (is the CPU pegged?) warning kicks in, and no data at all is written to the player until it is kick-started back to life... After that, it continually gets data written to it until the buffer is emptied.
Another slight difference is that I stream a file to the player now. That is, I read it in chunks, then write those chunks to the buffer. This simulates the packets being received over WiFi...
I am beginning to wonder if this is just an OS issue that Android has, and it isn't something I can solve on my own... Anybody got any ideas on that?
Edit 3:
I've done more testing, but it doesn't help me any further. This test shows me that I only get lag when I write to the AudioTrack for the first time; this takes somewhere between 1 and 3 seconds to complete. I did this by using the following bit of code:
long beforeTime = Utilities.getCurrentTimeMillis(), afterTime = 0;
mPlayer.write(data, 0, data.length);
afterTime = Utilities.getCurrentTimeMillis();
Log.e("WriteToPlayerThread", "Writing a package took " + (afterTime - beforeTime) + " milliseconds");
However, I get the following results:
(Logcat screenshot: http://img810.imageshack.us/img810/3453/logcatimage.png)
These show that the lag only occurs at the beginning, after which the AudioTrack keeps getting data continuously... I really need to get this one fixed...
