I have a server that encodes real-time voice into mono or stereo MP3 with libmp3lame and sends it chunk by chunk over a WebSocket.
I'm trying to build an Android app that receives those MP3 chunks and plays them with the most appropriate audio player Android has. I went with AudioTrack since it seems easy to feed chunks to the player and it is "stream" oriented (what I'm sending to the track is a byte array, not a full song stored locally on the phone).
Since AudioTrack does not support compressed audio formats (such as MP3), I have to decode those chunks into PCM before playing them. I'm using the well-known JLayer to do this real-time decoding. With that, I can write each sample into my AudioTrack and hear what the server is sending.
My problem is that the received/played audio is badly hashed. (I can understand everything the speaker says, but the quality is bad, as if the speaker had a "robotic voice".)
Here is the code I'm using to receive/decode/play those byte[] chunks:
public void addSample(byte[] data) throws BitstreamException, DecoderException, IOException {
    // JLayer decoder
    Decoder decoder = new Decoder();
    // Input stream wrapping the byte[] voice data
    InputStream bis = new ByteArrayInputStream(data);
    Bitstream bits = new Bitstream(bis);
    // Decode the MP3 data into PCM, into a SampleBuffer
    SampleBuffer pcmBuffer = (SampleBuffer) decoder.decodeFrame(bits.readFrame(), bits);
    // Write the PCM data into the AudioTrack to play it
    mTrack.write(pcmBuffer.getBuffer(), 0, pcmBuffer.getBufferLength());
    bits.closeFrame();
}
And here is my AudioTrack initialization:
mTrack = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(48000)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build())
        .setBufferSizeInBytes(AudioTrack.getMinBufferSize(48000, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT))
        .build();
mTrack.play();
So to understand what was happening, I tried to log each value contained in the pcmBuffer. It turns out a huge part of that data was 0 at the very beginning of the buffer (I'd say 1/5 of the buffer is 0, all of it located at the beginning). So I then took an oscilloscope and captured the signal my Android phone was receiving. Here is the result:
As you can see, each frame is present, but has some "blank" or 0 data values. Those 0s at the beginning of each frame make the signal hashed and pretty annoying to listen to.
I have no idea whether this comes from the MP3 signal itself, the way I'm playing it, AudioTrack, JLayer, or the way I'm decoding it. So if anyone has an idea it would be really awesome.
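For reference, a minimal sketch of that logging (countLeadingZeroSamples is just an illustrative helper, not part of my original code):

// Illustrative helper: count how many samples at the start of the decoded
// PCM buffer are exactly zero, to quantify the "blank" at each frame start.
private static int countLeadingZeroSamples(short[] pcm, int length) {
    int count = 0;
    while (count < length && pcm[count] == 0) {
        count++;
    }
    return count;
}

// Used right after decoding a frame:
// int zeros = countLeadingZeroSamples(pcmBuffer.getBuffer(), pcmBuffer.getBufferLength());
// Log.d("PCM", zeros + " leading zero samples out of " + pcmBuffer.getBufferLength());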
EDIT:
I found out something interesting. By decoding each frame header, I can access a lot of information, such as the duration in ms of each frame. I logged it:
System.out.println(bits.readFrame().ms_per_frame());
I found out that each of my frames is 24 ms long. When I look back at the oscilloscope, I can see that each frame actually takes 24 ms, but the beginning/end of each frame is filled with 0s. So first of all, is it a decoding problem? If not, how can I get a clean signal without a small breakup in each frame?
I've been printing all the data that each frame gives me; each frame starts with a lot of zeros. How am I supposed to get a clean signal if each frame has some kind of audio void?
If I print the MP3 data that I'm receiving for each frame (96 bytes), the first four bytes (probably the header?) always have the same value:
"-1, -5, 20, -60"
Then I have a fifth byte that is always equal to 0, and sometimes a sixth byte that is also equal to 0. Should I be removing those?
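For what it's worth, those four signed bytes correspond to 0xFF 0xFB 0x14 0xC4, which parses as a standard MPEG-1 Layer III frame header. A minimal sketch of reading the interesting fields (bit positions follow the MPEG audio frame header layout):

// Sketch: interpret the 4-byte MP3 frame header ("-1, -5, 20, -60" as signed bytes).
int b0 = -1 & 0xFF;   // 0xFF: first 8 bits of the frame sync
int b1 = -5 & 0xFF;   // 0xFB: sync + MPEG-1 + Layer III + no CRC
int b2 = 20 & 0xFF;   // 0x14: bitrate index, sample-rate index, padding bit
int b3 = -60 & 0xFF;  // 0xC4: channel mode and other flags

int bitrateIndex = (b2 >> 4) & 0x0F;    // 0b0001 -> 32 kbps for MPEG-1 Layer III
int sampleRateIndex = (b2 >> 2) & 0x03; // 0b01   -> 48000 Hz
int channelMode = (b3 >> 6) & 0x03;     // 0b11   -> single channel (mono)

// Frame size for Layer III: 144 * bitrate / sampleRate (+ padding)
// 144 * 32000 / 48000 = 96 bytes, which matches the 96-byte frames above.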
Related
I have an RTMP stream I want to play in my app using the ExoPlayer library. My setup for that is as follows:
TrackSelector trackSelector = new DefaultTrackSelector();
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new ExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
        new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
        1000, // min buffer
        3000, // max buffer
        1000, // playback
        2000, // playback after rebuffer
        DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
        true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addListener(mVideoStreamHandler);
mPlayer.addVideoListener(new VideoListener() {
    @Override
    public void onVideoSizeChanged(int width, int height, int unappliedRotationDegrees, float pixelWidthHeightRatio) {
        Log.d("hasil", "onVideoSizeChanged: w:" + width + ", h:" + height);
        String res = width + "x" + height;
        resolution.setText(res);
    }

    @Override
    public void onRenderedFirstFrame() {
    }
});
Where createSource() is as follows:
private void createSource() {
    mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180));
    mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360));
    mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720));
    mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL));
}
My current problem is that only the first three ExtractorMediaSources work fine in ExoPlayer. The mMediaSourceAudio refuses to play in ExoPlayer, but works just fine in VLC Media Player for Android.
Right now I suspect that the format is AAC-LTP, or some other AAC variant that requires a codec available in VLC but not in stock Android. However, I don't have access to the encoding process, so I can't know for sure.
If this isn't the case, what is it?
EDIT:
I've been debugging the BandwidthMeter and added a MediaSourceEventListener. When I use the normal video sources, onDownstreamFormatChanged() gets called, but not when I use the audio stream source.
In addition, the BandwidthMeter works fine: bytes are always being downloaded in all parts of the stream, with more bytes when the video stream comes in. But with the audio-only stream, when I call mPlayer.getBufferedPosition(), the returned value is always 0. Also, when I use the audio stream source, no OMX code is called: no decoders are set up.
Am I seeing a malformed audio stream, or do I need to change my ExoPlayer settings?
EDIT 2:
Further debugging reveals that the same FlvExtractor is used for all the video streams and the audio stream, even though the video streams have an avc video track encoding and an mp4a-latm audio track encoding. Is this normal?
Turns out it's because the stream was recognized as having two tracks/SampleQueues: one audio track, and one track with a null format. That null track was supposed to be the video track, which was supposed to exist according to the stream's flvHeader flags.
For now, I get around this by creating a custom MediaSource using a custom MediaPeriod. Said custom MediaPeriod has code to separate the video and audio tracks of the SampleQueues, and then uses the audio-only SampleQueue[] instead of the source SampleQueue[] when I want to play the audio-only stream.
Though this gives me another point of concern: there is something one can do to alter the 'has audio track (flag & 0x04) and video track (flag & 0x01)' flags in the RTMP stream, right?
Thanks for the comments; I'm new to ExoPlayer, but they helped me debug the issue and find multiple workarounds.
I tried to use a custom MediaSource and a custom MediaPeriod to address this audio issue. I observed video format data coming after audio data in the case of a video+audio Wowza stream, so the function maybeFinishPrepare() will wait until both the video and audio format tag data have been received before invoking onPrepared(), in case the video tagData is received first. In case the audio data is received first, it won't wait and will call onPrepared().
With the above changes, I was able to play both audio-only and video+audio Wowza streams, where the RTMP tagHeaders with their tagTypes were arriving in the order of video tagData first, followed by audio data.
I wasn't able to use the same patch with the SRS server to play both audio-only and video+audio streams, because the SRS server delivers tagData in the order of audio first, then video tagData.
So I debugged further in FlvExtractor. In readFlvHeader, I have overridden the hasAudio and hasVideo variables: instead of relying on flvHeader.flags, they are now set based on the first few tagHeaders (5 or 6). I used peekFully on the input 6 times in a loop. In each iteration, after fetching tagType and tagDataSize, tagDataSize is passed to input.advancePeekPosition(), and tagType is used to identify whether the tagData holds audio or video format data. After peeking at the first 6 consecutive tagHeaders, I had the actual values of hasAudio and hasVideo, and ignored flvHeader.flags, which were originally used to set these variables (see the sketch below).
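A minimal sketch of that peek loop, assuming ExoPlayer 2.x's ExtractorInput API and an input positioned just after the 9-byte FLV header and the 4-byte PreviousTagSize0 field (the method name and the tag count are illustrative):

// Sketch: derive hasAudio/hasVideo by peeking the first few FLV tag headers
// instead of trusting flvHeader.flags. An FLV tag header is 11 bytes:
// 1 byte tagType, 3 bytes dataSize, 3 bytes timestamp, 1 byte timestampExtended,
// 3 bytes streamId; the tag body is followed by a 4-byte PreviousTagSize field.
private void probeTagHeaders(ExtractorInput input) throws IOException, InterruptedException {
    byte[] header = new byte[11];
    boolean hasAudio = false;
    boolean hasVideo = false;
    for (int i = 0; i < 6; i++) { // peek the first 6 tags
        if (!input.peekFully(header, 0, 11, /* allowEndOfInput= */ true)) {
            break;
        }
        int tagType = header[0] & 0x1F; // 8 = audio, 9 = video, 18 = script data
        int tagDataSize = ((header[1] & 0xFF) << 16) | ((header[2] & 0xFF) << 8) | (header[3] & 0xFF);
        if (tagType == 8) hasAudio = true;
        if (tagType == 9) hasVideo = true;
        // skip the tag body plus the trailing 4-byte PreviousTagSize field
        if (!input.advancePeekPosition(tagDataSize + 4, /* allowEndOfInput= */ true)) {
            break;
        }
    }
    input.resetPeekPosition();
    // hasAudio/hasVideo can now replace the values read from flvHeader.flags
}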
The custom FlvExtractor workaround looked cleaner than the custom MediaSource/MediaPeriod one, as it creates only as many tracks as necessary, since we set proper hasVideo/hasAudio values.
I am doing Android video recording using MediaCodec + MediaMuxer, and I can now record video and generate an mp4 file that plays correctly. The problem is that the recorded video freezes for about one second from time to time. I launched traceview and found that MediaMuxer.nativeWriteSampleData() causes the problem. Sometimes this function is very fast and returns within a few microseconds, but sometimes it is very slow, taking about one second, and the video blocks during that time. I don't know why this function's performance varies so much from call to call. The recording target file is located on the external SD card or in internal storage, and the problem exists on both.
This is a problem that occurs mostly on devices with lower flash write speeds, or when you try to write to the SD card. The solution is to copy the encoded data to a temporary ByteBuffer, release the output buffer back to MediaCodec, and call writeSampleData asynchronously on a dedicated thread.
So, assuming that you have one thread for draining MediaCodec's output and one for feeding MediaMuxer, this is a possible solution:
// this runs on the MediaCodec's draining thread
public void writeSampleData(final MediaCodec mediaCodec, final int trackIndex, final int bufferIndex,
                            final ByteBuffer encodedData, final MediaCodec.BufferInfo bufferInfo) {
    final ByteBuffer data = ByteBuffer.allocateDirect(bufferInfo.size); // allocate a temp ByteBuffer
    data.put(encodedData); // copy the data over
    mediaCodec.releaseOutputBuffer(bufferIndex, false); // return the packet to MediaCodec
    mWriterHandler.post(new Runnable() {
        // this runs on the Muxer's writing thread
        @Override
        public void run() {
            mMuxer.writeSampleData(trackIndex, data, bufferInfo); // feed the packet to MediaMuxer
        }
    });
}
The problem with this approach is that we allocate a new ByteBuffer for every incoming packet. It would be better if we could reuse one big circular buffer to enqueue and dequeue the data. I have written a post about this matter and also propose a solution which is rather lengthy to explain here; you can read it here.
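As a rough illustration of the reuse idea (this BufferPool class is hypothetical and much simpler than the circular buffer in the linked post), one could pool the direct ByteBuffers instead of allocating a fresh one per packet:

import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Hypothetical helper: recycle direct ByteBuffers instead of allocating one per packet.
final class BufferPool {
    private final ArrayDeque<ByteBuffer> free = new ArrayDeque<>();
    private final int defaultCapacity;

    BufferPool(int defaultCapacity) {
        this.defaultCapacity = defaultCapacity;
    }

    synchronized ByteBuffer obtain(int size) {
        ByteBuffer buf = free.poll();
        if (buf == null || buf.capacity() < size) {
            buf = ByteBuffer.allocateDirect(Math.max(size, defaultCapacity));
        }
        buf.clear();
        return buf;
    }

    synchronized void recycle(ByteBuffer buf) {
        free.offer(buf); // the writing thread returns the buffer here after writeSampleData
    }
}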
I am developing an Android app which plays a live Speex audio stream, so I used the jspeex library.
The audio stream is 11 kHz, 16-bit.
On the Android side I have done the following:
SpeexDecoder decoder = new SpeexDecoder();
decoder.init(1, 11025, 1, true);
decoder.processData(subdata, 0, subdata.length);
byte[] decoded_data = new byte[decoder.getProcessedDataByteSize()];
int result = decoder.getProcessedData(decoded_data, 0);
When this decoded data is played by AudioTrack, some parts of the audio are clipped.
Also, when the decoder is set to nb mode (first parameter set to 0), the sound quality is worse.
I wonder whether there is a parameter configuration mistake in my code.
Any help, advice appreciated.
Thanks in advance.
Sampling rate and buffer size should be set in a way that is optimized for the specific device. For example, you can use AudioRecord.getMinBufferSize() to obtain the best size for your buffer:
int sampleRate = 11025; // try also other standard sample rates
int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
If your AudioTrack has a buffer which is too small or too large, you will experience audio glitches. I suggest you take a look here and play around with these values (sampleRate and bufferSize).
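For instance, a minimal sketch of wiring that minimum buffer size into an AudioTrack for an 11 kHz, 16-bit mono stream (the headroom factor is just an illustrative choice):

// Sketch: build an AudioTrack for the 11025 Hz, 16-bit mono stream, using the
// platform-recommended minimum buffer size for playback.
int sampleRate = 11025;
int minBufSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT);

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRate,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        minBufSize * 2, // some headroom over the minimum
        AudioTrack.MODE_STREAM);
track.play();

// Then feed it the decoded bytes:
// track.write(decoded_data, 0, decoded_data.length);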
I have followed this example to convert raw audio data coming from AudioRecord to MP3, and it worked: if I store this MP3 data in a file and play the file with a music player, it is audible.
Now my question: instead of storing the MP3 data in a file, I need to play it with AudioTrack. The data is coming from the Red5 media server as a live stream, but the problem is that AudioTrack can only play PCM data, so I can only hear noise from my data.
Now I am using JLayer for the required task.
My code is as follows:
int readresult = recorder.read(audioData, 0, recorderBufSize);
int encResult = SimpleLame.encode(audioData,audioData, readresult, mp3buffer);
and this mp3buffer data is sent to the other user via a Red5 stream.
The data received by the other user is in the form of a stream, so the code for playing it is:
Bitstream bitstream = new Bitstream(data.read());
Decoder decoder = new Decoder();
Header frameHeader = bitstream.readFrame();
SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
short[] pcm = output.getBuffer();
player.write(pcm, 0, pcm.length);
But my code freezes at bitstream.readFrame() after 2-3 seconds, and no sound is produced before that.
Any guess what the problem might be? Any suggestion is appreciated.
Note: I don't need to store the MP3 data, so I can't use MediaPlayer, as it requires a file or file descriptor.
Just a tip, but try calling
output.close();
bitstream.closeFrame();
after your write code. I'm processing MP3 the same way you do, but I close the buffers after usage and I have no problem.
Second tip: do it in a Thread or some other background process. Since you mentioned those silent 2 seconds, the media player may be waiting until you have processed the whole stream, because you are loading it on the same thread (see the sketch below).
Try both tips (you should anyway). With the first, the problem could be in the internal buffers; with the second, you probably fill up the media player's input buffer and lock the app (same thread: the full buffer cannot receive your input, and the code that plays it and releases that same buffer is never invoked, because the writing locks it...).
Also, if you aren't doing it already, check for frameHeader == null to detect the end of the file.
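A rough sketch of that second tip, assuming player is your AudioTrack and inputStream is the stream coming from Red5 (both names are placeholders):

// Sketch: run the JLayer decode + AudioTrack write loop on a background thread,
// so the thread receiving the stream is never blocked by playback.
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            Bitstream bitstream = new Bitstream(inputStream); // placeholder input
            Decoder decoder = new Decoder();
            Header frameHeader;
            while ((frameHeader = bitstream.readFrame()) != null) {
                SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
                player.write(output.getBuffer(), 0, output.getBufferLength());
                bitstream.closeFrame();
            }
        } catch (Exception e) {
            Log.e("Mp3Player", "decode loop failed", e);
        }
    }
}, "mp3-decode-thread").start();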
Good luck.
You need to loop through the frames like this:
Header frameHeader;
while ((frameHeader = bitstream.readFrame()) != null) {
    SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
    short[] pcm = output.getBuffer();
    player.write(pcm, 0, pcm.length);
    bitstream.closeFrame();
}
And make sure you are not running this on the main thread. (That is probably the reason for the freezing.)
I am reading the Android documentation about MediaCodec and some online tutorials/examples. As I understand it, the way to use MediaCodec is like this (decoder example in pseudocode):
//-------- prepare audio decoder, format, buffers, and files --------
MediaExtractor extractor;
MediaCodec codec;
ByteBuffer[] codecInputBuffers;
ByteBuffer[] codecOutputBuffers;

extractor = new MediaExtractor();
extractor.setDataSource();
MediaFormat format = extractor.getTrackFormat(0);

//---------------- start decoding ----------------
codec = MediaCodec.createDecoderByType(mime);
codec.configure(format, null /* surface */, null /* crypto */, 0 /* flags */);
codec.start();
codecInputBuffers = codec.getInputBuffers();
codecOutputBuffers = codec.getOutputBuffers();
extractor.selectTrack(0);

//---------------- decoder loop ----------------
while (MP3_file_not_EOS) {
    //-------- grab control of an input buffer from the codec --------
    codec.dequeueInputBuffer();
    //---- fill the input buffer with data from the MP3 file ----
    extractor.readSampleData();
    //-------- release the input buffer so the codec can have it --------
    codec.queueInputBuffer();
    //-------- grab control of an output buffer from the codec --------
    codec.dequeueOutputBuffer();
    //-- copy PCM samples from the output buffer into another buffer --
    short[] PCMoutBuffer = copy_of(OutputBuffer);
    //-------- release the output buffer so the codec can have it --------
    codec.releaseOutputBuffer();
    //-------- write PCMoutBuffer to a file, or play it --------
}

//---------------- stop decoding ----------------
codec.stop();
codec.release();
Is this the right way to use MediaCodec? If not, please enlighten me on the right approach. If it is, how do I measure the performance of MediaCodec? Is it the time difference between when codec.dequeueOutputBuffer() returns and when codec.queueInputBuffer() returns? I'd like microsecond accuracy/precision. Your ideas and thoughts are appreciated.
(merging comments and expanding slightly)
You can't simply time how long a single buffer submission takes, because the codec may want to queue up more than one buffer before doing anything. You will need to measure it in aggregate, timing the duration of the entire file decode with System.nanoTime(). If you turn the copy_of operation into a no-op and just discard the decoded data, you'll keep the output side (writing the decoded data to disk) out of the measurement.
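For example, a minimal sketch of that aggregate timing (decodeEntireFile is a stand-in for your decoder loop, not a real API):

// Sketch: time the whole-file decode in aggregate. decodeEntireFile() is a
// hypothetical method wrapping the decoder loop, with copy_of turned into a no-op.
long startNs = System.nanoTime();
decodeEntireFile(extractor, codec); // run the full dequeue/queue loop to EOS
long elapsedNs = System.nanoTime() - startNs;
Log.d("perf", "decode took " + (elapsedNs / 1000) + " us");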
Excluding the I/O from the input side is more difficult. As noted in the MediaCodec docs, the encoded input/output "is not a stream of bytes, it's a stream of access units". So you'd have to populate any necessary codec-specific-data keys in MediaFormat, and then identify individual frames of input so you can properly feed the codec.
An easier but less accurate approach would be to conduct a separate pass in which you time how long it takes to read the input data, and then subtract that from the total time. In your sample code, you would keep the operations on extractor (like readSampleData), but do nothing with codec (maybe dequeue one buffer and just re-use it every time). That way you only measure the MediaExtractor overhead. The trick here is to run it twice, immediately before the full test, and ignore the results from the first -- the first pass "warms up" the disk cache.
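A minimal sketch of that extractor-only pass (the method name and scratch buffer size are illustrative):

// Sketch: measure the MediaExtractor overhead alone, so it can be subtracted
// from the total decode time. Run it twice and keep only the second
// (cache-warm) result.
long measureExtractorPass(MediaExtractor extractor) {
    ByteBuffer scratch = ByteBuffer.allocate(64 * 1024); // reused for every sample
    long startNs = System.nanoTime();
    while (extractor.readSampleData(scratch, 0) >= 0) {
        extractor.advance(); // step to the next access unit without any codec work
    }
    return System.nanoTime() - startNs;
}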
If you're interested in performance differences between devices, it may be the case that the difference in input I/O time, especially from a "warm" cache, is similar enough and small enough that you can just disregard it and not go through all the extra gymnastics.