The documentation says: sessionId - Id of audio session the AudioTrack must be attached to
Can I use it like this?
MediaPlayer mp = MediaPlayer.create(this, R.raw.test);
mp.start();
int minSize = AudioTrack.getMinBufferSize(
        44100, AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT);
// attach the AudioTrack to the MediaPlayer's audio session
AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC,
        44100, AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT, minSize,
        AudioTrack.MODE_STREAM, mp.getAudioSessionId());
at.setStereoVolume(0.0f, 1.0f);
What is the right way to attach an AudioTrack to the stream used by the MediaPlayer? Can I make changes to this stream using AudioTrack?
You don't need to specify a session ID, as there's also an AudioTrack constructor that doesn't have a sessionId input parameter. However, as the documentation states, if you want to "associate audio effects to a particular instance of AudioTrack" you should use the constructor that takes a session ID.
This session ID can either be taken from a MediaPlayer instance that you have created - or it can be zero, in which case "a new session will be created for this track if none is supplied".
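For example, the typical reason to care about the session ID at all is attaching an audio effect to it. A minimal sketch (untested; it reuses R.raw.test from the question and assumes android.media.audiofx.Equalizer is available on the device):

MediaPlayer mp = MediaPlayer.create(context, R.raw.test);
// Equalizer(priority, audioSession) binds the effect to that session
Equalizer eq = new Equalizer(0, mp.getAudioSessionId());
eq.setEnabled(true);
mp.start();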
While looking through the Android documentation, I found an AudioTrack.Builder class. A snippet of the documentation explains that
If the session ID is not specified with setSessionId(int), a new one will be generated.
It appears that this method allows creating a new AudioTrack without needing to supply a session ID.
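For instance, a minimal sketch (API 23+; the format values below are illustrative, not taken from the original question):

AudioTrack track = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build())
        .setTransferMode(AudioTrack.MODE_STREAM)
        .build();
int sessionId = track.getAudioSessionId(); // a freshly generated session ID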
I have an RTMP stream I want to play in my app using the ExoPlayer library. My setup for that is as follows:
TrackSelector trackSelector = new DefaultTrackSelector();
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new ExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();

mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
        new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
        1000, // min buffer
        3000, // max buffer
        1000, // playback
        2000, // playback after rebuffer
        DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
        true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addListener(mVideoStreamHandler);
mPlayer.addVideoListener(new VideoListener() {
    @Override
    public void onVideoSizeChanged(int width, int height, int unappliedRotationDegrees, float pixelWidthHeightRatio) {
        Log.d("hasil", "onVideoSizeChanged: w:" + width + ", h:" + height);
        String res = width + "x" + height;
        resolution.setText(res);
    }

    @Override
    public void onRenderedFirstFrame() {
    }
});
Where createSource() is as follows:
private void createSource() {
    mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180));
    mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360));
    mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720));
    mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL));
}
My current problem is that only the first three ExtractorMediaSources work fine in ExoPlayer. mMediaSourceAudio refuses to play in ExoPlayer, but works just fine in VLC Media Player for Android.
My current suspicion is that the format is AAC-LTP, or some other AAC variant that requires a codec available in VLC but not in stock Android. However, I don't have access to the encoding process, so I can't be sure.
If this isn't the case, what is it?
EDIT:
I've been debugging the BandwidthMeter and added a MediaSourceEventListener. When I use the normal video sources, onDownstreamFormatChanged() gets called, but not when I use the audio stream source.
In addition, the BandwidthMeter works fine: bytes are downloaded throughout the stream, with more bytes arriving when the video stream comes in. With the audio-only stream, however, mPlayer.getBufferedPosition() always returns 0, and no OMX code is called at all, meaning no decoders were set up. The listener was attached roughly as shown below.
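(A sketch of the hookup; the addEventListener(Handler, MediaSourceEventListener) overload and DefaultMediaSourceEventListener are from the ExoPlayer 2.x line, so the exact signature may differ in your version:)

mMediaSourceAudio.addEventListener(new Handler(), new DefaultMediaSourceEventListener() {
    @Override
    public void onDownstreamFormatChanged(int windowIndex, MediaSource.MediaPeriodId mediaPeriodId, MediaLoadData mediaLoadData) {
        Log.d("hasil", "onDownstreamFormatChanged: " + mediaLoadData.trackFormat);
    }
});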
Am I seeing a malformed audio stream, or do I need to change my ExoPlayer settings?
EDIT 2:
Further debugging reveals that the same FlvExtractor is used for all the video streams and the audio stream, even though the video streams carry an avc video track and an mp4a-latm audio track. Is this normal?
Turns out it's because the stream was recognized as having two tracks/SampleQueues: one audio track, and one track with a null format. That null track was supposed to be the video track, which was supposed to exist according to the stream's flvHeader flags.
For now, I get around this by creating a custom MediaSource using a custom MediaPeriod. The custom MediaPeriod contains code to separate the video and audio tracks of the SampleQueues, and then uses the audio-only SampleQueue[] instead of the source SampleQueue[] when I want to play the audio-only stream.
This leaves me with another point of concern, though: there is something one can do to alter the 'has audio track (flag & 0x04) and has video track (flag & 0x01)' flags in the RTMP stream, right?
Thanks for the comments; I'm new to ExoPlayer, but your comments helped me debug the issue and find multiple workarounds.
I first tried a custom MediaSource and custom MediaPeriod to address this audio issue. I observed the video format data arriving after the audio data in the case of a video+audio Wowza stream, so maybeFinishPrepare() now waits for both the video and audio format tag data before invoking onPrepared() when the video tagData is received first. When the audio data is received first, it doesn't wait and calls onPrepared() immediately. A sketch of that logic follows.
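(Names here follow ExtractorMediaPeriod in the ExoPlayer 2.x sources; fields like prepared, sampleQueues, and callback are assumed from that class, and details vary by version:)

private void maybeFinishPrepare() {
    if (released || prepared || seekMap == null || !sampleQueuesBuilt) {
        return;
    }
    boolean videoFormatSeen = false;
    boolean audioFormatSeen = false;
    for (SampleQueue queue : sampleQueues) {
        Format format = queue.getUpstreamFormat();
        if (format == null) {
            continue;
        }
        if (MimeTypes.isVideo(format.sampleMimeType)) {
            videoFormatSeen = true;
        } else if (MimeTypes.isAudio(format.sampleMimeType)) {
            audioFormatSeen = true;
        }
    }
    // video tagData came first: keep waiting until the audio format arrives too
    if (videoFormatSeen && !audioFormatSeen) {
        return;
    }
    // audio came first (audio-only stream), or both formats are present
    prepared = true;
    callback.onPrepared(this);
}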
With the above changes, I was able to play both audio-only and video+audio Wowza streams, where the RTMP tagHeaders arrived with video tagData first, followed by audio tagData.
However, I wasn't able to use the same patch with an SRS server to play both audio-only and video+audio streams, because the SRS server delivers tagData in the opposite order: audio first, then video.
So I debugged further in FlvExtractor. In readFlvHeader, I overrode the hasAudio and hasVideo variables, which are normally set from flvHeaders.flags, and set them based on the first few tagHeaders (5 or 6) instead. I called peekFully on the input six times in a loop; in each iteration, after fetching the tagType and tagDataSize, the tagDataSize is passed to input.advancePeekPosition() and the tagType identifies whether the tagData holds audio or video format data. After peeking at the first six consecutive tagHeaders, I had the actual values of hasAudio and hasVideo and could ignore flvHeaders.flags entirely. A rough sketch is shown below.
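(Hypothetical constant names and method shape; the real FlvExtractor internals differ between ExoPlayer versions:)

private static final int TAG_TYPE_AUDIO = 8; // FLV audio tag
private static final int TAG_TYPE_VIDEO = 9; // FLV video tag

private void sniffTrackTypes(ExtractorInput input) throws IOException, InterruptedException {
    byte[] tagHeader = new byte[11]; // an FLV tag header is 11 bytes
    boolean hasAudio = false;
    boolean hasVideo = false;
    for (int i = 0; i < 6; i++) { // inspect the first six tags
        input.peekFully(tagHeader, 0, tagHeader.length);
        int tagType = tagHeader[0] & 0x1F;
        int tagDataSize = ((tagHeader[1] & 0xFF) << 16)
                | ((tagHeader[2] & 0xFF) << 8)
                | (tagHeader[3] & 0xFF);
        if (tagType == TAG_TYPE_AUDIO) hasAudio = true;
        if (tagType == TAG_TYPE_VIDEO) hasVideo = true;
        // skip the tag body plus the trailing 4-byte "previous tag size" field
        input.advancePeekPosition(tagDataSize + 4);
    }
    input.resetPeekPosition();
    // use hasAudio/hasVideo here instead of flvHeaders.flags
}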
The custom FlvExtractor workaround looked cleaner than the custom MediaSource/MediaPeriod approach: since hasVideo/hasAudio are set correctly, we create only as many tracks as are actually needed.
Say I have an InputStream providing audio/mpeg (or audio/aacp) - a Shoutcast radio stream.
Could somebody please show me a code example for playing back such a stream?
I've searched all over the internet, and it looks like android.media.MediaPlayer can't play back a buffered stream; it can only play streams given an HTTP URL as the data source.
Yes, there is the possibility of implementing your own android.media.MediaDataSource and feeding it to MediaPlayer.setDataSource(), but in the case of audio/aacp the codec fails to initialize.
I guess this could be done with OpenSL ES, but I still haven't found any example that decodes from an InputStream (not from a URL) and then plays back the output.
Come on, guys! Any sample snippet in OpenSL ES that decodes an input byte array and sends it to the audio output - please!
The only condition is that the input audio format should be obtained from the input stream itself.
The only manual I found on OpenSL ES is the Khronos OpenSL ES™ Registry, but that is no reading material for beginners at all. I'd sooner die there searching for a proper "how-to" example... :(
Why is it so odd: either it's very simple (though inconvenient) in Java, or it's very hard in the NDK? Why is there nothing in between?
Your solution is something like this:

public void run() {
    // use AudioTrack.getMinBufferSize (not AudioRecord) when sizing a playback buffer
    int buffersize = AudioTrack.getMinBufferSize(11025,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC, 11025,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            buffersize, AudioTrack.MODE_STREAM);
    atrack.setPlaybackRate(11025);
    byte[] buffer = new byte[buffersize];
    atrack.play();
    try {
        while (isPlaying) {
            int bytesRead = yourStream.read(buffer, 0, buffersize);
            if (bytesRead < 0) break; // end of stream
            // write only the bytes actually read
            atrack.write(buffer, 0, bytesRead);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    atrack.release();
}
[First App] I am creating a sort of alarm app that allows the user to select the alarm sound either from the SD card or from app-supplied sounds. Since the app essentially plays alarms, I want the volume to be the device's 'alarm volume'. I am able to achieve this for SD card sounds, but I am unable to setAudioStreamType for raw resource sounds.
I am using the following code:
MediaPlayer m_player = new MediaPlayer();
m_player.setAudioStreamType(AudioManager.STREAM_ALARM);
switch (bin_name) { // bin_name = various user-selectable music files
    default:
        m_player = MediaPlayer.create(context, R.raw.blu);
        break;
}
m_player.setLooping(true);
m_player.start();
My blu.mp3 plays at media volume only. Upon checking the documentation for MediaPlayer.create(Context context, int resid), I found this:
Note that since prepare() is called automatically in this method, you cannot change the audio stream type (see setAudioStreamType(int)), audio session ID (see setAudioSessionId(int)) or audio attributes (see setAudioAttributes(AudioAttributes) of the new MediaPlayer.
I also tried to find code samples for the above method, but none of them showed how to set the audio stream type to AudioManager.STREAM_ALARM. I will accept answers with alternative approaches that simply play the sound in an infinite loop. How can I achieve this?
As the documentation you are referring to says, you must create and prepare the MediaPlayer yourself. I haven't tried it with STREAM_ALARM, but I'm using the following snippet to play on STREAM_VOICE_CALL:
Uri uri = Uri.parse("android.resource://com.example.app/" + R.raw.hdsweep);
MediaPlayer mMediaPlayer = new MediaPlayer();
mMediaPlayer.setAudioStreamType(AudioManager.STREAM_VOICE_CALL); // must be set before prepare()
mMediaPlayer.setDataSource(context, uri); // throws IOException
mMediaPlayer.prepare();
mMediaPlayer.start();
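Adapted to the alarm case from the question, it should look roughly like this (an untested sketch; R.raw.blu is the resource from the question, and the IOException from setDataSource()/prepare() still needs handling):

Uri uri = Uri.parse("android.resource://" + context.getPackageName() + "/" + R.raw.blu);
MediaPlayer m_player = new MediaPlayer();
m_player.setAudioStreamType(AudioManager.STREAM_ALARM); // set before prepare()
m_player.setDataSource(context, uri);
m_player.setLooping(true); // infinite loop, as requested
m_player.prepare();
m_player.start();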
I am developing an Android app which plays a live Speex audio stream, so I used the jspeex library.
The audio stream is 11 kHz, 16 bit.
On the Android side I decode it as follows:
SpeexDecoder decoder = new SpeexDecoder();
decoder.init(1, 11025, 1, true); // mode=1 (wideband), 11025 Hz, mono, perceptual enhancement on
decoder.processData(subdata, 0, subdata.length);
byte[] decoded_data = new byte[decoder.getProcessedDataByteSize()];
int result = decoder.getProcessedData(decoded_data, 0);
When this decoded data is played back by AudioTrack, parts of the audio are clipped.
Also, when the decoder is set to narrowband mode (first parameter set to 0), the sound quality is even worse.
I wonder whether there is a parameter configuration mistake in my code.
Any help or advice is appreciated.
Thanks in advance.
The sampling rate and buffer size should be tuned for the specific device. For example, you can use AudioTrack.getMinBufferSize() to obtain the minimum workable size for your playback buffer:
int sampleRate = 11025; // also try other standard sample rates
int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
If your AudioTrack has a buffer which is too small or too large, you will experience audio glitches. I suggest you take a look here and play around with these values (sampleRate and bufferSize), wiring the result into the AudioTrack as sketched below.
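(A minimal sketch, assuming the decoded jspeex output is 16-bit mono PCM as in the question:)

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);
track.play();
// feed each decoded chunk as it arrives
track.write(decoded_data, 0, decoded_data.length);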
I've found lots of tutorials and posts showing how to use AudioTrack to play wav files in AudioTrack.MODE_STREAM, and I've successfully implemented this example.
However, I'm having performance issues when playing multiple audio tracks at once, and I'm thinking I should instead create the tracks up front using AudioTrack.MODE_STATIC and then just call play each time.
I can't find any resources on how to implement this. How can I do this?
Thanks
The two main sticking points for me were realizing that write() comes first and that the instantiated player must use the size of the entire clip as its bufferSizeInBytes.
Assuming you have recorded a PCM file using AudioRecord, you can play it back with MODE_STATIC like so...
File file = new File(FILENAME);
int audioLength = (int) file.length();
byte[] filedata = new byte[audioLength];
try {
    InputStream inputStream = new BufferedInputStream(new FileInputStream(FILENAME));
    int lengthOfAudioClip = inputStream.read(filedata, 0, audioLength);
    inputStream.close();
    // MODE_STATIC requires the buffer to hold the entire clip
    player = new AudioTrack(STREAM_TYPE, SAMPLE_RATE, CHANNEL_OUT_CONFIG,
            AUDIO_FORMAT, audioLength, AUDIO_MODE);
    player.write(filedata, OFFSET, lengthOfAudioClip); // write before play()
    player.setPlaybackRate(playbackRate);
    player.play();
} catch (IOException e) {
    e.printStackTrace();
}
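To replay the same clip later without rewriting the buffer, a MODE_STATIC track can be rewound with reloadStaticData():

player.stop();
player.reloadStaticData(); // resets the playback head of the static buffer
player.play();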