I recently started working with the Google Cast SDK v3 (Android app).
I am casting audio streams to a Cast device. It works well until a stream fires the failure callback (failure status code 15: TIMEOUT) in setResultCallback, and the Cast device gets stuck on the loading screen. I tried using the
remoteMediaClient.stop()
method, but it doesn't work, and it also stops other streams from playing after the failure.
I am looking for a way to clear the media request I made earlier to the Cast device, so that I can load a new stream on it.
Here is a stream that gets stuck in the Buffering state while playing:
http://14523.live.streamtheworld.com/KALLAMAAC
I don't know exactly what your setup is, but when you call stop(), it not only stops playback, it also clears/unloads the media and any queue you have; to play another item afterward, you need to load the media again, as if nothing had been playing before. It is rare to need to call stop(). To help you further, it is important to see what type of error your receiver is running into.
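For what it's worth, a minimal sketch of that recovery path, assuming a connected session and the standard Cast v3 RemoteMediaClient API (the URL, content type, and callback body below are illustrative, not taken from your app):
import com.google.android.gms.cast.MediaInfo;
import com.google.android.gms.cast.framework.media.RemoteMediaClient;
import com.google.android.gms.common.api.ResultCallback;

// Sketch: after a failed request (e.g. status code 15, TIMEOUT), issue a new
// load() instead of stop(); loading fresh media replaces the stuck request.
MediaInfo mediaInfo = new MediaInfo.Builder("http://example.com/stream")
        .setStreamType(MediaInfo.STREAM_TYPE_LIVE)
        .setContentType("audio/aac")
        .build();
remoteMediaClient.load(mediaInfo, true /* autoplay */, 0 /* start position */)
        .setResultCallback(new ResultCallback<RemoteMediaClient.MediaChannelResult>() {
            @Override
            public void onResult(RemoteMediaClient.MediaChannelResult result) {
                if (!result.getStatus().isSuccess()) {
                    // The new load failed too; report it and retry or skip the stream.
                }
            }
        });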
I've written some code for Android 2.2 that plays an audio file using the Android MediaPlayer. Without getting into the details of the code, I noticed that there exists a function called
isPlaying()
that allows you to check if an audio file is currently being played by the MediaPlayer. So, for example, when the following snippet of code runs
Toast.makeText(getApplicationContext(), "Sound playing is: " +
mediaPlayer.isPlaying(), Toast.LENGTH_SHORT).show();
it displays the following message
Sound playing is: true / false
depending on whether there's sound playing or not.
When I wrote some code to record sound from the microphone using the Android MediaRecorder, however, I noticed that there did not appear to be a function called
isRecording()
that checks to see whether a recording is in progress.
So, I was wondering whether the onus is on the programmer to figure out if a recording is in progress by embedding some logic into their code, or whether there is in fact a way to check this using another built-in function offered by the Android API.
It doesn't look like there is such a function after all. I think it makes sense to embed some logic in your code to track this cleanly.
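For instance, a minimal sketch of that bookkeeping approach (the wrapper class and its names are my own invention, not part of the Android API):
import android.media.MediaRecorder;
import java.io.IOException;

// Sketch: track recording state yourself, since MediaRecorder has no isRecording().
public class TrackedRecorder {
    private final MediaRecorder recorder = new MediaRecorder();
    private boolean recording = false;

    public void start(String outputPath) throws IOException {
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start();
        recording = true;  // flip the flag only after start() succeeds
    }

    public void stop() {
        if (recording) {
            recorder.stop();
            recording = false;
        }
    }

    public boolean isRecording() {
        return recording;
    }
}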
Important
I found a nice workaround to handle this issue. You can jump straight to the workaround, but I strongly suggest reading the entire explanation first.
The scenario
I was trying to find a way to figure out whether the MediaRecorder had started recording or not. I assumed that calling the start() method on the recorder after the prepare() method was enough, but it turns out it isn't.
Before you get offended by what I just said, let me explain the scenario...
I was building a simple audio recording app from scratch: no libraries, no copy-paste, all hard work. So I knew exactly what each part of my code was doing. Or at least I thought I did.
That is, until I decided to try to break my application by clicking the start and stop recording buttons like I was playing a piano. And yes, my stop button didn't even appear until the MediaRecorder's start() method had been called.
So I was greeted with a crash and logcat welcomed me with
java.lang.RuntimeException: stop failed.
at android.media.MediaRecorder.stop(Native Method)
along with
E/MediaRecorder(15709): stop failed: -1007
So I read around online and found out that calling stop() right after start() on a MediaRecorder causes this problem.
So the biggest question was: how do I detect when it is SAFE to enable the STOP button on my recorder?
The Workaround (Not at all perfect, but it works)
MediaRecorder.getMaxAmplitude() // The maximum absolute amplitude measured since the last call, or 0 when called for the first time
As you can see, the MediaRecorder.getMaxAmplitude() method (or the MediaRecorder.maxAmplitude property) returns 0 when called for the first time, and the maximum amplitude measured since the last call thereafter.
So, instead of allowing the user to stop the recording right after calling MediaRecorder.start(), I now wait until the MediaRecorder.maxAmplitude value is greater than zero. At that point I can be sure that the MediaRecorder is initialized, started, and recording, and is in a state where calling stop() is allowed. You can accomplish this with a Runnable that keeps checking until the amplitude is greater than 0; I am already using a Runnable for the timer, so I perform the check there (see the sketch below).
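A minimal sketch of that polling loop, assuming it lives inside your Activity (the Handler-based structure and the name waitUntilRecording are mine, not from the original code):
import android.media.MediaRecorder;
import android.os.Handler;
import android.os.Looper;

// Sketch: poll getMaxAmplitude() until it is > 0, then treat the recorder
// as safely started and enable the Stop button.
private final Handler handler = new Handler(Looper.getMainLooper());

private void waitUntilRecording(final MediaRecorder recorder, final Runnable onStarted) {
    handler.post(new Runnable() {
        @Override
        public void run() {
            if (recorder.getMaxAmplitude() > 0) {
                onStarted.run();                 // safe to call stop() now
            } else {
                handler.postDelayed(this, 50);   // not recording yet; check again shortly
            }
        }
    });
}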
Please Note
When working on an emulator, the value returned by MediaRecorder.maxAmplitude is always 0, so you should use a real Android device to check that everything works as expected.
Now my buttons stay disabled for less than a second when I first start recording. But if I stop and restart too quickly, they remain disabled a bit longer, and I show a "Please wait while the recording starts" message to the user.
I hope this answer helps someone.
Regards!
I am streaming audio using MediaPlayer on Android.
When the device moves from Wi-Fi to the cell network or vice-versa, the MediaPlayer stops playback.
Typically there are a few seconds' worth of audio in the buffer, so playback does not cease immediately.
Ideally I would like to pick up the stream for uninterrupted playback, but I cannot see how to do it.
I am working with both MP3 files hosted on a server and a live broadcast stream.
From a server's point of view, changing the network from Wi-Fi to 3G (or vice versa) looks like a brand-new connection from a different IP address, i.e. a new client.
If the server you are downloading from does not support tracking a position in the stream (e.g. by seconds, sequence number, or byte offset), as media servers do, it will have to start serving your MP3 from byte 0 again.
If your URL points to an MP3 file on a standard HTTP server, that is what to expect. You should look into using a media streaming server, so you can resume downloading/streaming at a position of your choice. When you receive the intent that the connection was lost/restored, you could point your MediaPlayer to a new URL with the file position in it (e.g. seconds=19 or bytes=57365).
Not sure if this helps you, but it explains a bit of what's going on behind the scenes.
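Building on that, if the source is a seekable MP3 rather than a live broadcast, one hedged sketch is to remember the playback position yourself and seek back after re-preparing (the URL is illustrative, and you would trigger this from your own reconnect handling):
import java.io.IOException;
import android.media.MediaPlayer;

// Sketch: resume a seekable HTTP MP3 after a network switch.
final int lastPos = mediaPlayer.getCurrentPosition();  // capture before reconnecting
mediaPlayer.reset();
try {
    mediaPlayer.setDataSource("http://example.com/track.mp3");  // illustrative URL
} catch (IOException e) {
    // server unreachable; schedule a retry
}
mediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer mp) {
        mp.seekTo(lastPos);  // jump back to where playback left off
        mp.start();
    }
});
mediaPlayer.prepareAsync();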
Try setting setOnCompletionListener and setOnErrorListener. In onCompletion, with a live stream, you can just call prepareAsync() again and this will kick the stream off again. There is no graceful way of doing this, really, unless you write your own media framework.
You can also listen in your onError() for MEDIA_ERROR_SERVER_DIED and then fire off prepareAsync() again.
You'll find that the MediaPlayer will either error or complete. If you handle both of these callbacks, the very least you can do is restart the stream on a change of network; as for smooth playback, that would require a custom media framework, as the Android one is pretty shoddy.
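A minimal sketch of wiring up those callbacks (assuming a live stream; STREAM_URL is an illustrative constant, and note that after MEDIA_ERROR_SERVER_DIED the player must be reset and given its data source again):
import java.io.IOException;
import android.media.MediaPlayer;

// Sketch: restart a live stream when it completes or the server dies.
mediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer mp) {
        mp.start();
    }
});
mediaPlayer.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        mp.stop();          // PlaybackCompleted -> Stopped, so prepareAsync() is legal
        mp.prepareAsync();  // re-buffer the stream; onPrepared() starts it again
    }
});
mediaPlayer.setOnErrorListener(new MediaPlayer.OnErrorListener() {
    @Override
    public boolean onError(MediaPlayer mp, int what, int extra) {
        if (what == MediaPlayer.MEDIA_ERROR_SERVER_DIED) {
            mp.reset();                        // error state: reset before reuse
            try {
                mp.setDataSource(STREAM_URL);  // illustrative constant
                mp.prepareAsync();
            } catch (IOException e) {
                // give up, or schedule a retry
            }
            return true;                       // error handled
        }
        return false;
    }
});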
I don't know why your MediaPlayer is stopping, but maybe you could add an onReceive() method and call mp.start() in it to restart playback.
Android, How to handle change in network (from GPRS to Wi-fi and vice-versa) while polling for data
You might need to make a separate class, but that question should explain how to create a method that is called when you switch networks, at which point you could call mp.start() to resume playback (assuming mp is your MediaPlayer).
This assumes, of course, that your MediaPlayer is only paused when you switch networks, not stopped.
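A rough sketch of such a receiver (using the pre-Nougat CONNECTIVITY_ACTION broadcast; mp is assumed to be your MediaPlayer, and the receiver would be registered from an Activity or Service):
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

// Sketch: resume a merely-paused MediaPlayer when connectivity comes back.
BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo info = cm.getActiveNetworkInfo();
        if (info != null && info.isConnected() && !mp.isPlaying()) {
            mp.start();  // only works if the player was paused, not stopped/reset
        }
    }
};
registerReceiver(receiver, new IntentFilter(ConnectivityManager.CONNECTIVITY_ACTION));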
As Vidar says, re-establishing the connection will be treated by the server as a new connection.
It appears that I will have to double-buffer the audio playback, which means building a custom media player. This can provide continuous audio, but playback will still skip when listening to a live stream.
The MP3 file is a bit easier because I can know the playback position. Not so with the live stream.
As gmaster says, I'll need a broadcast receiver to establish a new connection when the network changes.
The audio buffer from the previous network connection should continue to playback while a new audio buffer is filled via the new connection.
When the new buffer is full enough to start playback I can switch playback to it.
If I am streaming a file, with server support and a little bit of work I can ensure that the current playback position data is in both buffers and switch seamlessly.
As the live stream buffers cannot be synchronized, there will inevitably be a glitch when they switch.
A larger buffer will avoid audio drop-out if the connection takes a while to establish, but will delay the first start of playback. An MP3 file can be downloaded and fill the buffer faster than real time, but the live stream will buffer in real time.
Chris.Jenkins mentions some MediaPlayer methods that can help, but points out that this does seem to need a custom framework. It will need to handle the conditions he mentions, and others.
If I can make it look pretty I'll post it here. I'm going to keep the question open.
For everyone using Android's voice recognition API, there used to be a handy RecognitionListener you could register that would push various events to your callbacks. In particular, there was the following onBufferReceived(byte[]) method:
public abstract void onBufferReceived (byte[] buffer)
Since: API Level 8. More sound has been received. The purpose of this function is to allow giving feedback to the user regarding the captured audio. There is no guarantee that this method will be called.
Parameters: buffer, a buffer containing a sequence of big-endian 16-bit integers representing a single channel audio stream. The sample rate is implementation dependent.
Although the method explicitly states that there is no guarantee it will be called, in ICS and prior it would effectively be called 100% of the time: regularly enough, at least, that by concatenating all the bytes received this way, you could reconstruct the entire audio stream and play it back.
For some reason, however, in the Jelly Bean SDK this magically stopped working. There's no deprecation notice and the code still compiles, but onBufferReceived is now never called. Technically this doesn't break their API (since it says there's "no guarantee" the method will be called), but clearly it is a breaking change for a lot of things that depended on this behaviour.
Does anybody know why this functionality was disabled, and whether there's a way to replicate its behaviour on Jelly Bean?
Clarification: I realize that the whole RecognizerIntent thing is an interface with multiple implementations (including some available on the Play Store), and that each of them can choose what to do with RecognitionListener. I am specifically referring to the default Google implementation that the vast majority of Jelly Bean phones use.
Google does not call this method in their Jelly Bean speech app (QuickSearchBox). It's simply not in the code. Unless there is an official comment from a Google engineer, I cannot give a definitive answer as to why they did this. I searched the developer forums but did not see any commentary about this decision.
The ICS default for speech recognition comes from Google's VoiceSearch.apk. You can decompile this APK and find that there is an Activity to handle an intent with the action android.speech.action.RECOGNIZE_SPEECH. In this APK I searched for "onBufferReceived" and found a reference to it in com.google.android.voicesearch.GoogleRecognitionService$RecognitionCallback.
With Jelly Bean, Google renamed VoiceSearch.apk to QuickSearch.apk and made a lot of new additions to the app (e.g. offline dictation). You would expect to still find an onBufferReceived call, but for some reason it is completely gone.
I too was using the onBufferReceived method and was disappointed that the (non-guaranteed) call to it was dropped in Jelly Bean. Well, if we can't grab the audio with onBufferReceived(), maybe there is a possibility of running an AudioRecord simultaneously with the voice recognition. Has anyone tried this? If not, I'll give it a whirl and report back.
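If anyone wants to experiment, a minimal AudioRecord capture sketch (whether it can actually run concurrently with the recognizer is device-dependent; on many devices the recognizer holds the microphone exclusively, and "capturing" is an illustrative stop flag you would manage yourself):
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Sketch: capture 16 kHz mono 16-bit PCM from the mic on a worker thread.
int sampleRate = 16000;
int bufSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);

short[] buffer = new short[bufSize / 2];
audioRecord.startRecording();
while (capturing) {
    int read = audioRecord.read(buffer, 0, buffer.length);
    // append buffer[0..read) to your own copy of the audio stream
}
audioRecord.stop();
audioRecord.release();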
I ran into the same problem. The reason I didn't just accept that "this does not work" is that Google Now's "note-to-self" records the audio and sends it to you. What I found in logcat while running the note-to-self operation was:
02-20 14:04:59.664: I/AudioService(525): AudioFocus requestAudioFocus() from android.media.AudioManager#42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1#424cca50
02-20 14:04:59.754: I/AbstractCardController.SelfNoteController(8675): #attach
02-20 14:05:01.006: I/AudioService(525): AudioFocus abandonAudioFocus() from android.media.AudioManager#42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1#424cca50
02-20 14:05:05.791: I/ActivityManager(525): START u0 {act=com.google.android.gm.action.AUTO_SEND typ=text/plain cmp=com.google.android.gm/.AutoSendActivity (has extras)} from pid 8675
02-20 14:05:05.821: I/AbstractCardView.SelfNoteCard(8675): #onViewDetachedFromWindow
This makes me believe that Google Now (the recognizer intent) abandons audio focus, and that it uses an audio recorder or something similar once the note-to-self tag appears in onPartialResults. I cannot confirm this; has anyone else tried to make this work?
I have a service that implements RecognitionListener, and I also override the onBufferReceived(byte[]) method. I was investigating why speech recognition is much slower to call onResults() on ICS and earlier. The only difference I could find is that onBufferReceived is called on phones running ICS or earlier. On Jelly Bean, onBufferReceived() is never called and onResults() is called significantly faster; I suspect that's because of the overhead of calling onBufferReceived every second or millisecond. Maybe that's why they did away with onBufferReceived().
I am trying to play multiple audio files, one after the other, and am currently using AsyncTasks to prepare and start the MediaPlayer, but I have failed to find a good way to move on to the next track at the end of the current one. Not every audio file will be played every time; whether each one plays is decided by a boolean value.
Any help is much appreciated.
I guess you have read android-sdk/docs/reference/android/media/MediaPlayer.html, which says:
When the playback reaches the end of stream, the playback completes. If the looping mode was set to true with setLooping(boolean), the MediaPlayer object shall remain in the Started state. If the looping mode was set to false, the player engine calls a user-supplied callback method, OnCompletion.onCompletion(), if an OnCompletionListener is registered beforehand via setOnCompletionListener(OnCompletionListener). The invocation of the callback signals that the object is now in the PlaybackCompleted state. While in the PlaybackCompleted state, calling start() can restart the playback from the beginning of the audio/video source.
So you may set a new data source, call prepareAsync(), and then start() in the completion callback. This way you get continuous playback, but it is not seamless.
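A minimal sketch of that chaining (the tracks list, the shouldPlay flags, and the playNext name are illustrative, not from the question; the skip honors the boolean you mentioned):
import java.io.IOException;
import java.util.List;
import android.media.MediaPlayer;

// Sketch: chain tracks in onCompletion, skipping any whose boolean flag is false.
private List<String> tracks;       // paths/URLs of the audio files (illustrative)
private List<Boolean> shouldPlay;  // the per-track boolean gate (illustrative)
private int index = 0;

private void playNext(MediaPlayer mp) {
    while (index < tracks.size() && !shouldPlay.get(index)) {
        index++;  // skip tracks flagged not to play this time
    }
    if (index >= tracks.size()) {
        return;   // playlist finished
    }
    try {
        mp.reset();
        mp.setDataSource(tracks.get(index++));
        mp.prepareAsync();  // onPrepared below starts playback when ready
    } catch (IOException e) {
        playNext(mp);       // skip a broken track and try the next one
    }
}

// Wire-up, done once:
mediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer mp) { mp.start(); }
});
mediaPlayer.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) { playNext(mp); }
});
playNext(mediaPlayer);  // kick off the first track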
It's doubtful that using MediaPlayer for this will work the way you want it to. Try this tutorial:
http://www.droidnova.com/creating-sound-effects-in-android-part-1,570.html
If that doesn't work, you'll probably have to mix the sounds together yourself, then stream the result directly to the hardware using AudioTrack. That's more low-level, but it will give you the most control. Whether the AudioManager solution works for you depends on what you are doing; it's definitely the simpler route. But if you're trying to line up two samples so that when one finishes the next begins, like in a music app, you will probably have to mix and stream the audio yourself.
http://developer.android.com/reference/android/media/AudioTrack.html
Algorithm to mix sound
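To illustrate the mixing idea, a hedged sketch assuming both sources are already decoded to 16-bit mono PCM at the same sample rate (bufA and bufB are illustrative names for decoded frames):
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Sketch: mix two 16-bit PCM buffers by summing with clipping, then write
// the result to a streaming AudioTrack.
short[] mix(short[] a, short[] b) {
    short[] out = new short[Math.min(a.length, b.length)];
    for (int i = 0; i < out.length; i++) {
        int sum = a[i] + b[i];  // widen to int so the sum cannot overflow
        out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
    }
    return out;
}

int minBuf = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();
short[] mixed = mix(bufA, bufB);
track.write(mixed, 0, mixed.length);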
I have a simple app which plays a short sound repeatedly by invoking the play() method on the audio element in JavaScript. It works well on desktop browsers, iPads, iPhones, etc. On a mobile device running Android 2.3.3, the first time I play the sound I hear it immediately after invoking the play() method, but on subsequent invocations there is a noticeable and variable delay.
I have done some sleuthing and found that the device fetches the audio file from the server each time the play() method is invoked. I can invoke the load() method on the audio element to re-load it after each play, thus queueing it up for the next play, but there are a number of problems with that band-aid. I'd really like to make the browser keep the audio element loaded permanently, instead of unloading it as soon as it finishes playing. Does anyone know if that's possible?
EDIT: I've done a little more investigating and found that after playing the sound, the audio element's readyState remains at HAVE_ENOUGH_DATA, even though the browser won't play the sound again without re-fetching it from the server. I believe this is a bug. I'd hoped to use readyState to detect browsers that unload after playing, and only explicitly load when necessary, but that's not going to work.
The more experiments I do, the more rough edges I find in Android 2.3.3's implementation of the HTML5 audio tag. There's a lot broken there, at least on the Droid X phone I'm using for testing.
The best I have come up with so far is the band-aid alluded to in my original question: as soon as playback ends, invoke the load() method to prepare for the next play():
if (navigator.userAgent.toLowerCase().indexOf("android") > -1) {
    audElt.addEventListener('ended', function () {
        setTimeout(function () { audElt.load(); }, 1000); // reload after playback has truly ended
    }, false);
}
I had to restrict the work-around to Android user agents, because simply invoking the load() method creates problems on Chrome and generates unnecessary trips to the server on non-Android systems.
I had to add a 1-sec delay, because if I simply invoked load() from the "ended" handler, it interrupted the playback, which, apparently, hadn't really "ended" yet....
Of course, it's still fetching the sound from the server repeatedly, so if you try to play the sound multiple times in rapid succession, things go south quickly.
This is the best solution I've found. Another option is to just use the video tag, but there are some problems with that as well. Nothing seems to work well enough to implement.
Luckily I'm using PhoneGap, so I'll give their audio methods a shot.
You can bind to the timeupdate event and pause() the playback a second before reaching the end of the audio, then rewind before playing it again. Android will not flush the audio then.