I know there are quite a few posts about this already, and I have read pretty much all of them (or so it feels, at least). Yet my service refuses to work at all.
In short: I have a widget that should launch the SpeechRecognizer service when clicked. The service should then listen to input and (for now) print it out to logcat.
The service starts correctly (the logs in onCreate and onDestroy are printed), but the actual voice recognition part doesn't work.
I have added the service to the Android manifest and added the "android.permission.RECORD_AUDIO" permission.
Part of the issue is that
SpeechRecognizer.isRecognitionAvailable(this)
returns false.
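A quick way to see why it returns false is to check whether any recognition service is installed at all, since that is what isRecognitionAvailable() looks for. This is a diagnostic sketch, not from the original code; the helper name is made up:

import java.util.List;
import android.content.Intent;
import android.content.pm.ResolveInfo;
import android.speech.RecognitionService;

// isRecognitionAvailable() returns false when no installed app
// implements RecognitionService, e.g. on an emulator or a device
// without Google voice search. This logs how many are installed.
private void logRecognitionServices() {
    List<ResolveInfo> services = getPackageManager().queryIntentServices(
            new Intent(RecognitionService.SERVICE_INTERFACE), 0);
    Log.d(TAG, "installed recognition services: " + services.size());
}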
Here's the important bit of code from the service:
@Override
public void onCreate() {
    super.onCreate();
    mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);
    mSpeechRecognizer.setRecognitionListener(mListener);
    if (SpeechRecognizer.isRecognitionAvailable(this))
        Log.d(TAG, "we are go for speech recognition");
    else
        Log.d(TAG, "red light for speech recognition :c");
}

@Override
public void onDestroy() {
    super.onDestroy();
    Log.d(TAG, "i'm being destroyed :c");
}

@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    Log.d(TAG, "we made it");
    playTtsForMessage("We made it", true);
    speechIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    speechIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    speechIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, this.getPackageName());
    mSpeechRecognizer.startListening(speechIntent);
    Log.d(TAG, "started listening");
    return super.onStartCommand(intent, flags, startId);
}

private void broadcastStopIntent() {
    Intent intent = new Intent(Intents.ACTION_VOICE_COMMANDS_STOP);
    AddApplication.getInstance().sendBroadcast(intent, null);
}
All the log outputs get printed, apart from the ones I've placed inside the listener.
If you're interested, here's the listener:
private RecognitionListener mListener = new RecognitionListener() {

    @Override
    public void onReadyForSpeech(Bundle params) {
    }

    @Override
    public void onBeginningOfSpeech() {
        Log.d(TAG, "speech started");
    }

    @Override
    public void onRmsChanged(float rmsdB) {
    }

    @Override
    public void onBufferReceived(byte[] buffer) {
    }

    @Override
    public void onEndOfSpeech() {
        Log.d(TAG, "speech ended");
        broadcastStopIntent();
    }

    @Override
    public void onError(int error) {
    }

    @Override
    public void onResults(Bundle results) {
        ArrayList<String> spoken = results
                .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        String msg = "";
        for (String s : spoken) {
            msg += s + " ";
        }
        Log.d(TAG, "what i understood from that was: " + msg);
        WidgetVoiceCommandProviderIntentService.this.stopSelf();
    }

    @Override
    public void onPartialResults(Bundle partialResults) {
    }

    @Override
    public void onEvent(int eventType, Bundle params) {
    }
};
I tried the workaround from Android Speech Recognition as a service on Android 4.1 & 4.2, which yielded no results. Any suggestions?
I was wondering: how can I keep the device listening for voice commands with voice recognition while the device is asleep? The idea is that I would like the device to respond to my voice even if the screen is locked or has timed out.
Is this possible? I have tried using this as a service and as an interface, and it stops listening once the screen locks. Can I get any help with this? This is my class.
public class VoiceEngineService extends Activity {

    private boolean isSpeakingDone = false; // default setting
    private SpeechRecognizer sr = SpeechRecognizer.createSpeechRecognizer(this);
    private AudioManager mAudioManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        setContentView(R.layout.wait_for_speech);
        // mute beep sound
        mAudioManager.setStreamSolo(AudioManager.STREAM_VOICE_CALL, true);
        sr.setRecognitionListener(new listener());
        Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM); // LANGUAGE_MODEL_WEB_SEARCH
        i.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getApplication()
                .getClass().getName());
        i.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 6);
        i.putExtra(RecognizerIntent.EXTRA_PROMPT, "");
        i.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS, 5500);
        sr.startListening(i);
    }

    class listener implements RecognitionListener {

        @Override
        public void onReadyForSpeech(Bundle params) {
        }

        @Override
        public void onBeginningOfSpeech() {
        }

        @Override
        public void onRmsChanged(float rmsdB) {
        }

        @Override
        public void onBufferReceived(byte[] buffer) {
        }

        @Override
        public void onEndOfSpeech() {
        }

        @Override
        public void onError(int error) {
            if (SharedPref.getVoiceController() == false) {
                sr.cancel();
                Intent i = new Intent();
                sr.startListening(i);
            } else {
                sr.stopListening();
                sr.destroy();
                finish();
            }
        }

        @Override
        public void onResults(Bundle results) {
            isSpeakingDone = true;
            ArrayList<String> mDataList = results
                    .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            Intent i = new Intent();
            i.putStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS, mDataList);
            setResult(RESULT_OK, i);
            finish();
        }

        @Override
        public void onPartialResults(Bundle partialResults) {
        }

        @Override
        public void onEvent(int eventType, Bundle params) {
        }
    } // end listener class

    @Override
    protected void onPause() {
        if (isSpeakingDone == false) {
            finish();
        }
        super.onPause();
    }

    @Override
    protected void onStop() {
        // when speaking is true finish() has already been called
        if (isSpeakingDone == false) {
            finish();
        }
        super.onStop();
    }

    @Override
    protected void onDestroy() {
        sr.stopListening();
        sr.destroy();
        super.onDestroy();
    }
}
You have to implement your voice listener in a service. Create a class that extends Service and add the logic to take care of recording.
If you already tried a service, then it might be that you tried to redirect commands to an activity, which most likely has been stopped by the Android OS. Generally, when it comes to doing work while the phone is in lock mode, you can only hope to accomplish tasks in one or more services coupled together.
When you are in an Activity, it will of course be shut down by the Android OS when it goes out of scope, but services can keep running in the background unless shut down explicitly by your own code, or in rare cases when Android decides it needs the memory and processor power for other tasks.
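To make that concrete, here is a minimal sketch of hosting the recognizer in a Service. The class name is made up and this is only an outline; by itself it does not keep the recognizer alive through screen lock:

import java.util.ArrayList;
import android.app.Service;
import android.content.Intent;
import android.os.Bundle;
import android.os.IBinder;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;

public class VoiceService extends Service {

    private static final String TAG = "VoiceService";
    private SpeechRecognizer sr;

    @Override
    public void onCreate() {
        super.onCreate();
        // Create the recognizer once a valid Context exists; a field
        // initializer (as in the Activity above) runs too early.
        sr = SpeechRecognizer.createSpeechRecognizer(this);
        sr.setRecognitionListener(new RecognitionListener() {
            public void onReadyForSpeech(Bundle params) {}
            public void onBeginningOfSpeech() {}
            public void onRmsChanged(float rmsdB) {}
            public void onBufferReceived(byte[] buffer) {}
            public void onEndOfSpeech() {}
            public void onError(int error) {}
            public void onPartialResults(Bundle partialResults) {}
            public void onEvent(int eventType, Bundle params) {}
            public void onResults(Bundle results) {
                ArrayList<String> spoken = results
                        .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                Log.d(TAG, "heard: " + spoken);
            }
        });
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        sr.startListening(i);
        // START_STICKY asks the OS to recreate the service if it is killed.
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        sr.destroy();
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}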
I am integrating the GameCircle SDK & Whisper Sync in my game. I have implemented the code, but it has caused an issue: when I load the game, get the game state from Whisper Sync, and set my local variables, white patches appear randomly in some places instead of the proper image. When I turn off GameCircle & Whisper Sync, it runs fine.
My game is developed using Cocos2d Android.
Has anyone encountered such an issue?
I have attached the image for reference.
Some Code:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
            WindowManager.LayoutParams.FLAG_FULLSCREEN);
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON,
            WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
    CCGLSurfaceView _glSurfaceView = new CCGLSurfaceView(this);
    setContentView(_glSurfaceView);
    CCDirector.sharedDirector().attachInView(_glSurfaceView);
    CCDirector.sharedDirector().setDisplayFPS(false);
    CCDirector.sharedDirector().setAnimationInterval(1.0f / 60.0f);
    CCScene scene = IntroLayer.scene();
    CCDirector.sharedDirector().runWithScene(scene);
}

@Override
public void onResume() {
    super.onResume();
    if (IS_AMAZON) {
        AmazonGamesClient.initialize(this, callback, myGameFeatures);
        AmazonGamesClient.getWhispersyncClient()
                .setWhispersyncEventListener(new WhispersyncEventListener() {

                    @Override
                    public void onAlreadySynchronized() {
                        super.onAlreadySynchronized();
                        System.out.println("FA here onAlreadySynchronized");
                        loadGameScene();
                    }

                    @Override
                    public void onDataUploadedToCloud() {
                        super.onDataUploadedToCloud();
                        System.out.println("FA here onDataUploadedToCloud");
                    }

                    @Override
                    public void onDiskWriteComplete() {
                        super.onDiskWriteComplete();
                        System.out.println("FA here onDiskWriteComplete");
                    }

                    @Override
                    public void onFirstSynchronize() {
                        super.onFirstSynchronize();
                        System.out.println("FA here onFirstSynchronize");
                        loadGameScene();
                    }

                    @Override
                    public void onNewCloudData() {
                        super.onNewCloudData();
                        System.out.println("FA here onNewCloudData");
                    }

                    @Override
                    public void onSyncFailed(FailReason reason) {
                        super.onSyncFailed(reason);
                        System.out.println("FA here onSyncFailed reason: "
                                + reason.name());
                    }

                    @Override
                    public void onThrottled() {
                        super.onThrottled();
                        System.out.println("FA here onThrottled");
                    }
                });
        Log.i(TAG, "onResume: call initiateGetUserIdRequest");
        PurchasingManager.initiateGetUserIdRequest();
        Log.i(TAG, "onResume: call initiateItemDataRequest for skus: "
                + LAppInfo.getInstance().getList());
        Set<String> skus = new HashSet<String>(LAppInfo.getInstance().getList());
        PurchasingManager.initiateItemDataRequest(skus);
    }
    CCDirector.sharedDirector().resume();
}

private AmazonGamesCallback callback = new AmazonGamesCallback() {

    @Override
    public void onServiceNotReady(AmazonGamesStatus status) {
        // unable to use service
        System.out.println("FA here callback onServiceNotReady: " + status.name());
    }

    @Override
    public void onServiceReady(AmazonGamesClient amazonGamesClient) {
        System.out.println("FA here callback onServiceReady: ");
        agsClient = amazonGamesClient;
    }
};

private void loadGameScene() {
    LoadData();
    CCScene mainMenu = LevelMenuScene.scene();
    CCDirector.sharedDirector().replaceScene(
            CCFadeTransition.transition(0.5f, mainMenu));
}
I solved it with a workaround: I keep a boolean flag and set it to true after the data is loaded in the onFirstSynchronize/onAlreadySynchronized callbacks, and I load the scene on the main thread instead of in the Whispersync callback. I'll be glad if someone finds a proper solution.
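A sketch of what that workaround could look like (the field, method, and class names here are made up; the point is that the Whispersync callbacks only flip a flag, and the scene is replaced from the main thread, where the GL textures were created):

import android.app.Activity;
import android.os.Handler;
import android.os.Looper;

public class GameActivity extends Activity {

    // Flipped by onFirstSynchronize()/onAlreadySynchronized() instead of
    // calling loadGameScene() directly from the Whispersync thread.
    private volatile boolean dataLoaded = false;
    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    // Call this from the Whispersync callbacks:
    private void onGameDataReady() {
        dataLoaded = true;
        // Hop to the main thread before touching the scene graph.
        mainHandler.post(new Runnable() {
            @Override
            public void run() {
                if (dataLoaded) {
                    loadGameScene();
                }
            }
        });
    }

    private void loadGameScene() {
        // replaceScene(...) as in the code above
    }
}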
I have a ViewPager in my application. When the user swipes right or left, I use the TTS engine to speak the text and a MediaPlayer to play a sound.
The problem is that both play simultaneously. How do I play the sound only once the TTS engine has finished speaking the text?
P.S.: I don't want to use sleep or wait.
Update:
Here is my code:
#SuppressLint("NewApi")
#SuppressWarnings("deprecation")
#Override
public void onInit(int status) {
if (status == TextToSpeech.SUCCESS) {
if (result == TextToSpeech.LANG_MISSING_DATA
|| result == TextToSpeech.LANG_NOT_SUPPORTED) {
} else {
//Do Something here
}
if(Build.VERSION.SDK_INT >= 15 ){
UtteranceProgressListener listener = new UtteranceProgressListener() {
#Override
public void onStart(String utteranceId) {
// TODO Auto-generated method stub
}
#Override
public void onError(String utteranceId) {
// TODO Auto-generated method stub
}
#Override
public void onDone(String utteranceId) {
// TODO Auto-generated method stub
//start MediaPlayer
playMedia(viewPager.getCurrentItem());
}
};
tts.setOnUtteranceProgressListener(listener);
}
else{
tts.setOnUtteranceCompletedListener(new OnUtteranceCompletedListener(){
#Override
public void onUtteranceCompleted(String arg0) {
playMedia(viewPager.getCurrentItem());
}
});
}
} else {
Intent installIntent = new Intent();
installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
startActivity(installIntent);
}
}
You have to give it an utterance ID parameter, otherwise the listeners are never called:
HashMap<String, String> params = new HashMap<String, String>();
params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "stringId");
textToSpeech.speak(string, TextToSpeech.QUEUE_ADD, params);
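On API 21 and later the HashMap variant is deprecated and the utterance ID can be passed directly; a one-line sketch, assuming the same textToSpeech object and string:

// API 21+ overload: the utterance ID is the last parameter.
textToSpeech.speak(string, TextToSpeech.QUEUE_ADD, null, "stringId");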
In Android there are two APIs to detect when the TTS engine has finished speaking:
> Android 4 (ICS)
UtteranceProgressListener listener = new UtteranceProgressListener() {

    @Override
    public void onStart(String utteranceId) {
    }

    @Override
    public void onError(String utteranceId) {
    }

    @Override
    public void onDone(String utteranceId) {
        // start MediaPlayer
    }
};
yourTTSObject.setOnUtteranceProgressListener(listener);
Prior to ICS (Android 4.0) you can use:
yourTTSObject.setOnUtteranceCompletedListener(new OnUtteranceCompletedListener() {

    @Override
    public void onUtteranceCompleted(String arg0) {
        // start your mediaplayer here
    }
});
Take a look at the documentation here. Note that, as the other answer points out, these callbacks only fire if you pass an utterance ID when calling speak().
I have a class that extends AsyncTask.
When the background task is done, in onPostExecute I set the static properties of another class to the result of the background work, and I want to broadcast it out so that the receiver in the other class can update the interface.
Here is the code of onPostExecute:
protected void onPostExecute(String result) {
    Log.d(tag, "post executed " + result);
    // do sth here
    if (result != null) {
        result = result.trim();
        String temp_result[];
        if (result.contains("|")) {
            temp_result = result.split("\\|");
            MyGPS.location_info = temp_result[1];
            Log.d(result, "contains | : " + MyGPS.location_info);
        } else if (result.equalsIgnoreCase("300 OK")) {
            Log.d(result, "in 300 OK BUT UNKNOWN : " + result);
            MyGPS.location_info = "Unknown";
        } else if (result.equalsIgnoreCase("400 ERROR")) {
            Log.d(result, "400 ERROR : " + result);
        } else {
            Log.d(result, "else : " + result);
        }
        // assemble data bundle to be broadcasted
        // myFilteredResponseThread = new Intent(GPS_FILTER);
        myFilteredResponseThread.putExtra("location_info_post",
                MyGPS.location_info);
        // CAN'T USE SEND BROADCAST METHOD?
        // myFilteredResponseThread.
        // Log.e(">>GPS_Service<<", "location_info" + MyGPS.location_info);
    }
}
After that, I can't call the sendBroadcast method; it is undefined. Why?
If I understood you correctly, the following is the solution.
class MyService extends Service {

    @Override
    public IBinder onBind(Intent arg0) {
        return null;
    }

    class GPSListener implements LocationListener {

        public void onLocationChanged(Location arg0) {
            // you can get a Context as follows
            MyService.this.getBaseContext().sendBroadcast(new Intent("Hi"));
        }

        public void onProviderDisabled(String arg0) {
        }

        public void onProviderEnabled(String arg0) {
        }

        public void onStatusChanged(String arg0, int arg1, Bundle arg2) {
        }
    }
}
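The root cause of the "undefined" error is that AsyncTask is not a Context, so sendBroadcast() simply does not exist on it. One common fix is to hand the task a Context and broadcast through that; a sketch, where the class name, action string, and placeholder result are made up:

import android.content.Context;
import android.content.Intent;
import android.os.AsyncTask;

class LocationTask extends AsyncTask<Void, Void, String> {

    // AsyncTask has no sendBroadcast(); keep a Context and use its method.
    private final Context context;

    LocationTask(Context context) {
        // The application context avoids leaking an Activity.
        this.context = context.getApplicationContext();
    }

    @Override
    protected String doInBackground(Void... params) {
        return "300 OK"; // placeholder for the real network result
    }

    @Override
    protected void onPostExecute(String result) {
        Intent intent = new Intent("my.package.GPS_FILTER"); // hypothetical action
        intent.putExtra("location_info_post", result);
        context.sendBroadcast(intent);
    }
}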
I'm trying to get Android's TTS to run inside a service, but I have no idea why it isn't working. It compiles and doesn't crash, but it just doesn't work.
The Toast notifications do work, though.
package alarm.test;

import android.app.Service;
import com.google.tts.TextToSpeechBeta;
import android.content.Intent;
import android.os.IBinder;
import android.widget.Toast;

public class MyAlarmService extends Service {

    private TextToSpeechBeta myTts;

    private TextToSpeechBeta.OnInitListener ttsInitListener = new TextToSpeechBeta.OnInitListener() {
        public void onInit(int arg0, int arg1) {
            myTts.speak("", 0, null);
        }
    };

    @Override
    public void onCreate() {
        myTts = new TextToSpeechBeta(this, ttsInitListener);
        Toast.makeText(this, "MyAlarmService.onCreate()", Toast.LENGTH_LONG).show();
    }

    @Override
    public IBinder onBind(Intent intent) {
        myTts.speak("something is working", TextToSpeechBeta.QUEUE_FLUSH, null);
        Toast.makeText(this, "MyAlarmService.onBind()", Toast.LENGTH_LONG).show();
        return null;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        Toast.makeText(this, "MyAlarmService.onDestroy()", Toast.LENGTH_LONG).show();
    }

    @Override
    public void onStart(Intent intent, int startId) {
        super.onStart(intent, startId);
        Toast.makeText(this, "MyAlarmService.onStart()", Toast.LENGTH_LONG).show();
    }

    @Override
    public boolean onUnbind(Intent intent) {
        Toast.makeText(this, "MyAlarmService.onUnbind()", Toast.LENGTH_LONG).show();
        return super.onUnbind(intent);
    }
}
You can do it like below; it's working for me.
You have to create an activity to start this service, like this: this.startService(intent)
public class TTSService extends Service implements TextToSpeech.OnInitListener {

    private String str;
    private TextToSpeech mTts;
    private static final String TAG = "TTSService";

    @Override
    public IBinder onBind(Intent arg0) {
        return null;
    }

    @Override
    public void onCreate() {
        mTts = new TextToSpeech(this,
                this // OnInitListener
        );
        mTts.setSpeechRate(0.5f);
        Log.v(TAG, "oncreate_service");
        str = "turn left please ";
        super.onCreate();
    }

    @Override
    public void onDestroy() {
        if (mTts != null) {
            mTts.stop();
            mTts.shutdown();
        }
        super.onDestroy();
    }

    @Override
    public void onStart(Intent intent, int startId) {
        sayHello(str);
        Log.v(TAG, "onstart_service");
        super.onStart(intent, startId);
    }

    @Override
    public void onInit(int status) {
        Log.v(TAG, "oninit");
        if (status == TextToSpeech.SUCCESS) {
            int result = mTts.setLanguage(Locale.US);
            if (result == TextToSpeech.LANG_MISSING_DATA ||
                    result == TextToSpeech.LANG_NOT_SUPPORTED) {
                Log.v(TAG, "Language is not available.");
            } else {
                sayHello(str);
            }
        } else {
            Log.v(TAG, "Could not initialize TextToSpeech.");
        }
    }

    private void sayHello(String str) {
        mTts.speak(str, TextToSpeech.QUEUE_FLUSH, null);
    }
}
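For completeness, a minimal sketch of the activity side mentioned above (the activity name is made up; remember to also declare TTSService in AndroidManifest.xml):

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

public class LauncherActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Start the TTS service defined above.
        startService(new Intent(this, TTSService.class));
    }
}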
https://developer.android.com/reference/android/speech/tts/TextToSpeechService.html
Since API level 14, Android has included a default TextToSpeechService class that does what you want.