Android Making a Call Only Works the Second Time

So I'm making a speech-to-text app using the voice assistant. I'm trying to add a phone call feature so the user can speak a number and the app will call it.
I'm almost there, but the number only rings the second time I speak it. The first time it says "Call not sent".
I figured out the reason: when the user speaks the number, the variable isn't updated before the "call" function runs. I've tried almost everything, but the variable doesn't update correctly.
For example:
private TextView txtSpeechInput;
public String num = "123";

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    switch (requestCode) {
        case REQ_CODE_SPEECH_INPUT: {
            if (resultCode == RESULT_OK && null != data) {
                ArrayList<String> result = data
                        .getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                txtSpeechInput.setText(result.get(0).replaceAll("\\s+", ""));
                num = txtSpeechInput.getText().toString();
            }
            break;
        }
    }
}
public void dialPhoneNumber(String phone) {
    Intent intent = new Intent(Intent.ACTION_CALL);
    intent.setData(Uri.parse("tel:" + phone));
    if (intent.resolveActivity(getPackageManager()) != null) {
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CALL_PHONE) == PackageManager.PERMISSION_GRANTED) {
            startActivity(intent);
        }
        return;
    }
}
private void processResult(String command) {
    command = command.toLowerCase();
    if (command.indexOf("time") != -1) {
        Date now = new Date();
        String time = DateUtils.formatDateTime(this, now.getTime(), DateUtils.FORMAT_SHOW_TIME);
        speak("The time is " + time);
    }
    if (command.indexOf("date") != -1) {
        String date = DateFormat.getDateInstance().format(new Date());
        speak("The date is " + date);
    } else if (command.indexOf("open") != -1) {
        if (command.indexOf("browser") != -1) {
            Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("https://www.google.co.uk/"));
            startActivity(intent);
        }
    }
    if (command.indexOf("call") != -1) {
        promptSpeechInput();
        try {
            Thread.sleep(18000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        dialPhoneNumber(num);
    }
}
In this code, when the user says "make a call" it opens another prompt to take the speech input, stores it in txtSpeechInput (where it says result.get(0)), and at that stage I update the "num" variable and convert it to a string.
It then runs dialPhoneNumber.
Now let's say I run it the first time and speak "07123456789": it says "Call not sent" because it's trying to call the default 123. If I speak it again, or speak a different number, then it rings 07123456789.
How and why does it not update before calling the phone feature?

Based on danny117's comment...
promptSpeechInput();
try {
    Thread.sleep(18000);
} catch (InterruptedException e) {
    e.printStackTrace();
}
dialPhoneNumber(num);
promptSpeechInput starts a new activity, and the result from that activity is the spoken text? If that's right, then why are you prompting for input, sleeping for 18 seconds (never sleep on the main thread by the way), and then assuming your input is ready? As danny says, prompting for input should be the last thing that if block does. Dialing the number should be initiated from onActivityResult.
Also, why is there a default phone number of "123"? That will never be correct, so it is not a sensible default. And to reiterate: if you are putting a thread to sleep for a fixed amount of time while waiting for something else to happen, you're almost certainly approaching it the wrong way. And if you're putting the main thread to sleep in Android, you're absolutely doing it the wrong way.
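A minimal sketch of that restructuring, reusing the question's REQ_CODE_SPEECH_INPUT, txtSpeechInput, promptSpeechInput() and dialPhoneNumber() members (not the asker's exact code): the "call" branch only launches the speech prompt, and dialing happens in onActivityResult once the result has actually arrived, with no sleeping and no default number.

@Override
private void processResult(String command) {
    command = command.toLowerCase();
    if (command.contains("call")) {
        // Last thing this branch does: ask for the number. No Thread.sleep().
        promptSpeechInput();
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQ_CODE_SPEECH_INPUT && resultCode == RESULT_OK && data != null) {
        ArrayList<String> result = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        String number = result.get(0).replaceAll("\\s+", "");
        txtSpeechInput.setText(number);
        // The spoken number is guaranteed to be available here, so dial immediately.
        dialPhoneNumber(number);
    }
}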

Related

MediaProjection issues on Android 9+

I made an OCR application that takes a screenshot using Android MediaProjection and processes the text in the image. This works fine, except on Android 9+. When MediaProjection starts, a window always pops up warning about sensitive data that could be recorded, with buttons to cancel or start recording. How can I make this window show only once?
I tried preventing it from popping up by creating two extra private static variables to store the intent and result data of the MediaProjection, and reusing them if they're not null, but it did not work (I read about this method in another post).
// initializing MP
mProjectionManager = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);

// Starting MediaProjection
private void startProjection() {
    startActivityForResult(mProjectionManager.createScreenCaptureIntent(), REQUEST_CODE);
}
// onActivityResult
@Override
protected void onActivityResult(final int requestCode, final int resultCode, final Intent data) {
    if (requestCode == 100) {
        if (mProjectionManager == null) {
            cancelEverything();
            return;
        }
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                if (mProjectionManager != null)
                    sMediaProjection = mProjectionManager.getMediaProjection(resultCode, data);
                else
                    cancelEverything();
                if (sMediaProjection != null) {
                    File externalFilesDir = getExternalFilesDir(null);
                    if (externalFilesDir != null) {
                        STORE_DIRECTORY = externalFilesDir.getAbsolutePath() + "/screenshots/";
                        File storeDirectory = new File(STORE_DIRECTORY);
                        if (!storeDirectory.exists()) {
                            boolean success = storeDirectory.mkdirs();
                            if (!success) {
                                Log.e(TAG, "failed to create file storage directory.");
                                return;
                            }
                        }
                    } else {
                        Log.e(TAG, "failed to create file storage directory, getExternalFilesDir is null.");
                        return;
                    }
                    // display metrics
                    DisplayMetrics metrics = getResources().getDisplayMetrics();
                    mDensity = metrics.densityDpi;
                    mDisplay = getWindowManager().getDefaultDisplay();
                    // create virtual display depending on device width / height
                    createVirtualDisplay();
                    // register orientation change callback
                    mOrientationChangeCallback = new OrientationChangeCallback(getApplicationContext());
                    if (mOrientationChangeCallback.canDetectOrientation()) {
                        mOrientationChangeCallback.enable();
                    }
                    // register media projection stop callback
                    sMediaProjection.registerCallback(new MediaProjectionStopCallback(), mHandler);
                }
            }
        }, 2000);
    }
}
My code works fine on Android versions below 9. On older Android versions I can choose to keep the decision to grant recording permission, and the dialog never shows up again. So what can I do on Android 9?
Thanks in advance, I'm happy for every idea you have :)
Well, the problem was that I was calling
startActivityForResult(mProjectionManager.createScreenCaptureIntent(), REQUEST_CODE);
every time, which is not necessary (createScreenCaptureIntent() leads to the dialog window that requests user interaction).
My solution makes the dialog appear only once (if the application is closed, it will ask for permission one more time).
All I had to do was add additional private static variables of type Intent and int.
private static Intent staticIntentData;
private static int staticResultCode;
In onActivityResult I assign those variables from the passed result code and intent:
if (staticResultCode == 0 && staticIntentData == null) {
    sMediaProjection = mProjectionManager.getMediaProjection(resultCode, data);
    staticIntentData = data;
    staticResultCode = resultCode;
} else {
    sMediaProjection = mProjectionManager.getMediaProjection(staticResultCode, staticIntentData);
}
Every time I call my startProjection method, I check whether they are null:
if (staticIntentData == null)
    startActivityForResult(mProjectionManager.createScreenCaptureIntent(), REQUEST_CODE);
else
    captureScreen();
If they are null it requests permission; if not, it starts the projection with the static intent data and static result code, so there is no need to ask for that permission again; just reuse what you got in onActivityResult.
sMediaProjection = mProjectionManager.getMediaProjection(staticResultCode, staticIntentData);
Simple as that! Now it only shows a single time each time you use the app. I guess that's what Google wants, because there's no "keep decision" checkbox in that dialog like in previous Android versions.
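Put together, a minimal sketch of the whole pattern might look like this (a hypothetical consolidation of the answer's snippets, assuming the same sMediaProjection, mProjectionManager, REQUEST_CODE and captureScreen() members; the cancelled-result check is simplified):

// Hypothetical consolidation, not the answerer's verbatim code.
private static Intent staticIntentData;   // cached screen-capture grant
private static int staticResultCode;      // cached result code

private void startProjection() {
    if (staticIntentData == null) {
        // First run: this shows the Android 9+ capture-permission dialog once.
        startActivityForResult(mProjectionManager.createScreenCaptureIntent(), REQUEST_CODE);
    } else {
        // Later runs: recreate the projection from the cached grant, no dialog.
        sMediaProjection = mProjectionManager.getMediaProjection(staticResultCode, staticIntentData);
        captureScreen();
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_CODE && resultCode == RESULT_OK && data != null) {
        // Cache the grant so startProjection() can skip the dialog next time.
        staticIntentData = data;
        staticResultCode = resultCode;
        sMediaProjection = mProjectionManager.getMediaProjection(staticResultCode, staticIntentData);
        captureScreen();
    }
}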

Random signal 11 (SIGSEGV)

For a few days now I have been getting SIGSEGV errors at random points in my app. It crashes while I'm doing nothing.
I started taking the code apart to see which line was causing this segmentation fault, and was most shocked when I found out that it was (drum roll, please):
Activity.setResult(Activity.RESULT_OK, intent)
WHAT? This still doesn't make ANY sense to me at all! Below I will show you my code before and after; can anyone tell me WHY the SIGSEGV was happening?
Before
CommentActivity:
val comments = ArrayList<Comment>()
comments.add(comment)
/*Set the results*/
val intent = Intent()
intent.putExtra("comment_result", Utils.General.serializeToJson(comments))
this.setResult(Activity.RESULT_OK, intent)
this.finish()
MainActivity:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
    switch (requestCode) {
        // catch data from CommentActivity
        case (INTENT_COMMENT): {
            if (resultCode == RESULT_OK) {
                String comments = intent.getExtras().getString("comment_result");
                if (comments != null && !comments.isEmpty()) {
                    /*Deserialize the comments*/
                    Type type = new TypeToken<ArrayList<Comment>>() {}.getType();
                    ArrayList<Comment> list = Utils.General.Companion.deserializeFromJson(comments, type);
                    /*Save the instance to the database and expect the database observable to call the upload thread*/
                    DataSource.INSTANCE.saveComments(list);
                } else {
                    showToast(getResources().getString(R.string.picture_lost), Toast.LENGTH_SHORT);
                }
            }
            break;
        }
    }
}
==== This crashed in MainActivity after between 10 seconds and 1 minute ====
After - No Crash
CommentActivity:
/*Set the results*/
DataSource.saveComments(comments)
this.finish()

Android Wear DataItem Syncing / parallel use of Google now

I'm developing an app for Android Wear which collects sensor data (e.g. acceleration) and syncs it with a corresponding app on the mobile phone. Each time a sensor is triggered (onSensorChanged), the following code is executed:
Runnable runnable = new Runnable() {
    @Override
    public void run() {
        PutDataMapRequest dataMap = PutDataMapRequest.create(WearConstants.WTP_SENSOR_DATA_CHANGED + sensorType);
        dataMap.getDataMap().putInt(WearConstants.ACCURACY, accuracy);
        dataMap.getDataMap().putLong(WearConstants.TIMESTAMP_WEAR, timestamp);
        dataMap.getDataMap().putFloatArray(WearConstants.VALUES, values);
        Wearable.DataApi.putDataItem(mApiClient, dataMap.asPutDataRequest()).setResultCallback(new ResultCallback<DataApi.DataItemResult>() {
            @Override
            public void onResult(DataApi.DataItemResult dataItemResult) {
                Log.d(TAG, "Sending sensor data: " + dataItemResult.getStatus().isSuccess());
            }
        });
    }
};
Thread thread = new Thread(runnable);
thread.start();
This works pretty well; the sensor data is transferred to the mobile phone and displayed in the app. The second feature of the app is the ability to take notes. There are two possibilities for taking notes: a) press a button while the app is open, and b) trigger it with "Ok Google, take a note" while the app is closed (see the Google documentation).
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
startActivityForResult(intent, WearConstants.W_SPEECH);

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    switch (requestCode) {
        case WearConstants.W_SPEECH: {
            if (resultCode == RESULT_OK && data != null) {
                List<String> results = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                String spokenText = results.get(0);
                sendSpeech(spokenText);
            }
        }
    }
}
This works well too, but only if the app is not listening for sensor changes. And this is the problem: each time I trigger a voice action while the app (or background service) is listening for sensor changes, the voice is not recognized and I get the error "Google not reachable". It seems to me like there is not enough capacity on the Bluetooth LE connection. Therefore I tried to batch the sensor data (in an ArrayList) and pause sending to the mobile phone while listening for voice commands, but there was no difference. After five days of trying to find a solution I am getting a little desperate now :-) I hope you have new ideas or tips on what to try next.
After some additional research it looks like I finally found a solution. When I tried to batch sensor data in the first run, I only paused sending; when my limit was reached, I still sent each value separately. That was the big mistake. After changing my data item to include an array of data maps, battery usage and performance look pretty good. This is the code I am using now:
DataMap dataMap = PutDataMapRequest.create(WearConstants.WTP_SENSOR_DATA_CHANGED + sensorType).getDataMap();
dataMap.putInt(WearConstants.SENSOR, sensorType);
dataMap.putInt(WearConstants.ACCURACY, accuracy);
dataMap.putLong(WearConstants.TIMESTAMP_WEAR, timestamp);
dataMap.putFloatArray(WearConstants.VALUES, values);
dataMaps.add(dataMap);

if (dataMaps.size() > maxMaps) {
    final ArrayList dataMapsCopy = (ArrayList) dataMaps.clone();
    dataMaps.clear();
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            PutDataMapRequest dataMap = PutDataMapRequest.create(WearConstants.WTP_SENSOR_DATA_CHANGED + WearConstants.BATCH);
            dataMap.getDataMap().putDataMapArrayList(WearConstants.DATA_ARRAY, dataMapsCopy);
            Wearable.DataApi.putDataItem(mApiClient, dataMap.asPutDataRequest()).setResultCallback(new ResultCallback() {
                @Override
                public void onResult(DataApi.DataItemResult dataItemResult) {
                    Log.d(TAG, "Sending sensor data: " + dataItemResult.getStatus().isSuccess());
                }
            });
        }
    };
    Thread thread = new Thread(runnable);
    thread.start();
}

Android TTS different languages supported each time when checked

I am struggling with a very strange bug in my app.
I have added TTS to it, and I am using the built-in engine. The user can choose the language from a spinner which is filled during an AsyncTask started in onResume().
The AsyncTask looks like this:
private class AsyncTTSDownload extends AsyncTask<Void, Integer, String> {
    @Override
    protected String doInBackground(Void... params) {
        try {
            languagesTTS = tts.testLang();
        } catch (Exception ex) {
            if (D)
                Log.e(TAG, ex.toString());
        }
        return null;
    }

    @Override
    protected void onPostExecute(String result) {
        ttsUpdate.dismiss();
        TTSSpinnerAdapter adapterTTS = new TTSSpinnerAdapter(
                MyTTS.this, android.R.layout.simple_spinner_item,
                languagesTTS);
        int savedLangTTS = ttsLang.getInt("savedTTS", -1);
        langTTS.setAdapter(adapterTTS);
        if (savedLangTTS == -1) {
            try {
                int langObject = languagesTTS.indexOf(tts.getLanguage());
                langTTS.setSelection(langObject);
            } catch (IndexOutOfBoundsException ie) {
                langTTS.setSelection(0);
            }
        } else {
            langTTS.setSelection(savedLangTTS);
        }
        Locale langChoosen = (Locale) langTTS.getItemAtPosition(langTTS
                .getSelectedItemPosition());
        tts.setTTSLanguage(langChoosen);
    }

    @Override
    protected void onPreExecute() {
        ttsUpdate = ProgressDialog.show(MyTTS.this, "Wait",
                "Loading TTS...");
        ttsUpdate.setCancelable(false);
    }
}
The thing is, from time to time I get a different number of supported languages. This happens on the same device, during the same run; I just open and close the Activity with TTS. This bug is causing an IndexOutOfBoundsException. This is how I get the TTS languages:
public List<Locale> testLang() {
    Locale[] AvalLoc = Locale.getAvailableLocales();
    List<Locale> listaOK = new ArrayList<Locale>();
    String tester = "";
    for (Locale l : AvalLoc) {
        if (tester.contains(l.getLanguage())) {
            continue;
        }
        int buf = tts.isLanguageAvailable(l);
        if (buf == TextToSpeech.LANG_MISSING_DATA
                || buf == TextToSpeech.LANG_NOT_SUPPORTED) {
            //TODO maybe
        } else {
            listaOK.add(l);
            tester += l.getLanguage() + ";";
        }
    }
    tts.setLanguage(Locale.ENGLISH);
    return listaOK;
}
For now I've only found a small hack to avoid showing this error: save the number of languages in shared preferences and compare it with what TTS returned, but it is not working well at all. Each time I get a different number.
It seems to me that something is not finished or started when I start the same activity again after returning, because it is tts.isLanguageAvailable(l) that decides whether a language is supported or not, and from time to time a language is not supported and after a reload it is.
EDIT:
As a new comment appeared about my question, I need to add one important thing about the TTS engine itself.
testLang() is a method inside my class called TTSClass, which implements TextToSpeech.OnInitListener. The tts object is created in onCreate of the MyTTS activity, and the constructor looks like this in TTSClass:
public TTSClass(Context context, Locale language) {
    contextTTS = context;
    languageTTS = language;
    tts = new TextToSpeech(contextTTS, this);
}
and the call in the activity:
tts = new TTSClass(getApplicationContext(), Locale.ENGLISH);
Because TTSClass implements TextToSpeech.OnInitListener, there is also an onInit() method, which looks like this:
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        int result = 0;
        result = tts.setLanguage(languageTTS);
        if (result == TextToSpeech.LANG_MISSING_DATA
                || result == TextToSpeech.LANG_NOT_SUPPORTED) {
            if (D) Log.e(TAG, "This Language is not supported");
        }
        if (D) Log.d(TAG, "Initialized");
    } else {
        if (D) Log.e(TAG, "Initialization Failed!");
    }
}
So, this is everything connected to this class and the problem, I think. If anything is missing, let me know.
EDIT2:
Prompted by shoe rat's comment I've run a few more tests, and the outcome is just amazing, or rather extraordinary; I think that is the better word.
What I did was add 3 Log calls in different places in the code, reporting the list size at different stages.
The first was added in onInit(), inside if (status == TextToSpeech.SUCCESS). This one is just a simple call to testLang().size(). The outcome is 5 languages; that is the correct number and it is always like this, whether or not there is an exception.
The second was added here:
protected String doInBackground(Void... params) {
    try {
        Log.w(TAG, "before: " + tts.testLang().size());
        languagesTTS = tts.testLang();
    }
and this one starts to act quite weird. Sometimes, or even quite often, it shows a number lower than 5. But this is not the strangest thing.
The third one is at the beginning of onPostExecute, checking the size of languagesTTS. And believe it or not, the number is quite often totally different from the second log. However, it is never smaller; it can be equal or bigger.
Does anyone know what is going on?
I've found a solution. It turned out that it was indeed an initialization problem.
I'm not sure if the documentation says anything about it, but it seems that the TTS engine initialization is done asynchronously, so it can finish at any time.
My solution was to change the doInBackground() method like this:
@Override
protected String doInBackground(Void... params) {
    try {
        while (!TTSClass.isInit) {}
        languagesTTS = tts.testLang();
    } catch (Exception ex) {
        if (D)
            Log.e(TAG, ex.toString());
    }
    return null;
}
and in the onInit() method I set a public static boolean isInit variable:
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        int result = 0;
        result = tts.setLanguage(languageTTS);
        if (result == TextToSpeech.LANG_MISSING_DATA
                || result == TextToSpeech.LANG_NOT_SUPPORTED) {
            if (D) Log.e(TAG, "This Language is not supported");
        }
        if (D) Log.d(TAG, "initialized");
        isInit = true;
    } else {
        if (D) Log.e(TAG, "Initialization Failed!");
    }
}
Hope someone will find it helpful.
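As a side note: the empty while loop works, but it spins the background thread at full speed until initialization finishes. A minimal sketch of a gentler alternative, assuming the same TTSClass/onInit structure (the latch field and awaitInit() helper below are illustrative, not from the original), is to block on a CountDownLatch that onInit() releases:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import android.speech.tts.TextToSpeech;

// Hypothetical alternative to the busy-wait, not the original poster's code.
public class TTSClass implements TextToSpeech.OnInitListener {
    // Counted down exactly once, when onInit() reports SUCCESS.
    public static final CountDownLatch initLatch = new CountDownLatch(1);

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            initLatch.countDown();  // release any thread waiting for initialization
        }
    }

    // Called from doInBackground(): blocks (off the main thread) until the
    // engine is ready or the timeout expires, without spinning the CPU.
    public static boolean awaitInit() throws InterruptedException {
        return initLatch.await(10, TimeUnit.SECONDS);
    }
}

doInBackground() would then call TTSClass.awaitInit() before tts.testLang() instead of looping on a boolean.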

Making the device speak text on command

I am having difficulty framing this question, but anyway, here it goes. I have made two applications, a text-to-speech app and a speech-to-text app. They are working fine, and now I am trying to merge them. Basically I want the text-to-speech part to say the text entered in a text field, but only if the user says the word "speak". That would be done using the speech-to-text part. The problem I am having is that Android displays a list of outputs when the user says something, but I only want one output, namely "speak". How can I get only one output instead of a list?
Initially the app becomes active on button click.
The code I am trying is as follows:
@Override
public void onClick(View v) {
    // TODO Auto-generated method stub
    Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    i.putExtra(RecognizerIntent.EXTRA_PROMPT, "system Activated");
    startActivityForResult(i, check);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    // TODO Auto-generated method stub
    if (requestCode == check && resultCode == RESULT_OK) {
        String command = data
                .getStringExtra(RecognizerIntent.EXTRA_RESULTS).toString();
        if (command.equals(command_verify)) {
            speak();
        } else
            finish();
    }
    super.onActivityResult(requestCode, resultCode, data);
}

public void speak() {
    String text_to_speak = message_field.getText().toString();
    talk.speak(text_to_speak, TextToSpeech.QUEUE_FLUSH, null);
}
The voice recognizer returns a list of guesses, where the first guess is the most accurate. You can get it by calling list.get(0). Hope this helps.
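A minimal sketch of how that could look in the question's onActivityResult (assuming the same check request code, command_verify string and speak() method from the question), reading the result list with getStringArrayListExtra and comparing only the top guess:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == check && resultCode == RESULT_OK && data != null) {
        // EXTRA_RESULTS is an ArrayList<String>; the first entry is the best guess.
        ArrayList<String> guesses = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        String command = guesses.get(0);
        if (command.equalsIgnoreCase(command_verify)) {
            speak();
        } else {
            finish();
        }
    }
    super.onActivityResult(requestCode, resultCode, data);
}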
