We work with Xamarin.Android and Visual Studio 2015.
We have an app that has been working fine for several months. :)
When the app starts, it calls a web service to retrieve some data in JSON format.
Everything worked fine until last week, when we hit a problem we are completely lost on!
Since last week, on one device, the call to the web service throws an exception that we receive in the catch block:
unable to read data from the transport connection: Connection reset by peer ...
Here is the method on the device that calls the web service:
public override HttpResult ExecuteGet(Uri target)
{
    var client = new HttpClient();
    client.MaxResponseContentBufferSize = 25600000;
    try
    {
        var response = client.GetAsync(target).Result;
        if (response.IsSuccessStatusCode)
        {
            var content = response.Content.ReadAsStringAsync().Result;
            return new HttpResult(content, null, null);
        }
        return new HttpResult(null, " ERROR MESSAGE ", response.StatusCode.ToString());
    }
    catch (Exception e)
    {
        return new HttpResult(null, " ERROR MESSAGE", e.Message);
    }
}
Now, with the same device, if we turn off Wi-Fi and call the web service over GPRS, it works.
We also have two Wi-Fi networks, and we have noticed that if the device connects to the second one and calls the web service, that works too!
After talking with some colleagues, they suggested updating the Android version on the device, or the Wi-Fi software, but to me the first thing we need to do is compare Wi-Fi 1 and Wi-Fi 2.
My question is: how can I compare the two Wi-Fi networks?
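One way to start comparing them is to dump what the device actually sees on each network (gateway, DNS and proxy differences are common culprits for connection resets). Below is a minimal sketch in plain Android Java using the standard WifiManager API, which the Xamarin.Android bindings also expose; it assumes the ACCESS_WIFI_STATE permission is declared, and the class name is illustrative:

import android.content.Context;
import android.net.DhcpInfo;
import android.net.wifi.WifiInfo;
import android.net.wifi.WifiManager;
import android.text.format.Formatter;
import android.util.Log;

public final class WifiDump {
    // Logs the parameters of the currently connected Wi-Fi network so that
    // runs on "wifi 1" and "wifi 2" can be diffed side by side.
    public static void dump(Context context) {
        WifiManager wm = (WifiManager) context.getApplicationContext()
                .getSystemService(Context.WIFI_SERVICE);
        WifiInfo info = wm.getConnectionInfo();
        DhcpInfo dhcp = wm.getDhcpInfo();
        Log.i("WifiDump", "SSID=" + info.getSSID()
                + " linkSpeedMbps=" + info.getLinkSpeed()
                + " rssi=" + info.getRssi());
        // Formatter.formatIpAddress is deprecated but fine for a debug dump.
        Log.i("WifiDump", "ip=" + Formatter.formatIpAddress(dhcp.ipAddress)
                + " gateway=" + Formatter.formatIpAddress(dhcp.gateway)
                + " dns1=" + Formatter.formatIpAddress(dhcp.dns1)
                + " dns2=" + Formatter.formatIpAddress(dhcp.dns2));
    }
}

Run it once on each network and diff the log lines; a proxy or an unusual DNS/gateway on Wi-Fi 1 would be the first thing to investigate.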
Any suggestion is welcome, because we have searched and found nothing...
Thanks to everyone; it's really great that you share your knowledge.
I'm using Azure Mobile Services in my Android application to add authentication to the app, via Facebook and Google. However, every single time I attempt to log in from the app, I receive the following error:
"com.microsoft.windowsazure.mobileservices.MobileServiceException: Logging in with the selected authentication provider is not enabled".
No other errors occur. This is my code:
private void authenticate(boolean bRefreshCache)
        throws ClientProtocolException, IOException {
    bAuthenticating = true;
    if (bRefreshCache || !loadUserTokenCache(mClient)) {
        mClient.login(MobileServiceAuthenticationProvider.Facebook,
                new UserAuthenticationCallback() {
                    @Override
                    public void onCompleted(MobileServiceUser user,
                            Exception exception,
                            ServiceFilterResponse response) {
                        synchronized (mAuthenticationLock) {
                            if (exception == null) {
                                cacheUserToken(mClient.getCurrentUser());
                                Log.i("MappingRoadConditions",
                                        "authenticating");
                                createAndShowDialog(String.format(
                                        "You are now logged in - %1$2s",
                                        user.getUserId()), "Success");
                            } else {
                                createAndShowDialog(exception.getMessage(),
                                        "Login Error");
                            }
                            bAuthenticating = false;
                            mAuthenticationLock.notifyAll();
                        }
                    }
                });
    } else {
        // Other threads may be blocked waiting to be notified when
        // authentication is complete.
        synchronized (mAuthenticationLock) {
            bAuthenticating = false;
            mAuthenticationLock.notifyAll();
        }
    }
}
The function for logging in via Google is exactly the same, other than the name of the provider, of course.
1) I have tried troubleshooting by logging in through the browser, and I can log in perfectly well using both Facebook and Google.
2) I have added the internet permission to the manifest file.
3) I have also tried testing the app on different internet connections, in case it's a network problem, but to no avail. I am able to log in perfectly well through the browser on the same connection.
Any ideas on what could be happening?
I struggled with this for a while when moving my working code over into a fresh app.
After I eliminated the provider app connection as the problem (I used the JavaScript HTML client in parallel), I needed to go back to basics, because I found this similar question:
Check your Manifest.
I also had this issue happen on a successful build: the ADB bridge had failed and the emulator could not connect to the internet (I had switched networks).
This error code is not descriptive, but Azure seems to assume that if it can't connect to a provider, then you didn't set it up!
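For reference, these are the manifest entries worth double-checking for the "Check your Manifest" step above (the second permission is my addition; it is not required for the login call itself, but connectivity checks commonly need it):

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />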
It looks as though Google has made offline speech recognition available from Google Now for third-party apps. It is being used by the app named Utter.
Has anyone seen any implementations of how to do simple voice commands with this offline speech rec? Do you just use the regular SpeechRecognizer API and it works automatically?
Google did quietly enable offline recognition in that Search update, but there is (as yet) no API or additional parameter available within the SpeechRecognizer class. {See the edit at the bottom of this post.} The functionality is available with no additional coding; however, the user's device will need to be configured correctly for it to begin working. This is where the problem lies, and, I would imagine, why a lot of developers assume they are 'missing something'.
Also, Google has restricted certain Jelly Bean devices from using offline recognition due to hardware constraints. Which devices this applies to is not documented; in fact, nothing is documented, so configuring the capabilities for the user has proved to be a matter of trial and error (for them). It works for some straight away. For those for whom it doesn't, this is the 'guide' I supply them with.
1. Make sure the default Android Voice Recogniser is set to Google, not Samsung/Vlingo.
2. Uninstall any offline recognition files you already have installed from the Google Voice Search settings.
3. Go to your Android application settings and see if you can uninstall the updates for the Google Search and Google Voice Search applications.
4. If you can't do the above, go to the Play Store and see if you have the option there.
5. Reboot (if you achieved 2, 3 or 4).
6. Update Google Search and Google Voice Search from the Play Store (if you achieved 3 or 4, or if an update is available anyway).
7. Reboot (if you achieved 6).
8. Install the English UK offline language files.
9. Reboot.
10. Use utter! with a connection.
11. Switch to aeroplane mode and give it a try.
12. Once it is working, offline recognition of other languages, such as English US, should start working too.
EDIT: Temporarily changing the device locale to English UK also seems to kickstart this to work for some.
Some users reported they still had to reboot a number of times before it would begin working, but they all got there eventually, often with no clear trigger; the keys to it are inside the Google Search APK, so they are not in the public domain or part of AOSP.
From what I can establish, Google tests the availability of a connection prior to deciding whether to use offline or online recognition. If a connection is available initially but is lost before the response, Google will supply a connection error; it won't fall back to offline. As a side note, if a request for the network-synthesised voice has been made, no error is supplied if it fails; you get silence.
The Google Search update enabled no additional features in Google Now, and in fact, if you try to use it with no internet connection, it will error. I mention this as I wondered if the ability would be withdrawn as quietly as it appeared, and therefore shouldn't be relied upon in production.
If you intend to start using the SpeechRecognizer class, be warned: there is a pretty major bug associated with it, which requires your own implementation to handle.
Not being able to specifically request offline = true makes controlling this feature impossible without manipulating the data connection. Rubbish. You'll get hundreds of user emails asking why you haven't enabled something so simple!
EDIT: Since API level 23 a new parameter has been added, EXTRA_PREFER_OFFLINE, which the Google recognition service does appear to adhere to.
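As a minimal sketch of using that parameter (the helper class and method wiring are illustrative; EXTRA_PREFER_OFFLINE itself is part of RecognizerIntent from API 23):

import android.content.Context;
import android.content.Intent;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

public final class OfflineSpeech {
    // Sketch: explicitly ask the recognizer to prefer offline recognition.
    public static SpeechRecognizer startOffline(Context context, RecognitionListener listener) {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true); // ignored below API 23
        SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(listener); // your RecognitionListener
        recognizer.startListening(intent);
        return recognizer;
    }
}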
Hope the above helps.
I would like to improve, with images, the guide that the answer https://stackoverflow.com/a/17674655/2987828 sends to its users. It is the sentence "For those that it doesn't, this is the 'guide' I supply them with." that I want to improve.
The user should click on the four buttons highlighted in blue in these images:
Then the user can select any desired languages. When the download is done, he should disconnect from the network and then click on the "microphone" button of the keyboard.
It worked for me (Android 4.1.2); language recognition then worked out of the box, without rebooting. I can now dictate instructions to the shell of Terminal Emulator! And it is twice as fast offline as online, on a PadFone 2 from ASUS.
These images are licensed under CC BY-SA 3.0, with attribution required to stackoverflow.com/a/21329845/2987828; you may hence add these images anywhere along with this attribution.
(This is the standard policy for all images and text at stackoverflow.com.)
A simple and flexible offline recognition on Android is implemented by CMUSphinx, an open-source speech recognition toolkit. It works purely offline, is fast and configurable, and can, for example, listen continuously for a keyword.
You can find the latest code and tutorial here.
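As a rough sketch of the keyword-spotting setup (the class names follow pocketsphinx-android, and the model/dictionary file names and the keyphrase are taken from the project's demo app, so they may differ in current releases):

import android.content.Context;
import java.io.File;
import java.io.IOException;
import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.RecognitionListener;
import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;

public final class KeywordSpotter {
    public static SpeechRecognizer start(Context context, RecognitionListener listener)
            throws IOException {
        // Copy the bundled acoustic model and dictionary to local storage.
        File assetsDir = new Assets(context).syncAssets();
        SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
                .setAcousticModel(new File(assetsDir, "en-us-ptm"))
                .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
                .getRecognizer();
        recognizer.addListener(listener);
        // Listen continuously, entirely offline, for a single keyphrase.
        recognizer.addKeyphraseSearch("wakeup", "ok computer");
        recognizer.startListening("wakeup");
        return recognizer;
    }
}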
Update in 2019: time moves fast, and CMUSphinx is not that accurate anymore. I recommend trying the Kaldi toolkit instead. The demo is here.
In short, I don't have the implementation, but the explanation.
Google did not make offline speech recognition available to third-party apps. Offline recognition is only accessible via the keyboard. Ben Randall (the developer of utter!) explains his workaround in an article at Android Police:
I had implemented my own keyboard and was switching between Google
Voice Typing and the users default keyboard with an invisible edit
text field and transparent Activity to get the input. Dirty hack!
This was the only way to do it, as offline Voice Typing could only be
triggered by an IME or a system application (that was my root hack) .
The other type of recognition API … didn't trigger it and just failed
with a server error. … A lot of work wasted for me on the workaround!
But at least I was ready for the implementation...
From Utter! Claims To Be The First Non-IME App To Utilize Offline Voice Recognition In Jelly Bean
I successfully implemented my Speech-Service with offline capabilities by using onPartialResults when offline and onResults when online.
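A minimal sketch of that pattern, assuming the standard SpeechRecognizer callbacks (the offline flag and the handle() method are placeholders for the caller's own connectivity check and result handling):

import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

// Sketch: treat partial results as final when the device is offline, because
// some engine versions only deliver onResults reliably while online.
public class HybridListener implements RecognitionListener {
    private final boolean offline; // caller decides, e.g. via ConnectivityManager

    public HybridListener(boolean offline) { this.offline = offline; }

    @Override public void onPartialResults(Bundle bundle) {
        if (!offline) return; // online: wait for onResults instead
        ArrayList<String> texts =
                bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (texts != null && !texts.isEmpty()) handle(texts.get(0));
    }

    @Override public void onResults(Bundle bundle) {
        if (offline) return; // offline: already handled in onPartialResults
        ArrayList<String> texts =
                bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (texts != null && !texts.isEmpty()) handle(texts.get(0));
    }

    private void handle(String text) { /* consume the recognized text */ }

    // Remaining RecognitionListener callbacks left empty for brevity.
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onError(int error) {}
    @Override public void onEvent(int eventType, Bundle params) {}
}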
I was dealing with this, and I noticed that you need to install the offline package for your language. My language setting was "Español (Estados Unidos)", but there is no offline package for that language, so when I turned off all network connectivity I got an alert from RecognizerIntent saying that it couldn't reach Google. I then changed the language to "English (US)" (for which I already had the offline package), launched the RecognizerIntent, and it just worked.
Key: the language setting must match an installed offline voice recognizer package. A fragment illustrating the point is shown below.
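For what it's worth, the recognizer intent can also be pointed at a specific language tag via the standard EXTRA_LANGUAGE extra; the tag value here is an example and, per the observation above, should correspond to an installed offline package:

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
// Must match a language for which the offline files are installed.
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US");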
It is apparently possible to install offline voice recognition manually, by downloading the files directly and placing them in the right locations. I guess this is just a way to bypass Google's hardware requirements.
However, personally I didn't have to reboot or anything; simply changing the locale to UK and back again did it.
A working example is given below.
MyService.java:
public class MyService extends Service implements SpeechDelegate, Speech.stopDueToDelay {

    public static SpeechDelegate delegate;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        //TODO do something useful
        try {
            if (VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
                ((AudioManager) Objects.requireNonNull(
                        getSystemService(Context.AUDIO_SERVICE))).setStreamMute(AudioManager.STREAM_SYSTEM, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        Speech.init(this);
        delegate = this;
        Speech.getInstance().setListener(this);

        if (Speech.getInstance().isListening()) {
            Speech.getInstance().stopListening();
        } else {
            System.setProperty("rx.unsafe-disable", "True");
            RxPermissions.getInstance(this).request(permission.RECORD_AUDIO).subscribe(granted -> {
                if (granted) { // Always true pre-M
                    try {
                        Speech.getInstance().stopTextToSpeech();
                        Speech.getInstance().startListening(null, this);
                    } catch (SpeechRecognitionNotAvailable exc) {
                        //showSpeechNotSupportedDialog();
                    } catch (GoogleVoiceTypingDisabledException exc) {
                        //showEnableGoogleVoiceTyping();
                    }
                } else {
                    Toast.makeText(this, R.string.permission_required, Toast.LENGTH_LONG).show();
                }
            });
        }
        return Service.START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        //TODO for communication return IBinder implementation
        return null;
    }

    @Override
    public void onStartOfSpeech() {
    }

    @Override
    public void onSpeechRmsChanged(float value) {
    }

    @Override
    public void onSpeechPartialResults(List<String> results) {
        for (String partial : results) {
            Log.d("Result", partial + "");
        }
    }

    @Override
    public void onSpeechResult(String result) {
        Log.d("Result", result + "");
        if (!TextUtils.isEmpty(result)) {
            Toast.makeText(this, result, Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onSpecifiedCommandPronounced(String event) {
        try {
            if (VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
                ((AudioManager) Objects.requireNonNull(
                        getSystemService(Context.AUDIO_SERVICE))).setStreamMute(AudioManager.STREAM_SYSTEM, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (Speech.getInstance().isListening()) {
            Speech.getInstance().stopListening();
        } else {
            RxPermissions.getInstance(this).request(permission.RECORD_AUDIO).subscribe(granted -> {
                if (granted) { // Always true pre-M
                    try {
                        Speech.getInstance().stopTextToSpeech();
                        Speech.getInstance().startListening(null, this);
                    } catch (SpeechRecognitionNotAvailable exc) {
                        //showSpeechNotSupportedDialog();
                    } catch (GoogleVoiceTypingDisabledException exc) {
                        //showEnableGoogleVoiceTyping();
                    }
                } else {
                    Toast.makeText(this, R.string.permission_required, Toast.LENGTH_LONG).show();
                }
            });
        }
    }

    @Override
    public void onTaskRemoved(Intent rootIntent) {
        // Restart the service if it is removed.
        PendingIntent service =
                PendingIntent.getService(getApplicationContext(), new Random().nextInt(),
                        new Intent(getApplicationContext(), MyService.class), PendingIntent.FLAG_ONE_SHOT);
        AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        assert alarmManager != null;
        alarmManager.set(AlarmManager.ELAPSED_REALTIME_WAKEUP, 1000, service);
        super.onTaskRemoved(rootIntent);
    }
}
For more details, see:
https://github.com/sachinvarma/Speech-Recognizer
Hope this will help someone in the future.
I'm trying to start using QuickBlox, since it provides great tools for a backend. I have registered on the website and got credentials for my app; however, I fail to get a simple program to test the connection working:
public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        QBSettings.getInstance().fastConfigInit("3504", "NMuekBBXBg6PgST", "HaOj5bY4LgxUpPH");

        QBAuth.createSession(new QBCallbackImpl() {
            @Override
            public void onComplete(Result result) {
                // The result comes here; check whether it was a success.
                if (result.isSuccess()) {
                    Toast.makeText(getApplicationContext(), "success!!", Toast.LENGTH_LONG).show();
                } else {
                    Toast.makeText(getApplicationContext(), "fail :( " + result.getErrors(), Toast.LENGTH_LONG).show();
                }
            }
        });
    }
}
It works on an AVD with Android 4.2.2 (API level 17); however, it fails on my Galaxy S2 with 4.1.2 (API level 16), with getErrors() returning "[base Bad timestamp]". I have no idea what I might be doing wrong, so any help would be appreciated. Please, don't make me switch to Parse :)
This is a typical developer error, and it's easy to fix.
"Bad timestamp" means that while creating the session you sent an invalid 'timestamp' value, which is based on your phone's time.
We suggest you synchronize the time on your device with an NTP service, or just tick two checkboxes in your device's Settings: Automatic date & time and Automatic time zone.
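If you want to verify the clock skew from code, here is a small sketch using the Apache commons-net NTP client (run it off the main thread on Android; the pool host, timeout and class name are example choices):

import java.net.InetAddress;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

public final class ClockCheck {
    // Returns roughly how many milliseconds the local clock differs from NTP
    // time. A large offset here is what makes the session request fail.
    public static long offsetMillis() throws Exception {
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(5000);
        TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
        info.computeDetails(); // fills in offset and delay
        return info.getOffset() != null ? info.getOffset() : 0L;
    }
}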
Hope this helps.
Please check the official manual (page 89), where the time settings are described:
manual
Hello friends, I implemented login via a Facebook account in my Android application, but the problem is that this functionality is not working on all phones. If I run the application on a 2.2 phone it works fine, but when I tried it on an HTC phone running version 2.3, the login page appears and then suddenly disappears.
public class TestLoginListener implements DialogListener {

    public void onComplete(Bundle values) {
        testAuthenticatedApi();
    }

    public void onFacebookError(FacebookError e) {
        e.printStackTrace();
    }

    public void onError(DialogError e) {
        e.printStackTrace();
    }

    public void onCancel() {
    }
}

public boolean testAuthenticatedApi() {
    if (!authenticatedFacebook.isSessionValid()) return false;
    try {
        Log.d("Tests", "Testing request for 'me'");
        String response1 = authenticatedFacebook.request("me");
        JSONObject obj = Util.parseJson(response1);
        fbid = obj.getString("id");
        String name = obj.getString("name");
        fbfirstname = obj.getString("first_name");
        fblastname = obj.getString("last_name");
        fbemail = obj.getString("email");
        return true;
    } catch (Exception e) {
        // The original snippet had no catch or final return; added so it compiles.
        e.printStackTrace();
        return false;
    }
}
Yeah, I think I know the cause. I've hacked around with implementing Facebook, and what you are describing usually happens if your device already has the Facebook app installed and you are currently logged in. Then, when you try logging in, it just shows the login for a brief moment before disappearing. I'm guessing your HTC has the Facebook app with somebody logged in, while your other phones don't have it installed. I think this happens because the Facebook server is not pinged when you try logging in; it pings the Facebook app instead.
The solution I came up with was to change the code NOT to use SSO (single sign-on). I'm sure others would disagree with this approach, but I chose not to use SSO and it works fine. To do this, use authorize(FORCE_DIALOG_AUTH).
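With the old, long-deprecated Facebook Android SDK that this question uses, that call looks roughly like this; APP_ID and the permission list are placeholders, and TestLoginListener is the listener from the question:

import com.facebook.android.Facebook;

// Inside your Activity:
Facebook facebook = new Facebook(APP_ID);
// FORCE_DIALOG_AUTH skips single sign-on through the installed Facebook app
// and always shows the web login dialog instead.
facebook.authorize(this,
        new String[] { "email" },
        Facebook.FORCE_DIALOG_AUTH,
        new TestLoginListener());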
I did some digging around and found a related question.