What is the bindServiceAsUser() method in Android?

I cannot understand what the bindServiceAsUser() method is used for. Can anyone please explain it? Googling doesn't seem to help much.
public boolean bindService(Intent intent, ServiceConnection connection, int flags) {
    return mContext.bindServiceAsUser(intent, connection, flags, UserHandle.OWNER);
}

I've never felt the need to use bindServiceAsUser(), but here's what the Android documentation has to say about it:
Same as bindService(android.content.Intent,android.content.ServiceConnection,int), but with an explicit userHandle argument for use by system server and other multi-user aware code.
Multi-user support was added in Android 4.2 (API 17); read about it HERE. In my understanding it will mostly be used by device manufacturers, for example those releasing special devices for the enterprise world. The best doc for multi-user I've found is THIS one, along with all the links referenced there.

As Vesko said, multi-user is disabled on most Android devices, but some device manufacturers enable it. For example, you may have to bind to a service over AIDL and disable a feature for a particular user from your privileged app; in that case you need to know which user to bind the service as. We can invoke bindServiceAsUser using reflection:
UserManager um = (UserManager) getSystemService(Context.USER_SERVICE);
// Serial number 0 is the owner/primary user.
UserHandle owner = um.getUserForSerialNumber(0L);
try {
    // MethodUtils is from Apache Commons Lang; "i" is the Intent for the service to bind to.
    MethodUtils.invokeMethod(getApplicationContext(), "bindServiceAsUser",
            new Object[]{i, serviceConnection, Context.BIND_AUTO_CREATE, owner});
} catch (NoSuchMethodException | IllegalAccessException | InvocationTargetException e) {
    e.printStackTrace();
}

Related

How to implement offline speech recognition [duplicate]

It looks as though Google has made offline speech recognition available from Google Now for third-party apps. It is being used by the app named Utter.
Has anyone seen any implementations of how to do simple voice commands with this offline speech recognition? Do you just use the regular SpeechRecognizer API, and does it work automatically?
Google did quietly enable offline recognition in that Search update, but there is (as yet) no API or additional parameters available within the SpeechRecognizer class. {See Edit at the bottom of this post} The functionality is available with no additional coding; however, the user's device will need to be configured correctly for it to begin working, and this is where the problem lies; I would imagine it is why a lot of developers assume they are 'missing something'.
Also, Google has restricted certain Jelly Bean devices from using offline recognition due to hardware constraints. Which devices this applies to is not documented; in fact, nothing is documented, so configuring the capabilities for the user has proved to be a matter of trial and error (for them). It works for some straight away. For those that it doesn't, this is the 'guide' I supply them with:
1. Make sure the default Android Voice Recogniser is set to Google, not Samsung/Vlingo
2. Uninstall any offline recognition files you already have installed from the Google Voice Search Settings
3. Go to your Android Application Settings and see if you can uninstall the updates for the Google Search and Google Voice Search applications.
4. If you can't do the above, go to the Play Store and see if you have the option there.
5. Reboot (if you achieved 2, 3 or 4)
6. Update Google Search and Google Voice Search from the Play Store (if you achieved 3 or 4, or if an update is available anyway).
7. Reboot (if you achieved 6)
8. Install English UK offline language files
9. Reboot
10. Use utter! with a connection
11. Switch to aeroplane mode and give it a try
12. Once it is working, the offline recognition of other languages, such as English US, should start working too.
EDIT: Temporarily changing the device locale to English UK also seems to kickstart this to work for some.
Some users reported they still had to reboot a number of times before it would begin working, but they all get there eventually, often with no clear trigger; the key lies inside the Google Search APK, so it is not in the public domain or part of AOSP.
From what I can establish, Google tests the availability of a connection prior to deciding whether to use offline or online recognition. If a connection is available initially but is lost before the response, Google will supply a connection error; it won't fall back to offline. As a side note, if a request for the network-synthesised voice has been made, there is no error supplied if it fails; you get silence.
The Google Search update enabled no additional features in Google Now; in fact, if you try to use it with no internet connection, it will error. I mention this as I wondered if the ability might be withdrawn as quietly as it appeared and therefore shouldn't be relied upon in production.
If you intend to start using the SpeechRecognizer class, be warned: there is a pretty major bug associated with it which requires your own implementation to handle.
Not being able to specifically request offline = true makes controlling this feature impossible without manipulating the data connection. Rubbish. You'll get hundreds of user emails asking you why you haven't enabled something so simple!
EDIT: Since API level 23 a new parameter has been added, EXTRA_PREFER_OFFLINE, which the Google recognition service does appear to adhere to.
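As a minimal sketch of using that extra (assuming the standard SpeechRecognizer / RecognizerIntent APIs, a matching installed offline language pack, and a RecognitionListener named listener that you supply yourself):
Intent recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-GB");
// API 23+: ask the service to prefer offline recognition; it is a hint, not a guarantee.
recognizerIntent.putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true);

SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
recognizer.setRecognitionListener(listener); // your own RecognitionListener
recognizer.startListening(recognizerIntent);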
Hope the above helps.
I would like to improve, with images, the guide that the answer https://stackoverflow.com/a/17674655/2987828 gives its users. It is the sentence "For those that it doesn't, this is the 'guide' I supply them with." that I want to improve.
The user should click on the four buttons highlighted in blue in these images:
Then the user can select any desired languages. When the download is done, they should disconnect from the network and then click on the "microphone" button of the keyboard.
It worked for me (Android 4.1.2): language recognition then worked out of the box, without rebooting. I can now dictate instructions to the shell of Terminal Emulator! And it is twice as fast offline as online, on a Padfone 2 from ASUS.
These images are licensed under CC BY-SA 3.0, with attribution required to stackoverflow.com/a/21329845/2987828; you may hence add these images anywhere along with this attribution.
(This is the standard policy for all images and texts on stackoverflow.com.)
A simple and flexible offline recognition on Android is implemented by CMUSphinx, an open source speech recognition toolkit. It works purely offline, is fast and configurable, and can, for example, listen continuously for a keyword.
You can find the latest code and tutorial here.
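For a flavour of the API, here is a sketch adapted from the pocketsphinx-android tutorial; the asset folder names ("en-us-ptm", "cmudict-en-us.dict") depend on the model files you bundle, and listener is a RecognitionListener from the CMUSphinx library, not the Android framework one:
// Keyword spotting with pocketsphinx-android, running fully offline.
Assets assets = new Assets(context);
File assetsDir = assets.syncAssets(); // copies bundled model files to storage

SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
        .setAcousticModel(new File(assetsDir, "en-us-ptm"))
        .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
        .getRecognizer();
recognizer.addListener(listener);                       // CMUSphinx RecognitionListener
recognizer.addKeyphraseSearch("wakeup", "ok computer"); // listen continuously for a phrase
recognizer.startListening("wakeup");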
Update in 2019: time goes fast; CMUSphinx is not that accurate anymore. I recommend trying the Kaldi toolkit instead. The demo is here.
In short, I don't have the implementation, but the explanation.
Google did not make offline speech recognition available to third-party apps. Offline recognition is only accessible via the keyboard. Ben Randall (the developer of utter!) explains his workaround in an article at Android Police:
I had implemented my own keyboard and was switching between Google Voice Typing and the user's default keyboard with an invisible edit text field and transparent Activity to get the input. Dirty hack!
This was the only way to do it, as offline Voice Typing could only be triggered by an IME or a system application (that was my root hack). The other type of recognition API … didn't trigger it and just failed with a server error. … A lot of work wasted for me on the workaround! But at least I was ready for the implementation...
From Utter! Claims To Be The First Non-IME App To Utilize Offline Voice Recognition In Jelly Bean
I successfully implemented my Speech-Service with offline capabilities by using onPartialResults when offline and onResults when online.
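I have not seen that code, but the idea can be sketched roughly as follows: a RecognitionListener that reads partial results when there is no connectivity and final results otherwise. Here isOnline() and handleText() are hypothetical helpers you would implement yourself, and the recognition intent needs RecognizerIntent.EXTRA_PARTIAL_RESULTS set to true:
SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
recognizer.setRecognitionListener(new RecognitionListener() {
    @Override
    public void onPartialResults(Bundle partialResults) {
        if (!isOnline()) { // offline: final results may never arrive, so use the partials
            handleText(partialResults.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION));
        }
    }

    @Override
    public void onResults(Bundle results) {
        if (isOnline()) { // online: rely on the normal final results
            handleText(results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION));
        }
    }

    // remaining RecognitionListener callbacks left empty for brevity
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onError(int error) {}
    @Override public void onEvent(int eventType, Bundle params) {}
});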
I was dealing with this and I noticed that you need to install the offline package for your language. My language setting was "Español (Estados Unidos)", but there is no offline package for that language, so when I turned off all network connectivity I was getting an alert from RecognizerIntent saying that it can't reach Google. I then changed the language to "English (US)" (because I already had that offline package), launched the RecognizerIntent, and it just worked.
Key: the language setting must match an installed offline voice recognizer package.
It is apparently possible to manually install offline voice recognition by downloading the files directly and installing them in the right locations manually. I guess this is just a way to bypass Google hardware requirements.
However, personally I didn't have to reboot or anything, simply changing to UK and back again did it.
A working example is given below.
MyService.class
public class MyService extends Service implements SpeechDelegate, Speech.stopDueToDelay {

    public static SpeechDelegate delegate;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        //TODO do something useful
        try {
            if (VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
                ((AudioManager) Objects.requireNonNull(
                        getSystemService(Context.AUDIO_SERVICE))).setStreamMute(AudioManager.STREAM_SYSTEM, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        Speech.init(this);
        delegate = this;
        Speech.getInstance().setListener(this);
        if (Speech.getInstance().isListening()) {
            Speech.getInstance().stopListening();
        } else {
            System.setProperty("rx.unsafe-disable", "True");
            RxPermissions.getInstance(this).request(permission.RECORD_AUDIO).subscribe(granted -> {
                if (granted) { // Always true pre-M
                    try {
                        Speech.getInstance().stopTextToSpeech();
                        Speech.getInstance().startListening(null, this);
                    } catch (SpeechRecognitionNotAvailable exc) {
                        //showSpeechNotSupportedDialog();
                    } catch (GoogleVoiceTypingDisabledException exc) {
                        //showEnableGoogleVoiceTyping();
                    }
                } else {
                    Toast.makeText(this, R.string.permission_required, Toast.LENGTH_LONG).show();
                }
            });
        }
        return Service.START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        //TODO for communication return IBinder implementation
        return null;
    }

    @Override
    public void onStartOfSpeech() {
    }

    @Override
    public void onSpeechRmsChanged(float value) {
    }

    @Override
    public void onSpeechPartialResults(List<String> results) {
        for (String partial : results) {
            Log.d("Result", partial + "");
        }
    }

    @Override
    public void onSpeechResult(String result) {
        Log.d("Result", result + "");
        if (!TextUtils.isEmpty(result)) {
            Toast.makeText(this, result, Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onSpecifiedCommandPronounced(String event) {
        try {
            if (VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
                ((AudioManager) Objects.requireNonNull(
                        getSystemService(Context.AUDIO_SERVICE))).setStreamMute(AudioManager.STREAM_SYSTEM, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (Speech.getInstance().isListening()) {
            Speech.getInstance().stopListening();
        } else {
            RxPermissions.getInstance(this).request(permission.RECORD_AUDIO).subscribe(granted -> {
                if (granted) { // Always true pre-M
                    try {
                        Speech.getInstance().stopTextToSpeech();
                        Speech.getInstance().startListening(null, this);
                    } catch (SpeechRecognitionNotAvailable exc) {
                        //showSpeechNotSupportedDialog();
                    } catch (GoogleVoiceTypingDisabledException exc) {
                        //showEnableGoogleVoiceTyping();
                    }
                } else {
                    Toast.makeText(this, R.string.permission_required, Toast.LENGTH_LONG).show();
                }
            });
        }
    }

    @Override
    public void onTaskRemoved(Intent rootIntent) {
        // Restart the service if the task is removed.
        PendingIntent service =
                PendingIntent.getService(getApplicationContext(), new Random().nextInt(),
                        new Intent(getApplicationContext(), MyService.class), PendingIntent.FLAG_ONE_SHOT);
        AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        assert alarmManager != null;
        alarmManager.set(AlarmManager.ELAPSED_REALTIME_WAKEUP, 1000, service);
        super.onTaskRemoved(rootIntent);
    }
}
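To try the above, the service just needs to be declared in AndroidManifest.xml (along with the RECORD_AUDIO permission the library requests at runtime) and started from an Activity or receiver, for example:
startService(new Intent(this, MyService.class));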
For more details, see
https://github.com/sachinvarma/Speech-Recognizer
Hope this will help someone in the future.

PhoneStateListener - what can I expect from onCellInfoChanged?

I posted this on Android dev group. I'm hoping I can get some feedback here.
The PhoneStateListener callbacks onCellLocationChanged and onSignalStrengthsChanged were the go-to methods when I wanted to handle cell and signal data changes in GSM and CDMA. With API 17+, I can see that there's a new callback (onCellInfoChanged) for handling both cell and signal changes.
Looking at the documentation, it's not clear what I can expect from the introduction of this new callback.
Will LTE changes always and only trigger onCellInfoChanged?
Will GSM/CDMA changes remain on the older callbacks?
Does one overlap with the other? (i.e. Both old and new get triggered for LTE or GSM/CDMA.)
It may very well be that different OEMs will have different implementations (sigh!), but I'm hoping there are guidelines that everyone's supposed to follow.
Can anyone shed some light on this?
Thanks,
Sebouh
I didn't test it, but from the code it looks like both will be called.
I downloaded the source code of Android 4.3 (API 18) using the SDK Manager.
The following observations made me think that both would be called.
The class that triggers these events is: com.android.server.TelephonyRegistry
It notifies the listener through:
public void listen(String pkgForDebug, IPhoneStateListener callback, int events, boolean notifyNow)
This same function fires both types of notifications (Location and CellInfo) in a non-exclusive way.
On line 256:
if (validateEventsAndUserLocked(r, PhoneStateListener.LISTEN_CELL_LOCATION)) {
    try {
        if (DBG_LOC) Slog.d(TAG, "listen: mCellLocation=" + mCellLocation);
        r.callback.onCellLocationChanged(new Bundle(mCellLocation));
    } catch (RemoteException ex) {
        remove(r.binder);
    }
}
This will call onCellLocationChanged even on a new LTE phone, since nothing in the above code prevents it. It needs double-checking that there is no upper layer that filters the events themselves.
On line 300 in the same code:
if (validateEventsAndUserLocked(r, PhoneStateListener.LISTEN_CELL_INFO)) {
    try {
        if (DBG_LOC) Slog.d(TAG, "listen: mCellInfo=" + mCellInfo);
        r.callback.onCellInfoChanged(mCellInfo);
    } catch (RemoteException ex) {
        remove(r.binder);
    }
}
There are other things in the code suggesting that CDMA will also call the newer API. For example, com.android.internal.telephony.cdma.CdmaLteServiceStateTracker seems to deal with both CDMA and LTE. Again, it would require a more careful look, but that should give you a good place to start.
You can also try to simulate that with the emulator.
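The simplest way to answer the overlap question for a particular device/OEM is to register for both events and log what actually fires. A minimal sketch using the public API (location permission is required for these events, and context is whatever Context you have at hand):
TelephonyManager tm =
        (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);

tm.listen(new PhoneStateListener() {
    @Override
    public void onCellLocationChanged(CellLocation location) {
        Log.d("CellDebug", "onCellLocationChanged: " + location); // old-style callback
    }

    @Override
    public void onCellInfoChanged(List<CellInfo> cellInfo) {      // API 17+ callback
        Log.d("CellDebug", "onCellInfoChanged: " + cellInfo);
    }
}, PhoneStateListener.LISTEN_CELL_LOCATION | PhoneStateListener.LISTEN_CELL_INFO);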

How to auto-accept Wi-Fi Direct connection requests in Android

I have 2 Android devices using WiFi Direct. On one device I can get information about the other device using the WifiP2pManager class, and request a connection to the other device. However when I request a connection, the other device pops up a little window and asks the user if they want to accept the connection request.
Is it possible to auto-accept these connection requests, i.e. to connect to the other device without user confirmation?
It can easily be done with the help of the Xposed framework. You just need to replace a single method inside one of the Android Java classes (see the link from snihalani's answer). But of course, to use Xposed your device must be rooted. The main idea can be expressed in the following code (using Xposed):
@Override
public void handleLoadPackage(LoadPackageParam lpparam) {
    try {
        Class<?> wifiP2pService = Class.forName("android.net.wifi.p2p.WifiP2pService", false, lpparam.classLoader);
        for (Class<?> c : wifiP2pService.getDeclaredClasses()) {
            //XposedBridge.log("inner class " + c.getSimpleName());
            if ("P2pStateMachine".equals(c.getSimpleName())) {
                XposedBridge.log("Class " + c.getName() + " found");
                Method notifyInvitationReceived = c.getDeclaredMethod("notifyInvitationReceived");
                final Method sendMessage = c.getMethod("sendMessage", int.class);
                XposedBridge.hookMethod(notifyInvitationReceived, new XC_MethodReplacement() {
                    @Override
                    protected Object replaceHookedMethod(MethodHookParam param) throws Throwable {
                        final int PEER_CONNECTION_USER_ACCEPT = 0x00023000 + 2;
                        sendMessage.invoke(param.thisObject, PEER_CONNECTION_USER_ACCEPT);
                        return null;
                    }
                });
                break;
            }
        }
    } catch (Throwable t) {
        XposedBridge.log(t);
    }
}
I tested it on SGS4 stock 4.2.2 ROM and it worked.
I guess the same could be done with the help of Substrate for android.
From my current understanding of the API, you cannot really accept connections automatically without the user's intervention. You can initiate a connection; that doesn't require user intervention. If both of your devices are mobile devices, you will have to accept the connection request on one end.
I have put this as a feature request in android project hosting.
You can monitor their response here: https://code.google.com/p/android/issues/detail?id=30880
Based on the comments, do you really need to connect to the devices if you just want to track and log the vehicles around you?
I don't know the scope of the project, but you could simply use the WifiP2pDeviceList that you get when you request the peers from the WifiP2pManager. You would get the list of devices (~= vehicles) around you and could log it; see the sketch below.
Connection is useful if you want to send more detailed information, I guess.
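As a rough sketch of that logging-only approach (assuming manager and channel have already been obtained via WifiP2pManager.initialize(), and that the usual Wi-Fi Direct permissions are in place):
// Log nearby Wi-Fi Direct devices without ever connecting to them.
manager.discoverPeers(channel, new WifiP2pManager.ActionListener() {
    @Override public void onSuccess() { /* discovery started */ }
    @Override public void onFailure(int reason) { Log.w("P2P", "discoverPeers failed: " + reason); }
});

// Later, typically when WIFI_P2P_PEERS_CHANGED_ACTION is broadcast:
manager.requestPeers(channel, new WifiP2pManager.PeerListListener() {
    @Override
    public void onPeersAvailable(WifiP2pDeviceList peers) {
        for (WifiP2pDevice device : peers.getDeviceList()) {
            Log.d("P2P", "Seen: " + device.deviceName + " (" + device.deviceAddress + ")");
        }
    }
});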
If you can modify the framework, you can skip the accept window and directly send "PEER_CONNECTION_USER_ACCEPT".
Based on Android 5.0, in "frameworks/opt/net/wifi/service/java/com/android/server/wifi/p2p/WifiP2pServiceImpl.java",
you must find notifyInvitationReceived and modify it to ...
private void notifyInvitationReceived() {
    /* Directly send the accept message. */
    sendMessage(PEER_CONNECTION_USER_ACCEPT);
    /*
    ... old code
    */
}

How to prevent name caching and detect bluetooth name changes on discovery

I'm writing an Android app which receives information from a Bluetooth device. Our client has suggested that the Bluetooth device (which they produce) will change its name depending on certain conditions - for the simplest example its name will sometimes be "xxx-ON" and sometimes "xxx-OFF". My app is just supposed to seek this BT transmitter (I use BluetoothAdapter.startDiscovery() ) and do different things depending on the name it finds. I am NOT pairing with the Bluetooth device (though I suppose it might be possible, the app is supposed to eventually work with multiple Android devices and multiple BT transmitters so I'm not sure it would be a good idea).
My code works fine to detect BT devices and find their names. Also, if the device goes off, I can detect the next time I seek that it is no longer there. But it seems that if it is there and it changes its name, I pick up the old name; presumably it is cached somewhere. Even if the Bluetooth device goes off, and we notice that, the next time I detect it I still see the old name.
I found this issue in Google Code: here but it was unclear to me even how to use the workaround given ("try to connect"). Has anyone done this and had any luck? Can you share code?
Is there a simple way to just delete the cached names and search again so I always find the newest names? Even a non-simple way would be good (I am writing for a rooted device).
Thanks
I would suggest fetchUuidsWithSdp(). Its significance is that, unlike the similar getUuids() method, fetchUuidsWithSdp() causes the device to update its cached information about the remote device, and I believe this includes the remote name as well as the SDP records.
Note that both methods I mentioned are hidden prior to 4.0.3, so your code would look like this:
public static void startServiceDiscovery(BluetoothDevice device) {
    // Need to use reflection prior to API 15
    Class cl = null;
    try {
        cl = Class.forName("android.bluetooth.BluetoothDevice");
    } catch (ClassNotFoundException exc) {
        Log.e(CTAG, "android.bluetooth.BluetoothDevice not found.");
    }
    if (null != cl) {
        Class[] param = {};
        Method method = null;
        try {
            method = cl.getMethod("fetchUuidsWithSdp", param);
        } catch (NoSuchMethodException exc) {
            Log.e(CTAG, "fetchUuidsWithSdp not found.");
        }
        if (null != method) {
            Object[] args = {};
            try {
                method.invoke(device, args);
            } catch (Exception exc) {
                Log.e(CTAG, "Failed to invoke fetchUuidsWithSdp method.");
            }
        }
    }
}
You'll then need to listen for the BluetoothDevice.ACTION_NAME_CHANGED intent, and extract BluetoothDevice.EXTRA_NAME from it.
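A small receiver for that broadcast might look like this (a sketch; CTAG is the same log tag used above, and context is whatever Context registers the receiver):
BroadcastReceiver nameReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
        String refreshedName = intent.getStringExtra(BluetoothDevice.EXTRA_NAME);
        Log.d(CTAG, "Device " + device.getAddress() + " now reports name: " + refreshedName);
    }
};
context.registerReceiver(nameReceiver, new IntentFilter(BluetoothDevice.ACTION_NAME_CHANGED));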
Let me know if that helps.

How to programmatically hide Caller ID on Android

On Android phones, under Call -> Additional settings -> Caller ID,
it is possible to hide your caller ID. I want to do that programmatically from my code, but I was not able to find a way to do it.
I searched through
android.provider
android.telephony
for 2.1 release and was not able to find it.
Has anybody successfully solved this issue?
Thanks in advance. Best regards.
Here I will describe two approaches I tried.
1.) It is possible to display the Additional Call Settings screen from your application. Although it looks like it is part of the Settings application, that is not true. This Activity is part of the native Phone application, and it may be launched with the following intent:
Intent additionalCallSettingsIntent = new Intent("android.intent.action.MAIN");
ComponentName distantActivity = new ComponentName("com.android.phone", "com.android.phone.GsmUmtsAdditionalCallOptions");
additionalCallSettingsIntent.setComponent(distantActivity);
startActivity(additionalCallSettingsIntent);
Then the user has to manually press the Caller ID preference and gets a radio-button dialog with 3 options.
This was not actually what I wanted to achieve when I asked this question. I wanted to avoid the step where the user has to select any further options.
2.) When the approach described under 1.) is executed in the native Phone application, the function setOutgoingCallerIdDisplay() from com.android.internal.telephony.Phone is used.
This was the basis for the next approach: use Java Reflection on this class and try to invoke the function with appropriate parameters:
try
{
    Class<?> phoneFactoryClass = Class.forName("com.android.internal.telephony.PhoneFactory");
    try
    {
        Method getDefaultPhoneMethod = phoneFactoryClass.getDeclaredMethod("getDefaultPhone");
        Method makeDefaultPhoneMethod = phoneFactoryClass.getMethod("makeDefaultPhone", Context.class);
        try
        {
            makeDefaultPhoneMethod.invoke(null, this);
            Object defaultPhone = getDefaultPhoneMethod.invoke(null);
            Class<?> phoneInterface = Class.forName("com.android.internal.telephony.Phone");
            Method getPhoneServiceMethod = phoneInterface.getMethod("setOutgoingCallerIdDisplay", int.class, Message.class);
            getPhoneServiceMethod.invoke(defaultPhone, 1, null);
        }
        catch (InvocationTargetException ex)
        {
            ex.printStackTrace();
        }
        catch (IllegalAccessException ex)
        {
            ex.printStackTrace();
        }
    }
    catch (NoSuchMethodException ex)
    {
        ex.printStackTrace();
    }
}
catch (ClassNotFoundException ex)
{
    ex.printStackTrace();
}
At first I tried just to use getDefaultPhone(), but I got a RuntimeException:
"PhoneFactory.getDefaultPhone must be called from Looper thread"
Obviously, the issue lies in the fact that I tried to call this method from a message loop that was not the native Phone app's.
I tried to avoid this by making my own default phone, but this was a security violation:
ERROR/AndroidRuntime(2338): java.lang.SecurityException: Permission Denial: not allowed to send broadcast android.provider.Telephony.SPN_STRINGS_UPDATED from pid=2338, uid=10048
The only way to overcome (both of) these would be to sign your app with the same key as the core system apps, as described under
Run secure API calls as root, android
I'm not sure if this is a global feature, but Australian phones can hide their number by prefixing the dialled number with #31# or 1831. This may not be the perfect solution, but a prefix like this could possibly work for your requirements.
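If you go down that route, the prefix can simply be prepended to the number you dial. A sketch (whether #31#/1831 is honoured depends on the carrier, the number used here is hypothetical, and the # characters must be URL-encoded in a tel: URI):
String number = "0412345678";                       // hypothetical destination number
Uri uri = Uri.parse("tel:" + Uri.encode("#31#") + number);
// ACTION_DIAL shows the dialer pre-filled and needs no CALL_PHONE permission;
// use ACTION_CALL (with that permission) to place the call directly.
startActivity(new Intent(Intent.ACTION_DIAL, uri));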
