We have a custom barcode scanner that works with a SOFT trigger (using an app button) via Motorola's EMDK library.
barcodeManager = (BarcodeManager) this.emdkManager.getInstance(EMDKManager.FEATURE_TYPE.BARCODE);
scanner = barcodeManager.getDevice(BarcodeManager.DeviceIdentifier.DEFAULT);
scanner.addStatusListener(articleListener);
scanner.addDataListener(new Scanner.DataListener() {
    @Override
    public void onData(ScanDataCollection scanDataCollection) {
        processData(scanDataCollection);
    }
});
scanner.addDataListener(dataListener);
scanner.triggerType = Scanner.TriggerType.SOFT_ALWAYS;
scanner.enable();
How can I have both soft and hard triggers to scan data, and process the data received from both with the same DataListener?
Zebra Technologies acquired Motorola Solutions' enterprise business in October 2014; most of the updated documentation is now available on the Zebra Launchpad.
Scanner.TriggerType controls how the barcode scanner on Zebra Android devices is activated. Usually you set it either to Hard (the scan is activated by pressing the hardware trigger button) or to Soft (the scan is activated as soon as you call the Scanner.read() method).
To have an application that can use the hardware trigger and also has an on-screen button to activate the scanner, you can leave the triggerType set to Scanner.TriggerType.HARD and implement logic in the click event handler of the soft scan button that sets the TriggerType to Scanner.TriggerType.SOFT_ONCE and then calls the Scanner.read() method. You can also check whether another read is already active.
This is a sample implementation that you can test by adding a button to the Barcode API sample included in the EMDK for Android (latest is v4.0):
private void softScan() {
    if (scanner != null) {
        try {
            // Reset continuous flag
            bContinuousMode = false;
            if (scanner.isReadPending()) {
                // Cancel the pending read.
                scanner.cancelRead();
            }
            scanner.triggerType = TriggerType.SOFT_ONCE;
            scanner.read();
            new AsyncUiControlUpdate().execute(true);
        } catch (ScannerException e) {
            textViewStatus.setText("Status: " + e.getMessage());
        }
    }
}
So, usually you work with TriggerType.HARD, but when you press the SCAN button you cancel any pending read and switch to TriggerType.SOFT_ONCE.
The implementation of the status listener needs to switch back the scanner to TriggerType.HARD and call the read() method.
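For reference, here is a minimal sketch of what that status listener could look like; it reuses the scanner and textViewStatus names from the sample above, but it is an illustration rather than the exact code from the repository:

private Scanner.StatusListener statusListener = new Scanner.StatusListener() {
    @Override
    public void onStatus(StatusData statusData) {
        if (statusData.getState() == StatusData.ScannerStates.IDLE) {
            try {
                // Switch back to the hardware trigger once the soft read is over
                scanner.triggerType = Scanner.TriggerType.HARD;
                if (!scanner.isReadPending()) {
                    // Re-arm the scanner so the next hardware trigger pull works
                    scanner.read();
                }
            } catch (ScannerException e) {
                textViewStatus.setText("Status: " + e.getMessage());
            }
        }
    }
};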
You can find a complete sample at this github repository where I've added a Soft Scan button to the standard Zebra's EMDK Barcode API sample.
All the data are received by the same Data Listener.
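To answer the second part of the question, something along these lines (a sketch; the handling inside the loop is up to you) would see every successful scan, regardless of whether it was started by the hardware trigger or by the soft scan button:

private Scanner.DataListener dataListener = new Scanner.DataListener() {
    @Override
    public void onData(ScanDataCollection scanDataCollection) {
        if (scanDataCollection != null
                && scanDataCollection.getResult() == ScannerResults.SUCCESS) {
            for (ScanDataCollection.ScanData data : scanDataCollection.getScanData()) {
                String barcode = data.getData();                          // decoded content
                ScanDataCollection.LabelType type = data.getLabelType();  // symbology
                // Handle the barcode here; the listener cannot tell (and does not
                // need to know) which trigger started the read.
            }
        }
    }
};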
I am trying to receive the scanned barcode result from a device paired via Bluetooth/USB to an Android device.
Many topics say:
most plug-in barcode scanners (that I've seen) are made as HID profile devices so whatever they are plugged into should see them as a Keyboard basically.
source
So I am using this code to receive the result of the scan:
@Override
public boolean dispatchKeyEvent(KeyEvent event) {
    if (viewModel.onTriggerScan()) {
        //1
        char pressedKey = (char) event.getUnicodeChar();
        viewModel.addCharToCode(pressedKey);
        //2
        String fullCode = event.getCharacters();
        viewModel.fullCode(fullCode);
        //check if the scan is done, received all the chars
        if (event.getAction() == EditorInfo.IME_ACTION_DONE) {
            //does this work ?
            viewModel.gotAllChars();
            //3
            String fullCode2 = event.getCharacters();
            viewModel.fullCode(fullCode2);
        }
        return true;
    } else {
        return super.dispatchKeyEvent(event);
    }
}
Note: I don't have a barcode scanner device to test with.
Which code will receive the result (1, 2, or 3)?
You won't ever see an IME_ACTION_DONE; that's something Android-only that an external keyboard would never generate.
After that, it's really up to how the scanner works. You may get a full key up and key down for each character. You may not, and may receive multiple characters per event. You may see it finish with a terminator (like \n), or you may not; it depends on how the scanner is configured. Unless you can configure it yourself or tell the user how to configure it, you need to be prepared for either, which means treating the data as done either after seeing the terminator, or after a second or two once new data stops coming in.
Really you need to buy a configurable scanner model, try it in multiple modes, and make every mode work. Expect it to take a few days in your schedule.
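As a rough illustration of that "terminator or timeout" strategy: all names here are made up for the example, the 500 ms timeout is an arbitrary guess you should tune, and it assumes the scanner is the only keyboard-like input you care about while scanning.

// Buffer characters coming from the HID scanner and treat the scan as complete
// either when a terminator arrives or when no new character shows up for a while.
private static final long SCAN_TIMEOUT_MS = 500;          // arbitrary, tune for your scanner
private final StringBuilder scanBuffer = new StringBuilder();
private final Handler scanHandler = new Handler(Looper.getMainLooper());
private final Runnable scanTimeout = new Runnable() {
    @Override
    public void run() {
        flushScanBuffer();
    }
};

@Override
public boolean dispatchKeyEvent(KeyEvent event) {
    if (event.getAction() != KeyEvent.ACTION_DOWN) {
        return super.dispatchKeyEvent(event);
    }
    if (event.getKeyCode() == KeyEvent.KEYCODE_ENTER) {
        // Terminator seen: the scan is complete
        scanHandler.removeCallbacks(scanTimeout);
        flushScanBuffer();
        return true;
    }
    char pressedKey = (char) event.getUnicodeChar();
    if (pressedKey != 0) {
        scanBuffer.append(pressedKey);
        // Restart the "no more data" timer after every character
        scanHandler.removeCallbacks(scanTimeout);
        scanHandler.postDelayed(scanTimeout, SCAN_TIMEOUT_MS);
        return true;
    }
    return super.dispatchKeyEvent(event);
}

private void flushScanBuffer() {
    if (scanBuffer.length() > 0) {
        onBarcodeScanned(scanBuffer.toString());   // hypothetical callback into your ViewModel
        scanBuffer.setLength(0);
    }
}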
Workaround solution, but it works 100%.
The solution is based on a clone EditText (hidden from the UI). This EditText just receives the result; you add a listener to it, and when the result arrives you read it and clear the EditText field. An important step: when you trigger the scan, make sure the EditText has focus, otherwise you will not get the result.
Quick steps:
1- Create an EditText (any text field that receives input) in your layout.
2- Set its visibility to "gone" and clear it.
3- Add a text change listener to your EditText.
4- Focus your EditText when you start to trigger the scan.
5- Each time the listener fires, get the result and clear the EditText (see the sketch below).
Note: never forget to focus your EditText whenever you start to trigger a scan.
Note: this method works (99%) for all external scan devices and any barcode type.
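A minimal sketch of those steps. The view id, the callback name, and the assumption that the scanner appends a newline suffix are all illustrative; also, if requesting focus on a "gone" view does not work on your device, keep the field in the layout but make it tiny and transparent instead.

// Step 1-2: an EditText declared in the layout, kept out of sight
EditText scanField = findViewById(R.id.scan_field);       // assumed id

// Step 3: listen for text changes
scanField.addTextChangedListener(new TextWatcher() {
    @Override public void beforeTextChanged(CharSequence s, int start, int count, int after) { }
    @Override public void onTextChanged(CharSequence s, int start, int before, int count) { }

    @Override
    public void afterTextChanged(Editable s) {
        String raw = s.toString();
        // Assumes the scanner is configured to append a newline suffix
        if (raw.endsWith("\n")) {
            String barcode = raw.trim();
            s.clear();                          // step 5: clear for the next scan
            onBarcodeScanned(barcode);          // hypothetical callback
        }
    }
});

// Step 4: give the field focus right before triggering a scan
scanField.requestFocus();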
I am trying to programmatically control incoming calls (accept and reject), targeting Android 6.0 and above, in my companion app.
Working method, but deprecated
telecomManager.acceptCall() and telecomManager.endCall()
This works fine up to Android 10 and also on a virtual Android 11 device, but the developer site says it is deprecated:
This method was deprecated in API level 29.
Companion apps for wearable devices should use the InCallService API instead.
Partially working method
By simulating the headset hook button key event, I found that the call can be controlled. The following is my implementation:
void sendHeadsetHookLollipop() {
    MediaSessionManager mediaSessionManager =
            (MediaSessionManager) getApplicationContext().getSystemService(Context.MEDIA_SESSION_SERVICE);
    try {
        List<MediaController> mediaControllerList = mediaSessionManager.getActiveSessions(
                new ComponentName(getApplicationContext(), NotificationReceiverService.class));
        for (MediaController m : mediaControllerList) {
            if ("com.android.server.telecom".equals(m.getPackageName())) {
                m.dispatchMediaButtonEvent(new KeyEvent(KeyEvent.ACTION_UP, KeyEvent.KEYCODE_HEADSETHOOK));
                log.info("HEADSETHOOK sent to telecom server");
                break;
            }
        }
    } catch (SecurityException e) {
        log.error("Permission error. Access to notification not granted to the app.");
    }
}
With the above piece of code, I am able to answer the ringing call. To reject, I need to simulate a long press of the same KeyEvent.
1. How do I achieve a long press of a KeyEvent? (See the sketch below for what I have in mind.)
2. Is there any other non-deprecated implementation method for the above need?
3. In the TelecomManager class they suggest implementing InCallService. How can I implement InCallService without making my app the default call app?
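For question 1, this is the kind of simulation I have in mind (untested; whether the telecom server honors FLAG_LONG_PRESS on every Android version and OEM build is an assumption):

void sendLongHeadsetHook(MediaController m) {
    long now = SystemClock.uptimeMillis();
    KeyEvent down = new KeyEvent(now, now, KeyEvent.ACTION_DOWN, KeyEvent.KEYCODE_HEADSETHOOK, 0);
    // A repeated DOWN carrying FLAG_LONG_PRESS is how the framework itself reports a long press
    KeyEvent longPress = KeyEvent.changeTimeRepeat(down,
            now + ViewConfiguration.getLongPressTimeout(), 1, KeyEvent.FLAG_LONG_PRESS);
    KeyEvent up = new KeyEvent(now, now + ViewConfiguration.getLongPressTimeout(),
            KeyEvent.ACTION_UP, KeyEvent.KEYCODE_HEADSETHOOK, 0);

    m.dispatchMediaButtonEvent(down);
    m.dispatchMediaButtonEvent(longPress);
    m.dispatchMediaButtonEvent(up);
}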
When the phone is ringing (from an incoming call), if the phone number is a specific number I want to show my custom UI. If it is not, I want to pass the call to the built-in system call app (or any other call app).
I should use InCallService, and the device must set my app as the default call app, so that my custom-UI activity is shown even when the phone screen is locked. The following Kotlin source code is my goal.
override fun onCallAdded(call: Call) {
    //app should receive a new incoming call via 'onCallAdded'
    super.onCallAdded(call)
    val phoneNumber = getPhoneNumber(call)
    if (isMyTargetNumber(phoneNumber)) {
        //show my custom UI
    } else {
        //run a built-in call app
    }
}
The problem I want to solve is how to run the built-in call app appropriately. In other words, I want to fill in the blank 'else' branch:
else {
    //run a built-in call app
}
Apps on the Android market like 'truecaller' or 'whosecall' work exactly the way I want. I want to make my app like those apps. Please help me and give me some advice.
I have the following code to capture the Hook button press from a headset. This code works in Android 4.1, Android 5.0 and also on 7.0
I have two headphones,
First one is a simple Samsung handsfree/headphones which came with an old samsung phone. It has only one button.
Second one is a Sony headphone with handsfree mic, it also has only one button.
With both of these headsets plugged into Android 4.1 or Android 5, the button press is recognised in the onPlay method (see code below).
However, on Android 7.1.2, when I use the Samsung headset the onPlay method is NOT called when the Hook button is pressed.
The Sony headset button press does result in the onPlay method being called.
I added the commented-out code to see whether a MediaButton event is being received by the application. If I use the Samsung headset and press the button, it does result in the MediaButton event; I verified this using onMediaButtonEvent.
Why is this media button event not translating into onPlay, only in the case of Android 7.1.2 and only with that particular headset?
What should I be looking for in the event?
private void initMediaSessions()
{
    mSession = new MediaSessionCompat(getApplicationContext(), VoiceTicketService.class.getSimpleName());
    mSession.setFlags(MediaSessionCompat.FLAG_HANDLES_MEDIA_BUTTONS);
    mSession.setMediaButtonReceiver(null);
    mStateBuilder = new PlaybackStateCompat.Builder()
            .setActions(PlaybackStateCompat.ACTION_PLAY);
    mSession.setPlaybackState(mStateBuilder.build());
    mSession.setCallback(new MediaSessionCompat.Callback()
    {
        //callback code is here.
        @Override
        public void onPlay()
        {
            Log.d("onPlay", "Hook key pressed UI is active");
            toggleRecogniserState();
        }

        @Override
        public void onStop()
        {
            Log.d("onStop", "Hook key pressed UI is active");
            toggleRecogniserState();
        }

        @Override
        public void onPause()
        {
            Log.d("onPause", "Hook key pressed UI is active");
            toggleRecogniserState();
        }

        /* @Override
        public boolean onMediaButtonEvent(Intent mediaButtonEvent)
        {
            KeyEvent event = (KeyEvent) mediaButtonEvent.getParcelableExtra(Intent.EXTRA_KEY_EVENT);
            Log.d("onMediaButtonEvent ", "Hook key pressed UI is active " + event.getAction());
            if (event.getAction() == 0)
                toggleRecogniserState();
            return true;
        } */
    });
    mSession.setActive(true);
}
I have figured out the issue using getKeyCode() on the event.
The keycode expected for the hook button press is 79 (KEYCODE_HEADSETHOOK). Both headsets send keycode 79 when tested on Android 4.1/5.0.
However, Android 7.1 is running on a Xiaomi phone which has its own Android mod. I think this is the culprit: it reports the button press from the Samsung headphones as keycode 88 instead of 79. So it's a phone-specific issue and not an Android problem.
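If you need to support that ROM anyway, one option is to stop relying on the onPlay/onPause/onStop translation and handle the raw media button event yourself, accepting both keycodes. A rough sketch (79 is KEYCODE_HEADSETHOOK; 88 is simply the value this ROM happened to report):

@Override
public boolean onMediaButtonEvent(Intent mediaButtonEvent) {
    KeyEvent event = mediaButtonEvent.getParcelableExtra(Intent.EXTRA_KEY_EVENT);
    if (event != null && event.getAction() == KeyEvent.ACTION_DOWN) {
        int keyCode = event.getKeyCode();
        if (keyCode == KeyEvent.KEYCODE_HEADSETHOOK     // 79, the standard hook button
                || keyCode == 88) {                     // value observed on this Xiaomi ROM
            toggleRecogniserState();
            return true;                                // consumed, no onPlay translation needed
        }
    }
    return super.onMediaButtonEvent(mediaButtonEvent);
}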
I have a working application to which I would like to add voice commands. The current application transmits data back and forth over Bluetooth on a periodic (timer) basis. The user can press Buttons and NumberPickers to modify the data being sent over Bluetooth. There is also data received from the Bluetooth link, which is displayed in TextViews. This application is currently working correctly.
What I would like to do is add voice command capability, so that the user has either the choice of pressing the Buttons/NumberPickers, or can change the values with only voice commands.
I have tested some of the Speech-to-Text examples that can be found on various websites. I have successfully tested an app that uses RecognizerIntent. Upon a button press, a dialog pops up and you can speak words or phrases, and it correctly displays the result on the screen.
So, I think that I am close, but I'm not really sure how I can combine the Speech-to-Text with my current Bluetooth App. I don't want the user to have to press a button, I just want the App to be constantly listening. Also, I don't want the pop-up Voice Dialog on the screen.
My hardware is a Samsung tablet running Android 4.1.
I am relatively new to Android programming, so any advice (no matter how basic) is appreciated. Thanks.
The recognition itself is driven by the RecognizerIntent.ACTION_RECOGNIZE_SPEECH intent:
private static int SR_CODE = 123;

/**
 * Initializes the speech recognizer and starts listening to the user input
 */
private void listen() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    // Specify language
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.ENGLISH.toString());
    // Specify language model
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    // Specify how many results to receive
    intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
    // Start listening
    startActivityForResult(intent, SR_CODE);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == SR_CODE && resultCode == RESULT_OK) {
        if (data != null) {
            // Retrieves the n-best list of SR results
            ArrayList<String> nBestList = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            String bestResult = nBestList.get(0);
            Toast.makeText(getApplicationContext(), bestResult, Toast.LENGTH_LONG).show();
        } else {
            // Reports recognition error in the log
            Log.e(LOGTAG, "Recognition was not successful");
        }
    }
}
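If the Google pop-up dialog should not appear at all, the same intent can be handed to the SpeechRecognizer service instead of being launched with startActivityForResult. A minimal sketch, assuming the RECORD_AUDIO permission has already been granted:

private SpeechRecognizer recognizer;

private void listenWithoutDialog() {
    recognizer = SpeechRecognizer.createSpeechRecognizer(this);
    recognizer.setRecognitionListener(new RecognitionListener() {
        @Override
        public void onResults(Bundle results) {
            ArrayList<String> nBestList =
                    results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            if (nBestList != null && !nBestList.isEmpty()) {
                Toast.makeText(getApplicationContext(), nBestList.get(0), Toast.LENGTH_LONG).show();
            }
        }

        @Override
        public void onError(int error) {
            Log.e(LOGTAG, "Recognition error code: " + error);
        }

        // Remaining callbacks left empty for brevity
        @Override public void onReadyForSpeech(Bundle params) { }
        @Override public void onBeginningOfSpeech() { }
        @Override public void onRmsChanged(float rmsdB) { }
        @Override public void onBufferReceived(byte[] buffer) { }
        @Override public void onEndOfSpeech() { }
        @Override public void onPartialResults(Bundle partialResults) { }
        @Override public void onEvent(int eventType, Bundle params) { }
    });

    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
    recognizer.startListening(intent);
}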
Concerning the other issue, " I don't want the user to have to press a button, I just want the App to be constantly listening":
I recommend using CMUSphinx to recognize speech continuously. To achieve continuous speech recognition with the Google speech recognition API, you would have to resort to a loop in a background service, which takes too many resources and drains the device battery.
On the other hand, Pocketsphinx works really great. It's fast enough to spot a key phrase and recognize voice commands behind the lock screen without users touching their device. And it does all this offline.
You can try the demo.
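To give an idea of what the key-phrase spotting looks like, here is a sketch modeled on the pocketsphinx-android demo; the asset names ("en-us-ptm", "cmudict-en-us.dict") and the key phrase are taken from or modeled on the demo project and are assumptions about your own setup:

import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.Hypothesis;
import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;

private static final String KWS_SEARCH = "wakeup";
private static final String KEYPHRASE = "ok tablet";      // assumed key phrase
private SpeechRecognizer recognizer;

// Call from a background thread: syncing assets and building the decoder is slow
private void setupKeywordSpotting() throws IOException {
    Assets assets = new Assets(this);
    File assetDir = assets.syncAssets();

    recognizer = SpeechRecognizerSetup.defaultSetup()
            .setAcousticModel(new File(assetDir, "en-us-ptm"))
            .setDictionary(new File(assetDir, "cmudict-en-us.dict"))
            .getRecognizer();

    recognizer.addListener(this);                          // this implements edu.cmu.pocketsphinx.RecognitionListener
    recognizer.addKeyphraseSearch(KWS_SEARCH, KEYPHRASE);
    recognizer.startListening(KWS_SEARCH);                 // keeps listening until stop()/cancel()
}

@Override
public void onPartialResult(Hypothesis hypothesis) {
    if (hypothesis != null && KEYPHRASE.equals(hypothesis.getHypstr())) {
        // Key phrase spotted: stop spotting and hand control to your command handling
        recognizer.stop();
    }
}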
If you really want to use Google's API as I've demonstrated above, see this.