Anyline OCR SDK integration for scanning URLs in an image - Android

I am trying to integrate the Anyline OCR SDK to extract URL links from an image.
From their documentation I understood that by modifying the IBAN scanner example I can achieve the same result for a URL.
Here is the config file for the scan view:
{
"captureResolution":"1080",
"cutout": {
"style": "rect",
"maxWidthPercent": "80%",
"maxHeightPercent": "80%",
"alignment": "top_half",
"width": 900,
"ratioFromSize": {
"width": 10,
"height": 1
},
"strokeWidth": 2,
"cornerRadius": 10,
"strokeColor": "FFFFFF",
"outerColor": "000000",
"outerAlpha": 0.3,
"feedbackStrokeColor": "0099FF"
},
"flash": {
"mode": "manual",
"alignment": "bottom_right"
},
"beepOnResult": true,
"vibrateOnResult": true,
"blinkAnimationOnResult": true,
"cancelOnResult": true,
"visualFeedback": {
"style": "contour_point",
"strokeColor": "0099FF",
"strokeWidth": 2,
"fillColor": "110099FF"
}
}
Code for ScanURLActivity.java:
public class ScanURLActivity extends AppCompatActivity {
private static final String TAG = ScanURLActivity.class.getSimpleName();
private AnylineOcrScanView scanView;
private URLResultView urlResultView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//Set the flag to keep the screen on (otherwise the screen may go dark during scanning)
getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
setContentView(R.layout.activity_anyline_ocr);
addURLResultView();
String license = getString(R.string.anyline_license_key);
// Copies given traineddata-file to a place where the core can access it.
// This MUST be called for every traineddata file that is used (before startScanning() is called).
// The file must be located directly in the assets directory (or in tessdata/ but no other folders are allowed)
scanView = new AnylineOcrScanView(getApplicationContext(),null);
scanView.copyTrainedData("tessdata/eng_no_dict.traineddata", "d142032d86da1be4dbe22dce2eec18d7");
scanView.copyTrainedData("tessdata/deu.traineddata", "2d5190b9b62e28fa6d17b728ca195776");
//Configure the OCR for URLs
AnylineOcrConfig anylineOcrConfig = new AnylineOcrConfig();
// use the line mode (line length and font may vary)
anylineOcrConfig.setScanMode(AnylineOcrConfig.ScanMode.LINE);
// set the languages used for OCR
anylineOcrConfig.setTesseractLanguages("eng_no_dict", "deu");
// the characters allowed in the result
anylineOcrConfig.setCharWhitelist("ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890abcdefghijklmnopqrstuvwxyz.\\:");
// set the height range the text can have
anylineOcrConfig.setMinCharHeight(20);
anylineOcrConfig.setMaxCharHeight(60);
// The minimum confidence required to return a result, a value between 0 and 100.
// (higher confidence means less likely to get a wrong result, but may be slower to get a result)
anylineOcrConfig.setMinConfidence(65);
// a simple regex for basic validation of the URL; results that don't match it will not be returned
// (full URL validation is considerably more complex)
anylineOcrConfig.setValidationRegex("^(?:http(s)?:\\/\\/)?[\\w.-]+(?:\\.[\\w\\.-]+)+[\\w\\-\\._~:/?#[\\]#!\\$&'\\(\\)\\*\\+,;=.]+$");
// removes small contours (helpful in this case as no letters with small artifacts are allowed, like iöäü)
anylineOcrConfig.setRemoveSmallContours(true);
// removes whitespaces from the result
// (also causes faster processing, because optimizations can be made if whitespaces are not relevant)
anylineOcrConfig.setRemoveWhitespaces(true);
// Experimental parameter to set the minimum sharpness (value between 0-100; 0 to turn sharpness detection off)
// The goal of the minimum sharpness is to avoid a time-consuming OCR step
// if the image is blurry and good results are therefore not likely.
anylineOcrConfig.setMinSharpness(66);
// set the ocr config
scanView.setAnylineOcrConfig(anylineOcrConfig);
// set an individual focus configuration for this example
FocusConfig focusConfig = new FocusConfig.Builder()
.setDefaultMode(Camera.Parameters.FOCUS_MODE_AUTO) // set default focus mode to be auto focus
.setAutoFocusInterval(8000) // set an interval of 8 seconds for auto focus
.setEnableFocusOnTouch(true) // enable focus on touch functionality
.setEnablePhaseAutoFocus(true) // enable phase focus for faster focusing on new devices
.setEnableFocusAreas(true) // enable focus areas to coincide with the cutout
.build();
// set the focus config
scanView.setFocusConfig(focusConfig);
// set the highest possible preview fps range
scanView.setUseMaxFpsRange(true);
// set sports scene mode to try and bump up the fps count even more
scanView.setSceneMode(Camera.Parameters.SCENE_MODE_SPORTS);
// initialize with the license and a listener
scanView.initAnyline(license, new AnylineOcrListener() {
@Override
public void onReport(String identifier, Object value) {
// Called with interesting values that arise during processing.
// Some possibly reported values:
//
// $brightness - the brightness of the center region of the cutout as a float value
// $confidence - the confidence, an Integer value between 0 and 100
// $thresholdedImage - the current image transformed into black and white
// $sharpness - the detected sharpness value (only reported if minSharpness > 0)
}
@Override
public boolean onTextOutlineDetected(List<PointF> list) {
// Called when the outline of a possible text is detected.
// If false is returned, the outline is drawn automatically.
return false;
}
@Override
public void onResult(AnylineOcrResult result) {
// Called when a valid result is found (minimum confidence is exceeded and validation with regex was ok)
urlResultView.setResult(result.getText());
urlResultView.setVisibility(View.VISIBLE);
}
@Override
public void onAbortRun(AnylineOcrError code, String message) {
// Is called when no result was found for the current image.
// E.g. if no text was found or the result is not valid.
}
});
// disable the reporting if set to off in preferences
if (!PreferenceManager.getDefaultSharedPreferences(this).getBoolean(
SettingsFragment.KEY_PREF_REPORTING_ON, true)) {
// The reporting of results - including the photo of a scanned meter -
// helps us in improving our product, and the customer experience.
// However, if you wish to turn off this reporting feature, you can do it like this:
scanView.setReportingEnabled(false);
}
urlResultView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
urlResultView.setVisibility(View.INVISIBLE);
scanView.startScanning();
}
});
}
private void addURLResultView() {
RelativeLayout mainLayout = (RelativeLayout) findViewById(R.id.main_layout);
RelativeLayout.LayoutParams params = new RelativeLayout.LayoutParams(
ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT);
params.addRule(RelativeLayout.CENTER_HORIZONTAL, RelativeLayout.TRUE);
params.addRule(RelativeLayout.CENTER_VERTICAL, RelativeLayout.TRUE);
urlResultView = new URLResultView(this);
urlResultView.setVisibility(View.INVISIBLE);
mainLayout.addView(urlResultView, params);
}
@Override
protected void onResume() {
super.onResume();
scanView.startScanning();
}
@Override
protected void onPause() {
super.onPause();
scanView.cancelScanning();
scanView.releaseCameraInBackground();
}
@Override
public void onBackPressed() {
if (urlResultView.getVisibility() == View.VISIBLE) {
urlResultView.setVisibility(View.INVISIBLE);
scanView.startScanning();
} else {
super.onBackPressed();
}
}
}
The AnylineOcrListener does not detect any OCR results, even with the minimum confidence set to 65.
Code for the URLResultView.java class:
public class URLResultView extends RelativeLayout {
private TextView resultText;
public URLResultView(Context context) {
super(context);
init();
}
public URLResultView(Context context, AttributeSet attrs) {
super(context, attrs);
init();
}
public URLResultView(Context context, AttributeSet attrs, int defStyleAttr) {
super(context, attrs, defStyleAttr);
init();
}
private void init() {
setPadding(DimensUtil.getPixFromDp(getContext(), 4), DimensUtil.getPixFromDp(getContext(), 16),
DimensUtil.getPixFromDp(getContext(), 4), DimensUtil.getPixFromDp(getContext(), 16));
//setBackgroundResource(R.drawable.);
inflate(getContext(), R.layout.url_result, this);
resultText = (TextView) findViewById(R.id.text_result);
}
public void setResult(String result) {
resultText.setText(result.trim());
}
}
Can someone help me with this integration, as I am unable to find any resource/tutorial other than their documentation and SDK samples?

Actually there are two problems that you ran into:
You only added the backslash \ to the charWhitelist, not the slash /
You have to set removeSmallContours to false
removeSmallContours removes everything from the line that is smaller than the minCharHeight. So in your case, it will remove the : and ., because they are too small for the SDK.
After changing these two settings, the scanning works fine with a minConfidence of 85.
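For reference, the two corrected settings would look roughly like this in the question's code (my sketch of the fix described above):
// include the forward slash / (and drop the backslash), so URLs like http://example.com can match
anylineOcrConfig.setCharWhitelist("ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890abcdefghijklmnopqrstuvwxyz.:/");
// keep small contours, otherwise the . and : are filtered out as too small
anylineOcrConfig.setRemoveSmallContours(false);
// scanning then works fine with a higher minimum confidence
anylineOcrConfig.setMinConfidence(85);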

Related

Handle Talkback in a Xamarin app using a virtual DPAD

I have a Xamarin app that was not meant to handle the Talkback functionality of Android, because for it to work well it would have had to be built in a specific way.
My app is a little older, and I simply can't do the whole thing over.
So, what is happening?
My Xamarin app is made with non-native libs that are not supported by Talkback, so when the user turns on the Talkback functionality, the app effectively stops receiving the DPAD events, since they are handled by the system's Accessibility Service.
That service gets the events and tries to handle them within my app, but since my components are non-native, the system does not recognize them and the DPAD event is wasted; hence the illusion that the DPADs are not working.
So, what do you have to do if you just want to handle the DPADs (and nothing else) yourself with Talkback on?
The answer to this post will contain the code that describes the following behavior:
1. Talkback won't be able to 'talk' about your components
2. The DPAD events will be handled by an Accessibility Delegate
3. A virtual DPAD will handle the navigation
4. The green rectangle used for focus will be disabled, since you won't need it anyway
5. The app will look exactly the same with Talkback on and off
This post was made for educational purposes, since I had a hard time coming up with the solution, and I hope the next guy finds it helpful.
The first step is to create a class that inherits from AccessibilityDelegateCompat, in order to create our own accessibility delegate.
class MyAccessibilityHelper : AccessibilityDelegateCompat
{
const string Tag = "MyAccessibilityHelper";
const int ROOT_NODE = -1;
const int INVALID_NODE = -1000;
const string NODE_CLASS_NAME = "My_Node";
public const int NODE_UP = 1;
public const int NODE_LEFT = 2;
public const int NODE_CENTER = 3;
public const int NODE_RIGHT = 4;
public const int NODE_DOWN = 5;
private class MyAccessibilityProvider : AccessibilityNodeProviderCompat
{
private readonly MyAccessibilityHelper mHelper;
public MyAccessibilityProvider(MyAccessibilityHelper helper)
{
mHelper = helper;
}
public override bool PerformAction(int virtualViewId, int action, Bundle arguments)
{
return mHelper.PerformNodeAction(virtualViewId, action, arguments);
}
public override AccessibilityNodeInfoCompat CreateAccessibilityNodeInfo(int virtualViewId)
{
var node = mHelper.CreateNode(virtualViewId);
return AccessibilityNodeInfoCompat.Obtain(node);
}
}
private readonly View mView;
private readonly MyAccessibilityProvider mProvider;
private Dictionary<int, Rect> mRects = new Dictionary<int, Rect>();
private int mAccessibilityFocusIndex = INVALID_NODE;
public MyAccessibilityHelper(View view)
{
mView = view;
mProvider = new MyAccessibilityProvider(this);
}
public override AccessibilityNodeProviderCompat GetAccessibilityNodeProvider(View host)
{
return mProvider;
}
public override void SendAccessibilityEvent(View host, int eventType)
{
Android.Util.Log.Debug(Tag, "SendAccessibilityEvent: host={0} eventType={1}", host, eventType);
base.SendAccessibilityEvent(host, eventType);
}
public void AddRect(int id, Rect rect)
{
mRects.Add(id, rect);
}
public AccessibilityNodeInfoCompat CreateNode(int virtualViewId)
{
var node = AccessibilityNodeInfoCompat.Obtain(mView);
if (virtualViewId == ROOT_NODE)
{
node.ContentDescription = "Root node";
ViewCompat.OnInitializeAccessibilityNodeInfo(mView, node);
foreach (var r in mRects)
{
node.AddChild(mView, r.Key);
}
}
else
{
node.ContentDescription = "";
node.ClassName = NODE_CLASS_NAME;
node.Enabled = true;
node.Focusable = true;
var r = mRects[virtualViewId];
node.SetBoundsInParent(r);
int[] offset = new int[2];
mView.GetLocationOnScreen(offset);
node.SetBoundsInScreen(new Rect(offset[0] + r.Left, offset[1] + r.Top, offset[0] + r.Right, offset[1] + r.Bottom));
node.PackageName = mView.Context.PackageName;
node.SetSource(mView, virtualViewId);
node.SetParent(mView);
node.VisibleToUser = true;
if (virtualViewId == mAccessibilityFocusIndex)
{
node.AccessibilityFocused = true;
node.AddAction(AccessibilityNodeInfoCompat.ActionClearAccessibilityFocus);
}
else
{
node.AccessibilityFocused = false;
node.AddAction(AccessibilityNodeInfoCompat.FocusAccessibility);
}
}
return node;
}
private AccessibilityEvent CreateEvent(int virtualViewId, EventTypes eventType)
{
var e = AccessibilityEvent.Obtain(eventType);
if (virtualViewId == ROOT_NODE)
{
ViewCompat.OnInitializeAccessibilityEvent(mView, e);
}
else
{
var record = AccessibilityEventCompat.AsRecord(e);
record.Enabled = true;
record.SetSource(mView, virtualViewId);
record.ClassName = NODE_CLASS_NAME;
e.PackageName = mView.Context.PackageName;
}
return e;
}
public bool SendEventForVirtualView(int virtualViewId, EventTypes eventType)
{
if (mView.Parent == null)
return false;
var e = CreateEvent(virtualViewId, eventType);
return ViewParentCompat.RequestSendAccessibilityEvent(mView.Parent, mView, e);
}
public bool PerformNodeAction(int virtualViewId, int action, Bundle arguments)
{
if (virtualViewId == ROOT_NODE)
{
return ViewCompat.PerformAccessibilityAction(mView, action, arguments);
}
else
{
switch (action)
{
case AccessibilityNodeInfoCompat.ActionAccessibilityFocus:
if (virtualViewId != mAccessibilityFocusIndex)
{
if (mAccessibilityFocusIndex != INVALID_NODE)
{
SendEventForVirtualView(mAccessibilityFocusIndex, EventTypes.ViewAccessibilityFocusCleared);
}
mAccessibilityFocusIndex = virtualViewId;
mView.Invalidate();
SendEventForVirtualView(virtualViewId, EventTypes.ViewAccessibilityFocused);
// virtual key event
switch (virtualViewId)
{
case NODE_UP:
HandleDpadEvent(Keycode.DpadUp);
break;
case NODE_LEFT:
HandleDpadEvent(Keycode.DpadLeft);
break;
case NODE_RIGHT:
HandleDpadEvent(Keycode.DpadRight);
break;
case NODE_DOWN:
HandleDpadEvent(Keycode.DpadDown);
break;
}
// refocus center
SendEventForVirtualView(NODE_CENTER, EventTypes.ViewAccessibilityFocused);
return true;
}
break;
case AccessibilityNodeInfoCompat.ActionClearAccessibilityFocus:
mView.RequestFocus();
if (virtualViewId == mAccessibilityFocusIndex)
{
mAccessibilityFocusIndex = INVALID_NODE;
mView.Invalidate();
SendEventForVirtualView(virtualViewId, EventTypes.ViewAccessibilityFocusCleared);
return true;
}
break;
}
}
return false;
}
private void HandleDpadEvent(Keycode keycode)
{
//Here you know what DPAD was pressed
//You can create your own key event and send it to your app
//This code depends on your own application, so I won't be providing it here
//Note: it is important to send both the KeyDOWN and the KeyUP event for it to work
}
}
Since the code is a bit large, I'll just explain the crucial parts.
Once Talkback is active, the dictionary (from our view below) will be used to create the virtual node tree of our virtual DPAD. With that in mind, the function PerformNodeAction will be the most important one.
It handles the actions once a virtual node is focused by the Accessibility system, based on the provided id of the virtual element. There are two parts: the first is the ROOT_NODE, which is the view itself that contains our virtual DPAD and can mostly be ignored; the second part is where the handling is done.
The second part is where the actions ActionAccessibilityFocus and ActionClearAccessibilityFocus are handled. Both are important, but the first one is where we can finally handle our virtual DPAD.
What is done here is that, with the provided virtual ID from the dictionary, we know which DPAD was selected (virtualViewId). Based on the selected DPAD, we can perform the action we want in the HandleDpadEvent function. What is important to notice is that after we handle the selected DPAD's event, we refocus our CENTER node, in order to be ready to handle the next button press. This is very important, since you don't want a situation where you go DOWN and then UP, only for the UP press to merely move focus back to the CENTER pad instead of being handled.
So, I'll repeat myself: the refocusing of the CENTER pad after the previous DPAD event was handled needs to be done so that we know exactly where we will be when the next DPAD button is pressed!
There is one function that I won't post here, since its code is very specific to my app: HandleDpadEvent. There you must create a key-down and a key-up event and send them to your main activity, where onKeyDown/Up will be triggered. Once you do that, the delegate is done.
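A minimal sketch of what HandleDpadEvent could look like; this is only an illustration, and MainActivity.Instance is a hypothetical placeholder for however your app exposes its activity:
private void HandleDpadEvent(Keycode keycode)
{
    var activity = MainActivity.Instance; // hypothetical: however you reach your main activity
    long now = SystemClock.UptimeMillis();
    // both the DOWN and the UP event must be dispatched for the key press to be recognized
    activity.DispatchKeyEvent(new KeyEvent(now, now, KeyEventActions.Down, keycode, 0));
    activity.DispatchKeyEvent(new KeyEvent(now, now, KeyEventActions.Up, keycode, 0));
}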
And once the Delegate is done, we have to make our view like this:
/**
* SimplestCustomView
*/
public class AccessibilityHelperView : View
{
private MyAccessibilityHelper mHelper;
Dictionary<int, Rect> virtualIdRectMap = new Dictionary<int, Rect>();
public AccessibilityHelperView(Context context) :
base(context)
{
Init();
}
public AccessibilityHelperView(Context context, IAttributeSet attrs) :
base(context, attrs)
{
Init();
}
public AccessibilityHelperView(Context context, IAttributeSet attrs, int defStyle) :
base(context, attrs, defStyle)
{
Init();
}
public void Init()
{
this.SetFocusable(ViewFocusability.Focusable);
this.Focusable = true;
this.FocusedByDefault = true;
setRectangle();
mHelper = new MyAccessibilityHelper(this);
ViewCompat.SetAccessibilityDelegate(this, mHelper);
foreach (var r in virtualIdRectMap)
{
mHelper.AddRect(r.Key, r.Value);
}
}
private void setRectangle()
{
virtualIdRectMap.Add(MyAccessibilityHelper.NODE_CENTER, new Rect(1, 1, 2, 2));
virtualIdRectMap.Add(MyAccessibilityHelper.NODE_LEFT, new Rect(0, 1, 1, 2));
virtualIdRectMap.Add(MyAccessibilityHelper.NODE_UP, new Rect(1, 0, 2, 1));
virtualIdRectMap.Add(MyAccessibilityHelper.NODE_RIGHT, new Rect(2, 1, 3, 2));
virtualIdRectMap.Add(MyAccessibilityHelper.NODE_DOWN, new Rect(1, 2, 2, 3));
}
protected override void OnDraw(Canvas canvas)
{
base.OnDraw(canvas);
}
}
That view looks like this:
What is there to notice?
The node pads are sized in pixels, and they sit in the top-left corner of your app.
They are set to that single-pixel size because Talkback would otherwise highlight the first node pad added to the dictionary with a green rectangle (that's standard behavior for Talkback).
All the rectangles in the view are added to a dictionary that will be used in our own Accessibility Delegate; note that the CENTER pad was added first and will therefore be focused once Talkback is activated.
The Init function
The Init function is crucial: there we create our view and set some Talkback parameters necessary for our virtual DPAD to be recognized by the system's own Accessibility Service.
It is also where our Accessibility Delegate is initialized, along with the dictionary of all the created DPAD nodes.
Ok, so far we made a Delegate and a View. I placed them both in the same file so they can see each other, but that is not a must.
So what now? We must add the AccessibilityHelperView to our app, in the MainActivity.cs file:
AccessibilityHelperView mAccessibilityHelperView;
In the OnCreate function, you can add the following code to initiate the view:
mAccessibilityHelperView = new AccessibilityHelperView(this);
In the OnResume function, you can check whether Talkback is on or off; based on the result, you can add or remove the mAccessibilityHelperView from your mBackgroundLayout (AddView and RemoveView).
The OnResume function should look like this:
if (TalkbackEnabled && !_isVirtualDPadShown)
{
mBackgroundLayout.AddView(mAccessibilityHelperView);
_isVirtualDPadShown = true;
}
else if (!TalkbackEnabled && _isVirtualDPadShown)
{
mBackgroundLayout.RemoveView(mAccessibilityHelperView);
_isVirtualDPadShown = false;
}
The TalkbackEnabled property checks whether the Talkback service is on or off, like this:
public bool TalkbackEnabled
{
get
{
AccessibilityManager am = MyApp.Instance.GetSystemService(Context.AccessibilityService) as AccessibilityManager;
if (am == null) return false;
String TALKBACK_SETTING_ACTIVITY_NAME = "com.android.talkback.TalkBackPreferencesActivity";
var serviceList = am.GetEnabledAccessibilityServiceList(FeedbackFlags.AllMask);
foreach (AccessibilityServiceInfo serviceInfo in serviceList)
{
String name = serviceInfo.SettingsActivityName;
if (name.Equals(TALKBACK_SETTING_ACTIVITY_NAME))
{
Log.Debug(LogArea, "Talkback is active");
return true;
}
}
Log.Debug(LogArea, "Talkback is inactive");
return false;
}
}
That should be all you need to make it work.
Hope I could help you out.

Xamarin Forms - Take photograph without any user interaction

I have a requirement to take a photograph of a user in Xamarin Forms without them having to press the shutter button. For example, when the app launches it should show a preview and count down from 5 seconds (to give the user chance to get in position) then take a picture automatically.
I have tried the Xamarin Media Plugin library; however, this stackoverflow post and this GitHub issue state that this feature is not supported.
I have seen a number of dead discussions such as this, with people asking similar questions without resolution.
I tried the LeadTools AutoCapture sample, but this only seems to work for documents/text and not people (unless I am missing something?).
I am now working my way through the Camera2Basic sample, which is quite old and only targets Android via android.hardware.camera2.
Are there any samples out there (or third-party libraries) that can achieve this requirement? Ideally I would like it to be cross-platform (iOS and Android), but currently the main focus is Android.
You can create a custom view renderer on Android to achieve that.
Basing it on this official sample is the most convenient route; just modify the code as follows to achieve what you want.
The official sample previews the camera view in a Xamarin.Forms app; we only need to add a timer that grabs the current frame from the camera after 5 seconds. The modified renderer code follows:
public class CameraPreviewRenderer : ViewRenderer<CustomRenderer.CameraPreview, CustomRenderer.Droid.CameraPreview>, Camera.IPreviewCallback
{
CameraPreview cameraPreview;
byte[] tmpData;
public CameraPreviewRenderer(Context context) : base(context)
{
}
protected override void OnElementChanged(ElementChangedEventArgs<CustomRenderer.CameraPreview> e)
{
base.OnElementChanged(e);
if (e.OldElement != null)
{
// Unsubscribe
cameraPreview.Click -= OnCameraPreviewClicked;
}
if (e.NewElement != null)
{
if (Control == null)
{
cameraPreview = new CameraPreview(Context);
SetNativeControl(cameraPreview);
}
Control.Preview = Camera.Open((int)e.NewElement.Camera);
// Subscribe
cameraPreview.Click += OnCameraPreviewClicked;
}
}
protected override void OnAttachedToWindow()
{
base.OnAttachedToWindow();
// call the timer method to get the current frame.
Device.StartTimer(new TimeSpan(0, 0, 5), () =>
{
// runs once, 5 seconds after the view is attached (see the return value below)
Device.BeginInvokeOnMainThread(() =>
{
Console.WriteLine("get data"+tmpData);
// using MessagingCenter to pass data to forms
MessagingCenter.Send<object, byte[]>(this, "CameraData", tmpData);
cameraPreview.Preview.StopPreview();
cameraPreview.IsPreviewing = false;
// interact with UI elements
});
return false; // return true to keep the timer running, false to stop it
});
}
void OnCameraPreviewClicked(object sender, EventArgs e)
{
if (cameraPreview.IsPreviewing)
{
cameraPreview.Preview.StopPreview();
cameraPreview.IsPreviewing = false;
}
else
{
cameraPreview.Preview.SetPreviewCallback(this);
cameraPreview.Preview.StartPreview();
cameraPreview.IsPreviewing = true;
}
}
protected override void Dispose(bool disposing)
{
if (disposing)
{
Control.Preview.Release();
}
base.Dispose(disposing);
}
// get frame all the time
public void OnPreviewFrame(byte[] data, Camera camera)
{
tmpData = data;
}
}
Now, Xamarin Forms can receive the data from MessagingCenter:
MessagingCenter.Subscribe<object, byte[]>(this, "CameraData", async (sender, arg) =>
{
MemoryStream stream = new MemoryStream(arg);
if (stream != null)
{
//image is defined in Xaml
image.Source = ImageSource.FromStream(() => stream);
}
});
image is defined in XAML: <Image x:Name="image" WidthRequest="200" HeightRequest="200"/>
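One caveat, which the original answer does not address: OnPreviewFrame delivers raw NV21 bytes, which ImageSource.FromStream cannot decode directly. A sketch of rewriting OnPreviewFrame to convert the frame to JPEG first, assuming the default NV21 preview format:
public void OnPreviewFrame(byte[] data, Camera camera)
{
    var size = camera.GetParameters().PreviewSize;
    using (var yuv = new YuvImage(data, ImageFormatType.Nv21, size.Width, size.Height, null))
    using (var ms = new MemoryStream())
    {
        // compress the raw frame to JPEG so ImageSource.FromStream can decode it later
        yuv.CompressToJpeg(new Rect(0, 0, size.Width, size.Height), 90, ms);
        tmpData = ms.ToArray();
    }
}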

'MyApp' has stopped and force-closes

I'm developing an Android app in Android Studio using the OpenCV library. When I open my app, it launches and then immediately closes with a crash message. I'm new to mobile development.
Using: OpenCV 3.1.0, Android Studio 3.0.
public class ScanLicensePlateActivity extends AppCompatActivity {
protected AnylineOcrScanView scanView;
private LicensePlateResultView licensePlateResultView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//Set the flag to keep the screen on (otherwise the screen may go dark during scanning)
getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
setContentView(R.layout.activity_anyline_ocr);
String license = getString(R.string.anyline_license_key);
// Get the view from the layout
scanView = (AnylineOcrScanView) findViewById(R.id.scan_view);
// Configure the view (cutout, the camera resolution, etc.) via json
// (can also be done in xml in the layout)
scanView.setConfig(new AnylineViewConfig(this, "license_plate_view_config.json"));
// Copies given traineddata-file to a place where the core can access it.
// This MUST be called for every traineddata file that is used
// (before startScanning() is called).
// The file must be located directly in the assets directory
// (or in tessdata/ but no other folders are allowed)
scanView.copyTrainedData("tessdata/GL-Nummernschild-Mtl7_uml.traineddata",
"8ea050e8f22ba7471df7e18c310430d8");
scanView.copyTrainedData("tessdata/Arial.traineddata", "9a5555eb6ac51c83cbb76d238028c485");
scanView.copyTrainedData("tessdata/Alte.traineddata", "f52e3822cdd5423758ba19ed75b0cc32");
scanView.copyTrainedData("tessdata/deu.traineddata", "2d5190b9b62e28fa6d17b728ca195776");
// Configure the OCR for license plate scanning via a custom script file
// This is how you could add custom scripts optimized by Anyline for your use-case
AnylineOcrConfig anylineOcrConfig = new AnylineOcrConfig();
anylineOcrConfig.setCustomCmdFile("license_plates.ale");
// set the ocr config
scanView.setAnylineOcrConfig(anylineOcrConfig);
// initialize with the license and a listener
scanView.initAnyline(license, new AnylineOcrListener() {
@Override
public void onReport(String identifier, Object value) {
// Called with interesting values that arise during processing.
// Some possibly reported values:
//
// $brightness - the brightness of the center region of the cutout as a float value
// $confidence - the confidence, an Integer value between 0 and 100
// $thresholdedImage - the current image transformed into black and white
// $sharpness - the detected sharpness value (only reported if minSharpness > 0)
}
@Override
public boolean onTextOutlineDetected(List<PointF> list) {
// Called when the outline of a possible text is detected.
// If false is returned, the outline is drawn automatically.
return false;
}
@Override
public void onResult(AnylineOcrResult result) {
// Called when a valid result is found
String results[] = result.getText().split("-");
String licensePlate = results[1];
licensePlateResultView.setLicensePlate(licensePlate);
licensePlateResultView.setVisibility(View.VISIBLE);
}
@Override
public void onAbortRun(AnylineOcrError code, String message) {
// Is called when no result was found for the current image.
// E.g. if no text was found or the result is not valid.
}
});
// disable the reporting if set to off in preferences
if (!PreferenceManager.getDefaultSharedPreferences(this).getBoolean(
SettingsFragment.KEY_PREF_REPORTING_ON, true)) {
// The reporting of results - including the photo of a scanned meter -
// helps us in improving our product, and the customer experience.
// However, if you wish to turn off this reporting feature, you can do it like this:
scanView.setReportingEnabled(false);
}
addLicensePlateResultView();
}
private void addLicensePlateResultView() {
RelativeLayout mainLayout = (RelativeLayout) findViewById(R.id.main_layout);
RelativeLayout.LayoutParams params = new RelativeLayout.LayoutParams(
ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT);
params.addRule(RelativeLayout.CENTER_HORIZONTAL, RelativeLayout.TRUE);
params.addRule(RelativeLayout.CENTER_VERTICAL, RelativeLayout.TRUE);
licensePlateResultView = new LicensePlateResultView(this);
licensePlateResultView.setVisibility(View.INVISIBLE);
mainLayout.addView(licensePlateResultView, params);
licensePlateResultView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
startScanning();
}
});
}
private void startScanning() {
licensePlateResultView.setVisibility(View.INVISIBLE);
// this must be called in onResume, or after a result to start the scanning again
scanView.startScanning();
}
@Override
protected void onResume() {
super.onResume();
startScanning();
}
@Override
protected void onPause() {
super.onPause();
scanView.cancelScanning();
scanView.releaseCameraInBackground();
}
@Override
public void onBackPressed() {
if (licensePlateResultView.getVisibility() == View.VISIBLE) {
startScanning();
} else {
super.onBackPressed();
}
}
@Override
protected void onDestroy() {
super.onDestroy();
}}
source code is here.
If possible please help.
Logcat error shown here
Ideally, more information regarding the error would be provided, i.e. the OpenCV library version, etc. Given it seems to be an Android issue, I would advise:
View the issues pertaining to this error on the OpenCV GitHub page and search for related Android errors to see if they match.
If you cannot find a related error, file an issue there.
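As a first diagnostic step (an assumption on my part, since the logcat output is not visible in this post), verify that the OpenCV native libraries load at all before any OpenCV call is made:
// logs whether the bundled OpenCV native library could be loaded
if (!OpenCVLoader.initDebug()) {
    Log.e("ScanLicensePlate", "OpenCV native library failed to load");
} else {
    Log.d("ScanLicensePlate", "OpenCV loaded successfully");
}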

Accessibility function implementation problems in Android

I'm developing an application that displays books. There is a screen (Activity) which shows a book. It has a custom view, something similar to a ViewSwitcher, and every page is a bitmap rendered by a custom View.
Now I need to implement an accessibility feature: the book should be read aloud by the phone.
I've read the Accessibility section here, https://developer.android.com/guide/topics/ui/accessibility/index.html, but it is not clear enough.
I use the Support Library for accessibility management, and I now have this code in the ViewGroup which manages book pages. Code 1:
private class EditionPagesViewSwitcherAccessibilityDelegate extends AccessibilityDelegateCompat {
private int mPageCount;
private double[] mPageRange;
@Override
public void onInitializeAccessibilityEvent(final View host, final AccessibilityEvent event) {
super.onInitializeAccessibilityEvent(host, event);
event.setClassName(EditionPagesViewSwitcher.class.getName());
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
event.setScrollable(canScroll());
}
if (event.getEventType() == AccessibilityEventCompat.TYPE_VIEW_SCROLLED && updatePageValues()) {
event.setItemCount(mPageCount);
// we use +1 because of user friendly numbers (from 1 not 0)
event.setFromIndex((int) (mPageRange[0] + 1));
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
event.setToIndex((int) (mPageRange[1] + 1));
}
}
}
@Override
public void onInitializeAccessibilityNodeInfo(final View host, final AccessibilityNodeInfoCompat info) {
super.onInitializeAccessibilityNodeInfo(host, info);
info.setClassName(EditionPagesViewSwitcher.class.getName());
info.setScrollable(canScroll());
info.setLongClickable(true);
if (canScrollForward()) {
info.addAction(AccessibilityNodeInfoCompat.ACTION_SCROLL_FORWARD);
}
if (canScrollBackward()) {
info.addAction(AccessibilityNodeInfoCompat.ACTION_SCROLL_BACKWARD);
}
}
@Override
public boolean performAccessibilityAction(final View host, final int action, final Bundle args) {
if (super.performAccessibilityAction(host, action, args)) {
return true;
}
switch (action) {
case AccessibilityNodeInfoCompat.ACTION_SCROLL_FORWARD: {
if (canScrollForward()) {
showNext();
return true;
}
}
return false;
case AccessibilityNodeInfoCompat.ACTION_SCROLL_BACKWARD: {
if (canScrollBackward()) {
showPrevious();
return true;
}
}
return false;
}
return false;
}
}
Here is the code from the page view. Code 2:
@Override
public void onInitializeAccessibilityEvent(final View host, final AccessibilityEvent event) {
super.onInitializeAccessibilityEvent(host, event);
event.setClassName(EditionPageView.class.getName());
if (hasText()) {
event.getText().add(getPageRangeText());
final String trimText = mSurfaceUpdateData.getPageText().trim();
if (trimText.length() > MAX_TEXT_LENGTH) {
event.getText().add(trimText.substring(0, MAX_TEXT_LENGTH));
// event.getText().add(trimText.substring(MAX_TEXT_LENGTH, trimText.length()));
}
else {
event.getText().add(trimText);
}
}
}
@Override
public void onInitializeAccessibilityNodeInfo(final View host, final AccessibilityNodeInfoCompat info) {
super.onInitializeAccessibilityNodeInfo(host, info);
info.setClassName(EditionPageView.class.getName());
}
Because the page text data loads asynchronously, accessibility has no text the first time the onInitializeAccessibilityEvent code executes. Then, when the data has been loaded, I fire the AccessibilityEvent.TYPE_VIEW_SELECTED and AccessibilityEvent.TYPE_VIEW_TEXT_CHANGED events. onInitializeAccessibilityEvent then executes again and the phone "reads" the book text.
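For reference, firing those events from the page view once the asynchronous load completes looks roughly like this (a sketch; the method name is mine):
// in EditionPageView, called on the UI thread after the page text has loaded
private void notifyAccessibilityTextLoaded() {
    sendAccessibilityEvent(AccessibilityEvent.TYPE_VIEW_SELECTED);
    sendAccessibilityEvent(AccessibilityEvent.TYPE_VIEW_TEXT_CHANGED);
}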
So my questions:
Is my accessibility implementation right? Maybe it is designed wrong? I didn't find any good tutorial about this feature.
Why do I need SDK version checks in the support implementation in Code 1? Why doesn't the support implementation handle this correctly?
Is firing TYPE_VIEW_SELECTED and TYPE_VIEW_TEXT_CHANGED really needed? Or should some other code be implemented?
The main question. In Code 2 there is a commented-out code line. The statement above it truncates the text to less than MAX_TEXT_LENGTH (it's 3800), because if the text is longer, nothing is played. Nothing. Is this an accessibility restriction? Any text shorter than this value is played fine.
Does anyone know where I can find a good tutorial? (Yes, I saw the samples.)
Does anyone have any custom implementations to look through?
UPDATED
Well, here are some answers:
As far as I can see, the TYPE_VIEW_SELECTED and TYPE_VIEW_TEXT_CHANGED events are not needed if you don't want the text to be read as soon as you get it.
On a Nexus 7, all large text is played well (text up to 8000 symbols), so this issue doesn't reproduce there, but it does on a Samsung Galaxy Tab 10.1 (Android 4.0.4) and on a Genymotion emulator of the Tab 10.1 with Android 4.3. And this is strange...
4. According to the documentation of String.substring():
The first argument you pass is the start index in the original string; the second argument is the end index in the original string (exclusive).
Example:
String text = "Hello";
String partOfText = text.substring(2, text.length());
partOfText equals "llo" (the first char is index 0)
So by passing your constant MAX_TEXT_LENGTH as the first argument, the substring would start at index 3800.
http://developer.android.com/reference/java/lang/String.html#substring(int)
You are right, MAX_TEXT_LENGTH is 3800.
About your doubt, in this code:
event.getText().add(trimText.substring(MAX_TEXT_LENGTH, trimText.length()));
you are trying to take the substring of "trimText" from MAX_TEXT_LENGTH to trimText.length()!
Supposing that trimText = "STACK", trimText.length() = 5, then what is trimText.substring(3800, 5) going to be?
At first glance this makes no sense; used correctly, it would look like this:
trimText.substring(0, 2) = "ST";
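If the goal is to expose the whole page text despite the length limit, one option (my sketch, not from the original posts; whether every added entry is actually spoken is device-dependent, as the update above suggests) is to add the text in MAX_TEXT_LENGTH-sized chunks inside the hasText() branch:
// split the page text into chunks no longer than MAX_TEXT_LENGTH
for (int start = 0; start < trimText.length(); start += MAX_TEXT_LENGTH) {
    int end = Math.min(start + MAX_TEXT_LENGTH, trimText.length());
    event.getText().add(trimText.substring(start, end));
}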

achartengine: getZoomRate always returns 1.5?

I have a TimeChart and I'm trying to save the zoom rate whenever it is changed.
Therefore I add a ZoomListener to my chart:
public void showChart()
{
mChartView = ChartFactory.getTimeChartView(this.context, this.mDataset,
this.mRenderer, TrackedValue.DATE_FORMAT_USER);
this.layout.addView(mChartView);
mChartView.setZoomRate(prefs.getChartZoomRate());
Log.d("showChart", "Set: "+prefs.getChartZoomRate());
mChartView.addZoomListener(new ZoomListener() {
@Override
public void zoomReset() {
// TODO Auto-generated method stub
}
@Override
public void zoomApplied(ZoomEvent e) {
prefs.setChartZoomRate(e.getZoomRate());
Log.d("zoomApplied", "Save: "+String.valueOf(e.getZoomRate()+", isZoomIn: "+e.isZoomIn()));
}
}, true, true);
}
When I see the chart and press the Zoom-In button, the output of my Log is:
zoomApplied Save: 1.5, isZoomIn: true
When I zoom out (via the zoom button), the log output is:
zoomApplied Save 1.5, isZoomIn: false
No matter how often I zoom in or out, e.getZoomRate() always returns 1.5; I don't get why it never reflects the actual zoom rate...
The e.isZoomIn() is working fine though.
Any ideas?
I have just tried the AChartEngine demo program, called the line below, and it displays the correct 1.2 value:
mChartView.setZoomRate(1.2f);
Please make sure you are setting the correct value.
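Note that for the zoom buttons, the reported rate is the configured per-press zoom factor (1.5 by default, or whatever was set via setZoomRate), not a cumulative zoom level, which matches the behavior in the question. If the goal is to save and restore the visible window, a sketch (my suggestion, not from the original answer) is to persist the axis range instead:
// save the currently visible window instead of the per-press zoom factor
double xMin = mRenderer.getXAxisMin();
double xMax = mRenderer.getXAxisMax();
// ...store xMin/xMax in prefs, then on restore:
mRenderer.setXAxisMin(xMin);
mRenderer.setXAxisMax(xMax);
mChartView.repaint();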
