Activating users' mobile phone camera Documents mode in an app - Android

Background
I am building an Optical Character Recognition (OCR) tool that makes sense of photographed forms.
Arguably the most complicated part of the pipeline is getting the target document into a proper perspective; basically what is attempted in this tutorial.
The data is often acquired in very poor conditions, i.e.:
Uncontrolled brightness
Covered or missing corners
Background containing more texture than the target document
Shadows
Overlapping documents
I have "solved" the problem using instance + semantic segmentation.
Situation
The images are uploaded by users via an app that captures them as-is. The app exists for both Android and iOS.
Question
Would it be possible to force the app to use the phone's Documents mode (if present) before acquiring the photo?
The objective is to simplify the pipeline.
In effect, at a descriptive level, the app would have to do three things:
1 - Activate Documents mode
2 - Outline the target document; if possible, even showing the yellow frame.
3 - Upload the processed file to the server. Orientation and file extension are not important.

iOS
There is no such "mode" in the native camera app.
Android
There isn't a way to have "documents mode" automatically selected. It isn't available on all Android devices, either, so even if you could, it wouldn't be reliable.
Your best bet is to follow the documentation for building a camera app, rather than relying on the native camera, if special document scanning is essential. This won't come out of the box on either platform; a rough sketch follows.
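As a starting point, here is a minimal in-app capture sketch in Kotlin using the AndroidX CameraX libraries, assuming a PreviewView in your layout. The function names are mine; the upload step and the document-outline overlay (your yellow frame, driven by your own segmentation output) are deliberately left out:

    import android.content.Context
    import androidx.camera.core.CameraSelector
    import androidx.camera.core.ImageCapture
    import androidx.camera.core.ImageCaptureException
    import androidx.camera.core.Preview
    import androidx.camera.lifecycle.ProcessCameraProvider
    import androidx.camera.view.PreviewView
    import androidx.core.content.ContextCompat
    import androidx.lifecycle.LifecycleOwner
    import java.io.File

    // Bind a live preview plus a still-capture use case to the lifecycle.
    fun startInAppCamera(context: Context, owner: LifecycleOwner, viewFinder: PreviewView): ImageCapture {
        val imageCapture = ImageCapture.Builder().build()
        val providerFuture = ProcessCameraProvider.getInstance(context)
        providerFuture.addListener({
            val provider = providerFuture.get()
            val preview = Preview.Builder().build().also {
                it.setSurfaceProvider(viewFinder.surfaceProvider)
            }
            provider.unbindAll()
            provider.bindToLifecycle(owner, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageCapture)
        }, ContextCompat.getMainExecutor(context))
        return imageCapture
    }

    // Capture a photo to a file; onSaved would hand the file to your upload code.
    fun capturePhoto(context: Context, imageCapture: ImageCapture, target: File, onSaved: (File) -> Unit) {
        val options = ImageCapture.OutputFileOptions.Builder(target).build()
        imageCapture.takePicture(options, ContextCompat.getMainExecutor(context),
            object : ImageCapture.OnImageSavedCallback {
                override fun onImageSaved(result: ImageCapture.OutputFileResults) = onSaved(target)
                override fun onError(e: ImageCaptureException) { /* log and surface to the user */ }
            })
    }

With this approach the "Documents mode" behaviour (edge detection, frame overlay, deskew) becomes your own code layered on the preview, which is exactly why it is more reliable than hoping the native camera offers it.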

Related

Can two apps use the camera at the same time on an Android phone?

Or is it impossible no matter how you build or code the apps?
I am talking about Android version 5.0,
which was released in 2014.
I mean: if one app is using the camera in the background, is using another camera app at the same time possible?
Some people say it depends on how the applications are coded.
Also, the app "Sound Assistant" lets two music apps use the speaker at the same time.
Then how about the camera?
And I saw a comment saying:
“Our current frame work does support limited support for multi-app access to the camera.
We allow one (and only one) "controlling" app to the camera, but an arbitrary number of "shared" apps to access the same camera.
There are some limitations for "shared" apps:
No camera controls (Exposure/White Balance/Focus, etc.).
No media type selection (can't choose VGA vs. 720p vs. 1080p, etc.).
Only access to a video stream by default, photo pins are blocked (any photo operation will use a video frame instead).
The "controlling" app decides what media type to use and can set any camera control. Any of the sharing app can register to be notified if a controlling app releases control of the camera, at which point, the sharing app can re-open the camera in controlling mode.
The mechanism described above does not require any copying of the captured frame so the overhead is minimal.”
What does this comment mean? Does it say two apps can access the camera at the same time in Android anyway?
Anyway, thanks for reading! I also want to know whether this is possible if the phone is rooted.
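For context: on stock Android (camera2, available since exactly Android 5.0), one client at a time gets exclusive control of a camera device. A second app cannot share the stream, but it can at least observe when a camera is grabbed or released. A minimal Kotlin sketch of that observation hook (the println logging is illustrative):

    import android.content.Context
    import android.hardware.camera2.CameraManager
    import android.os.Handler
    import android.os.Looper

    // Watch camera devices come and go. onCameraUnavailable fires when some
    // other client (app or system) opens the camera; onCameraAvailable fires
    // when it is released and we could open it ourselves.
    fun watchCameraAvailability(context: Context) {
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        val callback = object : CameraManager.AvailabilityCallback() {
            override fun onCameraAvailable(cameraId: String) {
                println("Camera $cameraId is free")
            }
            override fun onCameraUnavailable(cameraId: String) {
                println("Camera $cameraId is in use by another client")
            }
        }
        manager.registerAvailabilityCallback(callback, Handler(Looper.getMainLooper()))
    }

The quoted comment describes a controlling/shared model from a different framework, not stock Android's camera2, which is why the behaviour it promises doesn't match what you see on a phone.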

How to get all intermediate stages of image processing in Android?

If I use the camera2 API to capture an image, I get the "final" image after image processing, i.e. after noise reduction, color correction, vendor-specific algorithms, etc.
I should also be able to get the raw camera image following this.
The question is: can I get the intermediate stages of the image as well? For example, say the raw image is stage 0, noise reduction is stage 1, color correction is stage 2, and so on. I would like to get all of those stages and present them to the user in an app.
In general, no. The actual hardware processing pipelines vary a great deal between chip manufacturers, and between chip versions even from the same manufacturer. Plus, each Android device maker then adds their own software on top of that.
And often, it's not possible to dump outputs from every step of the process, only some of them.
So making a consistent API for fetching this isn't very feasible, and the camera2 API doesn't have support for it.
You can somewhat simulate it by turning things like noise reduction entirely off (if supported by the device, as sketched below) and capturing multiple images, but that of course isn't as good as multiple versions of a single capture.
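To make the "turn it off where supported" suggestion concrete, here is a minimal camera2 sketch in Kotlin; the function name and the choice to also disable edge enhancement are this sketch's assumptions:

    import android.hardware.camera2.CameraCharacteristics
    import android.hardware.camera2.CameraDevice
    import android.hardware.camera2.CameraManager
    import android.hardware.camera2.CaptureRequest
    import android.view.Surface

    // Check whether the device lets us switch noise reduction off, then
    // build a still-capture request with as little processing as it allows.
    fun buildMinimallyProcessedRequest(
        manager: CameraManager, cameraId: String,
        device: CameraDevice, target: Surface
    ): CaptureRequest {
        val chars = manager.getCameraCharacteristics(cameraId)
        val builder = device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)

        val nrModes = chars.get(CameraCharacteristics.NOISE_REDUCTION_AVAILABLE_NOISE_REDUCTION_MODES)
        if (nrModes?.contains(CaptureRequest.NOISE_REDUCTION_MODE_OFF) == true) {
            builder.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_OFF)
        }

        // Edge enhancement can be disabled the same way, when supported.
        val edgeModes = chars.get(CameraCharacteristics.EDGE_AVAILABLE_EDGE_MODES)
        if (edgeModes?.contains(CaptureRequest.EDGE_MODE_OFF) == true) {
            builder.set(CaptureRequest.EDGE_MODE, CaptureRequest.EDGE_MODE_OFF)
        }

        builder.addTarget(target)
        return builder.build()
    }

Capturing once with this request and once with the default template gives you a crude "before/after" pair, which is about as close to intermediate stages as the public API gets.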

Wrong Camera Orientation with Android & Vuforia

We are developing our own Android-based hardware and we wish to use Vuforia (developed via Unity3D) for certain applications. However, we are having problems making Vuforia work well with our current camera orientation settings.
On our hardware, when the camera is placed horizontally, everything works fine - that is, when the camera is parallel to the display. However, we need to place the camera vertically, in other words at a 90-degree difference to the placement of the display. These are all hardware settings; our kernel is programmed accordingly, and every other program that uses the camera works compatibly with everything, including our IMU sensors. However, apps developed with Vuforia behave completely oddly when the camera is placed vertically.
We assume the problem is related to Vuforia's algorithms for processing raw camera data; however, we are not sure. Moreover, we do not know how to fix the situation. Further details:
-When "Enable Video Background" is on, the projected image is distorted and no video feed is available. The AR projection appears on a black background with distorted dimensions.
-When "Enable Video Background" is on and the device is rotated, the black background is replaced by flickering solid colors.
-When "Enable Video Background" is off, the AR projection has normal dimensions (no distortion) however it is tracked with wrong axis settings. For example, when the target moves left in real world, the projection moves up.
-When "Enable Video Background" is off and the device is rotated, the AR projection is larger compared to its appearance when the device is in it's default state.
I will be glad to provide any more information you need.
Thank you very much, have a nice day.
PS: We have found out that applications that use the camera as their main purpose (camera apps, barcode scanners, etc.) work fine, while apps for which camera usage is a secondary feature (such as some games) have the same problem as Vuforia. This makes me think that apps that access the camera directly work fine, whereas those that go through the Android API and classes fail for some reason.
First understand that every platform deals with cameras differently, and beyond this, different Android phone manufacturers deal with them differently as well. In my testing WITHOUT Vuforia I had to transform the plane I cast the video feed onto 0,-90,90 for Android/iPhone and -270,-90,90 for the Windows Surface tablet. Past this, the iPhone rear camera was mirrored, and the Android front camera was mirrored, as was the Surface front camera. That is easy to account for, but an annoying issue is that the Google Pixel and Samsung front cameras were mirrored across the y axis (as were ALL iOS devices on the back camera), but the Nexus 6P was mirrored across the x axis. What I am getting at here is that there are a LOT of devices to account for with Android, so try more than just that one device. Vuforia so far has dealt with my Pixel and 4 of my iOS devices just fine.
As for how to fix your problem:
Go into your player settings for Unity and look at the orientation. There are a few options here; my application only uses portrait, so I force portrait and it seems to work fine (none of the problems I had to account for in the above-mentioned scenario). Vuforia previously did NOT support auto-rotation, so make sure you have the latest version, since it sounds like that is what you need. If auto-rotate is set and it is not working right, you may have to account for that specific device (don't do this for all devices until after you test those devices). To account for that device, use an if (or construct a case statement if you have multiple instances of this problem with different devices) and then reflect or translate as needed; a sketch of this pattern follows below. Cross-platform development systems (like Unity) don't always get everything perfect, since there is basically no standard. In these cases you have to directly account for them by creating a method with a case statement inside, so you can cleanly and modularly handle all the affected devices. It is a pain, but it beats developing for all devices separately.
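In Unity this branching would live in C#; purely to illustrate the device-keyed correction pattern described above, here is a minimal sketch in Kotlin keyed on Android's Build.MODEL. Every model string and correction value below is a made-up placeholder, not a measured setting:

    import android.os.Build

    // Hypothetical per-device camera correction table; extend it as
    // misbehaving devices turn up in testing.
    data class CameraCorrection(val rotationDegrees: Int, val mirrorX: Boolean, val mirrorY: Boolean)

    fun correctionForThisDevice(): CameraCorrection = when (Build.MODEL) {
        "Nexus 6P" -> CameraCorrection(rotationDegrees = 90, mirrorX = true, mirrorY = false)
        "Pixel"    -> CameraCorrection(rotationDegrees = 90, mirrorX = false, mirrorY = true)
        else       -> CameraCorrection(rotationDegrees = 0, mirrorX = false, mirrorY = false)
    }

You would apply the returned correction to your video-feed plane or preview transform once at startup, keeping all the per-device ugliness in one place.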
One more thing: make sure you check out the Vuforia configuration file, as it has some settings such as camera mirror and direction settings. These seem to be public settings, so you should also be able to script against them in your case statement, in the event you need to use "Flip Horizontally" for one phone but not another.

Fingerprint Scanner using Camera [closed]

I am working on a fingerprint scanner using the camera (or without it), wondering about its feasibility and success rate. I came across an open-source SDK named FingerJetFX, which claims feasibility on Android too:
The FingerJetFX OSE fingerprint feature extractor is platform-independent and can be built for, with appropriate changes to the make files, and run in environments with or without operating systems, including:
Linux
Android
Windows
Windows CE
various RTOSs
but I'm not sure whether a fingerprint scanner is possible or not. I downloaded the SDK and dug into it, but no luck; I didn't even find any steps to integrate the SDK. So I have a few questions, listed below.
I'm looking for suggestions and guidance:
Can a fingerprint scanner be implemented on Android, using the camera or without it?
Can I achieve my goal with the help of FingerJetFX?
If the answer to the second question is yes, can someone provide steps to integrate the SDK on Android?
Your suggestions are appreciated.
Android Camera Based Solutions:
As someone who's done significant research on this exact problem, I can tell you it's difficult to get a suitable image for templating (feature extraction) using a stock camera found on any current Android device. The main debilitating issue is achieving significant contrast between the finger's ridges and valleys. Commercial optical fingerprint scanners (which you are attempting to mimic) typically achieve the necessary contrast through frustrated total internal reflection in a prism.
In this case, light from the ridges contacting the prism is transmitted to the CMOS sensor, while light from the valleys is not. You're simply not going to reliably get the same kind of results from an Android camera, but that doesn't mean you can't get something usable under ideal conditions.
I took the image on the left with a commercial optical fingerprint scanner (Futronics FS80) and the one on the right with a normal camera (15MP Canon DSLR). After cropping, inverting (to match the other scanner's convention), contrast-adjusting, etc. the camera image, we got the following results.
The low contrast of the camera image is apparent.
But the software is able to accurately determine the ridge flow.
And we end up finding a decent number of matching minutiae (marked with red circles).
Here's the bad news: taking these kinds of up-close shots of the tip of a finger is difficult. I used a DSLR with a flash to achieve these results. Additionally, most fingerprint-matching algorithms are not scale-invariant, so if the finger is farther away from the camera on a subsequent "scan", it may not match the original.
The software package I used for the visualizations is the excellent, BSD-licensed SourceAFIS. No corporate "open source version" / "paid version" shenanigans either, although it's currently only ported to C# and Java (limited).
Non Camera Based Solutions:
For the frighteningly small number of devices whose hardware supports "USB Host Mode", you can write a custom driver to integrate a fingerprint scanner with Android. I'll be honest: for the two models I've done this for, it was a huge pain. I accomplished it by using Wireshark to sniff USB packets between the scanner and a Linux box that had a working driver, and then writing an Android driver based on the sniffed commands.
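For a sense of the plumbing involved, here is a minimal Kotlin sketch of the Android USB host API such a hand-rolled driver sits on top of. The command bytes, the endpoint choice, and the assumption that USB permission has already been granted are placeholders; the real protocol is whatever your packet sniffing reveals:

    import android.content.Context
    import android.hardware.usb.UsbConstants
    import android.hardware.usb.UsbDeviceConnection
    import android.hardware.usb.UsbManager

    // Open the first attached USB device and send one bulk command to it.
    // Assumes permission was already granted via UsbManager.requestPermission.
    fun talkToScanner(context: Context) {
        val manager = context.getSystemService(Context.USB_SERVICE) as UsbManager
        val device = manager.deviceList.values.firstOrNull() ?: return
        val iface = device.getInterface(0)
        val epOut = (0 until iface.endpointCount)
            .map { iface.getEndpoint(it) }
            .first { it.direction == UsbConstants.USB_DIR_OUT }
        val conn: UsbDeviceConnection = manager.openDevice(device) ?: return
        conn.claimInterface(iface, true)
        val cmd = byteArrayOf(0x01, 0x02) // hypothetical "start capture" command
        conn.bulkTransfer(epOut, cmd, cmd.size, 1000)
        conn.releaseInterface(iface)
        conn.close()
    }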
Cross Compiling FingerJetFX
Once you have worked out a solution for image acquisition (both potential solutions have their drawbacks), you can start to worry about getting FingerJetFX running on Android. First you'll use their SDK to write a self-contained C++ program that takes an image and turns it into a template. After that you really have two options.
Compile it to a library and use JNI to interface with it.
Compile it to an executable and let your Android program call it as a subprocess.
For either you'll need the NDK. I've never used JNI, so I'll defer to the wisdom of others on how best to use it. I always tend to choose route #2. For this application I think it's appropriate, since you're only really calling the native code to do one thing: template your image. Once you've got your native program running and cross-compiled, you can use the answer to this question to package it with your Android app and call it from your Android code.
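A minimal Kotlin sketch of route #2, assuming a cross-compiled templating binary packaged in the APK. The binary name "libfjfx.so" and its command-line interface are hypothetical stand-ins for your own wrapper around the FingerJetFX extractor:

    import android.content.Context
    import java.io.File

    // Run the bundled native templating binary as a subprocess.
    // Packaging the executable in jniLibs/ under a lib*.so name and running
    // it from nativeLibraryDir keeps this working on newer Android versions,
    // which forbid exec() from writable app storage.
    fun templateImage(context: Context, image: File): File {
        val bin = File(context.applicationInfo.nativeLibraryDir, "libfjfx.so")
        val out = File(context.cacheDir, "${image.nameWithoutExtension}.tpl")
        val proc = ProcessBuilder(bin.absolutePath, image.absolutePath, out.absolutePath)
            .redirectErrorStream(true)
            .start()
        check(proc.waitFor() == 0) {
            "templating failed: " + proc.inputStream.bufferedReader().readText()
        }
        return out
    }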
There are a couple of immediate hurdles:
Obtaining a good image of the fingerprint will be critical. According to their site, FingerJet expects standard fingerprint images - e.g. 8-bit greyscale (high-contrast), flattened fingerprint images. If you took fingerprint pictures with the camera, the user would need a flat transparent surface (glass) to flatten the fingerprints onto in order to take the picture. Your app would then locate the fingerprint in the image and transform it into a format acceptable to FingerJet. A library like OpenCV would help do this (see the sketch after this list).
FingerJetFX OSE does not appear to offer canned android support - you will have to compile the library for android and use it via JNI/NDK.
From there, FingerJet should provide you with a compact representation of the print that you can use for matching.
It would be feasible, but the usage requirement (the need for the user to have a flat transparent surface available) might be a deal breaker...
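As an illustration of that preprocessing step, here is a minimal Kotlin sketch using the OpenCV Android bindings; the function name and the CLAHE parameters are this sketch's assumptions:

    import org.opencv.core.Mat
    import org.opencv.core.Size
    import org.opencv.imgcodecs.Imgcodecs
    import org.opencv.imgproc.Imgproc

    // Load the photo, convert to 8-bit greyscale, and boost local contrast
    // so the ridges stand out before handing the image to the extractor.
    fun preprocessForFingerJet(inputPath: String, outputPath: String) {
        val src = Imgcodecs.imread(inputPath)
        val grey = Mat()
        Imgproc.cvtColor(src, grey, Imgproc.COLOR_BGR2GRAY)

        // CLAHE (adaptive histogram equalization) helps with the low
        // contrast typical of camera shots of fingertips.
        val clahe = Imgproc.createCLAHE(2.0, Size(8.0, 8.0))
        val enhanced = Mat()
        clahe.apply(grey, enhanced)

        Imgcodecs.imwrite(outputPath, enhanced)
    }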

jQuery Mobile Application Strange Behaviour

I have created an application whose home page contains several buttons; clicking one of those buttons redirects to a view which contains a jQM form, with a jQM calendar, text fields, buttons, a database, etc.
My first issue is that when I test my application on an Android device, it runs a little slowly, even though I have not used any images or other space-heavy data. The second is that when I test it on an Android tablet, the form page appears for a moment and then automatically redirects back to the home page, while the same feature works well on an Android phone.
Why this strange behaviour?
If anyone can guide me on this, it will be my pleasure.
It's difficult to make assumptions regarding the slow performance and the redirection issue. Below you can find some aspects which, in my opinion, affect the performance of a mobile application consisting of HTML5, CSS3 and JavaScript, and which should be taken into consideration during the analysis, design and development phases.
Implementation method based on the size
When developing small mobile applications, a single HTML page with internal AJAX page linking is recommended. For bigger mobile applications, multiple HTML pages with internal AJAX linking are recommended. Try to create reusable page templates.
Page Transitions
As stated in the jQM 1.1.1 docs, by default all transitions except fade require 3D transform support. Devices that lack 3D support will fall back to a fade transition, regardless of the transition specified. jQM does this to proactively exclude poorly-performing platforms like Android 2.x from advanced transitions and to ensure they still have a smooth experience. Note that there are platforms, such as Android 3.0, that technically support 3D transforms but still have poor animation performance, so this won't guarantee that every browser will be 100% flicker-free. Decide on the transition type you will use after considering the above.
Minify JS and CSS files
Each page should be as lightweight as possible. Minification's goal is to preserve the operational qualities of the code while reducing its overall byte footprint. There are a lot of tools available on the web, like the YUI Compressor, Minify and many more. Furthermore, there are tools like JSLint, which checks whether JavaScript source code complies with coding rules. JSLint is a code quality tool that checks for problems in JavaScript code; the reported problems are not necessarily syntax errors but may be structural problems. Note that JSLint does not prove that your code is correct; consider it a helping tool. There are also tools for CSS optimization, which helps you get smaller CSS file sizes and better-written code. You can find a lot of CSS optimizers on the web, such as CleanCSS and CSSTidy.
Components limits
It is recommended to limit HTML pages to 25KB in order to gain the optimal caching advantage for the majority of mobile web browsers. The caching limit varies depending on the OS version; for example, Android 2.1 has a caching limit of approximately 2MB.
HTML5 & CSS3
Try to create code that is easy to read, easy to extend and reusable. It is important to take full advantage of HTML5 and CSS3. The HTML5 DocType declaration (<!DOCTYPE html>) should be the first thing in your HTML5 document, before the html tag. It is an instruction to the web browser about what version of HTML the page is written in.
Use the W3C mobileOK Checker
The W3C mobileOK Checker is a free service by W3C that helps check the level of mobile-friendliness of web documents, and in particular assert whether a web document is mobileOK. A web page is mobileOK when it passes all the tests defined in the mobileOK Basic Tests 1.0 specification. To understand why checking a web document for mobile-friendliness really matters, it is worth emphasizing a few points about the so-called mobile world. Compared to a regular desktop computer, a mobile device may be regarded as limited at first glance: smaller screen size, less processing power, less memory, no mouse, and so on. Compared to fixed data connections, mobile networks can be slow and often have higher latency. Compared to a user sitting in front of a computer, the user on the go has limited time and is easily distracted. On top of these constraints, the mobile world is highly fragmented: many different devices, each of them defining a unique set of supported features.
Consider the appearance on different screen sizes
The screen density, the viewport size and the web page's scale should be taken into consideration when targeting different screen sizes. The viewport metadata can be used to define the viewport size, where the viewport is the container area in which the page is drawn. The viewport scale defines the zoom level applied to the web page. The target-densitydpi viewport property, along with CSS and JS techniques, can be used to change the target screen density for the web page. There are plenty of articles on the web regarding appearance on different screen sizes.
Identify the flows with potential delay
The PageSpeed Firefox/Chrome extension can be used to check page speed. When you profile a web page with PageSpeed, it evaluates the page's conformance to a number of rules. These rules are general front-end best practices that you can apply at any stage of web development. The extension gives specific tips and suggestions on how to best implement the rules and incorporate them into your development process.
I hope this helps.
