I am working on a fingerprint scanner for Android, using the camera or otherwise, and I want to know whether it is feasible and what success rate to expect. I came across an open source SDK named FingerJetFX, which claims feasibility with Android too:
The FingerJetFX OSE fingerprint feature extractor is platform-independent and can be built
for, with appropriate changes to the make files, and run in environments with or without
operating systems, including
Linux
Android
Windows
Windows CE
various RTOSs
However, I'm not sure whether a fingerprint scanner is actually possible this way. I downloaded the SDK and have been digging through it, but with no luck; I didn't even find any steps for integrating the SDK. So I have a few questions, listed below.
I'm looking for suggestions and guidance:
Is a fingerprint scanner possible in Android, with or without the camera?
Can I achieve my goal with the help of FingerJetFX?
If the answer to question 2 is yes, can someone provide steps to integrate the SDK in Android?
Any suggestions are appreciated.
Android Camera Based Solutions:
As someone who's done significant research on this exact problem, I can tell you it's difficult to get a suitable image for templating (feature extraction) using a stock camera found on any current Android device. The main debilitating issue is achieving significant contrast between the finger's ridges and valleys. Commercial optical fingerprint scanners (which you are attempting to mimic) typically achieve the necessary contrast through frustrated total internal reflection in a prism.
In this case, light from the ridges contacting the prism is transmitted to the CMOS sensor, while light from the valleys is not. You're simply not going to reliably get the same kind of results from an Android camera, but that doesn't mean you can't get something usable under ideal conditions.
I took the image on the left with a commercial optical fingerprint scanner (Futronics FS80) and the one on the right with a normal camera (15 MP Canon DSLR). After cropping, inverting (to match the other scanner's convention), and contrast-adjusting the camera image, we got the following results.
The low contrast of the camera image is apparent.
But the software is able to accurately determine the ridge flow.
And we end up finding a decent number of matching minutia (marked with red circles.)
Here's the bad news. Taking these kinds of close-up shots of the tip of a finger is difficult. I used a DSLR with a flash to achieve these results. Additionally, most fingerprint matching algorithms are not scale invariant, so if the finger is farther away from the camera on a subsequent "scan", it may not match the original.
The software package I used for the visualizations is the excellent, BSD-licensed SourceAFIS. No corporate "open source version"/"paid version" shenanigans either, although it's currently only ported to C# and Java (limited).
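To give a feel for how little code the matching side takes, here is a minimal sketch against SourceAFIS's Java bindings. The class and method names follow the 3.x releases as I recall them and may differ between versions, so treat this as an outline rather than a drop-in snippet.

```java
import com.machinezoo.sourceafis.FingerprintImage;
import com.machinezoo.sourceafis.FingerprintMatcher;
import com.machinezoo.sourceafis.FingerprintTemplate;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MatchDemo {
    public static void main(String[] args) throws Exception {
        // Decode each image and extract a minutiae template from it.
        FingerprintTemplate probe = new FingerprintTemplate(
                new FingerprintImage(Files.readAllBytes(Paths.get("probe.png"))));
        FingerprintTemplate candidate = new FingerprintTemplate(
                new FingerprintImage(Files.readAllBytes(Paths.get("candidate.png"))));
        // Compare the two templates; a higher score means more likely the same finger.
        double score = new FingerprintMatcher(probe).match(candidate);
        // The project's docs suggest a threshold around 40 for a low false-accept rate.
        System.out.println(score >= 40 ? "match" : "no match");
    }
}
```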
Non Camera Based Solutions:
For the frighteningly small number of devices that have hardware supporting "USB Host Mode", you can write a custom driver to integrate a fingerprint scanner with Android. I'll be honest: for the two models I've done this for, it was a huge pain. I accomplished it by using Wireshark to sniff USB packets between the scanner and a Linux box that had a working driver, and then writing an Android driver based on the sniffed commands.
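For reference, the replaying side uses Android's standard USB host API. In the sketch below, the vendor/product IDs, endpoint layout, and command bytes are all placeholders; the real values come out of your Wireshark capture for your particular scanner.

```java
// Fragment assumed to run inside an Activity/Service with a valid Context,
// after the user has granted USB access via UsbManager.requestPermission().
UsbManager manager = (UsbManager) context.getSystemService(Context.USB_SERVICE);
UsbDevice scanner = null;
for (UsbDevice device : manager.getDeviceList().values()) {
    if (device.getVendorId() == 0x1234 && device.getProductId() == 0x5678) {
        scanner = device; // placeholder IDs; use your scanner's real VID/PID
    }
}
UsbInterface intf = scanner.getInterface(0);
UsbEndpoint out = intf.getEndpoint(0); // assuming endpoint 0 is bulk OUT
UsbEndpoint in = intf.getEndpoint(1);  // assuming endpoint 1 is bulk IN
UsbDeviceConnection conn = manager.openDevice(scanner);
conn.claimInterface(intf, true);
byte[] startCapture = {0x01, 0x02}; // the sniffed "start capture" bytes go here
conn.bulkTransfer(out, startCapture, startCapture.length, 1000);
byte[] image = new byte[64 * 1024];
int read = conn.bulkTransfer(in, image, image.length, 5000); // raw frame back
```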
Cross-Compiling FingerJetFX:
Once you have worked out a solution for image acquisition (both potential solutions have their drawbacks) you can start to worry about getting FingerJetFX running on Android. First you'll use their SDK to write a self contained C++ program that takes an image and turns it into a template. After that you really have two options.
Compile it to a library and use JNI to interface with it.
Compile it to an executable and let your Android program call it as a subprocess.
For either route you'll need the NDK. I've never used JNI, so I'll defer to the wisdom of others on how best to use it. I always tend to choose route #2, and for this application I think it's appropriate, since you're only really calling the native code to do one thing: template your image. Once you've got your native program running and cross-compiled, you can use the answer to this question to package it with your Android app and call it from your Android code.
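A rough sketch of route #2 follows, assuming Java on the app side. The binary name fjfx_extract and its arguments are hypothetical stand-ins for whatever self-contained extractor you built with the SDK.

```java
// Ship the cross-compiled extractor in assets/, copy it to private app
// storage (assets aren't directly executable), mark it executable, and
// run it as a subprocess. Assumes a Context and IOException handling.
File exe = new File(context.getFilesDir(), "fjfx_extract");
try (InputStream in = context.getAssets().open("fjfx_extract");
     OutputStream out = new FileOutputStream(exe)) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) > 0) out.write(buf, 0, n);
}
exe.setExecutable(true);
Process p = new ProcessBuilder(exe.getAbsolutePath(),
        "input_image.raw", "output_template.bin") // hypothetical CLI arguments
        .directory(context.getFilesDir())
        .start();
int exit = p.waitFor(); // 0 means the template file was written successfully
```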
There are a couple of immediate hurdles:
Obtaining a good image of the fingerprint will be critical. According to their site, FingerJet expects standard fingerprint images, e.g. 8-bit greyscale (high contrast), flattened fingerprint images. If you took fingerprint pictures with the camera, the user would need a flat transparent surface (glass) to flatten the fingerprints onto in order to take the picture. Your app would then locate the fingerprint in the image and transform it into a format acceptable to FingerJet. A library like OpenCV would help do this (a rough sketch follows below).
FingerJetFX OSE does not appear to offer canned android support - you will have to compile the library for android and use it via JNI/NDK.
From there, FingerJet should provide you with a compact representation of the print that you can use for matching.
It would be feasible, but the usage requirement (need for the user to have a flat transparent surface available) might be a deal breaker...
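To make the first hurdle a little more concrete, here is a rough preprocessing sketch using OpenCV's Java bindings. The crop rectangle is a placeholder for wherever you actually locate the finger (e.g. via contour or skin detection), and FingerJet's exact input format should be checked against its documentation.

```java
// Convert a camera photo into the high-contrast 8-bit greyscale image
// a fingerprint feature extractor expects.
Mat bgr = Imgcodecs.imread("finger_photo.jpg");
Mat gray = new Mat();
Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);
Mat finger = new Mat(gray, new Rect(400, 300, 512, 512)); // placeholder crop
// CLAHE boosts local ridge/valley contrast better than global equalization.
CLAHE clahe = Imgproc.createCLAHE(4.0, new Size(8, 8));
Mat enhanced = new Mat();
clahe.apply(finger, enhanced);
Imgcodecs.imwrite("fingerjet_input.png", enhanced);
```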
Background
I am building an Optical Character Recognition (OCR) tool that makes sense of photographed forms.
Arguably the most complicated part of the pipeline is getting the target document into a corrected perspective; basically what is attempted in this tutorial.
Because the data is often acquired in very poor conditions, e.g.:
Uncontrolled brightness
Covered or missing corners
Background containing more texture than the target document
Shadows
Overlapping documents
I have "solved" the problem using instance + semantic segmentation.
Situation
The images are uploaded by users via an app that captures images as-is. There are versions of the app for both Android and iOS.
Question
Would it be possible to force the app to use the user's phone's "Documents" mode (if present) prior to acquiring the photo?
The objective is to simplify the pipeline.
In effect, at a descriptive level, the app would have to do three things:
1 - Activate the documents mode
2 - Outline the target document, if possible even showing the yellow frame
3 - Upload the processed file to the server; orientation and extension are not important
iOS
This isn't a "mode" for the native camera app.
Android
There isn't a way to have the "documents mode" automatically selected. It isn't available on all Android devices either, so even if you could, it wouldn't be reliable.
Your best bet is to follow the documentation for building a camera app, rather than using the native camera, if special document scanning is essential. This won't come out of the box on either platform.
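If you do build your own camera screen, the document-outline part of a "documents mode" is reproducible with OpenCV. A minimal sketch (Java bindings), assuming frame is the current camera image: find the largest four-sided contour and treat it as the page.

```java
Mat gray = new Mat(), edges = new Mat();
Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
Imgproc.Canny(gray, edges, 75, 200);
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(edges, contours, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
MatOfPoint2f best = null;
double bestArea = 0;
for (MatOfPoint c : contours) {
    MatOfPoint2f c2f = new MatOfPoint2f(c.toArray());
    double peri = Imgproc.arcLength(c2f, true);
    MatOfPoint2f approx = new MatOfPoint2f();
    Imgproc.approxPolyDP(c2f, approx, 0.02 * peri, true);
    double area = Imgproc.contourArea(approx);
    if (approx.total() == 4 && area > bestArea) { // quadrilateral = page candidate
        best = approx;
        bestArea = area;
    }
}
// "best" now holds the four corners to draw the highlight frame around
// (and later to perspective-warp before upload).
```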
I have a question about using ARcore.
How could one develop an app that recognizes the environment and places certain virtual elements at specific spots in the scene? For example, when viewing a hallway with a few doors, a ladder, and an exit, the app places a virtual board (sign) over the ladder with the word 'ladder' written on it, a board on each door with the name of the room, and a board saying 'exit' on the exit. Is this possible? It is not a geolocation app, because GPS would not be used; I wanted to do this through recognition of the environment alone.
Even with Vuforia I found this difficult, and so far I have not succeeded.
Can someone help me? Is there a manual or tutorial about this? Preferably not a video.
I thank everyone.
You will not go to space today
You want to do something that virtually no software can do yet. The only system I've seen that can do anything even remotely close to what you want is the Microsoft HoloLens, and even it can't do what you're asking. The HoloLens can only identify the physical shape of the environment, providing details such as "floor like, 3.7 square meters" and "wall like, 10.2 square meters", and it does so in 8 cm cube increments (any given cube is updated once every few minutes).
On top of that, you want to identify "doors." Doors come in all shapes and sizes and in different colors. Recognizing each door in a hallway and then somehow identifying each one with a room number from a camera image? Yeah, no. We don't have the technology to do that yet, not outside Google Labs and Boston Dynamics.
You will not find what you are looking for.
Dual-camera smartphones are relatively new to the market, but I was wondering whether a camera app could explicitly choose to use only one lens, or manually retrieve separate input from each lens.
I couldn't find any Android API documentation that is specifically designed for dual-lens phones, so I guess this is a hardware/OS-level implementation that would be difficult to override or bypass.
Android's Camera HAL documentation page doesn't mention dual-lens devices either, but it does seem to strengthen that assumption.
I don't have much experience with iOS, but I guess it wouldn't be easier.
So the question is: how, if at all, would such a task be possible on either Android or iOS?
Edit: seems that in iOS it is possible, as explained here (thanks to the4kman for pointing this out in the comments). So I guess the question remains for Android only.
Different vendors provide dual cameras for their Android devices in the hope of improving photo quality for the average user, more often than not specifically tuned for special conditions like challenging illumination or the distortions of selfie mode. Each vendor uses proprietary technologies to handle dual cameras, and they are not interested in disclosing the implementation details. The only public interface they support is a virtual single camera which is more or less compliant with Google's specs.
This does not mean that you cannot, by lucky chance, be able to unlock the camera on some specific device. Playing with photo modes, scenes, or other parameters may accidentally give you a picture that passed through only one lens of the dual setup.
In some rare cases documentation is actually present. For example, HTC lets you control the stereo vs. mono setting on the circa-2011 Evo 3D device.
Another example: this review implies that for the Huawei P9 and P10, you will actually get a one-camera result if you choose monochrome mode.
A recent article gives some perspective on how much the approaches to dual cameras differ among manufacturers.
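Worth noting for newer devices: starting with Android 9 (API 28), the Camera2 API formalizes this as "logical multi-camera", and a logical camera may expose the IDs of its physical lenses. Whether you can actually stream from them individually remains up to the vendor. A minimal enumeration sketch:

```java
// Fragment assumed to run with a Context; getCameraCharacteristics()
// throws CameraAccessException, which the caller must handle.
CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
for (String id : manager.getCameraIdList()) {
    CameraCharacteristics chars = manager.getCameraCharacteristics(id);
    int[] caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
    for (int cap : caps) {
        if (cap == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) {
            // Individual lenses can then be targeted per-stream via
            // OutputConfiguration.setPhysicalCameraId().
            Set<String> physical = chars.getPhysicalCameraIds();
            Log.d("MultiCam", "Logical camera " + id + " wraps " + physical);
        }
    }
}
```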
I've made a Camera App.
I want to add the functionality of anti-shake.
But I could not find a setting for anti-shake (image stabilizer).
Please help me!
Usually image stabilization is a built-in camera feature, while OIS (Optical Image Stabilization) is a built-in hardware feature; as of now, very few devices support them.
If the device doesn't have a built-in feature, I think you cannot do anything.
Android doesn't provide a direct API to manage image stabilization, but you may try:
if android.hardware.Camera.getParameters().getSupportedSceneModes() contains the steadyphoto keyword (see here), your device supports a kind of stabilization (usually it shoots when accelerometer data indicates a "stable" situation)
check android.hardware.Camera.getParameters().flatten() for an "OIS" or "image-stabilizer" keyword/value (or similar) to use in Parameters.set(key, value); for the Samsung Galaxy Camera you should use parameters.set("image-stabilizer", "ois");//can be "ois" or "off" (both checks are sketched after this list)
if you are really bored, you may try reading the accelerometer data and deciding to shoot when the device looks steady.
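A small sketch of the first two checks, using the (deprecated) android.hardware.Camera API; the "image-stabilizer" key is vendor-specific and will simply be ignored on devices that don't know it:

```java
Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
// Check 1: the standard "steadyphoto" scene mode.
List<String> scenes = params.getSupportedSceneModes();
if (scenes != null && scenes.contains(Camera.Parameters.SCENE_MODE_STEADYPHOTO)) {
    params.setSceneMode(Camera.Parameters.SCENE_MODE_STEADYPHOTO);
}
// Check 2: hunt the flattened key/value dump for vendor OIS-style keys.
String flat = params.flatten();
if (flat.contains("image-stabilizer")) {
    params.set("image-stabilizer", "ois"); // "ois" or "off" (Samsung Galaxy Camera)
}
camera.setParameters(params);
```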
Good luck.
If you want to develop a software image stabilizer, OpenCV is a helpful library. The following is one way to stabilize the image using features.
First, you should extract features from the image using a feature extractor such as the SIFT or SURF algorithms. In my case, FAST+ORB worked best. If you want more information, see this paper.
After you get the features of the images, you should find matching features between the images. There are several matchers; the brute-force matcher is not bad, but if brute force is slow on your system, you should use an algorithm like a KD-tree.
Last, you should compute the geometric transformation matrix that minimizes the error of the transformed points. You can use the RANSAC algorithm in this process.
You can develop this whole process with OpenCV, and I have already developed it for mobile devices. See this repository.
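A condensed sketch of that pipeline with OpenCV's Java bindings (ORB features, brute-force Hamming matching, then a RANSAC-fitted homography mapping the current frame back onto a reference frame); referenceFrame and currentFrame are assumed to be greyscale Mats:

```java
ORB orb = ORB.create();
MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
Mat d1 = new Mat(), d2 = new Mat();
orb.detectAndCompute(referenceFrame, new Mat(), kp1, d1);
orb.detectAndCompute(currentFrame, new Mat(), kp2, d2);
// ORB descriptors are binary, so match with Hamming distance.
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
MatOfDMatch matches = new MatOfDMatch();
matcher.match(d2, d1, matches); // query = current, train = reference
List<Point> src = new ArrayList<>(), dst = new ArrayList<>();
KeyPoint[] k1 = kp1.toArray(), k2 = kp2.toArray();
for (DMatch m : matches.toArray()) {
    src.add(k2[m.queryIdx].pt); // point in the current frame
    dst.add(k1[m.trainIdx].pt); // corresponding point in the reference frame
}
// RANSAC rejects outlier matches while fitting the transformation.
Mat h = Calib3d.findHomography(
        new MatOfPoint2f(src.toArray(new Point[0])),
        new MatOfPoint2f(dst.toArray(new Point[0])),
        Calib3d.RANSAC, 3.0);
Mat stabilized = new Mat();
Imgproc.warpPerspective(currentFrame, stabilized, h, referenceFrame.size());
```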
I intend to work on a project where I have to detect fingerprints in an image captured by Android's camera.
I have no prior knowledge of fingerprint processing. Is there any open source library for Android to accomplish this task? If not, where should I start to gain adequate knowledge about fingerprint detection and processing?
I have not worked with image processing before, so there is a lot to cover, and I can do that; but I want to know the exact starting point so that I don't have to circle around ...
Any links, papers, or books related to this topic would be very helpful.
Thanks in advance!
How will the phone camera take an image like this? Fingerprints are acquired using specialized sensors.
If you're talking about getting fingerprint data from a normal photograph of a finger, I doubt you'll be able to distinguish between fingerprints accurately.