How to integrate a fingerprint lock in Android

I am new to Android development. I want to integrate a fingerprint lock into my application. Which approach is best? Please help me find a good fingerprint lock solution.

USING THE CAMERA AS A FINGERPRINT LOCK
As a reference, check this:
Fingerprint Scanner using Camera
As someone who's done significant research on this exact problem, I can tell you it's difficult to get a suitable image for templating (feature extraction) using a stock camera found on any current Android device. The main debilitating issue is achieving significant contrast between the finger's ridges and valleys. Commercial optical fingerprint scanners (which you are attempting to mimic) typically achieve the necessary contrast through frustrated total internal reflection in a prism.
FTIR in Biometrics
In this case, light from the ridges contacting the prism is transmitted to the CMOS sensor while light from the valleys is not. You're simply not going to reliably get the same kind of results from an Android camera, but that doesn't mean you can't get something usable under ideal conditions.
I took the image on the left with a commercial optical fingerprint scanner (Futronics FS80) and the one on the right with a normal camera (15 MP Canon DSLR). After cropping, inverting (to match the scanner's convention), contrast-adjusting, etc., the camera image gave the following results.
[Image: optical scanner capture (left) vs. processed camera capture (right)]
The low contrast of the camera image is apparent.
[Image: ridge flow extracted from the camera capture]
But the software is able to accurately determine the ridge flow.
[Image: matching minutiae marked with red circles]
And we end up finding a decent number of matching minutiae (marked with red circles).
Here's the bad news. Taking these types of up-close shots of the tip of a finger is difficult. I used a DSLR with a flash to achieve these results. Additionally, most fingerprint matching algorithms are not scale invariant, so if the finger is farther away from the camera on a subsequent "scan", it may not match the original.
The software package I used for the visualizations is the excellent, BSD-licensed SourceAFIS. No corporate "open source version" / "paid version" shenanigans either, although it's currently only ported to C# and Java (limited).
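If you want to experiment with SourceAFIS yourself, the Java API makes the probe/candidate matching step quite short. A minimal sketch follows; the exact constructors have changed between SourceAFIS releases, and the file names are just placeholders:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

import com.machinezoo.sourceafis.FingerprintImage;
import com.machinezoo.sourceafis.FingerprintMatcher;
import com.machinezoo.sourceafis.FingerprintTemplate;

public class MatchDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder file names; any decodable grayscale fingerprint images will do.
        byte[] probeImage = Files.readAllBytes(Paths.get("scanner_capture.png"));
        byte[] candidateImage = Files.readAllBytes(Paths.get("camera_capture.png"));

        // Extract minutiae-based templates from both images.
        FingerprintTemplate probe = new FingerprintTemplate(new FingerprintImage(probeImage));
        FingerprintTemplate candidate = new FingerprintTemplate(new FingerprintImage(candidateImage));

        // Compare the templates; higher scores mean a more likely match.
        double score = new FingerprintMatcher(probe).match(candidate);

        // 40 is the threshold suggested in the SourceAFIS documentation; raise it
        // if you need a lower false-match rate.
        System.out.println("Score: " + score + ", match: " + (score >= 40));
    }
}
```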
Non-Camera-Based Solutions:
For the frighteningly small number of devices that have hardware support for "USB host mode", you can write a custom driver to integrate a fingerprint scanner with Android. I'll be honest: for the two models I've done this for, it was a huge pain. I accomplished it by using Wireshark to sniff USB packets between the scanner and a Linux box that had a working driver, and then writing an Android driver based on the sniffed commands.
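To give a flavor of what the Android side of such a driver looks like, here is a minimal sketch using the android.hardware.usb host APIs (API level 12+). The vendor/product IDs and the control-transfer values below are placeholders; the real ones come from the protocol you sniffed with Wireshark:

```java
import android.content.Context;
import android.hardware.usb.UsbDevice;
import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbManager;

public class ScannerDriver {
    // Placeholder IDs; use the ones your scanner actually reports.
    private static final int VENDOR_ID = 0x1234;
    private static final int PRODUCT_ID = 0x5678;

    public UsbDeviceConnection open(Context context) {
        UsbManager manager = (UsbManager) context.getSystemService(Context.USB_SERVICE);
        for (UsbDevice device : manager.getDeviceList().values()) {
            if (device.getVendorId() == VENDOR_ID && device.getProductId() == PRODUCT_ID) {
                // In a real app you must first request permission via UsbManager.requestPermission().
                UsbDeviceConnection connection = manager.openDevice(device);
                connection.claimInterface(device.getInterface(0), true);

                // Replay an initialization command captured with Wireshark (values made up here).
                byte[] init = new byte[] {0x01, 0x02};
                connection.controlTransfer(0x40, 0x01, 0, 0, init, init.length, 1000);
                return connection;
            }
        }
        return null;
    }
}
```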
Cross-Compiling FingerJetFX
Once you have worked out a solution for image acquisition (both potential solutions have their drawbacks), you can start to worry about getting FingerJetFX running on Android. First, you'll use their SDK to write a self-contained C++ program that takes an image and turns it into a template. After that you really have two options:
Compile it to a library and use JNI to interface with it.
Compile it to an executable and let your Android program call it as a subprocess.
For either you'll need the NDK. I've never used JNI, so I'll defer to the wisdom of others on how best to use it. I always tend to choose route #2 (sketched below). For this application I think it's appropriate, since you're only really calling the native code to do one thing: template your image. Once you've got your native program running and cross-compiled, you can use the answer to this question to package it with your Android app and call it from your Android code.
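Here is a rough sketch of route #2 on Android: unpack the cross-compiled binary from assets, mark it executable, and run it with ProcessBuilder. The binary name fjfx_extract and its command-line arguments are hypothetical; also note that recent Android versions restrict executing binaries from the app data directory, so on newer targets you would ship the binary in jniLibs and run it from the native library directory instead.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;

import android.content.Context;

public class TemplateExtractor {
    /** Copies the cross-compiled extractor out of assets and runs it on an image file. */
    public static String extract(Context context, File image) throws Exception {
        // "fjfx_extract" is a hypothetical name for your cross-compiled FingerJetFX program.
        File binary = new File(context.getFilesDir(), "fjfx_extract");
        if (!binary.exists()) {
            try (InputStream in = context.getAssets().open("fjfx_extract");
                 FileOutputStream out = new FileOutputStream(binary)) {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) > 0) out.write(buffer, 0, n);
            }
            binary.setExecutable(true);
        }

        File template = new File(context.getFilesDir(), "template.bin");
        Process process = new ProcessBuilder(
                binary.getAbsolutePath(), image.getAbsolutePath(), template.getAbsolutePath())
                .redirectErrorStream(true)
                .start();

        // Drain the program's output so it can't block, then wait for it to finish.
        StringBuilder log = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) log.append(line).append('\n');
        }
        process.waitFor();
        return log.toString();
    }
}
```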

1] There are no APIs or hardware support for fingerprint detection in the Android platform.
2] Existing fingerprint lock systems do not work on fingerprint pattern matching.
3] They work on pressure comparison, the area of the finger impression, etc.
Reference: Link


Check if user is looking at screen from code

On newer Android devices there's the possibility to unlock the phone with your face. It will also be possible with the iPhone X.
Is there a way of using these sensors/camera to check if the user is looking at the screen?
Edit:
I found that there's also a Vision Framework from Google: Vision Framework
Yes and no.
The built-in Face ID feature on iPhone X can unlock the device and authorize other built-in features (Apple Pay, iTunes/App Store payment, etc). You can also use it as a method of authorization in your app — the same LocalAuthentication framework calls that you use to support Touch ID on other devices automatically use Face ID instead on iPhone X.
Face ID, by default, requires the user to be looking at the screen. Thus, if your use case for attention detection has to do with authorization or unlocking, you can use LocalAuthentication to do it. (However, the user can disable attention detection in Accessibility settings, reducing the security but increasing the usability of Face ID. Third-party apps can't control or even read this setting.)
If you're talking about more directly doing attention detection or gaze tracking... Apple doesn't provide any API that exposes the inner workings of Face ID, or at least the gaze tracking part. Here's what they do have:
ARKit offers ARFaceTrackingConfiguration (see also sample code), which provides a detailed 3D model of the face in real time (supposedly using some of the same Neural Engine stuff as Face ID for detail and performance).
But as far as ARKit is concerned, eyes are just two holes in the face — there's no gaze tracking.
Apple's Vision framework offers face detection and face landmark recognition (that is, it locates eyes, nose, mouth, etc). Vision does identify the eye outline and the pupil, which you could theoretically use as a basis for gaze tracking.
However, since Vision offers such data only in 2D and doesn't get a 3D pose for the face, you're still left with a hefty computer vision problem if you want to build gaze tracking yourself. Vision processes 2D images, which means that it doesn't require iPhone X (but also means that it doesn't benefit from the TrueDepth camera on iPhone X either).
AVCapture offers access to the TrueDepth camera, so you can get the same color + depth imagery that Face ID and ARKit use to do their magic. (You just don't get said magic for yourself.)
None of this is to say that gaze tracking isn't possible on iOS in general or iPhone X specifically — all the building blocks are there, so given enough R&D effort you can implement it yourself. But Apple doesn't provide any developer access to the built-in gaze tracking mechanism.
Yes, in iOS 11 developers can use this feature in their third-party applications too, through the latest iOS Vision framework.
The whole idea behind this feature is using the front camera with facial recognition.
But you have to decide when to capture images for processing.
Tips
Capture when the application becomes active or comes to the foreground.
Also capture when the user interacts with any UI control or widget (buttons, tables, touch events, etc.).
Make sure to stop or pause processing when the application is not active.
You can also use the gyroscope and other sensors to determine the device's physical state.
If you are open to bulking up your app with an ML model, Google's MediaPipe is another option. You can even track the user's iris this way:
https://google.github.io/mediapipe/solutions/iris
Obviously this is overkill for simple eye detection, but you should be able to do much more with these models and the framework.

Sony Alpha 7R Camera -- On-Camera Apps

I have a Sony Alpha 7R camera and am looking for information about the "built-in" application support. What are those apps -- are they Android-based? Is there public information about how to create and install your own camera app? I'm NOT talking about the remote API.
The few available apps are kind of primitive and limited; in particular, I'd like to create a more versatile "interval timer" app -- the existing time-lapse app is a bit too simple for my purposes.
To be specific: versatile bracketing, absolute start/stop times, complex shooting programs with pre-programmed ISO, shutter, bracketing, etc. for a series of programmed interval shots, or simply shooting as fast as possible... As an example, I just lost valuable time shooting an eclipse because I had to reconfigure and switch modes.
Ideal would be a scenario where I could upload a shooting script to the app on the camera.
The real answer is that you can build applications for the Camera API using many different methods. When you create an application for the Camera API, you are just making API calls to the camera while your code is connected to the camera's Wi-Fi somehow. In the end, the easiest way to distribute your code is using smartphones, as it will work for iOS, Windows, etc. as well as Android, but you are not limited to these technologies. Please let me know if I can provide more information.
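For what it's worth, those API calls are plain JSON-RPC over HTTP once you're on the camera's Wi-Fi. A hedged Java sketch (actTakePicture is a documented Camera Remote API method; the IP and port vary by model, and 192.168.122.1:8080 is only a common default):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class SonyRemote {
    public static void main(String[] args) throws Exception {
        // Common default endpoint when connected to the camera's Wi-Fi; verify for your model.
        URL url = new URL("http://192.168.122.1:8080/sony/camera");
        String body = "{\"method\":\"actTakePicture\",\"params\":[],\"id\":1,\"version\":\"1.0\"}";

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = connection.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON response contains the URL of the captured image.
        try (Scanner scanner = new Scanner(connection.getInputStream(), "UTF-8")) {
            System.out.println(scanner.useDelimiter("\\A").next());
        }
    }
}
```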

Controlling camera hardware in Android phone

I want to control the aperture, shutter speed and ISO on my Android phone. Is there a way in which I can access these hardware features?
I won't say it's impossible to do this, but it IS effectively impossible to do it in a way that's generalizable to all -- or even many -- Android phones. If you stray from the official path defined by the Android API, you're pretty much on your own, and this is basically an embedded hardware development project.
Let's start with the basics: you need a schematic of the camera subsystem and datasheets for everything in the image pipeline. For every phone you intend to support. In some cases, you might find a few phones with more or less identical camera subsystems (particularly when you're talking about slightly-different carrier-specific models sold in the US), and occasionally you might get lucky enough to have a lot of similarity between the phone you care about and a Nexus phone.
This is no small feat. As far as I know, not even Nexus phones have official schematics released. Popular phones (especially Samsung and HTC) usually get teardowns published, so everyone knows the broad details (camera module, video-encoding chipset, etc.), but there's still a lot of guesswork involved in figuring out how it's all wired together.
Make no mistake -- this isn't casual hacking territory. If terms like I2C, SPI, MMC, and iDCT mean nothing to you, you aren't likely to get very far. If you don't literally understand how CMOS image sensors are read serially, and how Bayer arrays are used to produce RGB images, you're almost certainly in over your head.
That doesn't mean you should throw in the towel and give up... but it DOES mean that trying to hack the camera on a commercial Android phone probably isn't the best place to start. There's a lot of background knowledge you're going to need in order to pull off a project like this, and you really need to acquire that knowledge from a hardware platform that YOU control & have proper documentation for. Make no mistake... on the hierarchy of "hard" Android software projects, this ranks pretty close to the top of the list.
My suggestion (simplified and condensed a bit): buy a Raspberry Pi, and learn how to light up an LED from a GPIO pin. Then learn how to selectively light up 8 LEDs through a 74HC595 shift register. Then buy a SPI-addressed flash chip on a breakout board, and learn how to write to it. At some point, buy a video image sensor with a "serial" (FYI, "serial" != "rs232") interface from somebody like Sparkfun.com and learn how to read it one frame at a time, and dump the raw RGB data to flash. Learn how to use I2C to read and write the camera's control registers. At this point, you MIGHT be ready to tackle the camera in an Android phone for single photos.
If you're determined to start with an Android phone, at least stick to "Nexus" devices for now, and don't buy the phone (if you don't already own it) until you have the schematics, datasheets, and source code in your possession. Don't buy the phone thinking you'll be able to trace the schematic yourself. You won't. At least, not unless you're a grad student and have one hell of a graduate-level electronics lab (with X-ray capabilities) at your disposal. Most of these chips and modules are micro-BGA. You aren't going to trace them with a multimeter, and every Android camera I'm aware of has most of its low-level driver logic hidden in loadable kernel modules whose source isn't available.
That said, I'd dearly love to see somebody pull a project like this off. :-)
Android has published online training which contains all the information you need:
You can find it here - Media APIs
However, there are limitations; not all hardware supports all kinds of parameters.
And if I recall correctly, you can't control the shutter speed and ISO.
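For what it's worth, that limitation applies to the original android.hardware.Camera API. The newer android.hardware.camera2 API (API 21+) does expose manual exposure time and ISO on devices whose CameraCharacteristics report the MANUAL_SENSOR capability, although the aperture is fixed on almost all phones. A minimal sketch, assuming you already have an open CameraDevice, a CameraCaptureSession and a target Surface:

```java
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraMetadata;
import android.hardware.camera2.CaptureRequest;
import android.view.Surface;

public class ManualExposure {
    static void capture(CameraDevice camera, CameraCaptureSession session, Surface target)
            throws Exception {
        CaptureRequest.Builder builder =
                camera.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(target);

        // Turn off auto-exposure so the manual values below take effect.
        builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
        builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 10_000_000L); // 10 ms, in nanoseconds
        builder.set(CaptureRequest.SENSOR_SENSITIVITY, 400);           // ISO 400

        session.capture(builder.build(), null, null);
    }
}
```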

Can a smartphone (likely Android) be used as an image processing unit?

I'm new to image processing.
I have a camera (not built into a smartphone) that would use a smartphone (likely Android) as a processing unit. The camera will be placed on the back of a car, or maybe on the car's roof (let's mark this car as X), and the smartphone should alert if any other car approaches car X or if another car drives strangely (weaves right and left)...
My question is: can I use a smartphone as the processing unit for this kind of purpose, or will I need a server that processes the images and sends the result to the smartphone?
1 - If you think that a smartphone (likely Android) could NOT manage this kind of image processing, please tell me why.
2 - If you think that a smartphone (likely Android) COULD manage this, what tools can I use for this purpose?
It certainly can be done. I've used an Eee PC (1.4 GHz Atom processor) for image processing (3D reconstruction) and it worked very well. The system as a whole wasn't powerful enough, but the issue there was other stuff not directly related to the image processing portion (path finding, etc.). Depending on what you're going to do, you shouldn't have any issues processing images at 15, 30 or even 60 Hz.
As a note: Ever checked Android's camera app (the default one)? Newer versions offer a "background" mode for video recordings, replacing the actual backdrop with other videos. This is essentially image processing.
As for tools: I'm not sure if there's an OpenCV port yet, but this really depends on what (and how) you want to do it. Simple tracking, depth detection, etc. can definitely be done without such libraries and without having to rewrite too much.
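OpenCV does ship official Android (Java) bindings nowadays. As a rough idea of the kind of lightweight processing a phone can keep up with, here is a hedged frame-differencing sketch that flags large changes between consecutive frames; the blur kernel and thresholds are arbitrary placeholders:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class MotionDetector {
    // Note: the OpenCV native library must be loaded before any Mat is created
    // (e.g. via OpenCVLoader.initDebug() on Android).
    private Mat previousGray;

    /** Returns true if the new frame differs substantially from the previous one. */
    public boolean isMotion(Mat rgbaFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);

        if (previousGray == null) {
            previousGray = gray;
            return false;
        }

        // Pixel-wise difference between consecutive frames, then threshold it.
        Mat diff = new Mat();
        Core.absdiff(previousGray, gray, diff);
        Imgproc.threshold(diff, diff, 25, 255, Imgproc.THRESH_BINARY);

        double changedFraction = (double) Core.countNonZero(diff) / (diff.rows() * diff.cols());
        previousGray = gray;
        return changedFraction > 0.02; // arbitrary "motion" threshold
    }
}
```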

How to determine a program's hardware requirements when porting it from Windows to Android?

I want to port a 3D program written in OpenGL on the Windows platform to Android, but I wonder if it can run smoothly on typical Android devices, so I want to estimate how much hardware resource is sufficient for it to run smoothly. It is something like the recommended hardware requirements that a company publishes for a piece of software or a 3D game. I don't know how to work out the hardware requirements of my program when porting it to Android.
I used gDEBugger and it gave me some information, but I don't think that is enough. Does anyone here have an idea or solution? Many thanks in advance!
If your program is simple enough, you could write up some estimates about texture fill rate, which is a pretty basic (and old) metric of rendering performance. Nearly every 3D chip comes with a theoretical fill rate, so you can get the theoretical numbers of both your desktop system and some Android phones.
The texture memory footprint is another thing that you can estimate, especially using gDEBugger. Once again, these numbers are known for most chips.
This is a quick way to produce some numbers, obviously without any real life performance guarantees.
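As a back-of-the-envelope illustration of that fill-rate estimate (all numbers here are made up for the example, not measurements):

```java
public class FillRateEstimate {
    public static void main(String[] args) {
        // Illustrative numbers only: a 1280x720 screen, ~3x overdraw, 60 frames per second.
        long pixelsPerFrame = 1280L * 720L;
        double overdraw = 3.0;
        int fps = 60;

        double requiredFillRate = pixelsPerFrame * overdraw * fps; // pixels per second
        double theoreticalFillRate = 500e6; // e.g. a GPU spec sheet claiming 500 Mpix/s

        System.out.printf("Required: %.0f Mpix/s, available: %.0f Mpix/s%n",
                requiredFillRate / 1e6, theoreticalFillRate / 1e6);
        // ~166 Mpix/s required vs. 500 Mpix/s available leaves headroom in this toy example.
    }
}
```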
The best way would be to test it on an actual device, and get an idea of what hardware works well. You could distribute a beta app and get some feedback too.
It depends on the feature set that you use. For example, if you use FBOs, the device will have to support the framebuffer extension. If you use MSAA or smooth lines, the device will have to support the corresponding extensions.
After listing your requirements, you can use glGet to check for device support:
http://www.opengl.org/sdk/docs/man/xhtml/glGet.xml
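A hedged example of what that check can look like with Android's GLES20 bindings; it has to run on a thread with a current GL context, for example inside GLSurfaceView.Renderer.onSurfaceCreated (GL_OES_depth_texture is just an example extension):

```java
import android.opengl.GLES20;
import android.util.Log;

public class CapabilityCheck {
    /** Call this from the GL thread, e.g. in onSurfaceCreated(). */
    public static void logCapabilities() {
        // Space-separated list of supported extensions for this device/driver.
        String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
        boolean hasDepthTexture = extensions != null && extensions.contains("GL_OES_depth_texture");

        // Numeric limits are queried with glGetIntegerv.
        int[] maxTextureSize = new int[1];
        GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);

        Log.i("CapabilityCheck",
                "depth texture: " + hasDepthTexture + ", max texture size: " + maxTextureSize[0]);
    }
}
```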
