I am working on a face recognition app where a picture is taken and sent to a server for recognition.
I have to add a validation that the user captures a picture of a real person and not a picture of another picture. I tried an eye-blink feature in which the camera waits for an eye blink and captures as soon as the eye blinks, but that is not working out because a blink is falsely detected if the phone is shaken during capture.
I would like to ask for help here: is there any way to detect whether the user is capturing a picture of another picture? Any ideas would help.
I am using React Native to build both the Android and iOS apps.
Thanks in advance.
Thanks for the support.
I resolved it with the eye blink trick after all. Here is the little algorithm I used:
Open the camera and tap the capture button:
The camera detects whether a face is in view and waits for an eye blink.
If the blink probability is above 90% for both eyes, wait 200 milliseconds. Then detect the face again with an eye-open probability above 90% to verify that the face is still there, and capture the picture at the end.
That's a cheap trick, but it is working out so far. A rough sketch of the flow is below.
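A minimal sketch of that state machine in Java, assuming ML Kit's face detector on the native side (react-native-camera exposes equivalent leftEyeOpenProbability / rightEyeOpenProbability values in its onFacesDetected callback). The class name, the handleFrame hook and the exact thresholds are illustrative, not the code I actually shipped:

```java
// Sketch of the blink-then-capture check using ML Kit face detection.
// handleFrame() is assumed to be called once per preview frame with the current bitmap;
// the 90% thresholds and the 200 ms wait mirror the steps described above.
import android.graphics.Bitmap;
import android.os.SystemClock;

import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;

import java.util.List;

public class BlinkLivenessCheck {

    private final FaceDetector detector = FaceDetection.getClient(
            new FaceDetectorOptions.Builder()
                    // Classification mode is what gives per-face eye-open probabilities.
                    .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                    .build());

    private boolean blinkSeen = false;   // both eyes were closed on some earlier frame
    private long blinkTimeMs = 0;        // when that blink was seen

    public interface CaptureCallback { void capture(); }

    /** Call once per preview frame. */
    public void handleFrame(Bitmap frame, int rotationDegrees, CaptureCallback callback) {
        InputImage image = InputImage.fromBitmap(frame, rotationDegrees);
        detector.process(image).addOnSuccessListener((List<Face> faces) -> {
            if (faces.isEmpty()) { blinkSeen = false; return; }   // face lost: start over
            Face face = faces.get(0);
            Float left = face.getLeftEyeOpenProbability();
            Float right = face.getRightEyeOpenProbability();
            if (left == null || right == null) return;

            if (!blinkSeen) {
                // Step 1: both eyes closed with >90% confidence counts as a blink.
                if (left < 0.1f && right < 0.1f) {
                    blinkSeen = true;
                    blinkTimeMs = SystemClock.elapsedRealtime();
                }
            } else if (SystemClock.elapsedRealtime() - blinkTimeMs >= 200) {
                // Step 2: 200 ms later the face must still be there with both eyes open again.
                if (left > 0.9f && right > 0.9f) {
                    blinkSeen = false;
                    callback.capture();
                } else {
                    blinkSeen = false;   // eyes not open yet or face changed: start over
                }
            }
        });
    }
}
```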
Regards
On some iPhones (iOS 11.1 upwards), there's a TrueDepth camera that's used for Face ID. With it (or the rear-facing dual camera system) you can capture images along with depth maps. You could exploit that feature to check whether the face is flat (captured from another picture) or has normal facial contours. See here...
One would have to come up with a 3D face model to fool that.
It's limited to only a few iPhone models, though, and I don't know of an Android equivalent.
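To make the idea concrete (this is not iOS-specific code, just an illustration of the check): once you have a depth map covering the face region, you can look at how much depth relief there is inside the face bounding box; a photo of a photo is close to planar. A rough sketch, with the depth map as a plain float array of metres and a 1 cm threshold that is purely an assumption you would tune:

```java
// Rough sketch of the "is the face flat?" idea: a real face should show clearly more
// depth variation (nose vs. cheeks vs. ears) than a printed or on-screen photo.
public final class DepthLivenessCheck {

    private DepthLivenessCheck() {}

    public static boolean looksFlat(float[] depthMetres, int width,
                                    int faceLeft, int faceTop, int faceRight, int faceBottom) {
        float min = Float.MAX_VALUE;
        float max = -Float.MAX_VALUE;
        for (int y = faceTop; y < faceBottom; y++) {
            for (int x = faceLeft; x < faceRight; x++) {
                float d = depthMetres[y * width + x];
                if (d <= 0f) continue;              // skip invalid depth samples
                if (d < min) min = d;
                if (d > max) max = d;
            }
        }
        // Less than ~1 cm of relief across the face region -> probably a flat picture.
        return (max - min) < 0.01f;
    }
}
```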
Related
I am working on an app that detects the user's eye blink. I have been searching the web for two days but still don't have a clear idea of how this can be done.
As far as I know, the system supports face detection, which means detecting whether there is a face in a picture and locating it.
But this works only with still images and detects only faces, which is not what I need. I need to open a camera activity, detect the user's face directly, locate the eyes and other facial parts, and wait until the user blinks, like when you long-press the screen on Snapchat.
I have seen a lot about OpenCV but am still not sure what it is, how to use it, or whether it fits my goals.
Note: Snapchat has not released an API for the technology it uses, and it doesn't let anyone talk to the engineers behind it.
I know that OpenCV can do image processing on the device's camera feed (as opposed to only being able to process still images).
Here is an introductory tutorial on eye detection using OpenCV:
http://romanhosek.cz/android-eye-detection-and-tracking-with-opencv/
If you can't find eye-blink detection tutorials in a Google search, I think you'll have to write the eye-blink detection code on your own, but OpenCV will be a helpful tool in doing so. There are lots of beginner OpenCV tutorials to help you get started.
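One possible starting point, sketched with OpenCV's Java bindings: detect the face with a Haar cascade, look for eyes inside it with an eye cascade, and treat "eyes were visible, now they are not" as the blink event. The cascade file paths below are placeholders (OpenCV ships these cascades as sample data), and the whole thing is only an outline to build on:

```java
// Outline of blink detection per preview frame with OpenCV's Java bindings.
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class BlinkDetector {

    private final CascadeClassifier faceCascade =
            new CascadeClassifier("/path/to/haarcascade_frontalface_alt.xml");
    private final CascadeClassifier eyeCascade =
            new CascadeClassifier("/path/to/haarcascade_eye_tree_eyeglasses.xml");

    private boolean eyesWereVisible = false;

    /** Returns true on the frame where previously visible eyes disappear (eyes just closed). */
    public boolean processFrame(Mat rgbaFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);

        MatOfRect faces = new MatOfRect();
        faceCascade.detectMultiScale(gray, faces);
        Rect[] faceRects = faces.toArray();
        if (faceRects.length == 0) {
            eyesWereVisible = false;   // no face: reset state
            return false;
        }

        // Look for eyes only inside the first detected face.
        MatOfRect eyes = new MatOfRect();
        eyeCascade.detectMultiScale(gray.submat(faceRects[0]), eyes);
        boolean eyesVisibleNow = eyes.toArray().length >= 2;

        boolean blink = eyesWereVisible && !eyesVisibleNow;
        eyesWereVisible = eyesVisibleNow;
        return blink;
    }
}
```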
I want to recognise an image and also extract information from it. For example, while pointing the camera at a vehicle dashboard, the app should detect all the LED lights and show augmented reality information for each LED, including what it means if the LED is blinking.
I tried Wikitude, Craft AR and other libraries, but they focus on recognising one single image.
I want to recognise an image and, within that image, detect all the LED lights and display the information in an augmented reality way on the camera display.
I think you are probably looking at a computer vision segmentation/detection problem. For that I would suggest using OpenCV to process the images and extract the LED information you need. Depending on what you want to do with that, you may or may not need one of those AR libraries, but without more information I would suggest running some experiments in OpenCV to achieve your goals.
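As a first experiment along those lines (only a sketch; the HSV bounds and area limits are guesses that would need tuning on real dashboard frames): lit LEDs are small, bright, saturated blobs, so thresholding in HSV and taking the contours of the resulting mask gives you candidate LED positions.

```java
// Candidate LED detection: threshold bright, saturated pixels and take contour bounding boxes.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class LedDetector {

    /** Returns bounding boxes of bright, saturated blobs (candidate LEDs) in a BGR frame. */
    public static List<Rect> findLedCandidates(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // Keep pixels that are both saturated and bright, at any hue.
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(0, 100, 200), new Scalar(180, 255, 255), mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> leds = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            double area = Imgproc.contourArea(contour);
            if (area > 5 && area < 500) {          // discard noise and large reflections
                leds.add(Imgproc.boundingRect(contour));
            }
        }
        return leds;
    }
}
```

Tracking whether each candidate box disappears and reappears across consecutive frames would then tell you which LEDs are blinking, and the AR overlay (with or without one of those libraries) can be anchored to the returned rectangles.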
I'm building an Android app that has to identify, in real time, a mark/pattern that will be on the four corners of a visiting card. I'm using the preview stream of the phone's rear camera as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how reference dots will be shown on screen by a QR reader at the corner points of the QR code preview.
I'm aware of how to get the frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or how to optimize it for real-time detection. I tried messing around with OpenCV, and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using the raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on the algorithm to use to find the pattern.
The image below shows my pattern along with some details (ratios) about it (it is the same as the one used in QR codes, but I have it at 4 corners instead of 3).
I think one approach is to look for black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code it on Android. I am looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
Processing that needs to be done, and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In that example, the image shown on the tablet has a delay (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.
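The ratio idea from the question can also be coded directly against the raw preview pixels, without OpenCV: walk each row of the luminance (Y) plane, measure the lengths of alternating dark/light runs, and flag any five consecutive runs whose lengths match the mark's ratios. A rough sketch, assuming the 1:1:3:1:1 ratios of the QR finder pattern (substitute the ratios of your own mark) and a fixed brightness threshold:

```java
// Scan-line detection of a QR-style finder pattern on the raw Y plane of a preview frame.
import java.util.ArrayList;
import java.util.List;

public final class FinderPatternScanner {

    private static final int BLACK_THRESHOLD = 128;   // assumed; derive from a brightness histogram in practice

    private FinderPatternScanner() {}

    /** Checks whether five consecutive run lengths are roughly 1:1:3:1:1 (within ~half a module). */
    private static boolean matchesRatio(int[] r) {
        int total = r[0] + r[1] + r[2] + r[3] + r[4];
        if (total < 7) return false;                   // too small to measure reliably
        float module = total / 7f;                     // expected width of a single "1" unit
        float tol = module / 2f;
        return Math.abs(r[0] - module) < tol
                && Math.abs(r[1] - module) < tol
                && Math.abs(r[2] - 3 * module) < 3 * tol
                && Math.abs(r[3] - module) < tol
                && Math.abs(r[4] - module) < tol;
    }

    /** Scans one row of the Y (luminance) plane and returns the center x of every ratio match. */
    public static List<Integer> scanRow(byte[] yPlane, int width, int y) {
        List<Integer> centers = new ArrayList<>();

        // Collect the lengths of alternating dark/light runs across the row.
        List<Integer> runs = new ArrayList<>();
        boolean firstIsDark = (yPlane[y * width] & 0xFF) < BLACK_THRESHOLD;
        boolean current = firstIsDark;
        int length = 0;
        for (int x = 0; x < width; x++) {
            boolean dark = (yPlane[y * width + x] & 0xFF) < BLACK_THRESHOLD;
            if (dark == current) {
                length++;
            } else {
                runs.add(length);
                current = dark;
                length = 1;
            }
        }
        runs.add(length);

        // Slide a five-run window over the row; the pattern must start on a dark run.
        int x = 0;
        for (int i = 0; i + 4 < runs.size(); i++) {
            boolean startsDark = ((i % 2 == 0) == firstIsDark);
            int[] window = { runs.get(i), runs.get(i + 1), runs.get(i + 2),
                             runs.get(i + 3), runs.get(i + 4) };
            if (startsDark && matchesRatio(window)) {
                int total = window[0] + window[1] + window[2] + window[3] + window[4];
                centers.add(x + total / 2);            // center x of the candidate mark on this row
            }
            x += runs.get(i);
        }
        return centers;
    }
}
```

Doing the same check down the columns at the candidate x positions confirms the mark in two dimensions and gives you the center coordinates to draw your overlay circles at.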
I'm trying to do a simple AR scene with an NFT image that I've created with genTextData. The result works fairly well in the Unity editor, but once compiled and run on an Android device, the camera resolution is very bad and there's no focus at all.
My marker is rather small (a 3 cm picture), and the camera is so blurred that the AR cannot identify the marker from far away. I have to put the phone right in front of it (still very blurred), and it will show my object, but with a lot of flickering and jittering.
I tried playing with the filter fields (sample rate/cutoff, etc.); it helped a little with the flickering of the object, but it would never display it from far away. I always have to put my phone right in front of it. The result I want is to detect the small marker (sharp resolution and/or good focus) from a fair distance away, like the distance from your computer screen to your eyes.
The problem could be camera resolution and focus, or it could be something else, but I'm pretty sure that the AR cannot identify the marker points because of the blurriness.
Any ideas or solutions for this problem?
You can have a look here:
http://augmentmy.world/augmented-reality-unity-games-artoolkit-video-resolution-autofocus
I compiled the Java part of the Unity plugin and set it to use the highest resolution your phone supports. The auto focus mode is also activated.
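The gist of that change, on the old android.hardware.Camera API that the plugin's Java layer is built on, looks roughly like this (configureCamera is just an illustrative helper, not the plugin's actual method name):

```java
// Pick the largest supported preview size and enable continuous autofocus.
import android.hardware.Camera;

import java.util.List;

public class CameraConfig {

    @SuppressWarnings("deprecation")
    public static void configureCamera(Camera camera) {
        Camera.Parameters params = camera.getParameters();

        // Use the highest-resolution preview size the device supports.
        Camera.Size best = null;
        for (Camera.Size size : params.getSupportedPreviewSizes()) {
            if (best == null || size.width * size.height > best.width * best.height) {
                best = size;
            }
        }
        if (best != null) {
            params.setPreviewSize(best.width, best.height);
        }

        // Turn on continuous autofocus so small markers stay sharp.
        List<String> focusModes = params.getSupportedFocusModes();
        if (focusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
        }

        camera.setParameters(params);
    }
}
```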
Tell me if that helps.
I want to develop an app where the face is detected using the front camera; however, no image is captured. The front camera should only detect the face and check whether it is within the correct dimensions. These dimensions will then help me determine the distance between the face and the front camera. I also want to check whether the phone is held at a distance of about 20 inches (roughly 1.7 feet) or not, if this is possible. Please help me with it. The app is basically for testing vision, and I want to add the above feature to it.
You can achieve this without Unity or OpenCV or any other library. Please refer to this and this links. They detect faces with the help of the android.hardware.Camera.Face class. You will also need to implement the android.hardware.Camera.FaceDetectionListener listener to receive the face detection events. On detection, you will get the Face[] array that gives you all the information about the faces that were detected. Hope this is what you wanted.
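A sketch of that listener approach, plus a rough check for the 20-inch requirement. The face rect comes back in the camera's -1000..1000 coordinate space, so the simplest distance estimate is to calibrate once: hold a face at the target distance, record the rect width reported for it, and compare against that. The reference value below (and the class itself) is only a placeholder:

```java
// Face detection with the legacy camera API and a calibration-based distance estimate.
import android.hardware.Camera;

public class FaceDistanceChecker implements Camera.FaceDetectionListener {

    private static final int REFERENCE_WIDTH_AT_20_INCHES = 350;   // placeholder: calibrate this value

    public void start(Camera frontCamera) {
        frontCamera.setFaceDetectionListener(this);
        frontCamera.startFaceDetection();               // call after startPreview()
    }

    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        if (faces.length == 0) {
            return;                                      // no face in view
        }
        int faceWidth = faces[0].rect.width();           // width in the -1000..1000 driver space

        // Apparent face width is roughly inversely proportional to distance from the lens.
        float estimatedInches = 20f * REFERENCE_WIDTH_AT_20_INCHES / faceWidth;

        if (Math.abs(estimatedInches - 20f) < 2f) {
            // Face is held at roughly the required 20-inch distance.
        } else if (estimatedInches < 20f) {
            // Too close: ask the user to move the phone away.
        } else {
            // Too far: ask the user to bring the phone closer.
        }
    }
}
```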