I want to develop an app where the face is detected using the front camera, but no image is captured. The front camera should only detect the face and check whether it is within the correct dimensions. These dimensions will then help me work out the distance between the face and the front camera. I also want to check whether the phone is held at a distance of about 20 inches or not. If this is possible, please help me with it. The app is basically for testing vision, and I want to add the above feature to it.
You can achieve this without Unity or OpenCV or any other library. Please refer to this and this. They detect faces with the help of the android.hardware.Camera.Face class. You will also need to implement the android.hardware.Camera.FaceDetectionListener listener to catch the face-detection event. When a face is detected, you get a Face[] array that gives you all the information about the detected face. Hope this is what you wanted.
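As a rough illustration, here is a minimal Java sketch of wiring up that listener with the (now deprecated) android.hardware.Camera API. The class name, the threshold constant, and the "wider face rectangle means closer face" heuristic for the 20-inch check are assumptions, not part of the platform API, and would need calibration on a real device:

```java
import android.hardware.Camera;

// Minimal sketch: detect faces with the (deprecated) android.hardware.Camera API.
// The distance heuristic is an assumption: the wider the reported face rectangle,
// the closer the face is to the front camera. Calibrate the threshold per device.
public class FaceDistanceHelper implements Camera.FaceDetectionListener {

    // Face rectangles are reported in a fixed -1000..1000 coordinate space.
    private static final int ASSUMED_WIDTH_AT_20_INCHES = 400; // illustrative value

    public void attachTo(Camera frontCamera) {
        frontCamera.setFaceDetectionListener(this);
        // Must be called after startPreview(), and only if the camera supports it.
        if (frontCamera.getParameters().getMaxNumDetectedFaces() > 0) {
            frontCamera.startFaceDetection();
        }
    }

    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        if (faces == null || faces.length == 0) {
            return; // no face in view
        }
        // Use the first detected face; rect is in the -1000..1000 camera space.
        int faceWidth = faces[0].rect.width();
        boolean roughlyTwentyInchesAway = faceWidth <= ASSUMED_WIDTH_AT_20_INCHES;
        // Report the result to the vision-test UI instead of capturing an image.
    }
}
```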
I am working on a face recognition app where a picture is taken and sent to a server for recognition.
I have to add a validation that the user captures a picture of a real person and not a picture of another picture. I have tried an eye-blink check, where the camera waits for an eye blink and captures as soon as the eyes blink, but that is not working out because a blink is falsely detected if the phone is shaken during capture.
I would like to ask for help here: is there any way to detect whether the user is capturing a picture of another picture? Any ideas would help.
I am using React Native to build both the Android and iOS apps.
Thanks in advance.
Thanks for the support.
I resolved it with the eye-blink trick after all. Here is the little algorithm I used:
Open the camera and tap the capture button:
The camera detects whether any face is in the view and waits for an eye blink.
If the eye-blink probability is above 90% for both eyes, wait 200 milliseconds, then detect the face again with an eye-open probability above 90% to verify that the face is still there, and capture the picture at the end.
That's a cheap trick, but it has worked out so far.
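For reference, a rough Java sketch of that flow using ML Kit's face classification (one possible library, not necessarily what the React Native app uses). The 0.9 thresholds and the 200 ms delay come from the steps above; captureFrame() and capturePicture() are hypothetical camera helpers:

```java
import android.os.Handler;
import android.os.Looper;

import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;

// Illustration only: thresholds and delay follow the steps above;
// captureFrame() and capturePicture() are hypothetical camera helpers.
public class BlinkLivenessCheck {

    private final FaceDetector detector = FaceDetection.getClient(
            new FaceDetectorOptions.Builder()
                    .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                    .build());

    public void waitForBlinkThenCapture() {
        InputImage frame = captureFrame(); // hypothetical: grab the current preview frame
        detector.process(frame).addOnSuccessListener(faces -> {
            if (!faces.isEmpty() && isBlinking(faces.get(0))) {
                // Eyes are closed now; re-check after 200 ms that a face with
                // open eyes is still there before actually capturing.
                new Handler(Looper.getMainLooper())
                        .postDelayed(this::verifyEyesOpenAndCapture, 200);
            } else {
                waitForBlinkThenCapture(); // keep polling frames
            }
        });
    }

    private void verifyEyesOpenAndCapture() {
        InputImage frame = captureFrame();
        detector.process(frame).addOnSuccessListener(faces -> {
            if (!faces.isEmpty()
                    && faces.get(0).getLeftEyeOpenProbability() != null
                    && faces.get(0).getLeftEyeOpenProbability() > 0.9f
                    && faces.get(0).getRightEyeOpenProbability() != null
                    && faces.get(0).getRightEyeOpenProbability() > 0.9f) {
                capturePicture(); // hypothetical: take the actual photo
            }
        });
    }

    private boolean isBlinking(Face face) {
        Float left = face.getLeftEyeOpenProbability();
        Float right = face.getRightEyeOpenProbability();
        // Blink probability above 90% for both eyes means open probability below 10%.
        return left != null && right != null && left < 0.1f && right < 0.1f;
    }

    private InputImage captureFrame() { /* hypothetical camera integration */ return null; }
    private void capturePicture()     { /* hypothetical capture call */ }
}
```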
Regards
On some iPhones (iOS 11.1 upwards), there's a so-called TrueDepth camera that's used for Face ID. With it (or the back-facing dual camera system) you can capture images along with depth maps. You could exploit that feature to see whether the face is flat (captured from an image) or has normal facial contours. See here...
One would have to come up with a 3D face model to fool that.
It's limited to only a few iPhone models, though, and I don't know about Android.
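To make the flat-versus-contoured idea concrete, here is a platform-agnostic sketch in Java. It assumes you already have depth samples over the detected face region, however they were captured; the 2 cm cut-off is an illustrative assumption:

```java
// Sketch of the idea: a printed photo is nearly planar, while a real face has
// several centimetres of relief (nose vs. cheeks). The threshold is an assumption.
public final class DepthLivenessCheck {

    // faceDepthsM: depth values (in metres) sampled inside the face bounding box.
    public static boolean looksLikeRealFace(float[] faceDepthsM) {
        float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
        for (float d : faceDepthsM) {
            min = Math.min(min, d);
            max = Math.max(max, d);
        }
        // A real face typically spans a few centimetres of depth; a photo is close to flat.
        return (max - min) > 0.02f;
    }
}
```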
I am working on an Android application that detects faces in real time. For that I have to build a camera screen that scans faces, but I do not want to provide a capture facility in that camera, and after detection the next option (filters) should appear automatically. So, is this possible?
Yes, of course it is possible, and OpenCV does all of that for you. Follow the given links:
1. Demo:
2. OpenCV sample code
3. OpenCV
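As a minimal sketch of what the OpenCV Java bindings give you, you can run a Haar cascade on each preview frame and never expose a capture button. The cascade file path below is an assumption; you would bundle the XML with the app and resolve its real path at runtime:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.objdetect.CascadeClassifier;

// Minimal sketch: run a Haar cascade on each grayscale preview frame; no capture needed.
// The literal path to haarcascade_frontalface_default.xml is an assumption.
public class FrameFaceScanner {

    private final CascadeClassifier faceCascade =
            new CascadeClassifier("/data/local/tmp/haarcascade_frontalface_default.xml");

    // Call this for every grayscale camera frame (e.g. from CameraBridgeViewBase).
    public boolean hasFace(Mat grayFrame) {
        MatOfRect faces = new MatOfRect();
        faceCascade.detectMultiScale(grayFrame, faces);
        Rect[] detected = faces.toArray();
        // When a face is present, the app can switch straight to the filters screen.
        return detected.length > 0;
    }
}
```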
Thanks,
I have to make an application that can detect the x,y position on the screen where the user is looking. By that I don't mean detecting the user's eye coordinates with face recognition, like with OpenCV; I need the exact position on the screen that the user is watching. I have looked everywhere and seem to find nothing on it. A little help on where I can research this further would be nice. Right now all I can find are OpenCV answers about showing the coordinates of both eyes while doing facial recognition with the camera, but that's not what I need.
Thanks in advance!
Is it possible to measure the distance between the Android phone screen and the user's face?
I want to change the zoom ratio of my Android app's screen based on the distance between the user's face and the phone screen.
Please help me.
Thanks.
Yes, it is possible, as long as your device has a front-facing camera. This was more or less my bachelor thesis.
Here is the code
https://github.com/philiiiiiipp/Android-Screen-to-Face-Distance-Measurement
and a video of the result https://www.youtube.com/watch?v=-6_pGkPKAL4
and the paper A new context - Screen to Face distance 1 1.pdf
I described how to calculate the screen-to-face distance in a post on my blog. The algorithm is based on the distance between the eyes: the further away the face is, the smaller the distance between the eyes appears on the camera image.
https://ivanludvig.github.io/blog/2019/07/20/calculating-screen-to-face-distance-android
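To illustrate the proportionality behind that (a sketch, not the blog post's exact code): with a pinhole camera model, distance times on-image eye spacing is roughly constant, so one calibration measurement is enough. The calibration constants here are assumptions you would measure once per device:

```java
// Sketch of the eye-distance idea: the on-image eye spacing is inversely
// proportional to the face-to-camera distance. Calibrate the constants once.
public final class FaceDistanceEstimator {

    // Measured once: at CALIBRATION_DISTANCE_CM, the eyes were CALIBRATION_EYE_PX apart.
    private static final float CALIBRATION_DISTANCE_CM = 50f;
    private static final float CALIBRATION_EYE_PX = 120f;

    // currentEyePx: pixel distance between the two eye landmarks in the current frame.
    public static float estimateDistanceCm(float currentEyePx) {
        // distance * eyeSpacing is (roughly) constant, so:
        return CALIBRATION_DISTANCE_CM * CALIBRATION_EYE_PX / currentEyePx;
    }
}
```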
I don't believe it's possible.
Not all Android devices are equipped with a proximity sensor.
These sensors are just meant to detect whether or not you have your cheek next to the phone.
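For context, this is roughly all the proximity sensor gives you (a hedged sketch; on many devices the reported value is effectively binary, near or far):

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch: the proximity sensor reports "near" or "far" (often just two values,
// e.g. 0 and the sensor's maximum range), not an actual face distance.
public class ProximityCheck implements SensorEventListener {

    public void register(Context context) {
        SensorManager manager =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor proximity = manager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        if (proximity == null) {
            return; // not all devices have one
        }
        manager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float distanceCm = event.values[0]; // typically 0 (near) or the max range (far)
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```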
EDIT following OP comment:
Technically, you could indeed use some image-recognition algorithm to compute the screen-to-face distance based on the input from the front camera.
But:
Not all Android devices have a front-facing camera.
Implementing this from scratch would be extremely complex if you have no background in image processing.
Is there a way I could show what the rear camera captures on the full screen, such that it creates an illusion of the screen being see-through? It doesn't need to be perfect, just convincing enough; a little lag won't make any difference.
Is it possible to create such an effect using the phone camera? If yes, how can the effect be achieved? (as in what transformations to apply, etc.)
(I already know how to create a simple Camera Preview)
Edit: Now I also know it has been done, http://gizmodo.com/5587749/the-samsung-galaxy-s-goes-see+through, but I still have no clue how to do this properly. I know trial and error is one way; the other is calculating what part of the scene a user would be seeing if the phone wasn't there.
I think there would be some factors involved, like:
viewing distance,
viewing angle,
camera zoom range,
camera focus,
camera quality,
phone orientation,
camera position (where the camera is located on the phone), etc.
So I don't feel this problem has a simple enough solution; if that is not so, please clarify with an answer.
Thanks for the help,
Shobhit,
You can use standard 3D projection math to project a portion of the rear camera image onto the display; you can manage this by assuming everything the camera sees is at a particular depth from the rear camera, and by assuming a particular viewpoint for the observer.
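A small sketch of that cropping geometry under exactly those two assumptions (scene on a plane at a fixed depth behind the phone, viewer at a fixed distance in front of it). All the constants here are illustrative assumptions; in practice you would read the field of view from the camera parameters:

```java
// Sketch of the "see-through" crop: assume the scene lies on a plane SCENE_DEPTH_M
// behind the phone and the viewer's eye sits VIEWER_DISTANCE_M in front of the
// screen, centred on it. All constants are illustrative assumptions.
public final class SeeThroughCrop {

    static final float SCREEN_WIDTH_M    = 0.065f; // physical screen width
    static final float VIEWER_DISTANCE_M = 0.40f;  // eye-to-screen distance
    static final float SCENE_DEPTH_M     = 2.0f;   // assumed depth of the scene behind the phone
    static final float CAMERA_HFOV_DEG   = 60f;    // read the real value from camera parameters

    // Which horizontal fraction of the camera frame should fill the screen so the
    // visible slice matches what the viewer would see if the phone were absent.
    public static float horizontalCropFraction() {
        // Similar triangles: width of the scene plane visible through the screen "window".
        float visibleSceneWidth =
                SCREEN_WIDTH_M * (VIEWER_DISTANCE_M + SCENE_DEPTH_M) / VIEWER_DISTANCE_M;
        // Width of the scene plane covered by the camera at that depth.
        float cameraSceneWidth =
                2f * SCENE_DEPTH_M * (float) Math.tan(Math.toRadians(CAMERA_HFOV_DEG / 2));
        return visibleSceneWidth / cameraSceneWidth;
    }
}
```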
You can improve on this by looking for faces/eyes using the front camera. You can get a rough estimate of the viewing distance from the eye spacing, and assume a viewer position midway between the eyes. Of course, this only works for one viewer at a time (e.g., if your face tracker finds multiple faces, you can select one of them).
Also, you can improve the illusion by calibrating the camera and screen so you can match the color and brightness from one to the other.