I am working on an application that detects objects or faces and measures the distance from the camera to that object or face. I have completed the face detection part; now, is there any way to measure the distance between the detected face and the point where the camera is located?
Please provide any link or source code. I have searched a lot, but all in vain.
Essentially, by tracking the distance between the user's eyes, and how this changes as the face moves closer to or further from the camera, a fairly accurate idea of the distance from the camera can be obtained.
Android has a built-in face detector class that will handle determining where the face is, and will even calculate the eye separation for you.
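For illustration, here is a minimal sketch of that approach using android.media.FaceDetector (the calibration constants and class name here are assumptions for the example, not part of the original answer):

```java
import android.graphics.Bitmap;
import android.media.FaceDetector;

public class EyeDistanceEstimator {
    // Hypothetical calibration values: the eye separation in pixels observed
    // at a known reference distance. These must be measured per device/camera.
    private static final float REFERENCE_EYE_DISTANCE_PX = 120f;
    private static final float REFERENCE_DISTANCE_CM = 30f;

    /** Rough distance estimate; the bitmap must be RGB_565 for this API. */
    public static float estimateDistanceCm(Bitmap rgb565Frame) {
        FaceDetector detector = new FaceDetector(
                rgb565Frame.getWidth(), rgb565Frame.getHeight(), 1);
        FaceDetector.Face[] faces = new FaceDetector.Face[1];
        int found = detector.findFaces(rgb565Frame, faces);
        if (found == 0) {
            return -1f; // no face detected
        }
        // Apparent eye separation scales roughly inversely with distance, so:
        // distance = referenceDistance * (referencePixels / observedPixels)
        float eyesDistancePx = faces[0].eyesDistance();
        return REFERENCE_DISTANCE_CM * (REFERENCE_EYE_DISTANCE_PX / eyesDistancePx);
    }
}
```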
A guy did this for his thesis and posted the code on GitHub, along with some nice images outlining what it does, a demo video, and a link to the paper he wrote.
I've got a problem with Google's ML Kit face detector: it returns a face even if the face is half-covered by something, which makes the face recognition model I use think it's a new face. I would like to know a solution to this problem, perhaps a different face detector or another approach using the ML Kit face detector.
Thanks in advance.
ML Kit works on data: your results will be more accurate the more data you provide to your model. If it detects a half-covered face, that can also be beneficial for your trained model. You can train the model by providing many kinds of images: eyes closed, eyes open, left profile, right profile, looking down, looking up, zoomed in, face half covered, and so on. Once your model has enough data, it will recognize you even if you are wearing a face mask or your eyes are closed.
In my opinion, ML Kit is more than enough to implement face detection in your app, and they are also improving it continuously. Happy coding :)
I have this issue: I have to make an application that can detect the x, y position on the screen where the user is looking. By that I don't mean detecting the user's eye coordinates during face recognition, like with OpenCV; I need the exact position on the screen that the user is watching. I have looked everywhere and seem to find nothing on it. A little help on where I can research this further would be nice. Right now, all I can find are OpenCV answers about showing the coordinates of both eyes while doing facial recognition with the camera, but that's not what I need.
Thanks in advance!
I am making an app to calculate facial symmetry by comparing the distances between facial points to the golden ratio. So far, I have tried:
hardware.Camera.Face - gives face bounds and the coordinates of the eye and mouth centres.
media.FaceDetector.Face - only gives face bounds and eye location.
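For reference, this is roughly how those landmarks are read from hardware.Camera.Face (a sketch with the camera setup omitted; the wrapper class is just for illustration, and coordinates come back in the driver's (-1000, 1000) space):

```java
import android.graphics.Point;
import android.hardware.Camera;
import android.util.Log;

public class FaceLandmarkLogger {
    /** Call after camera.startPreview(); logs what Camera.Face exposes. */
    public static void enableFaceLogging(Camera camera) {
        camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
            @Override
            public void onFaceDetection(Camera.Face[] faces, Camera cam) {
                for (Camera.Face face : faces) {
                    Point leftEye = face.leftEye;   // null if landmarks unsupported
                    Point rightEye = face.rightEye;
                    Point mouth = face.mouth;
                    Log.d("Symmetry", "bounds=" + face.rect + " leftEye=" + leftEye
                            + " rightEye=" + rightEye + " mouth=" + mouth);
                }
            }
        });
        camera.startFaceDetection();
    }
}
```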
I need face bounds PLUS eye,nose,mouth and ear bounds.
If anyone has used a library that can detect facial landmarks in an image, please mention its name. Also, in your opinion, how fast and accurate is it?
OpenCV might be a good option; have a look at this example: http://romanhosek.cz/android-eye-detection-and-tracking-with-opencv/
OpenCV is a very powerful library for doing all kinds of face recognition and face detection work! Check it out
You might also try the new Android face detector, which detects several facial landmarks:
https://developers.google.com/vision/detect-faces-tutorial
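For example, a minimal sketch of landmark detection with that API (the classes are from com.google.android.gms.vision; the wrapper class here is just for illustration):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.util.Log;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

public class LandmarkExample {
    public static void logLandmarks(Context context, Bitmap photo) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(false)                  // single still image
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .build();

        Frame frame = new Frame.Builder().setBitmap(photo).build();
        SparseArray<Face> faces = detector.detect(frame);

        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            // Landmarks include the eyes, nose base, mouth corners and cheeks.
            for (Landmark landmark : face.getLandmarks()) {
                PointF p = landmark.getPosition();
                Log.d("Landmarks", "type=" + landmark.getType()
                        + " at (" + p.x + ", " + p.y + ")");
            }
        }
        detector.release(); // free native resources when done
    }
}
```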
Is it possible to measure the distance between an Android phone's screen and the user's face?
I want to change the zoom ratio of my Android app's screen based on the distance between the user's face and the phone screen.
Please help me.
Thanks.
Yes, it is possible, as long as your device has a front-facing camera. This was more or less my bachelor's thesis.
Here is the code
https://github.com/philiiiiiipp/Android-Screen-to-Face-Distance-Measurement
and a video of the result: https://www.youtube.com/watch?v=-6_pGkPKAL4
and the paper: A new context - Screen to Face distance 1 1.pdf
I described how to calculate the screen-to-face distance in a post on my blog. The algorithm is based on the distance between the eyes: the further the face is from the camera, the smaller the distance between the eyes appears in the image.
https://ivanludvig.github.io/blog/2019/07/20/calculating-screen-to-face-distance-android
I don't believe it's possible.
Not all Android devices are equipped with a proximity sensor
These sensors are just meant to detect whether or not you have your cheek next to the phone.
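To illustrate that point, here is a small sketch of what the proximity sensor actually reports (most devices only ever return 0 or the sensor's maximum range, in centimetres):

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.util.Log;

public class ProximityDemo implements SensorEventListener {
    public void register(Context context) {
        SensorManager sm =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor proximity = sm.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        sm.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Typically just 0 (near) or the maximum range (far): no usable
        // screen-to-face distance can be derived from this.
        Log.d("Proximity", "distance=" + event.values[0] + " cm");
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```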
EDIT following OP comment:
Technically, you could indeed use an image recognition algorithm to compute the screen-to-face distance based on the input from the front camera.
But:
Not all Android devices have a front-facing camera.
Implementing this from scratch would be extremely complex if you have no background in image processing.
I'm planning on doing an AR application that will just use GPS technology to get a location, and then use the compass/gyroscope for tracking 6DOF viewfinder movements. It's a personal project for my own development, but I'm looking for starting places, as it's a new field to me, so this might be a slightly open-ended question with more than one right answer. By using GPS, I am hoping to simplify the development of my first AR application at the cost of its accuracy.
The idea for this AR app is not to use any vision processing (relying on GPS only), and to display 3D models on the screen at roughly correct distances (up to a point) from where the user is standing. It sounds simple, given that games work in a 3D world with a viewpoint and the locations of faces/objects/models to draw. My target platforms will be mobile devices and tablets, potentially running one of these OSs: WM6, Windows Phone 7, or Android.
Most of the applications I have seen use markers with AR-ToolKit or ARTag, and those that use GPS tend to just display a point of interest or a flat box on the screen to state that you're at the desired location.
I've done some very limited work with 3D graphics programming, but are there any libraries that you think might get me started, rather than building everything from the bottom up? Ignoring the low accuracy of GPS (with regard to AR), I will have a defined point in 3D space (constantly moving due to the GPS fix), and a defined point at which to render a 3D model in the same 3D space.
I've seen some examples of similar applications, but nothing I can expand on, so can anyone suggest places to start or libraries that might be suitable for my project?
Sensor-based AR is doable from scratch without using any libraries. All you're doing is estimating your camera's pose in 6DOF from the sensors and GPS, and then performing a perspective projection that maps a known 3D point onto your camera's focal plane. You define your camera matrix from the sensor and GPS readings and perform the projection on each new camera frame. If you get this up and running, that's plenty sufficient to begin projecting billboards, images, etc. into the camera frame.
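As a concrete illustration, here is a minimal, self-contained sketch of that projection (all matrix contents are assumed inputs: K holds your intrinsics, R comes from the orientation sensors, and t from your GPS-derived position):

```java
public class PinholeProjection {
    /**
     * Projects a world point (x, y, z) to pixel coordinates (u, v) using
     * p = K * (R * Xw + t), i.e. the pin-hole camera model.
     * K is 3x3 as [fx 0 cx; 0 fy cy; 0 0 1]; R is 3x3; t has length 3.
     */
    public static float[] project(float[][] K, float[][] R, float[] t,
                                  float x, float y, float z) {
        // Transform the world point into camera coordinates: Xc = R * Xw + t
        float cx = R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0];
        float cy = R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1];
        float cz = R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2];
        if (cz <= 0f) {
            return null; // point is behind the camera; nothing to draw
        }
        // Apply the intrinsics and divide by depth to get pixel coordinates.
        float u = K[0][0] * cx + K[0][2] * cz;
        float v = K[1][1] * cy + K[1][2] * cz;
        return new float[] { u / cz, v / cz };
    }
}
```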
Once you have a pin-hole camera model working, you can try to compensate for your camera's wide-angle lens, for lens distortion, and so on.
For calculating relative distances, there's the haversine formula.
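A quick sketch of that formula, for reference (it uses a mean Earth radius, so the result is approximate):

```java
public class Haversine {
    private static final double EARTH_RADIUS_M = 6371000.0; // mean radius

    /** Great-circle distance in metres between two lat/lon points in degrees. */
    public static double distanceMeters(double lat1, double lon1,
                                        double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
        return EARTH_RADIUS_M * c;
    }
}
```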
Moving to 3D models will probably be the most difficult part. It can be tricky to get camera frames into OpenGL on mobile devices. I don't have any experience with Windows Mobile or Android, so I can't help there.
In any case, have fun: it's really nice to see your virtual elements in the world for the first time!