I've got a problem with Google's ML Kit face detector: it returns a face even when the face is half-covered by something, and this makes the face recognition model I use treat it as a new face. I'd like to know a solution for this, maybe a different face detector or another approach using the ML Kit face detector.
Thanks in advance.
ML Kit works on data: the more data you provide to your model, the more accurate your results will be. If it picks up half-covered faces, those detections can also be useful for your trained model. You can train the model by providing many images, e.g. eyes closed, eyes open, left profile, right profile, looking down, looking up, zoomed in, or with half the face covered. Once your model has enough data, it will recognize you even if you are wearing a face mask or your eyes are closed.
In my opinion, ML Kit is more than enough to implement face detection in your app, and it is being improved continuously. Happy coding :)
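If you also want to keep half-covered detections away from your recognition model (rather than only retraining it), one rough heuristic is to enable ML Kit's landmark mode and skip faces that are missing key landmarks. This is just a sketch of mine, not something the answer above prescribes: whether missing landmarks actually correlate with occlusion depends on your data, and runRecognition is a hypothetical placeholder for your own pipeline.

    import android.graphics.Bitmap
    import com.google.mlkit.vision.common.InputImage
    import com.google.mlkit.vision.face.FaceDetection
    import com.google.mlkit.vision.face.FaceDetectorOptions
    import com.google.mlkit.vision.face.FaceLandmark

    // Sketch only: enable landmark mode and skip detections that are missing
    // key landmarks, as a rough occlusion filter before recognition.
    val detector = FaceDetection.getClient(
        FaceDetectorOptions.Builder()
            .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
            .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
            .build()
    )

    fun detectAndFilter(bitmap: Bitmap) {
        val image = InputImage.fromBitmap(bitmap, 0)
        detector.process(image)
            .addOnSuccessListener { faces ->
                val required = listOf(
                    FaceLandmark.LEFT_EYE, FaceLandmark.RIGHT_EYE,
                    FaceLandmark.NOSE_BASE, FaceLandmark.MOUTH_BOTTOM
                )
                for (face in faces) {
                    if (required.all { face.getLandmark(it) != null }) {
                        // runRecognition(face, bitmap)  // hypothetical: your own recognition step
                    }
                }
            }
            .addOnFailureListener { e -> e.printStackTrace() }
    }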
Related
I am trying to explore various face points and add filters like MSQRD/Snapchat using the Google Vision API on Android. Can anyone help me with detecting the neck portion below a human face?
Simply stated, you can't. But you can of course make some educated guesses.
You have the face outline, with its size, position, yaw and pitch. The size of the neck is roughly related to the size of the face, and the neck is always below the face. Knowing that, you can draw an outline. This will not tell you whether it is, for instance, covered by a scarf.
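A rough Kotlin sketch of that guess, assuming you have the face bounding box as an android.graphics.Rect (the 0.6/0.5 scale factors are arbitrary guesses of mine and ignore yaw/pitch):

    import android.graphics.Rect
    import android.graphics.RectF

    // Rough sketch of that educated guess: place a neck box directly under the
    // face bounds, sized relative to the face. The 0.6/0.5 factors are arbitrary
    // and ignore yaw/pitch; tune them for your use case.
    fun estimateNeckRect(faceBounds: Rect): RectF {
        val neckWidth = faceBounds.width() * 0.6f
        val neckHeight = faceBounds.height() * 0.5f
        val centerX = faceBounds.exactCenterX()
        return RectF(
            centerX - neckWidth / 2f,
            faceBounds.bottom.toFloat(),
            centerX + neckWidth / 2f,
            faceBounds.bottom + neckHeight
        )
    }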
I have a project to implement Snapchat-Lenses-style face recognition and distortion. So far I've tried Android's default face detection API, the play-services-vision face detection API, and OpenCV/JavaCV, but they seem to only detect the location of faces and features, not describe the exact shape of the faces.
Is there anything I missed in these libraries that would let me do full face recognition that describes the exact shape of the faces?
P.S. Should I ask this in Superuser instead?
That is a problem whose solution is still a work in progress in digital image processing (very good solutions are available, but no definitive one yet).
I can think of one simple approach that might work. Once you have the features of the face (such as the eyes, lips, etc.), you can sample the color of the skin surrounding them (using a predefined window, sized relative to the eye/lip regions). Then you feed the sampled color into a border/edge detection algorithm.
All of this can be done easily with OpenCV, but I cannot vouch for the method's accuracy, since I have not tried it myself.
Also, maybe Canny's method could be useful for your application.
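As a hedged sketch only (I have not verified it either), the sampling-plus-Canny idea above could look roughly like this with OpenCV's Java bindings from Kotlin; the colour tolerance and Canny thresholds are arbitrary starting values:

    import org.opencv.core.Core
    import org.opencv.core.Mat
    import org.opencv.core.Rect
    import org.opencv.core.Scalar
    import org.opencv.imgproc.Imgproc

    // Sketch of the idea above, assuming a BGR Mat and a window (Rect) of skin
    // next to a detected feature such as an eye or the lips.
    fun skinEdges(bgr: Mat, skinWindow: Rect): Mat {
        val mean = Core.mean(bgr.submat(skinWindow))   // average B, G, R of the sampled skin patch
        val tol = 40.0
        val lower = Scalar(mean.`val`[0] - tol, mean.`val`[1] - tol, mean.`val`[2] - tol)
        val upper = Scalar(mean.`val`[0] + tol, mean.`val`[1] + tol, mean.`val`[2] + tol)

        val skinMask = Mat()
        Core.inRange(bgr, lower, upper, skinMask)      // keep pixels close to the sampled skin colour

        val edges = Mat()
        Imgproc.Canny(skinMask, edges, 50.0, 150.0)    // edges of the skin region approximate the face border
        return edges
    }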
But I strongly suggest you search for recent papers on this subject, as methods for face recognition keep getting better, although they can lead to very complex and computationally expensive alternatives.
Take a look at the link below and search for any paper that presents a method for doing what you want:
http://ieeexplore.ieee.org/Xplore/home.jsp
Use keywords such as "face border recognition", "border recognition", "face edge detection", or similar. It's highly probable someone has already done this and then you won't have to reinvent the wheel!
I am making an app to calculate facial symmetry by comparing the distance between facial points to the golden ratio. So far, I have tried:
hardware.Camera.Face - gives the face bounds and the coordinates of the eye and mouth centres.
media.FaceDetector.Face - only gives the face bounds and eye locations.
I need face bounds PLUS eye, nose, mouth and ear bounds.
If anyone has used a library which can detect face points in an image, please mention its name. Also, in your opinion, how fast and accurate is it?
OpenCV might be a good option; have a look at this example: http://romanhosek.cz/android-eye-detection-and-tracking-with-opencv/
OpenCV is a very powerful library for doing all kinds of face recognition and face detection work! Check it out
You might also try the new Android face detector, which detects several facial landmarks:
https://developers.google.com/vision/detect-faces-tutorial
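For the landmark points specifically, a minimal Kotlin sketch with the play-services-vision FaceDetector (the API the tutorial above covers) might look like this; the symmetry/ratio math itself is left to you:

    import android.content.Context
    import android.graphics.Bitmap
    import com.google.android.gms.vision.Frame
    import com.google.android.gms.vision.face.FaceDetector

    // Sketch: detect faces with all landmarks enabled and read out the landmark
    // positions (eyes, nose base, mouth, cheeks, ears where visible).
    fun printLandmarks(context: Context, bitmap: Bitmap) {
        val detector = FaceDetector.Builder(context)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setMode(FaceDetector.ACCURATE_MODE)
            .build()
        val faces = detector.detect(Frame.Builder().setBitmap(bitmap).build())
        for (i in 0 until faces.size()) {
            val face = faces.valueAt(i)
            for (landmark in face.landmarks) {
                // landmark.type is e.g. Landmark.LEFT_EYE, Landmark.NOSE_BASE, Landmark.BOTTOM_MOUTH;
                // landmark.position is a PointF you can feed into your ratio calculations.
                println("${landmark.type}: ${landmark.position}")
            }
        }
        detector.release()
    }

Note that this gives landmark points rather than full bounds for each feature, which may still be enough for distance-ratio measurements.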
I am working on an application which detects objects or faces and measures the distance from the camera to that object or face. I have completed the face detection part; now, is there any way to measure the distance between the detected face and the point where the camera is located?
Please provide any link or source code. I have searched a lot, but all in vain.
Essentially, by tracking the distance between the user's eyes, and how this changes as the face moves closer to or further from the camera, you can get a fairly accurate idea of the distance from the camera.
Android has a built in face detector class that will handle determining where the face is, and even calculate eye separation for you.
A guy did this for his thesis, and posted the code on github along with some nice images outlining what it does, a demo video and a link to the paper he wrote.
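The underlying arithmetic is just the pinhole-camera relation; here is a minimal sketch, assuming an average interpupillary distance of roughly 6.3 cm and a one-off calibration shot at a known distance (both of those numbers/steps are my assumptions, not part of the linked project):

    // Pinhole-camera relation: distance = focalPx * realSize / sizeInPixels.
    // focalPx comes from a one-time calibration shot taken at a known distance.
    fun calibrateFocalPx(eyeSepPxAtKnownDistance: Float, knownDistanceCm: Float,
                         realEyeSepCm: Float = 6.3f): Float =
        eyeSepPxAtKnownDistance * knownDistanceCm / realEyeSepCm

    fun estimateDistanceCm(eyeSepPx: Float, focalPx: Float,
                           realEyeSepCm: Float = 6.3f): Float =
        focalPx * realEyeSepCm / eyeSepPx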
I am making an Android application based on OpenCV. I have implemented a profile face detector using the lbpcascade_profileface.xml file. The detector works correctly, but when I put on glasses the detection gets worse.
Is this a limitation of the cascade file, or not? Does anyone know a solution?
There's also an XML cascade for glasses. You can run it as well to detect glasses, but it won't give you the face bounding box, just the glasses bounding box.
You can of course train a new cascade with images of people wearing glasses, or, if you're willing to consider face detectors outside of OpenCV, you can try one of these APIs for face detection:
http://blog.mashape.com/post/53379410412/list-of-40-face-detection-recognition-apis
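If you go the extra-cascade route, a rough Kotlin sketch could look like the following; the glasses cascade is presumably haarcascade_eye_tree_eyeglasses.xml, which ships with OpenCV, and both paths here are placeholders:

    import org.opencv.core.Mat
    import org.opencv.core.MatOfRect
    import org.opencv.core.Rect
    import org.opencv.objdetect.CascadeClassifier

    // Run the profile-face cascade and a glasses-related cascade on the same
    // grayscale frame. As noted above, the second cascade only gives you the
    // glasses/eye boxes, so associating them with a face region is up to you.
    fun detectFacesAndGlasses(
        gray: Mat,
        profileCascadePath: String,   // e.g. path to lbpcascade_profileface.xml
        glassesCascadePath: String    // e.g. path to haarcascade_eye_tree_eyeglasses.xml
    ): Pair<Array<Rect>, Array<Rect>> {
        val faceCascade = CascadeClassifier(profileCascadePath)
        val glassesCascade = CascadeClassifier(glassesCascadePath)

        val faces = MatOfRect()
        faceCascade.detectMultiScale(gray, faces)

        val glasses = MatOfRect()
        glassesCascade.detectMultiScale(gray, glasses)

        return faces.toArray() to glasses.toArray()
    }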