Looking for opinions on whether OpenCV could be, or has been, used to detect eye dilation on Android or iOS. I haven't found much other than eye tracking and blink detection with the app EyePhone, which uses OpenCV. Under perfect conditions I'm sure it's possible; I'm more curious to see a proof of concept showing that it can be and has been done.
Thank you for your opinion.
rd42
Try template matching; it gives me the best results for now. You can see my sample app:
Example app, or use a Haar detector as shown at the start of the video, but the Haar detector is slow and drops the FPS.
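For reference, here is a minimal sketch of the template-matching step using the OpenCV Java API. It assumes you have already cropped an eye template from a reference frame; the grayscale Mats frame and eyeTemplate are my assumptions, not code from the sample app above.

    // Minimal template-matching sketch (OpenCV Java API). The grayscale
    // Mats `frame` and `eyeTemplate` are assumed to be prepared elsewhere.
    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.Point;
    import org.opencv.imgproc.Imgproc;

    public class EyeMatcher {
        // Returns the top-left corner of the best match of eyeTemplate in frame.
        public static Point matchEye(Mat frame, Mat eyeTemplate) {
            int resultCols = frame.cols() - eyeTemplate.cols() + 1;
            int resultRows = frame.rows() - eyeTemplate.rows() + 1;
            Mat result = new Mat(resultRows, resultCols, CvType.CV_32FC1);

            // TM_CCOEFF_NORMED: higher score = better match.
            Imgproc.matchTemplate(frame, eyeTemplate, result, Imgproc.TM_CCOEFF_NORMED);
            Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
            return mmr.maxLoc;
        }
    }

Template matching is fast per frame, which is why it holds the FPS better than running a full Haar detector on every frame.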
I've got a problem with Google's ML Kit face detector: it returns a face even if the face is half-covered by something, which makes the face recognition model I use think it's a new face. I would like to know a solution for this problem, maybe a different face detector or another approach using the ML Kit face detector.
Thanks in advance.
ML Kit works on data: your results will be more accurate the more data you provide to your model. If it picks up half-covered faces, those images can also be beneficial for your trained model. You can train the model by providing many image variations, e.g. eyes closed, eyes open, left profile, right profile, looking down, looking up, zoomed in, or half of the face covered. Once your model has enough data, it will recognize you even if you are wearing a face mask or your eyes are closed.
In my opinion, ML Kit is more than enough to implement face detection in your app, and they are also improving it continuously. Happy coding :)
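One possible workaround, sketched below: enable landmark mode on the ML Kit detector and only forward faces whose eye landmarks are both present to the recognition model. The "both eyes visible" filtering rule is my assumption for spotting half-covered faces, not a built-in ML Kit feature.

    // Sketch: skip partially covered faces by requiring both eye landmarks.
    // The filtering rule is an assumption; tune it on your own data.
    import com.google.mlkit.vision.common.InputImage;
    import com.google.mlkit.vision.face.Face;
    import com.google.mlkit.vision.face.FaceDetection;
    import com.google.mlkit.vision.face.FaceDetector;
    import com.google.mlkit.vision.face.FaceDetectorOptions;
    import com.google.mlkit.vision.face.FaceLandmark;

    public class FilteredFaceDetector {
        private final FaceDetector detector;

        public FilteredFaceDetector() {
            FaceDetectorOptions options = new FaceDetectorOptions.Builder()
                    .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
                    .build();
            detector = FaceDetection.getClient(options);
        }

        public void process(InputImage image) {
            detector.process(image).addOnSuccessListener(faces -> {
                for (Face face : faces) {
                    boolean bothEyesVisible =
                            face.getLandmark(FaceLandmark.LEFT_EYE) != null
                            && face.getLandmark(FaceLandmark.RIGHT_EYE) != null;
                    if (bothEyesVisible) {
                        // Forward only fully visible faces to the recognition model.
                    }
                }
            });
        }
    }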
I'm baffled that on Android we have to import a 30 MB OpenCV library to detect rectangles in images / video frames. On iOS that is pretty easy using CIDetector.
Has anyone found a solution that isn't OpenCV-based? Maybe using RenderScript? I've found this one (explained here), which implements some kind of edge detection, but I'm not sure whether it is the right basis to extend. Is there any vision/graphics expert out there who could evaluate it and maybe point me in the right direction?
I am working on an app that detects the user's eye blinks. I have been searching the web for two days but still don't have a clear vision of how this can be done.
As far as I know, the system supports face detection, i.e. detecting whether there is a face in the picture and locating it.
But that works only with still images and detects only faces, which is not what I need. I need to open a camera activity, directly detect the user's face, locate the eyes and other facial parts, and wait until the user blinks, like when you long-press the screen on Snapchat.
I have seen a lot about OpenCV, but I'm still not sure what it is, how to use it, or whether it serves my goals.
Note: Snapchat has not released an API for the technology it uses, and it doesn't let anyone talk to the engineers behind it.
I know that OpenCV has the ability to allow image processing on the device's camera feed (as opposed to only being able to process still images).
Here is an introductory tutorial on eye detection using OpenCV:
http://romanhosek.cz/android-eye-detection-and-tracking-with-opencv/
If you can't find eye-blink detection tutorials in a Google search, I think you'll have to write the eye-blink detection code on your own, but OpenCV will be a helpful tool in doing so. There are lots of beginner OpenCV tutorials to help you get started.
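As a starting point, here is a minimal blink-detection sketch built on OpenCV's bundled Haar cascades. The cascade file paths and the frame-count thresholds are assumptions you would tune yourself; call onFrame once per grayscale camera frame.

    // Sketch: infer a blink when the eyes vanish for a few frames and
    // then reappear. Cascade paths and thresholds are assumptions.
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.objdetect.CascadeClassifier;

    public class BlinkDetector {
        private final CascadeClassifier faceCascade;
        private final CascadeClassifier eyeCascade;
        private int framesWithoutEyes = 0;

        public BlinkDetector(String faceCascadePath, String eyeCascadePath) {
            faceCascade = new CascadeClassifier(faceCascadePath);
            eyeCascade = new CascadeClassifier(eyeCascadePath);
        }

        // Call once per grayscale camera frame; returns true when a blink is inferred.
        public boolean onFrame(Mat gray) {
            MatOfRect faces = new MatOfRect();
            faceCascade.detectMultiScale(gray, faces);
            for (Rect face : faces.toArray()) {
                MatOfRect eyes = new MatOfRect();
                eyeCascade.detectMultiScale(gray.submat(face), eyes);
                if (eyes.empty()) {
                    framesWithoutEyes++;
                } else {
                    // Eyes reappeared after a short absence -> count it as a blink.
                    boolean blinked = framesWithoutEyes >= 2 && framesWithoutEyes <= 10;
                    framesWithoutEyes = 0;
                    return blinked;
                }
            }
            return false;
        }
    }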
I am planning to build an app that involves a triangle colliding with a circle, and that determines how fast the user touches the circle while it is moving.
I am already experienced with Android development (though limited in drawing; I know the basics). Should I go with SurfaceView drawing, or should I start learning libGDX for this purpose? And what would be the rationale (so this is not a vague/opinionated question)?
Thank you so much
I suggest you go with libGDX. It is very well designed, well documented, and has a friendly community that will help you get started quickly.
Furthermore, it makes rendering graphics elements a lot easier than with "pure" Android/OpenGL.
Another great feature you might be interested in is that libGDX has a Box2D extension, which might help you with the triangle/circle collisions if you are planning something more advanced here; a simpler overlap check is sketched below.
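If Box2D is more than you need, a plain geometric check with libGDX's Intersector is enough for triangle/circle overlap. This sketch assumes a flat 2D setup, with the triangle given by vertices a, b, c:

    // Sketch: triangle-vs-circle overlap using libGDX's Intersector.
    import com.badlogic.gdx.math.Circle;
    import com.badlogic.gdx.math.Intersector;
    import com.badlogic.gdx.math.Vector2;

    public class TriangleCircle {
        public static boolean overlaps(Vector2 a, Vector2 b, Vector2 c, Circle circle) {
            Vector2 center = new Vector2(circle.x, circle.y);
            float r2 = circle.radius * circle.radius;
            // Overlap if the circle crosses any edge, or its center lies inside the triangle.
            return Intersector.intersectSegmentCircle(a, b, center, r2)
                    || Intersector.intersectSegmentCircle(b, c, center, r2)
                    || Intersector.intersectSegmentCircle(c, a, center, r2)
                    || Intersector.isPointInTriangle(center, a, b, c);
        }
    }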
And last but not least, libGDX would be my preferred way of developing a game, because I can develop on the desktop and then just deploy and test on my mobile device.
I am making an Android application which is based on OpenCV. I have implemented a profile face detector using the lbpcascade_profileface.xml file. The detector runs correctly, but when I put on glasses the detection gets worse.
Is this a limitation of the cascade file or not? Does someone know a solution?
There's also an XML cascade for glasses. You can run it as well to detect glasses, but it won't give you the face bounding box, just the glasses bounding box.
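For illustration, here is a minimal sketch of running a second cascade next to the profile one. OpenCV ships haarcascade_eye_tree_eyeglasses.xml (eyes behind glasses); that this is the XML meant above is my assumption, and the file paths are placeholders:

    // Sketch: run a glasses/eyes-with-glasses cascade alongside the profile
    // cascade. File paths are placeholders; load them however your app does.
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.objdetect.CascadeClassifier;

    public class ProfileWithGlasses {
        private final CascadeClassifier profileCascade =
                new CascadeClassifier("lbpcascade_profileface.xml");
        private final CascadeClassifier glassesCascade =
                new CascadeClassifier("haarcascade_eye_tree_eyeglasses.xml");

        public void detect(Mat gray) {
            MatOfRect faces = new MatOfRect();
            profileCascade.detectMultiScale(gray, faces);

            MatOfRect glassesBoxes = new MatOfRect();
            glassesCascade.detectMultiScale(gray, glassesBoxes);
            // The second cascade only yields glasses/eye boxes, not the face
            // bounding box, so combine the two result sets yourself.
        }
    }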
You can of course train a new cascade with images of people wearing glasses, or, if you're willing to consider face detectors outside of OpenCV, you can try one of these APIs for face detection:
http://blog.mashape.com/post/53379410412/list-of-40-face-detection-recognition-apis