http://www.youtube.com/watch?feature=player_detailpage&v=Boz3yVBz1hM#t=18s
In this video, at around 22 seconds, a man draws a square on the SetupWizard screen of a Samsung Nexus.
Could anyone please help me find the source code that handles this gesture?
The source code for that specific application isn't part of AOSP, but you can recognize arbitrary drawn shapes yourself with the android.gesture package: a GestureOverlayView captures the stroke and a GestureLibrary (recorded with the Gesture Builder sample app) matches it against stored gestures.
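If you just want to recognize a drawn square in your own app, a minimal sketch of that approach could look like the following; the layout id, the res/raw/gestures library, and the gesture name "square" are all assumptions, not anything taken from the SetupWizard source:

```java
import android.app.Activity;
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.GestureOverlayView;
import android.gesture.Prediction;
import android.os.Bundle;
import java.util.ArrayList;

// Sketch only: R.layout.activity_main, R.id.gesture_overlay and R.raw.gestures
// are assumed resources; the "square" gesture is recorded with Gesture Builder.
public class SquareGestureActivity extends Activity
        implements GestureOverlayView.OnGesturePerformedListener {

    private GestureLibrary gestureLibrary;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Load the pre-recorded gesture library from res/raw/gestures.
        gestureLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures);
        gestureLibrary.load();

        GestureOverlayView overlay =
                (GestureOverlayView) findViewById(R.id.gesture_overlay);
        overlay.addOnGesturePerformedListener(this);
    }

    @Override
    public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
        // Compare the drawn stroke against every stored gesture.
        ArrayList<Prediction> predictions = gestureLibrary.recognize(gesture);
        if (!predictions.isEmpty()
                && predictions.get(0).score > 2.0
                && "square".equals(predictions.get(0).name)) {
            // The user drew something close enough to the stored "square" gesture.
        }
    }
}
```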
I am working on a face recognition app where a picture is taken and sent to a server for recognition.
I have to add a validation that the user captures a picture of a real person, not a picture of another picture. I tried an eye-blink feature, where the camera waits for an eye blink and captures as soon as one is detected, but that isn't working out because a blink is falsely detected if the phone is shaken during capture.
I'd like to ask for help here: is there any way to detect whether the user is capturing a picture of another picture? Any ideas would help.
I am using react native to build both Android and iOS apps.
Thanks in advance.
Thanks for the support.
I resolved it with the eye-blink trick after all. Here is the little algorithm I used:
Open the camera; on pressing the capture button:
The camera detects whether any face is in view and waits for an eye blink.
If the blink probability is at least 90% for both eyes, wait 200 milliseconds, then detect the face again with an eye-open probability above 90% to verify the face is still there, and capture the picture at the end.
That's a cheap trick, but it's working out so far.
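For reference, here is what those two probability checks might look like with ML Kit face detection (classification mode enabled); the real app is React Native, so treat the class and method layout here as illustrative rather than the shipped code:

```java
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;
import java.util.List;

// Illustrative sketch of the blink check, not the shipped React Native code.
public class BlinkLivenessCheck {

    // Classification mode is what exposes the eye-open probabilities.
    private final FaceDetector detector = FaceDetection.getClient(
            new FaceDetectorOptions.Builder()
                    .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                    .build());

    /** Blink detected: both eye-open probabilities drop to 10% or less. */
    static boolean eyesClosed(Face face) {
        Float left = face.getLeftEyeOpenProbability();
        Float right = face.getRightEyeOpenProbability();
        return left != null && right != null && left <= 0.1f && right <= 0.1f;
    }

    /** Re-check ~200 ms later: face still present with both eyes clearly open. */
    static boolean eyesOpen(Face face) {
        Float left = face.getLeftEyeOpenProbability();
        Float right = face.getRightEyeOpenProbability();
        return left != null && right != null && left > 0.9f && right > 0.9f;
    }

    /** Runs detection on one preview frame and hands the first face to the caller. */
    public void analyzeFrame(InputImage frame, OnFace callback) {
        detector.process(frame).addOnSuccessListener((List<Face> faces) -> {
            if (!faces.isEmpty()) {
                callback.onFace(faces.get(0));
            }
        });
    }

    public interface OnFace {
        void onFace(Face face);
    }
}
```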
Regards
On some iPhones (iOS 11.1 upwards) there's the so-called TrueDepth camera that's used for Face ID. With it (or the rear-facing dual-camera system) you can capture images along with depth maps. You could exploit that to check whether the face is flat (captured from another picture) or has normal facial contours. See here...
One would have to come up with a 3D face model to fool that.
It's limited to only a few iPhone models, though, and I don't know about an Android equivalent.
I'm building an Android app that has to identify, in real time, a mark/pattern that will be on the four corners of a visiting card. I'm using the preview stream of the phone's rear camera as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how a QR reader shows reference dots on screen at the corner points of the QR code preview.
I know how to get frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or how to optimize it for real-time detection. I tried messing around with OpenCV, and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on an algorithm to use to find the pattern.
The image below shows my pattern along with some details (ratios) about it (the same as the one used in QR codes, but placed at 4 corners instead of 3).
I think one approach is to look for black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its centre, but I have no idea how to code it on Android. I'm looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on the four corners of a visiting card:
Assuming the background is white, you can simply try this method.
Processing which needs to be done, and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In that example, the image shown on the tablet is delayed (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.
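To make the ratio idea from the question concrete, here is a rough sketch of a run-length scan over one row of a grayscale frame, assuming a QR-style 1:1:3:1:1 mark; the threshold and tolerance values are illustrative and not tuned for real time:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the black/white ratio scan, not a tuned real-time detector.
public final class MarkScanner {

    /**
     * Scans one row of a grayscale (luminance) frame for five alternating runs
     * whose widths are roughly 1:1:3:1:1, starting and ending on black.
     * Returns the x coordinate of the mark's centre on that row, or -1 if none.
     */
    public static int findPatternInRow(byte[] luma, int width, int row, int threshold) {
        // 1. Run-length encode the row: {length, startX, isBlack ? 1 : 0} per run.
        List<int[]> runs = new ArrayList<>();
        int offset = row * width;
        int start = 0;
        boolean prevBlack = (luma[offset] & 0xFF) < threshold;
        for (int x = 1; x <= width; x++) {
            boolean black = x < width && (luma[offset + x] & 0xFF) < threshold;
            if (x == width || black != prevBlack) {
                runs.add(new int[]{x - start, start, prevBlack ? 1 : 0});
                start = x;
                prevBlack = black;
            }
        }

        // 2. Slide a five-run window over the row (runs alternate by construction).
        for (int i = 0; i + 5 <= runs.size(); i++) {
            if (runs.get(i)[2] != 1) continue;   // window must start with a black run
            int total = 0;
            for (int j = 0; j < 5; j++) total += runs.get(i + j)[0];
            float module = total / 7.0f;         // expected width of one "unit"
            float tol = module * 0.5f;           // ~50% tolerance
            boolean match =
                    Math.abs(runs.get(i)[0]     - module)     < tol
                 && Math.abs(runs.get(i + 1)[0] - module)     < tol
                 && Math.abs(runs.get(i + 2)[0] - 3 * module) < 3 * tol
                 && Math.abs(runs.get(i + 3)[0] - module)     < tol
                 && Math.abs(runs.get(i + 4)[0] - module)     < tol;
            if (match) {
                // Centre of the wide middle run is roughly the mark's centre on this row.
                return runs.get(i + 2)[1] + runs.get(i + 2)[0] / 2;
            }
        }
        return -1;
    }
}
```

Running this over every few rows (and the same idea over columns) gives candidate centres; a mark is where a row hit and a column hit cross, which is roughly how QR finder patterns are located.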
I am about to start working on an Android custom camera app. I just want to know whether there is any way to add the following features to my app:
Beauty Level
Red Eye Removal
Acne Removal
If it is possible, can someone suggest or give me an idea of how I can code this into my app?
I am already familiar with the Android Camera API and have worked on several simple custom camera apps.
Thanks in advance
To begin with, you can use FaceDetector to detect faces in the picture. For example, you can remove red eyes by searching for red pixels around the detected eye positions and decreasing their red level. You can also use OpenCV for detecting eyes; I found a sample for eye detection here.
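As a starting point, here is a very rough sketch of that red-pixel idea, assuming you already have a mutable Bitmap and a Rect around an eye (for example derived from android.media.FaceDetector's eye midpoint and eye distance); the threshold values are assumptions:

```java
import android.graphics.Bitmap;
import android.graphics.Color;
import android.graphics.Rect;

// Crude sketch: the threshold and the way eyeRect is obtained are assumptions.
public final class RedEyeReducer {

    /** Darkens the red channel of strongly red pixels inside a (mutable) bitmap region. */
    public static void reduceRedEye(Bitmap bitmap, Rect eyeRect) {
        for (int y = eyeRect.top; y < eyeRect.bottom; y++) {
            for (int x = eyeRect.left; x < eyeRect.right; x++) {
                int c = bitmap.getPixel(x, y);
                int r = Color.red(c);
                int g = Color.green(c);
                int b = Color.blue(c);
                // "Red-eye" test: red clearly dominates the other two channels.
                if (r > 60 && r > 2 * g && r > 2 * b) {
                    int newRed = (g + b) / 2;   // pull red down towards green/blue
                    bitmap.setPixel(x, y, Color.argb(Color.alpha(c), newRed, g, b));
                }
            }
        }
    }
}
```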
Is there any library for detecting an eye (and the eye's size) in a given rectangle while the camera preview keeps showing its content (non-stop)?
I need to find an easy way to achieve this. I've found that there is an API for face detection, and that on Android 4 they also added eye detection, but only when a face is found; I need to find an eye even without any face.
You could always look at the source code for Android and see how they do eye detection.
Otherwise check out this question: OpenCV eye tracking on Android
If you want to see an example of OpenCV on Android, take a look at this open source code.
OpenCV is the best library for working with face and eye detection. Using OpenCV, you can follow this tutorial:
http://opencv-code.com/tutorials/eye-detection-and-tracking/
Example Code:
http://romanhosek.cz/android-eye-detection-and-tracking-with-opencv/
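For the "eye without a face" case specifically, here is a minimal sketch using OpenCV's stock haarcascade_eye.xml with the Java bindings (OpenCV 3+); the cascade path and detection parameters are assumptions you'll need to adapt:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

// Sketch: requires the OpenCV library to be loaded first (e.g. OpenCVLoader.initDebug()).
public class EyeFinder {

    private final CascadeClassifier eyeCascade;

    /** cascadePath points at a copy of haarcascade_eye.xml in app storage (assumed). */
    public EyeFinder(String cascadePath) {
        eyeCascade = new CascadeClassifier(cascadePath);
    }

    /** Detects eyes directly on a grayscale preview frame and outlines them on the RGBA frame. */
    public Rect[] findEyes(Mat grayFrame, Mat rgbaFrame) {
        MatOfRect eyes = new MatOfRect();
        // Scale factor / min-neighbours are typical starting values; tune for your preview size.
        eyeCascade.detectMultiScale(grayFrame, eyes, 1.1, 3);
        for (Rect eye : eyes.toArray()) {
            Imgproc.rectangle(rgbaFrame, eye.tl(), eye.br(), new Scalar(0, 255, 0, 255), 2);
        }
        return eyes.toArray();
    }
}
```

This skips face detection entirely, which is exactly what the question asks for, at the cost of more false positives than a face-then-eye pipeline.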
I am trying to display a .md2 model on top of the camera preview on my Android phone. I don't need to use the accelerometers or anything. If anyone could even just point me in the right direction as to how to set up an OpenGL overlay, that would be fantastic. If you are able to provide code that shows how to enable this, that would be even better! It would be greatly appreciated.
I'm not able to provide code until later this week, but you might want to check out a library called min3d, because I believe they already have a parser written for .md2 files. Then, if you use a GLSurfaceView, the background can be set to be transparent, and you can put a view of the camera behind it. Are you trying to get some kind of augmented reality effect? There are Android-specific libraries for that too, but they're pretty laggy (at least on my Motorola Droid).
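A minimal sketch of that layering, assuming you already have a SurfaceView-based camera preview class and a GLSurfaceView.Renderer that draws the parsed .md2 mesh (CameraPreview and ModelRenderer below are placeholders for those):

```java
import android.app.Activity;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.widget.FrameLayout;

// Sketch: CameraPreview (a SurfaceView showing the camera) and ModelRenderer
// (a GLSurfaceView.Renderer that draws the parsed .md2 mesh) are placeholders.
public class OverlayActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        FrameLayout root = new FrameLayout(this);

        // Back layer: the camera preview.
        root.addView(new CameraPreview(this));

        // Front layer: a GL surface with an alpha channel so the preview shows through.
        GLSurfaceView glView = new GLSurfaceView(this);
        glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);      // ask for an RGBA surface
        glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        glView.setZOrderOnTop(true);                        // draw above the preview surface
        glView.setRenderer(new ModelRenderer());            // clear with alpha 0 in the renderer
        root.addView(glView);

        setContentView(root);
    }
}
```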