Face detection in the Android system

I have a question regarding where face detection information is stored by Android.
There seem to be two options:
1) The face detection information is stored along with the image as part of the EXIF metadata.
2) Android stores the detected face information somewhere and retrieves it when the user opens that particular image.
For option 1, I tried to fetch the information with Metadata Extractor, but there was no tag in particular that corresponds to face detection (correct me if I am wrong).
If it is option 2, how exactly can I filter gallery images according to the faces tagged inside?
Please give me some pointers.

Android has a face detection API: you can call the findFaces method of android.media.FaceDetector on a bitmap. You can also use external libraries and frameworks such as OpenCV. Regarding your points: which framework are you using for face detection?
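A minimal sketch of the built-in android.media.FaceDetector API mentioned above. Note the API's requirements: the bitmap must be decoded as RGB_565, and MAX_FACES here is an arbitrary cap chosen for the sketch:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.PointF;
import android.media.FaceDetector;

public class FaceFinder {
    private static final int MAX_FACES = 5; // arbitrary upper bound for this sketch

    public static int countFaces(String imagePath) {
        // FaceDetector only accepts RGB_565 bitmaps.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inPreferredConfig = Bitmap.Config.RGB_565;
        Bitmap bmp = BitmapFactory.decodeFile(imagePath, opts);

        FaceDetector detector =
                new FaceDetector(bmp.getWidth(), bmp.getHeight(), MAX_FACES);
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        int found = detector.findFaces(bmp, faces);

        for (int i = 0; i < found; i++) {
            PointF mid = new PointF();
            faces[i].getMidPoint(mid);          // midpoint between the eyes
            float eyes = faces[i].eyesDistance(); // distance between the eyes
            // mid and eyes give a rough face rectangle to draw or crop
        }
        return found;
    }
}
```

Note that findFaces only detects faces in memory; it does not persist anything to the image. Whether results end up in EXIF is up to the app that took the picture, which would explain why Metadata Extractor found no face tags.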

Android Studio: Display a message if the camera is covered

I am writing an Android app that uses the camera. To make it user-friendly, I'd like to display a message when the picture is too dark or the user has a finger over the lens. Is there any way to get the camera state and decide whether it is covered by something or the lens is free?
In order to detect whether the camera is covered by some object, you will have to use the OpenCV library and act accordingly once the object is detected. There is nothing built into Android for the task you want to achieve.
Link to OpenCV
You can use Camera.PreviewCallback coupled with the Camera class to get a callback with a byte array; that byte array contains the image data of that frame.
Then you'll need some sort of algorithm/logic to determine whether or not it's "too dark". There is nothing built into Android that can help you determine that.
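As a sketch of such logic: in the default NV21 preview format, the first width × height bytes of the callback array are the luma (Y) plane, so a mean-brightness check over that plane is cheap enough to run per frame. The threshold is an assumption you would tune empirically:

```java
public class FrameBrightness {

    // NV21 layout: the first width*height bytes are the Y (luma) plane, 0-255.
    public static double meanLuma(byte[] nv21, int width, int height) {
        long sum = 0;
        int n = width * height;
        for (int i = 0; i < n; i++) {
            sum += nv21[i] & 0xFF; // bytes are signed in Java; mask to 0-255
        }
        return (double) sum / n;
    }

    // threshold is an arbitrary cutoff (e.g. ~40) to be tuned on real frames.
    public static boolean isTooDark(byte[] nv21, int width, int height, double threshold) {
        return meanLuma(nv21, width, height) < threshold;
    }
}
```

You would call isTooDark from onPreviewFrame and, when it returns true for several consecutive frames, show the warning message.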

how to autocrop business card images taken from camera using opencv in android

I am developing an app that captures a business card using a custom Android camera; I then need to auto-crop the unwanted space and store the image. I am using OpenCV for this. All the examples I have seen are in Python; I need it in native Android.
You can probably try something like this:
1) Get an edge map of the image (perform edge detection).
2) Find contours on the edge map. The outermost contour should correspond to the boundaries of your business card (under the assumption that the card is photographed against a solid background). This will help you extract the business card from the image.
3) Once extracted, you can store the image separately without the unwanted space.
OpenCV will help you with points 1, 2 and 3. Use something like Canny edge detection for point 1. The findContours function will come in handy for point 2. Point 3 is basic image manipulation, which I guess you don't need help with.
This might not be the most precise answer out there - but neither is the question very precise - so I guess it is all right.
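Under the same solid-background assumption, the extraction step can be illustrated without OpenCV: threshold the image and take the bounding box of the bright region. This is a simplified stand-in for the edge-map/contour steps above; real code would use Canny plus findContours and crop with Rect:

```java
public class CardCrop {

    // Given a grayscale image (values 0-255) of a bright card on a solid dark
    // background, return {top, left, bottom, right} of the smallest box
    // containing all pixels brighter than the threshold (the card region).
    public static int[] boundingBox(int[][] gray, int threshold) {
        int top = Integer.MAX_VALUE, left = Integer.MAX_VALUE;
        int bottom = -1, right = -1;
        for (int y = 0; y < gray.length; y++) {
            for (int x = 0; x < gray[y].length; x++) {
                if (gray[y][x] > threshold) {
                    top = Math.min(top, y);
                    left = Math.min(left, x);
                    bottom = Math.max(bottom, y);
                    right = Math.max(right, x);
                }
            }
        }
        return new int[]{top, left, bottom, right};
    }
}
```

Cropping to that box discards the background; the OpenCV contour approach additionally handles rotated cards, which a plain bounding box does not.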

opencv4android to do object tracking

I use ffmpeg to play a video stream on a SurfaceView in an Android project. Now I would like to implement the following feature:
1) Select one object by drawing a red rectangle on the SurfaceView.
2) Send the x, y, width, and height of the selected object, plus the original video frame, to OpenCV.
3) OpenCV then returns the new x and y of the object after processing the new video frame.
Has anybody done this before? It would be very kind of you to give me some suggestions, or tell me where I can download demo source code. Thank you so much.
For part (1), try searching Google a little more. It won't be hard to find a tutorial that uses touch input, a tutorial to draw a rectangle, and a tutorial to draw over the SurfaceView. Part (2) is done just by how you set up and define your variables - there isn't a specific mechanism or function that "sends" the data over.
Part (3) is the part that isn't obvious, so that's the part I'll focus on. As with most problems in computer vision, you can solve object tracking in many ways. In no particular order, what comes to mind includes:
Optical Flow - Python OpenCV examples are here
Lucas-Kanade - the algorithm compares extracted features frame-by-frame. [The features are Shi-Tomasi by default, but can also be BRIEF, ORB, SIFT/SURF, or any others.] This runs quickly enough if the number of features is reasonable [within an order of magnitude of 100].
Dense [Farneback] - the algorithm compares consecutive frames and produces a dense vector field of motion direction and magnitude.
Direct Image Registration - if the motion between frames is small [about 5-15% of the camera's field of view], there are functions that can map the previous image to the current image quickly and efficiently. However, this feature is not in the vanilla OpenCV package - you need to download and compile the contrib modules, and use the Android NDK. If you're a beginner with Java and C++, I don't recommend using it.
Template Matching [example] - useful and cheap if the object and camera orientations do not change much.
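To illustrate the idea behind the last option, template matching (this is not OpenCV's matchTemplate, which is what you would actually call in practice, just the core idea): slide the template over the frame and keep the position with the smallest sum of absolute differences (SAD):

```java
public class TemplateMatch {

    // Slide tmpl over image and return {bestY, bestX}, the top-left corner
    // of the window minimizing the sum of absolute differences (SAD).
    // Both arrays hold grayscale pixel values.
    public static int[] match(int[][] image, int[][] tmpl) {
        int ih = image.length, iw = image[0].length;
        int th = tmpl.length, tw = tmpl[0].length;
        long best = Long.MAX_VALUE;
        int[] pos = {0, 0};
        for (int y = 0; y + th <= ih; y++) {
            for (int x = 0; x + tw <= iw; x++) {
                long sad = 0;
                for (int ty = 0; ty < th; ty++)
                    for (int tx = 0; tx < tw; tx++)
                        sad += Math.abs(image[y + ty][x + tx] - tmpl[ty][tx]);
                if (sad < best) { best = sad; pos = new int[]{y, x}; }
            }
        }
        return pos;
    }
}
```

Step (3) then amounts to cropping the user's rectangle out of the first frame as the template and running this search on each new frame to get the updated x and y; OpenCV's matchTemplate does the same search far faster.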

How can I use OpenCV to do image matching?

I need to use OpenCV to do image matching on Android devices.
For example, I want the mobile device to match an apple image.
When I open the application, the camera opens and is ready to detect the apple image. If it is matched, a "match" message will be shown.
Can anyone give me some direction to finish it? Thanks.
To match an image, you can use Template Matching or a SURF detector in OpenCV. See the following links:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
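As a lightweight illustration of image matching (much cruder than the template matching and SURF approaches linked above, but enough to show the shape of the problem): compare normalized grayscale histograms of the reference and candidate frames. The 8-bin size and the score threshold are arbitrary choices for this sketch:

```java
public class HistogramSimilarity {

    // Build a normalized 8-bin histogram of grayscale pixels (0-255).
    public static double[] histogram(int[] pixels) {
        double[] h = new double[8];
        for (int p : pixels) h[p / 32]++;
        for (int i = 0; i < 8; i++) h[i] /= pixels.length;
        return h;
    }

    // Histogram intersection: 1.0 means identical distributions, 0.0 disjoint.
    public static double intersection(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += Math.min(a[i], b[i]);
        return s;
    }

    // minScore is an arbitrary threshold to be tuned (e.g. 0.9).
    public static boolean matches(int[] ref, int[] candidate, double minScore) {
        return intersection(histogram(ref), histogram(candidate)) >= minScore;
    }
}
```

Histograms ignore spatial layout, so this only answers "do the two images have similar brightness content"; for recognizing a specific apple image, the feature-matching (SURF) tutorial above is the right tool.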

Overlay Image on Live Camera feed using Wikitude

I have gone through all the samples of Wikitude. Is it possible to overlay the live camera feed with an image that has been saved as a screenshot, and create an augmented image? If it is possible, what tracker image should I use? The tracker image is the one I know about in advance, i.e. the image I am going to track. If the image will only be taken in the future, how can I create a .wtc file for it, and how can I augment my camera feed? Is this possible in Wikitude?
I have to create an application using Wikitude. I like the Wikitude SDK.
If I understand you correctly, you are looking for a way to create target images (the ones used for recognition) on the device. This is currently not supported. However, if you have a valid business case, we can provide you with a server-based tool to create target images. For more information please contact sales#wikitude.com.
Disclaimer: As you probably already guessed, I'm working for Wikitude.
