Is there any Android API to detect faces inside an image? For example, on iOS there is an API to detect faces, and I'm curious whether there is a similar API on Android.
The Android framework has the FaceDetector API, although it only works on still bitmaps (not real-time) and returns only a rough location for each face (the midpoint between the eyes and the eye distance).
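A minimal sketch of that API (assuming you already have a `Bitmap` called `source`; note that FaceDetector requires an RGB_565 bitmap with an even width):

```java
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;

// FaceDetector only accepts RGB_565 bitmaps, so convert first.
Bitmap rgb565 = source.copy(Bitmap.Config.RGB_565, false);

int maxFaces = 4;
FaceDetector detector =
        new FaceDetector(rgb565.getWidth(), rgb565.getHeight(), maxFaces);
FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
int found = detector.findFaces(rgb565, faces);

for (int i = 0; i < found; i++) {
    PointF mid = new PointF();
    faces[i].getMidPoint(mid);                  // midpoint between the eyes
    float eyeDistance = faces[i].eyesDistance();
    // Derive an approximate bounding box from mid and eyeDistance if needed.
}
```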
For more advanced features, such as real-time detection or face-contour detection, there is the ML Kit library offered by Google. This library also covers the very simple use cases, such as getting the face location in a bitmap.
More about ML Kit at the following link:
https://developers.google.com/ml-kit/guides
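A rough sketch of the bitmap use case (assuming the current ML Kit face-detection Java API and a `Bitmap` you already have):

```java
import android.graphics.Rect;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;

FaceDetectorOptions options = new FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
        .build();
FaceDetector detector = FaceDetection.getClient(options);

// rotationDegrees is 0 here because the bitmap is assumed upright.
InputImage image = InputImage.fromBitmap(bitmap, 0);
detector.process(image)
        .addOnSuccessListener(faces -> {
            for (Face face : faces) {
                Rect bounds = face.getBoundingBox(); // face location in the bitmap
            }
        })
        .addOnFailureListener(e -> { /* handle errors */ });
```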
I want to just put glasses over the eyes.
I can use the CameraX library for taking the picture and then get some coordinates for the eyes.
What is the best way to approach this problem?
Have you tried ML Kit's Face Detection API? It can detect facial "landmarks", including eyes.
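A sketch of the landmark lookup (assuming the current ML Kit Java API with landmark mode enabled; `photo` is your captured bitmap and `drawGlasses` is a hypothetical helper for your overlay):

```java
import android.graphics.PointF;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetectorOptions;
import com.google.mlkit.vision.face.FaceLandmark;

FaceDetectorOptions options = new FaceDetectorOptions.Builder()
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
        .build();

FaceDetection.getClient(options)
        .process(InputImage.fromBitmap(photo, 0))
        .addOnSuccessListener(faces -> {
            for (Face face : faces) {
                FaceLandmark leftEye = face.getLandmark(FaceLandmark.LEFT_EYE);
                FaceLandmark rightEye = face.getLandmark(FaceLandmark.RIGHT_EYE);
                if (leftEye != null && rightEye != null) {
                    PointF l = leftEye.getPosition();
                    PointF r = rightEye.getPosition();
                    // Hypothetical: position/scale your glasses image between the eyes.
                    drawGlasses(l, r);
                }
            }
        });
```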
Currently I'm working on an Android app. We want to detect features of a facial expression: the feature should detect the face and its expression in an image.
Accuracy should be decent, but it doesn't need to be perfect; what we do need is for expression detection to work fast and smoothly.
I have tried the ML Kit library for face detection and it works fine, but it provides only a single expression, "Smile". We need results for different expressions such as "Sad", "Surprise", "Angry", "Neutral" and "Happiness".
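Roughly what I tried (a sketch assuming ML Kit's classification mode):

```java
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetectorOptions;

FaceDetectorOptions options = new FaceDetectorOptions.Builder()
        .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
        .build();

FaceDetection.getClient(options)
        .process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener(faces -> {
            for (Face face : faces) {
                // ML Kit only classifies "smiling" and "eyes open";
                // there is no sad/angry/surprise/neutral output.
                Float smiling = face.getSmilingProbability(); // may be null
            }
        });
```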
I googled a lot and found many interesting things. There is also face detection in the Android API, but it returns only a single expression, and the ML Kit codelab sample (https://github.com/googlecodelabs/mlkit-android/tree/master/vision/final) only gives the positions of the eyes and other face landmarks. So, friends, do you have any reference or suggestion? Which libraries have good documentation and tutorials?
I would like to implement an Android application (min API 27) for a term project that lets users experience ARCore and mixed-reality features with a stereoscopic view in headsets such as Google Cardboard. In my preliminary research I couldn't find valid resources or approaches for stereoscopic vision on ARCore, except for some approaches in Unity3D and OpenGL and some frameworks such as Vuforia. As far as I know, ARCore 1.5 currently does not support this feature.
I considered using the Cardboard SDK and the ARCore SDK together, but I am not sure whether that is a naive approach or whether it would provide a solid foundation for the project and future work.
Is there a workaround to get the desired stereoscopic view for ARCore in native Android, or how could I implement a stereoscopic view for this case? (Not asking for an actual implementation, just brainstorming.)
Thanks in advance.
There is Google Sceneform, a scenegraph-based visualization engine for AR applications, and there is a feature request asking for the same thing you have planned. Now, to answer your question: there is a Java visualization library used throughout Google's own ARCore examples, and it has support for stereoscopic rendering. What is missing in their implementation is accounting for lens distortion.
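For brainstorming purposes, the core of side-by-side stereo rendering looks roughly like the sketch below. Assumptions: you already have a scene-drawing routine like the one in the ARCore samples (here a hypothetical `drawScene`), the eye-separation value is a guess, and Cardboard lens distortion is still not handled:

```java
import android.opengl.GLES20;
import android.opengl.Matrix;

/** Sketch: side-by-side stereo pass on top of an existing ARCore renderer. */
public class StereoRenderer {
    // Assumption: ~64 mm interpupillary distance, in meters.
    private static final float EYE_SEPARATION = 0.064f;

    private final float[] eyeShift = new float[16];
    private final float[] eyeView = new float[16];

    // viewMatrix/projMatrix come from the ARCore camera each frame.
    public void drawStereo(float[] viewMatrix, float[] projMatrix, int width, int height) {
        // Left eye: shift the view half the eye separation in eye space,
        // render into the left half of the surface.
        Matrix.setIdentityM(eyeShift, 0);
        Matrix.translateM(eyeShift, 0, EYE_SEPARATION / 2f, 0f, 0f);
        Matrix.multiplyMM(eyeView, 0, eyeShift, 0, viewMatrix, 0);
        GLES20.glViewport(0, 0, width / 2, height);
        drawScene(eyeView, projMatrix);

        // Right eye: mirrored shift, right half of the surface.
        Matrix.setIdentityM(eyeShift, 0);
        Matrix.translateM(eyeShift, 0, -EYE_SEPARATION / 2f, 0f, 0f);
        Matrix.multiplyMM(eyeView, 0, eyeShift, 0, viewMatrix, 0);
        GLES20.glViewport(width / 2, 0, width / 2, height);
        drawScene(eyeView, projMatrix);
    }

    // Hypothetical hook: whatever scene drawing the ARCore sample already does.
    private void drawScene(float[] view, float[] projection) { /* ... */ }
}
```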
I have implemented an augmented reality program using Qualcomm's Vuforia library. Now I want to add an optical character recognition feature to my program so that I can translate text from one language to another in real time. I am planning to use the Tesseract OCR library, but my question is: how do I integrate Tesseract with QCAR?
Can somebody suggest a proper way to do it?
What you need is access to the camera frames so you can send them to Tesseract. The Vuforia SDK offers a way to access the frames using the QCAR::UpdateCallback interface (documentation here).
What you need to do is create a class that implements this interface, register it with the Vuforia SDK using QCAR::registerCallback() (see here), and from there you'll get notified each time the Vuforia SDK has processed a frame.
This callback will be given a QCAR::State object, from which you can get access to the camera frame (see the doc for QCAR::State::getFrame() here), and send it to the Tesseract SDK.
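On the Tesseract side, a minimal sketch using the tess-two Android wrapper (assuming you have converted the QCAR frame to a Bitmap, and that `dataPath` points to a directory containing tessdata/eng.traineddata):

```java
import android.graphics.Bitmap;
import com.googlecode.tesseract.android.TessBaseAPI;

TessBaseAPI tess = new TessBaseAPI();
// dataPath must contain a tessdata/ folder with eng.traineddata inside.
tess.init(dataPath, "eng");
tess.setImage(frameBitmap);              // the camera frame converted to a Bitmap
String recognizedText = tess.getUTF8Text();
tess.end();
```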
But be aware that the Vuforia SDK works with frames at a rather low resolution (on the many phones I tested, it returned frames in the 360x240 to 720x480 range, more often the former than the latter), which may not be accurate enough for Tesseract to detect text.
As complementary information to mbrenon's answer: Tesseract only does text recognition and doesn't handle text ROI extraction, so you will need to add that step to your system after capturing your image.
You can read these academic papers, which report on the additional steps needed to use Tesseract on mobile phones and provide some performance evaluations:
TranslatAR: Petter, M.; Fragoso, V.; Turk, M.; Baur, C., "Automatic text detection for mobile augmented reality translation," 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 48-55, 6-13 Nov. 2011.
Mobile Camera Based Detection and Translation
I want to know how to implement Android face detection using OpenCV/JavaCV. If anyone has an idea or code for this, please comment here or post the code. I want to get faces from the phone's gallery and detect them.
For face detection you can use the built-in FaceDetector in the Android SDK; it returns face positions and angles in bitmaps, but it's not very fast.
You can also use JavaCV's face detection, but before starting I recommend reading this article to see the advantages and constraints of the APIs you can use, and to compare their performance.
For FaceDetector, see these links:
Link 1
Link 2
Here's a real-time face detection sample using FaceDetector and OpenGL (it draws rectangles), which works on Android 2.2.
You can also use OpenCV on Android.
You'd better try this on Linux (I've tried it on Windows, but failed).
Finally, there is JavaCV (strongly recommended).
There is sample code for real-time face detection using the camera; see "javacv-src-*.zip" on the download page.
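For the gallery use case, a rough JavaCV sketch on a single image (assuming the org.bytedeco JavaCV bindings and a Haar cascade file from the OpenCV distribution; both paths are placeholders):

```java
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.Rect;
import org.bytedeco.opencv.opencv_core.RectVector;
import org.bytedeco.opencv.opencv_objdetect.CascadeClassifier;

import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;
import static org.bytedeco.opencv.global.opencv_imgproc.COLOR_BGR2GRAY;
import static org.bytedeco.opencv.global.opencv_imgproc.cvtColor;
import static org.bytedeco.opencv.global.opencv_imgproc.equalizeHist;

public class GalleryFaceDetect {
    public static void main(String[] args) {
        // Placeholder paths: point these at your cascade file and gallery image.
        CascadeClassifier cascade =
                new CascadeClassifier("haarcascade_frontalface_default.xml");
        Mat image = imread("gallery_photo.jpg");

        // Haar cascades work best on equalized grayscale input.
        Mat gray = new Mat();
        cvtColor(image, gray, COLOR_BGR2GRAY);
        equalizeHist(gray, gray);

        RectVector faces = new RectVector();
        cascade.detectMultiScale(gray, faces);

        for (long i = 0; i < faces.size(); i++) {
            Rect r = faces.get(i);
            System.out.printf("face at (%d, %d) size %dx%d%n",
                    r.x(), r.y(), r.width(), r.height());
        }
    }
}
```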
The timing figures on the screenshot from K_Anas are shockingly slow... my app on my HTC Desire S with the OpenCV library (here) does 4+ fps...
My demo app on the Play Store (eurgh) is here. In the menu, the first item takes you to my web page for the app, with source code snippets: 1) install OpenCV, 2) get the supplied samples running, 3) edit "Tutorial 2 OpenCVSamples" and drop my code snippets into the frame-processing loop.
I claim no credit for the app; it is just a slightly enlarged and adjusted version of the sample that comes with the OpenCV library.