I have successfully implemented face detection in my app using Android's Camera.FaceDetectionListener (following the Android Developers guide), but unfortunately some devices do not support this feature. Is there another way to achieve the same result?
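For context, this is roughly what I have now (a simplified sketch of my current setup; the helper class and names are just illustrative):

    import android.hardware.Camera;
    import android.util.Log;

    // Simplified sketch of my current setup with android.hardware.Camera.
    public class FaceDetectionHelper {
        public static boolean startFaceDetection(Camera camera) {
            // Devices without support report 0 here -- that's the case I need to handle.
            if (camera.getParameters().getMaxNumDetectedFaces() <= 0) {
                return false;
            }
            camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
                @Override
                public void onFaceDetection(Camera.Face[] faces, Camera cam) {
                    // faces[i].rect holds the face bounds in camera coordinates
                    Log.d("FaceDetect", "Detected " + faces.length + " face(s)");
                }
            });
            camera.startFaceDetection(); // must be called after startPreview()
            return true;
        }
    }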
I usually work with OpenCV to make image processing algorithms.
http://opencv.org/platforms/android.html
Its algorithms are much better than Android's built-in face detection, and if you download the SDK you get a face-detection sample.
Here are the downloads:
http://opencv.org/downloads.html
The SDK handles the camera API (Camera2), runs at around 30 fps, and provides a wrapper if you want to process video frames. There are also samples that mix Java OpenCV code with JNI code to make your algorithm much faster.
Unfortunately, these examples are provided as Eclipse projects, but they are not difficult to import into an Android Studio project.
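To give you an idea of what processing camera frames looks like on the Java side, here is a rough sketch (not the SDK sample itself, just the general shape; faceCascade is assumed to be a CascadeClassifier you loaded beforehand):

    import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.core.Scalar;
    import org.opencv.objdetect.CascadeClassifier;

    // Rough sketch of the per-frame callback (CvCameraViewListener2.onCameraFrame).
    // 'faceCascade' is assumed to be a CascadeClassifier loaded elsewhere.
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();
        Mat gray = inputFrame.gray();

        MatOfRect faces = new MatOfRect();
        faceCascade.detectMultiScale(gray, faces);

        for (Rect face : faces.toArray()) {
            // Draw a rectangle around each detected face
            // (Core.rectangle in OpenCV 2.4.x; Imgproc.rectangle in 3.x)
            Core.rectangle(rgba, face.tl(), face.br(), new Scalar(0, 255, 0), 3);
        }
        return rgba;
    }

Moving the heavy work into JNI (as the SDK's face-detection sample does) keeps this callback fast enough for a live preview.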
I hope these references are useful.
Cheers.
I'm trying to build an AR Android app that uses Vuforia + jPCT-AE.
jPCT is being used because it makes it easier to use objects exported from Blender and dramatically reduces code verbosity (compared with Vuforia stand-alone).
I would like to add the ability to display a video along with different objects (e.g. a banana and a monkey) rendered using jPCT-AE; however, I haven't found any clues (documentation) on how to do this, so I'm asking for your help and knowledge.
Thanks in advance!
Have a look at Vuforia's Video Playback sample, described at this link:
Advanced Topics
The samples below show how to implement sophisticated rendering techniques in Unity and OpenGL ES to enrich your app with creative effects. A project to show you how to work with C++ on Android is also available.
...
Video Playback
I'm new to developing Android apps in general.
I'm trying to create an application that, given a certain image, would detect faces and give me the eye locations and other info.
I've done some research and found some options such as the Android FaceDetector API and OpenCV.
Could anyone give me some advice on how to make an app like this, or send me a link with any info related to this? All help would be great!
Thanks, Daniel.
I have worked with face recognition for a while.
If you want to use OpenCV, a bit more searching on SO will turn up things like this one.
The best one for me is the SDK provided by Lockheed Martin... but it's too expensive :S for a single person.
Edited
"Face detection and face recognition are different things ;) Face detection tells you where is the face and face recognition tells you who's the owner of the face"
If you choose OpenCV, you can find full doc in official page.
I'm going to give you an overview:
You can use OpenCV in your app either via "OpenCV Manager" or with "static initialization" of OpenCV on Android.
About the first one:
OpenCV Manager is an Android service targeted to manage OpenCV library binaries on end users' devices. It allows sharing the OpenCV dynamic libraries between applications on the same device. The Manager provides the following benefits:
Less memory usage. All apps use the same binaries from the service and do not keep native libs inside themselves;
Hardware specific optimizations for all supported platforms;
Trusted OpenCV library source. All packages with OpenCV are published on Google Play market;
Regular updates and bug fixes;
About the second one:
A complete tutorial using Eclipse is available in the official documentation.
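In code, the difference between the two mostly comes down to how you initialize the library. A rough sketch of both options inside an Activity (the version constant and exact callback handling depend on the OpenCV release you target):

    import org.opencv.android.BaseLoaderCallback;
    import org.opencv.android.LoaderCallbackInterface;
    import org.opencv.android.OpenCVLoader;

    // Option 1: OpenCV Manager -- the library is loaded asynchronously by the
    // Manager app installed on the device.
    private BaseLoaderCallback loaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            if (status == LoaderCallbackInterface.SUCCESS) {
                // Safe to create Mats, CascadeClassifiers, etc. from here on
            } else {
                super.onManagerConnected(status);
            }
        }
    };

    @Override
    protected void onResume() {
        super.onResume();
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_6, this, loaderCallback);
    }

    // Option 2: static initialization -- the native libs are bundled inside
    // your APK and loaded synchronously, so no Manager is needed.
    static {
        if (!OpenCVLoader.initDebug()) {
            // Handle the failure (e.g. log it and disable the OpenCV features)
        }
    }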
You might try the new Android face API. See the tutorial here about how to detect faces and facial landmarks:
https://developers.google.com/vision/detect-faces-tutorial
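For a still image, the basic usage looks roughly like this (a sketch based on that tutorial; it assumes the play-services-vision dependency is already added to the project):

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.PointF;
    import android.util.SparseArray;

    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.face.Face;
    import com.google.android.gms.vision.face.FaceDetector;
    import com.google.android.gms.vision.face.Landmark;

    // Rough sketch: detect faces and eye landmarks in a Bitmap.
    void detectFaces(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(false)
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .build();

        if (!detector.isOperational()) {
            // The native library may still be downloading on first use
            return;
        }

        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Face> faces = detector.detect(frame);

        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            for (Landmark landmark : face.getLandmarks()) {
                if (landmark.getType() == Landmark.LEFT_EYE
                        || landmark.getType() == Landmark.RIGHT_EYE) {
                    PointF eye = landmark.getPosition();
                    // eye.x / eye.y are the eye coordinates in the image
                }
            }
        }
        detector.release();
    }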
I explain how to do it in this article. I used TensorFlow Lite with a MobileFaceNet implementation, achieving very accurate results with surprisingly high speed.
You'll find the source code and an APK in this repo.
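I won't reproduce the article here, but the core of running such a model with the TensorFlow Lite Interpreter is quite small. A sketch (the 112x112x3 input and 192-float embedding are assumptions that depend on the exact MobileFaceNet conversion you use):

    import org.tensorflow.lite.Interpreter;
    import java.io.File;

    // Sketch: run a face-embedding model with the TensorFlow Lite Interpreter.
    // The tensor shapes are assumptions that depend on the converted model.
    float[] embedFace(File modelFile, float[][][][] faceCrop) {
        Interpreter interpreter = new Interpreter(modelFile);
        float[][] embedding = new float[1][192];  // assumed embedding size
        interpreter.run(faceCrop, embedding);     // faceCrop: [1][112][112][3]
        interpreter.close();
        return embedding[0];
    }

You then compare embeddings (e.g. by cosine distance) to decide whether two faces match.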
I need to develop a facial feature detection application with which I can detect the eyes, nose, lips and head along with the face. For this I opted for OpenCV. I have gone through many tutorials and sample projects, where I could see haar cascade files being used to detect facial features while recording a video, and I know where the haar cascade files are located.
But no site shows the complete implementation of haar cascade files in an OpenCV Android project.
Kindly point me to some sites about this, or give me some brief guidance on the same.
That is a good approach for detecting faces using OpenCV. Since you are using Android, you will probably not find the code written in Java, but you can always adapt it to work on Android too. This is what I believe is the best approach: http://docs.opencv.org/master/d7/d8b/tutorial_py_face_detection.html#gsc.tab=0
http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html
This is another detailed walkthrough of different approaches using OpenCV.
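The part most tutorials gloss over is getting the cascade .xml file onto the device where CascadeClassifier can read it. The usual trick is to ship it in res/raw and copy it to app-private storage first. A sketch (the resource and file names are whatever you choose):

    import android.content.Context;
    import org.opencv.objdetect.CascadeClassifier;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.InputStream;

    // Sketch: copy a cascade .xml from res/raw to private storage and load it.
    CascadeClassifier loadCascade(Context context, int rawResId, String fileName) throws Exception {
        File cascadeDir = context.getDir("cascade", Context.MODE_PRIVATE);
        File cascadeFile = new File(cascadeDir, fileName);

        InputStream is = context.getResources().openRawResource(rawResId);
        FileOutputStream os = new FileOutputStream(cascadeFile);
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = is.read(buffer)) != -1) {
            os.write(buffer, 0, bytesRead);
        }
        is.close();
        os.close();

        CascadeClassifier classifier = new CascadeClassifier(cascadeFile.getAbsolutePath());
        return classifier.empty() ? null : classifier;
    }

The same helper loads the eye, nose and mouth cascades; you then call detectMultiScale on the grayscale frame, or on a face sub-Mat for the smaller features.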
Hope it helps! If you need help in changing the code, post what you've done so far and I would be happy to help. Cheers!
I need to detect edges in an image, and I'm using the Canny algorithm for that.
Since OpenCV for Android is available (2.4.2), when I try to run the examples it says
"OpenCV Manager is not installed, please try to install it." After installing it from the market it works fine.
But I want users to be able to install my application without having to install another .apk to use it.
-> How can I use OpenCV without asking for another application, i.e. without the Manager having to be pre-installed?
-> Is there any way I can use the Canny algorithm for edge detection without OpenCV? Any good algorithm tutorials for implementing it in Android?
You might find information about this on the OpenCV webpage. That said, this approach is deprecated and OpenCV advises against doing it in production. The Manager actually allows the user to download the OpenCV library once and for all, and your application will be much smaller!
About not using OpenCV, you can try FastCV (as Aaron suggested), but it seems overkill for your application (and it requires you to be familiar with NDK development). With OpenCV, on the other hand, you can code either in Java (by the way, have a look at JavaCV) or using the NDK.
Finally, if you only need a Canny edge detector and don't want to use a library, you can try to write it yourself. The related page on Wikipedia should be enough for this (I did it a few years ago as an exercise).
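For comparison, once OpenCV is loaded, the library version of the whole thing is only a few lines. A sketch (the 50/150 thresholds and the 5x5 blur are just common starting values to tune):

    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    // Sketch: Canny edge detection with OpenCV's Java bindings.
    Mat detectEdges(Mat bgrImage) {
        Mat gray = new Mat();
        Mat edges = new Mat();
        Imgproc.cvtColor(bgrImage, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);  // reduce noise first
        Imgproc.Canny(gray, edges, 50, 150);
        return edges; // single-channel Mat: white pixels are edges
    }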
Have you looked into Qualcomm's FastCV? It provides some of the more common image processing algorithms found in libraries like OpenCV. They also have a pretty cool augmented reality API called Vuforia.
Fair warning, the support documentation isn't that great and it requires familiarity with NDK development.
https://developer.qualcomm.com/mobile-development/mobile-technologies/computer-vision-fastcv
I'm just looking around to start learning the NDK, with one particular project in mind:
I want to continually render a changing bitmap from the NDK side so I can show it in a live wallpaper.
(Hence I'm not talking about rendering to an OpenGL texture here, but about rendering to a Bitmap.)
I googled a bit and found that there's an option to directly manipulate a Bitmap's pixels. But I also found that the documentation says this feature is available only since Android 2.2.
And I'd like to support 2.1 in my live wallpaper.
On the other hand, I found several projects that do similar stuff - render something from the NDK and show it in a live wallpaper - and they work on 2.1. Examples are the wonderful Video Live Wallpaper, and I think Shake Them All Live Wallpaper does the same kind of thing.
So the question is: am I missing some other way to do continuous rendering to a live wallpaper other than direct manipulation of Bitmap data?
Or is there something else I got wrong? :)
As far as I know, the other projects that do rendering with the NDK and use jnigraphics prior to 2.2 actually include that library in the project and load it as a third-party library.
See the description of PREBUILT_SHARED_LIBRARY in android-ndk-r5b/docs/ANDROID-MK.html for more information on how to do that.
You can find jnigraphics in:
android-ndk-r5b/platforms/android-8/arch-arm/usr/lib/libjnigraphics.so
Of course, I don't know if it's actually permissible to redistribute part of the NDK (maybe someone else can weigh in on this), but apparently the only holdup with using jnigraphics prior to Android 2.2 is simply the fact that it's not present in earlier releases.
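In case it helps, the Java side of that pattern stays the same whichever way the native library is obtained. A rough sketch (renderFrame and the library name are hypothetical; the native method would fill the Bitmap's pixels through jnigraphics' AndroidBitmap_lockPixels / AndroidBitmap_unlockPixels):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.SurfaceHolder;

    // Rough sketch of the Java side of an NDK-rendered live wallpaper frame.
    public class NativeRenderer {
        static {
            System.loadLibrary("nativerenderer"); // hypothetical .so name
        }

        // Hypothetical native method that fills the Bitmap via jnigraphics.
        private native void renderFrame(Bitmap target);

        private final Bitmap frame = Bitmap.createBitmap(320, 480, Bitmap.Config.ARGB_8888);

        // Called repeatedly from the wallpaper engine's drawing loop.
        public void drawFrame(SurfaceHolder holder) {
            renderFrame(frame);                 // native code updates the pixels
            Canvas canvas = holder.lockCanvas();
            if (canvas != null) {
                canvas.drawBitmap(frame, 0, 0, null);
                holder.unlockCanvasAndPost(canvas);
            }
        }
    }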
Hope that helps.