Detecting face landmark points in Android

I am developing an app in which I need to get face landmark points on a camera feed, like a mirror cam or makeup cam. I want it to be available for iOS too. Please guide me to a robust solution.
I have used Dlib and Luxand.
DLIB: https://github.com/tzutalin/dlib-android-app
Luxand: http://www.luxand.com/facesdk/download/
Dlib is slow, with a lag of approximately 2 seconds (please look at the demo video on the git page), and Luxand is OK but it's paid. My priority is to use an open-source solution.
I have also used Google Vision, but it does not offer many face landmark points.
So please suggest a way to make Dlib work faster, or any other option, keeping cross-platform support as a priority.
Thanks in advance.

You can make Dlib detect face landmarks in real time on Android (20-30 fps) if you take a few shortcuts. It's an awesome library.
Initialization
Firstly you should follow all the recommendations in Evgeniy's answer, especially make sure that you only initialize the frontal_face_detector and shape_predictor objects once instead of every frame. The frontal_face_detector will initialize faster if you deserialize it from a file instead of using the get_serialized_frontal_faces() function.

The shape_predictor needs to be initialized from a 100Mb file, and takes several seconds. The serialize and deserialize functions are written to be cross-platform and perform validation on the data, which is robust but makes it quite slow. If you are prepared to make assumptions about endianness you can write your own deserialization function that will be much faster.

The file is mostly made up of matrices of 136 floating-point values (about 120000 of them, meaning 16320000 floats in total). If you quantize these floats down to 8 or 16 bits you can make big space savings (e.g. you can store the min value and (max-min)/255 as floats for each matrix and quantize each value separately). This reduces the file size down to about 18Mb and it loads in a few hundred milliseconds instead of several seconds. The decrease in quality from using quantized values seems negligible to me but YMMV.
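For illustration, here is a minimal Java sketch of that per-matrix quantization scheme (the class, method names, and stream format here are my own invention; Dlib's real serializer is C++ and more involved):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class MatrixQuantizer {
    // Write one matrix as (min, scale) plus one unsigned byte per value.
    static void writeQuantized(DataOutputStream out, float[] values) throws IOException {
        float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
        for (float v : values) { min = Math.min(min, v); max = Math.max(max, v); }
        float scale = (max - min) / 255f;
        out.writeFloat(min);
        out.writeFloat(scale);
        for (float v : values) {
            int q = scale == 0f ? 0 : Math.round((v - min) / scale); // 0..255
            out.writeByte(q);
        }
    }

    // Recover approximate floats: value = min + byte * scale.
    static float[] readQuantized(DataInputStream in, int count) throws IOException {
        float min = in.readFloat();
        float scale = in.readFloat();
        float[] values = new float[count];
        for (int i = 0; i < count; i++) {
            values[i] = min + in.readUnsignedByte() * scale;
        }
        return values;
    }
}

At one byte per float, the 16320000 floats come to about 16Mb, plus two floats of overhead per matrix, which lines up with the ~18Mb figure above.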
Face Detection
You can scale the camera frames down to something small like 240x160 (or whatever, keeping aspect ratio correct) for faster face detection. It means you can't detect smaller faces but it might not be a problem depending on your app. Another more complex approach is to adaptively crop and resize the region you use for face detections: initially check for all faces in a higher res image (e.g. 480x320) and then crop the area +/- one face width around the previous location, scaling down if need be. If you fail to detect a face one frame then revert to detecting the entire region the next one.
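A sketch of the downscale-then-map-back bookkeeping, using the plain Android SDK (the class and method names are just for illustration):

import android.graphics.Bitmap;
import android.graphics.Rect;

class DetectionScaler {
    // Downscale the frame for detection, preserving aspect ratio.
    static Bitmap downscaleForDetection(Bitmap frame, int targetWidth) {
        int targetHeight = frame.getHeight() * targetWidth / frame.getWidth();
        return Bitmap.createScaledBitmap(frame, targetWidth, targetHeight, true);
    }

    // Map a rectangle found in the small image back to full-frame coordinates.
    static Rect mapBackToFrame(Rect detected, float scale) {
        return new Rect(Math.round(detected.left * scale),
                        Math.round(detected.top * scale),
                        Math.round(detected.right * scale),
                        Math.round(detected.bottom * scale));
    }
}

Here scale would be frame.getWidth() / 240f if you detected at 240 px wide.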
Face Tracking
For faster face tracking, you can run face detections continuously in one thread, and then in another thread, track the detected face(s) and perform face feature detections using the tracked rectangles. In my testing I found that face detection took between 100 and 400ms depending on what phone I used (at about 240x160), and I could do 7 or 8 face feature detections on the intermediate frames in that time. This can get a bit tricky if the face is moving a lot, because when you get a new face detection (which will be from 400ms ago), you have to decide whether to keep tracking from the newly detected location or from the tracked location of the previous detection.

Dlib includes a correlation_tracker, but unfortunately I wasn't able to get it to run faster than about 250ms per frame, and scaling down the resolution (even drastically) didn't make much of a difference. Tinkering with internal parameters produced increased speed but poor tracking. I ended up using a CAMShift tracker based on the chroma UV planes of the preview frames, generating the color histogram from the detected face rectangles. There is an implementation of CAMShift in OpenCV, but it's also pretty simple to roll your own.
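If you use OpenCV's implementation, the basic loop looks roughly like this (a sketch with OpenCV's Java bindings; it histograms hue for brevity, where the approach above used the chroma UV planes directly):

import java.util.Arrays;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.Video;

class FaceTracker {
    private final Mat hist = new Mat();
    private Rect window;

    // Build the colour histogram from the face rect the detector found.
    void init(Mat bgrFrame, Rect faceRect) {
        window = faceRect;
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);
        Mat roi = new Mat(hsv, faceRect);
        Imgproc.calcHist(Arrays.asList(roi), new MatOfInt(0), new Mat(),
                         hist, new MatOfInt(30), new MatOfFloat(0, 180));
        Core.normalize(hist, hist, 0, 255, Core.NORM_MINMAX);
    }

    // Back-project the histogram and let CAMShift move the window.
    Rect track(Mat bgrFrame) {
        Mat hsv = new Mat(), backProj = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);
        Imgproc.calcBackProject(Arrays.asList(hsv), new MatOfInt(0), hist,
                                backProj, new MatOfFloat(0, 180), 1);
        Video.CamShift(backProj, window,
                       new TermCriteria(TermCriteria.EPS | TermCriteria.COUNT, 10, 1));
        return window; // the Java binding updates 'window' in place
    }
}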
Hope this helps; it's mostly a matter of picking the low-hanging fruit for optimization first and keeping going until you're happy it's fast enough. On a Galaxy Note 5, Dlib does face+feature detections in about 100ms, which might be good enough for your purposes even without all this extra complication.

Dlib is fast enough for most cases. Most of the processing time is spent detecting the face region in the image, and it's slow because modern smartphones produce high-resolution images (10MP+).
Yes, face detection can take 2+ seconds on a 3-5MP image, but it is trying to find very small faces, down to 80x80 pixels in size. I am really sure that you don't need such small faces in high-resolution images, and the main optimization here is to reduce the size of the image before finding faces.
After the face region is found, the next step, face landmark detection, is extremely fast and takes under 3ms per face; this time does not depend on resolution.
The dlib-android port is not using Dlib's detector the right way for now. Here is a list of recommendations for making the dlib-android port work much faster:
https://github.com/tzutalin/dlib-android/issues/15
It's very simple and you can implement it yourself. I expect a performance gain of about 2x-20x.

Apart from OpenCV and Google Vision, there are widely available web services like Microsoft Cognitive Services. The advantage is that they are completely platform-independent, which you've listed as a major design goal. I haven't personally used them in an implementation yet, but based on playing with their demos for a while they seem quite powerful; they're pretty accurate and can offer quite a few details, depending on what you want to know. (Similar solutions are available from other vendors as well, by the way.)
The two major downsides to something like that are the added network traffic and the API pricing (depending on how heavily you'll be using it).
Pricing-wise, Microsoft currently offers up to 5,000 transactions a month for free, with added transactions beyond that costing some fraction of a penny each (depending on traffic, you can actually get a discount for high volume). But if you're doing, for example, millions of transactions per month, the fees can add up surprisingly quickly: at, say, a tenth of a penny per call, five million calls a month is $5,000. This is actually a fairly typical pricing model; before you select a vendor or implement this kind of solution, make sure you understand how they're going to charge you, how much you're likely to end up paying, and how much you could be paying if your user base scales. Depending on your traffic and business model it could be either very reasonable or cost-prohibitive.
The added network traffic may or may not be a problem depending on how your app is written and how much data you're sending. If you can do the processing asynchronously and be guaranteed reasonably fast Wi-Fi access that obviously wouldn't be a problem but unfortunately you may or may not have that luxury.

I am currently working with the Google Vision API and it seems to be able to detect landmarks out of the box. Check out the FaceTracker here:
google face tracker
This solution detects the face, happiness, and the left and right eyes as is. For other landmarks, you can call getLandmarks() on a Face and it should return everything you need (though I have not tried it), according to their documentation: Face reference
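For reference, a sketch of requesting all landmarks with the Mobile Vision API (detector setup follows the documentation; frame acquisition and error handling are omitted):

import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

class LandmarkExample {
    static void printLandmarks(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setLandmarkType(FaceDetector.ALL_LANDMARKS) // eyes, ears, nose, cheeks, mouth
                .build();
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Face> faces = detector.detect(frame);
        for (int i = 0; i < faces.size(); i++) {
            for (Landmark landmark : faces.valueAt(i).getLandmarks()) {
                // getType() is a constant like Landmark.LEFT_EYE; getPosition() is a PointF
                System.out.println(landmark.getType() + " at " + landmark.getPosition());
            }
        }
        detector.release();
    }
}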

Related

Plotting 3D Math Functions in Android

As part of my project, I need to plot 2D and 3D functions in Android using Android Studio. I know how to plot 2D functions but I'm struggling with 3D functions.
What is the best way to plot 3D functions? What do I need and where do I start?
I'm not looking for code or external libraries that do all the work, I just need to know what I need to learn to be able to do it myself.
Thanks in advance.
Since you already understand 2D and want to advance to 3D, there's a simple and non-optimal method:
Decide on how much z depth you desire:
E.g.: currently the limits for x and y in your drawing functions are 1920 and 1080 (or even 4096x4096). If you want to save memory and keep things a bit lower resolution, use a size of 1920x1080x1000 - that's going to use 1000x more memory and has the potential to increase the drawing time of some calls by 1000 times - but that's the cost of 3D.
A more practical limit is a matrix of 8192x8192x16384, but be aware that video games at that resolution need 4-6GB graphics cards to work half decently (more being a bit better) - so you'll be chewing up some main memory starting at that size.
It's easy enough to implement a smaller drawing space and simply increase your allocation and limit variables later, not only does that test that future increases will go smoothly but it allows everything to run faster while you're ironing the bugs out.
Add a 3rd dimension to the functions:
E.g.: instead of a function that is simply line_draw(x,y), change it to line_draw(x,y,z), and use the global variable z_limit (or whatever you decide to name it) to test that z doesn't exceed the limit.
You'll need to decide if objects at the maximum distance are drawn as a dot or not visible at all. While testing, it is useful to clamp all z values past the limit to the limit value (thus making them a visible dot). For the finished product, once something goes past the limit it's best that it isn't visible.
Start by allocating the drawing buffer and implementing a single function first; there's no point (and possibly great frustration) in changing everything and hoping it's just going to work - it should, but if it doesn't, you'll have a lot on your plate if there's a common fault in every function.
Once you have this 3D buffer filled with an image (start with just a few 3D lines, such as a full-screen-sized "X" and "+"), you draw to your 2D screen X,Y by starting at the largest value of z first (e.g. z=1000). Once you finish that layer, decrement z and continue, drawing each layer until you reach zero, the objects closest to you.
That's the simplest (and slowest) way to make certain that closest objects obscure the furthest objects.
Now does it look OK? You don't want distant objects the same size as (or larger than) the closest objects; you want to make certain that you scale them.
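A toy sketch of those last two steps together - back-to-front layer drawing plus a simple perspective scale (the buffer is deliberately tiny, and focal is a made-up constant):

class Plot3D {
    static final int W = 320, H = 240, D = 100; // D plays the role of z_limit

    final boolean[][][] voxels = new boolean[D][H][W]; // the 3D drawing buffer
    final int[] screen = new int[W * H];               // ARGB output pixels

    // The 3D version of a pixel plot, clipped against all three limits.
    void plot(int x, int y, int z) {
        if (x >= 0 && x < W && y >= 0 && y < H && z >= 0 && z < D) {
            voxels[z][y][x] = true;
        }
    }

    // Draw the farthest layer first so nearer layers overwrite (obscure) it.
    void render() {
        final int focal = 200; // made-up perspective constant; tune to taste
        for (int z = D - 1; z >= 0; z--) {
            float scale = focal / (float) (focal + z); // distant layers shrink
            for (int y = 0; y < H; y++) {
                for (int x = 0; x < W; x++) {
                    if (!voxels[z][y][x]) continue;
                    int sx = (int) ((x - W / 2) * scale) + W / 2;
                    int sy = (int) ((y - H / 2) * scale) + H / 2;
                    screen[sy * W + sx] = 0xFF000000; // opaque black pixel
                }
            }
        }
    }
}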
The reason to choose numbers such as 8192 is that after writing your functions in C (or whichever language you choose), you'll want to optimize them with several versions each, written in assembly language and optimized for specific CPU and GPU architectures. Without specifically optimized versions everything will be extremely slow.
I understand that you don't want to use a library, but looking at several should give you an idea of the work involved and how you might implement your own. No need to copy; improve instead.
There are similar questions and answers that might fill in the blanks:
Reddit - "I want to create a 3D engine from scratch. Where do I start?"
Davrous - "Tutorial series: learning how to write a 3D soft engine from scratch in C#, TypeScript or JavaScript"
GameDev.StackExchange - "How to write my own 3-D graphics library for Windows? [closed]"

Starling for IOS & Android: Very low FPS in a static situation

I created an application with Starling. On new mobile devices it performs amazingly well; however, on older devices (e.g. iPhone 4) I encounter a very odd lag.
I have as far as I can tell a completely static situation:
There are quite a few display objects added to the stage; many of them are buttons, in case it matters. Their properties are not changed at all after initialization (x, y, rotation, etc.).
There are no enterframes / timeouts / intervals / requests of any kind in the background.
I'm not allocating / deallocating any memory.
In this situation, there's an average of 10 FPS out of 30, which is very odd.
Since Starling is a well established framework, I imagine it's me who's doing something wrong / not understanding something / not aware of something.
Any idea what might be causing it?
Has anyone else experienced this sort of problem?
Edit:
After reading a little I've made great optimizations in every possible way according to this thread:
http://wiki.starling-framework.org/manual/performance_optimization
I reduced the draw calls from around 90 to 12, flattened sprites, set the blend mode to none in specific cases to ease the load on the CPU, and so on...
To my surprise when I tested again, the FPS was unaffected:
fps: 6 / 60
mem: 19
drw: 12
Is it even possible to get normal fps with Starling on mobile? What am I missing?
I am using big textures that are scaled down to the size of the device. Is it possible that this affects the FPS that much?
Regarding "Load textures from files/URLs", I'm downloading different piles of assets for different situations, therefore I assumed compiling each pile into a SWF would be way faster than sending a separate request for each file. The problem is, for that I can only use embed, which apparently uses twice the memory. Do you have any solution in mind to enjoy the best of both worlds?
Instead of downloading your assets 'over-the-wire' and manually caching them for re-use, you can package the assets into your app bundle (rather than compiling them in with [Embed]) and then use the Starling AssetManager to load the textures at the resolution/scale that you need for the device, i.e.:
assets.enqueue(
    appDir.resolvePath("audio"),
    appDir.resolvePath(formatString("fonts/{0}x", scaleFactor)),
    appDir.resolvePath(formatString("textures/{0}x", scaleFactor))
);
Ref: https://github.com/Gamua/Starling-Framework/blob/master/samples/scaffold_mobile/src/Scaffold_Mobile.as
Your application bundle gets bigger, of course, but you do not take the 2x RAM hit of using [Embed].
Misc perf ideas from my comment:
Are you testing FPS in "Release" mode?
Are you using textures that are scaled down to match the resolution of the device before loading them?
Are you mixing BLEND modes that are causing additional draw calls?
Ref: The Performance Optimization page is great reading for optimizing your usage of Starling.
Starling is not a miracle solution for mobile devices. There's quite a lot of code running in the background in order to make the GPU display anything. You, the coder, have to make sure the number of draw calls is kept to a minimum: the weaker the device, the fewer draw calls you can afford. It's not rare to see people using Starling without paying any attention to their draw calls.
The size of the graphics used is mostly relevant to GPU upload time, not so much to GPU display time. So of course all relevant textures need to be uploaded prior to displaying any scene; you simply cannot upload new textures while a scene is playing. Even uploading a small texture will cause idling.
Displaying everything using Starling is not always a smart choice. In 'direct' render mode the GPU gets a lot of power, but the CPU still has capacity left over. You can reduce the amount of GPU uploading and GPU load by displaying static UI elements using the classic display list (which is where the Starling framework design falls short). Starling was originally made in a way that makes it very difficult to use both display systems together; that's one of the downsides of using this framework, and most professionals I know, including myself, don't use Starling for that reason.
Your system must be flexible: you should embed your assets for mobile, avoid external SWFs as much as possible, and be able to switch to another system for the web. If you expect to use one asset system for the mobile/desktop/web versions of your app, you are setting yourself up for failure. Embedding on mobile is critical for memory management, as the AIR platform internally manages the cache of those embedded assets. Thanks to that, when creating new instances of those assets the memory consumption stays under control; if you don't embed, then you are on your own.
Regarding overall performance, a very weak Android device will probably never be able to get past 10 fps when using Starling or any Stage3D framework, because of the amount of code those frameworks need to run (draw calls) in the background. On a weak device that amount of code is already enough to completely overload the CPU. On the other hand, on a weak device you can still get good performance and a good user experience by using GPU mode instead of 'direct' render mode (so no Stage3D) and displaying mostly raster graphics.
IN RESPONSE TO YOUR EDIT:
12 draw calls is very good (90 was pretty high).
That you still get low FPS on some devices is not that surprising. Low-end Android devices especially will always have low FPS in 'direct' render mode with a Stage3D framework, because of the amount of code those frameworks have to run to render one frame. The size of the textures you are using should not affect the FPS that much (that's the point of Stage3D), though reducing the size of those graphics would help with GPU upload time.
Now optimization is the key, and optimizing on a low-end device with low FPS is the best way to go, since whatever you do will have a great effect on better devices as well. Start by running tests that display only static graphics, with no or very little code on your part, just to see how far the Stage3D framework can go on its own on those weak devices without losing any FPS, and then optimize from there. The number of objects displayed on screen plus the number of draw calls is what affects FPS with these Stage3D frameworks, so keep a count of both and always look for ways to reduce them. On some low-end devices it's not practical to try to keep 60 fps, so switch to 30 and adjust your rendering accordingly.

Object detection in android using ORB

I'm trying to detect certain objects with the camera of an Android device. I tried the OpenCV people-detection sample using the HOG descriptor, but that seems to be pretty slow. Then I tried Haar cascades, which gave a better average frame rate of 10 fps and better accuracy as well.
After reading a bit more, another viable option seems to be the ORB feature detector. What I could understand is that I need to save the feature vectors of the images against which I want to compare the current frame. So,
What would be the best way to store these vectors on an Android device?
How big a database of images would I need for decent accuracy (suppose I use it for people detection), and what would be the overhead of comparing against a large database?
Also, what limitations does ORB present regarding color differences of objects and distance from the camera?
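For context, the basic ORB extract-and-match flow the question describes looks roughly like this with OpenCV's (3.x+) Java bindings - persisting the descriptors between runs is left out:

import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

class OrbMatcher {
    private final ORB orb = ORB.create();
    private final DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

    // One 32-byte binary descriptor per keypoint; cheap to store on disk.
    Mat describe(Mat grayImage) {
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        Mat descriptors = new Mat();
        orb.detectAndCompute(grayImage, new Mat(), keypoints, descriptors);
        return descriptors;
    }

    // Hamming-distance matching against a stored descriptor set.
    MatOfDMatch match(Mat frameDescriptors, Mat storedDescriptors) {
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(frameDescriptors, storedDescriptors, matches);
        return matches; // filter by distance before trusting the result
    }
}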

Neural Network to recognize accelerometer pattern

I'm building an application for Android devices that requires it to recognize, from accelerometer data, the difference between walking noise and double-tapping the device. I'm trying to solve this problem using neural networks.
At the start it went pretty well, teaching it to distinguish the taps from noise such as standing up/sitting down and walking around at a slower pace. But when it came to normal walking it never seemed to learn, even though I fed it a large proportion of noise data.
My question: Are there any serious flaws in my approach? Is the problem based on lack of data?
The network
I've chosen a 25-input, 1-output multi-layer perceptron, which I am training with backpropagation. The input is the change in acceleration every 20ms, and the output ranges from -1 (for no tap) to 1 (for tap). I've tried pretty much every constellation of hidden neurons there is, but had most luck with 3-10.
I'm using Neuroph's easyNeurons for the training and exporting to Java.
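For concreteness, one reading of that input encoding as a Java sketch (the method name and array layout are illustrative, not from the question):

class TapFeatures {
    // 25 consecutive 20ms acceleration deltas form one network input vector;
    // the training target is 1 for a double-tap window, -1 for noise.
    static double[] buildInput(float[] accelMagnitudes, int start) {
        double[] input = new double[25];
        for (int i = 0; i < 25; i++) {
            input[i] = accelMagnitudes[start + i + 1] - accelMagnitudes[start + i];
        }
        return input; // caller must ensure start + 26 <= accelMagnitudes.length
    }
}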
The data
My total training data is about 50 double-tap samples and about 3,000 noise samples. I've also tried training it with proportional amounts of noise and double taps.
The data ranges from +10 to -10. [Plots of the accelerometer signal were attached here: one of sitting double taps, one of fast walking.]
So to reiterate my questions: Are there any serious flaws in my approach here? Do I need more data for it to recognize the difference between walking and double tapping? Any other tips?
Update
OK, so after much adjusting, we've boiled the essential problem down to recognizing double taps while taking a brisk walk. Sitting and regular (in-house) walking we can handle pretty well.
Brisk walk
So this is some test data of me first walking then stopping, standing still, then walking and doing 5 double taps while I'm walking.
If anyone is interested in the raw data, I linked it for the latest (brisk walk) data here
Do you insist on using a neural network? If not, here is an idea:
Take a window of 0.5 seconds and consider the area under the curve (or, since your signal is discrete, the sum of the absolute values of each sensor reading - the red area in the image that was attached). You will probably find that this sum is high when the user is walking and much, much lower when they are sitting and/or tapping. You can set a threshold above which you consider a given window to be taken while the user is walking. Alternatively, since you have labelled data, you can train any binary classifier to differentiate between walking and not walking.
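A minimal sketch of that windowed-sum threshold (the window length matches the 20ms sampling mentioned in the question; the threshold value is made up and needs tuning against labelled data):

class WalkDetector {
    static final int WINDOW = 25;       // 0.5 s at one sample per 20 ms
    static final float THRESHOLD = 50f; // made-up value; tune on labelled data

    // Sum of absolute readings over a window; high while walking.
    static boolean isWalking(float[] samples, int windowStart) {
        float sum = 0f;
        int end = Math.min(windowStart + WINDOW, samples.length);
        for (int i = windowStart; i < end; i++) {
            sum += Math.abs(samples[i]);
        }
        return sum > THRESHOLD;
    }
}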
You can probably improve your system by considering other features of the signal, such as how jagged the line is. If the phone is sitting on a table, the line will be almost flat. If the user is typing, the line will be kind of flat, and you will see a spike every now and then. If they are walking, you will see something like a sine wave.
Have you considered that the "fast walking" and "fast walking + double tapping" signals might be too similar to differentiate using only accelerometer data? It may simply not be possible to achieve accuracy above a certain amount.
Otherwise, neural networks are probably a good choice for your data, and it still may be possible to get better performance out of them.
This very useful paper (http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) recommends that you whiten your dataset so that it has a mean of zero and unit covariance.
Also, since your problem is a classification problem, you should make sure that you are training your network using a cross-entropy criterion (http://arxiv.org/pdf/1103.0398v1.pdf) rather than RMSE. (I have no idea whether Neuroph supports cross-entropy or not.)
Another relatively simple thing you could try, as other posters suggested, is transforming your data. Using an FFT or DCT to transform your data to the frequency domain is relatively standard for time-series classification.
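As a sketch of what the transform step could look like (a naive DFT is fine at these window sizes; any FFT library does the same job faster):

class FrequencyFeatures {
    // Naive DFT magnitudes; O(n^2) but trivial for 25-sample windows.
    static float[] dftMagnitudes(float[] window) {
        int n = window.length;
        float[] mags = new float[n / 2]; // bins up to the Nyquist frequency
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = -2 * Math.PI * k * t / n;
                re += window[t] * Math.cos(angle);
                im += window[t] * Math.sin(angle);
            }
            mags[k] = (float) Math.hypot(re, im);
        }
        return mags; // feed these to the network instead of raw samples
    }
}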
You could also try training networks on different sized windows and averaging the results.
If you want to try some more difficult NN architectures, you could look at the Time-Delay-Neural-Network (just google this for the paper), which takes multiple windows into account in its structure. It should be relatively straightforward to use one of the Torch libraries (http://www.torch.ch/) to implement this, but it might be hard to export the network to an Android environment.
Finally, another method of getting better classification performance on time-series data is to consider the relationships between adjacent labels. Conditional Neural Fields (http://code.google.com/p/cnf/ - note: I have never used this code) do this by integrating neural networks into conditional random fields and, depending on the patterns of behavior in your actual data, may do a better job.
What would probably work is to filter the data using a Fourier transform first. Walking has a sine-like amplitude; your double taps would stand out in the transform result as a different frequency. I guess a neural network could then determine whether the data contains your double taps, because it contains the extra (double-tap) frequency. Some questions remain:
How long does the sample of data need to be?
Can your phone do all the work it needs to do? Does it have enough processing power?
You might even want to consider using the GPU for this.
Another option is to use the Fourier output and some good old Fuzzy Logic.
This sounds like fun...

What is the real world accuracy of phone accelerometers when used for positioning?

I am working on an application where I would like to track the position of a mobile user inside a building where GPS is unavailable. The user starts at a well-known fixed location (accurate to within 5 centimeters), at which point the accelerometer in the phone is activated to track any further movements with respect to that fixed location. My question is: with current-generation smartphones (iPhones, Android phones, etc.), how accurately can one expect to be able to track somebody's position based on the accelerometer these phones generally come equipped with?
Specific examples would be good, such as "If I move 50 meters X from the starting point, 35 meters Y from the starting point and 5 meters Z from the starting point, I can expect my location to be approximated to within +/- 80 centimeters on most current smart phones", or whatever.
I have only a superficial understanding of techniques like Kalman filters to correct for drift, though if such techniques are relevant to my application and someone wants to describe the quality of the corrections I might get from such techniques, that would be a plus.
If you integrate the accelerometer values twice you get position but the error is horrible. It is useless in practice.
Here is an explanation why (Google Tech Talk) at 23:20.
I answered a similar question.
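To see why, here is the naive double integration spelled out; a constant bias b in the acceleration grows as 0.5*b*t^2 in position, so even 0.01 m/s^2 of bias puts you about 18 m off after one minute:

class DeadReckoning {
    // Integrate acceleration to velocity, then velocity to position.
    // Any constant bias b ends up as 0.5 * b * t^2 of position error.
    static double[] integrateTwice(double[] accel, double dt) {
        double v = 0, x = 0;
        double[] positions = new double[accel.length];
        for (int i = 0; i < accel.length; i++) {
            v += accel[i] * dt; // m/s
            x += v * dt;        // m
            positions[i] = x;
        }
        return positions;
    }
}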
I don't know if this thread is still open or even if you are still attempting this approach, but I could at least give an input into this, considering I tried the same thing.
As Ali said... it's horrible! The smallest measurement error in accelerometers turns out to be ridiculous after double integration. And due to the constant increase and decrease in acceleration while walking (with each footstep, in fact), this error quickly accumulates over time.
Sorry for the bad news. I also didn't want to believe it until trying it myself... filtering out unwanted measurements doesn't work either.
I have another possibly plausible approach, if you're interested in proceeding with your project (the approach I followed in my thesis for my computer engineering degree)... image processing!
You basically follow the theory behind optical mice: optical flow, or as some call it, ego-motion. I implemented the image processing algorithms in Android's NDK, and even used OpenCV through the NDK to simplify the algorithms. You convert images to grayscale (compensating for different light intensities), then apply thresholding and image enhancement (to compensate for images getting blurred while walking), then corner detection (to increase the accuracy of the total result estimations), then template matching, which does the actual comparison between image frames and estimates the actual displacement in pixels.
You then go through trial and error to estimate how many pixels represent what distance, and multiply by that value to convert pixel displacement into actual displacement. This works up to a certain movement speed though, the real problem being that camera images still get too blurred for accurate comparisons while walking. This can be improved by setting camera shutter speeds or ISO (I'm still playing around with this).
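For a rough idea of the per-frame displacement step, here is a sketch using OpenCV's Java bindings (the answer above did this in the NDK; estimateShift and the centre-patch choice are illustrative):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

class ShiftEstimator {
    // Match the centre patch of the previous frame inside the current frame;
    // the offset of the best match is the displacement in pixels.
    static Point estimateShift(Mat prevGray, Mat currGray) {
        int w = prevGray.cols() / 2, h = prevGray.rows() / 2;
        Rect centre = new Rect(w / 2, h / 2, w, h);
        Mat template = new Mat(prevGray, centre);

        Mat result = new Mat();
        Imgproc.matchTemplate(currGray, template, result, Imgproc.TM_CCOEFF_NORMED);
        Core.MinMaxLocResult mm = Core.minMaxLoc(result);

        return new Point(mm.maxLoc.x - centre.x, mm.maxLoc.y - centre.y);
    }
}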
So hope this helps... otherwise google "egomotion" for real-time applications. Eventually you'll get the right stuff and figure out the gibberish I just explained to you.
enjoy :)
The optical approach is good, but OpenCV also provides a few feature transforms; you then feature-match between frames (OpenCV provides this too).
Without a second point of reference (two cameras) you can't directly reconstruct where you are, because of depth. At best you can estimate a depth per point, assume a motion, score the assumption over a few frames, and re-guess each depth and motion until it makes sense. That isn't hard to code, but it isn't stable; small motions of things in the scene screw it up. I tried :)
With a second camera though, it's not that hard at all. But cell phones don't have them.
Typical phone accelerometer chips resolve +/- 2g @ 12 bits, providing 1024 levels over the full range, or 0.0643 ft/sec^2 per LSB. The sampling rate depends on clock speeds and overall configuration; typical rates allow between one and 400 samples per second, with faster rates offering lower accuracy. Unless you mount the phone on a snail, displacement measurement likely will not work for you. You might consider using optical distance measurement instead of a phone accelerometer. Check out the Panasonic device EKMB1191111.
