Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am working on a project in which I need to detect someone's facial features in real time, using the camera, and be able to interact with them. For example, with a tap on the screen, the eye area should become black.
I have done a lot of searching but nothing seems to fit.
I am either not searching in the right places, or I don't fully understand the potential of what I find by searching.
I would really appreciate it if someone could point me in the right direction.
You need to use the Google Mobile Vision library. It is part of Google Play Services. It can detect the following facial landmarks:
left and right eye
left and right ear
left and right ear tip
base of the nose
left and right cheek
left and right corner of the mouth
base of the mouth
For further information, check out these links:
developers.google.com
JournalDev
Note: the JournalDev article states:
Add the following dependency inside the build.gradle file of your application.
compile 'com.google.android.gms:play-services-vision:11.0.4'
Instead of compile, you need to use the implementation keyword here:
implementation 'com.google.android.gms:play-services-vision:11.0.4'
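Once the detector reports an eye landmark, the "tap makes the eye black" part is plain pixel work on the frame's ARGB buffer. Here is a minimal stdlib sketch of that step; the center coordinates are hard-coded stand-ins for what a detected landmark's position would supply, and the buffer stands in for a camera frame:

```java
// Sketch: blacken a circular region around an eye landmark in an ARGB
// pixel buffer. In the real app, the center would come from the face
// detector's eye landmark position; here it is hard-coded for illustration.
public class EyeBlackout {
    static void blackenCircle(int[] pixels, int width, int height,
                              float cx, float cy, float radius) {
        float r2 = radius * radius;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                float dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= r2) {
                    pixels[y * width + x] = 0xFF000000; // opaque black
                }
            }
        }
    }

    public static void main(String[] args) {
        int w = 8, h = 8;
        int[] pixels = new int[w * h];
        java.util.Arrays.fill(pixels, 0xFFFFFFFF); // white "frame"
        blackenCircle(pixels, w, h, 4f, 4f, 2f);   // pretend eye at (4, 4)
        System.out.println(Integer.toHexString(pixels[4 * w + 4])); // center
        System.out.println(Integer.toHexString(pixels[0]));         // corner
    }
}
```

On Android you would copy the camera frame into a Bitmap, run this over its pixel array, and draw the result; the circle test itself is the same.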
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
There is an app named Funny Face Effects on the Play Store.
I am trying to achieve the effect in which, when a user moves a finger on the image, the pixels under the finger within a certain radius shift in that direction. It feels like moving a cloth or some thick paste with the finger. I could not find the proper name, but I think it is called smudge.
After searching and trying, I found that I can achieve this with the GPUImage library. This library uses an OpenGL fragment shader to apply effects to an image. I tried this, but to get a continuous effect I have to save each finger position (each point on the line) and apply the filter for each point, which is not feasible; it gets stuck after drawing more lines.
How to achieve this effect with OpenGL-ES?
It would be great if code is provided, but just the idea of the implementation would also work. Thank you.
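For what it's worth, the per-fragment math such a shader would run can be illustrated on the CPU: every pixel inside the brush radius samples the source image at a position shifted against the drag direction, with a falloff toward the brush edge. A minimal stdlib sketch (tiny 1x5 "image", nearest-neighbour sampling, all values made up):

```java
// CPU illustration of the warp a smudge fragment shader would apply:
// pixels inside the brush radius sample the source image at a position
// shifted against the drag direction, with a smooth falloff to the edge.
public class SmudgeSketch {
    static int[] smudge(int[] src, int width, int height,
                        float cx, float cy, float dx, float dy, float radius) {
        int[] out = src.clone();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                float ox = x - cx, oy = y - cy;
                float dist = (float) Math.sqrt(ox * ox + oy * oy);
                if (dist < radius) {
                    float falloff = 1f - dist / radius;    // 1 at center, 0 at edge
                    int sx = Math.round(x - dx * falloff); // sample against the drag
                    int sy = Math.round(y - dy * falloff);
                    sx = Math.max(0, Math.min(width - 1, sx));
                    sy = Math.max(0, Math.min(height - 1, sy));
                    out[y * width + x] = src[sy * width + sx];
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] src = {0, 9, 0, 0, 0};              // one bright pixel at x = 1
        // drag one pixel to the right, brush centered at x = 2, radius 3
        int[] out = smudge(src, 5, 1, 2f, 0f, 1f, 0f, 3f);
        System.out.println(java.util.Arrays.toString(out));
    }
}
```

The key to avoiding the per-point filter problem is to do this warp ping-pong style against an offscreen texture, so each new drag segment warps the already-warped result instead of re-applying every stored touch point.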
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I'm working on an Android app (though eventually I'll want to do the same thing on iOS) and I'm looking to build an image recognition feature into it. The user would snap a picture, then this component of the app would need to figure out what that image is, whether it's a bowling ball, a salad, a book, you name it. It would also be helpful if it could figure out roughly how big the object in question is, though I imagine the camera focus values could help with that. The objects in question would not be moving.
I've heard of neural networks being used, but I'm not sure how this could be implemented, especially since I want to be able to recognize a very wide range of objects. I highly doubt this sort of processing could happen natively on a phone either. What are some solutions to this problem?
I would suggest you look at OpenCV. They have an awesome open source library for image processing and object detection. They also have great Android sample apps ready for testing some of their APIs.
http://opencv.org/platforms/android.html
Closed. This question needs debugging details. It is not currently accepting answers.
Closed 7 years ago.
I use Vuforia and Unity to build an Android AR app, following a guide on YouTube. When the phone's camera scans the image target, the house object appears on the image target. But when I move the image target out of the camera's view, the house object isn't removed and still appears on screen.
Are you using Extended Tracking? Check in the inspector of your image target whether extended tracking is on.
For more information about extended tracking please refer to this Vuforia Guide.
If you want the house to disappear when tracking is lost, try attaching DefaultTrackableEventHandler.cs from under Vuforia/Scripts to your game object.
Specifically, look at OnTrackingLost: it disables rendering and collision.
Good luck!
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I think the window is too large and extends beyond my Windows frame, but I don't know how to fix it. Can anyone give me some advice?
Although this is getting some downvotes, it's actually a valid question; having just upgraded myself, I can see what the OP is talking about. The dialog gives you no way to resize it, and the Next button at the bottom is easily cut off on a lower-resolution monitor. The easiest workaround I see is to select the option you want and then press the Enter key, as nothing seems to be able to resize the window and double-clicking doesn't work. At each stage of the wizard, Enter appears to continue it, and most of the relevant things to fill out should be in view.
That window definitely has some bugs: it seems to let you resize it bigger but never smaller, and when I tried to make it wider, it bugged out and cut off the buttons on my 1080p monitor. I'm guessing it's just a mess-up on Google's side, and I'm sure they will fix it in a later release. That's the price you pay for getting builds from the dev channel =).
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am using Google Maps and getting latitude/longitude. I have a conundrum:
Let's say I have an Android device in one room (at home) facing upright (device A) and another Android device in a flat position (device B). I would like to find out which direction device 'A' is facing and directions to that device from device 'B'. Also, if it is possible, how do I find the path to that device?
Is there a solution I can apply for the above requirement? Even via Bluetooth, if possible.
In short, how can device 'A' get to device 'B' in a room?
You should just start with a simple algorithm for drawing a line between the coordinates of A and B. Once that's done, you can consider orientation and see what difference it makes; perhaps you don't need to care about orientation at all, as a device is usually relatively small. Drawing a line between two coordinates on the same plane is really quite simple. If you need to account for which roads to take, you can start by playing with the URL parameters of Google Maps in the browser.
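The "line between the coordinates of A and B" boils down to two numbers: the great-circle distance and the initial bearing from B toward A. A minimal stdlib sketch with standard haversine math (the coordinates are made-up examples, and note that GPS fixes indoors are often not accurate enough for room-scale use; device A's facing direction would come from its orientation sensors, which this doesn't cover):

```java
// Sketch: initial bearing and great-circle distance from device B to
// device A, given their latitude/longitude fixes. Standard haversine
// and forward-azimuth formulas; example coordinates are made up.
public class BearingSketch {
    static final double EARTH_RADIUS_M = 6371000.0;

    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
        double dp = Math.toRadians(lat2 - lat1), dl = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dp / 2) * Math.sin(dp / 2)
                 + Math.cos(p1) * Math.cos(p2) * Math.sin(dl / 2) * Math.sin(dl / 2);
        return EARTH_RADIUS_M * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    static double initialBearingDegrees(double lat1, double lon1, double lat2, double lon2) {
        double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
        double dl = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dl) * Math.cos(p2);
        double x = Math.cos(p1) * Math.sin(p2) - Math.sin(p1) * Math.cos(p2) * Math.cos(dl);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0; // 0 = north
    }

    public static void main(String[] args) {
        // A is due north of B, so the bearing from B to A should be 0 degrees
        System.out.printf("%.1f%n", initialBearingDegrees(51.0, 0.0, 51.001, 0.0));
        // 0.001 degree of latitude is roughly 111 m
        System.out.printf("%.0f%n", distanceMeters(51.0, 0.0, 51.001, 0.0));
    }
}
```

Comparing that bearing against the compass azimuth from B's own orientation sensors would then tell the user which way to turn.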