How to capture hand gestures (not touch) in an Android app?

I want to develop an Android app that captures hand gestures without the user touching the screen. What kind of sensors will I have to use? I don't want camera-based detection; that consumes too much battery. Will the proximity sensor be of any help? How do I even access these sensors? I need help here specifically with the coding.
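For reference, access to the proximity sensor on Android goes through SensorManager. A minimal sketch might look like this (class and variable names are illustrative; most proximity sensors only report a binary "near"/"far"):

```kotlin
import android.app.Activity
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import android.os.Bundle
import android.util.Log

// Sketch: listen to the proximity sensor and log "near" / "far" events.
class ProximityActivity : Activity(), SensorEventListener {

    private lateinit var sensorManager: SensorManager
    private var proximity: Sensor? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        sensorManager = getSystemService(SENSOR_SERVICE) as SensorManager
        proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY)
    }

    override fun onResume() {
        super.onResume()
        proximity?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    override fun onPause() {
        super.onPause()
        sensorManager.unregisterListener(this)
    }

    override fun onSensorChanged(event: SensorEvent) {
        // Many proximity sensors only report "near" (small value) or "far" (maximumRange).
        val near = event.values[0] < (proximity?.maximumRange ?: 0f)
        Log.d("Proximity", if (near) "Hand near the screen" else "Hand away")
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) { /* not needed here */ }
}
```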

Related

Detect if a picture of a picture is taken in a mobile app

I am working on a face recognition app where a picture is taken and sent to a server for recognition.
I have to add a validation that the user captures a picture of a real person and not a picture of another picture. I have tried an eye-blink feature, in which the camera waits for an eye blink and captures as soon as the eyes blink, but that is not working out because it registers a blink if the phone is shaken during capture.
I would like to ask for help here: is there any way to detect whether the user is capturing a picture of another picture? Any ideas would help.
I am using React Native to build both the Android and iOS apps.
Thanks in advance.
Thanks for the support.
I resolved it with the eye-blink trick after all. Here is the little algorithm I used:
Open the camera; the user taps the capture button.
The camera detects whether any face is in view and waits for an eye blink.
If the eye-blink probability is above 90% for both eyes, wait 200 milliseconds, then detect the face again with an eye-open probability above 90% to verify that the face is still there, and capture the picture at the end.
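For what it's worth, here is a rough sketch of that eye check using ML Kit face detection (I'm assuming the com.google.mlkit face-detection client; the thresholds and helper name are illustrative, not the exact code I shipped):

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

// Sketch: report whether the largest detected face currently has both eyes closed or open.
// The caller feeds consecutive camera frames and applies the "blink, wait 200 ms,
// eyes open again" sequence described above before capturing.
val options = FaceDetectorOptions.Builder()
    .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL) // eye-open probabilities
    .build()
val detector = FaceDetection.getClient(options)

fun checkEyes(image: InputImage, onResult: (eyesClosed: Boolean, eyesOpen: Boolean) -> Unit) {
    detector.process(image)
        .addOnSuccessListener { faces ->
            val face = faces.maxByOrNull { it.boundingBox.width() } ?: return@addOnSuccessListener
            val left = face.leftEyeOpenProbability ?: return@addOnSuccessListener
            val right = face.rightEyeOpenProbability ?: return@addOnSuccessListener
            onResult(
                left < 0.1f && right < 0.1f,  // blink: both eyes ~closed
                left > 0.9f && right > 0.9f   // open again: both eyes above 90%
            )
        }
        .addOnFailureListener { /* ignore this frame */ }
}
```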
That's a cheap trick, but it is working out so far.
Regards
On some iPhones (iOS 11.1 upwards), there's the so-called TrueDepth camera that's used for Face ID. With it (or the back-facing dual-camera system) you can capture images along with depth maps. You could exploit that feature to check whether the face is flat (captured from another picture) or has normal facial contours. See here...
One would have to come up with a 3D face model to fool that.
It's limited to only a few iPhone models, though, and I don't know about Android.

Full six-degrees-of-freedom motion tracking with standard iOS/Android devices

Project Tango has a Motion Tracking API. What's the best way to track motion in a similar way (i.e. track the position and orientation of the user's device in full six degrees of freedom) on standard Android and iOS devices, using any kind of third-party SDK and/or physical additions (like markers or beacons)?
You might be interested in visual odometry.
From this documentation:
Tango implements Motion Tracking using visual-inertial odometry, or VIO, to estimate where a device is relative to where it started.
Standard visual odometry uses camera images to determine a change in position by looking at the relative position of different features in those images. For example, if you took a photo of a building from far away and then took another photo from closer up, it would be possible to calculate the distance the camera moved based on the change in size and position of the building in the photos.
Visual-inertial odometry supplements visual odometry with inertial motion sensors capable of tracking a device's rotation and acceleration. This allows a Tango device to estimate both its orientation and movement within a 3D space with even greater accuracy. Unlike GPS, Motion Tracking using VIO works indoors.
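As a small illustration of the inertial half of VIO, standard Android devices already expose a fused rotation vector and linear acceleration through the sensor framework. A minimal sketch of reading them (the class name is illustrative; on its own this gives orientation and acceleration, not drift-free position):

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Sketch: read device orientation (rotation vector) and linear acceleration,
// i.e. the "inertial" inputs a VIO system fuses with camera features.
class InertialReader(private val sensorManager: SensorManager) : SensorEventListener {

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
        sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        when (event.sensor.type) {
            Sensor.TYPE_ROTATION_VECTOR -> {
                val rotation = FloatArray(9)
                val orientation = FloatArray(3)
                SensorManager.getRotationMatrixFromVector(rotation, event.values)
                SensorManager.getOrientation(rotation, orientation) // azimuth, pitch, roll (radians)
            }
            Sensor.TYPE_LINEAR_ACCELERATION -> {
                val (ax, ay, az) = event.values // m/s^2 with gravity removed
                // Double-integrating this drifts quickly, which is why VIO
                // corrects it with visual features from the camera.
            }
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {}
}
```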

Image processing in Android

I'm a beginner in Android.
I'm working on a project where I'm supposed to convert smartphone movement into mouse movement via the smartphone camera, on Android. The smartphone moves on a checkerboard surface and the movement information is sent to a computer over Bluetooth. Should I use image processing techniques to do that? Does anyone have relevant experience or similar code to help me out?
If I understand correctly, image processing would be a good way to detect movement on a 2D plane. The checkerboard pattern should make for relatively easy pixel-level comparison between frames.
You could implement this using object detection in a simple way, but for your approach you will need an optical flow analysis algorithm.
Optical mice internally use a similar technique called digital image correlation: they capture video frames continuously and compare consecutive frames to detect motion.
You should read about optical flow detection techniques on Wikipedia, and this slide.
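As a rough sketch of that consecutive-frame comparison using OpenCV's Java bindings for Android (assuming the OpenCV SDK is set up; the function name and parameters are illustrative): track good features from the previous frame into the current one and average their displacement to get the mouse delta.

```kotlin
import org.opencv.core.Mat
import org.opencv.core.MatOfByte
import org.opencv.core.MatOfFloat
import org.opencv.core.MatOfPoint
import org.opencv.core.MatOfPoint2f
import org.opencv.core.Point
import org.opencv.imgproc.Imgproc
import org.opencv.video.Video

// Sketch: estimate the average (dx, dy) shift between two consecutive grayscale frames
// using sparse Lucas-Kanade optical flow, similar in spirit to what an optical mouse does.
fun estimateShift(prevGray: Mat, currGray: Mat): Point? {
    val corners = MatOfPoint()
    Imgproc.goodFeaturesToTrack(prevGray, corners, 100, 0.01, 10.0)
    if (corners.empty()) return null

    val prevPts = MatOfPoint2f(*corners.toArray())
    val nextPts = MatOfPoint2f()
    val status = MatOfByte()
    val err = MatOfFloat()
    Video.calcOpticalFlowPyrLK(prevGray, currGray, prevPts, nextPts, status, err)

    var dx = 0.0; var dy = 0.0; var n = 0
    val ok = status.toArray()
    val p0 = prevPts.toArray()
    val p1 = nextPts.toArray()
    for (i in p0.indices) {
        if (ok.getOrNull(i)?.toInt() == 1) { // point successfully tracked
            dx += p1[i].x - p0[i].x
            dy += p1[i].y - p0[i].y
            n++
        }
    }
    return if (n > 0) Point(dx / n, dy / n) else null
}
```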

Sensor to detect device movement

Hi, I'm making an app in which I use the camera, and I need to detect which direction the phone is moving (left, right, up, or down). All the sensors I have tried just return rotation around an axis.
So if the camera is open and I move the phone right/left/up/down, how do I detect that?
You can use the linear accelerometer to detect which direction the user is moving the device. The page below has all the details you will need, and there are a few sample projects you can look at as well.
http://developer.android.com/guide/topics/sensors/sensors_motion.html#sensors-motion-accel
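As a minimal sketch (the class name and threshold are illustrative, and the raw signal is noisy, so real code needs filtering), you can register for TYPE_LINEAR_ACCELERATION and look at the sign of the dominant axis:

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Sketch: classify a device movement as left/right/up/down from linear acceleration.
// In the device coordinate system, x points right and y points up along the screen.
class MoveDirectionListener(private val onMove: (String) -> Unit) : SensorEventListener {

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type != Sensor.TYPE_LINEAR_ACCELERATION) return
        val x = event.values[0]
        val y = event.values[1]
        val threshold = 1.5f // m/s^2, illustrative; tune and low-pass filter in practice
        when {
            x > threshold -> onMove("right")
            x < -threshold -> onMove("left")
            y > threshold -> onMove("up")
            y < -threshold -> onMove("down")
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {}
}

// Registration (e.g. in an Activity's onResume):
// sensorManager.registerListener(listener,
//     sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION),
//     SensorManager.SENSOR_DELAY_GAME)
```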

Recognize active objects with a capacitive touch screen display

I'm trying to develop an app that can recognize an active object (for example: a memory) that touches the smartphone display. Before I start developing, I have to know whether there are any objects that my touch-screen display can recognize. Which devices can be recognized by a smartphone display? I'm interested in this for both iPhone and Android phones.
I found this app, and you can see that with a card I can interact with the mobile device. Now I'm asking whether anyone knows how to build this kind of app for an iPhone or an Android phone.
Does anyone know how to do that? Is there a library (iOS or Android) to recognize objects that I place on the display?
Volumique is the company that develops the Monopoly card technology you are talking about. However, I will suggest two things.
For Android devices you can use NFC. It's kind of what you are doing right now, except you only need to bring your object close to the screen; there is no need to actually touch it.
For iOS, there is no NFC or RFID technology available. However, you can develop hardware that has capacitive contacts arranged in a pattern over it, so that when you bring the object close to the iOS screen, the touch controller recognizes the pattern of the capacitors and reports it to your code, which can then identify the object.
Basically, the capacitive touch screens used in iPhones are just an array of capacitors arranged in a grid. When you touch with your finger, you change the capacitance of one or two capacitors, and the controller works out the location of the change. If instead you change the capacitance of, say, 5 or 6 sensors at the same time in a particular arrangement, like a pentagon, then your software can check whether the touched locations form that shape and, if they do, treat the object as, for example, a $5 card. This is one way I can think of doing this.
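On Android at least, the simultaneous touch points such a conductive pattern produces are visible to the app through MotionEvent. A minimal sketch of watching for a fixed number of contacts (the pointer count and the matchPattern routine are hypothetical placeholders):

```kotlin
import android.view.MotionEvent
import android.view.View

// Sketch: watch for a group of simultaneous touch points (e.g. the contact pads of a
// conductive token placed on the screen) and hand their coordinates to a shape matcher.
val tokenTouchListener = View.OnTouchListener { _, event ->
    when (event.actionMasked) {
        MotionEvent.ACTION_DOWN,
        MotionEvent.ACTION_POINTER_DOWN,
        MotionEvent.ACTION_MOVE -> {
            if (event.pointerCount >= 5) { // illustrative: a token with 5 contact pads
                val points = (0 until event.pointerCount).map { i ->
                    Pair(event.getX(i), event.getY(i))
                }
                // matchPattern(points) would be a hypothetical routine that checks whether
                // the points form the expected shape (e.g. a pentagon) and identifies the token.
            }
            true
        }
        else -> false
    }
}
```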
Thanks
