I want to create a Glass app that responds to a finger waved past the Glass camera, much like the Shape Splitter mini game does. For those of you who are unfamiliar with Shape Splitter, a screencast of it (cast to a phone) is shown here; the swiping is done by moving your finger in front of Glass's camera. http://youtu.be/aKGgT8H0AJM?t=4m27s
I'm curious how this works and how I can use my hand as an interactive part of my application. Does anyone know how this is done, or have any suggestions for recreating the same effect?
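To make the question concrete, the closest I can imagine (purely a guess on my part, not necessarily how Shape Splitter works) is comparing the average brightness of consecutive camera preview frames, since a finger sweeping past the lens briefly changes it. A rough sketch using the legacy android.hardware.Camera preview callback:

    import android.hardware.Camera;

    // Guess/sketch only: detect "something moved past the lens" by watching the
    // average luminance of consecutive preview frames (assumes the default NV21
    // preview format, whose first width*height bytes are the Y plane).
    public class WaveDetector implements Camera.PreviewCallback {
        private double lastLuma = -1;
        private static final double TRIGGER_DELTA = 12.0; // threshold to tune experimentally

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Size size = camera.getParameters().getPreviewSize();
            int pixels = size.width * size.height;
            long sum = 0;
            for (int i = 0; i < pixels; i += 16) {   // sparse sampling keeps this cheap
                sum += data[i] & 0xFF;
            }
            double luma = sum / (double) (pixels / 16);
            if (lastLuma >= 0 && Math.abs(luma - lastLuma) > TRIGGER_DELTA) {
                onWaveDetected();                    // a passing finger changes the frame brightness
            }
            lastLuma = luma;
        }

        private void onWaveDetected() {
            // react to the "swipe" here
        }
    }

Is something along these lines what the game does, or is there a better-suited API on Glass?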
Related
I am working on a face recognition app where a picture is taken and sent to a server for recognition.
I have to add a validation that the user captures a picture of a real person and not a picture of another picture. I have tried an eye-blink check, in which the camera waits for an eye blink and captures as soon as the eyes blink, but that isn't working out because a blink is falsely detected when the phone is shaken during capture.
I would like to ask for help here: is there any way to detect whether the user is capturing a picture of another picture? Any ideas would help.
I am using React Native to build both the Android and iOS apps.
Thanks in advance.
Thanks for the support.
I resolved it with the eye-blink trick after all. Here is the little algorithm I used:
Open the camera and tap the capture button.
The camera detects whether any face is in view and waits for an eye blink.
If the blink probability is 90% for both eyes, wait 200 milliseconds, then detect the face again with an eye-open probability > 90% to verify that the face is still there, and capture the picture at the end.
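For anyone curious what that looks like in code, here is a rough sketch of the same state machine as a plain Android class using Google's ML Kit face detector (an assumption for the sketch only; my app is React Native, so in practice the camera module just needs to expose the same eye-open probabilities):

    import com.google.mlkit.vision.common.InputImage;
    import com.google.mlkit.vision.face.Face;
    import com.google.mlkit.vision.face.FaceDetection;
    import com.google.mlkit.vision.face.FaceDetector;
    import com.google.mlkit.vision.face.FaceDetectorOptions;

    public class BlinkLivenessCheck {

        public interface CaptureCallback { void capture(); }

        private final FaceDetector detector = FaceDetection.getClient(
                new FaceDetectorOptions.Builder()
                        // Classification mode exposes the eye-open probabilities.
                        .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                        .build());

        private long blinkSeenAt = 0;   // 0 = still waiting for a blink

        // Feed every preview frame in here until capture() fires.
        public void onFrame(InputImage frame, CaptureCallback callback) {
            detector.process(frame).addOnSuccessListener(faces -> {
                if (faces.isEmpty()) { blinkSeenAt = 0; return; }   // face left the view, start over
                Face face = faces.get(0);
                Float left = face.getLeftEyeOpenProbability();
                Float right = face.getRightEyeOpenProbability();
                if (left == null || right == null) return;

                if (blinkSeenAt == 0) {
                    // Step 1: both eyes closed with high confidence counts as a blink.
                    if (left < 0.1f && right < 0.1f) blinkSeenAt = System.currentTimeMillis();
                } else if (System.currentTimeMillis() - blinkSeenAt >= 200) {
                    // Step 2: at least 200 ms later the same view must still show the face,
                    // this time with both eyes clearly open, and only then do we capture.
                    if (left > 0.9f && right > 0.9f) callback.capture();
                }
            });
        }
    }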
That's a cheap trick, but it is working out so far.
Regards
On some iPhones (iOS 11.1 upwards), there's the so-called TrueDepth camera that's used for Face ID. With it (or the back-facing dual-camera system) you can capture images along with depth maps. You could exploit that feature to check whether the face is flat (i.e. captured from another picture) or has normal facial contours. See here...
One would have to come up with a 3D face model to fool that.
It's limited to only a few iPhone models, though, and I don't know about Android.
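The capture side is platform-specific (AVDepthData on iOS), but the flatness check itself is simple. A hypothetical sketch, written in Java just to illustrate the idea, with the depth map reduced to a plain 2D array of distances in metres and a made-up threshold:

    // Hypothetical illustration of the "is the face flat?" idea. The depth map
    // comes from the platform's depth camera; here it is just a 2D array of
    // distances in metres, and 1.5 cm is an assumed threshold.
    public final class FlatFaceCheck {

        /** Returns true if the depth variation inside the face rectangle is so small
         *  that the "face" is probably a printed photo or a screen. */
        public static boolean looksFlat(float[][] depth, int left, int top, int right, int bottom) {
            float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
            for (int y = top; y < bottom; y++) {
                for (int x = left; x < right; x++) {
                    float d = depth[y][x];
                    if (d <= 0f) continue;          // skip invalid depth samples
                    min = Math.min(min, d);
                    max = Math.max(max, d);
                }
            }
            // A real face spans a few centimetres of depth (nose vs. cheeks);
            // a photographed photo is nearly planar.
            return (max - min) < 0.015f;
        }
    }

A tilted photo still shows a depth gradient, so a more robust version would fit a plane to the face region and look at the residuals, but the idea is the same.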
I'm trying to build a simple AR scene with an NFT image that I've created with genTextData. The result works fairly well in the Unity editor, but once compiled and run on an Android device, the camera resolution is very bad and there's no focus at all.
My marker is rather small (a 3 cm picture), and the camera image is so blurry that the AR cannot identify the marker from far away. I have to put the phone right in front of it (still very blurry), and it will show my object, but with a lot of flickering and jittering.
I tried playing with the filter fields (sample rate/cutoff, etc.). That helped a little with the flickering of the object, but it would never display it from far away; I always have to put my phone right in front of it. The result I want is to detect the small marker (sharp resolution and/or good focus) from a fair distance away, just like the distance from your computer screen to your eyes.
The problem could be camera resolution and focus, or it could be something else, but I'm pretty sure the AR cannot identify the marker points because of the blurriness.
Any ideas or solutions for this problem?
You can have a look here:
http://augmentmy.world/augmented-reality-unity-games-artoolkit-video-resolution-autofocus
I compiled the Java part of the Unity plugin and set it to use the highest resolution your phone supports. The autofocus mode is also activated.
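For reference, the change is roughly of this kind in the plugin's Java camera code (a sketch with the legacy android.hardware.Camera API, not the plugin's exact source): pick the largest supported preview size and switch to continuous autofocus.

    import android.hardware.Camera;

    // Sketch only: configure an already-opened camera for sharper tracking.
    static void configureForTracking(Camera camera) {
        Camera.Parameters params = camera.getParameters();

        // Use the highest-resolution preview the device supports.
        Camera.Size best = params.getSupportedPreviewSizes().get(0);
        for (Camera.Size s : params.getSupportedPreviewSizes()) {
            if (s.width * s.height > best.width * best.height) best = s;
        }
        params.setPreviewSize(best.width, best.height);

        // Switch to continuous autofocus if the device offers it.
        if (params.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
        }
        camera.setParameters(params);
    }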
Tell me if that helps.
I'm trying to make a Unity app that is like the 360° YouTube viewer, but just for images.
So I want to be able to rotate the camera both by dragging my finger and by rotating the device.
The image is rendered on a skymap and the camera is at (0,0,0).
I'm using the Cardboard SDK for the head rotation, and it works great on its own.
I also have a smooth-look script for the camera that works great on its own.
But when I try to control the camera with both methods, things get weird: sometimes the head rotation or the dragging is inverted on the y axis, and other oddities like that.
(The hierarchy is Cardboard Main -> Cardboard Head -> empty GameObject with the dragging script -> Cardboard camera.)
I have tried other ways, but nothing seems to work well.
How can I implement both ways of controlling the camera, like the panoramic YouTube player does?
Thank you very much.
Can you put the empty GameObject with the script at the top of the hierarchy, with the Cardboard Main as a child? I made a game that does the same thing, and I think this is the best way to do it.
For the life of me, I cannot figure out anything about this topic. Maybe I'm just not using the correct keywords while searching.
What I'm trying to do is create a transparent overlay canvas that passes all touch events through to the Android home screen, in effect keeping the phone completely usable.
If I'm not making sense, please have a look at the "Transparent Screen" application on the Android Market. That is exactly what I want to do: the camera preview screen is always on top while the phone can be used normally. How can I achieve this?
Got it! Just use this in the activity:
getWindow().addFlags(WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE);
It creates a click-through activity.
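In context it looks something like the sketch below (assuming the activity already uses a translucent theme such as Theme.Translucent.NoTitleBar in the manifest; the layout name is just a placeholder):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.WindowManager;

    public class TransparentOverlayActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Every touch falls through to whatever is visible underneath (e.g. the home screen).
            getWindow().addFlags(WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE);
            // R.layout.camera_preview is a placeholder for your camera preview layout.
            setContentView(R.layout.camera_preview);
        }
    }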
I'm developing an Android application. I want to do the following:
I will have a black screen with an object in its center, for example, a vase.
With this app, I want to give a 360-degree view of the vase. Let me explain: imagine the vase is the center of an imaginary circle. I want the user to move along this circle to see the vase from any point of view. I don't know if I'm explaining it well.
In real life, you can move around a vase and see it from the front, the back, and the other sides. I want to simulate this.
My problem is that I'm not sure whether I can simulate this using the accelerometer.
How can I know whether the user is describing a circle with the phone?
If you don't understand me or you need more details, please tell me.
You should combine the accelerometer with the compass. The compass gives you direction.
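A minimal sketch of what that combination looks like on Android: feed the accelerometer and magnetometer readings into SensorManager.getRotationMatrix/getOrientation and use the resulting azimuth to decide which side of the vase the user is looking from (chooseVaseView is just a placeholder):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class OrientationTracker implements SensorEventListener {
        private final float[] gravity = new float[3];
        private final float[] geomagnetic = new float[3];

        public void register(SensorManager sm) {
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                    SensorManager.SENSOR_DELAY_GAME);
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                    SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                System.arraycopy(event.values, 0, gravity, 0, 3);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                System.arraycopy(event.values, 0, geomagnetic, 0, 3);
            }

            float[] r = new float[9];
            float[] orientation = new float[3];
            if (SensorManager.getRotationMatrix(r, null, gravity, geomagnetic)) {
                SensorManager.getOrientation(r, orientation);
                float azimuthDegrees = (float) Math.toDegrees(orientation[0]); // -180..180
                // As the user walks around the vase, the azimuth sweeps through 360 degrees;
                // map it to the matching pre-rendered view of the object.
                chooseVaseView(azimuthDegrees);
            }
        }

        private void chooseVaseView(float azimuthDegrees) {
            // placeholder: show the image of the vase for this viewing angle
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }

Note that this tells you which way the phone is pointing, not where the user stands; for a vase at the center of the circle, pointing the phone at it from different spots changes the azimuth in the same way, which is usually enough for this effect.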