I want to design a VR app on Android that detects/recognises objects using the camera and shows details about them.
I know about Unity, but it only provides pre-made assets for designing backgrounds, like a game set.
In augmented reality we can scan an image and get results accordingly; I want the same thing on a VR screen.
Check out the Vuforia library for Android. It works like a charm.
I have implemented what you're talking about. It'll take a little effort and time to get the hang of it, but it'll definitely work.
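If it helps, initialization in the older Vuforia Java SDK samples looks roughly like the sketch below (the com.vuforia package). Exact class and method names vary between Vuforia releases, and YOUR_LICENSE_KEY is a placeholder you get from the Vuforia developer portal:

    // Rough sketch based on the older Vuforia Java SDK sample apps.
    // Names may differ between Vuforia versions.
    import android.app.Activity;
    import android.os.AsyncTask;

    import com.vuforia.Vuforia;

    public class VuforiaInitTask extends AsyncTask<Void, Integer, Boolean> {
        private final Activity activity;

        public VuforiaInitTask(Activity activity) {
            this.activity = activity;
        }

        @Override
        protected Boolean doInBackground(Void... params) {
            Vuforia.setInitParameters(activity, Vuforia.GL_20, "YOUR_LICENSE_KEY");
            int progress = 0;
            // Vuforia.init() is called repeatedly; it returns the completion
            // percentage, or a negative value on failure.
            while (progress >= 0 && progress < 100) {
                progress = Vuforia.init();
                publishProgress(progress);
            }
            return progress == 100;
        }
    }

Once init succeeds you start a tracker and load your target database; the official Vuforia samples walk through that part.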
I am in a situation where I need to use two cameras at the same time.
I have been searching the internet for Camera2 API examples. Although I haven't yet managed to build the camera app I want for an Android phone, I found some examples that open the camera.
Now I would like to know whether I can access two cameras simultaneously on an Android Things Odroid N2+ board. I am working on an app that needs to open two cameras and display both at the same time, and I plan to use the OpenCV library for image processing.
Is this possible on Android/Odroid?
I recommend using a UVC library to access the two cameras; Android Things is not a good fit for this problem. I think this example will help you: https://github.com/saki4510t/UVCCamera
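As a starting point, here is a minimal sketch of the connection flow used by that library's sample apps. The listener interface and method names follow the UVCCamera samples and may differ between versions of the library; the Surface is assumed to come from your own SurfaceView or TextureView, and you would create one UVCCamera per USB camera:

    import android.hardware.usb.UsbDevice;
    import android.view.Surface;

    import com.serenegiant.usb.USBMonitor;
    import com.serenegiant.usb.UVCCamera;

    public class UvcPreviewStarter implements USBMonitor.OnDeviceConnectListener {
        private final Surface mSurface;

        public UvcPreviewStarter(Surface surface) {
            mSurface = surface;
        }

        @Override
        public void onConnect(UsbDevice device, USBMonitor.UsbControlBlock ctrlBlock, boolean createNew) {
            UVCCamera camera = new UVCCamera();
            camera.open(ctrlBlock);                // claim the USB interface
            camera.setPreviewDisplay(mSurface);    // render the preview into our Surface
            camera.startPreview();
        }

        @Override public void onAttach(UsbDevice device) { /* request USB permission here */ }
        @Override public void onDettach(UsbDevice device) { }
        @Override public void onDisconnect(UsbDevice device, USBMonitor.UsbControlBlock ctrlBlock) { }
        @Override public void onCancel(UsbDevice device) { }
    }

For two cameras you would keep two UVCCamera instances and two Surfaces, and hand each preview stream to OpenCV for processing.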
I am working on an app that detects the user's eye blinks. I have been searching the web for two days but still don't have a clear picture of how this can be done.
As far as I know, the system supports face detection, i.e. detecting whether there is a face in the picture and locating it.
But this works only with still images and detects only faces, which is not what I need. I need to open a camera activity, detect the user's face live, locate the eyes and other facial features, and wait until the user blinks, like when you long-press the screen in Snapchat.
I have seen a lot about OpenCV, but I'm still not sure what it is, how to use it, or whether it fits my goals.
Note: Snapchat has not released an API for the technology it uses, and it doesn't let anyone talk to the engineers behind it.
I know that OpenCV can do image processing on the device's live camera feed (as opposed to only processing still images).
Here is an introductory tutorial on eye detection using OpenCV:
http://romanhosek.cz/android-eye-detection-and-tracking-with-opencv/
If you can't find eye-blink detection tutorials in a Google search, I think you'll have to write the eye-blink detection code on your own, but OpenCV will be a helpful tool in doing so. There are lots of beginner OpenCV tutorials to help you get started.
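To illustrate one possible approach (not Snapchat's, which is unknown), here is a minimal sketch of a blink heuristic using OpenCV's Java bindings: run a Haar cascade for eyes on each frame and treat "eyes were visible, now they are not" as a blink. The cascade file path is an assumption; you have to ship a copy of haarcascade_eye.xml from the OpenCV distribution yourself:

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.objdetect.CascadeClassifier;

    public class BlinkDetector {
        private final CascadeClassifier eyeCascade;
        private boolean eyesWereOpen = false;

        public BlinkDetector(String cascadeFilePath) {
            // e.g. a local copy of haarcascade_eye.xml
            eyeCascade = new CascadeClassifier(cascadeFilePath);
        }

        /** Returns true on the frame where the eyes disappear (a blink candidate). */
        public boolean onFrame(Mat grayFrame) {
            MatOfRect eyes = new MatOfRect();
            eyeCascade.detectMultiScale(grayFrame, eyes);
            boolean eyesOpen = eyes.toArray().length >= 2;
            boolean blinked = eyesWereOpen && !eyesOpen;
            eyesWereOpen = eyesOpen;
            return blinked;
        }
    }

You would call onFrame() from your camera preview callback with a grayscale Mat of each frame; restricting the search to a detected face region first would cut down on false positives.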
I have a normal Android app which I want to make VR compatible. While searching the web I found that a VrVideoView widget is available (in the Google VR SDK for Android) which can be used to render videos in VR.
I was wondering if something similar is available for layouts as well on Android.
If not, one possible way is to keep two layouts: the original, and a second one rendered from a bitmap of the original. But this seems very cumbersome and won't scale.
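To be concrete, the bitmap-rendering step I have in mind would be roughly this (standard View drawing, nothing VR-specific):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.View;

    public final class ViewToBitmap {
        private ViewToBitmap() {}

        public static Bitmap render(View view) {
            // The view must already be measured and laid out (width/height > 0).
            Bitmap bitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
                    Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(bitmap);
            view.draw(canvas);
            return bitmap;
        }
    }
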
Thanks.
I am working with the Android mobile camera and want to implement a motion-blur effect.
On iOS this is implemented using the GPUImageLowPassFilter filter; I want an alternative for this on Android.
The best way to do this is to take a screenshot of the control, apply a blur to it, and then show that image on top of the original control. This is how the Yahoo Weather app does it, and it's how Google suggests you do things like this.
RenderScript does blurring fast. I've also got some code, but it's not at hand right now.
These might help:
http://blog.neteril.org/blog/2013/08/12/blurring-images-on-android
http://docs.xamarin.com/recipes/android/other_ux/drawing/blur_an_image_with_renderscript/
I've also read that there are methods built into Android that do this, but the API isn't public, so we can't use it... which sucks.
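In the absence of my code, here is a minimal sketch of the blur step using RenderScript's built-in ScriptIntrinsicBlur (API 17+); the radius must be in (0, 25]:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.renderscript.Allocation;
    import android.renderscript.Element;
    import android.renderscript.RenderScript;
    import android.renderscript.ScriptIntrinsicBlur;

    public final class Blur {
        private Blur() {}

        public static Bitmap blur(Context context, Bitmap src, float radius) {
            Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
                    Bitmap.Config.ARGB_8888);
            RenderScript rs = RenderScript.create(context);
            Allocation inAlloc = Allocation.createFromBitmap(rs, src);
            Allocation outAlloc = Allocation.createFromBitmap(rs, out);
            ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
            script.setRadius(radius);   // 0 < radius <= 25
            script.setInput(inAlloc);
            script.forEach(outAlloc);
            outAlloc.copyTo(out);
            rs.destroy();
            return out;
        }
    }

Take a screenshot of the control into a Bitmap first (e.g. by drawing the view into a Canvas), blur it with this, and overlay the result on the original control.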
I would like to use the mobile camera to develop a smart magnifier that can zoom and freeze-frame what we are viewing, so we don't have to keep holding the device steady while we read. It should also be able to change colors, as shown in the image linked below.
https://lh3.ggpht.com/XhSCrMXS7RCJH7AYlpn3xL5Z-6R7bqFL4hG5R3Q5xCLNAO0flY3Fka_xRKb68a2etmhL=h900-rw
Since I'm new to Android I have no idea how to start. Do you have any ideas?
Thanks in advance for your help :)
I've done something similar and published it here. I have to warn you though: this is not a task to start Android development with. Not because of the development skills required; the showstopper is the need for a massive number of devices to test on.
Basically, two reasons:
The Camera API is quite complicated, and different hardware devices behave differently. Forget about using the emulator; you would need a bunch of real hardware devices.
There is a new API, Camera2, for API level 21 and higher, and the old Camera API is deprecated (in a kind of 'in limbo' state).
I have posted some custom Camera code on GitHub here, to show some of the hurdles involved.
So the easiest way out in your situation would be to use the camera intent approach: when you get your picture back (it is a JPEG file), just decode it and zoom in to the center of the resulting bitmap.
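A minimal sketch of that flow, with the request code and the center-crop "zoom" as simplifying assumptions (a real app should also pass an EXTRA_OUTPUT Uri so the full-size JPEG is written to a known file, rather than relying on the small thumbnail the bare intent returns):

    import android.app.Activity;
    import android.content.Intent;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.provider.MediaStore;

    public class MagnifierHelper {
        public static final int REQUEST_CAPTURE = 1; // arbitrary request code

        // Fire the stock camera app; the result arrives in onActivityResult().
        public static void launchCamera(Activity activity) {
            activity.startActivityForResult(
                    new Intent(MediaStore.ACTION_IMAGE_CAPTURE), REQUEST_CAPTURE);
        }

        // Decode the returned JPEG and "zoom" by cropping its center;
        // zoomFactor 2 keeps the middle half in each dimension.
        public static Bitmap decodeAndZoom(String jpegPath, int zoomFactor) {
            Bitmap photo = BitmapFactory.decodeFile(jpegPath);
            int w = photo.getWidth() / zoomFactor;
            int h = photo.getHeight() / zoomFactor;
            return Bitmap.createBitmap(photo,
                    (photo.getWidth() - w) / 2, (photo.getHeight() - h) / 2, w, h);
        }
    }

Show the cropped bitmap in an ImageView and let it scale up to fill the screen; that gives you the freeze-frame magnification without any live camera handling.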
Good Luck