OpenCV eye tracking on Android

I'm looking to do basic eye tracking on Android using the OpenCV API. I've found that there seem to be two ways to use OpenCV on Android: either their C++ wrapper or the JavaCV API. I'm willing to do either, but I'm looking for some ideas or sample code on how I would track basic eye movement with either platform. I'm leaning toward the JavaCV API because it looks easier to use, but I could really use some sort of tutorial on the basics of using it with Android.

Assuming you have already looked into JNI (the Java Native Interface): JavaCV is essentially a Java wrapper around OpenCV, so the underlying functionality is the same. As for eye tracking, you will need to get the live video feed from the camera and locate the participant's eyes in the frames, for example using template matching and blink detection.
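For the template-matching step, here is a minimal sketch using OpenCV's Java API. It assumes you already have a grayscale frame and a small eye template as Mat objects; the class and method names are illustrative, not from any particular sample:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

// Illustrative sketch: find the best match for a small eye template
// inside a grayscale camera frame.
public class EyeLocator {
    public static Point locate(Mat grayFrame, Mat eyeTemplate) {
        // The result matrix holds one correlation score per candidate
        // top-left position of the template.
        int resultCols = grayFrame.cols() - eyeTemplate.cols() + 1;
        int resultRows = grayFrame.rows() - eyeTemplate.rows() + 1;
        Mat result = new Mat(resultRows, resultCols, CvType.CV_32FC1);

        // Normalized cross-correlation: higher score = better match.
        Imgproc.matchTemplate(grayFrame, eyeTemplate, result, Imgproc.TM_CCOEFF_NORMED);
        Core.MinMaxLocResult mm = Core.minMaxLoc(result);
        return mm.maxLoc; // top-left corner of the best match
    }
}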
You just have to make your View implement Camera.PreviewCallback in order to get hold of the camera feed.
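A minimal sketch of that callback, using the Camera1 API the answer refers to (shown here as a standalone class rather than a View, but the interface is the same):

import android.hardware.Camera;

// Minimal sketch: a preview callback that hands you each camera frame
// as an NV21 byte[].
public class FrameGrabber implements Camera.PreviewCallback {
    public void start(Camera camera) {
        camera.setPreviewCallback(this);
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // 'data' is NV21 by default: size.width * size.height luma bytes
        // followed by interleaved chroma. Run eye detection on it here.
    }
}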
The OpenCV site's page on eye tracking provides some sample code that will help you track the eyes.
If you want to see an example of OpenCV on Android, have a look at this open-source project.
Hope it helps.

Related

Adding ARToolkit Marker tracking into Tango

I have been trying to integrate ARToolkit marker/object tracking into a Tango application.
So far I have created a build so that a Tango app can access and use the ARToolkit native library or the ARToolkit Unity wrappers.
However, they both seem to require exclusive access to the camera in their default configurations.
How could you feed the same Android video feed to both libraries?
- Could you create a dummy camera device which duplicates the feed?
- Could you take the Tango feed as normal, and then resend it into ARToolkit with a special VideoConf?
[edit]
ARToolkit uses the older Camera1 API: it takes an onPreviewFrame() callback and passes that byte[] data to its own native library call, which does the actual work (see the sketch after this edit).
Along the lines of the second bullet point, could Tango provide a copy of each frame's raw camera data using something like ITangoVideoOverlay?
(ARToolkit's NDK functionality seems to expect NV21, but it can also accept other formats.)
If that data were extractable from Tango, I believe the ARToolkit NDK functionality could be used without actually owning the camera.
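A hedged sketch of the flow described above: a Camera1 preview callback that forwards the raw NV21 bytes to a native call, the way ARToolkit's Java layer feeds its NDK side. The library and method names below are hypothetical stand-ins, not ARToolkit's actual API:

import android.hardware.Camera;

// Sketch of forwarding preview frames to a native library instead of
// letting that library own the camera. All native names are hypothetical.
public class NativeFrameForwarder implements Camera.PreviewCallback {
    static { System.loadLibrary("ar_bridge"); } // hypothetical .so name

    // Hypothetical native entry point; ARToolkit's real one differs.
    private static native void nativeProcessFrame(byte[] nv21, int width, int height);

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size s = camera.getParameters().getPreviewSize();
        nativeProcessFrame(data, s.width, s.height);
    }
}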
I am afraid that neither of the methods you mentioned would work. Tango has exclusive access to the camera, and I believe ARToolkit also occupies the camera exclusively through the Camera2 API. With the current Tango SDK, I think the workaround would be to use ARToolkit for camera rendering and Tango for pose tracking.
However, this exposes a timestamping problem: Tango and ARToolkit have different timestamps. The solution is to take a timestamp offset once, when the application starts, and constantly apply that offset when querying a pose from Tango based on a timestamp.
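A minimal sketch of that offset bookkeeping (plain Java; how you obtain the two initial timestamps from each SDK is left open):

// Keeps the one-time offset between the two clocks and applies it
// whenever a timestamp needs converting into the Tango time base.
public final class ClockOffset {
    private static double offset = Double.NaN;

    // Call once, when the first timestamped frame arrives from each source.
    public static void init(double tangoTimestamp, double arToolkitTimestamp) {
        offset = tangoTimestamp - arToolkitTimestamp;
    }

    // Convert an ARToolkit-side timestamp into Tango time before
    // querying a pose for it.
    public static double toTangoTime(double arToolkitTimestamp) {
        return arToolkitTimestamp + offset;
    }
}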
This blog shows an example integrating the two.
It also links to example source code, but I haven't tidied it up at all after testing - proceed with caution!
You cannot feed the same camera source to both libraries (first bullet point), but you can forward the camera feed from Tango (ITangoVideoOverlay) into ARToolkit (AcceptVideoImage) (second bullet point).
This is not ideal, because it is fairly inefficient to send the data to Java from C#. The Phab 2 Pro has to downsample the video 4x to achieve a decent framerate.
A better answer would replace the AndroidJavaClass calls with pipes/sockets.
Also, there are many little problems - it's a pretty hacky workaround.
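On the Java side, the pipes/sockets idea could look something like the sketch below. This is only an assumption about how one might ship frames across process or runtime boundaries; the socket name and framing are hypothetical:

import android.net.LocalSocket;
import java.io.DataInputStream;
import java.io.DataOutputStream;

// Hypothetical frame pipe: one side writes each frame's length and
// bytes to a local socket, the other reads them back, avoiding
// repeated AndroidJavaClass marshalling.
public class FramePipe {
    public static final String ADDRESS = "frame_pipe"; // hypothetical socket name

    // Producer side: send one frame.
    public static void send(LocalSocket socket, byte[] frame) throws Exception {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeInt(frame.length);
        out.write(frame);
        out.flush();
    }

    // Consumer side: block until one frame arrives.
    public static byte[] receive(LocalSocket socket) throws Exception {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        byte[] frame = new byte[in.readInt()];
        in.readFully(frame);
        return frame;
    }
}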

App for image detection in Xamarin

I want to make an image recognition app and I need help on where to start. What I need is someone to explain a few things to me:
Which offline libraries are the best to use with Xamarin for image processing?
In case more performance is needed, the best libraries for image processing on iOS and Android, to work with them separately.
It does not matter if the library is in C or C++; what I want is documentation to follow.
My idea of 'best' is:
Well documented.
Easy to implement on the platforms, with Xamarin or individually.
The main functions I am looking for are for object recognition in an image, NOT at runtime with the camera.
Also, I want to ask if there is any well-written documentation on the fundamentals of image processing and edge detection.
Thanks
Depends on what you want to do - image detection is a big topic.
A couple of places to start are:
Microsoft cognitive services
https://blog.xamarin.com/performing-ocr-for-ios-android-and-windows-with-microsoft-cognitive-services/
This is an online service and can do OCR, facial recognition, and even describe what is in an image.
OpenCV
This is a fully featured computer vision library, available in C++ with iOS and Android wrappers that you can bind to use from Xamarin.
http://opencv.org

How to integrate Metaio + OpenCV in an Android application?

Hi, I'm trying to create an application related to augmented reality (AR) and was able to successfully configure my application with the Metaio SDK and the OpenCV library in two separate applications.
But the thing is, I want to use both the OpenCV and Metaio libraries together in one application, so can anyone help me with this integration?
In my single application I want to use OpenCV for markerless detection and Metaio for 3D model rendering.
Metaio:http://www.metaio.com/
OpenCV:http://opencv.org/
[edit]
I'm using OpenCV to detect shapes in a camera image and want to display 3D objects rendered by Metaio on those shapes, similar to marker tracking.
Metaio and OpenCV each have their own camera view; I have disabled OpenCV's camera view.
I want to convert an ImageStruct object received in the onNewCameraFrame() method into an OpenCV Mat on Android (see the conversion sketch after this question). For this, I have registered a MetaioSDKCallback to continuously receive camera frames.
But the onSDKReady() and onNewCameraFrame() methods of this callback are not being called, even though I have added 'metaioSDK.requestCameraImage()'.
This is where I'm stuck.
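For the Mat-conversion half of the question, here is a hedged sketch using OpenCV's Java API. It assumes you can extract the raw NV21 bytes and frame dimensions from the ImageStruct; the Metaio accessors for that are not shown, since they vary by SDK version:

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Converts a raw NV21 camera frame (byte[]) into an RGBA OpenCV Mat.
public class FrameConverter {
    public static Mat nv21ToRgba(byte[] nv21, int width, int height) {
        // NV21 stores a full-resolution Y plane plus interleaved VU at
        // quarter resolution, hence the height * 3 / 2 rows.
        Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
        yuv.put(0, 0, nv21);
        Mat rgba = new Mat();
        Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV2RGBA_NV21);
        yuv.release();
        return rgba;
    }
}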
I suggest you integrate the OpenCV4Android SDK and look at the samples that come with it; they are very good examples that teach you how to use the camera easily.
For your objective, the face detection example is probably the one to check.
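The core of that sample is a Haar cascade; here is a minimal sketch of the detection call. The cascade file path is an assumption - the sample first copies a frontal-face cascade file out of the app's resources:

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;

// Sketch of the cascade-based detection used in the OpenCV4Android
// face detection sample.
public class FaceDetector {
    private final CascadeClassifier cascade;

    public FaceDetector(String cascadePath) {
        cascade = new CascadeClassifier(cascadePath);
    }

    // Expects a grayscale frame; returns bounding boxes of detections.
    public Rect[] detect(Mat grayFrame) {
        MatOfRect faces = new MatOfRect();
        // Scale factor 1.1 and minNeighbors 3 are common defaults.
        cascade.detectMultiScale(grayFrame, faces, 1.1, 3, 0,
                new Size(40, 40), new Size());
        return faces.toArray();
    }
}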
Here is the tutorial that helps you install and configure the OpenCV SDK.
For AR I can't help you much, but have a look at this discussion; it could be helpful.

Better AR on Android

I am trying to create a small Android app with reasonably simple AR functionality - load a few known markers and render known 2D/3D objects on top of the video stream when those are detected. I would appreciate any pointers to a library for doing this, or at least a decent example of doing it right.
Here are some leads I have looked into:
AndAR - https://code.google.com/p/andar/ - This starts out great, and the AndAR app works well enough to render one cube on a single pattern on a real-time video stream, but it looks like the project is effectively abandoned, and to extend it I'll have to go heavily into OpenGL land - not impossible, but very undesirable. The follow-up AndAR Model Viewer project, which supposedly lets you load custom .obj files, doesn't seem to recognize the marker at all. Once again, this looks very much abandonware, and it could have been so much more.
Processing - The previously mentioned NyARToolkit is great with Processing on a PC - example usage - and works perfectly for the 'here's a pattern, here's an object, just render it there' functionality, but then it all breaks down on Android - GStreamer for Android is at a very, very early, hacky stage, and in general video functionality seems to be a rather low priority for the Android Processing project - right now import processing.video.*; just fails.
Layar, Wikitude, etc. - they all seem to focus more on interactivity, location and whatnot, which I absolutely don't need, and are somehow missing this basic usage.
Where am I going wrong? I would be happy to code some part of the video capture/detection/rendering - I don't need a drag-and-drop library - but the sample code from AndAR just fills me with dread.
I suggest taking a look at the Vuforia SDK (formerly QCAR) by Qualcomm, plus jPCT-AE as the 3D engine. They work very well together; no pure OpenGL is needed. However, you need some C/C++ knowledge, since Vuforia relies on the NDK to some degree.
It basically boils down to getting the marker pose from Vuforia via a simple JNI function (the SDK contains fully functional and extensive sample code) and using that to place the 3D objects with jPCT (the easiest way is to set the pose as the rotation matrix of the object, which is a bit hacky, but produces quick results).
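A hedged sketch of that hand-off: the native method and library names are hypothetical placeholders for whatever your JNI bridge exposes, and the matrix layout may need transposing depending on how the pose is exported on the C++ side:

import com.threed.jpct.Matrix;
import com.threed.jpct.Object3D;

// Sketch of feeding a 4x4 marker pose from a (hypothetical) JNI call
// into a jPCT object, as the answer describes.
public class PoseApplier {
    static { System.loadLibrary("vuforia_bridge"); } // hypothetical .so

    // Hypothetical native method returning the pose as a float[16];
    // the Vuforia samples show how to build this on the C++ side.
    private static native float[] getMarkerPose();

    public static void apply(Object3D model) {
        float[] pose = getMarkerPose();
        Matrix m = new Matrix();
        m.setDump(pose); // jPCT matrices are 4x4 float[16]
        // The 'hacky' trick from the answer: use the full pose as the
        // object's rotation matrix (it also carries the translation).
        model.setRotationMatrix(m);
    }
}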
jPCT-AE supports 3D-model loading for some common formats. The API docs are good, but you may need to consult the forums for sample code.

Adding effect to an Android Camera preview

I am looking to get a live camera feed, add effects to it, and display it. Which is the right technology to go forward with? Any open-source options?
You can configure the Camera class to provide you with the preview buffers (as copies). This is provided through the PreviewCallback interface: you implement the interface and set it on the camera, and during preview you will get the preview buffers.
http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html
Then you can apply a custom processing algorithm to each buffer and use either a Surface or an OpenGL surface to draw the result.
Shash316
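A minimal sketch of the approach Shash316 describes, under the assumption that the result is drawn to a separate Surface (not the one the camera previews into). The effect here is a trivial grayscale built from the NV21 Y plane:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.hardware.Camera;
import android.view.SurfaceHolder;

// Grabs each NV21 preview buffer, applies a grayscale effect, and
// draws the result onto a Surface via its Canvas.
public class GrayscaleEffect implements Camera.PreviewCallback {
    private final SurfaceHolder holder;

    public GrayscaleEffect(SurfaceHolder holder) { this.holder = holder; }

    @Override
    public void onPreviewFrame(byte[] nv21, Camera camera) {
        Camera.Size s = camera.getParameters().getPreviewSize();
        int[] pixels = new int[s.width * s.height];
        // NV21 starts with a full-resolution luminance (Y) plane;
        // mapping Y to R=G=B gives a grayscale image.
        for (int i = 0; i < pixels.length; i++) {
            int y = nv21[i] & 0xFF;
            pixels[i] = Color.rgb(y, y, y);
        }
        Bitmap bmp = Bitmap.createBitmap(pixels, s.width, s.height,
                Bitmap.Config.ARGB_8888);
        Canvas canvas = holder.lockCanvas();
        if (canvas != null) {
            canvas.drawBitmap(bmp, 0, 0, null);
            holder.unlockCanvasAndPost(canvas);
        }
    }
}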
Check out OpenCV... it will require you to do some work in C/C++, JNI and the Android NDK, but it is a really nice library and should do what you need pretty easily.
Kieran is right, OpenCV would be a good and easy way with a lot of capabilities.
See http://opencv.willowgarage.com/wiki/AndroidTrunk for details of the android implementation.
And checkout the sample application: https://code.ros.org/svn/opencv/trunk/opencv/android/apps/OpenCV_SAMPLE/
This should give you a good start, as it is an example that applies processing to the live image.
