Augmented reality with optical character recognition - Android

I have implemented an augmented reality program using Qualcomm's Vuforia library. Now I want to add an optical character recognition feature so that I can translate text from one language to another in real time. I am planning to use the Tesseract OCR library. My question is: how do I integrate Tesseract with QCAR?
Can somebody suggest a proper way to do it?

What you need is access to the camera frames so you can send them to Tesseract. The Vuforia SDK offers a way to access the frames through the QCAR::UpdateCallback interface (documentation here).
Create a class that implements this interface, register it with the Vuforia SDK using QCAR::registerCallback() (see here), and from then on you'll be notified each time the Vuforia SDK has processed a frame.
The callback is given a QCAR::State object, from which you can access the camera frame (see the documentation for QCAR::State::getFrame() here) and send it to the Tesseract SDK.
Be aware, though, that the Vuforia SDK works with frames at a rather low resolution (on many phones I tested, it returns frames in the 360x240 to 720x480 range, more often the former than the latter), which may not be detailed enough for Tesseract to recognize text.
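Below is a minimal sketch of that flow in Java, pairing the per-frame callback with the tess-two Tesseract wrapper. The Vuforia class and method names (Vuforia.registerCallback, the update-callback interface, setFrameFormat, etc.) are approximations of the Java bindings and change between SDK versions, so treat them as placeholders to adapt to the release you use; the answer above refers to the native QCAR:: API.

```java
// Hedged sketch: the Vuforia names below approximate the Java bindings and
// differ between SDK versions; the tess-two TessBaseAPI calls are standard.
import com.qualcomm.vuforia.*;   // package name depends on the Vuforia release (assumption)
import com.googlecode.tesseract.android.TessBaseAPI;

import android.graphics.Bitmap;
import java.nio.ByteBuffer;

public class OcrFrameCallback implements Vuforia.UpdateCallbackInterface {

    private final TessBaseAPI tess = new TessBaseAPI();

    public OcrFrameCallback(String tessDataPath) {
        tess.init(tessDataPath, "eng");        // path must contain tessdata/eng.traineddata
        // Ask Vuforia for RGB565 frames and register for per-frame notifications.
        Vuforia.setFrameFormat(PIXEL_FORMAT.RGB565, true);   // name/availability: assumption
        Vuforia.registerCallback(this);
    }

    @Override
    public void Vuforia_onUpdate(State state) {              // QCAR_onUpdate in older SDKs
        Frame frame = state.getFrame();
        for (int i = 0; i < (int) frame.getNumImages(); i++) {
            Image image = frame.getImage(i);
            if (image.getFormat() != PIXEL_FORMAT.RGB565) continue;

            // Copy the (low-resolution) frame into a Bitmap that Tesseract can consume.
            // Stride handling is omitted for brevity.
            ByteBuffer pixels = image.getPixels();
            Bitmap bmp = Bitmap.createBitmap(image.getWidth(), image.getHeight(),
                    Bitmap.Config.RGB_565);
            bmp.copyPixelsFromBuffer(pixels);

            tess.setImage(bmp);
            String recognized = tess.getUTF8Text();           // OCR result for this frame
            // TODO: feed "recognized" into the translation step.
        }
    }
}
```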

As complementary information to @mbrenon's answer: Tesseract only does text recognition and doesn't extract text regions (ROIs) itself, so you will need to add that step to your system after capturing your image (a rough sketch is given after the references below).
You can read these academic papers, which describe the additional steps needed to run Tesseract on mobile phones and provide some performance evaluations:
TranslatAR: Petter, M.; Fragoso, V.; Turk, M.; Baur, C., "Automatic text detection for mobile augmented reality translation," 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 48-55, 6-13 Nov. 2011
Mobile Camera Based Detection and Translation
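As a concrete illustration of that extra text-localization step, here is a rough sketch using the OpenCV Java bindings (morphological gradient, Otsu threshold, horizontal closing, then contour filtering). The kernel sizes and aspect-ratio checks are illustrative placeholders, not values taken from the papers above.

```java
// Sketch of a simple text-region (ROI) detector with OpenCV for Android,
// run before handing crops to Tesseract. Thresholds are illustrative only.
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class TextRegionDetector {

    public List<Rect> detect(Mat frameRgba) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frameRgba, gray, Imgproc.COLOR_RGBA2GRAY);

        // Morphological gradient highlights the stroke edges typical of text.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3));
        Mat grad = new Mat();
        Imgproc.morphologyEx(gray, grad, Imgproc.MORPH_GRADIENT, kernel);

        // Binarize, then connect characters horizontally into word/line blobs.
        Mat bw = new Mat();
        Imgproc.threshold(grad, bw, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
        Mat connectKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(9, 1));
        Mat connected = new Mat();
        Imgproc.morphologyEx(bw, connected, Imgproc.MORPH_CLOSE, connectKernel);

        // Keep wide, text-like contours as candidate regions.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(connected, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> regions = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            if (box.width > 2 * box.height && box.height > 8) {
                regions.add(box);   // crop frameRgba.submat(box) and pass it to Tesseract
            }
        }
        return regions;
    }
}
```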

Related

Is there any Android API to detect a face inside an image?

Is there any Android API to detect a face inside an image? For example, on iOS there is such an API to detect faces, and I'm curious whether there is a similar API in Android.
The Android framework has the FaceDetector API, although it only works on still bitmaps (not in real time) and it only returns the position of each face (the midpoint between the eyes and the eye distance, from which you can derive a rough bounding box); a minimal usage sketch is shown below.
For more advanced features, such as real-time detection or face contours, there is the ML Kit library offered by Google. This library can also be used for very simple use cases, such as just getting the face location in a bitmap.
More about ML Kit at the following link:
https://developers.google.com/ml-kit/guides
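For reference, a minimal sketch of the framework FaceDetector mentioned above (classes from android.media; the bitmap must be converted to RGB_565 first):

```java
// Minimal use of android.media.FaceDetector: still bitmaps only, RGB_565 required.
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;

public class BitmapFaceFinder {

    public int findFaces(Bitmap source) {
        // FaceDetector requires an RGB_565 bitmap (and an even width).
        Bitmap bmp = source.copy(Bitmap.Config.RGB_565, false);

        int maxFaces = 5;
        FaceDetector detector = new FaceDetector(bmp.getWidth(), bmp.getHeight(), maxFaces);
        FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
        int found = detector.findFaces(bmp, faces);

        for (int i = 0; i < found; i++) {
            PointF mid = new PointF();
            faces[i].getMidPoint(mid);             // midpoint between the eyes
            float eyeDistance = faces[i].eyesDistance();
            // A rough bounding box can be derived from the midpoint and eye distance.
        }
        return found;
    }
}
```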

ARCore alternative on Android

I'm developing an Android app with augmented reality in order to display points of interest at given locations. I do not need face, plane or object recognition, only to place some points at specific locations (lat/long).
It seems ARCore on Android only supports a few devices; my customer requires broader device support, as the AR view is the core of the app.
I was wondering if there are alternatives to ARCore on Android that support placing points of interest at given coordinates while covering a large number of Android devices.
Thanks for any tips.
Well, there is this location-based AR framework for Android: https://github.com/bitstars/droidar
However, it hasn't been maintained for quite a long time. You can also look at Vuforia, although it's not free:
https://developer.vuforia.com/

Android Application : Augmented Reality or Image Recognition

I am interested in developing an Android application that uses the device's camera to detect moving "targets".
The three types of targets I need to detect and distinguish between are pedestrians, runners (joggers) and cyclists.
The augmented reality SDKs I have looked at only seem to offer face recognition, which doesn't sound like it can detect entire people.
Have I misunderstood what augmented reality SDKs can provide?
There is a big list of AR SDKs (also for Android platform):
Augmented reality SDKs
However, to be honest, I strongly doubt you will find any SDK (free or paid) for this task. It is too specific, so you should probably write it yourself using OpenCV.
OpenCV will allow you to detect objects (more or less) and then you will need to write some algorithm for classification. I would recommend classification based on object speed.
Then, once you have your object classified, you can add any AR SDK to overlay something on your picture.
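To make the OpenCV route a bit more concrete, here is a rough sketch using background subtraction and a naive centroid tracker (class names follow the OpenCV 3.x/4.x Java bindings). The single-target tracking and the speed thresholds are made-up placeholders; real values depend on camera placement, frame rate and calibration.

```java
// Illustrative sketch: detect moving blobs with background subtraction and
// classify them by apparent speed. Thresholds below are placeholders only.
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.BackgroundSubtractorMOG2;
import org.opencv.video.Video;
import java.util.ArrayList;
import java.util.List;

public class MovingTargetClassifier {

    private final BackgroundSubtractorMOG2 subtractor = Video.createBackgroundSubtractorMOG2();
    private Point lastCentroid;   // naive single-target tracker, kept short for brevity

    public String classify(Mat frame, double fps) {
        Mat fgMask = new Mat();
        subtractor.apply(frame, fgMask);

        // Clean the mask and find the largest moving blob.
        Imgproc.medianBlur(fgMask, fgMask, 5);
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(fgMask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        Rect best = null;
        for (MatOfPoint c : contours) {
            Rect box = Imgproc.boundingRect(c);
            if (best == null || box.area() > best.area()) best = box;
        }
        if (best == null) return "none";

        Point centroid = new Point(best.x + best.width / 2.0, best.y + best.height / 2.0);
        String label = "unknown";
        if (lastCentroid != null) {
            // Pixels travelled per second; the thresholds are placeholders.
            double pixelsPerSecond = Math.hypot(centroid.x - lastCentroid.x,
                                                centroid.y - lastCentroid.y) * fps;
            if (pixelsPerSecond < 60) label = "pedestrian";
            else if (pixelsPerSecond < 150) label = "runner";
            else label = "cyclist";
        }
        lastCentroid = centroid;
        return label;
    }
}
```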

How can I set the camera anti-shake (image stabilizer) function on Android?

I've made a camera app.
I want to add anti-shake (image stabilizer) functionality.
But I could not find a setting for anti-shake (image stabilization).
Please help me!
Usually image stabilization is a built-in camera feature, while OIS (optical image stabilization) is a built-in hardware feature; so far, only a few devices support them.
If the device doesn't have such a feature built in, I don't think there is much you can do.
Android doesn't provide a direct API to manage image stabilization, but you may try the following (a rough code sketch follows after this answer):
if android.hardware.Camera.getParameters().getSupportedSceneModes() contains the steadyphoto keyword (see here), your device supports a kind of stabilization (usually it shoots when the accelerometer data indicates a "stable" situation);
check android.hardware.Camera.getParameters().flatten() for an "OIS" or "image-stabilizer" keyword/value (or similar) to use with Parameters.set(key, value). For the Samsung Galaxy Camera you would use parameters.set("image-stabilizer", "ois"); // can be "ois" or "off"
if you are really bored, you may try reading the accelerometer data yourself and only shooting when the device looks steady.
Good luck.
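Here is the sketch referred to above, combining the first two checks with the (deprecated) android.hardware.Camera API. The "image-stabilizer" key is vendor specific (reported for the Samsung Galaxy Camera) and is simply ignored on devices that don't know it.

```java
// Sketch: try the "steadyphoto" scene mode and the vendor-specific OIS key.
import android.hardware.Camera;
import java.util.List;

public class StabilizationHelper {

    @SuppressWarnings("deprecation")
    public static void tryEnableStabilization(Camera camera) {
        Camera.Parameters params = camera.getParameters();

        // 1. Scene mode "steadyphoto", if the device advertises it.
        List<String> sceneModes = params.getSupportedSceneModes();
        if (sceneModes != null
                && sceneModes.contains(Camera.Parameters.SCENE_MODE_STEADYPHOTO)) {
            params.setSceneMode(Camera.Parameters.SCENE_MODE_STEADYPHOTO);
        }

        // 2. Vendor-specific key: inspect flatten() to see what the device exposes.
        String allParams = params.flatten();
        if (allParams.contains("image-stabilizer")) {
            params.set("image-stabilizer", "ois");   // "ois" or "off" on the Galaxy Camera
        }

        camera.setParameters(params);
    }
}
```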
If you want to develop a software image stabilizer, OpenCV is a helpful library. The following is one way to stabilize the image using features.
First, you should extract features from each image using a feature extractor such as the SIFT or SURF algorithm. In my case, the FAST+ORB combination works best. If you want more information, see this paper.
After you get the features in both images, you should find matching features between them. There are several matchers, but the brute-force matcher is not bad. If brute force is too slow on your system, you should use an algorithm such as a KD-tree.
Last, you should compute the geometric transformation matrix that minimizes the error of the transformed points. You can use the RANSAC algorithm in this step.
You can implement this whole process using OpenCV, and I have already done so on mobile devices. See this repository.
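A condensed sketch of those three steps with the OpenCV 3.x/4.x Java bindings (ORB features, brute-force Hamming matching, RANSAC homography, then a warp). Error handling, such as checking for a minimum number of matches, is omitted for brevity.

```java
// Feature-based stabilization sketch: warps 'current' so it lines up with 'reference'.
// Both Mats are expected to be 8-bit grayscale frames.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class FrameStabilizer {

    private final ORB orb = ORB.create();
    private final DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

    public Mat stabilize(Mat reference, Mat current) {
        MatOfKeyPoint kpRef = new MatOfKeyPoint(), kpCur = new MatOfKeyPoint();
        Mat descRef = new Mat(), descCur = new Mat();

        // 1. Extract features (ORB uses FAST keypoints with binary descriptors).
        orb.detectAndCompute(reference, new Mat(), kpRef, descRef);
        orb.detectAndCompute(current, new Mat(), kpCur, descCur);

        // 2. Match descriptors between the two frames.
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descCur, descRef, matches);

        List<Point> srcPts = new ArrayList<>(), dstPts = new ArrayList<>();
        KeyPoint[] refArr = kpRef.toArray(), curArr = kpCur.toArray();
        for (DMatch m : matches.toArray()) {
            srcPts.add(curArr[m.queryIdx].pt);
            dstPts.add(refArr[m.trainIdx].pt);
        }

        // 3. Estimate the geometric transform robustly with RANSAC.
        MatOfPoint2f src = new MatOfPoint2f(srcPts.toArray(new Point[0]));
        MatOfPoint2f dst = new MatOfPoint2f(dstPts.toArray(new Point[0]));
        Mat homography = Calib3d.findHomography(src, dst, Calib3d.RANSAC, 3);

        // 4. Warp the current frame onto the reference frame.
        Mat stabilized = new Mat();
        Imgproc.warpPerspective(current, stabilized, homography, reference.size());
        return stabilized;
    }
}
```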

OpenCV for Android: Autofocus native camera

Is it possible to control the autofocus feature of the Android camera using OpenCV's libnative_camera*.so?
Or maybe it's possible to set the focus distance manually?
Is there an alternative approach? (Maybe it's better to use the Android API to control the camera, grab frames in onPreviewFrame events, and pass them to native code?)
If your intention is to control the camera on your own, then the Android Camera APIs suck. They also suck when it comes to handing your hardware camera device number to JavaCV's native camera library; without a native device number, JavaCV is clueless about which camera (front or back) to connect to.
If your intention is only to perform object detection and the like, the Android Camera APIs coupled with JavaCV should work. Set up a callback buffer of sufficient size, call setPreviewCallbackWithBuffer, set a sufficient preview frame rate, and once you start getting preview frames in ImageFormat.NV21 format (mind you, this is the only format guaranteed for preview frames, even in ICS), pass them off to JavaCV for object detection.
Autofocus in the Android Camera APIs sucks big time; I have been researching feasible solutions for over a month.
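For the second route, here is a minimal sketch of the buffered-preview plus one-shot autofocus setup with the (deprecated) android.hardware.Camera API; the frame-processing hook is left as a stub to fill with your JavaCV/native call.

```java
// Sketch: NV21 preview frames via a reusable callback buffer, plus one-shot autofocus.
import android.graphics.ImageFormat;
import android.hardware.Camera;

@SuppressWarnings("deprecation")
public class PreviewGrabber implements Camera.PreviewCallback {

    private final Camera camera;

    public PreviewGrabber(Camera camera) {
        this.camera = camera;

        Camera.Parameters params = camera.getParameters();
        Camera.Size size = params.getPreviewSize();

        // NV21 is the preview format every device must support.
        params.setPreviewFormat(ImageFormat.NV21);
        camera.setParameters(params);

        // One reusable buffer large enough for an NV21 frame (12 bits per pixel).
        int bufferSize = size.width * size.height
                * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
        camera.addCallbackBuffer(new byte[bufferSize]);
        camera.setPreviewCallbackWithBuffer(this);
    }

    public void start() {
        // A preview surface/texture must already be set (omitted here).
        camera.startPreview();                 // autofocus requires a running preview
        camera.autoFocus(new Camera.AutoFocusCallback() {
            @Override
            public void onAutoFocus(boolean success, Camera cam) {
                // Focus run finished (successfully or not); frames keep arriving below.
            }
        });
    }

    @Override
    public void onPreviewFrame(byte[] nv21Data, Camera camera) {
        // Hand nv21Data to JavaCV / native processing here, then recycle the buffer.
        camera.addCallbackBuffer(nv21Data);
    }
}
```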
