Detecting pathways in a video using BoofCV on Android

For my application I have been looking into using BoofCV to detect if I am on a pathway or not. The pathway is just gravel so it is the color of a standard roadway. I'm not sure exactly what image processing technique to use. The BoofCV demo app has a lot of features, but I would like to know which one is appropriate for what I'm trying to do.
Ultimately I'd like to have a toast appear on the screen when I am on a pathway.

From your question, I'm guessing that you're using a regular camera as real-time input from a moving object. In that case you may need to:
Calibrate and stabilize your input frames (since your pathway is made of gravel and the camera is moving). BoofCV provides libraries for both.
Adjust exposure, contrast, or brightness (for night/low-light cameras or low-contrast frames).
Use BoofCV's Binary Image Ops according to your app's needs (image thresholding, binary labeling, etc.).
Use a classifier for two classes ("inside pathway", "outside pathway").
Process your output and feed the results back to your "decision operator" to make a choice and guide your moving object.
More details about your project may help for a better answer.
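To make the thresholding step more concrete, here is a rough sketch using BoofCV's Android integration (ConvertBitmap) plus a global Otsu threshold. It is only a starting point, not a tested solution: the class name, the lower-half region of interest, and the 0.6 ratio are placeholders you would replace with whatever actually separates "pathway" from "not pathway" in your footage; you would show your toast when the method returns true.

    import android.graphics.Bitmap;
    import boofcv.alg.filter.binary.GThresholdImageOps;
    import boofcv.alg.filter.binary.ThresholdImageOps;
    import boofcv.android.ConvertBitmap;
    import boofcv.struct.image.GrayU8;

    public final class PathwayCheck {

        /** Crude score: fraction of "path-like" pixels in the lower half of the frame. */
        public static boolean looksLikePathway(Bitmap frame) {
            GrayU8 gray = new GrayU8(frame.getWidth(), frame.getHeight());
            ConvertBitmap.bitmapToGray(frame, gray, null);

            // Global Otsu threshold; down=true marks darker (gravel-toned) pixels as 1.
            double threshold = GThresholdImageOps.computeOtsu(gray, 0, 255);
            GrayU8 binary = ThresholdImageOps.threshold(gray, null, (int) threshold, true);

            int marked = 0, total = 0;
            for (int y = gray.height / 2; y < gray.height; y++) {
                for (int x = 0; x < gray.width; x++) {
                    if (binary.get(x, y) == 1) marked++;
                    total++;
                }
            }
            // Placeholder decision rule: trigger your Toast when most of the lower half matches.
            return total > 0 && marked > 0.6 * total;
        }
    }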

Related

How to get all intermediate stages of image processing in Android?

If I use the camera2 API to capture an image, I get the "final" image after image processing, i.e. after noise reduction, color correction, vendor-specific algorithms, and so on.
I should also be able to get the raw camera image, following this.
The question is: can I get the intermediate stages of the image as well? For example, say the raw image is stage 0, noise reduction is stage 1, color correction is stage 2, and so on. I would like to get all of those stages and present them to the user in an app.
In general, no. The actual hardware processing pipelines vary a great deal between different chip manufacturers and chip versions even from the same manufacturer. Plus each Android device maker then adds their own software on top of that.
And often, it's not possible to dump outputs from every step of the process, only some of them.
So making a consistent API for fetching this isn't very feasible, and the camera2 API doesn't have support for it.
You can somewhat simulate it by turning things like noise reduction entirely off (if supported by the device) and capturing multiple images, but that of course isn't as good as multiple versions of a single capture.
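To illustrate that workaround, here is a minimal camera2 sketch that builds a still-capture request with noise reduction and edge enhancement switched off where the device advertises support; you would capture this alongside a default request and compare the two results. The class and method names used for wiring are made up for this example; the camera2 keys and constants are the standard ones, but whether OFF is honored is still device-dependent.

    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CameraMetadata;
    import android.hardware.camera2.CaptureRequest;
    import android.view.Surface;

    public final class MinimalProcessingCapture {

        /** Builds a still-capture request with NR and edge enhancement off, if supported. */
        public static CaptureRequest build(CameraDevice device,
                                           CameraCharacteristics chars,
                                           Surface target) throws CameraAccessException {
            CaptureRequest.Builder builder =
                    device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            builder.addTarget(target);

            int[] nrModes = chars.get(
                    CameraCharacteristics.NOISE_REDUCTION_AVAILABLE_NOISE_REDUCTION_MODES);
            if (contains(nrModes, CameraMetadata.NOISE_REDUCTION_MODE_OFF)) {
                builder.set(CaptureRequest.NOISE_REDUCTION_MODE,
                        CameraMetadata.NOISE_REDUCTION_MODE_OFF);
            }

            int[] edgeModes = chars.get(CameraCharacteristics.EDGE_AVAILABLE_EDGE_MODES);
            if (contains(edgeModes, CameraMetadata.EDGE_MODE_OFF)) {
                builder.set(CaptureRequest.EDGE_MODE, CameraMetadata.EDGE_MODE_OFF);
            }
            return builder.build();
        }

        private static boolean contains(int[] modes, int mode) {
            if (modes == null) return false;
            for (int m : modes) if (m == mode) return true;
            return false;
        }
    }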

Real time mark recognition on Android

I'm building an Android app that has to identify, in realtime, a mark/pattern which will be on the four corners of a visiting card. I'm using a preview stream of the rear camera of the phone as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how reference dots will be shown on screen by a QR reader at the corner points of the QR code preview.
I'm aware of how to get the frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or how to optimize it for real-time detection. I tried messing around with OpenCV and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on which algorithm to use to find the pattern.
The image below shows my pattern along with some details (ratios) about it (same as the one used in QR codes, but placed at 4 corners instead of 3).
I think one approach is to find black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code that in Android. I'm looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
Processing which needs to be done and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In that example, the image shown on the tablet is delayed (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.
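As a rough illustration of what the OpenCV processing could look like (not the exact code from the linked Glass example), here is a sketch for the OpenCV Android SDK (3.x+) that template-matches a small grayscale image of the mark against each preview frame and draws a circle at the best match. The template asset, the 0.8 score threshold, and the single-match logic are placeholders; a real implementation would look for all four corners and verify the black/white ratios described in the question.

    import org.opencv.android.CameraBridgeViewBase;
    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Point;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;

    public class MarkDetector implements CameraBridgeViewBase.CvCameraViewListener2 {
        private Mat template;   // small grayscale image of the mark (hypothetical asset)

        @Override public void onCameraViewStarted(int width, int height) { /* load template here */ }
        @Override public void onCameraViewStopped() { }

        @Override
        public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
            Mat rgba = inputFrame.rgba();
            Mat gray = inputFrame.gray();

            if (template != null && !template.empty()) {
                Mat result = new Mat();
                Imgproc.matchTemplate(gray, template, result, Imgproc.TM_CCOEFF_NORMED);
                Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
                if (mmr.maxVal > 0.8) {  // placeholder confidence threshold
                    Point center = new Point(mmr.maxLoc.x + template.cols() / 2.0,
                                             mmr.maxLoc.y + template.rows() / 2.0);
                    Imgproc.circle(rgba, center, 12, new Scalar(0, 255, 0, 255), 3);
                }
                result.release();
            }
            return rgba;  // whatever is returned here is what gets displayed
        }
    }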

Limit Detection Area in Vision API

It seems I've found myself in the deep weeds of the Google Vision API for barcode scanning. Perhaps my mind is a bit fried after looking at all sorts of alternative libraries (ZBar, ZXing, and even some for-cost third party implementations), but I'm having some difficulty finding any information on where I can implement some sort of scan region limiting.
The use case is a pretty simple one: if I'm a user pointing my phone at a box with multiple barcodes of the same type (think shipping labels here), I want to explicitly point some little viewfinder or alignment straight-edge on the screen at exactly the thing I'm trying to capture, without having to worry about anything outside that area of interest giving me some scan results I don't want.
The above case is handled in most other Android libraries I've seen, taking in either a Rect with relative or absolute coordinates, and this is also a part of iOS' AVCapture metadata results system (it uses a relative CGRect, but really the same concept).
I've dug pretty deep into the sample barcode-reader app here, but the implementation is a tad opaque beyond the high-level details.
It seems an ugly patch to simply no-op, after successful detection, on barcodes that fall outside an area of interest, since the device is still working hard to process those frames.
Am I missing something very simple and obvious on this one? Any ideas on a way to implement this cleanly, otherwise?
Many thanks for your time in reading through this!
The API currently does not have an option to limit the detection area. But you could crop the preview image before it gets passed into the barcode detector. See here for an outline of how to wrap a detector with your own class:
Mobile Vision API - concatenate new detector object to continue frame processing
You'd implement the "detect" method to take the frame received from the camera, create a cropped version of the frame, and pass that through to the underlying detector.
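A minimal sketch of that wrapping approach, assuming the standard Mobile Vision classes (Detector, Frame, BarcodeDetector), might look like the following. The crop rectangle here is a hard-coded center box purely for illustration; in practice you would derive it from your on-screen viewfinder and map view coordinates into frame coordinates. Note that frames delivered by CameraSource carry grayscale byte data rather than a Bitmap, so there you would crop the ByteBuffer in the same spirit instead of calling getBitmap().

    import android.graphics.Bitmap;
    import android.util.SparseArray;
    import com.google.android.gms.vision.Detector;
    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.barcode.Barcode;

    public class CroppingBarcodeDetector extends Detector<Barcode> {
        private final Detector<Barcode> delegate;  // e.g. a BarcodeDetector

        public CroppingBarcodeDetector(Detector<Barcode> delegate) {
            this.delegate = delegate;
        }

        @Override
        public SparseArray<Barcode> detect(Frame frame) {
            Bitmap full = frame.getBitmap();
            int w = full.getWidth(), h = full.getHeight();

            // Placeholder region of interest: the middle 50% of the frame.
            Bitmap cropped = Bitmap.createBitmap(full, w / 4, h / 4, w / 2, h / 2);

            Frame croppedFrame = new Frame.Builder()
                    .setBitmap(cropped)
                    .setRotation(frame.getMetadata().getRotation())
                    .build();
            return delegate.detect(croppedFrame);
        }

        @Override
        public boolean isOperational() {
            return delegate.isOperational();
        }
    }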

How can I set the camera anti-shake (image stabilizer) function on Android?

I've made a camera app.
I want to add anti-shake (image stabilizer) functionality, but I could not find a setting for it.
Please help me!
Usually image stabilization is a built-in camera feature, while OIS (Optical Image Stabilization) is a built-in hardware feature; as of now relatively few devices support them.
If the device doesn't have a built-in feature, I don't think there is much you can do.
Android doesn't provide a direct API to manage image stabilization, but you may try the following (a small code sketch follows this list):
if android.hardware.Camera.getParameters().getSupportedSceneModes() contains the steadyphoto keyword (see here), your device supports a kind of stabilization (usually it shoots when accelerometer data indicates a "stable" situation)
check android.hardware.Camera.getParameters().flatten() for an "OIS" or "image-stabilizer" keyword/value or similar to use in Parameters.set(key, value). For the Samsung Galaxy Camera you should use parameters.set("image-stabilizer", "ois"); // can be "ois" or "off"
if you are really bored, you may try reading the accelerometer data and deciding to shoot when the device looks steady.
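Here is a small sketch of those checks, using the android.hardware.Camera API this answer refers to (deprecated since API 21). The helper class name is made up, and the "image-stabilizer"/"ois" key/value pair is vendor-specific (reported for the Samsung Galaxy Camera), so it may simply be ignored on other devices.

    import android.hardware.Camera;
    import java.util.List;

    public final class StabilizationHelper {

        @SuppressWarnings("deprecation")
        public static void enableStabilizationIfPossible(Camera camera) {
            Camera.Parameters params = camera.getParameters();

            // 1) Standard route: the "steadyphoto" scene mode, when the device offers it.
            List<String> sceneModes = params.getSupportedSceneModes();
            if (sceneModes != null
                    && sceneModes.contains(Camera.Parameters.SCENE_MODE_STEADYPHOTO)) {
                params.setSceneMode(Camera.Parameters.SCENE_MODE_STEADYPHOTO);
            }

            // 2) Vendor-specific route: look for an OIS-style key in the flattened parameters.
            String flattened = params.flatten();
            if (flattened.contains("image-stabilizer")) {
                params.set("image-stabilizer", "ois");  // "ois" or "off" on some Samsung devices
            }

            camera.setParameters(params);
        }
    }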
Good luck.
If you want to develop a software image stabilizer, OpenCV is a helpful library. The following is one way to stabilize images using features.
First, extract features from each image using a feature extractor such as SIFT or SURF. In my case, FAST+ORB worked best. If you want more information, see this paper.
After you have the features, find matching features between the images. There are several matchers, but a brute-force matcher is not bad. If brute force is too slow on your system, use an algorithm like a KD-tree.
Last, estimate the geometric transformation matrix that minimizes the error of the transformed points. You can use the RANSAC algorithm in this step.
You can implement this whole pipeline with OpenCV, and I have already done so on mobile devices. See this repository.
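A hedged sketch of that pipeline with the OpenCV Java bindings (3.x+) is below: ORB features, brute-force Hamming matching, a RANSAC homography, and a warp of the current frame onto the previous one. Frame management, smoothing of the transform over time, and error handling are omitted, and the class name is just illustrative.

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.DMatch;
    import org.opencv.core.KeyPoint;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfDMatch;
    import org.opencv.core.MatOfKeyPoint;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.Point;
    import org.opencv.core.Size;
    import org.opencv.features2d.DescriptorMatcher;
    import org.opencv.features2d.ORB;
    import org.opencv.imgproc.Imgproc;

    public final class FeatureStabilizer {
        private final ORB orb = ORB.create();
        private final DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

        /** Warps 'current' so that it lines up with 'previous' (both grayscale Mats). */
        public Mat stabilize(Mat previous, Mat current) {
            MatOfKeyPoint kpPrev = new MatOfKeyPoint(), kpCur = new MatOfKeyPoint();
            Mat descPrev = new Mat(), descCur = new Mat();
            orb.detectAndCompute(previous, new Mat(), kpPrev, descPrev);
            orb.detectAndCompute(current, new Mat(), kpCur, descCur);

            MatOfDMatch matches = new MatOfDMatch();
            matcher.match(descCur, descPrev, matches);

            // Collect matched point pairs: current frame -> previous frame.
            List<Point> srcPts = new ArrayList<>(), dstPts = new ArrayList<>();
            KeyPoint[] prevArr = kpPrev.toArray(), curArr = kpCur.toArray();
            for (DMatch m : matches.toArray()) {
                srcPts.add(curArr[m.queryIdx].pt);
                dstPts.add(prevArr[m.trainIdx].pt);
            }
            MatOfPoint2f src = new MatOfPoint2f(); src.fromList(srcPts);
            MatOfPoint2f dst = new MatOfPoint2f(); dst.fromList(dstPts);

            // RANSAC rejects outlier matches while estimating the homography.
            Mat homography = Calib3d.findHomography(src, dst, Calib3d.RANSAC, 3);

            Mat stabilized = new Mat();
            Imgproc.warpPerspective(current, stabilized, homography,
                    new Size(previous.cols(), previous.rows()));
            return stabilized;
        }
    }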

Android image and color blending using iOS blend modes

I am currently porting an application from iOS to Android and I have run into some difficulties when it comes to image processing.
I have a filter class that is comprised of ImageOverlays and ColorOverlays which are applied in a specific order to a given base Bitmap. Each ColorOverlay has an RGB color value, a BlendModeId, and an alpha value. Each ImageOverlay has an image Bitmap, a BlendModeId, and an alpha/intensity value.
My main problem is that I need to support the following blend modes taken from iOS:
CGBlendModeNormal
CGBlendModeMultiply
CGBlendModeScreen
CGBlendModeOverlay
CGBlendModeDarken
CGBlendModeLighten
CGBlendModeColorDodge
Some of these have corresponding PorterDuff.Mode types in Android while others do not. What's worse, some of the modes that do exist were introduced in recent versions of Android and I need to run on API level 8.
Trying to build the modes from scratch is extremely inefficient.
Additionally, even with the modes that do exist in API 8, I was unable to find methods that blend two images while letting you specify the intensity of the mask (the alpha value from ImageOverlay). The same goes for ColorOverlays.
The iOS functions I am trying to replicate in Android are
CGContextSetBlendMode(...)
CGContextSetFillColorWithColor(...)
CGContextFillRect(...) - This one is easy
CGContextSetAlpha(...)
I have started looking at small third party libraries that support these blend modes and alpha operations. The most promising one was poelocesar's lib-magick which is supposedly a port of ImageMagick.
While lib-magick did offer most of the desired blend modes (called CompositeOperator) I was unable to find a way to set intensity values or to do a color fill with a blend mode.
I'm sure somebody has had this problem before. Any help would be appreciated. BTW, Project specifications forbid me from going into OpenGLES.
Even though I helped you via e-mail, I thought I'd post to your question too in case someone wanted some more explanation :-)
Android 2.2 is API level 8, which supports the "libjnigraphics" library in the NDK; that gives you access to the pixel buffer of Bitmap objects.
You can do these blends manually - they are pretty simple math calculations and can be done really quickly.
Check out this site for Android JNI bitmap information.
It's really simple: just create a JNI method blend() with whatever parameters you need (either the color values or possibly another bitmap object to blend with), lock the pixel buffer for that bitmap, do the calculation needed, and unlock the bitmap. Link
Care needs to be taken with the format of the bitmap in memory, though, as the shifting/calculation for 565 will be different than for 8888. Keep that in mind if it doesn't look exactly correct!
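As a plain-Java illustration of the blend math (the answer above does the same loop in C via libjnigraphics for speed), here is a sketch of a multiply blend with a per-overlay intensity; the screen formula is noted in a comment. The class and method names are made up, and it assumes mutable ARGB_8888 bitmaps of equal size.

    import android.graphics.Bitmap;

    public final class Blend {

        /** Multiply-blends 'overlay' onto 'base' in place; base must be mutable ARGB_8888. */
        public static void multiply(Bitmap base, Bitmap overlay, float intensity) {
            int w = base.getWidth(), h = base.getHeight();
            int[] b = new int[w * h], o = new int[w * h];
            base.getPixels(b, 0, w, 0, 0, w, h);
            overlay.getPixels(o, 0, w, 0, 0, w, h);

            for (int i = 0; i < b.length; i++) {
                int br = (b[i] >> 16) & 0xFF, bg = (b[i] >> 8) & 0xFF, bb = b[i] & 0xFF;
                int or = (o[i] >> 16) & 0xFF, og = (o[i] >> 8) & 0xFF, ob = o[i] & 0xFF;

                // Multiply blend: result = base * overlay / 255 (per channel).
                // Screen blend would instead be: 255 - (255 - base) * (255 - overlay) / 255.
                int r  = mix(br, br * or / 255, intensity);
                int g  = mix(bg, bg * og / 255, intensity);
                int bl = mix(bb, bb * ob / 255, intensity);

                b[i] = (b[i] & 0xFF000000) | (r << 16) | (g << 8) | bl;
            }
            base.setPixels(b, 0, w, 0, 0, w, h);
        }

        // Linearly mixes the blended value back toward the original by 'intensity' (0..1).
        private static int mix(int original, int blended, float intensity) {
            return (int) (original + (blended - original) * intensity);
        }
    }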
It turned out that implementing it in JNI wasn't nearly as painful as previously expected. The following link had all the details.
How does Photoshop blend two images together?
