Sample camera images as bitmap in Android

I am trying to create an app which periodically samples an image from the camera preview and then does some processing on it (i.e. face detection). I think this is the way to go about it. I have looked into OpenCV but don't think my knowledge is quite up to scratch to get it implemented well enough. My idea is to sample the image (raw format?), convert it to a bitmap, and then have a FaceDetector object detect the faces in the image and indicate them on screen.
Very much like the Native Camera app on the HTC Desire, which puts a grey square around the faces it sees before taking the picture.

Sam,
A sample is provided for capturing the preview stream from the camera: CameraPreview
This would be a great place to start.
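
For concreteness, here is a minimal sketch (my own, not taken from the CameraPreview sample) of the pipeline you describe, assuming the legacy android.hardware.Camera API and its default NV21 preview format: grab a preview frame, convert it to a Bitmap, and run android.media.FaceDetector on it.

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import android.hardware.Camera;
    import android.media.FaceDetector;

    import java.io.ByteArrayOutputStream;

    public class PreviewFaceSampler implements Camera.PreviewCallback {
        private static final int MAX_FACES = 5;

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Size size = camera.getParameters().getPreviewSize();

            // Preview frames arrive as NV21 by default; compress to JPEG to get a decodable image.
            YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, out);
            byte[] jpeg = out.toByteArray();

            // FaceDetector needs an RGB_565 bitmap with an even width.
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inPreferredConfig = Bitmap.Config.RGB_565;
            Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length, opts);

            // In a real app, create the detector once per preview size and only sample every Nth frame.
            FaceDetector detector = new FaceDetector(bitmap.getWidth(), bitmap.getHeight(), MAX_FACES);
            FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
            int found = detector.findFaces(bitmap, faces);

            // Use faces[0..found) to draw the grey squares on an overlay view.
        }
    }

Register the callback with Camera.setPreviewCallback (or setOneShotPreviewCallback for periodic sampling) once the preview is running.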

Related

Detect blur image using MLKit (TensorFlow)

I am working with the camera2 API and sometimes get blurred images when capturing from the camera. I need to check whether the captured image is blurred or not; using OpenCV increases the .apk size about 3x. I also checked the solution posted on a SO thread Here, but it seems incomplete.
Is it possible to detect whether an image is blurred using MLKit? Any suggestion is appreciated. Thanks.
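
This is not an MLKit-specific answer, but for reference, a plain-Java sketch (my own illustration, with an arbitrary threshold) of the common variance-of-Laplacian sharpness heuristic avoids the OpenCV dependency entirely:

    import android.graphics.Bitmap;
    import android.graphics.Color;

    public final class BlurEstimator {

        // Higher variance of the Laplacian = more edges = sharper image.
        public static double laplacianVariance(Bitmap bitmap) {
            int w = bitmap.getWidth();
            int h = bitmap.getHeight();
            int[] pixels = new int[w * h];
            bitmap.getPixels(pixels, 0, w, 0, 0, w, h);

            // Grayscale luminance.
            double[] gray = new double[w * h];
            for (int i = 0; i < pixels.length; i++) {
                int p = pixels[i];
                gray[i] = 0.299 * Color.red(p) + 0.587 * Color.green(p) + 0.114 * Color.blue(p);
            }

            // 4-neighbour Laplacian, accumulating mean and variance.
            double sum = 0, sumSq = 0;
            int count = 0;
            for (int y = 1; y < h - 1; y++) {
                for (int x = 1; x < w - 1; x++) {
                    double lap = gray[(y - 1) * w + x] + gray[(y + 1) * w + x]
                            + gray[y * w + x - 1] + gray[y * w + x + 1]
                            - 4 * gray[y * w + x];
                    sum += lap;
                    sumSq += lap * lap;
                    count++;
                }
            }
            double mean = sum / count;
            return sumSq / count - mean * mean;   // e.g. treat values below ~100 as blurry (tune empirically)
        }
    }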

Real time mark recognition on Android

I'm building an Android app that has to identify, in realtime, a mark/pattern which will be on the four corners of a visiting card. I'm using a preview stream of the rear camera of the phone as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how reference dots will be shown on screen by a QR reader at the corner points of the QR code preview.
I'm aware of how to get the frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or how to optimize it for real-time detection. I tried messing around with OpenCV and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on the algorithm to use to find the pattern.
The below image shows my pattern along with some details (ratios) about the same (same as the one used in QR, but I'm having it at 4 corners instead of 3)
I think one approach is to find black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code it in Android. I'm looking forward to an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on four corners of a visiting card:
Assuming background is white, you can simply try this method.
Processing which needs to be done and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In this example, the image shown on the tablet is delayed (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.
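
To make the run-ratio idea from the question concrete, here is an illustrative sketch (my own, assuming a QR-style 1:1:3:1:1 finder-pattern ratio): scan one binarized row for five consecutive dark/light/dark/light/dark runs in roughly that ratio. For real-time use you would scan every Nth row of a downscaled preview frame and confirm each candidate vertically before overlaying the circle.

    import java.util.ArrayList;
    import java.util.List;

    public final class FinderPatternScanner {

        // Returns approximate x centres of candidate marks in one binarized row.
        public static List<Integer> findInRow(boolean[] isDark) {
            // Collect run lengths and whether each run is dark.
            List<Integer> runLengths = new ArrayList<>();
            List<Boolean> runIsDark = new ArrayList<>();
            int start = 0;
            for (int x = 1; x <= isDark.length; x++) {
                if (x == isDark.length || isDark[x] != isDark[start]) {
                    runLengths.add(x - start);
                    runIsDark.add(isDark[start]);
                    start = x;
                }
            }

            List<Integer> centres = new ArrayList<>();
            int position = 0;
            for (int i = 0; i + 4 < runLengths.size(); i++) {
                if (runIsDark.get(i) && matchesRatio(runLengths, i)) {
                    int width = 0;
                    for (int k = 0; k < 5; k++) width += runLengths.get(i + k);
                    centres.add(position + width / 2);
                }
                position += runLengths.get(i);
            }
            return centres;
        }

        // True if runs i..i+4 are close to the 1:1:3:1:1 ratio.
        private static boolean matchesRatio(List<Integer> runs, int i) {
            int total = 0;
            for (int k = 0; k < 5; k++) total += runs.get(i + k);
            if (total < 7) return false;
            double module = total / 7.0;   // one "unit" of the pattern
            double tol = module * 0.5;     // generous 50% tolerance per run
            return Math.abs(runs.get(i) - module) <= tol
                    && Math.abs(runs.get(i + 1) - module) <= tol
                    && Math.abs(runs.get(i + 2) - 3 * module) <= 3 * tol
                    && Math.abs(runs.get(i + 3) - module) <= tol
                    && Math.abs(runs.get(i + 4) - module) <= tol;
        }
    }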

Android front camera video capture mirroring

I am making an application using the front camera and recently ran into this issue: the captured image and video come out different (left-right inverted) from what I saw on the display.
For the image I could process it right after taking the picture, because the image wasn't too big - I used Matrix#preScale to mirror it.
However, I have no idea how to do that with video capture. I already know that the default Android camera does not mirror the front camera video output, and lots of well-known applications have settled for that default behaviour. But at the same time there are some camera-related applications, such as Instagram, Snow Camera, and B612, that do exactly what I want.
So my questions are,
Is it possible to mirror the front camera video output using Android MediaRecorder and Camera class? (That's what I've been using so far)
Or do you have to process the video after you take it? And if so, is there any nice and fast way of doing it?
It would be nice if any of you could answer these specifically. Thank you in advance!
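
For reference, the still-image mirroring mentioned in the question (Matrix#preScale) looks roughly like the sketch below; the point of the question is that recorded video frames cannot be flipped this cheaply after the fact.

    import android.graphics.Bitmap;
    import android.graphics.Matrix;

    public final class MirrorUtil {

        // Flip a captured still left-right, as described in the question.
        public static Bitmap mirrorHorizontally(Bitmap src) {
            Matrix matrix = new Matrix();
            matrix.preScale(-1f, 1f);
            return Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
        }
    }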

Android auto crop camera captured images

I am looking for some kind of auto trim/crop functionality in Android: something that detects an object in a captured image and creates a square box around the object for cropping. I have found the face detection APIs in Android, but my problem is that the captured images are documents/pages, not human faces, so how can I detect documents or any other object in the captured picture?
I am thinking of some algorithm for object detection or some color detection. Are there any APIs or libraries available for it?
I have tried the following links but did not find the desired output:
Find and Crop relevant image area automatically (Java / Android)
https://github.com/biokys/cropimage
Any small hint would also help me a lot. Please help. Thanks in advance.
That depends on what you intend to capture and crop, but there are many ways to achieve this. Like littleimp suggested, you should use OpenCV for this.
I suggest you use edge-detection algorithms, such as Sobel, and perform image transformation on it with, for example, a Threshold function that will turn the image into a binary one (only black and white). Afterwards, you can search the image for the geometric shape you want, using what's suggested here. Filter the object you want by calculating the detected geometric figure's area and ratio.
It would help a lot to know what you're trying to detect in an image. Those methods I described were the ones I used for my specific case, which was developing an algorithm to detect and crop the license plate from a given vehicle image. It works close to perfect and it was all done by using OpenCV.
If you have anything else you'd like to know, don't hesitate to ask. I'm watching this post :)
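
Here is a rough sketch of the pipeline described above, using the OpenCV Java bindings; details such as Canny in place of Sobel and the threshold values are my own choices, not a definitive implementation. It runs edge detection, searches the contours for the largest four-sided shape, and crops its bounding box.

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.Rect;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    public final class DocumentCropper {

        // Returns the cropped document region, or the input if no quadrilateral is found.
        public static Mat autoCrop(Mat bgr) {
            Mat gray = new Mat();
            Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);
            Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);

            Mat edges = new Mat();
            Imgproc.Canny(gray, edges, 75, 200);   // Canny used here instead of Sobel + threshold

            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(edges, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            double bestArea = 0;
            Rect bestRect = null;
            for (MatOfPoint contour : contours) {
                MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
                double peri = Imgproc.arcLength(curve, true);
                MatOfPoint2f approx = new MatOfPoint2f();
                Imgproc.approxPolyDP(curve, approx, 0.02 * peri, true);

                double area = Imgproc.contourArea(contour);
                // Keep the largest 4-sided contour: likely the document outline.
                if (approx.total() == 4 && area > bestArea) {
                    bestArea = area;
                    bestRect = Imgproc.boundingRect(contour);
                }
            }
            return bestRect != null ? new Mat(bgr, bestRect) : bgr;
        }
    }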
Use OpenCV for Android.
You can use the Watershed (Imgproc.watershed) function to segment the image into foreground and background. Then you can crop around the foreground (which will be the document).
The watershed algorithm needs some markers pre-defining the regions. You can for example assume the document to be in the middle of the image, so create a marked region in the middle of the image to get the watershed algorithm started.
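
A minimal sketch of this watershed approach, under the same assumption that the document is roughly centred in the frame; the marker placement and sizes below are illustrative only.

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.core.Rect;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;

    public final class WatershedCropper {

        public static Mat cropForeground(Mat bgr) {
            // Marker image: 0 = unknown, 1 = background, 2 = foreground (document).
            Mat markers = Mat.zeros(bgr.size(), CvType.CV_32SC1);

            // Thin border strips are almost certainly background.
            markers.rowRange(0, 2).setTo(new Scalar(1));
            markers.rowRange(bgr.rows() - 2, bgr.rows()).setTo(new Scalar(1));
            markers.colRange(0, 2).setTo(new Scalar(1));
            markers.colRange(bgr.cols() - 2, bgr.cols()).setTo(new Scalar(1));

            // Assume the document covers the middle of the frame, as suggested above.
            int cx = bgr.cols() / 2, cy = bgr.rows() / 2;
            markers.submat(cy - 20, cy + 20, cx - 20, cx + 20).setTo(new Scalar(2));

            Imgproc.watershed(bgr, markers);   // bgr must be an 8-bit, 3-channel image

            // Crop the bounding box of everything labelled as foreground (label 2).
            Mat mask = new Mat();
            markers.convertTo(mask, CvType.CV_8UC1);               // -1 saturates to 0
            Imgproc.threshold(mask, mask, 1, 255, Imgproc.THRESH_BINARY);
            Mat nonZero = new Mat();
            Core.findNonZero(mask, nonZero);
            if (nonZero.empty()) return bgr;
            Rect box = Imgproc.boundingRect(new MatOfPoint(nonZero));
            return new Mat(bgr, box);
        }
    }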

OpenCV android native camera motion blur

I'm using the OpenCV Android native camera libraries for image capture in NDK, but while the image acquisition is happening the phone is (necessarily) moving so the images are quite blurry. What options are there to reduce this blur? I don't actually care too much about color fidelity so can sacrifice quality in that area. There doesn't appear to be direct exposure control, but I did find that doing
cap.set(CV_CAP_PROP_EXPOSURE, -4)
seemed to help a bit although -4 is the furthest it would go. I'm not terribly familiar with camera terminology in general so am not sure what effect the other properties might have for me.
Also, I noticed that in the default camera application (Galaxy SIII), the camera motion blur occurs during image preview, which makes sense since OpenCV also grabs preview images. But in video preview (non-recording) mode the image is a lot crisper. Is anyone aware of how I can access this stream?
