Here is what I have done so far.
I have launched the camera from my application.
I have taken a photo.
I have used face detection to draw a green square over the face region.
Now I am wondering: how do I extract the exact face using the face detection result?
I have gone through a number of questions on Stack Overflow related to this problem; unfortunately, the solutions are not accurate enough.
I have seen solutions such as cropping based on a circular window, but I need the exact face extracted from the image. Is that possible? How? What is the best available way to do this? Any code snippets would be appreciated.
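Since the detector already gives you the rectangle you paint green, one way to go from that rectangle to the "exact" face is to let OpenCV's GrabCut refine it into a per-pixel foreground mask. GrabCut is a suggestion of mine, not something from the question, and the class and method names below are illustrative; the sketch assumes you have converted your detected face rectangle into an org.opencv.core.Rect.

// Illustrative sketch: refine a detected face rectangle into a per-pixel face
// mask with GrabCut, then copy only those pixels into a transparent bitmap.
// `faceRect` is assumed to be the rectangle your face detector produced.
import android.graphics.Bitmap;
import org.opencv.android.Utils;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public final class FaceExtractor {
    public static Bitmap extractFace(Bitmap photo, Rect faceRect) {
        Mat rgba = new Mat();
        Utils.bitmapToMat(photo, rgba);
        Mat bgr = new Mat();
        Imgproc.cvtColor(rgba, bgr, Imgproc.COLOR_RGBA2BGR); // grabCut needs 8UC3

        // Seed GrabCut with the detector's rectangle; everything outside it is background.
        Mat mask = new Mat(), bgdModel = new Mat(), fgdModel = new Mat();
        Imgproc.grabCut(bgr, mask, faceRect, bgdModel, fgdModel, 3, Imgproc.GC_INIT_WITH_RECT);

        // Keep pixels labelled "foreground" or "probably foreground".
        Mat fgd = new Mat(), prFgd = new Mat(), faceMask = new Mat();
        Core.inRange(mask, new Scalar(Imgproc.GC_FGD), new Scalar(Imgproc.GC_FGD), fgd);
        Core.inRange(mask, new Scalar(Imgproc.GC_PR_FGD), new Scalar(Imgproc.GC_PR_FGD), prFgd);
        Core.bitwise_or(fgd, prFgd, faceMask);

        // Copy only the face pixels; the rest of the output stays transparent.
        Mat face = new Mat(rgba.size(), rgba.type(), new Scalar(0, 0, 0, 0));
        rgba.copyTo(face, faceMask);

        Bitmap out = Bitmap.createBitmap(photo.getWidth(), photo.getHeight(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(face, out);
        return out;
    }
}

If GrabCut turns out to be too slow on the device, a cheaper approximation with the same structure is to build the mask from an ellipse inscribed in the detected rectangle instead.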
Link to image. Please look at the image to understand the question.
I have a bigger problem to solve, but I have to take a small step first.
Bigger problem statement: In the image linked above, there is a series of LED lights, each with a digit next to it. I have to read the digit (not the F and R letters) next to the LED that is currently glowing, and this should happen when you open your Android phone's camera and hold it in front of the device. I understand that this is a much bigger problem and needs a lot of ML and OCR work to solve properly.
Reduced problem statement: I have figured out a simpler, slightly tricky way of attacking the above problem (if possible, please help). I want to process the image with the OpenCV Java library, read the switched-on/switched-off LEDs, and store their brightness values in an array, sequentially from top to bottom, so that I can later work out which LEDs were on and which were off. With this simple logic I can solve the problem for now without going through ML/DL and OCR algorithms, because I have to deliver within 3 days.
The issue is that I am new to OpenCV. So far I have figured out that, using the callback below,
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) { return inputFrame.rgba(); }
I can get a frame from the live Android camera and then do some processing on this frame/image. I have also tried many ad-hoc approaches that I found on Stack Overflow, but nowhere could I figure out how to solve my problem.
Is there any way to detect the brightness of all those switched-on/off LEDs and store it in an array, in sequence from top to bottom?
Any help or approach will be appreciated. Thanks in advance.
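For the reduced problem above, a rough OpenCV-for-Android sketch is below: threshold the grayscale frame so lit LEDs become white blobs, take each blob's bounding box, sort the boxes from top to bottom, and record the mean gray level of each box as its brightness. The threshold of 200 and the minimum blob area are guesses to tune for the device; note this only locates lit LEDs, so you would still need a fixed framing or a detected reference point to infer which positions are off.

// Illustrative sketch for the CvCameraViewListener2 callback shown above.
// Assumed imports: org.opencv.core.*, org.opencv.imgproc.Imgproc,
// org.opencv.android.CameraBridgeViewBase, java.util.*
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    Mat gray = inputFrame.gray();

    // Lit LEDs become white blobs; everything darker than the threshold goes black.
    Mat bright = new Mat();
    Imgproc.threshold(gray, bright, 200, 255, Imgproc.THRESH_BINARY);

    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(bright, contours, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    // One bounding box per blob, ignoring tiny specks.
    List<Rect> leds = new ArrayList<>();
    for (MatOfPoint c : contours) {
        Rect r = Imgproc.boundingRect(c);
        if (r.area() > 20) leds.add(r);
    }
    // Sequential from top to bottom, as the question requires.
    Collections.sort(leds, (a, b) -> Integer.compare(a.y, b.y));

    // Mean gray level inside each box serves as that LED's brightness value.
    double[] brightness = new double[leds.size()];
    for (int i = 0; i < leds.size(); i++) {
        brightness[i] = Core.mean(gray.submat(leds.get(i))).val[0];
        Imgproc.rectangle(rgba, leds.get(i).tl(), leds.get(i).br(), new Scalar(0, 255, 0), 2);
    }
    // brightness[i] now holds the value of the i-th detected LED from the top.
    return rgba;
}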
I'm building an Android app that has to identify, in real time, a mark/pattern that will be on the four corners of a visiting card. I'm using the preview stream of the phone's rear camera as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how reference dots are shown on screen by a QR reader at the corner points of the QR code preview.
I'm aware of how to get the frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done or how to optimize it for real-time detection. I tried messing around with OpenCV, and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on an algorithm to find the pattern.
The image below shows my pattern along with some details (ratios) about it (it is the same as the one used in QR codes, but I have it at 4 corners instead of 3).
I think one approach is to find black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code it on Android. I am looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
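Since the mark is described as the QR finder pattern used at four corners, one way to code the ratio idea from the question is the classic run-length scan QR readers use: walk each row of the luma plane, track the lengths of alternating black/white runs, and flag any five consecutive runs close to 1:1:3:1:1. The sketch below assumes the standard QR ratio and an NV21 preview frame whose Y plane is passed in as a byte array; the class name, threshold, and tolerance are illustrative.

// Illustrative sketch: row-wise 1:1:3:1:1 run-length scan over the preview's luma plane.
public final class MarkFinder {
    private static final int THRESHOLD = 128; // luma below this counts as "black" (to tune)

    /** Returns the x-centres of candidate marks found on one row. */
    static java.util.List<Integer> scanRow(byte[] luma, int width, int row) {
        java.util.List<Integer> centers = new java.util.ArrayList<>();
        int[] runs = new int[5];      // lengths of the last 5 alternating runs
        int runCount = 0;
        boolean lastBlack = false;
        int runLen = 0;

        for (int x = 0; x < width; x++) {
            boolean black = (luma[row * width + x] & 0xFF) < THRESHOLD;
            if (x == 0 || black == lastBlack) {
                runLen++;
            } else {
                // Close the previous run and shift it into the 5-run window.
                System.arraycopy(runs, 1, runs, 0, 4);
                runs[4] = runLen;
                runCount++;
                runLen = 1;
                // A candidate ends on a black-to-white transition once 5 runs are in view.
                if (runCount >= 5 && !black && matchesRatio(runs)) {
                    int total = runs[0] + runs[1] + runs[2] + runs[3] + runs[4];
                    centers.add(x - total / 2); // centre of the 1:1:3:1:1 block
                }
            }
            lastBlack = black;
        }
        return centers;
    }

    /** True if the 5 runs are roughly 1:1:3:1:1 (50% tolerance per module). */
    private static boolean matchesRatio(int[] runs) {
        int total = runs[0] + runs[1] + runs[2] + runs[3] + runs[4];
        if (total < 7) return false;
        float module = total / 7f;
        float tol = module * 0.5f;
        return Math.abs(runs[0] - module) < tol
                && Math.abs(runs[1] - module) < tol
                && Math.abs(runs[2] - 3 * module) < 3 * tol
                && Math.abs(runs[3] - module) < tol
                && Math.abs(runs[4] - module) < tol;
    }
}

As in ZXing's finder-pattern detector, you would then cross-check each candidate column-wise and cluster hits across neighbouring rows to get the four corner centres on which to overlay your circles.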
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
Processing that needs to be done and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In this example, the image shown on the tablet has a delay (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.
I have to make an application that can detect the x, y position on the screen where the user is looking. By that I don't mean detecting the user's eye coordinates through face recognition, as with OpenCV; I need the exact position on the screen that the user is watching. I have looked everywhere and can find nothing on it. A little help on where I can research this further would be nice. Right now all I can find are OpenCV answers about showing the coordinates of both eyes while doing facial recognition on the camera feed, but that's not what I need.
Thanks in advance!
I am looking for some kind of auto trim/crop functionality in Android that detects an object in a captured image and creates a square box around the object for cropping. I have found face detection APIs in Android, but my problem is that the captured images are documents/pages, not human faces, so how can I detect documents or any other objects in the captured picture?
I am thinking of algorithms for object detection or some kind of color detection. Are there any APIs or libraries available for this?
I have tried the following links but did not get the desired output.
Find and Crop relevant image area automatically (Java / Android)
https://github.com/biokys/cropimage
Any small hint would also help me a lot. Please help. Thanks in advance.
That depends on what you intend to capture and crop, but there are many ways to achieve this. Like littleimp suggested, you should use OpenCV for this.
I suggest you use an edge-detection algorithm, such as Sobel, and then apply, for example, a threshold function to turn the result into a binary image (only black and white). Afterwards, you can search the image for the geometric shape you want, using what's suggested here. Filter the object you want by calculating the detected figure's area and aspect ratio.
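A minimal OpenCV-for-Android sketch of that pipeline (Sobel gradients, a fixed threshold to get a binary edge image, then keeping the largest contour that approximates to four corners) might look like the following; the kernel size, threshold, and area cut-off are guesses to tune, not values from this answer.

// Illustrative sketch: Sobel -> threshold -> largest 4-corner contour -> crop.
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public final class DocumentCropper {
    public static Mat cropLargestQuad(Mat srcRgba) {
        Mat gray = new Mat();
        Imgproc.cvtColor(srcRgba, gray, Imgproc.COLOR_RGBA2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);

        // Edge magnitude via Sobel, then a fixed threshold to get a binary image.
        Mat gradX = new Mat(), gradY = new Mat(), edges = new Mat();
        Imgproc.Sobel(gray, gradX, CvType.CV_16S, 1, 0);
        Imgproc.Sobel(gray, gradY, CvType.CV_16S, 0, 1);
        Core.convertScaleAbs(gradX, gradX);
        Core.convertScaleAbs(gradY, gradY);
        Core.addWeighted(gradX, 0.5, gradY, 0.5, 0, edges);
        Imgproc.threshold(edges, edges, 50, 255, Imgproc.THRESH_BINARY);

        // Keep the biggest contour that approximates to 4 corners (the page).
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        Rect best = null;
        double bestArea = srcRgba.size().area() * 0.1; // ignore anything under 10% of the frame
        for (MatOfPoint c : contours) {
            MatOfPoint2f c2f = new MatOfPoint2f(c.toArray());
            MatOfPoint2f approx = new MatOfPoint2f();
            double peri = Imgproc.arcLength(c2f, true);
            Imgproc.approxPolyDP(c2f, approx, 0.02 * peri, true);
            double area = Imgproc.contourArea(c);
            if (approx.total() == 4 && area > bestArea) {
                bestArea = area;
                best = Imgproc.boundingRect(c);
            }
        }
        return best != null ? new Mat(srcRgba, best) : srcRgba;
    }
}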
It would help a lot to know what you're trying to detect in an image. The methods I described are the ones I used for my specific case, which was developing an algorithm to detect and crop the license plate from a given vehicle image. It works close to perfectly, and it was all done using OpenCV.
If you have anything else you'd like to know, don't hesitate to ask. I'm watching this post :)
Use OpenCV for Android.
You can use the watershed function (Imgproc.watershed) to segment the image into foreground and background. Then you can crop around the foreground (which will be the document).
The watershed algorithm needs some markers pre-defining the regions. You can, for example, assume the document is in the middle of the image, and create a marked region in the middle to get the watershed algorithm started.
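A rough sketch of that marker setup in OpenCV for Android is below: label a disc in the middle of the frame as "document", a thin band along the border as "background", run Imgproc.watershed, and crop the bounding box of the region grown from the centre marker. The marker radius, border thickness, and class name are arbitrary choices of mine.

// Illustrative sketch: seed watershed with a centre "document" marker and a border
// "background" marker, then crop the region watershed assigns to the centre.
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public final class WatershedCropper {
    public static Mat crop(Mat rgba) {
        Mat bgr = new Mat();
        Imgproc.cvtColor(rgba, bgr, Imgproc.COLOR_RGBA2BGR); // watershed needs 8UC3

        // Draw the seeds on an 8-bit mask, then convert to the 32-bit marker image.
        Mat seeds = Mat.zeros(bgr.size(), CvType.CV_8UC1);
        Point centre = new Point(bgr.cols() / 2.0, bgr.rows() / 2.0);
        int radius = Math.min(bgr.cols(), bgr.rows()) / 8;                     // arbitrary
        Imgproc.circle(seeds, centre, radius, new Scalar(1), -1);              // label 1 = document
        Imgproc.rectangle(seeds, new Point(0, 0),
                new Point(bgr.cols() - 1, bgr.rows() - 1), new Scalar(2), 5);  // label 2 = background
        Mat markers = new Mat();
        seeds.convertTo(markers, CvType.CV_32SC1);

        Imgproc.watershed(bgr, markers);

        // Everything watershed assigned to label 1 is treated as the document.
        Mat markers8u = new Mat(), docMask = new Mat();
        markers.convertTo(markers8u, CvType.CV_8U);
        Core.inRange(markers8u, new Scalar(1), new Scalar(1), docMask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(docMask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        Rect box = null;
        for (MatOfPoint c : contours) {
            Rect r = Imgproc.boundingRect(c);
            if (box == null || r.area() > box.area()) box = r;
        }
        return box != null ? new Mat(rgba, box) : rgba;
    }
}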
I am trying to display a .md2 model on top of the camera preview on my Android phone. I don't need the accelerometers or anything. If anyone could even just point me in the right direction on how to set up an OpenGL overlay, that would be fantastic. If you can provide code that shows how to do this, that would be even better! It would be greatly appreciated.
I'm not able to provide code until later this week, but you might want to check out a library called min3d, because I believe they already have a parser written for .md2 files. Then, I believe that if you use a GLSurfaceView, its background can be set to transparent and you can put a view of the camera behind it. Are you trying to get some kind of augmented reality effect? There are Android-specific libraries for that too, but they're pretty laggy (at least on my Motorola Droid).
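A small sketch of that transparent GLSurfaceView setup is below; the camera preview layer is whatever SurfaceView you already use (commented out here), and the renderer body is a placeholder where the parsed .md2 model (via min3d or your own loader) would be drawn.

// Illustrative sketch: a translucent GLSurfaceView layered over the camera preview.
import android.app.Activity;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.widget.FrameLayout;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class OverlayActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        FrameLayout root = new FrameLayout(this);

        // Bottom layer: add your existing camera preview SurfaceView here first.
        // root.addView(cameraPreviewView);

        GLSurfaceView glView = new GLSurfaceView(this);
        glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);      // choose a config with an alpha channel
        glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        glView.setZOrderMediaOverlay(true);                 // keep it above the preview surface
        glView.setRenderer(new GLSurfaceView.Renderer() {
            @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
            @Override public void onSurfaceChanged(GL10 gl, int w, int h) { gl.glViewport(0, 0, w, h); }
            @Override public void onDrawFrame(GL10 gl) {
                gl.glClearColor(0f, 0f, 0f, 0f);            // fully transparent background
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                // ... render the .md2 model here ...
            }
        });
        root.addView(glView);

        setContentView(root);
    }
}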