I'm trying to make an app like InkHunter. I can make it detect an object, but on the live camera the overlaid image blinks, because I detect the object again and again in every frame. I have read a lot about object tracking but haven't found a useful algorithm; I keep confusing one algorithm with another and I'm very unsure which method or algorithm I should use.
Initially I detect a hand-written plus sign and get the center point of the ROI. Which algorithm should I use to track the center point of the rect and place an image there?
After detecting the plus (+) sign I have to place another image on it. Using the copyTo() method I can place the image, but the overlaid image flickers every time the frame changes. So once the plus-sign points are found, I want to track them and place the image at that point.
My sample image is below.
// Image moments of the binary ROI give its centroid:
Moments m = Imgproc.moments(lrrIMG, true);
Point point = new Point(m.m10 / m.m00, m.m01 / m.m00);
Using this code I am able to get the center point. Now I have to place an image at that point and track it until it disappears from the mobile screen.
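(For illustration: short of a full tracker, one cheap way to stop the flicker is to smooth the detected centroid across frames and reuse the last known point on frames where detection fails. A minimal sketch assuming OpenCV's Java bindings; CenterTracker and ALPHA are hypothetical names, not anything from the question.)

import org.opencv.core.Point;

// Hypothetical helper: smooth the per-frame centroid so the overlay
// position changes gradually instead of jumping between frames.
class CenterTracker {
    private Point smoothed = null;
    private static final double ALPHA = 0.3; // 0..1; higher reacts faster

    // Call once per frame with the freshly detected centroid,
    // or null if the plus sign was not found in this frame.
    Point update(Point detected) {
        if (detected == null) {
            return smoothed;                 // keep the last known position
        }
        if (smoothed == null) {
            smoothed = detected.clone();     // first detection: adopt as-is
        } else {
            smoothed.x += ALPHA * (detected.x - smoothed.x);
            smoothed.y += ALPHA * (detected.y - smoothed.y);
        }
        return smoothed;
    }
}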
As of now, when Android Vision detects a QR code, the array "Barcode.cornerPoints" (which contains the code's corner points) is populated in a seemingly random order. What I need is to determine which 3 of the 4 corner points contain the "orientation squares".
The current approach I am using is very unsatisfying:
For every detected QR code, I am forced to create a bitmap and try to find the QR code again with another library (ZXing), which always returns the corner points in a consistent order with respect to rotation.
If ZXing finds the QR code (which sadly only happens about one time in five), I need to cross-check and match the ZXing corners with the Android Vision corners.
What I would like is to get the array "Barcode.cornerPoints" populated with respect to orientation.
For example and clarification:
cornerPoints[0] = //First corner-point with an orientation square
cornerPoints[1] = //Second corner-point with an orientation square
cornerPoints[2] = //Third corner-point with an orientation square
cornerPoints[3] = //The corner-point that does not contain an orientation square
> Like in this picture <
I have been trying to find a clever workaround to this issue for quite a while now, but I can't come up with any good solution, and it does not appear that Google has open-sourced the code used when populating the qrCorners array, so I can't extend it...
Any help out there? I am not the only one who has been looking for a solution to this issue:
https://github.com/googlesamples/android-vision/issues/103
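(For reference, a minimal sketch of the cross-check described above, assuming ZXing's decoder returned its finder-pattern ResultPoints for the same frame (usually three for a QR code, in a consistent order) and Android Vision returned four android.graphics.Point corners; orderCorners is a hypothetical helper, not part of either API.)

import android.graphics.Point;
import com.google.zxing.ResultPoint;

// Hypothetical helper: match each ZXing finder-pattern point to the nearest
// Android Vision corner; the one Vision corner left unmatched is the corner
// without an orientation square.
static Point[] orderCorners(Point[] visionCorners, ResultPoint[] zxingPoints) {
    Point[] ordered = new Point[4];
    boolean[] used = new boolean[4];
    for (int i = 0; i < 3 && i < zxingPoints.length; i++) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int j = 0; j < visionCorners.length; j++) {
            if (used[j]) continue;
            double dx = visionCorners[j].x - zxingPoints[i].getX();
            double dy = visionCorners[j].y - zxingPoints[i].getY();
            double dist = dx * dx + dy * dy;
            if (dist < bestDist) { bestDist = dist; best = j; }
        }
        ordered[i] = visionCorners[best];
        used[best] = true;
    }
    for (int j = 0; j < visionCorners.length; j++) {
        if (!used[j]) ordered[3] = visionCorners[j]; // no orientation square
    }
    return ordered;
}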
I am in a similar situation as well. What might help you to know is that Android Vision does not return the corner points in a completely random order.
I believe that the detector scans the image from the top left to the bottom right of the frame. The QR-code corner detected first in the image is returned as corner 0, and the rest follow in clockwise order.
It would be really helpful if Android Vision returned the corners as you describe, in a fixed order depending on orientation. I see little reason for the current behavior; maybe it performs better? For a QR code to be read at all, the decoder has to establish an orientation, which is determined by the code's corners. That means Android Vision has already identified the orientation and the corners, but does not give this information to us.
Maybe this could be added in future updates?
I am working on an app that will compare histograms in the hope of matching faces.
The app allows the user to take a photo, select a few key points in the image and then the app draws circles around those points. I then detect the circles using the OpenCV Hough Circle Transform functions. Up to this point the app works great.
What I need to implement now is one of two options:
Detect the circles and create separate histograms for the area inside of each circle.
Detect the circles and blackout the area(s) around the circles and create one histogram.
I'm leaning towards method 2, but I'm not sure how to mask/color/paint the area outside of the circles after they are detected. Any input would be appreciated. Thanks.
Instead of painting the area outside the circles in the original image, why not create a new image and copy the content of the circles to it?
Another point: histograms are invariant to translation, so it does not matter whether you copy the circles to the exact same locations in the new image.
Do clarify if I did not answer your question, or if you have other questions now.
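(A variant of method 2 that avoids painting or copying anything: OpenCV's calcHist accepts a mask, so you can histogram only the pixels inside the detected circles. A rough sketch, assuming `image` is the source Mat and `circles` is the Mat returned by Imgproc.HoughCircles.)

import java.util.Arrays;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

// Mask: white inside every detected circle, black everywhere else.
Mat mask = Mat.zeros(image.size(), CvType.CV_8UC1);
for (int i = 0; i < circles.cols(); i++) {
    double[] c = circles.get(0, i);              // center x, center y, radius
    Imgproc.circle(mask, new Point(c[0], c[1]), (int) c[2],
            new Scalar(255), -1);                // thickness -1 = filled
}

// Histogram of channel 0, restricted to the masked (circle) pixels only.
Mat hist = new Mat();
Imgproc.calcHist(Arrays.asList(image), new MatOfInt(0), mask, hist,
        new MatOfInt(256), new MatOfFloat(0f, 256f));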
Now I have a circle and an image on the canvas:
a ping-pong board (the image) and a ping-pong ball (drawn with drawCircle).
The position of the ball depends on the accelerometer.
Is it possible to detect whether the ball is outside the board or not?
Or do I need to draw the board programmatically without using the image?
What you are looking for is called 'collision detection': a technique used in game programming where you define boundaries (the area within which an object can be hit) and then check whether another object enters them.
You could simply say that the boundaries are anything within the height/width of the image on the canvas, but I suspect that in your game you will want a subsection of that.
You will need to define a related object for your image that holds the collision boundary. In a 2D game that is the starting X and Y plus the width and height; in a 3D game you will also need to store the Z position.
This is probably quite confusing to start with, but I found you a little guide that explains it in more detail than I have space for here:
http://www.kilobolt.com/day-4-collision-detection-part-1.html
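For instance, a minimal 2D sketch under the assumption that the board is an axis-aligned rectangle and the ball is a circle (all names here are hypothetical placeholders):

// The ball is on the board only if the whole circle fits inside the
// rectangle, i.e. its center stays at least one radius from every edge.
boolean ballOnBoard(float ballX, float ballY, float ballRadius,
                    float boardX, float boardY,
                    float boardWidth, float boardHeight) {
    return ballX - ballRadius >= boardX
        && ballY - ballRadius >= boardY
        && ballX + ballRadius <= boardX + boardWidth
        && ballY + ballRadius <= boardY + boardHeight;
}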
Let me know if you have any questions; the game sounds exciting!
I'm writing an Android app using OpenCV for my master's that will be something like a game. The main goal is to detect a car in a selected area. A "prize" is triggered randomly while detecting cars. When the user hits the right car, I want to display a 3D object overlaid on the screen, attach it to the middle of the car, and keep it there, so that when the user changes the angle of his view of the car, the object is also seen from a different angle.
At the moment I have EVERYTHING except attaching the object. I've built the detection, I'm drawing the 3D overlay, and I've written functions that let me rotate the camera, etc. BUT I have no clue how to attach the overlay to a specific point. Without this, I have no reference for recalculating the renderer to change the overlay's perspective.
Please, I really need some help; even a small idea would be fine:
How can I attach the overlay to a specific point in the real world?
(Sorry, I couldn't comment. Need at least 50 points to do that ... :P )
I assume the image of the car is coming from a camera feed and you are drawing the 3D car in OpenGL. If so, you can try this:
Set the pixel format of the OpenGL layer to RGBA_8888, so that you can make the background of the OpenGL view a transparent color.
Use a RelativeLayout as the layout of your activity.
First, add the OpenCV camera view to it at full height and width.
Then add the OpenGL layer, also at full height and width.
Get the position of the real car from the OpenCV layer as a pixel value (or however you obtained it).
Then scale it to your OpenGL coordinates so that you can draw the overlay in the right spot.
It worked for me; hope it works for you too.
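(A rough sketch of that layering, assuming an Activity whose root RelativeLayout has id `root` (hypothetical) and already contains the full-screen OpenCV camera view, and a renderer that clears with GLES20.glClearColor(0f, 0f, 0f, 0f) so the camera shows through.)

import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.widget.RelativeLayout;

GLSurfaceView glView = new GLSurfaceView(this);
glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);       // RGBA_8888 + depth buffer
glView.getHolder().setFormat(PixelFormat.RGBA_8888); // transparent surface
glView.setZOrderOnTop(true);                         // composite above the camera view
glView.setRenderer(myRenderer);                      // hypothetical renderer

RelativeLayout root = (RelativeLayout) findViewById(R.id.root);
root.addView(glView, new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.MATCH_PARENT,
        RelativeLayout.LayoutParams.MATCH_PARENT));

// Map the car's pixel position (carPixelX, carPixelY; hypothetical values
// from the OpenCV layer) to OpenGL normalized device coordinates.
// Note the flipped y axis:
float ndcX = 2f * carPixelX / viewWidth - 1f;
float ndcY = 1f - 2f * carPixelY / viewHeight;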
Fairly new to Android dev; I haven't got any code for this particular step yet, so I'll try to give as much detail as possible. I'm trying to make an ImageView object move around the Android view/activity. Unlike plain Java, I'm not able to use the random generator to translate it to an x and/or y position on the frame. If anyone could point me in the right direction, or more importantly has a good idea of how to do this, that'd be great.
There is a class in Android called Random, and it has a method nextInt() that gives you a random number. You can also get the width and height of the screen using DisplayMetrics, so that you can keep the image inside the device's screen. And you can move the ImageView. See this link.
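Something along these lines, inside an Activity, assuming `imageView` has already been laid out (names hypothetical):

import java.util.Random;
import android.util.DisplayMetrics;

Random random = new Random();
DisplayMetrics metrics = getResources().getDisplayMetrics();

// Keep the whole image on screen by subtracting its size from the bounds.
int maxX = Math.max(1, metrics.widthPixels - imageView.getWidth());
int maxY = Math.max(1, metrics.heightPixels - imageView.getHeight());

imageView.animate()
        .x(random.nextInt(maxX))
        .y(random.nextInt(maxY))
        .setDuration(300)   // slide there over 300 ms
        .start();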