Motion detection using OpenCV - android

I have seen questions about OpenCV motion detection, but my requirement is much simpler, so I am asking again.
I would like to analyse video frames and see if something has changed in the frame. Any kind of motion occurring in the frame should be recognized. I just want to get notified if something happens. I don't need to track or draw contours.
Attempts made:
1) Template matching using OpenCV (TM_CCORR_NORMED).
I get the similarity index using cvMinMaxLoc and then:
if( sim_index > threshold )
    "Nothing changed"
else
    "Changed"
Problem faced:
I couldn't find a way to decide how to set the threshold. The similarity values for false matches and perfect matches were very close.
2) Method 2
a) Maintain a running average of the frames.
b) Take the absolute difference between the current frame and the running average.
c) Threshold it and make it binary.
d) Count the number of non-zero values.
Again I am stuck on how to threshold it, because I get a large number of non-zero values even for very similar frames.
Please advise me on what approach I should take. Am I going in the right direction with the above two methods, or is there a simple method that works in most generic scenarios?

Method 2 is generally regarded as the simplest method for motion detection, and it is very effective if your video contains no water, swaying trees or highly variable lighting conditions.
Normally you implement it like this:
motion_frame = abs(newframe - running_avg);
running_avg = (1 - alpha) * running_avg + alpha * newframe;
You can threshold the motion_frame if you want, then count the non-zeroes. But you could also just sum the elements of motion_frame and threshold that instead (be sure to work with floating-point numbers). Optimizing the parameters for this is pretty easy: just make two trackbars and play around with them. Typically alpha is around [0.1; 0.3].
Lastly, it is probably overkill to do this on entire frames, you could just use subsampled versions and the result will be very similar.
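The running-average approach described above can be sketched in a few lines. Here is a minimal NumPy version; the alpha and threshold values are illustrative assumptions, and a real detector would read frames from the camera rather than synthetic arrays:

```python
import numpy as np

# Running-average motion detector (a sketch of "Method 2" above;
# alpha and the threshold are illustrative, tune them with trackbars).
ALPHA = 0.2              # typical range is roughly 0.1 - 0.3
MOTION_THRESHOLD = 5.0   # mean absolute difference that counts as "motion"

def make_detector(first_frame, alpha=ALPHA, threshold=MOTION_THRESHOLD):
    """Returns a function that reports True when a new frame differs
    noticeably from the running average of previous frames."""
    running_avg = first_frame.astype(np.float32)

    def detect(frame):
        nonlocal running_avg
        frame = frame.astype(np.float32)
        motion = np.abs(frame - running_avg)
        # Update the average *after* computing the difference.
        running_avg = (1 - alpha) * running_avg + alpha * frame
        # Summing (here: averaging) the difference avoids a per-pixel
        # binary threshold; one global threshold is easier to tune.
        return float(motion.mean()) > threshold

    return detect

# Static scene: an unchanged frame, then a bright object appears.
base = np.full((64, 64), 100, dtype=np.uint8)
detect = make_detector(base)
print(detect(base))          # False: nothing changed
moved = base.copy()
moved[10:30, 10:30] = 255    # a bright blob "enters" the frame
print(detect(moved))         # True: large mean difference
```

Using the mean of the difference image (rather than counting thresholded non-zero pixels) sidesteps the per-pixel threshold that caused trouble in the question; subsampling the frames first makes this cheaper still.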

Related

Continuous gesture recognition with DTW

I am trying to use Dynamic Time Warping (DTW) to detect gestures performed with a smartphone, using the accelerometer sensor. I have already implemented a simple DTW algorithm.
So basically I am comparing arrays of accelerometer data (x, y, z) with DTW. One array contains my predefined gesture, the other should contain the measured values. My problem is that the accelerometer continuously measures new values, and I don't know when to start the comparison with my predefined value sequence.
I would need to know when the gesture starts and when it ends, but this might differ between gestures. In my case all supported gestures start and end at the same point, but as far as I know I can't reliably calculate the travelled distance from acceleration.
So to sum things up: how would you determine the right time to compare my arrays using DTW?
Thanks in advance!
The answer is, you compare your predefined gesture to EVERY subsequence.
You can do this much faster than real time (see [a]).
You need to z-normalize EVERY subsequence, and z-normalize your predefined gesture.
So, by analogy, if your stream was...
NOW IS THE WINTER OF OUR DISCONTENT, MADE GLORIOUS SUMMER..
...and your predefined word was MADE, you can compare with every marked word beginning (denoted by whitespace):
DTW(MADE,NOW)
DTW(MADE,IS)
DTW(MADE,THE)
DTW(MADE,WINTER)
etc
In your case, you don't have markers, you have this...
NOWISTHEWINTEROFOURDISCONTENTMADEGLORIOUSSUMMER..
So you just test every offset:
DTW(MADE,NOWI)
DTW(MADE, OWIS)
DTW(MADE, WIST)
DTW(MADE, ISTH)
::
DTW(MADE, TMAD)
DTW(MADE, MADE) // Success!
eamonn
[a] https://www.youtube.com/watch?v=d_qLzMMuVQg
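The every-offset idea above can be sketched directly. Below is a minimal, unoptimized O(n*m) DTW applied to the letter example; it uses windows of the template's own length for simplicity, whereas real accelerometer streams would use z-normalized (x, y, z) windows and the fast lower-bounding tricks from [a]:

```python
import numpy as np

# A sketch of "compare against every offset" using plain DTW on
# character codes; this is illustrative, not an optimized matcher.
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

stream = "NOWISTHEWINTEROFOURDISCONTENTMADEGLORIOUSSUMMER"
template = [ord(c) for c in "MADE"]

# Slide a window over the stream and keep the offset whose window has
# the smallest DTW distance to the template.
best = min(range(len(stream) - 3),
           key=lambda i: dtw_distance(template,
                                      [ord(c) for c in stream[i:i + 4]]))
print(stream[best:best + 4])  # the exact match has distance 0
```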
You want to apply DTW not only to a time series, but to a continuously evolving stream. Therefore you will have to use a sliding window of the n most recent data points.
This is exactly what eamonn described in his second example. His target pattern consists of 4 events (M, A, D, E), so he uses a sliding window of length 4.
In this case, however, he assumes that the data stream contains no distortions, such as (M, A, A, D, E). The advantage of DTW is that it allows this kind of distortion and still recognizes the distorted target pattern as a match. In your case, distortions in time are likely to happen: I assume you want equal gestures performed either slowly or quickly to be recognized as the same gesture.
Thus, the length of the sliding window must be greater than the length of the target pattern (to be able to detect a slow target gesture). This is computationally expensive.
Finally, my point is: I want to recommend this paper to you, the
SPRING algorithm by Sakurai, Faloutsos and Yamamuro.
They optimized the DTW algorithm for data streams. You will no longer need n*n computations per incoming event, but only n. It is basically DTW, but it cuts out all unnecessary computations and only considers the best possible alignment of the template onto the stream.
P.S. Most of what I know about time series and pattern matching, I learned by reading what Eamonn Keogh provided. Thanks a lot, Mr. Keogh.

Can I Spawn objects in Corona SDK with different distances between them using a Horizontally Scrolling background?

I need to generate objects for my little character to jump over. I have the artwork for these obstacles, my character can jump, and I have a scrolling background.
How can I spawn my obstacle artwork along the x axis with spacing in between?
Can anyone provide me with some sample code, or at least try to point me in the right direction?
Many Thanks,
James
Yes, you can. You want to use some sort of loop that generates them. Here are 2 options:
local function frameHandler()
    if should_I_make_object() then
        createObstacle()
    end
end

Runtime:addEventListener("enterFrame", frameHandler)
This approach will create new objects according to the frame rate, i.e. it lets you create an object every 100 frames, say. This will make levels play the same (have the same number of obstacles) on different devices that have varied frame rates.
Option 2:
local function createObstacle()
    -- your_create_obstacle_code()
    if game_is_still_playing() then
        timer.performWithDelay(object_spawn_delay, createObstacle)
    end
end
This option will create a new object every object_spawn_delay milliseconds. It is easy to code and is a nice solution when you need things to happen at a time-dependent interval, but you do need code that decides whether the game is still playing. Also, be aware that if the game ends, there might still be a lingering callback to createObstacle() that can create nasty bugs. Make sure to do proper cleanup when the level / game ends, and be aware that this callback may be an issue.
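On the spacing part of the question: one simple scheme is to place each new obstacle a random gap after the previous one. A language-agnostic sketch (shown here in Python; the gap constants are made up, and in Corona you would translate the idea into Lua inside createObstacle()):

```python
import random

# Hypothetical gap range in pixels; tune to your character's jump length.
MIN_GAP, MAX_GAP = 150, 400

def spawn_positions(start_x, count):
    """Return x coordinates with a random gap between consecutive obstacles."""
    xs = []
    x = start_x
    for _ in range(count):
        x += random.randint(MIN_GAP, MAX_GAP)  # varied but bounded spacing
        xs.append(x)
    return xs

print(spawn_positions(0, 5))  # five increasing x positions
```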

How to implement the movement speed of an object correctly?

So I am coding an Android game and have managed to make a ball roll over the screen in the direction you tilt your phone. However, I would like to make the ball roll faster the more you tilt your screen.
But what is the best way to implement this? Taking bigger steps is obviously not good; it makes collisions hard to calculate. I want to move more steps per second instead.
So let's say you have a tiled board and you implement speed as tiles/millisecond. But that is problematic too: the speed will not be continuous. You'd perhaps move 1 step every 10th iteration of a loop instead of every iteration. So you would move, then be still, then move, etc., instead of moving continuously. But maybe that is as good as it gets?
This problem applies generally to any kind of computer graphics, I guess. How do you implement this the best way? I'm specifically interested in what applies to Android.
The natural way of implementing speed and position problems is to update the position using the speed like this:
position = position + speed * dt
with dt constant, adapted to your implementation.
So basically the natural way is to increase the step. You say it's obviously bad for collision detection, but with a limited speed and a small dt I don't really see why.
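The position update above can be sketched as follows; the tilt-to-speed mapping and the constants are illustrative assumptions, not Android APIs:

```python
# Speed-scaled movement with a fixed, small dt; values are illustrative.
DT = 1.0 / 60.0             # fixed timestep, e.g. one 60 Hz physics tick

def step(position, tilt, max_speed=300.0):
    """Advance position by one tick; speed scales with tilt in [-1, 1]."""
    speed = tilt * max_speed        # more tilt -> faster roll
    return position + speed * DT    # small dt keeps steps collision-friendly

x = 0.0
for _ in range(60):                 # one simulated second at full tilt
    x = step(x, tilt=1.0)
print(round(x, 3))                  # ≈ 300.0 units after one second
```

Because dt is small and speed is bounded, each per-tick step stays short, which is why the answer argues that collision detection remains tractable.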

How to make a consistent game loop

So in my game my View gets drawn at inconsistent rates, which in turn makes it glitchy. I've been running into a lot of problems with the invalidate() method. Any simple ideas? Everywhere I look I get buried in tons of intense code.
You haven't provided us with much information, specifically code.
A few things you could do are:
Set the initial frame rate to the lowest value you observe your application running at, i.e., if it is currently set to 1/60 but the frame rate continuously dips to 1/30, set it to 1/30, etc.
Rework your drawing calls to be more efficient.
Try to combine multiple transformations into a single transformation by multiplying matrices, i.e. if you need to scale, translate, and rotate, multiply those three matrices together and apply that single transformation to the vertices instead of applying three separate transformations.
Try not to iterate through entire lists/arrays if unnecessary.
Attempt to use the lowest level / most primitive structure possible for anything you have to process in the loop to avoid the overhead of unboxing.
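The "combine transformations" suggestion above can be illustrated with 2D homogeneous coordinates; the matrices below are a generic sketch, not tied to any particular Android API:

```python
import math
import numpy as np

# Scale, rotate and translate as 3x3 matrices (2D homogeneous
# coordinates), multiplied once so each vertex needs a single transform.
def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1.0]])

# One combined matrix instead of three passes over the vertices.
# Note the order: the rightmost matrix is applied first.
M = translate(10, 5) @ rotate(math.pi / 2) @ scale(2, 2)

vertex = np.array([1.0, 0.0, 1.0])   # homogeneous (x, y, 1)
print(np.round(M @ vertex, 6))       # scaled, then rotated, then moved
```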
[edit on 2012-08-27]
A helpful link on fixing your timestep: http://gafferongames.com/game-physics/fix-your-timestep/
It sounds like your game loop doesn't take into account the actual time that has passed between iterations.
The problem is the assumption that there is a fixed amount of time between loop iterations. But this time can be variable depending on the number of objects in the scene, other processes on the computer, or even the computer itself.
This is a common, somewhat subtle mistake in game programming, but it can easily be remedied. The trick is to store the time at the end of each draw loop, then at the start of the next iteration take the difference between the current time and that stored time. Then you scale all animations and game changes by the actual elapsed time.
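That trick, sketched as a minimal loop (the update logic is a stand-in, and time.sleep simulates variable frame cost):

```python
import time

# Delta-time loop sketch: measure the real elapsed time each iteration
# and scale movement by it, so speed is independent of frame rate.
position = 0.0
SPEED = 100.0                  # units per second

def update(dt):
    global position
    position += SPEED * dt     # same on-screen speed at any frame rate

def game_loop(frames):
    last = time.perf_counter()
    for _ in range(frames):
        time.sleep(0.005)      # pretend drawing takes a variable while
        now = time.perf_counter()
        dt = now - last        # actual seconds since the last iteration
        last = now
        update(dt)

game_loop(10)
print(position > 0)            # moved proportionally to real elapsed time
```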
I wrote more about this on my blog a while back: http://laststop.spaceislimited.org/2008/05/17/programming-a-pong-clone-in-c-and-opengl-part-i/
Part II specifically covers this issue:
http://laststop.spaceislimited.org/2008/06/02/programming-pong-in-c-and-opengl-part-ii/

Image detection inside an image

I usually play a game called Burako.
It has colored playing pieces with numbers from 1 to 13.
After a match finishes you have to count your points.
For example:
1 == 15 points
2 == 20 points
I want to create an app that takes a picture and counts the pieces for me.
So I need something that recognizes an image inside an image.
I was about to read up on OpenCV, since there is an Android port, but it feels like there should be something simpler for this.
What do you think?
I have not used the Android port, but I think it's doable under good lighting conditions.
I would obtain the minimal bounding box of each of the pieces and rotate each one accordingly, so you can compare it with a model image.
Another way could be to get the contours of the numbers written on the pieces (which I guess are in color) and do some contour matching against the numbers.
OpenCV is a big and complex framework, but it's also suitable for simple tasks like this.
