Android: recognize two-digit numbers with gestures

I have created an application using the gestures library and it works well when I try to detect numbers from 0 to 9, but now I want to detect numbers from 0 to 99. The application is simple: it just asks for an arithmetic operation and the user must draw the correct result on the screen. How can I implement two-digit recognition?

You can't draw a gesture for two separate symbols at once unless you use multi-touch, which doesn't make much sense here.
I think the best approach would be to recognize individual digits and then wait a certain amount of time: if another digit arrives within that window, combine the two; otherwise interpret the input as a single digit. A sketch of this idea follows.
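Below is a minimal sketch of that timeout idea, assuming a gesture library (R.raw.gestures) whose entries are already trained on the single digits "0" to "9"; the class name, the 1-second window and the score threshold are illustrative assumptions, not a fixed recipe.

import android.content.Context;
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.GestureOverlayView;
import android.gesture.Prediction;
import android.os.Handler;
import android.os.Looper;
import java.util.ArrayList;

// Combines two consecutive digit gestures into one number; if no second digit
// arrives within the window, the first digit is reported on its own.
public class TwoDigitRecognizer implements GestureOverlayView.OnGesturePerformedListener {

    private static final long SECOND_DIGIT_WINDOW_MS = 1000; // assumed timeout

    private final GestureLibrary library;
    private final Handler handler = new Handler(Looper.getMainLooper());
    private Integer pendingDigit = null; // first digit, waiting for a possible second one

    private final Runnable commitSingle = new Runnable() {
        @Override public void run() {
            if (pendingDigit != null) {
                onNumberRecognized(pendingDigit); // timed out: it was a one-digit answer
                pendingDigit = null;
            }
        }
    };

    public TwoDigitRecognizer(Context context) {
        library = GestureLibraries.fromRawResource(context, R.raw.gestures);
        library.load();
    }

    @Override
    public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
        ArrayList<Prediction> predictions = library.recognize(gesture);
        if (predictions.isEmpty() || predictions.get(0).score < 1.0) {
            return; // no confident match, ignore this stroke
        }
        int digit = Integer.parseInt(predictions.get(0).name); // entries named "0".."9"
        if (pendingDigit == null) {
            pendingDigit = digit;
            handler.postDelayed(commitSingle, SECOND_DIGIT_WINDOW_MS);
        } else {
            handler.removeCallbacks(commitSingle);
            onNumberRecognized(pendingDigit * 10 + digit); // combine into a two-digit number
            pendingDigit = null;
        }
    }

    private void onNumberRecognized(int number) {
        // compare against the expected arithmetic result here
    }
}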

Related

Efficient way of handling percentile chance of event in programmatic way

I am developing a small game-like app. I have done this before and I'm still not sure if it was the correct way to do it. What I need is a percentage chance of some event, for example a 10% chance of gaining an item after a win.
I have been using a random number generator each time on the server side; if the number is <= 10 then the user gains the reward, but it still does not seem to satisfy the 10% criterion across all users.
I was thinking about recording the user's turn number on the server side and rewarding an item every nth time, but I don't know if that's the right way to do it. I would like to hear your ideas and suggestions. Also, I was not sure whether Stack Overflow is the right place to post this or some other community in the Stack Exchange network. If so, please guide me in a comment and I'll move the question to the appropriate community. Thanks.
There are two completely different things that you mention in your question. You have to decide which one you want, because a given algorithm cannot do both:
1. Every single time you try, there is a 10% chance. You might see it hit twice in a row and then not hit for 200 tries, but in the long run it hits 10% of the time.
2. You guarantee that out of every 10 tries, it will hit exactly once, never more, never less.
The first one works just fine with a proper random number pick and comparison, but you will get streaks with more hits than expected and streaks with fewer hits than expected. Over time, the average number of hits will be your 10%.
// Each call is independent and returns true about 10% of the time.
function getRandomTenPercentOutcome() {
    return Math.random() < 0.1;
}
The second one requires a more complicated implementation that combines a random generator with keeping track of recent events. The simplest way to implement a guaranteed 1-in-10 hit is to have the server create an array of 10 elements filled with zeroes and then randomly select one cell in the array and change it to a 1.
Then, as you need to pick a random outcome, you .shift() off the first item in the array and use that value. When the array becomes empty, you create a new one. This forces exactly 1 in 10 outcomes (starting from the beginning) to hit. Any given 10 outcomes might have 2 or 0 as you cross boundaries of creating a new array, but can't ever be further off than that.
You can make the array be either system-wide (one array for all users) or per-user (each user has their own outcome array). If you want each user to perceive that it is very close to 1 in 10 for them personally, then you would need to make a separate outcome array for each user.
The length of the array can be adjusted to control how much variance you want to allow. For example, you could create an array 50 long and randomly select 5 cells in the array. That would allow more variability within the 50, though would still force 5 hits within the 50.
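Here is a minimal Java sketch of that outcome-array scheme (the answer describes it with a JavaScript array and .shift(); a Deque plays the same role here, and the block size and hit count are the tunable parameters just mentioned):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

// Guarantees exactly `hits` successes in every block of `blockSize` outcomes.
// blockSize = 10, hits = 1 is the "exactly 1 in 10" scheme; blockSize = 50,
// hits = 5 allows more variance within a block, as described above.
public class OutcomeBag {
    private final int blockSize;
    private final int hits;
    private final Deque<Boolean> outcomes = new ArrayDeque<>();

    public OutcomeBag(int blockSize, int hits) {
        this.blockSize = blockSize;
        this.hits = hits;
    }

    // Equivalent of .shift(): take the next outcome, refilling when the bag is empty.
    public synchronized boolean nextOutcome() {
        if (outcomes.isEmpty()) {
            refill();
        }
        return outcomes.poll();
    }

    private void refill() {
        List<Boolean> block = new ArrayList<>();
        for (int i = 0; i < blockSize; i++) {
            block.add(i < hits);        // `hits` ones, the rest zeroes
        }
        Collections.shuffle(block);     // random positions for the guaranteed hits
        outcomes.addAll(block);
    }
}

Keep one OutcomeBag per user if you want each user to see something close to 1 in 10 personally, or a single shared instance for a system-wide guarantee.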
Both methods will average 10% in the long run. The first may have greater deviations from 10% over any given interval, but will be much more random. The second has much smaller deviations from 10% but some would say it is less random (which is not necessarily bad - depends upon your objective). Both are "fair".
I know from my son's gaming that he perceives things to be "unfair" when a streak of unlikely things happens, even though that streak is well within the occasionally expected results from a series of random events. So, sometimes truly random may not seem as fair to a participant because of the occasionally sporadic nature of true randomness.
This question seems to have been asked for multiple languages on other Stack Overflow pages.
For example here it is for Java: Java: do something x percent of the time
Check out the link for some ideas, but remember that it will only tend to 10% of the time given a large sample; you can't expect exactly 10% for just a couple of calls.
I do agree rewarding an item every nth time is not the way to go.

Interrupt forEach_root in RenderScript from Java-Side?

I am writing an Android application and I use RenderScript for some complex calculation (I am simulating a magnetic pendulum) that is performed on each pixel of a bitmap (using script.forEach_root(...)). This calculation might last from a tenth of a second up to about 10 seconds or even more, depending on the input parameters.
I want to keep the application responsive and allow users to change parameters without waiting. Therefore I would like to interrupt a running calculation based on user input on the Java side of the program. Hence, can I interrupt a forEach_root call?
I already tried some solutions but they either do not work or do not fully satisfy me:
Add a variable containing a cancel flag to the RenderScript and check its status in root: does not work because I cannot change variables using set while forEach_root is running (they are synchronized, I guess for good reasons).
Split the image up into multiple tiles: This is a possible solution and currently the one I favor the most, yet it is only a workaround because calculating a single tile might also take several seconds.
Since I am new to renderscript I am wondering whether there are some other solutions which I was not aware of.
Unfortunately, there is no simple way to cancel a running kernel for RenderScript. I think that your tiling approach is probably the best solution today, which you should be able to implement using http://developer.android.com/reference/android/renderscript/Script.LaunchOptions.html when you begin kernel execution.
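A rough sketch of that tiling approach is below: it processes the bitmap in horizontal strips via Script.LaunchOptions and checks a cancellation flag between strips. The ScriptC_pendulum wrapper, the reflected forEach_root(Allocation, Script.LaunchOptions) overload and the strip height are assumptions based on a typical generated script class; the method is meant to live in whatever class drives the rendering.

import android.renderscript.Allocation;
import android.renderscript.Script;
import java.util.concurrent.atomic.AtomicBoolean;

// Launch the kernel strip by strip so the Java side gets a chance to cancel
// between strips instead of waiting for the whole bitmap.
void renderInStrips(ScriptC_pendulum script, Allocation out,
                    int width, int height, AtomicBoolean cancelled) {
    final int stripHeight = 32;                 // tune: smaller strips = more responsive
    for (int y = 0; y < height; y += stripHeight) {
        if (cancelled.get()) {
            return;                             // user changed parameters, abandon the rest
        }
        Script.LaunchOptions options = new Script.LaunchOptions();
        options.setX(0, width);
        options.setY(y, Math.min(y + stripHeight, height));
        script.forEach_root(out, options);      // only this strip is computed
    }
}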

Two Dimensional Vector from Accelerometer

I'm trying to make an Android application that uses a smartphone moved along on a flat surface (e.g. a desk) as a mouse. Since I want to emulate a mouse, I ignore the z-axis, and figure that the best way to utilize the accelerometer data would be to construct a two dimensional vector that I could then scale to the size of the screen.
I've read other answers on SO and I see that the integration method has a large error as t increases, but I'm not sure if this error is a factor considering the short duration and position change of mouse movements (How long is the average mouse movement? I'd assume less than 2 sec.).
How would I go about designing an algorithm that meets my needs? Is an integration-based algorithm sufficient?
Yes, accelerometer data have high error, which would create large errors if you tried to get absolute coordinates out of them. But a mouse needs no absolute coordinates; relative ones are entirely enough. Use your integration, no doubt about it.
"the integration method has a large error as t increases" - correct, but a user is really only interested in the last movement. So it will work as a mouse, and it will feel like a normal mouse. How good the mouse will be depends on the concrete device and the task. I am not at all sure about serious gaming, for example; you will have to do your own testing for that. But it would make a really poor tablet/pen simulator.
Be careful about ignoring the Z axis: notice that even for placing a point on a map, GPS uses all three coordinates for better precision. Movements will often not have a Z change equal to 0, and simply ignoring one of the coordinates, instead of projecting all three of them onto the two you really need, will cause greater errors. I am not sure you can afford that, and you simply needn't: it is NOT a heavy algorithm that devours much time and battery. And for the user, the possibility to move the device in the air could bring much convenience; not everybody wants to scratch their device against a table. So, COMPUTE the two coordinates from the three source ones, do not simply TAKE two of the source ones and ignore the third.
The problem will be elsewhere. When you use a mouse and error accumulates, you can lift the mouse, move it to another point, and start from there anew. You should provide something similar too, because your device will accumulate errors over time as well.
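Below is a rough sketch of the integration idea, using TYPE_LINEAR_ACCELERATION so gravity is already removed. The pixel scaling, the damping factor and the reset() call (the equivalent of lifting the mouse) are illustrative assumptions, and for brevity it reads the device's x/y axes directly rather than projecting all three axes onto the desk plane as the answer recommends.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Integrates linear acceleration into velocity and then into small relative
// position deltas, which is all a mouse needs.
public class MouseIntegrator implements SensorEventListener {
    private static final float PIXELS_PER_METER = 4000f; // assumed scaling
    private static final float DAMPING = 0.98f;          // bleeds off drift slowly

    private long lastTimestampNs = 0;
    private float vx = 0f, vy = 0f;                      // integrated velocity (m/s)

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (lastTimestampNs != 0) {
            float dt = (event.timestamp - lastTimestampNs) * 1e-9f; // ns -> s
            vx = (vx + event.values[0] * dt) * DAMPING;
            vy = (vy + event.values[1] * dt) * DAMPING;
            moveCursorBy(vx * dt * PIXELS_PER_METER,     // relative move, like a mouse
                         vy * dt * PIXELS_PER_METER);
        }
        lastTimestampNs = event.timestamp;
    }

    // Equivalent of lifting the mouse: forget the accumulated velocity/drift.
    public void reset() {
        vx = 0f;
        vy = 0f;
        lastTimestampNs = 0;
    }

    private void moveCursorBy(float dxPixels, float dyPixels) {
        // update the on-screen pointer position here
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}

Register the listener with SensorManager for a Sensor.TYPE_LINEAR_ACCELERATION sensor at SENSOR_DELAY_GAME or faster, and call reset() whenever the user signals a "lift".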

Detect physical gesture with accelerometer

In an Android app I'm making, I would like to detect when a user holding the phone in his hand makes a gesture like he would when throwing a frisbee. I have seen a couple of apps implementing this, but I can't find any example code or tutorial on the web.
It would be great to get some thoughts on how this could be done, and of course it would be even better with some example code or a link to a tutorial.
The accelerometer provides you with a stream of 3D vectors. While your phone is held in the hand, the vector's direction is opposite to the earth's gravity pull and its magnitude is the same (this way you can determine the phone's orientation).
If the user lets it fall, the vector's magnitude will go to 0 (the same effect as weightlessness on a space station).
If the user makes some gesture without throwing the phone, the direction will shift and the amplitude will rise, then fall, and then rise again (when the user stops the movement). To find out what this looks like, you can do some research by recording accelerometer data while performing the desired gestures.
Keep in mind that the accelerometer is pretty noisy; you will have to do some averaging over nearby values to get meaningful results.
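A small sketch of that averaging step, assuming a simple moving-average window (the window size and callback name are illustrative); it also computes the vector magnitude whose rest / free-fall / gesture pattern is described above:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Smooths raw accelerometer samples with a moving average and reports the
// smoothed vector plus its magnitude for recording or matching gestures.
public class SmoothedAccelerometer implements SensorEventListener {
    private static final int WINDOW = 8;                  // assumed window size
    private final float[][] samples = new float[WINDOW][3];
    private int filled = 0, next = 0;

    @Override
    public void onSensorChanged(SensorEvent event) {
        samples[next] = event.values.clone();
        next = (next + 1) % WINDOW;
        if (filled < WINDOW) filled++;

        float[] avg = new float[3];
        for (int i = 0; i < filled; i++) {
            avg[0] += samples[i][0];
            avg[1] += samples[i][1];
            avg[2] += samples[i][2];
        }
        avg[0] /= filled; avg[1] /= filled; avg[2] /= filled;

        // Magnitude is about 9.8 m/s^2 while resting in the hand, near 0 in free
        // fall, and rises/falls/rises again during a throwing-style gesture.
        double magnitude = Math.sqrt(avg[0] * avg[0] + avg[1] * avg[1] + avg[2] * avg[2]);
        onSmoothedSample(avg, magnitude);                 // feed your recorder or gesture matcher
    }

    protected void onSmoothedSample(float[] smoothed, double magnitude) { }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}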
I think that one workable approach to matching gestures would be invariant moments (like the Hu moments used in image recognition): the accelerometer vector over time defines a 4-dimensional space, and you will need a set of scaling/rotation invariant moments. Designing such a set is not easy, but computing it is not complicated.
After you have your moments, you may use standard techniques for matching vectors to clusters (see the "moments" and "cluster" modules from our javaocr project: http://javaocr.svn.sourceforge.net/viewvc/javaocr/trunk/plugins/).
PS: you may get away with just speed over time, which produces a 2-dimensional space and can be analysed with javaocr on the spot.
Not exactly what you are looking for:
Store orientation to an array - and compare
Tracking orientation works well. Perhaps you can do something similar with the accelerometer data (without any integration).
A similar question is Drawing in air with Android phone.
I am curious what other answers you will get.

Recognize numbers using gestures

I want to recognize numbers drawn as gestures in my code. I have recognized them using the gesture library. Is there any possibility to recognize numbers perfectly?
Please suggest any sample code.
What do you mean by perfectly? As in successfully detect the number the user intended to gesture 100% of the time? As long as your users are human, this isn't possible. 4 can look like 9, 1 can look like 7, and depending on how quickly they swipe, what started out as a 0 can end up looking like a 6 (or vice versa). Every individual gestures differently than everyone else, and every time you gesture a 4, it's going to look just a little more or less different from your 9, as well as all your other 4's.
One possible solution is to have your app contain a "learning mode" which asks the user to gesture out specific digits several times, so that you can pick up on patterns (where they start, where they stop, how many swipes are included, how big they are) and use those to narrow things down when the app is actually used. Sort of like a good spam filter: it won't get you a 100% detection rate, but it'll definitely get you a lot closer than not having a data set to work from.
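A minimal sketch of such a learning mode on top of the stock GestureLibrary, storing several user-drawn samples per digit in a per-user library; the file name, sample handling and score threshold are assumptions:

import android.content.Context;
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.Prediction;
import java.util.ArrayList;

// Learning mode: collect several samples of each digit from this user, then
// recognize against that personal library instead of a one-size-fits-all set.
public class DigitTrainer {
    private final GestureLibrary userLibrary;

    public DigitTrainer(Context context) {
        // per-user library stored in the app's private storage (assumed file name)
        userLibrary = GestureLibraries.fromPrivateFile(context, "user_digits");
        userLibrary.load();
    }

    // Call this a few times per digit with freshly drawn gestures; multiple
    // samples may share the same entry name.
    public void addSample(int digit, Gesture gesture) {
        userLibrary.addGesture(String.valueOf(digit), gesture);
        userLibrary.save();
    }

    // Returns the best-matching digit, or -1 if nothing scores above the threshold.
    public int recognize(Gesture gesture) {
        ArrayList<Prediction> predictions = userLibrary.recognize(gesture);
        if (predictions.isEmpty() || predictions.get(0).score < 2.0) { // assumed threshold
            return -1;
        }
        return Integer.parseInt(predictions.get(0).name);
    }
}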
