Efficient way of handling a percentage chance of an event programmatically - Android

I am developing a small game-like app. I have done this before and I'm still not sure whether I did it the correct way. What I need is a percentage chance of some event, for example a 10% chance of gaining an item after a win.
I have been using a random number generator on the server side each time; if the number is <= 10 the user gains the reward. But it still does not satisfy the 10% criterion across the whole user base.
I was also thinking about recording each user's turn number on the server side and rewarding an item every nth time, but I don't know if that's the right way to do it. I would like to hear your ideas and suggestions. I also wasn't sure whether Stack Overflow or another Stack Exchange community is the right place to post this; if not, please guide me in a comment and I'll move the question. Thanks.

There are two completely different things that you mention in your question. You have to decide which you want because a given algorithm cannot do both.
1. Every single time you try, there is a 10% chance. You might see it hit two times in a row and then it might not hit for 200 tries. But, in the long run, it hits 10% of the time.
2. You are guaranteeing that out of every 10 tries, it will hit exactly once, never more, never less.
The first one above works just fine with a proper random number pick and comparison. But you will get streaks of more hits than expected and fewer hits than expected. Over time, the average number of hits will be your 10%.
function getRandomTenPercentOutcome() {
    // Math.random() returns a value in [0, 1), so this is true about 10% of the time
    return Math.random() < 0.1;
}
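Since the question mentions Android and a server-side check, here is a minimal Java equivalent (just a sketch; the class and method names are illustrative):

import java.util.concurrent.ThreadLocalRandom;

class RewardChance {
    // True roughly 10% of the time. Individual streaks are still possible,
    // but over many calls the hit rate converges to 10%.
    static boolean getRandomTenPercentOutcome() {
        return ThreadLocalRandom.current().nextDouble() < 0.1;
    }
}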
The second one requires a more complicated implementation that combines a random generator with keeping track of recent events. The simplest way to implement a guaranteed 1 in 10 hit is to have the server create an array of 10 elements filled with zeroes and then randomly select one cell in the array and change it to a 1.
Then, as you need to pick a random outcome, you .shift() off the first item in the array and use that value. When the array becomes empty, you create a new one. This forces exactly 1 in 10 outcomes (starting from the beginning) to hit. Any given 10 outcomes might have 2 or 0 as you cross boundaries of creating a new array, but can't ever be further off than that.
You can make the array be either system-wide (one array for all users) or per-user (each user has their own outcome array). If you want each user to perceive that it is very close to 1 in 10 for them personally, then you would need to make a separate outcome array for each user.
The length of the array can be adjusted to control how much variance you want to allow. For example, you could create an array 50 long and randomly select 5 cells in the array. That would allow more variability within the 50, though would still force 5 hits within the 50.
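Here is a rough Java sketch of that "outcome bag" idea (the class and method names are illustrative, not from the answer above; it uses a shuffled queue in place of the JavaScript array and .shift()):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

// Guarantees exactly `hits` successes in every block of `size` outcomes.
class OutcomeBag {
    private final int size;
    private final int hits;
    private final Deque<Boolean> bag = new ArrayDeque<>();

    OutcomeBag(int size, int hits) {   // e.g. new OutcomeBag(10, 1) for 1 in 10
        this.size = size;
        this.hits = hits;
    }

    synchronized boolean nextOutcome() {
        if (bag.isEmpty()) {
            refill();                  // start a fresh block when the old one is used up
        }
        return bag.pollFirst();        // like Array.prototype.shift()
    }

    private void refill() {
        List<Boolean> block = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            block.add(i < hits);       // `hits` true values, the rest false
        }
        Collections.shuffle(block);    // random positions for the hits within the block
        bag.addAll(block);
    }
}

For the per-user variant you would keep one such bag per user, for example in a map keyed by user id, instead of a single system-wide instance.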
Both methods will average 10% in the long run. The first may have greater deviations from 10% over any given interval, but will be much more random. The second has much smaller deviations from 10% but some would say it is less random (which is not necessarily bad - depends upon your objective). Both are "fair".
I know from my son's gaming that he perceives things to be "unfair" when a streak of unlikely things happens, even though that streak is well within what you should occasionally expect from a series of random events. So, sometimes truly random may not seem fair to a participant, precisely because of the sporadic nature of truly random outcomes.

This question seems to have been asked for multiple languages in other Stack Overflow posts.
For example here it is for Java: Java: do something x percent of the time
Check out the link for some ideas, but remember that it will only tend towards 10% given a large sample; you can't expect exactly 10% after just a couple of calls.
I do agree rewarding an item every nth time is not the way to go.

Related

Is there a right way to programmatically prevent a brief wrong recognition (in an object detection app) from triggering an action?

Context
I'm building an app which performs real-time object detection through the camera module of the device.
Let's say I try to recognize an apple: most of the time the app will recognize an apple. However, sometimes the app will recognize the wrong fruit (say, a lemon) on a few camera frames.
Goal
As the recognition of a fruit triggers an action in my code, my goal is to programmatically prevent a brief wrong recognition from triggering an action, and to only take the majority result into account.
What I've tried
I tried this: if the same fruit is recognized for several frames in a row, I assume the result is the right one. But as my device processes image recognition several times per second, even a wrong guess can be recognized several times in a row, which leads to the wrong action.
Question
Are there any known techniques for avoiding this behavior?
I feel like you've already answered your own question. In general, the interpretation of a model's inference is its own tuning step. You know, for example, that in logistic regression tasks the threshold does NOT have to be 0.5. In fact, it's quite common to flex the threshold to see what the recall and precision are at various thresholds, and you can pick a threshold that works given your business/product problem. (Fraud detection might favor high recall if you never want to miss any fraud... or high precision if you don't want to annoy users with lots of false positives.)
In video this broad concept is extended to multiple frames, as you know. You now have to tune the hyperparameters "how many frames total?" and "how many frames must vote [apple]?"
If you are analyzing fruit going down a conveyor belt one by one, and you know each piece of fruit will be in frame for X seconds and you are shooting at 60 fps, maybe you want 60 * X frames. And maybe you want 90% of the frames to agree.
You'll want to visualize how often your detector "flips" detections so you can make a business/product judgement call on what your threshold ought to be.
This answer hasn't been very helpful in giving you a bright line rule here, but I hope it's helpful in suggesting that there is in fact NO bright line rule. You have to understand the problem to set the key hyperparameters:
1. For each frame, is top-1 acc sufficient, or do I need [.75] or higher confidence?
2. How many frames get to vote? Say [100].
3. How many correlated votes are necessary to trigger an actual signal? Maybe it's [85].
The above algo assumes you take a hardmax after step 1. Another option would be to just average all 100 frames and pick a threshold; that's kind of a soft-label version of the above algo.
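As a rough Java sketch of the hardmax-plus-voting version (all class names, window sizes and thresholds here are illustrative and need tuning against your own data):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Keeps the last `windowSize` per-frame top-1 labels and only reports a label
// once it has won at least `votesNeeded` of them.
class DetectionVoter {
    private final int windowSize;       // e.g. 100 frames
    private final int votesNeeded;      // e.g. 85 votes
    private final double minConfidence; // e.g. 0.75 top-1 confidence
    private final Deque<String> window = new ArrayDeque<>();

    DetectionVoter(int windowSize, int votesNeeded, double minConfidence) {
        this.windowSize = windowSize;
        this.votesNeeded = votesNeeded;
        this.minConfidence = minConfidence;
    }

    // Call once per frame with the top-1 label (non-null) and its confidence.
    // Returns the agreed label, or null while there is no stable majority.
    String onFrame(String label, double confidence) {
        window.addLast(confidence >= minConfidence ? label : "none");
        if (window.size() > windowSize) {
            window.pollFirst();
        }
        Map<String, Integer> votes = new HashMap<>();
        for (String l : window) {
            votes.merge(l, 1, Integer::sum);
        }
        for (Map.Entry<String, Integer> e : votes.entrySet()) {
            if (!e.getKey().equals("none") && e.getValue() >= votesNeeded) {
                return e.getKey();
            }
        }
        return null;
    }
}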

How to display an image on two devices?

I want to implement a feature like copying an image file from one device to another. During the image transfer, I need to update the UI simultaneously on both sides. For example, the image flies out of device A and then flies into device B. From the user's point of view, the image just moves from one screen to the other, and then the transfer is complete.
One possible way I'm thinking of so far is to display an animation during the image transfer, but I don't know how to display an image partially on screen A and partially on screen B. I hope someone can give me some hints. Thanks a lot.
The trick is to find the time difference between the two devices.
I wrote an app that performed synchronized playback of an audio file on multiple devices. To synchronize the devices, I had them ping a time server and make note of how much the device's clock differed from the server's clock. With this offset value, I was able to do a reasonably good job of synchronizing the playback. I'm glossing over a lot of the details (latency, variability, leap second, etc.), but this was the basic idea.
To synchronize the UI on both devices, the two devices need to know the difference between each other's clock. Once you have this value, you simply time the animation appropriately. I've only ever done it with a server, but if the two devices are talking to each other for the file transfer, perhaps you could have one device ask the other for its time and compute the offset.
Tip: compute the difference several times, then use standard deviation to select a good value. If you want to really study how this is done, check out how NTP does it: http://en.wikipedia.org/wiki/Network_Time_Protocol
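A rough Java sketch of the offset calculation (the LongSupplier stands in for whatever ping/time request you use; this sketch simply keeps the sample with the smallest round trip rather than the standard-deviation filtering suggested above):

import java.util.function.LongSupplier;

class ClockOffset {
    // Estimates how far the remote clock is from the local clock, in milliseconds.
    // A positive result means the remote clock is ahead of the local one.
    static long estimateOffsetMillis(LongSupplier remoteTimeMillis, int samples) {
        long bestRoundTrip = Long.MAX_VALUE;
        long bestOffset = 0;
        for (int i = 0; i < samples; i++) {
            long sent = System.currentTimeMillis();
            long remote = remoteTimeMillis.getAsLong(); // ask the server (or the other device) for its time
            long received = System.currentTimeMillis();
            long roundTrip = received - sent;
            // Assume the reply was produced halfway through the round trip.
            long offset = remote - (sent + roundTrip / 2);
            if (roundTrip < bestRoundTrip) {            // keep the least latency-distorted sample
                bestRoundTrip = roundTrip;
                bestOffset = offset;
            }
        }
        return bestOffset;
    }
}

Once both devices agree on an offset, you schedule the two halves of the animation at the same agreed-upon absolute time.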

Implementing a Pedometer: How to find a local peak?

I am trying to make a very simple Android pedometer, but so far it's failing pretty badly. I got some advice here and there on the internet, but nothing seems to be working.
I basically set up an acceleration sensor and get the values of the x, y and z axes. After that I calculate their distance from the origin, which is basically:
d = sqrt(x² + y² + z²), followed by the calculation of a moving average. My idea was that whenever I find a local peak I should count it as a step. The issue is, I have no idea how to find the local peak right away in order to count the step. I'm sorry if this seems like a simple problem, but I really have no idea how to go on from here.
Thank you very much.
I tried to implement this and the approach you take is subject to substantial measurement errors. You should just accept it. The reasons are:
- a phone can be in any location, not only the trousers' pocket
- phone accelerometers are not medically precise; they can deviate and drift even when held in exactly the same position in space
- a moving average is not the best known technique for this; a better one would use some sort of wavelet analysis
- one step has two local maxima and two local minima (if I remember correctly)
- there is no strict, globally accepted definition of a "step"; this is due to physiology, measurement limits and the various techniques used in the research field
Now to your question:
1. Plot the signal from the three axes you have (signal vs. time); this will help you dramatically.
2. Define a window of a fixed (or slightly varying) size; an adjustable window is required to detect people who walk slower, run or have a disability.
3. Every time you have a new measurement (the usual frequency is about 20-30 Hz), push it onto the tail of the window (your queue of signal measurements) and pop one from the head. This way you always have a queue with the last N measurements.
4. For every new measurement, recalculate your features and decide whether the window contains one (or two!) minima, and count that as a step.
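A rough Java sketch of that sliding-window idea, using a local maximum of the magnitude instead of the minima mentioned above (window size and threshold are illustrative and need tuning against the plotted data):

import java.util.ArrayDeque;
import java.util.Deque;

// Slides a window over the acceleration magnitude; the middle sample is counted
// as a step when it is above a threshold and no other sample in the window is larger.
class StepDetector {
    private static final int WINDOW = 25;         // roughly 1 s of samples at 25 Hz (tune this)
    private static final double THRESHOLD = 11.0; // just above gravity, in m/s^2 (tune this)
    private final Deque<Double> window = new ArrayDeque<>();
    private int steps = 0;

    // Call for every accelerometer event; ideally feed in smoothed values
    // (e.g. the moving average the question already computes).
    void onSample(double x, double y, double z) {
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        window.addLast(magnitude);
        if (window.size() < WINDOW) {
            return;                                // not enough history yet
        }
        Double[] w = window.toArray(new Double[0]);
        int mid = WINDOW / 2;
        boolean isPeak = w[mid] > THRESHOLD;
        for (int i = 0; i < WINDOW && isPeak; i++) {
            if (i != mid && w[i] > w[mid]) {
                isPeak = false;                    // something in the window is larger
            }
        }
        if (isPeak) {
            steps++;
        }
        window.pollFirst();                        // slide the window forward by one sample
    }

    int getSteps() {
        return steps;
    }
}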
good luck!

Recognize numbers using gestures

I want to recognize numbers drawn as gestures in code. I have already done recognition using the gesture library. Is there any possibility to recognize numbers perfectly?
Please suggest any sample code.
What do you mean by perfectly? As in successfully detecting the number the user intended to gesture 100% of the time? As long as your users are human, this isn't possible. A 4 can look like a 9, a 1 can look like a 7, and depending on how quickly they swipe, what started out as a 0 can end up looking like a 6 (or vice versa). Every individual gestures differently from everyone else, and every time you gesture a 4 it's going to look a little more or less like your 9, as well as a little different from all your other 4's.
One possible solution is to have your app contain a "learning mode" which asks the user to gesture a specific digit several times, so that you can pick up on patterns (where they start, where they stop, how many strokes are included, how big it is) and use those to narrow things down when the app is actually used. Sort of like a good spam filter: it won't get you a 100% detection rate, but it will definitely get you a lot closer than not having a data set to work from.
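A rough sketch of such a "learning mode" with Android's android.gesture package (the class name, the "digits" file name and the score threshold are illustrative; the gestures themselves would typically come from a GestureOverlayView):

import android.content.Context;
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.Prediction;
import java.util.ArrayList;

class DigitRecognizer {
    private final GestureLibrary library;

    DigitRecognizer(Context context) {
        // Per-user store of training samples in app-private storage.
        library = GestureLibraries.fromPrivateFile(context, "digits");
        library.load();
    }

    // "Learning mode": save several samples of the same digit under the same entry name.
    void learn(String digit, Gesture sample) {
        library.addGesture(digit, sample);
        library.save();
    }

    // Recognition: return the best match only if its score clears the threshold,
    // otherwise null so the caller can ask the user to try again.
    String recognize(Gesture gesture, double minScore) {
        ArrayList<Prediction> predictions = library.recognize(gesture);
        if (!predictions.isEmpty() && predictions.get(0).score >= minScore) {
            return predictions.get(0).name;
        }
        return null;
    }
}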

Android - Efficiently update counters in views

In the app I'm writing I have a bunch of stats which I want to display for the user.
The stats include when a specific module was last run, when it will be run next, when the last communication with the server was made and then the next one is going to be.
As well as this there is stuff like memory usage (simple memory usage, not measuring the actual usage).
The memory usage etc. can be updated every few seconds, so that's not a problem, but the times need to be updated at least every second (for the counters).
Since running it every second (or even with a 500 ms period) results in irregular updates/skipped seconds, I now run it with a 300 ms period.
I did notice however that my app began to lag when starting.
After some profiling it turns out that the views that need to resize are taking 70% of the time, and the string formatter (for formatting the counters) takes pretty much the rest.
Apart from the CPU being used, I see a lot of allocations; every few seconds I see a GC_CONCURRENT in the logcat.
Any tips on solving this efficiently?
Can you restructure it so that the views require less resizing? E.g. set the width of your element to match_parent or to a dp size that is bigger than the longest string.
I solved the problem by writing my own timer that sleeps in short increments and only updates the view when a full second has passed.
This way the fire interval will be [period, period+sleepTime) which is acceptable when you choose a short sleepTime.
I've also changed it so it says "5 minutes ago", and I have two timers: one that fires every minute and one that fires every second.
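A rough Java sketch of that timer (names are illustrative; on Android the update itself would be posted to the UI thread, e.g. via a Handler):

// Sleeps in short increments and only fires the update once a full second has
// passed, so the fire interval stays within [period, period + sleepTime).
class SecondTicker extends Thread {
    private static final long PERIOD_MS = 1000;
    private static final long SLEEP_MS = 50;   // a short sleep keeps the drift small
    private final Runnable updateUi;
    private volatile boolean running = true;

    SecondTicker(Runnable updateUi) {
        this.updateUi = updateUi;
    }

    @Override
    public void run() {
        long nextFire = System.currentTimeMillis() + PERIOD_MS;
        while (running) {
            if (System.currentTimeMillis() >= nextFire) {
                updateUi.run();                // e.g. handler.post(...) to touch the views
                nextFire += PERIOD_MS;
            }
            try {
                Thread.sleep(SLEEP_MS);
            } catch (InterruptedException e) {
                running = false;               // stop cleanly when interrupted
            }
        }
    }

    void shutdown() {
        running = false;
        interrupt();
    }
}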
