I just released an app, Avoid The Spikes!, on the Android Market.
Some friends of mine have downloaded the app on their phones/tablets, and I'm told it runs too fast. It runs just fine on my Android phone (an HTC Dream). Is there any way to tell how much faster it will run on certain Android devices, and how to adjust for it accordingly? I have searched high and low and have yet to find an answer.
I have seen your game on the Android Market and tried it.
You cannot actually determine the minimum and maximum speed you will get on the various existing hardware, because:
New hardware appears on the market every day.
On any given device, other factors may influence the execution speed of your app.
What you should do is make your game independent of the frame rate. To do so, avoid moving your objects a fixed distance each time you update their positions. Instead, move them a fixed distance per unit of time.
That is, each time you update an object's position, measure the time elapsed since the last update and calculate the distance to move accordingly.
This way, depending on the quality of the hardware, the objects will either jump or move smoothly, but at least they will do so at the same speed, and that speed is determined by you. It is much like playing an FPS game on a brand-new machine versus an old one: the speed of the motion is constant; only the frame rate differs. On the old machine you have the impression the objects jump, while on the new one they move gracefully, but it works in both cases (well, within certain limits for the old machine, of course). And a machine can never be too fast that way.
To give you some pseudo-code, for a paratrooper falling from the top to the bottom of the screen:
This is the loop to avoid:
while (playing) {
    update_positions();
    display();
}

update_positions() {
    paratrooper.y -= 10; // the paratrooper's altitude decreases by 10 on each update
}
On slow hardware it will take forever to cross the screen. On fast hardware it will be way too fast; you won't even see the paratrooper falling.
This is the proper way to do it:
while (playing) {
    update_positions();
    display();
}

update_positions() {
    time_elapsed = getSystemTimeInMS() - getLastUpdateTimeInMS();
    paratrooper.y -= 0.01 * time_elapsed; // the paratrooper's altitude decreases by 10 per second
    setLastUpdateTimeInMS(getSystemTimeInMS()); // this way the calculation time is included
}
On slow hardware, crossing the whole screen may take 15 frames, while on fast hardware it may take 45, but in every case your paratrooper falls at 10 dp per second.
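To make the pseudo-code above concrete, here is a minimal sketch in plain Java (the class and constant names are mine, not from any framework):

```java
// Frame-rate-independent movement: position changes in proportion to
// elapsed time, not to the number of update calls.
class Paratrooper {
    static final double SPEED_PER_MS = 0.01; // 10 units per second

    double y;
    long lastUpdateMs;

    Paratrooper(double startY, long nowMs) {
        this.y = startY;
        this.lastUpdateMs = nowMs;
    }

    // Move by a distance proportional to the time elapsed since the last call.
    void updatePosition(long nowMs) {
        long elapsed = nowMs - lastUpdateMs;
        y -= SPEED_PER_MS * elapsed;
        lastUpdateMs = nowMs; // includes our own calculation time next frame
    }
}
```

A game loop would call updatePosition() once per frame with the current system time; the fall speed stays at 10 units per second no matter how often the loop runs.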
Sounds like your app is a game? If so, you probably have a game loop that just runs as fast as possible. When I created my game, which involved time, I found this post about implementing game loops very useful: Game Loops.
Related
I have an OpenGL game for Android. It runs at a good 60fps when the screen is touched, but when I release my finger it drops back down to around 30fps. Does the touch/release event raise/lower a thread's priority, and if so, how can I replicate this to keep it at a constant 60fps? So far this only seems to be an issue on the Galaxy Note 2.
I'll assume you are using onDrawFrame and setRenderMode(RENDERMODE_CONTINUOUSLY).
Rates of 30 and 60FPS indicate that your implementation of onDrawFrame is being called in step with the device's screen refresh. Most displays refresh at 60Hz, giving you 60FPS.
It is likely that the Galaxy Note 2 has some power saving feature that limits screen refresh to 30Hz when there are no touches on screen. Check if there's any way to disable this feature.
AFAIK, OpenGL ES does not specify a standard screen refresh rate, so you will need a throttling function to ensure that your game runs (and feels) the same, i.e. at the same speed, despite differences in FPS.
Yes.
The best way to observe this phenomenon is to use systrace with the "freq" tag enabled. You probably need a rooted device, and you definitely need one on which systrace is enabled.
systrace will record changes in the clock frequency for various components. It varies by device, but you can usually get the per-core CPU clocks and GPU memory rate. You will likely see several of them drop significantly at the same time your frame rate drops.
The motivation for doing this is to reduce power requirements and extend battery life. The assumption is that, while your finger is in contact with the screen, you're actively doing something and the device should be as responsive as possible. After a brief period of time, the clocks will slow to a level appropriate for the current workload. The heuristics that determine how long to wait before slowing, and how much to slow down, are tuned for each device.
(This has caused some people to create a thread that just sits and spins on multi-core devices as a way to artificially prop up the CPU clock rate. Not recommended. See also this answer.)
The bottom line is that this isn't a simple matter of adjusting thread priorities. You have to choose between recognizing that the slowdown will happen and adapting to it (by making your game updates independent of the frame rate), or figuring out some way to fool the device into staying in a higher-power mode when you want smooth animation.
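One common way to adapt to clock slowdowns is a fixed-timestep accumulator: rendering runs at whatever rate the device allows, while game logic always advances in constant steps. A minimal sketch (names and the 16 ms step are illustrative):

```java
// Fixed-timestep accumulator: game logic advances in constant 16 ms steps
// no matter how irregular the rendering clock is.
class FixedStepLoop {
    static final long STEP_MS = 16;

    long accumulatorMs = 0;
    long steps = 0; // number of logic updates performed so far

    // Called once per rendered frame with the real elapsed time.
    void onFrame(long frameDeltaMs) {
        accumulatorMs += frameDeltaMs;
        while (accumulatorMs >= STEP_MS) {
            accumulatorMs -= STEP_MS;
            steps++; // advance the game state by exactly one fixed step here
        }
    }
}
```

At 30FPS each rendered frame simply triggers two logic steps instead of one, so the game plays at the same speed whether or not the device throttles.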
(For anyone who wants to play along at home: build a copy of Grafika and start the "Record GL app" activity. If you drag your finger around the screen all will be well, but if you leave it alone for a few seconds you may start to see the dropped-frame counter rising as the app falls behind. Seen on Nexus 5, Nexus 7 (2013), and others.)
I want to put some animations on my activities when they open and close. But so far, if I add them, it just makes the phone feel like it is lagging. And if I speed them up, you barely notice them.
What is the smallest time interval the average human can perceive? 50ms? 100ms?
How can I make an animation noticeable without taking away from the responsiveness of the application? Because obviously the animations themselves probably slow down the app a lot.
Maybe I'm asking a stupid question; if so, I'm sorry. But I think this is a fairly important aspect of designing a good app.
http://en.wikipedia.org/wiki/Frame_rate says:
"The human eye and its brain interface, the human visual system, can process 10 to 12 separate images per second, perceiving them individually."
The standard movie frame rate in 1896 was 16 fps. Today it is at least 24 fps.
I also remember a rate of 18 events per second; I'm not sure whether it was from MS-DOS or old TV.
A car driver's reaction time is estimated at about 0.2 s (in the case where the driver is prepared to react). This means that if something happens faster than that, a human has no time to move, despite seeing it.
You can base your design on these figures.
It's understood that a time step is necessary to run a game at the correct speed: fast hardware will adjust the speed to 30 or 60 fps; otherwise, the game runs as fast as the hardware can handle. Now, my game runs as expected on the PC, but when it is launched on the device (a Galaxy Ace), the body moves very slowly and even has a maximum speed it cannot exceed, whatever the amount passed to Body.applyLinearImpulse or applyForce. I've also set setLinearVelocity to a very high number, and the speed is always the same.
Could this be a bug in libgdx's Box2D, or a bug with my Galaxy Ace (Android 2.3)?
You are probably simulating Box2D bodies created with dimensions equal to pixel values. Box2D works in meters, however, so creating bodies 300 meters in size puts a very low ceiling on your entire simulation.
The recommended approach is to use an arbitrary ratio (e.g. 1m == 64px) and scale your Box2D system down: initialize and manipulate bodies using meter values converted from pixels.
This will allow for a wider range of motion and a higher ceiling on velocity. I had the same issue, and it took me some time to figure it out.
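A minimal sketch of the conversion helpers implied above, assuming the arbitrary 1m == 64px ratio (the class and method names are hypothetical):

```java
// Pixel <-> meter conversion for a Box2D world using the
// arbitrary ratio 1 meter == 64 pixels.
class Units {
    static final float PIXELS_PER_METER = 64f;

    // Convert a screen-space size or position into Box2D meters.
    static float toMeters(float pixels) {
        return pixels / PIXELS_PER_METER;
    }

    // Convert a Box2D value back into pixels for rendering.
    static float toPixels(float meters) {
        return meters * PIXELS_PER_METER;
    }
}
```

With this ratio, a 320px sprite becomes a 5m body, comfortably inside the scale Box2D is tuned to simulate.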
I am working on a bike computer app. I was hoping to work out the inclination of the slope using the accelerometer, but things are not working too well.
I have put in test code to get the sensor data. I am sampling at the UI rate and keeping a moving average over 128 samples, which is about 6 seconds' worth. With the phone in hand, the data is good and I can calculate a good angle relative to my calibration flat vector.
With the phone mounted on the bike, things are not at all good. I expected a fair bit of noise, but I was hoping that the large number of samples over the long time window would remove the vibration effects and general bike movements. Unfortunately this just is not working: the magnitude of the acceleration vector is not staying around the 9.8 mark but is dropping lower, which indicates to me that something is not right somewhere.
Here is a plot of the data from part of a test ride.
As you can see, when stationary at the start the magnitude is OK, but once I get going it drops. I am fairly sure the problem is vibration-related: I initially descend with heavy vibration, then climb, where the vibration is less and the magnitude gets back towards 9.8, but then I drop down quickly on a bad road and the magnitude ends up less than 3.
This is with a Sony Ericsson Xperia Active, which uses a BMA250 sensor; the datasheet suggests the sensor should be capable. My only theory for the cause of the problem is that the range is set to the 2g range and the vibration is pushing readings out of range, and this is causing my problems.
Has anyone seen anything like this?
Has anyone got any ideas on the cause of the problem?
Is there any way to change the sensitivity that I have not found?
Additional information.
OK, I logged the raw sensor data before my filtering. A very small portion is presented here.
The major axis is in green, and on the flat, as I believe it should be without the vibration, it should read about 8.5. There is no obvious clamping of the data, but I get more values below 8.5 than above it. Even if the sensor is set up for its most sensitive 2g range, the vibration looks like it should be OK: I have a maximum value here of just over 15 and a minimum of -10, well within a ±20 range, just not centered correctly on the 8.5 it should be.
I will dig out my other phone, which looks to have a slightly different sensor (a BMA150), and try with that, but unless it is perfect I think I will have to give up on the idea.
I suspect the accelerometer is not linear over such large g ranges. If so, and if there is any asymmetry, it will do what you see.
The solution for that is to pad the accelerometer mount a bit more (foam rubber, bungee cord, whatever), possibly mounting it on a heavier stage to filter the vibration more.
Or (not a good solution) try to model the error and compensate for it.
I used the same handset and by coincidence the same averaging interval of 6 seconds for an application a few years ago and I don't recall seeing the behaviour in the graph.
I'm wondering whether the issue is in the way the 6-second averages are being accumulated. One problem I had is that the sampling interval was not constant but depended on how busy the processor was. A sample is acquired in the specified time, but the calling of the event handler depends on the scheduler. When the processor is unloaded, sampling occurs at a constant frequency, but as the processor works harder the sampling frequency becomes slower and more erratic. You can write your app to keep processor load low while sampling.

What we did was sample for 6 seconds doing nothing else, then stop sampling and process the last sample set. This was only partially successful, as you can't control other apps running at the same time, and the scheduler shares processor resources across them all. On the Xperia Active I found it could occasionally go out to seconds between samples, which I attributed to garbage collection in one of the JVMs.

The solution for us was to time-stamp each sample, then run some quality checks over each sample set and discard those that failed. This is a poor solution, as defining what is good enough is imprecise, and when the user runs another app that uses a lot of resources most sample sets can be discarded, so the app needs additional logic to handle that.
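The time-stamp-and-quality-check approach described above could be sketched like this (the 50 ms gap threshold and all names are illustrative assumptions, not recommendations):

```java
import java.util.ArrayList;
import java.util.List;

// Collects time-stamped magnitude samples and only averages a set
// whose sampling intervals were regular enough to trust.
class SampleWindow {
    static final long MAX_GAP_NS = 50_000_000L; // reject sets with >50 ms gaps

    final List<Long> timestampsNs = new ArrayList<>();
    final List<Double> magnitudes = new ArrayList<>();

    void add(long timestampNs, double magnitude) {
        timestampsNs.add(timestampNs);
        magnitudes.add(magnitude);
    }

    // Returns the mean magnitude, or NaN if sampling was too erratic.
    double qualityCheckedAverage() {
        for (int i = 1; i < timestampsNs.size(); i++) {
            if (timestampsNs.get(i) - timestampsNs.get(i - 1) > MAX_GAP_NS) {
                return Double.NaN; // discard the whole set
            }
        }
        double sum = 0;
        for (double m : magnitudes) sum += m;
        return sum / magnitudes.size();
    }
}
```

On Android the timestamps would come from SensorEvent.timestamp, which is in nanoseconds and independent of when the scheduler actually delivers the event.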
The current Android API, unavailable on the Xperia Active, should eliminate this, as samples can be batched, as described at https://source.android.com/devices/sensors/hal-interface.html#batch_sensor_flags_sampling_period_maximum_report_latency .
If the algorithm assumed a particular number of samples rather than counting them, and the processor worked harder as the bike went faster (though I'm not sure why it would), it would produce something like the first graph, because magnitude goes down when the bike is going downhill and up when going uphill. That is a lot of speculation, but a 6-second average giving a magnitude of less than 3 m/s^2 looks implausible from my experience with this sensor.
My game uses too much battery. I don't know exactly how much it uses as compared to comparable games, but it uses too much. Players complain that it uses a lot, and a number of them note that it makes their device "run hot". I'm just starting to investigate this and wanted to ask some theoretical and practical questions to narrow the search space. This is mainly about the iOS version of my game, but probably many of the same issues affect the Android version. Sorry to ask many sub-questions, but they all seemed so interrelated I thought it best to keep them together.
Side notes: My game doesn't do network access (called out in several places as a big battery drain) and doesn't consume a lot of battery in the background; it's the foreground running that is the problem.
(1) I know there are APIs to read the battery level, so I can do some automated testing. My question here is: About how long (or perhaps: about how much battery drain) do I need to let the thing run to get a reliable reading? For instance, if it runs for 10 minutes is that reliable? If it drains 10% of the battery, is that reliable? Or is it better to run for more like an hour (or, say, see how long it takes the battery to drain 50%)? What I'm asking here is how sensitive/reliable the battery meter is, so I know how long each test run needs to be.
(2) I'm trying to understand what are the likely causes of the high battery use. Below I list some possible factors. Please help me understand which ones are the most likely culprits:
(2a) As with a lot of games, my game needs to draw the full screen on each frame. It runs at about 30 fps. I know that Apple says to "only refresh the screen as much as you need to", but I pretty much need to draw every frame. Actually, I could put some work into only drawing the parts of the screen that had changed, but in my case that will still be most of the screen. And in any case, even if I can localize the drawing to only part of the screen, I'm still making an OpenGL swap buffers call 30 times per second, so does it really matter that I've worked hard to draw a bit less?
(2b) As I draw the screen elements, there is a certain amount of floating point math that goes on (e.g., in computing texture UV coordinates), and some (less) double precision math that goes on. I don't know how expensive these are, battery-wise, as compared to similar integer operations. I could probably cache a lot of these values to not have to repeatedly compute them, if that was a likely win.
(2c) I do a certain amount of texture switching when rendering the scene. I had previously only been worried about this making the game too slow (it doesn't), but now I also wonder whether reducing texture switching would reduce battery use.
(2d) I'm not sure if this would be practical for me, but I have been reading about shaders and OpenCL, and I want to understand whether offloading some of the CPU processing to the GPU would likely save battery (in addition to presumably running faster for vector-type operations). Or would it perhaps use even more battery on the GPU than on the CPU?
I realize that I can narrow down which factors are at play by disabling certain parts of the game and doing iterative battery test runs (hence part (1) of the question). It's just that that disabling is not trivial and there are enough potential culprits that I thought I'd ask for general advice first.
Try reading this article:
Android Documents on optimization
What works well for me is reducing the need for garbage collection. For example, when programming for a desktop computer, you're (or I'm) used to defining variables inside loops when they are not needed outside the loop; on Android this causes massive garbage collection work (and I'm not talking about primitive vars, but big objects).
Try to avoid things like that.
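To illustrate the tip, here is a minimal sketch of reusing preallocated objects instead of allocating new ones inside a per-frame loop (all names are mine):

```java
// A small object that would otherwise be allocated every iteration.
class Particle {
    float x, y;
}

// Preallocates all Particle objects once, so the per-frame loop
// produces no garbage for the collector to chase.
class ParticlePool {
    final Particle[] pool;

    ParticlePool(int size) {
        pool = new Particle[size];
        for (int i = 0; i < size; i++) {
            pool[i] = new Particle();
        }
    }

    // Returns the reusable instance for slot i; nothing is allocated here.
    Particle obtain(int i) {
        return pool[i];
    }
}
```

Inside the game loop you then write `Particle p = pool.obtain(i);` instead of `new Particle()`, which avoids the GC pauses that make an app feel laggy.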
One little tip that really helped me get battery usage (and warmth of the device!) down was to throttle the FPS in my custom OpenGL engine.
Especially while the scene is static (e.g. in a turn-based game, or when the user taps pause), throttle the FPS down.
Or throttle if the user isn't responsive for more than 10 seconds, like a screensaver on a desktop PC. In the real world, users often get distracted while using mobile devices. Don't let your app drain the battery while your user figures out which subway station he's in ;)
Also, on the iPhone 60 FPS is sometimes the default; throttling this manually to 30 FPS is barely visible and saves you about half of the GPU cycles (and therefore a lot of battery!).
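A minimal sketch of such a throttle, computing how long to sleep after each frame to hit a target FPS (the class and method names are mine):

```java
// Caps the frame rate by sleeping away whatever is left of each
// frame's time budget after the actual rendering work is done.
class FpsThrottle {
    final long targetFrameMs;

    FpsThrottle(int targetFps) {
        this.targetFrameMs = 1000L / targetFps;
    }

    // How long to sleep after a frame whose work took frameWorkMs.
    long sleepTimeMs(long frameWorkMs) {
        return Math.max(0L, targetFrameMs - frameWorkMs);
    }
}
```

In the render callback you would measure how long the frame's work took and then Thread.sleep() for the returned number of milliseconds; frames that already overran their budget get no sleep at all.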