I have made a falling ball in libGDX with Box2D.
On my PC the ball falls as it should, with clean animation.
But when I run the application on my Samsung Galaxy S, it suddenly seems to run very slowly (as if the FPS were 1), even though the FPS printed to logcat is around 60 (using FPSLogger.log()).
So it seems it's not the drawing speed holding it back? But what is it? Is this a bug in libGDX's Box2D?
The timestep is 1/60 and there are no special threads involved (this is mainly a test, and all the code is basically in the render() function).
OK, so I figured it out...
My problem was a lack of understanding of how game loops and delta times should be used,
resulting in my game speed depending on the frame rate (on the PC it was around 2000).
If anyone has this problem or something similar, I recommend Game Loops and Fix Your Timestep; a sketch of the idea is below.
(P.S. I recommend them anyway.)
Thanks to kalle_h for helping me find these :)
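For the record, here is a minimal sketch of the fixed-timestep pattern those articles describe, assuming a libGDX ApplicationAdapter with a Box2D World (the class and field names are mine, for illustration only):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.World;

public class FallingBallTest extends ApplicationAdapter {
    private static final float STEP = 1 / 60f;     // fixed physics timestep
    private static final float MAX_FRAME = 0.25f;  // clamp huge frames to avoid a death spiral
    private World world;
    private float accumulator;

    @Override
    public void create() {
        world = new World(new Vector2(0, -9.8f), true);
    }

    @Override
    public void render() {
        // Accumulate the real elapsed time, then consume it in fixed slices.
        float frameTime = Math.min(Gdx.graphics.getDeltaTime(), MAX_FRAME);
        accumulator += frameTime;
        while (accumulator >= STEP) {
            world.step(STEP, 6, 2);
            accumulator -= STEP;
        }
        // ...drawing goes here...
    }
}

The accumulator decouples the simulation speed from the render rate, so the ball falls at the same speed at 60 FPS on the phone and at ~2000 FPS on the PC.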
I have an object displayed on my Android device in AR using Kudan markerless tracking, but when I rotate my phone so the object is off screen and then back again, the object is either not there anymore or has scaled to an undesirable size. Quite simply, once placed, I want the object to continue to exist as if it were really in the world. Like AR is supposed to do, right?
I have just started with Kudan and I'm running the Markerless Unity tutorial, but it doesn't detail any further settings that would make the object seem as if it's in real-world space. Currently it only seems vaguely real if you don't move the camera much, and even then the object is quite jittery. Any tips? Thanks.
After some experimentation, there seem to be a number of other issues with markerless Kudan:
1/ Very erratic frame rates, going from 0 to 60 FPS with just one object, even after I have halved the screen resolution. There seems to be no reason why the FPS drops or increases.
2/ Occasionally very long freezes of 15 seconds or more.
3/ Markerless objects always seem to drift steadily closer to the camera, as if gravitating toward it. They eventually end up inside/on top of the camera.
4/ They never look at all like they are in the real world; they are always shaking and moving around.
5/ If I keep the camera very still and wave my hand around in front of it, this actually pushes the object around the screen. Why on earth would I want that to happen as default behavior? Is that a bug? Surely it should only move when the camera moves/rotates? Can someone explain why this is happening?
Am I doing something wrong, or is this technology still a long way from being usable?
1) Frame rate depends on a lot of things.
If you have an old or cheap phone, it could be that your processor simply isn't up to the task.
If you're in an environment that is poorly lit or otherwise difficult to track, then the tracker has to do more work, and consequently there is more load on the processor.
Random frame-rate drops in Unity are a problem in many games/apps, simply because of the nature of Unity.
2) "Freezes" are essentially just the frame rate dropping to 0. See 1).
3) That simply doesn't happen in any of Kudan's demos, so something else must be going on here, possibly for the reasons mentioned in 1).
4) I don't really know what you mean by "shaking"; it isn't something I've seen all that much.
5) Markerless tracking works by tracking the camera image. If you wave your hand in front of the camera, your hand becomes part of the tracked image; if you then move your hand away, the tracker attempts to adjust for the change in its "environment" and moves the object in relation to your hand.
I'm developing a game using AndEngine. In my scene I have only 3 sprites moving in the same direction. The problem is that the game sometimes hangs after less than a second, and sometimes not. The interface does not respond, and logcat shows no problems at all. It looks like it hangs somewhere inside AndEngine, but I can't figure it out. I managed to track it down by stepping over the onManagedUpdate method of the Entity class, on the line
entities.get(i).onUpdate(pSecondsElapsed);
When I step over that line it just hangs, and I can't step into it.
Has anybody faced such problems? What could it be?
I found the bug: the sprite animation rate was generated randomly, and sometimes it got a negative value. I'm really sorry for such a vague question; it's just that the project is large enough that isolating the problematic code took a lot of time.
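For anyone who hits the same thing, the fix is just to keep the randomly generated frame duration strictly positive. A minimal sketch (the names and bounds are hypothetical, not AndEngine API):

private static final long MIN_FRAME_DURATION_MS = 16;

// Generate a jittered per-frame duration that can never be zero or negative,
// which is what made the update loop hang.
long randomFrameDuration(java.util.Random random, long baseMs, long jitterMs) {
    long duration = baseMs + (long) ((random.nextDouble() * 2 - 1) * jitterMs);
    return Math.max(MIN_FRAME_DURATION_MS, duration);
}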
I'm developing an engine and a game at the same time in C++, and I'm using Box2D for the physics back end. I'm testing on different Android devices, and on 2 out of 3 devices the game runs fine and so do the physics. However, on my Galaxy Tab 10.1 I sporadically get a sort of "stutter". Here is a YouTube video demonstrating it:
http://www.youtube.com/watch?v=DSbd8vX9FC0
The first device the game is running on is an Xperia Play; the second is a Galaxy Tab 10.1. Needless to say, the Galaxy Tab has much better hardware than the Xperia Play, yet Box2D lags at random intervals, for random lengths of time. The code on both machines is exactly the same. Also, the rest of the engine/game is not actually lagging; the entire time, it runs at a solid 60 FPS. So this "stuttering" seems to be some kind of delay or glitch in actually reading values from Box2D.
The sprites you see moving check at render time whether they have an attached physical body, and set their positional values from the world position of that body. So it seems to be in this specific process that Box2D gets out of sync with the rest of the application. Quite odd. I realize it's a long shot, but I figured I'd post it here anyway to see if anyone has ideas, since I'm totally stumped. Thanks for any input in advance!
Oh, P.S.: I am using a fixed time step, since that seems to be the most commonly suggested solution for things like this. I moved to a fixed time step while developing on my desktop, where I ran into a similar but more severe issue, and the fixed step was the solution. Also, as I said, the game runs at a steady 60 FPS, which is controlled by a low-latency timer, so I doubt simple lag is the issue. Thanks again!
As I mentioned in the comments here, this came down to a timer resolution issue. I was using a timer class that was supposed to access the highest-resolution system timer in a cross-platform way. Everything worked great, except on Android, where some versions worked and some did not. The Galaxy Tab 10.1 was one such case.
I ended up rewriting my getSystemTime() method to use a new addition to C++11, std::chrono::high_resolution_clock. This also worked great (everywhere but Android)... except it has yet to be implemented in any NDK for Android. It is supposed to be implemented in version 5 of the CrystaX NDK R7, which at the time of this post is 80% complete.
I did some research into various methods of accessing the system time, or anything I could base a reliable timer on from the NDK side, but what it comes down to is that these methods are not supported on all platforms. I went through the painful process of writing my own engine from scratch simply so that I could support every version of Android, so betting on methods that are inconsistently implemented is nonsensical.
The only sensible solution for anyone facing this problem, in my opinion, is to abandon the idea of implementing such code on the NDK side. I'm going to do the timing on the Java end instead, since so far, in all my tests, that has been sufficiently reliable across all the devices I've tested on. More on that here:
http://www.codeproject.com/Articles/189515/Androng-a-Pong-clone-for-Android#Gettinghigh-resolutiontimingfromAndroid7
Update
I have now implemented my proposed solution (doing the timing on the Java side) and it works. I also discovered that handling any relatively large number, regardless of data type (such as the nanosecond count returned by the monotonic clock), on the NDK side also causes serious lag on some versions of Android. I've therefore optimized this as much as possible by passing around a pointer to the system time, to make sure we're not passing by copy.
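Roughly, the Java side now looks something like this minimal sketch (nativeUpdate is a placeholder for the real JNI entry point into my engine, which will differ):

public class GameLoop implements Runnable {
    // Placeholder native method; the real engine call will differ.
    private static native void nativeUpdate(float deltaSeconds);

    private long lastNanos = System.nanoTime();

    @Override
    public void run() {
        while (true) {
            long now = System.nanoTime();           // monotonic, high resolution
            float delta = (now - lastNanos) / 1e9f; // convert once, on the Java side
            lastNanos = now;
            nativeUpdate(delta);                    // only a small float crosses into native code
        }
    }
}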
One last thing: my statement that calling the monotonic clock from the NDK side is unreliable appears, it would seem, to be false. From the Android docs on System.nanoTime():
...and System.nanoTime(). This clock is guaranteed to be monotonic, and is the recommended basis for the general purpose interval timing of user interface events, performance measurements, and anything else that does not need to measure elapsed time during device sleep.
So it would seem, if this can be trusted, that calling the clock is reliable. But as mentioned, other issues then arise, like handling the allocation and disposal of the massive number that results, which alone nearly cut my frame rate in half on the Galaxy Tab 10.1 with Android 3.2. Ultimate conclusion: supporting all Android devices equally is either damn near or flat-out impossible, and using native code seems to make it worse.
I am very new to game development, and you seem a lot more experienced, so it may be silly to ask, but are you using delta time to update your world? Although you say you have a constant frame rate of 60 FPS, maybe your frame counter calculates something wrong, and you should use delta time to skip some frames when the FPS is low, so your world doesn't "stay behind". I'm pretty sure you're familiar with this, but I think a good example is here: DeltaTimeExample, although it is a C implementation. If you need it, I can paste some code from my Android projects showing how I use delta time, which I developed following this book: Beginning Android Games.
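In case it helps, a minimal sketch of what I mean (the position and speed fields are hypothetical):

private long lastNanos = System.nanoTime();
private float ballX, speedX = 100f; // hypothetical object state

void update() {
    long now = System.nanoTime();
    float delta = (now - lastNanos) / 1e9f; // seconds since the last frame
    lastNanos = now;
    // Movement scales with real elapsed time, so speed is the same at any FPS.
    ballX += speedX * delta;
}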
This is my first question. I searched on Google as always; round two was searching directly on SO, but I still couldn't find an exact answer.
I'm going to write a 3D graphics engine for games I want to make in the future for the Android platform. I played Doom recently on my mobile, on a break in high school, and when I lifted my head after being killed by a Baron of Hell, I saw more than twenty people struggling to watch me play, followed by a loud "AAAAAAAW!" when they saw me die. My jaw dropped to the floor. None of them suspected it was a game from 1993.
But let's get back to the point. If you wanted to write "find it out by yourself", you can stop typing now. I can't check this on my own, for two reasons. First, I own only one mid-cost device (an HTC Wildfire) that I can test my engine on. The second, more important reason is time: I don't have time to write a whole OpenGL ES 2.0 graphics engine only to find out that my Wildfire can't even run it, or that it gets around 1 FPS when walking.
For raycasting to be more realistic it needs a few extra calculations, which don't matter to me in OpenGL, because there I just set vertices and indices and it goes. I love the way Doom levels are designed, but I wanted to add the ability to move your head up and down (rotation on the X-axis) to look around and shoot more precisely, and that makes the raycasting calculations more complex (not for me, for the CPU). I know the Wildfire doesn't have a GPU, and anything based on OpenGL (even 2D) lags like hell even if I overclock my CPU to 748 MHz with the PERFORMANCE governor (it shuts down at higher values, and I have to take it out of my Casemate to stop it from overheating). But Doom runs perfectly, without any lag, even if I underclock to 448 MHz. Is this only because of the lo-res textures and non-complex levels, or because of raycasting?
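For context, here is roughly what one frame of such a renderer has to do on the CPU: one ray per screen column, plus a sheared horizon for the up/down look (castRay, drawColumn, and the variables are hypothetical helpers, just to show where the cost comes from):

for (int column = 0; column < screenWidth; column++) {
    double rayAngle = playerAngle - fov / 2 + fov * column / screenWidth;
    double distance = castRay(playerX, playerY, rayAngle);          // grid walk, pure CPU work
    double corrected = distance * Math.cos(rayAngle - playerAngle); // fisheye correction
    int wallHeight = (int) (screenHeight / corrected);
    int horizon = screenHeight / 2 + pitchOffset;                   // shear the horizon to look up/down
    drawColumn(column, horizon - wallHeight / 2, horizon + wallHeight / 2);
}

So the cost grows with the horizontal resolution and the map complexity, while lo-res textures keep the per-column fill cheap, which is probably a large part of why such renderers run well on weak hardware.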
And please don't flame me for going back to something as old as a raycasting engine. Today's mobile devices, or smartphones, call them what you want, that don't cost $1k are at the stage computers were at 18 years ago, when Doom was released. There will always be a group of such devices. And even if I don't make my billion on this, I will have something to play while travelling.
I've got some pretty nice ideas for games like this, and I want to target low-cost devices, since they don't have anything more spectacular than simple logic games. As this is my first question, please correct anything I wrote wrongly, poorly, or just badly, and I'll rewrite it.
I've been developing a game for Android for the past few months, and it's just about finished. I recently acquired a Motorola Droid for testing purposes, as I developed the game using an HTC Incredible. On my Incredible I get a pretty solid 59 FPS throughout the game. On the Droid, the game becomes very choppy, averaging about 40 FPS. Both phones are running Android 2.2.
I looked up the tech specs, and here are the only differences I noted that might affect gameplay: a 1 GHz processor vs. 550 MHz, and 512 MB of RAM vs. 256 MB.
Just for giggles, I thought I would strip the game down to a very minimal state to see if my code was to blame. I stripped it down to the point where the only thing being done was drawing the main menu and moving various bitmaps around the screen. Not a hair over 45 FPS.
So, is this the approximate cap for the Motorola Droid? If so, my game is pretty simple and not CPU-intensive, so what can I do? There are thousands of other Android games that are much more demanding than mine, yet they seem to run very smoothly.
Is it the fact that I'm using Android's built-in Canvas and not OpenGL or some other alternative? Would anybody recommend doing that?
Could somebody enlighten me as to what my problem might be here?
Thanks in advance.
OpenGL ES is the way to go. Canvas is most likely implemented on top of OpenGL ES anyway, and, by the sounds of it, not very efficiently.
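If you do switch, here is a minimal sketch of the usual setup: a plain Activity with a GLSurfaceView, where your existing drawing code would move into onDrawFrame:

import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class GameActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        view.setRenderer(new GLSurfaceView.Renderer() {
            public void onSurfaceCreated(GL10 gl, EGLConfig config) { /* load textures here */ }
            public void onSurfaceChanged(GL10 gl, int w, int h) { gl.glViewport(0, 0, w, h); }
            public void onDrawFrame(GL10 gl) { /* draw sprites here instead of on a Canvas */ }
        });
        setContentView(view);
    }
}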
40 FPS is pretty good; jerkiness usually only becomes obvious when the frame rate falls below about 25 FPS, and anything above 15 FPS has traditionally been considered 'full motion'.
If you can see jitter, it may be the game pausing while the garbage collector kicks in. You can reduce this by reducing the number of objects you create, and you should be able to see it happening by using DDMS.
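The usual trick is to pre-allocate and reuse objects instead of creating them inside the game loop. A minimal sketch (the Bullet class and its reset() method are hypothetical):

import java.util.ArrayDeque;

class BulletPool {
    private final ArrayDeque<Bullet> free = new ArrayDeque<Bullet>();

    // Reuse an instance when one is available, so the GC has nothing to collect.
    Bullet obtain() {
        Bullet b = free.poll();
        return b != null ? b : new Bullet(); // allocate only when the pool is empty
    }

    void release(Bullet b) {
        b.reset();     // clear state before putting it back
        free.push(b);
    }
}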
Other than that, perhaps there is a glitch in how the frame rate is calculated?