I created a game with createjs, but it runs very slowly on a mobile.
I added a function to show FPS obtained with createjs.Ticker.getMeasuredFPS(). However, the FPS shown by the function is quite normal. I set the FPS to 60, and the result of getMeasuredFPS() is about 55-60, while the animation is laggy and the FPS shouldn't be so high (it might be 5-10).
How can I get the real FPS on the device?
How can I profile it on a mobile?
getMeasuredFPS() is well tested, and should be accurate. That said, there's always a chance your mobile browser is doing something tricky (running the code every tick, but not rendering). There's also a chance your code may not be updating properly.
Either way, it's worth looking at a couple of alternatives. You can check getMeasuredTickTime(), which will tell you the average time spent in each tick (see the docs for more info).
You should also take a look at profiling using devtools.
http://google.com?q=profile+mobile+chrome
I have trouble debugging a performance issue on a div with overflow:auto. As you can see in the image below (and read in the breakdown below that) it looks like the webview is wasting precious time by not delivering frames.
The weirdest thing is that the same content scrolls pretty smoothly on two older devices (Moto G and Moto X 2013, Lollipop 5.1) but shows noticeable frame drops on newer ones (Moto X 2014 on Marshmallow 6.0 and Pixel C on Nougat 7.0).
Between 990 and 995ms it does some compositing, rasterizing and gpu stuff. Then it sits idly for 15ms (that's almost an entire frame), updates the layer tree, then waits for another 18ms.
So we've missed ~2 frames by now even when it looks like it should have been able to deliver at least one.
Then it fires a scroll event and gets to chew on the JavaScript that is tied to that. That takes ~5ms and now it does some painting and compositing, then waits ANOTHER 10ms and updates the layer tree, then more waiting and then, finally, delivers its first new frame since 990ms.
This entire thing took 66ms but, according to the timeline, the webview used most of that time to sit on its ass. And this is not an exception, I'm seeing this pattern during the entire recording of the scroll.
When I look at a timeline taken from the Moto X 1st gen it looks like the webview wastes a lot less time and tries to deliver frames as often as possible. Sure, it's not doing 60fps all the time but at this point I'm happy it does 40 instead of 20 or even lower.
The obvious question here is "what the hell is happening?" - or maybe more accurate: "why are things (read: frames) NOT happening?"
PS: I've checked the Android System Webview version and all of the devices I tested this on have v52 installed. The only thing I can think of is that the OS is 'causing' this. Did something change since Marshmallow?
UPDATE
I've been able to solve most of the lag by deferring most of the onscroll logic to requestAnimationFrame and hiding content that is off-screen (with visibility:hidden). But this doesn't really explain why Chrome seems to just skip frames when it's not really doing anything. It seems to have to do with scrolling a large area of fairly complex content, but I'd expect to see that in the timeline in some form. Instead, it just shows large empty spaces that block frames from rendering..?
I'm working on a Google Cardboard game. I have 5-10 sprites, a terrain with 15-30 pieces of bush on it (single bushes, like grass), 15 trees (low poly, with <64 vertices), and a camera (provided by the GVR SDK).
In the Editor the framerate is good, but when I test it on my Galaxy S6 the FPS is as low as 10-20 (I've attached a script that displays the framerate on a Text).
Any optimization tips?
Here are the stats from the Unity Editor:
CPU: main 6.0ms
FPS: 160 (average)
Batches: 117
Tris: 6.1k
Verts: 3.3k
I believe you are calling methods like GameObject.Find() and gameObject.GetComponent() many times at runtime.
This is the most common issue among Unity developers.
Better not to use them at runtime at all! Only inside Awake() or Start() methods.
Make all the GetComponent<>() calls you need inside Awake() or Start() and assign the results to member variables. After that, just use those variables at runtime.
This will really improve your game's speed on mobile devices.
Make sure your device is not overheating, might seem obvious but you can't always tell when it is on your face (I also own an S6).
Also, make sure you are not in energy saver mode, sounds dumb but I fell for it already ;)
Among the huge number of things that can ruin the performance of a game on a smartphone are:
Make sure you don't have a script doing too much work in Update() (especially Instantiate()/Destroy())
Don't move static objects, just don't
Make sure you don't use high resolution textures (in my small experience > 512x512 is too much), that they are square, and that they have a resolution that is a power of two
As a side note, GetComponent calls can be an issue; the alternative was already posted by #Andrew: just use GetComponent() in the Start()/Awake() method and store the results to use them later on.
I am timing my OpenGL frame rates, with v-sync turned on, and notice my timings don't precisely match the frequency set by the monitor. That is, on my desktop I have a 60Hz refresh, but the FPS is stable at 59.88, whereas on my tablet it's also 60Hz but the measured rate can be 61-62 FPS. I'm curious as to what precisely causes these slight deviations.
These are the ideas I've had so far:
Dropped Frames: This is the obvious answer: I'm just missing some frames. This is however not the cause as I can verify I am not dropping frames and the drop in FPS would be higher if this happened. I calculate the time over 120 frames, so if 1 frame was lost the FPS would drop below 59.5 on the desktop.
Inaccurate timings: I use clock_gettime to get my timings. On Linux I know this is accurate enough (as I previously did nanosecond based timings with it, but here we could even live with +/- several hundred microseconds). On Android however I'm not sure of the accuracy of this.
API Oddity: I use glXSwapBuffers on the desktop and eglSwapBuffers on Android. There could be an oddity here, but I don't see how this could so subtly affect the frame rate.
Approximate Hz: This is my biggest guess that the video cards/monitor aren't actually running at 60Hz. This is probably tied to the exact speed of the monitor, and the video card frequency. This seems like something that could be concretely determined, but I don't know which tools can be used to do this. (Update: My current video mode in Linux shows 59.93Hz, so closer, but still not there)
If the answer is indeed #4 this is perhaps not the best exchange site for the question. But in all cases my ultimate goal is to figure out programmatically what the ideal refresh rate actually is. So I'm hoping somebody can confirm/deny my ideas, and possibly point me in the right direction toward getting the information I need.
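For reference, the measurement itself boils down to something like the following sketch (swap() here stands in for glXSwapBuffers()/eglSwapBuffers(), and the names are illustrative rather than my actual code): with v-sync on, each swap blocks until the next vblank, so averaging over many frames gives the effective refresh rate.

    // Sketch: estimate the effective refresh rate by averaging the interval
    // between vsynced buffer swaps. swap() stands in for glXSwapBuffers()
    // or eglSwapBuffers(); with v-sync enabled each call blocks until vblank.
    #include <ctime>
    #include <functional>

    static double nowSeconds()
    {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);   // monotonic, unaffected by wall-clock jumps
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    double measureRefreshHz(const std::function<void()>& swap, int frames = 120)
    {
        swap();                                // align the start of the timing to a vblank
        const double start = nowSeconds();
        for (int i = 0; i < frames; ++i)
            swap();                            // each iteration should take one refresh period
        const double elapsed = nowSeconds() - start;
        return frames / elapsed;               // e.g. ~59.93 rather than a nominal 60.00
    }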
I'm developing an engine and a game at the same time in C++ and I'm using box2D for the physics back end. I'm testing on different android devices and on 2 out of 3 devices, the game runs fine and so do the physics. However, on my galaxy tab 10.1 I'm sporadically getting a sort of "stutter". Here is a youtube video demonstrating:
http://www.youtube.com/watch?v=DSbd8vX9FC0
The first device the game is running on is an Xperia Play... the second device is a Galaxy Tab 10.1. Needless to say the Galaxy tab has much better hardware than the Xperia Play, yet Box2D is lagging at random intervals for random lengths of time. The code for both machines is exactly the same. Also, the rest of the engine/game is not actually lagging. The entire time, it's running at solid 60 fps. So this "stuttering" seems to be some kind of delay or glitch in actually reading values from box2D.
The sprites you see moving check to see if they have an attached physical body at render time and set their positional values based on the world position of the physical body. So it seems to be in this specific process that box2D is seemingly out of sync with the rest of the application. Quite odd. I realize it's a long shot but I figured I'd post it here anyway to see if anyone had ideas... since I'm totally stumped. Thanks for any input in advance!
Oh, P.S. I am using a fixed time step since that seems to be the most commonly suggested solution for things like this. I moved to a fixed time step while developing this on my desktop; I ran into a similar issue there, just more severe, and the fixed step was the solution. Also, like I said, the game is running steady at 60 fps, which is controlled by a low latency timer, so I doubt simple lag is the issue. Thanks again!
As I mentioned in the comments here, this came down to being a timer resolution issue. I was using a timer class which was supposed to access the highest resolution system timer, cross platform. Everything worked great, except when it came to Android, some versions worked and some versions it did not. The galaxy tab 10.1 was one such case.
I ended up re-writing my getSystemTime() method to use a new addition to C++11 called std::chrono::high_resolution_clock. This also worked great everywhere but Android: it has yet to be implemented in any NDK for Android. It is supposed to be implemented in version 5 of the CrystaX NDK R7, which at the time of this post is 80% complete.
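For reference, the chrono-based replacement is only a few lines; roughly this shape (a sketch rather than my original code, and using std::chrono::steady_clock, which is guaranteed monotonic, where high_resolution_clock is not):

    // Sketch of a millisecond timer built on <chrono>. steady_clock is
    // guaranteed monotonic; high_resolution_clock may alias a non-monotonic
    // clock on some standard library implementations.
    #include <chrono>
    #include <cstdint>

    uint64_t getSystemTimeMs()
    {
        using namespace std::chrono;
        return duration_cast<milliseconds>(
            steady_clock::now().time_since_epoch()).count();
    }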
I did some research into various methods of accessing the system time, or anything else I could base a reliable timer on from the NDK side, but what it comes down to is that these various methods are not supported on all platforms. I've gone through the painful process of writing my own engine from scratch simply so that I could support every version of Android, so betting on methods that are inconsistently implemented is nonsensical.
The only sensible solution for anyone facing this problem, in my opinion, is to simply abandon the idea of implementing such code on the NDK side. I'm going to do this on the Java end instead, since thus far it has been sufficiently reliable across all the devices I've tested on. More on that here:
http://www.codeproject.com/Articles/189515/Androng-a-Pong-clone-for-Android#Gettinghigh-resolutiontimingfromAndroid7
Update
I have now implemented my proposed solution of doing the timing on the Java side, and it has worked. I also discovered that handling any relatively large number, regardless of data type (such as the nanoseconds returned by the monotonic clock), on the NDK side also results in serious lagging on some versions of Android. As such I've optimized this as much as possible by passing around a pointer to the system time, to ensure we're not passing by copy.
One last thing: my statement that calling the monotonic clock from the NDK side is unreliable is, it would seem, false. From the Android docs on System.nanoTime():
...and System.nanoTime(). This clock is guaranteed to be monotonic, and is the recommended basis for the general purpose interval timing of user interface events, performance measurements, and anything else that does not need to measure elapsed time during device sleep.
So it would seem, if this can be trusted, that calling the clock is reliable; but, as mentioned, other issues then arise, like handling, allocating, and dumping the massive number that results, which alone nearly cut my framerate in half on the Galaxy Tab 10.1 with Android 3.2. Ultimate conclusion: supporting all Android devices equally is either damn near or flat-out impossible, and using native code seems to make it worse.
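For completeness, the NDK-side way to read that same clock is clock_gettime() with CLOCK_MONOTONIC (which, as far as I know, is the clock backing System.nanoTime()); a minimal sketch:

    // Minimal sketch: read the monotonic clock from native code, in
    // nanoseconds, as a counterpart to Java's System.nanoTime().
    #include <cstdint>
    #include <ctime>

    int64_t monotonicNanos()
    {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return static_cast<int64_t>(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
    }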
I am very new to game development, and you seem a lot more experienced, so it may be silly to ask, but are you using delta time to update your world? Although you say you have a constant frame rate of 60 fps, maybe your frame counter calculates something wrong, and you should use delta time to skip some frames when the FPS is low, or your world will seem to "stay behind". I am pretty sure that you are familiar with this, but I think a good example is here: DeltaTimeExample, although it is a C implementation. If you need, I can paste some code from my Android projects showing how I use delta time, which I've developed following this book: Beginning Android Games.
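If it helps, the core of the idea is just to scale every per-frame change by the time the previous frame actually took, so the world advances at the same real-time rate at 20 fps or 60 fps. A minimal sketch (the position and speed variables are only illustrative):

    // Minimal delta-time sketch: movement is scaled by the measured frame
    // time, so objects cover the same distance per real second regardless
    // of the frame rate.
    #include <chrono>

    void runGameLoop(bool& running)
    {
        using clock = std::chrono::steady_clock;
        auto last = clock::now();

        float x = 0.0f;                  // illustrative object position
        const float speed = 120.0f;      // units per second, not per frame

        while (running) {
            const auto now = clock::now();
            const float dt = std::chrono::duration<float>(now - last).count();  // seconds
            last = now;

            x += speed * dt;             // same real-time speed at any frame rate

            // updateWorld(dt); renderFrame(x); ...
        }
    }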
I've written a game for Android, and I've tested it on the Dev Phone 1. It works perfectly, the speed is just right. However, I'm sure phone CPUs are getting faster. They may already be faster than the dev phone.
How do I make sure that my game runs at the exact same speed no matter what the device or how fast it runs? Do you know of any techniques? Should I check some kind of timer at the top of the loop each time?
I guess I'm referring to frame rate - but mostly the speed at which my game runs through the main game loop.
Any theory or experience would be great! Thank you.
If you are targeting a certain frame rate, the basic idea is that you should have a timer or thread that executes your game's tick method at the desired interval. With timers the implementation is pretty trivial: just schedule a timer to execute at regular intervals. When using threads you have to put the thread to sleep between consecutive ticks if it runs faster than the desired frame rate.
However, this alone doesn't lead to the best possible results as the interval can vary a bit between the frames. There is a very good article on this issue: http://gafferongames.com/game-physics/fix-your-timestep/.
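To make that concrete, here is a rough sketch of such a loop (sketched in C++, but the structure is the same in Java): the simulation always advances in fixed steps, the accumulator lets it catch up when a frame runs long, and the thread sleeps so faster devices don't just spin. The article above refines this pattern further, including interpolating the rendering between steps. The tick() and render() calls are placeholders, not a real API.

    // Rough sketch of a frame-limited loop with a fixed simulation step.
    #include <chrono>
    #include <thread>

    void gameLoop(bool& running)
    {
        using clock = std::chrono::steady_clock;
        constexpr std::chrono::duration<double> step(1.0 / 60.0);   // fixed tick: 60 updates/second

        auto previous = clock::now();
        std::chrono::duration<double> accumulator(0);

        while (running) {
            const auto now = clock::now();
            accumulator += now - previous;
            previous = now;

            while (accumulator >= step) {   // catch up if a frame took too long
                // tick(step.count());      // game logic always advances by the same amount
                accumulator -= step;
            }
            // render();

            std::this_thread::sleep_for(std::chrono::milliseconds(1));  // yield instead of spinning
        }
    }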
Also, there are already slower and faster Android phones than the Dev Phone 1. So you have to prepare for both cases if you are targeting all Android devices. If your game is not that CPU heavy, it might be that you can achieve the desired frame rate on all devices. But if you don't limit the frame rate, your game will be too fast on the faster Android phones.