Android: How to find the frame rate of a device?

Frame rate: I'm referring to the rate at which the display changes, i.e. how often onDraw() is called and the canvas is redrawn.
Is there a default rate for all Android devices? Since this rate depends on the processing power of the device, how can I find out the frame rate of a device before starting to program for it?

This may be a follow-up to this question, where I suggested that having a redraw loop that just keeps drawing over and over again might be a bit excessive. There may be an API to find out the capabilities of the device's display, but if there is, I'm not aware of it. When you write your own event loop / thread function, you control the framerate by how often you call your draw method. For most purposes a refresh rate of around 30 is fine. If you're writing a fast action game that needs rapid animation, you may want to run as fast as you can; the more fps, the smoother it will be.
A typical event loop (thread run function) might look something like this:
// define the target fps
private static final int UPDATE_RATE = 30; // Frames per second (fps)

public void run() {
    while (running) { // volatile flag, set somewhere else to shut down
        long beginTimeMillis, timeTakenMillis, timeLeftMillis;

        // get the time before updates/draw
        beginTimeMillis = System.currentTimeMillis();

        // do the thread processing / draw
        performUpdates(); // move things if required
        draw();           // draw them on the screen

        // get the time after processing and calculate the difference
        timeTakenMillis = System.currentTimeMillis() - beginTimeMillis;

        // check how long there is until we reach the desired refresh rate
        timeLeftMillis = (1000L / UPDATE_RATE) - timeTakenMillis;

        // set some kind of minimum to prevent spinning
        if (timeLeftMillis < 5) {
            timeLeftMillis = 5; // Set a minimum
        }

        // sleep until the end of the current frame
        try {
            TimeUnit.MILLISECONDS.sleep(timeLeftMillis); // java.util.concurrent.TimeUnit
        } catch (InterruptedException ie) {
        }
    }
}
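For completeness, here is a minimal sketch of how such a loop thread might be started and stopped via the volatile flag mentioned in the code above; the GameLoop class and its hook methods are illustrative only, not part of the original answer:

public class GameLoop implements Runnable {
    private volatile boolean running;   // checked by run() each frame
    private Thread thread;

    public void start() {
        running = true;
        thread = new Thread(this, "game-loop");
        thread.start();
    }

    public void stop() {
        running = false;                // run() exits after finishing the current frame
        try {
            thread.join();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }

    @Override
    public void run() {
        // the update/draw/sleep loop shown above goes here
    }
}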

You can use the dumpsys tool provided by Android. To obtain information about the device's display, execute the command:
adb shell dumpsys display
The information about the frame rate of the device is provided in the "mPhys" attribute.
You will find something like:
mPhys=PhysicalDisplayInfo{1080x1920, 60.000004 fps, density 3.0, 480.0x480.0 dpi, secure true}
The frame rate of the device is in the second field; in my case it is 60.000004 fps.
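If you would rather read the same number from inside the app instead of over adb, the display's nominal refresh rate is also available programmatically. A minimal sketch, assuming you call it from an Activity (note this is the panel's nominal rate, not the rate your app actually achieves):

// Returns the default display's refresh rate, e.g. 60.000004 on the device above.
private float getDisplayRefreshRate() {
    android.view.Display display = getWindowManager().getDefaultDisplay();
    return display.getRefreshRate();
}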

You can't rely on a certain framerate. Android is a multitasking operating system. If there are threads running in the background doing heavy lifting, you might not be able to reach the framerate you want. Even if yours is the only active process, the framerate depends on your GPU and CPU and their clock speeds. Maybe the user has a hacked ROM that changes the clock to a custom value.
Some phones may also be locked to a certain framerate. The HTC EVO was locked to 30fps for the longest time, until custom ROMs removed that limitation; newer official EVO ROMs removed it as well.
I don't know what you're trying to do, but your best bet is to measure the time after each frame and use that delta for your animations. If you're trying to display the FPS, use a smoothed average.
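In code, that delta-time idea might look roughly like the sketch below: advance animations by the measured elapsed time and keep an exponentially smoothed average if you want to display an FPS number. The field and method names are illustrative only:

private long lastFrameTimeNanos = 0;
private float smoothedFps = 0f;

// Call once per frame from your draw loop.
private void onFrame() {
    long now = System.nanoTime();
    if (lastFrameTimeNanos != 0) {
        float deltaSeconds = (now - lastFrameTimeNanos) / 1000000000.0f; // nanoseconds to seconds

        // Advance animations by elapsed time, not by a fixed per-frame step.
        updateAnimations(deltaSeconds);

        // Exponentially smoothed average for an on-screen FPS readout.
        float instantFps = 1f / deltaSeconds;
        smoothedFps = (smoothedFps == 0f) ? instantFps : 0.9f * smoothedFps + 0.1f * instantFps;
    }
    lastFrameTimeNanos = now;
}

private void updateAnimations(float deltaSeconds) {
    // move each object by velocity * deltaSeconds
}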

There is a simple, slightly tricky way to find the device's FPS at runtime.
Just call the following method:
long oneSecondLater = 0;
int FPS = 0;
int counter = 0;
ValueAnimator v_animator;

private void logFPS() {
    oneSecondLater = System.currentTimeMillis() + 1000;
    v_animator = ValueAnimator.ofFloat(0.0f, 1.0f);
    v_animator.setRepeatCount(ValueAnimator.INFINITE);
    v_animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
        @Override
        public void onAnimationUpdate(ValueAnimator animation) {
            FPS++;
            if (System.currentTimeMillis() > oneSecondLater) {
                counter++;
                if (counter > 1) // ignore the first onAnimationUpdate call (it is not correct)
                    Log.i("checkFPS", "FPS:" + FPS);
                FPS = 0;
                oneSecondLater = System.currentTimeMillis() + 1000;
            }
        }
    });
    v_animator.start();
}
I log the FPS every second and watch the output in Logcat.
It works because, for a ValueAnimator, the onAnimationUpdate() method is called at the device's display rate.

This might be an old question, but for future reference, I found a library named Takt:
https://github.com/wasabeef/Takt.
Takt is an Android library for measuring FPS using Choreographer.
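If you don't want to pull in a library, the same Choreographer-based measurement can be sketched in a few lines (requires API 16+ and must be started on a thread with a Looper, e.g. the main thread; the class and field names are illustrative):

import android.util.Log;
import android.view.Choreographer;

public class FpsMeter implements Choreographer.FrameCallback {
    private long windowStartNanos = 0;
    private int frames = 0;

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (windowStartNanos == 0) {
            windowStartNanos = frameTimeNanos;
        }
        frames++;
        if (frameTimeNanos - windowStartNanos >= 1000000000L) { // one second elapsed
            Log.i("checkFPS", "FPS: " + frames);
            frames = 0;
            windowStartNanos = frameTimeNanos;
        }
        Choreographer.getInstance().postFrameCallback(this);    // keep measuring
    }
}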

Related

How to avoid sluggish d3 force layout on Android?

We're using a d3.layout.force on a web app, and I've been investigating a bug report that it is sluggish on Android: it feels like the nodes are in oil, compared to how it works on desktop browsers, or iOS.
(By the way, we only ever have between 4 and 9 nodes, and the sluggishness does not feel different between 4 and 9.)
We set size(), linkDistance() and charge(); so we're using the defaults for friction, theta, alpha, gravity, etc. I experimented with these to try and reproduce the effect on desktop, but couldn't. (friction(0.67), instead of default of 0.9, was closest, but still felt different, somehow.)
I then set up an FPS meter (based on calls to the tick() function). We get 60fps on desktop, and it seems to be in the 40s and 50s on an iPad. But on Android Chrome (on a Nexus 7) it seems capped at 30fps, and is often half that. Android Firefox was in the 20s normally, but sometimes got into the 30s.
So, is it a reasonable hypothesis that Android devices are just slower? Could there be a cap of 30fps in Android Chrome?
Then how can I fix this? I believe d3.js uses requestAnimationFrame(). Often animation libraries use the time between calls to requestAnimationFrame() to decide how far to move objects (so when the CPU gets overloaded the animation becomes jerkier, but takes the same amount of time to complete). But it appears that d3.js does not do this, and moves everything the same amount per tick, not by elapsed time. What can I do about this?
(Ideally I'd like a solution based on how slow/fast the machine is, rather than having to sniff the browser.)
Curiously, adding more calls to force.tick() in my own requestAnimationFrame() handler (see https://stackoverflow.com/a/26189110/841830) does increase the FPS. That suggests it is not CPU bound, but is instead a limit that Android is enforcing (perhaps to save battery?).
Here is the code I'm using, which tries to adapt dynamically to the current fps; it ain't beautiful but it seems to get the job done on my test Android devices, without changing the behaviour on iOS or desktop.
First, where you set up the force layout:
var ticksPerRender = 0;
var animStartTime, animFrameCount;

force.on('start', function start() {
    animStartTime = new Date();
    animFrameCount = 0;
});

requestAnimationFrame(function render() {
    for (var i = 0; i < ticksPerRender; i++) force.tick();
    if (force.alpha() > 0) requestAnimationFrame(render);
});
The above does two things:
sets up the fps counter
sets up our own animation callback, which does nothing by default (ticksPerRender starts off as zero).
Then at the end of your tick handler:
++animFrameCount;
if (animFrameCount >= 15) { // wait for 15 frames, to get an accurate count
    var now = new Date();
    var fps = (animFrameCount / (now - animStartTime)) * 1000;
    if (fps < 30) {
        ticksPerRender++;
        animStartTime = now; animFrameCount = 0; // reset the fps counter
    }
    if (fps > 60 && ticksPerRender >= 1) {
        ticksPerRender--;
        animStartTime = now; animFrameCount = 0; // reset the fps counter
    }
}
This says that if the FPS is low (below 30), do an extra call to tick() on each animation frame. And if it goes high (over 60), remove that extra call.
Each time ticksPerRender is changed, we measure the FPS from scratch.

Android - Camera flash light blinking in wrong intervals in different devices

I am developing an app in which I want to blink the flash light at a specific interval.
Below are the steps I have followed:
1) Set the Timer for the specific interval.
2) In the run() method, turn the flash on and off.
But the blink interval differs between devices. The timer period is the same for all devices, and I have also put a Log in between and am getting the same values, yet the problem is still there.
Is it a hardware issue, since the hardware differs between devices? I have also tested on an iPhone 5s (by porting the same code to iOS), and there the flash blinks much faster than on Android.
For Android, I have tested on a Nexus 4, Motorola G2 and Sony Xperia Neo, and it works fine.
The problem is with the Nexus 5 and Samsung Galaxy S4.
EDIT
Code of Timer :
long delayLong = 200;
long timerValueLong = 500;
Timer timer;

timer = new Timer();
timer.schedule(new TimerTask() {
    @Override
    public void run() {
        if (!mLightOn) {
            turnOnFlash();
        } else {
            turnOffFlash();
        }
    }
}, delayLong, timerValueLong);
This is an older question, but the problem still persists today, so I'll post how I solved it.
The problem is that the call to turn the LED on or off takes a variable amount of time to travel through the Android operating system. The way these calls are handled is phone dependent.
First off, you need to measure the time it takes for the LED to turn on and off, starting from the moment the call is made. Use the input from the camera: keep the phone close to a surface and measure the change in brightness in the frame. You can use glReadPixels if working with OpenGL and read out only the centre line each frame. You will need to make multiple measurements, as the call can take a shorter or longer time depending on the state of the OS. Preferably you want no buffering, or a fixed buffer of frames, so the timing of the frames is reliable (which might not be the case with compression). I used OpenGL and a SurfaceTexture, and this is a snappy way to do it.
You now know the minimum (1) and maximum (2) time it takes for the call to traverse the OS. Using this information you can make the LED blink as fast as possible on that phone. To truly get the most out of it, start the second call to the flash before the maximum (2) time has passed: after maximum (2) - minimum (1).
Using this last trick, the speed of the flashing mostly depends on the difference between the minimum and maximum call traversal times. This varies a lot per phone, from 10 ms to well over 100 ms.
Also note that because the call traversal time is measured with the camera, times are rounded up/down to 33 ms segments (at 30 fps).
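The answer above measured this with OpenGL and a SurfaceTexture; as a rougher illustration of the same idea, a plain preview callback can timestamp the moment the centre row of the frame changes brightness. This is a sketch under assumptions (NV21 preview format, a hard-coded luma threshold), not the answer's actual code:

// NV21 preview frames store the Y (luma) plane first, one byte per pixel.
private long lastEdgeMillis = 0;
private boolean ledSeenOn = false;

private final Camera.PreviewCallback lumaProbe = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        int rowStart = (size.height / 2) * size.width;   // centre row of the Y plane
        long sum = 0;
        for (int i = 0; i < size.width; i++) {
            sum += data[rowStart + i] & 0xFF;
        }
        int meanLuma = (int) (sum / size.width);

        boolean ledOn = meanLuma > 128;                  // threshold is device dependent
        if (ledOn != ledSeenOn) {                        // brightness edge: the LED toggled
            long now = SystemClock.uptimeMillis();
            Log.i("flashProbe", "LED toggled " + (now - lastEdgeMillis) + " ms after last edge");
            lastEdgeMillis = now;
            ledSeenOn = ledOn;
        }
    }
};

Register it with camera.setPreviewCallback(lumaProbe). As noted above, the timestamps are quantised to the camera's frame interval, so expect roughly 33 ms granularity at 30 fps.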
I had the same issue with the flashlight, and the problem is not related to the Timer. It is about how you turn the flash on and off. On some devices, like the Nexus 5, you have to have a SurfaceView in your layout and use it. It would be useful to show us the methods you are using to turn the flashlight on and off.
long delayLong = 20;
long timerValueLong = 100;
Timer timer;
final Parameters p = camera.getParameters();

timer = new Timer();
timer.schedule(new TimerTask() {
    @Override
    public void run() {
        if (!isLighOn) {
            p.setFlashMode(Parameters.FLASH_MODE_ON);
            p.setFlashMode(Parameters.FLASH_MODE_TORCH);
            camera.setParameters(p);
            isLighOn = true;
        } else {
            p.setFlashMode(Parameters.FLASH_MODE_OFF);
            camera.setParameters(p);
            isLighOn = false;
        }
    }
}, delayLong, timerValueLong);
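For the Nexus 5 case mentioned above, the torch typically only lights once the camera preview is attached to a real SurfaceView. Here is a minimal sketch of that wiring, assuming a SurfaceView with id surface_view in the layout and an already-opened Camera stored in the camera field (both assumptions, not code from the question):

// Call from onCreate() before starting the blink Timer.
private void attachPreview() {
    SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view);
    surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            try {
                camera.setPreviewDisplay(holder);   // some devices need a live preview for the torch
                camera.startPreview();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
        }
    });
}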
Maybe you can try to use the Alarm APIs:
https://developer.android.com/training/scheduling/alarms.html
https://developer.android.com/reference/android/provider/AlarmClock.html
You can set a repeat period for an alarm; make two alarms, one for On and one for Off.
Even if the phone is under heavy load, alarms fire based on clock time, so you are less likely to get a missed Off or a double On by mistake, and the overall approach stays the same.
Also, you are using 200 milliseconds; it is hard for the device to keep up with such a small interval. Maybe try increasing your intervals?

Android OpenGL previous frame re-appear for split moment when calling update

I have just started using OpenGL ES 2 for Android for my little game and have encountered a problem when redrawing the screen on each frame.
I have set up a loop in my Renderer's onDrawFrame: a simple [ updateGameLogic() -> drawGame() ] or Thread.sleep() loop based on the time elapsed since the last drawGame call.
Currently the updateGameLogic() method simply translates the camera in the +ve X direction (the game is 2D).
In the drawGame() call, I first clear the screen with GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT). Then I have 3 glBindTexture and glDrawElements calls for drawing 3 categories of objects with different texture atlases.
Here comes the problem: in between frames drawn on screen, there is a blink of the previous frame, which is undesired and makes the game look dizzy. Precisely, say the game is just about to draw frame 3 after frame 2; right before frame 2 vanishes and frame 3 appears, there is a split moment where frame 1 is displayed.
I thought this might be due to the way the GLSurfaceView is buffered by the system, so I experimented with calling glClear multiple times before drawing, but everything stays the same. I'd be grateful if someone could provide an explanation / solution and point out what I have done wrong, thanks. (Basically paragraphs 2 to 4 describe all my code, so I have not posted it unless requested.)
From the clarification in the comment, it sounds like you have something like this in your code:
public void onDrawFrame(GL10 gl) {
    long currentTime = SystemClock.elapsedRealtime();
    long deltaTime = currentTime - mLastFrameTime;
    if (deltaTime < 33) {
        try {
            Thread.sleep(33 - deltaTime);
        } catch (InterruptedException ie) {
        }
        return;
    }
    mLastFrameTime = currentTime;
    updateGameLogic(deltaTime);
    drawGame();
}
This will indeed cause problems. When onDrawFrame() is called, you have to render a frame. You can't just return without drawing anything. The caller will assume that you rendered a frame in any case, and it will end up being presented on the screen. If you decide not to render anything, whatever happened to be in the surface you were supposed to draw to will be presented. There's no telling what this will be, but it's quite likely that it's an old frame from 2-3 frames earlier.
If you want to artificially throttle the frame rate, e.g. to save power, unfortunately there's no very good way to do this in Android. Using sleeps in onDrawFrame() is kind of dirty (and inherently unreliable, IMHO), but it might be necessary in this case. The key is that either before or after you sleep, you still need to render a frame. As a first attempt, I would try tweaking the above to something like this:
public void onDrawFrame(GL10 gl) {
    long currentTime = SystemClock.elapsedRealtime();
    long deltaTime = currentTime - mLastFrameTime;
    if (deltaTime < 33) {
        try {
            Thread.sleep(33 - deltaTime);
        } catch (InterruptedException ie) {
        }
        currentTime = SystemClock.elapsedRealtime();
        deltaTime = currentTime - mLastFrameTime;
    }
    mLastFrameTime = currentTime;
    updateGameLogic(deltaTime);
    drawGame();
}
Note that while there is still an artificial delay, there is no early return in the code anymore.
There are probably more robust variations of this idea for throttling the redraws to 30 fps. Some searching on SO or the rest of the internet should reveal previous discussions.

android game loop vs updating in the rendering thread

I'm making an Android game and am currently not getting the performance I'd like. I have a game loop in its own thread which updates an object's position. The rendering thread traverses these objects and draws them. The current behavior is choppy/uneven movement. What I cannot explain is that before I put the update logic in its own thread, I had it in the onDrawFrame method, right before the GL calls, and in that case the animation was perfectly smooth. It only becomes choppy/uneven when I try to throttle my update loop via Thread.sleep. Even when I allow the update thread to go berserk (no sleep), the animation is smooth; only when Thread.sleep is involved does the quality of the animation suffer.
I've created a skeleton project to see if I could recreate the issue, below are the update loop and the onDrawFrame method in the renderer:
Update Loop
@Override
public void run()
{
    while (gameOn)
    {
        long currentRun = SystemClock.uptimeMillis();
        if (lastRun == 0)
        {
            lastRun = currentRun - 16;
        }
        long delta = currentRun - lastRun;
        lastRun = currentRun;

        posY += moveY * delta / 20.0;
        GlobalObjects.ypos = posY;

        long rightNow = SystemClock.uptimeMillis();
        if (rightNow - currentRun < 16)
        {
            try {
                Thread.sleep(16 - (rightNow - currentRun));
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
And here is my onDrawFrame method:
@Override
public void onDrawFrame(GL10 gl) {
    gl.glClearColor(1f, 1f, 0, 0);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();

    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    gl.glTranslatef(transX, GlobalObjects.ypos, transZ);
    //gl.glRotatef(45, 0, 0, 1);
    //gl.glColor4f(0, 1, 0, 0);

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, uvBuffer);
    gl.glDrawElements(GL10.GL_TRIANGLES, drawOrder.length,
            GL10.GL_UNSIGNED_SHORT, indiceBuffer);

    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
I've looked through Replica Island's source, and its author does the update logic in a separate thread as well, throttled with Thread.sleep, yet that game looks very smooth. Does anyone have any ideas, or has anyone experienced what I'm describing?
---EDIT: 1/25/13---
I've had some time to think and have smoothed out this game engine considerably. How I managed this might be blasphemous or insulting to actual game programmers, so please feel free to correct any of these ideas.
The basic idea is to keep a pattern of update, draw... update, draw... while keeping the time delta relatively the same (often out of your control).
My first course of action was to synchronize my renderer in such a way that it only drew after being notified it was allowed to do so. This looks something like this:
public void onDrawFrame(GL10 gl10) {
    synchronized (drawLock)
    {
        while (!GlobalGameObjects.getInstance().isUpdateHappened())
        {
            try
            {
                Log.d("test1", "draw locking");
                drawLock.wait();
            }
            catch (InterruptedException e)
            {
                e.printStackTrace();
            }
        }
    }
    // ... GL drawing of the freshly updated state follows here ...
}
When I finish my update logic, I call drawLock.notify(), releasing the rendering thread to draw what I just updated. The purpose of this is to help establish the pattern of update, draw... update, draw... etc.
Once I implemented that, it was considerably smoother, although I was still experiencing occasional jumps in movement. After some testing, I saw that multiple updates were occurring between calls to onDrawFrame. This caused one frame to show the result of two (or more) updates, a larger jump than normal.
What I did to resolve this was cap the time delta between two onDrawFrame calls at some value, say 18ms, and store the extra time in a remainder. This remainder is distributed over subsequent time deltas during the next few updates, if they can absorb it. This prevents sudden long jumps, essentially smoothing a time spike out over multiple frames. Doing this gave me great results.
The downside to this approach is that for a little while the positions of objects will not be accurate in time, and will actually speed up to make up for the difference. But it's smoother, and the change in speed is not very noticeable.
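In code, that capping idea might look roughly like this (a sketch; the constant and field names are illustrative, not the actual engine's):

private static final long MAX_DELTA_MS = 18;   // cap applied to each frame's delta
private long remainderMs = 0;                  // excess time we still owe the simulation

// Returns the delta (in ms) the game logic should advance by this frame.
private long smoothDelta(long rawDeltaMs) {
    long delta = rawDeltaMs;
    if (delta > MAX_DELTA_MS) {
        remainderMs += delta - MAX_DELTA_MS;   // bank the spike...
        delta = MAX_DELTA_MS;
    } else if (remainderMs > 0) {
        long payback = Math.min(remainderMs, MAX_DELTA_MS - delta);
        delta += payback;                      // ...and pay it back over later frames
        remainderMs -= payback;
    }
    return delta;
}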
Finally, I decided to rewrite my engine with the two above ideas in mind, rather than patching up the engine I had originally made. I made some optimizations for the thread synchronization that perhaps someone could comment on.
My current threads interact like this:
-The update thread updates the current buffer (a double-buffer system, so updating and drawing can happen simultaneously) and then gives this buffer to the renderer, provided the previous frame has been drawn.
-If the previous frame has not yet been drawn, or is still being drawn, the update thread waits until the render thread notifies it that drawing has finished.
-The render thread waits until it is notified by the update thread that an update has occurred.
-When the render thread draws, it sets a "last drawn" variable indicating which of the two buffers it drew last, and also notifies the update thread if it was waiting on the previous buffer to be drawn.
That may be a little convoluted, but what it does is keep the advantages of multithreading: it can perform the update for frame n while frame n-1 is drawing, while also preventing multiple update iterations per frame if the renderer is taking a long time. To explain further, that multiple-update scenario is handled by the update thread blocking if it detects that the lastDrawn buffer is equal to the one which was just updated; if they are equal, the previous frame has not yet been drawn.
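A compressed sketch of that handoff, with two shared state buffers and wait/notify on a single lock; everything here (GameState, GameRenderer, the advance and render calls) is an illustration of the description above, not the poster's engine:

public class DoubleBufferedState {
    private final Object lock = new Object();
    private final GameState[] buffers = { new GameState(), new GameState() };
    private int updateIndex = 0;          // buffer the update thread writes next
    private boolean updateReady = false;  // true once a fresh buffer is waiting to be drawn

    // Update thread: fill the free buffer, then publish it.
    public void update(float deltaSeconds) throws InterruptedException {
        synchronized (lock) {
            while (updateReady) {
                lock.wait();              // previous update not drawn yet; don't run ahead
            }
        }
        buffers[updateIndex].advance(deltaSeconds);   // simulate outside the lock
        synchronized (lock) {
            updateReady = true;
            lock.notifyAll();             // wake the render thread
        }
    }

    // Render thread (onDrawFrame): wait for a fresh buffer, claim it, draw it.
    public void draw(GameRenderer renderer) throws InterruptedException {
        GameState toDraw;
        synchronized (lock) {
            while (!updateReady) {
                lock.wait();              // no new update yet
            }
            toDraw = buffers[updateIndex];
            updateIndex = (updateIndex + 1) % 2;   // update thread moves to the other buffer
            updateReady = false;
            lock.notifyAll();             // let the update thread continue
        }
        renderer.render(toDraw);          // draw outside the lock so the next update can overlap
    }
}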
So far I'm getting good results. Let me know if anyone has any comments, would be happy to hear your thoughts on anything I'm doing, right or wrong.
Thanks
(The answer from Blackhex raised some interesting points, but I can't cram all this into a comment.)
Having two threads operating asynchronously is bound to lead to issues like this. Look at it this way: the event that drives animation is the hardware "vsync" signal, i.e. the point at which the Android surface compositor provides a new screen full of data to the display hardware. You want to have a new frame of data whenever vsync arrives. If you don't have new data, the game looks choppy. If you generated 3 frames of data in that period, two will be ignored, and you're just wasting battery life.
(Running a CPU full out may also cause the device to heat up, which can lead to thermal throttling, which slows everything in the system down... and can make your animation choppy.)
The easiest way to stay in sync with the display is to perform all of your state updates in onDrawFrame(). If it sometimes takes longer than one frame to perform your state updates and render the frame, then you're going to look bad, and need to modify your approach. Simply shifting all game state updates to a second core isn't going to help as much as you might like -- if core #1 is the renderer thread, and core #2 is the game state update thread, then core #1 is going to sit idle while core #2 updates the state, after which core #1 will resume to do the actual rendering while core #2 sits idle, and it's going to take just as long. To actually increase the amount of computation you can do per frame, you'd need to have two (or more) cores working simultaneously, which raises some interesting synchronization issues depending on how you define your division of labor (see http://developer.android.com/training/articles/smp.html if you want to go down that road).
Attempting to use Thread.sleep() to manage the frame rate generally ends badly. You can't know how long the period between vsync is, or how long until the next one arrives. It's different for every device, and on some devices it may be variable. You essentially end up with two clocks -- vsync and sleep -- beating against each other, and the result is choppy animation. On top of that, Thread.sleep() doesn't make any specific guarantees about accuracy or minimum sleep duration.
I haven't really gone through the Replica Island sources, but in GameRenderer.onDrawFrame() you can see the interaction between their game state thread (which creates a list of objects to draw) and the GL renderer thread (which just draws the list). In their model, the game state only updates as needed, and if nothing has changed it just re-draws the previous draw list. This model works well for an event-driven game, i.e. where the contents on screen update when something happens (you hit a key, a timer fires, etc). When an event occurs, they can do a minimal state update and adjust the draw list as appropriate.
Viewed another way, the render thread and the game state work in parallel because they're not rigidly tied together. The game state just runs around updating things as needed, and the render thread locks it down every vsync and draws whatever it finds. So long as neither side keeps anything locked up for too long, they don't visibly interfere. The only interesting shared state is the draw list, guarded with a mutex, so their multi-core issues are minimized.
For Android Breakout ( http://code.google.com/p/android-breakout/ ), the game has a ball bouncing around, in continuous motion. There we want to update our state as frequently as the display allows us to, so we drive the state change off of vsync, using a time delta from the previous frame to determine how far things have advanced. The per-frame computation is small, and the rendering is pretty trivial for a modern GL device, so it all fits easily in 1/60th of a second. If the display updated much faster (240Hz) we might occasionally drop frames (again, unlikely to be noticed) and we'd be burning 4x as much CPU on frame updates (which is unfortunate).
If for some reason one of these games missed a vsync, the player may or may not notice. The state advances by elapsed time, not a pre-set notion of a fixed-duration "frame", so e.g. the ball will either move 1 unit on each of two consecutive frames, or 2 units on one frame. Depending on the frame rate and the responsiveness of the display, this may not be visible. (This is a key design issue, and one that can mess with your head if you envisioned your game state in terms of "ticks".)
Both of these are valid approaches. The key is to draw the current state whenever onDrawFrame is called, and to update state as infrequently as possible.
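A minimal sketch of that "advance by elapsed time inside onDrawFrame()" approach; the ball fields and drawGame() call are illustrative, not Android Breakout's actual code:

private long prevFrameNanos = 0;

@Override
public void onDrawFrame(GL10 gl) {
    long now = System.nanoTime();                 // monotonic clock; see the note below
    if (prevFrameNanos != 0) {
        float deltaSeconds = (now - prevFrameNanos) / 1000000000.0f;
        ballX += ballVelocityX * deltaSeconds;    // advance by elapsed time, not by "one tick"
        ballY += ballVelocityY * deltaSeconds;
    }
    prevFrameNanos = now;
    drawGame();                                   // always render something for this vsync
}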
Note for anyone else who happens to read this: don't use System.currentTimeMillis(). The example in the question used SystemClock.uptimeMillis(), which is based on the monotonic clock rather than wall-clock time. That, or System.nanoTime(), are better choices. (I'm on a minor crusade against currentTimeMillis, which on a mobile device could suddenly jump forward or backward.)
Update: I wrote an even longer answer to a similar question.
Update 2: I wrote an even longer longer answer about the general problem (see Appendix A).
One part of the problem may be caused by the fact that Thread.sleep() is not accurate. Try to investigate what the actual sleep time is.
The most important thing for making your animations smooth is to compute some interpolation factor, call it alpha, that linearly interpolates your animations across consecutive rendering-thread calls between two consecutive animation-update-thread calls. In other words, if your update interval is long compared to your framerate, not interpolating between update steps is effectively the same as rendering at the update-interval framerate.
EDIT: As an example, this is how PlayN does it:
@Override
public void run() {
    // The thread can be stopped between runs.
    if (!running.get())
        return;

    int now = time();
    float delta = now - lastTime;
    if (delta > MAX_DELTA)
        delta = MAX_DELTA;
    lastTime = now;

    if (updateRate == 0) {
        platform.update(delta);
        accum = 0;
    } else {
        accum += delta;
        while (accum >= updateRate) {
            platform.update(updateRate);
            accum -= updateRate;
        }
    }

    platform.graphics().paint(platform.game, (updateRate == 0) ? 0 : accum / updateRate);

    if (LOG_FPS) {
        totalTime += delta / 1000;
        framesPainted++;
        if (totalTime > 1) {
            log().info("FPS: " + framesPainted / totalTime);
            totalTime = framesPainted = 0;
        }
    }
}
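For completeness, this is roughly how the paint side would use that alpha value to interpolate between the previous and current simulation states (a sketch, not PlayN's actual rendering code; prevX/currX and drawSpriteAt are hypothetical):

// alpha is in [0, 1): how far we are between the last two update steps.
void paint(float alpha) {
    float drawX = prevX + (currX - prevX) * alpha;   // linear interpolation of position
    float drawY = prevY + (currY - prevY) * alpha;
    drawSpriteAt(drawX, drawY);                      // hypothetical draw call
}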

Low FPS with android SurfaceView

I have some trouble with my framerate using a SurfaceView. I'm doing the typical stuff found in tutorials (all of them say the same), but I can't reach a decent framerate on my Samsung Galaxy S (the old one, i9000).
Here's the code I have for the loop thread. FPS is initialized to 30.
@Override
public void run() {
    long ticksPS = 1000 / FPS;
    long startTime;
    long sleepTime;

    // fps checker
    long contms = 0;
    long lasttimecheck = System.currentTimeMillis();
    int fps = 0;

    while (running) {
        long time = System.currentTimeMillis();
        if (contms > 1000) {
            Log.v("FPS", String.valueOf(fps));
            contms = time - lasttimecheck;
            fps = 1;
        } else {
            fps++;
            contms += time - lasttimecheck;
        }
        lasttimecheck = time;

        Canvas c = null;
        startTime = time;
        try {
            c = view.getHolder().lockCanvas();
            synchronized (view.getHolder()) {
                view.onDraw(c);
            }
        } finally {
            if (c != null) {
                view.getHolder().unlockCanvasAndPost(c);
            }
        }

        sleepTime = ticksPS - (System.currentTimeMillis() - startTime);
        try {
            if (sleepTime > 10)
                sleep(sleepTime);
            else {
                Log.w("LOWFPS", String.valueOf(contms));
                sleep(10);
            }
        } catch (Exception e) {}
    }
}
In the SurfaceView, I initialize the holder with holder.setFormat(PixelFormat.RGBA_8888); but I don't know if I have to do something with the bitmaps to avoid useless CPU work (I keep the bitmaps in local variables, then I draw them). The game is simple; it should run much faster.
The framerate is quite random: sometimes it runs smoothly, sometimes it doesn't, but it is always under 30 FPS.
Any ideas?
EDIT WITH ONDRAW EXPLANATION
@Override
protected void onDraw(Canvas canvas) {
    canvas.drawBitmap(bg, 0, 0, null);                        //1
    stars.draw(canvas, dx, dy);                               //2
    if (playing.on()) reactors.draw_reaccio(canvas, dx, dy);  //3
    gotes.draw(canvas, dx, dy);                               //4
    reactors.draw(canvas, dx, dy);                            //5
    sg.draw(canvas);                                          //6
    sr.draw(canvas);                                          //7
    eraser.draw(canvas);                                      //8
    playing.draw(canvas);                                     //9
    opcions.draw(canvas);                                     //10
}
1) Draws the background (480x800).
2) This is a class that contains a list of "getHeight()" basic objects (stars) with their coordinates (x, y) and the ID of the associated image (about 9 different star images).
3) Draws n*2 circles, one filled and one stroked per related object (about 20 or so).
4) Draws the main objects of the game: little drops with an animation. There are 9 different kinds of drops, and each has 5 related animation images (should I put the 5 images into 1, maybe?).
5) Same as the drops, but without animation.
6 to 10) Irrelevant; each just draws an image.
I guess the slowness is due to (2), because of the number of stars, and (4), because of the animation, which changes to a different image every 2-3 frames and may be too much for memory; I guess I should merge all the images into just 1.
The framerate is about 20-22 FPS on the Samsung Galaxy S i9000.
If you disable the drawing, how many fps do you achieve?
Just to check how much the display pipeline is hogging.
I had tried the following:
create 5 bitmaps of size 720x480, pixel format RGB565
display them on a SurfaceView from a loop similar to yours
The only difference is that I had "prepared" bitmaps and wasn't drawing into them in the loop.
This is what was achieved on a Nexus S phone:
FPS: 55 (I tried to run the loop as fast as possible, without regulation)
CPU load: 85%
This is when I decided to render my SurfaceView from JNI :) !!
Do a similar experiment and see how much your device can manage "without" drawing operations. If that looks good, then you can profile whether your drawing operations fit within the budget.
Things to consider: how long does your draw routine take?
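One way to answer that question is to time the draw call itself inside your loop. A rough sketch, wrapped around the existing view.onDraw(c) call from the code above:

long drawStart = SystemClock.uptimeMillis();
synchronized (view.getHolder()) {
    view.onDraw(c);
}
long drawMillis = SystemClock.uptimeMillis() - drawStart;
if (drawMillis > 16) {                 // longer than one 60 Hz frame
    Log.w("DRAWTIME", "onDraw took " + drawMillis + " ms");
}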
If you have problems with it running under 30 FPS, I would reconsider the sleep. Sleeping is not exact and can sleep more or less than requested according to the API, so I would remove it for now and let the loop run as fast as it can.
I agree with the above; please post your drawing code, or comment out the call to the draw routine and see what FPS you're running at.
In general, from a sampling standpoint, you may see the value jitter if you only measure over 1 second.
Many drawing back ends only work at even divisors of 60, so if you're doing a lot of work, it's possible that one or two extra calls will bring you from 60/1 = 60 fps down to 60/2 = 30 fps.
