I am smooth scrolling a bitmap at a given speed. I'm doing this with a game loop.
It scrolls pretty smoothly, around 60 fps, except for occasional stutters / jumps. These jumps occur anywhere from once a second to a couple of times a second. Usually they start or become more frequent after a few seconds of running, but I'm not sure if this is a big clue or not.
The reason for the jumps is that occasionally an iteration of the run loop will take about twice as long as usual, so the bitmap stays in one place for a while and then jumps further to catch up and maintain its constant speed. I used interpolation to figure out the new position of the bitmap with each update based on the time that has elapsed. When a longer than usual time has elapsed, I've tried doing a couple of mini-updates instead of moving the entire distance at once, but the paused bitmap is still very noticeable.
I ran traceview, and the extra time is being spent inside lockCanvas. Most of the time this method takes around 10 ms, but in these long cases, it takes around 24ms. When I traced for four seconds, this happened 7 times.
In the following code, I have it sleep for a bit if the iteration ran fast, but that's not actually making any difference; removing that part of the code doesn't solve the problem either. There doesn't need to be a constant frame rate, since I'm just calculating the position based on how much time has passed.
@Override
public void run() {
    long beginTime = 0; // the time when the cycle began
    long timeDiff;      // the time it took for the cycle to execute
    int sleepTime;      // ms to sleep (< 0 if we're behind)
    sleepTime = 0;
    while (mRun) {
        Canvas c = null;
        try {
            beginTime = System.currentTimeMillis();
            c = mSurfaceHolder.lockCanvas(null);
            synchronized (mSurfaceHolder) {
                if (mMode == STATE_RUNNING) {
                    updatePhysics();
                }
                doDraw(c);
            }
        } catch (Exception e) {
            e.printStackTrace(); // printing getStackTrace() directly would only show an array reference
        } finally {
            if (c != null) {
                mSurfaceHolder.unlockCanvasAndPost(c);
            }
        }
        timeDiff = System.currentTimeMillis() - beginTime;
        sleepTime = (int) (FRAME_PERIOD - timeDiff);
        if (sleepTime > 0) {
            try {
                Thread.sleep(sleepTime);
            } catch (InterruptedException e) {}
        }
    }
}
The code in updatePhysics() and doDraw() does a little bit of math, and I've tried to make it as efficient as possible. Basically it just calculates the new position of the bitmap based on time and speed. I have just one bitmap, and it is not being reallocated every frame or anything like that.
Also, I'm positive that my SurfaceHolder is ready, so it's not the common answer I've found from searching Google, that repeated calls to lockCanvas on a non-ready SurfaceHolder get throttled.
Any ideas what could cause this? My surface holder uses PixelFormat.RGB_565 and my Bitmap is encoded as Bitmap.Config.RGB_565 if that makes a difference. I originally got the Bitmap from a relative layout that I made.
One possible explanation is that your code generates a lot of short-lived objects, and the garbage collector kicks in periodically to reclaim memory.
These noticeable pauses were the bane of developers in early versions of Java, until generational garbage collection pretty much eliminated this issue. However, as far as I know, Android's Dalvik virtual machine does not employ generational garbage collection, so you should be cautious about creating objects that you immediately discard, especially in loops.
Profiling memory allocation will shed more light on this issue.
If this is indeed the problem, you could try to reuse objects, or handle data using primitives.
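For instance, a rough sketch of the reuse idea in a draw loop like yours (the fields here are hypothetical, not from your actual code):

    // Hypothetical sketch: allocate once, mutate per frame, so drawing allocates nothing.
    private final Rect dstRect = new Rect(); // android.graphics.Rect, reused every frame
    private int x, y, w, h; // hypothetical position/size updated in updatePhysics()

    private void doDraw(Canvas c) {
        // Avoid: Rect dst = new Rect(x, y, x + w, y + h); // creates garbage every frame
        dstRect.set(x, y, x + w, y + h); // mutate the preallocated object instead
        c.drawBitmap(mBitmap, null, dstRect, null);
    }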
I'm curious about this.
I wanted to check which function was faster, so I wrote a little test and executed it a lot of times.
public static void main(String[] args) {
    long ts;
    String c = "sgfrt34tdfg34";

    ts = System.currentTimeMillis();
    for (int k = 0; k < 10000000; k++) {
        c.getBytes();
    }
    System.out.println("t1->" + (System.currentTimeMillis() - ts));

    ts = System.currentTimeMillis();
    for (int i = 0; i < 10000000; i++) {
        Bytes.toBytes(c); // org.apache.hadoop.hbase.util.Bytes
    }
    System.out.println("t2->" + (System.currentTimeMillis() - ts));
}
The "second" loop is faster, so, I thought that Bytes class from hadoop was faster than the function from String class. Then, I changed the order of the loops and then c.getBytes() got faster. I executed many times, and my conclusion was, I don't know why, but something happen in my VM after the first code execute so that the results become faster for the second loop.
This is a classic java benchmarking issue. Hotspot/JIT/etc will compile your code as you use it, so it gets faster during the run.
Run around the loop at least 3000 times (10000 on a server or on 64 bit) first - then do your measurements.
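For example, a rough warm-up sketch along those lines (reusing c and Bytes from your benchmark; the exact warm-up count is approximate):

    // Warm up both code paths so the JIT compiles them before anything is timed.
    for (int i = 0; i < 20000; i++) {
        c.getBytes();
        Bytes.toBytes(c);
    }

    // Only now start measuring.
    long ts = System.currentTimeMillis();
    for (int k = 0; k < 10000000; k++) {
        c.getBytes();
    }
    System.out.println("warmed t1->" + (System.currentTimeMillis() - ts));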
You know there's something wrong, because Bytes.toBytes calls c.getBytes internally:
public static byte[] toBytes(String s) {
    try {
        return s.getBytes(HConstants.UTF8_ENCODING);
    } catch (UnsupportedEncodingException e) {
        LOG.error("UTF-8 not supported?", e);
        return null;
    }
}
The source is taken from here. This tells you that the call cannot possibly be faster than the direct call - at the very best (i.e. if it gets inlined) it would have the same timing. Generally, though, you'd expect it to be a little slower, because of the small overhead in calling a function.
This is the classic problem with micro-benchmarking in interpreted, garbage-collected environments with components that run at arbitrary times, such as garbage collectors. On top of that, there are hardware optimizations, such as caching, that skew the picture. As a result, the best way to see what is going on is often to look at the source.
The "second" loop is faster, so,
When you execute a method at least 10000 times, it triggers the whole method to be compiled. This means that your second loop can be
- faster, as it is already compiled the first time you run it, or
- slower, because when optimised the JIT doesn't have good information/counters on how the code is executed.
The best solution is to place each loop in a separate method so one loop doesn't optimise the other AND run this a few times, ignoring the first run.
e.g.
for (int i = 0; i < 3; i++) {
    long time1 = doTest1(); // timed using System.nanoTime();
    long time2 = doTest2();
    System.out.printf("Test1 took %,d on average, Test2 took %,d on average%n",
            time1 / RUNS, time2 / RUNS);
}
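doTest1(), doTest2() and RUNS aren't shown above; a minimal sketch of what they could look like (the bodies are assumptions based on the question's benchmark):

    static final int RUNS = 10000000;
    static final String c = "sgfrt34tdfg34";

    // Each test lives in its own method so the JIT optimises them independently.
    static long doTest1() {
        long start = System.nanoTime();
        for (int i = 0; i < RUNS; i++) {
            c.getBytes();
        }
        return System.nanoTime() - start;
    }

    static long doTest2() {
        long start = System.nanoTime();
        for (int i = 0; i < RUNS; i++) {
            Bytes.toBytes(c); // org.apache.hadoop.hbase.util.Bytes
        }
        return System.nanoTime() - start;
    }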
Most likely, the code was still compiling or not yet compiled at the time the first loop ran.
Wrap the entire method in an outer loop so you can run the benchmarks a few times, and you should see more stable results.
Read: Dynamic compilation and performance measurement.
It might simply be the case that you allocate so much space for objects with your calls to getBytes() that the JVM garbage collector starts up and cleans up the unused references (taking out the trash).
A few more observations:
As pointed out by @dasblinkenlight above, Hadoop's Bytes.toBytes(c) internally calls String.getBytes("UTF-8").
The variant of String.getBytes() that takes a character set as input is faster than the one that does not. So for a given string, getBytes("UTF-8") would be faster than getBytes(). I have tested this on my machine (Windows 8, JDK 7): run the two loops, one with getBytes("UTF-8") and the other with getBytes(), in sequence with equal iterations.
long ts;
String c = "sgfrt34tdfg34";

ts = System.currentTimeMillis();
for (int k = 0; k < 10000000; k++) {
    c.getBytes("UTF-8"); // throws the checked UnsupportedEncodingException, so the enclosing method must declare or catch it
}
System.out.println("t1->" + (System.currentTimeMillis() - ts));

ts = System.currentTimeMillis();
for (int i = 0; i < 10000000; i++) {
    c.getBytes();
}
System.out.println("t2->" + (System.currentTimeMillis() - ts));
this gives:
t1->1970
t2->2541
and the results are the same even if you change the order of execution of the loops. To discount any JIT optimizations, I would suggest running the tests in separate methods to confirm this (as suggested by @Peter Lawrey above).
So, Bytes.toBytes(c) should always be faster than the no-argument String.getBytes().
I'm encountering a strange problem when trying to implement low-latency streaming audio playback on a Nexus 6 running Android 6.0.1 using OpenSL ES.
My initial attempt seemed to be suffering from starvation issues, so I added some basic timing benchmarks in the buffer completion callback function. What I've found is that audio plays back fine if I continually tap the screen while my app is open, but if I leave it alone for a few seconds, the callback starts to take much longer. I'm able to reproduce this behavior consistently. A couple of things to note:
"a few seconds" ~= 3-5 seconds, not long enough to trigger a screen change
My application's activity sets FLAG_KEEP_SCREEN_ON, so no screen changes should occur anyway
I have taken no action to try to increase the audio callback thread's priority, since I was under the impression that Android reserves high priority for these threads already
The behavior occurs on my Nexus 6 (Android 6.0.1), but not on a Galaxy S6 I also have available (Android 5.1.1).
The symptoms I'm seeing really seem like the OS kicks down the audio thread priority after a few seconds of non-interaction with the phone. Is this right? Is there any way I can avoid this behavior?
While watching the latest Google I/O 2016 audio presentation, I finally found the cause and the (ugly) solution for this problem.
Just watch around one minute of this YouTube clip (starting at 8m56s):
https://youtu.be/F2ZDp-eNrh4?t=8m56s
It explains why this is happening and how you can get rid of it.
In fact, Android slows the CPU down after a few seconds of touch inactivity to reduce battery usage. The speaker in the video promises a proper solution for this soon, but for now the only way to get rid of it is to send fake touches (that's the official recommendation).
Instrumentation instr = new Instrumentation();
// or whatever event you prefer; note that Instrumentation calls like this
// must run on a background thread, never on the main/UI thread
instr.sendKeyDownUpSync(KeyEvent.KEYCODE_BACKSLASH);
Repeat this with a timer every 1.5 seconds and the problem will vanish.
I know, this is an ugly hack, and it might have ugly side effects which must be handled. But for now, it is simply the only solution.
Update:
Regarding your latest comment ... here's my solution.
I'm using a regular MotionEvent.ACTION_DOWN at a location outside of the screen bounds. Everything else interfered in an unwanted way with the UI. To avoid the SecurityException, initialize the timer in the onStart() handler of the main activity and terminate it in the onStop() handler. There are still situations when the app goes to the background (depending on the CPU load) in which you might run into a SecurityException, so you must surround the fake-touch call with a try/catch block.
Please note that I'm using my own timer framework, so you'll have to adapt the code to whatever timer you want to use.
Also, I cannot yet guarantee that the code is 100% bulletproof. My apps have this hack applied, but they are currently in beta, so I cannot guarantee that it works correctly on all devices and Android versions.
Timer fakeTouchTimer = null;
Instrumentation instr;

void initFakeTouchTimer()
{
    if (this.instr == null)
    {
        this.instr = new Instrumentation();
    }
    if (this.fakeTouchTimer != null)
    {
        this.fakeTouchTimer.restart();
    }
    else
    {
        this.fakeTouchTimer = new Timer(1500, Thread.MIN_PRIORITY, new TimerTask()
        {
            @Override
            public void execute()
            {
                if (instr != null && fakeTouchTimer != null && hasWindowFocus())
                {
                    try
                    {
                        long downTime = SystemClock.uptimeMillis();
                        // fake touch at (-100, -100), outside the screen bounds
                        MotionEvent event = MotionEvent.obtain(downTime, downTime, MotionEvent.ACTION_DOWN, -100, -100, 0);
                        instr.sendPointerSync(event);
                        event.recycle();
                    }
                    catch (Exception e)
                    {
                        // swallow the occasional SecurityException when the app
                        // loses focus between the check above and the send
                    }
                }
            }
        }, true /*isInfinite*/);
    }
}

void killFakeTouchTimer()
{
    if (this.fakeTouchTimer != null)
    {
        this.fakeTouchTimer.interrupt();
        this.fakeTouchTimer = null;
        this.instr = null;
    }
}

@Override
protected void onStop()
{
    killFakeTouchTimer();
    super.onStop();
    .....
}

@Override
protected void onStart()
{
    initFakeTouchTimer();
    super.onStart();
    .....
}
It is well known that the audio pipeline in Android 6 has been completely rewritten. While this improved latency-related issues in most cases, it is not impossible that it generated a number of undesirable side-effects, as is usually the case with such large-scale changes.
While your issue does not seem to be a common one, there are a few things you might be able to try:
Increase the audio thread priority. The default priority for audio threads in Android is -16, with the maximum being -20, which is usually only available to system services. While you can't assign that value to your audio thread, you can assign the next best thing: -19, by using the ANDROID_PRIORITY_URGENT_AUDIO flag when setting the thread's priority (see the Java sketch below).
Increase the number of buffers to prevent any kind of jitter or latency (you can even go up to 16). However, on some devices the callback to fill a new buffer isn't always called when it should be.
This SO post has several suggestions to improve audio latency on Android. Of particular interest are points 3, 4 and 5 in the accepted answer.
Check whether the current Android system is low-latency-enabled by querying hasSystemFeature(FEATURE_AUDIO_LOW_LATENCY) or hasSystemFeature(FEATURE_AUDIO_PRO).
Additionally, this academic paper discusses strategies for improving audio latency-related issues in Android/OpenSL, including buffer- and callback-interval-related approaches.
Force resampling to the device's native sample rate on Android 6; use the device's native sample rate of 48000 Hz. For example:
SLDataFormat_PCM dataFormat;
// OpenSL ES expects samplesPerSec in milliHertz, so use the
// SL_SAMPLINGRATE_48 constant (48000000) rather than a bare 48000
dataFormat.samplesPerSec = SL_SAMPLINGRATE_48;
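On the Java side, a minimal sketch of the priority and feature-check suggestions above; android.os.Process.THREAD_PRIORITY_URGENT_AUDIO corresponds to the -19 value mentioned:

    // Query the device's latency capabilities, e.g. in onCreate() of an Activity.
    PackageManager pm = getPackageManager();
    boolean lowLatency = pm.hasSystemFeature(PackageManager.FEATURE_AUDIO_LOW_LATENCY);
    boolean proAudio = pm.hasSystemFeature(PackageManager.FEATURE_AUDIO_PRO);

    // Raise the priority at the top of the thread that feeds your buffers.
    new Thread(new Runnable() {
        @Override
        public void run() {
            android.os.Process.setThreadPriority(
                    android.os.Process.THREAD_PRIORITY_URGENT_AUDIO); // -19
            // ... enqueue OpenSL buffers here ...
        }
    }).start();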
I'm working on creating an app that allows very low bandwidth communication via high frequency sound waves. I've gotten to the point where I can create a frequency and do the fourier transform (with the help of Moonblink's open source code for Audalyzer).
But here's my problem: I'm unable to get the code to run with the correct timing. Let's say I want a piece of code to execute every 10ms, how would I go about doing this?
I've tried using a TimerTask, but there is a huge delay before the code actually executes, like up to 100ms.
I also tried doing this manually by polling the current time and executing only once that time has elapsed, but there is still a delay problem. Do you guys have any ideas?
Thread analysis = new Thread(new Runnable()
{
    @Override
    public void run()
    {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
        long executeTime = System.currentTimeMillis();
        manualAnalyzer.measureStart();
        while (FFTransforming)
        {
            if (System.currentTimeMillis() >= executeTime)
            {
                // Reset the timer to execute again in 10ms
                executeTime += 10;
                // Perform Fourier Transform
                manualAnalyzer.doUpdate(0);
                // TODO: Analyze the results of the transform here...
            }
        }
        manualAnalyzer.measureStop();
    }
});
analysis.start();
I would recommend a very different approach: Do not try to run your code in real time.
Instead, rely on only the low-level audio code running in real time, by recording (or playing) continuously for a period of time encompassing the events of interest.
Your code then runs somewhat asynchronously to this, decoupled by the audio buffers. Your code's sense of time is determined not by the system clock as it executes, but rather by the defined inter-sample interval of the audio data you work with (i.e., if you are using 48 ksps, then 10 ms later is 480 samples later).
You may need to modify your protocol governing interaction between the devices to widen the time window in which transmissions can be expected to occur. Ie, you can have precise timing with respect to the actual modulation and symbols within a "packet", but you should not expect nearly the same order of precision in determining when a packet is sent or received - you will have to "find" it amidst a longer recording containing noise.
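A minimal sketch of the recording-side idea described above; 'running' and analyze() are illustrative placeholders, not names from the question:

    int sampleRate = 48000;
    int samplesPer10ms = sampleRate / 100; // 480 samples == 10 ms of audio

    int minBuf = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            sampleRate, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, sampleRate));

    short[] window = new short[samplesPer10ms];
    recorder.startRecording();
    while (running) {
        // read() blocks until a full 10 ms of samples has arrived, so timing is
        // driven by the audio hardware clock, not by when this thread is scheduled
        int read = recorder.read(window, 0, window.length);
        if (read == window.length) {
            analyze(window); // e.g. run the FFT over exactly this 10 ms window
        }
    }
    recorder.stop();
    recorder.release();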
Your thread/loop strategy is probably roughly as close as you're going to get. However, 10ms is not a lot of time, most Android devices are not super-powerful, and a Fourier transform is a lot of work to do. I find it unlikely that you'll be able to fit that much work in 10ms. I suspect you're going to have to increase that period.
I changed your code so that it takes the execution time of doUpdate() into account. The use of System.nanoTime() should also increase accuracy.
public void run() {
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
    long executeTime = 0;
    long nextTime = System.nanoTime();
    manualAnalyzer.measureStart();
    while (FFTransforming)
    {
        if (System.nanoTime() >= nextTime)
        {
            executeTime = System.nanoTime();
            // Perform Fourier Transform
            manualAnalyzer.doUpdate(0);
            // TODO: Analyze the results of the transform here...
            executeTime = System.nanoTime() - executeTime;
            // guard against the case that doUpdate took longer than 10ms
            final long i = executeTime / 10000000;
            // set the timer to execute again at the next full 10ms interval
            nextTime += 10000000 + i * 10000000;
        }
    }
    manualAnalyzer.measureStop();
}
What else could you do?
eliminate Garbage Collection
go native with the NDK (just an idea, this might as well give no benefit)
I've been using this tutorial to create a game loop.
In the section marked "FPS dependent on Constant Game Speed" there is some example code that includes a Sleep command.
I googled the equivalent in Java and found it is
Thread.sleep();
but it returns an error in Eclipse:
Unhandled exception type InterruptedException
What on earth does that mean?
And also I was wondering what the
update_game();
display_game();
methods may contain in an OpenGL ES game (i.e. where the renderer is updated, and what sort of things go on in display_game()).
I am currently using a system that uses the GLSurfaceView and GLSurfaceRenderer features.
Here is my adaptation of the code in the tutorial
public Input(Context context) {
    super(context);
    glSurfaceRenderer = new GLSurfaceRenderer();
    checkcollisions = new Collisions();
    while (gameisrunning) {
        setRenderer(glSurfaceRenderer);
        nextGameTick += skipTicks;
        sleepTime = nextGameTick - SystemClock.uptimeMillis();
        if (sleepTime >= 0) {
            Thread.sleep(sleepTime); // this is the line Eclipse flags with the unhandled-exception error
        } else {
            // S*** we're behind
        }
    }
}
This is called in my GLSurfaceView although I'm not sure whether this is the right place to implement this.
Looks like you need to go through a couple of tutorials on Java before trying to tackle Android game development. Then read some tutorials on Android development, then some more general game development tutorials. (Programming is a lot of reading.)
Thread is throwing an exception when it gets interrupted. You have to tell Java how to deal with that.
To answer your question directly, though, here's a method that sleeps till a specific time:
private void waitUntil(long time) {
    long sleepTime = time - new Date().getTime(); // java.util.Date
    while (sleepTime >= 0) {
        try {
            Thread.sleep(sleepTime);
        } catch (InterruptedException e) {
            // Interrupted. sleepTime will be positive, so we'll do it again.
        }
        sleepTime = time - new Date().getTime();
    }
}
You should understand at least this method before continuing on game development.
Ironically, Ron Romero's code doesn't work as you might expect. There are two issues with it: the while loop and the sleepTime variable.
The sleepTime variable is computed from the time parameter by subtracting the current millisecond time from it. The problem is that if you pass in a duration to sleep (in milliseconds) rather than an absolute timestamp, the current millisecond time is a huge number (hence the long variable type), which produces a NEGATIVE sleepTime. You'll never enter the while loop with this code.
The second thing is the while loop check itself. Using Java's sleep function, you don't need a loop like this. All you have to do is pass the number of milliseconds you want to sleep for to the sleep function, and it will work just fine.
private void waitUntil(long time) {
    try {
        Thread.sleep(time);
    } catch (InterruptedException e) {
        // This is just here to handle the error without crashing
    }
}
Java performs compile-time checking of checked exceptions.
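Concretely, the compiler forces you either to catch InterruptedException or to declare it; a minimal sketch of both options:

    // Option 1: catch it where the sleep happens.
    try {
        Thread.sleep(16);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
    }

    // Option 2: declare it and let the caller deal with it.
    void pause(long ms) throws InterruptedException {
        Thread.sleep(ms);
    }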
I have a C++ game running through JNI in Android. The frame rate varies from about 20-45fps due to scene complexity. Anything above 30fps is silly for the game; it's just burning battery. I'd like to limit the frame rate to 30 fps.
I could switch to RENDERMODE_WHEN_DIRTY, and use a Timer or ScheduledThreadPoolExecutor to requestRender(). But that adds a whole mess of extra moving parts that might or might not work consistently and correctly.
I tried injecting Thread.sleep() when things are running quickly, but this doesn't seem to work at all for small time values. And it may just be backing events into the queue anyway, not actually pausing.
Is there a "capFramerate()" method hiding in the API? Any reliable way to do this?
The solution from Mark is almost good, but not entirely correct. The problem is that the swap itself takes a considerable amount of time (especially if the video driver is caching instructions). Therefore you have to take that into account, or you'll end up with a lower frame rate than desired.
So the thing should be:
somewhere at the start (like the constructor):
startTime = System.currentTimeMillis();
then in the render loop:
public void onDrawFrame(GL10 gl)
{
    endTime = System.currentTimeMillis();
    dt = endTime - startTime;
    if (dt < 33) {
        try {
            Thread.sleep(33 - dt); // note: sleep, not Sleep, and InterruptedException must be handled
        } catch (InterruptedException e) {}
    }
    startTime = System.currentTimeMillis();
    UpdateGame(dt);
    RenderGame(gl);
}
This way you will take into account the time it takes to swap the buffers and the time to draw the frame.
When using GLSurfaceView, you perform the drawing in your Renderer's onDrawFrame which is handled in a separate thread by the GLSurfaceView. Simply make sure that each call to onDrawFrame takes (1000/[frames]) milliseconds, in your case something like 33ms.
To do this: (in your onDrawFrame)
Measure the current time before your start drawing using System.currentTimeMillis (Let's call it startTime)
Perform the drawing
Measure time again (Let's call it endTime)
deltaT = endTime - startTime
if deltaT < 33, sleep (33-deltaT)
That's it.
Fili's answer looked great to me, but sadly it limited the FPS on my Android device to 25 FPS, even though I requested 30. I figured out that Thread.sleep() is not accurate enough and sleeps longer than it should.
I found this implementation from the LWJGL project to do the job:
https://github.com/LWJGL/lwjgl/blob/master/src/java/org/lwjgl/opengl/Sync.java
Fili's solution is failing for some people, so I suspect it's sleeping until immediately after the next vsync instead of immediately before. I also feel that moving the sleep to the end of the function would give better results, because there it can pad out the current frame before the next vsync, instead of trying to compensate for the previous one. Thread.sleep() is inaccurate, but fortunately we only need it to be accurate to the nearest vsync period of 1/60s. The LWJGL code tyrondis posted a link to seems over-complicated for this situation, it's probably designed for when vsync is disabled or unavailable, which should not be the case in the context of this question.
I would try something like this:
private long lastTick = System.currentTimeMillis();

public void onDrawFrame(GL10 gl)
{
    UpdateGame(dt);
    RenderGame(gl);
    // Subtract 10 from the desired period of 33ms to make generous
    // allowance for overhead and inaccuracy; vsync will take up the slack
    long nextTick = lastTick + 23;
    long now;
    while ((now = System.currentTimeMillis()) < nextTick)
    {
        try {
            Thread.sleep(nextTick - now);
        } catch (InterruptedException e) {
            // ignore and re-check the clock
        }
    }
    lastTick = now;
}
If you don't want to rely on Thread.sleep, use the following
double frameStartTime = (double) System.nanoTime() / 1000000;
// start time in milliseconds
// (using System.currentTimeMillis() is a bad idea here)
// initialize this when you first start to draw

int frameRate = 30;
double frameInterval = (double) 1000 / frameRate;
// 1s is 1000ms; 30 frames per second means one frame is 1000ms/30

public void onDrawFrame(GL10 gl)
{
    double endTime = (double) System.nanoTime() / 1000000;
    double elapsedTime = endTime - frameStartTime;
    if (elapsedTime >= frameInterval)
    {
        // call GLES20.glClear(...) here
        UpdateGame(elapsedTime);
        RenderGame(gl);
        frameStartTime += frameInterval;
    }
}
You may also try and reduce the thread priority from onSurfaceCreated():
Process.setThreadPriority(Process.THREAD_PRIORITY_LESS_FAVORABLE);