I'm trying to measure and increase the execution time of a test case. What I'm doing is the following:
Let's assume I'm testing the method abc() using testAbc().
I'm using Android Studio and JUnit for my development.
At the very beginning I record the timestamp in nanoseconds in a start variable, and when the method finishes, it returns the difference between the current nanoseconds and start.
testAbc() is divided into 3 parts: initialization, testing abc(), and assertion (checking the test results).
I keep track of the test time inside testAbc() the same way as I do in the abc() method.
After executing the test I found that the abc() method takes about 45-50% of the test time.
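For concreteness, here is a minimal sketch of the measurement pattern described above (the names and return type are only an assumption based on the description; the real body of abc() is omitted):

long abc() {
    long start = System.nanoTime();
    // ... the production logic being tested ...
    return System.nanoTime() - start; // elapsed time in nanoseconds
}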
I modified the testAbc() as follows:
void testAbc(){
    startTime = System.nanoTime();
    //no modification to the initialization part
    //the testing-abc part is modified by placing it in a for loop
    //to increase its execution time
    for(int i = 0; i < 100; i++)
    {
        //test abc code goes here ...
        abcTime += abc();
    }
    //assertion part wasn't modified
    testEndTime = System.nanoTime() - startTime;
}
By repeating the test part, I thought the ratio between abcTime and testEndTime would increase (since abcTime increases dramatically); however, it didn't change at all: it's still 45-50%.
My questions:
Why didn't the ratio increase? In principle, the execution time of the initialization and assertion parts should not be affected by the for loop, so the time for abc() should get closer to the total test time after 100 repetitions.
How can I increase the ratio between abc() and testAbc()?
Thank you for your time.
I'm making a game that has obstacles which the player must jump over. I'm using the Corona Labs Simulator to run it, but every time I do, either the obstacle gets halfway across the screen instantly and stops, or the whole thing just crashes. Here is my code:
function obstacles()
    loop = 2000
    while loop > 0 do
        obstacle:translate( -1, 0 )
        if obstacle.x > 0 then
            loop = loop - 1
        else
            loop = 2000
            obstacle:translate( display.contentWidth, 0 )
        end
    end
end
Any help much appreciated.
It's important to understand that Corona SDK is an event-driven system. There really isn't a game loop. Code executes so fast that you can't use structures like loops to move things.
As mentioned above, the transition library (transition.to, for instance) can move an object over time.
The closest thing to a game loop is to create a function and attach it to the Runtime object using the "enterFrame" event. Every frame (either 30 or 60 times per second), this function will be called. It won't be precisely 30 or 60 times per second, because the app may take too much time doing other work to hit the full frame rate. If you want to use an enterFrame listener:
local function doSomethingEachFrame( event )
    -- put code here you want to execute over time.
    -- just remember it fires a lot...
end
Runtime:addEventListener( "enterFrame", doSomethingEachFrame )
If you need to stop it, then you would do:
Runtime:removeEventListener( "enterFrame", doSomethingEachFrame )
Corona SDK provides tools to do exactly that. Rather than using a loop, use the transition library:
transition.to(target, params)
where target is your obstacle object, and in params you can specify the x position, time, etc.
There can be a few sources of weirdness regarding where things appear on-screen. I'd put obstacle at (0,0) with no movement at all and make sure I understand both the coordinate system of the parent, and the anchorX and anchorY of obstacle before trying to animate it.
I assume there's other code outside the block you've included, but I think you can make the movement logic simpler, and you'll thank yourself later (this isn't perfect, but you get the idea):
-- set initial position
obstacle.x = display.contentWidth
-- define per-frame movement logic
local moveObstacle = function(event)
    obstacle.x = obstacle.x - 1
    if obstacle.x < 0 then
        obstacle.x = display.contentWidth
    end
end
-- run that logic once every frame
Runtime:addEventListener("enterFrame", moveObstacle)
I have been reading up on game loops and am having a hard time understanding the concept of interpolation. From what I've seen so far, a high-level game loop design should look something like the sample below.
ASSUME WE WANT OUR LOOP TO TAKE 50 TICKS
while(true){
    beginTime = System.currentTimeMillis();
    update();
    render();
    cycleTime = System.currentTimeMillis() - beginTime;
    //if processing is quicker than we need, let the thread take a nap for the rest of the tick
    if(cycleTime < 50){
        Thread.sleep(50 - cycleTime);
    }
    //if processing time is taking too long, update until we are caught up
    if(cycleTime > 50){
        update();
        //handle max update loops here...
    }
}
Let's assume that update() and render() both take only 1 tick to complete, leaving us with 49 ticks to sleep. While this is great for our target tick rate, it still results in a 'twitchy' animation because of so much sleep time. To adjust for this, instead of sleeping, I would assume that some kind of rendering should be going on within the first if condition. Most code samples I have found simply pass an interpolated value into the render method like this...
while(true){
    beginTime = System.currentTimeMillis();
    update();
    render(interpolationValue);
    cycleTime = System.currentTimeMillis() - beginTime;
    //if processing is quicker than we need, let the thread take a nap
    if(cycleTime < 50){
        Thread.sleep(50 - cycleTime);
    }
    //if processing time is taking too long, update until we are caught up
    if(cycleTime > 50){
        update();
        //handle max update loops here...
    }
    interpolationValue = calculateSomeRenderValue();
}
I just don't see how this can work given the 49-tick sleep time. If anyone knows of an article or sample I can check out, please let me know, as I am not really sure what the best approach to interpolation is...
I know it's a bit late, but hopefully this article will help:
http://gameprogrammingpatterns.com/game-loop.html
It explains game time scheduling very well. I think the main reason you are a bit confused is the idea of passing the render function the current elapsed time. Of course this depends on which system you are using, but conventionally render doesn't modify the scene in any way, it only draws it, so it doesn't need to know how much time has passed.
However, the update call modifies the objects in the scene, and in order to keep them in time (e.g. playing animations, lerps, etc.) the update function needs to know how much time has passed, either globally or since the last update.
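As a rough sketch of the pattern that chapter describes (a fixed-timestep loop where render() receives an interpolation factor between the previous and the next update; running, update(), and render(double) are assumed to be defined elsewhere):

long previous = System.currentTimeMillis();
long lag = 0;
final long MS_PER_UPDATE = 50; // the 50-tick budget from the question

while (running) {
    long current = System.currentTimeMillis();
    lag += current - previous;
    previous = current;

    // run as many fixed updates as the elapsed time requires
    while (lag >= MS_PER_UPDATE) {
        update();
        lag -= MS_PER_UPDATE;
    }

    // alpha in [0,1): how far we are between the last update and the next one
    double alpha = (double) lag / MS_PER_UPDATE;
    render(alpha); // render blends positions between the previous and current state
}

Because render() runs on every pass instead of sleeping out the rest of the tick, the animation stays smooth even though update() only runs once per 50 ms.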
Anyway, there's no point in me going too far into it... that article is very useful.
Hope this helps
I am trying to sync time between two Android devices. The precision has to be within 5 ms. GPS and network time weren't precise enough, so I thought about sharing time between the devices over the local network and syncing it using PTP (Precision Time Protocol).
Since I can't change the system time on non-rooted devices, I save the time difference shared by the other device and keep showing the corrected time to the user in a TextView.
The TextView needs to be updated every millisecond so the user can see the time in ms too.
I am updating the TextView from a thread that runs every millisecond.
class CountDownRunner implements Runnable {
    @Override
    public void run() {
        Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
        while (!Thread.currentThread().isInterrupted()) {
            try {
                setCurrentTime();
                Thread.sleep(1); // pause of 1 millisecond
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (Exception e) {
                // ignored; keep updating the time display
            }
        }
    }
}
Here is the problem: right after syncing, the time difference is less than 5 ms, but after a while the clocks start to drift apart, and after 10-15 minutes the difference is more than 1 second.
So any ideas on how to rectify this issue?
It seems like Thread.sleep(1000) behaves more like Thread.sleep(1003) or Thread.sleep(1004), and these errors keep adding up into a bigger drift.
As I noted, Android is not an RTOS.
Moreover, you should not care whether sleep() returns in 1003 or 1004 milliseconds. You seem to be combining two things:
Determining how much time has elapsed since you synchronized the time
Displaying the current synchronized time
These have nothing to do with each other.
Use your sleep() loop -- or, better yet, postDelayed() for greater efficiency -- only to get control roughly every second to know that you need to update the TextView.
To determine what value to show in the TextView, do not add some assumed amount of time based on the sleep(). Instead:
When you get the synchronized time value from the other device, also save the current value of SystemClock.elapsedRealtime()
To determine how much time in milliseconds has elapsed since the synchronization, subtract the saved value of elapsedRealtime() from a fresh call to elapsedRealtime(), as in the sketch below
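A minimal sketch of that bookkeeping, plus a postDelayed() refresh loop as suggested above (the class name, the TextView plumbing, and the one-second refresh interval are illustrative assumptions, not from the question):

import android.os.Handler;
import android.os.Looper;
import android.os.SystemClock;
import android.widget.TextView;

public class SyncedClockDisplay {
    private long syncedTimeMillis;      // time value received from the other device
    private long elapsedAtSyncMillis;   // device uptime captured at the moment of sync

    // call once, when the synchronized time arrives from the other device
    public void onSync(long syncedTimeMillis) {
        this.syncedTimeMillis = syncedTimeMillis;
        this.elapsedAtSyncMillis = SystemClock.elapsedRealtime();
    }

    // current synchronized time = value at sync + uptime elapsed since sync
    public long currentSyncedTimeMillis() {
        return syncedTimeMillis + (SystemClock.elapsedRealtime() - elapsedAtSyncMillis);
    }

    // refresh the TextView roughly once per second using postDelayed()
    public void startDisplaying(final TextView textView) {
        final Handler handler = new Handler(Looper.getMainLooper());
        handler.post(new Runnable() {
            @Override
            public void run() {
                textView.setText(String.valueOf(currentSyncedTimeMillis()));
                handler.postDelayed(this, 1000);
            }
        });
    }
}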
for(int i = 0; i < 100; i++)
This loop is supposed to run repeatedly, incrementing the value of i by 1.
I am developing with Android JNI.
An OpenGL renderer's onDrawFrame method is called repeatedly.
In onDrawFrame, a render function written in C++ is called.
Inside that C++ render function, a for loop runs 100 times:
for(int i = 0; i < 100; i++)
{
    __android_log_print(ANDROID_LOG_DEBUG, "", "%d", i);
}
This loop appears to run incorrectly: the value of i does not increment by 1 each time, and in other cases the loop does not log 100 lines; sometimes it's 90, 98, or 96...
I don't know the reason.
You cannot trust that every __android_log_print() line will be displayed, especially if you are using the logcat window in Eclipse. You can write to a local file instead, and then you will definitely see 100 lines in that file after the loop runs.
I have a big array; iterating over it and doing my work takes about 50 ms.
The app I am developing will run on a Tegra 3 or another fast CPU.
I divided the work across four threads using pthreads: I took the width of my array, divided it by the total core count found in the system, and each thread iterates over its quarter of the array. Everything works, but it now needs 80 ms to do the work.
Any idea why the multithreaded approach is slower than the single-threaded one? If I lower the CPU count to 1, everything is back to 50 ms.
for(int y = 0; y < height; y++)
{
    for(int x = 0; x < width; x++)
    {
        int index = (y*width) + x;
        int sourceIndex = source->getIndex(vertex_points[index].position[0]/ww, vertex_points[index].position[1]/hh);
        vertex_points[index].position[0] += source->x[sourceIndex]*ww;
        vertex_points[index].position[1] += source->y[sourceIndex]*hh;
    }
}
I am dividing the first (outer) for loop of the above code into four parts based on the CPU count.
vertex_points is a vector of positions.
So in each thread it looks like
for(int y = start; y < end; y++)
and start/end vary per thread.
Thread startup time is typically on the order of milliseconds - that's what's eating your time.
With that in mind, 50 ms is not the kind of delay I'd worry about. If we were talking 5 seconds, that'd be a good candidate for parallelizing.
If the loop needs to be performed often, consider a solution with threads that are spun up early on and kept dormant, waiting for work to do. That'll run faster.
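The question's code is C++ with pthreads, but the "create the workers once and reuse them" idea can be sketched like this (shown in Java for brevity; the class name and RowTask interface are invented for illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RowWorkPool {
    // Create the workers once, up front, and reuse them on every pass
    // instead of paying the thread-startup cost each time the loop runs.
    private final int threads = Runtime.getRuntime().availableProcessors();
    private final ExecutorService pool = Executors.newFixedThreadPool(threads);

    public interface RowTask {
        void run(int startRow, int endRow);
    }

    // Split the row range into one chunk per worker and block until all chunks finish.
    public void processRows(int height, RowTask task) throws InterruptedException {
        List<Callable<Void>> jobs = new ArrayList<>();
        int chunk = (height + threads - 1) / threads;
        for (int t = 0; t < threads; t++) {
            final int start = t * chunk;
            final int end = Math.min(height, start + chunk);
            jobs.add(() -> { task.run(start, end); return null; });
        }
        pool.invokeAll(jobs);
    }
}

The same structure applies with pthreads: keep the four workers alive, hand them their row ranges (for example through a condition variable), and wait for all of them to finish each pass.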
Also, is the CPU really 4-core? Honest cores or hyperthreading?