I am initializing a 30 MB array in my Android Studio project:
byte[] myarray = new byte[30 * 1024 * 1024];
for (int i = 0; i < myarray.length; i++) {
    myarray[i] = 0;
}
Around this loop I have a time measurement with SystemClock that calculates how many milliseconds the loop takes.
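Roughly, the measurement looks like this (a sketch; elapsedRealtime() stands in for whatever SystemClock call is actually used, and the log tag is just illustrative):

long start = SystemClock.elapsedRealtime();      // android.os.SystemClock, wall-clock milliseconds
byte[] myarray = new byte[30 * 1024 * 1024];
for (int i = 0; i < myarray.length; i++) {
    myarray[i] = 0;
}
long elapsedMs = SystemClock.elapsedRealtime() - start;
Log.d("ArrayInit", "loop took " + elapsedMs + " ms");   // android.util.Log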
It takes about 2.5 minutes when the app is started from Android Studio. No breakpoints are involved, of course.
It takes about 0.5 seconds when the app is started directly, without Android Studio.
When I call other operations on this array, such as System.arraycopy, I don't see such a huge difference. I understand there is a difference between running with and without the debugger attached, but this is a factor of 300.
What is happening here and how can I modify this so I can debug my app efficiently?
I am building an app in which I use the Spongy Castle library (a repackaged version of Bouncy Castle for Android), but the problem is that when I perform this:
KeyParameter key = (KeyParameter) generator.generateDerivedMacParameters(keyLength * 8); // key length in bits
and run it on any phone with Android 6.0 (API 23) or below, the operation is extremely slow. I have pinpointed the exact code that runs slowly (note that this code is inside the Spongy Castle library):
for (int count = 1; count < iterationCount; count++)
{
    hMac.update(state, 0, state.length);   // state = HMAC(previous state)
    hMac.doFinal(state, 0);
    for (int j = 0; j != state.length; j++)
    {
        out[outOff + j] ^= state[j];       // XOR each round's block into the derived key
    }
}
The iteration count is always 800000 because I need the derivation to be very secure, but executing this code takes almost 5 minutes on these devices. The interesting part is that on Android 4.4 it only takes about a minute. So, is there any workaround for this without reducing the iteration count? Maybe I should just use Bouncy Castle, or something else?
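For context, the generator around this call is set up roughly as follows (a sketch; password, salt and keyLength are placeholders for my actual inputs):

// org.spongycastle.crypto.generators.PKCS5S2ParametersGenerator (PBKDF2 with HMAC-SHA1)
PKCS5S2ParametersGenerator generator = new PKCS5S2ParametersGenerator();
generator.init(PBEParametersGenerator.PKCS5PasswordToUTF8Bytes(password), salt, 800000);
KeyParameter key = (KeyParameter) generator.generateDerivedMacParameters(keyLength * 8); // key length in bits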
I'm trying to get a precise clock that is not influenced by other processes inside the app.
I currently use System.nanoTime() like below, inside a thread.
I use it to calculate the timing of each of the sixteen steps.
Currently the timed operations sometimes have a perceptible delay that I am trying to fix.
I would like to know if there is a more precise way of obtaining timed operations, maybe by checking the internal sound card clock and using it to generate the timing I need.
I need this to send MIDI notes from an Android device to external audio synthesizers, and for audio I need precise timing.
Is there anyone who can help me improve this aspect?
Thanks
cellLength = (long) (0.115 * 1000000000L);   // 115 ms per step, in nanoseconds
for (int x = 0; x < 16; x++) {
    noteStartTimes[x] = x * cellLength;
}

long startTime = System.nanoTime();
index = 0;
while (isPlaying) {
    if (noteStartTimes[index] < System.nanoTime() - startTime) {
        index++;
        if (index == 16) { // reset things
            startTime = System.nanoTime() + cellLength;
            index = 0;
        }
    }
}
For any messages that you receive, the onSend callback gives you a timestamp.
For any messages that you send, you can provide a timestamp.
These timestamps are based on System.nanoTime(), so your own code should use this as well.
If your code is delayed (by its own processing, by other apps, or by background services), System.nanoTime() will accurately report the delay. But no timer function can make your code run earlier.
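For example, you can schedule an outgoing note slightly in the future by passing an event time based on System.nanoTime() (a sketch; inputPort is an already-opened android.media.midi.MidiInputPort, and the 10 ms offset is just illustrative):

byte[] noteOn = { (byte) 0x90, 60, 100 };        // Note On, channel 1, middle C, velocity 100
long when = System.nanoTime() + 10_000_000L;     // MIDI timestamps are in nanoseconds
try {
    inputPort.send(noteOn, 0, noteOn.length, when);
} catch (IOException e) {
    Log.e("Midi", "send failed", e);
}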
I'm curious about this.
I wanted to check which method was faster, so I wrote a little test and executed it many times.
// Bytes here is org.apache.hadoop.hbase.util.Bytes.
public static void main(String[] args) {
    long ts;
    String c = "sgfrt34tdfg34";

    ts = System.currentTimeMillis();
    for (int k = 0; k < 10000000; k++) {
        c.getBytes();
    }
    System.out.println("t1->" + (System.currentTimeMillis() - ts));

    ts = System.currentTimeMillis();
    for (int i = 0; i < 10000000; i++) {
        Bytes.toBytes(c);
    }
    System.out.println("t2->" + (System.currentTimeMillis() - ts));
}
The "second" loop is faster, so, I thought that Bytes class from hadoop was faster than the function from String class. Then, I changed the order of the loops and then c.getBytes() got faster. I executed many times, and my conclusion was, I don't know why, but something happen in my VM after the first code execute so that the results become faster for the second loop.
This is a classic java benchmarking issue. Hotspot/JIT/etc will compile your code as you use it, so it gets faster during the run.
Run the loop at least 3000 times (10000 on a server or a 64-bit JVM) first - then do your measurements.
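For instance, a warm-up pass before the timed loops might look like this (a sketch; the iteration count just needs to exceed the JIT compilation threshold mentioned above):

// Warm up so both code paths are already compiled before anything is timed.
for (int k = 0; k < 20000; k++) {
    c.getBytes();
    Bytes.toBytes(c);
}
// ...then run the timed loops as in the original code.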
You know there's something wrong, because Bytes.toBytes calls c.getBytes internally:
public static byte[] toBytes(String s) {
    try {
        return s.getBytes(HConstants.UTF8_ENCODING);
    } catch (UnsupportedEncodingException e) {
        LOG.error("UTF-8 not supported?", e);
        return null;
    }
}
The source is taken from here. This tells you that the call cannot possibly be faster than the direct call - at the very best (i.e. if it gets inlined) it would have the same timing. Generally, though, you'd expect it to be a little slower, because of the small overhead in calling a function.
This is the classic problem with micro-benchmarking in interpreted, garbage-collected environments with components that run at arbitrary times, such as garbage collectors. On top of that, there are hardware optimizations, such as caching, that skew the picture. As a result, the best way to see what is going on is often to look at the source.
The "second" loop is faster, so,
When you execute a method at least 10000 times, it triggers the whole method to be compiled. This means that your second loop can be:
- faster, as it is already compiled the first time you run it, or
- slower, because when optimised the JIT doesn't have good information/counters on how the code is executed.
The best solution is to place each loop in a separate method so one loop doesn't optimise the other AND run this a few times, ignoring the first run.
e.g.
for (int i = 0; i < 3; i++) {
    long time1 = doTest1(); // timed using System.nanoTime();
    long time2 = doTest2();
    System.out.printf("Test1 took %,d on average, Test2 took %,d on average%n",
            time1 / RUNS, time2 / RUNS);
}
Most likely, the code was still compiling or not yet compiled at the time the first loop ran.
Wrap the entire method in an outer loop so you can run the benchmarks a few times, and you should see more stable results.
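A minimal sketch of that outer loop (runBenchmark() here is a hypothetical method holding the original two timed loops):

for (int round = 0; round < 5; round++) {
    runBenchmark();   // later rounds run fully compiled code, so their numbers are more stable
}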
Read: Dynamic compilation and performance measurement.
It might simply be the case that you allocate so much space for objects with your calls to getBytes() that the JVM garbage collector starts and cleans up the unused references (taking out the trash).
A few more observations:
As pointed out by @dasblinkenlight above, Hadoop's Bytes.toBytes(c) internally calls String.getBytes("UTF-8").
The variant of String.getBytes() that takes a character set as input is faster than the one that does not. So for a given string, getBytes("UTF-8") would be faster than getBytes(). I have tested this on my machine (Windows 8, JDK 7): run the two loops, one with getBytes("UTF-8") and the other with getBytes(), in sequence with equal iteration counts.
long ts;
String c = "sgfrt34tdfg34";

ts = System.currentTimeMillis();
for (int k = 0; k < 10000000; k++) {
    c.getBytes("UTF-8");   // this overload declares UnsupportedEncodingException, so the enclosing method must handle it
}
System.out.println("t1->" + (System.currentTimeMillis() - ts));

ts = System.currentTimeMillis();
for (int i = 0; i < 10000000; i++) {
    c.getBytes();
}
System.out.println("t2->" + (System.currentTimeMillis() - ts));
this gives:
t1->1970
t2->2541
and the results are the same even if you change the order of execution of the loops. To discount any JIT optimizations, I would suggest running the tests in separate methods to confirm this (as suggested by @Peter Lawrey above).
So, Bytes.toBytes(c) should always be faster than String.getBytes().
I developed an auto-call application. The app reads a text file that includes a phone number list, calls each number for a few seconds, ends the call, and then repeats.
My problem is that the app stops sending calls after 10-16 hours. I don't know the exact reason, but I suspect the problem is CPU usage. My app's CPU usage is almost 50%! How do I reduce the CPU usage?
Here is part of the source code:
if (r_count.compareTo("0") != 0) {
    while (index < repeat_count) {
        count = 1;
        time_count = 2;
        while (count < map.length) {
            performDial();        // start call
            reject();             // end call
            finishActivity(1);
            TimeDelay("60");      // wait for 60 sec
            count = count + 2;
            time_count = time_count + 2;
            onBackPressed();      // press back button for calling next number
            showCallLog();
            finishActivity(0);
        }
        index++;
    }
}
This is the TimeDelay() method source:
public void TimeDelay(String delayTime) {
    saveTime = System.currentTimeMillis() / 1000;
    currentTime = 0;
    dTime = Integer.parseInt(delayTime);
    while (currentTime - saveTime < dTime) {
        currentTime = System.currentTimeMillis() / 1000;
    }
}
TimeDelay() is called repeatedly inside the while loop above.
The reason it's using 50% of your CPU is that Android apparently won't let it use 100% of the CPU, which a loop like the one in your TimeDelay() ordinarily would. (Or else you have two CPUs and it is in fact using 100% of one CPU.) What you're doing is called a busy wait and it should be obvious why continually checking a condition will use lots of CPU. So don't do that. Use Thread.sleep() instead. Your app will then use no CPU at all during the wait.
Also, for God's sake, why are you passing a string and then parseInting it, rather than just passing an Integer in the first place? :-)
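A minimal sketch of TimeDelay() rewritten with Thread.sleep() (keeping the String parameter only so the existing call sites still compile):

public void TimeDelay(String delayTime) {
    int seconds = Integer.parseInt(delayTime);     // seconds to wait
    try {
        Thread.sleep(seconds * 1000L);             // yields the CPU instead of busy-waiting
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();        // restore the interrupt flag and stop waiting
    }
}

Note that if this runs on the UI thread the activity is still blocked while waiting, exactly as with the busy loop; the sleep only removes the CPU load.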
If your method takes a long time to finish, especially in the while loop, you should put Thread.sleep(50) inside the loop. This lets the processor handle other processes.
Your CPU usage will be reduced. I'm not sure, but you should try.
Hope you get a good result.