I'm using AsyncTask to upload an image to a web server. When I implement a method to update the progress bar, it slows down the upload, but without the progress-bar update loop the image uploads in a few seconds (very fast). Here is the code that I use to update the progress bar:
bufferSize = Math.min(bytesAvailable, maxBufferSize);
mFileLen = file.length();
for (int i = 0; i < bufferSize; i++) {
    publishProgress((int) ((i / (float) mFileLen) * 100));
}
Is there any way I can update the progress bar without slowing down the operation?
A few ideas:
You don't need to set the progressBar1 max on each iteration of the loop; it's pointless, since bufferSize doesn't change inside the loop. Move that call outside the loop.
Try to remember your previous progress: prevProgress = (i / mFileLen). Then measure your new progress: currProgress = (i / mFileLen). If the difference between the two isn't at least 0.01 (1%), don't update the UI: if (currProgress - prevProgress >= 0.01) publishProgress(...);
Here's an example:
float currProgress = 0;
float prevProgress = 0;
for (int i = 0; i < bufferSize; i++)
{
    currProgress = ((float) i / (float) mFileLen);
    if (currProgress - prevProgress >= 0.01)
    {
        publishProgress((int) (currProgress * 100));
        prevProgress = currProgress;
    }
}
You could make it work even faster by updating the UI only when the progress has advanced by more than 2% (0.02) rather than 1% (0.01), and so on.
Also, I feel that there's no real correlation between i and mFileLen. What's your bufferSize? Is it less than mFileLen? (It should be.) In that case, you should use a separate variable to count the overall number of bytes sent, and use that instead of i to measure the progress.
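For example, here is a rough sketch of what the upload loop could look like when you track the total bytes written instead of the buffer index (the buffer, fileInputStream and dos names are assumptions for illustration, not taken from your code):
// Sketch: report progress from the overall number of bytes sent, throttled to ~1% steps.
long totalWritten = 0;
int lastPercent = 0;
int bytesRead = fileInputStream.read(buffer, 0, bufferSize);
while (bytesRead > 0) {
    dos.write(buffer, 0, bytesRead);                        // dos = output stream to the server
    totalWritten += bytesRead;
    int percent = (int) ((totalWritten * 100) / mFileLen);
    if (percent - lastPercent >= 1) {                       // at most ~100 UI updates in total
        publishProgress(percent);
        lastPercent = percent;
    }
    bytesRead = fileInputStream.read(buffer, 0, bufferSize);
}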
Hope this helps.
As a reminder, you can send data to a web server from an IntentService, which is also asynchronous. An IntentService is preferred if the user doesn't immediately have to work with the results of the operation; it runs completely in the background, but you can still post notifications from it (a minimal sketch follows the questions below). Ask yourself these questions:
Do I want this operation to continue even if the user changes the device orientation? (An AsyncTask will stop if the Activity is destroyed; an IntentService won't.)
Conversely, do I mind if the operation has to restart? (An AsyncTask is a bit simpler.)
Am I storing persistent data? (Use an IntentService, or even a bound Service and a SyncAdapter.)
Does the user need to stick around while the operation completes? (If yes, an AsyncTask gives more immediate feedback when it's finished. If no, an IntentService is somewhat disconnected from the app's Activities, so it can churn along merrily while other stuff is happening.)
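For reference, here is a minimal sketch of the IntentService approach; the UploadService class name and the ACTION_PROGRESS broadcast are placeholders for illustration, not an existing API:
import android.app.IntentService;
import android.content.Intent;
import android.support.v4.content.LocalBroadcastManager;

public class UploadService extends IntentService {
    public static final String ACTION_PROGRESS = "com.example.UPLOAD_PROGRESS";

    public UploadService() {
        super("UploadService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        String filePath = intent.getStringExtra("filePath");
        // ... do the upload here, exactly as you would in doInBackground() ...
        // Broadcast progress so an Activity (if one is alive) can update its ProgressBar.
        Intent progress = new Intent(ACTION_PROGRESS).putExtra("percent", 100);
        LocalBroadcastManager.getInstance(this).sendBroadcast(progress);
    }
}
The Activity just registers a BroadcastReceiver for ACTION_PROGRESS in onResume() and unregisters it in onPause(); if the Activity has been destroyed, the upload keeps going regardless.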
Related
I'm trying to get a precise clock that is not influenced by other processes inside the app.
I currently use System.nanoTime() as shown below, inside a thread.
I use it to calculate the timing of each of the sixteen steps.
Currently the timed operations sometimes have a perceptible delay that I'm trying to fix.
I would like to know if there is a more precise way of obtaining timed operations, maybe by checking the internal soundcard clock and using it to generate the timing I need.
I need it to send MIDI notes from an Android device to external audio synthesizers, and for audio I need precise timing.
Is there anyone who can help me improve this aspect?
Thanks
cellLength = (long) (0.115 * 1000000000L);
for (int x = 0; x < 16; x++) {
    noteStartTimes[x] = x * cellLength;
}
long startTime = System.nanoTime();
index = 0;
while (isPlaying) {
    if (noteStartTimes[index] < System.nanoTime() - startTime) {
        index++;
        if (index == 16) { // reset things
            startTime = System.nanoTime() + cellLength;
            index = 0;
        }
    }
}
For any messages that you receive, the onSend callback gives you a timestamp.
For any messages that you send, you can provide a timestamp.
These timestamps are based on System.nanoTime(), so your own code should use this as well.
If your code is delayed (by its own processing, by other apps, or by background services), System.nanoTime() will accurately report the delay. But no timer function can make your code run earlier.
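For example, assuming you are on the android.media.midi API (API 23+) and inputPort is an already-opened MidiInputPort, you can hand the timing problem to the MIDI stack by scheduling each step ahead of time instead of spinning in a loop. This is only a sketch:
// Schedule the sixteen steps in advance; the timestamp parameter is in
// nanoseconds and uses the same time base as System.nanoTime().
long cellLength = (long) (0.115 * 1000000000L);        // 115 ms per step
long startTime = System.nanoTime();
byte[] noteOn = {(byte) 0x90, (byte) 60, (byte) 100};  // note-on, middle C, velocity 100
try {
    for (int x = 0; x < 16; x++) {
        inputPort.send(noteOn, 0, noteOn.length, startTime + x * cellLength);
    }
} catch (IOException e) {
    // handle a failed send
}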
I'm curious about this.
I wanted to check which function was faster, so I wrote a little test and executed it many times.
public static void main(String[] args) {
    long ts;
    String c = "sgfrt34tdfg34";

    ts = System.currentTimeMillis();
    for (int k = 0; k < 10000000; k++) {
        c.getBytes();
    }
    System.out.println("t1->" + (System.currentTimeMillis() - ts));

    ts = System.currentTimeMillis();
    for (int i = 0; i < 10000000; i++) {
        Bytes.toBytes(c);
    }
    System.out.println("t2->" + (System.currentTimeMillis() - ts));
}
The "second" loop is faster, so, I thought that Bytes class from hadoop was faster than the function from String class. Then, I changed the order of the loops and then c.getBytes() got faster. I executed many times, and my conclusion was, I don't know why, but something happen in my VM after the first code execute so that the results become faster for the second loop.
This is a classic Java benchmarking issue. HotSpot/the JIT/etc. will compile your code as you use it, so it gets faster during the run.
Run around the loop at least 3,000 times (10,000 on a server or on 64-bit) first, then do your measurements.
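In other words, something along these lines before the timed section (reusing the question's c, with the 10,000 count mentioned above):
// Warm-up pass so HotSpot has already compiled the hot paths before measuring.
for (int k = 0; k < 10000; k++) {
    c.getBytes();
    Bytes.toBytes(c);
}
// ...now run the timed loops...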
You know there's something wrong, because Bytes.toBytes calls c.getBytes internally:
public static byte[] toBytes(String s) {
    try {
        return s.getBytes(HConstants.UTF8_ENCODING);
    } catch (UnsupportedEncodingException e) {
        LOG.error("UTF-8 not supported?", e);
        return null;
    }
}
The source is taken from here. This tells you that the call cannot possibly be faster than the direct call; at the very best (i.e. if it gets inlined) it would have the same timing. Generally, though, you'd expect it to be a little slower, because of the small overhead of calling a function.
This is the classic problem with micro-benchmarking in interpreted, garbage-collected environments, with components that run at arbitrary times, such as garbage collectors. On top of that, there are hardware optimizations, such as caching, that skew the picture. As a result, the best way to see what is going on is often to look at the source.
The "second" loop is faster, so,
When you execute a method at least 10,000 times, it triggers the whole method to be compiled. This means that your second loop can be either faster, because it has already been compiled by the time you run it, or slower, because when it is optimised the JIT doesn't have good information/counters on how the code is actually executed.
The best solution is to place each loop in a separate method, so one loop doesn't influence the optimisation of the other, AND to run the whole thing a few times, ignoring the first run.
e.g.
for (int i = 0; i < 3; i++) {
    long time1 = doTest1(); // timed using System.nanoTime();
    long time2 = doTest2();
    System.out.printf("Test1 took %,d on average, Test2 took %,d on average%n",
            time1 / RUNS, time2 / RUNS);
}
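Here, doTest1() and doTest2() would each time one of the two loops in isolation and return the elapsed nanoseconds. A rough sketch of what they could look like (RUNS is simply an assumed iteration count, not something from the original code):
static final int RUNS = 10000000;
static final String c = "sgfrt34tdfg34";

// Times String.getBytes() on its own.
static long doTest1() {
    long start = System.nanoTime();
    for (int k = 0; k < RUNS; k++) {
        c.getBytes();
    }
    return System.nanoTime() - start;
}

// Times Hadoop's Bytes.toBytes() on its own.
static long doTest2() {
    long start = System.nanoTime();
    for (int k = 0; k < RUNS; k++) {
        Bytes.toBytes(c);
    }
    return System.nanoTime() - start;
}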
Most likely, the code was still compiling or not yet compiled at the time the first loop ran.
Wrap the entire method in an outer loop so you can run the benchmarks a few times, and you should see more stable results.
Read: Dynamic compilation and performance measurement.
It might simply be the case that you allocate so much space for objects with your calls to getBytes() that the JVM garbage collector starts up and cleans out the unused references (taking out the trash).
A few more observations:
As pointed out by @dasblinkenlight above, Hadoop's Bytes.toBytes(c) internally calls String.getBytes("UTF-8").
The variant of String.getBytes() that takes a character set as input is faster than the one that takes no character set. So, for a given string, getBytes("UTF-8") will be faster than getBytes(). I have tested this on my machine (Windows 8, JDK 7): run the two loops, one with getBytes("UTF-8") and the other with getBytes(), in sequence with equal iteration counts.
long ts;
String c = "sgfrt34tdfg34";

ts = System.currentTimeMillis();
for (int k = 0; k < 10000000; k++) {
    c.getBytes("UTF-8");
}
System.out.println("t1->" + (System.currentTimeMillis() - ts));

ts = System.currentTimeMillis();
for (int i = 0; i < 10000000; i++) {
    c.getBytes();
}
System.out.println("t2->" + (System.currentTimeMillis() - ts));
this gives:
t1->1970
t2->2541
The results are the same even if you change the order in which the loops execute. To discount any JIT optimizations, I would suggest running the tests in separate methods to confirm this (as suggested by @Peter Lawrey above).
So, Bytes.toBytes(c) should always be faster than String.getBytes().
I'm getting a very peculiar issue with my audio callbacks in my Android app (which uses the NDK/OpenSL ES). I'm streaming audio output at 44.1 kHz and 512 frames (which gives me a callback period of 11.6 ms). In the callback, I'm synthesizing a couple of waveforms, filters, etc. (like a synthesizer). Thanks to optimization, I never use more than 5 ms of the callback time. However, when I turn on a specific effect (a digital delay line), the callback starts to take radically longer: the total jumps from 7.5 ms (after all voices/filters have been processed) up to 100 to 350 ms.
This is the most confusing part: after maybe 1 or 2 seconds, the digital delay's execution time drops from that extremely high value down to 0.2 ms per callback.
Why would the Android app take a long time to complete my digital delay processing code for the first few callbacks and then settle down to a very short and audio-happy time? I'm kind of at a loss right now and not sure how to fix this. To confirm, this only happens with the delay processing method. It's just a standard digital delay line (you can find some on GitHub), and I don't feel the algorithm is the problem here...
Kind of a pseudocode/rough sketch of what my audio callback code looks like:
static bool myAudioCallback(void *userData, short int *audIO, int numSamples, int srate) {
    AudioData *data = (AudioData *)userData;

    // Reset output buffer values to 0
    for (int i = 0; i < numSamples; i++) data->buffer[i] = 0;

    // Voice Generation Block
    for (int voice = 0; voice < data->numVoices; voice++) {
        // Reset voice buffer:
        for (int i = 0; i < numSamples; i++) data->voiceBuffer[i] = 0;

        // Generate Voice
        data->voiceManager[voice]->generateVoiceBlock(data->voiceBuffer, numSamples);

        // Sum voices
        for (int i = 0; i < numSamples; i++) data->buffer[i] += data->voiceBuffer[i];
    }

    // When the app first starts, delayEnabled = false, so the user must click a
    // button in the UI to enable it.
    // The trouble is that the first time processDelay(double *buffer, int frames)
    // runs, we get a long execution time.
    if (data->delayEnabled) {
        data->delay->processDelay(data->buffer, numSamples);
    }

    // Conversion loop
    for (int i = 0; i < numSamples; i++) {
        double sample = clipOutput(data->buffer[i]);
        audIO[2*i] = audIO[(2*i)+1] = CONV_FLT_TO_16BIT(sample * data->volume);
    }

    return true;
}
Thanks!
Not a great answer to the problem, but this is what I did:
Before the user is able to do anything in the app, I turn on the delay and let it run its course for about 2 seconds before switching it off. This lets the callback get its weird, long ~300 ms executions out of the way without audibly destroying the audio.
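Roughly, driven from the Java side it could look like this; enableDelay()/disableDelay() are placeholder names for whatever JNI calls flip the native delayEnabled flag:
// Warm the delay line up for ~2 seconds at startup, then switch it off
// before the user can touch anything.
enableDelay();
new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
    @Override
    public void run() {
        disableDelay();
    }
}, 2000);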
Obviously this is not a great answer and if anyone can figure out a more logical explanation I would be more than happy to mark that as the answer.
My application has a UI (implemented with an Activity) and a service (implemented with an IntentService). The service is used to send data (synchronously, using NetworkStream.Write) to a remote server, as well as to report the transmission status back to the UI (implemented using the broadcast receiver method).
Here is my problem:
The application works properly if the size of the buffer used for NetworkStream.Write is 11 KB or less.
However, if the buffer is larger than 11 KB, say 20 KB (a size needed in order to send JPG images), then the service keeps working properly (verified with a log file), but the UI is gone (similar to what happens when the device's back button is pushed) and I can't find a way to bring it back. It's important to point out that in this case the Activity does not go into OnStop() or OnDestroy().
At first I thought this was some ApplicationNotResponding issue caused by a server delay, yet the UI crashes after about 5 seconds.
Moreover, this only happens with the hardware version. The emulator version works fine.
// SEND STREAM:
Byte[] outStream = new Byte[20000];
// -- Set up TCP connection: --
TcpClient ClientSock = new TcpClient();
ClientSock.Connect("myserver.com", 5555);
NetworkStream serverStream = ClientSock.GetStream();
serverStream.Write(outStream, 0, outStream.Length);
serverStream.Flush();
// . . .
// RECEIVE STREAM:
inStream.Initialize(); // Clears any previous value.
int nBytesRead = 0;
nBytesRead = serverStream.Read(inStream, 0, 1024);
// -- Closing communications socket: --
ClientSock.Close();
One thing first: I would have commented on the question to clarify one thing before giving an answer, but unfortunately I don't have enough reputation yet.
The thing I would have asked is: why do you need a buffer greater than 11 KB to send a JPG image?
I do nearly the same thing in an (async) task with an image of 260 KB, but with a buffer of 10240 bytes. It works without difficulty.
byte[] buffer = new byte[10240];
for (int length = 0; (length = in.read(buffer)) > 0;) {
    outputStream.write(buffer, 0, length);
    outputStream.flush();
    bytesWritten += length;
    progress = (int) ((double) bytesWritten * 100 / totalBytes);
    publishProgress();
}
outputStream.flush();
I use this code to read a JPG image from resources or the SD card and post it to my server.
Well, you may want to change your application to use an AsyncTask, and take a look at the guide:
http://developer.android.com/training/basics/network-ops/connecting.html
Network operations can involve unpredictable delays. To prevent this from causing a poor user experience, always perform network operations on a separate thread from the UI.
Since Android 3.0 (Honeycomb), it's impossible to perform network-related tasks on the same thread as the UI thread; doing so throws a NetworkOnMainThreadException. Also, just to be clear: http://developer.android.com/guide/components/services.html
Caution: A service runs in the main thread of its hosting process; the service does not create its own thread and does not run in a separate process.
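So the blocking write itself has to move off the UI thread. A minimal sketch in Java of what that could look like with an AsyncTask (SendTask and the host/port here are placeholders, not your actual values):
private class SendTask extends AsyncTask<byte[], Void, Boolean> {
    @Override
    protected Boolean doInBackground(byte[]... data) {
        Socket socket = null;
        try {
            socket = new Socket("myserver.com", 5555);     // blocking I/O is fine on this worker thread
            OutputStream out = socket.getOutputStream();
            out.write(data[0]);
            out.flush();
            return true;
        } catch (IOException e) {
            return false;
        } finally {
            try { if (socket != null) socket.close(); } catch (IOException ignored) {}
        }
    }

    @Override
    protected void onPostExecute(Boolean success) {
        // Runs on the UI thread: safe to show the transmission status here.
    }
}
You would kick it off from the Activity with something like new SendTask().execute(outStream);.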
I developed an auto-call application. The app reads a text file that contains a phone-number list, calls each number for a few seconds, ends the call and then repeats.
My problem is that the app stops sending calls after 10-16 hours. I don't know the exact reason, but I guess the problem is CPU usage; my app's CPU usage is almost 50%! How do I reduce the CPU usage?
Here is part of the source code:
if(r_count.compareTo("0")!=0) {
while(index < repeat_count) {
count = 1;
time_count = 2;
while(count < map.length) {
performDial(); //start call
reject(); //end call
finishActivity(1);
TimeDelay("60"); // wait for 60sec
count = count + 2;
time_count = time_count + 2;
onBackPressed(); // press back button for calling next number
showCallLog();
finishActivity(0);
}
index++;
}
This is the TimeDelay() method source:
public void TimeDelay(String delayTime) {
    saveTime = System.currentTimeMillis() / 1000;
    currentTime = 0;
    dTime = Integer.parseInt(delayTime);
    while (currentTime - saveTime < dTime) {
        currentTime = System.currentTimeMillis() / 1000;
    }
}
TimeDelay() is called repeatedly inside the while loop.
The reason it's using 50% of your CPU is that Android apparently won't let it use 100% of the CPU, which a loop like the one in your TimeDelay() ordinarily would. (Or else you have two CPUs and it is in fact using 100% of one CPU.) What you're doing is called a busy wait and it should be obvious why continually checking a condition will use lots of CPU. So don't do that. Use Thread.sleep() instead. Your app will then use no CPU at all during the wait.
Also, for God's sake, why are you passing a string and then parseInting it, rather than just passing an Integer in the first place? :-)
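A sketch of TimeDelay() with both fixes applied (sleeping instead of spinning, and taking an int directly):
public void timeDelay(int delaySeconds) {
    try {
        // The thread consumes no CPU at all while it sleeps.
        Thread.sleep(delaySeconds * 1000L);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag and bail out
    }
}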
If your method takes a long time to finish, especially the while loop, you should put Thread.sleep(50) inside the loop. This lets the processor handle other processes.
Your CPU usage will be reduced. I'm not sure, but you should try it.
Hope you get a good result.