From my app I upload a file to our server using this basic code (the real code does a bit more, of course, but this is essentially it):
HttpParams params = new BasicHttpParams();
HttpConnectionParams.setConnectionTimeout(params, 15000);
HttpConnectionParams.setSoTimeout(params, 5 * 60 * 1000);
HttpClient client = new DefaultHttpClient(params);
HttpPost post = new HttpPost("upload url");
HttpEntity requestEntity = new FileEntity(tmpFile, "multipart/form-data; boundary=" + boundary);
post.setEntity(requestEntity);
HttpResponse response = (HttpResponse) client.execute(post);
That works fine MOST of the time.
For some phones running Android 2.2+ the file received on the server
side is not complete. Small portions of the file are simply missing,
and the parts that are missing are at different locations of the file
each time.
We have verified this by comparing the file from the app against what
is received on the server side. On the server side we captured
packets with tcpdump to make sure it wasn't an issue with our web
server or web server code.
We also captured the data with tcpdump on the phone. The tcpdump capture from the phone DOES differ from the data we are trying to send. In one case we analyzed, the capture is missing the file's data between offsets 8d68 and 9000. The packet boundaries line up with those offsets: one packet carries data up to 8d68, and the next packet's data starts at 9000.
For these phones the problem only happens some of the time. Sometimes
file uploads work and the entire file is received intact on our end.
This is happening ONLY on 2.2+ phones. It happens across a wide variety of phones, a variety of carriers, and for hundreds of users. It appears to happen over both wifi and 3G, based on the IP addresses seen on the server side.
This is anecdotal, but while trying to reproduce this on my Nexus over the past two days I have seen it happen six times, and always right around the moment I am entering or leaving the room near a certain wifi router. The rest of the day, when I'm in the office on a different wifi router or on a cell network, the issue never happens. My theory is that the app is busy sending data while the phone hands over from wifi to the cell network, or vice versa. Is that a dumb idea, or a possibility?
I can put the tcpdump files and data files up somewhere if anyone cares to take a look.
What else should I be investigating to figure out the reason for this?
I was facing a similar issue when uploading binary data. I figured out that it was a problem with the server code.
There were some filters on the server side that were reading the incoming requests and trying to log them. The code was putting the incoming stream into a String-type data structure, and it was because of this that the binary images were distorted.
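As a quick, self-contained illustration of why that corrupts uploads (this is not the actual server code, just a sketch of the pattern): decoding arbitrary bytes as text and re-encoding them is not a lossless round trip.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BinaryRoundTrip {
    public static void main(String[] args) {
        // Bytes that do not form valid UTF-8, typical for image/binary payloads.
        byte[] original = {(byte) 0x89, 'P', 'N', 'G', (byte) 0xFF, (byte) 0xFE, 0x00, 0x10};

        // What a logging filter effectively does when it stuffs the body into a String.
        String logged = new String(original, StandardCharsets.UTF_8);
        byte[] roundTripped = logged.getBytes(StandardCharsets.UTF_8);

        // Invalid byte sequences are replaced during decoding, so the data no longer matches.
        System.out.println("identical: " + Arrays.equals(original, roundTripped)); // prints false
    }
}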
Just check if there are any filters at the server side. Hope this helps
Related
I have a problem: I am trying to make a POST request from my app, but it always throws an IOException when reading the response (response code, message, etc.).
I made the same POST request (the one I am trying to make from my app) from my PC and used Wireshark to look at the response, but the response arrives in multiple PIECES, not in one piece as usual.
In my app I use HttpURLConnection in an AsyncTask.
How do I manage to catch all of the response?
I've added a picture of my Wireshark capture and marked in red the part that contains the response:
Wireshark http post with response in pieces
This is just how anything over a network works.
The data is split into smaller packets, which get routed through the network. The maximum packet size is determined by the different hops along the way.
In most cases those packets carry something like 1460 bytes of actual payload.
TCP makes this packet handling transparent, as it is a protocol on top of this raw layer, so normally you don't have to take care of it.
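In practice that means your code just keeps reading from the response stream until it reports end-of-stream; the packet boundaries you see in Wireshark never surface at that level. A minimal sketch with HttpURLConnection (the URL is whatever you are requesting):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReadWholeResponse {
    static String readResponse(String urlString) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        InputStream in = conn.getInputStream();
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        // Keep reading until the stream signals end-of-response; TCP reassembles the packets for you.
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        in.close();
        conn.disconnect();
        return buffer.toString("UTF-8");
    }
}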
If you want to see the whole response (like your application will see it), you can view single tcp streams in wireshark:
https://www.wireshark.org/docs/wsug_html_chunked/ChAdvFollowTCPSection.html
Edit
The errors you are seeing don't have anything to do with the packaging.
I am using the HttpURLConnection class for requesting JSON responses.
I realized that, no matter whether or not I set
System.setProperty("http.keepAlive", "false");
The first response is always going to take longer, while the next responses are very quick, with and without keepAlive. I am not even using SSL.
Notice that, my app doesn't need to perform any authentication with the server, so there isn't any startup call to the webservices. The first request I make to the webservices is actually the very first.
I am also verifying server-side with "netstat" that, when I set keepAlive to false on the Android client, the connections disappear straight away, while without specifying keepAlive false they stay in the "ESTABLISHED" state.
How can you explain that subsequent responses are quicker even if the connection doesn't persist?
ANDROID CODE:
line 1) URL url = new URL(stringUrl);
line 2) HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
line 3) InputStream instream = new BufferedInputStream(urlConnection.getInputStream());
Up to line 2 everything always executes very quickly, with or without keepAlive. Line 3 takes around 3 seconds for the first request, while in all subsequent requests it always takes less than 1 second. Each request is about 0.5 KB gzipped.
SYSTEM:
I am testing using a Nexus 5, connected via 3G
My webservices are written in Go, running on a CentOS 6.4 linux server
I am using standard tcp v4
UPDATE:
For the moment I have decided to use a trick: when the fragment is resuming, I make an HTTP HEAD request to the server. That way all subsequent calls in the next 10 seconds are very quick. If the user waits more than 10 seconds, then the first call will be slow and the following ones quick again. This all happens without using keepAlive.
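For reference, the warm-up call looks roughly like this. This is only a sketch, not my exact code: the base URL is a placeholder, the request runs on a plain background thread, and it assumes the server answers HEAD requests.

import java.net.HttpURLConnection;
import java.net.URL;

public final class ConnectionWarmUp {
    // Fire-and-forget HEAD request to bring the radio/connection up before the real calls.
    public static void warmUp(final String baseUrl) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(baseUrl).openConnection();
                    conn.setRequestMethod("HEAD");
                    conn.setConnectTimeout(5000);
                    conn.setReadTimeout(5000);
                    conn.getResponseCode(); // actually performs the request
                    conn.disconnect();
                } catch (Exception ignored) {
                    // Best effort: if the warm-up fails, the next real request just pays the full cost.
                }
            }
        }).start();
    }
}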
It's a really big mystery now. It looks like there is some kind of "awake" period which lasts for about 10 seconds. I don't think there is anything strange in my code that could cause that, especially because all the time seems to be spent in line 3 above.
SOLVED! thanks to Mark Allison!
Here is a very clear explanation:
http://developer.android.com/training/efficient-downloads/efficient-network-access.html
Also, everything can easily be monitored using Android DDMS's Network Statistics. If you wait some seconds (say 20) after the last request, you can see that a new request takes about 2 seconds to transmit.
I suspect that the lag that you are seeing is simply down to the cellular radio transitioning from either low power or idle state to full power (which can take over 2 seconds).
Check out Reto Meier's excellent DevBytes series on Efficient Data Transfer for an in-depth explanation of what's going on.
The first request obviously cannot leverage a keep-alive connection, because thankfully Android doesn't keep connections alive for minutes or hours. Only subsequent requests can reuse keep-alive connections, and only within a short period of time.
It's natural that you have to wait in line 3. Before something like conn.getResponseCode() or conn.getInputStream() is called, the HttpURLConnection is in the CREATED state. There is no network activity until it moves to the CONNECTED state. Buffered* shouldn't make any difference here.
I've observed long delays when using SSL while there was a time shift between server and device. This happens very often when using an emulator that is not cold-booted. For that I have a small script that runs before the test. It's important that the PC and the emulator are in the same time zone, otherwise it's very counter-productive (see the command below, since it's hard to show inline).
I can imagine that Android saves battery by putting 3G into sleep mode when there is no activity. This is just speculation, but you could test it by creating some other network activity with other apps (browser, Twitter, ...) and then see whether your app needs the same long "think time" until the first connection.
There are other good candidates for losing time: DNS resolution, or server-side "sleep" (e.g. a virtual machine loading "memory" from disk).
The command to set time of Android emulator:
adb -e shell date -s `date +"%Y%m%d.%H%M%S"`
Edit
To further analyze the problem, you could run tcpdump on your server. Here is a tutorial in case you don't know it well. Store the dumps to files (pcap) and then view them with Wireshark. Depending on the traffic on your CentOS server, you will have to set some filters so that you only record the traffic from your Android device. I hope this gives some insight into the problem.
To rule your server out as the bad guy, you could create a small script with curl commands doing the equivalent of your app.
You could create a super-tiny service without database or other I/O dependencies and measure the performance. I don't know Go, but the best thing would be a static JSON file served by Apache or nginx. If you only have Go, then take something like /ping -> { echo "pong" }. Please tell us your measurements and observations.
Instead of using so many classes, I suggest you use this library;
you can have a look at it here:
http://loopj.com/android-async-http/
Your code will become much shorter: instead of declaring so many classes and writing bulky code, you can just use a few lines:
AsyncHttpClient client = new AsyncHttpClient();
client.get("http://www.google.com", new AsyncHttpResponseHandler() {
    @Override
    public void onSuccess(String response) {
        System.out.println(response);
    }
});
It is very efficient at getting the response quickly (1 or 2 seconds, including parsing).
I hope this will help you out. :)
I'm seeing a problem with HttpClient when posting/putting a long StringEntity. When the entity is short, there is no problem at all. However, when the length exceeds a certain value (somewhere around 1400 characters), the HTTP request is never sent out (I sniffed the interface using Wireshark). The connection is established, but the data is not transmitted, so the receiving side gets a timeout exception.
I'm wondering if there is a length limit.
I tried increasing the connection timeout and socket timeout, which only made me wait longer to see the timeout...
I also tried using an InputStreamEntity, which didn't work either.
[Update]: I tried using HttpURLConnection directly instead of HttpClient. The same problem still exists. However, I do have some findings: when I forced the request to be cut into chunks (using HttpURLConnection.setChunkedStreamingMode), Wireshark did capture some segments of the request, with the earlier segments missing. I guess this must be a bug in the Apache HTTP library.
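For anyone trying the same HttpURLConnection route, this is roughly the shape of it (a sketch only; the URL, payload, content type, and chunk size are placeholders, not the original code):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ChunkedPost {
    static int post(String urlString, String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json; charset=utf-8");
        conn.setChunkedStreamingMode(4096); // stream the body in chunks instead of buffering it all
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes(StandardCharsets.UTF_8));
        out.close();
        int status = conn.getResponseCode(); // the request is only completed once the response is read
        conn.disconnect();
        return status;
    }
}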
The issue is resolved. It took me several days to find out that the problem was with my wireless router. It has some weird settings that truncate large incoming HTTP messages.
It is possible it is timing out on the server side. Also make sure you are using the org.apache.client.httpclient jar. Officially there is no maximum length for a URL. Look here: http://www.w3.org/Protocols/rfc2616/rfc2616.html
I think this can help you: urllength
I want to measure the data transferred and received for my HTTP request over the mobile network. I found the TrafficStats API in Android 2.2, but the numbers I get from this seem to be too low. I am measuring on a real device (HTC Desire), over 3G network.
Immediately before the request I compute the tx/rx data since boot time, then do a GET request, then compute the rx/tx numbers again:
long bytesStart = TrafficStats.getTotalTxBytes() + TrafficStats.getTotalRxBytes();
// execute request (Apache HTTP client)
client.execute(get);
long totalBytesNow = TrafficStats.getTotalTxBytes() + TrafficStats.getTotalRxBytes();
long bytesDiff = totalBytesNow - bytesStart;
For bytesDiff I get numbers around 1.5 KB, when it should have been about 3.5 KB. The totalBytes number is around 5 MB, which seems a reasonable amount of data for the 10 hours the device has been running.
I read here on Stack Overflow that the numbers come from a file; could it be that the data is not flushed to the file immediately? Or why are my numbers always too low (I checked the data size on the server and it really is larger)?
Is this a problem with my HTC Desire device (Android 2.2 by HTC), or does the TrafficStats API work for anyone else?
Thanks,
Martin
Could it be that the HTTP stack is compressing your data?
Also, the files under /sys are not actual files in the common sense; they are a file interface to kernel data (the "everything is a file" philosophy of Unix/Linux).
OK, I found the problem in my code. After "client.execute(get);" only the headers etc. have been transferred, not the content. After consuming the content InputStream the numbers are correct...
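For reference, a sketch of the corrected measurement. It reuses the client and get objects from the snippet above, and EntityUtils is org.apache.http.util.EntityUtils; the body bytes are only counted once the response entity has actually been read.

long bytesStart = TrafficStats.getTotalTxBytes() + TrafficStats.getTotalRxBytes();

HttpResponse response = client.execute(get);
// Reading the entity is what actually transfers (and counts) the response body.
String body = EntityUtils.toString(response.getEntity());

long totalBytesNow = TrafficStats.getTotalTxBytes() + TrafficStats.getTotalRxBytes();
long bytesDiff = totalBytesNow - bytesStart; // now includes both headers and content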
Is there a substantial overhead in using HTTP over plain sockets (Java on Android) to send a large (50-200 MB) file [the file is on the SD card] from an Android device to a Linux server over a Wi-Fi network?
In my current prototype I'm using CherryPy-3.2.0 to implement my HTTP server. I'm running Android 2.3.3 on a Nexus one as my client.
Currently it takes around ~100 seconds** (on the slower, 18 Mbps* network) and ~50 seconds (on the faster, 54 Mbps* network) to upload a 50 MB binary file over Wi-Fi.
NOTE:
*I'm using WifiInfo.getLinkSpeed() to measure the network link speed
** This is the time difference before and after the HTTPClient.execute(postRequest)
Any ideas about other expensive operations that may account for a substantial part of the total time, apart from the network itself, and how to reduce this time would be appreciated.
Thanks.
EDIT - HTTP post code on Android
private void doHttpPost(String fileName) throws Exception {
    HttpParams httpParameters = new BasicHttpParams();
    // Set the timeout in milliseconds until a connection is established.
    int timeoutConnection = 9000000;
    HttpConnectionParams.setConnectionTimeout(httpParameters, timeoutConnection);
    // Set the default socket timeout (SO_TIMEOUT)
    // in milliseconds, which is the timeout for waiting for data.
    int timeoutSocket = 9000000;
    HttpConnectionParams.setSoTimeout(httpParameters, timeoutSocket);

    HttpClient client = new DefaultHttpClient(httpParameters);
    client.getParams().setParameter(ClientPNames.COOKIE_POLICY, CookiePolicy.RFC_2109);

    HttpPost postRequest = new HttpPost();
    postRequest.setURI(new URI("http://192.168.1.107:9999/upload/"));

    MultipartEntity multiPartEntity = new MultipartEntity();
    multiPartEntity.addPart("myFile", new FileBody(new File(fileName)));
    postRequest.setEntity(multiPartEntity);

    long before = TrafficStats.getTotalTxBytes();
    long start = System.currentTimeMillis();
    HttpResponse response = client.execute(postRequest);
    long end = System.currentTimeMillis();
    long after = TrafficStats.getTotalTxBytes();

    Log.d(LOG_TAG, "HTTP Post Execution took " + (end - start) + " ms.");
    if (before != TrafficStats.UNSUPPORTED && after != TrafficStats.UNSUPPORTED)
        Log.d(LOG_TAG, (after - before) + " bytes transmitted to the server");
    else
        Log.d(LOG_TAG, "This device does not support Network Traffic Stats");

    HttpEntity responseEntity = response.getEntity();
    if (responseEntity != null) {
        // Read the body (org.apache.http.util.EntityUtils) before releasing the connection;
        // calling getContent() after consumeContent() would not return the response body.
        Log.d(LOG_TAG, "HTTP Post Response " + EntityUtils.toString(responseEntity));
    }
    client.getConnectionManager().shutdown();
}
EDIT 2: Based on the results reported by this tool, it looks like the SD card read speed is not the issue. So it may be either the HttpClient library or something else.
Overhead on an HTTP connection comes from the headers it sends along with your data (which are basically a constant size). So the more data you send, the less the headers hurt you. However, the much more important aspect to consider is encoding.
For example, if you are sending non-ASCII data paired with a MIME type of application/x-www-form-urlencoded, you run the risk of inflating the input size, because non-ASCII characters must be escaped.
From the spec:
The content type "application/x-www-form-urlencoded" is inefficient for sending
large quantities of binary data or text containing non-ASCII characters. The
content type "multipart/form-data" should be used for submitting forms that
contain files, non-ASCII data, and binary data.
The alternative is multipart/form-data, which is efficient for binary data. So make sure your application is using this MIME type (you can probably even check this in your server logs).
Another method which can considerably reduce your upload time is compression. If you are uploading data which isn't already compressed (most image and video formats already are), try adding gzip compression to your uploads. Another post shows the details of setting this up on Android.
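As a rough sketch of that idea (not the code from the linked post; the server must of course be prepared to decompress, e.g. by honoring a Content-Encoding: gzip header), you could gzip the file into a temporary file and upload that instead:

import java.io.*;
import java.util.zip.GZIPOutputStream;

public class GzipForUpload {
    // Compresses the source file into a temporary .gz file and returns it.
    static File gzip(File source) throws IOException {
        File compressed = File.createTempFile("upload", ".gz");
        InputStream in = new BufferedInputStream(new FileInputStream(source));
        OutputStream out = new GZIPOutputStream(new BufferedOutputStream(new FileOutputStream(compressed)));
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        in.close();
        out.close(); // finishes the gzip stream; upload "compressed" with Content-Encoding: gzip
        return compressed;
    }
}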
If your data is in a specific format (say an image), you can look into lossless compression algorithms for that type of data (PNG for images, FLAC for audio, etc.). Compression always comes at the price of CPU (and battery), so keep that in mind.
Remember:
Don't optimize something until you know it's the bottleneck. Maybe your server's connection is slow; maybe you can't read from the Android file system fast enough to push your data to the network. Run some tests and see what works.
If it were me, I would not implement the straight tcp approach. Just my 2 cents, good luck!
No, there is no significant overhead associated with using HTTP over raw sockets. However, it really depends on how you're using HttpClient to send this file. Are you properly buffering between the file system and HttpClient? The latency might not be the network but reading the file from the filesystem. In fact, you increased the raw link speed by 3x and only saw a 2x reduction. That probably means there is some latency elsewhere: in your code, on the server, or in the filesystem. You might try uploading a file from a desktop client to make sure it's not the server causing the latency. Then look at the filesystem throughput. If that all checks out, look at the code you've written using HttpClient and see whether it can be optimized.
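One quick way to rule the filesystem in or out (a rough sketch; the path is whatever file you upload): time a plain buffered read of the file, with no network involved, and compare it with the upload time.

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;

public class FileReadBenchmark {
    // Reads the whole file and reports the raw read time, with no network involved.
    static void measure(String path) throws Exception {
        long start = System.currentTimeMillis();
        InputStream in = new BufferedInputStream(new FileInputStream(path), 64 * 1024);
        byte[] buffer = new byte[64 * 1024];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            total += n;
        }
        in.close();
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(total + " bytes read in " + elapsed + " ms");
    }
}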
Note also that in CherryPy 3.2 the system for handling request bodies has been completely reworked, and you are much more free to implement varying handlers based on the media type of the request. By default, CherryPy will read your uploaded bytes into a temporary file; I assume your code then copies that to a more permanent location, which might be overhead that isn't useful to you (although there are good security reasons to use a temporary file). See also this question for a discussion of renaming temp files.
You can override that behavior; make a subclass of _cpreqbody.Part with a make_file function that does what you want, then, in a Tool, replace cherrypy.request.body.part_class for that URI. Then post your code on http://tools.cherrypy.org so everyone can benefit :)