Unable to track upload progress with HttpURLConnection or OkHttp - Android

For some reason HttpURLConnection appears to buffer the upload data no matter what I try. I can display an upload progress percentage, but it clearly advances far faster than the data is actually flowing.
The receiving server is not on the intranet, but hosted externally. The edge router throttles the upload bandwidth to 2 Mbit/s to simulate a slow network, and the router's bandwidth graph shows the data rate for the development device. The WiFi AP also shows a data-rate graph, and it looks just like the edge router's, so no device on the intranet is buffering the data. The buffering is definitely happening on the development device (a Nexus 5X).
The following is the code that is being used:
HttpURLConnection hucConnection = (HttpURLConnection) url.openConnection();
//hucConnection.setUseCaches(false); // does not solve the issue
//hucConnection.setDefaultUseCaches(false); // does not solve the issue
//hucConnection.setAllowUserInteraction(true); // does not solve the issue
hucConnection.setConnectTimeout(6 * 1000);
hucConnection.setReadTimeout(30 * 1000);
hucConnection.setRequestProperty("content-type", "application/json; charset=UTF-8");
hucConnection.setRequestMethod("POST");
hucConnection.setDoInput(true);
hucConnection.setDoOutput(true);
// Data to transfer
byte[] bData = joTransfer.toString().getBytes("UTF-8");
int iDataLength = bData.length;
//hucConnection.setRequestProperty("content-transfer-encoding", "binary"); // does not solve the issue
// use compression
hucConnection.setRequestProperty("content-encoding", "deflate");
ByteArrayOutputStream stream = new ByteArrayOutputStream();
Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION);
DeflaterOutputStream zip = new DeflaterOutputStream(stream, deflater);
zip.write(bData);
zip.close();
deflater.end();
byte[] bZippedData = stream.toByteArray();
int iZippedDataLength = bZippedData.length;
int iChunk = 1000;
hucConnection.setChunkedStreamingMode(iChunk);
//hucConnection.setFixedLengthStreamingMode(iZippedDataLength); // does not solve the issue
hucConnection.connect();
OutputStream osOutputStream = hucConnection.getOutputStream();
// FROM HERE ---->>>
int iUploadedLength;
for (iUploadedLength = 0; iUploadedLength < iZippedDataLength - iChunk; iUploadedLength += iChunk) {
    LogWrapper.e(TAG, "l -> f:" + iUploadedLength + " t:" + (iUploadedLength + iChunk));
    osOutputStream.write(Arrays.copyOfRange(bZippedData, iUploadedLength, iUploadedLength + iChunk));
    osOutputStream.flush();
}
LogWrapper.e(TAG, "r -> f:" + iUploadedLength + " t:" + iZippedDataLength);
osOutputStream.write(Arrays.copyOfRange(bZippedData, iUploadedLength, iZippedDataLength));
osOutputStream.flush();
osOutputStream.close();
// <<<---- TO HERE ---- XXXXXXXXX max 1 second XXXXXXXXX
// FROM HERE ---->>>
int iResponseCode = hucConnection.getResponseCode();
// <<<---- TO HERE ---- XXXXXXXXX about 10 seconds XXXXXXXXX
if (iResponseCode != HttpURLConnection.HTTP_OK) {
...
I expected the calls to osOutputStream.flush(); to force the HttpURLConnection to send the data to the server, but for some reason that isn't happening.
The data appears to be buffered somewhere, because it is actually uploaded between osOutputStream.close(); and hucConnection.getResponseCode();.
All the transfers (upload and download) work properly; no data is damaged.
Is there a way to fix this, or an alternative to HttpURLConnection? I've read that the Socket class does not have this problem, but I'm not sure it handles redirects and the like properly. I don't need cookies or similar features.
The approx. 10 seconds that hucConnection.getResponseCode(); takes to finish matches an upload of about 3 MB (3 MB × 8 bit/byte = 24 Mbit; 24 Mbit ÷ 2 Mbit/s = 12 s); the response data is only sent after that call. The progress of the downloaded data is accurate.
Is it possible that a third-party library is altering HttpURLConnection's behavior and doing some proxying? Firebase, for example? I already disabled Crashlytics, but I think Firebase also does some kind of stats gathering (response times). I had some strange issues 1-2 months ago in another app, where I was getting a proxy error during domain name resolution, as if something inside Android were proxying network traffic.
I'm about to give OkHttp a try; one of their recipes has a POST streaming example (https://github.com/square/okhttp/wiki/Recipes).
Update: I implemented it using okhttp3, following the above-mentioned recipe. I have the exact same problem there.
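A progress-tracking request body along the lines of that recipe looks roughly like this (a minimal sketch, not the exact code from the post; the listener callback is an assumed name, and bZippedData is the compressed payload from the code above):
RequestBody requestBody = new RequestBody() {
    @Override public MediaType contentType() {
        return MediaType.parse("application/json; charset=utf-8");
    }

    @Override public long contentLength() {
        return bZippedData.length;
    }

    @Override public void writeTo(BufferedSink sink) throws IOException {
        final int chunk = 1000;
        for (int sent = 0; sent < bZippedData.length; sent += chunk) {
            int len = Math.min(chunk, bZippedData.length - sent);
            sink.write(bZippedData, sent, len);
            sink.flush(); // hands data to the socket send buffer, not necessarily onto the wire
            listener.onProgress(sent + len, bZippedData.length); // assumed callback
        }
    }
};
Note that sink.flush() only pushes bytes into the OS socket send buffer, which is consistent with the progress racing ahead of the actual transfer on both implementations.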
This is on Android 8.1
The server is an nginx instance.
I also ran the app on a Genymotion emulator instance with the same OS, and it behaves somewhat better there, though the problem still seems to be present to a degree. While radical throttling on the edge router has no effect on the Nexus 5X's reported progress, it does have an effect on the emulator. Even so, the emulator's upload tracking precision leaves much to be desired.
Would it make sense to use a WebSocket connection for that? That would be my last resort.

This logic is what I use for downloading in an AsyncTask, but I think it should work the same way for uploading (just swap input and output, and so on):
InputStream inputStream = null;
OutputStream outputStream = null;
try {
    try {
        outputStream = new FileOutputStream(documentFile, false);
        inputStream = httpConn.getInputStream();
        byte[] buffer = new byte[4 * 1024]; // or another buffer size
        long downloaded = 0;
        long target = dataLength;
        int read;
        long updateSize = target / 10; // publish progress in ~10% steps
        long updateHelp = 0;
        while ((read = inputStream.read(buffer)) != -1) {
            downloaded += read;
            updateHelp += read;
            if (updateHelp >= updateSize) {
                updateHelp = 0;
                publishProgress(downloaded, target);
            }
            outputStream.write(buffer, 0, read);
            if (isCancelled()) {
                return false;
            }
        }
        outputStream.flush();
        return true;
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    } finally {
        if (inputStream != null) {
            inputStream.close();
        }
        if (outputStream != null) {
            outputStream.close();
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}

Related

Why is Android file creation suddenly failing when it worked on Android 11 previously?

In the last few days, my Android app has suddenly started failing to download files from a web server to store in the app. This is the same for all users I have contacted. It was previously working on Android 11, so it's something that has only just changed. It's a (free) niche app for UK glider pilots to process NOTAMs, and it has a relatively large number of users who I don't want to let down.
The published app uses getExternalFilesDir(null) to return the directory in which to store the downloaded files, with android:requestLegacyExternalStorage set to "true" in the manifest.
I changed getExternalFilesDir(null) to getFilesDir() in Android Studio, since that's what I understand should now be used for internal app data files. This returns /data/user/0/(my package name)/files. I'm running the Pixel 2 API 30 emulator for debugging, and the File Explorer shows that the /data/data/(my package name)/files directory has been created. Everything I've read on here says that this is what is supposed to happen and it should all work. However, no file was created when I attempted the download.
I changed android:requestLegacyExternalStorage to "false", and this time a file was created as expected. However it was empty and the download thread was giving an exception "unexpected end of stream on com.android.okhttp.Address#89599f3f".
This is the relevant code in my DownloadFile class which runs as a separate thread (comments removed for compactness):
public class DownloadFile implements Runnable
{
    private String mUrlString;
    private String mFileName;
    private CountDownLatch mLatch;

    public DownloadFile(String urlString, String fileName, CountDownLatch latch)
    {
        mUrlString = urlString;
        mFileName = fileName;
        mLatch = latch;
    }

    public void run()
    {
        HttpURLConnection urlConnection = null;
        // Note for StackOverflow: following is a public static variable defined in the main activity
        Spine.mDownloadStatus = false;
        try
        {
            URL url = new URL(mUrlString);
            urlConnection = (HttpURLConnection) url.openConnection();
            urlConnection.setRequestMethod("GET");
            urlConnection.setDoOutput(true); // note: on a GET this can silently turn the request into a POST
            urlConnection.setUseCaches(false);
            urlConnection.connect();
            File file = new File(Spine.dataDir, mFileName);
            FileOutputStream fileOutput = new FileOutputStream(file);
            InputStream inputStream = urlConnection.getInputStream();
            byte[] buffer = new byte[1024];
            int bufferLength; // number of bytes read into the buffer per pass
            while ((bufferLength = inputStream.read(buffer)) > 0)
            {
                fileOutput.write(buffer, 0, bufferLength);
            }
            fileOutput.close();
            inputStream.close();
            Spine.mDownloadStatus = true;
        }
        // catch some possible errors...
        catch (IOException e)
        {
            Spine.mErrorString = e.getMessage();
        }
        if (urlConnection != null)
            urlConnection.disconnect();
        // Signal completion
        mLatch.countDown();
    }
}
I now believe the problem lies with the URL connection, rather than the changes to local file storage access which is what I first thought. Incidentally, if I enter the full URL into my web browser the complete text file is displayed OK, so it's not a problem with the server.
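One client-side detail worth ruling out along the way (a hedged note, since the root cause turned out to be server-side, as described below): on HttpURLConnection, setDoOutput(true) can silently turn a GET into a POST, which some servers answer by closing the connection early. A minimal GET download would skip that call (names taken from the code above):
HttpURLConnection conn = (HttpURLConnection) new URL(mUrlString).openConnection();
conn.setRequestMethod("GET"); // no setDoOutput(true) for a body-less GET
conn.setUseCaches(false);
try (InputStream in = conn.getInputStream();
     FileOutputStream out = new FileOutputStream(new File(Spine.dataDir, mFileName))) {
    byte[] buffer = new byte[1024];
    int n;
    while ((n = in.read(buffer)) > 0) {
        out.write(buffer, 0, n);
    }
} finally {
    conn.disconnect();
}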
The problem has been narrowed down to changes in the functionality of the website that hosts the data files to be downloaded. It has been moved to HTTPS, and they are currently working on further changes.
I temporarily moved the hosting to my own website in Android Studio and everything worked, so it's down to those website changes and nothing to do with my code (although it may need changing later to support the upgrade to the main hosting site).
Thanks to all for responding.

Android's SSLServerSocket causes increasing native memory in the App, OOM

Background
I am developing an Android app which provides a simple HTTP/HTTPS server. If HTTPS serving is configured, then on every connection an increasing native memory usage is observed, which eventually leads to an app crash (OOM), while the HTTP configuration keeps native memory usage relatively constant. The app's Java VM heap stays relatively constant in both configurations.
The app serves an HTML page which contains JavaScript with periodic polling (one JSON poll every second), so opening the app's page using the HTTPS configuration and keeping it open for several hours will lead to the mentioned out-of-memory due to increasing native memory usage. I have tested many SSLServerSocket and SSLContext configurations found on the internet, with no luck.
I observe the same problem on various Android devices and various Android versions from 2.2 up to 4.3.
The code for handling client requests is the same for both HTTP and HTTPS configurations. The only difference between the two is the setup of the server socket. In the HTTP case a single line similar to "ServerSocket serversocket = new ServerSocket(myport);" does the job, whereas in the HTTPS case the usual steps for setting up the SSLContext are taken, i.e. setting up the key manager and initializing the SSLContext. For now, I use the default TrustManager.
Need For Your Advice
Does somebody know about any memory leak problems in Android's default TLS Provider using OpenSSL? Is there something special I should consider to avoid the leak in the native memory? Any hint is highly appreciated.
Update: I have also tried both TLS providers: OpenSSL and JSSE by explicitly giving the provider name in SSLContext.getInstance( "TLS", providerName ). But that did not change anything.
Here is a code block which demonstrates the problem. Just create a sample app, put it at the bottom of the main activity's onCreate, and build & run the app. Make sure that your WiFi is on and open the page at the following address:
https://<android-device-IP>:9090
Then watch the adb logs; after a while you will see the native memory beginning to increase.
new Thread(new Runnable() {
    public void run() {
        final int PORT = 9090;
        try {
            // SSLContext/keystore setup (inside the try so the checked crypto exceptions are handled)
            SSLContext sslContext = SSLContext.getInstance("TLS"); // JSSE and OpenSSL providers behave the same way
            KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            char[] password = KEYSTORE_PW.toCharArray();
            // we assume the keystore is in the app assets
            InputStream sslKeyStore = getApplicationContext().getResources().openRawResource(R.raw.keystore);
            ks.load(sslKeyStore, null);
            sslKeyStore.close();
            kmf.init(ks, password);
            sslContext.init(kmf.getKeyManagers(), null, new SecureRandom());
            ServerSocketFactory ssf = sslContext.getServerSocketFactory();
            sslContext.getServerSessionContext().setSessionTimeout(5);
            SSLServerSocket serversocket = (SSLServerSocket) ssf.createServerSocket(PORT);
            // alternatively, the plain server socket can be created here
            //ServerSocket serversocket = new ServerSocket(9090);
            serversocket.setReceiveBufferSize(8192);
            int num = 0;
            long lastnatmem = 0, natmemtotalincrease = 0;
            while (true) {
                try {
                    Socket soc = (Socket) serversocket.accept();
                    Log.i(TAG, "client connected (" + num++ + ")");
                    soc.setSoTimeout(2000);
                    try {
                        SSLSession session = ((SSLSocket) soc).getSession();
                        boolean valid = session.isValid();
                        Log.d(TAG, "session valid: " + valid);
                        OutputStream os = null;
                        InputStream is = null;
                        try {
                            os = soc.getOutputStream();
                            // just read the complete request from client
                            is = soc.getInputStream();
                            int c = 0;
                            String itext = "";
                            while ((c = is.read()) > 0) {
                                itext += (char) c;
                                if (itext.contains("\r\n\r\n")) // end of request detection
                                    break;
                            }
                            //Log.e(TAG, " req: " + itext);
                        } catch (SocketTimeoutException e) {
                            // this can occasionally happen (handshake timeout)
                            Log.d(TAG, "socket timeout: " + e.getMessage());
                            if (os != null)
                                os.close();
                            if (is != null)
                                is.close();
                            soc.close();
                            continue;
                        }
                        long natmem = Debug.getNativeHeapSize();
                        long diff = 0;
                        if (lastnatmem != 0) {
                            diff = natmem - lastnatmem;
                            natmemtotalincrease += diff;
                        }
                        lastnatmem = natmem;
                        Log.i(TAG, " answer the request, native memory in use: " + natmem / 1024 + ", diff: " + diff / 1024 + ", total increase: " + natmemtotalincrease / 1024);
                        String html = "<!DOCTYPE html><html><head>";
                        html += "<script type='text/javascript'>";
                        html += "function poll() { request(); window.setTimeout(poll, 1000);}\n";
                        html += "function request() { var xmlHttp = new XMLHttpRequest(); xmlHttp.open( \"GET\", \"/\", false ); xmlHttp.send( null ); return xmlHttp.responseText; }";
                        html += "</script>";
                        html += "</head><body onload=\"poll()\"><p>Refresh the site to see the increasing native memory when using HTTPS: " + natmem + " </p></body></html> ";
                        byte[] buffer = html.getBytes("UTF-8");
                        PrintWriter pw = new PrintWriter(os);
                        pw.print("HTTP/1.0 200 OK \r\n");
                        pw.print("Content-Type: text/html\r\n");
                        pw.print("Content-Length: " + buffer.length + "\r\n");
                        pw.print("\r\n");
                        pw.flush();
                        os.write(buffer);
                        os.flush();
                        os.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    soc.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        } catch (SocketException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (GeneralSecurityException e) {
            e.printStackTrace();
        }
    }
}).start();
-- EDIT --
I have uploaded a sample app project called SSLTest for Eclipse which demonstrates the problem:
http://code.google.com/p/android/issues/detail?id=59536
-- UPDATE --
Good news: today the reported Android issue above was identified and proper submissions were made to fix the memory leak. For more details see the link above.
I imagine this would be a substantial time investment, but I see that Valgrind has been ported to Android. You could try getting that up and running. Of course, if you find there's an internal memory leak, there isn't a lot you can do about it except attempt to get the bug fixed in future Android releases.
As a workaround, you could make your application multi-process and put the https service in a separate process. That way you could restart it periodically, avoiding OOM. You might also have to have a third process just accepting port 443 connections and passing them on to the https worker - in order to avoid tiny outages when the https worker is restarted.
This also sounds like a substantial time investment :) But it would presumably successfully avoid the problem.
--- EDIT: More detail ---
Yes, if you have a main application with its own UI, a worker process for handling SSL and a worker process for accepting the SSL requests (which as you say probably can't be 443), then on top of your normal Activity classes, you would have two Service classes, and the manifest would place them in separate processes.
Handling SSL process: Rather than waiting for an OOM to crash the service, the service could monitor its own Debug.getNativeHeapSize(), and explicitly restart the service when it increased too much. Either that, or restart automatically after every 100 requests or so.
Handling listening socket process: This service would just listen on the TCP port you choose and pass on the raw data to the SSL process. This bit needs some thought, but the most obvious solution is to just have the SSL process listen on a different local port X (or switch between a selection of different ports), and the listening socket process would forward data to port X. The reason for having the listening socket process is to gracefully handle the possibility that X is down - as it might be whenever you restart it.
If your requirements allow for there being occasional mini-outages I would just do the handling SSL process, and skip the listening socket process, it's a relatively simple solution then - not that different to what you'd do normally. It's the listening socket process that adds complexity to the solution...
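For the restart heuristic in the SSL-handling service, a minimal sketch might look like the following; the service name and the threshold are assumptions, not part of the original setup:
// Hedged sketch: restart the HTTPS worker service when native heap use grows too far.
private static final long NATIVE_HEAP_LIMIT = 64L * 1024 * 1024; // assumed threshold

void restartIfLeaking(Context context) {
    if (Debug.getNativeHeapSize() > NATIVE_HEAP_LIMIT) {
        Intent service = new Intent(context, HttpsWorkerService.class); // illustrative name
        context.stopService(service);
        context.startService(service);
    }
}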
Does it help to explicitly close the input stream? In the sample code the input stream only seems to be closed in the case of a SocketTimeoutException.
--EDIT--
You could rename run() to run2(), move the while loop into run(), and remove it from run2(), and see if that makes a difference. This wouldn't be a solution, but it would tell you whether any of the long-lived objects free up memory when their references are dropped.
There is one detail I would recommend changing in your implementation.
Make a list of all your resource variables, for example sockets, streams, writers, etc. Be sure to declare them outside your try statement, and be sure to do cleanup/closing in the finally block. I normally do something like this to be 100% sure:
InputStream in = null;
OutputStream out = null;
try {
    // assign proper values to in and out, and use them as needed
} catch (IOException e) {
    // normal error handling
} finally {
    try {
        if (in != null) in.close();
    } catch (IOException e) {}
    try {
        if (out != null) out.close();
    } catch (IOException e) {}
}
It looks a little confusing, but imagine you use your in stream inside the try block and an exception is thrown: without the finally block, your streams never get closed, and that is a potential source of leaks.
I cannot guarantee that this is the reason, but it should be a good starting point.
About managing your service: I have had a lot of bad experiences with Android services because I was running them on the same thread as the GUI. Under some circumstances, Android will see code that executes for too long and kill your main process to protect the UI. The solution I found was to follow the suggestion in this tutorial (look at point 4):
http://www.vogella.com/articles/AndroidServices/article.html
After this, my service worked as expected and didn't interfere with my GUI process.
Regards

Is there any way to get upload progress correctly with HttpURLConnection

The Android Developers Blog recommends using HttpURLConnection rather than Apache's HttpClient (http://android-developers.blogspot.com/2011/09/androids-http-clients.html). I took that advice and ran into a problem reporting file upload progress.
My code to track progress looks like this:
try {
    out = connection.getOutputStream();
    in = new BufferedInputStream(fin);
    byte[] buffer = new byte[MAX_BUFFER_SIZE];
    int r;
    while ((r = in.read(buffer)) != -1) {
        out.write(buffer, 0, r);
        bytes += r;
        if (null != mListener) {
            long now = System.currentTimeMillis();
            if (now - lastTime >= mListener.getProgressInterval()) {
                lastTime = now;
                if (!mListener.onProgress(bytes, mSize)) {
                    break;
                }
            }
        }
    }
    out.flush();
} finally {
    closeSilently(in);
    closeSilently(out);
}
This code executes very fast regardless of file size, but the file is actually still uploading to the server until I get a response from it. It seems that HttpURLConnection caches all the data in an internal buffer when I call out.write().
So, how can I get the actual file upload progress? It seems HttpClient can do that, but HttpClient is not the preferred option... any ideas?
I found the explanation in the developer documentation (http://developer.android.com/reference/java/net/HttpURLConnection.html):
To upload data to a web server, configure the connection for output using setDoOutput(true).
For best performance, you should call either setFixedLengthStreamingMode(int) when the body length is known in advance, or setChunkedStreamingMode(int) when it is not. Otherwise HttpURLConnection will be forced to buffer the complete request body in memory before it is transmitted, wasting (and possibly exhausting) heap and increasing latency.
Calling setFixedLengthStreamingMode() first fixed my problem.
But as mentioned in this post, there is a bug in Android that makes HttpURLConnection cache all content even if setFixedLengthStreamingMode() has been called, and it was not fixed until after Froyo. So I use HttpClient instead on pre-Gingerbread devices.
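Concretely, the fix amounts to configuring the streaming mode before obtaining the output stream. A minimal sketch, reusing connection and mSize from the snippet above (use the int overload of setFixedLengthStreamingMode on API levels below 19):
connection.setDoOutput(true);
// With fixed-length streaming, the body is written to the socket as the loop runs
// instead of being buffered in memory, so the loop reflects real upload progress
// (modulo the OS socket send buffer).
connection.setFixedLengthStreamingMode(mSize); // total body length, known in advance
out = connection.getOutputStream();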
Use an AsyncTask to upload your file to the server, and create a ProgressDialog:
1) Run your upload code in:
doInBackground() {
    // your code here...
}
2) Update the progress (put this in the while loop, before the write):
publishProgress("" + (int) ((total * 100) / lengthOfFile));
3) Handle the progress update:
protected void onProgressUpdate(String... progress) {
    Progress.setProgress(Integer.parseInt(progress[0]));
}
4) Dismiss the dialog in:
protected void onPostExecute(String file_url) {
    dismissDialog(progress);
}
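Put together, those steps might look like this (a hedged sketch; the upload loop itself is elided and the dialog field is illustrative):
private class UploadTask extends AsyncTask<String, Integer, String> {
    private final ProgressDialog dialog; // created and shown by the caller

    UploadTask(ProgressDialog dialog) {
        this.dialog = dialog;
    }

    @Override
    protected String doInBackground(String... params) {
        // ... open the connection and copy the file in a loop, as in the question ...
        // inside the loop, before each write:
        // publishProgress((int) ((total * 100) / lengthOfFile));
        return null;
    }

    @Override
    protected void onProgressUpdate(Integer... progress) {
        dialog.setProgress(progress[0]);
    }

    @Override
    protected void onPostExecute(String fileUrl) {
        dialog.dismiss();
    }
}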

Android: Too many open files error

I have the following operation which runs every 3 seconds.
Basically, it downloads a file from a server and saves it to a local file every 3 seconds.
The following code does the job for a while.
public class DownloadTask extends AsyncTask<String, Void, String> {
    @Override
    protected String doInBackground(String... params) {
        downloadCommandFile(eventUrl);
        return null;
    }
}
private void downloadCommandFile(String dlUrl) {
    int count;
    try {
        URL url = new URL(dlUrl);
        NetUtils.trustAllHosts();
        HttpsURLConnection con = (HttpsURLConnection) url.openConnection();
        con.setDoInput(true);
        con.setDoOutput(true);
        con.connect();
        int fileSize = con.getContentLength();
        Log.d(TAG, "Download file size = " + fileSize);
        // Note: url.openStream() opens a SECOND connection; the stream belonging to
        // `con` would be con.getInputStream(). The extra connection is never disconnected.
        InputStream is = url.openStream();
        String dir = Environment.getExternalStorageDirectory() + Utils.DL_DIRECTORY;
        File file = new File(dir);
        if (!file.exists()) {
            file.mkdir();
        }
        FileOutputStream fos = new FileOutputStream(file + Utils.DL_FILE);
        byte[] data = new byte[1024];
        long total = 0;
        while ((count = is.read(data)) != -1) {
            total += count;
            fos.write(data, 0, count);
        }
        is.close();
        fos.close();
        con.disconnect(); // close connection
    } catch (Exception e) {
        Log.e(TAG, "DOWNLOAD ERROR = " + e.toString());
    }
}
Everything works fine, but if I leave it running for 5 to 10 minutes I get the following error.
06-04 19:40:40.872: E/NativeCrypto(6320): AppData::create pipe(2) failed: Too many open files
06-04 19:40:40.892: E/NativeCrypto(6320): AppData::create pipe(2) failed: Too many open files
06-04 19:40:40.892: E/EventService(6320): DOWNLOAD ERROR = javax.net.ssl.SSLException: Unable to create application data
I have been researching this for the last 2 days.
There are suggestions that too many connections are open, like this one (https://stackoverflow.com/a/13990490/1503155), but I still cannot figure out the problem.
Any ideas what may cause it?
Thanks in advance.
I think you get this error because you have too many files open at the same time, meaning you have too many async tasks running simultaneously (each async task opens a file), which makes sense given that you start a new one every 3 seconds.
You should try to limit the number of async tasks running at the same time using a thread pool executor, as sketched below.
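For example (a minimal sketch; the pool size of 2 is an arbitrary choice):
// Run downloads on a small fixed pool instead of unbounded task scheduling.
private static final ExecutorService DOWNLOAD_POOL = Executors.newFixedThreadPool(2);

// instead of new DownloadTask().execute(eventUrl):
new DownloadTask().executeOnExecutor(DOWNLOAD_POOL, eventUrl);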
Try using OkHttp instead.
Your issue isn't with too many threads, although that's what's causing it to surface.
As @stdout mentioned in the comments, AsyncTask already runs in a thread pool that is common and shared amongst all AsyncTasks, unless you specify otherwise. The real issue is that your file descriptors are not being closed properly or quickly enough.
I struggled with this for hours/days/weeks, doing everything you should, such as setting small read/connect timeouts and using a finally block to close connections, input streams, output streams, etc., but we never found a working solution. It seemed like HttpsURLConnection was flawed in some way.
So we tried OkHttp as a drop-in replacement for HttpsURLConnection and voila! It worked out of the box.
So, if you're struggling with this and are having a really hard time fixing it, I suggest you try using OkHttp as well.
Here are the basics:
Once you get the Maven dependency added, you can do something like the following to download a file:
OkHttpClient okHttpClient = new OkHttpClient.Builder().build();
OutputStream output = null;
try {
    Request request = new Request.Builder().url(download_url).build();
    Response response = okHttpClient.newCall(request).execute();
    if (!response.isSuccessful()) {
        throw new FileNotFoundException();
    }
    output = new FileOutputStream(output_path);
    output.write(response.body().bytes());
}
finally {
    // Ensure the stream is closed, even if there's an exception.
    if (output != null) {
        output.flush();
        output.close();
    }
}
Switching to OkHttp instantly fixed our leaked file descriptor issue, so it's worth trying if you're stuck, even at the expense of adding another library dependency.
I had to download several hundred files at a time and hit this error.
You may check open descriptors with the following command:
adb shell ps
Find your application PID in the list and use another command:
adb shell run-as YOUR_PACKAGE_NAME ls -l /proc/YOUR_PID/fd
I see about 150 open descriptors after a normal launch, and 700+ while files are downloading. Their number decreases only after some minutes; it looks like Android frees them in the background, not immediately when you close a stream.
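You can also check the count from inside the app; a small sketch (reading the process's own /proc entry, which the app is allowed to do):
// Count this process's currently open file descriptors.
int openFds = new File("/proc/self/fd").list().length;
Log.d(TAG, "open fds: " + openFds);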
The only working solution was to use a custom thread pool to limit the concurrency. Here is the Kotlin code:
private val downloadDispatcher = Executors.newFixedThreadPool(8).asCoroutineDispatcher()

private suspend fun downloadFile(sourceUrl: String, destPath: String, progressBlock: suspend (Long) -> Unit) = withContext(downloadDispatcher) {
    val url = URL(sourceUrl)
    // url.openStream() opens its own connection; a separate openConnection()/connect()
    // here would leak an extra, unused connection.
    url.openStream().use { input ->
        FileOutputStream(File(destPath)).use { output ->
            val buffer = ByteArray(DEFAULT_BUFFER_SIZE)
            var bytesRead = input.read(buffer)
            var bytesCopied = 0L
            while (bytesRead >= 0) {
                if (!coroutineContext.isActive) break
                output.write(buffer, 0, bytesRead)
                bytesCopied += bytesRead
                progressBlock(bytesCopied)
                bytesRead = input.read(buffer)
            }
        }
    }
}

Fastest way to seek (skip) an InputStream over HTTP

I am making a download service of sorts, and it has the ability to resume a previous partial download. I am currently using the skip method, like this:
long skipped = 0;
while (skipped < track.getCacheFile().length()) {
    skipped += is.skip(track.getCacheFile().length() - skipped);
}
I just did a test and it took about 57 seconds to skip 45 MB in an input stream. I am curious how certain native code does this; for instance, MediaPlayer can seek to any part of a remote stream almost instantly. I realize that I do not have access to the same libraries, but can I achieve something similar?
By the way, that test was on WiFi. It is obviously much slower on mobile data networks.
Update: very simple (thanks to the answer below):
if (track.getCacheFile().length() > 0) {
    request.setHeader("Range", "bytes=" + track.getCacheFile().length() + "-");
}
If you are using HTTP to obtain your input stream, you can try the Range header.
Take a look here:
http://www.west-wind.com/Weblog/posts/244.aspx
The problem with the skip method is that you still have to receive the data you are skipping over the network. The best solution is to request only the part you want from the server.
You can do it like this:
private InputStream getRemote(String url, long offset) {
    try {
        URLConnection cn = new URL(url).openConnection();
        cn.setRequestProperty("Range", "bytes=" + offset + "-");
        cn.connect();
        length = cn.getContentLength(); // `length` is a field of the enclosing class
        return cn.getInputStream();
    } catch (MalformedURLException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
Then, when you need to seek, you reconnect via HTTP at the new offset. It works quickly and reliably, much better than using the input stream's skip.
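Usage for resuming then looks roughly like this (a hedged sketch using the names from the question; the server should answer 206 Partial Content for the Range request to have taken effect):
long offset = track.getCacheFile().length();
InputStream is = getRemote(url, offset);
// Append to the existing partial file rather than overwriting it.
FileOutputStream out = new FileOutputStream(track.getCacheFile(), true);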

Categories

Resources