Volley out of memory error, weird allocation attempt - android

Sometimes, seemingly at random, Volley crashes my app on startup. It crashes in the application class, and the user cannot open the app again until they go into settings and clear the app data:
java.lang.OutOfMemoryError
at com.android.volley.toolbox.DiskBasedCache.streamToBytes(DiskBasedCache.java:316)
at com.android.volley.toolbox.DiskBasedCache.readString(DiskBasedCache.java:526)
at com.android.volley.toolbox.DiskBasedCache.readStringStringMap(DiskBasedCache.java:549)
at com.android.volley.toolbox.DiskBasedCache$CacheHeader.readHeader(DiskBasedCache.java:392)
at com.android.volley.toolbox.DiskBasedCache.initialize(DiskBasedCache.java:155)
at com.android.volley.CacheDispatcher.run(CacheDispatcher.java:84)
The "diskbasedbache" tries to allocate over 1 gigabyte of memory, for no obvious reason
how would I make this not happen? It seems to be an issue with Volley, or maybe an issue with a custom disk based cache but I don't immediately see (from the stack trace) how to 'clear' this cache or do a conditional check or handle this exception
Insight appreciated

In streamToBytes(), Volley first allocates a byte array the size of the cache file. Could your cache file be larger than the application's maximum heap size?
private static byte[] streamToBytes(InputStream in, int length) throws IOException {
    byte[] bytes = new byte[length];
    ...
}
public synchronized Entry get(String key) {
    CacheHeader entry = mEntries.get(key);
    File file = getFileForKey(key);
    byte[] data = streamToBytes(..., file.length());
}
If you want to clear the cache, keep a reference to the DiskBasedCache; when it's time to clear it, use ClearCacheRequest and pass that cache instance in:
File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
DiskBasedCache cache = new DiskBasedCache(cacheDir);
RequestQueue queue = new RequestQueue(cache, network);
queue.start();
// clear all volley caches.
queue.add(new ClearCacheRequest(cache, null));
This clears all caches, so use it carefully. Of course, you can do a conditional check instead: iterate over the files in cacheDir and delete only the ones that are too large.
File[] cacheFiles = cacheDir.listFiles();
if (cacheFiles != null) { // listFiles() returns null on I/O error
    for (File cacheFile : cacheFiles) {
        if (cacheFile.isFile() && cacheFile.length() > 10000000) cacheFile.delete();
    }
}
Volley wasn't designed as a big-data cache solution; it's a cache for common requests. Don't store large data in it.
------------- Update at 2014-07-17 -------------
In fact, clearing all caches is a last resort, not a wise first choice. We should stop large requests from using the cache when we know in advance that the response will be large. And if we aren't sure? We can still check the response data size and call setShouldCache(false) to disable caching for that request:
public class TheRequest extends Request<String> {
    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // if the response data is too large, there is still time to disable caching
        if (response.data.length > 10000) setShouldCache(false);
        ...
    }
}

I experienced the same issue.
We knew we didn't have files that were GBs in size on initialization of the cache. It also occurred when reading header strings, which should never be GBs in length.
So it looked like the length was being read incorrectly by readLong.
We had two apps with roughly identical setups, except that one app had two independent processes created on startup: the main application process and a 'SyncAdapter' process following the sync adapter pattern. Only the app with two processes would crash.
These two processes would independently initialize the cache.
However, the DiskBasedCache uses the same physical location for both processes. We eventually concluded that concurrent initializations were resulting in concurrent reads and writes of the same files, leading to bad reads of the size parameter.
I don't have full proof that this is the issue, but I'm planning to work on a test app to verify it.
In the short term, we simply catch the overly large byte allocation in streamToBytes and throw an IOException, so that Volley catches the exception and just deletes the file.
However, it would probably be better to use a separate disk cache for each process.
private static byte[] streamToBytes(InputStream in, int length) throws IOException {
    byte[] bytes;
    // this try-catch is a change added by us to handle a possible multi-process issue when reading cache files
    try {
        bytes = new byte[length];
    } catch (OutOfMemoryError e) {
        throw new IOException("Couldn't allocate " + length + " bytes to stream. May have parsed the stream length incorrectly");
    }
    int count;
    int pos = 0;
    while (pos < length && ((count = in.read(bytes, pos, length - pos)) != -1)) {
        pos += count;
    }
    if (pos != length) {
        throw new IOException("Expected " + length + " bytes, read " + pos + " bytes");
    }
    return bytes;
}
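For the per-process cache suggestion, something along these lines could work; this is a minimal sketch (the directory naming and the process-name lookup are my own, not part of Volley):
private static File getPerProcessCacheDir(Context context) {
    // look up the current process name so each process gets its own directory
    String processName = "main";
    int pid = android.os.Process.myPid();
    ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    List<ActivityManager.RunningAppProcessInfo> processes = am.getRunningAppProcesses();
    if (processes != null) {
        for (ActivityManager.RunningAppProcessInfo info : processes) {
            if (info.pid == pid) {
                processName = info.processName.replace(':', '_'); // ":sync" -> "_sync"
                break;
            }
        }
    }
    return new File(context.getCacheDir(), "volley-" + processName);
}
The RequestQueue in each process would then be built with new DiskBasedCache(getPerProcessCacheDir(context)), so the main and SyncAdapter processes never touch each other's files.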

Once the problem occurs, it seems to recur on every subsequent initialization, pointing to an invalid cached header.
Fortunately, this issue has been fixed in the official Volley repo:
https://github.com/google/volley/issues/12
See related issues in the android-volley mirror:
https://github.com/mcxiaoke/android-volley/issues/141
https://github.com/mcxiaoke/android-volley/issues/61
https://github.com/mcxiaoke/android-volley/issues/37

Related

How to avoid data leakage and thread blocking while writing data on a file on android

I'm working with Android sensors and have a method inside a listener that appends data to a StringBuilder at a very high frequency. After some data is collected, I compress the string with gzip and write it to a file to avoid out-of-memory exceptions. This repeats forever. It's all in the same thread, so as the file gets bigger it starts to block the thread and the data appending. I do create new files when they get too large, but I think I need to implement a threading and locking mechanism for the compression and file writing, to avoid blocking without losing any data. Can anyone help me with that? I'm not sure if I'm wording my question correctly.
// on rotation method of gyroscope
@Override
public void onRotation(long timestamp, float rx, float ry, float rz) {
    try {
        // get string of new lines of the write data for the sensor
        str.append("gyroTest,userTag=testUser,deviceTag="+deviceName+" rx="+rx+",ry="+ry+",rz="+rz+" "+timestamp+"\n");
        if (count >= 2000) {
            b = GZIPCompression.compress(str);
            Log.i(FILE_TAG, "Write gyroscope file");
            FileHandling.testWrite(GYROSCOPE, b);
            str.setLength(0);
            count = 0;
        }
        count++;
    } catch (Exception e) {
        e.printStackTrace();
    }
}
You're on the right track in that you need to separate reading from the sensor, processing the data, and writing it all back to disk.
To pass the data from the sensor reads, you may consider using something like a LinkedBlockingQueue with your Strings.
private LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<String>();

@Override
public void onRotation(long timestamp, float rx, float ry, float rz) {
    queue.add(
        "gyroTest,userTag=testUser,deviceTag="+deviceName+" rx="+rx+",ry="+ry+",rz="+rz+" "+timestamp+"\n"
    );
}
And then in another Thread, looping until canceled, you could drain the queue, process, and write without blocking the reading (main) Thread.
private volatile boolean canceled = false;

private void startProcessingQueue() {
    Runnable processQueueRunnable = new Runnable() {
        @Override
        public void run() {
            while (!canceled) {
                drainQueueAndWriteLog();
                try {
                    Thread.sleep(250);
                } catch (InterruptedException e) {
                    return; // treat interruption as cancellation
                }
            }
        }
    };
    new Thread(processQueueRunnable).start();
}
private void drainQueueAndWriteLog() {
    List<String> dequeuedRotations = new ArrayList<String>();
    queue.drainTo(dequeuedRotations);
    if (0 < dequeuedRotations.size()) {
        // Write each line, or all lines together
    }
}
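Since your original code gzips each batch before writing, the write step could look something like this minimal sketch (it assumes the code lives in a Context such as a Service, and uses the getCurrentLogFileName() helper shown below; appending to a gzip file starts a new gzip member, which standard decompressors read back-to-back):
private void writeLines(List<String> lines) throws IOException {
    File logFile = new File(getFilesDir(), getCurrentLogFileName());
    StringBuilder batch = new StringBuilder();
    for (String line : lines) {
        batch.append(line);
    }
    // append mode: each batch becomes its own gzip member in the same file
    OutputStream out = new GZIPOutputStream(new FileOutputStream(logFile, true));
    out.write(batch.toString().getBytes("UTF-8"));
    out.close(); // finishes the member and flushes to disk
}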
Note: take care to ensure the runnable is canceled when your Activity is paused.
As mentioned in your question, the more data you're writing, the slower it's going to be. Since you're writing data from a sensor, it's inevitably going to grow. For this, you could partition your files into smaller segments, by using something like a date-based naming convention for your log files.
For instance, a log name pattern of yyyyMMddHHmm would create minute-spaced log files, which you could then later aggregate and sort.
private SimpleDateFormat logFileDateFormat = new SimpleDateFormat("yyyyMMddHHmm");

private String getCurrentLogFileName() {
    return String.format(
        "rotations-%s.log",
        logFileDateFormat.format(new Date())
    );
}
Just keep in mind that since you're not writing in the same thread you're reading from, your timestamps may not match up perfectly with your log file names. This shouldn't be a problem, though, as you're already including the timestamps in the persisted data.
Further down the line, if you find you're still not hitting the write throughput your project requires, you may also want to condense the amount of information you're actually storing, by encoding common byte sequences or by reducing each key to its shortest unique form. For example, consider this one-line output:
"gyroTest,userTag=testUser,deviceTag=some-device-name rx=12345,ry=4567,rz=87901872166251542545144\n"
And now reducing the keys:
"gyroTest,u=testUser,d=some-device-name x=12345,y=4567,z=87901872166251542545144\n"
This removes 17 characters from every line that needs to be written, without sacrificing any information.
Also worth noting: you need a space (or better, a comma) before the timestamp in your data line, or you won't be able to cleanly separate rz from it. And deviceName should be wrapped in quotation marks if it can contain spaces, or it will interfere with pulling out rx.

Using mp4parser , how can I handle videos that are taken from Uri and ContentResolver?

Background
We want to let the user choose a video from any app, and then trim a video to be of max of 5 seconds.
The problem
For getting a Uri of the selected video, we got it working fine (solution available here).
As for the trimming itself, we couldn't find any good library with a permissive license, except one called "k4l-video-trimmer". The "FFmpeg" library, for example, is not considered permissive, as it uses GPLv3, which requires apps that use it to be open-sourced as well. Besides, from what I've read, it adds quite a lot to the app size (about 9MB).
Sadly, this library (k4l-video-trimmer) is very old and hasn't been updated in years, so I had to fork it (here) in order to handle it nicely. It uses an open-source library called "mp4parser" to do the trimming.
The problem is, this library seems to handle only files, not a Uri or InputStream, so even the sample can crash when selecting items that aren't reachable as normal files, or whose paths it can't handle. I know that in many cases it's possible to get the path of a file, but in many other cases it's not. I also know it's possible to just copy the file (here), but this isn't a good solution, as the file could be large and take up a lot of space even though it's already accessible.
What I've tried
There are two places where the library uses a file:
In the "K4LVideoTrimmer" file, in the "setVideoURI" function, which just gets the file size to be shown. Here the solution is quite easy, based on Google's documentation:
public void setVideoURI(final Uri videoURI) {
    mSrc = videoURI;
    if (mOriginSizeFile == 0) {
        final Cursor cursor = getContext().getContentResolver().query(videoURI, null, null, null, null);
        if (cursor != null) {
            int sizeIndex = cursor.getColumnIndex(OpenableColumns.SIZE);
            if (cursor.moveToFirst()) { // guard against an empty cursor
                mOriginSizeFile = cursor.getLong(sizeIndex);
            }
            cursor.close();
            mTextSize.setText(Formatter.formatShortFileSize(getContext(), mOriginSizeFile));
        }
    }
    ...
In "TrimVideoUtils" file, in "startTrim" which calls "genVideoUsingMp4Parser" function. There, it calls the "mp4parser" library using :
Movie movie = MovieCreator.build(new FileDataSourceViaHeapImpl(src.getAbsolutePath()));
It says that they use FileDataSourceViaHeapImpl (from "mp4parser" library) to avoid OOM on Android, so I decided to stay with it.
The thing is, there are 4 constructors for it, and all expect some variation of a file: File, file path, FileChannel, or FileChannel + file name.
The questions
Is there a way to overcome this?
Maybe implement a FileChannel that simulates a real file, using ContentResolver and Uri? I guess it might be possible, even if it means re-opening the InputStream when needed...
In order to see what I got working, you can clone the project here. Just know that it doesn't do any trimming, as the code for it in the "K4LVideoTrimmer" file is commented out:
//TODO handle trimming using Uri
//TrimVideoUtils.startTrim(file, getDestinationPath(), mStartPosition, mEndPosition, mOnTrimVideoListener);
Is there perhaps a better alternative to this trimming library that is also permissive (Apache2/MIT licenses, for example)? One that doesn't have this issue? Or maybe even something in the Android framework itself? I think the MediaMuxer class could help (as written here), but I think it might need API 26, while we need to handle API 21 and above...
EDIT:
I thought I'd found a solution by using a different approach to the trimming itself, and wrote about it here, but sadly it can't handle some input videos, while the mp4parser library can handle them.
Please let me know if it's possible to modify mp4parser to handle such input videos even when they come from a Uri and not a File (without the workaround of just copying to a video file).
First of all, a caveat: I am not familiar with the mp4parser library, but your question looked interesting, so I took a look.
I think it's worth looking at one of the classes the code comments say is "mainly for testing": InMemRandomAccessSourceImpl. To create a Movie from any URI, the code would be as follows:
try {
    InputStream inputStream = getContentResolver().openInputStream(uri);
    int bytesAvailable = inputStream.available();
    int bufferSize = Math.min(bytesAvailable, MAX_BUFFER_SIZE);
    final byte[] buffer = new byte[bufferSize];
    int read;
    int total = 0;
    // read at an increasing offset so earlier chunks aren't overwritten
    while (total < bufferSize && (read = inputStream.read(buffer, total, bufferSize - total)) != -1) {
        total += read;
    }
    if (total < bytesAvailable) {
        Log.e(TAG, "Read from input stream failed");
        return;
    }
    // or try inputStream.readAllBytes() if using Java 9
    inputStream.close();
    ByteBuffer bb = ByteBuffer.wrap(buffer);
    Movie m2 = MovieCreator.build(new ByteBufferByteChannel(bb),
            new InMemRandomAccessSourceImpl(bb), "inmem");
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
But I would say there looks to be somewhat of a conflict between what you want to achieve and the approach the parser takes. It depends on local files to avoid large memory overheads, and random access to the bytes is only possible when the entire set of data is available, which differs from a streaming approach.
It will require buffering at least the amount of data required for your clip in one go before the parser is given the buffer. That might be workable if you are looking to grab short sections and the buffering is not too cumbersome. You may be subject to IO exceptions and the like if the read from the InputStream has issues, especially with remote content, whereas you really wouldn't expect that with a file on a modern system.
There is also MemoryFile to consider which provides an ashmem backed file-like object. I think somehow that could be worked in.
Next, a snippet shows how to open a MediaStore Uri with IsoFile from Mp4Parser, so you can see how to get a FileChannel from a Uri.
public void test(@NonNull final Context context, @NonNull final Uri uri) throws IOException
{
    ParcelFileDescriptor fileDescriptor = null;
    try
    {
        final ContentResolver resolver = context.getContentResolver();
        fileDescriptor = resolver.openFileDescriptor(uri, "rw");
        if (fileDescriptor == null)
        {
            throw new IOException("Failed to open Uri.");
        }
        final FileDescriptor fd = fileDescriptor.getFileDescriptor();
        final FileInputStream inputStream = new FileInputStream(fd);
        final FileChannel fileChannel = inputStream.getChannel();
        final DataSource channel = new FileDataSourceImpl(fileChannel);
        final IsoFile isoFile = new IsoFile(channel);
        ... do what you need ....
    }
    finally
    {
        if (fileDescriptor != null)
        {
            fileDescriptor.close();
        }
    }
}

Volley - download directly to file (no in memory byte array)

I'm using Volley as my network stack in a project I'm working on in Android. Part of my requirements is to download potentially very large files and save them on the file system.
I've been looking at the implementation of Volley, and it seems that the only way Volley works is that it downloads an entire file into a potentially massive byte array and then defers handling of this byte array to some callback handler.
Since these files can be very large, I'm worried about an out of memory error during the download process.
Is there a way to tell volley to process all bytes from an http input stream directly into a file output stream? Or would this require me to implement my own network object?
I couldn't find any material about this online, so any suggestions would be appreciated.
Okay, so I've come up with a solution which involves editing Volley itself. Here's a walkthrough:
NetworkResponse can't hold a byte array anymore; it needs to hold an input stream. Doing this immediately breaks all request implementations, since they rely on NetworkResponse holding a public byte array member. The least invasive way I found to deal with this is to add a "toByteArray" method to NetworkResponse and then do a little refactoring, making every reference to the byte array use this method instead of the removed member. This means that the conversion of the input stream to a byte array happens during response parsing. I'm not entirely sure what the long-term effects of this are, so some unit testing / community input would be a huge help here. Here's the code:
public class NetworkResponse {
    /**
     * Creates a new network response.
     * @param statusCode the HTTP status code
     * @param data Response body
     * @param headers Headers returned with this response, or null for none
     * @param notModified True if the server returned a 304 and the data was already in cache
     */
    public NetworkResponse(int statusCode, InputStream data, Map<String, String> headers,
            boolean notModified, ByteArrayPool byteArrayPool, int contentLength) {
        this.statusCode = statusCode;
        this.inputStream = data;
        this.headers = headers;
        this.notModified = notModified;
        this.byteArrayPool = byteArrayPool;
        this.contentLength = contentLength;
    }

    // the legacy byte[] convenience constructors wrap the data in a stream;
    // note the null pool means toByteArray() must not be used on these responses
    public NetworkResponse(byte[] data) {
        this(HttpStatus.SC_OK, new ByteArrayInputStream(data),
                Collections.<String, String>emptyMap(), false, null, data.length);
    }

    public NetworkResponse(byte[] data, Map<String, String> headers) {
        this(HttpStatus.SC_OK, new ByteArrayInputStream(data), headers, false, null, data.length);
    }

    /** The HTTP status code. */
    public final int statusCode;

    /** Raw data from this response. */
    public final InputStream inputStream;

    /** Response headers. */
    public final Map<String, String> headers;

    /** True if the server returned a 304 (Not Modified). */
    public final boolean notModified;

    public final ByteArrayPool byteArrayPool;

    public final int contentLength;

    // method taken from BasicNetwork with a few small alterations.
    public byte[] toByteArray() throws IOException, ServerError {
        PoolingByteArrayOutputStream bytes =
                new PoolingByteArrayOutputStream(byteArrayPool, contentLength);
        byte[] buffer = null;
        try {
            if (inputStream == null) {
                throw new ServerError();
            }
            buffer = byteArrayPool.getBuf(1024);
            int count;
            while ((count = inputStream.read(buffer)) != -1) {
                bytes.write(buffer, 0, count);
            }
            return bytes.toByteArray();
        } finally {
            try {
                // Close the InputStream and release the resources by "consuming the content".
                // Not sure what to do about the entity "consumeContent()"... ideas?
                inputStream.close();
            } catch (IOException e) {
                // This can happen if there was an exception above that left the entity in
                // an invalid state.
                VolleyLog.v("Error occurred when calling consumeContent");
            }
            byteArrayPool.returnBuf(buffer);
            bytes.close();
        }
    }
}
Then to prepare the NetworkResponse, we need to edit the BasicNetwork to create the NetworkResponse correctly (inside BasicNetwork.performRequest):
int contentLength = 0;
if (httpResponse.getEntity() != null) {
    responseContents = httpResponse.getEntity().getContent(); // responseContents is now an InputStream
    contentLength = (int) httpResponse.getEntity().getContentLength();
}
...
return new NetworkResponse(statusCode, responseContents, responseHeaders, false, mPool, contentLength);
That's it. Once the data inside NetworkResponse is an input stream, I can build my own requests that parse it directly into a file output stream, holding only a small in-memory buffer.
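As an illustration of that last point, a file-backed request built on the modified classes might look like this sketch (FileDownloadRequest and its details are mine, not part of Volley):
public class FileDownloadRequest extends Request<File> {
    private final File mTarget;
    private final Response.Listener<File> mListener;

    public FileDownloadRequest(String url, File target,
            Response.Listener<File> listener, Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mTarget = target;
        mListener = listener;
        setShouldCache(false); // large payloads shouldn't hit the disk cache either
    }

    @Override
    protected Response<File> parseNetworkResponse(NetworkResponse response) {
        try {
            // response.inputStream is the field introduced by the change above
            OutputStream out = new FileOutputStream(mTarget);
            byte[] buffer = new byte[8192];
            int count;
            while ((count = response.inputStream.read(buffer)) != -1) {
                out.write(buffer, 0, count);
            }
            out.close();
            return Response.success(mTarget, null);
        } catch (IOException e) {
            return Response.error(new ParseError(e));
        }
    }

    @Override
    protected void deliverResponse(File file) {
        mListener.onResponse(file);
    }
}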
From a few initial tests, this seems to work without harming other components. However, a change like this probably requires more intensive testing and peer review, so I'm going to leave this answer unmarked as correct until more people weigh in, or until I see it's robust enough to rely on.
Please feel free to comment on this answer and/or post answers yourselves. This feels like a serious flaw in Volley's design, and if you see flaws in this approach, or can think of a better design, I think it would benefit everyone.

Android's SSLServerSocket causes increasing native memory in the App, OOM

Background
I am developing an Android app that provides a simple HTTP/HTTPS server. When HTTPS serving is configured, every connection increases native memory usage, which eventually leads to an app crash (OOM), while the HTTP configuration keeps native memory usage relatively constant. The app's Java VM heap stays relatively constant in both configurations.
The app serves an HTML page containing JavaScript that polls periodically (one JSON poll every second), so opening the app's page with the HTTPS configuration and keeping it open for several hours leads to the mentioned out-of-memory crash from rising native memory usage. I have tested many SSLServerSocket and SSLContext configurations found on the internet, with no luck.
I observe the same problem on various Android devices and various Android versions beginning with 2.2 up to 4.3.
The code for handling client requests is identical for the HTTP and HTTPS configurations; the only difference between the two is the setup of the server socket. While in the HTTP case a single line similar to "ServerSocket serversocket = new ServerSocket(myport);" does the job, in the HTTPS case the usual steps for setting up the SSLContext are taken, i.e. setting up the key manager and initializing the SSLContext. For now, I use the default TrustManager.
Need For Your Advice
Does somebody know about memory leak problems in Android's default TLS provider using OpenSSL? Is there something special I should consider to avoid the leak in native memory? Any hint is highly appreciated.
Update: I have also tried both TLS providers, OpenSSL and JSSE, by explicitly giving the provider name in SSLContext.getInstance("TLS", providerName), but that did not change anything.
Here is a code block that demonstrates the problem. Just create a sample app, put this at the bottom of the main activity's onCreate, then build and run the app. Make sure your Wifi is on and open the HTML page at the following address:
https://<android device IP>:9090
Then watch the adb logs; after a while you will see the native memory beginning to increase.
new Thread(new Runnable() {
    public void run() {
        final int PORT = 9090;
        SSLContext sslContext;
        ServerSocketFactory ssf;
        try {
            sslContext = SSLContext.getInstance("TLS"); // JSSE and OpenSSL providers behave the same way
            KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            char[] password = KEYSTORE_PW.toCharArray();
            // we assume the keystore is in the app assets
            InputStream sslKeyStore = getApplicationContext().getResources().openRawResource(R.raw.keystore);
            ks.load(sslKeyStore, null);
            sslKeyStore.close();
            kmf.init(ks, password);
            sslContext.init(kmf.getKeyManagers(), null, new SecureRandom());
            ssf = sslContext.getServerSocketFactory();
            sslContext.getServerSessionContext().setSessionTimeout(5);
        } catch (GeneralSecurityException e) {
            // the checked setup exceptions need handling for this snippet to compile
            e.printStackTrace();
            return;
        } catch (IOException e) {
            e.printStackTrace();
            return;
        }
        try {
            SSLServerSocket serversocket = (SSLServerSocket) ssf.createServerSocket(PORT);
            // alternatively, the plain server socket can be created here
            //ServerSocket serversocket = new ServerSocket(9090);
            serversocket.setReceiveBufferSize(8192);
            int num = 0;
            long lastnatmem = 0, natmemtotalincrease = 0;
            while (true) {
                try {
                    Socket soc = serversocket.accept();
                    Log.i(TAG, "client connected (" + num++ + ")");
                    soc.setSoTimeout(2000);
                    try {
                        SSLSession session = ((SSLSocket) soc).getSession();
                        boolean valid = session.isValid();
                        Log.d(TAG, "session valid: " + valid);
                        OutputStream os = null;
                        InputStream is = null;
                        try {
                            os = soc.getOutputStream();
                            // just read the complete request from client
                            is = soc.getInputStream();
                            int c = 0;
                            String itext = "";
                            while ((c = is.read()) > 0) {
                                itext += (char) c;
                                if (itext.contains("\r\n\r\n")) // end of request detection
                                    break;
                            }
                            //Log.e(TAG, " req: " + itext);
                        } catch (SocketTimeoutException e) {
                            // this can occasionally happen (handshake timeout)
                            Log.d(TAG, "socket timeout: " + e.getMessage());
                            if (os != null)
                                os.close();
                            if (is != null)
                                is.close();
                            soc.close();
                            continue;
                        }
                        long natmem = Debug.getNativeHeapSize();
                        long diff = 0;
                        if (lastnatmem != 0) {
                            diff = natmem - lastnatmem;
                            natmemtotalincrease += diff;
                        }
                        lastnatmem = natmem;
                        Log.i(TAG, " answer the request, native memory in use: " + natmem / 1024 + ", diff: " + diff / 1024 + ", total increase: " + natmemtotalincrease / 1024);
                        String html = "<!DOCTYPE html><html><head>";
                        html += "<script type='text/javascript'>";
                        html += "function poll() { request(); window.setTimeout(poll, 1000);}\n";
                        html += "function request() { var xmlHttp = new XMLHttpRequest(); xmlHttp.open( \"GET\", \"/\", false ); xmlHttp.send( null ); return xmlHttp.responseText; }";
                        html += "</script>";
                        html += "</head><body onload=\"poll()\"><p>Refresh the site to see the increasing native memory when using HTTPS: " + natmem + " </p></body></html> ";
                        byte[] buffer = html.getBytes("UTF-8");
                        PrintWriter pw = new PrintWriter(os);
                        pw.print("HTTP/1.0 200 OK \r\n");
                        pw.print("Content-Type: text/html\r\n");
                        pw.print("Content-Length: " + buffer.length + "\r\n");
                        pw.print("\r\n");
                        pw.flush();
                        os.write(buffer);
                        os.flush();
                        os.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    soc.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        } catch (SocketException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}).start();
-- EDIT --
I have uploaded a sample Eclipse app project called SSLTest which demonstrates the problem:
http://code.google.com/p/android/issues/detail?id=59536
-- UPDATE --
Good news: today the reported Android issue above was identified and proper submissions were made to fix the memory leak. For more details see the link above.
I imagine this would be a substantial time investment, but I see that Valgrind has been ported to Android. You could try getting that up and running. Of course, if you find there's an internal memory leak, there isn't a lot you can do about it except attempt to get the bug fixed in future Android releases.
As a workaround, you could make your application multi-process and put the https service in a separate process. That way you could restart it periodically, avoiding OOM. You might also have to have a third process just accepting port 443 connections and passing them on to the https worker - in order to avoid tiny outages when the https worker is restarted.
This also sounds like a substantial time investment :) But it would presumably successfully avoid the problem.
--- EDIT: More detail ---
Yes, if you have a main application with its own UI, a worker process for handling SSL and a worker process for accepting the SSL requests (which as you say probably can't be 443), then on top of your normal Activity classes, you would have two Service classes, and the manifest would place them in separate processes.
Handling SSL process: Rather than waiting for an OOM to crash the service, the service could monitor its own Debug.getNativeHeapSize() and explicitly restart itself when it increases too much. Either that, or restart automatically after every 100 requests or so.
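A sketch of that monitoring idea (the threshold, service name, and restart mechanics are illustrative, not a tested recipe):
private static final long NATIVE_HEAP_LIMIT = 64L * 1024 * 1024; // pick per device budget

private void restartIfLeakingTooMuch() {
    if (Debug.getNativeHeapAllocatedSize() > NATIVE_HEAP_LIMIT) {
        // schedule our own relaunch, then kill the process;
        // only process death actually returns the leaked native memory to the OS
        AlarmManager am = (AlarmManager) getSystemService(ALARM_SERVICE);
        PendingIntent relaunch = PendingIntent.getService(this, 0,
                new Intent(this, SslWorkerService.class), PendingIntent.FLAG_ONE_SHOT);
        am.set(AlarmManager.RTC, System.currentTimeMillis() + 1000, relaunch);
        stopSelf();
        android.os.Process.killProcess(android.os.Process.myPid());
    }
}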
Handling listening socket process: This service would just listen on the TCP port you choose and pass on the raw data to the SSL process. This bit needs some thought, but the most obvious solution is to just have the SSL process listen on a different local port X (or switch between a selection of different ports), and the listening socket process would forward data to port X. The reason for having the listening socket process is to gracefully handle the possibility that X is down - as it might be whenever you restart it.
If your requirements allow for occasional mini-outages, I would implement just the SSL-handling process and skip the listening socket process; that is a relatively simple solution, not that different from what you'd do normally. It's the listening socket process that adds complexity to the solution...
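For completeness, the listening socket process would essentially be a dumb TCP relay. A sketch, with PUBLIC_PORT, SSL_WORKER_PORT, and the stop flag left as illustrative placeholders:
private void relayLoop() throws IOException {
    ServerSocket listener = new ServerSocket(PUBLIC_PORT);
    while (!stopped) {
        Socket client = listener.accept();
        Socket worker = new Socket("127.0.0.1", SSL_WORKER_PORT);
        // one pump thread per direction
        pump(client.getInputStream(), worker.getOutputStream());
        pump(worker.getInputStream(), client.getOutputStream());
    }
}

private static void pump(final InputStream in, final OutputStream out) {
    new Thread(new Runnable() {
        public void run() {
            byte[] buf = new byte[8192];
            int n;
            try {
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (IOException ignored) {
                // one side dropped; fall through and close both ends
            } finally {
                try { in.close(); } catch (IOException e) {}
                try { out.close(); } catch (IOException e) {}
            }
        }
    }).start();
}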
Does it help to explicitly close the input stream? In the sample code, the input stream only seems to be closed in the case of a SocketTimeoutException.
--EDIT--
You could rename run() to run2(), move the while loop into run(), and remove it from run2(), then see if that makes a difference. This wouldn't be a solution, but it would tell you whether any of the long-lived objects free up memory when their references are dropped.
There is one detail I would recommend changing in your implementation.
Make a list of all your resource variables, for example Sockets, Streams, Writers, etc. Be sure to declare them outside your try statement and to do the cleanup / closing in the finally block. I normally do something like this to be 100% sure:
InputStream in = null;
OutputStream out = null;
try {
    // assign a proper value to in and out, and use them as needed.
} catch (IOException e) {
    // normal error handling
} finally {
    // null checks keep the finally block from throwing if assignment failed
    try {
        if (in != null) in.close();
    } catch (IOException e) {}
    try {
        if (out != null) out.close();
    } catch (IOException e) {}
}
It looks a little confusing, but imagine you use your in stream inside the try block and get some exception: your streams would never be closed, and that is a potential source of memory leaks.
I cannot guarantee that this is the reason, but it should be a good starting point.
About managing your service: I had a lot of bad experiences with Android services because I was running them on the same thread as the GUI. Under some circumstances, Android will see code that executes for too long and kill your main process in order to protect from crashes. The solution I found was to follow the suggestion from this tutorial (look at point 4):
http://www.vogella.com/articles/AndroidServices/article.html
After this, my service just worked as expected and didn't interfere with my GUI process.
Regards

Obfuscate JPG by bit-toggle - reading performance on Android

Abstract:
- reading images from a file
- with toggled bits, to make them unusable for preview tools
- can't use encryption, too much computing power needed
- can I either optimize the code below, or is there a better approach?
Longer description:
I am trying to improve my code; maybe you have some ideas or improvements for the following situation. Please be aware that I'm neither trying to beat the CIA, nor do I care much if somebody "breaks" the encryption.
The background is simple: my app loads a bunch of images from a server into a folder on the SD card. I do NOT want them to be plain JPG files, because in that case the media indexer would list them in the library, and a user could simply copy the whole folder to his hard drive.
The obvious way to go is encryption. But full-blown AES or other encryption doesn't make sense, for two reasons: I would have to store the key in the app, so anyone could extract it with some effort anyway, and the price of decrypting images on the fly is way too high (we are talking about e.g. a gallery with 30 pictures of about 200kB each).
So I decided to toggle some bits in the image. This makes the format unreadable for image tools (or previews) but is easily undone when reading the images. For "encrypting" I use a C# tool; the "decrypt" side looks like this:
public class CustomInputStream extends InputStream {
    private String _fileName;
    private BufferedInputStream _stream;

    public CustomInputStream(String fileName) {
        _fileName = fileName;
    }

    public void Open() throws IOException {
        int len = (int) new File(_fileName).length();
        _stream = new BufferedInputStream(new FileInputStream(_fileName), len);
    }

    @Override
    public int read() throws IOException {
        int value = _stream.read();
        // EOF must stay -1; only toggle real data bytes
        return value == -1 ? -1 : (value ^ (1 << 7));
    }

    @Override
    public void close() throws IOException {
        _stream.close();
    }
}
I tried overriding the other methods too (the read with more than one byte), but this kills BitmapFactory - I'm not sure why; maybe I did something wrong. Here is the code for the image bitmap creation:
Bitmap bitmap = null;
try {
    InputStream i = CryptoProvider.GetInstance().GetDecoderStream(path);
    bitmap = BitmapFactory.decodeStream(i);
    i.close();
} catch (Exception e1) {
    _logger.Error("Can't load image " + path + " ERROR " + e1);
}
if (bitmap == null) {
    _logger.Error("Image is NULL for path " + path);
}
return bitmap;
Do you have any feedback on the chosen approach? Any way to optimize it, or a completely different approach for Android devices?
You could try XORing the bytestream with the output of a fast PRNG. Just use a different seed for each file and you're done.
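A sketch of that idea using java.util.Random as the fast PRNG (the class and seed handling are illustrative). Note the bulk read: only the bytes actually read are XORed, and the count is returned unchanged; getting that wrong desynchronizes the keystream, which may well be what broke BitmapFactory in the question's multi-byte attempt:
public class XorInputStream extends FilterInputStream {
    private final Random prng;

    public XorInputStream(InputStream in, long seed) {
        super(in);
        this.prng = new Random(seed); // same seed obfuscates and de-obfuscates
    }

    @Override
    public int read() throws IOException {
        int b = in.read();
        return b == -1 ? -1 : (b ^ (prng.nextInt() & 0xFF));
    }

    @Override
    public int read(byte[] buffer, int offset, int length) throws IOException {
        int count = in.read(buffer, offset, length);
        for (int i = 0; i < count; i++) { // loop body is skipped when count is -1
            buffer[offset + i] ^= (byte) (prng.nextInt() & 0xFF);
        }
        return count;
    }

    @Override
    public long skip(long byteCount) throws IOException {
        // must consume PRNG output to stay aligned, so read-and-discard instead of seeking
        byte[] scratch = new byte[4096];
        long skipped = 0;
        while (skipped < byteCount) {
            int n = read(scratch, 0, (int) Math.min(scratch.length, byteCount - skipped));
            if (n == -1) break;
            skipped += n;
        }
        return skipped;
    }

    @Override
    public boolean markSupported() {
        return false; // rewinding would desynchronize the keystream
    }
}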
note: As already noted in the question, such methods are trivial to bypass.
