I have a problem with SHA-1 performance on Android. In C# the hash is calculated in about 3 s; the same calculation on Android takes about 75 s. I suspect the problem is in the file-reading operation, but I'm not sure how to improve performance.
Here's my hash generation method.
private static String getSHA1FromFileContent(String filename)
{
    try
    {
        MessageDigest digest = MessageDigest.getInstance("SHA-1");
        byte[] buffer = new byte[65536]; // originally created once at start
        InputStream fis = new FileInputStream(filename);
        int n = 0;
        while (n != -1)
        {
            n = fis.read(buffer);
            if (n > 0)
            {
                digest.update(buffer, 0, n);
            }
        }
        byte[] digestResult = digest.digest();
        return asHex(digestResult);
    }
    catch (Exception e)
    {
        return null;
    }
}
Any ideas how I can improve performance?
I tested it on my SGS (i9000) and it took 0.806s to generate the hash for a 10.1MB file.
The only difference is that in my code I am using a BufferedInputStream in addition to the FileInputStream, and the hex conversion library found at:
http://apachejava.blogspot.com/2011/02/hexconversions-convert-string-byte-byte.html
Also, I would suggest that you close your file input stream in a finally clause, as in the sketch below.
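A minimal sketch combining both suggestions (a BufferedInputStream wrapping the FileInputStream, plus a finally close); asHex stands in for whatever hex-conversion helper you use:

private static String getSHA1FromFileContent(String filename)
{
    try
    {
        MessageDigest digest = MessageDigest.getInstance("SHA-1");
        byte[] buffer = new byte[65536];
        // Buffer the reads so they hit memory instead of issuing
        // many small file-system calls.
        InputStream fis = new BufferedInputStream(new FileInputStream(filename), 65536);
        try
        {
            int n;
            while ((n = fis.read(buffer)) != -1)
            {
                digest.update(buffer, 0, n);
            }
        }
        finally
        {
            fis.close(); // always release the file handle
        }
        return asHex(digest.digest()); // asHex: your hex-conversion helper
    }
    catch (Exception e)
    {
        return null;
    }
}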
If I were you I would use JNI like this guy did and get the speedup that way. This is exactly what the C interface was made for.
I have 10,000 to 12,000 image files, taking up to 800 MB, in external storage.
I am using a loop that takes each file path and generates an MD5 of it, but because of the huge number of files being read, this takes a lot of time.
This is the algorithm for generating the MD5 of a file.
public static String getMd5OfFile(String filePath) {
    String returnVal = "";
    InputStream input = null;
    try {
        input = new FileInputStream(filePath);
        byte[] buffer = new byte[2048];
        MessageDigest md5Hash = MessageDigest.getInstance("MD5");
        int numRead = 0;
        while (numRead != -1) {
            numRead = input.read(buffer);
            if (numRead > 0) {
                md5Hash.update(buffer, 0, numRead);
            }
        }
        byte[] md5Bytes = md5Hash.digest();
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < md5Bytes.length; i++) {
            hex.append(Integer.toString((md5Bytes[i] & 0xff) + 0x100, 16).substring(1));
        }
        returnVal = hex.toString();
    } catch (Throwable t) {
        t.printStackTrace();
    } finally {
        if (input != null) {
            try { input.close(); } catch (IOException ignored) {}
        }
    }
    return returnVal.toUpperCase();
}
So the question is: can I increase the buffer size to make the operation faster, and by how much can I increase it without breaking the operation or corrupting the generated MD5?
And will wrapping the input stream in a BufferedInputStream make it faster?
As with any optimisation problem, you should measure your performance to learn whether the changes you make have any impact.
2 KB is certainly a small buffer size and a larger one could do better. But I/O stacks have buffers all the way down, so it might have negligible impact. Try it and measure for yourself.
Another optimisation worth trying: reading a file is an I/O-bound operation while computing MD5 is CPU-bound. Have one thread read file content and another thread update the MD5 state. Depending on the number of CPU cores on your device, you could also hash multiple files in parallel for further gains, as in the sketch below.
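A minimal sketch of the files-in-parallel idea, assuming a getMd5OfFile helper like the one above; the thread count here is only a starting point you would want to measure:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public static Map<String, String> md5All(List<String> paths) throws InterruptedException {
    int threads = Runtime.getRuntime().availableProcessors();
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    final Map<String, String> results = new ConcurrentHashMap<String, String>();
    for (final String path : paths) {
        pool.execute(new Runnable() {
            public void run() {
                // One file per task: several files are hashed in flight across cores.
                results.put(path, getMd5OfFile(path));
            }
        });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
    return results;
}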
I have an application in which I download images from a server. I would like to encrypt these images, but I don't know the best way to do it without losing a lot of performance. My application needs to access a lot of images at the same time, and I need them to be ciphered so that the user can't get at them easily.
Thank you very much in advance :)
You might try rolling your own crypto. The issue, of course, will be how to handle the key that you want to use, to make sure it is not compromised. Here is an example of the use of DES to encrypt a file. (You can extend it to handle decryption.)
public class Obscure {
    // Note: DES requires an 8-byte key, so the key string passed to the
    // constructor must provide exactly 8 bytes of key material.
    private byte[] k = "Now is the time for all good men to come to the aid of their country."
            .getBytes();

    public Obscure(String keyString) {
        k = keyString.getBytes();
    }

    public boolean encryptFile(String source, String target)
            throws NoSuchAlgorithmException, NoSuchPaddingException,
            InvalidKeyException, IOException {
        byte[] buffer = new byte[8192];
        FileInputStream fis = new FileInputStream(source);
        FileOutputStream fos = new FileOutputStream(target);
        SecretKeySpec key = new SecretKeySpec(k, "DES");
        Cipher encoding = Cipher.getInstance("DES");
        encoding.init(Cipher.ENCRYPT_MODE, key);
        CipherOutputStream cos = new CipherOutputStream(fos, encoding);
        int numBytes;
        while ((numBytes = fis.read(buffer)) != -1) {
            cos.write(buffer, 0, numBytes);
        }
        fis.close();
        // Close the CipherOutputStream first: its close() flushes the final
        // padded block and closes the underlying FileOutputStream. Closing
        // fos before cos would truncate the ciphertext.
        cos.close();
        return true;
    }
}
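The decryption side is symmetric. A minimal sketch, as an extension of the same Obscure class, reading through a CipherInputStream in DECRYPT_MODE:

public boolean decryptFile(String source, String target)
        throws NoSuchAlgorithmException, NoSuchPaddingException,
        InvalidKeyException, IOException {
    byte[] buffer = new byte[8192];
    SecretKeySpec key = new SecretKeySpec(k, "DES");
    Cipher decoding = Cipher.getInstance("DES");
    decoding.init(Cipher.DECRYPT_MODE, key);
    // The stream decrypts transparently as it is read, which also works
    // for feeding images straight to a decoder without a temporary file.
    CipherInputStream cis = new CipherInputStream(new FileInputStream(source), decoding);
    FileOutputStream fos = new FileOutputStream(target);
    try {
        int numBytes;
        while ((numBytes = cis.read(buffer)) != -1) {
            fos.write(buffer, 0, numBytes);
        }
    } finally {
        cis.close();
        fos.close();
    }
    return true;
}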
I must be doing something really wrong. Running the following code on Android produces a truncated file (_items_) without any exceptions or problems in the log. Running the same code with OpenJDK 7 decompresses the file correctly.
try {
    final InputStream fis = new GZIPInputStream(new FileInputStream("/storage/sdcard/_items"));
    try {
        final FileOutputStream fos = new FileOutputStream("/storage/sdcard/_items_");
        try {
            final byte[] buffer = new byte[1024];
            int n;
            while ((n = fis.read(buffer)) != -1) {
                fos.write(buffer, 0, n);
            }
        } finally {
            fos.close();
        }
    } finally {
        fis.close();
    }
} catch (final IOException e) {
    e.printStackTrace();
    throw new RuntimeException(e);
}
I've tried this with the Android emulator (API 18) and on a Desire HD (Android 2.3.5), with the same buggy result.
Input file (_items): https://drive.google.com/file/d/0B6M72P2gzYmwaHg4SzRTYnRMOVk/edit?usp=sharing
Android truncated output file (_items_): https://drive.google.com/file/d/0B6M72P2gzYmwMUZIZ2FEaHNZUFk/edit?usp=sharing
The AOSP bug has been updated with analysis by an engineer from the Dalvik team.
Summary: the gzip stream has multiple members concatenated together, and the decompressor is expected to process them all, but Dalvik's implementation is stopping after the first.
Unless there's a way to convince the data source to compress its streams differently, you will need to find a replacement for GZIPInputStream.
A workaround is to use the GZIPInputStream from JZlib (currently only in the concatenated_gzip_streams branch). See https://github.com/ymnk/jzlib/issues/12 for more details.
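Assuming JZlib mirrors the java.util.zip API (it ships the class as com.jcraft.jzlib.GZIPInputStream), the fix should be a one-line swap in the code above; a sketch, not verified against that branch:

// JZlib's stream (on the branch linked above) keeps inflating across
// concatenated gzip members instead of stopping after the first one.
final InputStream fis = new com.jcraft.jzlib.GZIPInputStream(
        new FileInputStream("/storage/sdcard/_items"));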
All I need is to convert byte[] to String, do something with that String, and convert it back to a byte[] array. But in this test I just convert byte[] to String and back to byte[], and the result is different.
I convert byte[] to String using this:
byte[] byteEntity = EntityUtils.toByteArray(entity);
String s = new String(byteEntity, "UTF-8");
Then I tried:
byte[] byteTest = s.getBytes("UTF-8");
Then I compared them:
if (byteEntity.equals(byteTest)) Log.i("test", "equal");
else Log.i("test", "diff");
So the result is different.
I searched Stack Overflow about this, but nothing matched my case. The point is that my data is a .png picture, so the converted String is unreadable. Thanks in advance.
Solved
Using something like this.
byte[] mByteEntity = EntityUtils.toByteArray(entity);
byte[] mByteDecrypted = clip_xor(mByteEntity, "your_key".getBytes());
ByteArrayOutputStream baos = new ByteArrayOutputStream(); // baos was undeclared in the original snippet
baos.write(mByteDecrypted);
InputStream in = new ByteArrayInputStream(baos.toByteArray());
and this is the clip_xor function:
protected byte[] clip_xor(byte[] data, byte[] key) {
    int num_key = key.length;
    int num_data = data.length;
    try {
        if (num_key > 0) {
            for (int i = 0, j = 0; i < num_data; i++, j = (j + 1) % num_key) {
                data[i] ^= key[j];
            }
        }
    } catch (Exception ex) {
        Log.i("error", ex.toString());
    }
    return data;
}
Hope this will be useful for someone facing the same problem. Thank you all for helping me solve this.
Special thanks to P'krit_s.
Primitive arrays are actually Objects (that's why they have an .equals method) but they do not implement the contract of equality (hashCode and equals) needed for content comparison; they inherit identity comparison from Object. You also cannot use ==, since according to the docs .getBytes returns a new byte[] instance. You should use Arrays.equals(byteEntity, byteTest) to test equality.
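A quick illustration of the difference; note also that for genuinely binary data such as PNG bytes, a round trip through new String(..., "UTF-8") is itself lossy, because byte sequences that are not valid UTF-8 get replaced:

import java.util.Arrays;

byte[] a = {1, 2, 3};
byte[] b = {1, 2, 3};
System.out.println(a.equals(b));         // false: inherited Object identity
System.out.println(a == b);              // false: two different instances
System.out.println(Arrays.equals(a, b)); // true: element-by-element comparison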
Have a look at the answer here.
In that case my goal was to transform a PNG image into a byte stream to display it in an embedded browser (it was a particular case where the browser did not show the PNG directly).
You may use the logic of that solution to convert the PNG to bytes and then to a String.
Then reverse the order of operations to get back to the original file.
I have encryption and decryption code that I use to encrypt and decrypt video files (mp4). I'm trying to speed up the decryption process, as the encryption one is not that relevant for my case. This is the code that I have for the decryption process:
private static void decryptFile() throws IOException, ShortBufferException,
        IllegalBlockSizeException, BadPaddingException
{
    int blockSize = cipher.getBlockSize();
    int outputSize = cipher.getOutputSize(blockSize);
    System.out.println("outputsize: " + outputSize);
    byte[] inBytes = new byte[blockSize];
    byte[] outBytes = new byte[outputSize];
    in = new FileInputStream(inputFile);
    out = new FileOutputStream(outputFile);
    BufferedInputStream inStream = new BufferedInputStream(in);
    int inLength = 0;
    boolean more = true;
    while (more)
    {
        inLength = inStream.read(inBytes);
        if (inLength == blockSize)
        {
            int outLength = cipher.update(inBytes, 0, blockSize, outBytes);
            out.write(outBytes, 0, outLength);
        }
        else more = false;
    }
    if (inLength > 0)
        outBytes = cipher.doFinal(inBytes, 0, inLength);
    else
        outBytes = cipher.doFinal();
    out.write(outBytes);
}
My question is how to speed up the decryption process in this code. I've tried decrypting a 10 MB mp4 file and it decrypts in 6-7 seconds; I'm aiming for under 1 second. Another thing I would like to know is whether my writing to the FileOutputStream out is actually slowing the process down, rather than the decryption itself. Any suggestions on how to go about speeding things up here?
I'm using AES for encryption/decryption.
Until I find a solution, I will be using a ProgressDialog telling the user to wait until the video has been decrypted (obviously, I'm not going to use the word "decrypted").
Why are you decrypting data only in blockSize increments? You do not show what type of object cipher is, but I am guessing it is a javax.crypto.Cipher instance. It can handle update() calls over arrays of arbitrary length, and you will have much less overhead if you use longer arrays. You should process data in blocks of, say, 8192 bytes (that's the traditional length for a buffer; it interacts reasonably well with CPU inner caches).
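A minimal sketch of that change, assuming the same cipher, inputFile and outputFile fields as in the question; the output buffer is sized with getOutputSize so update() can never overflow it:

private static void decryptFileBuffered() throws IOException, ShortBufferException,
        IllegalBlockSizeException, BadPaddingException
{
    byte[] inBytes = new byte[8192];
    byte[] outBytes = new byte[cipher.getOutputSize(inBytes.length)];
    BufferedInputStream inStream = new BufferedInputStream(new FileInputStream(inputFile));
    FileOutputStream outStream = new FileOutputStream(outputFile);
    try {
        int inLength;
        while ((inLength = inStream.read(inBytes)) != -1) {
            // update() buffers partial blocks internally, so short reads are safe.
            int outLength = cipher.update(inBytes, 0, inLength, outBytes);
            outStream.write(outBytes, 0, outLength);
        }
        outStream.write(cipher.doFinal()); // final block plus padding
    } finally {
        inStream.close();
        outStream.close();
    }
}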
bytebiscuit, your question gave me the solution I had been trying to find for the past 6 days. I just modified your code a little bit, and my 52 MB video file gets decrypted in just 4 seconds. The previous decrypting technique took 45 seconds, using different logic (not yours). That's a massive difference, 45 seconds down to 4. Wherever I have made a modification I have put a //modified comment. I am sure that if your video is a 10 MB video, it will get decrypted in 1 second. Try applying this; it should work out.
private static void decryptFile() throws IOException, ShortBufferException,
        IllegalBlockSizeException, BadPaddingException
{
    int blockSize = cipher.getBlockSize();
    int outputSize = cipher.getOutputSize(blockSize);
    System.out.println("outputsize: " + outputSize);
    byte[] inBytes = new byte[blockSize * 1024]; //modified
    byte[] outBytes = new byte[outputSize * 1024]; //modified
    in = new FileInputStream(inputFile);
    out = new FileOutputStream(outputFile);
    BufferedInputStream inStream = new BufferedInputStream(in);
    int inLength = 0;
    boolean more = true;
    while (more)
    {
        inLength = inStream.read(inBytes);
        if (inLength == inBytes.length) //modified: the read filled the whole buffer
        {
            int outLength = cipher.update(inBytes, 0, inLength, outBytes); //modified
            out.write(outBytes, 0, outLength);
        }
        else more = false;
    }
    if (inLength > 0)
        outBytes = cipher.doFinal(inBytes, 0, inLength);
    else
        outBytes = cipher.doFinal();
    out.write(outBytes);
}
I suggest you use the profiling tool provided in the Android SDK. It will tell you where you spend the most time (i.e., file writing or decoding).
See http://developer.android.com/guide/developing/debugging/debugging-tracing.html
This works on the emulator as well as on an actual device.
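For example, you can bracket just the decryption call with trace points; Debug.startMethodTracing writes a .trace file (to external storage by default) that you then open in Traceview:

import android.os.Debug;

Debug.startMethodTracing("decrypt"); // produces decrypt.trace
decryptFile();                       // the code under investigation
Debug.stopMethodTracing();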
Consider using the NDK. On devices before Froyo (and even Froyo itself), Dalvik code would be really slow due to the lack of a JIT (or, in Froyo, a very simple one). Even with a JIT, native architecture-optimized crypto code will always outrun Dalvik.
See also this question.
As an aside, if you're using AES directly, you're probably doing something wrong. If this is part of an effort to do DRM, make sure you realize that decompiling an Android app is trivial: your key will not be secure, which by definition defeats the encryption.
Instead of spending effort improving an inadequate architecture, you should consider a streaming solution: it has the great advantage of spreading out the computation time for the decryption, so that it is no longer noticeable. I mean: do not produce another file from your video source, but rather serve a stream from a local HTTP server. Unfortunately there is no such component in the SDK; you have to write your own implementation or search for an existing one. A sketch of the idea follows.
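A bare-bones sketch of the idea (LocalDecryptingServer is a hypothetical class, assuming a Cipher already initialized in DECRYPT_MODE as above; a real implementation would need proper HTTP parsing and Range-request support so MediaPlayer can seek):

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;

public class LocalDecryptingServer extends Thread {
    private final File encryptedFile;
    private final Cipher cipher; // assumed initialized in DECRYPT_MODE
    private final ServerSocket serverSocket;

    public LocalDecryptingServer(File encryptedFile, Cipher cipher) throws IOException {
        this.encryptedFile = encryptedFile;
        this.cipher = cipher;
        this.serverSocket = new ServerSocket(0); // any free local port
    }

    public String getUrl() {
        // Hand this URL to MediaPlayer / VideoView instead of a file path.
        return "http://127.0.0.1:" + serverSocket.getLocalPort() + "/video.mp4";
    }

    @Override
    public void run() {
        try {
            Socket client = serverSocket.accept();
            // Decrypt on the fly while the player consumes the stream,
            // so the cost is spread over playback instead of paid up front.
            InputStream in = new CipherInputStream(
                    new BufferedInputStream(new FileInputStream(encryptedFile)), cipher);
            OutputStream out = client.getOutputStream();
            out.write(("HTTP/1.0 200 OK\r\n"
                    + "Content-Type: video/mp4\r\n"
                    + "\r\n").getBytes("US-ASCII"));
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            in.close();
            out.close();
            client.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}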