Speed up encryption/decryption? - android

I have encryption and decryption code which I use to encrypt and decrypt video files (mp4). I'm trying to speed up the decryption process, as the encryption one is not that relevant for my case. This is the code that I have for the decryption:
private static void decryptFile() throws IOException, ShortBufferException,
        IllegalBlockSizeException, BadPaddingException
{
    int blockSize = cipher.getBlockSize();
    int outputSize = cipher.getOutputSize(blockSize);
    System.out.println("outputsize: " + outputSize);
    byte[] inBytes = new byte[blockSize];
    byte[] outBytes = new byte[outputSize];
    in = new FileInputStream(inputFile);
    out = new FileOutputStream(outputFile);
    BufferedInputStream inStream = new BufferedInputStream(in);
    int inLength = 0;
    boolean more = true;
    while (more)
    {
        // Read and decrypt one cipher block (16 bytes for AES) at a time.
        inLength = inStream.read(inBytes);
        if (inLength == blockSize)
        {
            int outLength = cipher.update(inBytes, 0, blockSize, outBytes);
            out.write(outBytes, 0, outLength);
        }
        else more = false;
    }
    // Flush the final (possibly partial) block and any padding.
    if (inLength > 0)
        outBytes = cipher.doFinal(inBytes, 0, inLength);
    else
        outBytes = cipher.doFinal();
    out.write(outBytes);
}
My question is how to speed up the decryption process in this code. I've tried decrypting a 10 MB mp4 file and it decrypts in 6-7 seconds. However, I'm aiming for under 1 second. Another thing I would like to know is whether writing to the FileOutputStream out is actually what slows the process down, rather than the decryption itself. Any suggestions on how to go about speeding things up here?
I'm using AES for encryption/decryption.
Until I find a solution, I will be using a ProgressDialog which tells the user to wait until the video has been decrypted (obviously, I'm not going to use the word "decrypted").

Why are you decrypting data only in blockSize increments? You do not show what type of object cipher is, but I am guessing this is a javax.crypto.Cipher instance. It can handle update() calls over arrays of arbitrary length, and you will have much less overhead if you use longer arrays. You should process data in blocks of, say, 8192 bytes (that's the traditional length for a buffer; it interacts reasonably well with CPU inner caches).
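For illustration, a minimal sketch of such a loop, assuming cipher is a javax.crypto.Cipher already initialized in DECRYPT_MODE and placed in a method declaring the same exceptions as the one above (the file variables are placeholders):

byte[] inBuf = new byte[8192];                        // read in 8 KB chunks
byte[] outBuf = new byte[cipher.getOutputSize(8192)]; // sized for one full chunk
int n;
try (BufferedInputStream inStream =
             new BufferedInputStream(new FileInputStream(inputFile));
     FileOutputStream outStream = new FileOutputStream(outputFile)) {
    while ((n = inStream.read(inBuf)) != -1) {
        int written = cipher.update(inBuf, 0, n, outBuf);
        outStream.write(outBuf, 0, written);
    }
    outStream.write(cipher.doFinal()); // final block plus padding
}

Wrapping the file stream in a javax.crypto.CipherInputStream achieves the same effect with even less code.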

bytebiscuit, your question gave me the solution I had been looking for over the past 6 days. I just modified your code a little bit, and my 52 MB video file now decrypts in just 4 seconds. My previous decryption technique (different logic, not yours) took 45 seconds. That's a massive difference: 45 seconds down to 4. Wherever I made a modification I have put a //modified comment line. I am sure that if your video is a 10 MB file, it will decrypt in about 1 second. Try applying this; it should work out.
private static void decryptFile() throws IOException, ShortBufferException,
        IllegalBlockSizeException, BadPaddingException
{
    int blockSize = cipher.getBlockSize();
    int outputSize = cipher.getOutputSize(blockSize);
    System.out.println("outputsize: " + outputSize);
    byte[] inBytes = new byte[blockSize * 1024];   // modified: 1024 blocks per read
    byte[] outBytes = new byte[outputSize * 1024]; // modified
    in = new FileInputStream(inputFile);
    out = new FileOutputStream(outputFile);
    BufferedInputStream inStream = new BufferedInputStream(in);
    int inLength = 0;
    boolean more = true;
    while (more)
    {
        inLength = inStream.read(inBytes);
        // modified: decrypt a full buffer at a time; note this assumes read()
        // fills the buffer completely except on the last read before EOF
        if (inLength == blockSize * 1024)
        {
            int outLength = cipher.update(inBytes, 0, blockSize * 1024, outBytes); // modified
            out.write(outBytes, 0, outLength);
        }
        else more = false;
    }
    if (inLength > 0)
        outBytes = cipher.doFinal(inBytes, 0, inLength);
    else
        outBytes = cipher.doFinal();
    out.write(outBytes);
}

I suggest you use the profiling tool provided in the Android SDK. It will tell you where you spend the most time (i.e., file writing or decrypting).
See http://developer.android.com/guide/developing/debugging/debugging-tracing.html
This works on the emulator as well as on an actual device.
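For illustration, a minimal way to capture a trace around the suspect code (the trace file name is arbitrary):

android.os.Debug.startMethodTracing("decrypt"); // writes decrypt.trace to external storage
decryptFile();                                  // the code under investigation
android.os.Debug.stopMethodTracing();

The resulting .trace file can then be opened in the traceview tool to see how the time splits between cipher.update() and the stream writes.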

Consider using the NDK. On devices before Froyo (and even Froyo itself), Java code would be really slow due to the lack of a JIT (or, in Froyo, a very simple one). Even with the JIT, native architecture-optimized crypto code will always outrun Dalvik.
See also this question.
As an aside, if you're using AES directly, you're probably doing something wrong. If this is part of an effort to do DRM, make sure you realize the full extent of the fact that decompiling an Android app is trivial. Your key will not be secure, which by definition defeats the encryption.

Instead of spending effort improving an inadequate architecture, you should consider a streaming solution: it has the great advantage of spreading the computation time for the decryption so that it is no longer noticeable. I mean: do not produce another file from your video source, but rather a stream, served by a local HTTP server. Unfortunately there is no such component in the SDK; you have to make your own implementation or search for an existing one.
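To make the idea concrete, here is a minimal sketch of such a server, assuming a javax.crypto.Cipher already initialized in decrypt mode; the class name, single-connection handling, and bare-bones HTTP response are illustrative simplifications (a real player will likely also need Content-Length and Range support):

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;

public class LocalVideoServer extends Thread {
    private final File encryptedFile;
    private final Cipher cipher; // initialized in DECRYPT_MODE by the caller
    private final ServerSocket serverSocket;

    public LocalVideoServer(File encryptedFile, Cipher cipher) throws IOException {
        this.encryptedFile = encryptedFile;
        this.cipher = cipher;
        this.serverSocket = new ServerSocket(0); // bind to any free local port
    }

    public int getPort() {
        return serverSocket.getLocalPort();
    }

    @Override
    public void run() {
        try (Socket client = serverSocket.accept();
             InputStream in = new CipherInputStream(
                     new BufferedInputStream(new FileInputStream(encryptedFile)), cipher);
             OutputStream out = client.getOutputStream()) {
            out.write(("HTTP/1.0 200 OK\r\nContent-Type: video/mp4\r\n\r\n").getBytes("US-ASCII"));
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n); // decryption happens on the fly as bytes are pulled
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

The player is then pointed at http://127.0.0.1:<port>/ and the decryption cost is spread over the duration of playback instead of being paid up front.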

Related

How to increase speed of generating md5 of multiple files?

I have 10,000 to 12,000 image files, taking up to 800 MB, present in external storage.
I am using a loop which takes each file path and generates an md5 of it, but due to the huge number of files being read to create the md5s, this takes a lot of time.
This is the algorithm for generating the md5 of a file:
public static String getMd5OfFile(String filePath) {
    StringBuilder returnVal = new StringBuilder();
    try {
        InputStream input = new FileInputStream(filePath);
        // byte[] buffer = new byte[1024];
        byte[] buffer = new byte[2048];
        MessageDigest md5Hash = MessageDigest.getInstance("MD5");
        int numRead = 0;
        while (numRead != -1) {
            numRead = input.read(buffer);
            if (numRead > 0) {
                md5Hash.update(buffer, 0, numRead);
            }
        }
        input.close();
        // Convert the 16-byte digest to a hex string.
        byte[] md5Bytes = md5Hash.digest();
        for (int i = 0; i < md5Bytes.length; i++) {
            returnVal.append(Integer.toString((md5Bytes[i] & 0xff) + 0x100, 16).substring(1));
        }
    } catch (Throwable t) {
        t.printStackTrace();
    }
    return returnVal.toString().toUpperCase();
}
So the question is: can I increase the buffer size to make the operation faster, and by how much can I increase it without breaking the operation or creating an issue for md5 generation?
And will wrapping the input stream in a buffered stream make it faster?
As with any optimisation problem, you should measure your performance to learn whether any of the changes you make actually have an impact.
2k is certainly a small buffer size and a larger one could do better. But I/O stacks have buffers all the way down, so it might have negligible impact. Try it and measure yourself.
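For instance, a rough way to compare buffer sizes on-device; the hashing call is a placeholder for a hypothetical getMd5OfFile variant that takes the buffer length as a parameter:

int[] sizes = {2048, 8192, 32768, 65536};
for (int size : sizes) {
    long start = System.nanoTime();
    getMd5OfFile(filePath, size); // hypothetical variant taking a buffer length
    long elapsedMs = (System.nanoTime() - start) / 1000000;
    System.out.println(size + "-byte buffer: " + elapsedMs + " ms");
}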
Another optimisation worth trying is to notice that reading a file is an I/O-bound operation while computing MD5 is CPU-bound. Have one thread read file content and another thread update the MD5 state. Depending on the number of CPU cores on your device, you could also hash multiple files in parallel for further gains.
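Here is a minimal sketch of hashing several files in parallel with an ExecutorService, reusing the getMd5OfFile method above; the pool size heuristic is an assumption, not a measured optimum:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public static List<String> getMd5OfFiles(List<String> filePaths) throws Exception {
    // One worker per core; I/O and hashing overlap across files.
    int poolSize = Runtime.getRuntime().availableProcessors();
    ExecutorService pool = Executors.newFixedThreadPool(poolSize);
    List<Future<String>> futures = new ArrayList<>();
    for (final String path : filePaths) {
        futures.add(pool.submit(new Callable<String>() {
            @Override
            public String call() {
                return getMd5OfFile(path);
            }
        }));
    }
    List<String> results = new ArrayList<>();
    for (Future<String> f : futures) {
        results.add(f.get()); // blocks until that file's hash is ready
    }
    pool.shutdown();
    return results;
}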

Chunk size sent to client affects the file completeness

Update: The problem must be on the Android side, not Qt.
The problem is simply that I can't send more than 1000 bytes (correctly) from Windows (via Qt) to Android. Here I post the full information:
Code in Qt Creator (Windows side):
QFile inputFile(fileInfo->absoluteFilePath());
QByteArray read;
inputFile.open(QIODevice::ReadOnly);
int size = 0;
while (1) {
    read.clear();
    read = inputFile.read(1000);
    qDebug() << "Read : " << read.size();
    size += read.size();
    if (read.size() == 0) {
        break;
    }
    QByteArray toWrite(read);
    newSocket->write(toWrite);
    newSocket->flush();
    newSocket->waitForBytesWritten();
    this->sleep(1);
}
inputFile.close();
qDebug() << "Transfer Done! " << size << " bytes";
Java code (Android side):
DataOutputStream dos;
DataInputStream dis;
Socket s;
s = new Socket("192.168.137.1",8080);
dos = new DataOutputStream(s.getOutputStream());
dis = new DataInputStream(s.getInputStream());
while(true){
if(dis.available() > 0 ) {
int chunkSize = 1000;
byte[] b = new byte[chunkSize];
dis.read(b);
writeToExternalStoragePublic("test.png",b);}
The code works pretty well. It even works when I set the chunk size on both sides to 1,000,000, and the data is written on the Android side, but a lot of the bytes are empty.
Check these photos from the Hex Workshop data visualizer, using 1000 as chunkSize on the left and 2000 on the right.
Here are the files:
1000ChunkSize
2000ChunkSize
The problem is that 1000 is too slow, and it takes a lot of time even for small files, though it works. What do you suggest?
A possible problem was the timing on the server side, even though it already calls:
waitForBytesWritten();
I added
this->sleep(1);
but it didn't help.
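One likely culprit, offered here as an assumption since the question above is unresolved: InputStream.read(byte[]) may return fewer bytes than the buffer holds, leaving the rest of the array zero-filled when the whole buffer is written out, which would explain the empty bytes at larger chunk sizes. A minimal sketch of a receive loop that only passes along the bytes actually read:

int chunkSize = 2000;
byte[] b = new byte[chunkSize];
int n;
// read() reports how many bytes actually arrived; it may be fewer than chunkSize
while ((n = dis.read(b)) != -1) {
    // pass along only the n valid bytes (java.util.Arrays.copyOf trims the buffer)
    writeToExternalStoragePublic("test.png", java.util.Arrays.copyOf(b, n));
}

Alternatively, DataInputStream.readFully(b) blocks until the whole buffer is filled, which makes fixed-size chunks reliable when the total length is known in advance.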

IOException when trying to restore data in BackupAgent in chunks instead of all at once

I've implemented a custom BackupAgent, and part of my data are images of about 1 MB each. When creating the backup, every image is written as a separate entity. On restoring the images, I wanted to read the data in 4K (BUFFER_SIZE) chunks and write it to a file like this:
FileOutputStream out = new FileOutputStream(file);
byte[] buffer = new byte[BUFFER_SIZE];
int offset = 0;
int n = 0;
// readEntityData returns 0 when all data of the entity has been read
while (0 != (n = data.readEntityData(buffer, offset, BUFFER_SIZE))) {
    out.write(buffer, 0, n);
    offset += n;
}
However, this only reads the first 4K chunk correctly; on the second call of readEntityData an IOException with error code 0xffffffff is thrown.
When I make the buffer as large as the entity's data size and read all the data at once, it works perfectly, but I think it would be safer to use a smaller buffer.
Has anybody experienced something like that? All the examples I found read the data at once, not in multiple chunks.
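The question is left unanswered above, but one plausible cause, offered here as an assumption: the offset argument of BackupDataInput.readEntityData() is an offset into the destination buffer, not into the entity's data, so passing a growing offset together with a full BUFFER_SIZE soon points past the end of the 4K array. A sketch that keeps the buffer offset at zero:

FileOutputStream out = new FileOutputStream(file);
byte[] buffer = new byte[BUFFER_SIZE];
int n;
// Write into the start of the buffer each time; the stream advances through
// the entity's data on its own and returns 0 once it is exhausted.
while ((n = data.readEntityData(buffer, 0, BUFFER_SIZE)) > 0) {
    out.write(buffer, 0, n);
}
out.close();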

Android NIO - java.io.IOException: Value too large for defined data type

I'm trying to write a very large file to another very large file. I'm receiving this error on the FileChannel writing line and I'm unsure why. I thought it was because I was going out of the limits of the data type long, but long can go up to 9,223,372,036,854,775,807 and I'm only going up to 5,372,896,745 at the most. Any ideas why this is occurring? Is there some limit that MappedByteBuffer has? This doesn't occur for smaller files, and I haven't run into any issues using the same code in a Java desktop application. (It only happens on Android.)
File f1 = new File(filename1);
FileChannel fic, foc;
long fsize;
MappedByteBuffer mBUf;

FileOutputStream out = new FileOutputStream(f1, true);
foc = out.getChannel();

File f2 = new File(filename2);
FileInputStream in = new FileInputStream(f2);
fic = in.getChannel();
fsize = fic.size();

// Map and copy the input in chunks (note: the loop advances by 65536
// while mapping Resource.MEMORY_ALLOC_SIZE bytes at a time)
for (long b = 0; b < fsize; b += 65536)
{
    if (fsize - b < Resource.MEMORY_ALLOC_SIZE)
        mBUf = fic.map(FileChannel.MapMode.READ_ONLY, b, fsize - b);
    else
        mBUf = fic.map(FileChannel.MapMode.READ_ONLY, b, Resource.MEMORY_ALLOC_SIZE);
    foc.write(mBUf); // ERROR HERE!
}
fic.close();
in.close();
foc.close();
out.close();
Any ideas/feedback is appreciated!
Is there some limit that MappedByteBuffer has?
Of course there is. It is limited by the available virtual memory for a start, and after that by the virtual address space.
You should be using transferTo() for this task rather than MappedByteBuffers, as there is no agreed means of disposing of the virtual address space occupied by the latter.
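A minimal sketch of the transferTo() approach, reusing the channel variables from the question; the loop guards against transfers that move fewer bytes than requested:

long position = 0;
long count = fic.size();
// transferTo() may transfer fewer bytes than asked for, so loop until complete
while (position < count) {
    position += fic.transferTo(position, count - position, foc);
}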
Note that a Java long is 64-bit on every platform, so the long itself is not the problem. What does run out on a 32-bit device (which most Android devices are, since they don't have over 4 GB of RAM) is the virtual address space: a 32-bit process can address at most 4,294,967,295 bytes, and since the mapped regions are never released, repeatedly mapping chunks of a 5 GB file will exhaust that limit.

Can 2 WritableByteChannels be used at the same time?

When I write directly to 2 OutputStreams, everything works fine. When I try to write to 2 channels, though, the second one seemingly does not receive the data.
Does anyone know if 2 WritableByteChannels can be written to at the same time? If not, any other ideas of what I can do to perform the same action still using NIO/Channels?
connection2 = new Socket(Resource.LAN_DEV2_IP_ADDRESS, Resource.LAN_DEV2_SOCKET_PORT);
out2 = connection2.getOutputStream();
connection = new Socket(Resource.LAN_HOST_IP_ADDRESS, Resource.LAN_HOST_SOCKET_PORT);
out = connection.getOutputStream();

File f = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS), filename);
in = new FileInputStream(f);
fic = in.getChannel();
fsize = fic.size();
channel2 = Channels.newChannel(out2);
channel = Channels.newChannel(out);

// Send header
byte[] p = createHeaderPacket(filename, f.length());
out2.write(p); // Received correctly
out.write(p);  // Received correctly

// Send file
long currPos = 0;
while (currPos < fsize)
{
    if (fsize - currPos < Resource.MEMORY_ALLOC_SIZE)
    {
        mappedByteBuffer = fic.map(FileChannel.MapMode.READ_ONLY, currPos, fsize - currPos);
        channel2.write(mappedByteBuffer); // Received correctly
        channel.write(mappedByteBuffer);  // Never received
        currPos = fsize;
    }
    else
    {
        mappedByteBuffer = fic.map(FileChannel.MapMode.READ_ONLY, currPos, Resource.MEMORY_ALLOC_SIZE);
        channel2.write(mappedByteBuffer); // Received correctly
        channel.write(mappedByteBuffer);  // Never received
        currPos += Resource.MEMORY_ALLOC_SIZE;
    }
}
Try:
channel2.write(mappedByteBuffer.duplicate());
channel.write(mappedByteBuffer);
The way to understand NIO Buffers is to keep in mind its basic properties:
the underlying data store (which is commonly an ordinary byte array, but can be other things, such as a memory-mapped region of a file);
the start and capacity within that underlying space;
your current position in the buffer; and
the limit of the buffer.
All buffer operations provided by NIO are documented in terms of how the operation affects these properties. For example, the WritableByteChannel.write() documentation tells us that:
Between 0 and src.remaining() (inclusive) bytes will be written to the channel; and
If count bytes were written, the ByteBuffer's position will be increased by count when write() returns.
So looking at your original code:
channel2.write(mappedByteBuffer); // Received correctly
channel.write(mappedByteBuffer); // Never received
If the first write writes the entire remaining mappedByteBuffer to channel2, after that statement mappedByteBuffer.remaining() will be zero, so the write to channel will not write any bytes at all.
Hence my suggestion above to use ByteBuffer.duplicate() on the first write. This method returns a new ByteBuffer object which:
shares the original buffer's underlying store (so you're not making an unnecessary copy in memory of the actual bytes you want to write twice); but
has its own position (and remaining) values, so when channel2.write() adjusts that (duplicate) ByteBuffer's position, it will leave the position unchanged in the original buffer,
so channel.write() will still receive the intended range of bytes.
As an alternative, you could also write:
mappedByteBuffer.mark(); // store the current position
channel2.write(mappedByteBuffer);
mappedByteBuffer.reset(); // move position to the previously marked position
channel.write(mappedByteBuffer);
I'm also inclined to agree with EJP's point that you're probably not making the best use of MappedByteBuffer here. You could simplify your copying loop to:
ByteBuffer buffer = ByteBuffer.allocate(Resource.MEMORY_ALLOC_SIZE);
while (fic.read(buffer) >= 0) {
    buffer.flip();
    channel2.write(buffer.duplicate());
    channel.write(buffer);
    buffer.clear(); // reset position/limit so the next read() can refill the buffer
}
Here the read() method increases position by the number of bytes read from the channel, then the flip() method sets the limit to that position and the position back to 0, which means the bytes you've just read are in the remaining range that write() will consume.
However, you'll notice that EJP's loop is a little more complicated than that. That's because write operations on channels might not necessarily write every remaining byte. (The write() documentation gives the example of a networking socket opened in non-blocking mode.) However that sample code (and the similar sample in the documentation of ByteBuffer.compact()) relies on the fact that you're only writing to a single channel; when you're writing to two different channels, you have to handle the fact that the two channels might accept a different number of bytes. So:
ByteBuffer buffer = ByteBuffer.allocate(Resource.MEMORY_ALLOC_SIZE);
while (fic.read(buffer) >= 0) {
    buffer.flip();
    buffer.mark();
    while (buffer.hasRemaining()) {
        channel2.write(buffer);
    }
    buffer.reset();
    while (buffer.hasRemaining()) {
        channel.write(buffer);
    }
    buffer.clear();
}
Of course multiple channels can be used at the same time, but, more to the point, that's a terrible way to send a file. Creating lots of MappedByteBuffers causes all kinds of problems, as the underlying mapped regions are never released. Just open it as a normal channel and use the canonical NIO copy loop:
while (in.read(buffer) >= 0 || buffer.position() > 0)
{
    buffer.flip();
    out.write(buffer);
    buffer.compact();
}
