Can 2 WritableByteChannels be used at the same time? - android

When I write directly to 2 OutputStreams, everything works fine. When I try to write to 2 channels, though, the second one seemingly never receives the data.
Does anyone know if 2 WritableByteChannels can be written to at the same time? If not, any other ideas on what I can do to perform the same action while still using NIO/Channels?
connection2 = new Socket(Resource.LAN_DEV2_IP_ADDRESS, Resource.LAN_DEV2_SOCKET_PORT);
out2 = connection2.getOutputStream();
connection = new Socket(Resource.LAN_HOST_IP_ADDRESS, Resource.LAN_HOST_SOCKET_PORT);
out = connection.getOutputStream();
File f = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS), filename);
in = new FileInputStream(f);
fic = in.getChannel();
fsize = fic.size();
channel2 = Channels.newChannel(out2);
channel = Channels.newChannel(out);
// Send header
byte[] p = createHeaderPacket(filename, f.length());
out2.write(p); // Received correctly
out.write(p); // Received correctly
// Send file
long currPos = 0;
while (currPos < fsize)
{
    if (fsize - currPos < Resource.MEMORY_ALLOC_SIZE)
    {
        mappedByteBuffer = fic.map(FileChannel.MapMode.READ_ONLY, currPos, fsize - currPos);
        channel2.write(mappedByteBuffer); // Received correctly
        channel.write(mappedByteBuffer); // Never received
        currPos = fsize;
    }
    else
    {
        mappedByteBuffer = fic.map(FileChannel.MapMode.READ_ONLY, currPos, Resource.MEMORY_ALLOC_SIZE);
        channel2.write(mappedByteBuffer); // Received correctly
        channel.write(mappedByteBuffer); // Never received
        currPos += Resource.MEMORY_ALLOC_SIZE;
    }
}

Try:
channel2.write(mappedByteBuffer.duplicate());
channel.write(mappedByteBuffer);
The way to understand NIO Buffers is to keep in mind their basic properties (illustrated in the short sketch after this list):
the underlying data store (which is commonly an ordinary byte array, but can be other things, such as a memory-mapped region of a file);
the start and capacity within that underlying space;
your current position in the buffer; and
the limit of the buffer.
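For instance, here is a minimal sketch (plain java.nio, nothing Android-specific) of how position and limit move as a buffer is filled and then prepared for draining:

ByteBuffer buf = ByteBuffer.allocate(8);                 // capacity 8, position 0, limit 8
buf.put((byte) 1).put((byte) 2);                         // filling advances position to 2
System.out.println(buf.position() + "/" + buf.limit()); // prints 2/8
buf.flip();                                              // limit = 2, position = 0
System.out.println(buf.remaining());                     // prints 2: bytes ready to be consumed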
All buffer operations provided by NIO are documented in terms of how the operation affects these properties. For example, the WritableByteChannel.write() documentation tells us that:
Between 0 and src.remaining() (inclusive) bytes will be written to the channel; and
If count bytes were written, the ByteBuffer's position will be increased by count when write() returns.
So looking at your original code:
channel2.write(mappedByteBuffer); // Received correctly
channel.write(mappedByteBuffer); // Never received
If the first write writes the entire remaining mappedByteBuffer to channel2, after that statement mappedByteBuffer.remaining() will be zero, so the write to channel will not write any bytes at all.
Hence my suggestion above to use ByteBuffer.duplicate() on the first write. This method returns a new ByteBuffer object which:
shares the original buffer's underlying store (so you're not making an unnecessary copy in memory of the actual bytes you want to write twice); but
has its own position (and remaining) values, so when channel2.write() adjusts that (duplicate) ByteBuffer's position, it will leave the position unchanged in the original buffer,
so channel.write() will still receive the intended range of bytes.
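As a quick toy illustration of that independence (not your transfer code, just the ByteBuffer API):

ByteBuffer buf = ByteBuffer.wrap(new byte[] { 10, 20, 30, 40 });
ByteBuffer dup = buf.duplicate();   // same backing array, independent position/limit
dup.get();                          // advances only the duplicate's position
System.out.println(dup.position()); // prints 1
System.out.println(buf.position()); // prints 0 -- the original is untouched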
As an alternative, you could also write:
mappedByteBuffer.mark(); // store the current position
channel2.write(mappedByteBuffer);
mappedByteBuffer.reset(); // move position to the previously marked position
channel.write(mappedByteBuffer);
I'm also inclined to agree with EJP's point that you're probably not making the best use of MappedByteBuffer here. You could simplify your copying loop to:
ByteBuffer buffer = ByteBuffer.allocate(Resource.MEMORY_ALLOC_SIZE);
while (fic.read(buffer) >= 0) {
    buffer.flip();                      // limit = bytes read, position = 0
    channel2.write(buffer.duplicate()); // consumes only the duplicate's position
    channel.write(buffer);              // consumes the original's position
    buffer.clear();                     // reset position/limit for the next read()
}
Here the read() method increases position by the number of bytes read from the channel, then the flip() method sets the limit to that position and the position back to 0, which means the bytes you've just read are in the remaining range that write() will consume. The clear() at the end of the loop resets position and limit so the next read() has the whole buffer to fill again.
However, you'll notice that EJP's loop is a little more complicated than that. That's because write operations on channels might not necessarily write every remaining byte. (The write() documentation gives the example of a networking socket opened in non-blocking mode.) However that sample code (and the similar sample in the documentation of ByteBuffer.compact()) relies on the fact that you're only writing to a single channel; when you're writing to two different channels, you have to handle the fact that the two channels might accept a different number of bytes. So:
ByteBuffer buffer = ByteBuffer.allocate(Resource.MEMORY_ALLOC_SIZE);
while (fic.read(buffer) >= 0) {
    buffer.flip();
    buffer.mark();
    while (buffer.hasRemaining()) {
        channel2.write(buffer);
    }
    buffer.reset();
    while (buffer.hasRemaining()) {
        channel.write(buffer);
    }
    buffer.clear();
}

Of course multiple channels can be used at the same time, but more to the point that's a terrible way to send a file. Creating lots of MappedByteBuffers causes all kinds of problems as the underlying mapped regions are never released. Just open it as a normal channel and use the canonical NIO copy loop:
while (in.read(buffer) >= 0 || buffer.position() > 0)
{
    buffer.flip();     // prepare what was just read for writing
    out.write(buffer); // may write fewer bytes than remaining
    buffer.compact();  // keep any unwritten bytes and get ready to read again
}

Related

Android USB host: interrupt does not respond immediately

I have a USB device which has a button, and I want an Android app to catch the button's signal.
I found the interface and endpoint number of the button.
It seemed to perform normally on a Galaxy S3 and a Galaxy Note, but later I found that it has a delay on other phones.
I was able to receive instant responses about 10% of the time; usually there was a 2-second delay, with some cases where the response was lost entirely.
Although I couldn't figure out the exact reason, I realized that the phones that had response delays were those with kernel version 3.4 or later.
Here is the code that I used initially.
if (mConnection != null) {
    mConnection.claimInterface(mInterfaces.get(0), true);
    final UsbEndpoint endpoint = mInterfaces.get(0).getEndpoint(0);
    Thread getSignalThread = new Thread(new Runnable() {
        @Override
        public synchronized void run() {
            byte[] buffer = new byte[8];
            final ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
            while (mConnection != null) {
                int len = mConnection.bulkTransfer(endpoint, buffer, buffer.length, 0);
                if (len >= 0) {
                    // do my own code
                }
            }
        }
    });
    getSignalThread.setPriority(Thread.MAX_PRIORITY);
    getSignalThread.start();
}
Edit: timeout
When the timeout was set to 50 ms, I wasn't able to receive responses most of the time. When the timeout was 500 ms, I was able to initially get some delayed responses; however, I lost all responses after several tries with this setting.
Using UsbRequest
In addition to using the bulkTransfer method, I also tried using UsbRequest; below is the code that I used.
@Override
public synchronized void run() {
    byte[] buffer = new byte[8];
    final ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
    UsbRequest inRequest = new UsbRequest();
    inRequest.initialize(mConnection, endpoint);
    while (mConnection != null) {
        inRequest.queue(byteBuffer, buffer.length);
        if (mConnection.requestWait() == inRequest) {
            // do my own code
        }
    }
}
However, the same kind of delay happened even after using UsbRequest.
Using libusb
I also tried using libusb_interrupt_transfer from an open source library called libusb.
However this also produced the same type of delay that I had when using UsbDeviceConnection.
unsigned char data_bt[8] = { 0, };
uint32_t out[2];
int transfered = 0;
while (devh_usb != NULL) {
    libusb_interrupt_transfer(devh_usb, 0x83, data_bt, 8, &transfered, 0);
    memcpy(out, data_bt, 8);
    if (out[0] == PUSH) {
        LOGI("button pushed!!!");
        memset(data_bt, 0, 8);
        //(env)->CallVoidMethod( thiz, mid);
    }
}
After looking into how libusb_interrupt_transfer is processed inside libusb, I was able to figure out the general steps of an interrupt transfer:
1. make a transfer object of type interrupt
2. make a urb object that points to the transfer object
3. submit the urb object to the device's fd
4. detect any changes in the fd object via urb object
5. read urb through ioctl
Steps 3, 4, and 5 are the steps involving file I/O.
I was able to find out that at step 4 the program waits for the button press before moving on to the next step.
Therefore I tried changing poll to epoll in order to check whether the poll function was causing the delay; unfortunately nothing changed.
I also tried setting the timeout of the poll function to 500 ms and reading the fd's value through ioctl each time, but only found that the value changed 2-3 seconds after pressing the button.
So, in conclusion, I suspect there is a delay in updating the fd's value after the button press. If there is anyone who could help me with this issue, please let me know. Thank you.
Thanks for reading

IOException when trying to restore data in BackupAgent in chunks instead of all at once

I've implemented a custom BackupAgent, and part of my data is images which are about 1 MB each. When creating the backup, every image is written as a separate entity. On restoring the images, I wanted to read the data in 4K (BUFFER_SIZE) chunks and write it to a file like this:
FileOutputStream out = new FileOutputStream(file);
byte[] buffer = new byte[BUFFER_SIZE];
int offset = 0;
int n = 0;
// readEntityData returns 0 when all data of entity is read
while (0 != (n = data.readEntityData(buffer, offset, BUFFER_SIZE))) {
    out.write(buffer, 0, n);
    offset += n;
}
However, this only reads the first 4K chunk correctly; on the second call of readEntityData, an IOException with error code 0xffffffff is thrown.
When I make the buffer as large as the entity's data size and read all the data at once, it works perfectly, but I think it would be safer to use a smaller buffer.
Has anybody experienced something like that? All examples I found read the data at once and not in multiple chunks.
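One thing worth checking, going by the BackupDataInput documentation: the offset argument to readEntityData() is an offset into the destination byte array, not a position within the entity's data, so a growing offset combined with a fixed BUFFER_SIZE would eventually point past the end of the 4K buffer. A hedged sketch of chunked reading under that assumption (writing each chunk at offset 0 and letting the stream track its own progress):

FileOutputStream out = new FileOutputStream(file);
byte[] buffer = new byte[BUFFER_SIZE];
int n;
// Always write into the start of the scratch buffer; the restore stream
// itself remembers how far into the entity's data we already are.
while ((n = data.readEntityData(buffer, 0, BUFFER_SIZE)) > 0) {
    out.write(buffer, 0, n);
}
out.close();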

Reading a .NET Stream: high CPU usage - how to read without while (true)?

Since my problem is close to this one, I have been looking at feedback on this possible solution: Reading on a NetworkStream = 100% CPU usage, but I fail to find the solution I need.
Much like in that other question, I want to use something other than an infinite while loop.
More precisely, I am using Xamarin to build an Android application in Visual Studio. Since I need a Bluetooth service, I am using a Stream to read and send data.
Reading data from Stream.InputStream is where I have a problem: is there some sort of blocking call that waits for data to become available without using a while (true) loop?
I tried :
Begin/End Read
Task.Run and await
Here is a code sample:
public byte[] RetrieveDataFromStream()
{
    List<byte> packet = new List<byte>();
    int readBytes = 0;
    while (_inputStream.CanRead && _inputStream.IsDataAvailable() && readBytes < 1024 && _state == STATE_CONNECTED)
    {
        try
        {
            byte[] buffer = new byte[1];
            readBytes = _inputStream.Read(buffer, 0, buffer.Length);
            packet.Add(buffer[0]);
        }
        catch (Java.IO.IOException e)
        {
            return null;
        }
    }
    return packet.ToArray();
}
I call this method from a while loop.
This loop keeps checking until the method returns something other than null, in which case I process the data accordingly.
As soon as there is data to be processed, the CPU usage gets low, way lower than when there is no data to process.
I know why my CPU usage is high: the loop checks as often as possible whether there is something to read. On the plus side, there is close to no delay when receiving data, but no, that's not a viable solution.
Any ideas on how to change this?
UPDATE 1
As per Marc Gravell's idea, here is what I would like to understand and try:
byte[] buffer = new byte[4096];
int readBytes;
while (_inputStream.CanRead
    && (readBytes = _inputStream.Read(buffer, 0, buffer.Length)) > 0
    && _state == STATE_CONNECTED)
{
    for (int i = 0; i < readBytes; i++)
        packet.Add(buffer[i]);
    // or better: some kind of packet.AddRange(buffer, 0, readBytes)
}
How should I call this code snippet?
Two questions:
If there is nothing to read, the while condition will fail: what should I do next?
Once I'm done reading, what do I do next? What should I do to catch any new incoming packets?
Here are some explanations that should help:
The Android device is connected, via Bluetooth, to another device that sends data. It will always send a pre-designed packet with a specified size (1024).
That device can stream the data continuously for some time, but can also stop at any time for a long period. How do I deal with such behavior?
An immediate fix would be:
don't read one byte at a time
don't create a new buffer per-byte
don't sit in a hot loop when there is no data available
For example:
byte[] buffer = new byte[4096];
int readBytes;
while (_inputStream.CanRead
    && (readBytes = _inputStream.Read(buffer, 0, buffer.Length)) > 0
    && _state == STATE_CONNECTED)
{
    for (int i = 0; i < readBytes; i++)
        packet.Add(buffer[i]);
    // or better: some kind of packet.AddRange(buffer, 0, readBytes)
}
Note that the use of readBytes in the original while check looked somewhat... confused; I've replaced it with a "while we don't get an EOF" check; feel free to add your own logic.

Byte Dropped Over Bluetooth Connection in Android

I am having some issues with bytes being dropped over a Bluetooth connection between an Android device (Gingerbread 2.3.1) and a PC. The way I receive the data is in a 2-byte buffer. The values being received stream from the PC over a few minutes (the values represent a waveform). Here are just a few snippets of code so you can get the idea. The base of my code is the Android Bluetooth Chat sample code.
BluetoothSocket socket;
...
mmInStream = socket.getInputStream();
...
byte[] buffer = new byte[2];
...
bytes = mmInStream.read(buffer);
Has anyone had issues with this type of thing? The dropped bytes seem to happen at random times, while at other times the values received are as expected. I am using a 2-byte buffer because the values I am receiving are 16-bit signed integers. On the PC side of things I am using RealTerm to send the binary files of data.
Is it possible that my buffer is too small and that is causing the dropped bytes?
Thanks
Following up on your answer: you could just use a counter to remember how many bytes have already been read, compare it to the number wanted, and also use it as the index at which to write the next byte(s). See a C# version at http://www.yoda.arachsys.com/csharp/readbinary.html
public static void ReadWholeArray (Stream stream, byte[] data)
{
    int offset = 0;
    int remaining = data.Length;
    while (remaining > 0)
    {
        int read = stream.Read(data, offset, remaining);
        if (read <= 0)
            throw new EndOfStreamException
                (String.Format("End of stream reached with {0} bytes left to read", remaining));
        remaining -= read;
        offset += read;
    }
}
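Since the surrounding code here is Java, a rough Java equivalent of that helper might look like this (a sketch; InputStream.read has the same contract of possibly returning fewer bytes than requested):

public static void readWholeArray(InputStream stream, byte[] data) throws IOException {
    int offset = 0;
    int remaining = data.length;
    while (remaining > 0) {
        int read = stream.read(data, offset, remaining);
        if (read <= 0) {
            throw new EOFException("End of stream reached with " + remaining + " bytes left to read");
        }
        remaining -= read;
        offset += read;
    }
}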
I have found what the issue is. I want to thank alanjmcf for pointing me in the right direction.
I wasn't checking the bytes variable to see how many bytes were returned from mmInStream.read(buffer); I was simply expecting every buffer returned to contain 2 bytes. I solved the issue with the following code, run after getting the buffer back from the InputStream:
// In the case where buffer returns with only 1 byte
if (lagging == true) {
    if (bytes == 1) {
        lagging = false;
        newBuf = new byte[] { laggingBuf, buffer[0] };
        ringBuffer.store(newBuf);
    } else if (bytes == 2) {
        newBuf = new byte[] { laggingBuf, buffer[0] };
        laggingBuf = buffer[1];
        ringBuffer.store(newBuf);
    }
} else if (lagging == false) {
    if (bytes == 2) {
        newBuf = buffer.clone();
        ringBuffer.store(newBuf);
    } else if (bytes == 1) {
        lagging = true;
        laggingBuf = buffer[0];
    }
}
This fixed my problem. Any suggestions on a better methodology?
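One possible simplification, sketched along the lines of the counter-based advice above and assuming blocking reads are acceptable: DataInputStream.readFully() already loops internally until the requested number of bytes has arrived, so the lagging-byte bookkeeping disappears. (The running flag here is hypothetical loop control, not from the original code.)

DataInputStream dataIn = new DataInputStream(mmInStream);
byte[] frame = new byte[2];
while (running) {                    // 'running' is a hypothetical loop-control flag
    dataIn.readFully(frame);         // blocks until exactly 2 bytes arrive; throws EOFException at end of stream
    ringBuffer.store(frame.clone()); // store a copy, since frame is reused each iteration
}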

Speed up encryption/decryption?

I have encryption and decryption code which I use to encrypt and decrypt video files (mp4). I'm trying to speed up the decryption process, as the encryption one is not that relevant for my case. This is the code I have for the decryption process:
private static void decryptFile() throws IOException, ShortBufferException, IllegalBlockSizeException, BadPaddingException
{
    int blockSize = cipher.getBlockSize();
    int outputSize = cipher.getOutputSize(blockSize);
    System.out.println("outputsize: " + outputSize);
    byte[] inBytes = new byte[blockSize];
    byte[] outBytes = new byte[outputSize];
    in = new FileInputStream(inputFile);
    out = new FileOutputStream(outputFile);
    BufferedInputStream inStream = new BufferedInputStream(in);
    int inLength = 0;
    boolean more = true;
    while (more)
    {
        inLength = inStream.read(inBytes);
        if (inLength == blockSize)
        {
            int outLength = cipher.update(inBytes, 0, blockSize, outBytes);
            out.write(outBytes, 0, outLength);
        }
        else more = false;
    }
    if (inLength > 0)
        outBytes = cipher.doFinal(inBytes, 0, inLength);
    else
        outBytes = cipher.doFinal();
    out.write(outBytes);
}
My question is how to speed up the decryption process in this code. I've tried decrypting a 10 MB mp4 file and it decrypts in 6-7 seconds; however, I'm aiming for under 1 second. Another thing I would like to know is whether my writing to the FileOutputStream out is actually slowing the process down, rather than the decryption itself. Any suggestions on how to go about speeding things up here?
I'm using AES for encryption/decryption.
Until I find a solution, I will be using a ProgressDialog which tells the user to wait until the video has been decrypted (Obviously, I'm not going to use the word: decrypted).
Why are you decrypting data only in blockSize increments? You do not show what type of object cipher is, but I am guessing it is a javax.crypto.Cipher instance. It can handle update() calls over arrays of arbitrary length, and you will have much less overhead if you use longer arrays. You should process data in chunks of, say, 8192 bytes (that's a traditional length for a buffer; it interacts reasonably well with CPU inner caches).
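A minimal sketch of that suggestion, reusing the question's cipher, in and out fields, and letting getOutputSize() size the output buffer for a full 8192-byte chunk:

byte[] inBuf = new byte[8192];
byte[] outBuf = new byte[cipher.getOutputSize(8192)];
BufferedInputStream inStream = new BufferedInputStream(in);
int n;
while ((n = inStream.read(inBuf)) > 0) {
    // update() accepts arbitrary input lengths; no need to feed it one block at a time
    int outLen = cipher.update(inBuf, 0, n, outBuf);
    out.write(outBuf, 0, outLen);
}
out.write(cipher.doFinal()); // flushes the final (possibly padded) block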
bytebiscuit, your question gave me the solution I had been trying to find for the past 6 days. I just modified your code a little bit, and my 52 MB video file now gets decrypted in just 4 seconds. The previous decrypting technique took 45 seconds with different logic (not yours). That's a massive difference, 45 seconds down to 4 seconds. Wherever I have made a modification I have put a //modified comment. I am sure that if your video is a 10 MB video, it will get decrypted in 1 second. Try applying this; it should work out.
private static void decryptFile() throws IOException, ShortBufferException, IllegalBlockSizeException, BadPaddingException
{
    int blockSize = cipher.getBlockSize();
    int outputSize = cipher.getOutputSize(blockSize);
    System.out.println("outputsize: " + outputSize);
    byte[] inBytes = new byte[blockSize * 1024]; //modified
    byte[] outBytes = new byte[outputSize * 1024]; //modified
    in = new FileInputStream(inputFile);
    out = new FileOutputStream(outputFile);
    BufferedInputStream inStream = new BufferedInputStream(in);
    int inLength = 0;
    boolean more = true;
    while (more)
    {
        inLength = inStream.read(inBytes);
        if (inLength / 1024 == blockSize) //modified
        {
            int outLength = cipher.update(inBytes, 0, blockSize * 1024, outBytes); //modified
            out.write(outBytes, 0, outLength);
        }
        else more = false;
    }
    if (inLength > 0)
        outBytes = cipher.doFinal(inBytes, 0, inLength);
    else
        outBytes = cipher.doFinal();
    out.write(outBytes);
}
I suggest you use the profiling tool provided in the Android SDK; it will tell you where you spend the most time (i.e. file writing or decrypting).
See http://developer.android.com/guide/developing/debugging/debugging-tracing.html
This works on the emulator as well as on an actual device.
Consider using the NDK. On devices before Froyo (and even Froyo itself), it would be really slow due to the lack of JIT (or a very simple one in Froyo). Even with the JIT, native architecture-optimized crypto code will always outrun Dalvik.
See also this question.
As an aside, if you're using AES directly, you're probably doing something wrong. If this is part of an effort to do DRM, make sure you realize the full extent of the fact that decompiling an Android app is trivial. Your key will not be secure, which by definition defeats the encryption.
Instead of spending effort improving an inadequate architecture, you should consider a streaming solution: it has the great advantage of spreading out the decryption's computation time so that it is no longer noticeable. I mean: do not produce another file from your video source, but rather a stream, served by a local HTTP server. Unfortunately there is no such component in the SDK; you have to make your own implementation or search for an existing one.
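For illustration, a bare-bones sketch of that idea. Assumptions: this runs on a background thread in a method that declares IOException; cipher is re-initialized for decryption before each request; the MediaPlayer is pointed at http://127.0.0.1:8080/. A real implementation would also need HTTP Range support and proper error handling.

ServerSocket server = new ServerSocket(8080, 0, InetAddress.getByName("127.0.0.1"));
while (true) {
    // cipher must be (re)initialized for decryption before each request
    try (Socket client = server.accept();
         InputStream enc = new CipherInputStream(new FileInputStream(inputFile), cipher);
         OutputStream os = client.getOutputStream()) {
        os.write("HTTP/1.0 200 OK\r\nContent-Type: video/mp4\r\n\r\n".getBytes("US-ASCII"));
        byte[] buf = new byte[8192];
        int n;
        while ((n = enc.read(buf)) > 0) {
            os.write(buf, 0, n); // decryption happens lazily as the player pulls data
        }
    }
}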
