Android SSLEngine BUFFER_UNDERFLOW after unwrap while reading

I don't know why, but half the sites that go through SSL get a BUFFER_UNDERFLOW during my read.
When I have my program chained to call different SSL sites consecutively, it doesn't work on half the links, but if I call them one by one individually, they work. For example, I used Chrome's developer tools to call https://www.facebook.com on my Nexus 7 tablet. Looking at the requests, the links called are:
https://www.facebook.com/
https://fbstatic-a.akamaihd.net/rsrc.php/v2/y7/r/Zj_YpNlIRKt.css
https://fbstatic-a.akamaihd.net/rsrc.php/v2/yI/r/5EMLHs-7t29.css etc
... (about 26 links).
If I chain them together (to simulate a call to https://www.facebook.com from the browser), I get half the links getting buffer underflows until eventually I have to close their connections (reading 0 bytes). However, if I call them one by one individually, they are always fine. Here is my code for reading:
public int readSSLFrom(SelectionKey key, SecureIO session) throws IOException{
    int result = 0;
    String TAG = "readSSLFrom";
    Log.i(TAG,"Handshake status: "+session.sslEngine.getHandshakeStatus().toString());
    synchronized (buffer){
        ByteBuffer sslIn = ByteBuffer.allocate(session.getApplicationSizeBuffer());
        ByteBuffer tmp = ByteBuffer.allocate(session.getApplicationSizeBuffer());
        ReadableByteChannel channel = (ReadableByteChannel) key.channel();
        if (buffer.remaining() < session.getPacketBufferSize()){
            increaseSize(session.getPacketBufferSize());
        }
        int read = 0;
        while (((read = channel.read(sslIn)) > 0) &&
                buffer.remaining() >= session.getApplicationSizeBuffer()){
            if (read < 0){
                session.sslEngine.closeInbound();
                return -1;
            }
            inner: while (sslIn.position() > 0){
                sslIn.flip();
                tmp.clear();
                SSLEngineResult res = session.sslEngine.unwrap(sslIn, tmp);
                result = result + res.bytesProduced();
                sslIn.compact();
                tmp.flip();
                if (tmp.hasRemaining()){
                    buffer.put(tmp);
                }
                switch (res.getStatus()){
                    case BUFFER_OVERFLOW:
                        Log.i(TAG,"Buffer overflow");
                        throw new Error();
                    case BUFFER_UNDERFLOW:
                        Log.i(TAG,"Buffer underflow");
                        if (session.getPacketBufferSize() > tmp.capacity()){
                            Log.i(TAG,"increasing capacity");
                            ByteBuffer b = ByteBuffer.allocate(session.getPacketBufferSize());
                            sslIn.flip();
                            b.put(sslIn);
                            sslIn = b;
                        }
                        break inner;
                    case CLOSED:
                        Log.i(TAG,"Closed");
                        if (sslIn.position() == 0){
                            break inner;
                        } else{
                            return -1;
                        }
                    case OK:
                        Log.i(TAG,"OK");
                        session.checkHandshake(key);
                        break;
                    default:
                        break;
                }
            }
        }
        if (read < 0){
            //session.sslEngine.closeInbound();
            return -1;
        }
    }
    dataEnd = buffer.position();
    return result;
}
Thank you.

Buffer underflows are acceptable during unwrap and occur often. They happen when you have a partial TLS record (a record is at most 16KB). This can happen in two cases: 1) When you have less than 16KB, unwrap produces no result: just cache all this data and wait for the remainder to arrive before unwrapping. 2) When you have more than 16KB but the last TLS record isn't complete, for example 20KB or 36KB: the first 16KB/32KB will unwrap successfully, and you need to cache the remaining 4KB and wait for the other 12KB that completes this TLS record before you can unwrap it.
I hope this helps; try the approach sketched below and see if it works for you.
Sorry, this didn't fit in a comment, so I responded with an answer instead.
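For illustration, here is a minimal sketch of this cache-and-wait pattern (hypothetical names: engine, channel, and the long-lived peerNetData/peerAppData buffers sized from the SSLSession are all assumptions, not taken from the question's code):

ByteBuffer peerNetData = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());
ByteBuffer peerAppData = ByteBuffer.allocate(engine.getSession().getApplicationBufferSize());

int readAndUnwrap(ReadableByteChannel channel, SSLEngine engine) throws IOException {
    int produced = 0;
    if (channel.read(peerNetData) < 0) {   // peerNetData may already hold a partial record
        engine.closeInbound();
        return -1;
    }
    peerNetData.flip();
    while (peerNetData.hasRemaining()) {
        SSLEngineResult res = engine.unwrap(peerNetData, peerAppData);
        produced += res.bytesProduced();
        if (res.getStatus() == SSLEngineResult.Status.BUFFER_UNDERFLOW) {
            break;                         // partial TLS record: wait for more bytes
        }
    }
    peerNetData.compact();                 // cache the incomplete record for the next read
    return produced;
}

The key point is that peerNetData is never cleared between calls; compact() preserves the unconsumed tail of a partial record so the next read appends to it.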

Related

Arduino/Android Bluetooth delay

We are developing an app that uses a Bluetooth library to communicate with an Arduino over Bluetooth via an HC-05 module. We made a dummy configuration to test the delay, without any computation from either the Arduino or the app, and we have a huge delay of about 1 second between a request and an answer...
The protocol looks simple: Android sends byte -2, and if the byte received is -2, the Arduino sends -6, -9, and Android answers again, over and over.
Android Code:
h = new Handler() {
    public void handleMessage(android.os.Message msg) {
        switch (msg.what) {
            case RECIEVE_MESSAGE: // if a message was received
                byte[] readBuf = (byte[]) msg.obj;
                for (int i = 0; i < readBuf.length; i++) {
                    if ((int) readBuf[i] != 0) {
                        txtArduino.append(String.valueOf((int) readBuf[i]) + ", ");
                    }
                }
                byte[] answer = {-2}; // renamed from msg, which shadowed the Message parameter
                mConnectedThread.writeByte(answer);
                break;
        }
    }
};
Arduino Code:
const int receveidBuffLen = 8*4;

void setup() {
    Serial.begin(115200);
}

void loop() {
    if (Serial.available() > 0)
    {
        byte buff[receveidBuffLen];
        Serial.readBytes(buff, receveidBuffLen);
        for (int i = 0; i < receveidBuffLen; i++)
        {
            if (buff[i] == (byte) -2) // 254
            {
                byte message[2] = {(byte) -6, (byte) -9};
                Serial.write(message, 2);
                Serial.flush();
            }
        }
    }
    delay(3);
}
Does anyone know where the delay comes from?
We changed the HC-05 baud rate (from 9600 to 115200): nothing happened. We swapped the HC-05 for another one: nothing happened. We used the Blue2Serial library (Bluetooth as SPP) before and the delay was the same... We used another controller (ESP8266) and the delay was still 1 second...
Looks like this line is the issue:
Serial.readBytes(buff, receveidBuffLen);
where receveidBuffLen is 32.
Although you receive a single byte at a time, you're trying to read 32 of them. Of course, if no more bytes arrive, the call blocks until the timeout expires (1000 ms by default, which matches the roughly 1-second delay you see).
Furthermore, after the read you never check how many bytes were actually read, but scan the whole array from start to end:
for(int i=0; i < receveidBuffLen;i++)
Instead, you have to do something like this:
int bytesAvailable = Serial.available();
if (bytesAvailable > 0)
{
    byte buff[receveidBuffLen];
    // Read no more than the buffer size, but not more than available
    int bytesToRead = (bytesAvailable < receveidBuffLen) ? bytesAvailable : receveidBuffLen;
    int bytesActuallyRead = Serial.readBytes(buff, bytesToRead);
    for (int i = 0; i < bytesActuallyRead; i++)
    ...
There are a couple of problems with the code that might cause delays:
The delay() call at the end of loop() - this limits how quickly the Arduino can process incoming data.
Calling Serial.flush() - this blocks loop() until the internal TX serial buffer is empty. That means the Arduino is blocked and new RX data can pile up, slowing the response time.
Calling Serial.readBytes() - you should focus on the smallest unit of data and process that each loop() iteration. If you try to deal with multiple messages per loop, that will slow down the loop time, causing a delay.
You can try to implement a SerialEvent pattern on the Arduino. We will only read one byte at a time from the serial buffer, keeping the processing that the loop() function has to do to a bare minimum. If we receive the -2 byte we will set a flag. If the flag is set, the loop() function will call Serial.write() but will not block waiting for the data to transmit. Here is a quick example.
bool sendMessage = false;
byte message[2] = {(byte) -6, (byte) -9};

void loop()
{
    if (sendMessage == true)
    {
        Serial.write(message, 2);
        sendMessage = false;
    }
}

/*
  SerialEvent occurs whenever new data comes in on the hardware serial RX. This
  routine runs between iterations of loop(), so using delay() inside loop() can
  delay the response. Multiple bytes of data may be available.
*/
void serialEvent()
{
    while (Serial.available())
    {
        // get the new byte:
        byte inChar = ((byte) Serial.read());
        if (inChar == ((byte) -2))
        {
            sendMessage = true;
        }
    }
}
We just found some solutions ourselves and want to share them:
Initial situation: 1050 ms for an answer. All solutions are independent, each measured against the initial situation.
Remove Serial.flush(): 1022 ms.
Add a simple Serial.setTimeout(100) in the Arduino code: 135 ms. (Oh man!)
Add a simple 100 ms timeout to the inputStream on Android: 95 ms (one way to do this is sketched below).
We can't say which solution is the best, but it works now...
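For the Android-side timeout, one possible approach (a sketch only; the InputStream obtained from a BluetoothSocket has no built-in read timeout, so this hypothetical helper polls available() instead):

// Hypothetical helper: give up after timeoutMs if no data has arrived.
private int readWithTimeout(InputStream in, byte[] buf, long timeoutMs) throws IOException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
        int available = in.available();
        if (available > 0) {
            return in.read(buf, 0, Math.min(buf.length, available));
        }
        try {
            Thread.sleep(5); // avoid a hot loop while waiting
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
    return 0; // timed out with no data
}

The short sleep keeps CPU usage low while still reacting within a few milliseconds of data arriving.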

C++ - Bluetooth over Winsock: how to remove the byte order mark

I want to transfer some string data from my Android device to my Windows laptop via Bluetooth.
Using the Bluetooth-with-Winsock2 code sample provided by Microsoft, I was able to transfer data using the code below. Unfortunately I receive a byte order mark at the beginning of the string I send. Of course, I could simply remove the first four bytes, but that seems a little bit dirty to me. Is there any other option I could use?
C++ code for receiving (slightly modified for better readability: no error handling, no comments, etc.):
ClientSocket = accept(LocalSocket, NULL, NULL);
BOOL bContinue = TRUE;
pszDataBuffer = (char *)HeapAlloc(GetProcessHeap(),
                                  HEAP_ZERO_MEMORY,
                                  CXN_TRANSFER_DATA_LENGTH);
pszDataBufferIndex = pszDataBuffer;
uiTotalLengthReceived = 0;
while ( bContinue && (uiTotalLengthReceived < CXN_TRANSFER_DATA_LENGTH) ) {
    iLengthReceived = recv(ClientSocket,
                           (char *)pszDataBufferIndex,
                           (CXN_TRANSFER_DATA_LENGTH - uiTotalLengthReceived),
                           0);
    switch ( iLengthReceived ) {
        case 0: // socket connection has been closed gracefully
            bContinue = FALSE;
            break;
        case SOCKET_ERROR:
            wprintf(L"=CRITICAL= | recv() call failed. WSAGetLastError=[%d]\n", WSAGetLastError());
            bContinue = FALSE;
            ulRetCode = CXN_ERROR;
            break;
        default:
            pszDataBufferIndex += iLengthReceived;
            uiTotalLengthReceived += iLengthReceived;
            break;
    }
}
if ( CXN_SUCCESS == ulRetCode ) {
    pszDataBuffer[uiTotalLengthReceived] = '\0';
    wprintf(L"*INFO* | Received following data string from remote device:\n%s\n", (wchar_t *)pszDataBuffer);
    closesocket(ClientSocket);
    ClientSocket = INVALID_SOCKET;
}
Android code for sending:
OutputStream socketOutputStream = socket.getOutputStream();
socketOutputStream.write(dataString.getBytes(Charsets.UTF_16));
OK, I feel quite stupid right now. Working in homogeneous Java environments for some years made me completely forget that Java writes a byte order mark when calling getBytes() with a Unicode charset as the parameter.
After changing dataString.getBytes(Charsets.UTF_16) to dataString.getBytes(StandardCharsets.UTF_16LE) (Windows is little-endian) on the Android side, everything works as expected.
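For reference, a standalone illustration of the difference (not from the original code; hex values shown for the string "Hi"):

byte[] withBom = "Hi".getBytes(StandardCharsets.UTF_16);   // FE FF 00 48 00 69 (BOM, then big-endian)
byte[] noBom   = "Hi".getBytes(StandardCharsets.UTF_16LE); // 48 00 69 00 (little-endian, no BOM)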

Android USB host: interrupt does not respond immediately

I have a USB device which has a button.
I want an Android app to catch the button's signal.
I found the interface and endpoint number of the button.
It seemed to perform normally on a Galaxy S3 and a Galaxy Note.
But later, I found that it has a delay on other phones.
I was able to receive instant responses about 10% of the time; usually there was a 2-second delay, with some cases where the whole response was lost.
Although I couldn't figure out the exact reason, I realized that the phones that had response delays were those with kernel version 3.4 or later.
Here is the code that I used initially.
if (mConnection != null) {
    mConnection.claimInterface(mInterfaces.get(0), true);
    final UsbEndpoint endpoint = mInterfaces.get(0).getEndpoint(0);
    Thread getSignalThread = new Thread(new Runnable() {
        @Override
        public synchronized void run() {
            byte[] buffer = new byte[8];
            final ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
            while (mConnection != null) {
                int len = mConnection.bulkTransfer(endpoint, buffer, buffer.length, 0);
                if (len >= 0) {
                    // do my own code
                }
            }
        }
    });
    getSignalThread.setPriority(Thread.MAX_PRIORITY);
    getSignalThread.start();
}
Edit: timeout
When the timeout was set to 50 ms, I wasn't able to receive responses most of the time. When the timeout was 500 ms, I was able to get some delayed responses initially; however, I lost all responses after several tries with this setting.
Using UsbRequest
In addition to using the bulkTransfer method, I also tried using UsbRequest, and below is the code that I used.
@Override
public synchronized void run() {
    byte[] buffer = new byte[8];
    final ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
    UsbRequest inRequest = new UsbRequest();
    inRequest.initialize(mConnection, endpoint);
    while (mConnection != null) {
        inRequest.queue(byteBuffer, buffer.length);
        if (mConnection.requestWait() == inRequest) {
            // do my own code
        }
    }
}
However, the same kind of delay happened even after using UsbRequest.
Using libusb
I also tried using libusb_interrupt_transfer from an open source library called libusb.
However, this also produced the same type of delay that I had when using UsbDeviceConnection.
unsigned char data_bt[8] = { 0, };
uint32_t out[2];
int transfered = 0;
while (devh_usb != NULL) {
    libusb_interrupt_transfer(devh_usb, 0x83, data_bt, 8, &transfered, 0);
    memcpy(out, data_bt, 8);
    if (out[0] == PUSH) {
        LOGI("button pushed!!!");
        memset(data_bt, 0, 8);
        //(env)->CallVoidMethod( thiz, mid);
    }
}
After looking into the part of libusb where libusb_interrupt_transfer is processed, I was able to figure out the general steps of an interrupt transfer:
1. make a transfer object of type interrupt
2. make a urb object that points to the transfer object
3. submit the urb object to the device's fd
4. detect any changes in the fd object via urb object
5. read urb through ioctl
Steps 3, 4, and 5 are the steps involving file I/O.
I was able to find out that at step 4 the program waits for the button press before moving on to the next step.
Therefore I tried changing poll to epoll in order to check whether the poll function was causing the delay; unfortunately nothing changed.
I also tried setting the timeout of the poll function to 500 ms and always fetching the values of the fd through ioctl, but I only found that the value changed 2~3 seconds after pressing the button.
So in conclusion, I feel that there is a delay in the process of updating the value of the fd after pressing the button. If there is anyone who could help me with this issue, please let me know. Thank you.
Thanks for reading

Reading a .NET Stream: high CPU usage - how to read without while (true)?

Since my problem is close to this one, I have been looking at feedback on this possible solution: Reading on a NetworkStream = 100% CPU usage, but I fail to find the solution I need.
Much like in this other question, I want to use something other than an infinite while loop.
More precisely, I am using Xamarin to build an Android application in Visual Studio. Since I need a Bluetooth service, I am using a Stream to read and send data.
Reading data from Stream.InputStream is where I have a problem: is there some sort of blocking call to wait for data to be available without using a while (true) loop?
I tried:
Begin/End Read
Task.Run and await
Here is a code sample:
public byte[] RetrieveDataFromStream()
{
    List<byte> packet = new List<byte>();
    int readBytes = 0;
    while (_inputStream.CanRead && _inputStream.IsDataAvailable() && readBytes < 1024 && _state == STATE_CONNECTED)
    {
        try
        {
            byte[] buffer = new byte[1];
            readBytes = _inputStream.Read(buffer, 0, buffer.Length);
            packet.Add(buffer[0]);
        }
        catch (Java.IO.IOException e)
        {
            return null;
        }
    }
    return packet.ToArray();
}
I call this method from a while loop.
This loop checks until this method returns something other than null, in which case I process the data accordingly.
As soon as there is data to be processed, the CPU usage drops, way lower than when there is no data to process.
I know why my CPU usage is high: the loop checks as often as possible whether there is something to read. On the plus side, there is close to no delay when receiving data, but no, that's not a viable solution.
Any ideas to change this?
UPDATE 1
As per Marc Gravell's idea, here is what I would like to understand and try:
byte[] buffer = new byte[4096];
int readBytes;
while (_inputStream.CanRead
       && (readBytes = _inputStream.Read(buffer, 0, buffer.Length)) > 0
       && _state == STATE_CONNECTED)
{
    for (int i = 0; i < readBytes; i++)
        packet.Add(buffer[i]);
    // or better: some kind of packet.AddRange(buffer, 0, readBytes)
}
How do you call this code snippet?
Two questions:
If there is nothing to read, the while condition will fail: what should I do next?
Once you're done reading, what do you do next? What do you do to catch any new incoming packets?
Here are some explanations that should help:
The Android device is connected, via Bluetooth, to another device that sends data. It will always send a pre-designed packet with a specified size (1024).
That device can stream the data continuously for some time but can also stop at any time for a long period. How do I deal with such behavior?
An immediate fix would be:
don't read one byte at a time
don't create a new buffer per-byte
don't sit in a hot loop when there is no data available
For example:
byte[] buffer = new byte[4096];
int readBytes;
while (_inputStream.CanRead
       && (readBytes = _inputStream.Read(buffer, 0, buffer.Length)) > 0
       && _state == STATE_CONNECTED)
{
    for (int i = 0; i < readBytes; i++)
        packet.Add(buffer[i]);
    // or better: some kind of packet.AddRange(buffer, 0, readBytes)
}
Note that the use of readBytes in the original while check looked somewhat... confused; I've replaced it with a "while we don't get an EOF" check; feel free to add your own logic.

Can 2 WritableByteChannels be used at the same time?

When I write directly to 2 output streams, everything works fine. When I try to write to 2 channels, though, the second one seemingly does not receive anything.
Does anyone know if 2 WritableByteChannels can be written to at the same time? If not, any other ideas of what I can do to perform the same action still using NIO/Channels?
connection2 = new Socket(Resource.LAN_DEV2_IP_ADDRESS, Resource.LAN_DEV2_SOCKET_PORT);
out2 = connection2.getOutputStream();
connection = new Socket(Resource.LAN_HOST_IP_ADDRESS, Resource.LAN_HOST_SOCKET_PORT);
out = connection.getOutputStream();

File f = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS), filename);
in = new FileInputStream(f);
fic = in.getChannel();
fsize = fic.size();

channel2 = Channels.newChannel(out2);
channel = Channels.newChannel(out);

// Send header
byte[] p = createHeaderPacket(filename, f.length());
out2.write(p); // Received correctly
out.write(p);  // Received correctly

// Send file
long currPos = 0;
while (currPos < fsize)
{
    if (fsize - currPos < Resource.MEMORY_ALLOC_SIZE)
    {
        mappedByteBuffer = fic.map(FileChannel.MapMode.READ_ONLY, currPos, fsize - currPos);
        channel2.write(mappedByteBuffer); // Received correctly
        channel.write(mappedByteBuffer);  // Never received
        currPos = fsize;
    }
    else
    {
        mappedByteBuffer = fic.map(FileChannel.MapMode.READ_ONLY, currPos, Resource.MEMORY_ALLOC_SIZE);
        channel2.write(mappedByteBuffer); // Received correctly
        channel.write(mappedByteBuffer);  // Never received
        currPos += Resource.MEMORY_ALLOC_SIZE;
    }
}
Try:
channel2.write(mappedByteBuffer.duplicate());
channel.write(mappedByteBuffer);
The way to understand NIO buffers is to keep in mind their basic properties:
the underlying data store (which is commonly an ordinary byte array, but can be other things, such as a memory-mapped region of a file);
the start and capacity within that underlying space;
your current position in the buffer; and
the limit of the buffer.
All buffer operations provided by NIO are documented in terms of how the operation affects these properties. For example, the WritableByteChannel.write() documentation tells us that:
Between 0 and src.remaining() (inclusive) bytes will be written to the channel; and
If count bytes were written, the ByteBuffer's position will be increased by count when write() returns.
So looking at your original code:
channel2.write(mappedByteBuffer); // Received correctly
channel.write(mappedByteBuffer); // Never received
If the first write writes the entire remaining mappedByteBuffer to channel2, after that statement mappedByteBuffer.remaining() will be zero, so the write to channel will not write any bytes at all.
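As a tiny illustration of this bookkeeping (assuming the first write accepts the whole buffer in one call):

ByteBuffer buf = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
channel2.write(buf); // writes 4 bytes; position advances to the limit,
                     // so buf.remaining() is now 0
channel.write(buf);  // nothing remains, so this writes 0 bytes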
Hence my suggestion above to use ByteBuffer.duplicate() on the first write. This method returns a new ByteBuffer object which:
shares the original buffer's underlying store (so you're not making an unnecessary copy in memory of the actual bytes you want to write twice); but
has its own position (and remaining) values, so when channel2.write() adjusts that (duplicate) ByteBuffer's position, it will leave the position unchanged in the original buffer,
so channel.write() will still receive the intended range of bytes.
As an alternative, you could also write:
mappedByteBuffer.mark(); // store the current position
channel2.write(mappedByteBuffer);
mappedByteBuffer.reset(); // move position to the previously marked position
channel.write(mappedByteBuffer);
I'm also inclined to agree with EJP's point that you're probably not making the best use of MappedByteBuffer here. You could simplify your copying loop to:
ByteBuffer buffer = ByteBuffer.allocate(Resource.MEMORY_ALLOC_SIZE);
while (fic.read(buffer) >= 0) {
    buffer.flip();
    channel2.write(buffer.duplicate());
    channel.write(buffer);
    buffer.clear(); // make room for the next read
}
Here the read() method increases position by the number of bytes read from the channel, then the flip() method sets the limit to that position and the position back to 0, which means the bytes you've just read are in the remaining range that write() will consume.
However, you'll notice that EJP's loop is a little more complicated than that. That's because write operations on channels might not necessarily write every remaining byte. (The write() documentation gives the example of a networking socket opened in non-blocking mode.) However that sample code (and the similar sample in the documentation of ByteBuffer.compact()) relies on the fact that you're only writing to a single channel; when you're writing to two different channels, you have to handle the fact that the two channels might accept a different number of bytes. So:
ByteBuffer buffer = ByteBuffer.allocate(Resource.MEMORY_ALLOC_SIZE);
while (fic.read(buffer) >= 0) {
    buffer.flip();
    buffer.mark();
    while (buffer.hasRemaining()) {
        channel2.write(buffer);
    }
    buffer.reset();
    while (buffer.hasRemaining()) {
        channel.write(buffer);
    }
    buffer.clear();
}
Of course multiple channels can be used at the same time, but more to the point that's a terrible way to send a file. Creating lots of MappedByteBuffers causes all kinds of problems as the underlying mapped regions are never released. Just open it as a normal channel and use the canonical NIO copy loop:
while (in.read(buffer) >= 0 || buffer.position() > 0)
{
    buffer.flip();
    out.write(buffer);
    buffer.compact();
}
