C++ - Bluetooth over Winsock, how to remove byte order mark - Android

I want to transfer some string data from my Android device to my Windows laptop via Bluetooth.
Using the Bluetooth code sample for Winsock2 provided by Microsoft, I was able to transfer data with the code below. Unfortunately, I receive a byte order mark at the beginning of the string I send. Of course, I could simply remove the first four bytes, but that seems a little bit dirty to me. Is there any other option I could use?
C++ code for receiving (slightly modified for better readability: no error handling, no comments, etc.):
ClientSocket = accept(LocalSocket, NULL, NULL);
BOOL bContinue = TRUE;
pszDataBuffer = (char *)HeapAlloc(GetProcessHeap(),
                                  HEAP_ZERO_MEMORY,
                                  CXN_TRANSFER_DATA_LENGTH);
pszDataBufferIndex = pszDataBuffer;
uiTotalLengthReceived = 0;
while ( bContinue && (uiTotalLengthReceived < CXN_TRANSFER_DATA_LENGTH) ) {
    iLengthReceived = recv(ClientSocket,
                           (char *)pszDataBufferIndex,
                           (CXN_TRANSFER_DATA_LENGTH - uiTotalLengthReceived),
                           0);
    switch ( iLengthReceived ) {
    case 0: // socket connection has been closed gracefully
        bContinue = FALSE;
        break;
    case SOCKET_ERROR:
        wprintf(L"=CRITICAL= | recv() call failed. WSAGetLastError=[%d]\n", WSAGetLastError());
        bContinue = FALSE;
        ulRetCode = CXN_ERROR;
        break;
    default:
        pszDataBufferIndex += iLengthReceived;
        uiTotalLengthReceived += iLengthReceived;
        break;
    }
}
if ( CXN_SUCCESS == ulRetCode ) {
    pszDataBuffer[uiTotalLengthReceived] = '\0';
    // the buffer is printed as a wide string, so the incoming bytes must be UTF-16LE
    wprintf(L"*INFO* | Received following data string from remote device:\n%s\n", (wchar_t *)pszDataBuffer);
    closesocket(ClientSocket);
    ClientSocket = INVALID_SOCKET;
}
Android code for sending:
OutputStream socketOutputStream = socket.getOutputStream();
socketOutputStream.write(dataString.getBytes(Charsets.UTF_16));

OK, I feel quite stupid right now. Working in homogeneous Java environments for some years made me completely forget that Java writes a byte order mark when getBytes() is called with a Unicode charset as parameter.
After changing dataString.getBytes(Charsets.UTF_16) to dataString.getBytes(StandardCharsets.UTF_16LE) (Windows is little-endian) on the Android side, everything works as expected.
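For anyone who wants to see the difference on the wire, here is a minimal sketch (the printHex helper is just for illustration): Java's UTF-16 charset encodes big-endian with a leading BOM, while UTF-16LE emits no BOM.
import java.nio.charset.StandardCharsets;

public class BomDemo {
    public static void main(String[] args) {
        printHex("Hi".getBytes(StandardCharsets.UTF_16));   // FE FF 00 48 00 69 (BOM first, big-endian)
        printHex("Hi".getBytes(StandardCharsets.UTF_16LE)); // 48 00 69 00 (no BOM, Windows byte order)
    }

    // prints each byte as two hex digits
    private static void printHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02X ", b));
        System.out.println(sb.toString().trim());
    }
}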

Related

Android USB host: interrupt does not respond immediately

I have a USB device which has a button.
I want an Android app to catch the signal of the button.
I found the interface and endpoint number of the button.
It seemed to work normally on the Galaxy S3 and Galaxy Note.
But later, I found that it has a delay on other phones.
I was able to receive instant responses about 10% of the time; usually there was a 2-second delay, with some cases where the whole response was lost.
Although I couldn't figure out the exact reason, I realized that the phones that had response delays were those with kernel version 3.4 or later.
Here is the code that I used initially.
if (mConnection != null) {
    mConnection.claimInterface(mInterfaces.get(0), true);
    final UsbEndpoint endpoint = mInterfaces.get(0).getEndpoint(0);
    Thread getSignalThread = new Thread(new Runnable() {
        @Override
        public synchronized void run() {
            byte[] buffer = new byte[8];
            final ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
            while (mConnection != null) {
                // timeout 0 = block until the endpoint delivers data
                int len = mConnection.bulkTransfer(endpoint, buffer, buffer.length, 0);
                if (len >= 0) {
                    // do my own code
                }
            }
        }
    });
    getSignalThread.setPriority(Thread.MAX_PRIORITY);
    getSignalThread.start();
}
Edit: timeout
When the timeout was set to 50 ms, I wasn't able to receive responses most of the time. When the timeout was 500 ms, I was initially able to get some delayed responses; however, I lost all responses after several tries with this setting.
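(For reference, the timeout varied in these experiments is presumably the last argument of bulkTransfer; a minimal sketch of that variant:)
// 500 ms instead of 0 (which blocks indefinitely)
int len = mConnection.bulkTransfer(endpoint, buffer, buffer.length, 500);
if (len < 0) {
    // bulkTransfer returns a negative value on timeout or error; loop and try again
}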
Using UsbRequest
In addition to using the bulkTransfer method, I also tried using UsbRequest, and below is the code that I used.
@Override
public synchronized void run() {
    byte[] buffer = new byte[8];
    final ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
    UsbRequest inRequest = new UsbRequest();
    inRequest.initialize(mConnection, endpoint);
    while (mConnection != null) {
        inRequest.queue(byteBuffer, buffer.length);
        if (mConnection.requestWait() == inRequest) {
            // do my own code
        }
    }
}
However, the same kind of delay happened even after using UsbRequest.
Using libusb
I also tried using libusb_interrupt_transfer from an open source library called libusb.
However, this also produced the same type of delay that I had when using UsbDeviceConnection.
unsigned char data_bt[8] = { 0, };
uint32_t out[2];
int transfered = 0;
while (devh_usb != NULL) {
    libusb_interrupt_transfer(devh_usb, 0x83, data_bt, 8, &transfered, 0);
    memcpy(out, data_bt, 8);
    if (out[0] == PUSH) {
        LOGI("button pushed!!!");
        memset(data_bt, 0, 8);
        //(env)->CallVoidMethod( thiz, mid);
    }
}
After looking into how libusb processes libusb_interrupt_transfer, I was able to work out the general steps of an interrupt transfer:
1. make a transfer object of type interrupt
2. make a urb object that points to the transfer object
3. submit the urb object to the device's fd
4. detect any changes in the fd object via urb object
5. read urb through ioctl
Steps 3, 4, and 5 concern file I/O.
I was able to find out that at step 4 the program waits for the button press before moving onto the next step.
Therefore I tried changing poll to epoll in order to check if the poll function was causing the delay; unfortunately nothing changed.
I also tried setting the timeout of the poll function to 500 ms and reading the fd's value through ioctl on every pass, but found that the value only changed 2~3 seconds after the button was pressed.
So, in conclusion, I feel that there is a delay in updating the value of the fd after the button is pressed. If there is anyone who could help me with this issue, please let me know.
Thanks for reading.

Communicating between C & Arduino

I have battled this issue for a while now and it is driving me nuts:
I am trying to communicate very simply with an Arduino Mega 2560 (via USB as a serial device) from a PC running Linux (Knoppix on a USB stick). All I am trying to accomplish at this stage is that for each number sent by the laptop to the Arduino, a 'strobe' signal switches from high to low or the other way around, and I use this strobe to turn an LED on and off.
PC-side C code:
#include <stdio.h>
#include <unistd.h> // for sleep()

int main()
{
    FILE * Device = NULL;
    int counter = 0;
    Device = fopen("/dev/ttyACM0", "w+");
    if (Device == NULL)
    {
        printf("could not open Device\n");
        return -1;
    }
    while (counter < 10)
    {
        fprintf(Device, "%d\n", counter);
        printf("Sent to Device: %d\n", counter);
        counter++;
        sleep(2);
    }
    fclose(Device);
    return 0;
}
Arduino code:
int cnt = 0;
int strobe = 0;
int num;
int ValidInput = 0;
char intBuffer[12];
String intData = "";
int delimiter = (int) '\n';

void setup() {
    // put your setup code here, to run once:
    Serial.begin(9600);
    pinMode(3, OUTPUT);
}

int input;

void loop()
{
    while (num = Serial.available())
    {
        delay(5);
        // Serial.println(num);
        int ch = Serial.read();
        if (ch == delimiter)
        {
            ValidInput = 1;
            break;
        }
        else
        {
            intData += (char) ch;
        }
    }
    int intLen = intData.length() + 1;
    intData.toCharArray(intBuffer, intLen);
    intData = "";
    int i = atoi(intBuffer);
    if (ValidInput)
    {
        if (i == 0)
        {
            strobe = 0;
            Serial.print("Initializing strobe");
        }
        else
        {
            strobe = !strobe;
        }
        digitalWrite(3, (strobe) ? HIGH : LOW);
        Serial.println(i);
        ValidInput = 0;
    }
}
The problems I am having:
Not sure if fopen is the correct way to communicate with a serial device in Linux, and if so in which mode?
This is the main issue - I am experiencing non-deterministic behavior:
If I run this code right before opening the Arduino IDE's 'Serial Monitor', it doesn't work as I explained above; instead, it turns the LED on and then off right away for each new incoming number.
But once I open the 'Serial Monitor', it acts as I want it to, changing the LED's state for each new incoming number.
I am thinking this has something to do with the Arduino's reset or something of that sort.
I looked in many threads here and other forums and couldn't find any solution to this problem.
I'd really appreciate your insight.
First of all, the Arduino side looks OK. On the Linux side you need to do some research, since serial communication on POSIX systems is a little more complicated than just opening a file and writing to it. See the Linux man pages for termios, which describe how to set up the communication port parameters, and use this document, http://tldp.org/HOWTO/Serial-Programming-HOWTO/, to learn how to put everything together. The Serial Programming HOWTO will guide you through setting up a port, controlling it, and accepting input from multiple sources. Also, in order to access the serial port from an unprivileged account, you might need to add your user to a specific group (the dialout group on Ubuntu and Fedora). You can search Google for serial port access under Linux and find a lot of code samples ready to integrate into your application. You can find an excellent reference and a fully documented implementation at the bottom of this thread, also on SO: How do I read data from serial port in Linux using C?
A simple fopen doesn't set up any of the serial port's communication parameters. You need to set the baud rate, number of data bits, parity, and number of stop bits, and decide whether you want to use the Linux line discipline. The termios structure is used to do this.
There are a couple of good tutorials on serial communication between Linux and an Arduino:
http://chrisheydrick.com/2012/06/12/how-to-read-serial-data-from-an-arduino-in-linux-with-c-part-1/
http://todbot.com/blog/2006/12/06/arduino-serial-c-code-to-talk-to-arduino/

Reading a .NET Stream: high CPU usage - how to read without while (true)?

Since my problem is close to this one, I have been looking at the feedback on this possible solution: Reading on a NetworkStream = 100% CPU usage, but I fail to find the solution I need.
Much like in that other question, I want to use something other than an infinite while loop.
More precisely, I am using Xamarin to build an Android application in Visual Studio. Since I need a Bluetooth service, I am using a Stream to read and send data.
Reading data from Stream.InputStream is where I have a problem: is there some sort of blocking call that waits for data to become available without using a while (true) loop?
I tried:
Begin/End Read
Task.Run and await
Here is a code sample:
public byte[] RetrieveDataFromStream()
{
    List<byte> packet = new List<byte>();
    int readBytes = 0;

    while (_inputStream.CanRead && _inputStream.IsDataAvailable() && readBytes < 1024 && _state == STATE_CONNECTED)
    {
        try
        {
            byte[] buffer = new byte[1];
            readBytes = _inputStream.Read(buffer, 0, buffer.Length);
            packet.Add(buffer[0]);
        }
        catch (Java.IO.IOException e)
        {
            return null;
        }
    }

    return packet.ToArray();
}
I call this method from a while loop.
This loop checks until the method returns something other than null, in which case I process the data accordingly.
As soon as there is data to be processed, the CPU usage gets low, way lower than when there is no data to process.
I know why my CPU usage is high: the loop checks as often as possible whether there is something to read. On the plus side, there is close to no delay when receiving data, but no, that's not a viable solution.
Any ideas how to change this?
UPDATE 1
As per Marc Gravell's idea, here is what I would like to understand and try:
byte[] buffer = new byte[4096];
while (_inputStream.CanRead
       && (readBytes = _inputStream.Read(buffer, 0, buffer.Length)) > 0
       && _state == STATE_CONNECTED)
{
    for (int i = 0; i < readBytes; i++)
        packet.Add(buffer[i]);
    // or better: some kind of packet.AddRange(buffer, 0, readBytes)
}
How do you call this code snippet?
Two questions:
If there is nothing to read, the while condition fails: what should happen next?
Once you're done reading, what do you do next? What do you do to catch any new incoming packets?
Here are some explanations that should help:
The Android device is connected, via Bluetooth, to another device that sends data. It always sends a pre-designed packet with a specified size (1024).
That device can stream data continuously for some time, but can also stop at any time for a long period. How do I deal with such behavior?
An immediate fix would be:
don't read one byte at a time
don't create a new buffer per-byte
don't sit in a hot loop when there is no data available
For example:
byte[] buffer = new byte[4096];
while (_inputStream.CanRead
       && (readBytes = _inputStream.Read(buffer, 0, buffer.Length)) > 0
       && _state == STATE_CONNECTED)
{
    for (int i = 0; i < readBytes; i++)
        packet.Add(buffer[i]);
    // or better: some kind of packet.AddRange(buffer, 0, readBytes)
}
Note that the use of readBytes in the original while check looked somewhat... confused; I've replaced it with a "while we don't get an EOF" check; feel free to add your own logic.
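On the framing question from the update (fixed 1024-byte packets): since the underlying Android stream's read blocks until data arrives, the usual pattern is to loop a blocking read until a full packet has accumulated. A minimal sketch in Java terms (illustrative only; inputStream stands in for the stream wrapped by the question's _inputStream):
// assumes a java.io.InputStream named inputStream; throws java.io.IOException
byte[] packet = new byte[1024];
int off = 0;
while (off < packet.length) {
    int n = inputStream.read(packet, off, packet.length - off); // blocks; no busy loop
    if (n < 0) throw new EOFException("stream closed before a full packet arrived");
    off += n;
}
// 'packet' now holds exactly one 1024-byte frame; if the sender stops for a while,
// the thread simply stays blocked in read() without burning CPU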

Android SSLEngine BUFFER_UNDERFLOW after unwrap while reading

I don't know why, but half the sites that go through SSL get a BUFFER_UNDERFLOW during my read.
When I chain my program to call different SSL sites consecutively, it doesn't work on half the links, but if I call them one by one individually, they work. For example, I used Chrome's developer tools to call https://www.facebook.com on my Nexus 7 tablet. Looking at the requests, the links called are:
https://www.facebook.com/
https://fbstatic-a.akamaihd.net/rsrc.php/v2/y7/r/Zj_YpNlIRKt.css
https://fbstatic-a.akamaihd.net/rsrc.php/v2/yI/r/5EMLHs-7t29.css etc
... (about 26 links).
If I chain them together (to simulate a call to https://www.facebook.com from the browser), half the links get buffer underflows until eventually I have to close their connections (reading 0 bytes). However, if I call them one by one individually, they are always fine. Here is my code for reading:
public int readSSLFrom(SelectionKey key, SecureIO session) throws IOException {
    int result = 0;
    String TAG = "readSSLFrom";
    Log.i(TAG, "Handshake status: " + session.sslEngine.getHandshakeStatus().toString());
    synchronized (buffer) {
        ByteBuffer sslIn = ByteBuffer.allocate(session.getApplicationSizeBuffer());
        ByteBuffer tmp = ByteBuffer.allocate(session.getApplicationSizeBuffer());
        ReadableByteChannel channel = (ReadableByteChannel) key.channel();
        if (buffer.remaining() < session.getPacketBufferSize()) {
            increaseSize(session.getPacketBufferSize());
        }
        int read = 0;
        while (((read = channel.read(sslIn)) > 0) &&
                buffer.remaining() >= session.getApplicationSizeBuffer()) {
            if (read < 0) {
                session.sslEngine.closeInbound();
                return -1;
            }
            inner: while (sslIn.position() > 0) {
                sslIn.flip();
                tmp.clear();
                SSLEngineResult res = session.sslEngine.unwrap(sslIn, tmp);
                result = result + res.bytesProduced();
                sslIn.compact();
                tmp.flip();
                if (tmp.hasRemaining()) {
                    buffer.put(tmp);
                }
                switch (res.getStatus()) {
                case BUFFER_OVERFLOW:
                    Log.i(TAG, "Buffer overflow");
                    throw new Error();
                case BUFFER_UNDERFLOW:
                    Log.i(TAG, "Buffer underflow");
                    if (session.getPacketBufferSize() > tmp.capacity()) {
                        Log.i(TAG, "increasing capacity");
                        ByteBuffer b = ByteBuffer.allocate(session.getPacketBufferSize());
                        sslIn.flip();
                        b.put(sslIn);
                        sslIn = b;
                    }
                    break inner;
                case CLOSED:
                    Log.i(TAG, "Closed");
                    if (sslIn.position() == 0) {
                        break inner;
                    } else {
                        return -1;
                    }
                case OK:
                    Log.i(TAG, "OK");
                    session.checkHandshake(key);
                    break;
                default:
                    break;
                }
            }
        }
        if (read < 0) {
            //session.sslEngine.closeInbound();
            return -1;
        }
    }
    dataEnd = buffer.position();
    return result;
}
Thank you.
Buffer underflows are acceptable during unwrap and occur often. They happen when you have a partial TLS record (< 16 KB). This can occur in two cases: 1) when you have less than 16 KB, you get no result when you unwrap; just cache all this data and wait for the remainder to arrive before unwrapping it. 2) when you have more than 16 KB but the last TLS record isn't complete, for example 20 KB or 36 KB. In that case the first 16 KB / 32 KB will produce a result during unwrap, and you need to cache the remaining 4 KB and wait for the other 12 KB that completes the record before you can unwrap.
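A minimal sketch of that caching pattern (illustrative only: engine, channel, and appIn stand in for the question's session.sslEngine, channel, and destination buffer):
ByteBuffer netIn = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());
while (channel.read(netIn) > 0) {
    netIn.flip();
    SSLEngineResult res = engine.unwrap(netIn, appIn);
    if (res.getStatus() == SSLEngineResult.Status.BUFFER_UNDERFLOW) {
        // partial TLS record: keep the unconsumed bytes and read more before unwrapping again
        netIn.compact();
        continue;
    }
    netIn.compact(); // preserve any bytes belonging to the next record
    // handle OK / CLOSED / BUFFER_OVERFLOW as in the question's code
}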
I hope this helps; try it out and see if it works for you.
Sorry, this didn't fit in a comment, so I responded with an answer instead.

Android Radio Interface Layer (RIL) and /dev/

Does anyone know how the RIL (/hardware/reference/reference-ril/) determines what gets mounted in /dev/ when the baseband radio is initialized?
In older phones and in other documentation, GSM phones use /dev/smd0, but not all phones do. I am trying to find a way to determine what gets mounted regardless of the type of radio and vendor.
If someone can specifically identify where in /hardware/reference/reference-ril/ this is set and where the info is pulled from upon initialization, that would be perfect.
The RIL is in your application framework.
If you want to see the RIL and implement its functionality from the command prompt, it can be done. The relevant entry point is:
void (*RIL_RequestFunc) (int request, void *data, size_t datalen, RIL_Token t);
If you are serious about this, please go through this link:
RIL Study Link
If you want an example:
GitHub
It actually depends on what interface you are using to connect. You might use a USB, UART, or SPI interface to connect the upper layer with the modem. The parameter passed to the RIL_Init function determines the device you are trying to connect to. If you want to know specifically where this is done, see the RIL_Init function in reference-ril.c.
const RIL_RadioFunctions *RIL_Init(const struct RIL_Env *env, int argc, char **argv)
{
    int ret;
    int fd = -1;
    int opt;
    pthread_attr_t attr;

    s_rilenv = env;

    while ( -1 != (opt = getopt(argc, argv, "p:d:s:"))) {
        switch (opt) {
        case 'p':
            s_port = atoi(optarg);
            if (s_port == 0) {
                usage(argv[0]);
                return NULL;
            }
            RLOGI("Opening loopback port %d\n", s_port);
            break;
        case 'd':
            s_device_path = optarg;
            RLOGI("Opening tty device %s\n", s_device_path);
            break;
        case 's':
            s_device_path = optarg;
            s_device_socket = 1;
            RLOGI("Opening socket %s\n", s_device_path);
            break;
        default:
            usage(argv[0]);
            return NULL;
        }
    }

    if (s_port < 0 && s_device_path == NULL) {
        usage(argv[0]);
        return NULL;
    }

    sMdmInfo = calloc(1, sizeof(ModemInfo));
    if (!sMdmInfo) {
        RLOGE("Unable to alloc memory for ModemInfo");
        return NULL;
    }

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    ret = pthread_create(&s_tid_mainloop, &attr, mainLoop, NULL);

    return &s_callbacks;
}
I hope things are clear now.
