I have a routine that uses an HttpURLConnection to upload a file via a multipart/form-data content type. The server does some processing on the uploaded file and returns a short response code. This works fine in all cases so far except with one user on an HTC Evo when on 4G. If the user switches to 3G then everything works fine. When on 4G the app waits on the while ((line = reader.readLine()) != null) line until a socket connection timeout exception is thrown. I have the connection timeout set to 70 seconds. The server is in PHP; here is the relevant snippet:
//all ob_ related entries were added because I found some info indicating
//that some clients would not acknowledge the response without the content-length header
ob_end_clean();
header("Connection: close");
ob_start();
...
//the response is one of either
echo "BACKGROUND"; //this one works!
//or
echo $rv; //$rv = "1336757671374T37171FR"
//or
echo "FailedQA";
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush();
ob_flush();
flush();
die();
?>
Note that the 'BACKGROUND' response works, and the rest cause the client to sit until the timeout exception. I currently have two notions on this, but I am not in a 4G area, so I can only test through the end user and I really want to limit the number of attempts. My first thought is that 'BACKGROUND' is slightly longer than 'FailedQA', and while the other response is longer still, it has a numeric start. So maybe adding whitespace to the response would help? The other aspect is response time. The 'BACKGROUND' message is normally sent faster than the other ones, but I have a counterexample here, so I am not sure. Example: the 'BACKGROUND' message normally goes out within 15 to 20 seconds, while the other messages normally take 30-40 seconds. However, I have one example where the '1336757671374T37171FR'-style response went out in 24 seconds and was not received, and one where the 'BACKGROUND' message went out in 27 seconds and was received.
So to sum up: this only happens on Sprint 4G. I suspect it is either content length or response time that is causing the issue, but in both cases I have a counterexample to the contrary. Except that in the length case the longer counterexample has a numeric beginning, so there's that.
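For reference, the timeout setup on the client side looks roughly like this (just a sketch; conn stands for the HttpURLConnection used for the upload). Note that the wait on readLine() is governed by the read timeout, not the connect timeout:
// The connect timeout only covers establishing the TCP connection.
conn.setConnectTimeout(70000);
// The read timeout is the maximum silence between data packets; this is the
// timer that fires while readLine() is blocked waiting for the response.
conn.setReadTimeout(70000);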
It seems to be the delay before a response that was causing the problem. I used the guide Easy Parallel Processing in PHP to set up a multi-tasking PHP configuration. This way I have a script that just counts and echoes the elapsed seconds while the other one does the job processing. The problem is now resolved.
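With the server now echoing a counter while the job runs, the client just keeps reading until the stream ends and ignores the keep-alive lines. A minimal sketch, assuming the keep-alive lines are plain numbers and the final line is the status token (that format is my assumption, not something stated above):
// Read every line the server sends; skip the numeric keep-alive lines
// and remember the last non-numeric line as the actual status.
BufferedReader reader = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"));
String line;
String status = null;
while ((line = reader.readLine()) != null) {
    if (line.trim().matches("\\d+")) {
        continue; // elapsed-seconds keep-alive line, ignore
    }
    status = line.trim(); // "BACKGROUND", "FailedQA" or the job id
}
reader.close();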
I'm writing my thesis on how to obtain data from different fitness bands.
At the moment I'm doing some research on the Mi Band 2 using a Bluetooth connection with my PC; unfortunately, BLE is a new field for me.
By looking at projects like Gadgetbridge or miband2-python-test I try to understand the protocol. I get how the authentication works and how to extract data like battery or time information. However, I don't understand the protocol to obtain past data, like the minutely steps from two days ago until now.
I would be pleased if someone could help me by giving a tip or explaining the steps of the protocol. Thanks in advance!
That's my code for now, as far as I understood the protocol:
UUID_CHAR_ACTIVITY_DATA = "00000005-0000-3512-2118-0009af100700"
UUID_CHAR_FETCH = "00000004-0000-3512-2118-0009af100700"
CCCD_UUID = 0x2902
class MiBand2(Peripheral):
    [...]
        self.char_activity_data = self.getCharacteristics(uuid=UUID_CHAR_ACTIVITY_DATA)[0]
        self.char_fetch = self.getCharacteristics(uuid=UUID_CHAR_FETCH)[0]
        self.cccd_fetch = self.char_fetch.getDescriptors(forUUID=CCCD_UUID)[0]

    def fetch_activity_data(self):
        # \x01\x01 key?
        # \xe2\x07 year 2018 (little-endian)
        # \x05 month
        # \x03 day
        # \x11 hour
        # \x2f minute
        # \x00\x08 timezone
        value = b'\x01\x01\xe2\x07\x05\x03\x11\x2f\x00\x08'
        self.cccd_fetch.write(b'\x01\x00', False)
        self.char_fetch.write(value, False)
        for i in range(30):
            self.waitForNotifications(1.0)

class AuthenticationDelegate(DefaultDelegate):
    [...]
    def handleNotification(self, hnd, data):
        [...]
        if hnd == self.device.char_fetch.getHandle():
            if data[:3] == b'\x10\x01\x01':
                self.device.char_activity_data.write(b'\x01\x00', False)
                # After \x02 I receive \x10\x02\x01 instead of fitness data as I thought
                self.device.char_fetch.write(b'\x02', False)
You need to analyze btsnoop_hci.log.
Every 30 minutes the device sends a notification with value 0x0e from 00000010-0000-3512-2118-0009af100700. That is when you should start fetching your past data:
1. Enable the notification descriptors for UUID_CHAR_ACTIVITY_DATA and the so-called UUID_CHAR_FETCH.
2. Get the count of packages since your last successful fetch: send the value 0x0101+datetime+tz to UUID_CHAR_FETCH. The device responds with 0x100101+packages_count+1st_package_datetimetz; if there are no gaps, 1st_package_datetimetz is the datetime you sent previously.
3. Start the transfer of past data: send the single byte 0x02 to UUID_CHAR_FETCH, and the device will send notifications from UUID_CHAR_ACTIVITY_DATA. Every activity data notification value has a queue number in its first byte and at most 4 packages of data in the remaining bytes. Every single package of past data consists of 4 bytes and has this format: activity_type, intensity, steps, heart_rate. The device stores data for every minute, so usually on every 0x0e event you will get 30 packages, most of the time as 8 notification values of 4 packages each. After the last notification the device sends a success notification 0x100201 from UUID_CHAR_FETCH.
4. I don't know why, but a last, third step needs to be done: send the single byte 0x03 to UUID_CHAR_FETCH, then you get the success response 0x100301. This is actually all you need, but Mi Fit does a double check for new data packages, gets a zero count, and only then does this last step.
5. Finally, set the notification descriptors off again with the value 0x0000.
After all this, your last successful synchronization datetime will be greater by the count of past data packets you got * 60 seconds.
If the packages count in the response to the 0x0101 command is 0, the device will obviously send you nothing after the 0x02 command and will then send the success notification 0x100201 :)
I don't know what 0x0102+datetimetz is for. It always gets a response with a packages count of 0 in my btsnoop_hci.logs.
I think it is not necessary to wait for the 0x0e event to synchronize.
https://gist.github.com/Roxxor91/0d3ff17153270e447d01e7afd0c54e0f
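To make the packet layout above concrete, here is a minimal sketch (in Java; the parsing is the same on any platform) of how one notification value from UUID_CHAR_ACTIVITY_DATA could be decoded. The field names just follow the description above; treat it as an illustration, not something tested against a real device.
static void parseActivityNotification(byte[] data) {
    // First byte is the queue/sequence number of this notification.
    int seq = data[0] & 0xFF;
    // The remaining bytes hold up to four 4-byte packages, one per minute:
    // activity_type, intensity, steps, heart_rate.
    for (int off = 1; off + 4 <= data.length; off += 4) {
        int activityType = data[off] & 0xFF;
        int intensity    = data[off + 1] & 0xFF;
        int steps        = data[off + 2] & 0xFF;
        int heartRate    = data[off + 3] & 0xFF;
        System.out.println("seq=" + seq + " type=" + activityType
                + " intensity=" + intensity + " steps=" + steps + " hr=" + heartRate);
    }
}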
When I try to analyse a CDN download, some logs look like the one below:
GET http://1234.apk?track=mmmmmmm range:bytes-sent=[500-500], content-length:1500 ...
In my understanding, range:bytes-sent represents resuming a download after an interruption, and it should have different numbers in bytes-sent; the following look reasonable:
bytes-sent=[500-600]
bytes-sent=[500-]
bytes-sent=[-500]
but what is the meaning of range start = range end, like [500-500]? It seems no data should be downloaded, yet it still generates an HTTP response.
Thanks first~
The bytes at both ends of the range are included. Hence if only the 500th byte is to be sent, the range is given as [500-500]; that is still one byte of data, not zero. Have a look at https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-p5-range-26, which has an example of how the first and last bytes are requested:
o The first and last bytes only (bytes 0 and 9999):
bytes=0-0,-1
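You can check this yourself by requesting a single-byte range; a range-aware server answers with 206 Partial Content and a matching Content-Range header. A minimal sketch with HttpURLConnection (the URL is just a placeholder):
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeCheck {
    public static void main(String[] args) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/1234.apk").openConnection();
        // Ask for exactly one byte: the byte at offset 500 (ranges are inclusive).
        conn.setRequestProperty("Range", "bytes=500-500");
        System.out.println(conn.getResponseCode());               // 206 if the server supports ranges
        System.out.println(conn.getHeaderField("Content-Range")); // e.g. "bytes 500-500/1500"
        conn.disconnect();
    }
}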
Rather than an answer, I'm looking for an idea here.
I'd like to measure the scheduling latency of sensor sampling in Android. In particular I want to measure the time from the sensor interrupt request to when the bottom half, which is in charge of the data read, is executed.
The bottom half already has, besides the data read, a timestamping instruction. Indeed, samples are collected by applications (whether Java or native, no difference) as a tuple [measurement, timestamp].
The timestamp follows the clock source clock_gettime(CLOCK_MONOTONIC, &t);
So assuming that the bottom-half is not preempted, somehow this timestamp gives an indication of the task scheduling instant. What is missing is a direct or indirect way to find out its corresponding irq instant.
You can safely assume that we can request any sampling rate from the sensor. The driver skeleton is the following (Galaxy S3 gyroscope):
err = request_threaded_irq(data->client->irq, NULL,
                           lsm330dlc_gyro_interrupt_thread,
                           IRQF_TRIGGER_RISING | IRQF_ONESHOT,
                           "lsm330dlc_gyro", data);

static irqreturn_t lsm330dlc_gyro_interrupt_thread(int irq, void *lsm330dlc_gyro_data_p)
{
...
struct lsm330dlc_gyro_data *data = lsm330dlc_gyro_data_p;
...
res = lsm330dlc_gyro_read_values(data->client,
&data->xyz_data, data->entries);
...
input_report_rel(data->input_dev, REL_RX, gyro_adjusted[0]);
input_report_rel(data->input_dev, REL_RY, gyro_adjusted[1]);
input_report_rel(data->input_dev, REL_RZ, gyro_adjusted[2]);
input_sync(data->input_dev);
...
}
The key constraint is that I need to (well, I only have enough resources to) perform this measurement from user space, on a commercial device, without touching and recompiling the kernel, and hopefully with a limited impact on the experiment accuracy. I don't know if such an experiment is possible with this constraint, and so far I couldn't figure out any reasonable method.
I might consider also recompiling the kernel if the experiment then becomes straightforward.
Thanks.
First, it's not possible to perform this measurement without touching the kernel.
Second, I didn't see any bottom half configured in your ISR code.
Third, if a bottom half is scheduled and the kernel can be recompiled, you can sample the jiffies value in the ISR and sample it again in the bottom half. Take the difference between the two samples and subtract that offset from the timestamp that is exported to user space.
Hello.
I'm developing an application that transfers data over Bluetooth (with a flight recorder device). When I am receiving a lot of data (3000 - 40000 lines of text, depending on the file size) my application seems to stop receiving the data. I receive the data with InputStream.read(buffer). For example: I send a command to the flight recorder, it starts sending me a file (line by line), on my phone I receive 120 lines and then the app gets stuck.
The interesting thing is that on my HTC Desire the app gets stuck only sometimes, while on the Samsung Galaxy S the application gets stuck every single time I try to receive more than 50 lines.
The code is based on the BluetoothChat example. This is the part of the code where I am listening to the BluetoothSocket:
byte[] buffer = new byte[1024];
int bytes =0;
while(true)
{
bytes = mmInStream.read(buffer);
readMessage = new String(buffer, 0, bytes);
Log.e("read", readMessage);
String read2 = readMessage;
//searching for the end of line to count the lines(the star means the start of the checksum)
int currentHits = read2.replaceAll("[^*]","").length();
nmbrOfTransferedFligts += currentHits;
.
.
.
//parsing and saving the recieved data
I must say that I am running this in a while(true) loop, in a Thread that is implemented in an Android Service. The app seems to get stuck at "bytes = mmInStream.read(buffer);"
I have tried to do this with a BufferedReader, but with no success.
Thanks.
The app seems to get stuck at "bytes = mmInStream.read(buffer);"
But that is normal behavior: InputStream.read(byte[]) blocks when there is no more data available.
This suggests to me that the problem is on the other end or in the communication between the devices. Is it possible that you have a communication problem (which plays out a bit differently on the Galaxy vs. the Desire) that is preventing more data from being received?
Also, I would suggest that you wrap a try/catch around the read statement to be sure that you catch any possible IOExceptions. Though I guess you would have seen it in logcat if that were happening.
Speaking of logcat, I would suggest that you look at the logcat statements that Android itself is generating. I find that it generates a lot for Bluetooth, and this might help you to figure out whether there really is any more data to be read().
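Putting those two suggestions together, a minimal sketch of the read loop with the IOException handling and an explicit end-of-stream check (mmInStream is the socket's InputStream, as in your code):
byte[] buffer = new byte[1024];
try {
    while (true) {
        int bytes = mmInStream.read(buffer); // blocks until data arrives
        if (bytes == -1) {
            Log.e("read", "stream closed by remote device");
            break;
        }
        String readMessage = new String(buffer, 0, bytes);
        Log.e("read", readMessage);
        // ... parse and save the received data ...
    }
} catch (IOException e) {
    Log.e("read", "connection lost", e);
}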
On Android 2.1/2.2 I use DefaultHttpClient found in Android SDK.
Apache says in their docs there are 2 timeouts:
CoreConnectionPNames.SO_TIMEOUT='http.socket.timeout': defines the socket timeout (SO_TIMEOUT) in milliseconds, which is the timeout for waiting for data or, put differently, the maximum period of inactivity between two consecutive data packets. A timeout value of zero is interpreted as an infinite timeout. This parameter expects a value of type java.lang.Integer. If this parameter is not set, read operations will not time out (infinite timeout).
CoreConnectionPNames.CONNECTION_TIMEOUT='http.connection.timeout': determines the timeout in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. This parameter expects a value of type java.lang.Integer. If this parameter is not set, connect operations will not time out (infinite timeout).
I tried searching Android sources for default values for these 2 timeouts, but was unable to find. Does anyone know what are the default values for these timeouts? I'd like to get a link to sources where the values are set or an official doc on this (versus just to hear an opinion).
Just try the code section below:
import android.net.http.AndroidHttpClient;
...
AndroidHttpClient h = AndroidHttpClient.newInstance("My http client");
// ...
Log.d(TAG, "http.socket.timeout: " + h.getParams().getParameter("http.socket.timeout"));
Log.d(TAG, "http.connection.timeout: " + h.getParams().getParameter("http.connection.timeout"));
It works on my device:
12-02 16:27:54.119 D/Exam(17121): http.socket.timeout: 60000
12-02 16:27:54.119 D/Exam(17121): http.connection.timeout: 60000
Wouldn't you be able to get the default (or whatever values are set) using something like the following:
DefaultHttpClient h;
// ...
Log.d(TAG, "http.socket.timeout: " +
h.getParams().getParameter("http.socket.timeout"));
Log.d(TAG, "http.connection.timeout: "
+ h.getParams().getParameter("http.connection.timeout"));
It's worth a shot if you really want to know what the default values are (as opposed to just setting the values yourself).
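And if you would rather not depend on the defaults at all, you can set both values explicitly on the client's parameters; HttpConnectionParams is part of the same Apache HttpClient that Android bundles (the 10/15 second values below are just examples):
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.params.HttpConnectionParams;
// ...
DefaultHttpClient client = new DefaultHttpClient();
// http.connection.timeout: time allowed to establish the connection
HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000);
// http.socket.timeout: maximum inactivity between two data packets
HttpConnectionParams.setSoTimeout(client.getParams(), 15000);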