I have a sensor driver that, instead of handling interrupts, polls the gyroscope and accelerometer every 5 ms using workqueues.
static void sensor_delaywork_func(struct work_struct *work)
{
	struct delayed_work *delaywork = container_of(work, struct delayed_work, work);
	struct sensor_private_data *sensor = container_of(delaywork, struct sensor_private_data, delaywork);
	struct i2c_client *client = sensor->client;
	int result;

	mutex_lock(&sensor->sensor_mutex);
	result = sensor->ops->report(client);
	if (result < 0)
		dev_err(&client->dev, "%s: Get data failed\n", __func__);
	mutex_unlock(&sensor->sensor_mutex);

	if ((!sensor->pdata->irq_enable) && (sensor->stop_work == 0))
		schedule_delayed_work(&sensor->delaywork, msecs_to_jiffies(sensor->pdata->poll_delay_ms));
}
The problem is that under heavy load the kworker/* threads get preempted and receive less CPU time, so the number of generated input events drops from 200 to 170-180 per second.
What really helps is setting higher (RT) priorities for the kworker/* threads, with chrt for example.
But what is the correct way to rework this in the kernel so that it always delivers 200 events under any circumstances?
High-priority tasklets cannot sleep, for example, so they don't look like a viable way to implement this.
Creating a dedicated high-priority kernel thread inside the driver seems like pretty much a hack.
Any suggestions?
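For reference, here is a minimal sketch of the dedicated-kthread approach mentioned above - this is not the existing driver code; the sensor_poll_thread name and the poll_task field are assumptions, and sched_set_fifo() only exists on kernels 5.9+ (older kernels would call sched_setscheduler() instead):

static int sensor_poll_thread(void *data)
{
	struct sensor_private_data *sensor = data;

	sched_set_fifo(current); /* elevate this thread to SCHED_FIFO */

	while (!kthread_should_stop()) {
		mutex_lock(&sensor->sensor_mutex);
		if (sensor->ops->report(sensor->client) < 0)
			dev_err(&sensor->client->dev, "report failed\n");
		mutex_unlock(&sensor->sensor_mutex);

		usleep_range(5000, 5200); /* ~5 ms poll period */
	}
	return 0;
}

/* In probe():  sensor->poll_task = kthread_run(sensor_poll_thread, sensor, "sensor_poll");
 * In remove(): kthread_stop(sensor->poll_task); */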
This is my first question ever asked on this board.
The project in short:
Five sensors connected to an ESP32 board transmit 1000 samples/second; each sample is 16 bits. Those values should be transmitted via BLE (with the BLE Arduino library on the ESP32). The connected device (a smartphone) should read those values and do something with them (also via BLE, using the following library: https://github.com/RobotPajamas/Blueteeth). The ESP32 is the server. Java is used in Android Studio.
The problem:
While testing the BLE connection, a simple "hello world" was transmitted as the value of a characteristic. Every time I received the "hello world" on the Android side, a variable was incremented. The problem is that the variable only got incremented 4 times per second. This means (assuming 1 char in a string equals 1 byte) 11 bytes * 4/s = 44 bytes/s are being transmitted. This is clearly not enough (shouldn't BLE transmit ~2 Mbit/s, minus the protocol overhead?).
Code Fragments
ESP32: BLE-Server that transmits value
#include <BLEDevice.h>
#include <BLEUtils.h>
#include <BLEServer.h>
#define SERVICE_UUID "4fafc201-1fb5-459e-8fcc-c5c9c331914b"
#define CHARACTERISTIC_UUID "beb5483e-36e1-4688-b7f5-ea07361b26a8"
class MyCallbacks: public BLECharacteristicCallbacks {
    void onWrite(BLECharacteristic *pCharacteristic) {
        std::string value = pCharacteristic->getValue();
        if (value.length() > 0) {
            Serial.println("*********");
            Serial.print("New value: ");
            for (int i = 0; i < value.length(); i++)
                Serial.print(value[i]);
            Serial.println();
            Serial.println("*********");
        }
    }
};
void setup() {
    Serial.begin(115200);
    BLEDevice::init("MyESP32");
    BLEServer *pServer = BLEDevice::createServer();
    BLEService *pService = pServer->createService(SERVICE_UUID);
    BLECharacteristic *pCharacteristic = pService->createCharacteristic(
        CHARACTERISTIC_UUID,
        BLECharacteristic::PROPERTY_READ |
        BLECharacteristic::PROPERTY_WRITE
    );
    pCharacteristic->setCallbacks(new MyCallbacks());
    pCharacteristic->setValue("Hello World");
    pService->start();
    BLEAdvertising *pAdvertising = pServer->getAdvertising();
    pAdvertising->start();
}

void loop() {
    // put your main code here, to run repeatedly:
    delay(2000);
}
Android Studio Code (Snippet of the receiving source):
try
{
    while (sampleBluetoothData)
    {
        this.selectedDevice.readCharacteristic(MainActivity.characteristicUUID, MainActivity.serviceUUID, (response, data) ->
        {
            if (response != BlueteethResponse.NO_ERROR) {
                return;
            }
            Log.d("AUSGANG", new String(data) + "times: " + i);
            i++;
        });
    }
}
catch (Exception e)
{
    e.printStackTrace();
}
The write on the ESP32 side is a blank example from the Arduino IDE; the read on the Android side is made by the BLE library's publisher. Yes, the Log.d affects performance, but it does not drop it that much.
The variable "data" in the Android code is the received char array. The Bluetooth reading runs on a background thread.
Questions I ask myself now:
Is the Android Studio library the problem, or the Arduino library?
Is it normal behaviour that, if the value of a characteristic does not change, it is transmitted quite slowly?
How fast can you update the value of a characteristic?
Thank you in advance!
BLE can definitely transfer much more than 4 portions of 11 bytes per second.
Reading approach:
Generally, continuously reading all the time is NOT the expected BLE way - it's better to subscribe to data changes, so the ESP32 notifies only when needed (e.g. call selectedDevice.subscribeToCharacteristic once instead of reading in a loop, but then the ESP32 code has to be changed accordingly).
I guess selectedDevice.readCharacteristic requests an asynchronous BLE read, and when you call it in while(sampleBluetoothData), your Bluetooth library keeps adding more and more read requests. It would probably be wiser to request a new read only after the previous one is done - in the read callback, add if (sampleBluetoothData) { this.readAgain(); }.
Consider making a testing prototype from this kickstart example: BLEProof on GitHub - Android & ESP32, read, write, notify (it uses just the system API without a Bluetooth library; your approach is better, it's easier and safer to use the library).
What else to check:
Android side: are you sure your code doesn't go inside if (response != BlueteethResponse.NO_ERROR)?
Android side: to make sure the Bluetooth library is not overloaded with read requests, try adding a 50 ms delay in the reading loop (just to check; it's not a solution).
Android side: are you sure you don't have other BLE reads/writes while you read this data?
ESP32 side: use a shorter BLE connection interval (see the BLE throughput article) - add pAdvertising->setMinPreferred(0x06); and pAdvertising->setMaxPreferred(0x20); before pAdvertising->start(); (but that only sets the "preferred" interval; Android may ignore it).
Using read requests, your transfer speed is mainly limited by the connection interval - each read costs 2 intervals (request + response).
If for example your client has a connection interval of 50ms, you should expect to read a characteristic of up to 20 bytes 10 times per second.
If another client has a connection interval of 30ms, this rate improves to 16.6 reads per second.
The fastest negotiable connection interval is 7.5 ms, for a maximum of 66.6 reads per second (10.7 kbit/s with 20-byte reads).
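A quick sanity check of that arithmetic (a standalone sketch, assuming one ATT read costs two connection intervals and carries a 20-byte payload):

#include <stdio.h>

int main(void)
{
    const double conn_interval_ms = 7.5;                              /* fastest negotiable interval */
    const double reads_per_sec = 1000.0 / (2.0 * conn_interval_ms);   /* request + response */
    const double kbit_per_sec = reads_per_sec * 20 * 8 / 1000.0;      /* 20-byte reads */

    printf("%.1f reads/s, %.1f kbit/s\n", reads_per_sec, kbit_per_sec); /* ~66.7, ~10.7 */
    return 0;
}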
I was reading the BLE Wikipedia page and the minimum data rate is 125 kbit/s, so I think your case is viable, because you will only transmit 16 kbit/s. Take a look at the BLE Wikipedia page.
I learned that the loop that checks the GPIO interrupt, which is supposed to loop at ~100 usec, is actually looping every 3 usec. The reason is that the timespec I gave the GPIO event wait routine isn't working. The wait instantly times out.
(Note that I removed extra code, error handling, and other features that are typical in a loop... like how to exit the loop)
struct timespec timeSpec = { 0, 100000 }; // sec, nanosec (100 usec)
struct gpiod_line_event event;

for (;;)
{
    result = gpiod_line_event_wait(pGpioLine, &timeSpec); // ppoll(....timeSpec) is called inside this wait()
    if (result > 0)
    {
        // GPIO IRQ detected
        if (gpiod_line_event_read(pGpioLine, &event) >= 0)
        {
            HandleIrq();
        }
    }
    else
    {
        // 100 usec timeout
        HandleTimeout();
    }
}
I tried to work around the failure to wait properly using several approaches.
Adding a usleep(1); to the for loop doesn't work. The actual sleep time is 30 to 40 msec, which is too long for our driver. This wasn't a surprise, as this isn't a real-time OS.
I tried changing the timeSpec to hold an absolute time instead of a relative one. For example, I tried to create my own wait based on CLOCK_REALTIME.
struct timespec timeToWait;
clock_gettime(CLOCK_REALTIME, &timeToWait);
timeToWait.tv_nsec += 10000000UL; // note: no carry into tv_sec if tv_nsec goes past 1e9
result = gpiod_line_event_wait(pGpioLine, &timeToWait);
This didn't work, and I eventually concluded that the timespec is supposed to be relative for this gpiod API. (The call underneath is actually ppoll().)
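As a side note, here is a minimal sketch of building the timeout safely - the helper below is hypothetical (not part of libgpiod), and it assumes the timespec passed to gpiod_line_event_wait() is relative, which is what the API expects:

#include <time.h>

/* Hypothetical helper: add a non-negative nanosecond offset to a timespec
 * and normalize it so that 0 <= tv_nsec < 1e9. */
static void timespec_add_ns(struct timespec *ts, long long ns)
{
    ts->tv_sec  += ns / 1000000000LL;
    ts->tv_nsec += ns % 1000000000LL;
    if (ts->tv_nsec >= 1000000000L) {
        ts->tv_nsec -= 1000000000L;
        ts->tv_sec  += 1;
    }
}

/* Usage with a relative 100 usec timeout, rebuilt before each wait:
 *     struct timespec timeout = { 0, 0 };
 *     timespec_add_ns(&timeout, 100000);
 *     result = gpiod_line_event_wait(pGpioLine, &timeout);
 */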
I implemented an algorithm on android using OpenCL and OpenMP. The OpenMP implementation runs about 10 times slower than the OpenCL one.
OpenMP: ~250 ms
OpenCL: ~25 ms
But overall, if I measure the time from the Java (Android) side, I get roughly the same time to call the function and get my values back in both cases.
For example:
Java code:
// calls C implementation using JNI (Java Native Interface)
bool useOpenCL = true;
myFunction(bitmap, useOpenCL); // ~300 ms, timed with System.nanoTime() here, but omitted code for clarity
myFunction(bitmap, !useOpenCL); // ~300 ms, timed with System.nanoTime() here, but omitted code for clarity
C code:
JNIEXPORT void JNICALL Java_com_xxxxx_myFunctionNative(JNIEnv *env, jobject obj, jobject pBitmap, jboolean useOpenCL)
{
    // same before, setting some variables

    clock_t startTimer, stopTimer;
    startTimer = clock();

    if ((bool) useOpenCL) {
        calculateUsingOpenCL(); // runs in ~25 ms, timed here, using clock()
    }
    else {
        calculateUsingOpenMP(); // runs in ~250 ms
    }

    stopTimer = clock();
    __android_log_print(ANDROID_LOG_VERBOSE, APPNAME, "Time in ms: %f\n", 1000.0f * (float)(stopTimer - startTimer) / (float)CLOCKS_PER_SEC);

    // same from here on, e.g.: copying values to java side
}
The Java code, in both cases, executes in roughly the same time, around 300 ms. To be more precise, the elapsed time is a bit higher for OpenCL, i.e. OpenCL is slower on average.
Looking at the individual run times of the OpenMP and OpenCL implementations, the OpenCL version should be much faster overall. But for some reason there is an overhead that I cannot find.
I also compared OpenCL against plain native code (no OpenMP) and got the same result: roughly the same overall runtime, even though calculateUsingOpenCL ran at least 10 times faster.
Ideas:
Maybe the GPU (in the OpenCL case) is less efficient in general because it has less memory available. There are a few variables that we need to preallocate, which are used every frame. So we checked the time it takes Android to draw a bitmap in both cases (OpenMP, OpenCL). In the OpenCL case, drawing a bitmap sometimes took longer (3 times longer), but not by an amount that would equalize the overall runtime of the program.
Does JNI use the GPU to accelerate some calls, which could cause the OpenCL version to be slower?
EDIT:
Is it possible that Java Garbage collection is triggered by OpenCL, causing the big overhead?
It turns out that clock() is unreliable for this (on Android), so instead we used the following method to measure time. With this method, everything is OK. (clock() measures CPU time consumed by the process rather than wall-clock time, so time the CPU spends waiting, e.g. for the GPU, barely counts.)
int64_t getTimeNsec() {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (int64_t) now.tv_sec * 1000000000LL + now.tv_nsec;
}
int64_t startTimer, stopTimer; // int64_t rather than clock_t, since these hold nanosecond timestamps
startTimer = getTimeNsec();
function_to_measure();
stopTimer = getTimeNsec();
__android_log_print(ANDROID_LOG_VERBOSE, APPNAME, "Runtime in milliseconds (ms): %f", (float)(stopTimer - startTimer) / 1000000.0f);
This was suggested here:
How to obtain computation time in NDK
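To illustrate the difference, here is a small standalone demo (not from the original post): clock() counts CPU time consumed by the process, so time spent blocked - sleeping, or waiting on the GPU - barely moves it, while CLOCK_MONOTONIC tracks wall-clock time.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec w0, w1;
    clock_t c0, c1;

    c0 = clock();
    clock_gettime(CLOCK_MONOTONIC, &w0);

    sleep(1); // stand-in for "CPU is idle while the GPU does the work"

    c1 = clock();
    clock_gettime(CLOCK_MONOTONIC, &w1);

    printf("clock():         %.3f s (CPU time)\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("CLOCK_MONOTONIC: %.3f s (wall time)\n",
           (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9);
    return 0;
}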
I'm trying to access the accelerometer from the NDK. So far it works, but the way events are written to the event queue seems a little bit strange.
See the following code:
ASensorManager* AcquireASensorManagerInstance(void) {
    typedef ASensorManager *(*PF_GETINSTANCEFORPACKAGE)(const char *name);
    void* androidHandle = dlopen("libandroid.so", RTLD_NOW);
    PF_GETINSTANCEFORPACKAGE getInstanceForPackageFunc = (PF_GETINSTANCEFORPACKAGE) dlsym(androidHandle, "ASensorManager_getInstanceForPackage");
    if (getInstanceForPackageFunc) {
        return getInstanceForPackageFunc(kPackageName);
    }

    typedef ASensorManager *(*PF_GETINSTANCE)();
    PF_GETINSTANCE getInstanceFunc = (PF_GETINSTANCE) dlsym(androidHandle, "ASensorManager_getInstance");
    return getInstanceFunc();
}

void init() {
    sensorManager = AcquireASensorManagerInstance();
    accelerometer = ASensorManager_getDefaultSensor(sensorManager, ASENSOR_TYPE_ACCELEROMETER);
    looper = ALooper_prepare(ALOOPER_PREPARE_ALLOW_NON_CALLBACKS);
    accelerometerEventQueue = ASensorManager_createEventQueue(sensorManager, looper, LOOPER_ID_USER, NULL, NULL);
    auto status = ASensorEventQueue_enableSensor(accelerometerEventQueue,
                                                 accelerometer);
    status = ASensorEventQueue_setEventRate(accelerometerEventQueue,
                                            accelerometer,
                                            SENSOR_REFRESH_PERIOD_US);
}
That's how I initialize everything. My SENSOR_REFRESH_PERIOD_US is 100,000 - so 10 refreshes per second. Now I have the following method to receive the events from the event queue.
vector<sensorEvent> update() {
    ALooper_pollAll(0, NULL, NULL, NULL);

    vector<sensorEvent> listEvents;
    ASensorEvent event;
    while (ASensorEventQueue_getEvents(accelerometerEventQueue, &event, 1) > 0) {
        listEvents.push_back(sensorEvent{event.acceleration.x, event.acceleration.y, event.acceleration.z, (long long) event.timestamp});
    }
    return listEvents;
}
sensorEvent at this point is a custom struct I use. This update method gets called via JNI from Android every 10 seconds from an IntentService (to make sure it runs even when the app itself is killed). Now I would expect to receive 100 values (10 per second * 10 seconds). In different tests I received around 130, which is completely fine for me even if it's a bit off. Then I read in the documentation of ASensorEventQueue_setEventRate that it is not forced to follow the given refresh period, so if I get more than I wanted, that's totally fine.
But now the problem: sometimes I receive something like 13 values in 10 seconds, and when I call update again 10 seconds later I get the 130 values plus the missing 117 from the run before. This happens completely at random, and sometimes it's not the next run but the fourth following run or something like that.
I am completely fine with being off from the refresh period by receiving more values. But can anyone explain why so many values go missing and then appear 10 seconds later in the next run? Or is there maybe a way to make sure I receive them in their intended run?
Your code is correct, and as far as I can see only one thing can cause such behaviour: to avoid draining the battery, the Android system decreases the frequency of the accelerometer event stream some time after the app goes to the background or the device falls asleep.
You need to review all accelerometer-related logic and optimize it according to
Doze and App Standby
You can also try working with the accelerometer in a foreground service.
I'm writing a simple NDK OpenSL ES audio app that records the user's touches on a virtual piano keyboard and then plays them back forever over a set loop. After much experimenting and reading, I've settled on using a separate POSIX thread loop to achieve this. As you can see in the code, it subtracts any processing time from the sleep time in order to keep each loop's interval as close as possible to the desired sleep interval (in this case 5,000,000 nanoseconds).
void init_timing_loop() {
    pthread_t fade_in;
    pthread_create(&fade_in, NULL, timing_loop, (void*)NULL);
}

void* timing_loop(void* args) {
    while (1) {
        clock_gettime(CLOCK_MONOTONIC, &timing.start_time_s);

        tic_counter(); // simple logic gates that cycle the current tic
        play_all_parts(); // for-loops through all parts and plays any notes (from an OpenSL buffer) that fall on the current tic

        clock_gettime(CLOCK_MONOTONIC, &timing.finish_time_s);
        timing.diff_time_s.tv_nsec = (5000000 - (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec));
        nanosleep(&timing.diff_time_s, NULL);
    }
    return NULL;
}
The problem is that even though the results are better with this, they're quite inconsistent. Sometimes notes will be delayed by perhaps as much as 50 ms, which makes for very wonky playback.
Is there a better way of approaching this? To debug I ran the following code:
gettimeofday(&timing.curr_time, &timing.tzp);
__android_log_print(ANDROID_LOG_DEBUG, "timing_loop", "gettimeofday: %d %d",
timing.curr_time.tv_sec, timing.curr_time.tv_usec);
This gives a fairly consistent readout that doesn't reflect the playback inaccuracies at all. Are there other forces at work in Android preventing accurate timing? Or is OpenSL ES a potential issue? All the buffer data is loaded into memory - could there be bottlenecks there?
Happy to post more OpenSL code if needed... but at this stage I'm trying to figure out whether this thread loop is accurate or whether there's a better way to do it.
You should take the seconds into account when using clock_gettime as well: timing.start_time_s.tv_nsec may be greater than timing.finish_time_s.tv_nsec, because tv_nsec wraps back to zero whenever tv_sec increases.
timing.diff_time_s.tv_nsec =
(5000000 - (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec));
try something like
#define NS_IN_SEC 1000000000LL
(timing.finish_time_s.tv_sec * NS_IN_SEC + timing.finish_time_s.tv_nsec) -
(timing.start_time_s.tv_sec * NS_IN_SEC + timing.start_time_s.tv_nsec)
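For a periodic loop like this, another option is to sleep until an absolute deadline with clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, ...), which sidesteps the tv_nsec wrap entirely and keeps the loop from drifting with processing time. A minimal sketch of how the timing_loop above could be restructured (assuming a 5 ms period; tic_counter() and play_all_parts() are the functions from the original code):

#include <time.h>

#define PERIOD_NS 5000000L   /* 5 ms */
#define NS_IN_SEC 1000000000L

extern void tic_counter(void);    /* defined elsewhere in the original code */
extern void play_all_parts(void); /* defined elsewhere in the original code */

void* timing_loop(void* args) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    while (1) {
        tic_counter();
        play_all_parts();

        /* advance the absolute deadline by one period, carrying into tv_sec */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= NS_IN_SEC) {
            next.tv_nsec -= NS_IN_SEC;
            next.tv_sec += 1;
        }

        /* sleep until the deadline, so processing time doesn't accumulate as drift */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return NULL;
}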