I learned that the loop that checks the GPIO interrupt, which is supposed to run every ~100 usec, is actually looping every ~3 usec. The reason is that the timespec I give the GPIO event wait routine isn't being honored: the wait times out immediately.
(Note that I removed extra code, error handling, and other details that are typical in such a loop, like how to exit it.)
struct timespec timeSpec = { 0, 100000 }; // 0 sec, 100000 nsec = 100 usec
struct gpiod_line_event event;

for (;;)
{
    result = gpiod_line_event_wait(pGpioLine, &timeSpec); // ppoll(... timeSpec) is called inside this wait()
    if (result > 0)
    {
        // GPIO IRQ detected
        if (gpiod_line_event_read(pGpioLine, &event) >= 0)
        {
            HandleIrq();
        }
    }
    else
    {
        // 100 usec timeout
        HandleTimeout();
    }
}
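(Not part of the original code: one way the actual loop period can be measured, assuming a hypothetical do_one_iteration() that wraps the wait/handle logic above.)

#include <stdio.h>
#include <time.h>

/* Measure how long each iteration of the polling loop really takes. */
static void measure_loop_period(void (*do_one_iteration)(void))
{
    struct timespec prev, now;
    clock_gettime(CLOCK_MONOTONIC, &prev);

    for (;;) {
        do_one_iteration();

        clock_gettime(CLOCK_MONOTONIC, &now);
        long delta_ns = (now.tv_sec - prev.tv_sec) * 1000000000L
                      + (now.tv_nsec - prev.tv_nsec);
        prev = now;
        printf("iteration took %ld ns\n", delta_ns); /* ~3000 ns observed instead of ~100000 ns */
    }
}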
I tried several approaches to work around the wait not behaving properly.
Adding a usleep(1); to the for loop doesn't work: the actual sleep time is 30 to 40 msec, which is too long for our driver. This wasn't a surprise, as this isn't a real-time OS.
I also tried giving the timeSpec an absolute time instead of a relative one. For example, I tried to build my own wait deadline based on CLOCK_REALTIME:
struct timespec timeToWait;
clock_gettime(CLOCK_REALTIME, &timeToWait);
timeToWait.tv_nsec += 10000000UL; // +10 msec
result = gpiod_line_event_wait(pGpioLine, &timeToWait);
This didn't work either, and I eventually concluded that the timespec is supposed to be relative for this gpiod API. (Under the hood the API calls ppoll().)
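(For illustration only, not from the original post: a small helper for building the relative timeout, splitting a microsecond count into tv_sec/tv_nsec so that tv_nsec stays below one second; it assumes the libgpiod v1 API and the pGpioLine handle used above.)

#include <time.h>

/* Build a relative timeout of 'usec' microseconds for gpiod_line_event_wait(). */
static struct timespec make_relative_timeout(long usec)
{
    struct timespec ts;
    ts.tv_sec  = usec / 1000000L;
    ts.tv_nsec = (usec % 1000000L) * 1000L;
    return ts;
}

/* Usage:
 *     struct timespec ts = make_relative_timeout(100);      // 100 usec
 *     int result = gpiod_line_event_wait(pGpioLine, &ts);   // relative timeout
 */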
I have a sensor driver which, instead of processing interrupts, polls the gyroscope and accelerometer every 5 ms with the help of workqueues.
static void sensor_delaywork_func(struct work_struct *work)
{
    struct delayed_work *delaywork = container_of(work, struct delayed_work, work);
    struct sensor_private_data *sensor = container_of(delaywork, struct sensor_private_data, delaywork);
    struct i2c_client *client = sensor->client;
    int result;

    mutex_lock(&sensor->sensor_mutex);
    result = sensor->ops->report(client);
    if (result < 0)
        dev_err(&client->dev, "%s: Get data failed\n", __func__);
    mutex_unlock(&sensor->sensor_mutex);

    /* Re-arm the poll unless interrupts are used or the work has been stopped */
    if ((!sensor->pdata->irq_enable) && (sensor->stop_work == 0))
        schedule_delayed_work(&sensor->delaywork, msecs_to_jiffies(sensor->pdata->poll_delay_ms));
}
The problem is that under heavy load the kworker/* threads get preempted and receive less CPU time, so the number of generated input events drops from 200 to 170-180 per second.
What really helps is setting higher (real-time) priorities for the kworker/* threads, for example with chrt.
But what is the correct way to rework this in the kernel so that it always delivers 200 events per second under any circumstances?
High-priority tasklets cannot sleep, for example, so they don't look like a viable way to implement this.
Creating my own high-priority kernel thread inside the driver seems like pretty much a hack.
Any suggestions?
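(For illustration only, not from the original post: a minimal sketch of the dedicated high-priority kernel thread idea mentioned above, assuming a recent kernel that exports sched_set_fifo() and a hypothetical poll_one_sample() that does the I2C read and input reporting.)

#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/sched.h>

static struct task_struct *sensor_task;

static int sensor_poll_thread(void *arg)
{
    struct sensor_private_data *sensor = arg;

    /* SCHED_FIFO so the 5 ms poll is not starved under heavy load */
    sched_set_fifo(current);

    while (!kthread_should_stop()) {
        poll_one_sample(sensor);      /* hypothetical: mutex + ops->report() as above */
        usleep_range(4500, 5000);     /* ~5 ms period */
    }
    return 0;
}

/* Started from probe() with:
 *     sensor_task = kthread_run(sensor_poll_thread, sensor, "sensor_poll");
 * and stopped in remove() with kthread_stop(sensor_task);
 */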
I am trying to access the accelerometer from the NDK. So far it works, but the way events are written to the event queue seems a little bit strange.
See the following code:
ASensorManager* AcquireASensorManagerInstance(void) {
    typedef ASensorManager *(*PF_GETINSTANCEFORPACKAGE)(const char *name);
    void* androidHandle = dlopen("libandroid.so", RTLD_NOW);
    PF_GETINSTANCEFORPACKAGE getInstanceForPackageFunc =
        (PF_GETINSTANCEFORPACKAGE) dlsym(androidHandle, "ASensorManager_getInstanceForPackage");
    if (getInstanceForPackageFunc) {
        return getInstanceForPackageFunc(kPackageName);
    }

    typedef ASensorManager *(*PF_GETINSTANCE)();
    PF_GETINSTANCE getInstanceFunc = (PF_GETINSTANCE) dlsym(androidHandle, "ASensorManager_getInstance");
    return getInstanceFunc();
}

void init() {
    sensorManager = AcquireASensorManagerInstance();
    accelerometer = ASensorManager_getDefaultSensor(sensorManager, ASENSOR_TYPE_ACCELEROMETER);
    looper = ALooper_prepare(ALOOPER_PREPARE_ALLOW_NON_CALLBACKS);
    accelerometerEventQueue = ASensorManager_createEventQueue(sensorManager, looper, LOOPER_ID_USER, NULL, NULL);
    auto status = ASensorEventQueue_enableSensor(accelerometerEventQueue, accelerometer);
    status = ASensorEventQueue_setEventRate(accelerometerEventQueue, accelerometer, SENSOR_REFRESH_PERIOD_US);
}
That's how I initialize everything. My SENSOR_REFRESH_PERIOD_US is 100,000 - so 10 refreshes per second. Now I have the following method to receive the events from the event queue.
vector<sensorEvent> update() {
    ALooper_pollAll(0, NULL, NULL, NULL);

    vector<sensorEvent> listEvents;
    ASensorEvent event;
    while (ASensorEventQueue_getEvents(accelerometerEventQueue, &event, 1) > 0) {
        listEvents.push_back(sensorEvent{event.acceleration.x, event.acceleration.y,
                                         event.acceleration.z, (long long) event.timestamp});
    }
    return listEvents;
}
sensorEvent at this point is a custom struct which I use. This update method gets called via JNI from Android every 10 seconds from an IntentService (to make sure it runs even when the app itself is killed). Now I would expect to receive about 100 values (10 per second * 10 seconds). In different tests I received around 130, which is completely fine for me even if it's a bit off. Then I read in the documentation of ASensorEventQueue_setEventRate that it is not forced to follow the given refresh period, so if I get more values than I wanted, that is totally fine.
But now the problem: sometimes I receive something like 13 values in 10 seconds, and when I call update again 10 seconds later I get the 130 values plus the missing 117 from the run before. This happens completely at random, and sometimes it's not the next run but the fourth one after, or something like that.
I am completely fine with deviating from the refresh period by getting more values. But can anyone explain why so many values go missing and then appear 10 seconds later in the next run? Or is there maybe a way to make sure I receive them in their intended run?
Your code is correct, and as far as I can see only one thing can cause such behaviour: the Android system, in order to avoid draining the battery, reduces the frequency of the accelerometer event stream some time after the app goes to the background or the device falls asleep.
You need to revise all accelerometer-related logic and optimize it according to
Doze and App Standby
You can also try working with the accelerometer in a foreground service.
I'm trying to get a precise clock that is not influenced by other processes inside the app.
I currently use System.nanoTime() like below inside a thread.
I use it to calculate the timing of each of the sixteen steps.
Currently the timed operations sometimes have a perceptible delay that I am trying to fix.
I would like to know if there is a more precise way of obtaining timed operations, maybe by checking the internal sound card clock and using it to generate the timing I need.
I need it to send MIDI notes from the Android device to external synthesizers, and for audio I need precise timing.
Is there anyone who can help me improve this aspect?
Thanks
cellLength = (long)(0.115 * 1000000000L);

for (int x = 0; x < 16; x++) {
    noteStartTimes[x] = x * cellLength;
}

long startTime = System.nanoTime();
index = 0;
while (isPlaying) {
    if (noteStartTimes[index] < System.nanoTime() - startTime) {
        index++;
        if (index == 16) { // reset things
            startTime = System.nanoTime() + cellLength;
            index = 0;
        }
    }
}
For any messages that you receive, the onSend callback gives you a timestamp.
For any messages that you send, you can provide a timestamp.
These timestamps are based on System.nanoTime(), so your own code should use this as well.
If your code is delayed (by its own processing, or by other apps, or by background services), System.nanoTime() will accurately report the delay. But no timer will function can make your code run earlier.
I'm getting a very peculiar issue with my audio callbacks in my Android app (which uses NDK/OpenSL ES). I'm streaming audio output at 44.1 kHz with 512 frames per buffer (which gives a callback period of 11.6 ms). In the callback I'm synthesizing a couple of waveforms, filters, etc. (like a synthesizer). Thanks to optimization I never use more than 5 ms of the callback time. However, when I turn on a specific effect (a digital delay line), it starts to take radically longer in the callback: the time jumps from 7.5 ms (after all voices/filters have been processed) up to 100 to 350 ms.
This is the most confusing part: after maybe 1 or 2 seconds, the digital delay execution time drops from that extremely high value down to 0.2 ms per callback.
Why would the Android app take a long time to run my digital delay processing code for the first few callbacks and then settle down to a very short, audio-happy time? I'm kind of at a loss right now and not sure how to fix this. To confirm, this only happens with the delay processing method. It's just a standard digital delay line (you can find some on GitHub) and I feel like the algorithm isn't the problem here...
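(For reference, not the poster's code: a minimal sketch of what is meant by a standard digital delay line, i.e. a circular buffer with feedback; all names here are hypothetical.)

#include <stddef.h>

/* Hypothetical minimal delay line: a circular buffer with feedback. */
typedef struct {
    double *buf;      /* 'length' samples, pre-allocated and zeroed */
    size_t  length;   /* delay time in samples */
    size_t  writePos; /* current position in the circular buffer */
    double  feedback; /* 0.0 .. <1.0 */
    double  mix;      /* wet/dry mix, 0.0 .. 1.0 */
} DelayLine;

static void delayProcess(DelayLine *d, double *buffer, int numSamples)
{
    for (int i = 0; i < numSamples; i++) {
        double delayed = d->buf[d->writePos];                 /* oldest sample */
        d->buf[d->writePos] = buffer[i] + delayed * d->feedback;
        buffer[i] = buffer[i] * (1.0 - d->mix) + delayed * d->mix;
        if (++d->writePos == d->length)                       /* wrap around */
            d->writePos = 0;
    }
}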
Kind of a pseudocode/rough sketch of what my audio callback code looks like:
static bool myAudioCallback(void *userData, short int *audIO, int numSamples, int srate) {
    AudioData *data = (AudioData *)userData;

    // Reset the mix buffer to 0
    for (int i = 0; i < numSamples; i++) data->buffer[i] = 0;

    // Voice Generation Block
    for (int voice = 0; voice < data->numVoices; voice++) {
        // Reset voice buffer:
        for (int i = 0; i < numSamples; i++) data->voiceBuffer[i] = 0;

        // Generate Voice
        data->voiceManager[voice]->generateVoiceBlock(data->voiceBuffer, numSamples);

        // Sum voices
        for (int i = 0; i < numSamples; i++) data->buffer[i] += data->voiceBuffer[i];
    }

    // When the app first starts, delayEnabled = false, so the user must click a
    // button in the UI to enable it.
    // The trouble is that the first time processDelay(double *buffer, int frames)
    // is enabled, we get a long execution time.
    if (data->delayEnabled) {
        data->delay->processDelay(data->buffer, numSamples);
    }

    // Conversion loop
    for (int i = 0; i < numSamples; i++) {
        double sample = clipOutput(data->buffer[i]);
        audIO[2*i] = audIO[(2*i)+1] = CONV_FLT_TO_16BIT(sample * data->volume);
    }

    return true;
}
Thanks!
Not a great answer or solution, but this is what I did:
Before the user is able to do anything in the app, I turn on the delay and let it run for about 2 seconds before switching it off. This lets the callback get its weird, long ~300 ms execution times out of the way without audibly breaking the audio.
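(Not from the original answer: a rough sketch of that warm-up, assuming a hypothetical free-standing processDelay(double *buffer, int frames) like the one called in the callback above, fed with silence at startup.)

#include <string.h>

#define WARMUP_SECONDS    2
#define FRAMES_PER_BLOCK  512
#define SAMPLE_RATE       44100

void processDelay(double *buffer, int frames);   /* hypothetical: the delay from the question */

/* Push ~2 seconds of silence through the delay once at startup, so its slow
 * first invocations happen before the user hears anything. */
static void warmUpDelay(double *scratch /* FRAMES_PER_BLOCK doubles */)
{
    int blocks = (WARMUP_SECONDS * SAMPLE_RATE) / FRAMES_PER_BLOCK;

    for (int b = 0; b < blocks; b++) {
        memset(scratch, 0, FRAMES_PER_BLOCK * sizeof(double)); /* silence */
        processDelay(scratch, FRAMES_PER_BLOCK);
    }
}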
Obviously this is not a great answer, and if anyone can figure out a more logical explanation I would be more than happy to mark that as the answer.
I am trying to keep track of when the system wakes and suspends (ideally when the monotonic clock starts and stops) so that I can accurately correlate monotonic timestamps to the realtime clock.
On Android, the first method that came to mind was to monitor kmsg for a wakeup message and use its timestamp as a fairly accurate mark. As I was unsure of the accuracy of this timestamp, I decided to log the current monotonic time as well.
The following code runs in a standalone executable:
while (true)
{
    fgets(mLineBuffer, sizeof(mLineBuffer), mKmsgFile);

    //Find first space
    char *messageContent = strchr(mLineBuffer, ' ');
    //Offset one to get character after space
    messageContent++;

    if (strncmp(messageContent, "Enabling non-boot CPUs ...", 25) == 0)
    {
        clock_gettime(CLOCK_MONOTONIC, &mMono);
        std::cout << mLineBuffer;
        std::cout << std::to_string(mMono.tv_sec) << "." << std::to_string(mMono.tv_nsec) << "\n";
    }
}
I expected the time returned by clock_gettime to be at some point after the kmsg log timestamp, but instead it is anywhere from 600ms before to 200ms after.
<6>[226692.217017] Enabling non-boot CPUs ...
226691.681130889
-0.535886111
<6>[226692.626100] Enabling non-boot CPUs ...
226692.80532881
+0.17922881
<6>[226693.305535] Enabling non-boot CPUs ...
226692.803398747
-0.502136253
During this particular session, CLOCK_MONOTONIC consistently differed from the kmsg timestamp by roughly -500ms, only once flipping over to +179ms over the course of 10 wakeups. During a later session it was consistently off by -200ms.
The same consistent offset is present when monitoring all kmsg entries during normal operation (not suspending or waking). Perhaps returning from suspend occasionally delays my process long enough to produce a timestamp that is ahead of kmsg, resulting in the single +179ms difference.
CLOCK_MONOTONIC_COARSE and CLOCK_MONOTONIC_RAW behave in the same manner.
Is this expected behavior? Does the kernel run on a separate monotonic clock?
Is there any other way to get wakeup/suspend times that correlate to monotonic time?
The ultimate goal is to use this information to help graph the contents of wakeup_sources over time, with a particular focus on activity immediately after waking. Though, if the kmsg timestamps are "incorrect", then the wakeup_sources ones probably are too.
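(Not from the original post: a minimal sketch of the monotonic-to-realtime correlation itself, taking back-to-back samples of both clocks and using their difference as the offset.)

#include <stdio.h>
#include <time.h>

/* Sample CLOCK_MONOTONIC and CLOCK_REALTIME back to back; the difference is
 * the offset needed to map monotonic timestamps onto wall-clock time. */
int main(void)
{
    struct timespec mono, real;

    clock_gettime(CLOCK_MONOTONIC, &mono);
    clock_gettime(CLOCK_REALTIME, &real);

    double offset = (double)(real.tv_sec - mono.tv_sec)
                  + (double)(real.tv_nsec - mono.tv_nsec) / 1e9;

    printf("realtime = monotonic + %.9f s\n", offset);
    return 0;
}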