Get 64-bit timestamps from a 32-bit timer - android

On my STM32WB55, I am using the 32-bit timer TIM2 and reading the time since system startup from its 32-bit CNT register. With prescaling, I display the time in microseconds on my PuTTY console and it works very well. But now I need to store larger values, so I want to keep the time in a 64-bit integer.
Does anyone know a simple way of doing that?

TIM2 is a 32-bit timer, and you want 64-bit resolution. There are two ways to emulate a 64-bit counter to keep track of your uptime.
One would be to increment a variable each time you reach the unit of time that you want to keep track of. But that would be extremely inefficient, given that the microcontroller would be doing a lot of constant context switching.
The second way is to extend the timer with a 32-bit variable and increment that variable on each overflow.
      MSB               LSB
+--------------+ +--------------+
|  32bit uint  | | 32bit timer  |
+--------------+ +--------------+
The way this works is that after the timer reaches 0xffffffff, the maximum for a 32-bit unsigned counter, it overflows and starts back at 0. If there were another bit after that 32nd bit, it would flip on (which is the same as incrementing). What you can do is emulate this exact behavior by incrementing a variable.
First, set up your timer.
static TIM_HandleTypeDef s_TimerInstance = {
    .Instance = TIM2
};

void setup_timer()
{
    __TIM2_CLK_ENABLE();
    s_TimerInstance.Init.Prescaler = ##; // Choose the correct value that fits your needs
    s_TimerInstance.Init.CounterMode = TIM_COUNTERMODE_UP;
    s_TimerInstance.Init.Period = 0xffffffff; // Count over the full 32-bit range
    s_TimerInstance.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1; // Also choose this value
    s_TimerInstance.Init.RepetitionCounter = 0;
    HAL_TIM_Base_Init(&s_TimerInstance);
    HAL_TIM_Base_Start_IT(&s_TimerInstance); // _IT variant, so the update interrupt fires on overflow
}
Your handler; this is called each time the timer overflows past 0xffffffff:
extern void TIM2_IRQHandler();

void TIM2_IRQHandler()
{
    HAL_TIM_IRQHandler(&s_TimerInstance);
}

volatile uint32_t extension; // volatile: shared between the ISR and main code

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM2) // this callback is shared by all timers
        extension++;            // increment the upper 32 bits
}
Combine the extension variable and the timer value. Use this function each time you want to get the extended counter value. You can make it inline to avoid extra calls, or write it as a macro.
uint64_t get_time()
{
    // Cast before shifting (shifting a 32-bit value by 32 is undefined),
    // and OR the two halves together rather than AND-ing them.
    return ((uint64_t)extension << 32) | __HAL_TIM_GET_COUNTER(&s_TimerInstance);
}
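One caveat (my own sketch, not part of the original answer): an overflow can occur between the interrupt updating extension and the read of the counter, producing a value that mixes an old high word with a new low word. A double-read loop guards against this, assuming the update interrupt is serviced promptly:
uint64_t get_time_safe()
{
    uint32_t hi, lo;
    do {
        hi = extension; // snapshot the upper 32 bits
        lo = __HAL_TIM_GET_COUNTER(&s_TimerInstance);
    } while (hi != extension); // retry if an overflow happened in between
    return ((uint64_t)hi << 32) | lo;
}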
Now glue everything together:
int main(void)
{
    HAL_Init();    // Initialize HAL library
    setup_timer(); // Initialize the timer (defined above)
    HAL_NVIC_SetPriority(TIM2_IRQn, 0, 0);
    HAL_NVIC_EnableIRQ(TIM2_IRQn);
    while (1);
}
Note that TIM2 is now dedicated to this scheme. It should not be reconfigured, or the code above will stop working. Also, set up the divider so the timer increments each microsecond, as you stated earlier.
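For example, here is a minimal sketch of that divider setup, assuming the TIM2 kernel clock equals SystemCoreClock (verify this against your clock tree, since the APB prescaler settings can double the timer clock); prescaler_for_1mhz is a hypothetical helper:
// Hypothetical helper: compute the prescaler for a 1 MHz (1 us) tick rate.
// The counter then ticks at SystemCoreClock / (Prescaler + 1) = 1 MHz.
static uint32_t prescaler_for_1mhz(void)
{
    return (SystemCoreClock / 1000000U) - 1U; // e.g. 64 MHz -> 63
}
Then, in setup_timer:
s_TimerInstance.Init.Prescaler = prescaler_for_1mhz();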
Also, you can use the timer to count seconds and then derive the microseconds from that. A 32-bit counter holds up to 2^32 = 4294967296 seconds, and a year has about 31536000 seconds, so you could count 4294967296 / 31536000 ≈ 136.19 years of uptime. Multiply the uptime in seconds by 1000000 to get microseconds. I don't know what you are planning to do with the microcontroller, but counting seconds makes more sense to me.
If you really want precision, you can still get it while counting seconds: add the current timer counter value to the microsecond count obtained by converting the seconds, which accounts for the microseconds that haven't yet been added to the second count.
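For instance, a sketch under those assumptions, where seconds is a hypothetical variable incremented once per second by the timer interrupt and the counter is clocked at 1 MHz, so CNT holds the microseconds within the current second:
volatile uint32_t seconds; // hypothetical: incremented once per second by the ISR

uint64_t get_time_us()
{
    return (uint64_t)seconds * 1000000ULL + __HAL_TIM_GET_COUNTER(&s_TimerInstance);
}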

If you only access this from a non-ISR [non interrupt service] context, it's pretty simple.
If you have an ISR, the base level needs to lock/unlock interrupt handling. The ISR does not have to be related to the timer interrupt; it could be any ISR (e.g. tty, disk, SPI, video/audio, whatever).
Here's some representative code for a simple semi-baremetal implementation [this is similar to what I've done in some R/T commercial products, notably in a microblaze inside a Xilinx FPGA]:
typedef unsigned int u32;
typedef unsigned long long u64;

volatile int in_isr;  // 1=inside an ISR
volatile u32 oldlo;   // old LSW timer value
volatile u32 oldhi;   // MSW of 64 bit timer

// clear and enable the CPU interrupt flag
void cli(void);
void sti(void);

// tmr32 -- get 32 bit timer/counter
u32 tmr32(void);

// tmrget -- get 64 bit timer value
u64
tmrget(void)
{
    u32 curlo;
    u32 curhi;
    u64 tmr64;

    // base level must prevent interrupts from occurring ...
    if (! in_isr)
        cli();

    // get the 32 bit counter/timer value
    curlo = tmr32();

    // get the upper 32 bits of the 64 bit counter/timer
    curhi = oldhi;

    // detect rollover
    if (curlo < oldlo)
        curhi += 1;

    oldhi = curhi;
    oldlo = curlo;

    // reenable interrupts
    if (! in_isr)
        sti();

    tmr64 = curhi;
    tmr64 <<= 32;
    tmr64 |= curlo;

    return tmr64;
}
// isr -- interrupt service routine
void
isr(void)
{
    // say we're in an ISR ...
    in_isr += 1;

    u64 tmr = tmrget();

    // do stuff ...

    // leaving the ISR ...
    in_isr -= 1;
}

// baselevel -- normal non-interrupt context
void
baselevel(void)
{
    while (1) {
        u64 tmr = tmrget();
        // do stuff ...
    }
}
This works fine if tmrget is called frequently enough that it catches each rollover of the 32 bit timer value.
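If the base level cannot guarantee that call rate, one option (a sketch of my own under the same conventions, not part of the original code) is to refresh the bookkeeping from a periodic interrupt that is known to fire at least once per 32 bit timer period:
// tick_isr -- hypothetical periodic interrupt handler, configured to fire
// at least once per full 32 bit timer period
void
tick_isr(void)
{
    in_isr += 1;
    (void) tmrget();  // refresh oldlo/oldhi so the rollover is observed
    in_isr -= 1;
}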

Related

Android C++ - socket select is driving me nuts! Why is it not respecting the timeout value I specify?

I have a loop in a thread. At the beginning of each loop cycle, I define a socket set, and then I use select to wait for activity on any of the sockets in the set. I have set the timeout value to 2 seconds and 500ms. For some reason, the select function returns immediately (like after 1ms), and it doesn't seem to respect the timeout value I defined. So what am I doing wrong?
Here's the code snippet:
/* Define a time-out value of 2 seconds and 500ms */
struct timeval sock_timeout;
sock_timeout.tv_sec = 2;
sock_timeout.tv_usec = 500 * 1000;

while (m_keepRunning)
{
    fd_set UdpSocketSet;
    SOCKET maxfd = INVALID_SOCKET;
    std::map<uint16_t, UdpChannel*>::iterator k;

    /* Define socket set */
    pthread_mutex_lock(&m_udpChannelsMutex);
    FD_ZERO(&UdpSocketSet);
    for (k = m_udpChannels.begin(); k != m_udpChannels.end(); ++k)
    {
        UdpChannel* thisUdpChannel = k->second;
        FD_SET(thisUdpChannel->m_udpRxSocket, &UdpSocketSet);
        if (maxfd == INVALID_SOCKET)
        {
            maxfd = thisUdpChannel->m_udpRxSocket;
        }
        else
        {
            if (thisUdpChannel->m_udpRxSocket > maxfd) maxfd = thisUdpChannel->m_udpRxSocket;
        }
    }
    pthread_mutex_unlock(&m_udpChannelsMutex);

    /* TIMES OUT LITERALLY EVERY MILLISECOND!!! WHY????? */
    int retval = pal_select(maxfd + 1, &UdpSocketSet, NULL, NULL, (timeval*)&sock_timeout);
UPDATE:
I hate Android Studio. It doesn't pick up incremental changes, so I was launching the same app over and over again without noticing that it didn't pick up the changes in the native library.
EJP's suggestion must have helped because once I did a clean rebuild of the apk with EJP's suggested change, the problem went away.
You have to reset the socket timeout struct every time around the loop. From man select (Linux):
select() may update the timeout argument to indicate how much time was left.
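A minimal sketch of the fix, reusing the names from the question (pal_select, m_keepRunning, and the fd_set construction are assumed unchanged): initialize the timeval inside the loop so it is rebuilt before every call:
while (m_keepRunning)
{
    /* Reinitialize every iteration: select() may have modified it */
    struct timeval sock_timeout;
    sock_timeout.tv_sec = 2;
    sock_timeout.tv_usec = 500 * 1000;

    /* ... build the fd_set and maxfd as before ... */

    int retval = pal_select(maxfd + 1, &UdpSocketSet, NULL, NULL, &sock_timeout);
    /* ... handle retval ... */
}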

Android CPU usage of each function

Currently, in my Android code, this is what I am doing to calculate each function's CPU usage:
double start = System.currentTimeMillis();
double start1 = Debug.threadCpuTimeNanos();
foo();
double end = System.currentTimeMillis();
double end1 = Debug.threadCpuTimeNanos();

double t = (end - start);
double t1 = (end1 - start1) / 1000000;
double CPUusage;
if (t == 0) {
    CPUusage = 0;
} else {
    CPUusage = (t1 / t) * 100;
}
I am doing t1/t to calculate the CPU usage. Is this a correct way of calculating the CPU usage of each function in my Android code, or is it conceptually wrong? Could someone guide me on this?
From documentation:
static long currentTimeMillis()
Returns the current time in milliseconds since January 1, 1970 00:00:00.0 UTC.
Please replace the double(s) you are using with long(s).
While long(s) have precision issues, they are almost irrelevant for the variables used here, and the rounding will likely be close enough that the returned values can be used in relation to each other.
Also, you are comparing two independent values (wall-clock time versus thread CPU time). Use the same clock for both measurements: either the current thread's CPU time or the full elapsed thread time.
From the Debug documentation:
public static long threadCpuTimeNanos()
Added in API level 1
Get an indication of thread CPU usage. The value returned indicates the amount of time that the current thread has spent executing code or waiting for certain types of I/O. The time is expressed in nanoseconds, and is only meaningful when compared to the result from an earlier call. Note that nanosecond resolution does not imply nanosecond accuracy. On systems which don't support this operation, the call returns -1.
Try using, in the same Runnable (sequentially placed method calls):
long start = Debug.threadCpuTimeNanos();
foo();
long finish = Debug.threadCpuTimeNanos();

long outputValue = finish - start;
System.out.println("foo() took " + outputValue + " ns.");

Java long is calculated wrongly

Consider this piece of code:
// calculate age of dog
long interval = 1000*60*60*24*30;
int ageInWeeks = Weeks.weeksBetween(geburtsDatumDateTime, nowDateTime).getWeeks();

if (ageInWeeks < 20) {
    // weekly
    interval = 1000*60*60*24*7;
} else if (ageInWeeks >= 20 && ageInWeeks < 52) {
    // every two weeks
    interval = 1000*60*60*24*14;
} else if (ageInWeeks >= 52) {
    // monthly
    interval = 1000*60*60*24*30;
}
The debugger shows that in the case of ageInWeeks >= 52 the interval is -1702967296, but it should be 2592000000.
The minus sign suggests some kind of overflow error.
However, the maximum value of a long in Java is 2^63 - 1, which is about 9.2233E18.
What am I missing here? Is the Android maximum value for a long smaller?
You're computing with 32-bit signed ints; the computation overflows, and only then is the result assigned to a long. Indeed, 1000*60*60*24*30 = 2592000000 exceeds Integer.MAX_VALUE (2147483647), so it wraps around to 2592000000 - 2^32 = -1702967296, exactly the value the debugger shows.
Do your calculation in 64 bits by making one of the operands a long. For example, add L to one of the operands to make it a long literal:
interval = 1000L*60*60*24*30;
As laalto said, adding an 'L' should make it work.
However, to avoid this kind of error in the future, you could use the TimeUnit class (available in the Android SDK):
long interval = TimeUnit.DAYS.toMillis(30);

Is there a preferred way to get the system time in cocos2d-x?

I am writing a cross-platform application in Cocos2d-x. I need to get the time to create a countdown clock to a certain time of day. Since it is in C++, I can use time(...), mktime(...), and difftime(...) if I need to as a direct approach.
Is there a preferred method in Cocos2d-x for doing this in a cross-platform way (i.e. something built directly into the framework)? I want the app to work the same on iPhones, iPads, and Android.
try this:
time_t rawtime;
struct tm * timeinfo;
time (&rawtime);
timeinfo = localtime (&rawtime);
CCLog("year------->%04d",timeinfo->tm_year+1900);
CCLog("month------->%02d",timeinfo->tm_mon+1);
CCLog("day------->%02d",timeinfo->tm_mday);
CCLog("hour------->%02d",timeinfo->tm_hour);
CCLog("minutes------->%02d",timeinfo->tm_min);
CCLog("seconds------->%02d",timeinfo->tm_sec);
Try this code
static inline long millisecondNow()
{
    struct cc_timeval now;
    CCTime::gettimeofdayCocos2d(&now, NULL);
    return (now.tv_sec * 1000 + now.tv_usec / 1000);
}
I used this function to get the current time in milliseconds. I am new to cocos2d-x, so I hope this can be helpful.
You should try this lib; I just tested it and it works fine.
https://github.com/Ghost233/CCDate
If you receive some wrong values, set timezoneOffset = 0;
Note: 0 <= month <= 11
You can scheduleUpdate in your clock class.
update is then called every frame with a float argument, the delta time in seconds since the last call; cocos2d-x gets the time from the system and computes the delta for you.
I thought this code would do the trick:
static inline long millisecondNow()
{
    struct cc_timeval now;
    CCTime::gettimeofdayCocos2d(&now, NULL);
    return (now.tv_sec * 1000 + now.tv_usec / 1000);
}
HOWEVER, this only gives part of what I need. In general, I need a real "date and time" object (or structure), not just the time of day in milliseconds.
The best solution for now seems to be using the "classic" localtime, mktime, difftime trifecta in C++. I have a few examples below of some basic operations. I may cook up a general class to do these kinds of operations, but for now these are a good start and show how to get moving:
double Utilities::SecondsTill(int hour, int minute)
{
    time_t now;
    struct tm target;
    double seconds;

    time(&now);
    target = *localtime(&now);
    target.tm_hour = hour;
    target.tm_min = minute;
    target.tm_sec = 0;

    seconds = difftime(mktime(&target), now);
    return seconds;
}

DAYS_OF_WEEK_T Utilities::GetDayOfWeek()
{
    struct tm tinfo;
    time_t rawtime;
    time(&rawtime);
    tinfo = *localtime(&rawtime);
    return (DAYS_OF_WEEK_T)tinfo.tm_wday;
}

How to update Android textviews efficiently?

I am working on an Android app which encounters performance issues.
My goal is to receive strings from an AsyncTask and display them in a TextView. The TextView is initially empty, and each time the background task sends a string, it is concatenated to the current content of the TextView.
I currently use a StringBuilder to store the main string and each time I receive a new string, I append it to the StringBuilder and call
myTextView.setText(myStringBuilder.toString())
The problem is that the background task can send up to 100 strings per second, and my method is not efficient enough.
Redrawing the whole TextView every time is obviously a bad idea (time complexity O(N²)), but I'm not seeing another solution...
Do you know of an alternative to TextView which could do these concatenations in O(N)?
As long as there is a newline between the strings, you could use a ListView to append the strings, holding them in an ArrayList or LinkedList to which you append as the AsyncTask receives them.
You might also consider simply invalidating the TextView less frequently, say 10 times a second. This would certainly improve responsiveness. Something like the following could work:
static long lastTimeUpdated = 0;

if (receivedString.length() > 0)
{
    myStringBuilder.append(receivedString);
}
if ((System.currentTimeMillis() - lastTimeUpdated) > 100)
{
    myTextView.setText(myStringBuilder); // setText takes a CharSequence, no copy needed
    lastTimeUpdated = System.currentTimeMillis();
}
If the strings come in bursts -- such that you have a delay between bursts greater than, say, a second -- then reset a timer on every update that triggers this code to run again, so you pick up the trailing portion of the last burst.
I finally found an answer with the help of havexz and Greyson here, and some code here.
As the strings were coming in bursts, I chose to update the UI every 100ms.
For the record, here's what my code looks like:
private static boolean output_upToDate = true;
/* Handles the refresh */
private Handler outputUpdater = new Handler();
/* Adjust this value for your purpose */
public static final long REFRESH_INTERVAL = 100; // in milliseconds
/* This object is used as a lock to avoid data loss in the last refresh */
private static final Object lock = new Object();

private Runnable outputUpdaterTask = new Runnable() {
    public void run() {
        // takes the lock
        synchronized (lock) {
            if (!output_upToDate) {
                // updates the output view
                outView.setText(new_text);
                // notifies that the output is up-to-date
                output_upToDate = true;
            }
        }
        outputUpdater.postDelayed(this, REFRESH_INTERVAL);
    }
};
and I put this in my onCreate() method:
outputUpdater.post(outputUpdaterTask);
Some explanations: when my app calls its onCreate() method, my outputUpdater Handler receives one request to refresh. But this task (outputUpdaterTask) posts itself a new refresh request 100ms later. The lock is shared with the thread that sends the new strings and sets output_upToDate to false.
Try throttling the update. Instead of updating 100 times per second (the rate at which strings are generated), keep the incoming strings in a StringBuilder and update once per second.
The code should look like:
StringBuilder completeStr = new StringBuilder();
StringBuilder new100Str = new StringBuilder();
int counter = 0;

if (counter < 100) {
    new100Str.append(newString);
    counter++;
} else {
    counter = 0;
    completeStr.append(new100Str);
    new100Str = new StringBuilder();
    myTextView.setText(completeStr.toString());
}
NOTE: Code above is just for illustration so you might have to alter it as per your needs.
