Currently, this is how I calculate each function's CPU usage in my Android code:
double start = System.currentTimeMillis();
double start1 = Debug.threadCpuTimeNanos();
foo();
double end = System.currentTimeMillis();
double end1 = Debug.threadCpuTimeNanos();
double t = (end - start);
double t1 = (end1 - start1) / 1000000;
double CPUusage;
if (t == 0) {
    CPUusage = 0;
} else {
    CPUusage = (t1 / t) * 100;
}
I am computing t1/t to get the CPU usage. Is this a correct way of calculating the CPU usage of each function in my Android code, or is it conceptually wrong? Could someone guide me on this?
From documentation:
static long currentTimeMillis()
Returns the current time in milliseconds since January 1, 1970 00:00:00.0 UTC.
Please replace the doubles you are using with longs. Both methods return long values, so widening them to double gains nothing; kept as longs, the differences are exact and can safely be used in relation to each other.
Also, note that you are comparing two independent clocks: wall-clock time from currentTimeMillis() and the current thread's CPU time. Decide whether you want the current thread's time or the total elapsed time, and compare accordingly.
From the Debug documentation:
public static long threadCpuTimeNanos()
Added in API level 1
Get an indication of thread CPU usage. The value returned indicates the amount of time that the current thread has spent executing code or waiting for certain types of I/O. The time is expressed in nanoseconds, and is only meaningful when compared to the result from an earlier call. Note that nanosecond resolution does not imply nanosecond accuracy. On systems which don't support this operation, the call returns -1.
Try using, in the same Runnable (sequentially placed method calls):
long start = Debug.threadCpuTimeNanos();
foo();
long finish = Debug.threadCpuTimeNanos();
long outputValue = finish - start;
System.out.println("foo() took " + outputValue + " ns.");
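For reference, the same measurement can be sketched on a plain JVM using ThreadMXBean, the desktop analogue of Debug.threadCpuTimeNanos() (an assumption for illustration; on Android itself, stick with the Debug call). Both readings are in nanoseconds, so the ratio needs no unit conversion:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuUsage {
    // Returns the CPU usage of task.run() as a percentage of wall-clock time.
    public static double measure(Runnable task) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long wallStart = System.nanoTime();
        long cpuStart = bean.getCurrentThreadCpuTime();
        task.run();
        long cpuEnd = bean.getCurrentThreadCpuTime();
        long wallEnd = System.nanoTime();
        long wall = wallEnd - wallStart;
        if (wall == 0) {
            return 0.0;
        }
        // Both deltas are in nanoseconds; widening to double happens only
        // at the final division, after the exact long subtractions.
        return 100.0 * (cpuEnd - cpuStart) / wall;
    }
}
```

A CPU-bound task should report close to 100%, while a task that mostly sleeps reports close to 0%.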
On my STM32WB55, I am using the 32-bit timer TIM2 and reading the time since system startup from its 32-bit CNT register. With prescaling, I display the time in microseconds on my PuTTY console and it works very well. But now I need to store larger values, so I want to store the time in a 64-bit integer.
Does anyone know a simple way of doing that?
The tim2 timer has 32-bit resolution and you want a 64-bit value, so you have to emulate a 64-bit counter to keep track of your uptime. There are two ways to do that.
One would be incrementing a variable each time you reach the unit of time that you want to keep track of. But that would be extremely inefficient, given that the microcontroller would be doing a lot of constant context switching.
The second way is to extend the timer with a 32-bit variable and increment that variable on each overflow.
MSB LSB
+--------------+ +--------------+
| 32bit uint | | 32bit timer |
+--------------+ +--------------+
The way this works is that after the timer reaches 0xffffffff, which is the maximum for a 32-bit unsigned counter, it overflows and starts back at 0. If there were another bit above the 32nd bit, it would flip on (which is the same as incrementing). You can emulate this exact behavior by incrementing a variable.
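The rollover-extension logic just described can be sketched in a few lines. Java is used here purely so the logic is easy to run and test, not because it belongs on the microcontroller; feed it successive 32-bit counter readings and it maintains the upper word itself:

```java
public class ExtendedTimer {
    private long extension = 0;  // emulated upper 32 bits
    private long lastCount = 0;  // previous 32-bit counter reading

    // count32 is a raw hardware counter reading in 0..0xFFFFFFFF.
    // Returns the reconstructed 64-bit count.
    public long update(long count32) {
        if (count32 < lastCount) {
            // The counter went backwards, so it must have wrapped past
            // 0xFFFFFFFF: the "33rd bit" flipped on.
            extension++;
        }
        lastCount = count32;
        return (extension << 32) | count32;
    }
}
```

Note that a polled version like this only stays correct if update() is called at least once per wrap period; the interrupt-driven version below avoids that constraint.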
First, set up your timer.
static TIM_HandleTypeDef s_TimerInstance = {
    .Instance = TIM2
};

void setup_timer()
{
    __TIM2_CLK_ENABLE();
    s_TimerInstance.Init.Prescaler = ##; // Choose the value that fits your needs
    s_TimerInstance.Init.CounterMode = TIM_COUNTERMODE_UP;
    s_TimerInstance.Init.Period = 0xffffffff; // Choose the value that fits your needs
    s_TimerInstance.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1; // Also choose this value
    s_TimerInstance.Init.RepetitionCounter = 0;
    HAL_TIM_Base_Init(&s_TimerInstance);
    HAL_TIM_Base_Start(&s_TimerInstance);
}
Next, your handler; this is called each time the timer reaches 0xffffffff:
extern void TIM2_IRQHandler();

void TIM2_IRQHandler()
{
    HAL_TIM_IRQHandler(&s_TimerInstance);
}

uint32_t extension;

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    extension++; // Increment the emulated upper 32 bits
}
Combine the extension variable and the timer value. Call this function each time you want the extended counter value. You can make it inline to avoid extra calls, or write it as a macro.
uint64_t get_time()
{
    // Cast before shifting: shifting a 32-bit value by 32 bits is
    // undefined. The two halves are then combined with OR, not AND.
    return ((uint64_t)extension << 32) | __HAL_TIM_GET_COUNTER(&s_TimerInstance);
}
Now glue everything together:
int main(void)
{
    HAL_Init();      // Initialize HAL library
    setup_timer();   // Initialize the timer set up above
    HAL_NVIC_SetPriority(TIM2_IRQn, 0, 0);
    HAL_NVIC_EnableIRQ(TIM2_IRQn);
    while (1);
}
Note that tim2 is now dedicated to this until it overflows; it should not be reconfigured, or the code above will stop working. Also set up the divider so the timer increments once per microsecond, as you stated earlier.
Alternatively, you can use the timer to count seconds and derive the microseconds from that. A 32-bit counter holds up to 2^32 = 4294967296 seconds, and a year has about 31536000 seconds, so (4294967296 / 31536000) gives roughly 136.19 years of uptime. You can then get microseconds by multiplying the uptime by 1,000,000. I don't know what you are planning to do with the microcontroller, but counting seconds sounds more sensible to me.
If you really want precision while counting seconds, you can add the timer's current counter value to the microsecond count obtained by converting the seconds; that way you account for the microseconds that have not yet been added to the second count.
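The arithmetic in the seconds-counting alternative is easy to check. The sketch below (Java again, only for testability) works out the maximum uptime of a 32-bit second counter and recombines the seconds with the timer's sub-second microseconds:

```java
public class Uptime {
    static final long SECONDS_PER_YEAR = 31_536_000L; // 365-day year

    // Whole years of uptime a 32-bit second counter can represent.
    public static long maxYears() {
        return (1L << 32) / SECONDS_PER_YEAR; // 4294967296 / 31536000
    }

    // Total microseconds: the second counter scaled up, plus the
    // timer's current sub-second microsecond count.
    public static long totalMicros(long seconds, long timerMicros) {
        return seconds * 1_000_000L + timerMicros;
    }
}
```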
If you only access this from a non-ISR [non interrupt service] context, it's pretty simple.
If you have an ISR, the base level needs to lock/unlock interrupt handling. The ISR does not have to be related to the timer interrupt; it could be any ISR (e.g. tty, disk, SPI, video/audio, whatever).
Here's some representative code for a simple semi-bare-metal implementation [this is similar to what I've done in some R/T commercial products, notably in a MicroBlaze inside a Xilinx FPGA]:
typedef unsigned int u32;
typedef unsigned long long u64;

volatile int in_isr;   // 1=inside an ISR
volatile u32 oldlo;    // old LSW timer value
volatile u32 oldhi;    // MSW of 64 bit timer

// clear and enable the CPU interrupt flag
void cli(void);
void sti(void);

// tmr32 -- get 32 bit timer/counter
u32 tmr32(void);

// tmrget -- get 64 bit timer value
u64
tmrget(void)
{
    u32 curlo;
    u32 curhi;
    u64 tmr64;

    // base level must prevent interrupts from occurring ...
    if (! in_isr)
        cli();

    // get the 32 bit counter/timer value
    curlo = tmr32();

    // get the upper 32 bits of the 64 bit counter/timer
    curhi = oldhi;

    // detect rollover
    if (curlo < oldlo)
        curhi += 1;

    oldhi = curhi;
    oldlo = curlo;

    // reenable interrupts
    if (! in_isr)
        sti();

    tmr64 = curhi;
    tmr64 <<= 32;
    tmr64 |= curlo;

    return tmr64;
}

// isr -- interrupt service routine
void
isr(void)
{
    // say we're in an ISR ...
    in_isr += 1;

    u64 tmr = tmrget();

    // do stuff ...

    // leaving the ISR ...
    in_isr -= 1;
}

// baselevel -- normal non-interrupt context
void
baselevel(void)
{
    while (1) {
        u64 tmr = tmrget();
        // do stuff ...
    }
}
This works fine if tmrget is called frequently enough that it catches each rollover of the 32 bit timer value.
Consider this piece of code:
// calculate age of dog
long interval = 1000*60*60*24*30;
int ageInWeeks = Weeks.weeksBetween(geburtsDatumDateTime, nowDateTime).getWeeks();
if (ageInWeeks < 20) {
    // weekly
    interval = 1000*60*60*24*7;
} else if (ageInWeeks >= 20 && ageInWeeks < 52) {
    // every two weeks
    interval = 1000*60*60*24*14;
} else if (ageInWeeks >= 52) {
    // monthly
    interval = 1000*60*60*24*30;
}
The debugger shows that in the case ageInWeeks >= 52 the interval is -1702967296, but it should be 2592000000.
The minus sign suggests some kind of overflow error. However, the maximum value of a long in Java is 2^63 - 1, which is about 9.22E18.
What am I missing here? Is the maximum value of a long smaller on Android?
You're computing 32-bit signed ints, the computation overflows, and then you assign the result to a long.
Do your calculation in 64 bits by making one of the operands a long. For example, add L to one of the operands to make it a long literal:
interval = 1000L*60*60*24*30;
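To see the wraparound concretely: the all-int product exceeds Integer.MAX_VALUE (2147483647) and wraps modulo 2^32 before the assignment to long ever happens, while a single long operand promotes the whole expression to 64 bits:

```java
public class OverflowDemo {
    // Every operand is an int, so the product wraps around 2^32
    // and only afterwards gets widened to long.
    public static long intProduct() {
        return 1000 * 60 * 60 * 24 * 30;
    }

    // The L suffix on the first operand makes each multiplication
    // a 64-bit operation, so nothing overflows.
    public static long longProduct() {
        return 1000L * 60 * 60 * 24 * 30;
    }
}
```

2592000000 - 4294967296 = -1702967296, which is exactly the value seen in the debugger.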
As laalto said, adding an 'L' should make it work.
However, to avoid this kind of error in the future, you could use the TimeUnit class (available in the Android SDK):
long interval = TimeUnit.DAYS.toMillis(30);
I am writing a cross-platform application in Cocos2d-x. I need to get the time to create a countdown clock to a certain time of day. Since it is in C++, I can use time(...), mktime(...), and difftime(...) if I need to as a direct approach.
Is there a preferred method in Cocos2d-x for doing this in a cross-platform way (i.e. something built directly into the framework)? I want the app to work the same on iPhones, iPads, and Android.
try this:
time_t rawtime;
struct tm * timeinfo;
time (&rawtime);
timeinfo = localtime (&rawtime);
CCLog("year------->%04d",timeinfo->tm_year+1900);
CCLog("month------->%02d",timeinfo->tm_mon+1);
CCLog("day------->%02d",timeinfo->tm_mday);
CCLog("hour------->%02d",timeinfo->tm_hour);
CCLog("minutes------->%02d",timeinfo->tm_min);
CCLog("seconds------->%02d",timeinfo->tm_sec);
Try this code:
static inline long millisecondNow()
{
    struct cc_timeval now;
    CCTime::gettimeofdayCocos2d(&now, NULL);
    return (now.tv_sec * 1000 + now.tv_usec / 1000);
}
I used this function to get current time in millisecond. I am new in cocos2d-x so hope this can be helpful.
You should try this lib, I just tested and it works fine.
https://github.com/Ghost233/CCDate
If you receive some wrong values, set timezoneOffset = 0;
Note: 0 <= month <= 11
You can scheduleUpdate in your clock class.
update is then called every frame with a float argument, the delta time in seconds since the last call; cocos2d-x reads the time from the system and computes the delta for you.
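That per-frame accumulation is all a countdown clock needs. Here is a minimal sketch of the idea (written in Java only for easy testing; in cocos2d-x the same update(float dt) method would live in a C++ node scheduled with scheduleUpdate):

```java
public class CountdownClock {
    private float remaining; // seconds left until the target time

    public CountdownClock(float seconds) {
        remaining = seconds;
    }

    // Called once per frame with the delta time in seconds,
    // exactly as scheduleUpdate drives update(float dt).
    public void update(float dt) {
        remaining = Math.max(0f, remaining - dt);
    }

    public boolean isFinished() {
        return remaining == 0f;
    }

    public float getRemaining() {
        return remaining;
    }
}
```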
I thought this code would do the trick:
static inline long millisecondNow()
{
    struct cc_timeval now;
    CCTime::gettimeofdayCocos2d(&now, NULL);
    return (now.tv_sec * 1000 + now.tv_usec / 1000);
}
HOWEVER, it only gives part of what I need. In general, I need a real "date and time" object (or structure), not just the time of day in milliseconds.
The best solution, for now, seems to be the "classic" localtime, mktime, difftime trifecta in C++. Below are a few examples of basic operations... I may cook up a general class for these kinds of operations, but for now they are a good start and show how to get moving:
double Utilities::SecondsTill(int hour, int minute)
{
    time_t now;
    struct tm target;
    double seconds;

    time(&now);
    target = *localtime(&now);
    target.tm_hour = hour;
    target.tm_min = minute;
    target.tm_sec = 0;

    seconds = difftime(mktime(&target), now);
    return seconds;
}

DAYS_OF_WEEK_T Utilities::GetDayOfWeek()
{
    struct tm tinfo;
    time_t rawtime;

    time(&rawtime);
    tinfo = *localtime(&rawtime);
    return (DAYS_OF_WEEK_T)tinfo.tm_wday;
}
I currently have (from a server) a date stamp returned as ticks (.NET DateTime).
I managed to convert it by dividing by 10,000 and offsetting accordingly to get epoch milliseconds.
The issue is that the milliseconds passed from the server include the zone offset. What I need to do is get a TimeZone object for the zone (always the same) and subtract the offset in milliseconds (depending on DST) from the original value, to produce a value from which a Date can properly be constructed.
Is there a better way of doing this without so many conversions?
private static long netEpocTicksConv = 621355968000000000L;

public static Date dateTimeLongToDate(long ticks) {
    TimeZone greeceTz = TimeZone.getTimeZone("Europe/Athens");
    Calendar cal0 = new GregorianCalendar(greeceTz);
    long time = (ticks - netEpocTicksConv) / 10000;
    time -= greeceTz.getOffset(time);
    cal0.setTimeInMillis(time);
    Date res = cal0.getTime();
    return res;
}
Here's some code which doesn't quite do the right thing near DST transitions:
private static final long DOTNET_TICKS_AT_UNIX_EPOCH = 621355968000000000L;
private static final TimeZone GREECE = TimeZone.getTimeZone("Europe/Athens");

public static Date dateTimeLongToDate(long ticks) {
    long localMillis = (ticks - DOTNET_TICKS_AT_UNIX_EPOCH) / 10000L;
    // Note: this does the wrong thing near DST transitions
    long offset = GREECE.getOffset(localMillis - GREECE.getRawOffset());
    long utcMillis = localMillis - offset;
    return new Date(utcMillis);
}
There's no need to use a Calendar here.
You can get it to be accurate around DST transitions unless it's actually ambiguous, in which case you could make it either always return the earlier version or always return the later version. It's fiddly to do that, but it can be done.
By subtracting the offset for standard time, we're already reducing the amount of time during which it will be incorrect. Basically this code now says, "Subtract the standard time (no daylight savings) offset from the local time, to get an approximation to the UTC time. Now work out the offset at that UTC time."
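If java.time is available (Android API 26+, or older API levels via core library desugaring; an assumption here), the fiddly offset work can be delegated entirely. For ambiguous local times at the autumn transition, atZone() resolves to the earlier of the two valid offsets by its documented default:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.util.Date;

public class TicksConverter {
    private static final long TICKS_AT_UNIX_EPOCH = 621355968000000000L;
    private static final ZoneId GREECE = ZoneId.of("Europe/Athens");

    // Interprets .NET ticks as Athens local wall time and converts to a Date.
    public static Date dateTimeLongToDate(long ticks) {
        long localMillis = (ticks - TICKS_AT_UNIX_EPOCH) / 10_000L;
        // Rebuild the local wall time, then let the zone rules pick the offset.
        LocalDateTime local = LocalDateTime.ofEpochSecond(
                Math.floorDiv(localMillis, 1000),
                (int) Math.floorMod(localMillis, 1000) * 1_000_000,
                ZoneOffset.UTC);
        Instant utc = local.atZone(GREECE).toInstant();
        return new Date(utc.toEpochMilli());
    }
}
```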
I want to create an app that lets the user check whether the current time falls within a specified interval. More specifically, I created an SQLite table specifying a start time and an end time for each record. The problem is that column types are limited to text, number, and so on; there is no datetime type. So how can I check whether the current time is between the start and end times, given that the time format is h:mm rather than an integer I could compare with less-than or greater-than? Do I have to convert the current time to minutes?
You should be able to do the comparison even if time is not stored in the datetime type, here is a link that explains the conversion from string to time.
If that doesn't work, convert the time to seconds (int) and calculate the difference.
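If the times stay as "h:mm" text, another option (a sketch; the helper names here are made up) is to normalize both endpoints and the current time to minutes since midnight and compare the integers directly:

```java
public class TimeRange {
    // Converts "h:mm" (e.g. "9:30" or "17:05") to minutes since midnight.
    public static int toMinutes(String hmm) {
        String[] parts = hmm.split(":");
        return Integer.parseInt(parts[0]) * 60 + Integer.parseInt(parts[1]);
    }

    // True if now lies inside [start, end); assumes the interval
    // does not cross midnight.
    public static boolean isBetween(String now, String start, String end) {
        int n = toMinutes(now);
        return n >= toMinutes(start) && n < toMinutes(end);
    }
}
```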
Try this. You can save and retrieve your time as a String:
long to String: String.valueOf()
String to long: Long.valueOf()
Then, you use this procedure to check time:
//Time checker
private boolean timeCheck(long startTimeInput, long endTimeInput) {
    long currentTime = System.currentTimeMillis();
    return (currentTime > startTimeInput) && (currentTime < endTimeInput);
}
And in your main program check it like this:
//You kept the times as Strings
if (timeCheck(Long.valueOf(start), Long.valueOf(end))) {
    // it is in the interval
} else {
    // not in the interval
}
I hope it helps.