I have an Android ARM64 device.
I'm trying to get a benchmark (say, Spec2k17) to use only X% of the CPU when it runs. However, I want the run-sleep period to be deterministic. Let's say I want 50% utilization; I'd like the scheduling to be:
run-for-16-ms, sleep-for-16-ms, run-for-16-ms, sleep-for-16-ms, etc.
I know cpulimit can throttle a process to X%, but it lets the process run for some unknown period before stopping it. Is there a way to set the period over which X% is enforced?
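Conceptually this is what cpulimit already does internally (alternately stopping and resuming the process with SIGSTOP/SIGCONT), just with a period I don't control. A rough, hypothetical sketch of doing it with an explicit 16 ms / 16 ms cycle, assuming the benchmark's PID can be signalled (e.g. from a root or adb shell context):

    // Hypothetical duty-cycle throttle: alternately resume and stop a process so it
    // runs ~16 ms out of every ~32 ms (about 50% utilization). Note that 16 ms is
    // close to scheduler/timer granularity, so expect some jitter in practice.
    public class DutyCycleThrottle {
        public static void main(String[] args) throws Exception {
            int pid = Integer.parseInt(args[0]);  // PID of the benchmark process
            long runMs = 16, sleepMs = 16;
            while (true) {
                new ProcessBuilder("kill", "-CONT", String.valueOf(pid)).start().waitFor();
                Thread.sleep(runMs);              // let the benchmark run
                new ProcessBuilder("kill", "-STOP", String.valueOf(pid)).start().waitFor();
                Thread.sleep(sleepMs);            // keep it stopped
            }
        }
    }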
Problem Background
Currently, we are facing an "Excessive network usage (background)" warning from the Android Vitals report. Over the last 30 days our rate is 0.04%, but we are only better than 9% of apps:
Last 30 days - 0.04%
Benchmark - Better than 9%
Since "only better than 9%" looks like a scary thing, we decided to look into this issue seriously.
The app is a note taking app (https://play.google.com/store/apps/details?id=com.yocto.wenote) which provides an optional feature - sync to cloud in the background after the app closes.
This is how we perform sync to cloud in the background (a rough sketch of this scheduling is shown after the list):
We use WorkManager.
In the application's onPause, we schedule a OneTimeWorkRequest with the constraint NetworkType.CONNECTED. The worker is scheduled to start with an 8 second delay.
In case of failure, we retry using BackoffPolicy.LINEAR, with a delay of 1.5 hours.
The maximum number of retries is 1. That means that from the time the app closes until it is re-opened again, the sync-to-cloud process executes at most 2 times.
The size of the data varies, from a few KB to a few hundred MB.
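Here is that sketch (placeholder names such as SyncWorker, not the actual code):

    import androidx.work.*;
    import java.util.concurrent.TimeUnit;

    // Sketch of the scheduling described above; SyncWorker is a placeholder name.
    void scheduleSync(android.content.Context context) {
        Constraints constraints = new Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED)
                .build();

        OneTimeWorkRequest syncRequest = new OneTimeWorkRequest.Builder(SyncWorker.class)
                .setConstraints(constraints)
                .setInitialDelay(8, TimeUnit.SECONDS)                            // start ~8 s after onPause
                .setBackoffCriteria(BackoffPolicy.LINEAR, 90, TimeUnit.MINUTES)  // retry after 1.5 hours
                .build();

        WorkManager.getInstance(context).enqueueUniqueWork(
                "sync-to-cloud", ExistingWorkPolicy.REPLACE, syncRequest);
    }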
Additional information on how we perform sync
We are using Google Drive REST API.
We download a zip file from the Google Drive App Data folder, merge the data locally, re-zip it, and upload the single zip file back to the Google Drive App Data folder.
The zip file size can range from a few KB to a few hundred MB, because our note taking app supports images as attachments.
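At a high level, the flow looks like this (a simplified sketch using the Drive v3 Java client; driveService, the file name, and the merge step are placeholders):

    import com.google.api.client.http.FileContent;
    import com.google.api.services.drive.Drive;
    import com.google.api.services.drive.model.File;
    import com.google.api.services.drive.model.FileList;
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import java.util.Collections;

    // Hypothetical sketch of the download, merge, upload flow described above.
    void syncBackup(Drive driveService, java.io.File localZip) throws Exception {
        // 1. Find the existing backup zip in the app data folder.
        FileList result = driveService.files().list()
                .setSpaces("appDataFolder")
                .setFields("files(id, name)")
                .execute();

        // 2. Download it (if present) so it can be merged with the local data.
        if (!result.getFiles().isEmpty()) {
            String fileId = result.getFiles().get(0).getId();
            try (OutputStream out = new FileOutputStream(localZip)) {
                driveService.files().get(fileId).executeMediaAndDownloadTo(out);
            }
        }

        // ... merge the downloaded notes with the local notes and re-zip ...

        // 3. Upload the merged zip back into the app data folder. (The real code
        //    would typically use files().update() to replace the existing file
        //    rather than create a new one.)
        File metadata = new File();
        metadata.setName(localZip.getName());
        metadata.setParents(Collections.singletonList("appDataFolder"));
        FileContent content = new FileContent("application/zip", localZip);
        driveService.files().create(metadata, content).setFields("id").execute();
    }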
Analysis
The only information we have is https://developer.android.com/topic/performance/vitals/bg-network-usage .
When an app connects to the mobile network in the background, the app wakes up the CPU and turns on the radio. Doing so repeatedly can run down a device's battery. An app is considered to be running in the background if it is in the PROCESS_STATE_BACKGROUND or PROCESS_STATE_CACHED state.
...
Android vitals considers background network usage excessive when an app is sending and receiving a combined total of 50 MB per hour while running in the background in 0.10% of battery sessions.
We start the background sync job 8 seconds after the Application's onPause. During that period, will the app be inside or outside PROCESS_STATE_BACKGROUND/PROCESS_STATE_CACHED? How can we avoid running while inside PROCESS_STATE_BACKGROUND/PROCESS_STATE_CACHED?
What does "running in the background in 0.10% of battery sessions" mean? How can we avoid that?
Another assumption was that the sync file is too large and uses too much data. We soon noticed this assumption might not be true: according to "Hourly mobile network usage (background)", the data size ranges from 0 MB to 5 MB.
Questions
My questions are:
What is the actual root cause of the "Excessive network usage (background)" warning? How can we accurately find out the root cause?
How do other apps which perform background sync (like Google Photos, Google Keep, Google Docs, ...) tackle this problem?
For your first question, "Excessive network usage (background)" is triggered when:
... an app is sending and receiving a combined total of 50 MB per hour while running in the background in 0.10% of battery sessions. A battery session refers to the interval between two full battery charges.
Source
To identify what is causing this, try using Battery Historian to analyse your app's battery usage over time. For us, it helped identify a repeating wakelock we didn't intend to introduce.
Here's an example of the output, showing us that excessive BLE scanning is causing a major battery impact:
For your second question, WorkManager is likely what you are after, as you correctly identified. It allows you to schedule a task, as well as a window you'd like it to occur in. Using it lets the OS optimise task scheduling for you, along with other apps' jobs. For example, instead of 6 apps all waking the device up every 10 minutes for their periodic tasks, the work can be scheduled to happen for all 6 apps at the same time, increasing the time spent in doze mode.
Notice the screenshot above includes a "JobScheduler Jobs" tab. After running an analysis you'll be able to see how your jobs are actually performing:
I've previously used Firebase JobDispatcher with great success (tutorial I wrote), which builds on the OS's JobScheduler API and is ultimately similar.
I see you're using WorkManager now (Jetpack's successor to JobDispatcher), but with an 8 second delay there's no chance for the OS to optimise your jobs. Is there any possibility of scheduling them with a minimum of a few seconds and as large a maximum as possible?
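For example, something along these lines (a sketch only; SyncWorker, context, and the values are illustrative) gives the OS far more room to defer and batch the work than a fixed 8 second delay does:

    import androidx.work.*;
    import java.util.concurrent.TimeUnit;

    // Sketch: relaxed constraints and a longer delay let the OS defer and batch
    // the sync with other apps' jobs (values are illustrative, not recommendations).
    Constraints relaxed = new Constraints.Builder()
            .setRequiredNetworkType(NetworkType.UNMETERED)  // e.g. wifi-only syncing
            .setRequiresBatteryNotLow(true)
            .build();

    OneTimeWorkRequest syncRequest = new OneTimeWorkRequest.Builder(SyncWorker.class)
            .setConstraints(relaxed)
            .setInitialDelay(15, TimeUnit.MINUTES)          // instead of 8 seconds
            .build();

    WorkManager.getInstance(context).enqueue(syncRequest);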
Further improvements
However, your current task scheduling setup may not be the root cause. Here are a few additional ideas that may provide the battery improvement you need. Their usefulness will become clearer after you've run Battery Historian and identified the root cause:
Consider whether wifi-only is a feasible default / option for data syncing. You'll experience better battery usage, fewer network issues, and likely better customer satisfaction.
Why does a note taking app need to sync a few hundred MB? Can you perhaps just sync the note that has changed, instead of the entire list of notes every time?
Is there any way that I can, over a period of, say, a day, see how much CPU (or battery) each app uses? Here's a sample from the BatteryDoctor app:
I think they do some work programmatically, like:
Get CPU time of an app
Get Network Data Usage of an app
From that, they can "calculate" the battery usage per app (in percent).
I know there are ways to do the second one (data usage), but the first one (CPU time) seems really hard to get.
I've found some approaches, but they don't seem to work:
Tracking long-term CPU usage of apps in Android (the "/proc/" + pid + "/stat" way) - I can't get this one to work; toks[2] always seems to be "S".
The command way ("sh -c top -m 1000 -d 1 -n 1 | grep \"" + pid + "\" ").
In any case, they only return the percentage of CPU the app is currently using, but I want the CPU time the app has used over a period (like the 24 hours in the example).
So does anyone know how to get that per-app CPU time programmatically?
Or is it maybe just the foreground/running time of an app?
Here's the answer to your question, but I'm unsure if it's really what you want.
If top calculates the %CPU of an app (call it %CPU_APP) over a window of, say, 500 ms, then the total time the app spends executing during that window is 500 ms × %CPU_APP. If you sum up all such intervals over, say, 24 hours, you get the total amount of time the app used the processor in that period.
For example, say the window is 500 ms and %CPU_APP is 20%; then the app uses 100 ms of that window. If %CPU_APP were constant over 24 hours, the total time the app spends running would be 24 hr × 3600 s/hr × %CPU_APP, i.e. 86400 s × 0.20 ≈ 17280 s, or about 4.8 hours. (I leave the rest of the math to the reader ;-)> )
The difficult part is finding the size of the window. You might be able to measure it by eyeballing the graininess of the plot if it isn't given in any documentation. And %CPU_APP is unlikely to be constant, so you'll have to collect those statistics and integrate over time.
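In code, the integration is just "sample %CPU once per window and accumulate window × %CPU". A hypothetical sketch:

    // Hypothetical accumulator: call addSample() once per sampling window with the
    // %CPU figure observed for the app (0-100); it integrates the app's busy time.
    class CpuTimeAccumulator {
        private final long windowMs;
        private long busyMs = 0;

        CpuTimeAccumulator(long windowMs) {
            this.windowMs = windowMs;
        }

        void addSample(double cpuPercent) {
            busyMs += Math.round(windowMs * cpuPercent / 100.0);
        }

        long totalBusyMillis() {
            return busyMs;
        }
    }

With a 500 ms window and a constant 20% reading over 24 hours, this accumulates to roughly 17,280,000 ms, i.e. about 4.8 hours of CPU time.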
If this isn't the answer you want, please clarify your question and resubmit to stack overflow.
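As an aside on the "/proc/" + pid + "/stat" approach mentioned in the question: toks[2] is the process state field (hence the constant "S"); the cumulative CPU time lives in fields 14 (utime) and 15 (stime), measured in clock ticks. A rough sketch of reading them (assuming the stat file is readable, which newer Android versions restrict for other apps' processes; the 100 ticks per second value is an assumption, sysconf(_SC_CLK_TCK) is authoritative):

    import java.io.BufferedReader;
    import java.io.FileReader;

    // Returns the total CPU time (user + system) consumed so far by the given pid,
    // in milliseconds. Sample it at the start and end of a period (e.g. 24 hours)
    // and take the difference to get the CPU time used during that period.
    static long cpuTimeMillis(int pid) throws Exception {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/" + pid + "/stat"))) {
            String line = r.readLine();
            // The second field, "(comm)", may contain spaces, so parse after the ')'.
            String[] toks = line.substring(line.lastIndexOf(')') + 2).split(" ");
            long utime = Long.parseLong(toks[11]);  // overall field 14: user-mode clock ticks
            long stime = Long.parseLong(toks[12]);  // overall field 15: kernel-mode clock ticks
            long ticksPerSecond = 100;              // assumed default clock tick rate
            return (utime + stime) * 1000 / ticksPerSecond;
        }
    }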
I only need a rough guide on this really at this point, though specific calculations would obviously be welcome too!
I'm looking at using Radius Networks' Android iBeacon Library in an app which will listen for iBeacon advertisements.
I'm new to this, but from what I understand it's the scanning for BT devices which is the most battery-intensive part of the BLE system, so it's not advised to have it running constantly. However, I would like to be able to 'catch' devices when they are in a certain area, i.e. a person walking through a lobby.
The Android Beacon Library's documentation states that its battery-saving default setting scans for 30 seconds every 5 minutes (actively scanning 10% of the time), and that this reduces the battery drain on a Nexus 5 from roughly 90 mA to 37 mA.
My question is: would scanning for 3 seconds every 30 seconds (also 10% of the time) achieve the same battery savings? Or is there an overhead involved in starting the scanning process which would mean the savings are less? And if so, by how much?
You would have to measure to be sure, but I suspect you would get similar power savings from the cycle you describe (the savings may be slightly less because of scan startup overhead, as you suggest).
The disadvantage of this approach is that you may miss detections in a 3 second interval, especially in areas with lots of beacons, distant beacons, or with beacons transmitting infrequently. You have to decide if it is worth the tradeoff.
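If you do want to try the 3 second / 30 second cycle, the library lets you set the background scan periods yourself; a small configuration sketch (values in milliseconds, "context" assumed):

    import org.altbeacon.beacon.BeaconManager;

    BeaconManager beaconManager = BeaconManager.getInstanceForApplication(context);
    // Scan for 3 s, then stay idle for 27 s, i.e. actively scanning ~10% of the time.
    beaconManager.setBackgroundScanPeriod(3000L);
    beaconManager.setBackgroundBetweenScanPeriod(27000L);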
To test power savings, do the following:
On a test device, uninstall as many apps as possible to limit background activity that might use power in unpredictable ways.
Install an app that implements background scanning on the cycle you describe and start it on your device.
Charge the battery to 100%.
Turn off WiFi and mobile data to prevent system downloads from using power in unpredictable ways.
Note the time, turn off the screen, and let the device rest, checking it every hour or so for battery level.
When the battery reaches 5%, note the time.
Repeat the above test with the app doing a constant scan in the background.
The end result of the above procedure will give you the time it took to drain the battery in both cases. From this you can calculate the percentage difference in power savings.
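As an aside, if you'd rather record the battery level programmatically while the device rests (instead of checking it by hand every hour), the sticky battery broadcast can be read without declaring a receiver; a small sketch:

    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.os.BatteryManager;

    // Reads the current battery level as a percentage from the sticky
    // ACTION_BATTERY_CHANGED broadcast.
    static float batteryPercent(Context context) {
        Intent status = context.registerReceiver(null,
                new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
        int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
        int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
        return 100f * level / scale;
    }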
Please let us know what you find!
Can someone please explain this timestamp discrepancy in the kernel log?
We wrote an app to wake up Android at a specified time; the app leverages the AlarmManager API and sets:
AlarmManager.ELAPSED_REALTIME_WAKEUP
The app works as intended and wakes up at a user-specified time correctly. But there is a discrepancy in the kernel timestamp. I traced the source code from AlarmManagerService.java to alarm-dev.c and confirmed that Android sets the alarm wake-up time correctly and sends it to the kernel (e.g. from the Java layer Android uses SystemClock.elapsedRealtime() to get the elapsed time, adds 100 seconds, converts the value into seconds and nanoseconds, and finally sends it to the kernel layer through JNI).
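For context, the wake-up is set up roughly like this (a simplified sketch; the PendingIntent is a placeholder):

    import android.app.AlarmManager;
    import android.app.PendingIntent;
    import android.content.Context;
    import android.os.SystemClock;

    // Simplified sketch: fire a wakeup alarm 100 seconds from now, measured on the
    // elapsed-realtime clock (which keeps counting while the device sleeps).
    // (On API 19+, set() may be deferred slightly; setExact() is the strict variant.)
    void scheduleWakeup(Context context, PendingIntent alarmIntent) {
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        long triggerAt = SystemClock.elapsedRealtime() + 100 * 1000;
        am.set(AlarmManager.ELAPSED_REALTIME_WAKEUP, triggerAt, alarmIntent);
    }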
However, when reading the dmesg log, there appears to be a discrepancy in the kernel timestamp. At the time the alarm_ioctl function was called with the state ANDROID_ALARM_SET(0), dmesg printed the following message:
[20450.036529] alarm 2 set 20544.720000000
This implies that [20450.036529] is the current time and 20544.720000000 is when AlarmManager wakes Android up. The value 20544.720000000 was set from the Android layer, and according to logcat's timestamps (e.g. logcat -v time), it is indeed when Android is supposed to wake up.
Going from the Android layer to the kernel layer takes less than a tenth of a second, so why is the delta 94.683471, which is 5.316529 seconds less than it should be? Or is the elapsed time different from the kernel time printed by dmesg?
Another interesting observation is that, as written above, the app does wake up at the user-specified time. So in this case, after the user set the alarm in the app, AlarmManager woke the tablet up in 100 seconds.
Thank you,
References:
AlarmManagerService.java
alarm-dev.c
OPTION 1
You might want to embed a timestamp inside the message instead of relying on the timestamp generated by printk(). This approach should at least give you a true measure of the time.
OPTION 2
You could investigate the API used by kernel/printk.c to get the timestamp.
If printk is using cpu_clock(), you might want to consider the following:
CPU Clock
cpu_clock(i) provides a fast (execution time) high resolution clock with bounded drift between CPUs. The value of cpu_clock(i) is monotonic for constant i. The timestamp returned is in nanoseconds.
I'm writing an app that implements the SensorEventListener interface to listen for changes to the barometer, which I log in a logfile. Before I start logging, I prepend the system time in milliseconds (let's call this Millisecond Timestamp 1, or MT1), and after the logging is finished, I append another system timestamp in milliseconds (let's call this Millisecond Timestamp 2, or MT2).
The SensorEvent has its own timestamp (which I will call the Nanosecond Timestamp, or NT), which I also log, between MT1 and MT2.
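For reference, the logging structure looks roughly like this (a simplified sketch; the log sink is a placeholder):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Simplified sketch of the logging described above: MT1 and MT2 come from
    // System.currentTimeMillis(), NT comes from event.timestamp (nanoseconds of
    // uptime since boot).
    class BarometerLogger implements SensorEventListener {
        private final SensorManager sensorManager;

        BarometerLogger(SensorManager sensorManager) {
            this.sensorManager = sensorManager;
        }

        void start() {
            Sensor pressure = sensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE);
            log("MT1 " + System.currentTimeMillis());
            sensorManager.registerListener(this, pressure, SensorManager.SENSOR_DELAY_FASTEST);
        }

        void stop() {
            sensorManager.unregisterListener(this);
            log("MT2 " + System.currentTimeMillis());
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            log("NT " + event.timestamp + " pressure " + event.values[0]);
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }

        private void log(String line) {
            // placeholder: append the line to the logfile
        }
    }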
The problem is this: if the phone goes to sleep during the logging, SensorEvents no longer seem to arrive at the rate I set (for example, SENSOR_DELAY_FASTEST). Furthermore, even though the SensorEvent timestamp is supposed to represent the nanoseconds of uptime since the phone was rebooted, there are "missing" nanoseconds - the time gap between MT2 and MT1 is often twice or more the gap between NT_N (where N is the number of samples) and NT_1.
I've been able to sort of resolve this issue by holding a PowerManager.WakeLock, but that results in my app being a huge power hog and seems like a really clumsy hack. Is there any other way to work around this problem?
Sensors are not guaranteed to work if the device goes to sleep, or even if the screen turns off (but the CPU has not necessarily yet powered down). The behavior is undocumented and definitely seems to vary by device.
Either settle for being "a huge power hog" or redesign your app to not require sensor readings except when the screen is on.
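If you do keep the wake lock approach, at least make it a partial wake lock (CPU stays on, screen is allowed to turn off) and hold it only for the duration of a logging session; a sketch (the tag string is arbitrary):

    import android.content.Context;
    import android.os.PowerManager;

    // Sketch: hold a partial wake lock only while a logging session runs, with a
    // timeout as a safety net, and always release it afterwards.
    PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
    PowerManager.WakeLock wakeLock =
            pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:barometer-logging");
    wakeLock.acquire(10 * 60 * 1000L);  // optional timeout so a bug can't hold it forever
    try {
        // ... run the logging session ...
    } finally {
        wakeLock.release();
    }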
Sensors in Android are definitely designed to be used actively by foreground apps, not for long-term logging or monitoring purposes.