Looking at the GitHub repositories and the power_profile.xml of devices based on the same SoC, I noticed that Motorola devices use a different cluster order.
On other devices, Cluster0 holds the slower cores (lower voltage and frequency) and Cluster1 the faster ones, but Motorola has it the other way around, i.e. the faster cores in Cluster0 and the slower ones in Cluster1.
Does this mean that the faster cores are treated (or merely labeled) as the slower ones? And does this have any effect on energy consumption?
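For reference, the cluster-related entries I am comparing look roughly like this (paraphrased and simplified; the exact entry names and values vary between Android versions and vendors):

```
<!-- Hypothetical, simplified power_profile.xml excerpt.
     On most devices cluster0 lists the slower cores and cluster1 the
     faster ones; the Motorola files reverse this order. -->
<array name="cpu.clusters.cores">
    <value>4</value>  <!-- number of cores in cluster0 -->
    <value>4</value>  <!-- number of cores in cluster1 -->
</array>
<array name="cpu.core_speeds.cluster0">
    <value>1800000</value>  <!-- kHz, illustrative value -->
</array>
<array name="cpu.core_speeds.cluster1">
    <value>2400000</value>  <!-- kHz, illustrative value -->
</array>
```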
Related
My device can display 3 videos simultaneously without a problem. But I suppose not all the devices my app will be used on (API 21+) have the CPU cores and RAM to do that.
What would be a good way to determine at runtime how many videos the device can handle?
The best I can come up with at the moment is to always allow 3 videos and adjust the video resolution I request from the server based on the screen width: high-resolution devices tend to have better hardware.
"High-resolution devices tend to have better hardware" - correct, but not always the case. I would not depend on this alone.
"But I suppose not all the devices my app will be used on (API 21+) have the CPU cores and RAM to do that." - You could retrieve the device's hardware specs and base the decision on those, along with your resolution check, if you are confident that this is enough to determine whether the device can run 3+ videos.
Benchmark how playing a single video affects the hardware (e.g. using 0.7 GB of RAM, CPU usage at 47%, etc.), and from that you can make a rough estimate for the rest.
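A minimal sketch of the hardware-spec approach, assuming available RAM and core count are the limits you care about; the thresholds are arbitrary placeholders you would calibrate against your own single-video benchmark:

```java
import android.app.ActivityManager;
import android.content.Context;

public final class VideoCapacityEstimator {

    // Rough sketch: derive a simultaneous-video count from core count and
    // currently available RAM. The thresholds are made-up assumptions, not
    // tested values -- tune them with your own measurements.
    public static int maxSimultaneousVideos(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo memoryInfo = new ActivityManager.MemoryInfo();
        am.getMemoryInfo(memoryInfo);

        long availableMb = memoryInfo.availMem / (1024 * 1024);
        int cores = Runtime.getRuntime().availableProcessors();

        if (cores >= 4 && availableMb >= 1024) {
            return 3;
        } else if (cores >= 2 && availableMb >= 512) {
            return 2;
        }
        return 1;
    }
}
```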
I was wondering what the maximum framerate is that can be achieved on iOS and Android devices with Unity3D. Can 60 fps, or even 100 fps, be reached?
What FPS should I provide:
Android as a platform aims to provide 60 fps as a standard. However, keep in mind that this is for applications which come nowhere near the GPU requirements of a game.
If you can't do all the calculations you require within 16 ms (60 fps), you should aim for 30 fps and give the user a consistent experience. Users will quickly detect variations in frame rate and interpret them as a performance issue with their phone.
Never over-promise and under-deliver.
Modern phones claim to have quad-core processors and other impressive hardware, but you are rarely taking advantage of a phone's full capabilities; the hardware and the Android platform are designed to use as little battery as possible and to cut corners where they can.
Your users' phones are typically idling, and their full potential is activated only for brief bursts of milliseconds to perform work and catch up on operations.
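If you want to check at runtime whether you are staying inside that 16 ms budget, one option on Android (outside of Unity's own profiler) is to watch the gap between vsync callbacks with Choreographer. A minimal sketch, for illustration only:

```java
import android.util.Log;
import android.view.Choreographer;

// Sketch: logs frames whose frame-to-frame time exceeds the ~16.7 ms
// budget of 60 fps. Start it once from the UI thread.
public class FrameTimeLogger implements Choreographer.FrameCallback {

    private long lastFrameTimeNanos = 0;

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos != 0) {
            float frameMs = (frameTimeNanos - lastFrameTimeNanos) / 1_000_000f;
            if (frameMs > 16.7f) {
                Log.w("FrameTimeLogger", "Slow frame: " + frameMs + " ms");
            }
        }
        lastFrameTimeNanos = frameTimeNanos;
        Choreographer.getInstance().postFrameCallback(this);
    }
}
```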
What is the max performance on Android:
You can search for Android benchmark tests using Unity; keep a very open mind about what each phone can handle, as there are more than 12,000 hardware configurations for Android.
Your development and test phones should be expected to be significantly better than your users' phones.
I have an AndEngine game, which gets:
45-50-55 fps on a stock Galaxy S3; the phone gets warm.
a stable 60 fps on a rooted CM11 Galaxy S3 in performance mode (maximum CPU frequency = 1400 MHz); with root you can change the CPU frequency. The phone gets almost hot.
40-45 fps on my Nexus 6 (without root), even though this phone is faster than the Galaxy S3! The phone stays almost cold.
The game's resolution is the same on both devices!
The main question is: why is my game's frame rate roughly the same on both devices? On the Nexus 6 it should be faster!
The game is: https://play.google.com/store/apps/details?id=com.hattedskull.gangsters
When a CPU is faster but shows less performance than other CPUs, it could be that only 1 of its 2, 4, 8, or 12 cores is being used. On a quad-core that is just 25% usage, so the CPU stays cold; a single-core CPU will always run at 100% and gets warm. Multithreading is the solution: it will "force" the CPU towards 100%, and the game will run faster (see the sketch below).
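A minimal sketch of spreading independent, CPU-heavy work (AI, pathfinding, etc.) across the available cores with a thread pool; the Runnable tasks are hypothetical placeholders for your own game logic, and results still need to be handed back to the engine's update thread:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: a pool sized to the number of CPU cores, so heavy work can run
// on more than one core instead of saturating a single thread.
public class WorkerPool {

    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public void submitAll(List<Runnable> tasks) {
        for (Runnable task : tasks) {
            pool.execute(task);
        }
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```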
I am answering my own question, because the solution was not obvious (to me):
Performance mode: after rooting the phone, I changed the CPU scaling governor to performance; changing the minimum frequency was not necessary. Now the phone gets hot, not warm!
Bad performance from mobile ads: I switched off all internet connections (Wi-Fi, mobile data), so the mobile ad disappeared from the game. I use AdMob in my game, and it does not have the best performance.
These two things caused the FPS drops in my game(s)!
The Google Fit app, when installed, measures how long you walk or run, and also counts your steps, all the time. Strangely, though, using it does not seem to drain the battery. Other apps like Moves, which seems to record step counts fairly accurately, state that they use a lot of power because they constantly monitor the GPS and the accelerometer.
I imagine several possibilities:
It wakes the phone every minute or so, analyses the sensors for a few seconds, and then sleeps again. However, the records seem to be accurate to the minute, so the wake-ups must be frequent.
It actually keeps the accelerometer on all the time and analyzes the data only after the accelerometer's measurement buffer is full. However, I think the accelerometer has only a small buffer for the latest measurements.
It uses GPS to estimate the number of steps instead of actually counting them. However, this should not be the case, since it works even indoors.
The app still feels magical: it counts steps the whole time without perceptible battery drain.
Thanks for asking this question!
Battery is one of our topmost concerns, and we work hard to optimize Google Fit's battery usage and provide a magical experience.
Google Fit uses a mix of sensors (accelerometer, step counter, significant motion sensor), machine learning and heuristics to get the data right. Our algorithm is pretty similar to your first option, plus a little bit of magic.
We periodically poll the accelerometer and use machine learning and heuristics to correctly identify the activity and its duration.
For devices with hardware step counters, we use these step counters to monitor step counts. For older devices, we use the activity detected to predict the right number of steps.
Our algorithms merge these activities, steps and sometimes location to correlate and further increase accuracy.
We do not poll GPS to estimate steps or detect activities.
-- Engineer on Google Fit Team.
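For context, the kind of periodic, batched sensor polling described above can be done with the public SensorManager API. This is only an illustration of the general technique, not Google Fit's actual implementation:

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Illustration: register the accelerometer with a large max report latency
// so the sensor hardware can batch samples and deliver them in bursts,
// letting the main CPU sleep in between (API 19+).
public class BatchedAccelerometerReader implements SensorEventListener {

    public void start(Context context) {
        SensorManager sm =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // ~50 Hz sampling, up to 60 s of buffered samples per delivery.
        sm.registerListener(this, accel, 20_000, 60_000_000);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Feed event.values into your own activity/step detection here.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```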
On some very recent phones like the Nexus 5 (released in late 2013 with Android 4.4 KitKat), there is a dedicated low-power CPU core that can serve as a pedometer. Since this core consumes very little power and can compute steps by itself without the need for the entire CPU or the GPS, overall battery use is reduced greatly. On the recent iPhones, there is a similar microcontroller called the M7 coprocessor in the iPhone 5s and the M8 in the iPhone 6.
More information here:
https://developer.android.com/about/versions/kitkat.html
http://nexus5.wonderhowto.com/how-to/your-nexus-5-has-real-pedometer-built-in-heres-you-use-0151267/
http://www.androidbeat.com/2014/01/pedometer-nexus5-hardware-count-steps-walked/
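Applications can read this dedicated step counter through the regular sensor framework, without touching the GPS or running their own accelerometer math; a minimal sketch, assuming the device exposes the sensor (API 19+):

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch: TYPE_STEP_COUNTER reports the cumulative step count since boot,
// computed by the low-power sensor hardware rather than the main CPU.
public class StepCounterReader implements SensorEventListener {

    public void start(Context context) {
        SensorManager sm =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor stepCounter = sm.getDefaultSensor(Sensor.TYPE_STEP_COUNTER);
        if (stepCounter != null) {
            sm.registerListener(this, stepCounter, SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        long stepsSinceBoot = (long) event.values[0];
        // Subtract a stored baseline to get steps for your own time window.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```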
Having a 3-year-old HTC One X, I can say that THERE IS NO DEDICATED HARDWARE; Google Fit just uses the standard sensors in a very clever way. I come from Runtastic Pedometer: there is clear battery consumption when it is in use, and it would be impossible to keep it on all the time because it needs the full accelerometer. On the other hand, if you stand still and shake the phone, Runtastic will count the shakes, while Google Fit apparently does nothing... Still, it works perfectly when you actually walk or run. Magic.
Google Fit tries to learn the user's step pattern and to build its own personal walking patterns and clusters. This eliminates the need for heavy mathematical calculations every time sensor data comes in, which makes Google Fit more power efficient than other software pedometer apps. Having said that, there is a compromise on accuracy: in the power-versus-accuracy trade-off, Google seems to lean towards the power side here.
At the moment, the most power-efficient detection happens on Samsung's flagship and other high-end models, thanks to Samsung's dedicated hardware chip! No matter how power efficient your software pedometer algorithm is, it is hard to beat the advantage of a dedicated hardware unit. I have also heard that Google is bringing a dedicated pedometer hardware unit to upcoming Nexus devices.
It would seem that the solution is device dependent: on devices where a motion coprocessor or a "wimpier" core is available for low-power operations, it would default to that once the buffer is full or a similar condition is met. On devices where a low-power core is not available, it seems that waking the device could trigger a just-in-time operation that would/should finish by the time the app is called.
While the Nexus 5 does have a dedicated "low-power" pedometer built in, it isn't as "low power" as you might think.
My Nexus 5 battery life was decreased by about 25% when I had Google Fit Activity Detection switched on.
Also, the pedometer doesn't show up in the battery usage stats, presumably because it is a hardware thing.
I don't know for the other phones out there, but Google Fit was really draining my battery life on my Nexus 5. Disabling it definitely improved my battery life.
I have developed three versions (C, RenderScript and NEON intrinsics) of a video processing algorithm using the Android NDK (using the C++ APIs for RenderScript). The C/RS/NEON code is called at the native level on the NDK side from the Java front end. I found that, for some reason, the NEON version consumes a lot more power than the C and RS versions. I used Trepn 5.0 for my power testing.
Can someone clarify the power consumption levels of each of these approaches: C, RenderScript (GPU) and NEON intrinsics? Which one consumes the most?
What would be the ideal power consumption level for RS code? Since the GPU runs at a lower clock frequency, its power consumption should be lower!
Do the RenderScript APIs focus on power optimization?
Video: 1920x1080 (20 frames)
C -- 11115.067 ms (0.80 mW)
RS -- 9867.170 ms (0.43 mW)
NEON intrinsics -- 9160 ms (1.49 mW)
First, the power consumption of RenderScript code depends on the type of SoC and on the frequencies/voltages at which its CPUs and GPUs operate.
Even if you look at CPUs from the same vendor, say ARM's A15 and A9, the A15 is more power hungry than the A9. Similarly, a Mali 4xx GPU versus a 6xx also exhibits power consumption differences for the same task.
In addition, power deltas also exist between different vendors, for instance Intel and ARM CPUs doing the same task. Similarly, one would notice power differences between a Qualcomm Adreno GPU and, say, an ARM Mali GPU, even if they are operating at the same frequency/voltage levels.
If you use a Nexus 5, you have a quad A15 CPU cranking at 2.3 GHz per core. RenderScript pushes CPUs and GPUs to their highest clock speeds, so on this device I would expect the power consumption of RS code based on CPU/NEON, or just the CPU, to be highest (depending on the type of operations you are doing), followed by the RS GPU code. So, bottom line: for power consumption, the type of device you are using matters a lot because of the differences in the SoCs they use. In the latest generation of SoCs out there, I expect CPU/NEON to be more power hungry than the GPU.
RS will push the CPU/GPU clock frequency to the highest possible speed, so I am not sure one could do meaningful power optimizations here. Even if one could, those power savings would be minuscule compared to the power consumed by the CPUs/GPU at their top speed.
Power consumption is such a huge problem on mobile devices that, from a power angle, you would probably be fine with your filters processing a few frames in the computational imaging space. But the moment one uses RenderScript for real video processing, the device heats up quickly even at lower video resolutions, and then the OS's thermal managers come into play. These thermal managers reduce the overall CPU speeds, causing unreliable performance with CPU RenderScript.
Responses to comments
Frequency alone does not determine power consumption; it is the combination of frequency and voltage. For instance, a GPU running at, say, 200 MHz at 1.25 V and one at 550 MHz at 1.25 V will likely consume about the same power. Depending on how the power domains are designed in the system, something like 0.9 V should be enough for 200 MHz, and the system should in theory transition the GPU power domain to a lower voltage when the frequency comes down. But various SoCs have various issues, so one cannot guarantee a consistent voltage and frequency transition. This could be one reason why GPU power can be high even for nominal loads.
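As a rough first-order rule of thumb (the standard CMOS switching model, not a measurement of any particular SoC), dynamic power scales as

P_dynamic ≈ α · C · V² · f

where α is the activity factor, C the switched capacitance, V the supply voltage and f the clock frequency. Because voltage enters quadratically, lowering frequency without also lowering voltage saves relatively little, and for a fixed amount of work the dynamic energy spent at a given voltage is roughly the same regardless of frequency, which is presumably what the comparison above is getting at.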
So, whatever the complexities, if you are holding the GPU voltage at something like 1.25 V @ 600 MHz, your power consumption will be pretty high and comparable to that of CPUs cranking at 2 GHz @ 1.25 V...
I tested NEON intrinsics with a 5x5 convolve, and it is pretty fast (3x-5x) compared to plain CPU code for the same task. The NEON hardware is usually in the same power domain as the CPUs (aka the MPU power domain), so all CPUs are held at that voltage/frequency even when only the NEON hardware is working. Since NEON performs the given task faster than the CPU, I wouldn't be surprised if it relatively consumes more power than the CPU for that task. Something has to give if you are getting faster performance - it is obviously power.