I am using animations in my application while switching among Activities and while populating a ListView.
Now I am wondering whether this uses a lot more CPU power, which would in turn use more battery.
Thanks.
Well, any calculation by the CPU uses power, and I have to believe that animating those items takes a lot more calculation, so I would say yes, it will affect the battery life.
I can't imagine it would be drastic though... people use Live Wallpapers, and some searching on the nets shows that those (depending on the device) can use an extra 2-4% power, and that's for something that runs 100% of the time when visible, as opposed to list creation which happens in a finite amount of time.
Upshot: I wouldn't worry about it.
Sure it does, but I don't believe it is significant at all. Those are really simple operations compared to the whole process of launching an activity or loading lists. You can test it if you like by using Traceview.
TraceView Sample
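If you want to try it, the method-tracing calls look roughly like this (a minimal sketch; the trace name and the wrapped work are placeholders, not code from the question):

```kotlin
// Sketch: wrap the code you want to inspect (e.g. list population plus animations)
// in method tracing, then open the resulting .trace file in Traceview / the profiler.
// "list_animation" and buildAnimatedList() are placeholder names.
import android.os.Debug

fun buildAnimatedList() {
    Debug.startMethodTracing("list_animation")
    try {
        // ... populate the ListView and start the animations here ...
    } finally {
        Debug.stopMethodTracing()
    }
}
```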
Related
So, I have a Service that is important for the application's logic and thus needs to be alive all the time.
First things first: I created it as a foreground service.
Whenever the user starts a run, the service starts doing a lot of heavy work. It uses the FusedLocation API and different types of sensors and does a lot of different calculations. At this point the phone starts to heat up (CPU usage is high). When the run stops, the heating stops and CPU usage drops.
This makes me think that the main problem is when these sensors are used and the calculations are made, but there are a lot of possibilities that could cause this, and I want to understand: how can I deeply analyze battery usage in this case? I read that Battery Historian should be the way to go, but the information there is complicated and does not give me much to work with. How can I use this data to understand which class/runnable/code part is responsible for the CPU usage? Maybe there are better ways to solve this? Thanks in advance.
Detailed analysis of what's happening using the CPU profiler (after the suggestion):

1. (Image) The application is opened and a few different screens are switched to and from. Nothing special. The phone does not heat up, and from the CPU analysis everything also looks pretty okay.

2. The user starts a run and enters the heavy state. The yellow (light green) rectangle shows the previously mentioned "important service" that plays an important role in the application. The called functions do not use too much CPU time relative to the whole running trip, which makes me think I have to look somewhere else.

What I saw is that CPU usage increases heavily whenever I lock the phone (while the service is running). From the CPU Bottom Up analysis I cannot understand what is causing the problem (orange rectangle); it looks like a bunch of Android internals are running, but it does not give me any clue.
From the CPU profiler I understand that I have a serious problem with CPU usage, but right now I do not understand what is causing it (which piece of code/class/function). I know for a fact that whenever my important service is created (on app start) I use a PARTIAL_WAKE_LOCK to keep the CPU from going to sleep and to keep the service alive at all times. As I understand it, I need to find a different solution for this, because the CPU drains too much battery.
How could I find this using only the profiler? I know this fact only because it is in my code, and the profiler stack does not tell me much about it (maybe I'm not looking in the right place?).
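For reference, a partial wake lock of the kind mentioned above is typically acquired and released roughly like this (a minimal sketch; the class name, tag and timeout are assumptions, not the actual code from the question):

```kotlin
// Minimal sketch of acquiring/releasing a PARTIAL_WAKE_LOCK around the heavy tracking work.
// TrackingSession, the tag and the 30-minute timeout are assumptions for illustration.
import android.content.Context
import android.os.PowerManager

class TrackingSession(context: Context) {
    private val wakeLock: PowerManager.WakeLock =
        (context.getSystemService(Context.POWER_SERVICE) as PowerManager)
            .newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "app:trackingSession")

    fun start() {
        // Holding a partial wake lock keeps the CPU awake even with the screen off,
        // which is why the drain can continue after the phone is locked.
        wakeLock.acquire(30 * 60 * 1000L) // safety timeout so the lock can't be held forever
    }

    fun stop() {
        if (wakeLock.isHeld) wakeLock.release()
    }
}
```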
I found the root cause of the high CPU usage.
I did a detailed investigation of the threads and found out that the DefaultDispatcher threads were using A LOT of CPU.
It turns out there was a bug in the Kotlin Coroutines library, and the latest version provides fixes. When I did my tests I used version 1.3.1, but changing the version to 1.3.3 gave me a big improvement: from 40% CPU usage down to 10%.
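For anyone looking for the concrete change, the dependency bump looks roughly like this (the exact artifact coordinate is an assumption; bump whichever kotlinx-coroutines artifacts the project already depends on):

```kotlin
// build.gradle.kts (app module): sketch of the version bump described above.
// The artifact name is an assumption; the 1.3.1 -> 1.3.3 versions are from the answer.
dependencies {
    // was: implementation("org.jetbrains.kotlinx:kotlinx-coroutines-android:1.3.1")
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-android:1.3.3")
}
```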
Well, I have read a lot of answers to similar questions (even if they are old, from around 2013-2014) and I understood that it is not possible to know exactly, since Android doesn't count hardware usage as usage of the app, plus there are some other possible problems like services etc.
At the moment I'm trying to compare the performance of an app using one protocol to reach a goal against the performance of the same app using another (less well-known) protocol to reach the same goal. The default Android battery analyzer is good enough for me, since both cases are about 90% the same and I know how the protocols work.
My problem is that I'm not sure which tool is best to measure the mAh consumed by my app. I know there are some external apps that show it, but I would prefer to use the default one. I believe this is important not only for me but also for other people who might have to compare different protocols.
I know that I can measure it programmatically, and I've done that too: I save the battery percentage when the app is opened and how much has been consumed by the time it gets closed. But it isn't an exact measure, since while my app is open other apps can do heavy work and add noise to what I'm measuring, so I would prefer to use Android's battery analyzer.
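For context, the programmatic measurement mentioned above is usually something like this (a minimal sketch; where the calls live and how the values are logged are assumptions):

```kotlin
// Minimal sketch of the "save the percentage at open/close" approach described above.
// Where these calls live (e.g. onStart/onStop of an Activity) is an assumption.
import android.content.Context
import android.os.BatteryManager

fun batteryPercent(context: Context): Int {
    val bm = context.getSystemService(Context.BATTERY_SERVICE) as BatteryManager
    // Current battery level as a percentage (0-100), available on API 21+.
    return bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY)
}

// Usage sketch:
// val startLevel = batteryPercent(this)              // when the app is opened
// val consumed = startLevel - batteryPercent(this)   // when it is closed
```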
Get a spare device. Charge it fully, then run the protocol until shutdown without any other interaction (no YouTube or anything) and note how long it lasted. Repeat with the other protocol. Imho that is a fair way to compare. Note that every device behaves differently, and it may or may not be possible to transfer this result to other devices, e.g. with different network chips, processors or even firmware versions.
For a fairer comparison I think you should also compare how the protocols work, i.e. number of interactions, payload size etc., because the power consumption can only ever be an estimate.
I can't find any better wording for my question.
At some point inside my app I have set up some pretty intensive animation. The thing is, on high-end devices the animation runs smoothly and is pleasant to the eye. On the other hand, one low-end device I tested had pretty bad performance while animating.
Trying to put user experience first, I'd like to run this stuff on devices that are computationally sufficient, and somehow "turn it off" on other devices.
I have thought for a while about how to discriminate between devices. The only thing that comes to my mind is API level: considering the platform fragmentation and manufacturers' delays, I believe there should be some kind of correlation between API level and performance. But there might be something better.
Do you have any idea?
Just to clarify, the animation is not something I can lighten or simplify in any way (e.g., using smaller size drawables, worse quality bitmaps, .... ). It's mainly measuring and layout stuff.
Please feel free to edit the tags I chose.
The first time your app runs, you could run some kind of micro-benchmark that would measure the CPU performance for no more than a second or two. I would suggest not disabling the animations automatically, but warn the user if the device seems slow and ask if they'd like to disable them.
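A minimal sketch of what such a one-off benchmark could look like (the workload, the one-second budget and the threshold are all arbitrary assumptions you would need to calibrate yourself):

```kotlin
// Rough one-time micro-benchmark sketch: count how much floating-point work finishes
// in a fixed time budget. The workload, the 1-second budget and the threshold are
// arbitrary assumptions; calibrate them on devices you consider "fast enough".
import android.os.SystemClock

fun looksFastEnoughForAnimations(): Boolean {
    val budgetMs = 1000L
    val end = SystemClock.elapsedRealtime() + budgetMs
    var iterations = 0L
    var x = 1.0
    while (SystemClock.elapsedRealtime() < end) {
        x = Math.sqrt(x + iterations)   // arbitrary work so the loop isn't trivially empty
        iterations++
    }
    // Persist the verdict (e.g. in SharedPreferences) so the benchmark runs only once.
    return iterations > 5_000_000 && x >= 0.0
}
```

As suggested above, it is probably better to use the result to warn the user and offer a toggle rather than silently disabling the animations.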
I am trying to speed up my app start-up time (currently ~5 seconds due to slow Guice binding), and when I run traceview I'm seeing pretty big variations (as high as 30%) in measurements from executions of the same code.
I would assume this is from garbage collection differences, but the time spent in startGC according to traceview is completely insignificant.
This is particularly aggravating because it's very difficult to determine what the effects were of my optimizations when the measurements are so variable.
Why does this happen? Is there any way to make the measurements more consistent?
I suppose you are starting profiling from the code rather than turning it on manually? But anyway, even if you use Debug.startMethodTracing and Debug.stopMethodTracing from a specific point of your code, you will receive different measurements.
You can see here that Traceview disables the JIT and, I believe, some other optimizations, so during profiling your code is executed more slowly than without it. Also, your code's performance depends on the overall system load. If some other app is doing any heavy operation in the background, your code will take longer to execute. So you should definitely expect results that are slightly different, and the start-up time won't be constant.
Generally it is not so important how long your method takes to execute, but how much CPU time it consumes compared to other methods.
Sounds like measurement is not your ultimate goal. Your ultimate goal is to make it faster.
The way to do that is by finding what activities are accounting for a large fraction of time, so you can find a better way to do them.
I said "finding", not "measuring", and I said "activities", not "routines".
To do this, it is only necessary to sample the program's state.
Many profilers collect a large number of samples of the program's state, but then they all fall into the same logic - they summarize, on the theory that all you want is measurements, and you don't really care what they are measurements of.
In fact, if rather than getting summaries you could examine some of the samples in detail, it would tell you a great deal more about how the program is spending its time.
What's more, if in as few as two (2) samples you could see the program pursuing some goal, and it was something you could improve significantly, you would see a significant speedup.
This process can be repeated several times, and that's how you can really optimize it.
The process is explained in more detail here, and there's a use case here.
If you are doing any network-related activity on startup, then this tool can help you understand what is happening and how you might be able to optimize connections and caching. http://developer.att.com/developer/legalAgreementPage.jsp?passedItemId=9700312
My game uses too much battery. I don't know exactly how much it uses as compared to comparable games, but it uses too much. Players complain that it uses a lot, and a number of them note that it makes their device "run hot". I'm just starting to investigate this and wanted to ask some theoretical and practical questions to narrow the search space. This is mainly about the iOS version of my game, but probably many of the same issues affect the Android version. Sorry to ask many sub-questions, but they all seemed so interrelated I thought it best to keep them together.
Side notes: My game doesn't do network access (called out in several places as a big battery drain) and doesn't consume a lot of battery in the background; it's the foreground running that is the problem.
(1) I know there are APIs to read the battery level, so I can do some automated testing. My question here is: About how long (or perhaps: about how much battery drain) do I need to let the thing run to get a reliable reading? For instance, if it runs for 10 minutes is that reliable? If it drains 10% of the battery, is that reliable? Or is it better to run for more like an hour (or, say, see how long it takes the battery to drain 50%)? What I'm asking here is how sensitive/reliable the battery meter is, so I know how long each test run needs to be.
(2) I'm trying to understand what are the likely causes of the high battery use. Below I list some possible factors. Please help me understand which ones are the most likely culprits:
(2a) As with a lot of games, my game needs to draw the full screen on each frame. It runs at about 30 fps. I know that Apple says to "only refresh the screen as much as you need to", but I pretty much need to draw every frame. Actually, I could put some work into only drawing the parts of the screen that had changed, but in my case that will still be most of the screen. And in any case, even if I can localize the drawing to only part of the screen, I'm still making an OpenGL swap buffers call 30 times per second, so does it really matter that I've worked hard to draw a bit less?
(2b) As I draw the screen elements, there is a certain amount of floating point math that goes on (e.g., in computing texture UV coordinates), and some (less) double precision math that goes on. I don't know how expensive these are, battery-wise, as compared to similar integer operations. I could probably cache a lot of these values to not have to repeatedly compute them, if that was a likely win.
(2c) I do a certain amount of texture switching when rendering the scene. I had previously only been worried about this making the game too slow (it doesn't), but now I also wonder whether reducing texture switching would reduce battery use.
(2d) I'm not sure if this would be practical for me but: I have been reading about shaders and OpenCL, and I want to understand if I were to unload some of the CPU processing to the GPU, whether that would likely save battery (in addition to presumably running faster for vector-type operations). Or would it perhaps use even more battery on the GPU than on the CPU?
I realize that I can narrow down which factors are at play by disabling certain parts of the game and doing iterative battery test runs (hence part (1) of the question). It's just that that disabling is not trivial and there are enough potential culprits that I thought I'd ask for general advice first.
Try reading this article:
Android Documents on optimization
What works well for me is reducing the amount of garbage collection. For example, when programming for a desktop computer you're (or I'm) used to defining variables inside loops when they are not needed outside of the loop; on Android this causes a massive amount of garbage collection (and I'm not talking about primitive variables, but big objects).
Try to avoid things like that.
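A tiny sketch of the kind of change meant here (the Particle class and function names are made up for illustration):

```kotlin
// Sketch of the allocation pattern described above; Particle and the update functions
// are made-up names for illustration.
class Particle(var x: Float = 0f, var y: Float = 0f)

// Allocates a new object every iteration -> lots of garbage to collect each frame.
fun updateAllWasteful(count: Int) {
    for (i in 0 until count) {
        val p = Particle(i.toFloat(), i.toFloat())
        // ... use p ...
    }
}

// Reuses a single scratch object across iterations -> far less GC pressure.
private val scratch = Particle()
fun updateAllReusing(count: Int) {
    for (i in 0 until count) {
        scratch.x = i.toFloat()
        scratch.y = i.toFloat()
        // ... use scratch ...
    }
}
```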
One little tip that really helped me get battery usage (and the warmth of the device!) down was to throttle the FPS in my custom OpenGL engine.
Especially while the scene is static (e.g. in a turn-based game, or when the user tapped pause), throttle the FPS down.
Or throttle if the user isn't responsive for more than 10 seconds, like a screensaver on a desktop PC. In the real world users often get distracted while using mobile devices. Don't let your app drain the battery while your user figures out which subway station he's in ;)
Also, on the iPhone 60 FPS is sometimes the default; throttling this manually to 30 FPS is barely visible and saves you about half of the GPU cycles (and therefore a lot of battery!).
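On Android, a minimal sketch of that kind of throttling in a custom render loop might look like the following (the FrameLimiter class, the FPS values, and the render()/sceneIsStatic() hooks in the usage comment are assumptions about the engine, not a real API):

```kotlin
// Sketch of a manual frame limiter for a custom render loop.
// The FPS values and the hooks in the usage comment are assumptions about your engine.
import android.os.SystemClock

class FrameLimiter(var targetFps: Int = 30) {
    private var lastFrameMs = 0L

    // Call once per loop iteration before drawing; sleeps away any leftover frame budget.
    fun waitForNextFrame() {
        val frameBudgetMs = 1000L / targetFps
        val elapsed = SystemClock.uptimeMillis() - lastFrameMs
        if (elapsed < frameBudgetMs) {
            try {
                Thread.sleep(frameBudgetMs - elapsed)
            } catch (e: InterruptedException) {
                Thread.currentThread().interrupt()
            }
        }
        lastFrameMs = SystemClock.uptimeMillis()
    }
}

// Usage sketch inside the render loop:
// limiter.targetFps = if (sceneIsStatic || userIsIdle) 5 else 30
// limiter.waitForNextFrame()
// render()
```

If the engine draws via a GLSurfaceView, switching to RENDERMODE_WHEN_DIRTY while nothing is moving and calling requestRender() only when something changes has a similar effect.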