I'm trying to use the TFLite benchmark tool with a MobileNet model, checking the final inference time in microseconds to compare different models. The issue I'm facing is varying results between runs. I also found a section in the documentation about reducing variance between runs on Android; it explains how to set the CPU affinity before running the benchmark to get consistent results. I'm currently using a Redmi Note 4 and a OnePlus for this work.
Can someone explain what I should set the CPU affinity value to for my experiments?
Can I find the affinity masks for different phones online, or on the Android phone itself?
When I increase the value of the --warmup_runs parameter I get less varying results. Are there other ways to make my results more consistent?
Are background processes on the Android phone affecting the inference time, and is there a way to stop them to reduce the variance in results?
As the docs suggest, any value is fine, as long as you stay consistent with it across experiments. The one thing to consider is whether to use a big core or a little core (if your device has a big.LITTLE architecture); it's usually worth trying both, since they have different cache sizes, etc.
Yes you can typically find this information online. See http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0515b/CJHBGEBA.html as an example. You'll want to look at your particular phone, see the particular CPU it uses, and then google for more info from there.
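To make the affinity value concrete: it is a hexadecimal bit mask with one bit per core (bit 0 = cpu0), the same format `taskset` accepts. On a hypothetical 4+4 big.LITTLE SoC where cores 0-3 are the little cluster and 4-7 the big cluster (an assumption; check your SoC's documentation or /sys/devices/system/cpu on the device), a minimal sketch of building the mask looks like this:

```java
public class AffinityMask {
    // Build a taskset-style hexadecimal affinity mask from CPU core indices.
    // Bit i of the mask corresponds to cpuN with N == i.
    public static String maskFor(int... cores) {
        long mask = 0;
        for (int core : cores) {
            mask |= 1L << core;
        }
        return Long.toHexString(mask);
    }

    public static void main(String[] args) {
        // Pin to the big cluster (cores 4-7) on the assumed 4+4 layout.
        System.out.println(maskFor(4, 5, 6, 7)); // prints "f0"
        // Pin to the little cluster (cores 0-3).
        System.out.println(maskFor(0, 1, 2, 3)); // prints "f"
    }
}
```

You would then pass the resulting mask to `taskset` when launching the benchmark binary on the device (e.g. `taskset f0 <benchmark binary> ...`); the exact binary path depends on where you pushed it.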
I've tried --warmup_runs = 2000+ and typically it's pretty stable. There's a bit more variance with smaller models. For intensive models (at least relative to the particular device), you might want to check whether the device is overheating and thermally throttling. I haven't seen this with mid-tier phones, but I've heard that people sometimes keep their devices in a cool place (fan, fridge) for this.
They may, but it's unavoidable. The best you can do is close all applications and disconnect from the internet. I personally haven't seen them introduce too much variance though.
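When tuning warmup runs or affinity, it helps to quantify the run-to-run variance rather than eyeballing it. A small sketch (the sample values below are made up, not real benchmark output) using the coefficient of variation, a scale-free spread measure:

```java
public class RunStats {
    // Arithmetic mean of the samples.
    public static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    // Coefficient of variation = sample stddev / mean.
    // Lower values mean more consistent runs.
    public static double coefficientOfVariation(double[] xs) {
        double m = mean(xs);
        double ss = 0;
        for (double x : xs) ss += (x - m) * (x - m);
        double stddev = Math.sqrt(ss / (xs.length - 1));
        return stddev / m;
    }

    public static void main(String[] args) {
        // Hypothetical average inference times (microseconds) from five runs.
        double[] runs = {41200, 40900, 41500, 41100, 41300};
        System.out.printf("mean=%.0f us, CV=%.4f%n",
                mean(runs), coefficientOfVariation(runs));
    }
}
```

Comparing the CV across settings (e.g. with and without pinning, different warmup counts) tells you which configuration is actually more stable.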
Well, I have read a lot of answers to similar questions (even old ones from around 2013-2014), and I understand that it is not possible to know exactly, since Android doesn't count hardware usage as usage by the app, plus other complications like services, etc.
At the moment I'm trying to compare the performance of an app using one protocol to reach a goal against the same app using another (not widely known) protocol to reach the same goal. The default Android battery analyzer is good enough for me, since both cases are about 90% identical and I know how the protocols work.
My problem is that I'm not sure which tool is best for measuring the mAh consumed by my app. I know there are external apps that show it, but I would prefer to use the default one. I believe this is important not only for me but for other people who might have to compare different protocols.
I know I can measure it programmatically, and I've done so: I save the battery percentage when the app opens and how much has been consumed by the time it closes. But it isn't an exact measure, since while the app is open other apps can do heavy work and add noise to what I'm measuring, so I would prefer to use Android's battery analyzer.
Get a spare device. Charge it fully, then run the protocol until shutdown without other interaction (no YouTube or anything) and note how long it lasted. Repeat with the other protocol. IMHO that is a fair way to compare. Note that every device behaves differently, and it may or may not be possible to transfer the result to other devices, e.g. ones with different network chips, processors or even firmware versions.
For a fairer comparison, I think you should also compare how the protocols work, i.e. number of interactions, payload size, etc., because the power consumption can only ever be an estimate.
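The programmatic measurement described in the question can be made slightly more precise than battery percentage. On devices with a fuel gauge, Android's `BatteryManager.getLongProperty(BatteryManager.BATTERY_PROPERTY_CHARGE_COUNTER)` reports remaining charge in microampere-hours (whether it is supported depends on the device). A sketch of the delta computation, with hypothetical readings:

```java
public class BatteryDelta {
    // Convert a charge-counter delta (microampere-hours) to mAh consumed.
    // On Android the two readings would come from
    // BatteryManager.BATTERY_PROPERTY_CHARGE_COUNTER at app open and close.
    public static double consumedMilliampereHours(long startMicroAh, long endMicroAh) {
        return (startMicroAh - endMicroAh) / 1000.0;
    }

    public static void main(String[] args) {
        // Hypothetical readings, not from a real device.
        long atOpen  = 2_800_000; // 2800.0 mAh remaining when the app opened
        long atClose = 2_743_500; // 2743.5 mAh remaining when it closed
        System.out.println(consumedMilliampereHours(atOpen, atClose) + " mAh");
    }
}
```

The background-app noise problem remains either way, so running the test on an otherwise idle device (as the answer above suggests) is still important.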
Is there a way in android to publish different APKs based on various heuristics such as
Total available memory
CPU speed/type
Storage space
Even brand name (Samsung or Nexus or whatever)
I know there's a way to do it for different screens, but let's say you would like to have more animations, higher-resolution images and more graphics/CPU-intensive operations for higher-end phones and tablets, while still providing basic functionality (albeit with a reduced feature set or a more basic UI) for older/less powerful phones.
Is this possible? If not, I fear I have to cater for the lowest end of the market and won't be able to make use of the better hardware that some users may be using.
Many thanks,
P.S. I'd seen http://developer.android.com/google/play/publishing/multiple-apks.html before I posted my question, but it doesn't quite answer it, which is why I posted anyway.
It doesn't answer my question because it only mentions the following attributes for determining which APK is available to a device:
OpenGL texture compression formats
Screen size (and, optionally, screen density)
Device feature sets
API level
CPU architecture (ABI)
However, it does not mention memory, storage space or CPU speed (though CPU architecture may cover some of that... I have no idea), which would be ideal for deciding whether or not to load a reduced set of resources and functionality. That's why I don't believe the linked page quite answers my question.
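Since APK filtering cannot key on memory or CPU speed, the usual workaround is to ship a single APK and scale features at runtime. On Android the inputs would come from `ActivityManager.getMemoryClass()` (per-app heap limit in MB) and `Runtime.getRuntime().availableProcessors()`; the classification below is an illustrative sketch with arbitrary thresholds, not a recommendation:

```java
public class DeviceTier {
    public enum Tier { LOW, HIGH }

    // Classify a device from its per-app heap limit (MB) and CPU core count.
    // Thresholds are example assumptions; tune them against real devices.
    public static Tier classify(int memoryClassMb, int cpuCores) {
        if (memoryClassMb >= 128 && cpuCores >= 4) {
            return Tier.HIGH; // enable animations, full-resolution images
        }
        return Tier.LOW;      // basic UI, reduced feature set
    }

    public static void main(String[] args) {
        System.out.println(classify(48, 2));  // an older low-memory phone
        System.out.println(classify(192, 8)); // a high-end phone or tablet
    }
}
```

This keeps one code base while still letting better hardware get the richer experience.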
I'm developing an app whose functionality is very similar to the native Facebook Android app: a social network where the user spends most of the time in an endless ListView displaying lots of images, entering an image gallery, and so on.
Let's say, for the sake of discussion, that I'm doing all the right things and following Android best practices to achieve smooth scrolling (recycling views as I should, using different view types when needed, loading into memory only scaled bitmaps at the needed size, caching bitmaps, using the ViewHolder design pattern, not blocking the UI thread where possible, and so on...).
Let's also say that everything else in my app is written in the best way and follows best practices (for the discussion... :->).
My app works reasonably well at that stage, but
turning on hardware acceleration, as described and promised in the Android Developers documentation, makes it much, much smoother and faster.
Let's say it does not affect the UI in any negative way, as can happen, and that I'm not performing any of the unsupported operations.
According to Google's document on the subject, the only reason I can see not to use this feature (besides all the other reasons I already mentioned above) is that it can make my app use more RAM. But how much more RAM? A lot? I know that when my app consumes lots of RAM, it becomes a good candidate to be killed by the OS when it needs to free memory.
my question is basically -
is it "ok" under my circumstances to use this feature?
what other problems can arise from using it?
TIA
To use or not to use
It is advised to use hardware acceleration only if you have complex custom computations for scaling, rotating and translating of images, but do not use it for drawing lines or curves (and other trivial operations) (source).
If you plan on having common transitions, and given that you have already considered scaling, recycling, caching, etc., then it may not make sense to burden your project any further. Also, any effort spent reworking your code to support hardware acceleration will not affect users on versions below 3.0, which were ~36% of the market as of May 8, 2013.
Memory
Regarding memory usage (according to this article), enabling hardware acceleration loads the OpenGL drivers into each process, which boosts memory usage from roughly 2 MB to 8 MB.
Other issues
Apart from API versions, I presume it will also affect battery life. Unfortunately there aren't any benchmarks for different use cases online from which to draw a line on this one. Some argue that in certain cases, because of multiple GPU cores, using acceleration may actually save battery life. Overall, I think it is safe to assume the effect won't be too dramatic (or Google would have made this a major point).
UPDATE
Hardware acceleration is enabled by default if your target API level is >= 14.
I would say yes in your situation, use hardware acceleration.
Seeing that you aren't using any resource-intensive controls in your app, it should not be a problem to enable hardware acceleration. As you said, your app already works quite well without it.
When you enable hardware acceleration, Android will start using your GPU, and because of the increased resources required, your app will consume more RAM.
A frequently asked question is: will the amount of RAM increase by a really big amount?
The answer is determined by:
1. Your programming ability, i.e. management of the recycling list, scaling of the images, etc.
2. The Device
I wrote an app a while ago that was used to edit really high-resolution bitmaps, and I ran into the same problem. I found that the maximum amount of RAM the OS allocates when hardware acceleration is enabled varies by device: if the device has more RAM, the OS will allocate more to your app, so you will never see a consistent amount of RAM used by your app. Bigger, more expensive devices will always run your app with more RAM.
What other problems can arise from using hardware acceleration?
Hardware acceleration might cause problems for some 2D drawing operations. If you experience this, you can enable hardware acceleration for only specific activities in your app, as described in the Hardware Acceleration page of the Android Developer docs:
The easiest way to enable hardware acceleration is to turn it on globally for your entire application. If your application uses only standard views and Drawables, turning it on globally should not cause any adverse drawing effects. However, because hardware acceleration is not supported for all of the 2D drawing operations, turning it on might affect some of your applications that use custom views or drawing calls. Problems usually manifest themselves as invisible elements, exceptions, or wrongly rendered pixels. To remedy this, Android gives you the option to enable or disable hardware acceleration at the following levels:
Application,
Activity,
Window,
View
This way you can also limit hardware acceleration in your app, but by the sound of it you will need it for most of your app's functions.
Hope this helps
I'd like to ask for some help regarding the sampling rate and jitter on the magnetometer.
I'm working on a project with some people that involves high-rate magnetic field sampling. Even though we have developed an algorithm to work around the jitter and other issues we encountered, we'd like to improve the sampling rate somehow and, at the same time, if possible, reduce the sampling jitter. Improving the sampling rate would let us achieve better results for our application. We are using a Samsung Nexus S, and in our tests we observed sampling intervals between 15 ms and 20 ms, with occasional peaks around 50 ms (measured between consecutive events).
We have come up with different approaches to these issues, so far without success.
Firstly, we thought of modifying the current magnetometer (AK8973) device driver, but we soon realized that the bottleneck couldn't be there, as the device driver directly implements the correct sensor operation modes and data reading, and respects the sensor's hardware timing constraints.
As a second alternative, we developed a small code using Android NDK to obtain samples to compare the times obtained between consecutive events, i.e. between samples, with the code developed at the Java level. Sadly, the result was pretty much the same.
As a final alternative, we are currently trying to understand how the events are handled by the API and passed to Java. That said, if the bottleneck is there we'd try to change the code to solve the issues. However, we are not sure if the bottleneck is in the underlying hardware or software API.
The code we used for NDK is based on the example provided by the Android documentation (NativeActivity) and some other examples we came across with by googling (google groups and other articles). The articles we found are quite interesting (Native Sampling, Sensor Sampling Performance). Even though it is reported that native sampling allows for better performance, in our case it seems not to happen.
We'd like to know if it is actually possible to obtain a higher sampling rate at all or if anyone has already developed a solution. Is the bottleneck at the software or hardware level?
In the articles referenced above, it is mentioned that a custom library (FreeMotion) delivers better performance as a replacement for the original sensor library, because it works with the drivers directly. Has anyone used this library before and, if so, could you share your results?
With another smartphone, a Samsung Galaxy Nexus, we decided to collect more magnetometer data samples, do some statistical analysis, and compare the results with the Samsung Nexus S. This time we used Android v4.1.2. Again, we observed that the rate at which we are able to collect data does not improve significantly when comparing the NDK vs SDK APIs on either smartphone, using the values from ASensor_getMinDelay() and SENSOR_DELAY_FASTEST respectively, which give maximum performance. The timestamp jitter reduction with the NDK API is, however, very significant for both smartphones, regardless of the approach used: polling or callback-based. Polling in general provides little or no improvement, and should be more CPU intensive.
The Samsung Galaxy Nexus sensor hardware is far superior, and thus fine-grained tuning of the desired event rate is possible for rates above ASensor_getMinDelay(). For the Samsung Nexus S, however, this was not possible: for lower rates the target rate is not satisfied and samples are acquired at an even slower rate. When multiple sensors are activated, the overall jitter reduction is greater than with a single sensor.
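For anyone reproducing this kind of analysis: the jitter statistics above can be computed from the nanosecond timestamps delivered with each event (`event.timestamp` on a `SensorEvent` in the SDK, or the `timestamp` field of `ASensorEvent` in the NDK). A minimal sketch over made-up timestamps:

```java
public class SensorJitter {
    // Inter-event intervals in milliseconds from nanosecond event timestamps.
    public static double[] intervalsMs(long[] timestampsNs) {
        double[] out = new double[timestampsNs.length - 1];
        for (int i = 1; i < timestampsNs.length; i++) {
            out[i - 1] = (timestampsNs[i] - timestampsNs[i - 1]) / 1e6;
        }
        return out;
    }

    // Peak-to-peak jitter: longest interval minus shortest interval.
    public static double jitterMs(long[] timestampsNs) {
        double[] d = intervalsMs(timestampsNs);
        double min = d[0], max = d[0];
        for (double x : d) { min = Math.min(min, x); max = Math.max(max, x); }
        return max - min;
    }

    public static void main(String[] args) {
        // Hypothetical magnetometer timestamps: nominal 20 ms with one 50 ms spike,
        // mimicking the behaviour described in the question.
        long[] ts = {0, 20_000_000, 40_000_000, 90_000_000, 110_000_000};
        System.out.println(jitterMs(ts) + " ms"); // prints "30.0 ms"
    }
}
```

Using the event timestamps rather than wall-clock time at delivery separates hardware sampling jitter from delivery latency in the event pipeline.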
I want to port a 3D program written in OpenGL on Windows to Android, but I wonder if it can run smoothly on typical Android devices, so I want to estimate how much hardware is needed for it to run smoothly. It is something like the hardware requirements a company recommends to users for a piece of software or a 3D game. I don't know how to work out the hardware requirements of my program when porting it to Android.
I used gDEBugger and it gave me some information, but I don't think that is enough. Does anyone have an idea or solution? Many thanks in advance!
If your program is simple enough, you could write up some estimates about texture fill rate, which is a pretty basic (and old) metric of rendering performance. Nearly every 3D chip comes with a theoretical fill rate, so you can get the theoretical numbers of both your desktop system and some Android phones.
The texture memory footprint is another thing that you can estimate, especially using gdebugger. Once again, these numbers are known for most chips.
This is a quick way to produce some numbers, obviously without any real life performance guarantees.
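A back-of-envelope version of the fill-rate estimate suggested above: multiply the pixels drawn per frame by an overdraw factor and the target frame rate, then compare against the chip's theoretical fill rate. The numbers below are illustrative assumptions, not measurements of any specific GPU:

```java
public class FillRateEstimate {
    // Required fill rate in megapixels/second for the given workload.
    // overdraw = average number of times each screen pixel is shaded per frame.
    public static double requiredMpixPerSec(int width, int height,
                                            double overdraw, double fps) {
        return width * height * overdraw * fps / 1e6;
    }

    public static void main(String[] args) {
        // Hypothetical 800x480 phone screen, 2.5x overdraw, 60 fps target.
        double needed = requiredMpixPerSec(800, 480, 2.5, 60);
        System.out.printf("need %.1f Mpix/s%n", needed); // prints "need 57.6 Mpix/s"
    }
}
```

If the result is well below the published theoretical fill rate of the target chip (with margin, since theoretical peaks are optimistic), fill rate is unlikely to be your bottleneck; otherwise reduce overdraw or resolution.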
The best way would be to test it on an actual device, and get an idea of what hardware works well. You could distribute a beta app and get some feedback too.
It depends on the feature set you use. For example, if you use FBOs, the device will have to support the framebuffer extension. If you use MSAA or smooth lines, the device will have to support the corresponding extensions.
After listing your requirements, you can use glGet to check for device support:
http://www.opengl.org/sdk/docs/man/xhtml/glGet.xml
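The glGet check can be sketched as a helper that tests required extension names against the space-separated string returned by `glGetString(GL_EXTENSIONS)` (on Android, `GLES20.glGetString(GLES20.GL_EXTENSIONS)`, called with a current GL context). The extension names used here are real OES names, but which ones you actually need depends on your feature set; the sample extension string is made up:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class GlCapabilityCheck {
    // Return the required extensions that are missing from a GL_EXTENSIONS string.
    public static Set<String> missing(String glExtensions, String... required) {
        Set<String> available =
                new HashSet<>(Arrays.asList(glExtensions.trim().split("\\s+")));
        Set<String> notFound = new HashSet<>();
        for (String ext : required) {
            if (!available.contains(ext)) notFound.add(ext);
        }
        return notFound;
    }

    public static void main(String[] args) {
        // Hypothetical extension string; on-device it comes from glGetString.
        String exts = "GL_OES_texture_npot GL_OES_framebuffer_object GL_OES_depth24";
        System.out.println(missing(exts, "GL_OES_framebuffer_object")); // empty set
        System.out.println(missing(exts, "GL_OES_mapbuffer"));          // missing one
    }
}
```

Running this once at startup lets the app fall back to a reduced rendering path instead of failing on devices that lack an extension.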