With respect to mobile development, I'd like to know whether the rate at which something is downloaded from the internet can be affected by the app making the http request. I assume that download speed is most affected by hardware. If code can affect download speed, what are some performance tips to download something as fast as possible most of the time?
In all communications, you are limited by your bandwidth. Mobile platforms tend to be much slower than wired connections, so the solution is simple: make the download as small as possible.
This is, of course, easier said than done. It tends to take some creativity to get a network-bound application unbound. When you manage it, though, you can see very impressive, almost unbelievable gains in performance.
In your case, it is good to think about this up front, but also to keep it in mind as the app is developed.
P.S.: some general rules of thumb:
Network access: on the order of 10 milliseconds
Disk access: on the order of 10 microseconds
Memory access: on the order of 10 nanoseconds
CPU cache access: on the order of 100 picoseconds
This is a little more than you asked for, but these numbers make it clear why it can be faster to compress data, send it, and uncompress it than to just send it uncompressed.
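To make that concrete, here is a minimal sketch of a gzip round trip in plain Java (the repetitive payload is a made-up example; real data will compress less dramatically):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CompressDemo {
    // Compress a byte array with gzip.
    static byte[] gzip(byte[] data) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    // Decompress a gzip byte array back to the original bytes.
    static byte[] gunzip(byte[] data) {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) > 0) bos.write(buf, 0, n);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // A repetitive payload compresses very well; real payloads vary.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 500; i++) sb.append("status=ok;temp=21;");
        byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);
        byte[] packed = gzip(raw);
        System.out.println("raw=" + raw.length + " compressed=" + packed.length);
    }
}
```

Whether the CPU time spent compressing actually pays off depends on the latency numbers above: a few milliseconds of CPU can easily save tens of milliseconds on the wire.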
As Robert said, it's more likely the network that's limiting you rather than the code itself. It IS possible to intentionally throttle download speed in many languages, but I doubt the code is the cause here.
Think instead about how you might cut down your application's size. Think hard about reusing assets, pulling some data from a webserver if possible, etc etc.
Well, I have read a lot of answers to similar questions (even old ones from around 2013-2014), and I understood that it is not possible to know exactly, since Android doesn't count hardware usage as usage of the app, plus there are other complications like services etc.
At the moment I'm trying to test the performance of an app using one protocol to reach a goal against the performance of the same app using another protocol (not well known by everyone) to reach the same goal. The default Android battery analyzer is good enough for me, since both cases are about 90% the same and I know how the protocols work.
My problem is that I'm not sure which tool is best to measure the mAh consumed by my app. I know there are external apps that show it, but I would prefer to use the default one. I believe this is important not only for me but also for other people who might have to compare different protocols.
I know that I can measure it programmatically, and I've done that too: I save the battery percentage when the app is opened and how much has been consumed by the time it is closed. But that isn't an exact measure, since while my app is open other apps can do heavy work and add noise to what I'm measuring, so I would prefer to use Android's battery analyzer.
Get a spare device. Charge it completely, then run the protocol until shutdown without any other interaction (no YouTube or anything), and note how long it lasted. Repeat with the other protocol. IMHO that is a fair way to compare. Note that every device behaves differently, and it may or may not be possible to transfer the result to other devices, e.g. ones with different network chips, processors or even firmware versions.
For a fairer comparison, I think you should compare how the protocols work, i.e. the number of interactions, payload size etc., because the measured power consumption can only ever be an estimate.
I'm going to write a music player for Android which satisfies specific needs (which don't matter to my question).
I'd like to map internal playlists to directories (inside one main directory) on the SD card, because I know my users will set up organized directories. Simply reading all audio files into a single list and letting the user create playlists manually afterwards would probably be annoying.
I'm wondering whether it's worth generating a hierarchical playlist file for this purpose.
My current plan is to run a "library inspector" when the app is started. This inspector will use a "library state" consisting of hierarchical data of the form
String filename;
long modified; // timestamp of last modification
to check the library recursively for whether a new playlist file needs to be created. If this check fails to match, the hierarchical file (metadata: title, artist, album, ... - for example XML) is recreated, including a new "library state". This file should avoid re-reading all the metadata on every single run of the app.
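A minimal sketch of that state check in plain Java (serializing the state to a file between runs is omitted, and the class and method names here are just placeholders):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

public class LibraryState {
    // relative filename -> last-modified timestamp: the saved "library state"
    final Map<String, Long> entries = new HashMap<>();

    // Snapshot the current state of the library directory tree.
    static LibraryState scan(Path root) {
        LibraryState s = new LibraryState();
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                try {
                    s.entries.put(root.relativize(p).toString(),
                                  Files.getLastModifiedTime(p).toMillis());
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return s;
    }

    // The playlist file only needs rebuilding when the state has changed.
    boolean matches(LibraryState other) {
        return entries.equals(other.entries);
    }
}
```

At startup you would compare the saved state against a fresh scan, and only when they differ re-read the audio metadata and write a new playlist file plus the new state. Since `scan` only reads directory entries (names and timestamps), it is far cheaper than opening every audio file.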
To make that clear: I'm searching for an efficient way to play music - but save battery!
Since I'm new to mobile app development, I'm not very familiar with battery saving. Is it really more efficient to read one file instead of recursively reading metadata? Or am I about to overdo things? Do you know the strategies of established applications?
I'm very interested in your thoughts :) and I hope my bad English doesn't prevent your understanding - I'm sorry for that.
Thank you!
Max
I can't answer from the perspective of using MediaStore or SQLite, but can give you some suggestions about minimizing battery usage.
Don't use recursion. Recursion is structurally compact but awful in terms of efficiency. Every call is expensive due to stack accesses, possibly context switches, etc. If the recursion is very deep, there are also issues with regard to stack usage, page swapping, etc.
Use an efficient searching algorithm for any large list. The faster you complete what you're doing, the more the processor is idle, the deeper the power state, the more power savings.
Gather your searches / accesses together as much as possible. For example, if you have to do 3 searches, each 1 second apart and taking .5 seconds to execute, you'll keep the processor active in a high power state for over 4.5 secs before letting it rest and drop into a lower power state. If you gather your queries together, you spend 1.5s in a high power state, and 3 seconds in a lower power state. Roughly speaking, you use <1/3 the power.
Use on-board memory as much as possible. I don't know how slow accesses to SD cards are, but they'll slow down your algorithm and possibly increase your power consumption.
Try to set up your database entries and other data structures so that they are naturally aligned with your processor's cache lines (e.g. 16-byte aligned). That will speed up routines by a significant amount (L1 cache access might be 1 cycle, L2 10 cycles, and memory 100 cycles - these values are illustrative but in the right ballpark). And the faster your routine, the more idle time, and the greater the power savings.
My timing durations (e.g. 1 sec apart) are just for illustration purposes. There are multiple idle states and different rules for dropping into those states that can make a real illustration very complicated.
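The rough arithmetic in the batching tip can be written out as a toy energy model; the high/low power draws here are made-up illustrative values, not measurements:

```java
public class PowerModel {
    // Relative power draws (illustrative assumptions, not real numbers).
    static final double HIGH = 1.0;
    static final double LOW = 0.05;

    // Three 0.5s searches spread out: the gaps are too short for the CPU
    // to drop out of the high-power state, so it stays high for ~4.5s.
    static double spreadEnergy() {
        return 4.5 * HIGH;
    }

    // Batched: 1.5s of back-to-back work, then 3s in the low-power state.
    static double batchedEnergy() {
        return 1.5 * HIGH + 3.0 * LOW;
    }

    public static void main(String[] args) {
        System.out.printf("spread=%.2f batched=%.2f ratio=%.2f%n",
                spreadEnergy(), batchedEnergy(),
                batchedEnergy() / spreadEnergy());
    }
}
```

With these made-up numbers the batched version uses a bit over a third of the energy, which is where the "roughly 1/3" figure comes from.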
I don't know much about the power efficiency of databases. I do know there are some databases designed for mobile and low-power devices; unfortunately, I don't recall what they are. (Don't quote me on this, but I recall something about Berkeley and real time.)
PS Your English seems excellent.
I'm developing a radio app and I need to know if the user's connection speed is fast enough; if it's slow, I'll show a message saying that the streaming may sometimes be slow.
The problem I'm having is calculating the user's connection speed.
I've read some opinions about this and only found answers based on connection type (2G, 3G, Wi-Fi). I found this answer: Detect network connection type on Android, which is almost what I need, but the method isConnectionFast isn't accurate because it doesn't make a real test connection; it's just based on some properties.
I think the best way is to download an image of a known size and measure the time it takes to finish the download, but I don't know how to do that on Android.
Can anyone help me? Thank you.
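A minimal sketch of the measurement part of that idea (the in-memory stream here is a stand-in; in the app, `in` would come from an `HttpURLConnection` opened on a small test file of known size hosted on your own server, which is an assumption of this sketch):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class SpeedTest {
    // Read the whole stream and return throughput in bytes per second.
    static double measure(InputStream in) {
        byte[] buf = new byte[8192];
        long total = 0;
        long start = System.nanoTime();
        try {
            int n;
            while ((n = in.read(buf)) > 0) total += n;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return total / seconds;
    }

    public static void main(String[] args) {
        // Stand-in for a downloaded file of known size (512 KiB of zeros).
        InputStream in = new ByteArrayInputStream(new byte[512 * 1024]);
        System.out.println("bytes/sec: " + measure(in));
    }
}
```

Note the caveat from the answer below the question: a one-shot measurement like this only reflects the speed at that instant, so treat the result as a hint, not a guarantee.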
Well, it seems you already know the basic steps for doing a speed test, but I would like to explain why that is probably a waste of time in this case.
If we're talking about cellular connections, there are standards that specify the speed, and the answer you linked is an example of how to get an estimate based on that. Sure, you will never get the full speed, and a speed test would provide a better estimate, but only for that moment in time. There are many factors that may influence a client's speed, and most of them change every second, so a test made at the beginning is pretty much useless if the client is mobile. For Wi-Fi, estimating the speed is a bit harder without a speed test, because the bandwidth is usually limited not by the technology but by the plan the user is paying for. Anyway, those speeds are almost certainly enough for a radio stream.
You didn't provide much info about the streams themselves, but from my experience (as a user), for 128 kbps streams everything above EDGE is sufficient, provided your buffering is enough to compensate for short speed degradations or connection losses caused by handoffs etc.
In my test Android application, I need to process a large file: compression, encryption, erasure coding, etc.
In order to speed up the process, I spawn multiple threads; each thread reads and processes a different part of the file, and finally the results are merged/appended together (using Java NIO).
I have already tried it, and there really is some speed-up - 50% or more, depending on which storage technique was involved.
There are many similar SO questions, but they mainly discuss how this will not improve I/O speed due to the limitations of a single spinning hard disk.
But in my case, it is a multi-core Android device that uses flash memory.
Therefore I am not really sure if the speed-up is due to parallel processing or due to caching in RAM.
and my main question is:
Am I doing the right thing (since I am on a multi-core Android device)?
or is this method bad? Bad in terms of what?
Given that interoperability (compression, encryption) with other systems is not a concern here.
More details:
I am also, in a way, using the concept of pipelining.
For example:
i) [Sequential] Compressing and then encrypting a file takes 10 + 20 = 30 seconds.
ii) [Pipelining] Compress the first half of the file and start encrypting it as soon as its compression is done. At the same time, start compressing the second half, and finally encrypt the second half after its compression is done. This might take only 20 seconds.
(I know it is a bad example, but just to give the idea of applying storage techniques into pipeline)
I am not sure about this, but since each chunk of the file does not depend on the previous chunk (no data-dependency problem), pipelining a file should be OK, right?
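The two-stage scheme above can be sketched like this; the stage bodies are placeholders (the "compression" is just a copy and the XOR is NOT real encryption), and the thread-pool size is an arbitrary choice for the sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PipelineDemo {
    // Stage 1 stand-in: a real implementation would gzip the chunk.
    static byte[] compress(byte[] chunk) {
        return chunk.clone();
    }

    // Stage 2 stand-in: XOR obfuscation only, NOT real encryption.
    static byte[] encrypt(byte[] chunk) {
        byte[] out = chunk.clone();
        for (int i = 0; i < out.length; i++) out[i] ^= 0x5A;
        return out;
    }

    // Run compress -> encrypt over independent chunks. While chunk i is
    // being encrypted, chunk i+1 can already be compressing on another
    // thread; the final loop merges results back in original order.
    static List<byte[]> process(List<byte[]> chunks) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            List<Future<byte[]>> stage1 = new ArrayList<>();
            for (byte[] c : chunks) stage1.add(pool.submit(() -> compress(c)));
            List<Future<byte[]>> stage2 = new ArrayList<>();
            for (Future<byte[]> f : stage1) {
                byte[] compressed = f.get(); // wait for stage 1 of this chunk
                stage2.add(pool.submit(() -> encrypt(compressed)));
            }
            List<byte[]> result = new ArrayList<>();
            for (Future<byte[]> f : stage2) result.add(f.get());
            return result;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Collecting the stage-2 futures and reading them in submission order is what keeps the merge deterministic even though the chunks finish in arbitrary order.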
Whether it will actually speed up your program depends on a lot of factors, including: is the file in RAM or on disk? If it's on disk, is the program I/O-bound or CPU-bound (if I/O-bound, then it won't help)? How is the OS scheduler actually going to assign the threads - to the same core or to multiple cores? Do the different threads ever need to interact (will they be blocked on locks for so long that there's no speedup, or end up very buggy)?
Your technique is a fairly standard one for parallel processing. Whether it's good for your app or not pretty much requires implementing it and measuring.
I am wondering: if I have to synchronize a bunch of text data - for example 10 kB - very often (every hour), wouldn't it be better to compress it before sending?
For example, if I compress it and then send it to my server, where it will be uncompressed and handled, the application will use less data. Is this a good pattern for this case?
I think that depends on your problem set. If it's small, it might be better to compress and send the whole thing. If it's not small, you may want to consider an rsync-style solution that only transmits the amount of data required to bring the file in sync. You may still want compression on the wire, but you wouldn't want to rsync a compressed file, because a small change can cascade into a much larger one after compression.
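A very simplified sketch of the rsync-style idea, using fixed-offset block hashes (real rsync uses rolling checksums so an insertion doesn't shift every later block; SHA-256 and the 4 kB block size here are arbitrary choices for the sketch):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;

public class DeltaSync {
    static final int BLOCK = 4096;

    // Hash each fixed-size block of the data.
    static List<String> blockHashes(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            List<String> hashes = new ArrayList<>();
            for (int off = 0; off < data.length; off += BLOCK) {
                int len = Math.min(BLOCK, data.length - off);
                md.reset();
                md.update(data, off, len);
                hashes.add(Base64.getEncoder().encodeToString(md.digest()));
            }
            return hashes;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Indices of blocks whose hash differs from the remote side's list:
    // only these blocks actually need to be re-sent.
    static List<Integer> changedBlocks(List<String> local, List<String> remote) {
        List<Integer> changed = new ArrayList<>();
        for (int i = 0; i < local.size(); i++) {
            if (i >= remote.size() || !local.get(i).equals(remote.get(i)))
                changed.add(i);
        }
        return changed;
    }
}
```

For a 10 kB hourly payload this is overkill, which is why the whole-file compress-and-send approach is the reasonable default here.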
The link matters too, as do cost and power consumption. Compression algorithms vary, but compressing will take more CPU time than not compressing - hopefully at the cost of a much-reduced send time, which ultimately saves you power.
So, given the problem set you're talking about - a small file - I think it's reasonable to compress it and send it across, particularly if the link is slow (which it sometimes is when you have bad reception). I'm not familiar with Android's framework, but you may get this for "free" if you're POSTing a file via HTTP and both ends support gzip compression.