What upload speed should I expect to be available to my users? This is for image attachments to emails.
A test we did over the phone for one user came out to about 100 kilobits per second.
The reason I ask is that I am not sure if my uploader is creating an unusual amount of latency (edit: I mean time wasted between actual uploads of chunks). It uploads in parts using separate HttpPost requests, and it base64-encodes the parts and sends them as POST parameters instead of using a "multipart file upload" like a browser would.
This is the only test I have done with an end user and I don't actually own an Android phone.
100 Kbps does not seem so bad. The theoretical maximum for UMTS is 384 Kbps, but I never saw more than 250 Kbps, and that was with a very good signal. HSUPA speeds, on the other hand, can be tenfold faster, but only a few phones in the US support it, with a lot more in Europe. Given the large variability in speed due to signal conditions, my guess is that your software won't be the bottleneck. You should nevertheless consider that phones can still drop into 2G zones or even lose the signal in the middle of a transfer. A failure due to loss of signal is a much worse problem for the end user than a couple more seconds waiting for the transfer (which, in any case, should be done in the background).
Whether for upload or download, data transfer speed over 3G can vary a great deal depending on how close the user equipment is to the serving base station and on the conditions between them: nearby buildings, being inside or outside a building, interference, and so on. The modulation scheme used also depends on these conditions, which greatly changes the connection speed for users in different situations.
In any case, the speed you get for the data transfer and the latency are two different things.
I'm not sure of the implications for your specific application and the HTTP protocol, but you mention separate requests, so it should work.
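If the base64 overhead turns out to matter (base64 inflates each part by roughly a third, so a 100 KB chunk becomes about 133 KB on the wire, which at 100 Kbps is a few extra seconds per part), a plain multipart/form-data upload avoids it. A rough, untested sketch using HttpURLConnection rather than the HttpPost class you mention (the URL and field name are placeholders; real code needs error handling and must run off the UI thread):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.OutputStreamWriter;
    import java.io.PrintWriter;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MultipartUpload {
        // Streams one image part as multipart/form-data, with no base64 step.
        public static int upload(File file, String targetUrl) throws IOException {
            String boundary = "----upload-" + System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) new URL(targetUrl).openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setChunkedStreamingMode(0); // stream instead of buffering the whole part
            conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

            try (OutputStream out = conn.getOutputStream();
                 FileInputStream in = new FileInputStream(file)) {
                PrintWriter writer = new PrintWriter(new OutputStreamWriter(out, "UTF-8"), true);
                writer.append("--").append(boundary).append("\r\n")
                      .append("Content-Disposition: form-data; name=\"image\"; filename=\"")
                      .append(file.getName()).append("\"\r\n")
                      .append("Content-Type: application/octet-stream\r\n\r\n")
                      .flush();

                byte[] buffer = new byte[8192]; // raw bytes, no ~33% base64 inflation
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
                out.flush();

                writer.append("\r\n--").append(boundary).append("--\r\n").flush();
            }
            int status = conn.getResponseCode(); // forces the request to complete
            conn.disconnect();
            return status;
        }
    }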
With respect to mobile development, I'd like to know whether the rate at which something is downloaded from the internet can be affected by the app making the http request. I assume that download speed is most affected by hardware. If code can affect download speed, what are some performance tips to download something as fast as possible most of the time?
In all communications you are limited by your bandwidth, and mobile platforms tend to be much slower than wired connections. So the solution is simple: make the download as small as possible.
This is, of course, easier said than done. It tends to take some creativity to get a network-bound application unbound. However, when you manage it, you can see very impressive gains in performance.
In your case, it is good to think about this up front, but also to keep it in mind as the app is developed.
P.S. Some general rules of thumb:
Network access: on the order of 10 milliseconds
Disk access: on the order of 10 microseconds
Memory access: on the order of 10 nanoseconds
CPU cache access: on the order of 100 picoseconds
This is a little more than you asked for, but it makes clear why it can be faster to compress data, send it, and uncompress it than to just send it uncompressed.
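As a rough illustration of that trade-off, compressing with plain java.util.zip before sending and decompressing on the other end (this sketch assumes the payload compresses well, which is true for text or JSON but not for already-compressed images):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class Gzip {
        // CPU time (microseconds to milliseconds) traded for network time (tens of milliseconds).
        static byte[] compress(byte[] data) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(data);
            }
            return bos.toByteArray();
        }

        static byte[] decompress(byte[] gzipped) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = gz.read(buf)) != -1) {
                    bos.write(buf, 0, n);
                }
            }
            return bos.toByteArray();
        }
    }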
As Robert said, it's more likely than not the network that's limiting you, rather than the code itself. It IS possible to intentionally throttle download speed in multiple languages, but I doubt the code is the cause.
Think instead about how you might cut down your application's size. Think hard about reusing assets, pulling some data from a web server if possible, and so on.
I'm trying to come up with a solution enabling data exchange between an embedded device (xMega128(C) based) and Android apps. The catch is that the data exchange must be conducted via the Internet, and both the embedded device and the mobile device running the app can be behind different NATs, connecting via different ISPs, 3G, LTE, etc.
I tried UDP hole punching, but it does not work with symmetric NATs. Multi-hole punching with prediction also does not guarantee 100% reliability. I also considered using ICE, but the ICE C libraries (pjnath, libnice) are incompatible with the hardware chosen (the libraries require an OS). Right now I'm considering implementing a traffic relay server, or using an existing one, but that just seems like a hack to me.
Are there any other options I haven't considered? Any help will be appreciated.
Ideally, the communication scheme would be:
100% reliable
relatively low-latency (3 seconds absolute max)
scalable (say up to 500k devices in the future)
able to be initiated by both the app and the device
multi-user – one device would connect to many Android apps
Also, if this helps, the data exchange between the device and the app is not very high-intensity – roughly 1 session per hour, ~50 messages per session with 10-20 seconds between them, each message weighing around 100 bytes.
What you're describing is effectively peer-to-peer, or a subset thereof, and getting that to work reliably is a lot of work. Where peer-to-peer fails you normally fall back to a relay server. It can be done, but the amount of work to do it is quite large. Your list of requirements is also quite steep...
100% reliable
There's no such thing as a reliable connection. You need to build fault tolerance into the app to make it reliable.
relatively low-latency (3 seconds absolute max)
Quite often you will be limited by physics, i.e. the speed of light. Low latency is hard.
scalable (say up to 500k devices in the future)
I don't know what this means; is this concurrent connections?
From Wikipedia on NAT traversal:
Many techniques exist, but no single method works in every situation since NAT behavior is not standardized. Many NAT traversal techniques require assistance from a server at a publicly routable IP address. Some methods use the server only when establishing the connection, while others are based on relaying all data through it, which adds bandwidth costs and increases latency, detrimental to real-time voice and video communications.
In other words, it will work sometimes, i.e. it will be unreliable, so you need to use several methods to make it reliable.
As long as both endpoints are behind different NATs you don't control, it won't work reliably. No way. You need a relay.
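To give a sense of what a relay involves, the data-pumping core is small; the real work is authentication, message framing, reconnection and scaling to many device/app pairs. A toy Java sketch that accepts two TCP connections (say, the device and one app) and pipes bytes between them, with a placeholder port and no security at all:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Toy relay: waits for two clients, then copies bytes in both
    // directions until one side disconnects.
    public class TinyRelay {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    Socket a = server.accept();   // first peer
                    Socket b = server.accept();   // second peer
                    pump(a, b);
                    pump(b, a);
                }
            }
        }

        private static void pump(Socket from, Socket to) {
            new Thread(() -> {
                try (InputStream in = from.getInputStream();
                     OutputStream out = to.getOutputStream()) {
                    byte[] buf = new byte[1024];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        out.flush();
                    }
                } catch (IOException ignored) {
                    // one side dropped; fall through and close both
                } finally {
                    try { from.close(); } catch (IOException ignored) {}
                    try { to.close(); } catch (IOException ignored) {}
                }
            }).start();
        }
    }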
So I have been experimenting with multi-peer networks. Ultimately I am going to try different frameworks to build one that can connect devices running the same OS through Bluetooth and WiFi, and devices of different types through WiFi.
My first shot was Apple's Multipeer Connectivity framework. Unfortunately I got about 0.5 seconds of delay (I didn't actually measure this, it's just an estimate) before even one bit of information actually got to the other device. I suspect the framework is optimized for larger, encrypted data far more than for 1-32 bit jobs.
I was just wondering what you know about the latency of other frameworks out there, since it takes a decent chunk of time for me to learn each new framework. Is a latency of about 0.5 seconds the best the industry has?
Honestly, I would be happy if there were a library optimized to send 1 bit to each connected device every 1/60th of a second. But I think most of these networks package up the data as if it were larger anyway.
I sort of wish mobile devices had NFC. Just look at systems like the 3DS that can do multi-peer multiplayer (Smash Bros.) with really small latency and great accuracy.
Try changing the MCSessionSendDataMode to MCSessionSendDataUnreliable
MCSessionSendDataUnreliable
Messages to peers should be sent immediately without socket-level queueing. If a message cannot be sent immediately, it should be dropped. The order of messages is not guaranteed.
This message type should be used for data that ceases to be relevant if delayed, such as real-time gaming data.
But it depends on how reliable you really need the data to be; on a closed network it should be very reliable anyway.
I'm developing a radio app and I need to know if the user's connection speed is fast enough; if it's slow, I'll show a message saying that the streaming may be slow at times.
The problem I'm having is calculating the user's connection speed.
I've read some opinions about this and only found answers based on connection type (2G, 3G, Wi-Fi). I found this answer: Detect network connection type on Android, which is almost what I need, but the method "isConnectionFast" isn't accurate because it doesn't make a real test connection; it's just based on some properties.
I think the best way is to download an image of known size and measure how long the download takes, but I don't know how to do that in Android.
Can anyone help me? Thank you.
Well, it seems that you already know the basic steps for doing a speed test, but I would like to explain why that is probably a waste of time in this case.
If we're talking about cellular connections, there are standards that specify the speeds, and the answer you linked is an example of how to get an estimate based on them. Sure, you will never get the full speed, and a speed test would provide a better estimate, but only for that moment in time. There are many factors that may influence the client's speed, and most of them change every second, so a test made at the beginning is pretty much useless if the client is mobile. For Wi-Fi, estimating the speed is a bit harder without a speed test because the bandwidth is usually limited not by the technology but by the plan the user is paying for. Anyway, those speeds are almost certainly enough for a radio stream.
You didn't provide much info about the streams themselves, but from my experience (as a user), for 128 kbps streams everything above EDGE is sufficient, provided your buffering is enough to compensate for short speed degradations or connection losses caused by handoffs, etc.
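That said, if you still want a one-off measurement on the device, the idea is simply to download a file of known size and divide the bytes received by the elapsed time. A rough sketch (the test URL is a placeholder, and the call must run on a background thread, never on the Android main thread):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SpeedTest {
        // Downloads a file and returns the measured speed in kilobits per second.
        public static double measureKbps(String testFileUrl) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(testFileUrl).openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(10000);

            long start = System.nanoTime();
            long bytes = 0;
            try (InputStream in = conn.getInputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    bytes += n;
                }
            } finally {
                conn.disconnect();
            }
            double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
            return (bytes * 8 / 1000.0) / seconds;
        }
    }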
I am trying to estimate the network delay between two Android devices connected over WiFi in order to synchronize their clocks, but the network delay varies a lot, from 2 ms to 1024 ms. Sometimes I get delay values that stay between 2-10 ms for 100 consecutive readings, but sometimes the values range from 2 ms to 1024 ms over 100 consecutive readings, like 2, 100, 570, 640, 2, 5, 150.
I am using socket timestamps to determine the exact send and receive times of each packet. My setup uses one laptop as a WiFi access point and two mobile phones. There is not much load on the network. My question is why the delay varies so little sometimes and so much at other times.
How can I make it vary less? Am I missing some configuration on Android? Please give me some possible reasons for this kind of behavior.
Unlike wired links, wireless links are affected by a lot of different factors. You can get stable results only in a sterile environment without electromagnetic or mechanical interference.
Most latency fluctuations are a direct result of RF collisions. WiFi networks implement the CSMA/CA protocol to deal with collisions: in general, a station senses whether there is any activity in the air and postpones its transmission if the channel is busy.
You can try to minimize the external influences but still this doesn't guarantee anything:
Perform a WiFi scan and see which channels are the noisiest, then choose the least occupied one for your link (see the sketch after this list). Remember that channels overlap, so moving to a different channel won't necessarily remove all the noise. See here about channel overlap and how to choose a channel: http://en.wikipedia.org/wiki/List_of_WLAN_channels
Remove mechanical obstacles from your environment.
Increase the transmission (Tx) power of your devices from their SW configuration.
Check the Quality of Service (QoS) configuration of your devices, some tweaking there might yield improvements.
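As a rough sketch of the scan mentioned in the first point (Android; this assumes the app already holds the location permission that newer Android versions require before getScanResults() returns anything):

    import android.content.Context;
    import android.net.wifi.ScanResult;
    import android.net.wifi.WifiManager;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ChannelSurvey {
        // Counts how many access points were seen on each frequency (MHz);
        // crowded frequencies are a hint that the channel is noisy.
        public static Map<Integer, Integer> apCountByFrequency(Context context) {
            WifiManager wifi = (WifiManager) context.getApplicationContext()
                    .getSystemService(Context.WIFI_SERVICE);
            Map<Integer, Integer> counts = new HashMap<>();
            List<ScanResult> results = wifi.getScanResults();
            for (ScanResult r : results) {
                Integer current = counts.get(r.frequency);
                counts.put(r.frequency, current == null ? 1 : current + 1);
            }
            return counts;
        }
    }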
Solved this problem by using WIFI_MODE_FULL_HIGH_PERF.
After acquiring this lock, I observed stable network delays.
http://developer.android.com/reference/android/net/wifi/WifiManager.html#WIFI_MODE_FULL_HIGH_PERF
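For reference, acquiring that lock looks roughly like this (the lock tag is arbitrary, and the android.permission.WAKE_LOCK permission has to be declared in the manifest):

    import android.content.Context;
    import android.net.wifi.WifiManager;

    public class HighPerfWifi {
        private WifiManager.WifiLock wifiLock;

        // Keeps WiFi in high-performance mode while the lock is held.
        public void acquire(Context context) {
            WifiManager wifi = (WifiManager) context.getApplicationContext()
                    .getSystemService(Context.WIFI_SERVICE);
            wifiLock = wifi.createWifiLock(WifiManager.WIFI_MODE_FULL_HIGH_PERF, "clock-sync");
            wifiLock.acquire();
        }

        // Release as soon as the measurement or sync session is done.
        public void release() {
            if (wifiLock != null && wifiLock.isHeld()) {
                wifiLock.release();
            }
        }
    }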