I'm trying to come up with a solution enabling data exchange between an embedded device (xMega128(C) based) and an Android app. The catch is that the data exchange must be conducted over the Internet, and both the embedded device and the mobile device running the app can be behind different NATs, connecting via different ISPs, 3G, LTE, etc.
I tried UDP hole punching, but it does not work with symmetric NATs. Multi-hole punching with port prediction also does not guarantee 100% reliability. I also considered ICE, but the ICE C libraries (pjnath, libnice) are incompatible with the chosen hardware (the libraries require an OS). Right now I'm considering implementing (or using, if one exists) a traffic relay server, but that just seems like a hack to me.
Are there any other options I haven't considered? Any help will be appreciated.
Ideally, the communication scheme would be:
100% reliable
relatively low-latency (3 seconds absolute max)
scalable (say up to 500k devices in the future)
initializable by both the app and the device
multi-user – one device would connect to many Android apps
Also, if this helps, the data exchange between the device and the app is not very high-intensity – roughly 1 session per hour, ~50 messages per session with 10-20 seconds between them, each message weighing around 100 bytes.
What you're describing is effectively peer-to-peer, or a subset thereof, and getting that working reliably is a lot of work. Where peer-to-peer fails, you normally fall back to a relay server. It can be done, but the amount of work involved is quite large. Your list of requirements is also quite steep...
100% reliable
There's no such thing as a reliable connection. You need to build fault tolerance into the app to make it reliable.
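Fault tolerance at the application layer usually means acknowledgements plus retries. A minimal sketch (the send-and-wait-for-ack round trip is abstracted as a `Supplier` here, which is my assumption, not part of the question's stack):

```java
import java.util.function.Supplier;

public class ReliableSend {
    // Retry a send until it is acknowledged or we run out of attempts.
    // 'attemptSend' is a placeholder for one send-and-wait-for-ack round trip.
    static boolean sendWithRetry(Supplier<Boolean> attemptSend, int maxAttempts, long backoffMs)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (attemptSend.get()) {
                return true;                        // ack received
            }
            Thread.sleep(backoffMs * attempt);      // linear backoff before retrying
        }
        return false;                               // give up; caller must surface the failure
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a link that drops the first two sends.
        int[] calls = {0};
        boolean ok = sendWithRetry(() -> ++calls[0] >= 3, 5, 10);
        System.out.println(ok + " after " + calls[0] + " attempts");  // true after 3 attempts
    }
}
```

With the ~50 messages per session from the question, the device would also need to number messages and deduplicate on the receiving side, since a lost ack causes a resend of an already-delivered message.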
relatively low-latency (3 seconds absolute max)
Quite often you will be limited by physics, i.e. the speed of light. Low latency is hard.
scalable (say up to 500k devices in the future)
I don't know what this means, i.e. is this concurrent connections?
From wikipedia on NAT Traversal
Many techniques exist, but no single method works in every situation
since NAT behavior is not standardized. Many NAT traversal techniques
require assistance from a server at a publicly routable IP address.
Some methods use the server only when establishing the connection,
while others are based on relaying all data through it, which adds
bandwidth costs and increases latency, detrimental to real-time voice
and video communications.
i.e. any single technique will work only sometimes, i.e. it will be unreliable, so you need to use several methods to make it reliable.
As long as both endpoints are behind different NATs you don't control, it won't work reliably. No way. You need a relay.
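For the traffic pattern in the question, a relay is also cheap to run. A back-of-the-envelope sketch (figures taken from the question; framing and TLS overhead ignored):

```java
public class RelayLoad {
    // Aggregate relay payload per hour for the stated traffic pattern.
    static long bytesPerHour(long devices, long sessionsPerHour,
                             long messagesPerSession, long bytesPerMessage) {
        return devices * sessionsPerHour * messagesPerSession * bytesPerMessage;
    }

    public static void main(String[] args) {
        // From the question: 500k devices, ~1 session/hour,
        // ~50 messages of ~100 bytes per session.
        long perHour = bytesPerHour(500_000, 1, 50, 100);
        double avgBytesPerSecond = perHour / 3600.0;
        System.out.printf("%d bytes/hour, ~%.0f KB/s average payload%n",
                perHour, avgBytesPerSecond / 1024);
        // prints: 2500000000 bytes/hour, ~678 KB/s average payload
    }
}
```

Even at the 500k-device target, the average payload is well under 1 MB/s, so a single modest relay (plus a spare for redundancy) would carry it; the harder part is holding that many mostly-idle connections open.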
Related
I am working on a project that is meant for testing network diagnostics. The section I'm working on now involves testing TCP and UDP connections. One of the points of measurement is packet loss for both TCP and UDP.
In the past I used '/statistics/rx_dropped' (I'm leaving off the beginning portion of that file path); suffice to say it points to the total number of packets dropped on a specified network interface.
However, in Android N this file requires root to read and I can't assume that the phone this app will be on is rooted.
Does anyone have a decent way of measuring packet loss on the client side for at least TCP that doesn't require rooting?
I am mostly aware of networking jargon, so don't be shy. The reason I ask is that the way I measure has to be fairly elegant (either using some existing Java/Android library or finding a legal way of reading packet loss from the device).
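One root-free approach (not an existing library, just a sketch of the idea) is to measure loss at the application layer: send N UDP probes to a cooperating echo endpoint and count the replies. The example below is self-contained by running its own loopback echo server; in practice the echo endpoint would be your own diagnostics server:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class UdpLossProbe {
    // Send 'count' probes to host:port, wait briefly for each echo,
    // and return the fraction lost. Needs a cooperating echo server.
    static double measureLoss(InetAddress host, int port, int count) throws Exception {
        int received = 0;
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.setSoTimeout(200);                  // per-probe reply timeout
            byte[] buf = new byte[16];
            for (int i = 0; i < count; i++) {
                sock.send(new DatagramPacket(buf, buf.length, host, port));
                try {
                    sock.receive(new DatagramPacket(new byte[16], 16));
                    received++;
                } catch (SocketTimeoutException lost) { /* count as lost */ }
            }
        }
        return 1.0 - (double) received / count;
    }

    public static void main(String[] args) throws Exception {
        // Local echo server so the example is self-contained.
        DatagramSocket echo = new DatagramSocket(0);
        Thread t = new Thread(() -> {
            try {
                byte[] b = new byte[16];
                while (true) {
                    DatagramPacket p = new DatagramPacket(b, b.length);
                    echo.receive(p);
                    echo.send(p);                    // echo back to sender
                }
            } catch (Exception done) { /* socket closed */ }
        });
        t.setDaemon(true);
        t.start();

        double loss = measureLoss(InetAddress.getLoopbackAddress(), echo.getLocalPort(), 20);
        System.out.println("loss = " + loss);        // near 0.0 on loopback
        echo.close();
    }
}
```

For TCP this only gives you indirect signals: the stack retransmits silently, so loss shows up as added latency and jitter rather than as a drop count you can read without root.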
So I have been experimenting with multi-peer networks. Ultimately I am going to try different frameworks to build one that can connect devices of the same OS through Bluetooth and WiFi, and ones of different types through WiFi.
My first shot was Apple's Multipeer Connectivity. Unfortunately I had about 0.5 seconds of delay (I didn't actually measure this; it's just an estimate) before even one bit of information actually got to the other device. I suspect the framework is optimized for larger, encrypted data far more than for 1-32 bit jobs.
I was just wondering what you guys know about the latency of other frameworks out there, since it takes a decent chunk of time for me to learn each new framework. Is a latency of about 0.5 seconds the best the industry has?
Honestly I would be happy if there were a library optimized to send 1 bit to each connected device every 1/60th of a second. But I think most of these networks package up the data as if it were of bigger size anyway.
I sorta wish mobile devices had NFC. Just look at systems like the 3ds that can do multi-peer multiplayer (smash-bros) with really really small latency and great accuracy.
Try changing the MCSessionSendDataMode to MCSessionSendDataUnreliable
MCSessionSendDataUnreliable
Messages to peers should be sent immediately without socket-level queueing. If a message cannot be sent immediately, it should be dropped. The order of messages is not guaranteed.
This message type should be used for data that ceases to be relevant if delayed, such as real-time gaming data.
It depends how reliable you really need the data to be, but on a closed network it should be very reliable anyway.
We are developing a synchronous multiplayer game. As it stands one of the players is selected as the server instead of connecting the clients to a dedicated server.
With the restricted environment of mobile apps, should we still be worried about cheating (from the player running the server), or is this a non-issue in the mobile space? Are there any other major concerns we should look out for if we decide to stick with players hosting the game?
All of the below is about Android. iOS is more secure, but the server load issue still applies there too.
If you store game data on the SD card, any app can access that data. You could encrypt it, but it would still be a liability (like the Whatsapp hack here: techcrunch.com/2014/03/12/hole-in-whatsapp-for-android-lets-hackers-steal-your-conversations/)
If someone were to implement a low-level interception / modification of your game server network traffic, this could also be a problem. (http://www.justbeck.com/modifying-data-in-transit-to-android-apps-using-burp-and-backtrack-5/)
If you are using a Service, make sure it's a local service so it's only accessible from your app.
Also, the "restricted" aspect of Android systems can be easily removed by rooting the device.
Another thing to consider is network and CPU load. Both of these could grow very fast, making the server laggy or even crash, given the relatively low capacity of Android devices compared to dedicated servers. Of course, this depends on the amount of work the server has to do per client.
In general, dedicated servers are a good idea, even for Android games I think.
I'd look at this from two different points of view:
Cost/benefit: bear in mind that a dedicated server will impact your budget, so ask yourself whether cheating is really a concern or not. I'd treat the mobile space like any other space.
Game quality: while #1 is your point of view, this is your players' point of view... Are they going to feel something is wrong and suspect cheating? Maybe. You can address this with a reputation system for the player hosting the server.
What upload speed should I expect to be available to my users? This is for image attachments to emails.
A test we did over the phone for one user came out to about 100 kilobits per second.
The reason I ask is I am not sure whether my uploader is creating an unusual amount of latency (edit: I mean time wasted between actual uploads of a chunk). It uploads in parts using separate HttpPost requests, base64-encodes the parts, and sends them as POST parameters instead of using a "multipart file upload" like a browser would.
This is the only test I have done with an end user and I don't actually own an Android phone.
100 Kbps does not seem so bad. The actual max speed for UMTS is 384 Kbps, but I never saw more than 250 Kbps, and that's with a very good signal. HSUPA speeds, on the other hand, can be tenfold faster, but only a few phones in the US support it, with a lot more in Europe. Given the really big variability in speed due to signal issues, my guess is that it won't be a bottleneck in your software. You should, nevertheless, consider that phones can still drop into 2G zones or even lose signal in the middle of the transfer. A failure due to loss of signal is a much worse problem for the end user than a couple more seconds waiting for the transfer (which, in any case, should be done in the background).
Whether for upload or download, data transfer speed over 3G can vary a great deal depending on how close or far the user equipment is from the nearby base station transmitting, and on the conditions between the user and that base station: nearby buildings, being inside or outside a building, interference, and so on. The modulation used will also depend on this, which greatly changes connection speeds for users in different conditions.
In any case, the speed you get for the data transfer and the latency are two different things.
I'm not sure of the implications in your specific application and the HTTP protocol, but you mention separate requests, so it should work.
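One measurable cost of the base64-as-POST-parameter approach described in the question: base64 turns every 3 payload bytes into 4 characters, about 33% overhead before the transfer even starts. A quick check:

```java
import java.util.Base64;

public class Base64Overhead {
    public static void main(String[] args) {
        byte[] chunk = new byte[300_000];        // hypothetical 300 KB image chunk
        int encodedLen = Base64.getEncoder().encodeToString(chunk).length();

        System.out.println("raw bytes:    " + chunk.length);    // 300000
        System.out.println("base64 chars: " + encodedLen);      // 400000
        System.out.printf("overhead:     %.1f%%%n",
                100.0 * (encodedLen - chunk.length) / chunk.length);  // 33.3%
    }
}
```

On top of that, the '+', '/' and '=' characters of the base64 alphabet get percent-encoded in form data, adding further overhead, which is one reason browsers use raw multipart uploads instead.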
I'm working on a project creating a fairly simple one-to-many host-to-slave network using a bunch of Android devices. What would be the best way to go about doing this?
A friend recommended Bluetooth, which I think would work very well for small local networks. This is actually one core component of the project we're doing: groups of devices in a maximum area of maybe 50 square meters (a large lecture hall, for example).
What would be the best way to connect a number of devices at a large distance from each other? Would rapid polling of the devices be possible? The project is based around device and user response time (sort of a response time test, in fact). Will having a host and guest network created on-the-fly be robust enough to detect response times in the milliseconds?
Thanks for any help, and I'm relatively new to this so if I wasn't clear or if any of my thinking was ridiculous, please let me know.
There is no single best way for something; there are particular solutions for particular cases.
Bluetooth is a short-range wireless communication protocol: the maximum distance is 100 meters for class 1 devices, 10 meters for class 2, and 1 meter for class 3. It has a master/slave topology, and a master can have a maximum of 7 active slaves in a piconet; if you have more devices than that, you move to a scatternet and things change.
You should also study what WiFi has to offer. For polling, I think you could take a look at SNMP; maybe that kind of instrumentation already exists.
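On the millisecond response times asked about above: whatever transport you pick, you can at least measure the round trip precisely with System.nanoTime(). A self-contained loopback sketch (the echo server here stands in for the host device):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class RttProbe {
    // Measure one request/response round trip to an echo endpoint, in ms.
    static double measureRttMs(String host, int port) throws Exception {
        try (Socket client = new Socket(host, port)) {
            client.setTcpNoDelay(true);   // don't let Nagle batch the 1-byte probe
            OutputStream out = client.getOutputStream();
            InputStream in = client.getInputStream();
            long start = System.nanoTime();
            out.write(42);
            out.flush();
            in.read();                    // block until the echo comes back
            return (System.nanoTime() - start) / 1_000_000.0;
        }
    }

    public static void main(String[] args) throws Exception {
        // Loopback echo server standing in for the "host" device.
        ServerSocket server = new ServerSocket(0);
        Thread t = new Thread(() -> {
            try (Socket s = server.accept()) {
                int b;
                while ((b = s.getInputStream().read()) != -1)
                    s.getOutputStream().write(b);   // echo each byte
            } catch (Exception ignored) { }
        });
        t.setDaemon(true);
        t.start();

        System.out.printf("RTT: %.3f ms%n", measureRttMs("127.0.0.1", server.getLocalPort()));
        server.close();
    }
}
```

Over a real on-the-fly WiFi or Bluetooth network the RTT will be in the single-digit to tens-of-milliseconds range and will jitter, so for a response-time test you'd want to subtract half the measured RTT from each reported reaction time.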