TL;DR
android, board-like game.
latency unimportant.
client-server via LAN/Global, potential high score => no trust.
potential bluetooth layer in future => full trust.
mobile means expensive traffic, upstream more expensive than downstream.
PRNG used a lot, must be deterministic.
kryonet for serialization + transmission
A couple of pals and I are building our "port"/"flavor" of a board game (with a lot of different event types and permutations of all sorts) for Android and adding a networked multiplayer aspect to it. We are no encryption or networking experts =)
The game's default multiplayer layer will be client-server based (the server is either a dedicated host or runs on an Android device [in case you're out in the woods with your friends and you have no network]). This might be a global server, or LAN - it doesn't matter much.
But there might be a potential high score which needs some anti-cheating mechanisms.
As it is a turn-based board-like game, latency is not an issue either.
In the future, if we have time, we might add a Bluetooth-based layer, but in that case cheating is not an issue, as the assumption is that the people know each other well.
Since we're dealing with a (most likely) mobile network where upstream is more expensive than downstream, downloading from the server is cheaper than sending the models from the client.
We'll probably be using kryonet for the serialization and transmission of data.
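For context, the transport would look roughly like the minimal KryoNet sketch below. This is only a sketch of how we imagine it; TurnMessage, its fields and the port numbers are placeholders, not our actual protocol.

```java
import java.io.IOException;

import com.esotericsoftware.kryonet.Client;
import com.esotericsoftware.kryonet.Connection;
import com.esotericsoftware.kryonet.Listener;
import com.esotericsoftware.kryonet.Server;

// Minimal KryoNet sketch: register the message classes on both sides, then send objects over TCP.
// TurnMessage and the port numbers are placeholders.
public class NetworkSetup {

    public static class TurnMessage {        // needs a no-arg constructor for Kryo
        public int inputType;
        public int[] bucket;                 // the "bucket of randoms" for this turn
        public byte[] modelHash;
    }

    public static Server startServer() throws IOException {
        Server server = new Server();
        server.getKryo().register(TurnMessage.class);
        server.getKryo().register(int[].class);
        server.getKryo().register(byte[].class);
        server.start();
        server.bind(54555, 54777);           // TCP and UDP ports (placeholders)
        server.addListener(new Listener() {
            @Override
            public void received(Connection connection, Object object) {
                if (object instanceof TurnMessage) {
                    // verify bucket + hash here, then commit or reject
                }
            }
        });
        return server;
    }

    public static Client connectClient(String host) throws IOException {
        Client client = new Client();
        client.getKryo().register(TurnMessage.class);
        client.getKryo().register(int[].class);
        client.getKryo().register(byte[].class);
        client.start();
        client.connect(5000, host, 54555, 54777);
        return client;
    }
}
```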
The game will use PRNGs quite frequently, and as such it needs some good anti-cheat and verification logic, so after a lot of searching and reading at GameDev and Stack Overflow, I've devised the following logic. I need some input on how sound the plan is. All tips and recommendations are greatly appreciated.
Assumptions / Considerations:
PRNG behaves deterministically ( thinking of using Mersenne Twister ).
Normal game play shouldn't be penalized due to anti-cheat measures.
Scheme / Plan:
On "handshake" / first connection since "onResume()" - acquire PRNG seed from server and store it. Each player has its own seed in server.
Also send hash of game state to server in same request and if out of sync,
sync game state.
On action / user input, get next random number -> save to "bucket of randoms" (hence known as bucket - the bucket is a "trapped list" to which you may add but never remove). Also save input ID/Type/Enum to a list.
Accumulate changes until other players have to know (i.e next players turn) / PNR (Point of No Return)
PNR reached
Send change ID/Type/Enum ( ca. 64 bits ) + bucket + hash (post change) of model. Should bucket + hash be encrypted, maybe AES 256 (other encryption tech that's more suitable?).
Check bucket against N consecutive random numbers where N = size( bucket ). If no match, goto 6, then 7.
Make changes to model temporarily (without committing to full game model).
Compute hash and check against client provided hash. If invalid goto 6.
Valid: commit to game model on server.
Invalid: ask client to revert to state sent back by server.
Cheat: Ban ( not IP [dynamic IPs...], rather MAC addr or UID ) / Remove from game on several cheats.
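To make the verification steps concrete, here is a rough sketch of what I imagine the server doing. It is only a sketch: java.util.Random stands in for the deterministic PRNG (a Mersenne Twister implementation would be used the same way), and the model is assumed to already be serialized to bytes (e.g. via Kryo).

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

// Rough server-side sketch of steps 4b/4d: replay the client's random draws and
// compare the post-change model hash.
public class TurnVerifier {

    private final Random serverPrng;   // seeded with the per-player seed handed out on handshake

    public TurnVerifier(long perPlayerSeed) {
        this.serverPrng = new Random(perPlayerSeed);
    }

    /** Checks the client's "bucket" against the next N numbers of the server-side PRNG stream. */
    public boolean bucketMatches(List<Integer> clientBucket) {
        for (int claimed : clientBucket) {
            if (serverPrng.nextInt() != claimed) {
                return false;          // client drew numbers it should not have been able to draw
            }
        }
        return true;
    }

    /** Applies the change to a temporary copy of the model server-side, then compares hashes. */
    public boolean hashMatches(byte[] serializedTentativeModel, byte[] clientHash) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return Arrays.equals(sha.digest(serializedTentativeModel), clientHash);
    }
}
```

If both checks pass, the server commits (step 5); otherwise it answers with its authoritative state (step 6), and repeated failures feed into step 7.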
EDIT: Also - another question regarding anti-cheat... Any good ideas for antimeasures for bot teams of 2 farming highscores for a board-like game?
EDIT2: Here's the links to the pre-question research I did:
https://gamedev.stackexchange.com/questions/4181/how-can-i-prevent-cheating-on-global-highscore-tables
Good link (for de-centralized, client-client protocol): https://gamedev.stackexchange.com/questions/47145/peer-to-peer-hostless-competitive-games-of-chance
http://en.wikipedia.org/wiki/Commitment_scheme
iphone puzzle game: Code examples for simple game servers
Look into:
5) Use some lightweight polymorphic encoding on your game connections.
6) Use some anti-debugging techniques to prevent debuggers from attaching to your processes. Google anti-debugging and you should be able to find lots of stuff.
7) Use a custom proprietary PE packer to prevent useful disassembly of your game.
8) Use hashes as a promise, then reveal the meaning of the hashed promise once conditions on the behavior of the other players are met (a small commitment-scheme sketch follows after this list).
It's complicated, and it has performance impact, but some of the ideas may be useful, particularly to peer to peer games.
9) I think a good way to make the problem harder for crackers is to keep the only authoritative copies of the game state on your servers, only sending updates to and receiving updates from the clients. That way you can embed client validation in the communication protocol itself (that the client hasn't been cracked and thus the detection rules are still in place).
That, and actively monitoring for new weird behavior found might get you close to where you want to be.
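For item 8, the way I understand the hash-as-promise idea (see the commitment scheme link above) is roughly the following minimal sketch. SHA-256 plus a random nonce are my assumptions; this is not tied to our actual game code.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

// Minimal hash-commitment sketch: commit to a value now, reveal it later,
// and let the other party verify the reveal against the earlier commitment.
public class Commitment {

    public static byte[] commit(String secret, byte[] nonce) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(secret.getBytes(StandardCharsets.UTF_8));
        sha.update(nonce);                       // nonce prevents guessing small value spaces
        return sha.digest();
    }

    public static boolean verify(byte[] commitment, String revealedSecret, byte[] revealedNonce) throws Exception {
        return Arrays.equals(commitment, commit(revealedSecret, revealedNonce));
    }

    public static void main(String[] args) throws Exception {
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);

        byte[] c = commit("I will play card 7", nonce);   // published up front
        // ... later, once the other players have moved, reveal secret + nonce ...
        System.out.println("commitment: " + Base64.getEncoder().encodeToString(c));
        System.out.println("valid reveal: " + verify(c, "I will play card 7", nonce));
    }
}
```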
Realtime FPS related, less relevance to us:
Client-Server: How to prevent cheating in our (multiplayer) games?
linked: How to secure client-side anti-cheat
Automation: Protection against automation
Trusting clients, Howto take latency differences into consideration when verifying location differences with timestamps (anti-cheating)?
https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
Do good multiplayer/mmo client<>server games use latency within movement calculations?
That's about it, hope you've got some tips!
Related
We need to build a server that can communicate with some embedded devices running a variant of Android. We need to be able to send commands to the device and receive a response. A simple command might be asking the device for its status. We won't have HTTP, so we need to have the client/device establish a connection with the server.
We were considering using MQTT as it has a lot of nice properties (QoS, lightweight, built for IoT), but it doesn't natively support a request response workflow.
We have considered building RPC on top of MQTT, but before we do I just wanted peoples thoughts on the matter. Would Websockets, WAMP, ZeroMQ be a better approach?
Edit:
Q1: Do we even need RPC?
Q2: Is there an approach to building systems where I always send async type messages and still provide a good user experience?
Q3: Any examples?
Looking for implementation examples and hands on experience of building an IoT communication system beyond a toy example with a single device.
"one-size-fits-all" may sound as a "smart" slogan for T-shirts but causes nightmare for ex-post attempts to fix poorly designed architectures once real-world implementations scale
"right-sizing" and "Minimum-Viable-Product" strategies for just-enough designs have much better chance to survive IoT scales and to keep costs-of-adaptation acceptable ( take just the scales of the recent VW global device firmware update, expected to have about -2.5% to -3.0% GDP adverse impacts on Germany and automotive supply chains in Hungary and former Czechoslovakia regions - Yes, costs matter in IoT domain more than just the trivial count$.)
A smart-fit tool for IoT domain-specific architecture is a must
The first thing that ought to be borne in mind is the fact that the IoT domain differs by several orders of magnitude from the scales of classical legacy computing architectures: minimised local resources (by design, as also mentioned above), massive scales/counts with uncontrolled concurrency, and immense synchronisation complications for true parallelism (if such a system design is needed), ref.: a PARALLEL v/s CONCURRENT SEQUENTIAL Disambiguation Link.
Thus a proper selection of tools is needed in context with this given state.
While AMQP and other power-MQ tools are great for broker-based architectures (if well designed, the central MQ broker need not be a single point of failure and remains "just" a performance bottleneck), the overheads for architectures with IoT devices have to be carefully validated for feasibility.
Broker-less ZeroCopy, ZeroSharing, ZeroBlocking, ZeroLatency(...almost)
While AMQP opened the door to the broker-less powers of the well-known ZeroMQ, things went another step further when Martin Sustrik redefined the rules and came up with nanomsg.
nanomsg, besides its portability and light weight (or rather just-enough right weight), makes a good candidate for IoT models of co-operation, giving your project much more than the REQ/REP you asked for where needed - more advanced behaviours such as:
- SURVEY (one asks, all vote)
- BUS (decentralised routing)
- PIPE (a directed, one-way pipe)
These are particularly attractive in distributed process compositions in massive sensor networks, and a lovely example.
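For the plain REQ/REP case you asked about, a minimal sketch could look like the one below. I am assuming the JeroMQ binding for ZeroMQ (org.zeromq:jeromq); the endpoint address and payloads are made up.

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Minimal broker-less REQ/REP sketch with JeroMQ (a pure-Java ZeroMQ implementation).
// The client side uses SocketType.REQ, connect("tcp://server:5555"), send(), then recvStr().
public class StatusServer {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket responder = context.createSocket(SocketType.REP);
            responder.bind("tcp://*:5555");               // hypothetical endpoint

            while (!Thread.currentThread().isInterrupted()) {
                String request = responder.recvStr(0);    // e.g. "STATUS?"
                responder.send("OK", 0);                  // device replies with its status
            }
        }
    }
}
```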
Answers for added questions:
A1: Yes, if the design architecture requires it, RPC might use the same uniform signalling framework (not reinventing the wheel or adding just-another-distributed-layer just for Remote Procedure Call).
A2: Yes, ZeroMQ and the similar broker-less, almost zero-latency nanomsg framework from Martin Sustrik are a good fit for inter-process messaging/signalling services. Your top-level design decides whether these powers get harnessed anywhere near their (awfully magnificent) full potential or wasted on underperforming usage patterns. To get an idea of their limits, FOREX event streams produce spurious blasts of events with sub-microsecond-resolution time-stamping.
There you really need a framework that is robust (to handle such blasts), fast (not to add unnecessary delays) and elastically, linearly scalable (with built-in abilities to handle load-balancing on demand many-fold). After hands-on experience I can confirm that my own team's creativity is the real limiting factor for user experience, not the ZeroMQ / nanomsg smart frameworks.
A3: Yes, for a few years already we have been using ZeroMQ (DLL/LIB adaptations are currently in progress for a nanomsg port) for remote (load-balanced) central logging (soft-realtime, minimum-latency-motivated off-loading of distributed agents' capabilities). Unless your system span grows into space (where round-trip latencies are easily in minutes to hours), this modus operandi is both smart and close to "just-enough"-design ideals.
Based on your requirement of a lightweight request/response protocol for IoT, CoAP (http://coap.technology/), an IETF standard, might be useful. It's lightweight, and you can build RESTful services on top of it.
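A minimal CoAP request could look like the sketch below, assuming the Eclipse Californium (californium-core) library; the host and resource path are made up. The same library also provides CoapServer/CoapResource for the device side.

```java
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapResponse;

// Minimal CoAP GET sketch with Eclipse Californium; host and path are hypothetical.
public class CoapStatusCheck {
    public static void main(String[] args) {
        CoapClient client = new CoapClient("coap://device.local:5683/status");
        CoapResponse response = client.get();            // confirmable GET, blocks until reply or timeout
        if (response != null) {
            System.out.println(response.getCode() + ": " + response.getResponseText());
        } else {
            System.out.println("No response (device unreachable?)");
        }
        client.shutdown();
    }
}
```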
The other thing worth considering is the "data model" and the "service interfaces" for your server. Choosing a standards-based communication protocol, such as HTTP, MQTT or CoAP, is important, but it might be equally important to choose a standards-based, interoperable sensor data model and interfaces, so that your application can be interoperable and you don't need to worry about it becoming obsolete soon. The Open Geospatial Consortium (OGC) SensorThings API (http://ogc-iot.github.io/ogc-iot-api/) might be an option to consider. It is an open standard, and its data model is based on ISO 19156 Observation and Measurement.
I would suggest using AMQP if one of your requirements is a request/response pattern.
The AMQP protocol supports this pattern natively with a "correlation" mechanism between the request and the response.
In your environment you could try to use Apache Qpid Proton in C, or eventually any of the available language bindings such as Java (for your Android-based system).
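To illustrate only the correlation mechanism (this is not a full Proton example: the connection/link setup is omitted and the addresses are made up), the request carries a message-id and a reply-to address, and the response echoes that id as its correlation-id:

```java
import org.apache.qpid.proton.Proton;
import org.apache.qpid.proton.amqp.messaging.AmqpValue;
import org.apache.qpid.proton.message.Message;

// Sketch of AMQP 1.0 request/response correlation using proton-j message objects only;
// actually sending/receiving over a connection is omitted here.
public class AmqpCorrelationSketch {
    public static void main(String[] args) {
        // Request: carries a unique message-id and the address the reply should go to.
        Message request = Proton.message();
        request.setMessageId("req-42");                     // hypothetical id
        request.setAddress("device/1234/commands");         // hypothetical request address
        request.setReplyTo("server/responses");             // where the device should reply
        request.setBody(new AmqpValue("GET_STATUS"));

        // Response: the device copies the request's message-id into correlation-id.
        Message response = Proton.message();
        response.setAddress("server/responses");
        response.setCorrelationId(request.getMessageId());
        response.setBody(new AmqpValue("STATUS: OK"));

        // The server matches the response to its pending request by correlation-id.
        boolean matches = request.getMessageId().equals(response.getCorrelationId());
        System.out.println("response belongs to request: " + matches);
    }
}
```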
For those already using MQTT communications and wanting request/response over their service, you can try replyer (https://github.com/netbeast/replyer), which is a strategy on top of the MQTT packet structure and protocol rather than a new protocol.
I'd advise not creating your own protocol, but using the LoRaWAN protocol, which already contains join/accept (the same as request/response) procedures.
Here's the spec of the LoRaWAN protocol - page 47 describes join/accept.
Basically, RPC and message passing are functionally equivalent, as I believe was formally proved by Prof. Needham in Cambridge back in the '70s. As you say, MQTT has some nice transport properties designed to help with small-footprint, intermittently connected devices.
The point about RPC is that it enables a synchronous, single-thread style of programming. However, if you are using Android, it's kind of unlikely that you will really be prepared for a UI to synchronously wait for an RPC to complete. Therefore, my personal opinion is that I find it easier to use a straight messaging system, such as MQTT, and track the state of the transaction however you want (state machine, state variable, whatever).
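To make the "straight messaging, track the transaction yourself" idea concrete, here is a rough sketch using the Eclipse Paho client. The topic names and the correlation-id-in-the-topic convention are made up, and a simple map of pending futures stands in for whatever state machine you prefer.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

// Sketch: request/response over plain MQTT by publishing a command and tracking
// the pending transaction until a reply arrives on a response topic.
public class MqttRequester {

    private final MqttClient client;
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    public MqttRequester(String brokerUri) throws MqttException {
        client = new MqttClient(brokerUri, MqttClient.generateClientId());
        client.connect();
        // Devices are assumed (hypothetically) to reply on devices/<id>/response/<correlationId>.
        client.subscribe("devices/+/response/#", (topic, msg) -> {
            String correlationId = topic.substring(topic.lastIndexOf('/') + 1);
            CompletableFuture<String> future = pending.remove(correlationId);
            if (future != null) {
                future.complete(new String(msg.getPayload(), StandardCharsets.UTF_8));
            }
        });
    }

    /** Publishes a command and returns a future that completes when the device replies. */
    public CompletableFuture<String> request(String deviceId, String command) throws MqttException {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        MqttMessage msg = new MqttMessage(command.getBytes(StandardCharsets.UTF_8));
        msg.setQos(1);                                    // at-least-once delivery
        client.publish("devices/" + deviceId + "/command/" + correlationId, msg);
        return future;                                    // the UI attaches a callback instead of blocking
    }
}
```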
As far as non-toy examples of MQTT-based UIs go, you could check out our platform http://www.thingstud.io. With MQTT, multiple devices are a non-issue, as the UI is not even aware whether it is talking to one device or many.
Mike
Can't speak to the other protocols but MQTT does have some features that you may want to look into:
If you are just trying to figure out whether a device is connected or not, you can use a feature called 'last will' to send a pre-determined message on timeout or disconnect. Using that and Quality-of-service levels you should be able to keep track of the device state enough to know whether your messages are being received or not, and then monitor the publishing channels from the devices to process the responses.
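The last will is configured on the connect options before connecting. For example, with the Eclipse Paho client (the broker address, client id, topic and payload below are made up):

```java
import java.nio.charset.StandardCharsets;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;

// Sketch: the broker publishes the "last will" for us if this device drops off unexpectedly,
// so the server can track device presence without polling.
public class DeviceConnection {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://broker.example:1883", "device-1234");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setWill("devices/device-1234/state",                  // presence topic (made up)
                        "offline".getBytes(StandardCharsets.UTF_8),   // payload the broker sends on our behalf
                        1,                                            // QoS 1
                        true);                                        // retained, so late subscribers see it
        client.connect(options);

        // Announce we are online; the retained "offline" will replace this if we vanish.
        client.publish("devices/device-1234/state",
                       "online".getBytes(StandardCharsets.UTF_8), 1, true);
    }
}
```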
If you need just request/response protocol you can go for CoAP (http://coap.technology/), it is like HTTP and has HTTP verb support.
MQTT follows a pub/sub model. Ideally speaking, you need a third machine which runs the MQTT broker.
I'm developing an Android application to control my quadcopter from the smartphone: I have a periodic process that sends the data acquired from the touchscreen.
The data is then received by a microcontroller, which generates a PWM command for 4 DC motors, obtaining the duty-cycle values with a control loop that exploits the received commands.
Can someone suggest a precise criterion to choose the period of the process on the smartphone? Or is only a "trial and error" approach possible, checking the reactivity of the system?
EDIT: I have successfully implemented it by just setting the frequency of the smartphone task to 2*control_loop_frequency.
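Roughly like the sketch below, for reference. CONTROL_LOOP_HZ, the fields and sendCommand() are placeholders, and (as the answer below notes) Android's timing still jitters.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: send the latest touchscreen command at 2x the microcontroller's control-loop rate.
// Android is not an RTOS, so the actual period will jitter somewhat.
public class CommandSender {

    private static final int CONTROL_LOOP_HZ = 50;                 // hypothetical control-loop frequency
    private static final long PERIOD_US = 1_000_000L / (2 * CONTROL_LOOP_HZ);

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile float throttle, roll, pitch, yaw;             // written from the touch listener

    public void start() {
        scheduler.scheduleAtFixedRate(
                () -> sendCommand(throttle, roll, pitch, yaw),
                0, PERIOD_US, TimeUnit.MICROSECONDS);
    }

    private void sendCommand(float t, float r, float p, float y) {
        // Placeholder: write the packet to the radio / socket / USB serial link here.
    }
}
```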
If you knew or could measure the impulse response of the system it would be possible to determine an appropriate control loop rate; however you do not have that data and it will be confounded in any case by external factors such as wind speed and direction. Determining the rate empirically will be faster than determining the precise characteristics.
If the control is open-loop, then you probably have to ask yourself how far off the desired course you can allow the vehicle to get before a correction is applied. That will depend on the vehicle's maximum speed (in any direction).
In the end, however, Android is not a real-time operating system, so there are no guarantees that any particular periodic update will be performed precisely; it's always going to be somewhat non-deterministic. At a guess I would imagine that such a system might manage a 10 Hz update reasonably reliably, and that would probably be sufficient for adequate control and responsiveness - if the only feedback is via the human controller's hand-eye coordination, that is perhaps the limiting factor in the system response.
I have 2 Android phones, both connected to the same WiFi, both with Bluetooth.
I want some method that syncs somehow the phones and starts a function on the same time on both phones.
For example playing a song at the same time.
I already tried with Bluetooth but it lags, sometimes by 0.5 s. I want something within ±0.01 s if possible.
Someone suggested playing it 2-3 seconds in the future and sending the timestamp, but how do you sync the internal clocks of the devices then?
Before calling that particular method, try to measure the latency between the two devices:
1. First device says Hi (store the current time).
2. Second device receives the Hi.
3. Second device says Hi back!
4. First device receives the Hi. Latency ≈ (currentTime - storedTime) / 2.
Now that you have the latency, send your request to the second device to start your particular method, and start it on the first one after the latency.
Try to measure the latency 5 to 10 times to be more accurate.
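A sketch of that ping exchange over a plain TCP socket (the peer address, port and message are made up), keeping the best of several samples:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Sketch: estimate one-way latency as half the round-trip time of a tiny "Hi" exchange,
// repeated a few times and keeping the smallest sample.
public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        long best = Long.MAX_VALUE;
        try (Socket socket = new Socket("192.168.1.42", 9000);          // hypothetical peer
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {

            for (int i = 0; i < 10; i++) {
                long sent = System.nanoTime();
                out.println("Hi");                                      // step 1: first device says Hi
                in.readLine();                                          // step 4: receives the Hi back
                best = Math.min(best, System.nanoTime() - sent);
            }
        }
        long oneWayMillis = (best / 2) / 1_000_000;                     // latency ~ RTT / 2
        System.out.println("Estimated one-way latency: " + oneWayMillis + " ms");
        // Then tell the peer to start, and start locally after the estimated latency.
    }
}
```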
You have a way to transfer data between the devices, right?
If so, you can send a timestamp which is in the future,
e.g. if the present timestamp is 1421242326 you send 1421242329 or something, and start the function at that time on both devices.
Basically use @Dula's suggestion (device 1 sends a command to device 2 and gives a "start time" which lies in the future). Both devices then start the action at the same time (in the future).
To make sure that the devices are synchronized, you can use a server-based time sync (assuming that both devices have Internet access). To do this, each device contacts the same server (using NTP, or HTTP-based NTP, or contacts a known HTTP server, like www.google.com and uses the value in the "Date" header of the HTTP response). The "server-date" is compared to the system clock on the device, and the difference is the "time-offset from server-time". The time-offsets can be used to synchronize on the "server-time", which is then used as the time base for the actual action (playing the media, etc.).
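A rough sketch of the HTTP-Date variant follows. Note that the Date header only has one-second resolution, so for ±10 ms you would want real NTP/SNTP; the URL here is just an example.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: estimate this device's offset from a shared server clock using the HTTP "Date" header.
// The Date header has 1-second granularity; NTP/SNTP is needed for tighter sync.
public class ClockOffset {
    public static long offsetFromServerMillis(String urlString) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        conn.setRequestMethod("HEAD");                           // we only need the headers
        long requestStart = System.currentTimeMillis();
        long serverTime = conn.getHeaderFieldDate("Date", 0L);   // parsed from the response header
        long requestEnd = System.currentTimeMillis();
        conn.disconnect();
        long localMidpoint = (requestStart + requestEnd) / 2;    // crude compensation for transfer time
        return serverTime - localMidpoint;                       // add this to the local clock to get "server time"
    }

    public static void main(String[] args) throws Exception {
        long offset = offsetFromServerMillis("https://www.google.com");
        // Both devices compute their own offset, then agree to start at a "server time" in the future:
        long startAtServerTime = System.currentTimeMillis() + offset + 3000;  // 3 s from now, in server time
        System.out.println("offset=" + offset + " ms, start at server time " + startAtServerTime);
    }
}
```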
If your WiFi router allows clients to talk to each other (many public hotspots disable this), you could implement a simple socket listener on one (or each) device and have the initiating device broadcast a message.
For more complicated things and network flexibility, I've had good success with connected sessions using AllJoyn. There is a bit of a learning curve to do interesting things, but the simple stuff is pretty easy once you understand the architecture.
Use a server to provide a synchronous event to just the two clients who have declared their mutual affinity (a random value as a param and a pair serializer, Partner-1 or Partner-2, which they share prior to their respective calls for the sync event).
Assume both clients are on the same subnet (packets from the 2 events, serialized on the server, arrive across the network at the 2 clients simultaneously on the client side). This provides synchronous plays by the 2 bound clients.
The event delivered by the server is either a confirmation to play the queued selected track OR a broadcast (decoupled, more formal).
The only tricky thing is the server-side algorithm implementing this:
Queue a pair of requests, or error.
Part-1 and Part-2 with the same random value constitute a valid pair if both are received before either times out.
On a valid pair, schedule both to the same future event in their respective, committed responses.
OnSchedule, do the actual IO for the 2 paired requests. The respective packets will arrive back at the respective clients at the same time, each response having been subject to equal network latency.
No good if two different carrier 4G or LTE networks are involved. (Oops)
This is possible via sockets: you send an event via a socket and the other device receives that event. To learn more, look at a Socket.IO chat example.
Maybe it's not the answer you are looking for, but I think that due to the high precision you want, you should look at a push technology. I advise you to take a look at SignalR. It's a real-time technology which gives you an abstraction over sending methods; it has built-in methods like Clients.All.Broadcast that fit your needs.
You can try to use some MQTT framework to send messages between the two devices, or among a larger set of devices.
I've been doing some reading about the various forms of multiplayer that exist today. In a nut shell, I believe the industry standard way involves the following:
Running the physics for each client on the client machine/device
Sending the input data of this physics to the server (could be another client running a 'server' session along with their own client data)
The server processes this data to determine if the client is making legitimate moves, and if not, forces the client to sync to its instructions (rubber banding).
Server forwards historic data from other clients to each client for simulation.
The effective result, from each client's perspective, is playing themselves in the present while seeing the other clients in the past.
Hit detection is performed on the server by 'rewinding' the game state to the timestamp at which an event occurred, to see where the affected players were at that point in time.
Presently, I use a pure dead-reckoning system. Inputs are collected from each client and physics is calculated on each client. This works, however units quickly get out of sync and rubber-band because the dependency on a player's previous position/speed/orientation is not high enough. AKA: they are free to change directions and speed quickly and often.
With that being said, how do I solve this?
Is my client simulator effectively supposed to collect several data points for each player and interpolate the rendering between those nodes? (IE: each client has multiple data points about all the other clients historic positions).
As such, my client's simulator drawing player B, would have positions X0, X1, X2, X3 in a queue at time 0. Between the transition of Time 0 -> Time 1, I know the starting relevant values (location, speed, orientation, etc) and where he should be come Time 1.
Is the solution to interpolate these values between these known historic times and data points?
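Concretely, is it something like this bare-bones sketch, where each client keeps a few timestamped snapshots per remote player and renders slightly in the past? (All class and field names are made up; this is just how I imagine it.)

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: render remote players ~100 ms in the past, interpolating between the two
// buffered snapshots that bracket the render time.
public class RemotePlayerView {

    static class Snapshot {
        final long timeMs;
        final float x, y, heading;
        Snapshot(long timeMs, float x, float y, float heading) {
            this.timeMs = timeMs; this.x = x; this.y = y; this.heading = heading;
        }
    }

    private static final long RENDER_DELAY_MS = 100;        // how far in the past remote players are drawn
    private final Deque<Snapshot> buffer = new ArrayDeque<>();

    /** Called whenever a state update for this player arrives from the server. */
    public void onSnapshot(Snapshot s) {
        buffer.addLast(s);
        while (buffer.size() > 32) buffer.removeFirst();     // drop ancient history
    }

    /** Returns the interpolated position for the current frame (null if there is too little data). */
    public float[] positionAt(long nowMs) {
        long renderTime = nowMs - RENDER_DELAY_MS;
        Snapshot before = null, after = null;
        for (Snapshot s : buffer) {
            if (s.timeMs <= renderTime) before = s;
            else { after = s; break; }
        }
        if (before == null || after == null) return null;    // fall back to extrapolation / last known
        float t = (renderTime - before.timeMs) / (float) (after.timeMs - before.timeMs);
        return new float[] {
                before.x + (after.x - before.x) * t,          // linear interpolation; splines also work
                before.y + (after.y - before.y) * t
        };
    }
}
```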
Thanks!
Ryan
I am fairly new to Android development and I have recently been exploring USB Host.
Would anyone be able to tell me how to use bulk transfer so that I can see what an external camera sees, but show it on my tablet instead?
Camera : Canon Powershot A1300
Tablet : Iconia A200
I have looked around Stack Overflow and some other forums but have not yet been able to find a good explanation of how to use bulk transfer or what constants to use as parameters for retrieving certain data.
I am able to see the endpoints and set up a connection with the external camera but I do not know where to go from here.
Any help is deeply appreciated.
The USB Host APIs in Android are fairly thin, by which I mean once you have gone beyond enumerating the interfaces/endpoints and creating a connection it doesn't do much more to assist you. You are then in the realm of communicating with raw USB data transfers, the format of which depend on the device class your camera represents. Your request is somewhat a can of worms, so I will do my best to provide helpful resources.
Unfortunately, storage and media devices are not the simplest device classes to interpret, so it may be difficult if you are just getting your feet wet on USB in general. The best advice I can give is to take a look at the device class specs for the interface class your camera reports (most are either Mass Storage or MTP), which can be found here: http://www.usb.org/developers/devclass_docs
The spec document will enumerate the commands you need to use to communicate with the device. I would also recommend checking out USB in a Nutshell, which does a great job of explaining how USB requests are constructed in general, which can help you map what you see in the spec docs to the parameters found in the methods of UsbDeviceConnection: http://www.beyondlogic.org/usbnutshell/usb1.shtml
There will likely be a handful of control commands you need to send to "endpoint 0" initially to set up the camera, and then the remaining transfers will likely take place over the bulk endpoints.
In Android terms, control requests can only be sent synchronously using UsbDeviceConnection.controlTransfer(), meaning this method blocks until the transfer is complete. The parameters that fill in this method are found in the spec docs for your device class.
Requests on bulk endpoints can be sent synchronously via UsbDeviceConnection.bulkTransfer() OR asynchronously using a UsbRequest instance. With UsbRequest you can queue a transfer and then later check back (via UsbDeviceConnection.requestWait()) for the results.
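A rough sketch of both flavours, assuming you already hold a UsbDeviceConnection and the bulk IN/OUT UsbEndpoint objects from enumeration (the timeout, buffer sizes and the command bytes themselves are placeholders; the real command layout comes from the class spec):

```java
import java.nio.ByteBuffer;

import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbEndpoint;
import android.hardware.usb.UsbRequest;

// Sketch: synchronous vs. asynchronous bulk transfers on an already-opened device.
public class BulkTransfers {

    /** Synchronous: blocks up to the timeout, returns bytes transferred or a negative value on error. */
    public static int sendCommandBlocking(UsbDeviceConnection connection, UsbEndpoint bulkOut,
                                          byte[] command) {
        return connection.bulkTransfer(bulkOut, command, command.length, 1000 /* ms */);
    }

    /** Asynchronous: queue a read on the bulk IN endpoint, then collect it later with requestWait(). */
    public static ByteBuffer readResponseAsync(UsbDeviceConnection connection, UsbEndpoint bulkIn) {
        ByteBuffer buffer = ByteBuffer.allocate(bulkIn.getMaxPacketSize());
        UsbRequest request = new UsbRequest();
        request.initialize(connection, bulkIn);
        request.queue(buffer, buffer.capacity());

        // Blocks until *some* queued request completes; match it against the one(s) you queued.
        UsbRequest completed = connection.requestWait();
        if (completed == request) {
            return buffer;                             // filled with the device's response
        }
        return null;
    }
}
```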
I have some examples on my Github page in using the host APIs to do some basic interrupt and control transfers to get information like device descriptors. Perhaps some of that will be helpful to you as well: https://github.com/devunwired/accessory-samples
With regards to your question about the USB example code:
The request made in this code is just a generic "Get Configuration Descriptor" request that all USB devices must respond to (it's a core command, not class-specific). In fact, it's the request from which the Android APIs get the information you can query for interfaces and endpoints. The field values come from the Core USB Specification (this command specifically is defined in sections 9.4.3 and 9.6.3 of the 3.0 spec): http://www.usb.org/developers/docs/ or, for a more helpful description, see USB in a Nutshell, which has a little more discussion: http://www.beyondlogic.org/usbnutshell/usb5.shtml#ConfigurationDescriptors
The length is somewhat arbitrary; it tells the driver how many bytes to read or write. Most USB host drivers will first query the device descriptor, which includes a field telling the host the Max Packet Size the device supports, and then will use that size as the length for future requests. A full-featured driver would probably make this command and then check the length bytes first (the wTotalLength field of the descriptor) to see if the buffer was large enough, and modify/resend if not. In the example, I just chose 64 for simplicity because that is the "maximum" Max Packet Size the protocol defines as supportable.
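For concreteness, that "Get Configuration Descriptor" request maps onto controlTransfer() roughly as in the sketch below. The field values are the standard ones described above; 64 is the same somewhat arbitrary length discussed here, and the timeout is a placeholder.

```java
import android.hardware.usb.UsbConstants;
import android.hardware.usb.UsbDeviceConnection;

// Sketch: standard GET_DESCRIPTOR request for the Configuration descriptor on endpoint 0.
public class DescriptorRequest {
    public static byte[] readConfigurationDescriptor(UsbDeviceConnection connection) {
        byte[] buffer = new byte[64];                     // "maximum" Max Packet Size, as in the example code
        int read = connection.controlTransfer(
                UsbConstants.USB_DIR_IN,                  // bmRequestType: device-to-host, standard, device
                0x06,                                     // bRequest: GET_DESCRIPTOR
                0x0200,                                   // wValue: descriptor type (2 = CONFIGURATION) << 8 | index 0
                0x00,                                     // wIndex
                buffer, buffer.length,
                1000);                                    // timeout in ms
        return read >= 0 ? buffer : null;                 // a full driver would check wTotalLength and re-request
    }
}
```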
Again, when making requests for the specific data your device has to offer, those commands will be found in the specific class document, not the core specification.