Preventing request replay attack - android

I have a simple REST API, and to prevent request replay attacks I have added a UTC timestamp to each request, allowing the request only if the difference is not more than 60 seconds.
This API is consumed by Android and iOS applications. After moving the app into production, we realized that most users have an inaccurate time or time zone set, which prevents them from using the app.
How can I improve the mitigation?
Is there any chance I can get the UTC time irrespective of the phone's time and time zone?

You should not rely on the timestamp sent by the device, as it may be easily tampered with.
If (and only if) you are able to identify requests coming from the same user or device, then you can apply the same mitigation strategy using the server's timestamp instead, so you have a unified time measure. (You need to save the "last" access timestamp on a per-user or per-device basis.)
One way to identify a user or a device is to create a hash (of, e.g., the time of first run + IP address + ...) the first time the app is used, and send it along with the REST API request on all subsequent calls.
If the user is required to log in, you could save this value along with the user's info; otherwise you can "identify" the device in a similar way.
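A minimal sketch of that fingerprint, assuming the hashed inputs are just the first-run time and whatever network address you have at hand (both are illustrative; pick whichever signals suit your app):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: derive a stable identifier on first run and reuse it for every request.
// The inputs (first-run time, IP address, ...) are illustrative, not prescriptive.
public final class DeviceIdentity {

    public static String create(long firstRunMillis, String ipAddress) {
        return sha256Hex(firstRunMillis + "|" + ipAddress);
    }

    private static String sha256Hex(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : hash) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```

The result would be stored locally on first run and attached to every subsequent request, or saved with the user's record after log-in.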
EDIT: This edit is in light of your comment, which adds information that was not clear before:
"I want to check that the originating time of the request and the receiving time on the server do not have a large interval (say more than 1 minute)."
Rephrasing: you want to invalidate requests based on their generation time (nothing older than 60 seconds). If I now interpret it correctly, the idea is that a client's request will very likely need much less than 60 seconds to be answered, so a message that is too old was most likely already consumed properly - it might be a replay attack.
Unfortunately, your solution would only (predictably) reduce the space of messages that the attacker can use to carry out replay attacks, which are therefore far from being prevented, even with synchronised timestamps. Additionally, if the message can be tampered with, i.e. the attacker can freshen old messages with newer timestamps, then the mitigation would be totally ineffective.
Anyway, if your purpose is not to prevent replay attacks but simply to discard expired messages, then you can achieve your goal by offering two REST APIs:
one for "preparation" (you do not even need to collect a reply)
the other for the actual "request"
and both with client's timestamps and with the same request identifier.
So, server side, you will see the two timestamp-requestID pairs: (t, ID) and (t', ID).
This allows you to simply take the difference of the timestamps, t' - t, which is independent of whatever time shift or time zone the client has set.
You can also time the requests server-side and discard any "request" that arrives too late relative to its corresponding "preparation".
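Server side, a minimal sketch of that pairing and the t' - t check (the in-memory store and the 60-second budget are assumptions for illustration; a real deployment would persist and expire these entries):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: pair the "preparation" (t, ID) with the "request" (t', ID) and compare
// both the client-reported difference t' - t and the server-side arrival gap.
public class TwoPhaseRequestValidator {
    private static final long MAX_AGE_MS = 60_000;   // illustrative budget

    private static final class Preparation {
        final long clientTimestamp;   // t, as reported by the client
        final long serverArrival;     // server clock when the preparation arrived
        Preparation(long clientTimestamp, long serverArrival) {
            this.clientTimestamp = clientTimestamp;
            this.serverArrival = serverArrival;
        }
    }

    private final Map<String, Preparation> preparations = new ConcurrentHashMap<>();

    public void onPreparation(String requestId, long clientTimestamp) {
        preparations.put(requestId, new Preparation(clientTimestamp, System.currentTimeMillis()));
    }

    /** Returns true if the actual request arrived within the allowed window. */
    public boolean onRequest(String requestId, long clientTimestampPrime) {
        Preparation prep = preparations.remove(requestId);   // each ID is usable once
        if (prep == null) {
            return false;                                    // no matching preparation
        }
        long clientDelta = clientTimestampPrime - prep.clientTimestamp;   // t' - t, clock-shift independent
        long serverDelta = System.currentTimeMillis() - prep.serverArrival;
        return clientDelta >= 0 && clientDelta <= MAX_AGE_MS && serverDelta <= MAX_AGE_MS;
    }
}
```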
Additionally, to improve security, you might let the client app sign the messages (including the timestamps and the request ID) to guarantee integrity - the messages then cannot be tampered with without you noticing, so you can rely on the timestamps sent to the APIs. One way is to use public-key cryptography, but be careful about how you distribute the public keys.
(This way you limit the attacker to replaying only the most recent requests.)
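As a sketch, the same integrity guarantee can be had with an HMAC over the request ID, the timestamp and the body, using a shared secret instead of a key pair (the field layout and the secret provisioning are assumptions here):

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import android.util.Base64;

// Sketch: sign (requestId + timestamp + body) so the server can detect any
// tampering with the timestamp. Key distribution/provisioning is out of scope here.
public final class RequestSigner {

    public static String sign(byte[] secretKey, String requestId, long timestamp, String body)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        String payload = requestId + "|" + timestamp + "|" + body;
        byte[] signature = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return Base64.encodeToString(signature, Base64.NO_WRAP);
    }
}
```

The server recomputes the HMAC over the same fields and rejects the message if the values differ; with public-key cryptography the client would instead sign with its private key and the server would verify with the public key.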

Related

Android Time Synchronization in UTC

I have an application that relies heavily on current timestamps. Currently, when a user submits a request, I get the current UTC timestamp using System.currentTimeMillis(). While this works fine, it becomes a problem when the user starts manipulating the date/time on their device, which results in inaccurate timestamps.
Why do it on the client then? Why not just handle it on the server? Well, my application needs to work offline. All my requests get pushed into a jobQueue when connectivity to the internet is unavailable. In these cases, I must have the original time at which the user did the action: if I submit a request at 4:02pm but, due to network problems, the server only receives it around 7:30pm, the server MUST know that I sent the request at 4:02pm.
Now what options have I considered?
Upon user login, I sync the device time with the server time and store that time locally. If any user manipulation occurs while the user is logged in, I'll have a BroadcastReceiver listening for any date/time change intents and store the offset, so that whenever the user submits a request I can calculate the synced time from the offset to ensure the timestamp is accurate.
Have a server sync API in the backend and set up a service within my application that continuously syncs with the server time and watches for drift, while also listening for any user manipulation.
Use push notifications and listen downstream for time synchronization adjustments, while also listening for any user manipulation.
I could also make use of the NTP servers to synchronize time with my device.
I'm not entirely sure which would be the most optimal (assuming I have listed down all the possible solutions). If there are other solutions I haven't thought of, please let me know.
P.S. If I happen to use the BroadcastReceiver to listen for any date/time manipulation on the device, how would I even calculate the offset in that case?
It has been some time since I asked this question and there haven't been any elegant answers to the problem, so after some research and some trial and error I decided to take the NTP route, and after some digging around I found a nice library that does the entire thing for you.
It can be found here:
NTP TRUE TIME
Credit to these guys, who have made life a lot easier.
You only need to sync with the NTP servers once; from then on, the library calculates the delta for you, giving accurate UTC time regardless of the device's system clock.
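The underlying idea is simple enough to sketch: pair one trusted UTC timestamp (from NTP or your own backend) with android.os.SystemClock.elapsedRealtime(), which the user cannot change, and derive the current time from that anchor. This is a rough illustration of what such a library does internally, not its actual API:

```java
import android.os.SystemClock;

// Sketch: anchor one trusted UTC timestamp to elapsedRealtime(), which keeps
// counting across clock changes (but resets on reboot, so re-sync after boot).
public final class TrustedClock {
    private static long trustedUtcMillis = -1;   // from NTP or the backend
    private static long anchorElapsed = -1;      // elapsedRealtime() at sync time

    public static void onSync(long utcMillisFromNtp) {
        trustedUtcMillis = utcMillisFromNtp;
        anchorElapsed = SystemClock.elapsedRealtime();
    }

    /** Current UTC millis, unaffected by the user changing the system clock. */
    public static long now() {
        if (trustedUtcMillis < 0) {
            throw new IllegalStateException("not synced yet");
        }
        return trustedUtcMillis + (SystemClock.elapsedRealtime() - anchorElapsed);
    }
}
```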
For time synchronization you could also fetch the time zone from the server; once you have the time zone, you can get the server's current time.

Keeping an app's internal time in sync

Disclaimer: I am a backend developer, and I have no idea of what happens inside an Android or iOS app, so please bear with me.
In a specific use case of our platform we would need all client applications (Android and iOS) to keep an internal timer accurate and in sync with an external source of time (so that they can accurately timestamp messages sent between one another). Is this possible at all (using, for example, an NTP client or a different / better technique)?
When the client connects to the server, it can fetch the server's reference time. Once it has obtained this, it can calculate and store the difference between the server's time and the device's time.
Then, when the client needs to do something based on the time* on the handset, it takes the server's time into consideration when doing whatever it needs to do, such as scheduling a timer.
*You can't really do anything based on time in iOS unless the app is in the foreground; an exception is posting a local notification. It's not possible to schedule a timer, for example, if the app is in the background.
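On Android, a minimal sketch of storing and applying that difference could look like the following (how you fetch the server's time is up to you); note that an offset against the wall clock goes stale if the user later changes the system time, which is the caveat raised just below:

```java
// Sketch: compute the server-vs-device offset once and apply it afterwards.
public final class ServerOffset {
    private static long offsetMillis = 0;   // serverTime - deviceTime

    /** Call with the server's current UTC millis, fetched over your API. */
    public static void update(long serverUtcMillis) {
        offsetMillis = serverUtcMillis - System.currentTimeMillis();
    }

    /** Device time corrected towards the server's reference time. */
    public static long serverNow() {
        return System.currentTimeMillis() + offsetMillis;
    }
}
```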
As per Martin H, you could use an offset from the device's internal time. The client device will probably be within a second of current time if the user has not manually set the time (which does happen - I just read about a user that changed her date/time to tomorrow to get a reward in a game).
I have dealt with this in a time-based app by using the device time. The client and server validate against server time when communicating with the server.
For example, when the client and server communicate, the server can validate the client time against server time. If the user time is off by more than 'x' minutes, the server sends back an error message.
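A server-side sketch of that validation, with a hypothetical five-minute allowance for 'x':

```java
// Sketch: reject requests whose client-reported time is too far from server time.
public final class ClientClockCheck {
    private static final long MAX_SKEW_MS = 5 * 60 * 1000;   // illustrative 'x' minutes

    /** Returns true if the client's clock is close enough to be trusted. */
    public static boolean isAcceptable(long clientUtcMillis) {
        long skew = Math.abs(System.currentTimeMillis() - clientUtcMillis);
        return skew <= MAX_SKEW_MS;
    }
}
```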
It will be difficult to keep all clients within a few milliseconds of each other. One idea is to use the server to coordinate messages (messages are not sent between devices, but to the server, which then sends the message on). Then, you can use the received time at the server as the basis for the message time.

Get accurate time from android/iphone to server

We have an Android (or iPhone) client we are developing. The client allows the user to send entries to a server which we also develop. If the client does not have data service (GPRS) at the moment the user sends the entry to the server, the client also supports saving the entry to an offline database and sending it to the server later.
One important aspect of the whole process is the accuracy of the timestamp at which the user sent the entry to the server (whether the entry is made in real time or sent by the client from the offline database).
When available on the client, we get a GPS location and are able to use the GPS timestamp to send that to the server (or save the GPS timestamp on the offline DB and send it later to the server). However if the user has turned off the GPS (and all other location services), the device will not have a GPS fix and therefore the server can not determine accurately when an entry was made.
We cannot use the local device clock, as the user may change the clock to make entries appear at different times than they actually occurred (these entries are part of the user's salary, so they might have an interest in "fixing" them).
So basically I am searching for a way to determine, as best I can, the time an entry was made when I cannot trust the internal clock of the mobile. The algorithm should support both entries sent in real time and entries sent from the offline DB. The algorithm should also support cases where the user changes the time of the mobile, turns the mobile on/off, turns the GPS on/off while the application is running, etc.
A few ideas that I thought of:
Although I cannot trust the mobile's time, it can still act as a stopwatch:
Have a class that loops until the application exits; the loop sleeps for 1 second and increases an internal clock variable by 1 second. On every GPS location my code gets, we update the internal clock variable. This way I have an absolute clock that came from outside the device (from the GPS), and when the client sends an entry to the server we can use the internal clock as an absolute time.
PROS: the user can not modify this clock as it is only updated when we get a location from the GPS
CONS: the application needs at least one GPS fix before the user can make any reliable entries
I can take advantage of the fact that the server has an accurate clock. If the client told the server that the age of an entry is 10 minutes, the server could use its internal time to know the exact time the entry was made.
The biggest problem is how to know the entry's age. I thought about saving entries to the offline DB with an age of 0, then increasing the age of each entry in the DB every second. The problem is that if the app is closed and/or the device is off, this will not happen.
This is where I am currently stuck. Any ideas on how to solve this are more than welcome
Thanks
Here's how I handle this issue for iPhone. When the app starts, I call my server and ask for the current GMT time (you could also call a public NTP server if you preferred). I then compare it to the system time. If it is different by more than X, I pop up a message saying: sorry, your system time is wrong, so you can't use the app until you fix this. I then monitor for the user changing the system time while the app is running and, if they do, I do the comparison again (and pop up the error message if the time is off by more than X). This ensures that their system time is always correct (within some reasonable allowance) and you can trust [NSDate date]. However, this solution does require a valid network connection. If this solution works for you, I can post the sample code.
I think I am going to combine Jules' and Joel's answers into one solution, which will best fit my needs:
Since the user might change the clock when the mobile does not have GPRS, just detecting the time-change event will not help us, as we cannot validate at that moment that the new time is correct.
As Joel recommended, I will pull the time from my server when my application is started (at that point I must still have communication with the server, or else my application will not start). The time pulled from the server, along with the current device uptime, will be saved.
When the user wants to make an entry, I will calculate the current time as (Server Base Time + Current Uptime - Base Uptime). This way I will have an independent source of time regardless of the device's current clock.
This will definitely work on Android.
On iPhone we will try to use something out of http://www.cocoadev.com/index.pl?FindingUptime to get the uptime.
Jules & Joel, thanks for your answers!
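A rough Android sketch of that combination, persisting the anchor so it survives app restarts (the preference keys are illustrative); note that elapsedRealtime() restarts from zero on reboot, so the anchor has to be refreshed after the device is switched on again:

```java
import android.content.SharedPreferences;
import android.os.SystemClock;

// Sketch: save the server's time together with the uptime at which it was fetched,
// then stamp entries as (server base time + current uptime - base uptime).
public final class EntryTimestamper {
    private static final String KEY_SERVER_BASE = "server_base_millis";   // illustrative keys
    private static final String KEY_UPTIME_BASE = "uptime_base_millis";

    private final SharedPreferences prefs;

    public EntryTimestamper(SharedPreferences prefs) {
        this.prefs = prefs;
    }

    /** Called at app start, once the server's current UTC time has been fetched. */
    public void onServerTimeFetched(long serverUtcMillis) {
        prefs.edit()
                .putLong(KEY_SERVER_BASE, serverUtcMillis)
                .putLong(KEY_UPTIME_BASE, SystemClock.elapsedRealtime())
                .apply();
    }

    /** Timestamp to attach to an entry, independent of the device's wall clock. */
    public long entryTimeMillis() {
        long serverBase = prefs.getLong(KEY_SERVER_BASE, -1);
        long uptimeBase = prefs.getLong(KEY_UPTIME_BASE, -1);
        if (serverBase < 0 || uptimeBase < 0) {
            throw new IllegalStateException("server time not fetched yet");
        }
        // Note: elapsedRealtime() resets on reboot, so re-anchor after the device boots.
        return serverBase + (SystemClock.elapsedRealtime() - uptimeBase);
    }
}
```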
Look into android.os.SystemClock. Specifically, elapsedRealtime() returns the time since the phone was switched on, which is not affected if the clock is changed by the user.
You can correlate times in the event the phone is switched off by having code that runs when it is switched on and checks the real-time clock. As the clock can't be changed while the phone is off, I suspect you could use this to put together a system that will catch any simple attempts at cheating. (If the user roots the phone, all bets are off -- they could modify the behaviour of the APIs from under you.)
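A sketch of that hook, assuming a manifest-registered receiver (boot delivery also requires the RECEIVE_BOOT_COMPLETED permission, and on newer Android versions some of these broadcasts may need a runtime-registered receiver instead); the actual re-sync it triggers is left as a placeholder:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Sketch: re-anchor the trusted clock whenever the device boots (elapsedRealtime()
// restarts from zero) or the user changes the system time or time zone.
public class ClockEventsReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String action = intent.getAction();
        if (Intent.ACTION_BOOT_COMPLETED.equals(action)
                || Intent.ACTION_TIME_CHANGED.equals(action)
                || Intent.ACTION_TIMEZONE_CHANGED.equals(action)) {
            // Placeholder: trigger a fresh sync with the server or an NTP source here.
        }
    }
}
```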
Running code every second will kill the phone's battery life. Most phones would be unlikely to last a day if you did this.

SyncML (sync or update?)

I'm faced with another dilemma, with regards to synchronizing (or updating?) data across to the server, from a mobile device (using Android).
I've looked into SyncML as the standard for doing this, but my big concern is that we plan on syncing a large amount of data across (not just 1 record), and probably only doing it once, twice or at most 3 times a day, or maybe not even once a day - all dependent on certain circumstances.
The other thing - the device or server will still be able to function properly without having to sync across. The sync would just be an update, essentially.
From reading up on the SyncML specs, it applies more to syncing small pieces of data across, at a very fast interval (i.e. every 5-15 minutes, though I guess that can be regulated by the user). At any rate, the synchronization process is more involved, and important for both the device and the server (more so the device, I guess).
Here's a quote from the documentation that got me thinking:
2.2.3 Data synchronization
SyncML is oriented towards synchronization of small independent records, as the modified records are transmitted entirely. This is adequate for address entries, short messages and similar data. On the primary target of SyncML, mobile devices, most data is of this type. The devices must be able to keep track which of their records have been changed. Each record is identified by a unique ID, so conflicts can be detected quite simple. As the record ID's may not be arbitrarily chosen but automatically created, mapping between server and client ID's is defined in the protocol. Mapping is always managed by the server. When the client receives a new item from the server, he can send a map update command to tell the server what ID he assigned to the item. Now the server uses the client ID in all his messages.
So, I guess my question is whether we should continue to look into SyncML for this, or build an in-house solution - maybe something more tailored to delivering large pieces of data across, which can define it as well?
I'm faced with this problem too. I prefer the SyncML solution, mainly because it's more extensible.
Since the data tables we want to sync are indeterminate, SyncML may be a better choice.

Android battery when working with http

Recently Google introduced a push-to-device service, but it's only available on 2.2 and up.
I need a similar system in my app, and I'm trying to get around limitations.
The issue is battery life. Since the user must be notified immediately about changes on the server, I thought I would implement a service that lives in the background (a standard Android service) and queries the server for updates.
Of course, querying the server, even every second, will cost a lot of bandwidth as well as battery, so my question is this: does it make a difference if the server holds the response for some period of time? (This is the idea behind Comet-style AJAX requests.)
Works like this:
Device sends request for data update
Server gets the request and goes into a loop for one minute, checking on each iteration whether there are updates
If there are updates, the server sends the response back with the updates
If not, the server goes on to the next iteration
After a minute, it finally sends a response saying that no data is yet available
After the response (whether empty or with data), Android fires another such request.
It will definitely cost less bandwidth, but will it consume less (or even more) battery?
Holding a TCP socket (and consequently waiting for an HTTP response) as you suggest is probably going to be your best option. What you've described is actually already implemented via HTTP continuation requests. Have a look at the Bayeux protocol for HTTP push notifications. Also, check out the Android implementation here. For what it's worth, that's definitely what I would use. I haven't done any sort of analysis of it, but this allows you to minimize the amount of data transmitted over the line (which is directly proportional to the power consumption) by allowing the connection to hang for as long as possible.
In short, the way Bayeux works is very similar to what you've suggested. The client opens a request and the server waits on it. If it has something to send, it sends it; otherwise it simply waits. Eventually, the request will time out. At that point, the client makes another request. What you attain is near-instantaneous push from the server to the client without constant polling and without duplicating information like HTTP headers, etc.
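For reference, a bare-bones client-side version of that hanging request, without any library (URL, timeouts and threading are illustrative; a Bayeux/Comet implementation wraps the same loop with far more robustness):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: open a request, let the server hold it for up to ~60 s, then repeat.
// Run this on a background thread (e.g. inside a Service), never on the UI thread.
public class LongPollLoop implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("https://example.com/updates").openConnection();
                conn.setConnectTimeout(15_000);
                conn.setReadTimeout(75_000);   // a bit longer than the server's hold time
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        handleUpdate(line);    // placeholder for your update handling
                    }
                } finally {
                    conn.disconnect();
                }
            } catch (Exception e) {
                // Back off briefly on network errors before reconnecting.
                try { Thread.sleep(5_000); } catch (InterruptedException ignored) { return; }
            }
        }
    }

    private void handleUpdate(String payload) { /* ... */ }

    public void stop() { running = false; }
}
```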
When the phone is actively using the network, its battery is used more - that is to say, when it sends the request and when it receives the response. It will also use some battery just by listening for a response. However, will the phone be downloading data, checking to see if there's a response? Or will the phone just be open to receiving it, with the server pushing the response out to the phone? That is mainly what it depends on. If the phone is just open to receiving the response, but does not actually use the network trying to download a response the whole time it's waiting, it should use less battery.
Additionally, the phone sending a query every minute instead of every second definitely uses less battery, as far as network usage goes. However, it depends on how you make the phone wait: if you tie it up with very complex logic to make it wait, it may not help battery life. That's probably not the case, however, and I would say that in all likelihood this would work out for you.
In closing, it should help the battery, but there are ways of implementing it in which it would not. It wouldn't hurt to write the program and then just change a single variable (for example, WAIT_TIME to 1 second instead of 1 minute) and test battery usage, though, would it?
