I'm making a cross-platform application with MonoDroid/MonoTouch, and the application needs to contact a server-side component to fetch data. The data is sensitive and is the core of the application.
How can I restrict use of the server side to my own application, given that other people/applications can work out the correct request syntax, and that if I sign my queries with a secret key they can extract that key by debugging?
You'll need confidentiality in your data transfers, e.g. using SSL/TLS, like HTTPS, but that alone won't be enough. By default it means the client can ensure it trusts the server, not that the server can trust the client (and it does not cover your debugging case).
So you'll need authentication as well. That's nearly identical to having a secret key, except that it needs to be based on the user (or whatever entity you trust), not hard-coded into the application itself (which can't be trusted).
Having users register and get passwords (or a user token saved to device storage) is one way to start. It will protect you from other people using your data.
You can enhance this by creating some kind of user/device association so that a user secret can't be shared across several devices; that limits the possibility of the same (trusted) user using an alternative (untrusted) application, e.g. on a different device.
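For illustration (the question targets MonoDroid/MonoTouch, so this would translate to C#), here is a minimal server-side sketch of such a user/device association; the class, the identifiers and the in-memory map are hypothetical stand-ins:

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: tokens are issued per authenticated user and bound to a device
// identifier, so a token copied off one device is useless from another.
public class DeviceBoundTokens {
    private final Map<String, String> tokenToDevice = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Called once the user has authenticated over HTTPS.
    public String issue(String deviceId) {
        byte[] raw = new byte[32];
        random.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        tokenToDevice.put(token, deviceId);
        return token;
    }

    // Every subsequent request must present both the token and the device id.
    public boolean isAllowed(String token, String deviceId) {
        return token != null && Objects.equals(tokenToDevice.get(token), deviceId);
    }
}
```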
I am thinking of creating an app that will contain personal user information for a membership scheme: the basics (name, date of birth, a user ID, and valid-from/expiry dates).
I want to reduce the chance of the system being hacked and the data being stolen. So I was thinking that I add the user information to the database; when the user logs in for the first time, the app connects to the database and downloads the user's information to their phone, and all personal data is then removed from the database. The ID is used by the app to display the valid-from/expiry dates etc.
I am unfamiliar with iOS/Android app development. Can this work? Do I store the data in a separate file downloaded to a user area in the app package, or do I download a database to the phone? And what happens when I need to update the app?
This is not good system design
In reality, if a system is designed properly, with a security-focused mindset, and deployed in a properly designed environment, this should not be a concern that warrants causing end users such potential issues.
In fact, user data would be considerably more secure on a properly designed, controlled system than on a user's device; how many people do you know who don't have a passcode on their phone, or have it set to their date of birth? I know a whole bunch of these types of people (and by logical extension, the passcodes to their phones).
You also mentioned that the data will be deleted from the database. How exactly will it end up back in that database in the event of a support ticket? If it's by emailing it back to you, that would be a bigger security risk because plain text email is not secure.
What you should do instead
Build a web service to sit between your app and the database
Pass the login details from the application to the web service and perform authentication/authorisation there. If successful, pass back an access token of some description. Save this access token to the database with an expire-time value.
Have the app call the various API endpoints, passing the access token as part of the Authorization header (so it doesn't get cached or end up in the logs of proxies and web servers, etc.). If the token is valid, fulfil the request and return the requested data to the app from the web service; a minimal sketch of this token flow follows below.
On log-out/quit, have the app remove any sensitive information from the device memory if security is such a concern.
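A minimal sketch of the token flow above, with hypothetical names and a Map standing in for the database table:

```java
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Issues random access tokens with an expire-time value and validates the
// "Authorization: Bearer <token>" header on each API call.
public class TokenService {
    private static final long TOKEN_TTL_SECONDS = 3600;
    private final Map<String, Instant> tokenExpiry = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Called only after the web service has verified the login details.
    public String issueToken() {
        byte[] bytes = new byte[32];
        random.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        tokenExpiry.put(token, Instant.now().plusSeconds(TOKEN_TTL_SECONDS));
        return token; // returned to the app over a secure channel
    }

    // Called for every endpoint request before any data is returned.
    public boolean isValid(String authorizationHeader) {
        if (authorizationHeader == null || !authorizationHeader.startsWith("Bearer ")) {
            return false;
        }
        Instant expiry = tokenExpiry.get(authorizationHeader.substring("Bearer ".length()));
        return expiry != null && Instant.now().isBefore(expiry);
    }
}
```

In a real deployment the tokens would live in the database alongside the user records, and expired rows would be purged periodically.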
Additional Notes
As with any such design, you should ensure that all communications are done over a secure channel.
Ensure passwords are stored in a secure format and never transmitted or stored in plaintext anywhere. Use a secure channel for passwords in transit; bcrypt is good for storing passwords, or consider implementing the Secure Remote Password protocol.
Ensure that direct access to the database is only allowed from your web service and not from the wider internet.
Ensure that your web service sanitises input, escapes output, and is not vulnerable to SQL injection attacks; a short sketch of password hashing and parameterised queries follows below.
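A short sketch of the password-storage and injection points above, assuming the jBCrypt library and a plain JDBC connection; the table and column names are made up for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.mindrot.jbcrypt.BCrypt; // jBCrypt, used here as an example bcrypt implementation

public class AuthDao {
    // Store only the bcrypt hash, never the plaintext password.
    public void register(Connection db, String username, String password) throws Exception {
        String hash = BCrypt.hashpw(password, BCrypt.gensalt(12));
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO users (username, password_hash) VALUES (?, ?)")) {
            ps.setString(1, username);  // parameterised query: user input is bound,
            ps.setString(2, hash);      // not concatenated, so it cannot inject SQL
            ps.executeUpdate();
        }
    }

    public boolean login(Connection db, String username, String password) throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT password_hash FROM users WHERE username = ?")) {
            ps.setString(1, username);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && BCrypt.checkpw(password, rs.getString(1));
            }
        }
    }
}
```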
The benefits of this approach are obvious:
Your app data remains secure so long as the environment is secured using the correct tools. If you choose the right hosting provider they'll also be able to provide help and support securing your web server and database.
In the event of a user changing their device, logging out, or whatever else, they'll be able to log back in as they see fit. This will meet the already well-established expectations of users and reduce potential support calls.
If you decide to expand on the features of your app later on, you can add new tables to the database, new endpoints to the webservice and new functionality within the app to consume said endpoints.
Many users have a bad habit of reusing passwords; with a properly designed system you're able to audit login attempts, lock users out for a period of time after too many incorrect password attempts, force password expiry or resets, and let your more security-conscious users change their passwords themselves whenever they like.
We are getting started with developing an android app and the corresponding REST APIs and I need to figure out a security model for the same. I've close to zero experience with designing secure systems and would like some expert opinion on the loopholes of a first draft we've come up with.
I've been all over the web for the past few days and everyone seems to suggest HTTPS and OAuth as the proven answer. Since our app doesn't deal with anyone's bank account, I think we can live with less than DoD-grade security (although even they get hacked often!). And we don't want to spend the effort on OAuth unless there really is no other reasonable alternative.
We're trying to avoid HTTPS because the app will, at times, be polling the server every few seconds and we thought it'd be too expensive to use it for all REST calls. Also, the payload for some of those API calls can be too big (2-4 KBytes) for asymmetric encryption.
Here's what we've lined up so far:
1. User creates an account by entering a unique 'username' and a 'password' on the registration page in the app
2. The 'username' is stored in plaintext in SharedPreferences using MODE_PRIVATE
3. The SHA-256 of the 'password' is also stored in SharedPreferences using MODE_PRIVATE
4. The user credentials ('username' and hashed 'password') are sent to the server using https://
5. The server creates an authentication "token" (a random AES key, really, generated with a CSPRNG), stores it in its DB and sends it back to the client (over https, of course)
6. The AES-256 key is then stored by the app in SharedPreferences using MODE_PRIVATE
7. All further communication between the app and the server is done over http://, with the payload (json/xml) + timestamp + checksum/hash encrypted using AES-CBC with a random IV
8. The AES key is only updated if the user changes his password
9. For actions that require additional security, the app asks the user to re-enter his password, which is verified against the stored hash
10. The app should be usable offline (it can talk to pre-registered embedded devices over a WiFi connection; security over WiFi is another story!)
I know some of the pitfalls of the system already:
1. Storing the key on the phone isn't safe: if a hacker gets access to the user's phone, the user just needs to change his password and everything will be safe.
2. Storing keys on the server is bad: a lot of people seem to say if you really have to store the keys, at least store them on a separate server. But that adds an extra round trip between the servers for every REST call. And there can potentially be many of them when the app is polling.
3. Keys without expiry are bad: I can't think of another way to let the app function offline.
The real questions now are:
1. What are the other loopholes that I've missed so far?
2. What kind of effort would it take for someone to break into the system?
3. Most important of all, how can we improve overall security to some "reasonable" standard without overdoing it?
This is not DoD security!
You really do need to use https and ensure it is set up for TLS 1.2 and Perfect Forward Secrecy. Additionally, the app needs to pin the certificate.
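As an illustration of certificate pinning on the client, here is a sketch assuming the OkHttp library (a choice of mine, not part of the original answer) and a placeholder pin value:

```java
import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

// Builds an HTTP client that refuses TLS connections unless the server's
// public key matches the pinned SHA-256 hash.
public class PinnedClient {
    public static OkHttpClient create() {
        CertificatePinner pinner = new CertificatePinner.Builder()
                // Replace host and pin with your server's values.
                .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                .build();
        return new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
    }
}
```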
Section 1:
3: Do not use SHA-256; use PBKDF2, bcrypt, or another hash with an increased work factor (a sketch follows after this list).
4: Send the password, not the hashed password, to the server; the server does the hashing.
7: When using https there is no need to encrypt the payload; that is what https does.
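As a sketch of point 3, server-side PBKDF2 hashing with the standard Java crypto API; the iteration count and salt length are illustrative, and older Android versions may only offer PBKDF2WithHmacSHA1:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;

public class PasswordHasher {
    private static final int ITERATIONS = 100_000;   // tune to your hardware
    private static final int KEY_LENGTH_BITS = 256;

    // Generate a fresh random salt per user and store it next to the hash.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Derive the stored verifier from the plaintext password received over https.
    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_LENGTH_BITS);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return factory.generateSecret(spec).getEncoded();
    }
}
```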
Section 2:
2: When storing keys on the server, keep them out of any HTTP-accessible directory. This is a weak point and needs to be addressed with server security.
Section 3:
Use two-factor authentication for administration of the server. Have a good scheme to control the second factor; I like hardware tokens, and I keep track of them by their serial numbers. That way there is a limited number of them, and they can be recovered when someone should no longer have administrator access. They can also be loaned out for short periods of time.
You also need to have disaster plans for various contingencies; do not wait for an incident and then try to deal with it on the fly. Sometimes appropriate immediate action is required.
All of this is basic.
You need to evaluate potential threats, potential attackers, and the value of your data to an attacker or to a user.
If you care about security and are not a domain expert, hire one for advice and review; I do.
Aside, on DoD security: two guard stations, two overhead passages between buildings, and the last building has one door that is a huge safe door and no windows. Rotating bubblegum lights on the ceiling when there are uncleared personnel in the building, one escort per uncleared person who follows you everywhere including into the bathroom, multiple sensors in the ceiling, TEMPEST shielding.
I'm building an Android app that contains sensitive chat messages.
I'd really appreciate some help regarding an encryption workflow that allows me to encrypt these messages, store them in a remote database, query for them via Angular.JS and finally decrypt them and present them to the user.
The server must not be able to decrypt the messages. Only both Android and Angular.JS clients should be able to encrypt and decrypt the data, and the encryption key should be unique for each of my users. Both clients can send messages, so both need the ability to encrypt and decrypt.
Is there any way to get this done without requiring the user to enter a custom "encryption key" in both clients? Is there any way for this to be automatic in some way, and without involving the server? If not, what are the best practices in this situation? I wasn't able to find any example of this kind of encryption in any widely known service as of today.
Thanks!
You're asking about how to do key exchange without revealing the key to the network, right?
Diffie-Hellman key exchange is one well known algorithm for doing this. The important high level properties are that the two parties, in the end, agree on a shared secret that a passive eavesdropper can't get. However, the parties don't authenticate each other, so they can't tell if they're running the algorithm with a man in the middle (e.g., the server in your question).
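A minimal sketch of Diffie-Hellman key agreement with the standard Java crypto API, with both parties simulated in one process for illustration (in practice only the public keys would cross the untrusted server):

```java
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;
import javax.crypto.spec.DHParameterSpec;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class DhDemo {
    public static void main(String[] args) throws Exception {
        // Alice generates her key pair (this also fixes the DH group parameters).
        KeyPairGenerator aliceGen = KeyPairGenerator.getInstance("DH");
        aliceGen.initialize(2048);
        KeyPair alice = aliceGen.generateKeyPair();

        // Bob reuses the group parameters carried in Alice's public key.
        DHParameterSpec params = ((DHPublicKey) alice.getPublic()).getParams();
        KeyPairGenerator bobGen = KeyPairGenerator.getInstance("DH");
        bobGen.initialize(params);
        KeyPair bob = bobGen.generateKeyPair();

        // Each side combines its own private key with the other's public key.
        KeyAgreement aliceKa = KeyAgreement.getInstance("DH");
        aliceKa.init(alice.getPrivate());
        aliceKa.doPhase(bob.getPublic(), true);

        KeyAgreement bobKa = KeyAgreement.getInstance("DH");
        bobKa.init(bob.getPrivate());
        bobKa.doPhase(alice.getPublic(), true);

        // Both arrive at the same shared secret; run it through a KDF before
        // using it as an encryption key. Prints "true".
        System.out.println(Arrays.equals(aliceKa.generateSecret(), bobKa.generateSecret()));
    }
}
```

Keep the caveat above in mind: without authenticating the public keys, a man in the middle (such as the relaying server) could substitute its own.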
I've seen products use password-authenticated key exchange. As the name suggests, these algorithms require that both parties (in this case, the same user, but on different devices) know a password. So ultimately, going with this approach requires the user to enter a password on one of the devices (the other can generate it and display it to the user). It's a little less troublesome than entering an entire encryption key into both devices, right?
As for technical implementations, it's still probably going to involve the server (or a server, if not the database server) just to relay messages, but these key exchange algorithms should keep the shared secret confidential.
"I wasn't able to find any example of this kind of encryption in any widely known service as of today."
One great resource I've found is a page from Mozilla's wiki on how they implemented key exchange in their Firefox Sync product. They use this when you set up Sync on multiple devices, which requires the second device to get the key from the first device.
I'm trying to build a secure system for transmitting data from a client Android app to a web server running PHP.
What I want to do is ensure that the system is cryptographically secure in such a way that the messages from the app can be verified as actually coming from the app itself, rather than from a devious user who may have written a custom script, or who is perhaps using cURL, in order to game the system.
There are a number of use cases for this kind of verification, for example:-
If an app contains an advert from which you gather metrics, you would want to verify that the click data is being sent from the app rather than from a malicious user who has figured out your API and is sending dummy data.
The app might have a multiple-choice survey and again, you would want to ensure that the survey results are being collected from the app.
The app is collecting GPS traces and you want to ensure that the data is being sent from the app itself.
In each of these cases, you would want to ensure that the origin of the messages is the app itself, and not just a user who is running a simple script to fake the data.
Some ideas I've considered:-
SSL - Works well for securing the channel and preventing tampering (which fulfils some of the requirements) but still cannot ensure the integrity of the source of the data.
Public-key cryptography - The client app could sign the data with a private key and then transmit it to the server where the signature can be verified. The problem is that the private key needs to be hardcoded within the app -- the app could be decompiled and the private key extracted and then used to sign fake data.
Home-made algorithms - A very similar question to this is asked here where the solutions only work until "someone figures out your algorithm" -- i.e. not a great solution!
Hash chain - This seemed like a really interesting way of using one-time keys to verify each data payload from the client to server, but again it relies on the app itself not being decompiled, because the password still needs to be stored by the app.
My limited knowledge of cryptography makes me think that it's actually theoretically impossible to build a system that would be totally verifiable in this manner, because if we cannot trust the end client or the channel, then there is nothing on which to base any trust... but maybe there's something I've overlooked!
It's not that hard: you just need to authenticate the app. You can do this with a simple user and password (over SSL) or use client authentication. In both cases, the credentials need to be in the app, and an attacker can extract them and impersonate the app. You have to live with it, and maybe implement some methods to mitigate it.
You can also authenticate the messages, by having them signed either with an asymmetric key (RSA, etc.) or a symmetric one (HMAC, etc.). A nonce helps against replays, where someone captures a validly signed message and sends it to your server over and over again. Depending on your protocol, the overhead of using one might be too much.
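A minimal sketch of the symmetric (HMAC) option with a nonce, using the standard Java crypto API; the message layout and the way the key is supplied are illustrative assumptions:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

public class MessageSigner {
    private final byte[] key;                  // in practice, loaded from secure storage
    private final SecureRandom random = new SecureRandom();

    public MessageSigner(byte[] key) {
        this.key = key;
    }

    // Produces "payload|nonce|signature". The server recomputes the HMAC over
    // "payload|nonce", compares it, and rejects any nonce it has seen before.
    public String sign(String payload) throws Exception {
        byte[] nonceBytes = new byte[16];
        random.nextBytes(nonceBytes);
        String nonce = Base64.getEncoder().encodeToString(nonceBytes);

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] sig = mac.doFinal((payload + "|" + nonce).getBytes(StandardCharsets.UTF_8));
        return payload + "|" + nonce + "|" + Base64.getEncoder().encodeToString(sig);
    }
}
```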
To protect the credentials, you can have the client generate them and save them in the system KeyStore, although it is not quite supported by a public API; see here for some details. This, of course, requires an extra step where you need to send the generated credentials (say, the public key) to your server securely, which might be tricky to implement properly.
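On newer Android versions (API 23 and later) this is now covered by a public API; a minimal sketch, with the key alias and algorithm chosen purely for illustration:

```java
import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class ClientKeys {
    // Generates a client key pair inside the Android KeyStore. The private key
    // never leaves the KeyStore; only the public key is exported and sent to
    // the server during registration.
    public static KeyPair generate() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance(
                KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
        kpg.initialize(new KeyGenParameterSpec.Builder(
                "client-auth-key",                                    // hypothetical alias
                KeyProperties.PURPOSE_SIGN | KeyProperties.PURPOSE_VERIFY)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .build());
        return kpg.generateKeyPair();
    }
}
```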
Whatever you do, don't try to invent your own cryptographic algorithm or protocol, use an established one.
The issue I am facing is an interesting one: my knowledge of security theory is strong, but my practical understanding is weak. That is, I understand the theory, but have had little practical application in this particular regard. I have stored passwords, transmitted them with a salt, verified them against a hash, etc. My needs here are similar but specific.
I have one application that other external applications may "hook" into via a ContentProvider URI. External applications may be developed by anyone, so I do not have control over them. However, I want to limit some access to subscribers. To facilitate this, each "subscribed" application will have a key registered to its package name. The ContentProvider then needs to verify this key as valid.
My issue is this:
Since it is passed via a URI, it is easily possible to intercept the key in transit. Additionally, my subscribers need a method by which they can store their own key without having to connect to a secure server. They cannot store the key as a literal within their app, of course, as this makes for an easy vulnerability. I am trying to provide as much of a solution as possible without having to "trust" the security of these other applications.
So, how do we store a key in both my database and their external application, and allow them to send it to me for specifically verified queries? I think my issue in understanding how to do this is the aspect of persistent storage and how it affects the model. That is, with a password model, the password is typed and not typically stored.
Process the key in an encrypted challenge / response.
The client requests a challenge value, which the server encrypts with a predetermined per-application public key.
If the client then returns the correct value to the server, encrypted with the server's public key, the handshake was a success.
Using a per-application private/public key pair and something like a GUID for the challenge value, it would be very hard to duplicate, and the keys never change hands except when the application developer registers initially.
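A minimal sketch of that handshake, with both sides simulated in one process; key distribution and the URI transport are out of scope here, and the RSA-OAEP padding is my assumption:

```java
import javax.crypto.Cipher;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.util.UUID;

public class ChallengeResponseDemo {
    private static final String TRANSFORM = "RSA/ECB/OAEPWithSHA-256AndMGF1Padding";

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair appKeys = kpg.generateKeyPair();     // registered per subscribed application
        KeyPair serverKeys = kpg.generateKeyPair();  // the provider's own key pair

        // Provider side: create a GUID challenge, encrypt it with the app's public key.
        String challenge = UUID.randomUUID().toString();
        Cipher enc = Cipher.getInstance(TRANSFORM);
        enc.init(Cipher.ENCRYPT_MODE, appKeys.getPublic());
        byte[] toClient = enc.doFinal(challenge.getBytes(StandardCharsets.UTF_8));

        // Client side: decrypt with its private key, re-encrypt with the provider's public key.
        Cipher dec = Cipher.getInstance(TRANSFORM);
        dec.init(Cipher.DECRYPT_MODE, appKeys.getPrivate());
        byte[] recovered = dec.doFinal(toClient);
        Cipher reEnc = Cipher.getInstance(TRANSFORM);
        reEnc.init(Cipher.ENCRYPT_MODE, serverKeys.getPublic());
        byte[] toServer = reEnc.doFinal(recovered);

        // Provider side: decrypt the response and compare with the original challenge.
        Cipher provDec = Cipher.getInstance(TRANSFORM);
        provDec.init(Cipher.DECRYPT_MODE, serverKeys.getPrivate());
        boolean ok = MessageDigest.isEqual(
                challenge.getBytes(StandardCharsets.UTF_8), provDec.doFinal(toServer));
        System.out.println("handshake success: " + ok);
    }
}
```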