I have read a few examples on SO about securing client data, but our issue is a little different and I'm not sure where to look.
Basically we have an Android game which is a geo-location based game. We apply HMAC-SHA1 to the query string to verify that the data being sent really comes from the client. There is a small issue: the HMAC-SHA1 key. I can obfuscate to my heart's content, but the key still ends up in the app. Someone can easily decompile the app, grab the key, and then send manual queries from a browser for their own user account (spoofing GPS).
I saw one example where someone suggested client- and server-side SSL authentication. I'm not sure how that would work: would you not just need to attach an SSL cert to the app? Would that not be subject to decompiling as well, and would it require the end user to recompile / use the cert?
Can we somehow use the package manager to get the self-signed cert? I need to find the correct way to secure our transmission so someone can't fake requests for their own user account.
Thanks
To authenticate the client, it needs some form of credentials. You can:
1. not save the credentials and have the user input them every time
2. save them somewhere
3. use system credentials
4. use some form of an identity provider
1 is inconvenient, and 2 is subject to attacks as long as someone has physical access to the device. For 3, you could use the user's Google account, so you can be (pretty) sure who they are and block them if there are any problems/attacks. 4 is really a variation of 3: the user authenticates to some third-party service, which only issues a (temporary) access token. So if the account is compromised, the token will eventually expire and/or be revoked (look into OAuth). Consider the risks and the amount of work to implement, and take your pick.
As for using client certificates, you can store them encrypted, so a passphrase is needed to use them. On pre-ICS you need to implement this yourself; on ICS you can use the system key store via the KeyChain API. You will only get access to the private key after you unlock the device (it uses the unlock password/PIN to protect keys) and the user explicitly grants permission.
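For illustration, a rough sketch of the ICS KeyChain flow might look like the following; the key type, host, and port are placeholders, and both KeyChain.getPrivateKey() and getCertificateChain() must run off the UI thread:

    import android.app.Activity;
    import android.security.KeyChain;
    import android.security.KeyChainAliasCallback;
    import java.security.PrivateKey;
    import java.security.cert.X509Certificate;

    public class ClientCertPicker {
        public static void pickClientCert(final Activity activity) {
            // Asks the user to pick a client key/certificate from the system key store (ICS+).
            KeyChain.choosePrivateKeyAlias(activity, new KeyChainAliasCallback() {
                @Override
                public void alias(final String alias) {
                    if (alias == null) return; // user cancelled or denied access
                    new Thread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                // Both calls block and must not run on the UI thread.
                                PrivateKey key = KeyChain.getPrivateKey(activity, alias);
                                X509Certificate[] chain =
                                        KeyChain.getCertificateChain(activity, alias);
                                // ...use key/chain to set up client-side SSL authentication
                            } catch (Exception e) {
                                // KeyChainException / InterruptedException
                            }
                        }
                    }).start();
                }
            }, new String[] { "RSA" }, null, "example.com", 443, null); // placeholders
        }
    }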
If you want to stick to your current way of doing things, implement the HMAC part in native code (OpenSSL, etc.) and generate the key at runtime by combining bits of it. That would make it fairly hard to reverse engineer. Additionally, you might want to add some sort of nonce so that requests cannot be replayed.
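If you keep it at the Java level, the signing itself boils down to something like this rough sketch (the key fragments, the hex encoding, and using a timestamp as the nonce are my own placeholder choices):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class RequestSigner {

        // Hypothetical: the key is assembled at runtime from fragments instead of
        // sitting in one constant. These byte values are placeholders, not a real key.
        private static byte[] assembleKey() {
            byte[] a = {0x13, 0x37, 0x42, 0x7f};
            byte[] b = {0x01, 0x02, 0x03, 0x04};
            byte[] key = new byte[a.length + b.length];
            System.arraycopy(a, 0, key, 0, a.length);
            System.arraycopy(b, 0, key, a.length, b.length);
            return key;
        }

        // Sign the query string plus a nonce (here a timestamp) so a captured
        // request cannot simply be replayed; the nonce is also sent to the server.
        public static String sign(String queryString, long nonce) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(assembleKey(), "HmacSHA1"));
            byte[] raw = mac.doFinal((queryString + "&nonce=" + nonce).getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : raw) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        }
    }

The server recomputes the HMAC with the same key and rejects requests whose nonce it has already seen or whose timestamp is too old.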
My web server has a REST API. I need to add user authentication to my app, and my thought process behind it is this:
Get the user's username and password from the app form
Encrypt the password and Base64 encode both the username and password
Send the data to the REST API over HTTPS
Web server verifies credentials, returns errors or success
Is this secure? I see a lot of mentions of OAuth2. What is it? What does it do better than my process?
The fact that you used the word "encrypt" for the user's password instead of "hash" demonstrates you have fairly limited knowledge about this. That will almost certainly result in you messing up your authentication procedures somewhere along the line and putting your users' private information at risk.
A really important point about OAuth2 is that it can be used with many existing third party providers (Google, Facebook, Twitter, etc) with minimal effort from you.
You don't need to do anything to store credentials or even authenticate users. The third party takes care of all of this and simply provides the client with a token (a long random string) which is then passed to your server. Your server then talks to the third-party server to make sure the token is valid (and to gain any info you need, like the user's name, email address or other information).
You really should consider using it if you can. The big companies put a lot of effort into securing their authentication methods and you gain all of that by making use of it.
A final nice point is that users don't need to create and remember credentials for (yet) another account.
Google has some docs to get you started and includes an OAuth playground to test how it works in practice.
A very basic explanation of OAuth2 is that the user logs into your system (with the username and password encrypted before being sent), and if they are authenticated, the server sends a token back to the user.
Thereafter, whenever the user contacts the web server, the client sends this token along with each API call. This is how the server makes sure that unauthenticated people can't access it.
So basically your current method includes parts of the OAuth2 standard, but not the most important part (the token).
In your situation, how would you stop unauthenticated people from accessing your web server? If the project is small, the risk of this is not that large. But for larger companies, this is a real threat that needs to be dealt with.
You should really try to understand the difference between encryption and hashing before providing an authentication portal for your users. There are many different hashing algorithms you can use. I've personally used BCrypt in the past and I have a related SO Question about it as well. You can find implementations of pretty much all the popular algorithms in pretty much all the major high level languages these days.
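For illustration, with the jBCrypt library on the JVM this is only a few lines (the work factor of 12 is just an example value):

    import org.mindrot.jbcrypt.BCrypt;

    public class PasswordStore {
        // Hash the password with a per-password random salt before storing it.
        public static String hashForStorage(String plaintextPassword) {
            return BCrypt.hashpw(plaintextPassword, BCrypt.gensalt(12)); // 12 = work factor
        }

        // At login, compare the candidate password against the stored hash.
        public static boolean matches(String candidate, String storedHash) {
            return BCrypt.checkpw(candidate, storedHash);
        }
    }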
Obviously if you don't want to do all that, you can use an OAuth provider, who will take care of all the hard bits like storing the passwords securely, protecting the database and all the other security aspects for you. There are many reliable OAuth providers: Google, Facebook, Yahoo, etc.
One thing to bear in mind would be the environment in which your app is hosted. OAuth does depend on having a connection available to the OAuth provider's servers every time a user wants to access your app. So, if you are behind a corporate firewall or similar which may block access to websites like Facebook, this might be a big problem.
I personally prefer token based authentication for my API projects. If you're not familiar with token based authentication you can read this SO Question and this link.
The general concept behind a token-based authentication system is simple. Allow users to enter their username and password in order to obtain a token which allows them to fetch a specific resource - without using their username and password. Once their token has been obtained, the user can offer the token - which offers access to a specific resource for a time period - to the remote site.
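In practice that usually means one call that trades the credentials for a token, and then attaching the token to every later request. A rough sketch follows; the endpoint URL, form body, and Bearer header are my own placeholder choices, not a fixed standard:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class TokenClient {

        // Step 1: POST the credentials once over HTTPS and read back a token.
        // (URL, body format, and response parsing are placeholders.)
        public static String login(String username, String password) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("https://api.example.com/login").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            byte[] body = ("username=" + URLEncoder.encode(username, "UTF-8")
                    + "&password=" + URLEncoder.encode(password, "UTF-8")).getBytes("UTF-8");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            // A real client would parse the token out of the response body here.
            return readBody(conn);
        }

        // Step 2: attach the token to every later call instead of the credentials.
        public static HttpURLConnection authenticatedRequest(String url, String token) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestProperty("Authorization", "Bearer " + token);
            return conn;
        }

        private static String readBody(HttpURLConnection conn) throws Exception {
            try (java.util.Scanner s = new java.util.Scanner(conn.getInputStream(), "UTF-8")) {
                return s.useDelimiter("\\A").hasNext() ? s.next() : "";
            }
        }
    }

If the token expires or is revoked, the client simply logs in again to obtain a new one.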
I've been doing a lot of searching about securing my API for mobile apps on Android or iOS.
Almost all examples have the user provide a user ID and password in exchange for a token.
But how do I prevent someone else from consuming my API without my consent?
Consider the following scenario:
I expose an API,
I then develop an Android app to consume it,
I then develop an iOS app to consume it.
Another developer reverse engineers my app, creates his own app and starts to consume my API without authorization.
How to prevent that?
Short answer: you can't.
Little longer answer: if you know what you are doing, you can always reverse engineer a given application and use its API. You can only make it more difficult and time consuming; using authentication via tokens and device IDs or usernames is a good first step. Apart from that: why would you want to close your API to outsiders? If your server code is written well, there is nothing to worry about.
You can maybe secure your API on a legal basis and sue developers who use it, but that is a completely different topic.
Some clarification regarding securing the API versus securing content via the API. Assume you create a server where you can send a user/password and receive a token if that combination is correct. For the account page you send said token over, and the server verifies that the token is valid and returns your account page. You have secured the actual content of the API. That is obviously very possible and almost a must-have unless you have no user-specific data. But still, everybody can send the exact same initial request from their custom app, sending a user/pass and again receiving a token, etc. You cannot really prevent the request itself or even determine that it was not sent by some service not authorized by you. You can send some hashes along with the request to add some security by obfuscation, but since your app has to compute them, so can the reverse engineer.
Yes, login APIs are open, but they return a token only on a successful match in your database. You should focus more on the security of your data than on unknown hits at your API.
A sign-up API can be used for creating a user, and a login API for returning that user's token. Only if a malicious developer has credentials can he access tokens and authenticated APIs. As for DDoS attacks, you can write logic to temporarily block IPs whose hit frequency is high.
You can also store the device ID of the signing-up user, which seems ideal for your scenario: entertain hits from that device ID only. Similarly, a user can add more devices with their credentials. I think even Google does that (it generates alerts if the user's credentials are used from a new device and adds the device to the list if the user confirms). Hope this helps.
Adding the AWS access key and secret key directly in app code is definitely not a good approach, primarily because the app resides on the user's device (unlike server-side code) and can be reverse engineered to get the credentials, which can then be misused.
I find this information everywhere, but am unable to find a definitive solution to this problem. What are my options? I read about the token vending machine (TVM) architecture for temporary credentials, but I am not convinced that it is any better. If I can reverse engineer the secret key, then I can reverse engineer the code which requests the temporary credentials. And once I have a set of temporary credentials to access S3, I am as good as if I had the key. I can request the temporary credentials again and again, even if they expire pretty quickly. To summarize: if the app can do something, I can do the same as a malicious user. If anything, the TVM can be a bit better at management (rotating credentials, changing the key in case of a breach, etc.). Please note we can put the same access restrictions on the secret key as we plan to do on the TVM temporary credentials.
Additionally, if Amazon doesn't want people to use the secret key directly in the app, why don't they block it in their SDK and enforce TVM or another correct solution? If you leave a path open, people are going to use it. I read several articles like this one and wonder why: http://blog.rajbala.com/post/81038397871/amazon-is-downloading-apps-from-google-play-and
I am primarily from a web background, so my understanding of this may be a bit flawed. Please help me understand whether this is better, and whether there is a perfect (or at least good) solution available to this problem.
PS: Is there a Rails implementation of TVM?
Embedding S3 keys in app code is very risky. Anyone can easily get that key from your app code (no reverse engineering or high skill set required); even if it is stored encrypted it is still compromised, someone just needs to try harder (depending on how you encrypt it).
I hope that you understand the advantages of using temporary credentials to access Amazon (S3, etc.) resources (mainly security, plus some others like no app update required). I think you are more confused about the process of getting the temporary credentials from the TVM and how that is safer than embedding keys in code.
Every client using TVM first needs to register with the TVM server implementation hosted by you. The communication between the app (using the TVM client) and the TVM server is over SSL.
First the app registers with the TVM by providing a UUID and a secret key. Please note that the secret key is not embedded in the app code (which I think is the main reason for your confusion) but generated randomly (using SecRandomCopyBytes, which generates an array of cryptographically secure random bytes) at the time of registration (and hex encoded).
Once the device is registered successfully with the TVM, the TVM client stores the generated UDID and secret key in the Keychain on iOS or in Shared Preferences on Android. The Keychain on iOS is the shared storage provided by the OS to securely (encrypted) store information (mainly keys, passwords, etc.).
After registration and UDID/secret key storage, the app can get a token from the TVM by sending the UDID, a cryptographic signature, and a timestamp. The cryptographic signature is an HMAC hash generated from the timestamp using the secret key. The TVM can use the UDID to look up the secret key and uses it to verify the signature. The TVM then responds by sending back temporary credentials, which are encrypted using the secret key (using AES). The application decrypts the temporary credentials using the key and can then use them to access any AWS services for which the temporary credentials are authorized. Eventually, the expiration time of these temporary credentials will be reached, at which point the application can get fresh temporary credentials, if required.
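Based on that description, the client side of the exchange looks roughly like the sketch below; the exact HMAC variant, the AES mode/IV handling, and the message format are assumptions on my part, not the actual TVM source:

    import javax.crypto.Cipher;
    import javax.crypto.Mac;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public class TvmClientSketch {

        // Signature sent with the token request: HMAC of the timestamp using the
        // per-device secret that was generated at registration time.
        public static String signTimestamp(byte[] deviceSecret, String timestamp) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(deviceSecret, "HmacSHA256"));
            return toHex(mac.doFinal(timestamp.getBytes("UTF-8")));
        }

        // The TVM's response (temporary credentials) comes back AES-encrypted with the
        // same per-device secret (which must be an AES-compatible length, e.g. 32 bytes).
        public static byte[] decryptCredentials(byte[] deviceSecret, byte[] iv, byte[] ciphertext)
                throws Exception {
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(deviceSecret, "AES"),
                    new IvParameterSpec(iv));
            return cipher.doFinal(ciphertext);
        }

        private static String toHex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) {
                sb.append(String.format("%02x", b & 0xff));
            }
            return sb.toString();
        }
    }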
I am not sure how signed URLs relate to TVM, because I don't understand the concepts 100%, but signed URLs really solved the problem for me. I needed a mechanism that would feed a web app and a mobile app data without allowing for misuse of the credentials. Putting the key in the code is indeed a very bad idea as it may generate a huge bill for the company.
After 3 days of extensive research, I found a simple and, it seems, reliable and relatively safe solution: signed URLs. The idea is that a very lightweight back-end can generate a temporary URL that grants the user access to a specific resource for a limited time. So the idea is simple:
the user asks our back-end, via a REST call, for a specific resource
the back-end is already authorized with AWS S3
the back-end generates a temporary URL for the user and sends it in the REST response (see the sketch below)
the user uses the URL to fetch the data directly from AWS
A plug-and-play Python implementation can be found here and with a slight modification that I had to use: here.
Of course one more thing to figure out is how we authorize the user before we know that we can grant them the URL, but that's another pair of shoes.
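If your back-end happens to be on the JVM, the equivalent of that Python snippet with the AWS SDK for Java (v1) is roughly this; the bucket, key, and 10-minute expiry are placeholders:

    import com.amazonaws.HttpMethod;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
    import java.net.URL;
    import java.util.Date;

    public class PresignedUrlExample {
        public static URL presign(String bucket, String key) {
            // Credentials come from the back-end environment, never from the app.
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            Date expiry = new Date(System.currentTimeMillis() + 10 * 60 * 1000); // 10 minutes
            GeneratePresignedUrlRequest request =
                    new GeneratePresignedUrlRequest(bucket, key)
                            .withMethod(HttpMethod.GET)
                            .withExpiration(expiry);
            return s3.generatePresignedUrl(request);
        }
    }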
You should ideally use Cognito Identity for this, along with appropriate policies. It should be used with S3TransferUtility and S3TransferManager in the iOS and Android SDKs. That allows for background uploads and downloads as well. Cognito vends temporary credentials for access to AWS resources and is free. Also, you could federate it using User Pools or providers like Google or Facebook if you want secure access.
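A minimal Android-side sketch of that setup might look like the following; the identity pool ID and region are placeholders, and the exact constructors vary a bit between SDK versions:

    import android.content.Context;
    import com.amazonaws.auth.CognitoCachingCredentialsProvider;
    import com.amazonaws.mobileconnectors.s3.transferutility.TransferUtility;
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.s3.AmazonS3Client;

    public class S3Access {
        public static TransferUtility buildTransferUtility(Context context) {
            // Cognito hands out temporary, scoped credentials; no AWS keys in the APK.
            CognitoCachingCredentialsProvider credentials = new CognitoCachingCredentialsProvider(
                    context,
                    "us-east-1:00000000-0000-0000-0000-000000000000", // placeholder identity pool ID
                    Regions.US_EAST_1);
            AmazonS3Client s3 = new AmazonS3Client(credentials);
            return new TransferUtility(s3, context);
        }
    }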
Thanks,
Rohan
I have 'secured' the communication between my Android application and a TLS server providing a financial transaction service, currently in development.
The security credentials are stored in a BKS keystore included in the Android APK. The password to the keystore is visible in plain text in the application source:
keyStore.load(is, "passwd".toCharArray());
I am concerned that if someone was to reverse engineer the app, they would be able to impersonate another user and compromise the security of the service.
I was wondering whether there is a fault in my implementation, if anyone else has this concern, and what the best method of securing against this possibility is.
Whenever you store security data on the client it can be compromised by reverse engineering. You may try to obscure it in the code, but a determined hacker will figure it out anyway. So the only way to make it more secure is not to have the password openly in the code. Maybe you can just ask the user for a PIN code at the start of the application and use it to decrypt the password?
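For example, instead of hard-coding "passwd", you could derive the keystore password from a user-supplied PIN with PBKDF2. A rough sketch (the iteration count and the way the salt is stored are assumptions):

    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class PinDerivedKey {
        // Derive key material from the user's PIN. The salt should be random,
        // generated once, and stored alongside the app data (it is not secret).
        public static char[] deriveKeystorePassword(char[] pin, byte[] salt) throws Exception {
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            PBEKeySpec spec = new PBEKeySpec(pin, salt, 10000, 256);
            byte[] keyBytes = factory.generateSecret(spec).getEncoded();
            // Turn the derived bytes into a char[] usable as a KeyStore.load() password.
            char[] password = new char[keyBytes.length];
            for (int i = 0; i < keyBytes.length; i++) {
                password[i] = (char) (keyBytes[i] & 0xff);
            }
            return password;
        }
    }

You would then call keyStore.load(is, deriveKeystorePassword(pin, storedSalt)) instead of using the literal string.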
Are the credentials stored in your app unique per user, i.e. does every user get their own APK with unique credentials? If you only have one APK with the same credentials, then this is as good as no security. Even worse, it gives a false feeling of security.
You (your employer) should really hire a security expert to design your system from security point of view.
Here's what I'd do:
App comes without security credentials.
Security credentials for every user are generated on the server.
Every user gets a secret activation code which is generated in a secure environment and delivered via an alternative channel, preferably via snail mail. Activation codes are time-limited and can be used only once.
On first use, the user types the activation code into the app, which enables a one-time download of the credentials over a secure (HTTPS) channel.
The user provides a password to encrypt the credentials while stored on the device.
Every time the app is used, the user must provide this password. If the app is not used for some time, it must time out the session and ask for the password again when the user wants access.
But don't take my word for it. You still need a security expert if there are financial transactions involved.
I believe that Diffie-Hellman Key Exchange is what I was looking for. I'd rather not have to re-implement my own version of DH using a complicated process which involves the user.
I'm currently programming for a payment processing company.
There are a set of rules and regulations for a transaction application, or POS app (point-of-sale application).
The rules are listed online as PCI validation; a certain level of security has to be implemented, or there can be a lawsuit from Visa, Inc. or many other companies.
About your question: it doesn't follow PCI compliance, and that is a security issue.
Please read up on PCI compliance so that there is a complete understanding of the security involved; it's not good to compromise cardholder data.
:)
Suppose you have a mobile application (Windows Phone or Android) that connects to your back-end using SOAP.
For making it easy, let's say that we have a Web Service implemented in C#. The server exposes the following method:
[WebMethod]
public string SayHallo() { return "Hallo Client"; }
From the server perspective, you can't tell if the caller is your mobile application or a developer trying to debug your web service or a hacker trying to reverse engineer/exploit your back-end.
How can one verify that the origin of the web service call is THE application, since anyone with the WSDL can invoke the web service?
I know I can implement some standard security measures to the web service like:
Implement HTTPS on the server so messages travel encrypted and the danger of eavesdropping is reduced.
Sign the requests on the client-side using a digest/hashing algorithm, validate the signature in the server and reject the messages that have not been signed correctly.
Write custom headers in the HTTP request. Anyway, headers can be simulated.
However, any well-experienced hacker, or a developer who knows the signing algorithm, could still generate a well-signed, well-formatted message. Or a really good hacker could disassemble the application and get access to the hidden know-how of my "top secret" communications protocol.
Any ideas how to make the SayHallo() method answer ONLY to requests made from my mobile application?
We are running in the context of a mobile application, with hardware access, there could be something that can be done exploiting the hardware capabilities.
If someone wonders, I'm thinking on how to make a mobile app secure enough for sensitive applications like banking, for example.
Thanks for your ideas.
What you are describing is bi-directional authentication. It can only be done by storing a signed public key (certificate) on both sides of the communication. That means each app would need to authenticate your server with your server's public key, and the server would need to authenticate each instance of your app. The app's public key would need to be produced and stored on the server at deployment time for each instance of your app. Think of this as two-way HTTPS: in general, authentication is only done in one direction, with the browser authenticating the server with a trusted signing key. In your case it would need to be done on both sides.
Normally you would have a service like VeriSign sign each instance of a public key; this can get quite expensive with multiple deployments of an app. In your case you could just create an in-house signing application using something like OpenSSL to sign your app every time it is distributed.
This does not mean that someone still could not hack your code and extract the signing key on the app side. In general any code can be hacked; it's just a question of how hard you can make it before they give up. If you were to go the custom hardware route, there are things such as crypto chips and TPMs that can serve as a key store and make it harder for people to gain access to the private keys on the device.
A quick google search turned up the following:
http://www.codeproject.com/Articles/326574/An-Introduction-to-Mutual-SSL-Authentication
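On the Java client side, the moving parts of mutual SSL are roughly these (file names, passwords, and keystore types are placeholders; on Android you would typically use a BKS keystore instead of JKS):

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public class MutualTlsClient {
        public static SSLContext buildContext() throws Exception {
            // Client keystore: holds this app instance's private key + signed certificate.
            KeyStore clientStore = KeyStore.getInstance("PKCS12");
            try (FileInputStream in = new FileInputStream("client.p12")) {
                clientStore.load(in, "client-pass".toCharArray()); // placeholder password
            }
            KeyManagerFactory kmf =
                    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(clientStore, "client-pass".toCharArray());

            // Truststore: holds the server's (or your in-house CA's) certificate.
            KeyStore trustStore = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream("truststore.jks")) {
                trustStore.load(in, "trust-pass".toCharArray());
            }
            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);

            SSLContext context = SSLContext.getInstance("TLS");
            context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
            return context;
        }
    }

The server is configured symmetrically: it presents its own certificate and requires (and validates) the client certificate during the handshake.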
If you are thinking about using rewards points, and are really worried about someone gaming the system from the outside, a better solution is to have each person create an account that is stored securely on the server, with their points saved and tallied there. This centralizes all the data and gives you complete control over it without worrying about a malicious app reporting non-existent points. (This is how banks work.)
If you want to verify that a user is both mobile and who they say they are, then the best way is to leverage the network. Send a push notification with the hashed key that you want the user to use via:
APNs for iOS
something like Urban Airship for Windows Phone
GCM for Android.
In general, the model looks like:
Server authenticates itself to the many clients with a certified public key (this is the whole Public Key Infrastructure, Certificate Authorities, etc)
Each client identifies itself to the server via some other authentication system (in 99.9% of cases, this is a password)
So if you're wondering how this sort of thing works in the case of banking apps, etc that's basically how it breaks down: (1) Client and server establish a secure channel such as a shared secret key, using the server's public key, (2) Client authenticates via this secure channel using some other mechanism.
Your question specifically, however, seems more aimed at the app authenticating itself (i.e., any request from your app is authentic) with the thought that if only your app can be authenticated, and your app is well-behaved, then everything should be safe. This has a few implications:
This implies that every user of your app is trusted. In security terms, they are part of your "trusted computing base".
Attempting to achieve this sort of goal WITHOUT considering the user/user's computing platform as trusted is essentially the goal of DRM; and while it's good enough to save money for music publishers, it's nowhere close to good enough for something really sensitive.
In general:
The problem that you're specifically looking at is VERY hard to solve, if you're looking for very strong security properties.
You likely don't need to solve that problem.
If you give us some more context, we might be able to give you more specific advice.
In addition to the answers already given, how about using a login-type scheme with the unique ID of the phone and a password? Get the user to register with your back-end, and every time a transaction has to be made with the back-end, require the password or have an option to log in automatically.
You can use the following ways to secure and track requests to your server.
You can force the mobile or web clients to send you a custom header with the device type when accessing your web service through REST methods.
Use basic HTTP authentication by forcing each client to have their own username and password, provided by you as the authorized web service provider (sketched below).
If you need more advanced protection, you can use OAuth 2.0 to secure your web service.
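The custom-header and basic-auth suggestions together amount to a couple of request headers. A small sketch (the header name X-Device-Type is an invented placeholder; java.util.Base64 needs Java 8 / Android API 26+, otherwise use android.util.Base64; and Basic auth must only be used over HTTPS, since Base64 is encoding, not encryption):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class BasicAuthExample {
        public static HttpURLConnection open(String url, String user, String pass) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            String credentials = Base64.getEncoder()
                    .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + credentials);
            // The custom device-type header mentioned above (name is a placeholder).
            conn.setRequestProperty("X-Device-Type", "android");
            return conn;
        }
    }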
Since your originating app is going to be an Android or Windows Phone app, either one will be relatively easy for the wannabe hacker to debug. In any case you're going to be running the code on a machine that you have no control over, so no SSL tricks or signature checking will solve your fundamental problem.
The only way you can combat this threat is to NOT TRUST THE CLIENT. Verify that the input coming from clients is valid before acting on it and, if you're making a game, that it's accompanied by a valid security token, etc.
In essence, build your service so that it doesn't matter if the user is using an unofficial client.