As we know, an Android APK can be decompiled. Even if you're using ProGuard, anyone can see your server URLs. I have a basic authentication scheme on my server where the client passes an AuthToken in the header of a request; if someone manages to steal that AuthToken, they can fool the server.
Which authentication system should I use to prevent this, given that I have to include something (a key or AuthToken) in my Java code? (The app doesn't require a login.)
Here are my pointers on this subject.
APK reverse engineering is doable, so your URL is definitely susceptible to attack. This risk can be mitigated with token-based authentication, which you are already doing.
How could someone steal the auth token? The main possibility is network sniffing or spoofing, which can be prevented by using HTTPS.
Also, the token should not be stored anywhere on the device. It should live only in memory, and the app should request a new token every time it is opened. Since you have mentioned that the app does not need any login, you can issue a short-lived auth token or expire the token on the server end.
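For illustration, a minimal sketch of "keep it in memory only and fetch a fresh one on each app start", assuming a hypothetical HTTPS endpoint that returns a short-lived token as plain text (the URL and response format are not from the question):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch only: fetch a short-lived token over HTTPS when the app starts and keep it in memory.
// The endpoint URL and response format are assumptions, not part of the original question.
public final class TokenHolder {

    private static volatile String authToken; // held in memory only, never written to disk

    public static String getToken() throws IOException {
        if (authToken == null) {
            authToken = requestNewToken();
        }
        return authToken;
    }

    private static String requestNewToken() throws IOException {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://api.example.com/token").openConnection(); // hypothetical endpoint
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            return in.readLine(); // assume the server returns the short-lived token as plain text
        } finally {
            conn.disconnect();
        }
    }
}

On Android the network call must of course run off the main thread, and the token is simply gone (as intended) when the process dies.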
You can also read more on this topic here: https://www.owasp.org/index.php/REST_Security_Cheat_Sheet
Related
I am getting into Android development for the first time and am having a blast, of course. I do have a question, though, about the general approach to authentication (for dealing with a backend).
To begin, here is, in a nutshell, what I have worked out.
Using Google's documentation (link), I authenticate the user with the Google Sign-In API. I have put the logic mentioned in the reference in my app's main activity. After the onConnected method fires, I have a successfully connected GoogleApiClient.
With the now connected GoogleApiClient, I call GoogleAuthUtil.getToken to get an OAuth2 token that I use to authenticate requests to my backend. Basically, any time I make an HTTP request to my backend, I include this token as a header. My backend reads this token and uses the Python API Google provides to verify it. In the backend, I use the email embedded in the (now parsed) token to make sure the user to whom that OAuth2 token was issued is, in fact, a user of my system.
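For concreteness, roughly what that client-side flow looks like in code; the scope string and backend URL are placeholders, and GoogleAuthUtil.getToken blocks, so it has to run off the main thread:

import android.content.Context;
import com.google.android.gms.auth.GoogleAuthException;
import com.google.android.gms.auth.GoogleAuthUtil;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch only: the scope string, backend URL, and header name are placeholders.
public final class BackendClient {

    public static int callSecuredResource(Context context, String accountEmail)
            throws IOException, GoogleAuthException {
        // Ask Play Services for an OAuth2 token for the signed-in account.
        String token = GoogleAuthUtil.getToken(
                context, accountEmail, "oauth2:https://www.googleapis.com/auth/userinfo.email");

        // Attach the token as a header; the backend verifies it and extracts the email.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://my-backend.example.com/api/resource").openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + token);
        try {
            return conn.getResponseCode();
        } finally {
            conn.disconnect();
        }
    }
}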
Now, here are the questions. First, does this sound like a reasonable approach to authentication on the Android platform? What might I be missing? What could go wrong?
The second question is a bit more direct. When I get the OAuth2 token from the client app, I store it and use the same token each time an HTTP request to a secured resource is made. Eventually, of course, the token will expire. From some limited testing in the Android emulator, it seems that if I shut down the application and restart it, I get the same, expired token back from GoogleAuthUtil.getToken, rather than a fresh token with a new expiration in the future. In my tests, I have had to restart the emulator in order to get a token with a correct expiry. Am I mistaken here? Is there something special I need to do to tell the Google API to issue me a new token? Do I need to disconnect the GoogleApiClient and reconnect it? I hope to avoid doing this in order to limit the number of activities that need to carry the callbacks required to complete this process.
Any words of wisdom here will be greatly appreciated!
After you have got your token you can validate it, and if the response is an 'invalid_token' error, you can use GoogleAuthUtil.clearToken(Context context, String token) to clear the stale token and then get a new one with the same method you are already using to obtain the auth token.
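A sketch of that retry logic, assuming the same account and scope values used for the original getToken call (again, off the main thread):

import android.content.Context;
import com.google.android.gms.auth.GoogleAuthException;
import com.google.android.gms.auth.GoogleAuthUtil;
import java.io.IOException;

// Sketch: if the backend rejects the token as invalid/expired, clear the cached token
// and request a fresh one. The scope string is a placeholder.
public final class TokenRefresher {

    public static String refreshToken(Context context, String accountEmail, String staleToken)
            throws IOException, GoogleAuthException {
        // Remove the cached (expired) token from Play Services' cache...
        GoogleAuthUtil.clearToken(context, staleToken);
        // ...then fetch a fresh one with the same call used originally.
        return GoogleAuthUtil.getToken(
                context, accountEmail, "oauth2:https://www.googleapis.com/auth/userinfo.email");
    }
}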
First, my sincere apologies for asking a newbie security question. This is my first real foray in the world of online security and I'm kind of lost.
I am in the process of developing an Android app and an accompanying website, both of which require users to log in to use the services my web servers provide. The services are all written in a session-less fashion, meaning authenticated requests (requests requiring user credentials) must provide their accompanying security tokens, and every authenticated request first has to validate the user's credentials.
The way I've designed this architecture is to have users log in with their email and password. This information is sent to an authentication server via SSL and an authentication token (a hash independent of the password hash) is returned. This token is then stored on the client (in cookies for the website, in private shared preferences on Android). For all future calls, unless the user logs out, this token is valid and can be used to authenticate the user. Each device (different Android devices or web clients) also gets its own independent token, so the authentication credential is effectively a pair of hashed token + device id.
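Purely for illustration, a server-side sketch of issuing such a per-device token; the class and the in-memory map are hypothetical stand-ins for whatever store the authentication server actually uses:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: after a successful SSL login, issue an independent random token for this device.
// The client must later present the (deviceId, token) pair on every authenticated call.
public final class TokenIssuer {

    // deviceId -> issued token; stands in for the real persistent store
    private static final Map<String, String> TOKENS = new ConcurrentHashMap<>();

    public static String issueToken(String deviceId) {
        String token = UUID.randomUUID().toString(); // independent of the password hash
        TOKENS.put(deviceId, token);
        return token;
    }
}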
In addition, I would like to avoid using SSL for every authenticated call. Ideally only the initial authentication (with the email/password) would be encrypted, and the rest of the calls would go over plain HTTP using the authentication token obtained when the user signed in. My reason for avoiding it is the handshake cost, and that maintaining persistent or long-lived connections is not preferred.
Not using SSL, however, leaves me open to man-in-the-middle (MIM) attacks. If anyone intercepts the calls and gets hold of the [device id + authentication token] pair, for all intents and purposes they will be able to impersonate the user and access everything the user can access until the user logs out, at which point the token will be invalidated.
I know my implementation doesn't handle MIM attacks, so I was wondering if you could suggest another way to implement this that doesn't require SSL for each and every call and yet avoids MIM attacks.
In short, my requirements are:
Do not maintain sessions on the server
Use SSL only for initial login (email/password pair)
Do not use SSL for subsequent calls that provide authentication token and device id
Somehow avoid MIM attacks if possible (this is the real requirement)
Is it at all possible to have all 4 of these requirements together? Can I avoid using SSL connections and still maintain secure, session-less servers? Where am I going wrong with my implementation and how can I avoid issues with MIM attacks?
Many thanks in advance and apologies if this is a duplicate. I couldn't find the answer anywhere. Perhaps I was searching the wrong thing. If so, please let me know and I'll close/remove the question.
SSL keeps state at the OSI session layer and encrypts the whole message with a shared key. If you want a cheap solution that prevents MIM with no server state, the only (low-security) option I can think of is to use a server-wide global secret and send it to the client during the initial SSL exchange. The following HTTP round trips would then have their requests/responses encrypted with this shared secret.
But that is simply a badly implemented SSL, and the application-level encryption/decryption will probably be more costly than the built-in SSL. And you can't encrypt the headers!
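For what it's worth, a rough sketch of what such application-level encryption of the body might look like; key distribution and decryption are hand-waved, and as noted above this is essentially a hand-rolled, weaker SSL:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch: encrypt a request body with the server-wide secret handed out during the
// initial SSL exchange. The secret must be 16/24/32 bytes for AES; java.util.Base64
// requires Java 8 (API 26+ on Android). Headers remain in the clear.
public final class BodyCrypter {

    public static String encrypt(byte[] sharedSecret, String plainBody) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv); // fresh IV per message
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(sharedSecret, "AES"), new IvParameterSpec(iv));
        byte[] cipherText = cipher.doFinal(plainBody.getBytes(StandardCharsets.UTF_8));

        // Prepend the IV so the receiver can decrypt, then Base64 it for transport over HTTP.
        byte[] out = new byte[iv.length + cipherText.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(cipherText, 0, out, iv.length, cipherText.length);
        return Base64.getEncoder().encodeToString(out);
    }
}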
Looking for confirmation and relevant docs for a best practice/design pattern for a RESTful interface between an Android native application and a PHP website.
Does this make sense?
HTTPS requests (SSL/TLS), so that communication is encrypted.
OAuth2 for token based authentication (so that the user can authorize with the site initially with a username and password but then rely on an authorization token).
Anything missing? Is there a better approach? Are there general approaches for a persistent connection?
I have seen this approach used and its implementation was very secure. Instead of calling it an authToken, I refer to it as a sessionToken, since mine were set to expire after a certain period of time, after which the server requests the username/password from the client again. This helps drop dead sessions and ensures that if someone has succeeded in maliciously obtaining a user's sessionToken, they are thwarted the next time the app switches to HTTPS to provide credentials again (assuming you only use HTTPS/SSL for login). If all traffic is sent over SSL, then the point of the session token timeout is for the benefit of the servers, so they can clear out dead sessions.
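A minimal sketch of that server-side timeout check (the names and the two-hour value are made up for illustration):

import java.util.concurrent.TimeUnit;

// Sketch: a sessionToken recorded at login carries its creation time; once it is older
// than the timeout the server rejects it, forcing the client back to the HTTPS login.
public final class SessionTokenCheck {

    private static final long TIMEOUT_MILLIS = TimeUnit.HOURS.toMillis(2); // illustrative value

    public static boolean isStillValid(long issuedAtMillis) {
        return System.currentTimeMillis() - issuedAtMillis < TIMEOUT_MILLIS;
    }
}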
*Just something to be aware of: sending all data over SSL is fairly cost-heavy on the server compared to plain requests, so if you can avoid it without compromising security, it can really help with scalability.
I'm interested in the best way to do user auth in a mobile app. At the moment the set up is quite simple. I'm storing the username and password on the app and sending it to the api each time I need to run a restricted query.
This I feel is probably the wrong way to go about this.
Would a better way be to send the username and password when the user logs in and then store that user's id? The problem with this is that the API then accepts a user id rather than a username and password. A user id is much easier to guess, and a malicious person could submit requests to the API with randomly selected user ids, performing actions under other users' accounts. I have an API key. Is this secure enough?
The issue is that I want to start integrating Twitter and Facebook OAuth into the app. I haven't read much about it, but I think you get a "token". How would this work with the setup you're suggesting? Would there be a benefit to creating a token in my own database of users and using that token (whether it be mine, Facebook's or Twitter's) as the authorisation? Or would it make sense to keep each service separate and deal with them separately?
Thank you.
The correct way would be to generate an auth token on the server when the user logs in and send this token back in the login reply. That token is then used in subsequent requests.
This means the server must keep track of the auth tokens it generates. You can also track token creation times and make tokens expire after some time.
The token must be a sufficiently long random string so that it cannot be easily guessed. How to do this was answered before: How to generate a random alpha-numeric string?
Personally I prefer the UUID approach.
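For example (a sketch; either variant yields a token that is practically impossible to guess):

import java.security.SecureRandom;
import java.util.UUID;

// Two common ways to mint an unguessable auth token on the server.
public final class AuthTokens {

    // UUID variant: 122 bits of randomness, e.g. "3f1c2b9e-...".
    public static String uuidToken() {
        return UUID.randomUUID().toString();
    }

    // Explicit SecureRandom variant: 256 random bits, hex-encoded.
    public static String randomToken() {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}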
Update:
This problem has already been solved in web browsers, via cookies and sessions. You can reuse this mechanism in your Android requests (though some REST purists disapprove of this approach):
Enable sessions on server.
When the user logs into the server, add some data to the session, for instance the time of login:
request.getSession().setAttribute("timeOfLogin", System.currentTimeMillis());
Since sessions are enabled, you also need to enable support for cookies in your HttpClient requests: Using Cookies across Activities when using HttpClient
Every time a request is made, the server should check whether the session contains the timeOfLogin attribute; otherwise it should return an HTTP 401 reply (see the filter sketch after this list).
When the user logs out, call the server's logout URL and clear the cookies on the client.
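A rough servlet-filter sketch of the timeOfLogin check from the step above (the class name and 401 handling are illustrative):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Sketch: reject any request whose session does not carry the timeOfLogin attribute.
public class LoginCheckFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        HttpSession session = request.getSession(false); // don't create a new session here
        if (session == null || session.getAttribute("timeOfLogin") == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED); // HTTP 401
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}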
This question is about trying to understand the security risks involved in implementing oauth on a mobile platform like Android. Assumption here is that we have an Android application that has the consumer key/secret embedded in the code.
Assuming a consumer secret has been compromised and a hacker has gotten hold of it, what are the consequences?
Compromised Consumer Secret assumptions
Am I correct in stating that a compromised consumer secret as such has no effect on the user's security, or on any data stored at the OAuth-enabled provider the user was interacting with? The data itself is not compromised and cannot be retrieved by the hacker.
The hacker would need to get a hold of a valid user access token, and that's a lot harder to get.
What could a hacker do with a compromised consumer secret ?
Am I also correct in stating the following:
The hacker can set up/publish an application that imitates my app.
The hacker can attract users who will go through the OAuth flow, retrieving an access token via the hacker's OAuth dance (using the compromised consumer key/secret).
The user might think he's dealing with my app, as he will see a familiar name (consumer key) during the authorization process.
When a consumer issues a request via the hacker, the hacker can easily intercept the access token and, combined with the consumer secret, can now sign requests on my behalf to gain access to my resources.
End-user impact
In the assumption that
a hacker has set up an application/site using my consumer secret
one of my users was tricked into authorizing access to that application/site
The following might happen:
the end-user may begin noticing that something fishy is going on and inform the service provider (e.g. Google) about the malicious app
the service provider can then revoke the consumer key/secret
OAuth consumer (my application) impact:
My app (containing the consumer secret) would need to be updated, since otherwise my clients would no longer be able to authorize my application to make requests on their behalf (my consumer secret would no longer be valid).
Delegating all OAuth traffic
Although it would be possible to delegate a lot of the OAuth interactions to an intermediate webserver (doing the OAuth dance and sending the access token to the user), one would have to proxy all service interactions as well, since the consumer key/secret is required for signing each request. Is this the only way to keep the consumer key/secret out of the mobile app and stored in a more secure place on the intermediate webserver?
Alternatives
Are there alternatives to this proxying? Is it possible to store the consumer secret on the intermediate webserver and have some mechanism by which the Android application (published in the market and properly signed) can make a secure request to the intermediate webserver to fetch the consumer secret and store it internally in the app? Can a mechanism be implemented so that the intermediate webserver "knows" it is the official Android app requesting the consumer secret, and will hand the secret out only to that particular Android app?
Summary: I would just take the risk and keep the secret in the client app.
Proxy server alternative:
The only way you can reasonably mitigate the problems I list below and make the proxying work would be to go the whole nine yards: move all the business logic for dealing with the third-party webservice's resources to your proxy server, and make the client app a dumb terminal with a rich UI. This way, the only actions a malicious app could make the proxy perform on its behalf would be the ones your business logic legitimately needs.
But now you get into the realm of a whole slew of other problems having to do with reliability and scalability.
Long deliberation on why a simple proxy wouldn't work:
Some people, when confronted with a problem, think "I know, I'll add my own proxy server." Now they have two problems. (With apologies to Jamie Zawinski.)
Your assumptions are largely right - right down to the point where you start thinking about your own server, whether it keeps the secret and proxies the calls for the client app, or attempts to determine whether the app is legitimate before giving it the secret. In both approaches, you still have to solve the problem of "is this request coming from a piece of code I wrote?"
Let me repeat - there is no way to distinguish on the wire which particular piece of software is running. If the data in the messages looks right, nothing can prove it's another app that's sending that message.
At the end of the day, if I am writing a malicious app, I don't care whether I actually know the real secret, as long as I can make somebody who knows it do the work on my behalf. So, if you think a malicious app can impersonate your app to the third-party OAuth servers, why are you certain it can't impersonate your app to your proxy?
But wait, there's more. The domain at which your proxy service is located, is your identity to both your clients and the OAuth provider (as shown to the end user by the OAuth provider). If a malicious app can make your server do bad stuff, not only is your key revoked, but your public web identity is also not trusted anymore.
I will start with the obvious - there is no way to distinguish on the wire which particular piece of software is running. If the data in the messages looks right, nothing can prove it's another app that's sending that message.
Thus, any algorithm that relies on an app-side stored secret can be spoofed. OAuth's strength is that it never gives the user's credentials to the app, instead giving the app temporary credentials of its own that the user can revoke if necessary.
Of course, the weak point here is that a sufficiently convincing app can get the user to trust it and not revoke the credentials before it has finished its nefarious deeds.
However, one way to mitigate this is Google's approach of using 3-legged OAuth instead of the standard 2-legged variety. In 3-legged OAuth there's no pre-assigned secret; instead, a new access token secret is issued along with each access token. While ultimately this suffers from the same drawback, since a bad app can read the good app's token secret from its process, it does mean the user has to approve the app's access every time it needs a new access token.
And of course, this also means that it's a bit more inconvenient and annoying for the user.