secure verification process between android app and server - android

I am writing an Android application which will not be available through the Google Play Store. I am looking into how I can verify that any user of the application is indeed a verified user.
I would like to use for this the server that the application already talks to for sending and receiving data. My idea was to set up something like a challenge that only verified clients would be able to pass, so that anyone using a fake app would not be able to bypass it.
Is there a standard approach to this problem? I have searched a bit but did not find anything covering this entirely. Please keep in mind that I am aware that, since the application runs on an Android phone (a device outside my control), there will probably always be ways to bypass the challenges. I am looking to see what the majority is doing in these cases.

There are two separate issues here. The first is user authentication (authn) and authorization (authz); the second is verifying that the client app itself is authentic.
For user authn/authz, I would use some form of OAuth2 with OpenID Connect. The end result is that you are authorizing your client app to access your end resources on behalf of the user. There are open source and free commercial services available to get you started.
More problematic is authentication of the app itself. API keys are the standard approach here, but these are static secrets which don't do much good if the app is tampered with or the key is observed in the communications channel. No matter how hard you try to hide or compute the secret as needed, if your endpoint is valuable enough, someone will do the work necessary to extract and abuse the secret and then your backend.
You are on a good track thinking about some form of challenge-response protocol. Captchas are the canonical approach here, but they are quite annoying to users of a mobile app and are not always very effective. I believe (and, full disclosure, so does my company) that attesting the app's authenticity through a cryptographically secure challenge is a solid strategy. The attestation service challenges the app and analyzes its response. The challenge evaluates whether the app's code has been tampered with and assesses the state of the runtime (is the device rooted? is the app running in a debugger? are frameworks like Frida or Xposed present? etc.). The app is issued a short-lifetime token - properly signed if the attestation passes, invalid otherwise. There's no secret in the app, and the app does not make the authentication decision; it just passes the token on to your backend, which checks the token lifetime and signature to determine the app's authenticity. No token, or an invalid token, and you know this is a bot or a tampered app.
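To make the backend side of that concrete, here is a minimal sketch of the token check, assuming the attestation service issues a JWT signed with an HMAC secret shared only with your backend; the java-jwt library and all names here are illustrative, not any particular product's API:

import com.auth0.jwt.JWT;
import com.auth0.jwt.JWTVerifier;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.exceptions.JWTVerificationException;

public class AttestationCheck {

    // Secret shared between the attestation service and this backend only;
    // it is never present in the mobile app.
    private final JWTVerifier verifier;

    public AttestationCheck(String sharedSecret) {
        this.verifier = JWT.require(Algorithm.HMAC256(sharedSecret)).build();
    }

    // Returns true only if the token is correctly signed and not expired.
    public boolean isGenuineApp(String attestationToken) {
        if (attestationToken == null || attestationToken.isEmpty()) {
            return false; // no token: treat as a bot or tampered app
        }
        try {
            verifier.verify(attestationToken); // checks HMAC signature and the exp claim
            return true;
        } catch (JWTVerificationException e) {
            return false; // bad signature or expired token
        }
    }
}

The important property is that the authenticity decision is made here, on your server, and not inside the app.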
For background on user and app authenticity, check out a 3 part blog post, starting with Mobile API Security Techniques, or if you prefer video, check out A Tour of Mobile API Underprotection. I encourage you to also check out approov.io for how this can be implemented as a service.

Related

How to update pinned ssl certificates android

I am implementing SSL pinning in our android app. I have pinned 2 certificates (current and backup) at the client by embedding them in the app.
Now, I want to have a mechanism in place to update these certificates without having to roll out an app upgrade in case the certificates expire or the private key is compromised. How can I implement that?
One possible solution I am seeing is through app notification. I can broadcast a notification with new certificates and store them in the client. Is there any problem with this approach, or is there a better one?
PUBLIC KEY PINNING
I am implementing SSL pinning in our android app. I have pinned 2 certificates (current and backup) at the client by embedding them in the app.
If you pin against the public key, you do not need to update your mobile app each time a certificate is rotated on the server, as long as the new certificate is issued for the same key pair. You can read the article Hands-On Mobile API Security: Pinning Client Connections for more details on how this can be done:
For networking, the Android client uses the OKHttp library. If our digital certificate is signed by a CA recognized by Android, the default trust manager can be used to validate the certificate. To pin the connection it is enough to add the host name and a hash of the certificate’s public key to the client builder(). See this OKHttp recipe for an example. All certificates with the same host name and public key will match the hash, so techniques such as certificate rotation can be employed without requiring client updates. Multiple host name - public key tuples can also be added to the client builder().
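As a rough illustration of that recipe, a pinned OkHttp client could be built like this; the host name and the two pin hashes are placeholders for your own current and backup public key hashes:

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

public class PinnedClient {

    public static OkHttpClient build() {
        // Pins are SHA-256 hashes of the certificate's Subject Public Key Info,
        // so a rotated certificate issued for the same key pair still matches.
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=") // current key
                .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=") // backup key
                .build();

        return new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
    }
}

Note the second entry: shipping a backup pin is what makes the recovery scenario described next possible.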
For a situation where the private key behind the pinned certificate gets compromised, you will end up in the same situation you are trying to solve now, that is, needing to release a new mobile app to update what you pin against. In other words, the public key can no longer be trusted, so the server must rotate the certificate to one using the backup key pair you shipped with your mobile app. This approach buys you time for a new release, one that removes the compromised public key, without locking out all your users.
You should always store the backup private keys in separate places, so that if one is compromised they are not all compromised at once; otherwise releasing a backup pin with the mobile app is useless.
DON'T DO THIS
Now, I want to have a mechanism in place to update these certificates without having to roll out an app upgrade in case the certificates expire or the private key is compromised. How can I implement that?
Unfortunately the safest way to deal with a compromised private key is to release a new mobile app that no longer trusts it. Any remote solution you devise to update the certificates will open the mobile app's doors for attackers to replace the certificates you are pinning against.
So my advice is not to go down this road, because you will shoot yourself in the foot more easily than you might think.
One possible solution I am seeing is through app notification. I can broadcast a notification with new certificates and store them in the client. Is there any problem with this approach, or is there a better one?
Even though the mobile app has the connection pinned, the pinning can be bypassed; a MitM attack can then be performed and the new certificates retrieved from the attacker's server instead of yours. Please read the article The Problem with Pinning for more insight into how it is bypassed:
Unpinning works by hooking, or intercepting, function calls in the app as it runs. Once intercepted the hooking framework can alter the values passed to or from the function. When you use an HTTP library to implement pinning, the functions called by the library are well known so people have written modules which specifically hook these checking functions so they always pass regardless of the actual certificates used in the TLS handshake. Similar approaches exist for iOS too.
While certificate pinning can be bypassed, it is still strongly advised to use it, because security is all about layers of defense: the more you have, the harder it will be to overcome them all. This is nothing new; if you think of medieval castles, they were built with this approach.
A POSSIBLE BETTER APPROACH
But you also asked for a better approach:
Is there any problem with this approach, or is there a better one?
As already mentioned you should pin against the public key of the certificate to avoid lockouts of the client when you rotate the server certificates.
While I cannot point you to a better approach for dealing with compromised private keys, I can point out that, to protect certificate pinning from being bypassed by introspection frameworks like Xposed or Frida, you can employ the Mobile App Attestation technique, which attests the authenticity of the mobile app.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
Xposed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
Before we dive into the Mobile App Attestation technique, I would first like to clear up a common misconception among developers regarding WHO and WHAT is calling the API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the difference between WHO and WHAT is accessing an API server, think of two communication channels: the intended one and the actual one.
The intended communication channel represents the mobile app being used as you expect: by a legitimate user without malicious intentions, using an untampered version of the mobile app, and communicating directly with the API server without being man-in-the-middle attacked.
The actual channel may represent several different scenarios: a legitimate user with malicious intentions using a repackaged version of the mobile app; a hacker using the genuine version of the mobile app while man-in-the-middle attacking it, to understand how the communication between the mobile app and the API server works and so be able to automate attacks against your API; and many other scenarios that we will not enumerate here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, for example by using OpenID Connect or OAuth2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script, or an attacker manually poking around the API with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legitimate users using a repackaged version of the mobile app, or an automated script trying to game and take advantage of the service provided by the application.
The above write-up was extracted from an article I wrote entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first in a series of articles about API keys.
Mobile App Attestation
The use of a Mobile App Attestation solution will enable the API server to know WHAT is sending the requests, thus allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app has not been tampered with and is not running on a rooted device, by running an SDK in the background that communicates with a service in the cloud to attest the integrity of the mobile app and of the device it is running on.
On successful attestation of the mobile app's integrity, a short-lived JWT token is issued, signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the attestation fails, the JWT token is signed with a secret the API server does not know.
The app must then send the JWT token in the headers of every API call. This allows the API server to serve only the requests for which it can verify the token's signature and expiration time, and to refuse those that fail verification.
Since the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
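On the mobile side, attaching the token to every request can be done with a simple OkHttp interceptor; this is only a sketch, and the header name and the TokenProvider interface are assumptions standing in for whatever the attestation SDK actually exposes:

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class AttestationHeaderInterceptor implements Interceptor {

    // Hypothetical source of the short-lived token returned by the attestation SDK.
    public interface TokenProvider {
        String currentToken();
    }

    private final TokenProvider tokenProvider;

    public AttestationHeaderInterceptor(TokenProvider tokenProvider) {
        this.tokenProvider = tokenProvider;
    }

    @Override
    public Response intercept(Chain chain) throws IOException {
        // Attach the attestation token to every API call.
        Request signed = chain.request().newBuilder()
                .header("Attestation-Token", tokenProvider.currentToken())
                .build();
        return chain.proceed(signed);
    }

    public static OkHttpClient clientWith(TokenProvider provider) {
        return new OkHttpClient.Builder()
                .addInterceptor(new AttestationHeaderInterceptor(provider))
                .build();
    }
}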
So this solution works in a positive detection model, without false positives, thus not blocking legitimate users while keeping the bad guys at bay.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also requires a small check in the API server code to verify the JWT token issued by the cloud service. This check is what allows the API server to decide which requests to serve and which to deny.
CONCLUSION
So I recommend you switch to pinning the certificates by the public key, and if you want to protect against certificate pinning being bypassed, and against other threats, you should devise your own Mobile App Attestation solution or use one that is ready to plug and play.
In the end, the solution you use to protect your mobile app and API server must be chosen in accordance with the value of what you are trying to protect and the legal requirements for that type of data, such as the GDPR regulations in Europe.
DO YOU WANT TO GO THE EXTRA MILE?
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.

How can I keep an API key secure from malicious attackers decompiling my app as well as detect malicious uses?

I am currently developing an android application that uses an API secret and access tokens to access data over TLS.
Instead of storing the secret locally on the app in plaintext I am considering sending it over TLS from something like Firebase. I would send it encrypted and have a method of decryption that is fairly obfuscated. Then the API secret would be used to make requests to the API.
Are there considerations that should be made to protect the keys? My concern is that a malicious entity could decompile the app and insert their own code to find out our method of hiding the API key.
I'm not sure how someone could figure out the key now. I assume they'd decompile the code and redirect the API secret after it's been decrypted.
Eventually, no matter what, I understand that it could be hacked and someone could discover the API secret. How do I then detect that someone has the API secret? They can't hurt other users unless they have their access tokens, which is a different matter, but are there any well-known ways of detecting attacks? The only effect a malicious entity could have is sending many requests to the API servers pretending to be us which would increase our billing, but this should still be protected against. I could rotate my secret but if they already have a method of finding it, then this doesn't do much for me.
To summarize:
What is considered best practice? Should the API secret stay in our servers where we'd make requests from Firebase Functions? How does one detect an attack, or does this depend from API to API? If an attack is detected do I have to force users to update to a new version to make requests and hide the data in a new way in the new version?
I've put a lot of thought into this, but I still have questions I haven't found answers to myself or online. Thank you.
Storing an API key in an app is problematic. You can obfuscate it or hide it in a computation, but if the secret is valuable enough, someone will extract it.
You are on a good track thinking about sending your key from a server. That keeps the key out of the app package itself. You must protect that communication, so TLS is a must, and you should go further and pin the connection to avoid man-in-the-middle attacks.
Rather than sending the key itself, I would send a time-limited token signed by your API key. You'll need to send different tokens over time, but the API key is never directly exposed on the app, and you can change the signing key without requiring an app field upgrade. If a token is stolen, at least it is only valid for a limited period of time.
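As a sketch of that idea, the server can mint short-lived tokens signed with the API secret and hand those to the app instead of the secret itself; the java-jwt library, the 5-minute lifetime and the claim names are illustrative choices:

import java.util.Date;
import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;

public class ApiTokenIssuer {

    private final Algorithm signingAlgorithm;

    public ApiTokenIssuer(String apiSecret) {
        // The API secret stays on the server; only tokens derived from it reach the app.
        this.signingAlgorithm = Algorithm.HMAC256(apiSecret);
    }

    // Issues a token that is only valid for the next few minutes.
    public String issueToken(String appInstanceId) {
        Date now = new Date();
        Date expiry = new Date(now.getTime() + 5 * 60 * 1000); // 5-minute lifetime
        return JWT.create()
                .withSubject(appInstanceId)
                .withIssuedAt(now)
                .withExpiresAt(expiry)
                .sign(signingAlgorithm);
    }
}

Whatever verifies these tokens (your API server, or a proxy in front of the third-party API) checks the signature and expiry, so a stolen token is only useful for a few minutes.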
You still need to make sure you don't send tokens to a tampered app or to a bot that has reverse engineered your protocol. You need to authenticate the installed app package/code as well as check for a safe run-time environment (not running in a debugger, no frameworks like Frida or Xposed, etc.). You could add tamper detection to your app, but since you're already sending tokens to your app, I think a better approach is to set up a challenge-response protocol which will cryptographically attest the app. That way you, and not the app, make the actual authenticity decision.
For additional background on user and app authenticity, check out a 3 part blog post, starting with Mobile API Security Techniques, or if you prefer video, check out A Tour of Mobile API Underprotection. You can also look at approov.io for a commercial implementation of challenge-response attestation and JWT tokens.

Secure an API for mobile apps call

I've been doing a lot of searching on how to secure my API for mobile apps on Android or iOS.
Almost all examples show the user providing a user ID and password in exchange for a token.
But how to prevent someone else to consume my api without my consent?
Face the following scenario:
I expose an API,
I develop, then, an app for Android to consume it,
I develop, then, an app for iOS to consume it.
Another developer reverse engineers my app, creates his own app and starts to consume my API without authorization.
How to prevent that?
Short answer: you can't.
Little longer answer: if you know what you are doing, you can always reverse engineer a given application and use its API. You can only make it more difficult and time consuming; using authentication via tokens and device IDs or usernames is a good first step. Apart from that: why would you want to close your API to outsiders? If your server code is written well, there is nothing to worry about.
You can maybe secure your API on a legal basis and sue developers who use it, but that is a completely different topic.
Some clarification regarding securing the API versus securing content via the API. Assume you create a server where you can send a user/password and receive a token if that combination was correct. For the account page you send said token over, and the server verifies that the token is valid and returns your account page. You have secured the actual content of the API. That is obviously very possible and almost a must-have unless you have no user-specific data. But still, anybody can send the exact same initial request from their custom app, send a user/pass, again receive a token, and so on. You cannot really prevent the request itself, or even determine that it was not sent by some service not authorized by you. You can send some hashes along with the request to add some security by obfuscation, but since your app has to compute them, so can the reverse engineer.
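To make that token flow concrete, here is a deliberately simplified sketch of the server-side piece: the login endpoint calls issue() after the user/password check succeeds, and protected endpoints such as the account page call verify(). It also illustrates the limitation above: the content is protected, but nothing stops an unofficial client from making the same calls:

import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionTokens {

    private final Map<String, String> tokenToUser = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Called by the login endpoint after the user/password check succeeded.
    public String issue(String username) {
        byte[] bytes = new byte[32];
        random.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        tokenToUser.put(token, username);
        return token;
    }

    // Called by protected endpoints (e.g. the account page); returns the user or null.
    public String verify(String token) {
        return token == null ? null : tokenToUser.get(token);
    }
}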
Yes, login APIs are open, but they return a token only on a successful match in your database. You should focus more on the security of your data than on unknown hits at your API.
The sign-up API can be used to create a user, and the login API to return that user's token. Only if a malicious developer has valid credentials can he obtain tokens and use the authenticated APIs. As for DDoS attacks, you could write logic to temporarily block IPs whose hit frequency is high.
You can also store the device ID of the user who signs up, which seems ideal for your scenario, and entertain hits from that device ID only. Similarly, a user can add more devices with their credentials. I think even Google does this (it generates an alert if the user's credentials are used to sign in from a new device, and adds the device to the list if the user confirms). Hope this helps.
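A small, purely illustrative sketch of that device-ID idea; how you obtain and transmit the device ID, and where you persist the mapping, is up to you:

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DeviceAllowList {

    // username -> device IDs registered at sign-up or confirmed later by the user
    private final Map<String, Set<String>> registeredDevices = new ConcurrentHashMap<>();

    // Store the device ID the user signed up with (or confirmed a new device from).
    public void register(String username, String deviceId) {
        registeredDevices.computeIfAbsent(username, u -> ConcurrentHashMap.newKeySet())
                         .add(deviceId);
    }

    // Serve the request only when it comes from a device the user has registered.
    public boolean isAllowed(String username, String deviceId) {
        Set<String> devices = registeredDevices.get(username);
        return devices != null && devices.contains(deviceId);
    }
}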

How to validate the origin of a web service invokation

Suppose you have a mobile application (Windows Phone or Android) that connects to your back-end using SOAP.
For making it easy, let's say that we have a Web Service implemented in C#. The server exposes the following method:
[WebMethod]
public string SayHallo() { return "Hallo Client"; }
From the server perspective, you can't tell if the caller is your mobile application or a developer trying to debug your web service or a hacker trying to reverse engineer/exploit your back-end.
How can one identify that the origin of the web service call is THE application? as anyone with the WSDL can invoke the WS.
I know I can implement some standard security measures to the web service like:
Implement HTTPS on the server so messages travel encrypted and the danger of eavesdropping is reduced.
Sign the requests on the client side using a digest/hashing algorithm, validate the signature on the server, and reject messages that have not been signed correctly (see the sketch after this list).
Write custom headers in the HTTP request, although headers can be simulated anyway.
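For the request-signing measure above, a typical construction is an HMAC over the request body plus a timestamp; the sketch below uses Java for brevity (the same construction exists in C# via HMACSHA256), and the shared secret and signed fields are assumptions. As noted next, anyone who extracts the secret from the app can forge the same signature:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigner {

    private final byte[] key;

    public RequestSigner(String sharedSecret) {
        this.key = sharedSecret.getBytes(StandardCharsets.UTF_8);
    }

    // Computes an HMAC-SHA256 signature over the request body plus a timestamp.
    // The client sends body, timestamp and signature; the server recomputes the HMAC,
    // compares it, and rejects stale timestamps to limit replay.
    public String sign(String body, long timestampMillis) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] signature = mac.doFinal((timestampMillis + "." + body).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signature);
    }
}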
However, any experienced hacker, or a developer who knows the signing algorithm, could still generate a well-signed, well-formatted message. Or a really good hacker could disassemble the application and get access to the hidden know-how of my "top secret" communications protocol.
Any ideas on how to make the SayHallo() method answer ONLY to requests made from my mobile application?
We are running in the context of a mobile application with hardware access, so there could be something that can be done by exploiting the hardware capabilities.
If someone wonders: I'm thinking about how to make a mobile app secure enough for sensitive applications like banking, for example.
Thanks for your ideas.
What you are describing is bi-directional (mutual) authentication. It can only be done by storing a signed public key (certificate) on both sides of the communication. Each app would need to authenticate your server with your server's public key, and the server would need to authenticate each instance of your app. The app's public key would need to be produced and stored on the server at deployment time with each instance of your app. Think of this as two-way HTTPS: in general, authentication is only done in one direction, with the browser authenticating the server using a trusted signing key; in your case this would need to be done on both sides. Normally you would have a service like VeriSign sign each instance of a public key, which can get quite expensive with multiple deployments of an app. In your case you could create an in-house signing application using something like OpenSSL to sign your app every time it is distributed. This still does not mean that someone could not hack your code and extract the signing key on the app side. In general any code can be hacked; it's just a question of how hard you can make it before they give up. If you were to go the custom hardware route, there are things such as crypto chips and TPMs that can serve as a key store and make it harder for people to gain access to the private keys on the device.
A quick google search turned up the following:
http://www.codeproject.com/Articles/326574/An-Introduction-to-Mutual-SSL-Authentication
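To give a flavour of the client half of mutual TLS, here is a rough Java sketch; the keystore format, file paths and passwords are placeholders, on Android you would normally load the keystores from app resources or the platform key store rather than from files, and the server must separately be configured to require and validate client certificates:

import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class MutualTlsClient {

    public static HttpsURLConnection open(String url,
                                          String clientP12Path, char[] clientP12Password,
                                          String trustStorePath, char[] trustStorePassword) throws Exception {
        // Client identity: the app instance's private key and certificate (PKCS#12).
        KeyStore clientStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(clientP12Path)) {
            clientStore.load(in, clientP12Password);
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientStore, clientP12Password);

        // Server identity: the server certificate(s) this client trusts.
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            trustStore.load(in, trustStorePassword);
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        HttpsURLConnection conn = (HttpsURLConnection) new URL(url).openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        return conn; // the TLS handshake will present the client certificate to the server
    }
}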
If you are thinking about using reward points, and are really worried about someone gaming the system from the outside, a better solution is to have each person create an account that is stored securely on the server, with their points saved and tallied there. This centralizes all the data and gives you complete control over it, without worrying about a malicious app reporting non-existent points (this is how banks work).
If you want to verify that a user is both mobile and who they say they are then the best way is to leverage the network. Send a push notification with the hashed key that you want the user to use via:
APNs for iOS
something like Urban Airship for Windows Phone
GCM for Android.
In general, the model looks like:
Server authenticates itself to the many clients with a certified public key (this is the whole Public Key Infrastructure, Certificate Authorities, etc)
Each client identifies itself to the server via some other authentication system (in 99.9% of cases, this is a password)
So if you're wondering how this sort of thing works in the case of banking apps, etc that's basically how it breaks down: (1) Client and server establish a secure channel such as a shared secret key, using the server's public key, (2) Client authenticates via this secure channel using some other mechanism.
Your question specifically, however, seems more aimed at the app authenticating itself (i.e., any request from your app is authentic) with the thought that if only your app can be authenticated, and your app is well-behaved, then everything should be safe. This has a few implications:
This implies that every user of your app is trusted. In security terms, they are part of your "trusted computing base".
Attempting to achieve this sort of goal WITHOUT considering the user/user's computing platform as trusted is essentially the goal of DRM; and while it's good enough to save money for music publishers, it's nowhere close to good enough for something really sensitive.
In general:
The problem that you're specifically looking at is VERY hard to solve, if you're looking for very strong security properties.
You likely don't need to solve that problem.
If you give us some more context, we might be able to give you more specific advice.
In addition to the answers already given, how about using a login-type scheme with the unique ID of the phone and a password? Get the user to register with your "back-end" and, every time a transaction has to be made with the back-end, require the password or offer an option to log in automatically.
You can use the following approaches to secure and track your requests to the server.
Force the mobile or web clients to send you a custom header with the device type when accessing your web service through REST methods.
Use basic HTTP authentication by requiring each client to have their own username and password, provided by you as the authorized web service provider (see the sketch after this list).
If you need more advanced protection, you can use OAuth 2.0 to secure your web service.
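A minimal sketch combining the custom device-type header and basic HTTP authentication with OkHttp; the header name, credentials and URL are placeholders:

import okhttp3.Credentials;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class BasicAuthCall {

    public static Response callService(String url, String username, String password) throws Exception {
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url(url)
                // Adds "Authorization: Basic <base64(user:pass)>"; only safe over HTTPS.
                .header("Authorization", Credentials.basic(username, password))
                // Custom header identifying the client type.
                .header("X-Device-Type", "android")
                .build();
        return client.newCall(request).execute();
    }
}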
Since your originating app is going to be an Android or Windows Phone app, either one will be relatively easy for a wannabe hacker to debug. In any case you are going to be running the code on a machine you have no control over, so no SSL tricks or signing checks will solve your fundamental problem.
The only way you can combat this threat is to NOT TRUST THE CLIENT. Verify that the input coming from the clients is valid before acting on it; if you're making a game, verify that it's accompanied by a valid security token, and so on.
In essence, build your service so that it doesn't matter whether the user is using an unofficial client.

What are the risks of exposing the not-so-secret oauth key? Are there any workarounds?

I'm developing a multi-platform app at present which uses Twitter, including authentication via OAuth.
I've looked at lots of existing apps and most of these seem to embed both the id and the secret key inside the app.
What are the risks of doing this? Is it just that someone can "download and inspect" your app binary in order to extract your key - and can then pretend to be your app (phishing style)? Or are there other risks?
Aside from the risks, are there any workarounds or solutions people know of?
The one solution I've seen already is that some people workaround this by routing all twitter calls via their own website - e.g. OAuth Twitter with only Consumer Key (not use Consumer Secret) on iPhone and android - but this seems quite slow and costly - I'd rather not route all calls via my own web service if I can avoid it (or have I misunderstood the solution - is it just the auth that goes via a website?)
The only real danger seems to be that someone can impersonate your app and get it banned by Twitter; then you have to move your users to a new key.
This Google I/O talk has some pointers on obfuscating the code to make it harder to reverse engineer. http://www.google.com/events/io/2011/sessions/evading-pirates-and-stopping-vampires-using-license-verification-library-in-app-billing-and-app-engine.html
The workaround that I came up with is to fetch the secret key from your webserver (over SSL) using an authenticated web service call. You can cache it in the client app (just in memory). Use it when you need to without connecting to the server again.
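A rough sketch of that workaround using OkHttp: the secret is fetched once over HTTPS with an authenticated call and kept only in memory; the endpoint, header and session token are assumptions:

import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class RemoteSecretCache {

    private final OkHttpClient client = new OkHttpClient();
    private volatile String cachedSecret; // kept only in memory, never written to disk

    // Fetches the secret once from the HTTPS endpoint and reuses it afterwards.
    public String getSecret(String endpointUrl, String sessionToken) throws IOException {
        String secret = cachedSecret;
        if (secret != null) {
            return secret;
        }
        Request request = new Request.Builder()
                .url(endpointUrl)
                .header("Authorization", "Bearer " + sessionToken) // authenticated call
                .build();
        try (Response response = client.newCall(request).execute()) {
            if (!response.isSuccessful()) {
                throw new IOException("Unexpected response: " + response.code());
            }
            secret = response.body().string();
        }
        cachedSecret = secret;
        return secret;
    }
}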
The downside I see is that someone could probably still run your app under a debugger and inspect the memory and get the key.
