Different ways to authenticate a smartphone application such as Android - android

I am looking for different ways to authenticate a client such as an Android, iPhone, Windows, or BlackBerry app, and to find out which one is better and why.
As per my research, I know of two ways to authenticate a client:
1. A private key embedded inside the smartphone app, used to sign messages. The problem with this is that it is easy for a hacker to get access to the private key.
2. A client certificate.
Are there other ways to authenticate these smartphone apps, and which one is the most secure?

Both of the options you list are really the same thing. A client certificate is just the public half of a public/private key pair, signed by some entity along with some identifying information.
The best way to authenticate the client is to use mutually authenticated SSL. You can use self-signed certificates here, so you don't need to buy any from a CA, assuming you control all of the clients that you want to allow access and you control the servers they talk to. This ensures that your clients only receive data from your legitimate server (configure the SSL layer in your application to accept only the self-signed certificate your server is using) and that your server only accepts data from your authorized clients (configure your server to accept only the self-signed client certificate deployed in your app as a resource). There is a complete step-by-step rundown on how to do this for Android in Application Security for the Android Platform, published by O'Reilly.
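As a rough illustration of the client-side configuration, here is a minimal sketch, assuming a BKS keystore with the client key pair and a BKS truststore with the self-signed server certificate are bundled as raw resources (the resource names and the password are placeholders, not from the answer above):

```java
import android.content.Context;
import java.io.InputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public final class MutualTlsClient {

    public static HttpsURLConnection open(Context ctx, URL url) throws Exception {
        char[] password = "changeit".toCharArray();   // see the discussion below on protecting this

        // Client identity: private key + certificate, bundled as an app resource.
        KeyStore clientStore = KeyStore.getInstance("BKS");
        try (InputStream in = ctx.getResources().openRawResource(R.raw.client)) {
            clientStore.load(in, password);
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientStore, password);

        // Server trust: accept only the self-signed certificate we shipped.
        KeyStore trustStore = KeyStore.getInstance("BKS");
        try (InputStream in = ctx.getResources().openRawResource(R.raw.truststore)) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setSSLSocketFactory(sslContext.getSocketFactory());
        return conn;
    }
}
```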
You are correct that you need to embed some secret information (a private key) in your client application, and an attacker will be able to compromise it. The best option you have on Android right now is to put the certificate and private key in a keystore that you include in your application APK as a resource, and have your application access the keystore when it needs the key. That means your application needs to know the password to the keystore, so how you protect that password becomes important. You can obfuscate your code to make it harder for an attacker to recover the password, but that will only slow down a determined attacker who is reverse engineering your application. Short of requiring the user of the device to type the password in every time they want to use your application, that's the best you can do. If your client app needs access to something that it stores on the device, a person with access to that device will be able to access it as well; all you can do is make it more difficult.

Related

Provide private key and certificate for android app

My app needs to consume a web service, and I would like to authenticate the app against the server with a certificate.
However, embedding a keystore with a signed key in the package is considered bad practice (and explicitly warned against: https://developer.android.com/google/play/asi) as it can be extracted and decrypted.
I can generate a private key with the Android-provided keystore and use it, but I still need it to be signed in order to verify it on the server side.
Ideally there would be a certificate chain, with a trusted root authority, containing metadata of the signed app package that I could verify on the server side.
Or is it somehow possible to use the package signature in the certificate generation process to prove that the self-signed certificate originated from an untampered package?
A bad actor ("Trudy") can flash any custom Android ROM, including a ROM that removes package certificate validation during APK installation. Thus, any query that your app makes to its Android host OS is essentially a request to Trudy.
So, it might be possible to uncover the installation of a hacked APK with an unadulterated OS. But with an adulterated OS, all bets are off.
I think any solution for the client to self-validate would necessitate an authoritative validation of the host OS. Not easy.
Is it somehow possible to use the package signature in the certificate generation process to prove that the self-signed certificate originated from an untampered package?
(1) Do you mean that each client generates a different self-signed certificate after installation and somehow cross-references it against the APK package signature in order to authenticate? Then no, this will not stop Trudy. (And it would not be useful for authentication, either.)
(2) Do you mean that the client has a universal private key embedded in the APK and that metadata on the package certificate can be used to verify against the private key? I do not know offhand whether there are any available fields in the package certificate metadata in which to add this information. It is an interesting approach that you suggest. However, since Trudy might be the OS, she could theoretically mock any result she wishes, so I do not see this stopping Trudy either.
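For reference, this is roughly what such a self-check would look like; the point of the answer is that on a compromised OS these PackageManager calls can simply lie (a sketch using the pre-API-28 GET_SIGNATURES flag; names are illustrative):

```java
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.Signature;
import java.security.MessageDigest;

public final class SelfSignatureCheck {

    public static byte[] signingCertSha256(Context ctx) throws Exception {
        PackageInfo info = ctx.getPackageManager().getPackageInfo(
                ctx.getPackageName(), PackageManager.GET_SIGNATURES);
        Signature sig = info.signatures[0];                 // first signer
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return md.digest(sig.toByteArray());                // hash of the DER-encoded signing cert
    }
}
```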
This (SO) post by a developer of ProGuard offers five options for handling secrets in your Android app. He notes:
Intrinsically, nothing on the client-side is unbreakable, but you can certainly raise the bar.
I understand that you want user authentication to the web service (API) using the user's private key, with the signed token verified at the server against the user's certificate or public key, which was registered there at the time of user registration.
Are you looking to use an external USB token to store the private key securely, or do you want the private key stored in the Android device's memory?
We have worked on such requirements on desktop (sample at https://web.signer.digital/Home) but not on Android. I know a USB cryptographic token (like ePass2003 or others) can be connected to an Android device over USB OTG, and Android drivers to access such tokens are available for some devices, but I haven't actually worked on that. Still, this may be a direction for you to look into.
A mobile application is a public client, because an end user can view and modify the app's code. So your application can't keep secrets from malicious users. That includes client certificates, which are required for mutual TLS (mTLS).
It is not safe to store any client certificate on an Android device for machine-to-machine mTLS authentication (don't confuse machine-to-machine with user-to-machine): it can be exported and then used on other devices by any user.
Operating systems provide certificate storage (Android does; some browsers, e.g. Firefox, may have their own), where CA certificates are stored and where client certificates can be stored as well.
It is a good idea to store user client certificates (issued for the user, not for the app/machine) there. The mobile app should then offer an option to select which user client certificate to use for mTLS, as sketched below. "But I want to authenticate my application without any user interaction" won't be possible: there must be at least some initial user interaction, namely importing the user's client certificate into the certificate storage and configuring mTLS in the app. That is, in my opinion, the best and most secure mTLS implementation. A real-world example: the Rocket Chat app.
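A minimal sketch of that selection step, using the Android KeyChain API to let the user pick a certificate already imported into the system credential storage (the host, port, and key types here are placeholders):

```java
import android.app.Activity;
import android.security.KeyChain;
import android.security.KeyChainAliasCallback;

public final class ClientCertPicker {

    public static void pick(final Activity activity) {
        KeyChain.choosePrivateKeyAlias(
                activity,
                new KeyChainAliasCallback() {
                    @Override
                    public void alias(String alias) {
                        if (alias == null) return;          // user cancelled
                        // Persist the alias; later, on a background thread:
                        // PrivateKey key = KeyChain.getPrivateKey(activity, alias);
                        // X509Certificate[] chain = KeyChain.getCertificateChain(activity, alias);
                        // ...and wire these into an X509KeyManager for the mTLS connection.
                    }
                },
                new String[] {"RSA", "EC"},  // acceptable key types
                null,                        // any issuer
                "api.example.com", 443,      // host/port hint shown in the chooser
                null);                       // no preselected alias
    }
}
```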
Are you asking for reverse TLS-based verification? Usually, clients verify web services because the client's host OS trusts a particular CA certificate that was also used to issue the web service's TLS certificate.
The difference is: web servers are protected and controlled by their authors, while client devices are not. So having a private key in the app to sign the data sent to the server is futile.
Still, you can explore Google's Licensing implementation with server-side verification to enforce that only a legitimately installed and licensed app can contact the server.

How to update pinned SSL certificates on Android

I am implementing SSL pinning in our Android app. I have pinned two certificates (current and backup) at the client by embedding them in the app.
Now, I want a mechanism in place to update these certificates without having to roll out an app upgrade in case the certificates expire or the private key is compromised. How can I implement that?
One possible solution I see is through app notifications: I can broadcast a notification containing the new certificates and store them on the client. Is there any problem with this approach, or is there a better approach?
PUBLIC KEY PINNING
I am implementing SSL pinning in our Android app. I have pinned two certificates (current and backup) at the client by embedding them in the app.
If you pin against the public key, you do not need to update your mobile app each time a certificate is rotated on the server, as long as the new certificate is issued for the same key pair. You can read the article Hands On Mobile API Security: Pinning Client Connections for more details on how this can be done:
For networking, the Android client uses the OKHttp library. If our digital certificate is signed by a CA recognized by Android, the default trust manager can be used to validate the certificate. To pin the connection it is enough to add the host name and a hash of the certificate’s public key to the client builder(). See this OKHttp recipe for an example. All certificates with the same host name and public key will match the hash, so techniques such as certificate rotation can be employed without requiring client updates. Multiple host name - public key tuples can also be added to the client builder().
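A minimal sketch of what that recipe amounts to with OkHttp's CertificatePinner (the host name and pin values are placeholders; each pin is the Base64 SHA-256 hash of a certificate's Subject Public Key Info):

```java
import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

public final class PinnedClient {

    public static OkHttpClient build() {
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=") // current key
                .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=") // backup key
                .build();
        return new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
    }
}
```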
For a situation where the private key used for the certificate gets compromised, you will end up in the same situation you are trying to solve now, that is, needing to release a new mobile app that updates what you pin against. In other words, the public key cannot be trusted anymore, so the server must rotate to a certificate issued for the backup key whose pin you shipped with your mobile app. This buys you time for a new release, which removes the pin for the compromised key, without locking out all your users.
You should always store the backup private keys in separate places, so that if one is compromised you don't get all of them compromised at once; otherwise shipping a backup pin with the mobile app is useless.
DON'T DO THIS
Now, I want a mechanism in place to update these certificates without having to roll out an app upgrade in case the certificates expire or the private key is compromised. How can I implement that?
Unfortunately, the safest way to deal with a compromised private key is to release a new mobile app that no longer trusts it. Any remote mechanism you devise to update the certificates opens the mobile app's doors for attackers to replace the certificates you are pinning against.
So my advice is not to go down this road, because you will shoot yourself in the foot more easily than you might think.
One possible solution I see is through app notifications: I can broadcast a notification containing the new certificates and store them on the client. Is there any problem with this approach, or is there a better approach?
Even while the mobile app has the connection pinned, the pinning can be bypassed; a MitM attack can then be performed and the new certificates retrieved from the attacker's server instead of from yours. Please read the article The Problem with Pinning for more insight into bypassing it:
Unpinning works by hooking, or intercepting, function calls in the app as it runs. Once intercepted the hooking framework can alter the values passed to or from the function. When you use an HTTP library to implement pinning, the functions called by the library are well known so people have written modules which specifically hook these checking functions so they always pass regardless of the actual certificates used in the TLS handshake. Similar approaches exist for iOS too.
While certificate pinning can be bypassed, it is still strongly advised to use it, because security is all about layers of defense: the more you have, the harder it is to overcome them all. This is nothing new; medieval castles were built with the same approach.
A POSSIBLE BETTER APPROACH
But you also asked for a better approach:
Is there any problem in this approach or is there any better approach?
As already mentioned, you should pin against the public key of the certificate to avoid locking out clients when you rotate the server certificates.
While I cannot point you to a better approach for dealing with compromised private keys, I can point out how to protect certificate pinning from being bypassed with introspection frameworks like Xposed or Frida: we can employ the Mobile App Attestation technique, which attests to the authenticity of the mobile app.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
Before we dive into the Mobile App Attestation technique, I would first like to clear up a common misconception among developers regarding WHO and WHAT is calling the API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the difference between the WHO and the WHAT accessing an API server, let's compare the intended communication channel with the actual one.
The intended communication channel represents the mobile app being used as you expect: by a legitimate user without any malicious intentions, using an untampered version of the mobile app, and communicating directly with the API server without being man-in-the-middle attacked.
The actual channel may represent several different scenarios, such as a legitimate user with malicious intentions using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while man-in-the-middle attacking it to understand how the communication between the mobile app and the API server works, in order to automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you have a clue why the WHO and the WHAT are not the same, but if not, it will become clear in a moment.
The WHO is the user of the mobile app, whom we can authenticate, authorize, and identify in several ways, such as with OpenID Connect or OAuth2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests originated from WHAT you expect: the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script, or an attacker manually poking around the API with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legitimate users using a repackaged version of the mobile app, or an automated script trying to game and take advantage of the service provided by the application.
The above write-up was extracted from an article I wrote entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first article in a series about API keys.
Mobile App Attestation
The use of a Mobile App Attestation solution enables the API server to know WHAT is sending the requests, allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation service is to guarantee at run time that your mobile app has not been tampered with and is not running on a rooted device, by using an SDK running in the background that communicates with a service in the cloud to attest to the integrity of the mobile app and the device it is running on.
On successful attestation of the mobile app's integrity, a short-lived JWT is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If attestation fails, the JWT is signed with a secret the API server does not know.
Now the app must send the JWT with every API call, in the request headers. This allows the API server to serve requests only when it can verify the signature and expiration time of the JWT, and to refuse them when verification fails.
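A minimal sketch of that server-side check, using the jjwt library as one possible choice (the secret value and the library are my assumptions, not part of any particular attestation product):

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;
import java.nio.charset.StandardCharsets;
import javax.crypto.SecretKey;

public final class AttestationTokenCheck {

    // Shared only between the API server and the attestation service.
    private static final SecretKey SECRET = Keys.hmacShaKeyFor(
            "replace-with-the-shared-attestation-secret".getBytes(StandardCharsets.UTF_8));

    /** Returns true only if the token's signature and expiry check out. */
    public static boolean isValid(String jwtFromHeader) {
        try {
            Jws<Claims> jws = Jwts.parserBuilder()
                    .setSigningKey(SECRET)
                    .build()
                    .parseClaimsJws(jwtFromHeader);   // throws if the signature or "exp" is invalid
            return jws.getBody().getExpiration() != null;
        } catch (JwtException e) {
            return false;                             // reject the request
        }
    }
}
```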
Since the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
So this solution works in a positive detection model, without false positives, thus not blocking legitimate users while keeping the bad guys at bay.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native, and others. The integration also needs a small check in the API server code to verify the JWT issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny.
CONCLUSION
So I recommend switching to pinning the certificates by public key, and if you want to protect against certificate pinning being bypassed, and against other threats, you should devise your own Mobile App Attestation solution or use one that is ready to plug and play.
In the end, the solution used to protect your mobile app and API server must be chosen according to the value of what you are trying to protect and the legal requirements for that type of data, such as the GDPR regulations in Europe.
DO YOU WANT TO GO THE EXTRA MILE?
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.

How safe are client SSL certificates in a mobile app?

I'd like to have secure communication between my Android/iOS app and my Internet-accessible backend service, so I'm investigating HTTPS/SSL.
If I create self-signed certificates, then put a client certificate in the app and cause the backend service to require that client certificate, is this truly secure?
Here's why I'm asking. It seems that the client certificate could be "hacked" by interrogating the .apk. The client certificate is just a string constant, right? That means anyone could use the client certificate to access my backend. Is the .apk (and iOS equivalent) sufficiently opaque to prevent the client certificate from being discovered?
Are you doing client-side authentication with certificates over SSL? Not that it really matters for this question. Any private keys you store in your app are accessible to an attacker. Each client should have its own certificate and key pair, to prevent a mass compromise. Your server should also enforce protections, ensuring a compromised client can't just request anything.
This is true for any authentication scheme. If you embed passwords, API keys, decryption keys, whatever. Anything on the device should be assumed to be accessible.
The added security from certificates comes in part from there being nothing to brute force. If you went the username/password route for each client, passwords could be guessed. The same goes for API keys (albeit they are longer and harder to guess). With certificates, it's an entirely different class of attacks, and a considerably harder problem.
But, most importantly, the backend service shouldn't allow the app to do anything it wouldn't normally do.
Now, dealing with certificates, you're going to have a whole host of other problems. You probably want to sign each client certificate with your self-signed CA cert. Managing that CA cert can be problematic, depending on your use case. Are you going to generate these client certs on the fly, or manually yourself? That is, is this an app that a million people can download, so that you need an automated system for generating them? Or is this a private/internal app for which you personally will handle generating certs?
The certificate is harmless. It is the private key that needs protection, and it is only as safe as the device itself, no safer. Distributing the certificate and private key with the application just means that anyone who has the application has the key, so it doesn't provide you any security whatsoever. I think you need some kind of post-install registration step.
Typically, client SSL certificates are stored in keystores (BKS formatted in the case of Android) and the keystore is included as a resource within your APK. Keystores are encrypted and protected with a password. So, that client certificate cannot be readily extracted from an APK, as it is stored in an encrypted form.
Now...what do you do about the password? Here is the crux of the matter and you have two alternatives.
If you want your application to be able to communicate with the server (so, to be able to access the certificate) without user interaction, you will need to embed the password into your application and then, yes, an attacker could reverse engineer your code to find it, grab the keystore, and then decrypt it to recover the certificate. You can apply techniques like obfuscating your code so that it is harder for an attacker to do so, but this will just slow someone down and not prevent it.
Your alternative is to prompt the user for a password every time your application communicates to the server and use that to decrypt the keystore (or ask when the app starts and cache the certificate for a certain amount of time). The advantage here is that if someone reverse engineers your APK, they will find the encrypted keystore and no password so your certificate is safe. The disadvantage is having the user provide the password.
Which approach is best? It completely depends on the sensitivity of the data you are concerned with and the level of risk you are willing to accept. Only you can answer that question.
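A minimal sketch of the mechanics behind both alternatives, assuming the encrypted keystore ships as a raw resource (the resource name, alias, and passwords are placeholders):

```java
import android.content.Context;
import java.io.InputStream;
import java.security.KeyStore;

public final class ClientCredential {

    public static KeyStore.PrivateKeyEntry load(Context ctx, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("BKS");
        try (InputStream in = ctx.getResources().openRawResource(R.raw.client_keystore)) {
            ks.load(in, password);   // fails if the password is wrong
        }
        return (KeyStore.PrivateKeyEntry) ks.getEntry("client",
                new KeyStore.PasswordProtection(password));
    }
}

// Alternative 1: char[] password = "embedded-and-obfuscated".toCharArray();
//   -> no user interaction, but recoverable by reverse engineering the APK.
// Alternative 2: char[] password = passwordTypedByTheUser;
//   -> nothing useful left in the APK, at the cost of prompting the user.
```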
Daniel Guillamot, here are some tricks I've come across:
1. Split the key with the server side: make the passphrase for the SSL key the result of string-in-app XOR string-fetched-from-web-service (see the sketch after this list).
2. Make the string-in-app be built by calling some app functions, instead of being a hard-coded string literal.
3. Deny tracing of the app while it's running, to avoid someone picking up the final passphrase when the private key is decrypted. Ref: http://books.google.no/books?id=2D50GNA1ULsC&lpg=PA294&ots=YPQQ7DLjBD&dq=The%20example%20just%20shown%20demonstrates%20how%20calls%20to%20ptrace%20can%20be%20hijacked&hl=no&pg=PA293#v=onepage&q&f=false
I'd love to hear more if anybody has other ideas.
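For the first trick, a minimal sketch of combining the in-app string with the part fetched from a web service (all values and lengths are illustrative):

```java
public final class SplitPassphrase {

    // "string-in-app", assembled by code rather than stored as one literal
    private static byte[] localPart() {
        StringBuilder sb = new StringBuilder();
        sb.append("k3").append("yP").append("4rt");   // built from fragments at run time
        return sb.toString().getBytes();
    }

    /** Combine the local part with the part fetched from the web service. */
    public static char[] passphrase(byte[] remotePart) {
        byte[] local = localPart();
        char[] out = new char[local.length];
        for (int i = 0; i < local.length; i++) {
            out[i] = (char) (local[i] ^ remotePart[i % remotePart.length]);
        }
        return out;
    }
}
```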
An APK can be accessed and copied, so putting anything in it won't help. Activation, and perhaps binding the certificate to the device after installation, would be necessary. Binding can be done, for example, by putting the IMEI of the device into one of the certificate extensions and passing the IMEI together with the certificate from your application (or, better, passing the IMEI after authenticating and establishing the secure channel).

Securing a web service so it can only be called by a specific Android application

We have a web service that should only be called by a specific Android app. What solutions are there for this problem?
The requirement is to not use authentication at all.
If it's only your client and your server, you can (and should) use SSL without purchasing anything. You control the server and the client, so each should only trust one certificate, the one belonging to the other, and you don't need CAs for this purpose.
Here's the high-level approach. Create a self-signed server SSL certificate and deploy it on your web server. You can use the keytool included with the Android SDK for this purpose. Then create a self-signed client certificate and deploy it within your application, in a custom keystore included in your application as a resource (keytool will generate this as well). Configure the server to require client-side SSL authentication and to only accept the client certificate you generated. Configure the client to use that client-side certificate to identify itself, and to only accept the one server-side certificate you installed on your server.
A step-by-step guide for this is a much longer answer than is warranted here. I would suggest doing it in stages, as there are resources on the web about how to deal with self-signed SSL certificates on Android, both server and client side. There is also a complete walk-through in my book, Application Security for the Android Platform, published by O'Reilly.
You'll normally store that certificate/private key in a keystore of some type (a KeyStore, if you're using Android), and that keystore will be encrypted. The encryption is based on a password, so you'll either need to (1) store that password in your client somewhere, or (2) ask the user for the password when they start your client app. Which you do depends on your use case. If (2) is acceptable, then you've protected your credential against reverse engineering, since it will be encrypted and the password will not be stored anywhere (but the user will need to type it in every time). If you do (1), then someone will be able to reverse engineer your client, get the password, get the keystore, decrypt the private key and certificate, and create another client that will be able to connect to the server.
There is nothing you can do to prevent this; you can make reverse engineering your code harder (by obfuscation, etc.) but you cannot make it impossible. You need to determine what risk you are trying to mitigate with these approaches and how much work is worth doing to mitigate it.
I guess this will work with proper authentication in place. The first post I stumbled upon was this one:
Securing communication from android to a web service
Hope it helps =)
If you're absolutely certain this web service will only ever need to be accessed by authorized applications/devices, go with client-side SSL certificates and restrict access at the server to only clients with authorized certs. This has the bonus feature of forcing SSL at all times, so you don't leak auth secrets over an open channel. Here's a quick guide for Apache, but you could use nginx too:
http://it.toolbox.com/blogs/securitymonkey/howto-securing-a-website-with-client-ssl-certificates-11500

Which is the safest way to include a key pair (public/private) in an APK

I'm developing an application for Android and I have to maintain secure communication with a server through a private/public key pair. What is the safest way to store the private key in my APK? Obviously I'm going to obfuscate the code, but I want more security. I have thought of the following option:
If I create a native shared library with the methods for signing the transaction information, the APK only has to contain the .so file, and this file is machine code, so decompiling it could be difficult, couldn't it?
any ideas?
Thanks
Store the key pair in a keystore and include the keystore as a resource in your APK. Android tends to prefer the BouncyCastle Key Store (BKS) format. Keystores are specifically designed for this purpose.
Note that you should protect the keystore with a password, and your application will need to know that password to access the keystore. So you're left with either asking the user for a password to access the keystore or including the password in your code (obfuscated, to make it harder for an attacker to reverse engineer). If someone is going to the trouble of reverse engineering your application to recover your encrypted keystore and the password needed to access it, including that password in a compiled native library will not present much of an additional hurdle.
However, you may not need to do this anyway. If your goal is to protect/encrypt the data in transit to/from the server, use SSL/TLS. If you're not doing client-side authentication, your server needs an SSL certificate but your client does not; the protocol takes care of generating the encryption keys for you in a safe manner. If you do want the server to authenticate the client (make it so your server only talks to your clients), you would need to install a client-side SSL certificate with your app ... this is the private key you're probably thinking about.
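For the common case where only the server is authenticated and its certificate is self-signed, a minimal sketch of trusting exactly that one certificate from the app (the resource name is a placeholder; with a CA-signed server certificate the platform defaults are enough):

```java
import android.content.Context;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public final class PinnedServerTrust {

    public static SSLSocketFactory socketFactory(Context ctx) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        Certificate serverCert;
        try (InputStream in = ctx.getResources().openRawResource(R.raw.server_cert)) {
            serverCert = cf.generateCertificate(in);
        }

        // An in-memory trust store containing only our server's certificate.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);
        trustStore.setCertificateEntry("server", serverCert);

        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);  // no client key, server trust only
        return sslContext.getSocketFactory();
    }
}
```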
I'll also point you to Application Security for the Android Platform. This book (disclaimer: I wrote it) has an entire chapter on how to design secure Android app-to-server communications, with code examples that illustrate how to implement the appropriate protections. You may want to give it a read.
First of all, in order to implement secure communication between your client application and a server, conceptually speaking you need only the public key of the server. That allows you to establish a one-way trust relationship with the server and a secure session in which the identity of the server is guaranteed.
While this method certainly does not provide two-way trust (the client is not identified to the server), this level of trust is not really required when establishing the communication channel in most applications.
If your requirements are to provide client authentication to the server using public/private keys, then things get more complicated, because if you put the key in the APK, no matter how much you obfuscate it (including embedding it in a native library), that will only slow down a dedicated nefarious user.
The only way to store the private key with the client is to encrypt it. But then you have a similar problem of where to store the decryption key. The easiest solution is to generate a public/private key pair for the user of the client application and ask the user to provide a symmetric encryption/decryption passphrase (which the user will always have to type in) to decrypt the private key each time they use the application.
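A minimal sketch of that last option, deriving an AES key from the user's passphrase and using it to decrypt the stored private key (the algorithm choices and parameter handling are my assumptions, simplified for illustration):

```java
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.spec.PKCS8EncodedKeySpec;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public final class PassphraseProtectedKey {

    public static PrivateKey decrypt(char[] passphrase, byte[] salt,
                                     byte[] iv, byte[] encryptedPkcs8) throws Exception {
        // Derive a 256-bit AES key from the user's passphrase.
        SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        byte[] keyBytes = skf.generateSecret(
                new PBEKeySpec(passphrase, salt, 100_000, 256)).getEncoded();

        // Decrypt the PKCS#8-encoded private key stored with the app's data.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(keyBytes, "AES"),
                new GCMParameterSpec(128, iv));
        byte[] pkcs8 = cipher.doFinal(encryptedPkcs8);

        return KeyFactory.getInstance("RSA").generatePrivate(new PKCS8EncodedKeySpec(pkcs8));
    }
}
```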
The alternative would be to use some kind of dedicated cryptographic hardware device, similar to a smart card, that stores the private key securely, but you still have the problem of authorizing your application to read the key from the device (not to mention the complication of interfacing with said device).
Now, the question you have to ask yourself is: "Who are you trying to prevent from reading the private key?" (of course, after answering the other question: "Do you really need a public/private key pair for the client?").
