Right now I'm adding IAB for the first time. I've read the documentation, downloaded the sample, and it seems to work.
However, setting things up isn't my problem. I want to understand the following two recommendations by Google, which are supposed to improve security:
Encrypt the public key
If an attacker decompiles my app, he can also remove my encryption, string-splitting or bit-shifting stuff.
The Developer Payload
Same thing here. I can actually do it the way Google recommends: I have the user IDs on my server, can put them into the request and compare them afterwards… But I think it's quite easy to remove this logic from the code once my app is decompiled.
I obfuscate my code with ProGuard, and I always decompile my app before I upload it to Google Play to check that it works and is set up correctly. That's why I say these two recommendations don't bring a big security benefit.
I also know how the private/public key system works. That's why I'd say it is impossible to make my app communicate with a "fake" server without decompiling it. If Google didn't use some kind of asymmetric encryption, I might understand why I have to check whether the response came from a fake server...
Can you help me understand that?
Cheers,
Stefan
Security is all about the tradeoff between the effort invested in hacking your app and the benefit gained from hacking it. If your app costs 99 cents and a hacker needs 3 hours to hack it, and he needs to hack every new version again and again, then it makes no sense for him to invest his time, although he technically can. Just implement enough security to make your app an unattractive target for hackers.
An insecurely stored public key allows attackers to replace it with their own public key easily. If your public key is replaced, your app will happily validate responses signed by the attacker's server. That is why you need to make finding and replacing your public key in the app more difficult.
Developer payload. It protects your app from replay attacks, where an attacker hands your app a valid signed response that has already been used for another purchase by another user in the past. For instance, suppose I bought an extension of your app in the past and stored the Google Play response in byte form. If your code cannot differentiate two valid responses from each other, then I can give this response to other users and they can reuse it as a purchase. That is why Google suggests adding a developer payload, which you verify when a valid response comes back. In a simple case this can be the user's e-mail address. In more complex cases, you need a server that generates a string for a user's purchase and stores it in a database; later, when the response comes back, you validate it against that generated string.
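In code, the client-side check can be as small as comparing the payload from the response with the string you generated for this user; a minimal sketch (expectedPayloadForCurrentUser() is a hypothetical call to your own server or database, not part of any Google API):

    // Minimal sketch, not the sample's actual code.
    public class PayloadCheck {

        // Hypothetical: fetch the string your server generated and stored for this user's purchase.
        static String expectedPayloadForCurrentUser() {
            return "payload-stored-on-your-server-for-this-user";
        }

        static boolean verifyDeveloperPayload(String developerPayloadFromResponse) {
            // If the payload does not match the one generated for this user,
            // the response was replayed from someone else's purchase.
            return expectedPayloadForCurrentUser().equals(developerPayloadFromResponse);
        }
    }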
I hope this gives you a better understanding of why this is needed.
Related
I'm developing an app that will use text messages to verify a user's telephone number, the usual "enter code" routine.
After reading a little bit it seems like a bad idea to store the private keys for whatever 3rd party I'll use in the app (twilio, nexmo, etc). Somebody could reverse engineer these from my binary and use them in their app.
However, having these on the server doesn't help either, somebody could just reverse engineer my server's endpoint that I use to send text messages and use that instead.
E.g. I could reverse engineer WhatsApp and get the private keys or API endpoints that they use for telephone number verification and just use that in my app, saving me thousands of dollars.
Any ideas on how to protect myself against such an attack?
Hiding API Keys on the server
However, having these on the server doesn't help either, somebody could just reverse engineer my server's endpoint that I use to send text messages and use that instead.
Yes, it does help a lot.
If somebody gets access to the keys to your web service, they can only do what your service allows them to do. It is a very good idea to have a web service that encapsulates all the 3rd-party keys and APIs - it's way more secure.
That way nobody ever gets direct access to your sensitive keys, which would allow them to do everything.
For example, if the 3rd-party API allows deleting, your server's wrapper API simply won't offer it.
Moreover, you can add any extra logic or alerts for suspicious behavior.
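As a minimal sketch of such a wrapper, using the JDK's built-in HttpServer (the endpoint path, the environment variable and the forwarding logic are illustrative placeholders, not a real 3rd-party integration):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // Sketch: the app talks only to this wrapper; the 3rd-party key never leaves the server.
    public class WrapperApi {
        private static final String THIRD_PARTY_KEY = System.getenv("THIRD_PARTY_KEY");

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            // Only an "upload" operation is exposed; there is deliberately no delete route,
            // even though the real 3rd-party API may support deleting.
            server.createContext("/videos/upload", exchange -> {
                // ... authenticate the caller, validate the input, then call the
                //     3rd-party API from here using THIRD_PARTY_KEY ...
                byte[] body = "accepted".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }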
Hiding API Keys in the app
If somebody sets their mind to it, there's no way you can prevent your keys from being reverse engineered out of your app. You can only make it harder. Computer security should never be about "how hard/complicated it is to do", but in this case we have no choice.
Ok, so you have to hardcode the API keys into your source files. They can easily be reverse-engineered.
You can obfuscate your keys so that they can't be read directly. The result is that they'll be scattered across the compiled file rather than sitting conveniently in one place.
On iOS you can use something like this.
On Android you can use DexGuard, or any other way to obfuscate a string.
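A tiny hand-rolled illustration of the idea - the value never appears as a single literal, so a plain string search over the decompiled code won't find it (the pieces below are dummy data, not a real key):

    // Sketch: the value is split into pieces so no single string constant holds it.
    public class ObfuscatedKey {
        private static final String[] PIECES = { "AIzaSy", "EXAMPLE", "NotARealKey" };

        static String apiKey() {
            StringBuilder sb = new StringBuilder();
            for (String piece : PIECES) {
                sb.append(piece);
            }
            return sb.toString();
        }
    }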
Encrypting the keys
Another layer that makes things harder for hackers is to encrypt the keys.
Here's an example for iOS.
You can do the same for Android.
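A stripped-down sketch of decrypting an embedded key at runtime with standard javax.crypto (the Base64 blobs and the way the AES key is derived are placeholders; in practice the key material would itself be split or computed at runtime, and this still only raises the bar):

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    // Sketch: decrypt an embedded, encrypted API key at runtime.
    public class KeyDecryptor {
        private static final String ENCRYPTED_KEY_B64 = "...";  // placeholder ciphertext
        private static final String IV_B64 = "...";             // placeholder IV

        static byte[] aesKeyBytes() {
            // e.g. assembled from pieces scattered across the code
            return new byte[16];
        }

        static String decryptApiKey() throws Exception {
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE,
                    new SecretKeySpec(aesKeyBytes(), "AES"),
                    new IvParameterSpec(Base64.getDecoder().decode(IV_B64)));
            byte[] plain = cipher.doFinal(Base64.getDecoder().decode(ENCRYPTED_KEY_B64));
            return new String(plain, StandardCharsets.UTF_8);
        }
    }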
Perfect Scenario
Ok, so let's say you have a 3rd party API for video management.
The hacker wants to delete all videos on the server, because the 3rd-party API allows that.
First he has to piece together all the scattered strings in the file. If he manages to do that, he has to find a way to decrypt them.
Even if he manages to decrypt them, that only gives him the API keys to your server - and your server only allows uploading videos, not deleting them.
I think Firebase Functions can help us in hiding the third-party API keys.
The proposed solution:
Store the API keys in Firebase as environment variables.
Make a Firebase HTTPS function that answers only to authenticated users. If an authenticated user requests it, the secret API key from the Firebase environment variable is returned as the response.
The Android app does an anonymous login into Firebase the first time and obtains a token.
This token is used as the Authorization header when requesting the Firebase HTTPS function. The function URL would be something like https://us-central1-{your_project_name}.cloudfunctions.net/{function_name}
I have discussed the approach in detail in this blog and made a sample project.
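For reference, the client side of that flow would look roughly like this (a sketch, not the blog's exact code; the function URL is the placeholder pattern from the list above):

    import com.google.firebase.auth.FirebaseAuth;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch of the client side: anonymous sign-in, then calling the HTTPS function
    // with the ID token. FUNCTION_URL is a placeholder for your own function's URL.
    public class SecretFetcher {
        private static final String FUNCTION_URL =
                "https://us-central1-{your_project_name}.cloudfunctions.net/{function_name}";

        void fetchSecret() {
            FirebaseAuth.getInstance().signInAnonymously()
                    .addOnSuccessListener(authResult ->
                            authResult.getUser().getIdToken(false)
                                    .addOnSuccessListener(tokenResult ->
                                            callFunction(tokenResult.getToken())));
        }

        private void callFunction(String idToken) {
            // Network work must not run on the main thread; a plain thread is used here for brevity.
            new Thread(() -> {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(FUNCTION_URL).openConnection();
                    conn.setRequestProperty("Authorization", "Bearer " + idToken);
                    try (BufferedReader reader = new BufferedReader(
                            new InputStreamReader(conn.getInputStream()))) {
                        String apiKey = reader.readLine(); // the function returns the secret
                        // use apiKey...
                    }
                } catch (Exception e) {
                    // handle/log the error
                }
            }).start();
        }
    }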
Our app needs to ship with several usernames, passwords, and tokens for accessing other web based services. I have done quite a bit of googling on this but cannot figure out how to ship the app with the credentials stored securely. Any advice on how to achieve this would be appreciated.
At the end of the day, what you're packing into an .apk file is Java bytecode; if you Google "reverse engineer java byte code" you'll find tools and tutorials on how to extract the information from that file. I can think of a few good practices that will help you make your app more secure, depending on how far you're willing to go:
ProGuard: that one is a given, use ProGuard!
You can use little tricks to make things more complicated, e.g. store all those Strings in an encrypted format and decrypt them at run time.
As a nice add-on to the last point: in your Developer Console you'll find "YOUR LICENCE KEY FOR THIS APPLICATION". You can use that key to encrypt the information at development time, and at runtime fetch the value from Google Play and use it to decrypt. More info about it HERE
This license key can also be used to verify app authenticity.
You could also build those keys into a native library. Strings stored in compiled C++ code are way more complicated to extract than from bytecode.
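The Java side of that native-library idea is just a declaration like the following ("keys" and getApiKey() are made-up names; the actual string lives in the C++ implementation, which is not shown):

    // Sketch: the key is compiled into a native library and only exposed to Java here.
    public class NativeKeys {
        static {
            System.loadLibrary("keys"); // loads libkeys.so, built from your C++ sources
        }

        // Implemented in C++ (JNI); the string constant never appears in the dex bytecode.
        public static native String getApiKey();
    }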
All in all, this link might be a good read for you: http://developer.android.com/training/articles/security-tips.html
Since it has to be decrypted for the application to use, there is no way to do this completely securely. Even if you have users download the information after they install, the app will still need some way to get to that data.
The most secure way, if you don't want to trust the user, is to have them send the request to your server, and then have your server use its own credentials to contact the website of interest and return the data to the user.
This way the data stays protected in one place.
Otherwise someone can get to the credentials if they try hard enough.
I'm on a team that has a website created with Django. We're starting on making an Android app. We are somewhat stuck on a couple questions related to how to properly handle authentication. I'll address one of them here:
We are running on the assumption that a user is going to expect to not have to log into a mobile app more than once ever, under normal circumstances. For instance, on my Android device, I haven't logged into my Google app for maybe a year, or my Trello app for months.
If we use Django's usual session system, and the user stops using the app for a sufficient amount of time, the user will be logged out. We could have an API key that the app uses to authenticate with the website, which can be used to log in instead of a password, and then establish a new session key. However, it sounds like you want to renew API keys periodically as well, which seems to present the same problem all over again.
Since this is a security issue, we don't want to rely on instincts or plausible solutions. We know it's an esoteric field. We want to make sure we know what the right way to go about this is.
So, here are some possible ways we've considered, with possible drawbacks:
Make a session renewal request once a day in the background.
This requires a background service, which may put off users by confusing them, or using more power than they'd like. A lot of our users may be in areas of the world with low access to Internet connectivity and not want to have background processes making requests for no reason.
Use an API key, expire it, but still allow renewal after an arbitrary amount of time.
If a user comes back after 5 months, allow them to use the old API key, one time, only to generate a new API key. This is similar to password expiration policies. However, if an API key operates on the same assumptions as a session key, this seems to not follow the proper policy.
Use Google's Cloud Messaging System to periodically send them new keys
Let Google deal with the hard stuff. Admittedly somewhat out of left field.
Implement an Oauth2 provider
Seems like overkill, but maybe? It seems that the refresh_token/access_token system is the sort of thing I'm looking for.
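(For reference, that refresh_token/access_token exchange is just a POST to the provider's token endpoint; a rough sketch with placeholder endpoint and client values:)

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Sketch of a standard OAuth2 refresh: trade a long-lived refresh_token for a new
    // short-lived access_token. The URL, client id and client secret are placeholders.
    public class TokenRefresher {
        static void refresh(String refreshToken) throws Exception {
            String body = "grant_type=refresh_token"
                    + "&refresh_token=" + refreshToken
                    + "&client_id=YOUR_CLIENT_ID"
                    + "&client_secret=YOUR_CLIENT_SECRET";

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/o/token/").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }
            // The JSON response contains a fresh access_token (and possibly a new refresh_token).
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }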
One factor I will put as a footnote, because I don't want to distract from the main point with an independent question, but which may be more interdependent than I assume: For V1 of our app, we will rely a lot on Web Views, but still have some direct API calls. We are faced with the question of how to coordinate authentication between the API calls and the Web View. Initially we thought we'd have to coordinate authentication between an API key for the API calls, and a session for the web views, but from reading other Stackoverflow responses, it sounds like I can just share session cookies between the two, though I haven't confirmed this yet.
Thanks a lot.
I think you can use the method described in the Django docs:
Implement your view and retrieve your user details.
Manually specify the authentication backend.
Log the user in using login(request, user).
The code looks like:
    from django.contrib.auth import login

    def your_view(request):
        # retrieve your user
        ...
        # you can write your own backend to do lots of things
        user.backend = 'django.contrib.auth.backends.ModelBackend'
        login(request, user)
Which steps should I follow to reduce the possibility of illegally activating and using in-app features in an Android application?
It may be impossible to beat this entirely; however, there should at least be some basic steps to filter out the kids...
If you don't do the verification on a server, the criminals won't even bother hacking your application. They'll reroute all the requests to their own server and feed you bogus receipts that they can then self-verify. I'm sure I have had content stolen, but I also know my server verification has stopped many attempts (from the logs). One thing to keep in mind is that you want some kind of authentication on the communication between your application and your server.
I also think there is some value in obfuscating your code to slow people down but that is more to stop code theft than prevention of IAP theft.
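A cut-down sketch of what that server-side check can look like for Play purchases - essentially the signature verification from Google's sample, moved onto your own server (the Base64 public key is a placeholder):

    import java.security.KeyFactory;
    import java.security.PublicKey;
    import java.security.Signature;
    import java.security.spec.X509EncodedKeySpec;
    import java.util.Base64;

    // Sketch: verify the signed purchase data your app forwards, on YOUR server,
    // using YOUR Play public key.
    public class ReceiptVerifier {
        private static final String BASE64_PUBLIC_KEY = "..."; // placeholder

        static boolean verify(String signedData, String signatureBase64) throws Exception {
            byte[] keyBytes = Base64.getDecoder().decode(BASE64_PUBLIC_KEY);
            PublicKey key = KeyFactory.getInstance("RSA")
                    .generatePublic(new X509EncodedKeySpec(keyBytes));

            Signature sig = Signature.getInstance("SHA1withRSA");
            sig.initVerify(key);
            sig.update(signedData.getBytes("UTF-8"));
            return sig.verify(Base64.getDecoder().decode(signatureBase64));
        }
    }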
You probably want to at least try obfuscating your public key so that an attacker can't simply decompile your app and look for static strings.
The Android developer website has some thoughts on this:
Protect your Google Play public key
To keep your public key safe from malicious users and hackers, do not embed it in any code as a literal string. Instead, construct the string at runtime from pieces or use bit manipulation (for example, XOR with some other string) to hide the actual key. The key itself is not secret information, but you do not want to make it easy for a hacker or malicious user to replace the public key with another key.
http://developer.android.com/google/play/billing/billing_best_practices.html
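A tiny illustration of the XOR suggestion from that quote - the literal stored in the binary is not the actual key, and the real Base64 key only exists in memory at runtime (STORED and MASK below are dummy values, not a real key):

    // Sketch: the key is stored XOR-ed with a mask, so the literal in the binary
    // is not the actual key.
    public class PlayKey {
        private static final byte[] STORED = { 0x1b, 0x2a, 0x3c, 0x4d };
        private static final byte[] MASK   = { 0x55, 0x66, 0x77, 0x18 };

        static String base64PublicKey() {
            byte[] out = new byte[STORED.length];
            for (int i = 0; i < STORED.length; i++) {
                out[i] = (byte) (STORED[i] ^ MASK[i]);
            }
            return new String(out, java.nio.charset.StandardCharsets.US_ASCII);
        }
    }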
I'm pretty stuck on this. I have a username and password as strings within my application that are used for Javamail; however, I of course don't want to leave these as plain text and run the risk of having my application decompiled and combed through.
A few people had suggested that I look into asymmetric encryption (using BouncyCastle, possibly); however, I'm still unsure how that would help.
I don't have much, if any, experience in cryptography so bear with me here: If I'm using a public/private key pair and I want my application to be able to read the string - then the decrypting key would be the "public key" but that doesn't really make sense to me because it completely defeats the purpose of the encryption. If I have the encrypting key as the "public key" then all my application could do is encrypt the string - not decrypt it.
So my questions here are:
1) Is my reasoning flawed on this?
2) How do I solve this dilemma?
If you want to store them in the app, the best you could do is obfuscate them. Encryption is one way of doing this, but it will only stop the casual 'hacker'. If you have encrypted strings as resources (or class fields), then in order to decrypt them you will need the key to be in the app. If someone decompiled your app, it would be fairly easy to find the key too. You could make this a bit harder by generating the key dynamically, from different places in your code, but, as mentioned above, the attacker could just find the place where the secrets are used and dump the already decrypted strings. There is really no easy way out of this.
You could build a simple web service that requires authentication with a Google account (which pretty much every Android user has on their device), and have it send the mails on behalf of the user (if that fits your requirements). That way, you would at least know who is sending the mails and could block them if they try to use it for spam, go over quota, etc. Of course, they could get a new Google account fairly easily, but if your service is purposefully targeted you will have bigger problems than that. Another downside is that your app will require permissions to access the accounts on the device, which some users might see as a privacy concern.
Rephrasing your question, you want your application to have access to sensitive information which your user should not be able to access. The short answer: find another project to work on because this one is not going to secure your secret. Your best alternative is to provide a proxy service which is on a machine you are certain is secure; let it hold your secrets and let your application contact the proxy for everything it needs to do.
You expressed a primary concern about decompilation being used to discover your secret. Let's say encryption were viable here (it's not). If I put on my black hat, I would decompile, find the API call which receives the decrypted data in its parameters, and either add code to output this data or just set a breakpoint there.