Now, the question I have posted sounds quite vague, as no development team should release an application into production without Firebase security rules, but what I really wish to know is how a malicious user could potentially access the data on a Firebase project if App Check is in place. Let's say I have a simple application that lets users jot down quick notes, which are saved to Cloud Firestore. Every user has to be authenticated, and all the notes created by a user live under a collection keyed by their email or uid.
If I am releasing this application only on Android and iOS, and App Check is securely in place, the only way to read, write, or modify data in Firestore would be through a genuine version of the app released on the App Store or Play Store. That means an unauthorized user or hacker cannot read or modify any data they are not supposed to have access to unless they either reverse engineer the Android or iOS app or inject malicious code that lets them do so, which I cannot imagine would be an easy task. While I will implement App Check and Firebase security rules before releasing an app, how do I account for this possibility, i.e., the app being reverse-engineered or hacked? And how likely is it? Because App Check also states that only requests that "originate from your authentic app" will be allowed, which I assume means an application that has not been tampered with.
While App Check adds an important layer of protection against abuse to your applications, it does not replace Firebase's server-side security rules.
Using App Check drastically reduces the chances of abuse from unauthorized code, but as with any security mechanism that runs a client-side check, there is always a chance that a malicious user can bypass it. From the documentation on How strong is the security provided by App Check?:
App Check relies on the strength of its attestation providers to determine app or device authenticity. It prevents some, but not all, abuse vectors directed towards your backends. Using App Check does not guarantee the elimination of all abuse, but by integrating with App Check, you are taking an important step towards abuse protection for your backend resources.
Security rules, on the other hand, are evaluated only on the server and cannot be bypassed by anyone. With them you can tightly control exactly what data any specific user can access.
By combining App Check and security rules, you can reduce broad abuse quickly, while also retaining fine-grained control over who can access what data.
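To make that concrete for the notes example in the question, here is a minimal sketch of what such Firestore security rules could look like, assuming each user's notes are stored under /users/{uid}/notes/{noteId} (the question's actual structure may differ):

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{uid}/notes/{noteId} {
      // Only the signed-in owner can read or write their own notes.
      allow read, write: if request.auth != null && request.auth.uid == uid;
    }
  }
}

With rules like these, even a request that passes App Check can only touch the documents owned by the authenticated user who made it.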
We had a good discussion about the topic here too: What is the purpose of Firebase AppCheck?
Related
I am writing an Android application that will not be available through the Google Play Store. I am looking into how I can verify that any user of the application is indeed a verified user.
I would like to use a server for this process, which the application already uses to send and receive data. My idea was to set up something like a challenge that only verified clients would be able to pass, so that anyone using a fake app would not be able to bypass it.
Is there any standard approach to this problem? I have searched a bit but did not find anything covering this entirely. Please keep in mind that I am aware that, since the application runs on an Android phone, a device out of my reach, there will probably always be ways to bypass the challenges. I am looking to see what the majority is doing in these cases.
There are two probable issues here. The first is user authentication (authn) and authorization (authz); the second is verifying that the client app itself is authentic.
For user authn/authz, I would use some form of OAuth2 with OpenID Connect. The end result is that you are authorizing your client app to access your end resources on behalf of the user. There are open-source and free commercial services available to get you started.
More problematic is authentication of the app itself. API keys are the standard approach here, but these are static secrets which don't do much good if the app is tampered with or the key is observed in the communications channel. No matter how hard you try to hide or compute the secret as needed, if your endpoint is valuable enough, someone will do the work necessary to extract and abuse the secret and then your backend.
You are on a good track thinking about some form of challenge-response protocol. Captchas are the canonical approach here, but they are quite annoying to users on a mobile app and are not always very effective. I believe (and full disclosure, so does my company) that attesting the app's authenticity through a cryptographically secure challenge is a solid strategy. The attestation service challenges the app and analyzes its response. The challenge evaluates whether the app's code has been tampered with and assesses the state of the runtime (is the app rooted? Is it running in a debugger? Are frameworks like Frida or Xposed present? etc.). The app is issued a short-lived token: properly signed if the attestation passes, invalid otherwise. There is no secret in the app, and the app does not make the authentication decision; it just passes the token on to your backend, which checks the token's lifetime and signature to determine the app's authenticity. If there is no token, or the token is invalid, you know you are dealing with a bot or a tampered app.
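As a rough illustration of the backend side of that token check, here is a minimal sketch using the open-source java-jwt library (com.auth0:java-jwt); the shared secret, the issuer name, and the class around it are illustrative assumptions rather than the API of any particular attestation product:

import com.auth0.jwt.JWT;
import com.auth0.jwt.JWTVerifier;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.exceptions.JWTVerificationException;

public class AttestationTokenCheck {

    private final JWTVerifier verifier;

    public AttestationTokenCheck(String sharedSecret) {
        // The verifier checks the HMAC signature and rejects expired tokens.
        this.verifier = JWT.require(Algorithm.HMAC256(sharedSecret))
                .withIssuer("attestation-service")   // assumed issuer name
                .build();
    }

    public boolean isRequestFromAuthenticApp(String tokenFromHeader) {
        if (tokenFromHeader == null) {
            return false; // no token: treat the caller as a bot or tampered app
        }
        try {
            verifier.verify(tokenFromHeader);
            return true;
        } catch (JWTVerificationException e) {
            return false; // bad signature or expired token
        }
    }
}

The verify() call checks the signature and rejects tokens whose exp claim has passed, so the backend never has to trust anything the app says about itself.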
For background on user and app authenticity, check out a three-part blog post, starting with Mobile API Security Techniques, or, if you prefer video, check out A Tour of Mobile API Underprotection. I also encourage you to check out approov.io for how this can be implemented as a service.
The title doesn't really indicate what I mean:
I am searching for a secure way to save user data (a point system for a game; under no circumstances should the user be able to change their number of points). I stumbled across Firebase, which seems pretty nice and easy.
But:
If I give the app the rights to write the user's new points directly to the database, it is pretty insecure, right? I mean, someone could decompile the app and get the keys from Firebase so that anyone could write to the database, or am I wrong?
Also, what would be the best way to save those new points into a Firebase Realtime Database?
Edit: I am already securing my app with ProGuard, but I guess that just makes it more difficult for users to get the key.
The Firebase configuration data in your app is not a security concern. It is simply information that your app needs to find its Firebase project on the servers. See Is it safe to expose Firebase apiKey to the public?.
To properly secure data you write security rules, which are evaluated on the server. With these you ensure that users can only read the data you want them to and that only authorized users can make valid changes.
In cases where security rules become more complex than is feasible, you can consider proxying the reads/writes through Cloud Functions for Firebase. With Cloud Functions your code runs on Google's servers, so you have to worry less about a user modifying the code for malicious purposes.
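As a rough sketch of how those two ideas combine for the points example, assuming scores live under /scores/{uid} in the Realtime Database (the path is an assumption), rules like the following let a signed-in user read only their own score while blocking all client writes; a Cloud Function using the Admin SDK, which bypasses these rules, then becomes the only way points are updated:

{
  "rules": {
    "scores": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": false
      }
    }
  }
}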
It's secure if you use cloud code. That way everything goes through the server to be saved, and a user has no way to change that unless they have access to your cloud code.
I have 'secured' the communication between my Android application and a TLS server providing a financial transaction service, currently in development.
The security credentials are stored in a BKS keystore included in the Android APK. The password to the keystore is visible in plain text in the application source:
keyStore.load(is, "passwd".toCharArray());
I am concerned that if someone were to reverse engineer the app, they would be able to impersonate another user and compromise the security of the service.
I was wondering whether there is a fault in my implementation, if anyone else has this concern, and what the best method of securing against this possibility is.
Whenever you store security data on the client, it can be compromised by reverse engineering. You may try to obscure it in the code, but a determined hacker will figure it out anyway. So the only way to make it more secure is not to have the password openly in the code. Maybe you can just ask the user for a PIN code at the start of the application and use it to decrypt the password?
Are the credentials stored in your app unique per user, i.e., does every user get their own APK with unique credentials? If you only have one APK with the same credentials, then this is as good as no security. Even worse, it gives a false feeling of security.
You (or your employer) should really hire a security expert to design your system from a security point of view.
Here's what I'd do:
The app ships without security credentials.
Security credentials are generated on the server for every user.
Every user gets a secret activation code, which is generated in a secure environment and delivered via an alternative channel, preferably snail mail. Activation codes are time-limited and can only be used once.
On first use, the user types the activation code into the app, which enables a one-time download of the credentials over a secure (HTTPS) channel.
The user provides a password that is used to encrypt the credentials while they are stored on the device (see the sketch after this list).
Every time the app is used, the user must provide this password. If the app is not used for some time, the app must time out the session and ask for the password again when the user wants access.
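A minimal sketch of those last two steps, assuming the standard javax.crypto APIs (PBKDF2WithHmacSHA256 may not be available on older Android versions, where the SHA-1 variant would be used instead; the iteration count and storage layout are illustrative choices, not the outcome of a security review):

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class CredentialStore {

    // Encrypts the downloaded credentials with a key derived from the user's password.
    public static byte[] encrypt(char[] password, byte[] credentials) throws Exception {
        SecureRandom random = new SecureRandom();
        byte[] salt = new byte[16];
        byte[] iv = new byte[12];
        random.nextBytes(salt);
        random.nextBytes(iv);

        // Derive a 256-bit AES key from the password (iteration count is illustrative).
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        byte[] keyBytes = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        // Encrypt with AES-GCM so tampering with the stored blob is also detected.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(credentials);

        // Store salt and IV alongside the ciphertext; neither is secret.
        byte[] out = new byte[salt.length + iv.length + ciphertext.length];
        System.arraycopy(salt, 0, out, 0, salt.length);
        System.arraycopy(iv, 0, out, salt.length, iv.length);
        System.arraycopy(ciphertext, 0, out, salt.length + iv.length, ciphertext.length);
        return out;
    }
}

Decryption would re-derive the key from the stored salt and the password the user types in; a wrong password simply fails the GCM authentication check.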
But don't take my word for it. You still need a security expert if financial transactions are involved.
I believe that Diffie-Hellman key exchange is what I was looking for. I'd rather not re-implement my own version of DH through a complicated process that involves the user.
I currently program for a payment processing company.
There is a set of rules and regulations for a transaction application, or a POS app (point-of-sale application).
The rules are published online as the PCI standards; a certain level of security has to be in place, or it can mean a lawsuit from Visa, Inc. or many other companies.
Regarding your question: it doesn't meet PCI compliance, as this is a security issue.
Please read up on PCI compliance so that there is a complete understanding of the security involved; it's not good to compromise cardholder data.
:)
I know very little about security or servers, but am making an Android app that allows users to purchase an in-app subscription. As recommended, I want to use the Google Play Developer API and store the necessary data on my own server. However, I can't think of a way to do this without having a line in my code like
if (userIsSubscribed) {
    // give access to purchased data
}
A hacker could obviously go in and just flip that to if(true). What should I do instead?
Obfuscate your app code as a minimum. Also, do the subscription check on the server before you send the content. That is one of the reasons they provide a Web API.
Basically, anything the user (and potential cracker) has access to (i.e., your app) cannot be trusted. Things they don't have direct access to (i.e., your content server) can be trusted a bit more and it is a good idea to move all sensitive operations and/or data there, where possible.
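As a rough sketch of that server-side check, here is a minimal example using the Google Play Developer API (androidpublisher v3) Java client; how the AndroidPublisher instance is built and authenticated with a service account is omitted, and the class around the call is an assumption:

import com.google.api.services.androidpublisher.AndroidPublisher;
import com.google.api.services.androidpublisher.model.SubscriptionPurchase;

public class EntitlementCheck {

    private final AndroidPublisher publisher; // built elsewhere with service-account credentials

    public EntitlementCheck(AndroidPublisher publisher) {
        this.publisher = publisher;
    }

    // Ask Google whether this purchase token still maps to an active subscription.
    // Only if this returns true does the server send the purchased content.
    public boolean isSubscribed(String packageName, String subscriptionId, String purchaseToken) {
        try {
            SubscriptionPurchase purchase = publisher.purchases()
                    .subscriptions()
                    .get(packageName, subscriptionId, purchaseToken)
                    .execute();
            Long expiry = purchase.getExpiryTimeMillis();
            return expiry != null && expiry > System.currentTimeMillis();
        } catch (Exception e) {
            // Any failure means the subscription cannot be proven, so deny access.
            return false;
        }
    }
}

The important part is that both the decision and the content live on the server, so flipping a boolean in the decompiled app changes nothing the server relies on.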