Storing user profile images, privacy policy question - android

I am about to release an Android app that requires users to upload photos. While developing it, I used Cloudinary. Can I still use it in the release version, or do I need to create my own storage? If I can (and it feels to me like I can), what should I add to the privacy policy about it?

Firstly, reviewing the privacy policy provided by Cloudinary, I don't think users' data is at risk. They've mentioned that details supplied to their database won't be shared with anyone else, although they are legally obligated to provide information to the authorities on demand (as are many cloud hosting corporations). However, you must take the following point into consideration:
The content you upload to the Service, whether from your own device or from a cloud-based hosting service, including any data, text, graphic, audio and audio-visual files, may include personally identifiable information. The content that you upload and designate as public, will be accessible to others.
Privacy Policy
Based on further research, when an image is uploaded, Cloudinary generates a random image ID, which results in a randomly generated URL (I am assuming this will go directly into your database). That URL is publicly accessible, but guessing a random URL is a task in itself (so you should be safe). The reason the URL is public is partly to allow sharing the images with another application/platform by providing the URL... more info
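To get a feel for why guessing such a URL is impractical, here is a rough sketch of the search space for a random alphanumeric ID. The 20-character length and the alphabet are assumptions for illustration; Cloudinary's actual ID scheme is not documented here.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class RandomIdSpace {
    // Assumed alphabet and length; the real scheme may differ.
    private static final String ALPHABET =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    private static final int ID_LENGTH = 20;

    // Generate an ID similar in shape to a random public image ID.
    static String randomId() {
        SecureRandom rng = new SecureRandom();
        StringBuilder sb = new StringBuilder(ID_LENGTH);
        for (int i = 0; i < ID_LENGTH; i++) {
            sb.append(ALPHABET.charAt(rng.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    // Number of possible IDs: 62^20, roughly 7 * 10^35,
    // far beyond any realistic brute-force attempt.
    static BigInteger searchSpace() {
        return BigInteger.valueOf(62).pow(ID_LENGTH);
    }
}
```

With a space that large, an attacker enumerating URLs would need astronomically many requests per valid hit, which any rate limiting would catch long before.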
Now, coming to the next part of the question: using users' information.
Firstly, you'll need to create your own Privacy Policy (Guidance) in which you'll mention all these details. The privacy policy should cover at least the following points:
What information do you collect - This includes both information entered by the user, e.g. phone number, name, etc., and information collected automatically, e.g. HTTP logs, data usage or anything gathered in the background.
How do you use the information - For example, an email address might be used for marketing purposes or for resetting forgotten passwords.
What information do you share - Are you sharing information with third parties (for marketing, surveys or any other purpose) or with other users?
Therefore, the privacy policy is a major subject which I would recommend you learn in depth, as it's
a statement or a legal document (in privacy law) that discloses some or all of the ways a party gathers, uses, discloses, and manages a customer or client's data. It fulfills a legal requirement to protect a customer or client's privacy... read more
Having the user agree to the privacy policy puts you in a safe place, as they're agreeing to your terms of how their data will be used, as long as your app's functionality stays within the policy.

You need to be aware of the requirements of data processing under the GDPR. In essence you need to inform your customer of what information you are going to process, why you are processing it, where you store it, who you might share it with and how long you will retain it. This needs presenting in a notice at the point the data is captured. You must also inform your data subject of their qualified rights under the GDPR, how to contact your data officer and how to make a formal complaint to the Data Commissioner.
This can be simpler than it sounds, but I'd recommend getting an expert to cast an eye over your notices and whatever processes you put in place for the exercising of rights etc. Documentation is key; the ICO will want to see that everything has been considered and documented. Even if you choose to take no action on a point, as long as that decision is documented with justifiable reason, you are broadly compliant.
These guys are good and can certainly help you. A worthwhile investment, as falling foul of GDPR is likely to be expensive!
Steve.

Does Firebase App Check discard the need for implementing Security Rules?

Now, the question I have posted sounds quite vague, as no development team should release an application into production without Firebase security rules, but what I really wish to know is how a malicious user could potentially access the data on a Firebase project if App Check is in place. Let's say I have a simple application that lets users jot down quick notes (which are saved to Firebase Firestore). Every user has to be authenticated, and all the notes created by a user lie under a collection keyed by their email or uid.
If I am releasing this application only on Android and iOS and App Check is securely in place, the only way to read, write or modify data on Firestore would be through a genuine version of the app released on the App Store or Play Store. That means an unauthorized user/hacker cannot read or modify any data (they are not supposed to have access to) unless they either reverse engineer the Android or iOS app or inject malicious code that lets them do so, which I cannot imagine would be an easy task. While I will implement App Check and Firebase security rules before releasing the app, how do I account for this possibility, i.e. the app being reverse-engineered or hacked? And how likely is it? App Check also states that only requests that "originate from your authentic app" will be allowed, which I assume means an application that has not been tampered with.
While App Check adds an important layer of protection against abuse to your applications, it does not replace Firebase's server-side security rules.
Using App Check drastically reduces the chances of abuse from unauthorized code, but as with any security mechanism that runs a client-side check, there is always a chance that a malicious user can bypass it. From the documentation on How strong is the security provided by App Check?:
App Check relies on the strength of its attestation providers to determine app or device authenticity. It prevents some, but not all, abuse vectors directed towards your backends. Using App Check does not guarantee the elimination of all abuse, but by integrating with App Check, you are taking an important step towards abuse protection for your backend resources.
Security rules on the other hand are evaluated on the server only, and cannot be bypassed by anyone. You can tightly control exactly what data any specific user can access.
By combining App Check and security rules, you can reduce broad abuse quickly, while also retaining fine-grained control over who can access what data.
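As a sketch, security rules for the notes example in the question might look like the following. The collection layout under users/{uid}/notes is an assumption; adapt the paths to your actual data model.

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Each user may only read and write notes under their own uid,
    // regardless of what client the request came from.
    match /users/{uid}/notes/{noteId} {
      allow read, write: if request.auth != null && request.auth.uid == uid;
    }
  }
}
```

Because these rules run on Firebase's servers, they hold even against a fully reverse-engineered client.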
We had a good discussion about the topic here too: What is the purpose of Firebase AppCheck?

Minimising the value of database

I am thinking of creating an app that will contain personal user information for a membership scheme: the basics - name, DOB, a user ID and a valid-from/expiry date.
I want to reduce the chance of hacking and having the data stolen. So I was thinking that I add the user information to the database; when the user logs in for the first time, the app connects to the database and downloads the user's information to their phone, and all personal data is removed from the database. The ID is then used by the app to display the valid-from/expiry dates etc.
I am unfamiliar with iOS/Android app development. Can this work? Do I store the data in a separate file and download it to a user area in the app package, or do I download a database to the phone? And what about when I need to update the app?
This is not good system design
In reality, if a system is designed properly, with a security-focussed mindset, and deployed in a properly designed environment, this should not be a concern that warrants causing end users such potential issues.
In fact, user data would be considerably more secure on a properly designed, controlled system than on a user's device; How many people do you know that don't have a passcode on their phone, or have it set as their date of birth? I know a whole bunch of these types of people (and by logical extension, the passcodes to their phones).
You also mentioned that the data will be deleted from the database. How exactly will it end up back in that database in the event of a support ticket? If it's by emailing it back to you, that would be a bigger security risk because plain text email is not secure.
What you should do instead
Build a web service to sit between your app and the database
Pass the login details from the application to the web service and perform authentication/authorisation there. If successful, pass back an access token of some description. Save this access token to the database with an expire-time value.
Have the app call various api endpoints, passing the access token as part of the Authorization header (so it doesn't get cached or end up in the logs of proxies and web servers etc). If the token is valid, fulfil the request and return the requested data back to the app from the web service
On log-out/quit, have the app remove any sensitive information from the device memory if security is such a concern.
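The token flow described above can be sketched as follows. This is a minimal illustration using an in-memory store; the class and method names are assumptions, and a real service would persist tokens in the database with their expire-time values, as the steps describe.

```java
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TokenService {
    // token -> expiry time; a real service would keep this in the database.
    private final Map<String, Instant> tokens = new ConcurrentHashMap<>();
    private final SecureRandom rng = new SecureRandom();

    // Issue an opaque random token with a fixed lifetime after a
    // successful login (the authentication step itself is out of scope).
    public String issueToken(long lifetimeSeconds) {
        byte[] raw = new byte[32];
        rng.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        tokens.put(token, Instant.now().plusSeconds(lifetimeSeconds));
        return token;
    }

    // Called by each API endpoint with the value of the Authorization
    // header; expired or unknown tokens are rejected.
    public boolean isValid(String token) {
        Instant expiry = tokens.get(token);
        if (expiry == null || Instant.now().isAfter(expiry)) {
            tokens.remove(token);
            return false;
        }
        return true;
    }
}
```

The token is opaque and random, so it reveals nothing about the user even if logged somewhere; the expiry keeps a leaked token from being useful indefinitely.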
Additional Notes
As with any such design, you should ensure that all communications are done over a secure channel.
Ensure passwords are stored in a secure format and not transmitted or stored in plaintext anywhere. Use a secure channel for passwords in transit; bcrypt is a good choice for storing passwords, or consider implementing the Secure Remote Password protocol.
Ensure that direct access to the database is only allowed from your web service and not the wider internet
Ensure that your webservice sanitises input, escapes output and is not vulnerable to SQL Injection attacks.
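As a sketch of secure server-side password storage: bcrypt requires a third-party library, so this example uses PBKDF2 from the Java standard library instead. The iteration count and the "salt:hash" storage format are assumptions for illustration.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStore {
    private static final int ITERATIONS = 100_000;

    // Hash a password with a fresh random salt; store "salt:hash".
    public static String hash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return Base64.getEncoder().encodeToString(salt) + ":" +
               Base64.getEncoder().encodeToString(pbkdf2(password, salt));
    }

    // Recompute the hash with the stored salt and compare in constant time.
    public static boolean verify(char[] password, String stored) throws Exception {
        String[] parts = stored.split(":");
        byte[] salt = Base64.getDecoder().decode(parts[0]);
        byte[] expected = Base64.getDecoder().decode(parts[1]);
        return MessageDigest.isEqual(expected, pbkdf2(password, salt));
    }

    // Deliberately slow key derivation makes brute-forcing stolen
    // hashes expensive.
    private static byte[] pbkdf2(char[] password, byte[] salt) throws Exception {
        KeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }
}
```

The per-password random salt means two users with the same password get different hashes, defeating precomputed rainbow tables.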
The benefits of this approach are obvious:
Your app data remains secure so long as the environment is secured using the correct tools. If you choose the right hosting provider they'll also be able to provide help and support securing your web server and database.
In the event of a user changing their device, logging out, or whatever else, they'll be able to log back in as they see fit. This will meet the already well-established expectations of users and reduce potential support calls.
If you decide to expand on the features of your app later on, you can add new tables to the database, new endpoints to the webservice and new functionality within the app to consume said endpoints.
Many users have the bad habit of reusing passwords. With a properly designed system you're able to audit login attempts, lock users out for a period of time after too many incorrect password attempts, force password expiry or resets, and let your more security-conscious users self-service password changes.

Storing Gmail emails on web server: Acceptable or privacy issue?

I'm trying to come up with an app in which I need to access users' emails for some functionality. I'm requesting the normal "View and manage emails" permission from the user for Gmail access.
I'll be storing the email on the user's device. But is it a problem if I store the details on my own web server too? Is that a data policy violation? Has anyone tried that here?
There are also a few apps which actually do something like this - Swingmail, Tripit, Boomerang. I know they store data on the device, but there is no way of knowing whether they also store data on their web servers.
Initial research:
I went through the Google API TOS, and below are the three things that seem to indicate that we can store data on the web server:
Security: You will use commercially reasonable efforts to protect user information collected by your API Client, including personally identifiable information ("PII"), from unauthorized access or use and will promptly report to your users any unauthorized access or use of such information to the extent required by applicable law.
Retrieval of content: When a user's non-public content is obtained through the APIs, you may not expose that content to other users or to third parties without explicit opt-in consent from that user.
Prohibitions on Content: Unless expressly permitted by the content owner or by applicable law, you will not, and will not permit your end users or others acting on your behalf to, do the following with content returned from the APIs: scrape, build databases, or otherwise create permanent copies of such content, or keep cached copies longer than permitted by the cache header.
I think we can still store the data for a user, provided the user grants permission for that and the developer takes sufficient measures to protect the data obtained from Google APIs.
I'm not sure it's a stackoverflow question, but as I've been thinking about this:
1. What does Google say about it?
This is in the terms of service, which you have found.
2. What do your relevant data protection laws say about it?
For example, in the UK you need to consider whether you are a data controller and whether you have taken reasonable measures to secure the data, amongst other things. What is the process you follow if you lose the data, for instance? There can be financial penalties if you get this wrong.
3. Are your users informed (this is linked to 2)?
Your users need to have agreed to this, in some acceptable form. How do they remove their data, how long is it held, how is it used?
Now, IANAL, but you have to be careful: you will likely be putting things in Terms & Conditions, which form the basis of a contract, and there be dragons. For example, "reasonable" in UK law can have a host of pitfalls: http://www.linklaters.com/Insights/Publication1403Newsletter/PublicationIssue20070605/Pages/PublicationIssueItem2385.aspx
Bottom line? I'd get a lawyer.

Marking non-sensitive data to prevent submission from external sources, instead of full encryption? (mobile app submitting to a server)

I want to submit non-sensitive data from a mobile app to a server.
But I don't want external sources to be able to submit data.
I would like some opinions on whether it's enough to mark the requests with a hash formula.
For example:
MD5(MD5(message)+secretString)
The messages will be unique, and there is a minimum 10-minute interval between submissions from a single source (if a request arrives from the same source before this time, it will be rejected).
That's why I think it's not worth the effort to go for full encryption of the requests, but since I have no experience in this area I decided to check with the community.
Thanks in advance.
The approach looks good; a few considerations though:
the secretString can be extracted pretty easily from the app. The only factor here is the motivation of the attacker.
consider replacing MD5 with SHA-256. MD5 is considered broken with respect to collisions, and the change is trivial.
don't use IP addresses for "single source" protection. Mobile devices pass through carrier networks and share a relatively small IP block.
consider adding a unique, incrementing number to each request to avoid replay attacks.
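The hash-plus-secret construction in the question is essentially a homemade message authentication code; the standard primitive for this is HMAC, available in the Java standard library. Here is a minimal sketch combining HMAC-SHA256 with the incrementing-counter idea from the last point. Class and method names are illustrative, and the caveat above still applies: the secret can be extracted from the app by a motivated attacker.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigner {
    private final SecretKeySpec key;

    public RequestSigner(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Tag the message together with an incrementing counter so that
    // a replayed request (same counter) can be rejected server-side.
    public String sign(String message, long counter) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] tag = mac.doFinal((counter + ":" + message)
                .getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : tag) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // The server recomputes the tag with the same secret and compares.
    // In production, prefer a constant-time comparison such as
    // MessageDigest.isEqual to avoid timing side channels.
    public boolean verify(String message, long counter, String tag) throws Exception {
        return sign(message, counter).equals(tag);
    }
}
```

Unlike hash(hash(message)+secret), HMAC is designed for exactly this purpose and is immune to length-extension attacks on the underlying hash.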
You say you want to submit data to the server, but if you send only a hash, the data is no longer recoverable by the server: not only the attacker but even the server will not know what the data is. Going for encryption is the best way to go about this problem if you want to achieve confidentiality.
As mentioned by another user, having a fixed secret string in the app is not doing you any good, as it can be recovered easily. You cannot rely on someone not knowing the "formula"; reversing an app is easier than people think, so security through obscurity is definitely not the way to go. If you want to use a salt, use a secure random number generator, but then you have the additional task of having the same salt at the server to verify (and the server needs to have the message beforehand).

How to securely verify a user's subscription to an Android app?

I know very little about security or servers, but I am making an Android app that allows users to purchase an in-app subscription. As recommended, I want to use the Google Play Developer API and store the necessary data on my own server. However, I can't think of a way to do this without having a line in my code like
if (userIsSubscribed) {
    // give access to purchased data
}
A hacker could obviously go in and just flip that to if(true). What should I do instead?
Obfuscate your app code as a minimum. Also do the subscription check on the server, before you send the content; that is one of the reasons they have a Web API.
Basically, anything the user (and potential cracker) has access to (i.e., your app) cannot be trusted. Things they don't have direct access to (i.e., your content server) can be trusted a bit more and it is a good idea to move all sensitive operations and/or data there, where possible.
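The server-side gating idea can be sketched as follows. The names and the in-memory subscription map are illustrative assumptions; a real implementation would verify the purchase against the Google Play Developer API and serve real content.

```java
import java.util.Map;
import java.util.Optional;

public class ContentServer {
    // uid -> subscription state; in a real system this would be
    // refreshed from the Google Play Developer API, not a plain map.
    private final Map<String, Boolean> subscriptions;

    public ContentServer(Map<String, Boolean> subscriptions) {
        this.subscriptions = subscriptions;
    }

    // The premium content never leaves the server unless the check
    // passes, so flipping a client-side flag to if(true) gains an
    // attacker nothing: there is simply no content on the device.
    public Optional<String> getPremiumContent(String uid) {
        if (Boolean.TRUE.equals(subscriptions.get(uid))) {
            return Optional.of("premium content for " + uid);
        }
        return Optional.empty();
    }
}
```

The key design point is that the client-side `if` becomes cosmetic: it only decides what UI to show, while the data itself is withheld by code the user cannot modify.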
