After reading over tons of articles and Stack Overflow posts, I can't find a concrete reason to use
EncryptedSharedPreferences or EncryptedFile compared to using their non-encrypted counterparts.
To start off, I want to talk about the two states of a device in which security must be considered:
the device is not compromised
the device is compromised
When the device is not compromised, the application is sandboxed. As long as the application follows Android's Security Best Practices, the application should be fine, security-wise. Because internal app data is safe when the device is not compromised, there is no need to encrypt it.
When the device is compromised, there is very little an application can do to protect itself. The only real strategy is to minimize the amount of sensitive data on the device. However, EncryptedSharedPreferences and EncryptedFile seem to imply that they can protect user data even when the device is compromised, as discussed in Android's blog post Data Encryption on Android with Jetpack Security:
Why would you want to encrypt data in your app? Doesn’t Android, since 5.0, encrypt the contents of the user's data partition by default? It certainly does, but there are some use cases where you may want an extra level of protection... In the app home directory, your app should encrypt data if your app handles sensitive information including but not limited to personally identifiable information (PII), health records, financial details, or enterprise data.
But what does it mean by "extra level of protection"? According to the same Blog:
Before we jump into encrypting your data, it’s important to understand how your encryption keys will be kept safe. Jetpack Security uses a master key... which is generated and stored in the AndroidKeyStore.
So Jetpack's EncryptedSharedPreferences and EncryptedFile use the KeyStore to generate and store the keys for encryption. This is verified by examining the source code. And this is also where the problem lies.
The KeyStore is not intended to generate keys to encrypt data local to the device. As the answer to the post Android - What are the practical security benefits of using a hardware-backed keystore vs software-only keystore vs no keystore points out:
The purpose of a key store is not to restrict access to an application or application data, it's purpose is to protect the credential from being exposed during use. Since a key store will willingly leverage its knowledge to encrypt data or access sensitive application information, it's not really a challenge for an attacker to leverage as you pointed out in many of your breakdowns across all three types.
This means that, on a compromised device, a malicious program can use the KeyStore to decrypt all of the previously encrypted data. The Android Documentation acknowledges this:
If the Android OS is compromised or an attacker can read the device's internal storage, the attacker may be able to use any app's Android Keystore keys on the Android device, but not extract them from the device.
This completely nullifies any encryption done by EncryptedSharedPreferences and EncryptedFile when the device is compromised.
To recap: When the device is not compromised, internal app data is safe. When the device is compromised, internal app data is not safe, regardless of whether it is encrypted via EncryptedSharedPreferences/EncryptedFile or not.
Question:
If the above is true, then what are the benefits to using EncryptedSharedPreferences and EncryptedFile? Is there a specific scenario where EncryptedSharedPreferences and EncryptedFile can protect internal app data, as compared to their non-encrypted counterparts?
EDIT 1:
As pointed out in the comments, "internal app data" is ambiguous. Specifically, I mean the location at /data/data/<package name>, which is protected by app sandboxing and credential encryption. Also, for this question, I would like to focus on Android 10+, as this is when file-based encryption (FBE) became required. However, I am also interested in scenarios on lower Android versions too (at the time of writing, the minimum API level for EncryptedSharedPreferences/EncryptedFile is 21).
EDIT 2:
After re-reading the question, I think it's also really important to be clear about what the KeyStore is. The KeyStore consists of two major parts: a physical component (e.g. TEE, SoC, HSM) and an OS daemon. The physical component performs crypto operations on behalf of the OS, so no process (including the OS) can know what the key is. The OS daemon restricts usage of the physical component. Because it is only the OS daemon that restricts usage, a malicious program (on a compromised device) can circumvent those restrictions and use the physical component directly. This is the reason why the KeyStore is not supposed to be used to encrypt data that remains local to the device. The physical component only provides the property that the key itself will not be known by an attacker, not that it can't be used by them. More information about the KeyStore can be found here and here.
If the device is compromised, the security of the whole system is in doubt and all data must be considered exposed. If the device is not compromised, the OS itself should guarantee the safety of applications, data, and the execution environment.
I'd elaborate on another state: the device being analyzed by a third party, in many cases offline -- possibly by law enforcement or a thief.
According to the EncryptedSharedPreferences docs, the preferences file is encrypted, hence protecting data at rest. This level of security is independent of other security aspects of the device (optional FDE or SD card encryption) and is manageable by the application developer. Using the Android KeyStore allows use of Android's security features (such as an HSM) via a standard and stable API.
Answer
... what are the benefits to using EncryptedSharedPreferences and EncryptedFile?
The application developer can assure some security level for the application data via standard API.
Is there a specific scenario where EncryptedSharedPreferences and EncryptedFile can protect internal app data, as compared to their non-encrypted counterparts?
Yes: during an evil-maid or offline attack on the device (or its storage), EncryptedSharedPreferences/EncryptedFile can provide protection for the application data, or at least raise the bar required to acquire such data to a non-trivial level.
Based on my knowledge and experience in this area:
EncryptedSharedPreferences was introduced to help secure user data even on rooted devices.
When you create a SharedPreferences object in your implementation, a file is created in a directory called shared_prefs, named after the filename you passed in your code.
This shared_prefs folder is located in /data/data/<your package name>.
This directory is accessible if the device is rooted, so the data inside could be exploited.
It's easy to read the preferences file, as it's a plain XML map of keys and values, like the following:
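For illustration, a plain shared_prefs XML file might look like this (the key names and values here are hypothetical):

```xml
<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
    <string name="username">alice</string>
    <string name="auth_token">abc123def456</string>
    <boolean name="is_logged_in" value="true" />
</map>
```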
Whereas if you use EncryptedSharedPreferences, your keys and values are encrypted by default, so even if the device is rooted, the data cannot be read directly,
like the following:
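Roughly like this (the Base64-looking strings below are made up for illustration; the keyset entry names reflect how the androidx.security library stores its own key material in the same file):

```xml
<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
    <string name="AWdm93Rw4qYolciQ">AYHf2Lap0W3vTqM5zVbQ9kA1</string>
    <string name="__androidx_security_crypto_encrypted_prefs_key_keyset__">12a8cf01</string>
    <string name="__androidx_security_crypto_encrypted_prefs_value_keyset__">72bd0144</string>
</map>
```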
If you need to check your SharedPreferences files, you can do it from Android Studio:
Open Android Studio
Run your application on an emulator
Open the Device File Explorer (bottom right of Android Studio)
Find your package name within the data/data directory.
Related
How do we secure API keys on a rooted device?
As you know, we can't trust the client; what we can do is make things difficult for the attacker. The following are some of the points I know of for securing keys:
Using the NDK (store the key in your C code and retrieve it in your Kotlin class at runtime) - even if the device is rooted or the app is decompiled, the key is much harder to extract.
Using the Android KeyChain (stores the key in device hardware; without device integrity and the certificate, no one can access it. It is stored in a separate place from your application. Not sure what happens if we decompile the app).
Secure (encrypted) SharedPreferences. (Even if we encrypt the file, it can still be accessed on a rooted device, and one might figure out the decryption algorithm after examining the code.)
Secure SharedPreferences plus ProGuard/DexGuard? (Still not a good idea to store the encrypted key publicly under the app package when the device is rooted.)
What if we just encrypt the file? (Again, it will be under the app's package folder and can be accessed.)
What can be other options?
Have a look at the Jetpack Security Library, where you can encrypt files or shared preferences.
However, a good rule of thumb is: if you don't want things from your app to be accessed, then you should not store them locally.
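To illustrate the "if we just encrypt the file" point above: the encryption itself is easy with plain javax.crypto, but the key has to live somewhere the app can reach, and on a rooted device that somewhere is reachable by the attacker too. A minimal JVM sketch (class name and plaintext are made up; on Android the key generation step is what you would delegate to the AndroidKeyStore):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class EncryptedFileSketch {
    public static void main(String[] args) throws Exception {
        // Generate an AES-256 key. On Android this is the step you would
        // delegate to the AndroidKeyStore; here the key simply lives in memory.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        // AES-GCM with a random 96-bit IV.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("my-api-key".getBytes(StandardCharsets.UTF_8));

        // Whoever can reach the key can undo the encryption; on a rooted
        // device that includes the attacker, which is the whole problem.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```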
I'm currently using Spotify in my Android app, but I am required to use a secret in order to refresh tokens and such. I would like to transmit the secret from my backend to the app, so the secret does not reside in the APK and cannot be found by decompiling. I've read a lot online about securing secrets in your app via various ways, like proxies, just using your own backend, putting the code into native C++ code (NDK) in the app, or using the hash of the app to determine whether it is really the app calling the backend, and not some guy behind his computer trying to steal the secrets.
Found options:
Proxy: it means routing requests through my own server; I don't want that
Own backend: same as the proxy; I don't want all requests to go through my own service
Native code: using this seems to slow down decompilers, but doesn't stop them
Hash: from what I could find, this post suggests some things that I consider weird. It suggests retrieving the SHA-1 and passing it in a network header to verify that the app is the caller. The weird part is that when you just unzip the APK file, running a printcert command (keytool -printcert -file CERT.RSA) will display all SHA and MD5 hashes of the signing certificate. From what I can tell, this is not foolproof, as someone can just get the hashes from the APK file and submit them to the server.
Is there any other way I can solve this issue?
YOUR PROBLEM
I'm currently using Spotify in my Android app, but I am required to use a secret in order to refresh tokens and such. I would like to transmit the secret from my backend to the app, so the secret does not reside in the APK and cannot be found by decompiling. I've read a lot online about securing secrets in your app via various ways, like proxies, just using your own backend, putting the code into native C++ code (NDK) in the app, or using the hash of the app to determine whether it is really the app calling the backend, and not some guy behind his computer trying to steal the secrets.
Congratulations on your efforts to understand the problem; it seems you went to great lengths to understand that secrets in a mobile app can always be extracted via static binary analysis. But I don't see any mention of instrumentation frameworks like:
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
or
Xposed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
but many more exist, and all of them will hook into your code at runtime and extract any secret you store in your mobile app, no matter how securely you store it, even if you use a hardware-backed keystore that runs in a trusted execution environment:
Android Hardware-backed Keystore
The availability of a trusted execution environment in a system on a chip (SoC) offers an opportunity for Android devices to provide hardware-backed, strong security services to the Android OS, to platform services, and even to third-party apps.
At some point the secret retrieved from this keystore will need to be used to make the request to your third-party service, and at that point all an attacker needs to do is hook the call to that function and extract the secret when it is passed in.
So, no matter what you do, in the end a secret in a mobile app can always be extracted; it just depends on the skill set of the attacker and the time and effort they are willing to put in.
This being said, it leads me to the practice I always advise developers against: calling third-party services from within their mobile app.
THIRD PARTY SERVICES ACCESS FROM A MOBILE APP
Found options:
Proxy: it means routing requests through my own server; I don't want that
Own backend: same as the proxy; I don't want all requests to go through my own service
Yes, I read that you don't want to use a proxy or your backend, but that is the best chance you have to secure the access to your third-party service, in this case Spotify.
I wrote this article that explains why you should not do it from your mobile app, from where I quote:
Generally, all Third Party APIs require a secret in the form of an API key, Access Token or some other mechanism for a remote client to identify itself to the backend server with which it wishes to communicate. Herein lies the crux of the problem of accessing it from within your mobile app, because you will need to ship the required secret(s) within the code (the coloured keys in the above graphic).
Now you may say that you have obfuscated the secret within your code, hidden it in the native C code, assembled it dynamically at runtime, or even encrypted it. However, in the end all an attacker needs to do in order to extract this secret is to reverse engineer the binary with static binary analysis, or hook an instrumentation framework like Frida into the function at runtime which will return the secret. Alternatively an attacker can inspect the traffic between the mobile app and the Third Party API it is connecting to by executing a MitM (man-in-the-middle).
With the secret in their possession, the attacker can cause a lot of damage to an organization. The damage can be monetary, reputational and/or regulatory. Financially, the attacker can use the extracted secret to access your cloud provider and your pay-per-call Third Party APIs in your name, thus causing you additional costs. Further, you may be financially hurt by the exfiltration of data which may be sold to your competitors or used to commit fraud. Reputationally you can be impacted when the attacker uses the extracted secret to post on your behalf on social networks, creating a public relations nightmare. Another reputational damage can occur when an attacker uses the Third Party API and violates its terms & conditions (for example where frequent usage of the API triggers rate limits) such that you get blocked from using the service, creating pain for your end users. Last but not least are regulatory troubles caused when the extracted secret is the only mechanism of protecting access to confidential information from your Third Party API. If the attacker can retrieve confidential information such as Personal Identifiable Information (PII), regulatory fines connected to violations of GDPR in Europe, or the new CCPA Data Privacy Law in the California, may be enforced against your business.
So the take home message is that any secret you ship in your code must be considered public from the moment you release your app or push the code to a public repository. By now it should be clear that the best approach is to completely avoid accessing Third Party APIs from within a mobile app; instead you should always delegate this access to a backend you can trust and control, such as a Reverse Proxy.
Now you may say that the problem has just shifted from the mobile app to the reverse proxy or backend server, and that's a positive thing, because the backend or reverse proxy is under your control, while a mobile app is not: it lives on the client side, and therefore the attacker can do whatever he wants with it.
In the backend or reverse proxy you are not exposing the secrets used to access the third-party services to the public, and any abuse an attacker wants to carry out on your behalf against that third-party service will need to pass through a place you control; therefore you can apply as many defense mechanisms as you can afford and as are required by law for your use case.
SECURITY IN DEPTH
putting the code into native C++ code (NDK)
When you hide the secret in native C code, it's not easy to find with static binary analysis, at least for script kiddies and casual hackers; it needs a better skill set than the majority may have. Thus I really recommend you use it as an extra layer of security, but to protect a secret for accessing your own services, not third-party ones, as I already mentioned.
If you decide to follow my advice and shift your efforts to defending the third-party secret in a place you have control of, like your own backend, then I recommend you read my answer to the question How to secure an API REST for mobile app?, specifically the sections on Securing the API Server and A Possible Better Solution.
If you read the above answer, then you will have realized that if you keep the access to third-party services in your backend, you can lock down your API server to your mobile app with a very high degree of confidence by using the Mobile App Attestation concept.
DO YOU WANT TO GO THE EXTRA MILE?
I saw that you are well informed, so you probably already know what I am about to share, but in any response I give to a security question I always like to reference the excellent work of the OWASP foundation, so if you allow me, I will do it here too :)
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
For APIS
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
Everything that was created by a human can be broken by a human - there is no completely secure option.
There are a few things you can try, though.
Use end-to-end encryption to establish a secure connection with your server and then send your secret to your app from your backend. Store the secret, secured via the KeyStore, in SharedPreferences, a file, or a database.
You can also leverage a one-time pad cipher based on the Vernam algorithm. It has absolute cryptographic strength (provided the pad is truly random, at least as long as the message, and never reused) and thus cannot be cracked. In conjunction with Diffie-Hellman it may give a nice security boost.
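The one-time pad really is just XOR against a random pad as long as the message. A toy Java sketch (not a production scheme; the hard part, which this skips entirely, is delivering the pad securely and never reusing it):

```java
import java.security.SecureRandom;

public class Vernam {
    // XOR each message byte with a pad byte; XOR is its own inverse,
    // so applying the same pad twice recovers the plaintext.
    static byte[] xor(byte[] data, byte[] pad) {
        if (pad.length < data.length) {
            throw new IllegalArgumentException("pad too short");
        }
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ pad[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] msg = "secret-token".getBytes();

        // Perfect secrecy requires the pad to be truly random, at least
        // as long as the message, and used exactly once.
        byte[] pad = new byte[msg.length];
        new SecureRandom().nextBytes(pad);

        byte[] ciphertext = xor(msg, pad);
        byte[] recovered = xor(ciphertext, pad);
        System.out.println(new String(recovered));
    }
}
```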
It can still be defeated, though - via a memory scan on a rooted device while the app is active and has the secret decrypted, via a man-in-the-middle attack, etc. As I've said, everything can be broken (except perhaps the Vernam algorithm itself).
Don't bother too much with it, though - it will be hard for criminals to significantly misuse your secrets. Generally they don't even bother with such stuff that much.
Hope this answer helps you somehow.
I developed an Android application. I want to know the penetration-testing techniques available for securing my application. Can someone unzip my APK and get into my Java files? I tried using apktool on my APK; all the Java files were converted into Smali format. Is there any tool to convert my dex files back into Java files?
I have already set debuggable to false, so I think my logs won't be visible when someone tries to dig in.
Also, I have used hardcoded strings, as I read somewhere that this is recommended.
Sorry, I missed out: I also tried the DIVA tool and APK Inspector, an app for testing, but I am not able to use them properly.
What are the possibilities that someone can penetrate my application, and does obfuscation really help prevent this?
Is there any specific obfuscation to achieve that?
There are two main sites about checking Android security:
Open Android Security Assessment Methodology
Android Testing Cheat Sheet
The first, the Open Android Security Assessment Methodology, has the following sections:
OASAM-INFO: Information Gathering: Information gathering and attack surface definition.
OASAM-CONF: Configuration and Deploy Management: Configuration and deploy assessment.
OASAM-AUTH: Authentication: Authentication assessment.
OASAM-CRYPT: Cryptography: Cryptography use assessment.
OASAM-LEAK: Information Leak: Confidential information leak assessment.
OASAM-DV: Data Validation: User entry management assessment.
OASAM-IS: Intent Spoofing: Intent reception management assessment.
OASAM-UIR: Unauthorized Intent Receipt: Intent resolution assessment.
OASAM-BL: Business Logic: Application business logic assessment.
The second, the Android Testing Cheat Sheet, provides a checklist of tasks to perform when penetration-testing an Android application. It follows the OWASP Mobile Top 10 Risks list.
Here is the documentation from the website:
Testing Methodology
At the device level, there are two ways in which the application shall be tested:
With Android device running in a factory default or normal mode
With Android device running in a rooted mode
At the application level, there are two ways in which it shall be tested:
Application running on the device (to take benefits of touch related features)
Application running on the emulator (to ease the task of testing using wider screen of desktop or laptop)
Application Mapping
Map the application for possible security vectors:
What is the application genre? (game, business, productivity, etc.)
Does the application connect to backend web services?
Is the application purely native, or does it incorporate ready-made frameworks?
Does the application store data on the device?
What features of the device are used by the application? (camera, gyroscope, contacts, etc.)
OWASP Step-by-step Approach
(For each of the standards below, there shall be multiple steps for the tester to follow.)
M1 - Weak Server Side Controls
M2 - Insecure Data storage
This section should ideally be tested after using the application for some time, so the application has had time to store some data on disk.
Common places to look at:
/data/data/app_folder
/sdcard/
/sdcard1/
M3 - Insufficient Transport Layer Protection
Multiple layers of checks are to be performed here:
On Server side
Identify all SSL endpoints.
Perform an SSL cipher scan using sslscan [1] or similar software.
SSLv2 and SSLv3 are disabled
TLS 1.2, 1.1, and 1.0 are supported (1.2 is essential to ensure the highest possible secure connection)
RC4- and CBC-based ciphers are disabled
DH params are >2048 bits
The SSL certificate is signed with at least SHA-2 (sha256)
ECDHE ciphers / ciphers supporting perfect forward secrecy are preferred
The SSL certificate is from a trusted root CA
The SSL certificate is not expired
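The protocol-version items in the checklist above can also be sanity-checked from the client side by listing what the platform's default SSLContext is capable of negotiating. A small JVM sketch (output varies by JDK/Android version; this inspects local capabilities only and is no substitute for an sslscan of the server):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import java.util.Arrays;

public class TlsCheck {
    public static void main(String[] args) throws Exception {
        // Everything the platform *could* negotiate, enabled or not.
        SSLParameters params = SSLContext.getDefault().getSupportedSSLParameters();
        System.out.println("Protocols: " + Arrays.toString(params.getProtocols()));

        // Flag legacy protocols from the "disable" list in the checklist.
        for (String proto : params.getProtocols()) {
            if (proto.equals("SSLv2") || proto.equals("SSLv3")) {
                System.out.println("WARNING: legacy protocol supported: " + proto);
            }
        }
    }
}
```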
On the Device Side
Ensure the application is working correctly by navigating around.
Put a proxy between the application and the remote server. If the application fails to load, it might be doing cert validation. Refer to logcat for any message printed.
Place the proxy's root CA in the trusted root CA list on the device. (Burp [2], OWASP ZAP [3])
Try using the application again. If it still doesn't connect, the application might be doing cert pinning.
Install the Xposed Framework [4] and JustTrustMe [5], enable JustTrustMe, and then reboot the device.
Try again; if everything now works, we have an application that employs cert pinning.
M4 - Unintended Data Leakage
Similar to M2, this section requires the application to be used; however, while the application is in use we need to monitor the following places:
adb logcat output
cache and webcache folder locations
M5 - Poor Authorization and Authentication
One of the simplest checks, to be performed after the application has been used for some time and has had time to put data into the system:
Enumerate all exported activities.
Start each activity and identify whether the activity was supposed to be publicly accessible or not.
Any activity displaying confidential information should be behind authentication. (Confidential information includes PII (personally identifiable information), financial data, etc.)
M6 - Broken Cryptography
There are multiple things to look at:
Usage of known weak crypto algorithms like ROT13, MD4, MD5, RC2, RC4, SHA-1
Do-it-yourself / "let me design my own algorithm" encryption
A secret key hardcoded in the application code itself.
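To make the first point concrete, here is a small JVM sketch contrasting a digest from the weak list with SHA-256 (the "hunter2" input is just an example; real password storage should go through a slow KDF such as PBKDF2/bcrypt/scrypt, not a bare hash):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestDemo {
    // Render a digest as lowercase hex for display.
    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] input = "hunter2".getBytes(StandardCharsets.UTF_8);

        // MD5 is on the "known weak" list: collisions are practical,
        // so it must not be used for anything security-relevant.
        System.out.println("MD5:     " + hex(MessageDigest.getInstance("MD5").digest(input)));

        // SHA-256 is the usual minimum for new designs.
        System.out.println("SHA-256: " + hex(MessageDigest.getInstance("SHA-256").digest(input)));
    }
}
```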
M7 - Client Side Injection
Android applications need to store data locally in SQLite files or XML structures, and hence need to perform either SQL/XML queries or file I/O.
This gives rise to two major issues:
SQL/XML injection; and if the reading intent is publicly exposed, another application could read this data.
Local file reads, which can allow another application to read files of the application in question; if they contain sensitive data, then data leaks via this medium.
If the application is a HTML5 hybrid application then Cross Site Scripting (XSS) should also be considered. XSS will expose the entire application to the attacker as HTML5 applications will have the ability to call native functionality and hence control over the entire application.
M8 - Security Decisions via untrusted inputs
M9 - Improper Session Handling
Improper session handling typically results in the same outcomes as poor authentication: once you are authenticated and given a session, that session allows access to the mobile application. There are multiple things to look at:
Check and validate Sessions on the Backend
Check for session Timeout Protection
Check for improper Cookies configuration
Insecure Token Creation
M10 - Lack of Binary Protection
Android binaries are basically dex classes which, if not protected, can result in easy decompilation of the source code. This could lead to code/logic leakage.
The following controls need to be checked and validated:
Root (Jailbreak) Detection Controls
Checksum Controls
Certificate Pinning Controls
Debugger Detection Controls
You can try online APK decompilers and check what data is visible. Then you can tighten your ProGuard configuration to your requirements.
I've been using Nikolay Elenkov's blog (http://nelenkov.blogspot.com/2012/05/storing-application-secrets-in-androids.html) to store encrypted password information in our Android application. The requirements are such that we: a) don't want to store the key/salt directly in our code, because it can be decompiled and extracted; b) need to support back to Android API level 14; and c) need to store password information (encrypted) on the device (i.e. we can't currently use an OAuth token or similar system, as it would require server changes that can't be made right now).
So, on JB 4.2+ devices, I can utilize the newer secure credential storage, which doesn't cause any problems. For JB 4.1 and ICS devices, though, I need to use the aforementioned method of interacting with the keystore daemon through nelenkov's techniques.
The problem here is that when the secure credential storage is initialized, it requires that the user set up a device password/PIN, since the encryption key for the master storage is derived from it. This is kind of a bad deal, because it is a big hindrance for the user.
Alternatively, I've looked at using a separate key store based on SpongyCastle. The problem with this direction, though, is that I would need to initialize it with some password (likely stored in my source code). This would mean that, if the device were stolen/rooted, it would be relatively easy to procure the contents of the "secure" key store, as the password could be retrieved from the app's decompiled source.
Is there a better solution to this problem that I'm not seeing, or is it just not possible with API versions < 18?
There are really only two ways to do this: either the user enters some kind of password and you derive your keys from it, or you generate a key and store it on the device. Using the device unlock password is a lot more user-friendly than having the user remember a dedicated password for your app only. BTW, on 4.2+ you still need a lockscreen password, so nothing has changed compared to 4.0.

As usual, if the device is rooted, the attacker can get the user's Google authentication tokens and brute-force the lockscreen password, so you'd have much bigger problems. So think about your threat model first and decide how far you are willing to go. If the data is truly sensitive, use a dedicated password with sufficient complexity that needs to be entered every time the app is opened. You can also write a device administrator and require that the device is encrypted, that the lockscreen PIN/password is sufficiently long/complex, etc.
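The first option (derive your keys from a password) can be sketched with plain JCA. This is a hypothetical snippet, not Elenkov's code; note that pre-API-26 Android only ships PBKDF2WithHmacSHA1, so the algorithm name below assumes a modern JVM:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;

public class DeriveKey {
    public static void main(String[] args) throws Exception {
        char[] password = "user-lockscreen-pin".toCharArray();

        // The salt is not secret; store it alongside the ciphertext so the
        // same key can be re-derived later.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // A high iteration count is what makes brute-forcing short
        // passwords expensive; tune it to your performance budget.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        byte[] key = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();

        System.out.println("Derived key length: " + key.length + " bytes");
    }
}
```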
The alternative is to use tokens, either your own or from a third party identity provider (Google, FB, etc.).
I want to make an app like McAfee Secure Container. The container app should launch other (specific) apps and provide them isolated execution environment. There should be no data sharing outside the container and all the apps inside container should use container's network connection.
What can be a way forward?
I know one solution: run each app within its own Dalvik VM with a unique Linux user ID (UID) to protect all of that app's resources. It makes use of Linux file permissions to protect these resources. The only way to get apps running with the same UID is to sign them with the same publisher key and declare this ID in the manifest. To make resources world-readable, you have to declare this explicitly when opening them within the app. Furthermore, apps can only access certain system resources if they declare that permission in the manifest (think of I/O operations and so on). These permissions will then be presented to the user at install time.
... It's called Android :-)
Or, in other words, what more are you searching for than what is already provided by the Android system? If you're looking for security, I would say the Android system is pretty secure on its own. Some threats I can think of are listed next.
A possible threat is that the system itself (not the app) is compromised (rooted or so). Then all your app data will be exposed on that system. The solution for that is encrypting your data. Google for "Android derived key" for more information on how to derive a key from a user password and use that key to encrypt sensitive data stored on the device. The main rule here is to only store sensitive data if you really have to, and encrypt it if you do. Also make sure to use CBC mode instead of ECB mode, and provide a salt and an IV.
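A minimal sketch of that advice (password-derived key, then CBC with a salt and a fresh random IV), using plain javax.crypto; class name, password, and parameters are made up for illustration:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class DerivedKeyCbc {
    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();

        // Derive the AES key from the user's password with a random salt...
        byte[] salt = new byte[16];
        rnd.nextBytes(salt);
        byte[] keyBytes = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(new PBEKeySpec("user-password".toCharArray(), salt, 10_000, 256))
                .getEncoded();
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        // ...and use CBC with a fresh random IV (never ECB, never a fixed IV).
        byte[] iv = new byte[16];
        rnd.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal("sensitive data".getBytes(StandardCharsets.UTF_8));

        // The salt and IV are stored alongside the ciphertext for decryption.
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```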
Never ever think that your code is safe. Not even if it is obfuscated. Obfuscation does not make it impossible to get the code in a readable format; it just makes it harder. So it's always a bad idea to keep sensitive data in your code.
Another possible threat I can think of is network traffic. Use SSL/TLS and verify hostnames. Limit credentials going over the network by using generated tokens for authentication. Encrypt data over the network, this time with a dynamic IV. Also validate input and be aware of SQL injection.
Short answer: you can achieve this with dynamic library loading.
For the long answer, please refer to this:
https://www.youtube.com/watch?v=siVS2jmPABM