Secure key storage in Android Application - android

I know this isn't exactly a programming topic, but maybe someone has experience with this topic.
I need to store three 128-bit keys for authentication in my Android application.
How can I do this as safely as possible, so that simple decompilation of the application will not reveal these keys? In general, what should I do to keep these keys secure?
I use Delphi 11.

You want to distribute some fixed keys in your executable. My guess is that you want only your application to be able to connect to an API or a device, and that you then use symmetric encryption with these keys.
From what this official guide says about storing secrets in Android, the best way is to obfuscate the key within native code. Luckily, your Delphi application is a native application, so it is more difficult to disassemble than an Android/JVM application.
So I would just use SHA-2/SHA-3 derivation from several constants (which may be a mix of text and binary) in the source code - perhaps from a .inc file if you want to keep it secret and out of the source code repository. You may hash the binary content of a small bundled resource, e.g. a picture. This obfuscation process could be pure Pascal code, which would be difficult to hack for the average attacker. Just don't store the keys as plain constants somewhere in your code/executable: derive the key from some existing material - this is where SHA-2/SHA-3 could be good enough, e.g. with a PBKDF2 algorithm and a high number of rounds.
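The derivation step can be sketched as follows - shown in Java rather than Pascal purely for illustration; the text constant, salt bytes, and round count are all made-up placeholders:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.GeneralSecurityException;
import java.util.Base64;

public class KeyDerivation {
    // Hypothetical embedded material: a text constant plus a few bytes
    // that could instead come from a bundled resource such as a picture.
    private static final char[] PART_A = "some text constant".toCharArray();
    private static final byte[] PART_B = {0x13, 0x37, (byte) 0xCA, (byte) 0xFE};

    // Derive a 128-bit key with PBKDF2-HMAC-SHA256 and a high round count,
    // so the key never appears as a plain constant in the binary.
    public static byte[] deriveKey() {
        try {
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            PBEKeySpec spec = new PBEKeySpec(PART_A, PART_B, 100_000, 128);
            return f.generateSecret(spec).getEncoded();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(Base64.getEncoder().encodeToString(deriveKey()));
    }
}
```

The same idea ports directly to Delphi with any PBKDF2 implementation; the point is only that the shipped binary contains the ingredients, not the key itself.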
Don't make something too complex: if your attacker is good enough to disassemble some pascal compiled ARM, he/she is likely to defeat anything you may define in-house.
But I would always say that such fixed keys will never be safe. If I could, I would use asymmetric keys, with key derivation from two public/private key pairs. I would put the private key in a safe place (e.g. a server accessible via an API), then call this API once when the user registers the app. It would run e.g. an ECDH challenge against the private key stored on the server or device. Then I would perhaps store the resulting secret in a safe place, using e.g. the Android KeyStore API.
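A minimal sketch of the ECDH agreement idea, again in Java for illustration; both key pairs are generated locally here only so the round trip is self-contained - in practice the server's private key never leaves the server:

```java
import javax.crypto.KeyAgreement;
import java.security.*;
import java.util.Arrays;

public class EcdhChallenge {
    // Each side combines its own private key with the peer's public key;
    // both arrive at the same shared secret without it crossing the wire.
    public static byte[] agree(PrivateKey mine, PublicKey theirs) {
        try {
            KeyAgreement ka = KeyAgreement.getInstance("ECDH");
            ka.init(mine);
            ka.doPhase(theirs, true);
            return ka.generateSecret();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static KeyPair newPair() {
        try {
            KeyPairGenerator g = KeyPairGenerator.getInstance("EC");
            g.initialize(256);
            return g.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair device = newPair();  // stays on the device
        KeyPair server = newPair();  // private half never leaves the server
        byte[] a = agree(device.getPrivate(), server.getPublic());
        byte[] b = agree(server.getPrivate(), device.getPublic());
        System.out.println(Arrays.equals(a, b)); // true: both sides match
    }
}
```

The derived secret would then go into the Android KeyStore rather than into app storage.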
Anyway, what I would always do is provide a way to revoke the keys. An application update may be enough in such case.
And if you can, have your code audited by someone - even someone not familiar with Delphi - who knows about security and cryptography.

Related

Security constraints if complete Source Code can be extracted from APK/IPA [duplicate]

I am developing a payment processing app for Android, and I want to prevent a hacker from accessing any resources, assets or source code from the APK file.
If someone changes the .apk extension to .zip then they can unzip it and easily access all the app's resources and assets, and using dex2jar and a Java decompiler, they can also access the source code. It's very easy to reverse engineer an Android APK file - for more details see Stack Overflow question Reverse engineering from an APK file to a project.
I have used the Proguard tool provided with the Android SDK. When I reverse engineer an APK file generated using a signed keystore and Proguard, I get obfuscated code.
However, the names of Android components remain unchanged and some code, like key-values used in the app, remains unchanged. As per Proguard documentation the tool can't obfuscate components mentioned in the Manifest file.
Now my questions are:
How can I completely prevent reverse engineering of an Android APK? Is this possible?
How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
1. How can I completely avoid reverse engineering of an Android APK? Is this possible?
AFAIK, there is no trick for complete avoidance of reverse engineering.
As @inazaruk also put it very well: whatever you do to your code, a potential attacker is able to change it in any way she or he finds feasible. You basically can't protect your application from being modified, and any protection you put in there can be disabled/removed.
2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
You can do different tricks to make hacking harder though. For example, use obfuscation (if it's Java code). This usually slows down reverse engineering significantly.
3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
As everyone says, and as you probably know, there's no 100% security. But the place to start for Android, which Google has built in, is ProGuard. If you have the option of including shared libraries, you can put the needed code in C++ to verify file sizes, integrity, etc. If you need to add an external native library to your APK's library folder on every build, you can do it as follows.
Put the library in the native library path, which defaults to "libs" in your project folder. If you built the native code for the 'armeabi' target, put it under libs/armeabi. If it was built with armeabi-v7a, put it under libs/armeabi-v7a.
<project>/libs/armeabi/libstuff.so
AFAIK, you cannot protect the files in the /res directory any more than they are protected right now.
However, there are steps you can take to protect your source code, or at least what it does if not everything.
Use tools like ProGuard. These will obfuscate your code, and make it harder to read when decompiled, if not impossible.
Move the most critical parts of the service out of the app and into a web service, hidden behind a server-side language like PHP. For example, if you have an algorithm that took a million dollars to write, you obviously don't want people stealing it out of your app. Move the algorithm out and have it process the data on a remote server, using the app simply to provide it with the data. Or use the NDK to write the critical parts natively into .so files, which are much less likely to be decompiled than APKs. I don't think a decompiler for .so files even exists as of now (and even if it did, it wouldn't be as good as the Java decompilers). Additionally, as @nikolay mentioned in the comments, you should use SSL when interacting between the server and device.
When storing values on the device, don't store them in a raw format. For example, if you have a game and you're storing the amount of in-game currency the user has in SharedPreferences - let's assume it's 10000 coins - instead of saving 10000 directly, save it using an algorithm like ((currency*2)+1)/13. So instead of 10000, you save 1538.53846154 into the SharedPreferences. However, the above example isn't perfect, and you'll have to work to come up with an equation that won't lose currency to rounding errors, etc.
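A rough sketch of that transform and its inverse, rounding on the way back to absorb the floating-point error the answer warns about (the formula is the answer's own example, not a recommendation):

```java
public class CoinCodec {
    // Encode the raw coin count before writing it to SharedPreferences,
    // using the ((currency*2)+1)/13 transform from the example above.
    public static double encode(long coins) {
        return ((coins * 2.0) + 1.0) / 13.0;
    }

    // Invert the transform when reading back; Math.round absorbs the
    // small floating-point error so no currency is lost.
    public static long decode(double stored) {
        return Math.round((stored * 13.0 - 1.0) / 2.0);
    }
}
```

This only obscures casual inspection; anyone who decompiles the app sees the transform too, which is the broader point of this answer.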
You can do a similar thing for server side tasks. Now for an example, let's actually take your payment processing app. Let's say the user has to make a payment of $200. Instead of sending a raw $200 value to the server, send a series of smaller, predefined, values that add up to $200. For example, have a file or table on your server that equates words with values. So let's say that Charlie corresponds to $47, and John to $3. So instead of sending $200, you can send Charlie four times and John four times. On the server, interpret what they mean and add it up. This prevents a hacker from sending arbitrary values to your server, as they do not know what word corresponds to what value. As an added measure of security, you could have an equation similar to point 3 for this as well, and change the keywords every n number of days.
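The word-coding idea might look something like this - the word table and its values are the answer's made-up example ("Charlie" = $47, "John" = $3), and a real system would keep the table server-side and rotate it periodically:

```java
import java.util.*;

public class AmountCodec {
    // Hypothetical word table, shared secretly between client and server.
    private static final Map<String, Integer> WORDS =
            Map.of("Charlie", 47, "John", 3);

    // Client side: split an amount into predefined words, greedily
    // taking the largest word value that still fits.
    public static List<String> encode(int amount) {
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(WORDS.entrySet());
        entries.sort((a, b) -> b.getValue() - a.getValue());
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Integer> e : entries) {
            while (amount >= e.getValue()) {
                out.add(e.getKey());
                amount -= e.getValue();
            }
        }
        if (amount != 0) throw new IllegalArgumentException("amount not representable");
        return out;
    }

    // Server side: sum the words back up into the real amount.
    public static int decode(List<String> words) {
        return words.stream().mapToInt(WORDS::get).sum();
    }
}
```

With this table, $200 encodes as four "Charlie" plus four "John", matching the example in the answer.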
Finally, you can insert random useless source code into your app, so that the hacker is looking for a needle in a haystack. Insert random classes containing snippets from the internet, or just functions for calculating random things like the Fibonacci sequence. Make sure these classes compile, but aren't used by the actual functionality of the app. Add enough of these false classes, and the hacker would have a tough time finding your real code.
All in all, there's no way to protect your app 100%. You can make it harder, but not impossible. Your web server could be compromised, the hacker could figure out your keywords by monitoring multiple transaction amounts and the keywords you send for it, the hacker could painstakingly go through the source and figure out which code is a dummy.
You can only fight back, but never win.
At no point in the history of computing has it ever been possible to prevent reverse engineering of software when you give a working copy of it to your attacker. And, in all likelihood, it never will be possible.
With that understood, there is an obvious solution: don't give your secrets to your attacker. While you can't protect the contents of your APK, what you can protect is anything you don't distribute. Typically this is server-side software used for things like activation, payments, rule-enforcement, and other juicy bits of code. You can protect valuable assets by not distributing them in your APK. Instead, set up a server that responds to requests from your app, "uses" the assets (whatever that might mean) and then sends the result back to the app. If this model doesn't work for the assets you have in mind, then you may want to re-think your strategy.
Also, if your primary goal is to prevent app piracy: don't even bother. You've already burned more time and money on this problem than any anti-piracy measure could possibly ever hope to save you. The return on investment for solving this problem is so low that it doesn't make sense to even think about it.
First rule of app security: Any machine to which an attacker gains unrestricted physical or electronic access now belongs to your attacker, regardless of where it actually is or what you paid for it.
Second rule of app security: Any software that leaves the physical boundaries inside which an attacker cannot penetrate now belongs to your attacker, regardless of how much time you spent coding it.
Third rule: Any information that leaves those same physical boundaries that an attacker cannot penetrate now belongs to your attacker, no matter how valuable it is to you.
The foundations of information technology security are based on these three fundamental principles; the only truly secure computer is the one locked in a safe, inside a Faraday cage, inside a steel cage. There are computers that spend most of their service lives in just this state; once a year (or less), they generate the private keys for trusted root certification authorities (in front of a host of witnesses with cameras recording every inch of the room in which they are located).
Now, most computers are not used under these types of environments; they're physically out in the open, connected to the Internet over a wireless radio channel. In short, they're vulnerable, as is their software. They are therefore not to be trusted. There are certain things that computers and their software must know or do in order to be useful, but care must be taken to ensure that they can never know or do enough to cause damage (at least not permanent damage outside the bounds of that single machine).
You already knew all this; that's why you're trying to protect the code of your application. But, therein lies the first problem; obfuscation tools can make the code a mess for a human to try to dig through, but the program still has to run; that means the actual logic flow of the app and the data it uses are unaffected by obfuscation. Given a little tenacity, an attacker can simply un-obfuscate the code, and that's not even necessary in certain cases where what he's looking at can't be anything else but what he's looking for.
Instead, you should be trying to ensure that an attacker cannot do anything with your code, no matter how easy it is for him to obtain a clear copy of it. That means, no hard-coded secrets, because those secrets aren't secret as soon as the code leaves the building in which you developed it.
These key-values you have hard-coded should be removed from the application's source code entirely. Instead, they should be in one of three places; volatile memory on the device, which is harder (but still not impossible) for an attacker to obtain an offline copy of; permanently on the server cluster, to which you control access with an iron fist; or in a second data store unrelated to your device or servers, such as a physical card or in your user's memories (meaning it will eventually be in volatile memory, but it doesn't have to be for long).
Consider the following scheme. The user enters their credentials for the app from memory into the device. You must, unfortunately, trust that the user's device is not already compromised by a keylogger or Trojan; the best you can do in this regard is to implement multi-factor security, by remembering hard-to-fake identifying information about the devices the user has used (MAC/IP, IMEI, etc), and providing at least one additional channel by which a login attempt on an unfamiliar device can be verified.
The credentials, once entered, are obfuscated by the client software (using a secure hash), and the plain-text credentials discarded; they have served their purpose. The obfuscated credentials are sent over a secure channel to the certificate-authenticated server, which hashes them again to produce the data used to verify the validity of the login. This way, the client never knows what is actually compared to the database value, the app server never knows the plaintext credentials behind what it receives for validation, the data server never knows how the data it stores for validation is produced, and a man in the middle sees only gibberish even if the secure channel were compromised.
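A bare-bones sketch of the double-hashing flow described above, assuming SHA-256 at both tiers (the actual algorithms, salting, and transport are up to the implementer):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CredentialFlow {
    private static MessageDigest sha256() {
        try {
            return MessageDigest.getInstance("SHA-256");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Client side: hash the plaintext once and discard it immediately;
    // only this obfuscated form travels over the secure channel.
    public static byte[] clientObfuscate(String password) {
        return sha256().digest(password.getBytes(StandardCharsets.UTF_8));
    }

    // Server side: hash again with a per-user salt before comparing
    // against the stored verifier, so no single tier ever holds both
    // the plaintext and the database value.
    public static byte[] serverDigest(byte[] fromClient, byte[] salt) {
        MessageDigest md = sha256();
        md.update(salt);
        return md.digest(fromClient);
    }
}
```

A production system would use a deliberately slow hash (bcrypt, scrypt, PBKDF2) on the server side; SHA-256 here only illustrates the two-stage structure.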
Once verified, the server transmits back a token over the channel. The token is only useful within the secure session, is composed of either random noise or an encrypted (and thus verifiable) copy of the session identifiers, and the client application must send this token on the same channel to the server as part of any request to do something. The client application will do this many times, because it can't do anything involving money, sensitive data, or anything else that could be damaging by itself; it must instead ask the server to do this task. The client application will never write any sensitive information to persistent memory on the device itself, at least not in plain text; the client can ask the server over the secure channel for a symmetric key to encrypt any local data, which the server will remember; in a later session the client can ask the server for the same key to decrypt the data for use in volatile memory. That data won't be the only copy, either; anything the client stores should also be transmitted in some form to the server.
Obviously, this makes your application heavily dependent on Internet access; the client device cannot perform any of its basic functions without proper connection to and authentication by the server. No different than Facebook, really.
Now, the computer that the attacker wants is your server, because it and not the client app/device is the thing that can make him money or cause other people pain for his enjoyment. That's OK; you get much more bang for your buck spending money and effort to secure the server than in trying to secure all the clients. The server can be behind all kinds of firewalls and other electronic security, and additionally can be physically secured behind steel, concrete, keycard/pin access, and 24-hour video surveillance. Your attacker would need to be very sophisticated indeed to gain any kind of access to the server directly, and you would (should) know about it immediately.
The best an attacker can do is steal a user's phone and credentials and log in to the server with the limited rights of the client. Should this happen, just like losing a credit card, the legitimate user should be instructed to call an 800 number (preferably easy to remember, and not on the back of a card they'd carry in their purse, wallet or briefcase which could be stolen alongside the mobile device) from any phone they can access that connects them directly to your customer service. They state their phone was stolen, provide some basic unique identifier, and the account is locked, any transactions the attacker may have been able to process are rolled back, and the attacker is back to square one.
1. How can I completely avoid reverse engineering of an Android APK? Is this possible?
This isn't possible
2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
When somebody changes the .apk extension to .zip, then after unzipping, they can easily get all the resources (except Manifest.xml), and with APKtool they can get the real content of the manifest file too. So again, no.
3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
Again, no, but you can prevent it up to some level. That is:
Download a resource from the Web and do some encryption process
Use a pre-compiled native library (C, C++, JNI, NDK)
Always perform some hashing (MD5/SHA keys or any other logic)
Even with Smali, people can play with your code. All in all, it's not POSSIBLE.
100% avoidance of reverse engineering of the Android APK is not possible, but you can use these ways to avoid the extraction of more data, like source code, assets, and resources, from your APK:
Use ProGuard to obfuscate application code
Use NDK using C and C++ to put your application core and secure part of code in .so files
To secure resources, don't include all the important resources in the assets folder of the APK. Download these resources when the application first starts up.
Here are a few methods you can try:
Use obfuscation and tools like ProGuard.
Encrypt some part of the source code and data.
Use a proprietary inbuilt checksum in the app to detect tampering.
Introduce code to avoid loading in a debugger, that is, let the app have the ability to detect the debugger and exit / kill the debugger.
Separate the authentication as an online service.
Use application diversity
Use the fingerprinting technique, for example, hardware signatures of the device from different subsystems, before authenticating the device.
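The tamper-detection checksum from the list above could be sketched as follows; the payload would be something like the app's own dex bytes, and the expected digest would ideally be fetched from a server at build/run time rather than shipped inside the APK it is supposed to protect:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class TamperCheck {
    // Compute the reference digest (done once, at build time).
    public static byte[] digest(byte[] payload) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(payload);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // At run time, re-digest the payload and compare; isEqual gives a
    // constant-time comparison so timing reveals nothing.
    public static boolean matches(byte[] payload, byte[] expected) {
        return MessageDigest.isEqual(digest(payload), expected);
    }
}
```

As the surrounding answers stress, an attacker who patches the app can also patch out this check, so it raises the bar rather than closing the door.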
1. How can I completely avoid reverse engineering of an Android APK? Is this possible?
Impossible
2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
Impossible
3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
More tough - possible, but in fact it will be more tough mostly for the average user, who is just googling for hacking guides. If somebody really wants to hack your app - it will be hacked, sooner or later.
1. How can I completely avoid reverse engineering of an Android APK? Is this possible?
That is impossible
2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
Developers can take steps such as using tools like ProGuard to obfuscate their code, but up until now, it has been quite difficult to completely prevent someone from decompiling an app.
It's a really great tool and can increase the difficulty of 'reversing' your code whilst shrinking your code's footprint.
Integrated ProGuard support: ProGuard is now packaged with the SDK Tools. Developers can now obfuscate their code as an integrated part of a release build.
3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
While researching, I came to know about HoseDex2Jar. This tool will protect your code from decompiling, but it seems not to be possible to protect your code completely.
Some helpful links you can refer to:
Proguard, Android, and the Licensing Server
Securing Android LVL Applications
Stack Overflow question Is it really impossible to protect Android apps from reverse engineering?
Stack Overflow question How to prevent reverse engineering of an Android APK file to secure code?
The main question here is whether the dex files can be decompiled, and the answer is that they can be, "sort of". There are disassemblers like dedexer and smali.
ProGuard, properly configured, will obfuscate your code. DexGuard, which is a commercial extended version of ProGuard, may help a bit more. However, your code can still be converted into smali and developers with reverse-engineering experience will be able to figure out what you are doing from the smali.
Maybe choose a good license and enforce it by law in the best possible way.
Your client should hire someone that knows what they are doing, who can make the right decisions and can mentor you.
The talk above about you having some ability to change the transaction-processing system on the backend is absurd - you shouldn't be allowed to make such architectural changes, so don't expect to be able to.
My rationale on this:
Since your domain is payment processing, it's safe to assume that PCI DSS and/or PA-DSS (and potentially state/federal law) will be significant to your business - to be compliant you must show you are secure. Being insecure, then finding out (via testing) that you aren't secure, then fixing, retesting, and so on until security can be verified at a suitable level is an expensive, slow, high-risk way to succeed. Doing the right thing - thinking hard up front, committing experienced talent to the job, developing in a secure manner, then testing, fixing (less), and so on (less) until security can be verified at a suitable level - is an inexpensive, fast, low-risk way to succeed.
As someone who worked extensively on payment platforms, including one mobile payments application (MyCheck), I would say that you need to delegate this behaviour to the server. No user name or password for the payment processor (whichever it is) should be stored or hard-coded in the mobile application. That's the last thing you want, because the source can be understood even if you obfuscate the code.
Also, you shouldn't store credit cards or payment tokens on the application. Everything should be, again, delegated to a service you built. It will also allow you, later on, to be PCI-compliant more easily, and the credit card companies won't breathe down your neck (like they did for us).
If we want to make reverse engineering (almost) impossible, we can put the application on a highly tamper-resistant chip, which executes all the sensitive stuff internally and communicates via some protocol so that the host can drive the GUI. Even tamper-resistant chips are not 100% crack-proof; they just set the bar a lot higher than software methods. Of course, this is inconvenient: the application requires some little USB wart holding the chip to be inserted into the device.
The question doesn't reveal the motivation for wanting to protect this application so jealously.
If the aim is to improve the security of the payment method by concealing whatever security flaws the application may have (known or otherwise), it is completely wrongheaded. The security-sensitive bits should in fact be open-sourced, if that is feasible. You should make it as easy as possible for any security researcher who reviews your application to find those bits and scrutinize their operation, and to contact you. Payment applications should not contain any embedded certificates. That is to say, there should be no server application which trusts a device simply because it has a fixed certificate from the factory. A payment transaction should be made on the user's credentials alone, using a correctly designed end-to-end authentication protocol which precludes trusting the application, or the platform, or the network, etc.
If the aim is to prevent cloning, short of that tamper-proof chip, there isn't anything you can do to protect the program from being reverse-engineered and copied, so that someone incorporates a compatible payment method into their own application, giving rise to "unauthorized clients". There are ways to make it difficult to develop unauthorized clients. One would be to create checksums based on snapshots of the program's complete state: all state variables, for everything. GUI, logic, whatever. A clone program will not have exactly the same internal state. Sure, it is a state machine which has similar externally visible state transitions (as can be observed by inputs and outputs), but hardly the same internal state. A server application can interrogate the program: what is your detailed state? (i.e. give me a checksum over all of your internal state variables). This can be compared against dummy client code which executes on the server in parallel, going through the genuine state transitions. A third party clone will have to replicate all of the relevant state changes of the genuine program in order to give the correct responses, which will hamper its development.
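The state-checksum interrogation could be sketched like this - the particular state variables (screen ID, balance, last action) are illustrative placeholders for "all state variables, for everything":

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class StateChecksum {
    // Fold every relevant state variable into one digest; the server runs
    // the same computation against its shadow copy of the client state
    // and compares the results to spot a clone with divergent internals.
    public static byte[] snapshot(int screenId, long balance, String lastAction) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(ByteBuffer.allocate(12).putInt(screenId).putLong(balance).array());
            md.update(lastAction.getBytes(StandardCharsets.UTF_8));
            return md.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A clone must reproduce every state transition bit-for-bit to answer the server's challenges correctly, which is what hampers its development.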
The other upvoted answers here are correct. I just want to provide another option.
For certain functionality that you deem important you can host the WebView control in your app. The functionality would then be implemented on your web server. It will look like it's running in your application.
Agreed with @Muhammad Saqib here:
https://stackoverflow.com/a/46183706/2496464
And @Mumair gives good starting steps:
https://stackoverflow.com/a/35411378/474330
It is always safe to assume that everything you distribute to your user's device belongs to the user. Plain and simple. You may be able to use the latest tools and procedures to encrypt your intellectual property, but there is no way to prevent a determined person from 'studying' your system. And even if current technology makes it difficult for them to gain unwanted access, there might be some easy way tomorrow, or even just in the next hour!
Thus, here comes the equation:
When it comes to money, we always assume that client is untrusted.
Even in as simple as an in-game economy. (Especially in games! There are more 'sophisticated' users there and loopholes spread in seconds!)
How do we stay safe?
Most, if not all, of our key processing systems (and the database, of course) are located on the server side. Between the client and server lie encrypted communications, validations, etc. That is the idea of a thin client.
I suggest you to look at Protect Software Applications from Attacks. It's a commercial service, but my friend's company used this and they are glad to use it.
There is no way to completely avoid reverse engineering of an APK file. To protect application assets, resources, you can use encryption.
Encryption will make it harder to use them without decryption, and choosing a strong encryption algorithm will make cracking harder.
Add some spoof code to your main logic to make it harder to crack.
If you can write your critical logic in any native language, that will surely make it harder to decompile.
Using any third party security frameworks, like Quixxi
APK signature scheme v2 in Android 7.0 (Nougat)
The PackageManager class now supports verifying apps using the APK signature scheme v2. The APK signature scheme v2 is a whole-file signature scheme that significantly improves verification speed and strengthens integrity guarantees by detecting any unauthorized changes to APK files.
To maintain backward-compatibility, an APK must be signed with the v1 signature scheme (JAR signature scheme) before being signed with the v2 signature scheme. With the v2 signature scheme, verification fails if you sign the APK with an additional certificate after signing with the v2 scheme.
APK signature scheme v2 support will be available later in the N Developer Preview.
http://developer.android.com/preview/api-overview.html#apk_signature_v2
Basically it's not possible. It will never be possible. However, there is hope. You can use an obfuscator to make it so some common attacks are a lot harder to carry out including things like:
Renaming methods/classes (so in the decompiler you get types like a.a)
Obfuscating control flow (so in the decompiler the code is very hard to read)
Encrypting strings and possibly resources
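String encryption of the kind listed above might be sketched as follows; the key and IV handling here is deliberately simplistic (in a shipped app the key would itself be derived or obfuscated, never a plain constant) and only shows the mechanism:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;

public class StringVault {
    // Shared helper: AES-CBC with PKCS5 padding in the requested mode.
    private static byte[] crypt(int mode, byte[] data, byte[] key, byte[] iv) {
        try {
            Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
            return c.doFinal(data);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Build time: encrypt string literals before embedding them.
    public static byte[] encrypt(byte[] plain, byte[] key, byte[] iv) {
        return crypt(Cipher.ENCRYPT_MODE, plain, key, iv);
    }

    // Run time: decrypt a literal just before it is needed.
    public static byte[] decrypt(byte[] cipherText, byte[] key, byte[] iv) {
        return crypt(Cipher.DECRYPT_MODE, cipherText, key, iv);
    }
}
```

Commercial obfuscators automate exactly this transformation over all string constants, plus the control-flow and renaming passes in the list above.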
I'm sure there are others, but those are the main ones. I work for a company called PreEmptive Solutions on a .NET obfuscator. They also have a Java obfuscator that works for Android as well, called DashO.
Obfuscation always comes with a price, though. Notably, performance is usually worse, and it requires some extra time around releases usually. However, if your intellectual property is extremely important to you, then it's usually worth it.
Otherwise, your only choice is to make it so that your Android application just passes through to a server that hosts all of the real logic of your application. This has its own share of problems, because it means users must be connected to the Internet to use your app.
Also, it's not just Android that has this problem. It's a problem on every app store. It's just a matter of how difficult it is to get to the package file (for example, I don't believe it's very easy on iPhones, but it's still possible).
100% security of the source code and resources is not possible in Android. But you can make it a little bit difficult for the reverse engineer. You can find more details on this in the links below:
Visit Saving constant values securely
and Mobile App Security Best Practices for App Developers.
If your app is this sensitive then you should consider the payment processing part at the server side. Try to change your payment processing algorithms. Use an Android app only for collecting and displaying user information (i.e., account balance) and rather than processing payments within Java code, send this task to your server using a secure SSL protocol with encrypted parameters. Create a fully encrypted and secure API to communicate with your server.
Of course, it can also be hacked too and it has nothing to do with source code protection, but consider it another security layer to make it harder for hackers to trick your app.
It’s not possible to completely avoid reverse engineering, but by making them more complex internally, you could make it more difficult for attackers to see the clear operation of the app, which may reduce the number of attack vectors.
If the application handles highly sensitive data, various techniques exist which can increase the complexity of reverse engineering your code. One technique is to use C/C++ to limit easy runtime manipulation by the attacker. There are ample C and C++ libraries that are very mature and easy to integrate with and Android offers JNI.
An attacker must first circumvent the debugging restrictions in order to attack the application on a low level. This adds further complexity to an attack. Android applications should have android:debuggable="false" set in the application manifest to prevent easy runtime manipulation by an attacker or malware.
Trace Checking – An application can determine whether or not it is currently being traced by a debugger or other debugging tool. If being traced, the application can perform any number of possible attack response actions, such as discarding encryption keys to protect user data, notifying a server administrator, or other such type responses in an attempt to defend itself. This can be determined by checking the process status flags or using other techniques like comparing the return value of ptrace attach, checking the parent process, blacklist debuggers in the process list or comparing timestamps on different places of the program.
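On Linux-based systems such as Android, one common form of the trace check above is reading the TracerPid field of /proc/self/status, which is non-zero while a ptrace-based debugger is attached. A sketch (the parsing is split out so it can be exercised without an attached debugger):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class TraceCheck {
    // Parse a /proc/<pid>/status body and report whether a tracer
    // is attached (TracerPid != 0).
    public static boolean isTraced(String procStatus) {
        for (String line : procStatus.split("\n")) {
            if (line.startsWith("TracerPid:")) {
                return Integer.parseInt(
                        line.substring("TracerPid:".length()).trim()) != 0;
            }
        }
        return false;
    }

    // On-device use: feed it the live contents of /proc/self/status.
    public static boolean selfTraced() throws java.io.IOException {
        return isTraced(new String(Files.readAllBytes(Paths.get("/proc/self/status"))));
    }
}
```

On a positive result the app would then take one of the responses listed above, such as discarding encryption keys or notifying a server administrator.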
Optimizations - To hide advanced mathematical computations and other types of complex logic, utilizing compiler optimizations can help obfuscate the object code so that it cannot easily be disassembled by an attacker, making it more difficult for an attacker to gain an understanding of the particular code. In Android this can more easily be achieved by utilizing natively compiled libraries with the NDK. In addition, using an LLVM Obfuscator or any protector SDK will provide better machine code obfuscation.
Stripping binaries – Stripping native binaries is an effective way to increase the time and skill level required of an attacker to view the makeup of your application’s low-level functions. Stripping a binary removes its symbol table, so that an attacker cannot easily debug or reverse engineer the application. You can refer to techniques used on GNU/Linux systems, such as sstrip, or use UPX.
Finally, you should be aware of obfuscation and tools like ProGuard.
I know some banking apps are using DexGuard which provides obfuscation as well as encryption of classes, strings, assets, resource files and native libraries.
I can see that there are good answers for this question. In addition, you can use Facebook ReDex to optimize the code. ReDex works at the .dex level, whereas ProGuard works at the .class level.
Tool: Using ProGuard in your application can make it harder to reverse engineer your application.
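Enabling ProGuard (R8 in current toolchains) is a build configuration change; a typical release block looks like this (a sketch of the standard Gradle setup):

```groovy
android {
    buildTypes {
        release {
            // Shrinks and obfuscates release builds
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                          'proguard-rules.pro'
        }
    }
}
```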
Nothing is secure once you put it in end users' hands, but some common practices may make it harder for an attacker to steal data.
Put your main logic (algorithms) on the server side.
Make sure communication between server and client is secured via SSL/TLS (HTTPS), or use key-pair-based techniques (key generation algorithms such as ECC and RSA). Ensure that sensitive information remains end-to-end encrypted.
Use sessions and expire them after a specific time interval.
Encrypt resources and fetch them from the server on demand.
Alternatively, make a hybrid app that accesses the system via a WebView, keeping resources and code protected on the server.
There are multiple approaches; obviously, you have to trade off performance against security.
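The key-pair point in the list above can be illustrated with plain JCA: in an ECDH exchange, both sides derive the same shared secret without the secret ever crossing the wire (a minimal sketch; in a real deployment the server key pair stays server-side and the raw secret feeds a KDF rather than being used directly):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KeyAgreement;

public class EcdhDemo {
    // Each party derives the shared secret from its own private key
    // and the peer's public key; the results are identical.
    public static byte[] sharedSecret(KeyPair mine, KeyPair peer) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("ECDH");
        ka.init(mine.getPrivate());
        ka.doPhase(peer.getPublic(), true);
        return ka.generateSecret();
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256); // NIST P-256

        KeyPair client = kpg.generateKeyPair();
        KeyPair server = kpg.generateKeyPair();

        byte[] clientSecret = sharedSecret(client, server);
        byte[] serverSecret = sharedSecret(server, client);
        System.out.println("secrets match: "
                + Arrays.equals(clientSecret, serverSecret));
    }
}
```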
Aren't Trusted Platform Module (TPM) chips supposed to manage protected code for you?
They are becoming common on PCs (especially Apple ones) and they may already exist in today's smartphone chips. Unfortunately, there isn't any OS API to make use of it yet. Hopefully, Android will add support for this one day. That's also the key to clean content DRM (which Google is working on for WebM).
How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
An APK file is protected with digests (historically SHA-1); you can see them in the files inside the META-INF folder of the APK. If you extract an APK, change any of its content, and zip it again, the new APK will not run on an Android device, because the recorded hashes will no longer match.
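The integrity check described above boils down to digest comparison; this sketch shows why any tampering is detected (SHA-1 is used here only because the answer mentions it; modern APK signing schemes use SHA-256):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class DigestCheck {
    public static byte[] sha1(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-1").digest(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "original file content".getBytes(StandardCharsets.UTF_8);
        byte[] tampered = "tampered file content".getBytes(StandardCharsets.UTF_8);
        // Any change to the content produces a different digest, so the
        // hash recorded at signing time no longer matches after tampering.
        System.out.println("digests match: "
                + Arrays.equals(sha1(original), sha1(tampered)));
    }
}
```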
Once attackers have the app on their phone, they have full access to its memory. If you want to make it harder to hack, avoid exposing static memory addresses that can be found directly with a debugger. Attackers may also attempt a stack buffer overflow if they find a writable buffer with a fixed limit, so validate input lengths: if the input exceeds the limit, reject it, so they cannot inject code.
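The length check suggested above can be as simple as this (Java itself is memory-safe, so the snippet only illustrates the principle of rejecting over-long input before it reaches a fixed-size buffer; LIMIT is an arbitrary example value):

```java
public class InputGuard {
    static final int LIMIT = 64; // maximum accepted input length (example value)

    // Reject over-long input instead of copying it into a fixed-size
    // buffer; in unmanaged languages this check is what prevents a
    // classic stack buffer overflow.
    public static boolean accept(String input) {
        return input != null && input.length() <= LIMIT;
    }
}
```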
Just an addition to already good answers above.
Another trick I know is to store valuable code in a separate Java library, then add that library to your Android project. A C .so file would be better, but an Android library will do.
This way the valuable code kept in the library is harder to find after decompiling.

Prevent android app reverse engineering on Airwatch managed devices

From the Airwatch features and documentation, they mention that the apps are containerized. And thus, all the app content is safely encrypted and not easily exposed.
For rooted devices, Airwatch can detect such devices and perform a remote wipe of corporate data.
I wanted to check if Airwatch can guarantee that the application code cannot be reverse-engineered, to extract sensitive data from the code base, like API keys, Encryption keys etc.
While I cannot advise you in relation to Airwatch, because I am not familiar with it, I can alert you that if you are storing this type of sensitive information in your mobile app, then you are already at risk, because reverse engineering secrets is not that hard, as I show in the article How to Extract an API key from a Mobile App by Static Binary Analysis:
Summary
Using MobSF to reverse engineer an APK for a mobile app allows us to quickly extract an API key, and it also gives us a huge amount of information we can use to perform further analysis that may reveal more attack vectors into the mobile app and API server. It is not uncommon to also find secrets for accessing third-party services among this info, or in the decompiled source code that is available to download in smali and java formats.
Now you may be questioning yourself as to how you would protect the API key, and for that I recommend starting by reading this series of articles about Mobile Api Security Techniques.
The lesson here is that shipping your API key or any other secret in the mobile app code is like locking the door of your home but leaving the key under the mat!
or even with a MitM attack in this other article:
Conclusion
While we can use advanced techniques, like JNI/NDK, to hide the API key in the mobile app code, this will not stop someone from performing a MitM attack in order to steal the API key. In fact, a MitM attack is easy to the point that it can even be performed by non-developers.
We have highlighted some good resources that show you other techniques to secure mobile APIs, like certificate pinning, even though it can be challenging to implement and maintain. We also noted that certificate pinning by itself is not enough, since it can be bypassed, so other techniques need to be employed to guarantee that no one can steal your API key. If you have completed the deep dive I recommended, you will know by now that, in the context of a mobile API, the API key can be protected from being stolen through a MitM attack. So if you have not already done so, please read the articles I linked to in the section about mitigating MitM attacks.
Also, more skilled attackers can hook introspection frameworks, like Frida and Xposed, at run time to intercept and modify the behavior of any of your running code. So even if they are not able to decrypt your data, they can intercept the content after you have decrypted it in your application. To do this, they just need to know where to hook into your code, which they achieve by decompiling and reverse engineering your mobile app with tools like the Mobile Security Framework or APKTool, though more tools exist in the open-source community.
Mobile Security Framework
Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
APKTool
A tool for reverse engineering 3rd party, closed, binary Android apps. It can decode resources to nearly original form and rebuild them after making some modifications. It also makes working with an app easier because of the project like file structure and automation of some repetitive tasks like building apk, etc.

Android Dev: Run custom code in the Trusted Execution Environment (TEE), extending the Keystore

I am relatively new to Android development and have never used the Android Keystore before. But I am familiar with the (theoretical) concepts.
My problem is that I have to generate and store a secret key and later use it to run cryptographic primitives on some data. Ideally, the key is protected against extraction in the best possible way, i.e. key generation and all cryptographic operations run only inside a secure enclave, such that only the payload leaves the trusted zone.
As far as I understand, this happens automatically if the "correct" Keystore API is used, the hardware device supports it, and the key's usage is flagged appropriately. However, the supported algorithms are limited.
The question: Can I write own custom code that is executed inside the Trust Zone? If yes, could you point me to a good resource or tutorial?
Background: I need to do some fancy modern stuff over elliptic curves (a Barreto-Naehrig curve) with Optimal Structure-Preserving Signatures by Abe and SXDH-based Groth-Sahai proofs. Obviously, this is not supported by the Keystore API out of the box. At the moment the code is implemented in C++ and compiled as native Android code. The implementation is semantically correct but does not take any special care of secure key storage at the implementation level, because it is all academic prototype development. At the moment the key is just read from/written to a plain file, and all operations are executed in the same user-land (main) process.
The TEE is, in most cases, only available to the OEM and there's no SDK to access the TEE. The exception to this is Kinibi from Trustonic who do provide an SDK to their TEE. In order to access this you would need to have the SDK to develop the Trusted App and some form of development board (HIKEY) to test it. To deploy into a handset you would need to have some form of agreement with Trustonic that would allow users to download and install the app using an OTA server to manage the key exchange.
Trusted code can be written with the help of Trustonic, which provides a product called Application Security; with its help, one can get access to the TEE environment.
Another way is to use the Trusty TEE OS, which ships with Android and runs in parallel with, but isolated from, the main OS.
It is much like a system-on-chip process running separately, which can be accessed more safely via IPC.
Both approaches are quite complicated.

Protect Android App from reverse engineering

I want to secure my app 100% and don't want hackers to get inside.
These are the solutions which I found from Stack Overflow.
Integrating Proguard in the app.
Keeping most important part of the code in C/C++.
Using the NDK to write the code natively into a .so file.
Encrypting the API keys using MD5.
So is there any other way to protect my Android app fully from hackers, or which is the best solution among those mentioned above?
These are the references which I found
How to avoid reverse engineering of an APK file?
How to prevent reverse engineering of an Android APK file to secure code?
There is simply no way of completely preventing reverse engineering of your app. Given enough resources, programs do eventually get reverse engineered. It all depends on how motivated your adversary is.
Integrating Proguard in the app
The most efficient countermeasure against reverse engineering is obfuscation. That is what ProGuard does (though not too well, from what I gather). ProGuard's website says it is an optimizer and only provides minimal protection against RE. Obfuscation only makes the process of reverse engineering harder; it does NOT prevent it.
Keeping most important part of the code in C/C++.
It is a general misconception that writing native code will prevent reverse engineering. Writing in C/C++ compiles and builds your code into machine language, which is harder to reverse engineer than Java bytecode, but it still does not prevent it completely.
Also, when writing code in C/C++, unless you are a hardcore systems programmer, you have more chances of introducing a lot of bugs:
nasty segmentation faults
memory leaks
use after free
On top of all these, you might end up introducing a multitude of vulnerabilities in your app, from information disclosures to buffer overflows.
Languages which let you manage memory yourself (like C/C++) are immensely powerful, which also makes it easier to shoot yourself in the foot. That is another reason why Java is generally considered safer (since memory is managed by the JVM with the help of the GC).
So, unless there is an absolute need to write code in C/C++ (say, you are writing a codec), please don't write in C just to mitigate reverse engineering.
Encrypting the api keys using MD5
MD5 is a hashing algorithm which hashes data into a 16-byte digest, and it is also considered broken. You can only hash with MD5, not encrypt with it.
Even if you encrypt your keys with an algorithm like AES, you will need to store the decryption key somewhere to decrypt them in the future. An attacker can easily extract that key, either from program memory (while running) or from persistent storage, and then use it to decrypt your API keys.
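The difference matters in code: MD5 is one-way, while AES is reversible but needs a stored key, which is exactly the weakness described above. A minimal JCA sketch:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class KeyHandling {
    // MD5 is a one-way hash: it always yields a 16-byte digest and there
    // is no way back to the input, so it cannot "encrypt" an API key.
    public static byte[] md5(byte[] data) throws Exception {
        return MessageDigest.getInstance("MD5").digest(data);
    }

    // AES-GCM can encrypt and decrypt, but note the catch: the SecretKey
    // must be stored somewhere the app can read it, which is exactly
    // where an attacker will look.
    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(plain);
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] cipherText) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(cipherText);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("MD5 digest length: " + md5("api-key".getBytes(StandardCharsets.UTF_8)).length);

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        byte[] ct = encrypt(key, iv, "my-api-key".getBytes(StandardCharsets.UTF_8));
        System.out.println("roundtrip: " + new String(decrypt(key, iv, ct), StandardCharsets.UTF_8));
    }
}
```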
Suggestion
Any sensitive part of the code, which you want to prevent from reverse engineering, move it to a remote server. Say, you have come up with a cool algorithm which you do not want anyone to reverse engineer.
I would suggest, building a REST API in the server which accepts data from clients, run the algorithm and return the results. Whenever you need to make use of this algorithm, you can make a REST call to your server from the app, and then just make use of the results you get from there in your app.
All sensitive and confidential data like your API keys can also be stored in the server and never exposed directly in the app.
This would make sure that the sensitive parts of your code are not disclosed to your adversaries.
You can read my article on Medium Protect Android App from reverse engineering.
I'll show you how to keep Android apps from being stolen. The big picture of our problem is the app's data, not its code, so why is there no built-in framework that obfuscates strings based on the developer's signing fingerprint?
I know ProGuard is the framework that obfuscates function and class names, but every time I hack an app I don't need to know a function or class name :)
What I need is the base data URL, or any of the strings shown to the user on the screen.
So an effective way to protect an APK is to obfuscate strings using the developer's fingerprint, so that when I decompile the app I can't get service URLs or other important strings without the original fingerprint.
there is a framework that can do that is called StringCare
https://github.com/StringCare/AndroidLibrary.
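As a toy illustration of the idea (this is not StringCare's actual mechanism, and the fingerprint value below is a made-up placeholder), XOR-ing strings with a keystream derived from the signing fingerprint keeps the plain text out of the decompiled APK:

```java
import java.nio.charset.StandardCharsets;

public class StringObfuscator {
    // XOR each byte with a repeating keystream. Real tools do more than
    // this, but the principle is the same: the plain string never
    // appears as a constant in the APK.
    public static byte[] xor(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical placeholder for the app's signing fingerprint
        byte[] fingerprint = "signing-cert-fingerprint".getBytes(StandardCharsets.UTF_8);
        byte[] obfuscated = xor("https://api.example.com/v1".getBytes(StandardCharsets.UTF_8),
                                fingerprint);
        // A decompiler only sees the obfuscated bytes; the URL is
        // recovered at run time with the same fingerprint.
        String recovered = new String(xor(obfuscated, fingerprint), StandardCharsets.UTF_8);
        System.out.println(recovered);
    }
}
```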

how to save private key for android cordova app

Let's imagine an android/phonegap app that connects with a server for data. I want to use some private key, so that you can use the API from the server only if you have the app.
I found there is a plugin for ios called keychain, but have not found something similar for android. Now, this must be a solved problem. A lot of apps use API's to servers on android. How do you solve this?
My main objective is not generating private keys, but keeping the server's API private (usable only by those who have the app). If the way to go is to generate a private key (which is what I thought at first), then I cannot find a way to save a private key securely when using PhoneGap on Android. The closest I've got is putting it in JavaScript and obfuscating the code.
I found the android keystore, but if I understand correctly that's for cryptography which I don't need.
It is virtually impossible to secure a compile-time key; it is in the code. Securing it from the user is even more difficult.
Depending on the platform, the executable may be open to viewing simply by examining the file. Other platforms, including iOS, encrypt the executable, so it is not available until execution time.
Obfuscation may be your best approach, but understand that it only increases the work factor minimally.
Saving the key in a keychain or keystore does not really help in protecting it from the user; the user has access, and the key is still in the code.
Also, you need to use HTTPS with a current server configuration and pin the certificate.
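On Android, certificate pinning can be declared via a network security config (a sketch; the domain and pin values are placeholders, and the file is referenced from the manifest with android:networkSecurityConfig="@xml/network_security_config"):

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">api.example.com</domain>
        <pin-set>
            <!-- Base64 SHA-256 of the server certificate's public key;
                 include a backup pin to survive key rotation -->
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```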
If you need better security, consider creating a key at first run or requiring the user to log in.
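Creating a key at first run instead of shipping one can be sketched like this (where to store it is up to the platform; on Android the generated key would ideally be wrapped by the Keystore rather than written to disk in the clear):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class FirstRunKey {
    // Generate a random 256-bit key on first run instead of embedding a
    // fixed key in the binary; persist the Base64 form in app-private
    // storage so later runs reuse the same key.
    public static String generateKey() {
        byte[] key = new byte[32];
        new SecureRandom().nextBytes(key);
        return Base64.getEncoder().encodeToString(key);
    }

    public static void main(String[] args) {
        System.out.println("generated key: " + generateKey());
    }
}
```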
