How secure is using HTTPS on Android?

If I have an app on the store, is it possible to download it and use a debugger, introspection, or some other technique to see the unencrypted request strings inside the app? If yes, what could help to prevent such techniques?

If I have an app on the store, is it possible to download it and use a debugger, introspection, or some other technique to see the unencrypted request strings inside the app?
Sure.
If yes, what could help to prevent such techniques?
Stop writing software for end users.
Software that resides purely on a server somewhere can be assumed to be relatively safe from inspection. Software that runs on user equipment -- whether that is JavaScript in a Web browser, an Android app, a Windows desktop app, or whatever -- is subject to analysis. Different environments merely make analysis easier or harder; for example, it is generally easier to inspect the goings-on of a Web app than of an Android app. Obfuscation, such as that offered by ProGuard, will help incrementally, as can writing your HTTP logic in native C/C++ code, but neither approach is proof against reverse engineering, let alone debugging.
Note that this has little to do with HTTPS, as the same statements hold for anything done by any program.

Related

Security constraints if complete Source Code can be extracted from APK/IPA [duplicate]

I am developing a payment processing app for Android, and I want to prevent a hacker from accessing any resources, assets or source code from the APK file.
If someone changes the .apk extension to .zip then they can unzip it and easily access all the app's resources and assets, and using dex2jar and a Java decompiler, they can also access the source code. It's very easy to reverse engineer an Android APK file - for more details see Stack Overflow question Reverse engineering from an APK file to a project.
I have used the ProGuard tool provided with the Android SDK. When I reverse engineer an APK file generated using a signed keystore and ProGuard, I get obfuscated code.
However, the names of Android components remain unchanged, and some code, like key-values used in the app, remains unchanged. As per the ProGuard documentation, the tool can't obfuscate components mentioned in the manifest file.
Now my questions are:
How can I completely prevent reverse engineering of an Android APK? Is this possible?
How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
1. How can I completely avoid reverse engineering of an Android APK? Is this possible?
AFAIK, there is no trick that completely prevents reverse engineering.
And as @inazaruk put it very well: whatever you do to your code, a potential attacker is able to change it in any way she or he finds feasible. You basically can't protect your application from being modified, and any protection you put in there can be disabled/removed.
2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
You can do different tricks to make hacking harder though. For example, use obfuscation (if it's Java code). This usually slows down reverse engineering significantly.
3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
As everyone says, and as you probably know, there's no 100% security. But the place to start for Android, which Google has built in, is ProGuard. If you have the option of including shared libraries, you can put the needed code in C++ to verify file sizes, integrity, etc. If you need to add an external native library to your APK's library folder on every build, you can do it as follows.
Put the library in the native library path, which defaults to "libs" in your project folder. If you built the native code for the 'armeabi' target, put it under libs/armeabi. If it was built with armeabi-v7a, put it under libs/armeabi-v7a.
<project>/libs/armeabi/libstuff.so
AFAIK, you cannot protect the files in the /res directory any more than they are protected right now.
However, there are steps you can take to protect your source code, or at least what it does if not everything.
Use tools like ProGuard. These will obfuscate your code and make it harder, if not impossible, to read when decompiled.
Move the most critical parts of the service out of the app and into a web service, hidden behind a server-side language like PHP. For example, if you have an algorithm that's taken you a million dollars to write, you obviously don't want people stealing it out of your app. Move the algorithm so it processes the data on a remote server, and use the app simply to provide it with the data. Or use the NDK to write such code natively in .so files, which are much less likely to be decompiled than APKs. I don't think a good decompiler for .so files even exists as of now (and even if it did, it wouldn't be as good as the Java decompilers). Additionally, as @nikolay mentioned in the comments, you should use SSL when interacting between the server and device.
When storing values on the device, don't store them in a raw format. For example, suppose you have a game and you're storing the amount of in-game currency the user has in SharedPreferences; let's assume it's 10000 coins. Instead of saving 10000 directly, save it using an algorithm like ((currency*2)+1)/13, so instead of 10000 you save 1538.53846154 into SharedPreferences. The above example isn't perfect, though, and you'll have to work to come up with an equation that won't lose currency to rounding errors, etc.
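That scheme can be sketched as follows. This is a minimal illustration only; the class and method names are invented, and as the text notes, a real app should pick a transform (or real encryption) that avoids floating-point rounding loss:

```java
// Minimal sketch of obscuring a stored value instead of saving it raw.
// Class/method names are hypothetical; a real app should prefer an
// integer transform (or real encryption) to avoid rounding loss.
public class CoinCodec {
    // Encode with the example transform from the text: ((currency*2)+1)/13
    public static double encode(long coins) {
        return ((coins * 2.0) + 1.0) / 13.0;
    }

    // Decode: invert the transform to recover the original amount
    public static long decode(double stored) {
        return Math.round((stored * 13.0 - 1.0) / 2.0);
    }

    public static void main(String[] args) {
        double stored = encode(10000);       // ~1538.5384615384614
        System.out.println(stored);
        System.out.println(decode(stored));  // 10000
    }
}
```

An attacker reading SharedPreferences sees 1538.538..., not the real balance, though of course anyone who recovers the transform from the code can invert it.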
You can do a similar thing for server side tasks. Now for an example, let's actually take your payment processing app. Let's say the user has to make a payment of $200. Instead of sending a raw $200 value to the server, send a series of smaller, predefined, values that add up to $200. For example, have a file or table on your server that equates words with values. So let's say that Charlie corresponds to $47, and John to $3. So instead of sending $200, you can send Charlie four times and John four times. On the server, interpret what they mean and add it up. This prevents a hacker from sending arbitrary values to your server, as they do not know which word corresponds to which value. As an added measure of security, you could have an equation similar to point 3 for this as well, and change the keywords every n number of days.
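The server side of that word-to-value scheme might look like this (the words and amounts are the hypothetical ones from the example above):

```java
import java.util.List;
import java.util.Map;

// Sketch of the keyword scheme from the text: the server maps agreed
// code words to amounts and sums them. The words and values here are
// the example's hypothetical ones (Charlie = $47, John = $3).
public class KeywordLedger {
    private static final Map<String, Integer> WORD_VALUES =
            Map.of("Charlie", 47, "John", 3);

    // Server side: translate the received words back into a total amount.
    // Unknown words are rejected so arbitrary values can't be injected.
    public static int total(List<String> words) {
        int sum = 0;
        for (String w : words) {
            Integer v = WORD_VALUES.get(w);
            if (v == null) throw new IllegalArgumentException("unknown word: " + w);
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        // "Charlie" four times and "John" four times => 4*47 + 4*3 = 200
        System.out.println(total(List.of("Charlie", "Charlie", "Charlie", "Charlie",
                                         "John", "John", "John", "John")));
    }
}
```

Rotating the word table every n days, as the text suggests, just means regenerating this map on the server and pushing the new vocabulary to authenticated clients.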
Finally, you can insert random useless source code into your app, so that the hacker is looking for a needle in a haystack. Insert random classes containing snippets from the internet, or just functions for calculating random things like the Fibonacci sequence. Make sure these classes compile, but aren't used by the actual functionality of the app. Add enough of these false classes, and the hacker would have a tough time finding your real code.
All in all, there's no way to protect your app 100%. You can make it harder, but not impossible. Your web server could be compromised, the hacker could figure out your keywords by monitoring multiple transaction amounts and the keywords you send for them, and the hacker could painstakingly go through the source and figure out which code is a dummy.
You can only fight back, but never win.
At no point in the history of computing has it ever been possible to prevent reverse-engineering of software when you give a working copy of it to your attacker. Also, in most likelihood, it never will be possible.
With that understood, there is an obvious solution: don't give your secrets to your attacker. While you can't protect the contents of your APK, what you can protect is anything you don't distribute. Typically this is server-side software used for things like activation, payments, rule-enforcement, and other juicy bits of code. You can protect valuable assets by not distributing them in your APK. Instead, set up a server that responds to requests from your app, "uses" the assets (whatever that might mean) and then sends the result back to the app. If this model doesn't work for the assets you have in mind, then you may want to re-think your strategy.
Also, if your primary goal is to prevent app piracy: don't even bother. You've already burned more time and money on this problem than any anti-piracy measure could possibly ever hope to save you. The return on investment for solving this problem is so low that it doesn't make sense to even think about it.
First rule of app security: Any machine to which an attacker gains unrestricted physical or electronic access now belongs to your attacker, regardless of where it actually is or what you paid for it.
Second rule of app security: Any software that leaves the physical boundaries inside which an attacker cannot penetrate now belongs to your attacker, regardless of how much time you spent coding it.
Third rule: Any information that leaves those same physical boundaries that an attacker cannot penetrate now belongs to your attacker, no matter how valuable it is to you.
The foundations of information technology security are based on these three fundamental principles; the only truly secure computer is the one locked in a safe, inside a Faraday cage, inside a steel cage. There are computers that spend most of their service lives in just this state; once a year (or less), they generate the private keys for trusted root certification authorities (in front of a host of witnesses with cameras recording every inch of the room in which they are located).
Now, most computers are not used under these types of environments; they're physically out in the open, connected to the Internet over a wireless radio channel. In short, they're vulnerable, as is their software. They are therefore not to be trusted. There are certain things that computers and their software must know or do in order to be useful, but care must be taken to ensure that they can never know or do enough to cause damage (at least not permanent damage outside the bounds of that single machine).
You already knew all this; that's why you're trying to protect the code of your application. But, therein lies the first problem; obfuscation tools can make the code a mess for a human to try to dig through, but the program still has to run; that means the actual logic flow of the app and the data it uses are unaffected by obfuscation. Given a little tenacity, an attacker can simply un-obfuscate the code, and that's not even necessary in certain cases where what he's looking at can't be anything else but what he's looking for.
Instead, you should be trying to ensure that an attacker cannot do anything with your code, no matter how easy it is for him to obtain a clear copy of it. That means, no hard-coded secrets, because those secrets aren't secret as soon as the code leaves the building in which you developed it.
These key-values you have hard-coded should be removed from the application's source code entirely. Instead, they should be in one of three places; volatile memory on the device, which is harder (but still not impossible) for an attacker to obtain an offline copy of; permanently on the server cluster, to which you control access with an iron fist; or in a second data store unrelated to your device or servers, such as a physical card or in your user's memories (meaning it will eventually be in volatile memory, but it doesn't have to be for long).
Consider the following scheme. The user enters their credentials for the app from memory into the device. You must, unfortunately, trust that the user's device is not already compromised by a keylogger or Trojan; the best you can do in this regard is to implement multi-factor security, by remembering hard-to-fake identifying information about the devices the user has used (MAC/IP, IMEI, etc), and providing at least one additional channel by which a login attempt on an unfamiliar device can be verified.
The credentials, once entered, are obfuscated by the client software (using a secure hash), and the plain-text credentials discarded; they have served their purpose. The obfuscated credentials are sent over a secure channel to the certificate-authenticated server, which hashes them again to produce the data used to verify the validity of the login. This way, the client never knows what is actually compared to the database value, the app server never knows the plaintext credentials behind what it receives for validation, the data server never knows how the data it stores for validation is produced, and a man in the middle sees only gibberish even if the secure channel were compromised.
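The double-hash flow described above can be sketched like this. The salts, "pepper", and method names are invented for illustration, and bare SHA-256 is used only to keep the sketch short; a production system would use a slow, salted KDF such as bcrypt, scrypt, or Argon2:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of the double-hash idea: the client hashes the credentials
// before sending, and the server hashes what it receives again before
// storing/comparing. Neither the wire value nor the database value
// reveals the plaintext. Illustration only: use a real KDF in practice.
public class CredentialFlow {
    static String sha256Hex(String in) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256")
                .digest(in.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Client side: obfuscate before the plaintext leaves volatile memory
    public static String clientHash(String password, String salt) throws Exception {
        return sha256Hex(salt + ":" + password);
    }

    // Server side: hash again; only this value is ever stored or compared
    public static String serverHash(String clientHash) throws Exception {
        return sha256Hex("server-pepper:" + clientHash);
    }

    public static void main(String[] args) throws Exception {
        String sent = clientHash("hunter2", "user-salt");
        System.out.println(serverHash(sent));
    }
}
```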
Once verified, the server transmits back a token over the channel. The token is only useful within the secure session, is composed of either random noise or an encrypted (and thus verifiable) copy of the session identifiers, and the client application must send this token on the same channel to the server as part of any request to do something. The client application will do this many times, because it can't do anything involving money, sensitive data, or anything else that could be damaging by itself; it must instead ask the server to do this task. The client application will never write any sensitive information to persistent memory on the device itself, at least not in plain text; the client can ask the server over the secure channel for a symmetric key to encrypt any local data, which the server will remember; in a later session the client can ask the server for the same key to decrypt the data for use in volatile memory. That data won't be the only copy, either; anything the client stores should also be transmitted in some form to the server.
Obviously, this makes your application heavily dependent on Internet access; the client device cannot perform any of its basic functions without proper connection to and authentication by the server. No different than Facebook, really.
Now, the computer that the attacker wants is your server, because it and not the client app/device is the thing that can make him money or cause other people pain for his enjoyment. That's OK; you get much more bang for your buck spending money and effort to secure the server than in trying to secure all the clients. The server can be behind all kinds of firewalls and other electronic security, and additionally can be physically secured behind steel, concrete, keycard/pin access, and 24-hour video surveillance. Your attacker would need to be very sophisticated indeed to gain any kind of access to the server directly, and you would (should) know about it immediately.
The best an attacker can do is steal a user's phone and credentials and log in to the server with the limited rights of the client. Should this happen, just like losing a credit card, the legitimate user should be instructed to call an 800 number (preferably easy to remember, and not on the back of a card they'd carry in their purse, wallet or briefcase which could be stolen alongside the mobile device) from any phone they can access that connects them directly to your customer service. They state their phone was stolen, provide some basic unique identifier, and the account is locked, any transactions the attacker may have been able to process are rolled back, and the attacker is back to square one.
1. How can I completely avoid reverse engineering of an Android APK? Is this possible?
This isn't possible
2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
When somebody changes the .apk extension to .zip, then after unzipping, they can easily get all resources (except Manifest.xml), and with APKtool one can get the real content of the manifest file too. So again, a no.
3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
Again, no, but you can prevent it up to some level, that is:
Download a resource from the Web and do some encryption process
Use a pre-compiled native library (C, C++, JNI, NDK)
Always perform some hashing (MD5/SHA keys or any other logic)
Even with Smali, people can play with your code. All in all, it's not POSSIBLE.
100% avoidance of reverse engineering of the Android APK is not possible, but you can use these ways to avoid the extraction of more data, like source code, assets, and resources, from your APK:
Use ProGuard to obfuscate application code
Use NDK using C and C++ to put your application core and secure part of code in .so files
To secure resources, don't include all important resources in the assets folder with the APK. Download these resources at the time of the application's first startup.
Here are a few methods you can try:
Use obfuscation and tools like ProGuard.
Encrypt some part of the source code and data.
Use a proprietary inbuilt checksum in the app to detect tampering.
Introduce code to avoid loading in a debugger, that is, let the app have the ability to detect the debugger and exit / kill the debugger.
Separate the authentication as an online service.
Use application diversity
Use the fingerprinting technique, for example, hardware signatures of the device from different subsystems, before authenticating the device.
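The "inbuilt checksum" item in the list above can be sketched in plain Java as follows. On Android you would hash your own package file (e.g. the path returned by Context.getPackageCodePath()) and compare it against a known-good digest your server holds; those Android-specific details are assumptions here, so the sketch just checksums an arbitrary file:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Sketch of a self-checksum tamper test: hash the app's code at startup
// and compare to an expected digest obtained out-of-band (e.g. from a
// server). A repacked/patched file produces a different digest.
public class TamperCheck {
    public static String digest(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) md.update(buf, 0, n);
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static boolean looksTampered(Path file, String expectedHex) throws Exception {
        return !digest(file).equals(expectedHex);
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.writeString(Files.createTempFile("app", ".dex"), "payload");
        String good = digest(f);
        System.out.println(looksTampered(f, good));   // false: untouched
        Files.writeString(f, "patched payload");
        System.out.println(looksTampered(f, good));   // true: modified
    }
}
```

Note that a determined attacker can also patch out the check itself, which is why the comparison is stronger when the server, not the client, makes the pass/fail decision.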
1. How can I completely avoid reverse engineering of an Android APK? Is this possible?
Impossible
2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
Impossible
3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
More tough - possible, but in fact it will be more tough mostly for the average user, who is just googling for hacking guides. If somebody really wants to hack your app - it will be hacked, sooner or later.
1. How can I completely avoid reverse engineering of an Android APK? Is this possible?
That is impossible
2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
Developers can take steps such as using tools like ProGuard to obfuscate their code, but up until now, it has been quite difficult to completely prevent someone from decompiling an app.
It's a really great tool and can increase the difficulty of 'reversing' your code whilst shrinking your code's footprint.
Integrated ProGuard support: ProGuard is now packaged with the SDK Tools. Developers can now obfuscate their code as an integrated part of a release build.
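As an illustration, a minimal proguard-rules.pro might look like the following. The option names are real ProGuard directives, but exactly which classes you must keep depends on your app; treat this as a starting sketch, not a drop-in config:

```
# Flatten the package hierarchy and hide original source file names
-repackageclasses ''
-renamesourcefileattribute SourceFile

# Components referenced from AndroidManifest.xml cannot be renamed
-keep public class * extends android.app.Activity
-keep public class * extends android.app.Service
-keep public class * extends android.content.BroadcastReceiver

# Strip verbose logging calls from release builds
-assumenosideeffects class android.util.Log {
    public static int d(...);
    public static int v(...);
}
```

The -keep rules are exactly why manifest-declared components survive obfuscation with their original names, as noted in the question.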
3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
While researching, I came to know about HoseDex2Jar. This tool will protect your code from decompiling, but it seems not to be possible to protect your code completely.
Some helpful links you can refer to:
Proguard, Android, and the Licensing Server
Securing Android LVL Applications
Stack Overflow question Is it really impossible to protect Android apps from reverse engineering?
Stack Overflow question How to prevent reverse engineering of an Android APK file to secure code?
The main question here is whether the dex files can be decompiled, and the answer is that they can be, "sort of". There are disassemblers like dedexer and smali.
ProGuard, properly configured, will obfuscate your code. DexGuard, which is a commercial extended version of ProGuard, may help a bit more. However, your code can still be converted into smali and developers with reverse-engineering experience will be able to figure out what you are doing from the smali.
Maybe choose a good license and enforce it through the law in the best possible way.
Your client should hire someone that knows what they are doing, who can make the right decisions and can mentor you.
The talk above about you having some ability to change the transaction processing system on the backend is absurd -- you shouldn't be allowed to make such architectural changes, so don't expect to be able to.
My rationale on this:
Since your domain is payment processing, it's safe to assume that PCI DSS and/or PA DSS (and potentially state/federal law) will be significant to your business -- to be compliant you must show you are secure. Being insecure, then finding out (via testing) that you aren't secure, then fixing, retesting, etc. until security can be verified at a suitable level is an expensive, slow, high-risk way to success. Doing the right thing -- thinking hard up front, committing experienced talent to the job, developing in a secure manner, then testing, fixing (less), etc. (less) until security can be verified at a suitable level -- is the inexpensive, fast, low-risk way to success.
As someone who worked extensively on payment platforms, including one mobile payments application (MyCheck), I would say that you need to delegate this behaviour to the server. No user name or password for the payment processor (whichever it is) should be stored or hardcoded in the mobile application. That's the last thing you want, because the source can be understood even if you obfuscate the code.
Also, you shouldn't store credit cards or payment tokens on the application. Everything should be, again, delegated to a service you built. It will also allow you, later on, to be PCI-compliant more easily, and the credit card companies won't breathe down your neck (like they did for us).
If we want to make reverse engineering (almost) impossible, we can put the application on a highly tamper-resistant chip, which executes all sensitive stuff internally and communicates with some protocol to make controlling the GUI possible on the host. Even tamper-resistant chips are not 100% crack proof; they just set the bar a lot higher than software methods. Of course, this is inconvenient: the application requires some little USB wart holding the chip to be inserted into the device.
The question doesn't reveal the motivation for wanting to protect this application so jealously.
If the aim is to improve the security of the payment method by concealing whatever security flaws the application may have (known or otherwise), it is completely wrongheaded. The security-sensitive bits should in fact be open-sourced, if that is feasible. You should make it as easy as possible for any security researcher who reviews your application to find those bits and scrutinize their operation, and to contact you. Payment applications should not contain any embedded certificates. That is to say, there should be no server application which trusts a device simply because it has a fixed certificate from the factory. A payment transaction should be made on the user's credentials alone, using a correctly designed end-to-end authentication protocol which precludes trusting the application, or the platform, or the network, etc.
If the aim is to prevent cloning, short of that tamper-proof chip, there isn't anything you can do to protect the program from being reverse-engineered and copied, so that someone incorporates a compatible payment method into their own application, giving rise to "unauthorized clients". There are ways to make it difficult to develop unauthorized clients. One would be to create checksums based on snapshots of the program's complete state: all state variables, for everything. GUI, logic, whatever. A clone program will not have exactly the same internal state. Sure, it is a state machine which has similar externally visible state transitions (as can be observed by inputs and outputs), but hardly the same internal state. A server application can interrogate the program: what is your detailed state? (i.e. give me a checksum over all of your internal state variables). This can be compared against dummy client code which executes on the server in parallel, going through the genuine state transitions. A third party clone will have to replicate all of the relevant state changes of the genuine program in order to give the correct responses, which will hamper its development.
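The state-interrogation idea above can be sketched as follows. Both the client and a server-side replica hash their complete internal state in a canonical order; a clone whose internals diverge in any variable answers differently. The field names and values here are invented purely for illustration:

```java
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// Sketch of a full-state checksum: serialize every state variable in a
// canonical (sorted) order and hash the result, so the genuine client
// and the server's parallel replica produce identical answers.
public class StateChecksum {
    public static String checksum(Map<String, String> stateVars) throws Exception {
        TreeMap<String, String> sorted = new TreeMap<>(stateVars); // canonical order
        StringBuilder sb = new StringBuilder();
        sorted.forEach((k, v) -> sb.append(k).append('=').append(v).append(';'));
        byte[] d = MessageDigest.getInstance("SHA-256").digest(sb.toString().getBytes());
        StringBuilder hex = new StringBuilder();
        for (byte b : d) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> client = Map.of("screen", "checkout", "total", "200");
        Map<String, String> serverReplica = Map.of("screen", "checkout", "total", "200");
        System.out.println(checksum(client).equals(checksum(serverReplica))); // true
    }
}
```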
The other upvoted answers here are correct. I just want to provide another option.
For certain functionality that you deem important you can host the WebView control in your app. The functionality would then be implemented on your web server. It will look like it's running in your application.
Agreed with @Muhammad Saqib here:
https://stackoverflow.com/a/46183706/2496464
And @Mumair gives good starting steps:
https://stackoverflow.com/a/35411378/474330
It is always safe to assume that everything you distribute to your user's device belongs to the user. Plain and simple. You may be able to use the latest tools and procedures to encrypt your intellectual property, but there is no way to prevent a determined person from 'studying' your system. And even if current technology makes it difficult for them to gain unwanted access, there might be some easy way tomorrow, or even just the next hour!
Thus, here comes the equation:
When it comes to money, we always assume that the client is untrusted.
Even in as simple as an in-game economy. (Especially in games! There are more 'sophisticated' users there and loopholes spread in seconds!)
How do we stay safe?
Most, if not all, of our key processing systems (and the database, of course) are located on the server side. And between the client and the server lie encrypted communications, validations, etc. That is the idea of a thin client.
I suggest you look at Protect Software Applications from Attacks. It's a commercial service, but my friend's company used this and they are glad to use it.
There is no way to completely avoid reverse engineering of an APK file. To protect the application's assets and resources, you can use encryption.
Encryption will make it harder to use them without decryption. Choosing a strong encryption algorithm will make cracking harder.
Adding some spoof code into your main logic makes it harder to crack.
If you can write your critical logic in a native language, that will surely make it harder to decompile.
Using any third party security frameworks, like Quixxi
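The asset-encryption suggestion above might be sketched with AES-GCM from the standard javax.crypto API. Critically, the key must not ship inside the APK (that would defeat the purpose); it would be fetched from your server after authentication, which is an assumption of this sketch:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Sketch of encrypting an asset with AES-GCM. The key is assumed to be
// delivered by the server at runtime, never bundled with the APK.
public class AssetCrypto {
    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(plain);
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] cipherText) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(cipherText);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        byte[] iv = new byte[12];                 // 96-bit nonce for GCM
        new SecureRandom().nextBytes(iv);

        byte[] asset = "secret game level data".getBytes();
        byte[] enc = encrypt(key, iv, asset);
        System.out.println(new String(decrypt(key, iv, enc)));
    }
}
```

GCM also authenticates the ciphertext, so a tampered asset fails to decrypt rather than silently decoding to garbage.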
APK signature scheme v2 in Android 7.0 (Nougat)
The PackageManager class now supports verifying apps using the APK signature scheme v2. The APK signature scheme v2 is a whole-file signature scheme that significantly improves verification speed and strengthens integrity guarantees by detecting any unauthorized changes to APK files.
To maintain backward-compatibility, an APK must be signed with the v1 signature scheme (JAR signature scheme) before being signed with the v2 signature scheme. With the v2 signature scheme, verification fails if you sign the APK with an additional certificate after signing with the v2 scheme.
APK signature scheme v2 support will be available later in the N Developer Preview.
http://developer.android.com/preview/api-overview.html#apk_signature_v2
Basically it's not possible. It will never be possible. However, there is hope. You can use an obfuscator to make some common attacks a lot harder to carry out, including things like:
Renaming methods/classes (so in the decompiler you get types like a.a)
Obfuscating control flow (so in the decompiler the code is very hard to read)
Encrypting strings and possibly resources
I'm sure there are others, but those are the main ones. I work for a company called PreEmptive Solutions on a .NET obfuscator. They also have a Java obfuscator that works for Android as well, called DashO.
Obfuscation always comes with a price, though. Notably, performance is usually worse, and it requires some extra time around releases usually. However, if your intellectual property is extremely important to you, then it's usually worth it.
Otherwise, your only choice is to make it so that your Android application just passes through to a server that hosts all of the real logic of your application. This has its own share of problems, because it means users must be connected to the Internet to use your app.
Also, it's not just Android that has this problem. It's a problem on every app store. It's just a matter of how difficult it is to get to the package file (for example, I don't believe it's very easy on iPhones, but it's still possible).
100% security of the source code and resources is not possible in Android. But you can make it a little bit difficult for the reverse engineer. You can find more details on this in the links below:
Visit Saving constant values securely
and Mobile App Security Best Practices for App Developers.
If your app is this sensitive, then you should consider handling the payment processing part on the server side. Try to change your payment processing algorithms. Use the Android app only for collecting and displaying user information (i.e., account balance), and rather than processing payments within Java code, send this task to your server over a secure SSL connection with encrypted parameters. Create a fully encrypted and secure API to communicate with your server.
Of course, this can be hacked too, and it has nothing to do with source code protection, but consider it another security layer to make it harder for hackers to trick your app.
It’s not possible to completely avoid reverse engineering, but by making them more complex internally, you could make it more difficult for attackers to see the clear operation of the app, which may reduce the number of attack vectors.
If the application handles highly sensitive data, various techniques exist which can increase the complexity of reverse engineering your code. One technique is to use C/C++ to limit easy runtime manipulation by the attacker. There are ample C and C++ libraries that are very mature and easy to integrate with and Android offers JNI.
An attacker must first circumvent the debugging restrictions in order to attack the application on a low level. This adds further complexity to an attack. Android applications should have android:debuggable="false" set in the application manifest to prevent easy run time manipulation by an attacker or malware.
Trace Checking – An application can determine whether or not it is currently being traced by a debugger or other debugging tool. If being traced, the application can perform any number of possible attack response actions, such as discarding encryption keys to protect user data, notifying a server administrator, or other such responses in an attempt to defend itself. Tracing can be detected by checking the process status flags or by using other techniques, like comparing the return value of ptrace attach, checking the parent process, blacklisting debuggers in the process list, or comparing timestamps at different places in the program.
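One of the checks mentioned above reads the process status flags: on Linux/Android, the TracerPid field of /proc/self/status is non-zero while a ptrace-based debugger is attached. The parsing is shown here against a sample snippet; reading the real /proc/self/status on-device is the assumed deployment:

```java
// Sketch of a TracerPid-based trace check. On a live Android/Linux
// process you would read /proc/self/status; here we only show the
// parsing logic on sample content.
public class TraceCheck {
    // Extract TracerPid from the contents of /proc/<pid>/status
    public static int tracerPid(String procStatus) {
        for (String line : procStatus.split("\n")) {
            if (line.startsWith("TracerPid:")) {
                return Integer.parseInt(line.substring("TracerPid:".length()).trim());
            }
        }
        return -1; // field not found
    }

    public static boolean beingTraced(String procStatus) {
        return tracerPid(procStatus) > 0;
    }

    public static void main(String[] args) {
        String sample = "Name:\tapp_process\nState:\tS (sleeping)\nTracerPid:\t0\n";
        System.out.println(beingTraced(sample));                            // false
        System.out.println(beingTraced(sample.replace(":\t0", ":\t1234"))); // true
    }
}
```

As with all client-side checks, an attacker who controls the device can patch the check out; it only raises the bar.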
Optimizations - To hide advanced mathematical computations and other types of complex logic, utilizing compiler optimizations can help obfuscate the object code so that it cannot easily be disassembled by an attacker, making it more difficult for an attacker to gain an understanding of the particular code. In Android this can more easily be achieved by utilizing natively compiled libraries with the NDK. In addition, using an LLVM Obfuscator or any protector SDK will provide better machine code obfuscation.
Stripping binaries – Stripping native binaries is an effective way to increase the amount of time and skill level required of an attacker in order to view the makeup of your application’s low level functions. Stripping a binary removes its symbol table, so that an attacker cannot easily debug or reverse engineer the application. You can refer to techniques used on GNU/Linux systems, like sstrip, or use UPX.
And finally, you must be aware of obfuscation and tools like ProGuard.
I know some banking apps are using DexGuard which provides obfuscation as well as encryption of classes, strings, assets, resource files and native libraries.
I can see that there are good answers for this question. In addition to that, you can use Facebook ReDex to optimize the code. ReDex works on the .dex level where ProGuard works as .class level.
Tool: Using ProGuard in your application makes it harder to reverse engineer.
Nothing is secure once you put it in end users' hands, but some common practices may make it harder for an attacker to steal data:
Put your main logic (algorithms) on the server side.
Make sure communication between server and client is secured via SSL/TLS (HTTPS), or use other techniques based on key-pair algorithms (ECC and RSA). Ensure that sensitive information remains end-to-end encrypted.
Use sessions and expire them after a specific time interval.
Encrypt resources and fetch them from the server on demand.
Or you can make a hybrid app that accesses the system via a WebView, keeping resources and code on the server.
There are multiple approaches; obviously, you have to trade performance off against security.
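To illustrate the end-to-end encryption point above, here is a minimal Java sketch using AES-GCM from the standard javax.crypto API. Key management (where the key lives and how it is exchanged between client and server) is deliberately out of scope, and all names are illustrative:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class PayloadCrypto {
    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // Encrypt a sensitive payload; the random IV is prepended to the output
    // so the receiver can decrypt.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] iv = new byte[IV_BYTES];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
            byte[] ct = cipher.doFinal(plaintext);
            byte[] out = new byte[IV_BYTES + ct.length];
            System.arraycopy(iv, 0, out, 0, IV_BYTES);
            System.arraycopy(ct, 0, out, IV_BYTES, ct.length);
            return out;
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) {
        try {
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key,
                    new GCMParameterSpec(GCM_TAG_BITS, blob, 0, IV_BYTES));
            return cipher.doFinal(blob, IV_BYTES, blob.length - IV_BYTES);
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```

GCM also authenticates the ciphertext, so tampering in transit makes decryption fail rather than silently producing garbage.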
Aren't Trusted Platform Module (TPM) chips supposed to manage protected code for you?
They are becoming common on PCs (especially Apple ones) and they may already exist in today's smartphone chips. Unfortunately, there isn't any OS API to make use of it yet. Hopefully, Android will add support for this one day. That's also the key to clean content DRM (which Google is working on for WebM).
How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
An APK file is protected by its signing scheme. In the META-INF folder of an APK you can see the manifest and signature files: the manifest records a digest (historically SHA-1, nowadays SHA-256) for every file in the archive, and the signature covers the manifest. If you extract an APK, change any of its contents, and zip it again, the new APK will not run on an Android device, because the recorded digests no longer match the modified files and you cannot re-sign it with the original developer's key.
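As a sketch of the per-entry check a v1 (JAR) signature verifier performs, the following compares a file's digest against the base64 value that would be recorded in META-INF/MANIFEST.MF. The names and the choice of digest algorithm are illustrative:

```java
import java.security.MessageDigest;
import java.util.Base64;

public class EntryDigest {
    // Base64 digest of a file's bytes, as recorded per entry in MANIFEST.MF.
    public static String digestBase64(byte[] fileBytes, String algorithm) {
        try {
            return Base64.getEncoder().encodeToString(
                    MessageDigest.getInstance(algorithm).digest(fileBytes));
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // True if the entry's current digest matches the recorded one.
    public static boolean entryUnmodified(byte[] fileBytes, String recordedDigest,
                                          String algorithm) {
        return digestBase64(fileBytes, algorithm).equals(recordedDigest);
    }
}
```

Any modified entry fails this comparison, which is why a repacked APK does not pass verification.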
Once users have the app on their phone, they have full access to its memory. So if you want to make it harder to hack, try to ensure an attacker cannot simply read a static memory address with a debugger. They could also attempt a stack buffer overflow if they find somewhere to write past a limit; so wherever input has a fixed limit, enforce it explicitly (if input > limit, ignore the input) so they cannot plant assembly code there.
Just an addition to the already good answers above.
Another trick I know is to move valuable code into a Java library and then set that library as a dependency of your Android project. A C .so file would be better, but an Android library will do.
This way, the valuable code stored in the Android library is not immediately visible when the app itself is decompiled.

Prevent android app reverse engineering on Airwatch managed devices

The AirWatch features and documentation mention that the apps are containerized, and thus all app content is encrypted and not easily exposed.
For rooted devices, Airwatch can detect such devices and perform a remote wipe of corporate data.
I wanted to check if Airwatch can guarantee that the application code cannot be reverse-engineered, to extract sensitive data from the code base, like API keys, Encryption keys etc.
While I cannot advise you in relation to Airwatch, because I am not familiar with it, I can alert you that if you are storing this type of sensitive information in your mobile app, then you are already at risk, because reverse engineering secrets is not that hard, as I show in the article How to Extract an API key from a Mobile App by Static Binary Analysis:
Summary
Using MobSF to reverse engineer an APK for a mobile app allows us to quickly extract an API key and also gives us a huge amount of information we can use to perform further analysis that may reveal more attack vectors into the mobile app and API server. It is not uncommon to also find secrets for accessing third-party services among this info, or in the decompiled source code that is available to download in smali and java formats.
Now you may be questioning yourself as to how you would protect the API key, and for that I recommend starting by reading this series of articles about Mobile Api Security Techniques.
The lesson here is that shipping your API key or any other secret in the mobile app code is like locking the door of your home but leaving the key under the mat!
or even with a MitM attack in this other article:
Conclusion
While we can use advanced techniques, like JNI/NDK, to hide the API key in the mobile app code, that will not stop someone from performing a MitM attack in order to steal the API key. In fact, a MitM attack is so easy that it can even be achieved by non-developers.
We have highlighted some good resources that will show you other techniques to secure mobile APIs, like certificate pinning, even though it can be challenging to implement and maintain. We also noted that certificate pinning by itself is not enough, since it can be bypassed, so other techniques need to be employed to guarantee that no one can steal your API key. If you have completed the deep dive I recommended, you will know by now how, in the context of a mobile API, the API key can be protected from being stolen through a MitM attack. So if you have not already done so, please read the articles I linked to in the section about mitigating MitM attacks.
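For illustration, the core comparison behind certificate pinning can be sketched as hashing certificate bytes and comparing against a pin shipped with the app. Real implementations such as OkHttp's CertificatePinner hash the certificate's SubjectPublicKeyInfo rather than the whole certificate; this simplified sketch only shows the matching step, with illustrative names:

```java
import java.security.MessageDigest;
import java.util.Base64;

public class Pinning {
    // Pin string in the conventional "sha256/<base64>" form.
    public static String sha256Pin(byte[] certBytes) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(certBytes);
            return "sha256/" + Base64.getEncoder().encodeToString(digest);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // Constant-time comparison against the pin shipped with the app.
    public static boolean matchesPin(byte[] certBytes, String expectedPin) {
        return MessageDigest.isEqual(sha256Pin(certBytes).getBytes(),
                expectedPin.getBytes());
    }
}
```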
Also, more skilled developers can hook, at run time, introspection frameworks like Frida and Xposed to intercept and modify the behavior of any of your running code. So even if they are not able to decrypt your data, they will intercept the content after you have decrypted it in your application. To do this, they just need to know where to hook into your code, and this they achieve by decompiling and reverse engineering your mobile app with the help of tools like the Mobile Security Framework or APKTool; more tools exist in the open-source community.
Mobile Security Framework
Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
APKTool
A tool for reverse engineering 3rd party, closed, binary Android apps. It can decode resources to nearly original form and rebuild them after making some modifications. It also makes working with an app easier because of the project like file structure and automation of some repetitive tasks like building apk, etc.

How to hide API Keys in Android

I'm creating an Android App which connects to a server through an API. I'm worried about a security issue, as far as I'm concerned, anyone can decompile your .apk and take a look or modify your code.
Knowing that, where do I need to save the API Keys inside the app in order to avoid some bad guy stealing it and access the server, for example, to modify my database?
Thanks
Understand how an attacker will go after your application, 95% of the time in this order:
Inspect traffic between your application and the server by using an intercepting proxy like Burp -- very easy to do. See how I did it on Words With Friends here (that was on an iOS device, but the same concept works on Android).
You can stop traffic inspection with certificate pinning, but they can break that by rooting the device and using some hacker tools on Android. So you need Android root detection.
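A common root-detection heuristic of the kind referred to above is to look for an su binary in well-known locations. It is easily bypassed by a determined attacker, so treat it as one signal among several; the path list and names here are illustrative:

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

public class RootCheck {
    // Locations where an su binary commonly lives on rooted devices.
    private static final List<String> SU_PATHS = Arrays.asList(
            "/system/bin/su", "/system/xbin/su", "/sbin/su",
            "/system/sd/xbin/su", "/data/local/bin/su");

    public static boolean anyExists(List<String> paths) {
        for (String p : paths) {
            if (new File(p).exists()) return true;
        }
        return false;
    }

    // One weak signal of a rooted device; combine with other checks.
    public static boolean looksRooted() {
        return anyExists(SU_PATHS);
    }
}
```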
The other attack is scanning your binary. You will need to obfuscate it with a tool like DexGuard.
None of these methods are bullet-proof: generally trying to hide secrets in client code is a losing game. Don't put more effort into it than it is worth.

Hybrid Mobile App Security

In my case I have a hybrid mobile app built with Cordova.
How can I prevent changes to my JavaScript/AngularJS code if someone downloads my *.apk or *.ipa and tries to use it in a browser?
I want to encrypt my JavaScript source code, or if possible the full content of the *.apk.
One way to deter others from making changes to your code is to minify and obfuscate the JavaScript files.
Here you can test it yourself.
But if you want to minify all the JavaScript files of your project, I suggest using a task runner like Grunt with UglifyJS.
To avoid unwanted requests, you have to configure Cross-Origin Resource Sharing (CORS) on your server.
While most people will agree that Javascript encryption is not a fool-proof method, there are a few cases where increasing the efforts required and slowing down the attacker often helps. You can check tools like refresh-sf, Uglify and also obfuscator.io for code obfuscation. However, basic encryption and obfuscation practices don’t prevent people from lifting and using your code, and considering the advanced cyber-attacks happening these days, hackers already have a way around such practices.
My suggestion would be to invest in mobile application security tools like Arxan, AppSealing, or Appdome to apply a robust security architecture on your application. Having personally used AppSealing, I can say that they have regular updates and definitely keep you guarded from the latest attacks. Their framework is integrated directly in your apk so there’s no additional coding required. I would suggest you look for such mobile app security solutions for a more robust security architecture. I am sure there must be more solutions apart from the ones I have mentioned above. Just research and pick the one that suits your requirement and budget.
First of all, minify all your JS files into one single bundle.js and use the cordova-anti-tampering plugin. If somebody tampers with your APK and rebuilds it, the app will crash.
This plugin creates a hash value for every file. If you want to secure your app further, you can store these hash values on the server once and compare your files' hash values against the server-stored ones.
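The server-side comparison described above can be sketched in plain Java as follows. The map-based API is illustrative: a real implementation would hash files from disk and fetch the expected hashes over the network:

```java
import java.security.MessageDigest;
import java.util.Map;

public class BundleIntegrity {
    // Hex-encoded SHA-256 of a file's contents.
    public static String sha256Hex(byte[] data) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // True only if every file's hash matches the server-stored value.
    public static boolean allMatch(Map<String, byte[]> files,
                                   Map<String, String> serverHashes) {
        for (Map.Entry<String, byte[]> e : files.entrySet()) {
            String expected = serverHashes.get(e.getKey());
            if (expected == null || !expected.equals(sha256Hex(e.getValue()))) {
                return false;
            }
        }
        return true;
    }
}
```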

Is there a need for CAPTCHA verification on native android apps?

I know CAPTCHA verification is required on web applications so that people don't write scripts to trigger form submissions over and over again.
But is there a need for this in native Android apps? I mean, I don't think someone would be able to write scripts to trigger Android form submissions from a third-party app, would they? If not, is using CAPTCHA really recommended?
Note: I am not asking this randomly, I have been asked to build CAPTCHA verification on the client side on my current project. However, I do not see the point in this.
There is a point to it. The specific attack scenario that I can envision is someone reverse engineering the application to find out what underlying protocol is used, and then using that protocol to automate the operation that is supposed to be done manually. If the message must also contain the answer to a CAPTCHA, this becomes harder.
The question is whether or not such a scenario is likely enough (and severe enough) to cause the extra burden on the user. Keep in mind that having a CAPTCHA is a very effective way to make your users dislike the application.
It's impossible to say in general whether the tradeoff is worth it. In most cases, though, it's not, especially given that CAPTCHAs are not actually very effective.
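One cheap server-side complement to (or substitute for) a CAPTCHA in this scenario is a one-time form token: the server issues a token with each rendered form and accepts a submission only if it presents a token that was issued and not yet used, which blocks naive replayed or scripted submissions. This is my own sketch, not something from the answer above:

```java
import java.util.HashSet;
import java.util.Set;

public class OneTimeTokens {
    private final Set<String> issued = new HashSet<>();

    // Called when the server renders a form (alongside the CAPTCHA, if any).
    public void issue(String token) {
        issued.add(token);
    }

    // Accept a submission only if its token was issued and not yet used;
    // removal makes the token valid exactly once.
    public boolean redeem(String token) {
        return issued.remove(token);
    }
}
```

A production version would add expiry times and persistent or distributed storage, but the valid-exactly-once property is the core idea.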
