I'm a beginner to Flutter and programming in general.
First, I'd like to know if it's possible to notify the creator of an app, or its back-end service, that a fingerprint has been deregistered and a new one added.
Note: The objective is not to get fingerprint data but to uniquely identify people in one way or another. For example, suppose an app manages dormitories with a closing time of, say, 9 pm and wants to generate a report of everyone present inside by using their device location together with a service on the local network that checks the location data and asks for fingerprint authentication. There is every possibility that users could hand their devices to other users, who register their own fingerprints as well, allowing them to pass the authentication and producing inaccurate reports for the dormitory.
Are there any suggestions for the above situation?
There is no support for "detecting de-registration" directly. Even if there were, it would not be useful.
tl;dr: Access from an arbitrary, uncontrolled device, whether guarded by a device secret, a fingerprint, or anything else, cannot be used to guarantee that the person who 'owns' the device is present. It is data-governance rules (the EULA, company/dorm policy, etc.) and trust that users will adhere to them, including reporting violations, that allow the device-to-person assertion.
On a mobile device, fingerprint authentication is effectively a per-device secret that can accept any of the registered fingerprints and that is used to protect other access/secrets.
Consider:
Fingerprints are not accessible directly by applications and thus cannot be used as "user IDs".
Each device uses a private per-device key to encrypt and store the fingerprint information. This information is not accessible externally nor is it uploaded.
See 'Secure Enclave' for iOS and 'Trusted Execution Environment' for Android.
A person can have multiple fingerprints registered per device. This implies that multiple fingerprints from different people can be added and there is no way to determine the difference. Likewise, a person could register a fingerprint for a different finger on multiple devices.
The encoding of a fingerprint is a "one-way" model built from the fingerprint as registered. The actual fingerprint data differs from scan to scan, even before it is securely saved: only the result of applying this model to the presented fingerprint pattern is useful.
Now, if there were a physically controlled device/system…
An example of a physically controlled system might be fixed terminals controlling single-person entry/exit doors (with security cameras and/or a physical guard) where people can only register a fingerprint in front of a trusted person after appropriate ID verification. But how much does it really matter? And what happens when a person climbs through a window?
Having the app take a detailed face/eye scan off a live camera and send it to a controlled server for some internal biometric-based verification might be a [draconian] half-way step… I'd say "No Thanks" ;-)
On iOS, if something is protected by fingerprint or Face ID, the developer can set an option so that the data can only be accessed if the set of registered fingerprints/faces is unchanged. So you could send a one-time code that the user puts in their keychain, and if they change the registered fingerprints, it's gone. However, if I register fingerprints for myself and my three best mates, you can't detect that.
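Android has a rough counterpart to this: a Keystore key can be flagged so that it is permanently invalidated as soon as the set of enrolled biometrics changes. Below is a minimal Kotlin sketch of that idea (the function name and alias are placeholders); a server-issued one-time code encrypted with such a key becomes unrecoverable after someone enrolls a new fingerprint, but the same "three best mates" caveat still applies.

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import javax.crypto.KeyGenerator

    // Generates an AES key in the Android Keystore that is permanently
    // invalidated as soon as a new fingerprint/face is enrolled on the device.
    // Any secret encrypted with this key (e.g. a server-issued one-time code)
    // becomes unrecoverable after an enrollment change.
    fun createEnrollmentBoundKey(alias: String) {
        val keyGenerator = KeyGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
        )
        val spec = KeyGenParameterSpec.Builder(
            alias,
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .setUserAuthenticationRequired(true)       // a biometric prompt is needed to use the key
            .setInvalidatedByBiometricEnrollment(true) // the key dies if the biometric set changes
            .build()
        keyGenerator.init(spec)
        keyGenerator.generateKey()
    }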
Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary? This app will be distributed on Google Play and the Apple App Store so it should be implied that someone will have access to its binary and try to reverse engineer it.
I was thinking of something involving the app signatures, since every published app must be signed somehow, but I can't figure out how to do it in a secure way. Maybe a combination of getting the app signature, plus time-based hashes, plus app-generated key pairs and the good old security through obscurity?
I'm looking for something as foolproof as possible. The reason is that I need to deliver data to the app based on data gathered by the phone sensors, and if people can pose as my own app and send data to my API that wasn't processed by my own algorithms, it defeats its purpose.
I'm open to any effective solution, no matter how complicated. Tin foil hat solutions are greatly appreciated.
Any credentials that are stored in the app can be exposed by the user. In the case of Android, they can completely decompile your app and easily retrieve them.
If the connection to the server does not utilize SSL, they can be easily sniffed off the network.
Seriously, anybody who wants the credentials will get them, so don't worry about concealing them. In essence, you have a public API.
There are some pitfalls and it takes extra time to manage a public API.
Many public APIs still track by IP address and implement tarpits to simply slow down requests from any IP address that seems to be abusing the system. This way, legitimate users from the same IP address can still carry on, albeit slower.
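As a rough illustration of the tarpit idea (the thresholds and names here are made up, and a real deployment would normally use the rate-limiting hooks of whatever web framework you run), a per-IP throttle could look something like this:

    import java.util.concurrent.ConcurrentHashMap
    import java.util.concurrent.atomic.AtomicInteger

    // Very small in-memory tarpit: the more requests an IP makes inside the
    // current window, the longer each of its requests is delayed before being
    // processed. Legitimate users behind the same IP are slowed down, not blocked.
    object Tarpit {
        private val hitsPerIp = ConcurrentHashMap<String, AtomicInteger>()
        private const val FREE_REQUESTS = 60        // no delay below this per window
        private const val DELAY_PER_EXTRA_MS = 200L // added delay per excess request
        private const val MAX_DELAY_MS = 5_000L

        fun throttle(clientIp: String) {
            val hits = hitsPerIp.computeIfAbsent(clientIp) { AtomicInteger(0) }.incrementAndGet()
            val excess = hits - FREE_REQUESTS
            if (excess > 0) {
                Thread.sleep(minOf(excess * DELAY_PER_EXTRA_MS, MAX_DELAY_MS))
            }
        }

        // Call from a scheduled job to reset the window, e.g. once a minute.
        fun resetWindow() = hitsPerIp.clear()
    }

You would call Tarpit.throttle(clientIp) before handling each request and reset the window on a timer.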
You have to be willing to shut off an IP address or IP address range despite the fact that you may be blocking innocent and upstanding users at the same time as the abusers. If your application is free, it may give you more freedom since there is no expected level of service and no contract, but you may want to guard yourself with a legal agreement.
In general, if your service is popular enough that someone wants to attack it, that's usually a good sign, so don't worry about it too much early on, but do stay ahead of it. You don't want the reason for your app's failure to be because users got tired of waiting on a slow server.
Your other option is to have the users register, so you can block by credentials rather than IP address when you spot abuse.
Yes, It's public
This app will be distributed on Google Play and the Apple App Store so it should be implied that someone will have access to its binary and try to reverse engineer it.
From the moment it's on the stores it's public; therefore, anything sensitive in the app binary must be considered potentially compromised.
The Difference Between WHO and WHAT is Accessing the API Server
Before I dive into your problem I would like to first clear up a misconception about who and what is accessing an API server. I wrote a series of articles around API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract the main takeaways from it here:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAuth2 flows.
Think about the who as the user your API server will be able to authenticate and authorize access to the data, and think about the what as the software making that request on behalf of the user.
So if you are not using user authentication in the app, then you are left with trying to attest what is making the request.
Mobile Apps should be as dumb as possible
The reason why is because I need to deliver data to the app based on data gathered by the phone sensors, and if people can pose as my own app and send data to my api that wasn't processed by my own algorithms, it defeats its purpose.
It sounds to me that you are saying that you have algorithms running on the phone to process data from the device sensors and then send the results to the API server. If so, you should reconsider this approach and instead just collect the sensor values, send them to the API server, and have it run the algorithm.
As I said, anything inside your app binary is public, because, as you said yourself, it can be reverse engineered:
should be implied that someone will have access to its binary and try to reverse engineer it.
Keeping the algorithms in the backend will allow you to avoid revealing your business logic, and at the same time you can reject requests with sensor readings that do not make sense (if that is possible to determine). This also brings you the benefit of not having to release a new version of the app each time you tweak the algorithm or fix a bug in it.
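To make the "dumb client" idea concrete, here is a minimal sketch; the endpoint and JSON field names are placeholders. The app ships only the raw readings and the algorithm stays on the server:

    import java.net.HttpURLConnection
    import java.net.URL

    // "Dumb client" approach: send only the raw sensor readings to the backend
    // and let the server run the proprietary algorithm on them.
    fun uploadRawReadings(accelX: Float, accelY: Float, accelZ: Float, timestampMs: Long) {
        val json = """{"ax":$accelX,"ay":$accelY,"az":$accelZ,"ts":$timestampMs}"""
        val connection = URL("https://api.example.com/v1/sensor-readings")
            .openConnection() as HttpURLConnection
        connection.requestMethod = "POST"
        connection.setRequestProperty("Content-Type", "application/json")
        connection.doOutput = true
        connection.outputStream.use { it.write(json.toByteArray()) }
        check(connection.responseCode in 200..299) { "Upload failed: ${connection.responseCode}" }
        connection.disconnect()
    }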
Runtime attacks
I was thinking something involving the app signatures, since every published app must be signed somehow, but I can't figure out how to do it in a secure way.
Anything you do at runtime to protect the request you are about to send to your API can be reverse engineered with tools like Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
Your Suggested Solutions
Security is all about layers of defense, thus you should add as many as you can afford and as are required by law (e.g. GDPR in Europe). Therefore, any of your proposed solutions is one more layer the attacker needs to bypass, and depending on their skill set and the time they are willing to spend on your mobile app, it may prevent them from going any further; but in the end, all of them can be bypassed.
Maybe a combination of getting the app signature, plus time-based hashes, plus app-generated key pairs and the good old security though obscurity?
Even when you use key pairs stored in the hardware trusted execution environment, all an attacker needs to do is use an instrumentation framework to hook into the function of your code that uses the keys, in order to extract or manipulate the function's parameters and return values.
Android Hardware-backed Keystore
The availability of a trusted execution environment in a system on a chip (SoC) offers an opportunity for Android devices to provide hardware-backed, strong security services to the Android OS, to platform services, and even to third-party apps.
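As a sketch of what using the hardware-backed keystore can look like in practice (the names are illustrative, and as said above a hooking framework can still tamper with the call itself), the app can generate a signing key that never leaves the keystore and sign each request payload with it:

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import java.security.KeyPairGenerator
    import java.security.KeyStore
    import java.security.PrivateKey
    import java.security.Signature

    // Creates an EC key pair whose private key lives in the hardware-backed
    // keystore (TEE, or StrongBox where available). The key material cannot be
    // extracted, but the signing call itself can still be hooked at runtime.
    fun generateRequestSigningKey(alias: String) {
        val generator = KeyPairGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore"
        )
        generator.initialize(
            KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .build()
        )
        generator.generateKeyPair()
    }

    // Signs a request payload with the keystore-held private key; the server
    // verifies the signature with the public key it was given at enrollment.
    fun signPayload(alias: String, payload: ByteArray): ByteArray {
        val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
        val privateKey = keyStore.getKey(alias, null) as PrivateKey
        return Signature.getInstance("SHA256withECDSA").run {
            initSign(privateKey)
            update(payload)
            sign()
        }
    }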
While it can be defeated, I still recommend you use it, because not all attackers have the skill set or are willing to spend the time on it. I would also recommend you read this series of articles about Mobile API Security Techniques to learn about some complementary/similar techniques to the ones you described. These articles will teach you how API Keys, User Access Tokens, HMAC and TLS Pinning can be used to protect the API and how they can be bypassed.
Possible Better Solutions
Nowadays I see developers using Android SafetyNet to attest what is making the request to the API server, but they fail to understand that it's not intended to attest that the mobile app is what is making the request; instead, it's intended to attest the integrity of the device. I go into more detail in my answer to the question Android equivalent of ios devicecheck. So should you use it? Yes, you should, because it is one more layer of defense that, in this case, tells you your mobile app is not installed on a rooted device, unless SafetyNet has been bypassed.
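A minimal Kotlin sketch of calling the SafetyNet Attestation API (the function name and callback shape are illustrative; the nonce must come from your server, and the server must verify the returned JWS, its nonce, package name and integrity verdicts, before trusting anything):

    import android.content.Context
    import com.google.android.gms.safetynet.SafetyNet

    // Asks SafetyNet to attest the device and hands the signed JWS statement
    // to the caller, which should forward it to the backend for verification.
    fun attestDevice(context: Context, nonce: ByteArray, apiKey: String, onResult: (String) -> Unit) {
        SafetyNet.getClient(context)
            .attest(nonce, apiKey)
            .addOnSuccessListener { response -> response.jwsResult?.let(onResult) }
            .addOnFailureListener { /* treat as an untrusted device, or retry */ }
    }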
Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary?
You can allow the API server to have a high degree of confidence that it is indeed accepting requests only from your genuine app binary by implementing the Mobile App Attestation concept, and I describe it in more detail in this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Securing the API Server and A Possible Better Solution.
Do you want to go the Extra Mile?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
No. You're publishing a service with a public interface and your app will presumably only communicate via this REST API. Anything that your app can send, anyone else can send also. This means that the only way to secure access would be to authenticate in some way, i.e. keep a secret. However, you are also publishing your apps. This means that any secret in your app is essentially being given out also. You can't have it both ways; you can't expect to both give out your secret and keep it secret.
Though this is an old post, I thought I should share the updates from Google in this regard.
You can actually verify that it is your Android application calling the API by using the SafetyNet attestation APIs. This adds a little overhead to the network calls and prevents your application from running on a rooted device.
I found nothing similar to SafetyNet for iOS. Hence, in my case, I checked the device configuration first in my login API and took different measures for Android and iOS. In the case of iOS, I decided to keep a shared secret key between the server and the application. As iOS applications are a little more difficult to reverse engineer, I think this extra key check adds some protection.
Of course, in both cases, you need to communicate over HTTPS.
As the other answers and comments imply, you can't truly restrict API access to only your app, but you can take different measures to reduce the attempts. I believe the best solution is to make requests to your API (from native code, of course) with a custom header like "App-Version-Key" (decided at compile time) and have your server check this key to decide whether to accept or reject the request. When using this method you SHOULD also use HTTPS/SSL, as this reduces the risk of people seeing your key by inspecting the request on the network.
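A minimal OkHttp sketch of that custom-header idea (the header name comes from above; the key value and how it is injected at compile time are placeholders):

    import okhttp3.Interceptor
    import okhttp3.OkHttpClient
    import okhttp3.Response

    // Adds the compile-time "App-Version-Key" header to every outgoing request;
    // the server rejects any request that does not carry the expected value.
    class AppVersionKeyInterceptor(private val versionKey: String) : Interceptor {
        override fun intercept(chain: Interceptor.Chain): Response {
            val taggedRequest = chain.request().newBuilder()
                .header("App-Version-Key", versionKey)
                .build()
            return chain.proceed(taggedRequest)
        }
    }

    // Usage (always over HTTPS, as stressed above); the key itself would come
    // from a compile-time constant such as a BuildConfig field.
    val client = OkHttpClient.Builder()
        .addInterceptor(AppVersionKeyInterceptor(versionKey = "replace-with-compile-time-key"))
        .build()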
Regarding Cordova/PhoneGap apps, I will be creating a plugin to implement the above-mentioned method. I will update this comment when it's complete.
There is not much you can do, because once you let someone in they can call your APIs. The most you can do is the following:
Since you want only your application (with a specific package name and signature) to call your APIs, you can get the signing certificate of your APK programmatically and send it to the server with every API call; if it matches, you respond to the request (a minimal sketch of reading the signing-certificate hash follows below). Alternatively, you can have a token API that your app calls at startup and then use that token for the other APIs, though the token must be invalidated after some hours of inactivity.
You then need to run your code through ProGuard so no one can easily see what you are sending and how you encrypt it. With good obfuscation and encryption, decompiling becomes much harder to exploit.
Even the APK signature can be spoofed with some effort, but this is about the best you can do.
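Here is a minimal Kotlin sketch of reading the signing-certificate hash mentioned above (how you transmit and verify it on the server is up to you):

    import android.content.Context
    import android.content.pm.PackageManager
    import android.util.Base64
    import java.security.MessageDigest

    // Computes a SHA-256 fingerprint of the APK signing certificate, which the
    // app can attach to each API call for the server to compare against the
    // expected value. On API 28+ you would read signingInfo via
    // GET_SIGNING_CERTIFICATES instead of the deprecated flag used here.
    @Suppress("DEPRECATION")
    fun signingCertificateHash(context: Context): String? {
        val info = context.packageManager.getPackageInfo(
            context.packageName, PackageManager.GET_SIGNATURES
        )
        val signature = info.signatures?.firstOrNull() ?: return null
        val digest = MessageDigest.getInstance("SHA-256").digest(signature.toByteArray())
        return Base64.encodeToString(digest, Base64.NO_WRAP)
    }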
Has anyone looked at Firebase App Check?
https://firebase.google.com/docs/app-check
Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary?
I'm not sure if there is an absolute solution.
But, you can reduce unwanted requests.
Use an App Check:
The "Firebase App Check" can be used cross-platform (https://firebase.google.com/docs/app-check) - credit to #Xande-Rasta-Moura
iOS: https://developer.apple.com/documentation/devicecheck
Android: https://android-developers.googleblog.com/2013/01/verifying-back-end-calls-from-android.html
Use BasicAuth (for API requests)
Allow a user-agent header for mobile devices only (for API requests); a minimal server-side sketch of these two checks follows after this list
Use a robots.txt file to reduce bots
User-agent: *
Disallow: /
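A minimal server-side sketch of the BasicAuth and user-agent checks above (framework-agnostic; the credentials and the user-agent prefix are placeholders):

    import java.util.Base64

    // Rejects requests whose Authorization header does not carry the expected
    // Basic credentials, or whose User-Agent does not look like the mobile app's.
    fun isAuthorized(authorizationHeader: String?): Boolean {
        val expected = "Basic " + Base64.getEncoder()
            .encodeToString("api-user:api-password".toByteArray())
        return authorizationHeader == expected
    }

    fun isMobileAppUserAgent(userAgentHeader: String?): Boolean =
        userAgentHeader?.startsWith("MyApp/") == true // e.g. "MyApp/2.1 (Android 13)"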
I'm writing a ringtone gallery app whose ringtones reside on a server and can be downloaded by the user.
What I want is to check and verify that the connection is really from my app, not from other apps or an HTTP request generator. For example, I don't want someone to write an app that uses my back end and shows their ads in it. It's like image leeching on a website, which is prevented by checking the referrer.
It's not possible to embed a key in the app, as Android apps can be decompiled so easily. I thought of obtaining the app signature and sending its hash as a key, but any app can access another app's signature hash.
What about writing the part of the app that does the communication in native code? Is it as easy to decompile as Java code?
I really can't think of any other way, and I don't want others to use my resources for their benefit.
There are a couple of things you can do.
Create your own Certificate Authority, ship a certificate with your app and use two-way TLS authentication. This does not protect against decompilation and reverse engineering but protects traffic en route (a client-side sketch follows after this list).
Use the advice in this slide deck to detect modifications and debuggers.
Use Jelly Bean's hardware-backed secure storage.
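A client-side sketch of the two-way TLS idea from the first point, assuming OkHttp and PKCS#12 stores bundled with the app (all store names and passwords are placeholders):

    import okhttp3.OkHttpClient
    import java.io.InputStream
    import java.security.KeyStore
    import javax.net.ssl.KeyManagerFactory
    import javax.net.ssl.SSLContext
    import javax.net.ssl.TrustManagerFactory
    import javax.net.ssl.X509TrustManager

    // Builds an OkHttp client that presents a client certificate (two-way TLS)
    // and only trusts the app's own CA. The stores would be bundled with the
    // app, which is exactly why this protects traffic en route but not against
    // decompilation.
    fun mutualTlsClient(clientP12: InputStream, clientPass: CharArray,
                        caTrustStore: InputStream, trustPass: CharArray): OkHttpClient {
        val clientKeys = KeyStore.getInstance("PKCS12").apply { load(clientP12, clientPass) }
        val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm())
            .apply { init(clientKeys, clientPass) }

        val trustedCa = KeyStore.getInstance("PKCS12").apply { load(caTrustStore, trustPass) }
        val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
            .apply { init(trustedCa) }
        val trustManager = tmf.trustManagers.first() as X509TrustManager

        val sslContext = SSLContext.getInstance("TLS").apply {
            init(kmf.keyManagers, tmf.trustManagers, null)
        }
        return OkHttpClient.Builder()
            .sslSocketFactory(sslContext.socketFactory, trustManager)
            .build()
    }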
At the end of the day, though, DRM is a lost battle. If the user has root access, all bets are off, with or without obfuscation (and native libraries are just another form of obfuscation). The only question is how important your data is. For 90% of applications, running the code through ProGuard makes it nearly impossible to untangle (especially if you use data-flow obfuscation). Along with the certificate approach, that should suffice for most things.
Alternatively, try to change your model, so that you're authenticating the user and not the app - that's far simpler!
I am working on an application for the Android platform. The application makes heavy use of HTTP calls to my web server. This works out very well, but I'm in need of assistance in securing my calls and my web server.
I know that I can use SSL over HTTPS to encrypt my connection on both the client side and the server side; this is not a problem and will of course be done when launching the application. But what would be the most secure way to maintain a session for the logged-in device?
I've thought about making a MySQL-based session system containing the following columns:
id - sessKey - sessCont - sessUid - sessTime
sessKey will be a randomly generated 32-bit key, sessCont a JSON array of the stored information, and sessUid the user ID of the signed-in user. sessTime would contain a timestamp.
This session will be created on login, and the phone would then receive the sessKey + sessId. On each call the key will be changed and returned to the phone again. If a call arrives more than 10 minutes after the latest call, the session will be closed and a new one will have to be created.
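A rough server-side sketch of what I have in mind (written in Kotlin here for illustration; SessionStore stands in for the MySQL table):

    import java.security.SecureRandom
    import java.util.concurrent.TimeUnit

    // Each call must present the current key; if it is valid and less than
    // 10 minutes old, a fresh key is issued and stored, otherwise the session
    // is discarded and a new login is required.
    data class Session(val id: Long, val sessKey: String, val sessUid: Long, val sessTime: Long)

    interface SessionStore {
        fun findByKey(sessKey: String): Session?
        fun updateKey(id: Long, newKey: String, now: Long)
        fun delete(id: Long)
    }

    private val random = SecureRandom()

    // 16 random bytes rendered as a 32-character hex key.
    fun newSessionKey(): String =
        ByteArray(16).also(random::nextBytes).joinToString("") { "%02x".format(it) }

    fun validateAndRotate(store: SessionStore, presentedKey: String, now: Long): String? {
        val session = store.findByKey(presentedKey) ?: return null
        if (now - session.sessTime > TimeUnit.MINUTES.toMillis(10)) {
            store.delete(session.id)  // stale session: force a new login
            return null
        }
        val rotated = newSessionKey()
        store.updateKey(session.id, rotated, now)
        return rotated                // returned to the phone with the response
    }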
Yet I keep seeing ways of compromising this approach, as I can with all the other approaches I'm able to think of.
How would I achieve the best possible security and session control between my phone and my server-side script?
Thanks in advance.
Jonas
Alright numbered list time...
1. If you're using an SSL connection, a good portion of the security is already on your side. You can cross sniffing off your list of vulnerabilities.
2. Most of the leftover vulnerability is on the user end: can attackers monitor the hardware on the user's end and grab session information after it has been transferred to the user's hardware, which in this case is an Android phone? App data is protected from other apps, so unless you, the developer, or your user is doing something insanely reckless, it should be secure.
3. Which brings me to the third point: all the rest of the security really lands in your lap as the developer. If you have cross-site scripting (XSS), if the session IDs can be guessed easily, if you are vulnerable to session fixation, or if your session ID storage is weak (SQL injection?), then you've effectively undone all the good work you did with every other measure of security.
In the end there are always ways to hack a system, but if you follow those three steps you've done everything you can do in order to prevent hackers. The rest unfortunately lies in the parts we can't touch: the Android operating system, the cell phone networks, and the user's common sense.
P.S. The most secure method would probably be to trash the session idea. Store the user ID (a number that could mean anything) and an MD5 hash of their password. Be sure to add something funky (a salt) so hackers can't just look up the reverse of common passwords, e.g. theirPassword+userid+HACKTHISSUCKERS; even if someone runs it through an MD5 reverser, they won't be able to undo your hash. Then, every time you make a request to your server, do it over SSL, and when the authentication checks out, send the info. Secure SSL connection, secure MD5 pass-hash, no security leaks.
Even if a hacker somehow found out what your app was sending to your server, all they would see is a number and an undecipherable hash. The only way they could learn what your app was sending is if your user was negligent and allowed their phone to be connected to the hacker's hardware, capable of intercepting the POST data before it was sent over the SSL connection.
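For what it's worth, the salted hash from the P.S. would look something like this (MD5 only because that is what is described above; the same pattern works with any stronger digest):

    import java.security.MessageDigest

    // Salted hash as described: hash(password + userId + app-wide salt).
    fun passHash(password: String, userId: Long): String {
        val salted = password + userId + "HACKTHISSUCKERS"
        return MessageDigest.getInstance("MD5")
            .digest(salted.toByteArray())
            .joinToString("") { "%02x".format(it) }
    }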
I'm writing an Android app that communicates via HTTPS with a server application. On the server side, I have to be absolutely sure about the Android app's integrity. This means that the server app needs to be sure that it's communicating with the Android app that I developed and not with a re-written one (e.g. after decompiling the original app or after having rooted the device).
Is there a way to ensure that? Perhaps something involving the signature of the APK file?
Any hint is appreciated.
Regards,
Peter
You are trying to address a known problem:
You can never trust an application on an open device (mobile phone, desktop computer). In order to trust it, the device would have to be tamper-proof. An example of such a device is a smart card. Mobile devices are certainly not that.
You should never send data to a device that the user is not supposed to see. The implication of this is that all business logic must be done on the server.
All requests to the server should be authenticated with the user's credentials (username/password) and made via a secure protocol (HTTPS/SSL).
No way. Whatever is in the user's hands is not yours anymore. Even if you somehow manage to transfer the APK to the server for validation, nothing prevents a hacked program from sending an original copy to the server.
In order to validate that it is your software that is running, the client devices need to be able to provide remote attestation services, which is one of the many piles of acronyms in the TPM world. I found that someone has been working on providing TPM services on Android, including IBM's IMA, which is almost good enough for what you want.
Details here: http://www.vogue-project.de/cms/upload/vogueSoftware/Manual.pdf (Google Quickview).
Of course, this is emulating the TPM, and requires patching the Android kernel. But perhaps one of the various manufacturers would be willing to build a model with the TPM hardware included for you?