I am trying to learn about making secure mobile applications. I am curious to know: do we need certificate pinning if our network calls from mobile to server use HTTPS?
Yes. You should save the certificate as a raw resource in your app and use it when making requests to the server.
Pay attention: if you have a self-signed certificate you have to explicitly configure the app to trust it, since it was not issued by a trusted certificate authority.
I am curious to know: do we need certificate pinning if our network calls from mobile to server use HTTPS?
HTTPS guarantees that the data in transit between your mobile app and the API server is encrypted and cannot be spied on by third parties, thus partially protecting it against Man in the Middle (MitM) attacks.
I say partially because an attacker can trick users into installing a custom SSL certificate in order to use free wifi. This is usually done with fake wifi captive portals, where you have to sign in to get the free wifi, like the ones you find in airports, trains, etc. If the attacker succeeds in tricking the user, then all the traffic is routed through the attacker's computer, and despite being HTTPS it can be decrypted, because the mobile app is now using the attacker's custom certificate. The attacker keeps using the original certificate when communicating with the API server, so neither the API server, the mobile app, nor the user is aware that the communication is being intercepted and maybe even tampered with.
Using certificate pinning will prevent this kind of Man in the Middle attack, even the ones where the user of the mobile app is the attacker, intentionally decrypting their own traffic in order to reverse engineer the communication between the mobile app and the API server and gain enough knowledge to mount an attack against it.
Now it is time for the bad news... certificate pinning can be bypassed when the attacker has access to, or controls, the mobile device. This article walks you through how certificate pinning can be implemented and then bypassed by using a framework like xPosed, which intercepts the calls that validate the certificate, thus defeating the validation process.
So should you use certificate pinning? Yes, you should, because it is one more layer of defence and increases the effort an attacker needs to reverse engineer your mobile app, to the point where he may deem it not worth the effort. If he does consider it worth the effort, then you may want to search Google for a Mobile App Attestation solution to further protect the communication between the mobile app and the API server.
But keep in mind that while certificate pinning may be easy to implement in your mobile app, it may be an operational nightmare to maintain. Be sure to read the section PINNING IS A NIGHTMARE in the link I referenced previously about bypassing certificate pinning.
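To make the recommendation concrete, here is a minimal sketch, assuming an Android app that uses OkHttp, of what pinning a connection can look like; the hostname and pin value are placeholders for your own:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient
import okhttp3.Request
import javax.net.ssl.SSLPeerUnverifiedException

// The pin is the base64 encoded SHA-256 hash of the server certificate's public key.
val client = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
            .build()
    )
    .build()

fun callApi() {
    val request = Request.Builder().url("https://api.example.com/v1/data").build()
    try {
        client.newCall(request).execute().use { response ->
            println(response.code)
        }
    } catch (e: SSLPeerUnverifiedException) {
        // Thrown when the certificate presented by the server, or by a MitM proxy,
        // does not match the pinned public key hash.
        println("Pinning failure: ${e.message}")
    }
}
```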
EDIT
You can read the article Steal that API Key with a Man in the Middle Attack to see a practical example of how a secret can be extracted from an HTTPS request sent from a mobile app to the backend API by performing a MitM attack.
So, in this article you will learn how to set up and run a MitM attack to intercept HTTPS traffic on a mobile device under your control, so that you can steal the API key. Finally, you will see at a high level how MitM attacks can be mitigated.
While we can use advanced techniques, like JNI/NDK, to hide the API key in the mobile app code, it will not stop someone from performing a MitM attack in order to steal the API key. In fact a MitM attack is easy to the point that it can even be achieved by non-developers.
Afterwards you can also take a look at the article Securing HTTPS with Certificate Pinning on Android to understand how you can implement certificate pinning to prevent a MitM attack.
In this article you have learned that certificate pinning is the act of associating a domain name with its expected X.509 certificate, and that this is necessary to protect trust-based assumptions in the certificate chain. Mistakenly issued or compromised certificates are a threat, and it is also necessary to protect the mobile app against their use in hostile environments like public wifi, or against DNS hijacking attacks.
You also learned that certificate pinning should be used anytime you deal with Personally Identifiable Information or any other sensitive data, otherwise the communication channel between the mobile app and the API server can be inspected, modified or redirected by an attacker.
Finally you learned how to prevent MitM attacks with the implementation of certificate pinning in an Android app, first by using a network security config file for modern Android devices, and later by using the TrustKit package, which supports certificate pinning for both modern and old devices.
I hope that both articles have made it clearer why certificate pinning is recommended to secure HTTPS connections from prying eyes.
EDIT 2
If you want to implement certificate pinning but are afraid of making a mistake while creating the network_security_config.xml configuration file, then just use this free Mobile Certificate Pinning Generator to create it for you. It will give you a network_security_config.xml configuration file ready to copy and paste into your project.
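If you prefer to derive the pin value yourself, here is a minimal sketch (plain Kotlin on the JVM; the certificate file name is a placeholder) of how the sha256/... value expected by both the network security config and OkHttp's CertificatePinner is computed: it is the Base64 of the SHA-256 digest of the certificate's DER-encoded public key (the SubjectPublicKeyInfo), not of the whole certificate:

```kotlin
import java.io.FileInputStream
import java.security.MessageDigest
import java.security.cert.CertificateFactory
import java.security.cert.X509Certificate
import java.util.Base64

fun main() {
    // Load the server certificate from disk (file name is a placeholder).
    val cf = CertificateFactory.getInstance("X.509")
    val cert = FileInputStream("api_example_com.pem").use {
        cf.generateCertificate(it) as X509Certificate
    }

    // The pin is base64(SHA-256(SubjectPublicKeyInfo)), i.e. a digest of the
    // DER-encoded public key, not of the whole certificate.
    val spki = cert.publicKey.encoded
    val digest = MessageDigest.getInstance("SHA-256").digest(spki)
    println("sha256/" + Base64.getEncoder().encodeToString(digest))
}
```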
Related
I am currently designing a native app which will have a simple username / password login. On a valid authentication, the backend server is issuing a JWT access token and refresh token which are eventually used throughout subsequent API calls.
My concern is that using such an approach, my app can be prone to phishing whereby a hacker can decompile and recompile my app with malicious interceptors.
In order to overcome this, I was thinking of designing my Login API so that, rather than having an accessToken and a refreshToken in the payload response of the API, the Login API will issue a 301 redirect to myapp://some-path?accessToken=X&refreshToken=Y. This will ensure that if a phishing app is calling my APIs, the accessToken and the refreshToken are sent to the original app.
Is this a correct approach? If not, which design would be suggested?
How would that help? If I'm going into your app to hack and recompile it, I'll be taking over the entire app. I won't leave a copy of the old version; I'm just going to replace the URLs. So when you redirect to myapp://somepath, it'll be redirecting to the same hacked app.
Secondly, how would a redirect do anything? Making an HTTP request doesn't cause an app to be launched on the client. It causes the HTTP request to return a response object with a status code of 301. So your redirect URL wouldn't be launched anyway.
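To make that concrete, here is a minimal sketch (Kotlin with OkHttp; the URL is hypothetical) showing that a 301 redirect is just data in the response object that whatever client made the call, genuine or repackaged, gets to read; nothing is launched on the device:

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    // Don't follow redirects automatically so we can inspect the 301 ourselves.
    val client = OkHttpClient.Builder()
        .followRedirects(false)
        .build()

    val request = Request.Builder()
        .url("https://api.example.com/login") // hypothetical login endpoint
        .build()

    client.newCall(request).execute().use { response ->
        // The "redirect" is nothing more than a status code and a header value.
        println(response.code)               // e.g. 301
        println(response.header("Location")) // e.g. myapp://some-path?accessToken=...
        // Whoever made this call now holds the tokens.
    }
}
```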
Third, that's already too late. By the time you have an access and refresh token, you've logged in. That means the hacked app has your password and credentials. 2FA may help here a bit to make it harder to attack, but only if it's time-based, and even then it just reduces the window.
If a user is installing a hacked version of your app and is willing to sign into it, you've already lost.
YOUR PROBLEM
I am currently designing a native app which will have a simple username / password login. On a valid authentication, the backend server is issuing a JWT access token and refresh token which are eventually used throughout subsequent API calls.
You need to bear in mind that any secret that leaves the backend or is stored inside a mobile app is no longer a secret; it is effectively public, even if you are using TLS (which can be intercepted in some scenarios) and certificate pinning (which can be bypassed in some circumstances), because an attacker can reverse engineer your mobile app with a plethora of open source tools and techniques, one of them being static decompilation, as you mention:
My concern is that using such an approach, my app can be prone to phishing whereby a hacker can decompile and recompile my app with malicious interceptors.
Once it's decompiled the attacker can do whatever he wants with your code, not just add interceptors. It's a well-known problem that the Google Play store is full of cloned and repackaged apps that use the same backend as the genuine app. There is even a repo, RePack: A repository of repackaged Android apps, that illustrates the problem:
RePack is a repository of over 15,000 repackaged Android app pairs collected from AndroZoo. The SHA256 of the apps are available in the repackaging_pairs.txt file. The actual APKs are available in the AndroZoo dataset, which can be downloaded by giving a SHA256 as input. Please follow this page to learn how to download apps from AndroZoo.
Once repackaged, the apps can be used simply to add ads and act as a fake front for your service, where the profits go to the attacker, or they can be used to distribute malware and/or spy on users, as seen in the article 700,000 malicious Android apps found in Google Play store last year:
Google has confirmed that it had to remove nearly 700,000 potentially malicious apps from the Google Play store in 2017 – an increase of 70% from the previous year.
Reverse Engineering
The truth is that anything running on the client side can be reverse engineered easily by an attacker on a device he controls.
When reverse engineering a mobile app, an attacker's first step may be to perform static binary analysis to extract all static secrets and to identify attack vectors, and you can see how in my article
How to Extract an API key from a Mobile App with Static Binary Analysis:
The range of open source tools available for reverse engineering is huge, and we really can't scratch the surface of this topic in this article, but instead we will focus on using the Mobile Security Framework (MobSF) to demonstrate how to reverse engineer the APK of our mobile app. MobSF is a collection of open source tools that present their results in an attractive dashboard, but the same tools used under the hood within MobSF and elsewhere can be used individually to achieve the same results.
In this article we will use the Android Hide Secrets research repository, which is a dummy mobile app with API keys hidden using several different techniques.
Another alternative to extract secrets is to perform a Man in the Middle (MitM) attack to intercept the TLS connection, extract all secrets and learn how the mobile app interacts with the backend and any other third party APIs, and you can see how I did it in the article
Steal that API Key with a Man in the Middle Attack:
In order to help demonstrate how to steal an API key, I have built and released on GitHub the Currency Converter Demo app for Android, which uses the same JNI/NDK technique we used in the earlier Android Hide Secrets app to hide the API key.
So, in this article you will learn how to set up and run a MitM attack to intercept HTTPS traffic on a mobile device under your control, so that you can steal the API key. Finally, you will see at a high level how MitM attacks can be mitigated.
Wait, I can imagine you thinking now: but I will use certificate pinning to protect my TLS channel. No problem, just see how certificate pinning can be easily bypassed on a device the attacker controls in the article Bypassing Certificate Pinning:
In this article you will learn how to repackage a mobile app in order to disable certificate pinning, and in the process you will also learn how to create an Android emulator with a writable system to allow adding the custom certificate authority of the proxy server into the Android operating system trust store. This will allow us to bypass certificate pinning and intercept the requests between the mobile app and its backend with a MitM attack.
While repackaging a mobile app to bypass pinning is a good solution, my preference is to do it dynamically at runtime, as I show in the article How to Bypass Certificate Pinning with Frida on an Android App:
Today I will show how to use the Frida instrumentation framework to hook into the mobile app at runtime and instrument the code in order to perform a successful MitM attack even when the mobile app has implemented certificate pinning.
Bypassing certificate pinning is not too hard, just a little laborious, and allows an attacker to understand in detail how a mobile app communicates with its API, and then use that same knowledge to automate attacks or build other services around it.
Now you may be thinking that you have lost the battle and shouldn't bother adding protections because they will end up being bypassed. Well, I have to tell you that software security shares the same traits as the defences of medieval castles, which were built in layers in order to make an assault on the castle as hard as possible, and hopefully impossible. So you need to adopt the same posture and put in place as many security layers as you can afford, and as the law requires, in order to make the attacker's efforts time consuming and to require more expertise than most of them have.
YOUR QUESTIONS
In order to overcome this, I was thinking of designing my Login API so that, rather than having an accessToken and a refreshToken in the payload response of the API, the Login API will issue a 301 redirect to myapp://some-path?accessToken=X&refreshToken=Y. This will ensure that if a phishing app is calling my APIs, the accessToken and the refreshToken are sent to the original app.
Is this a correct approach?
So I think that by now it is obvious that your solution will not be effective, and that it can be easily defeated by repackaging the mobile app, MitM attacking it, or simply instrumenting the code at runtime.
If not, which design would be suggested?
You should use a security approach where the API backend is able to recognize that what is making the request is indeed a genuine, untampered version of your mobile app (not a repackaged or cloned one), not just who is making the request, as is the case when user authentication alone is used to protect against unauthorised access to an API.
The Difference Between WHO and WHAT is Accessing the API Server
I wrote a series of articles around API and mobile security, and in the article Why Does Your Mobile App Need An API Key? you can read in detail the difference between who and what is accessing your API server, but I will extract here the main takeaways from it:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
So think about the who as the user your API server will be able to authenticate and authorize access to the data, and think about the what as the software making that request on behalf of the user.
POSSIBLE SOLUTION
I recommend you read this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Hardening and Shielding the Mobile App, Securing the API Server and A Possible Better Solution.
You will see that you can use a plethora of defence layers, like in medieval castles, and that the solution you may want to employ, to guarantee with a high degree of confidence that what is making the request to the backend is indeed your genuine mobile app, is Mobile App Attestation.
Do You Want To Go The Extra Mile?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
In my Android app we are using Retrofit for web service communication with the server. A hacker intercepted a request and modified it using a tool like Burp Suite.
Please help me understand how I can stop this interception attack.
What Burp Suite does is basically perform a Man-in-the-Middle attack: it generates its own HTTPS certificate to present to your app and forwards the traffic to the real server as if it were a regular client.
The thing is, if your server and your client are protected against this MitM attack, those tools won't work, at least in mobile apps; a browser will show a security error but will still pass the data through.
The solution you can use is to include your specific SSL certificate in the app and make the app consider it the only trusted one. It will be more or less secure depending on the implementation. It is also free, because you can ship a self-signed certificate you created yourself, since you control the verification. Naturally, the backend should use the same SSL certificate. With this technique, Burp Suite generated certificates won't work, because the app only trusts that one certificate.
The technique itself is called SSL pinning or certificate pinning and you can find plenty of info online about how to implement it both on the client and server.
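As a rough illustration of the "trust only the bundled certificate" approach described above, here is a minimal sketch in Kotlin, assuming the server certificate ships as a raw resource (R.raw.my_server_cert is a placeholder) and that Retrofit is then built on top of the resulting OkHttp client:

```kotlin
import android.content.Context
import okhttp3.OkHttpClient
import java.security.KeyStore
import java.security.cert.CertificateFactory
import javax.net.ssl.SSLContext
import javax.net.ssl.TrustManagerFactory
import javax.net.ssl.X509TrustManager

fun pinnedClient(context: Context): OkHttpClient {
    // Load the certificate bundled with the app (resource name is a placeholder).
    val cf = CertificateFactory.getInstance("X.509")
    val serverCert = context.resources.openRawResource(R.raw.my_server_cert).use {
        cf.generateCertificate(it)
    }

    // Build a KeyStore that contains only our certificate...
    val keyStore = KeyStore.getInstance(KeyStore.getDefaultType()).apply {
        load(null, null)
        setCertificateEntry("server", serverCert)
    }

    // ...and a TrustManager that therefore trusts nothing else.
    val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()).apply {
        init(keyStore)
    }
    val trustManager = tmf.trustManagers[0] as X509TrustManager

    val sslContext = SSLContext.getInstance("TLS").apply {
        init(null, tmf.trustManagers, null)
    }

    return OkHttpClient.Builder()
        .sslSocketFactory(sslContext.socketFactory, trustManager)
        .build()
}
```

Retrofit then simply reuses this client via Retrofit.Builder().client(pinnedClient(context)); the links below cover the details.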
I will give you several links though:
Here is a nice article about how to do it with Retrofit (OkHttp).
Here is the official documentation for the OkHttp CertificatePinner.
Here is a small implementation of Retrofit SSL pinning.
Here is one more article.
This is not everything, and the issue is complex enough that one Stack Overflow answer won't suffice, but I think it is a good start for the actual implementation.
Also, as a small recommendation: use encryption to store your SSL certificate key instead of storing it as a plain string. It still won't be safe from memory snooping, but it will make it much harder for a hacker to use it.
I am implementing SSL pinning in our android app. I have pinned 2 certificates (current and backup) at the client by embedding them in the app.
Now, I want to have a mechanism in place to update these certificates without having to roll out an app upgrade in case the certificates expire or the private key is compromised. How can I implement that?
One possible solution I am seeing is through app notifications: I can broadcast a notification with the new certificates and store them on the client. Is there any problem with this approach, or is there a better approach?
PUBLIC KEY PINNING
I am implementing SSL pinning in our android app. I have pinned 2 certificates (current and backup) at the client by embedding them in the app.
If you pin against the public key you do not need to update your mobile app each time a certificate is rotated on the server, as long as the new certificate keeps the same public key, and you can read the article Hands On Mobile API Security: Pinning Client Connections for more details on how this can be done:
For networking, the Android client uses the OKHttp library. If our digital certificate is signed by a CA recognized by Android, the default trust manager can be used to validate the certificate. To pin the connection it is enough to add the host name and a hash of the certificate’s public key to the client builder(). See this OKHttp recipe for an example. All certificates with the same host name and public key will match the hash, so techniques such as certificate rotation can be employed without requiring client updates. Multiple host name - public key tuples can also be added to the client builder().
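As a rough illustration of that recipe with OkHttp's CertificatePinner (hostname and pin values are placeholders), pinning against public key hashes and shipping a backup pin looks roughly like this:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Pin against public key hashes rather than whole certificates, and ship a
// second pin for the backup key pair so the server can rotate to it without
// forcing an app update. Hostname and pins below are placeholders.
val pinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/PRIMARYKEYPINBASE64VALUEPLACEHOLDER0000000=") // current key pair
    .add("api.example.com", "sha256/BACKUPKEYPINBASE64VALUEPLACEHOLDER00000000=") // backup key pair
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()
```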
For the situation where the private key backing the certificate gets compromised, you will end up in the same situation you are trying to solve now, that is, needing to release a new mobile app to update what you trust to pin against. In other words, the public key cannot be trusted anymore, thus the server must rotate the certificate to one using the backup key pair whose pin you have already released with your mobile app. This approach will give you time for a new release, one that removes the pin for the compromised key, without locking out all your users.
You should always store the backup private keys in separate places, so that if one is compromised you don't get them all compromised at once, because in that case having a backup pin released with the mobile app is useless.
DON'T DO THIS
Now, I want to have a mechanism in place to update these certificates without having to roll out an app upgrade in case the certificates expire or the private key is compromised. How can I implement that?
Unfortunately the safest method to deal with a compromised private key is to release a new mobile app that no longer trusts it. Any remote solution you may devise to update the certificates will open the mobile app's doors for attackers to replace the certificates you are pinning against.
So my advice is not to go down this road, because you will shoot yourself in the foot more easily than you might think.
One possible solution I am seeing is through app notifications: I can broadcast a notification with the new certificates and store them on the client. Is there any problem with this approach, or is there a better approach?
Even though the mobile app has the connection pinned, the pinning can be bypassed, thus a MitM attack can be performed and the new certificates retrieved from the attacker's server instead of from your server. Please read the article The Problem with Pinning for more insights on bypassing it:
Unpinning works by hooking, or intercepting, function calls in the app as it runs. Once intercepted the hooking framework can alter the values passed to or from the function. When you use an HTTP library to implement pinning, the functions called by the library are well known so people have written modules which specifically hook these checking functions so they always pass regardless of the actual certificates used in the TLS handshake. Similar approaches exist for iOS too.
While certificate pinning can be bypassed, it is still strongly advised to use it, because security is all about layers of defense: the more you have, the harder it will be to overcome them all. This is nothing new; if you think of medieval castles, they were built with this approach.
A POSSIBLE BETTER APPROACH
But you also asked for a better approach:
Is there any problem with this approach, or is there a better approach?
As already mentioned you should pin against the public key of the certificate to avoid lockouts of the client when you rotate the server certificates.
While I cannot point you to a better approach for dealing with compromised private keys, I can point out that, to protect certificate pinning from being bypassed with instrumentation frameworks like xPosed or Frida, we can employ the Mobile App Attestation technique, which will attest the authenticity of the mobile app.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
Before we dive into the Mobile App Attestation technique, I would first like to clear up a common misconception among developers regarding WHO and WHAT is calling the API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the differences between WHO and WHAT is accessing an API server, let's use this picture:
The Intended Communication Channel represents the mobile app being used as you expected, by a legit user without any malicious intentions, using an untampered version of the mobile app, and communicating directly with the API server without being man in the middle attacked.
The actual channel may represent several different scenarios, like a legit user with malicious intentions using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while MitM attacking it, to understand how the communication between the mobile app and the API server is done in order to be able to automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
To your surprise, you may end up discovering that it can be one of your legit users using a repackaged version of the mobile app, or an automated script that is trying to game and take advantage of the service provided by the application.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first article in a series of articles about API keys.
Mobile App Attestation
The use of a Mobile App Attestation solution will enable the API server to know WHAT is sending the requests, thus allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app has not been tampered with and is not running on a rooted device, by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and the device it is running on.
On successful attestation of the mobile app's integrity, a short-lived JWT token is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud are aware of. In case of failure of the mobile app attestation, the JWT token is signed with a secret that the API server does not know.
Now the app must send the JWT token in the headers of every API request. This allows the API server to only serve requests when it can verify the signature and expiration time of the JWT token, and to refuse them when the verification fails.
Since the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a Man in the Middle attack.
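As a rough sketch of the API server side of this flow (this is not any vendor's actual implementation; an HS256-signed token is an assumption), the check boils down to recomputing the HMAC over the token with the shared secret, plus validating the exp claim:

```kotlin
import java.security.MessageDigest
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Verify that an HS256 JWT (HMAC-SHA256 over "header.payload") was signed with
// the secret shared between the attestation service and the API server.
// A real service would use a JWT library and also check the exp claim.
fun isValidAttestationToken(token: String, sharedSecret: ByteArray): Boolean {
    val parts = token.split(".")
    if (parts.size != 3) return false

    val mac = Mac.getInstance("HmacSHA256").apply {
        init(SecretKeySpec(sharedSecret, "HmacSHA256"))
    }
    val expected = mac.doFinal("${parts[0]}.${parts[1]}".toByteArray(Charsets.US_ASCII))
    val provided = Base64.getUrlDecoder().decode(parts[2])

    // Constant-time comparison to avoid leaking timing information.
    return MessageDigest.isEqual(expected, provided)
}
```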
So this solution works in a positive detection model without false positives, thus not blocking legit users while keeping the bad guys at bay.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which ones to deny.
CONCLUSION
So I recommend that you switch to pinning the certificates by their public keys, and if you want to protect against certificate pinning being bypassed, and against other threats, then you should devise your own Mobile App Attestation solution or use one that is ready to plug and play.
In the end, the solution to use to protect your mobile app and API server must be chosen in accordance with the value of what you are trying to protect and the legal requirements for that type of data, like the GDPR regulations in Europe.
DO YOU WANT TO GO THE EXTRA MILE?
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
In a recent discussion with a colleague we analysed the implications of relying on Android's default system validation of CA-signed TLS certificates from known CAs like Verisign, Comodo, etc. The discussion centered on the issue that the OS might be compromised (maybe rooted, infected with malware, or otherwise hacked) and the entire CA certificate store modified to validate certificates not signed by actual CAs.
A proposed solution to this was implementing certificate validation in the app itself: having a list of root CA certificates in the app (which would in theory be much narrower, considering that the app developer knows which certificates he'll be using) and letting the app do the verification. This would allow for real validation without depending on the system's CA certificates.
I'm still a bit worried about implementing this incorrectly; how smart is it to let the app do the validation?
If the system is compromised, an application running with the same or fewer privileges than the compromised system cannot save the day. Essentially you would not only need to implement TLS validation on your own but also the full TLS stack, because the encryption libraries might be changed on the system and provide a backdoor for the attacker to get to the plain text. Additionally you would need to make sure that your application is not hijacked and the plain text information stolen outside the encryption process, i.e. before encryption and after decryption. But if the attacker has fully compromised the system, you cannot stop this.
It is probably far more likely that you will introduce new bugs while implementing the fairly complex TLS stack on your own, and thus make your application insecure not only on compromised systems but also on healthy ones.
I'm setting up a server which an android app and an iPhone app will connect to. And I'm wondering what type of security is more secure for sending/requesting data?
Currently I generate an HMAC-SHA256 of the content I'm sending to the server and put it in a header to verify its integrity.
But I'm wondering if it's more secure to use an HTTPS connection instead? If I use HTTPS, could I skip the HMAC?
I would like to know the differences in security, which is more secure?
And also, if I'm using either is it better to use both for an extra layer of security?
Quick answer to your questions: SSL, if used properly, should give you more security guarantees than HMAC. So SSL can usually be used in a way that removes the need for HMAC.
HMAC provides integrity as well as authenticity. Assuming the client and the server use pre-shared symmetric keys to calculate the HMACs, one side can be sure that the device on the other end has the secret key. This provides authenticity of both server and client.
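For reference, here is a minimal sketch in Kotlin of the kind of HMAC computation being discussed; both sides compute it over the request body with the pre-shared key, and the server rejects the request when the values do not match (the header name is just an example):

```kotlin
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Compute an HMAC-SHA256 over the request body with a pre-shared key.
// The client sends the result in a header (e.g. X-Content-Hmac) and the
// server recomputes it and compares the two values.
fun hmacSha256(preSharedKey: ByteArray, body: ByteArray): String {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(preSharedKey, "HmacSHA256"))
    return Base64.getEncoder().encodeToString(mac.doFinal(body))
}
```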
What is missing in this picture (with just HMAC) is confidentiality. What is the nature of data exchanged between the server and client? Is there any sensitive user data being transferred during the communication that you don't want a man-in-the-middle to see? If so, then you may want to use SSL.
SSL gives you confidentiality (among other things), meaning that you can be sure you have a secured end-to-end connection and no man-in-the-middle can see what data is being exchanged between the server and client. However, common SSL usage does not include client machine authentication. For example, your web browser checks PayPal's authenticity when you go to their HTTPS webpage, but the PayPal server does not ask your browser to send any certificate from your side.
Since you are comparing SSL with HMAC, I am assuming you care about authenticity of both sides. So, use SSL with both server and client authentication. This basically means that both of them would ask for each other's certificates and check different aspects of the certificates (i.e. common name, certificate issuer etc.). You can create your own certificate issuer to sign these certificates.
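As a rough sketch of what client-side certificate authentication (mutual TLS) can look like with OkHttp on the JVM/Android (the keystore file name and password are placeholders), the client certificate and its private key are loaded from a PKCS#12 keystore and wired into the TLS context:

```kotlin
import okhttp3.OkHttpClient
import java.io.FileInputStream
import java.security.KeyStore
import javax.net.ssl.KeyManagerFactory
import javax.net.ssl.SSLContext
import javax.net.ssl.TrustManagerFactory
import javax.net.ssl.X509TrustManager

fun mutualTlsClient(): OkHttpClient {
    val password = "changeit".toCharArray() // placeholder password

    // Keystore holding the client certificate and its private key (placeholder file).
    val clientStore = KeyStore.getInstance("PKCS12").apply {
        FileInputStream("client.p12").use { load(it, password) }
    }
    val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()).apply {
        init(clientStore, password)
    }

    // Use the platform's default trust store for validating the server certificate.
    val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()).apply {
        init(null as KeyStore?)
    }
    val trustManager = tmf.trustManagers[0] as X509TrustManager

    val sslContext = SSLContext.getInstance("TLS").apply {
        init(kmf.keyManagers, tmf.trustManagers, null)
    }

    return OkHttpClient.Builder()
        .sslSocketFactory(sslContext.socketFactory, trustManager)
        .build()
}
```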
If you are making an app for the App Store or Google Play that users can simply install and start using, you may want to think through how the client-side certificates will be generated and signed, or who will sign them. You can remove the need for client-side certificates (and signing) by adopting a model similar to GitHub's, where the user manually informs the server of trusted public keys to authenticate devices. But you can probably see how this process might not be user friendly.