I have an Ionic PWA app published for Android and iOS (I used Capacitor to generate the native builds). The frontend code contains my Google Maps API key; however, I can't restrict it with any of the options Google offers, because...
HTTP referrers - The app isn't served from a public domain name; it runs from localhost inside the native app's webview: http://localhost/ for Android and capacitor://localhost/ for iOS. Using these as restrictions does not seem very secure, as they are very generic and every other Capacitor app will have the same ones.
IP addresses - For obvious reasons.
Android Apps - It's not within the native code, it's within a webview.
iOS Apps - It's not within the native code, it's within a webview.
None of these options can work for my situation. So how can I protect my API key from abuse?
Any ideas? I can't be the only one using the Google Maps API within an Ionic app.
You can configure the hostname of Capacitor apps:
"server": {
// You can configure the local hostname, but it's recommended to keep localhost,
// as it allows you to use web APIs that require a secure context, such as
// navigator.geolocation and MediaDevices.getUserMedia.
"hostname": "unique-app"
}
and then restrict the API key to capacitor://unique-app
https://capacitor.ionicframework.com/docs/basics/configuring-your-app
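For reference, a full capacitor.config.json using this option could look roughly like the following (the appId, appName and hostname values are placeholders, not from the answer above):

{
  "appId": "com.example.myapp",
  "appName": "My App",
  "webDir": "www",
  "server": {
    "hostname": "unique-app"
  }
}

With such a hostname, the referrer restriction in the Google Cloud console would be something like capacitor://unique-app/* for iOS; on Android the webview scheme is typically http, so http://unique-app/* may also be needed, depending on your Capacitor version and androidScheme setting.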
To protect your API key, you have to check the value of window.location.href within the webview. I guess you will see something like file://some/path.
So you will need to apply an HTTP referrer restriction for this path. Note that URLs with a file:// protocol require a special representation, as explained in
https://developers.google.com/maps/documentation/javascript/get-api-key#restrict_key
Note: file:// referers need a special representation to be added to the key restriction. The "file://" part should be replaced with "__file_url__" before being added to the key restriction. For example, "file:///path/to/" should be formatted as "__file_url__//path/to/*". After enabling file:// referers, it is recommended you regularly check your usage, to make sure it matches your expectations.
I hope this helps.
Old question but still...
If you do not want to store the API key in your app, request it at run time from your own server over an HTTPS POST request before making any Google Maps requests.
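As a rough sketch of that idea in an Ionic/TypeScript app (the /maps-key endpoint and the token handling are hypothetical, just to illustrate the flow):

// Fetch the Maps key from your own backend at run time, then inject the Maps JS SDK.
// The endpoint URL and auth header are placeholders for whatever your server exposes.
async function loadGoogleMaps(sessionToken: string): Promise<void> {
  const res = await fetch('https://api.example.com/maps-key', {
    method: 'POST',
    headers: { Authorization: `Bearer ${sessionToken}` },
  });
  const { key } = await res.json();

  const script = document.createElement('script');
  script.src = `https://maps.googleapis.com/maps/api/js?key=${encodeURIComponent(key)}`;
  document.head.appendChild(script);
}

Keep in mind the key still reaches the client at run time, so it can still be intercepted or extracted; this only keeps it out of the shipped bundle and lets you rotate it server-side.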
I have a mobile app written in Flutter. I am calling Google APIs for Places and for Geocoding. Since the calls to these services are made by including the key in the URL, it is very important that these keys be restricted so that someone can't just intercept them, make many calls, and run up a big bill for us with Google.
Examples of API calls are:
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=$input&types=address&language=$lang&components=country:za&key=$apiKey
and
https://maps.googleapis.com/maps/api/geocode/json?latlng=$lat,$lon&key=$apiKey
(where $apiKey is our Google Maps API credentials key, and $input, $lang, $lat and $lon represent other variables)
Google currently allows the following Application Restrictions on API Credentials:
None
HTTP Referrers
IP Addresses
Android Apps
iOS Apps
When I don't have any restrictions on my key (option "None") then my mobile application works perfectly fine. However, if I choose to restrict it to iOS Apps (and specify the Bundle Identifier for my app) I get the error "This IP, site or mobile application is not authorized to use this API key". The same happens when I restrict the key to Android Apps and specify my Package Id and SHA1 signature for the Android app.
What am I missing? According to https://developers.google.com/maps/api-key-best-practices#restrict_apikey I should be fine to restrict the key to my specific mobile application. It seems like Google Maps is not picking up that I'm indeed making the call from the correct mobile app. The error occurs regardless of whether I run this in the Simulator or on an actual device in either debug or release mode. (I have tested the iOS version of the app from Test Flight too. When I remove the restrictions, my api calls work; when I restrict it only to my iOS app's Bundle Identifier it stops working.) Is there anything else I need to configure? Is the problem perhaps that the app is written in Flutter?
I found some links (like this) that suggest that mobile applications should never use the key directly in the URL but should rather use our own server as a proxy to Google, and that we should then restrict access to our server's IP. This seems like unnecessary overhead, since the very existence of the option to restrict the key to specific app IDs and platforms suggests that this should be possible (as does the Google documentation I refer to).
I ended up creating a proxy server. (More on that below).
Thinking about it critically, I realised there would not really be any way for the API to discern whether incoming requests are made from a specific mobile app or not, so I don't think the restriction to specific iOS or Android apps is really possible for plain HTTPS requests.
I also investigated some of the Flutter plugins provided, but most of them seem to use google_maps_webservice in the background. And google_maps_webservice requires either a key in the app code or a proxy to be specified. Most of the plugins derived from google_maps_webservice don't even offer the proxy option and require a key to be specified. So even if these plugins are used, one ends up "giving away" the key in the app code, which makes it possible to reverse-engineer the app and obtain the key. (It ends up in the AppDelegate.m file on iOS and AndroidManifest.xml on Android.) Google recommends against giving out your API key in this way.
So I ended up creating a proxy server. Requests are made to my proxy without the key and then the proxy passes on the request to Google after adding the secret key. The proxy then returns the response to the app. The app and traffic between app and server never contains a key.
Luckily my app was already using AWS and users were signing in with Cognito. So I could create the Proxy easily in API Gateway and I let the app authenticate to my proxy server using Cognito. This ensures that only users validly logged in on my app in Cognito will be able to call the proxy and keys are always kept secret.
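For readers not on AWS, a minimal self-hosted version of the same idea could look like this Node/Express sketch in TypeScript (the route name, auth check and environment variable are illustrative assumptions, not the API Gateway/Cognito setup described above):

import express, { Request, Response, NextFunction } from 'express';

const app = express();
const GOOGLE_KEY = process.env.GOOGLE_MAPS_KEY!; // the key lives only on the server

// Placeholder auth check; in the answer above this role is played by Cognito.
function requireAuth(req: Request, res: Response, next: NextFunction) {
  if (!req.headers.authorization) return res.status(401).end();
  next();
}

// The app calls this route without any key; the proxy adds it before forwarding.
app.get('/geocode', requireAuth, async (req: Request, res: Response) => {
  const { lat, lon } = req.query;
  const url = `https://maps.googleapis.com/maps/api/geocode/json?latlng=${lat},${lon}&key=${GOOGLE_KEY}`;
  const upstream = await fetch(url); // global fetch, available in Node 18+
  res.json(await upstream.json());
});

app.listen(3000);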
If you want to take advantage of the Google Maps API restrictions for a specific Android/iOS app, you'll need to be using the native Android and iOS SDKs, which rely on the SHA-1 fingerprint and package name, or the bundle ID.
As the Google Places API seems popular, a number of third-party attempts exist: https://pub.dev/packages?q=google+places
A similar situation exists for geocoding: https://pub.dev/packages?q=google+geocoding
You can see if any fit your app or you'll need to create your own platform channel for support: https://flutter.dev/docs/development/platform-integration/platform-channels
I know that this question has been asked many times. I've read all of them. I've searched this in Google but I still have questions that I wasn't able to find answers for.
Currently I'm storing API keys in my app, which, yes, is a bad practice. To the best of my knowledge I can use ProGuard or DexGuard to obfuscate, and I can also use the Keystore to securely store my API keys. Now for this part, here's my question:
- Obfuscation changes the name of variables and classes. My API Key will still be in that app when someone decompiles the apk file. Sure it might take more time, but how much more is it? For example 20 minutes? I feel like the bottom line is that they can use this key if they put some time and effort. Am I getting this wrong?
Now the other answer that I've seen on different websites was that I can store my keys on a server and then communicate that through my app.
How is that even possible? A server is going to send the API key to the app. If a hacker finds this URL they can just open it and get the key. If I use a key to access the URL, then I'm entering a never-ending loop of keys. How would I do this?
Someone said that I can encrypt my keys and then decrypt them in the app once they're received. But can't people decompile my decryption function and figure out my key?
I was also told that Firebase Remote Config is going to be a safe method for storing my keys. But then there's another problem:
How much safer is this method?
If I'm using a google-services.json file to identify my project, can't people just get my keys from the Remote Config part? I can't see any settings for Remote Config in my console to say who can access it and who can't. How can I securely store my API keys on Firebase?
And can't hackers just decompile the APK, change the code, and extract data from my Firebase account, since the google-services.json is there? If they print the extracted data, can they access everything?
So what exactly should I do to safely use API keys for my third-party applications? Some of these API keys are very valuable, while others just fetch information from other servers. I just want to know the safest method to store these keys.
HOW HARD CAN IT BE TO EXTRACT AN API KEY?
My API Key will still be in that app when someone decompiles the apk file. Sure it might take more time, but how much more is it? For example 20 minutes? I feel like the bottom line is that they can use this key if they put some time and effort. Am I getting this wrong?
You say For example 20 minutes?... Well, it depends on whether you already have the tools installed on your computer, but if you at least have Docker installed you can leverage some amazing open-source tools that make it trivial to extract the API key in much less than 20 minutes, maybe around 5 minutes. Just keep reading to see how you may do it.
Extract the API Key with Static Binary Analysis
You can follow my article How to Extract an API Key from a Mobile App with Static Binary Analysis, where you will learn how you may be able to do it in under five minutes, without any prior hacking knowledge. From it I quote:
I will now show you a quick demo on how you can reverse engineer an APK with MobSF in order to extract the API Key. We will use the MobSF docker image, but you are free to install it on your computer if you wish; just follow their instructions to do so.
To run the docker image just copy the docker command from the following gist:
#!/bin/bash
docker run -it --name mobsf -p 8000:8000 opensecurity/mobile-security-framework-mobsf
So after the docker container is up and running all you need to do is to visit http://localhost:8000 and upload your mobile app binary in the web interface, and wait until MobSF does all the heavy lifting for you.
Now if you have your API key hidden in native C/C++ code, then the above approach will not work, as I state in the same article:
By now the only API key we have not been able to find is the JNI_API_KEY from the C++ native code, and that is not so easy to do because the C++ code is compiled into a .so file that is in HEX format and doesn’t contain any reference to the JNI_API_KEY, thus making it hard to link the strings with what they belong to.
But don't worry: you can just use a Man in the Middle (MitM) attack or an instrumentation framework to extract the API key.
Extract the API Key with a MitM Attack
Just follow my article Steal that API Key with a Man in the Middle Attack to extract it on a device you control:
In order to help to demonstrate how to steal an API key, I have built and released in Github the Currency Converter Demo app for Android, which uses the same JNI/NDK technique we used in the earlier Android Hide Secrets app to hide the API key.
So, in this article you will learn how to setup and run a MitM attack to intercept https traffic in a mobile device under your control, so that you can steal the API key. Finally, you will see at a high level how MitM attacks can be mitigated.
Oh, but you may say that you use certificate pinning, so the MitM attack will not work. If so, I invite you to read my article about Bypassing Certificate Pinning:
In a previous article we saw how to protect the https communication channel between a mobile app and an API server with certificate pinning, and as promised at the end of that article we will now see how to bypass certificate pinning.
To demonstrate how to bypass certificate pinning we will use the same Currency Converter Demo mobile app that was used in the previous article.
In this article you will learn how to repackage a mobile app in order to make it trust custom ssl certificates. This will allow us to bypass certificate pinning.
Extract the API Key with an Instrumentation Framework
So if none of the above approaches works for you, then you can resort to using an instrumentation framework, like the very widely used Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
So no matter what you do, in the end a secret shipped in a mobile app can always be extracted; it just depends on the skill set of the attacker and the time and effort they are willing to put in.
STORING API KEYS ENCRYPTED IN THE MOBILE APP?
Someone said that I can encrypt my keys and then decrypt them in the app once they're received.
So you can go with the Android Hardware-backed Keystore:
The availability of a trusted execution environment in a system on a chip (SoC) offers an opportunity for Android devices to provide hardware-backed, strong security services to the Android OS, to platform services, and even to third-party apps.
At some point the secret retrieved from this keystore will need to be used to make the HTTP request, and at that point all an attacker needs to do is hook an instrumentation framework into the call to the function that returns the decrypted API key, and extract it when it is returned.
And to find the decryption function, all an attacker needs to do is decompile your APK and locate it, as you already thought of:
But can't people decompile my decryption function and figure out my key?
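To make that concrete, a Frida hook for such a case could look roughly like the sketch below (the class and method names are hypothetical, only there to show the idea):

Java.perform(() => {
  // Hypothetical app class that decrypts and returns the API key.
  const KeyProvider = Java.use('com.example.app.KeyProvider');
  KeyProvider.getApiKey.implementation = function () {
    const key = this.getApiKey();              // run the original decryption code
    console.log('Decrypted API key: ' + key);  // the attacker simply reads it here
    return key;
  };
});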
FIREBASE AND SAFETYNET TO THE RESCUE?
I was also told that Firebase Remote Config is going to be a safe method for storing my keys.
Once more, all the attacker needs to do is use an instrumentation framework to extract whatever they need from any function they identify as using the Firebase config.
Oh, but you may say that Firebase and/or your mobile app is protected with SafetyNet. Then I need to alert you to the fact that SafetyNet checks the integrity of the device the mobile app is running on, not the integrity of the mobile app itself, as per Google's own statement:
The goal of this API is to provide you with confidence about the integrity of a device running your app. You can then obtain additional signals using the standard Android APIs. You should use the SafetyNet Attestation API as an additional in-depth defense signal as part of an anti-abuse system, not as the sole anti-abuse signal for your app.
Also, I recommend you read this answer I gave to the question Android equivalent of ios devicecheck? in order to understand what a developer needs to be aware of when implementing SafetyNet in their mobile app.
So despite SafetyNet being a very good improvement for the Android security ecosystem, it was not designed to be used as a stand-alone defence, nor to guarantee that a mobile app has not been tampered with; for that you want to use the Mobile App Attestation concept.
PROXY OR BACKEND SERVER
Now the other answer that I've seen on different websites was that I can store my keys on a server and then communicate that through my app.
How is that even possible? A server is going to send the API key to the app. If a hacker finds this URL they can just open it and get the key. If I use a key to access the URL, then I'm entering a never-ending loop of keys. How would I do this?
While you may say this only shifts the problem from the mobile app to the proxy or backend server, I have to say that at least the proxy or backend server is under your control, while the mobile app isn't. Anyone who downloads it can do whatever they want with it, and you have no direct control over that; you can only add as many barriers as you can afford into the APK to make it harder.
I recommend you read my answer to the question How to restrict usage of an API key with Hash comparison? to better understand why you shouldn't try to secure your API keys in your mobile app, and should instead move them to your backend or a proxy server.
POSSIBLE BETTER SOLUTION
So what exactly should I do to safely use API keys for my third-party applications? Some of these API keys are very valuable, while others just fetch information from other servers. I just want to know the safest method to store these keys.
The best advice I can give you here is to read my answer to the question How to secure an API REST for mobile app? to understand how you can indeed get rid of API keys in the mobile app and allow your backend to have a high degree of confidence that the request indeed originates from a genuine instance of your mobile app.
DO YOU WANT TO GO THE EXTRA MILE?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
If you don't want the API key to be compromised, then you should not put it inside the app. You can use the following possible solutions:
You can keep your keys on a server and route all requests needing that key through your server. So, as long as your server is secure, so is your key. Of course, there is a performance cost with this solution. You can use SSL pinning to authenticate the response. Check this
You can get the signing key of your app programmatically and send it to the server in every API call to verify the request. But a hacker can somehow find out this strategy.
Google does not recommend storing API keys in Remote Config, but you can keep a token there and use it to verify the request before sending the API key. Check this
In the case of an Android app, you can use the SafetyNet API by Google to verify the authenticity of the app, and the server can generate a token for the user after verifying the SafetyNet response. The token can then be used to verify requests. There is a Flutter plugin available for the SafetyNet API.
You can use a combination of the above approaches to ensure the security of the API key. To answer your questions: Firebase Remote Config uses an SSL connection to transfer the data, which is quite secure, but you should not rely on it completely for your data security. You also shouldn't share API keys via APIs that are publicly accessible. Moreover, storing both the encrypted key and the data needed to decrypt it inside the app won't make it secure.
You can use freeRASP for Android, iOS, and Flutter to mitigate the risk of Reverse Engineering.
The premium plans offer more protection such as App Integrity Cryptogram to protect APIs from app impersonation and Secure Storage SDK to protect assets at rest.
I created an Android app and it is working fine.
The issue is that when we decompile the app we can see all the code, so a hacker can see our API URL and API classes and could clone the app.
So my question is: how can I secure my Android app to protect it from hackers?
YOUR PROBLEM
I created an Android app and it is working fine. The issue is that when we decompile the app we can see all the code, so a hacker can see our API URL and API classes and could clone the app.
No matter what tool you use to obfuscate or even encrypt code, your API URL will need to be in clear text at some point, namely when you make the API request; therefore it's up for grabs by an attacker. So if an attacker is not able to extract it with static binary analysis, they will extract it at runtime with an instrumentation framework, like Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
So basically the attacker will need to find the place in the code where you make the API request, hook Frida into it, and extract the URL or any secret passed along with it to identify/authorize your mobile app with the API server.
Another approach the attacker can take is to perform a MitM attack on a mobile device they control, and intercept the request being made to the API server:
Image sourced from article: Steal that API key with a Man in the Middle Attack
As you can see in the above example, the intercepted API request reveals the API server URL and the API key being used.
POSSIBLE SOLUTIONS
So my question is: how can I secure my Android app to protect it from hackers?
Adding security, no matter whether to software or to something physical, is always about layers; look at medieval castles, for example: they don't have only one defence, they have several layers of it. You should apply the same principle to your mobile app.
I will list some of the minimal things you should do; this is not an exhaustive list.
JNI/NDK
The JNI/NDK:
The Native Development Kit (NDK) is a set of tools that allows you to use C and C++ code with Android, and provides platform libraries you can use to manage native activities and access physical device components, such as sensors and touch input.
In this demo app I show how native C code is used to hide the API key from being easily reverse engineered by static binary analysis, but as you have already seen, you can still grab it with a MitM attack at runtime.
#include <jni.h>
#include <string>
#include "api_key.h"
extern "C" JNIEXPORT jstring JNICALL
Java_com_criticalblue_currencyconverterdemo_MainActivity_stringFromJNI(
JNIEnv *env,
jobject /* this */) {
// To add the API_KEY to the mobile app when is compiled you need to:
// * copy `api_key.h.example` to `api_key.h`
// * edit the file and replace this text `place-the-api-key-here` with your desired API_KEY
std::string JNI_API_KEY = API_KEY_H;
return env->NewStringUTF(JNI_API_KEY.c_str());
}
Visit the Github repo if you want to see in detail how to implement it in your mobile app.
Obfuscation
You should always obfuscate your code. If you cannot afford a state-of-the-art solution, then at least use the built-in ProGuard solution. This increases the time and skill necessary to poke around your code.
Encryption
You can use encryption to hide sensitive code and data, and a quick Google search will yield a lot of resources and techniques.
For user data encryption you can start to understand more about it in the Android docs:
Encryption is the process of encoding all user data on an Android device using symmetric encryption keys. Once a device is encrypted, all user-created data is automatically encrypted before committing it to disk and all reads automatically decrypt data before returning it to the calling process. Encryption ensures that even if an unauthorized party tries to access the data, they won’t be able to read it.
And you can read in the Android docs some examples of doing it:
This document describes the proper way to use Android's cryptographic facilities and includes some examples of its use. If your app requires greater key security, use the Android Keystore system.
But remember that Frida will allow an attacker to hook into the code that returns the data unencrypted and extract it, although this requires more skill and time to achieve.
Mobile App Attestation
This concept introduces a new way of dealing with securing your mobile app.
Traditional approaches focus too much on the client side, but in the first place the data you want to protect is in the API server, and it's there that you want a mechanism that lets you know that what is making the request is really your genuine mobile app, the same one you uploaded to the Google Play Store.
Before I dive into the role of the Mobile App Attestation, I would like to first clarify a misconception around what vs who is doing the API request, and I will quote this article I wrote:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
The Mobile App Attestation role is described in this section of another article I wrote, from where I quote the following text:
The role of a Mobile App Attestation service is to authenticate what is sending the requests, thus only responding to requests coming from genuine mobile app instances and rejecting all other requests from unauthorized sources.
In order to know what is sending the requests to the API server, a Mobile App Attestation service, at run-time, will identify with high confidence that your mobile app is present, has not been tampered/repackaged, is not running in a rooted device, has not been hooked into by an instrumentation framework (Frida, xPosed, Cydia, etc.) and is not the object of a Man in the Middle Attack (MitM). This is achieved by running an SDK in the background that will communicate with a service running in the cloud to attest the integrity of the mobile app and device it is running on.
On a successful attestation of the mobile app integrity, a short time lived JWT token is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. In the case that attestation fails the JWT token is signed with an incorrect secret. Since the secret used by the Mobile App Attestation service is not known by the mobile app, it is not possible to reverse engineer it at run-time even when the app has been tampered with, is running in a rooted device or communicating over a connection that is the target of a MitM attack.
The mobile app must send the JWT token in the header of every API request. This allows the API server to only serve requests when it can verify that the JWT token was signed with the shared secret and that it has not expired. All other requests will be refused. In other words a valid JWT token tells the API server that what is making the request is the genuine mobile app uploaded to the Google or Apple store, while an invalid or missing JWT token means that what is making the request is not authorized to do so, because it may be a bot, a repackaged app or an attacker making a MitM attack.
So this approach lets your API server trust, with a very high degree of confidence, that the request is indeed coming from the exact same mobile app you uploaded to the Google Play Store, provided the JWT token has a valid signature and expiry time, and discard all other requests as untrustworthy.
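As an illustration of the server-side check described above (not tied to any particular attestation product), a minimal Express-style middleware in TypeScript using the jsonwebtoken package might look like this; the header name and secret variable are assumptions:

import jwt from 'jsonwebtoken';
import { Request, Response, NextFunction } from 'express';

const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET!; // shared only with the attestation service

// Reject any request that lacks a valid, unexpired attestation token.
function verifyAttestation(req: Request, res: Response, next: NextFunction) {
  const token = req.header('x-attestation-token');
  if (!token) return res.status(401).end();
  try {
    jwt.verify(token, ATTESTATION_SECRET); // throws on a bad signature or expiry
    next();
  } catch {
    res.status(401).end(); // attestation failed: signed with the wrong secret or expired
  }
}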
GOING THE EXTRA MILE
I cannot resist recommending the excellent work of the OWASP foundation, because no security solution for mobile is complete without going through The Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
You can use DexGuard, which protects Android applications and SDKs against reverse engineering and hacking. DexGuard offers extensive customization options that let you adapt the applied protection to your security and performance requirements, and it prevents attackers from gaining insight into your source code, modifying it, or extracting valuable information from it.
ProGuard is a generic optimizer for Java bytecode. DexGuard is a specialized tool for the protection of Android applications.
Read Dexguard-vs-Proguard
You can use ProGuard, which is provided by default in Android Studio when creating a signed APK. You can refer to the document below for that:
Link: https://docs.google.com/document/d/1UgEZtKRoAIIXtPLKKHIds33txgU7hH33-3xsoBR4lWY/edit?usp=sharing
Apart from using code obfuscation, there is not much you can do. I would recommend using the NDK layer for storing application secrets, because C++ libraries can't be easily decompiled. You could use the https://github.com/nomtek/android-client-secrets library for that purpose.
You can use the ProGuard tools to help secure your code. ProGuard renames the remaining classes, fields, and methods using short, meaningless names.
The API endpoints will always be open to end users. But if you use HTTPS/SSL on your API server, the data in transit will be encrypted, like in most apps. As for the API endpoints themselves, you cannot do anything.
I found a website, JavaDecompiler, that will help you decompile an app. The outcome of my research is that there is no way to provide 100% code security. So what we can do is put checks on both the frontend and backend sides.
I also tried DexGuard, but it was too expensive for me, and ProGuard did not work well for me.
I have a Cordova/Ionic mobile app that loads Google Maps (in the main index.html file) for both Android and iOS using: https://maps.googleapis.com/maps/api/js?key=AndroidKey and https://maps.googleapis.com/maps/api/js?key=iOSKey. Each key is locked down with "app" restrictions and it's not working. I discovered that web service APIs can only be locked down by HTTP referrer OR server IP.
But since the maps are loaded directly via the client, there is no HTTP referrer by domain or a server IP...is there any other way I can lock down the API keys?
Can I use something like https://github.com/wymsee/cordova-HTTP to create an HTTP referrer? And if I can, what kind of legitimate domain referrer can I create that would work with the Google Maps API HTTP referrer restrictions?
Update:
Someone marked this as a duplicate, but that post is about the Android SDK API, whereas mine is about the JavaScript Maps API.
In Ionic 3/4/5, when using cordova-plugin-ionic-webview (docs), the referrer is ionic://localhost for iOS and http://localhost for Android.
The first solution is to customize the scheme and/or hostname. This sounds like a reasonable option, as this way the referrer can be set to something like https://mobileapp.author.domain.com/, which cannot easily be spoofed by a website (although another app could possibly set the same one).
Similarly for Capacitor, "server": {"hostname": "mobileapp.author.domain.com"} can be used (as per this SO answer: How to protect Google Maps API key on Ionic app?).
A quick'n'dirty option is to add *://localhost/* as a website restriction; this is the only way I found to whitelist the ionic://localhost/ referrer. This should also work for Capacitor, which uses capacitor://localhost/.
I'm using getUserMedia() in my web app, which works fine when I test the app on localhost. But if I treat my laptop as the server and launch the app in the Google Chrome browser on my Android phone, it gives me the error:
getUserMedia() no longer works on insecure origins. To use this feature, you should consider switching your application to a secure origin, such as HTTPS. See https://goo.gl/rStTGz for more details.
When I checked https://goo.gl/rStTGz, I learned that getUserMedia() is deprecated on insecure origins. It says that for development,
You can run chrome with the
--unsafely-treat-insecure-origin-as-secure="example.com" flag (replacing "example.com" with the origin you actually want to test)
How and where can I set this flag? Is there any other alternative?
This can be done from chrome://flags/ or about://flags.
Go to about://flags, search for the unsafely-treat-insecure-origin-as-secure flag, and enable it. You will have to provide the origin you want to be treated as secure.
Multiple origins can be entered as comma-separated values.
Relaunch your browser after making this change.
Note that the protocol part is also important; specifying only the IP address or the domain name isn't enough, e.g. the http:// in http://192.168.43.45. If you are not using port 80, you may have to specify the port too.
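For example, to allow both a LAN address and a local hostname at once, the value entered in the flag could look like the following (both origins are placeholders):

http://192.168.43.45:3000,http://mydevbox.local:8080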
The following is a screenshot from my mobile phone.
Mobile: Samsung Galaxy S10e
Android version: 10
Google Chrome version: 79.0.3945.136
For local testing of a website I am building, geolocation was needed.
Geolocation is only allowed on secure origins. I do have a production server with an HTTPS certificate, but development and debugging would become too slow if I had to upload content to it every time.
More info
https://www.chromium.org/Home/chromium-security/prefer-secure-origins-for-powerful-new-features
Move localhost to the device
One method is to run an HTTP server on your Android device. The consensus in answers to this question is that NanoHTTPD is worth trying. If you want a ready-made application, a web search for http server for android turned up Simple HTTP Server on Google Play Store. After copying the client side of your web application to the device and starting the server, you should be able to open http://localhost:12345 in Chrome for Android.
Or make your test server secure
You can test secure-context-only features without using --unsafely-treat-insecure-origin-as-secure by turning your existing test server into a potentially trustworthy origin. Follow these steps:
If you do not already own a domain at a registrar that bundles DNS hosting compatible with the dehydrated ACME client, register one. This incurs a fee, which recurs as long as you keep the domain active.
Point a subdomain at your test web server's internal IP address. It need not be reachable from the Internet.
Configure your test web server to respond to HTTPS on port 443 of this subdomain, using NameVirtualHost or the like.
Use the dehydrated ACME client with the appropriate dns-01 hook for your DNS host to obtain a certificate from Let's Encrypt for your test web server.
Install this certificate into your test web server.
I faced this problem too, but in Chromium on Ubuntu. I solved it by running this command in the console:
chromium-browser --unsafely-treat-insecure-origin-as-secure="http://localhost.dev:3000" --user-data-dir="$HOME/.config/chromium/Profile 1"
where localhost.dev:3000 is your website.
For information on other systems, see:
where the data directory is
how to launch Chrome and set flags
Short information about the --unsafely-treat-insecure-origin-as-secure flag:
Treat given (insecure) origins as secure origins. Multiple origins can be supplied. Has no effect unless --user-data-dir is also supplied.
Example:
--unsafely-treat-insecure-origin-as-secure=http://a.test,http://b.test --user-data-dir=/test/only/profile/dir
I didn't check, but on Android you may also be able to set flags on the chrome://flags page.