I'm currently developing an application which uses Google Nearby Connections API. I'm curious whether there is a method to change the timeout for onEndpointLost (method of the EndpointDiscoveryCallback class) and onFailure (method of the OnFailureListener interface).
In my understanding, these callbacks fire only after a predefined timeout elapses. I would like to lower this delay, because after a discovered endpoint disappears, onEndpointLost is called later than I'd like. The same applies when a device tries to connect to an endpoint that is no longer advertising, which eventually results in the onFailure callback.
(I would be extra happy if you, Xlythe, could spare some time to help me (: )
Thanks in advance!
There's no way to manually adjust these timeouts, and we don't plan to expose one either. This is because we combine several scans (e.g. BT + BLE + WiFi), and each scan has its own advertising/scanning interval. There isn't a one-size-fits-all number that would work for everything, and we don't control the timeout ourselves for every medium (although we do for some).
As for some good news, we are optimizing the onEndpointLost timeout to be shorter for BLE. That's currently our largest timeout (15 seconds), and we're exploring lowering it to 3 seconds. This won't lower the overall timeout to 3 seconds, but it should make it significantly lower.
For the onFailure event, I'd need to know which one you're referring to. If it's a connection request, you can interrupt the request by calling disconnectFromEndpoint. With that, you can enforce your own timeout of whatever value you want.
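The answer's suggestion (interrupt the request with disconnectFromEndpoint after your own deadline) can be wrapped in a small timer. A minimal sketch in plain Java, assuming the real code would call connectionsClient.disconnectFromEndpoint(endpointId) inside cancelAction and call onConnected() from a successful onConnectionResult; the class and its wiring are hypothetical, not part of the Nearby API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Client-side connection timeout: if the connection hasn't been confirmed
// within timeoutMs, run the cancel action (e.g. disconnectFromEndpoint).
class ConnectionTimeout {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private volatile boolean connected = false;

    ScheduledFuture<?> start(long timeoutMs, Runnable cancelAction) {
        return scheduler.schedule(() -> {
            if (!connected) {
                cancelAction.run(); // e.g. connectionsClient.disconnectFromEndpoint(id)
            }
        }, timeoutMs, TimeUnit.MILLISECONDS);
    }

    // Call this from onConnectionResult on success to defuse the timeout.
    void onConnected() { connected = true; }

    void shutdown() { scheduler.shutdownNow(); }
}
```

The value you pass as timeoutMs is entirely up to you, which is the point of the workaround.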
I have taken a look at earlier posts regarding Firebase child listeners (How firebase listener actually work?), but I still could not find what I was looking for.
I have an android app with a foreground service that runs full time and that has a child listener in it. Based on any childAdded/updated/deleted operations, some operations are performed by the service.
The problem I face is that there is a lot of power usage, and it appears to come from the internet data sent/received (based on battery readings). I am fairly sure the child listener has been optimized to reduce power and network consumption.
I would like to know how the child listener works (i.e. exponential backoff, etc.) in the following scenarios:
1) There is NO child added at the child listener's reference node for over 20 minutes
How often does the Android device connect to the child listener in this case? And will there be any power consumption from maintaining the open socket connection with the server?
2) After 60 minutes of no changes on the firebase node, suddenly there is a child added/updated/deleted operation and the listener fires.
What is the power consumption in this case? And does the device again fire a 3.5kB overhead to reconnect with the server?
3) If I turn the database off (for Firestore or the Realtime Database) using the goOffline method (or its Firestore equivalent), how long does it take before the connection is closed? Based on some Android profiling I did, I found a nearly five-minute gap before the connection was terminated!
As long as there is an active listener, there is an open socket between your Android device and the Firebase Database server that it is connected to. There is no "exponential backoff" in that case, the listener just stays connected.
If you don't want to keep such an open connection, you can periodically call addListenerForSingleValueEvent, or explicitly call goOffline()/goOnline() from your code. A third option is to access Firebase through its REST API.
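The "periodic single read" alternative to a persistent listener can be sketched as a small scheduler. Here `fetchOnce` stands in for whatever one-shot read you do (e.g. calling goOnline(), attaching addListenerForSingleValueEvent, then goOffline() in the callback); the scheduling skeleton itself is an assumption, not part of the Firebase API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Poll on a fixed interval instead of holding a socket open.
class PeriodicFetcher {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    ScheduledFuture<?> start(Runnable fetchOnce, long periodMs) {
        // Fixed delay: the next fetch is scheduled only after the previous
        // one finishes, so slow reads don't pile up.
        return scheduler.scheduleWithFixedDelay(fetchOnce, 0, periodMs, TimeUnit.MILLISECONDS);
    }

    void stop() { scheduler.shutdownNow(); }
}
```

The trade-off is latency: with a period of several minutes you give up the near-instant delivery that an attached listener provides.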
Each time the client reconnects to the server, it'll have to go through an SSL handshake. There is no way to prevent this on Android.
The 5 minute delay is likely just the socket time-out on your device. There is nothing the Firebase client can do about that, as it's a normal part of the operation of sockets.
I've been working with the Android WifiP2pManager to discover specific services on other devices. I'm wondering if there is a known timeout period for the function
public void discoverServices (WifiP2pManager.Channel c, WifiP2pManager.ActionListener listener)
I can't find any resources on it in the Android API. I'm aware that I can set a listener for a successful discovery, but I don't know how to tell if no discovery has been made.
Additionally, is there any way to stop discovery without entirely stopping the wifi manager's functionality?
Thanks in advance for any help!
As far as I know, calling clearServiceRequests() should work fine for cancelling service discovery.
In general I have also not found any docs on a timeout, so I have been using a 1-minute timeout timer to work around the issue.
Note, though, that you should cancel the timeout timer once the callback has been called; after that you should just wait for new services to be discovered.
I have also not seen any docs on how long the service discovery timeout between different services should be, but with some testing I have determined that it should be at least 5 seconds in order to discover the available services reliably.
https://sphen.proxmobil.com/android-wi-fi-direct-service-discovery/
This blog post mentions that the timeout is 120 seconds:
Service discovery will only last for 120 seconds from the time the
discoverServices method of WifiP2pManager is called. If application
developers require service discovery for a longer period, they will
need to re-call the WifiP2pManager.discoverServices method.
The timeout is 2 minutes, as mentioned in a comment in the Android source:
// We repeatedly issue calls to discover peers every so often for a few reasons.
// 1. The initial request may fail and need to retried.
// 2. Discovery will self-abort after any group is initiated, which may not necessarily
// be what we want to have happen.
// 3. Discovery will self-timeout after **2 minutes**, whereas we want discovery to
// be occur for as long as a client is requesting it be.
// 4. We don't seem to get updated results for displays we've already found until
// we ask to discover again, particularly for the isSessionAvailable() property.
I am doing my Master thesis at the moment on WiFi positioning and in order to test my algorithms I needed to collect some data.
To do this I have written a short and very simple Android program which simply collects the RSSI of all available access points found by each scan and saves them to a file. I have set up a BroadcastReceiver that listens for WifiManager.SCAN_RESULTS_AVAILABLE_ACTION, and I use a Timer, here called tim, to initiate scans with a WifiManager, called wifi, as follows:
tim.schedule(new TimerTask() {
    @Override
    public void run() {
        wifi.startScan();
    }
}, 0, 1000);
The problem I am having is that the initiated scans don't seem to happen every second, even though startScan() reports success, and every now and then scans initiated by some other app get recorded as well.
Is there any easy way to scan on a set interval and not receive the scans initiated by some other app?
The whole app can be found on https://github.com/while/RSSIMiner if it helps in any way.
Is there any easy way to scan on a set interval?
If this doesn't work well, I'm afraid not. From my experience, "hardware related" methods may not work exactly as their documentation says. For example, I once created a small app which records your position every X minutes, so I called requestLocationUpdates with some minTime parameter. But my phone simply ignored the minTime value, and I got updates from the GPS as soon as they were available, which is not what I wanted. I posted a question about it here, and got this answer, from which we learn that prior to Jelly Bean, devices may simply ignore this value...
So it may be something similar here. I'd try to run this code on the latest Android version. And I don't understand that much about Wifi, but isn't 1 second too frequent an interval for scans? Perhaps the system doesn't ignore the scan request (so it returns true) but the hardware does?
Can we ignore the scans initiated by some other app?
As far as I know, it's negative here too. There are no extras contained in the SCAN_RESULTS_AVAILABLE_ACTION broadcast so you can't know which app initiated the scan.
The best solution will be to define your requirements. You can use ScanResult.timestamp to determine whether you should use a given result. For example, if you're trying to get the RSSI for each access point every second, you can compare the current BSSID to previous BSSIDs: if the current BSSID was already included in a scan result from the last second, simply ignore it. Then it doesn't matter how many results you get.
Another, much simpler solution is to create a boolean called scanInitiated and set it to true when starting a scan. When receiving the broadcast, use the data only if scanInitiated is true, and then set it back to false. This isn't so reliable when the intervals are short, but for long intervals it works great.
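The scanInitiated idea can be isolated into a tiny gate class. A minimal sketch (the class name and wiring are hypothetical; in the real app you would call onScanRequested() next to wifi.startScan() and shouldConsume() at the top of the BroadcastReceiver):

```java
// Only consume a SCAN_RESULTS_AVAILABLE_ACTION broadcast if we asked
// for the scan ourselves; broadcasts triggered by other apps are ignored.
class ScanGate {
    private boolean scanInitiated = false;

    // Call right before wifi.startScan().
    synchronized void onScanRequested() { scanInitiated = true; }

    // Call in the receiver; returns true exactly once per requested scan.
    synchronized boolean shouldConsume() {
        if (scanInitiated) {
            scanInitiated = false;
            return true;
        }
        return false;
    }
}
```

As the answer notes, this breaks down at short intervals, where a foreign scan's broadcast can arrive between your request and your own result and be mistaken for yours.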
I have implemented a queue service in Android that will change states based on queue and wifi/data connectivity events.
I queue transactions to be posted to a remote url. If the device has a data or wifi connection, it will iterate the queue and post data to the url until the queue is empty, or there is a disconnect event.
I can log in to my app, enable airplane mode, generate data, turn airplane mode off, and the transactions are posted. No slowdown, even with thousands of transactions. (I was trying to push it a bit.)
Enter: low reception!
My app slows down enormously when the 3G reception is low. (Yes, all uploading happens off the UI thread.) It seems that the cause of this slowdown is that the post to the server takes a very long time and sometimes simply fails.
My question is, how can I solve this? Check for signal quality? Poll a known address? How do other apps, such as Gmail solve this? This must be a common scenario!
Well if you could potentially have thousands of tasks that all need to be executed, then surely they should be managed. Have you thought about implementing your own ThreadPoolExecutor? The documentation is very good and the class is easy to understand, but if you need examples try these sites:
http://www.javamex.com/tutorials/threads/ThreadPoolExecutor.shtml
http://javabeanz.wordpress.com/2010/02/19/threadpoolexecutor-basics/
The benefit of this is that you can limit the maximum number of threads you are spawning, so you shouldn't get a system-wide slow down if you limit your thread count to a reasonable number (For Android I'd recommend no more than 20).
Maybe do some fine-tuning of the socket and connection timeouts? Then, if your connection is slow or stalled, the timeout will occur and the transmission will fail.
After the connection/send has failed, you can retry the transmission later or do something else.
To adjust timeouts you can use the following code:
HttpParams httpParameters = new BasicHttpParams();
HttpConnectionParams.setConnectionTimeout(httpParameters, 30 * 1000);
HttpConnectionParams.setSoTimeout(httpParameters, 15 * 1000);
HttpClient client = new DefaultHttpClient(httpParameters);
// use client...
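For what it's worth, the same two timeouts can be set with HttpURLConnection, for code that has moved off the long-deprecated Apache HttpClient. A sketch with the same values as above (the URL is a placeholder):

```java
import java.net.HttpURLConnection;
import java.net.URL;

class TimeoutConfig {
    static HttpURLConnection open(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(30_000); // give up if the TCP connect stalls
        conn.setReadTimeout(15_000);    // give up if the server stops responding
        return conn;
    }
}
```

With these set, a stalled transfer on a weak signal fails with a SocketTimeoutException instead of hanging the queue indefinitely.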
We had a similar situation with our application. We consider signal issues to be a reality, one that can happen at any time. One rule we follow is to never remove content from the device based only on the HTTP status code; we wait for a functional confirmation from the server. On a slow network, or when the signal drops suddenly, there were many cases where our POST went out but the data arrived only partially. So we decided to let the server confirm to the device, via a separate HTTP GET request made by the device, that the content had been received.
More than performance or checking the network, we needed this behavior for our application's robustness.
You should check out using HTTP Range headers, for example like here.
The server should write the payload to disk while reading and handle disconnects. The client cannot know how many bytes of the payload actually reached the server, so it needs to sync up with the server every time there has been a network error. Don't forget to handle battery and user issues too ;-)
If you want to wait for a better signal, perhaps the SignalStrength class, with its getCdmaDbm, getEvdoDbm, and getGsmSignalStrength methods, is what you are looking for.
Check out this talk:
http://www.youtube.com/watch?v=PwC1OlJo5VM#!
It covers advanced coding tips and tricks, bandwidth-saving techniques, implementation patterns, some lesser-known API features, and how to minimize battery drain by ensuring your app is a good citizen on the carrier network.
Recently Google introduced its push-to-device service, but it's only available on Android 2.2 and up.
I need a similar system in my app, and I'm trying to get around limitations.
The issue is battery life. Since the user must be notified immediately about changes on the server, I thought to implement a service that would live in the background (a standard Android service) and query the server for updates.
Of course, querying the server, even every second, will cost a lot of bandwidth as well as battery, so my question is this: does it make a difference if the server holds the response for some period of time? (This is the idea behind Comet-style AJAX requests.)
Works like this:
Device sends request for data update
Server gets the request and goes in the loop for one minute, checking if there are updates on each iteration
If there are updates, server sends response back with updates
If not, the server goes on to the next iteration.
After a minute, it finally sends the response that no data is yet available
After the response (no matter whether empty or with data), the device fires another such request.
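The client side of the steps above can be sketched as a simple loop. Here `longPoll` stands in for an HTTP request that the server holds open for up to a minute; it is modeled as a pluggable function rather than a real HTTP client, and the class name is made up:

```java
import java.util.function.Consumer;
import java.util.function.Supplier;

class LongPollClient {
    // longPoll blocks until the server has data or its one-minute window
    // expires (modeled here as returning null).
    static void run(Supplier<String> longPoll, Consumer<String> onUpdate, int iterations) {
        for (int i = 0; i < iterations; i++) {
            String update = longPoll.get();
            if (update != null) {
                onUpdate.accept(update); // deliver the pushed change
            }
            // Immediately issue the next request, as in the last step above.
        }
    }
}
```

One request per minute replaces sixty polls per minute, which is where the bandwidth saving comes from; the battery question is addressed in the answers below.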
It will definitely cost less bandwidth, but will it consume less (or even more) battery?
Holding a TCP socket (and consequently waiting for an HTTP response) as you suggest is probably going to be your best option. What you've described is actually already implemented via HTTP continuation requests. Have a look at the Bayeux protocol for HTTP push notifications. Also, check out the Android implementation here. For what it's worth, that's definitely what I would use. I haven't done any sort of analysis of it, but this allows you to minimize the amount of data transmitted over the line (which is directly proportional to the power consumption) by allowing the connection to hang for as long as possible.
In short, the way Bayeux works is very similar to what you've suggested. The client opens a request and the server waits on it. If it has something to send, it sends it otherwise it simply waits. Eventually, the request will timeout. At that point, the client makes another request. What you attain is near instantaneous push to the client from the server without constant polling and duplication of information like HTTP headers, etc.
When the phone is actively using the network, its battery drains faster; that is, when it sends the request and when it receives the response. It will also use some battery just by listening for a response. The main question is whether the phone keeps downloading data to check for a response, or is merely open to receiving one, with the server pushing the response out. If the phone is just open to receiving the response, and does not actually use the network while waiting, it should use less battery.
Additionally, sending a query every minute instead of every second definitely uses less battery as far as the network goes. It does depend on how you make the phone wait, though; if you tie it up with very complex logic to make it wait, it may not help battery life. That's probably not the case here, and I would say that in all likelihood this will work out for you.
In closing, it should help the battery, but there are ways you could implement it that would not. It wouldn't hurt to write the program and then just change a variable (for example, WAIT_TIME from 1 minute to 1 second) and test battery usage, would it?