Android: new SyncRequest is ignored while another SyncRequest is being handled

I am making an app with a local database, and I'm using a SyncAdapter to sync this local database with the server. I don't have much experience with SyncAdapters and I cannot seem to figure something out. So far I've implemented the "Run the sync adapter when content provider data changes" section from the Android documentation (https://developer.android.com/training/sync-adapters/running-sync-adapter), and initially it worked great, but then I noticed something. When calling requestSync from inside the ContentObserver, a new SyncRequest is queued by Android and executed a little later. However, while my SyncAdapter's onPerformSync method is executing, any new SyncRequest I make is completely ignored and never executed later on. This is annoying because if I update my database while it is being synced, my updates may never reach the server (the updates occurred after the old data had already been read for syncing). I cannot find much information about this behaviour. Is this normal, and if so, how can I avoid it (without writing an entire queuing system myself)?
Here is the code from my ContentObserver (AppLogger is some custom logging system I made):
ContentObserver observer = new ContentObserver(null) {
    @Override
    public void onChange(boolean selfChange) {
        // Funnel the legacy callback into the Uri-aware overload.
        onChange(selfChange, null);
    }

    @Override
    public void onChange(boolean selfChange, @Nullable Uri uri) {
        AppLogger.log(context, "AppBroadCastReceiver", "Requesting a sync for the Datamanager from the observer...");
        ContentResolver.requestSync(mAccount, ApplicationProvider.AUTHORITY, new Bundle());
    }
};
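For context, the observer above only starts receiving callbacks once it is registered against the provider's URI; a minimal sketch (the CONTENT_URI constant is an assumption, it is not shown in the post):
// Hypothetical registration, e.g. during startup:
getContentResolver().registerContentObserver(
        ApplicationProvider.CONTENT_URI, // the URI your provider notifies on
        true,                            // also observe descendant URIs
        observer);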
The onPerformSync method I used to test this behaviour:
@Override
public void onPerformSync(Account account, Bundle extras, String authority, ContentProviderClient provider, SyncResult syncResult) {
    AppLogger.log(context, "DataManagerSyncAdapter", "Starting a sync attempt");
    try {
        // Simulate 10 seconds of sync work.
        Timer timer = new Timer();
        CountDownLatch latch = new CountDownLatch(1);
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                latch.countDown();
            }
        }, 10000);
        latch.await();
    } catch (InterruptedException e) {
        // Ignored for this test.
    }
    AppLogger.log(context, "DataManagerSyncAdapter", "Finishing a sync attempt");
}
And then the sync adapter XML:
<?xml version="1.0" encoding="utf-8"?>
<sync-adapter
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:contentAuthority="com.example.getrequest.Providers.application_provider"
    android:accountType="example.com"
    android:userVisible="true"
    android:supportsUploading="false"
    android:allowParallelSyncs="false"
    android:isAlwaysSyncable="true"/>
And I got the following result in the log (when trying to test this behaviour):
AppBroadCastReceiver: Requesting a sync for the Datamanager from the observer...
DataManagerSyncAdapter: Starting a sync attempt
AppBroadCastReceiver: Requesting a sync for the Datamanager from the observer...
DataManagerSyncAdapter: Finishing a sync attempt
And then onPerformSync is never run again (at least not in the next 20 minutes; after that I lost my patience). I also noticed that setting android:supportsUploading="true" more or less solved my problem, but then a ton of useless SyncRequests that I never asked for are handled (almost one every minute).
I've also thought about blocking access to the database until my SyncRequest is completely done, but is this common practice? If I want to update multiple tables on the server based on multiple tables in my local database, isn't it better to lock the database per table instead of locking everything until the SyncRequest is completed? And does this really solve anything? At which point in onPerformSync should I unlock my database again? It looks to me like unlocking the database inside onPerformSync could always result in a database call being executed while onPerformSync is still busy (even if the probability is very small). Any help or information about this would be greatly appreciated!
Edit:
When digging through the source code of the SyncManager (https://android.googlesource.com/platform/frameworks/base/+/master/services/core/java/com/android/server/content/SyncManager.java) I came across this:
// Check currently running syncs
for (ActiveSyncContext asc: mActiveSyncContexts) {
    if (asc.mSyncOperation.key.equals(syncOperation.key)) {
        if (isLoggable) {
            Log.v(TAG, "Duplicate sync is already running. Not scheduling "
                    + syncOperation);
        }
        return;
    }
}
So I guess this is expected behaviour, which from my point of view does not make any sense at all, but maybe I don't have enough experience with this sort of thing. So how should I ensure that data updated during an onPerformSync still gets synced (without writing tons of code myself), or how can I ensure that data is not being updated while onPerformSync is busy?
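One pattern that sidesteps the dropped request entirely is to keep a "dirty" flag that the ContentObserver sets on every change, and to loop inside onPerformSync until no further changes arrive. A minimal sketch (the sDirty flag and uploadLocalDatabase() are hypothetical names, not from the original post):
// Sketch: re-run the upload for as long as changes keep arriving mid-sync.
private static final AtomicBoolean sDirty = new AtomicBoolean(false);

// Called from the ContentObserver's onChange(), alongside requestSync().
public static void markDirty() {
    sDirty.set(true);
}

@Override
public void onPerformSync(Account account, Bundle extras, String authority,
        ContentProviderClient provider, SyncResult syncResult) {
    do {
        sDirty.set(false);      // changes from here on belong to the next pass
        uploadLocalDatabase();  // hypothetical: the actual sync work
    } while (sDirty.get());     // data changed mid-sync, so sync again
}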

Related

How to change database value if user's internet is switched off

For the past few days I've been trying to show the online/offline status of a user. I have a register activity where users register and their info gets saved in Firebase, and if they exit an activity I have overridden its onStop method to set the value to offline. But if the user suddenly loses their internet connection it still shows online; I can't change it to offline because an internet connection is needed to make a change in the database, and the user doesn't have one. So how do I set the database value to offline? I googled quite a bit about this but didn't find anything. Can anyone please help me out?
My code
@Override
protected void onStart() {
    super.onStart();
    fetchData();
    // mDatabaseReference.child("UserData").child(UID).child("Online").setValue("True");
}

@Override
protected void onStop() {
    super.onStop();
    fetchData();
    // mDatabaseReference.child("UserData").child(UID).child("Online").setValue(false);
}
What you're trying to do is known as a presence system. The Firebase Database has a special API to allow this: onDisconnect(). When you attach a handler to onDisconnect(), the write operation you specify will be executed on the server when that server detects that the client has disconnected.
From the documentation on managing presence:
Here is a simple example of writing data upon disconnection by using the onDisconnect primitive:
DatabaseReference presenceRef = FirebaseDatabase.getInstance().getReference("disconnectmessage");
// Write a string when this client loses connection
presenceRef.onDisconnect().setValue("I disconnected!");
In your case this could be as simple as:
@Override
protected void onStart() {
    super.onStart();
    fetchData();
    DatabaseReference onlineRef = mDatabaseReference.child("UserData").child(UID).child("Online");
    onlineRef.setValue("True");
    onlineRef.onDisconnect().setValue("False");
}
Note that this will work in simple cases, but will start to have problems when, for example, your connection toggles rapidly. In that case it may take the server longer to detect that the client disappeared (since this may depend on the socket timing out) than it takes the client to reconnect, resulting in an invalid False.
To handle these situations better, check out the sample presence system in the documentation, which has more elaborate handling of edge cases.
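For reference, the documented presence pattern listens to the special .info/connected location and re-arms the onDisconnect write every time the connection comes back up. A minimal sketch, reusing the UID and "True"/"False" convention from the question:
DatabaseReference onlineRef = mDatabaseReference.child("UserData").child(UID).child("Online");
DatabaseReference connectedRef = FirebaseDatabase.getInstance().getReference(".info/connected");

connectedRef.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        if (Boolean.TRUE.equals(snapshot.getValue(Boolean.class))) {
            // Arm the server-side write first, then mark ourselves online.
            onlineRef.onDisconnect().setValue("False");
            onlineRef.setValue("True");
        }
    }

    @Override
    public void onCancelled(DatabaseError error) {
        // Nothing to clean up in this sketch.
    }
});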

Save event type logs

We want to add a reporting feature to our existing application.
For this purpose we are sending Events in JSON via HTTPS to a server application.
We need to remember Event objects that could not be sent to the server (no internet, server not reachable, ...). We are considering storing the events in a SQLite database and discarding all events that are older than 24 hours, to prevent flooding our storage.
Another option would be to write the JSON objects to a file and append each new event that could not be sent to the server. The problem with this solution is that it would be hard for us to discard logs older than 24 hours.
We store the events in a table with the columns:
| id | json | created_at |
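Assuming created_at stores epoch milliseconds and the table is named events (both assumptions), the 24-hour cleanup is a single delete that could run before each insert:
// Hypothetical pruning step on an android.database.sqlite.SQLiteDatabase:
long cutoff = System.currentTimeMillis() - 24L * 60 * 60 * 1000;
db.delete("events", "created_at < ?", new String[]{ String.valueOf(cutoff) });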
Can anyone recommend best practices for this use case?
Currently we tend towards the SQLite solution, but we are wondering if there are any caveats that we are not aware of.
If you don't mind using a third-party lib, I can recommend android-priority-jobqueue. You can easily achieve what you are trying to do: you create a job and it handles itself. You can set whether it needs network and whether it is persistent (saved to DB while there is no network), and you can even customize your own retry logic.
Here's a little example.
public class PostTweetJob extends Job {
    public static final int PRIORITY = 1;

    private String text;

    public PostTweetJob(String text) {
        // This job requires network connectivity,
        // and should be persisted in case the application exits before the job is completed.
        super(new Params(PRIORITY).requireNetwork().persist());
        this.text = text;
    }

    @Override
    public void onAdded() {
        // Job has been saved to disk.
        // This is a good place to dispatch a UI event to indicate the job will eventually run.
    }

    @Override
    public void onRun() throws Throwable {
        // your code here
    }

    @Override
    protected RetryConstraint shouldReRunOnThrowable(Throwable throwable, int runCount,
            int maxRunCount) {
        // An error occurred in onRun.
        return RetryConstraint.createExponentialBackoff(runCount, 1000);
    }
}
And you call it like this.
jobManager.addJobInBackground(new PostTweetJob("It works"));
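The snippet assumes a jobManager instance already exists; it is typically created once, e.g. in your Application class (exact builder options vary between library versions, so treat this as a sketch):
// Hypothetical one-time setup in the Application class:
Configuration configuration = new Configuration.Builder(getApplicationContext()).build();
JobManager jobManager = new JobManager(configuration);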
Use JobService (Android 5+, Lollipop and above) and AlarmManager (for Android SDK < 21, pre-Lollipop). With this solution you can schedule any task and it will be performed. JobService was developed exactly for this purpose (scheduling and performing different tasks). Alternatively, you can try JobIntentService, which works on KitKat (Android 4+) devices; see the sketch below.
P.S.
In that case you don't need any third-party libs or other dependencies like Firebase/Google Play services (as you would for FirebaseJobDispatcher).
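A minimal JobIntentService sketch (the class name and job ID are hypothetical; JobIntentService delegates to JobScheduler on Android O+ and falls back to a started service on older versions):
public class EventUploadService extends JobIntentService {
    private static final int JOB_ID = 1001; // arbitrary, but unique per service class

    public static void enqueue(Context context, Intent work) {
        enqueueWork(context, EventUploadService.class, JOB_ID, work);
    }

    @Override
    protected void onHandleWork(@NonNull Intent intent) {
        // Try to send the pending events here; on failure they stay in SQLite
        // and can be retried the next time work is enqueued.
    }
}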

Realm large db size due to frequent updates

I have a thread that updates Realm database fields every second. While the data being updated is tiny, I've found that updates still increase the database file size until you explicitly compact it with Realm.compactRealm(), so within an hour or two the DB size is 50MB+, and it will easily bloat to 750MB+ in a short period as well.
I am closing the Realm with realm.close() in the Activity's onStop(), and I am also closing the new Realm instance I create in the timer thread:
public void checkDealersTimer() {
    RealmResults<Dealers> dealersLookup = realm.where(Dealers.class).equalTo("thedealers", "thedealers").findAll();
    dlr = dealersLookup.get(0);
    if (dlr.getPerSecond() != 0.00) {
        if (dealerTimer == null) {
            dealerTimer = new Timer();
            dealerTimer.scheduleAtFixedRate(new TimerTask() {
                @Override
                public void run() {
                    Realm drealm = Realm.getDefaultInstance();
                    RealmResults<Dealers> dealersLookup = drealm.where(Dealers.class).equalTo("thedealers", "thedealers").findAll();
                    dlr = dealersLookup.get(0);
                    drealm.beginTransaction();
                    dlr.setEarnings(dlr.getEarnings() + dlr.getPerSecond());
                    drealm.commitTransaction();
                    drealm.close();
                }
            }, 0, 1000);
        }
    }
}
This timer is the only place I use Realm outside of the UI thread, and the only place I am making updates this frequently, so I am assuming the "leak" is coming from here, though I cannot be sure. The file size creeps up whether the app is visible or not, but only while it is running.
Here's another user with a similar issue:
App size increase due to realm android
If that is believed to be the solution, I cannot find the correct way to call Realm.compactRealm(), since the DB is supposed to be updated every second while the app is in use, and I can only close the Realm in onDestroy(), not onStop() (and Realm.compactRealm() requires all Realm instances to be closed).
I appreciate any input, thank you!
I had an issue where my Realm file size was increasing at an alarming rate, and it was caused by not calling close() when the app closed unexpectedly during development. As a result my database file (with only about 1k items in it) was at 10MB. Properly closing out my Realm instances solved the problem and reduced my database file size to ~300KB. Really, it's worth checking your entire codebase to make sure you're actually closing all Realm instances. It's annoying, but way better than having users complain about running out of storage ;)
Based on your example above, I'd recommend also using the executeTransaction method (instead of beginning/committing transactions) provided by realm:
drealm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        dlr.setEarnings(dlr.getEarnings() + dlr.getPerSecond());
    }
});
drealm.close();
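If the file has already grown, compacting is possible once every instance (including the timer thread's) has been closed; a minimal sketch, e.g. run when the app goes to the background:
// All Realm instances must be closed before compacting, or this returns false.
RealmConfiguration config = Realm.getDefaultConfiguration();
if (config != null && Realm.compactRealm(config)) {
    Log.d("Realm", "Realm file compacted");
}
Newer Realm versions also offer RealmConfiguration.Builder#compactOnLaunch(), which compacts the file automatically when it is first opened.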

Google Drive Android API: Deleted folder still exists in query

Running the code below, I create a folder with the Google Drive Android API on a tablet. After a few seconds, I delete that folder from a remote location on a PC. When I re-run the code, the API still thinks 'MyFolder' exists, even though it was deleted and is not visible in the Google Drive app on the tablet. The phantom folder finally disappears after a while and the code works as expected. Is this expected behavior for cloud drives?
Query query = new Query.Builder()
        .addFilter(Filters.and(
                Filters.eq(SearchableField.TITLE, "MyFolder"),
                Filters.eq(SearchableField.TRASHED, false)))
        .build();
Drive.DriveApi.query(getGoogleApiClient(), query)
        .setResultCallback(new ResultCallback<DriveApi.MetadataBufferResult>() {
            @Override
            public void onResult(DriveApi.MetadataBufferResult result) {
                if (!result.getStatus().isSuccess()) {
                    showMessage("Cannot create folder in the root.");
                } else {
                    boolean isFound = false;
                    for (Metadata m : result.getMetadataBuffer()) {
                        if (!isFound) {
                            if (m.getTitle().equals("MyFolder")) {
                                showMessage("Folder exists");
                                isFound = true;
                            }
                        }
                    }
                    if (!isFound) {
                        showMessage("Folder not found; creating it.");
                        MetadataChangeSet changeSet = new MetadataChangeSet.Builder()
                                .setTitle("MyFolder")
                                .build();
                        Drive.DriveApi.getRootFolder(getGoogleApiClient())
                                .createFolder(getGoogleApiClient(), changeSet)
                                .setResultCallback(new ResultCallback<DriveFolder.DriveFolderResult>() {
                                    @Override
                                    public void onResult(DriveFolder.DriveFolderResult result) {
                                        if (!result.getStatus().isSuccess()) {
                                            showMessage("Error while trying to create the folder");
                                        } else {
                                            mThwingAlbertFolderId = result.getDriveFolder().getDriveId();
                                            showMessage("Created a folder: " + mThwingAlbertFolderId);
                                        }
                                    }
                                });
                    }
                }
            }
        });
What you are seeing is 'normal' behavior of the GDAA, and it can be explained if you look closer at the 'Lifecycle of a Drive file' diagram (warning: I've never seen the source code, I'm just assuming from what I observed).
See, the GDAA, unlike the REST API, creates a layer that does its best to provide caching and network traffic optimization. So, when you manipulate a file/folder from the 'outside' (like the web app), the GDAA layer has no knowledge of the fact until it initiates synchronization, which is controlled by its own logic. I originally assumed that GooDrive keeps this under control by dispatching some kind of notification back to the GDAA, but that is apparently not the case. Also, some Googlers have mentioned requestSync() as a cure, but I never succeeded in making it work.
What you think you're doing is polling GooDrive. But effectively, you're polling the GDAA (the local GooPlaySvcs), whose DriveId is still valid (not updated), unlike the real GooDrive object that is already gone.
This is one thing that is not clearly stated in the docs: the GDAA is not the best API for EVERY application. Its caching mechanism is great for transparently managing online/offline states, network traffic optimization, battery life, ... But in your situation, you may be better off using the REST API, since the responses you get reflect the current GooDrive state.
I myself faced a similar situation and had to switch from the GDAA back to the REST API (and replaced polling with a private GCM-based notification system). Needless to say, by using the REST API your app gets more complex, usually requiring a sync adapter / service to do the data synchronization and manage network states... all the stuff the GDAA gives you for free.
In case you want to play with the two APIs side by side, there are two identical CRUD implementations you can use (GDAA, REST) on GitHub.
Good Luck
The Google Drive API does not sync immediately; that is why the deleted folders still show up. You have to force Google Drive to sync using requestSync():
Drive.DriveApi.requestSync(mGoogleApiClient).await();
I found an example snippet here:
http://wiki.workassis.com/android-google-drive-api-deleted-folder-still-exists-in-query/
As Sean mentioned, the Drive Android API caches metadata locally to reduce bandwidth and battery usage.
When you perform an action on the device, e.g. creating a folder, we attempt to apply that action on the server as soon as possible. Though there can be delays due to action dependencies and content transfers, you will generally see the results reflected on the server very quickly.
When an action is performed on the server, e.g. by deleting a folder via the web client, this action is reflected on the device the next time the Drive Android API syncs. In order to conserve battery and bandwidth, sync frequency depends on how the API is being used as this is a priority for users.
If you need to guarantee that a sync has occurred, you can explicitly request a sync using DriveApi.requestSync() and wait on the result. This is currently rate limited to 1 per minute, which is frequently hit during testing, but should have a much smaller impact on real world usage.
Please let us know on our issue tracker if this sync behavior is causing issues for your use case so we can investigate solutions.
Google Drive uses its own lifecycle for the Drive API and manages everything in a cache. That's why, if you delete a file or folder and then try to access it via the Google Drive APIs, it is still available: it is served from the cache. You need to explicitly call the requestSync() method; after that the cache is updated and the folder or file is reported as not found.
Below is the code for that:
Drive.DriveApi.requestSync(mGoogleApiClient).setResultCallback(new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        Log.e("sync_status", status.toString());
        if (status.getStatus().isSuccess()) {
            setRootFolderDriveId();
        }
    }
});
And don't call Drive.DriveApi.requestSync(mGoogleApiClient).await() on the main thread: it blocks the calling thread, so the app will crash. Use the callback version above; after you get the success callback you can do your operations on Google Drive, because by then it is up to date.
You can do it on the main thread:
Drive.DriveApi.requestSync(mGoogleApiClient).setResultCallback(new ResultCallback<com.google.android.gms.common.api.Status>() {
    @Override
    public void onResult(com.google.android.gms.common.api.Status status) {
        if (!status.getStatus().isSuccess()) {
            Log.e("SYNCING", "ERROR" + status.getStatusMessage());
        } else {
            Log.e("SYNCING", "SUCCESS");
            // execute your code to interact with Google Drive
        }
    }
});
I was having the same issue and using "Drive.DriveApi.requestSync" did the trick.
Also, I suggest taking a look at https://github.com/francescocervone/RxDrive, because you can concatenate the sync with other Drive operations using RxAndroid.
For example, this becomes a delete-and-sync operation:
Observable<Boolean> deleteFile = rxDrive.delete(file);
Observable<Void> syncDrive = rxDrive.sync();
// concat() is lazy: the delete-then-sync sequence only runs once subscribed.
Observable.concat(deleteFile, syncDrive).subscribe();
The reason deleted files still show up in your query is that Google Drive has a 'Trash' folder that is searchable. You need to empty your trash first.

Account.setPassword causing SyncAdapter infinite loop

There are quite a few questions about Android's SyncAdapter running in an infinite loop: [1] [2] [3], but none describe the problem I encountered.
I am setting up my sync as:
ContentResolver.setIsSyncable(account, AppConstants.AUTHORITY, 1);
ContentResolver.setSyncAutomatically(account, AppConstants.AUTHORITY, true);
ContentResolver.addPeriodicSync(account, AppConstants.AUTHORITY, Bundle.EMPTY, 60);
My sync adapter supports uploading (android:supportsUploading="true"), which means that in my ContentProvider I have to check whether a data change comes from my SyncAdapter; if it does, I notify the change without requesting a sync to the network:
boolean syncToNetwork = false;
getContext().getContentResolver().notifyChange(uri, null, syncToNetwork);
Still, my sync adapter ran in a constant loop. What other reason could there be for triggering another sync?
In each sync I request data from the server. For each request I get an access token from my custom Account Authenticator. Instead of saving a password in my account, I decided to save the OAuth2 refresh token, which can then be used to refresh the access token. With each refreshed access token the server also sends a new refresh token, which I then save to my account:
accountManager.setPassword(account, refreshToken);
And THAT was the problem. Going through the AOSP code I discovered the following BroadcastReceiver in the SyncManager:
private BroadcastReceiver mAccountsUpdatedReceiver = new BroadcastReceiver() {
    public void onReceive(Context context, Intent intent) {
        updateRunningAccounts();
        // Kick off sync for everyone, since this was a radical account change
        scheduleSync(null, UserHandle.USER_ALL, null, null, 0 /* no delay */, false);
    }
};
So what it does: on each account change (adding, deleting, setting a password) a broadcast is sent that triggers a sync for all SyncAdapters, not just your own!
I honestly don't know what the reasoning for that is, but I can see it being exploitable: I let my phone (with my app stuck in its infinite loop) run overnight. In the morning the battery was drained, but so was my FUP (fair-use data allowance): Google's Docs, Slides and Sheets apps alone had consumed 143MB each.
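A workaround that avoids the broadcast (an assumption on my part, not something stated above) is to store the refresh token as account user data instead of as the password; AccountManager.setUserData() does not fire the accounts-changed broadcast that setPassword() triggers:
// "refresh_token" is a hypothetical key name.
accountManager.setUserData(account, "refresh_token", refreshToken);
// ...later, when refreshing the access token:
String storedRefreshToken = accountManager.getUserData(account, "refresh_token");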
