I'm building an Android app in which several users can access, modify and delete the same item and I'm using Firebase to sync all the devices.
In order to keep track of the updates the item has a timestamp.
I wrote a transaction so that when I try to delete the item it checks if the timestamp of my copy is older than the remote copy: in that case the transaction aborts and the item is not deleted.
Here is my problem:
My device goes offline
I successfully delete the item
Another user modifies the item on the remote database
My device goes online and propagates its deletion
I expected the delete to be aborted remotely, since the remote timestamp is newer, but it was not.
I really can't see the point of the abort function if I can only base the decision on my local data...
How should I handle these kinds of conflicts in Firebase?
-- UPDATE
This is the code I use to remove an item. It should abort if another user has changed the item remotely after the deletion has happened locally.
private void removeItem(final ListItem item, final Firebase itemRef) {
    itemRef.runTransaction(new Transaction.Handler() {
        @Override
        public Transaction.Result doTransaction(MutableData mutableData) {
            if (mutableData == null) {
                return Transaction.abort();
            } else if ((long) mutableData.child("lastUpdate").getValue() > item.getLastUpdate()) {
                return Transaction.abort();
            } else {
                itemRef.removeValue();
                return Transaction.success(mutableData);
            }
        }

        @Override
        public void onComplete(FirebaseError error, boolean committed, DataSnapshot snapshot) {
            // no-op
        }
    });
}
Please note I use itemRef.removeValue() instead of mutableData.setValue(null) because the second one doesn't seem to work.
Firebase initially applies transactions client-side to improve concurrency. If that behaviour does not fit your use case, you can pass an additional argument to the transaction method that tells it to bypass the local apply.
See https://www.firebase.com/docs/web/api/firebase/transaction.html
applyLocally Boolean Optional
By default, events are raised each time the transaction update function runs. So if it is run multiple times, you may see intermediate states. You can set this to false to suppress these intermediate states and instead wait until the transaction has completed before events are raised.
I solved my problem in this way:
I used mutableData.setValue(null); otherwise Firebase can't make the transaction work properly
I set the applyLocally boolean explicitly to true, as I need to see local events too
I don't understand why mutableData.setValue(null) wasn't working before; I may be missing some earlier mistake, but that was the problem.
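The abort decision in the handler above boils down to a timestamp comparison. A minimal plain-Java sketch of that check (class and method names are illustrative; in the real handler the two values come from mutableData.child("lastUpdate").getValue() and item.getLastUpdate()):

```java
public class DeleteConflictCheck {
    // Abort the delete when the remote value is not available, or when the
    // remote copy was updated after our local copy's timestamp.
    public static boolean shouldAbort(Long remoteLastUpdate, long localLastUpdate) {
        if (remoteLastUpdate == null) {
            return true; // remote value missing or not yet loaded
        }
        return remoteLastUpdate > localLastUpdate; // remote copy is newer
    }
}
```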
Related
Let's suppose:
I have an app installed in two devices, A and B.
This app listens to changes in a people collection, as you can see:
FirebaseFirestore.getInstance().collection("people").addSnapshotListener((snapshots, e) -> {
    if (e != null || snapshots == null) {
        return;
    }
    for (DocumentChange dc : snapshots.getDocumentChanges()) {
        if (dc == null) {
            continue;
        }
        switch (dc.getType()) {
            case ADDED:
                onDocumentAdded(dc.getDocument());
                break;
            case MODIFIED:
                onDocumentModified(dc.getDocument());
                break;
            case REMOVED:
                onDocumentRemoved(dc.getDocument());
                break;
        }
    }
});
When device A adds a new person to the people collection, device B will be notified about it, but so will device A.
In my case, I am implementing Firestore in an existing app that already has its own persistence logic.
In fact, whenever a new person is added by device A, it is already stored in device A's app, but I want to save it on device B as well.
However, as device A is notified too, I would save this person twice.
Some solutions I've been thinking:
Storing a unique ID (UUID) in my local database and checking if it exists (but this would not work for the Modified event);
Defining a client ID (UUID) and sending it. When I get the notification from the listener, I check whether the client ID is the same one I defined locally.
I am asking because I do not know if there is already a built-in way to handle this.
If device A creates the document, and you have an active listener on the document created (or in this case, its collection), you should find that snapshots.getMetadata().isFromCache() == true, and the document added should have a similar trait: dc.getDocument().getMetadata().isFromCache() == true.
However, this is not entirely fool-proof, as documents that have field transforms such as serverTimestamp() may only fire the listener once they have been accepted and resolved by Firestore.
An alternative is to simply add the new person to Firestore (without saving it locally first) and let the snapshot listeners handle persisting the data. As mentioned above, the listener will normally be fired locally while the data is being sent off to your actual Firestore database.
Whenever I needed this, I've kept a list of the document IDs that the local client has written in local storage, and then check the snapshots in the listener against that list.
It's a bit of a brute force approach, but pretty simple to implement. And if you prune the IDs from the local list in the listener once you've gotten the update, the memory overhead is pretty minimal.
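A minimal sketch of that "remember my own writes" approach (class and method names are illustrative, not from any SDK): record each document ID just before this client writes it, then have the snapshot listener skip, and prune, any ID it finds in the set.

```java
import java.util.HashSet;
import java.util.Set;

public class LocalWriteTracker {
    private final Set<String> pendingIds = new HashSet<>();

    // Call just before writing a document from this client.
    public void recordLocalWrite(String documentId) {
        pendingIds.add(documentId);
    }

    // Call from the snapshot listener for each changed document.
    // Returns true (and prunes the ID) if this client wrote it.
    public boolean isOwnWrite(String documentId) {
        return pendingIds.remove(documentId);
    }
}
```

In the listener loop above, a check like `if (tracker.isOwnWrite(dc.getDocument().getId())) continue;` would skip the local echo while still handling writes from other devices.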
I have a node in Firebase Realtime Database which saves a boolean value true or false along with some more attributes. If and only if this key is false, then I allow the user to perform write operations on the same node in database.
When the user has done the task, this value is set back to false. This is like the critical-section concept in operating systems, i.e., only one user can perform the task at a time.
It works as I intended, but the issue I am facing is: if a user has changed this value to true but, due to a bad network or something else, never writes to the database, then no one can write to that node anymore.
I would like to add a time-interval functionality: if the user has not performed any write operation for some interval, say 10 minutes, and the boolean is true, then I would like to set it back to false.
I know that Firebase Cloud Functions trigger only on database write, update, and create operations. Please suggest something to handle this issue, or some other way I can do it. There is no code snippet for this functionality anywhere; I have looked at various resources on the internet and brainstormed myself, but got nothing.
This might not be perfect solution, but it is a workaround.
At the place where you allow data writes, probably through a conditional check, you can simply add another condition using the OR operator.
Simply allow the write when (the boolean value is false) or (the boolean value is true but the node's last update timestamp is older than the current timestamp by more than your interval).
It should be something like this (child key names are placeholders):
boolean locked = dataSnapshot.child("yourBooleanKey").getValue(Boolean.class);
long lastUpdate = dataSnapshot.child("yourTimestampKey").getValue(Long.class);
if (!locked || (locked && currentTimestamp - lastUpdate >= YOUR_TIME_INTERVAL)) {
    // Your logic here
}
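That condition can also be factored into a small helper to make the two branches explicit (a plain-Java sketch; names are illustrative): writing is allowed when the lock flag is false, or when it is true but stale, i.e. older than the allowed interval.

```java
public class LockCheck {
    // Returns true when the client may write: the node is unlocked, or the
    // lock is older than intervalMillis and is treated as abandoned.
    public static boolean canWrite(boolean locked, long lastUpdateMillis,
                                   long nowMillis, long intervalMillis) {
        if (!locked) {
            return true;
        }
        return nowMillis - lastUpdateMillis >= intervalMillis;
    }
}
```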
In firebase I set security rules in order to secure some nodes.
Problem
When I use multi path update to update multiple paths at the same time, If one security rule fails for any of the multiple paths, then the whole update fails.
example of my problem
Let's say I have 3 nodes (users, people, tasks) in my Realtime Database. In Android, the way to update the 3 paths together is by doing something like this:
Map<String, Object> multiUpdate = new HashMap<>();
multiUpdate.put("users/user1/name", "any_name");
multiUpdate.put("people/user2/status", "any_status");
multiUpdate.put("tasks/task1/details", "any_details");
DatabaseReference root = FirebaseDatabase.getInstance().getReference();
// update
root.updateChildren(multiUpdate);
Let's say my rules are like this:
"users":{
".write":"true"
}
,"people":{
".write":"true"
}
,"tasks":{
".write":"false"
}
Since tasks doesn't allow writes, the whole multi-path update never applies; it only goes through when all paths allow the write.
Can someone explain why that happens?
Thanks.
According to the Firebase docs:
Simultaneous updates made this way are atomic: either all updates
succeed or all updates fail.
So the problem you are describing isn't actually a problem, but the intended behaviour of root.updateChildren(multiUpdate);
If this behaviour is a problem in your case you could change your firebase rules to give permission to all parts of your multi-update or split up your multi-update in parts to make sure that the parts that can succeed will succeed.
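One way to implement the "split up your multi-update" option is to partition the map by top-level node before sending each part with its own updateChildren() call. A plain-Java sketch (class and method names are illustrative; note this deliberately trades away the atomicity of the single multi-path update):

```java
import java.util.HashMap;
import java.util.Map;

public class MultiUpdateSplitter {
    // Groups "users/user1/name" under "users", "tasks/task1/details" under
    // "tasks", etc., so each group can be sent as a separate update.
    public static Map<String, Map<String, Object>> splitByTopLevelNode(Map<String, Object> multiUpdate) {
        Map<String, Map<String, Object>> groups = new HashMap<>();
        for (Map.Entry<String, Object> entry : multiUpdate.entrySet()) {
            String topNode = entry.getKey().split("/")[0];
            groups.computeIfAbsent(topNode, k -> new HashMap<>())
                  .put(entry.getKey(), entry.getValue());
        }
        return groups;
    }
}
```

Each sub-map can then be passed to its own root.updateChildren(...) call, so a denied node (like tasks above) fails on its own without blocking the others.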
Another option in your case would be to check for errors like this:
root.updateChildren(multiUpdate, new DatabaseReference.CompletionListener() {
    @Override
    public void onComplete(DatabaseError firebaseError, DatabaseReference firebase) {
        if (firebaseError != null) {
            // The update failed; fall back to updating only the
            // minimum required fields
        }
    }
});
The downside here is you don't know what part of your update failed.
I have an interesting problem. I have a SQLite update that I am performing within an AsyncTask on Android (because I also had to do a ton of remote calls before doing the DB call). The code works like a charm, unless the application is pushed to the background (e.g., using the Home button). The task continues to work in the background successfully, the DB call is made and returns 1 row changed, but the data never actually makes it to the DB. No errors or exceptions. Even stranger, the logs show everything working just fine - no exceptions, nada.
Again, when NOT pushed to the background this works fine.
The call:
result = (sqlDB.update("FormInstance", values, "InstanceId=?", new String[] { String.valueOf(form.getSubmissionId()) }) > 0);
Also there is no transaction involved with this call (unless it is happening under the hood of the Android SQLite code).
Anyone know why this might be the case? Is there something that happens to DB connections or SQLite that I am unaware of when pushed to the background?
UPDATE
I have tried wrapping the DB call with a beginTransaction/endTransaction without any success:
sqlDB.beginTransaction();
try {
    result = (sqlDB.update("FormInstance", values, "InstanceId=?", new String[] { String.valueOf(form.getSubmissionId()) }) > 0);
} finally {
    sqlDB.endTransaction();
}
Still acts as though it was successful, but the data is never committed. Please note that I pulled the DB from the device and verified that it had NOT been updated.
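One thing worth noting about the wrapper above: SQLiteDatabase transactions are rolled back unless setTransactionSuccessful() is called before endTransaction(), so as written the try/finally block would discard the update even when update() reports a changed row. The standard idiom is:

```
sqlDB.beginTransaction();
try {
    result = (sqlDB.update("FormInstance", values, "InstanceId=?",
            new String[] { String.valueOf(form.getSubmissionId()) }) > 0);
    sqlDB.setTransactionSuccessful(); // without this, endTransaction() rolls back
} finally {
    sqlDB.endTransaction(); // commits only if marked successful
}
```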
After much testing, I found that while there was an onPause method occasionally updating the data, the real problem was that the SQLite update was not really updating one column (FormStatus) when the update was performed. This was only the case when running in the background. I verified this by querying the result immediately after the update. The final solution was a secondary update that only updated the FormStatus column, which did work. Wrapping with beginTransaction/endTransaction did not help.
My queries with .equalTo() return out-of-date data when used with addListenerForSingleValueEvent, while removing .equalTo() causes the listener to return updated data. Any idea why?
I'm using the following query to fetch user's posts from Realtime Database with persistence enabled on Android:
mDatabase.child("posts").orderByChild("uid").equalTo(id)
where id is the id of the current user and each post stores its author's id as a field.
When .equalTo(id) is present, new posts for the particular user are not returned by that query for the first few minutes. Even more, it seems to affect other queries on the same root ("posts") that contain .orderByChild. E.g., the following would also fail to recognise the new post:
mDatabase.child("posts").orderByChild("archived")
Once I remove the .equalTo(id), the behaviour goes back to normal. I'm using addListenerForSingleValueEvent. I tried it also with addValueEventListener, which fires two events: one without the new post, one with it. Without .equalTo(id), both single and non-single listeners return the new post in the first callback. Restarting the app doesn't seem to help straight away - the first event stays out-of-date for the next few minutes. The new post is successfully fetched by different queries in other parts of the application (e.g. mDatabase.child("posts").child(id)).
Any idea why .equalTo() causes such behaviour and how to avoid it (other than using non-single listener and ignoring first event)?
Note 1: same thing happens for .startAt(id).endAt(id)
Note 2: other parts of the Realtime Database are functioning normally, device is connected to the internet and new posts are containing the valid uid field matching the current user.
Update 26/10/2016
Calling mDatabase.child("posts").startAt(key).limitToFirst(4) also produces similar behaviour when trying to query a segment of the database (in our case to implement infinite scroll). It seems that explicitly adding .orderByKey() fixes that particular problem: mDatabase.child("posts").orderByKey().startAt(key).limitToFirst(4).
Though the issue outlined in the original question remains.
I ran into the exact same problem as you, and after experimenting with almost everything, I've managed to solve it on my end.
I'm scanning barcodes and fetching foods that have the scanned barcode:
Query query = refFoods.orderByChild("barcode").equalTo(barcode);
query.addListenerForSingleValueEvent(new ValueEventListener() { ... });
In my rules I had
".indexOn": "['barcode']"
and after I changed it and took the "[]" out, as in:
".indexOn": "barcode"
it started working delay-free, where before it would take something like 5 minutes to update.