I'm doing a query to fetch only documents that are at most 1 month old.
I store the creation time in the document itself:
timestamp : 9 Apr, 2020 10:03:43 AM
Now, in my query, I want to get all the documents within the current month, but I don't want to use the current date from my client (so it cannot be tampered with), and I also don't want to read the document first just to find its timestamp.
Query
FirebaseFirestore.getInstance().collection("orders").whereEqualTo("shopId",shopId)
.whereEqualTo("status", 7).startAt().endAt().get().await()
I want to know an efficient server-side approximation to use in .startAt()/.endAt() so that I query only the documents with status 7 from the past month.
Any ideas?
Firestore does not offer time-boxed queries that restrict a range of documents based on the server's sense of time. You will have to trust that the client is sending the correct time.
The only control you have over time is using request.time in security rules. You could write a rule to allow queries that only fall within times based on the server timestamp, but it's still up to the client to specify the time correctly in the query. The rules will not be able to filter results based on time.
You might want to read more about how server timestamps work with security rules at the end of this article.
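If you do accept a client-computed boundary, a minimal Kotlin sketch could look like the following. It assumes the field is named timestamp and is stored as a Firestore Timestamp (ideally written with FieldValue.serverTimestamp() so the stored value itself comes from the server); the rest mirrors the query in the question.

import com.google.firebase.Timestamp
import com.google.firebase.firestore.FirebaseFirestore
import kotlinx.coroutines.tasks.await
import java.util.Calendar

// Sketch only: fetches status-7 orders created since the start of the current
// month. The lower bound is computed on the client, so it can still be spoofed.
suspend fun ordersThisMonth(shopId: String) =
    FirebaseFirestore.getInstance().collection("orders")
        .whereEqualTo("shopId", shopId)
        .whereEqualTo("status", 7)
        .whereGreaterThanOrEqualTo("timestamp", Timestamp(startOfCurrentMonth()))
        .get().await()
        .documents

private fun startOfCurrentMonth() = Calendar.getInstance().apply {
    set(Calendar.DAY_OF_MONTH, 1)
    set(Calendar.HOUR_OF_DAY, 0); set(Calendar.MINUTE, 0)
    set(Calendar.SECOND, 0); set(Calendar.MILLISECOND, 0)
}.time

Note that combining the equality filters with a range filter on timestamp requires a composite index, which the Firestore console will prompt you to create.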
Related
I have a collection called Countries, and inside this collection there are 25 documents (25 countries).
When the user presses the select-country button, it takes them to CountriesFragment, and when the fragment starts I fetch the 25 countries from Firestore.
Suppose the user visits CountriesFragment many times; every visit costs me 25 reads, so I decided to reduce the read cost in the following way.
I created a key using SharedPreferences and I named the key lastCountriesUpdate.
I check whether that key is empty or not. If the value is empty, that means I should fetch all countries from Firestore.
Otherwise, I only get the documents from the Countries collection whose lastUpdate value is greater than the stored lastCountriesUpdate value.
Everything works well and I reduced the read cost, but the problem now is: how can I know if a country was deleted from the Countries collection, so that I can remove it on the client side?
How can I know if any country was deleted from the Countries collection to remove it from the client-side?
By default, you can only know that by fetching the collection (or by fetching a specific document that you know might have been deleted, but I don't think that matches your use case).
A possible less costly alternative approach could be to:
Keep a timestamp of the last update to the Countries collection in a unique document in a specific collection (e.g. a doc path like countriesLastUpdate/lastUpdate), and
Each time you need to display the countries list, you fetch this unique document (i.e. it costs only one read) and check if the timestamp has changed.
More precisely this means that:
The first time you need to display the countries list you fetch the collection AND you fetch the lastUpdate document, read the timestamp value and store it in your app.
The subsequent times you need to display the countries list, you first fetch the lastUpdate document and check if the timestamp has changed. If it hasn't changed, you use the list already saved in your app; if it has changed, you do the same as the first time.
You need, in the back-end, to update the lastUpdate document each time there is a change in the collection. The best solution is to use a Cloud Function with the onWrite trigger.
The Cloud Function would be as simple as:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Fires on every create/update/delete in the countries collection and
// records the event time in countriesLastUpdate/lastUpdate.
exports.countriesLastChangeUpdate = functions.firestore
    .document('countries/{docId}')
    .onWrite((change, context) => {
      return admin.firestore().doc('countriesLastUpdate/lastUpdate')
        .set({ lastUpdate: context.timestamp });
    });
Note that there could be a tiny time lag between the country doc deletion (or creation, or change) and the completion of the Cloud Function (cold start and execution time). If you want to avoid this you could execute the country doc deletion via a Cloud Function which will update the lastUpdate document at the same time, via a batched write.
Another variation would be to store the current timestamp in your app the first time you fetch the collection, and on subsequent visits check whether the timestamp stored in the lastUpdate document is later than the one stored in your app. That way you avoid the initial read of the lastUpdate document.
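On the client, the check itself can stay very small. A minimal Kotlin sketch, assuming the document path countriesLastUpdate/lastUpdate and the SharedPreferences key lastCountriesUpdate used in this thread (the helper name is made up):

import android.content.SharedPreferences
import com.google.firebase.firestore.FirebaseFirestore
import kotlinx.coroutines.tasks.await

// Sketch only: reads the single lastUpdate document (one read) and compares it
// with the value cached in SharedPreferences.
suspend fun shouldRefetchCountries(prefs: SharedPreferences): Boolean {
    val snapshot = FirebaseFirestore.getInstance()
        .document("countriesLastUpdate/lastUpdate")
        .get().await()
    val remote = snapshot.getString("lastUpdate") ?: return true   // no marker yet
    val local = prefs.getString("lastCountriesUpdate", null) ?: return true
    return remote != local   // something changed since the last full fetch
}

If it returns true, fetch the Countries collection again and write the new lastUpdate string back into SharedPreferences; otherwise keep using the locally cached list.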
Since Firestore has no foreign-key functionality like MySQL, I am not able to replicate one of my important features: updating a value in one place and having it reflected everywhere. Also, Firestore has no functionality to update a specific field in all documents at once.
There are already questions like this, but I could not find my solution. Suppose I have a million documents containing a field that stores the density of a material. Later on, I find that my density value was wrong, so how do I update that value in all documents efficiently? Also, I do not want to use the server/Admin SDK.
If you need to change the contents of 1 million documents, then you will need to query for those 1 million documents, iterate the results, then update each of those 1 million documents individually.
There is no equivalent of a SQL "UPDATE ... WHERE" statement that updates multiple documents in one query. It requires one update per document.
If you don't want to use the Admin SDK, then the only option you have is to update the value of your densityMaterial property on the client, which might not be the best solution. However, if you divide the update operation into smaller chunks, you might succeed, as sketched below.
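A minimal Kotlin sketch of that chunked approach, assuming the documents live in a collection called materials and the field is densityMaterial (both names are placeholders from this discussion), and that your security rules allow the signed-in client to update all of them:

import com.google.firebase.firestore.DocumentSnapshot
import com.google.firebase.firestore.FieldPath
import com.google.firebase.firestore.FirebaseFirestore
import kotlinx.coroutines.tasks.await

// Sketch only: pages through the collection 500 documents at a time and fixes
// the field with one batched write per page (500 is the batch limit).
suspend fun fixDensity(newValue: Double) {
    val db = FirebaseFirestore.getInstance()
    var lastDoc: DocumentSnapshot? = null
    while (true) {
        var query = db.collection("materials")
            .orderBy(FieldPath.documentId())
            .limit(500)
        if (lastDoc != null) query = query.startAfter(lastDoc)
        val page = query.get().await()
        if (page.isEmpty) break
        val batch = db.batch()
        page.documents.forEach { batch.update(it.reference, "densityMaterial", newValue) }
        batch.commit().await()
        lastDoc = page.documents.last()
    }
}

Even so, a million updates still means a million document writes on your bill and a long-running job on a phone; the Admin SDK in a trusted environment remains the more robust place to run this.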
If you are using a POJO class to map each document, then you might be interested in my answer from the following post:
How to update one field from all documents using POJO in Firestore?
And if you are not using a POJO class, please check my answer from the following post:
Firestore firebase Android search and update query
Regarding the cost, you'll be billed with one write operation for every document that is updated. If all 1 MIL documents will be updated, then you'll be billed with 1 MIL write operations.
Edit:
Suppose I have a million documents containing a field that stores the density of a material. Later on, I find that my density value was wrong, so how do I update that value in all documents efficiently?
If all of those 1 MIL documents contain a property called densityMaterial, that holds the exact same value, it doesn't make any sense to store that property within each document. You can create a single document that contains that particular value, and in each and every document of those 1 MIL, simply add only a reference to that document. A DocumentReference is a supported data-type. Now, if you need to change that value, it will incur only a single document write.
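A minimal sketch of that structure in Kotlin; the materialProperties/density path, the materials collection, and the field names are all made up for illustration:

import com.google.firebase.firestore.FirebaseFirestore
import kotlinx.coroutines.tasks.await

// Sketch only: one shared document holds the density value, and each material
// document stores a DocumentReference to it instead of copying the value.
suspend fun example() {
    val db = FirebaseFirestore.getInstance()
    val densityDoc = db.document("materialProperties/density")

    // Each material points at the shared document.
    db.collection("materials").document("steel")
        .set(mapOf("name" to "steel", "densityRef" to densityDoc))
        .await()

    // Correcting the value later costs a single document write.
    densityDoc.set(mapOf("value" to 7850.0)).await()
}

Reading a material then costs one extra read for the shared document, but after the first fetch it can be served from the local cache.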
However, if you have different values for the densityMaterial property and all of them are wrong, then the problem isn't with the database, it's with the mechanism/people adding the data. Having added 1 MIL incorrect documents is not a database problem.
Why not choose MySQL?
MySQL cannot scale in the way Cloud Firestore does. Firestore simply scales massively.
Can I avoid this problem anyhow?
Yes, you can, by using a single document for such details.
I have data structure like this:
Employees (Collection) > {EmployeeID} (Documents) > Chat (Collection) > {ChatId} (Documents).
In the Chat collection, each document has 3 fields: senderName, sendTimestamp, and messageText.
I want to delete chats which are older than 7 days (from today).
I think it might be possible through a Cloud Function, but I am a beginner and don't know much about Cloud Functions. Please note that I don't want it to run automatically (cron job); I will run it manually on a daily basis or whenever I wish.
I have searched a lot for this, but it's really hard. Please help me.
A big part of this task involves querying a sub collection. You can read more about this idea here: Firestore query subcollections
There are basically two options at the time of writing this:
Query the entire top-level collection (Employees), something like db.collection('Employees').get(). Then you would have to loop through each employee document and query its Chat sub collection by the date range (see Firestore query by date range for more on querying by a date in Firestore, and the sketch after this list). This could result in a large number of reads depending on the number of Employee documents, but it is the "easiest" approach in terms of not having to change your data model/application.
Restructure your data to make the sub collection Chat a top-level collection. Then you can query this top-level collection by the date. Fewer reads, but it may not be as feasible, depending on whether the app is in production and your willingness to make code changes.
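A minimal Kotlin sketch of option 1, run manually from the client; it assumes sendTimestamp is stored as a Firestore Timestamp (if you store it as a number or string, adjust the comparison accordingly):

import com.google.firebase.Timestamp
import com.google.firebase.firestore.FirebaseFirestore
import kotlinx.coroutines.tasks.await
import java.util.Date
import java.util.concurrent.TimeUnit

// Sketch only: walks every employee's Chat sub collection and deletes the
// messages older than 7 days.
suspend fun deleteOldChats() {
    val db = FirebaseFirestore.getInstance()
    val cutoff = Timestamp(Date(System.currentTimeMillis() - TimeUnit.DAYS.toMillis(7)))

    val employees = db.collection("Employees").get().await()
    for (employee in employees.documents) {
        val oldChats = employee.reference.collection("Chat")
            .whereLessThan("sendTimestamp", cutoff)
            .get().await()
        for (chat in oldChats.documents) {
            chat.reference.delete().await()
        }
    }
}

Grouping the deletes into batches of up to 500 with db.batch() would cut down the round trips, and the same logic carries over almost unchanged to a Cloud Function with the Admin SDK if you later want it server-side.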
A Function would definitely be able to accomplish this task either way you decide to approach it. One thing to note is that a Function executes using the Admin SDK, meaning it can basically ignore security rules set up on your Firestore.
I am learning the basics of Firestore and trying to build an app which allows user1 to share a document with user2/3/4 etc.
For billing purposes, every query that results in a document read counts towards the cost. So I do not want to follow the approach of adding the user2/3/4 etc. emails to a 'sharedWith' field of array or map type, as I believe every user will then have to scan the entire collection and pick the documents where their email appears.
Is there any other approach to this where user1 can programmatically give access to user2/3/4 of one specific document?
For billing purposes, every query which results in a document read counts towards the cost.
That's correct and according to the official documentation regarding Cloud Firestore billing:
There is a minimum charge of one document read for each query that you perform, even if the query returns no results.
So you're also charged with one document read, even if your query does not return any results.
I believe every user will then have to scan the entire collection and pick the documents where their email appears.
That's also correct. So let's assume the email address that you are looking for exists in a document that is part of a collection of 10k documents. If you query the database only for that particular document, you'll be charged with only one document read and not for those 10k. So you are charged according to the number of documents you get back, not the number of documents you search through. This applies to the first request, when you get the data from the Firebase servers. If nothing has changed in the meantime, the second time you get the data from the cache, since Firestore has offline persistence enabled by default, which means you aren't charged for any additional document reads.
Is there any other approach to this where user1 can programmatically give access to user2/3/4 of one specific document?
Without writing that data to the database, there is not. So you should add the IDs or email addresses to the desired documents and perform a query against them.
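To illustrate the billing point: with a sharedWith array field, a query only returns, and only bills for, the documents actually shared with that user. A minimal Kotlin sketch, where the documents collection name and the sharedWith field are placeholders:

import com.google.firebase.firestore.FirebaseFirestore
import kotlinx.coroutines.tasks.await

// Sketch only: fetch just the documents shared with this user.
// Reads are billed per document returned, not per document scanned.
suspend fun sharedDocsFor(email: String) =
    FirebaseFirestore.getInstance()
        .collection("documents")
        .whereArrayContains("sharedWith", email)
        .get().await()
        .documents

Sharing a document with user2 is then a single update that adds their email (or, better, their UID) to the sharedWith array, for example with FieldValue.arrayUnion(), and the same field can also be checked in security rules to restrict who may read the document.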
Need help with DynamoDB.
I am switching my back end from Parse.com (because they are retiring Parse) to AWS Mobile Hub.
I want to capture and save the date and time at which a row or item of data is written into my DynamoDB table. In Parse this is done automatically, but not so in DynamoDB.
I have searched around on the internet for clues but found no solid explanation or example so far.
Can someone please point me in the right direction or show some example code here on how to implement CreatedAt and UpdatedAt in DynamoDB?
Do I take my system time and save it to DynamoDB, or do I get the server time?
If I need a server timestamp, which AWS server time do I get and how can I implement it?
Thanks a lot.
AWS enables one to Use AWS Lambda with Amazon DynamoDB
This enables you to trigger server-side code based on DynamoDB events. It's way more involved than Parse but would enable you to avoid using app-side dates/times/code to maintain CreatedAt and UpdatedAt values.
Such datetime values need to be stored as numbers (e.g. epoch milliseconds) or strings (e.g. ISO 8601), since datetime is not a JSON type and DynamoDB doesn't go beyond basic JSON field types.
This is tricky for a number of reasons. First, if you require a strict order of events, then relying on wall-clock time can cause a bunch of problems, and you might want to look into more advanced distributed-systems algorithms like https://en.wikipedia.org/wiki/Lamport_timestamps or https://en.wikipedia.org/wiki/Vector_clock.
If just "pretty close" is fine for your use case, then the main thing to keep in mind is that mobile-device system time is often incorrect: users can change the clock, move between time zones, etc. There are a few things you could do.
A) You could have your own server that keeps a central time that you ping.
B) DynamoDB also returns a Date header when you make calls, which you could look at, though it only arrives in the response; you could at least use it to check whether the date on your device is accurate. The SDK does something similar (see https://github.com/aws/aws-sdk-android/blob/78cdf680115a891a6e1355c56068e2f56e3c5056/aws-android-sdk-core/src/main/java/com/amazonaws/http/AmazonHttpClient.java) when DynamoDB returns an error because the date in the request signature doesn't match the time DynamoDB is expecting. This probably isn't the best solution, but it should at least give you ideas of possible avenues.
I have decided to capture System.currentTimeMillis() at the time the user clicks the post/save button in my app and save the epoch time in milliseconds into my DB's CreatedAt attribute.
I read here: "One of the most important things to realize is that 2 persons calling System.currentTimeMillis() at the same time should get the same result no matter where they are on the planet." So I intend to generate a random UUID primary key (hash key) and then save the user's CreatedAt time as the sort key.
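A minimal Kotlin sketch of that write with the low-level AWS SDK for Android; the table name Posts and the attribute names are placeholders, and the AmazonDynamoDB client is assumed to be configured elsewhere (credentials, region):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB
import com.amazonaws.services.dynamodbv2.model.AttributeValue
import com.amazonaws.services.dynamodbv2.model.PutItemRequest
import java.util.UUID

// Sketch only: random UUID as the hash key, client epoch millis as CreatedAt.
// DynamoDB has no datetime type, so the timestamp is stored as a Number.
fun savePost(ddb: AmazonDynamoDB, text: String) {
    val item = mapOf(
        "Id" to AttributeValue().withS(UUID.randomUUID().toString()),
        "CreatedAt" to AttributeValue().withN(System.currentTimeMillis().toString()),
        "Text" to AttributeValue().withS(text)
    )
    ddb.putItem(PutItemRequest().withTableName("Posts").withItem(item))
}

An UpdatedAt attribute would be overwritten the same way on every edit, or maintained server-side with the Lambda approach mentioned above.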
I am open to any other practical options. I will use this in the meantime for testing and observe the behaviour.