Android Realm for-loop - RealmResults are removed during looping

I have some code running in an IntentService, doing image uploads. It starts by fetching posts that are "queued" for upload, then loops through those and makes synchronous Retrofit calls to upload images. Fairly straightforward. See code for reference:
final RealmResults<Post> posts = realm.where(Post.class)
        .equalTo("uniqueCode", uniqueCode)
        .equalTo("queued", true)
        .isEmpty("url")
        .findAll();
Log.d(TAG, "post count: " + posts.size());
if (posts != null && posts.size() > 0) {
    for (int i = 0; i < posts.size(); i++) {
        Log.d(TAG, "posts count now: " + posts.size());
        Post post = posts.get(i);
        Post submittedPost = api.uploadPhoto(<params>); // Retrofit call, which works fine
        if (submittedPost != null) {
            realm.beginTransaction();
            post.setQueued(false);
            post.setUrl(submittedPost.getUrl());
            realm.commitTransaction();
            sendBroadcastUpdate(); // This updates the UI in places
        }
    }
}
Oddly enough, each pass through the for-loop the size of the results ("posts" above) goes down by one - my Log output confirms it decrements each time - so it only gets through a few of the results. It's as if committing the Realm transaction during the loop updates my fetched query results, even though that variable is declared final.
I confirmed that it doesn't do this if I don't set those values ('queued' and 'url'), which tells me the results are being updated somehow. I've tried different things, such as a while-loop (i.e. while (posts.size() > 0)), but it gets through 2-3 of them and then the size of "posts" suddenly drops to 0, for no reason I can see.
I've also tried doing the begin/commit before and after the loop, with similar results, and the same goes for converting it to an array before processing. It always gets through a few of them, and then the size drops to 0 and it exits the loop.
This strikes me as very bizarre, especially since I declared the results "final" - is this expected behavior? Does anyone know a way around this, by chance?
For reference, we are using Realm version: 0.86.

In Realm versions < 0.89.0 and >= 3.0.0*, this is expected behavior.
RealmResults is a view into the latest version of the database (for a given object type, where the given conditions are met), and beginning a transaction essentially moves you to the latest version of the database, so the RealmResults starts to see the newly modified data with each change.
See the following:
final RealmResults<Post> posts = realm.where(Post.class)
        .equalTo("uniqueCode", uniqueCode)
        .equalTo("queued", true) // <---- condition true
        .isEmpty("url")
        .findAll();
...
if (submittedPost != null) {
    realm.beginTransaction();
    post.setQueued(false); // <---- condition false
    post.setUrl(submittedPost.getUrl());
    realm.commitTransaction();
    sendBroadcastUpdate(); // This updates the UI in places
}
As you create a transaction, you start to see the latest version, so the RealmResults no longer contains the elements that now have queued == false.
For Realm 0.88.3 or older, you need to iterate the RealmResults in reverse, or "iterate while the results aren't empty" (I used the latter a lot before the 0.89.0 breaking change killed it, but it works with 3.0.0+ again, which is nice):
realm.refresh(); // enforce that the next RealmResults is *definitely* up-to-date
final RealmResults<Post> posts = realm.where(Post.class)
        .equalTo("uniqueCode", uniqueCode)
        .equalTo("queued", true)
        .isEmpty("url")
        .findAll();
while (!posts.isEmpty()) {
    Post post = posts.get(0);
    Post submittedPost = api.uploadPhoto(<params>); // Retrofit call, which works fine
    if (submittedPost != null) {
        realm.beginTransaction();
        post.setQueued(false);
        post.setUrl(submittedPost.getUrl());
        realm.commitTransaction();
        sendBroadcastUpdate(); // This updates the UI in places
    } else {
        break; // upload failed: bail out, otherwise posts.get(0) never changes and the loop spins forever
    }
}
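The reverse-iteration variant, for completeness (a minimal sketch reusing the question's fields; <params> is left elided as in the question):
for (int i = posts.size() - 1; i >= 0; i--) {
    Post post = posts.get(i);
    Post submittedPost = api.uploadPhoto(<params>); // Retrofit call, as in the question
    if (submittedPost != null) {
        realm.beginTransaction();
        post.setQueued(false); // post drops out of the live results on commit
        post.setUrl(submittedPost.getUrl());
        realm.commitTransaction();
        sendBroadcastUpdate();
    }
}
// Going backwards is safe because removing index i only shifts the
// elements above i, all of which have already been visited.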
For Realm 3.0.0+, you can either use iterators (for (Post post : results) { ... }), or you can use a collection snapshot directly:
final RealmResults<Post> results = realm.where(Post.class)
        .equalTo("uniqueCode", uniqueCode)
        .equalTo("queued", true)
        .isEmpty("url")
        .findAll();
final OrderedRealmCollection<Post> posts = results.createSnapshot(); // <-- snapshot
for (int i = 0; i < posts.size(); i++) {
    // ... same upload/commit body as before
}
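The iterator form mentioned above would look like this (a minimal sketch; in 3.0.0+ the collection iterator is backed by an implicit snapshot, so commits inside the loop don't skip elements):
for (Post post : results) {
    Post submittedPost = api.uploadPhoto(<params>); // Retrofit call, as in the question
    if (submittedPost != null) {
        realm.beginTransaction();
        post.setQueued(false);
        post.setUrl(submittedPost.getUrl());
        realm.commitTransaction();
        sendBroadcastUpdate();
    }
}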
*(Honestly, the in-between behavior was a hack - the RealmResults was no longer synchronized to see the latest version inside transactions - and 3.0.0 had to undo that hack.)

This is normal behavior for an Iterator. Once you call .next(), the iterator advances past the item and you can't visit it again from the same iterator. If you want to keep the items, write a utility that copies them from the iterator into an ArrayList or some similar thing. You can read more about iterators here: https://www.tutorialspoint.com/java/java_using_iterator.htm
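For reference, such a utility is only a few lines (a generic sketch; assumes the usual java.util imports):
// Copies the remaining items of any Iterator into an ArrayList so they
// can be traversed repeatedly, independent of the source collection.
public static <T> List<T> toList(Iterator<T> iterator) {
    List<T> list = new ArrayList<>();
    while (iterator.hasNext()) {
        list.add(iterator.next());
    }
    return list;
}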

Related

Get all Firestore documents after specific date

What I'm trying to accomplish is the following: I have a collection where documents get added with a timestamp in them. I then want to listen to that collection via a SnapshotListener, but only for new documents. So I update my timestamp to the newest document timestamp received, and try to query only the documents newer than my timestamp. In onCreate I assign lastUpdateTime a date in the past, so that I get the first document added.
val sdf = SimpleDateFormat("dd/MM/yyyy", Locale.US)
try {
    lastUpdateTime = sdf.parse("01/01/2000")
} catch (e: ParseException) {
    //
}
Then I add the SnapshotListener and try to update lastUpdateTime so that I only look for documents newer than this date/time:
val path = //...path to my collection
private var lastUpdateTime = Any() // my up-to-date timestamp. I first assign it some date in the past, to make sure it gets the first document added.
// some code
talksListener = path.whereGreaterThan("timestamp", lastUpdateTime)
    .addSnapshotListener(EventListener<QuerySnapshot> { snapshot, e ->
        if (snapshot != null && !snapshot.isEmpty && !snapshot.metadata.hasPendingWrites()) {
            for (dSnapshot in snapshot) {
                val thisTimestamp = dSnapshot.get("timestamp")
                if (thisTimestamp != null) {
                    lastUpdateTime = thisTimestamp
                }
            }
        }
    })
But every time I add a document, I get the whole collection again.
I also tried all combinations of orderBy with startAt/endAt/startAfter/endBefore, but the result is the same: either I get nothing, or the whole collection every time a new document is added.
for example:
talksListener = path.orderBy("timestamp").startAfter(lastUpdateTime)
Where is the problem here?
Also, on a different note, is there a way to include !snapshot.metadata.hasPendingWrites() in the query in Kotlin? The documentation says to use MetadataChanges.INCLUDE, but I don't see how to implement it in Kotlin. Every hint is much appreciated.
edit 1:
My firestore DB is structured like this:
users/{user}/messages/{message} -> the timestamp is located here
and my path leads to ./messages
edit 2:
the solution is to detach and reattach the listener after the new lastUpdateTime is assigned. That doesn't sound good to me, so if anyone has a better solution, I'm happy to hear it. For the time being I'll stick with it though.
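For reference, a minimal sketch of that detach/re-attach workaround, written in Java against the Firestore Android SDK (path and lastUpdateTime are the question's fields; everything else is illustrative). It also uses the MetadataChanges.INCLUDE overload asked about above - note it only controls whether metadata-only changes fire events, so pending writes still have to be checked in the callback:
private ListenerRegistration registration;

private void attachListener() {
    registration = path.whereGreaterThan("timestamp", lastUpdateTime)
            .addSnapshotListener(MetadataChanges.INCLUDE, (snapshot, e) -> {
                if (e != null || snapshot == null || snapshot.getMetadata().hasPendingWrites()) {
                    return;
                }
                for (DocumentSnapshot doc : snapshot.getDocuments()) {
                    Object ts = doc.get("timestamp");
                    if (ts != null) {
                        lastUpdateTime = ts;
                    }
                }
                // The whereGreaterThan bound was captured when the query was
                // built, so detach and re-attach to query past the new bound.
                registration.remove();
                attachListener();
            });
}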

150k-word text file is 0.8 MB whereas Realm DB size is 18 MB

I am inserting 150,000 objects into a Realm DB. Each object has only one property, which is a string.
At the same time I am building a StringBuilder with a new line for each string,
and finally writing it into a text file.
At the end, the text file size is 0.8 MB, whereas the Realm DB size is 18 MB. What is the cause of this, and how can I minimize the Realm DB size? Can you please help me? Here is the Realm insertion code:
private void insertWord() {
    long time = System.currentTimeMillis();
    StringBuilder builder = new StringBuilder();
    RealmConf conf = RealmConf.getInstance(true);
    int i = 0;
    RealmUtils.startTransaction(conf);
    while (i < 150000) {
        i++;
        String word = "Word:" + i;
        EB eb = new EB(word);
        builder.append(word + "\n");
        RealmUtils.saveWord(eb, conf);
        Log.i("word check" + i, "seelog:" + word);
    }
    RealmUtils.commitTransaction(conf);
    writeStringIntoFile(builder.toString(), 0);
}
You could try the following, for science:
private void insertWord() {
    long time = System.currentTimeMillis();
    StringBuilder builder = new StringBuilder();
    RealmConf conf = RealmConf.getInstance(true);
    int i = 0;
    int batchCount = 0;
    while (i < 150000) {
        if (batchCount == 0) {
            RealmUtils.startTransaction(conf);
        }
        batchCount++;
        i++;
        String word = "Word:" + i;
        EB eb = new EB(word);
        builder.append(word + "\n");
        RealmUtils.saveWord(eb, conf);
        Log.i("word check" + i, "seelog:" + word);
        if (batchCount == 3000) { // commit every 3000 inserts so intermediate versions stay small
            RealmUtils.commitTransaction(conf);
            batchCount = 0;
        }
    }
    if (batchCount != 0) {
        RealmUtils.commitTransaction(conf);
    }
    writeStringIntoFile(builder.toString(), 0);
}
Probably because you forgot to call Realm.close(). Refer to this document for more details: https://realm.io/docs/java/latest/#faq
Large Realm file size
You should expect a Realm database to take less space on disk than an equivalent SQLite database, but in order to give you a consistent view of your data, Realm operates on multiple versions of a Realm. This can cause the Realm file to grow disproportionately if the difference between the oldest and newest version of data grows too big.
Realm will automatically remove the older versions of data if they are not being used anymore, but the actual file size will not decrease. The extra space will be reused by future writes.
If needed, the extra space can be removed by compacting the Realm file. This can either be done manually or automatically when opening the Realm for the first time.
If you are experiencing unexpected file size growth, it is usually happening for one of two reasons:
1) You open a Realm on a background thread and forget to close it again.
This will cause Realm to retain a reference to the data on the background thread and is the most common cause of Realm file size issues. The solution is to make sure to correctly close your Realm instance. Realm will detect if you forgot to close a Realm instance correctly and print a warning in Logcat. Threads with loopers, like the UI thread, do not have this problem.
2) You read some data from a Realm and then block the thread on a long-running operation while writing many times to the Realm on other threads.
This will cause Realm to create many intermediate versions that need to be tracked. Avoiding this scenario is a bit more tricky, but can usually be done by either batching the writes or avoiding having the Realm open while otherwise blocking the background thread.
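Both fixes from the FAQ are short in code; a minimal sketch (assumes a recent Realm Java version where Realm.init(context) has been called and Realm implements Closeable):
// 1) Always close background-thread instances; try-with-resources
// guarantees the close even if the work throws.
RealmConfiguration config = new RealmConfiguration.Builder().build();
try (Realm realm = Realm.getInstance(config)) {
    // ... background work ...
}

// 2) If the file has already grown, compact it while no instance is open.
// compactRealm() rewrites the file without the retained free space; it
// returns false (and does nothing) if any Realm instance is still open.
boolean compacted = Realm.compactRealm(config);
Log.i("Realm", "compacted: " + compacted);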

How to retrieve data from all pages?

I want to get some data for all users in the Users table. I've found that I have to use data paging. I've written the same code as described in Feature 47 (https://backendless.com/feature-47-loading-data-objects-from-server-with-sorting/), because I also have to sort, but then I figured out that this code takes data only from the first page. So I decided I have to go to the next page and read it, until its size equals zero. Below you can see my wrong solution:
QueryOptions queryOptions = new QueryOptions();
List<String> list = new ArrayList<String>();
list.add("point DESC");
queryOptions.setSortBy(list);
BackendlessDataQuery backendlessDataQuery = new BackendlessDataQuery();
backendlessDataQuery.setQueryOptions(queryOptions);
Backendless.Data.of(BackendlessUser.class).find(backendlessDataQuery, new AsyncCallback<BackendlessCollection<BackendlessUser>>() {
    @Override
    public void handleResponse(BackendlessCollection<BackendlessUser> one) {
        while (one.getCurrentPage().size() > 0) {
            Iterator<BackendlessUser> it = one.getCurrentPage().iterator();
            while (it.hasNext()) {
                // something here, not so important
            }
            one.nextPage(this); // here I want to get the next page,
            // but it seems not to work, since my loop became infinite
        }
    }
});
I think I have to use the nextPage method with an AsyncCallback instead of one.nextPage(this), but if I do so, the callback can't keep up with the loop. So how can I solve my problem?
I actually can't find the problem with your solution, but I solved this problem using
int tot = response.getTotalObjects();
to get the total number of objects from the first response. Then loop until your list of objects has size == tot; in each iteration, make a query with the offset set to the current size of the list.
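A minimal sketch of that approach using the synchronous find() (same SDK classes as in the question; run it off the main thread, and the page size of 100 is an assumption):
BackendlessDataQuery query = new BackendlessDataQuery();
query.setQueryOptions(queryOptions); // same "point DESC" sort options as above
query.setPageSize(100);

List<BackendlessUser> allUsers = new ArrayList<>();
BackendlessCollection<BackendlessUser> page =
        Backendless.Data.of(BackendlessUser.class).find(query);
int tot = page.getTotalObjects();
allUsers.addAll(page.getCurrentPage());
while (allUsers.size() < tot) {
    query.setOffset(allUsers.size()); // next page starts after what we already have
    page = Backendless.Data.of(BackendlessUser.class).find(query);
    if (page.getCurrentPage().isEmpty()) {
        break; // guard against the data changing under us
    }
    allUsers.addAll(page.getCurrentPage());
}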

Android: Quickest way to filter lists as user types a query

Good day all. I have a list of objects (let's call them ContactObject for simplicity). Each object contains two Strings, a name and an email.
The list will number somewhere around 2000 in size. The goal is to filter that list as the user types letters and display the matches on screen (i.e. in a RecyclerView). Ideally, it would sort so that objects with a non-null name appear above objects with a null name.
As of right now, the steps I am taking are:
1) Create two lists to start with, plus the String the user typed to compare against:
List<ContactObject> nameContactList = new ArrayList<>();
List<ContactObject> emailContactList = new ArrayList<>();
String compareTo; //Passed in as an argument
2) Loop through the master list of ContactObjects via an enhanced for loop
3) Get the name and email Strings
String name = contactObject.getName();
String email = contactObject.getEmail();
4) If the name matches, add it to the name list and continue to the next iteration, so a contact that matched by name never gets added twice:
if (name != null) {
    if (name.toLowerCase().contains(compareTo)) {
        nameContactList.add(contactObject);
        continue;
    }
}
if (email != null) {
    if (email.toLowerCase().contains(compareTo)) {
        emailContactList.add(contactObject);
    }
}
5) Outside of the for loop, now that the lists are built, use a comparator to sort the ones with names (I don't care about sorting the email-only ones at the moment):
Collections.sort(nameContactList, new Comparator<ContactObject>() {
    public int compare(ContactObject v1, ContactObject v2) {
        String fName1, fName2;
        try {
            fName1 = v1.getName();
            fName2 = v2.getName();
            return fName1.compareTo(fName2);
        } catch (Exception e) {
            return -1;
        }
    }
});
6) Loop through the built lists (one sorted) and add them to the master list that is set into the adapter for the RecyclerView:
for (ContactObject contactObject : nameContactList) {
    masterList.add(contactObject);
}
for (ContactObject contactObject : emailContactList) {
    masterList.add(contactObject);
}
7) And then we are all done.
Herein lies the problem: this code works just fine, but it is quite slow. When filtering through the list of 2000, it can take 1-3 seconds each time the user types a letter.
My goal is to emulate apps that let you search the phone's contact list, but they always seem to do it quicker than I can replicate.
Does anyone have any recommendations as to how I can speed this process up?
Is there some hidden Android secret I don't know of that lets you query a small section of the contacts in quicker succession?
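Not a hidden secret, but one common speed-up, sketched here under the question's assumptions (ContactObject as described, names sorted once up front): lower-case the fields once when the list is built, so each keystroke is a single allocation-light pass instead of 2000 toLowerCase() calls plus a sort. Running that pass off the UI thread hides whatever cost remains.
// Hypothetical wrapper that caches lower-cased fields; build this list
// once (sorted by name, null names last) when the contacts are loaded.
class IndexedContact {
    final ContactObject contact;
    final String nameLower;  // null if the name is null
    final String emailLower; // null if the email is null

    IndexedContact(ContactObject c) {
        contact = c;
        nameLower = c.getName() == null ? null : c.getName().toLowerCase();
        emailLower = c.getEmail() == null ? null : c.getEmail().toLowerCase();
    }
}

List<ContactObject> filter(List<IndexedContact> indexed, String compareTo) {
    String q = compareTo.toLowerCase();
    List<ContactObject> nameMatches = new ArrayList<>();
    List<ContactObject> emailMatches = new ArrayList<>();
    for (IndexedContact ic : indexed) {
        if (ic.nameLower != null && ic.nameLower.contains(q)) {
            nameMatches.add(ic.contact); // same "name wins" rule as step 4
        } else if (ic.emailLower != null && ic.emailLower.contains(q)) {
            emailMatches.add(ic.contact);
        }
    }
    // Since 'indexed' was sorted once up front, nameMatches is already in
    // name order; no per-keystroke Collections.sort is needed.
    nameMatches.addAll(emailMatches);
    return nameMatches;
}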

cordovaSQLite retrieving data slowly using ionic framework

I am making an application using the Ionic framework, and I am using SQLite to store a list of about 150 rows. Each row has two attributes, ID and Name.
I retrieve this data using a database factory which runs a query.
It works; however, when I test it on my Galaxy Tab 3, the list takes about 5-10 seconds to load, and once loaded, scrolling through the list items is super laggy.
Here's my controller:
.controller('ActionSearchCtrl', function($scope, ActionSearchDataService, DBA, $cordovaSQLite) {
    var tablet = true;
    var query = "select action FROM actions;";
    $scope.items = [];
    $scope.runSQL = function() {
        DBA.query(query).then(function(result) {
            $scope.items = DBA.getAll(result);
        });
    };
    if (tablet) { $scope.runSQL(); }
})
Here's my Database Factory
.factory('DBA', function($cordovaSQLite, $q, $ionicPlatform) {
    var self = this;
    // Handle queries and potential errors
    self.query = function(query, parameters) {
        parameters = parameters || [];
        var q = $q.defer();
        $ionicPlatform.ready(function() {
            $cordovaSQLite.execute(herbsDatabase, query, parameters)
                .then(function(result) {
                    q.resolve(result);
                }, function(error) {
                    console.warn('I found an error');
                    console.warn(error);
                    alert(error.message);
                    q.reject(error);
                });
        });
        return q.promise;
    };
    // Process a result set
    self.getAll = function(result) {
        var output = [];
        for (var i = 0; i < result.rows.length; i++) {
            output.push(result.rows.item(i));
        }
        return output;
    };
    // Process a single result
    self.getById = function(result) {
        var output = null;
        output = angular.copy(result.rows.item(0));
        return output;
    };
    return self;
})
So the query returns about 150 entries, which I need all on one page (I've looked into infinite scrolling and pagination, but my client wants all the items on one page, so this is not an option). From what I've read, 150 entries shouldn't be too slow in terms of watchers, as I am using ng-repeat for the list items. If anyone has a way to display this many items using the cordovaSQLite plugin so the list functions quicker, let me know! At the moment the list is pretty much unusable; I've tried it on other devices too, with the same result.
I've also tried creating about 200 dummy objects in the controller (without a DB call for the data), and the performance is fine. That's why I think it's an SQLite performance issue. This is my first post on here, so apologies if I am not clear enough.
Okay, so I used collection-repeat instead of ng-repeat in the template where the list was being generated. This dramatically increased the performance of the list. I hope this helps someone, as it was driving me crazy!
Your problem may come from several points.
Are you running on Android?
First, if you are running on Android, be aware that list management and SCROLLING are pretty lame at present in Ionic. In my case, only native scrolling gave good results while keeping ng-repeat:
app.config(function($ionicConfigProvider) {
    if (ionic.Platform.isAndroid())
        $ionicConfigProvider.scrolling.jsScrolling(false);
});
Speeding up SQLite
I clearly improved my SQLite performance by defining a correct index with this command:
'CREATE INDEX IF NOT EXISTS ' + _indexUniqueId + ' ON ' + _tableName + ' ' + IndexDefinition
In your case, if you have two columns (A and B) and only use A to query the table, then you need only one index, on column A, and the IndexDefinition above would be equal to: (A) - so the generated statement would read, for example, CREATE INDEX IF NOT EXISTS idx_action ON actions (action). Moreover, check the official SQLite doc to understand how indexes are managed:
https://www.sqlite.org/lang_createindex.html
Update to ionic RC5
If you have not done it yet, do it.
There are huge improvements to scrolling ng-repeat in 1.0.0-rc5.
Else
If this information does not help, please provide more detail on your issue: is the list unscrollable, does it delay a lot before displaying, or both?
