I want to choose between the native SQLiteDatabase and Realm for dealing with a large amount of data.
As a benchmark I insert 1 million Product entities into storage:
{id: integer, sku: string, name: string, date_created: string}
Using SQLiteDatabase it takes about 1 minute 34 seconds on my device.
Using Realm it takes more than 10 minutes.
My code is:
Realm realm = Realm.getInstance(getApplicationContext());
realm.beginTransaction();
for (int i = 0; i < 1000000; i++) {
    Product product = realm.createObject(Product.class);
    product.setId(i + 1);
    product.setName("Product_" + i);
    product.setSku("SKU__" + i);
    product.setDateCreated(new Date());
}
realm.commitTransaction();
How can I improve my code for better time performance?
The original question spawned a discussion within Realm, and we ended up adding a faster method for inserting objects. The code for creating and inserting 1 million objects can now be written as:
final Product product = new Product();
final Date date = new Date();
try (Realm realm = Realm.getDefaultInstance()) {
    realm.executeTransaction(new Realm.Transaction() {
        @Override
        public void execute(Realm realm) {
            for (int i = 0; i < 1000000; i++) {
                product.setId(i + 1);
                product.setName("Product_" + i);
                product.setSku("SKU__" + i);
                product.setDateCreated(date);
                realm.insert(product);
            }
        }
    });
}
You have to be aware that SQLite and Realm are two very different things. Realm is an object store and you are creating a lot of objects in the code shown above. Depending on your model class and the number of rows/objects, you will often see that Realm is a bit slower on inserts. To do a fair comparison, you could compare Realm with one of the many excellent ORMs out there.
That said, Realm offers a low-level interface (io.realm.internal). I wouldn't recommend using it, as it is currently undocumented. Your example would look like this:
long numberOfObjects = 1000000;
SharedGroup sharedGroup = new SharedGroup("default.realm");
WriteTransaction writeTransaction = sharedGroup.beginWrite();
Table table = writeTransaction.getTable("class_Product");
table.addEmptyRows(numberOfObjects);
for (int i = 0; i < numberOfObjects; i++) {
    table.setLong(0, i, i);                // id
    table.setString(1, i, "Product_" + i); // name
    table.setString(2, i, "SKU__" + i);    // sku
    table.setDate(3, i, new Date());       // date
}
writeTransaction.commit();
sharedGroup.close();
You can now compare two table/row oriented data stores, and you will probably find that Realm is a bit faster than SQLite.
At Realm, we have a few ideas on how to get our object interface to run faster, and we hope to be able to implement them in the near future.
Related
I've integrated Realm for Android into my project.
I want to restrict my table to a maximum of 100 records. When new records arrive and the count grows past 100, the surplus records (101..N) should be deleted, so that the table always contains only the most recent 100 records.
Any help will be appreciated.
Thanks in advance!
There is no automatic way of doing this. But if you add a timestamp to your model class (say, created), you can register a change listener and delete the old objects there. Something like:
realmListener = new RealmChangeListener<Realm>() {
    @Override
    public void onChange(Realm realm) {
        int nObjsToDelete = (int) realm.where(YourClass.class).count() - 100;
        if (nObjsToDelete > 0) {
            // oldest first, so the surplus sits at the front of the results
            RealmResults<YourClass> stale = realm.where(YourClass.class)
                    .sort("created", Sort.ASCENDING)
                    .limit(nObjsToDelete)
                    .findAll();
            stale.deleteAllFromRealm(); // must run inside a write transaction
        }
    }
};
realm.addChangeListener(realmListener);
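The off-by-one-prone part is deciding how many records to drop. This is a plain-Java sketch of the same trimming arithmetic, with no Realm involved: YourClass is replaced by a list of created timestamps sorted oldest-first, and the cap is hard-coded at 100.

```java
import java.util.ArrayList;
import java.util.List;

class TrimSketch {
    static final int CAP = 100;

    // Given records sorted oldest-first, return only the newest CAP entries.
    static List<Long> trimToCap(List<Long> createdTimestamps) {
        int surplus = createdTimestamps.size() - CAP;
        if (surplus <= 0) {
            return createdTimestamps; // nothing to delete
        }
        // Drop the `surplus` oldest entries at the front of the list.
        return new ArrayList<>(createdTimestamps.subList(surplus, createdTimestamps.size()));
    }

    public static void main(String[] args) {
        List<Long> ts = new ArrayList<>();
        for (long t = 1; t <= 130; t++) ts.add(t); // 130 records, oldest first
        List<Long> kept = trimToCap(ts);
        System.out.println(kept.size());  // 100
        System.out.println(kept.get(0));  // 31 -> the 30 oldest were dropped
    }
}
```

The same `size() - 100` computation drives the Realm query above; only the deletion mechanism differs.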
I am inserting a new book into my book table and afterwards trying to assign it to a many-to-many relation table. In my opinion this should run in a transaction
(because if the m2m insertion fails, the information about the relationship is lost). My code currently looks as follows and fails because I cannot call BookUserXRefDao.insert(bookUser) due to static-context errors.
Is there an easy way to fix this?
@Transaction
public void insertBook(Book theBook, List<Integer> userIds) {
    long newBookId = insert(theBook);
    // insert into the m2m relation
    BookUserXRef[] bookUser = new BookUserXRef[userIds.size()];
    for (int i = 0; i < userIds.size(); i++) {
        bookUser[i] = new BookUserXRef(newBookId, userIds.get(i));
    }
    BookUserXRefDao.insert(bookUser);
}
Just realized that I can access the singleton database instance from within my transaction.
Therefore I could just use
AppDb.getAppDb().BookUserXRefDao().insert(bookUser);
That solved the problem.
I want to use GreenDAO for persistence, but I cannot get it to persist my data.
The data is saved and loaded correctly as long as the application is not restarted.
Once I swipe the app away and reopen it from scratch, GreenDAO does not see the previous data (both on the emulator and on a real device).
This is my entity:
@Entity
public class TestSingleEntity {
    @Id(autoincrement = true)
    Long id;

    int someNumber;

    public TestSingleEntity(int someNumber) {
        this.someNumber = someNumber;
    }

    @Generated(hash = 787203968)
    public TestSingleEntity(Long id, int someNumber) {
        this.id = id;
        this.someNumber = someNumber;
    }

    @Generated(hash = 1371368161)
    public TestSingleEntity() {
    }

    // ... some more stuff
}
This is how I insert entities to database:
Random rnd = new Random();
TestSingleEntity singleEntity = new TestSingleEntity();
singleEntity.setSomeNumber(rnd.nextInt());
DaoSession session = ((MyApp)getApplication()).getDaoSession();
TestSingleEntityDao dao = session.getTestSingleEntityDao();
dao.insert(singleEntity);
Log.d("tgd", "Inserted an entity with id " + singleEntity.getId());
And this is how I read them:
Query<TestSingleEntity> query = dao.queryBuilder().orderAsc(TestSingleEntityDao.Properties.SomeNumber).build();
List<TestSingleEntity> result = query.list();
Log.d("size", String.valueOf(result.size()));
for (TestSingleEntity testSingleEntity : result) {
    Log.d("entity", testSingleEntity.toString());
}
As I have said, as long as I stay in the app (moving around between activities is okay), every time the insert is called a new entity with a new ID is created. As soon as I relaunch the app, it goes back to square one.
The setup was taken directly from the GitHub page. What am I doing wrong? Thanks
Disclaimer: GreenDAO has gone through major changes since I last used it, so this is purely based on reading their code on GitHub.
Apparently GreenDAO's poorly documented DevOpenHelper drops all tables on upgrade, so the real question is why onUpgrade is being called when there clearly hasn't been a change to the schema version. Look for the log line that mentions dropping the tables, as described in the template for DevOpenHelper.
Regardless, using OpenHelper instead should fix the issue.
I am inserting 150,000 objects into a Realm DB. The object has only one property, which is a string.
At the same time I am building a string with a new line for each word,
and finally I write it into a text file.
In the end the text file size is 0.8 MB, whereas the Realm DB size is 18 MB. What is the cause of this, and how can I minimize the Realm DB size? Can you please help me? Here is the Realm insertion code:
private void insertWord() {
    long time = System.currentTimeMillis();
    StringBuilder builder = new StringBuilder();
    RealmConf conf = RealmConf.getInstance(true);
    int i = 0;
    RealmUtils.startTransaction(conf);
    while (i < 150000) {
        i++;
        String word = "Word:" + i;
        EB eb = new EB(word);
        builder.append(word + "\n");
        RealmUtils.saveWord(eb, conf);
        Log.i("word check" + i, "seelog:" + word);
    }
    RealmUtils.commitTransaction(conf);
    writeStringIntoFile(builder.toString(), 0);
}
You could try the following, for science:
private void insertWord() {
    long time = System.currentTimeMillis();
    StringBuilder builder = new StringBuilder();
    RealmConf conf = RealmConf.getInstance(true);
    int i = 0;
    int batchCount = 0;
    while (i < 150000) {
        if (batchCount == 0) {
            RealmUtils.startTransaction(conf);
        }
        batchCount++;
        i++;
        String word = "Word:" + i;
        EB eb = new EB(word);
        builder.append(word + "\n");
        RealmUtils.saveWord(eb, conf);
        Log.i("word check" + i, "seelog:" + word);
        if (batchCount == 3000) {
            RealmUtils.commitTransaction(conf);
            batchCount = 0;
        }
    }
    if (batchCount != 0) {
        RealmUtils.commitTransaction(conf);
    }
    writeStringIntoFile(builder.toString(), 0);
}
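The batching logic can be sanity-checked without Realm at all. In this plain-Java sketch the RealmUtils calls are replaced by counters, confirming that 150,000 items with a batch size of 3,000 yield exactly 50 commits and save every item once:

```java
class BatchSketch {
    static int commits = 0;
    static int saves = 0;

    static void run(int total, int batchSize) {
        int batchCount = 0;
        int i = 0;
        while (i < total) {
            if (batchCount == 0) {
                // RealmUtils.startTransaction(conf) would go here
            }
            batchCount++;
            i++;
            saves++; // stands in for RealmUtils.saveWord(eb, conf)
            if (batchCount == batchSize) {
                commits++; // stands in for RealmUtils.commitTransaction(conf)
                batchCount = 0;
            }
        }
        if (batchCount != 0) {
            commits++; // commit the final partial batch, if any
        }
    }

    public static void main(String[] args) {
        run(150000, 3000);
        System.out.println(saves + " saves, " + commits + " commits"); // 150000 saves, 50 commits
    }
}
```

Because 150,000 divides evenly by 3,000, the final partial-batch commit never fires here; it only matters for totals that are not a multiple of the batch size.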
Probably because you forgot to call Realm.close().
Refer to this document for more details.
https://realm.io/docs/java/latest/#faq
Large Realm file size
You should expect a Realm database to take less space on disk than an
equivalent SQLite database, but in order to give you a consistent view
of your data, Realm operates on multiple versions of a Realm. This can
cause the Realm file to grow disproportionately if the difference
between the oldest and newest version of data grows too big.
Realm will automatically remove the older versions of data if they are
not being used anymore, but the actual file size will not decrease.
The extra space will be reused by future writes.
If needed, the extra space can be removed by compacting the Realm
file. This can either be done manually or automatically when opening
the Realm for the first time.
If you are experiencing unexpected file size growth, it is usually
happening for one of two reasons:
1) You open a Realm on a background thread and forget to close it
again.
This will cause Realm to retain a reference to the data on the
background thread and is the most common cause for Realm file size
issues. The solution is to make sure to correctly close your Realm
instance. Read more here and here. Realm will detect if you forgot to
close a Realm instance correctly and print a warning in Logcat.
Threads with loopers, like the UI thread, do not have this problem.
2) You read some data from a Realm and then block the thread on a
long-running operation while writing many times to the Realm on other
threads.
This will cause Realm to create many intermediate versions that need
to be tracked. Avoiding this scenario is a bit more tricky, but can
usually be done by either batching the writes or avoiding
having the Realm open while otherwise blocking the background thread.
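The "file grows but never shrinks" behaviour described above can be pictured as a high-water mark: the file grows to fit the largest live-plus-retained snapshot, freed space is reused internally, and only compaction shrinks the file itself. A toy model in plain Java (all names and byte counts here are illustrative, not Realm internals):

```java
class FileSizeModel {
    long liveBytes = 0;      // bytes used by the newest version of the data
    long retainedBytes = 0;  // extra bytes pinned by unclosed older versions
    long fileBytes = 0;      // actual file size: a high-water mark

    void write(long bytes, boolean oldVersionStillOpen) {
        liveBytes = bytes;
        if (oldVersionStillOpen) {
            retainedBytes += bytes; // the old version can't be reclaimed yet
        } else {
            retainedBytes = 0;      // space becomes reusable, but the file doesn't shrink
        }
        fileBytes = Math.max(fileBytes, liveBytes + retainedBytes);
    }

    long compact() {
        // compacting rewrites the file down to the live data only
        retainedBytes = 0;
        fileBytes = liveBytes;
        return fileBytes;
    }

    public static void main(String[] args) {
        FileSizeModel m = new FileSizeModel();
        m.write(10, false); // file: 10
        m.write(10, true);  // an open reader pins the old version: file 20
        m.write(10, true);  // another pinned version: file 30
        m.write(10, false); // versions released, but the file stays at 30
        System.out.println(m.fileBytes); // 30
        System.out.println(m.compact()); // 10
    }
}
```

This is why forgetting Realm.close() on a background thread inflates the file: every pinned version raises the high-water mark, and only compaction brings it back down.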
I am currently doing the following but I don't think it's the efficient way of doing it:
Realm defaultInstance = Realm.getDefaultInstance();
RealmResults<Stamp> stamps = defaultInstance.where(Stamp.class).equalTo("exerciseGuid", exerciseGuid).findAll();
if (stamps.size() > 0) {
defaultInstance.beginTransaction();
for (int i = 0; i < stamps.size(); i++) {
Stamp stamp = stamps.get(i);
stamp.setSynced(false);
stamp.setName(newName);
}
defaultInstance.commitTransaction();
}
Not really a Realm user, but it looks like batch updates aren't implemented yet in realm-java, and your way of doing mass updates is, for now, the only supported way.
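For what it's worth, the per-object loop itself is straightforward. Here is a plain-Java sketch with a stand-in Stamp POJO (not a Realm object) just to illustrate the update pattern the transaction wraps:

```java
import java.util.ArrayList;
import java.util.List;

class UpdateSketch {
    static class Stamp {
        String name;
        boolean synced = true;
    }

    // Mirrors the transaction body: mark every matched stamp dirty and rename it.
    static void renameAll(List<Stamp> stamps, String newName) {
        for (Stamp s : stamps) {
            s.synced = false;
            s.name = newName;
        }
    }

    public static void main(String[] args) {
        List<Stamp> stamps = new ArrayList<>();
        for (int i = 0; i < 3; i++) stamps.add(new Stamp());
        renameAll(stamps, "pushups");
        System.out.println(stamps.get(0).name + " " + stamps.get(0).synced); // pushups false
    }
}
```

In Realm the same loop must sit between beginTransaction() and commitTransaction() (as in the question) so that all the field writes land atomically.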