ORMLite's createOrUpdate seems slow - what is normal speed? - android

Calling the ORMLite RuntimeExceptionDao's createOrUpdate(...) method in my app is very slow.
I have a very simple object (Item) with 2 ints (one is the generatedId), a String and a double. I test the time it takes (roughly) to update the object in the database (100 times) with the code below. The log statement logs:
time to update 1 row 100 times: 3069
Why does it take 3 seconds to update an object 100 times, in a table with only 1 row? Is this the normal ORMLite speed? If not, what might be the problem?
RuntimeExceptionDao<Item, Integer> dao =
        DatabaseManager.getInstance().getHelper().getReadingStateDao();
Item item = new Item();
long start = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
    item.setViewMode(i);
    dao.createOrUpdate(item);
}
long update = System.currentTimeMillis();
Log.v(TAG, "time to update 1 row 100 times: " + (update - start));
If I create 100 new rows then the speed is even slower.
Note: I am already using ormlite_config.txt. It logs "Loaded configuration for class ...Item" so this is not the problem.
Thanks.

This may be the "expected" speed unfortunately. Make sure you are using ORMLite version 4.39 or higher. createOrUpdate(...) was using a more expensive method to test for the existence of the object in the database beforehand. But I suspect this is going to be a minimal speed improvement.
If I create 100 new rows then the speed is even slower.
By default SQLite is in auto-commit mode. One thing to try is to wrap your inserts (or your createOrUpdates) using the ORMLite Dao.callBatchTasks(...) method.
In my BulkInsertsTest Android unit test, the following doInserts(...) method inserts 1000 items. When I just call it:
doInserts(dao);
It takes 7.3 seconds in my emulator. If I call using the callBatchTasks(...) method, which wraps a transaction around the call in Android SQLite:
dao.callBatchTasks(new Callable<Void>() {
    public Void call() throws Exception {
        doInserts(dao);
        return null;
    }
});
It takes 1.6 seconds. The same performance can be had by using the connection's setSavePoint(...) method. This starts a transaction but is not as good as the callBatchTasks(...) method because you have to make sure you close your own transaction:
DatabaseConnection conn = dao.startThreadConnection();
Savepoint savePoint = null;
try {
    savePoint = conn.setSavePoint(null);
    doInserts(dao);
} finally {
    // commit at the end
    conn.commit(savePoint);
    dao.endThreadConnection(conn);
}
This also takes ~1.7 seconds.

Related

Fragment crash: IndexOutOfBoundsException: Invalid index x, size is x after updating my database [duplicate]

When I do
ArrayList<Integer> arr = new ArrayList<Integer>(10);
arr.set(0, 1);
Java gives me
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(Unknown Source)
at java.util.ArrayList.set(Unknown Source)
at HelloWorld.main(HelloWorld.java:13)
Is there an easy way I can pre-reserve the size of ArrayList and then use the indices immediately, just like arrays?
How about this:
ArrayList<Integer> arr = new ArrayList<Integer>(Collections.nCopies(10, 0));
This will initialize arr with 10 zeros. Then you can feel free to use the indexes immediately.
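For example, a minimal runnable sketch of this approach (the class name is mine):

import java.util.ArrayList;
import java.util.Collections;

public class PresizedListDemo {
    public static void main(String[] args) {
        // Start with 10 real elements (all zero), so indexes 0..9 are valid.
        ArrayList<Integer> arr = new ArrayList<Integer>(Collections.nCopies(10, 0));
        arr.set(0, 1); // no IndexOutOfBoundsException this time
        System.out.println(arr); // [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    }
}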
Here's the source from ArrayList:
The constructor:
public ArrayList(int initialCapacity)
{
    super();
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal Capacity: " + initialCapacity);
    this.elementData = new Object[initialCapacity];
}
You called set(int, E):
public E set(int index, E element)
{
    rangeCheck(index);
    E oldValue = elementData(index);
    elementData[index] = element;
    return oldValue;
}
Set calls rangeCheck(int):
private void rangeCheck(int index)
{
    if (index >= size) {
        throw new IndexOutOfBoundsException(outOfBoundsMsg(index));
    }
}
It may be subtle, but when you called the constructor, despite initializing an Object[], you did not initialize size. Hence, from rangeCheck, you get the IndexOutOfBoundsException, since size is 0. Instead of using set(int, E), you can use add(E e) (adds e of type E to the end of the list, in your case: add(1)) and this won't occur. Or, if it suits you, you could initialize all elements to 0 as suggested in another answer.
I believe the issue here is that although you have specified the capacity to allocate for the list, you have not actually created any entries.
What does arr.size() return?
I think you need to use the add(T) method instead.
Programming aside, what you are trying to do here is illogical.
Imagine an empty egg carton with space for ten eggs. That is more or less what you have created. Then you tell a super-precise-and-annoying-which-does-exactly-what-you-tell-him robot to replace the 0th egg with another egg. The robot reports an error. Why? He can't replace the 0th egg, because there is no egg there! There is a space reserved for 10 eggs, but there are really no eggs inside!
You could use arr.add(1), which will add 1 in the first empty cell, i.e. the 0-indexed one.
Or you could create your own list:
public static class PresetArrayList<E> extends ArrayList<E> {
    private static final long serialVersionUID = 1L;

    public PresetArrayList(int initialCapacity) {
        super(initialCapacity);
        addAll(Collections.nCopies(initialCapacity, (E) null));
    }
}
Then:
List<Integer> list = new PresetArrayList<Integer>(5);
list.set(3, 1);
System.out.println(list);
Prints:
[null, null, null, 1, null]
This is not a Java-specific answer but a data-structure answer.
You are confusing the Capacity concept with the Count (or Size) one.
Capacity is when you tell the list to reserve/preallocate a number of slots in advance (in this ArrayList case, you are telling it to create an array of 10 positions) in its internal storage. When this happens, the list still does not have any items.
Size (or Count) is the quantity of items the list really has. In your code, you didn't actually add any item, so the IndexOutOfBoundsException is deserved.
While you can't do what you want with ArrayList, there is another option: Arrays.asList().
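For example, a small sketch (note the resulting list is fixed-size):

import java.util.Arrays;
import java.util.List;

// Arrays.asList wraps a fixed-size array: set(...) works at any index,
// but add(...) and remove(...) throw UnsupportedOperationException.
List<Integer> arr = Arrays.asList(new Integer[10]); // 10 null slots
arr.set(0, 1);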
Capacity is used to prepare ArrayLists for expansion. Take the loop
List<Integer> list = new ArrayList<>();
for (int i = 0; i < 1024; ++i) {
    list.add(i);
}
list starts off with a capacity of 10. Therefore it holds a new Integer[10] inside. As the loop adds to the list, the integers are added to that array. When the array is filled and another number is added, a new array is allocated twice the size of the old one, and the old values are copied to the new ones. Adding an item is O(1) at best, and O(N) at worst. But adding N items will take about 2*1024 individual assignments: amortized linear time.
Capacity isn't size. If you haven't added to the array list yet, the size will be zero, and attempting to write into the 3rd element will fail.
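A quick way to see the difference:

ArrayList<Integer> list = new ArrayList<>(1024); // capacity hint only
System.out.println(list.size()); // prints 0 -- the list is still empty
// list.set(0, 42);  // would throw IndexOutOfBoundsException
list.add(42);        // size becomes 1, so get(0) and set(0, ...) are now legal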

is Calling new Runnable, or Calling new Class Safe For Memory Management?

Hello guys, hope you're having a nice day. I have an interface that works in the background; in one of its methods, it reads a line of string and calls a method of the implementing class, like this:
while ((receivedString = bufferedReader.readLine()) != null &&
        activity != null) {
    activity.OnFileRowRead(receivedString, stmt, count);
    count++;
}
It is called something like 1 million times. The method currently returns a boolean so that it is thread safe, but it is obviously slower this way. The method is this:
if (row != null && row.contains("/.~n/")) {
    String[] splitted = row.split("\\/.~n/");
    for (String str : splitted) {
        String[] spl = str.split("\\/.nn/");
        if (spl.length == 8) {
            int version = Integer.valueOf(NotNull(spl[7]));
            stmt.bindLong(1, Integer.valueOf(NotNull(spl[0])));
            stmt.bindLong(2, version);
            stmt.bindString(3, NotNull(spl[3]));
            stmt.bindDouble(4, Double.valueOf(NotNull(spl[1])));
            stmt.bindDouble(5, Double.valueOf(NotNull(spl[2])));
            stmt.bindString(6, NotNull(spl[4]));
            stmt.bindString(7, NotNull(spl[5]));
            stmt.bindString(8, NotNull(spl[6]));
            stmt.bindString(9, version == 0 ? MapActivity_Database.STATUS_UNAVAILABLE : MapActivity_Database.STATUS_AVAILABLE);
            stmt.execute();
            str = null;
            stmt.clearBindings();
            count++;
        }
    }
    splitted = null;
}
return true;
Now I am thinking that the method on the class that implements this creates a new class (or runs a new Runnable) and returns the boolean value. The method is fast, as you can see, but 1 million calls is still a lot. Is it safe for the memory and threads to do so? I mean, will the GC be able to recycle all of these?
Do you want to run the method 1 million times sequentially or concurrently? If it's concurrent, the memory needed is of course up to 1 million times higher. But after a thread ends, its memory should be freed, as far as I can tell from your snippets (otherwise you have to free it manually when the thread ends). So I think it's "memory safe".
Whether it's thread safe is hard to say with this little information: are there any objects/resources which are used from multiple threads? Are those objects thread safe? If not, you have to implement that manually (keyword: mutex).
Last but not least: are you sure this is the only way to solve your problem? Calling one method a million times seems like bad practice; maybe there are some callbacks you could use, as in the sketch below?
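For instance, instead of one callback per line, the reading loop could hand over whole chunks. A rough sketch (the batch method onFileRowsRead and the chunk size of 1000 are invented for illustration):

List<String> chunk = new ArrayList<>(1000);
String receivedString;
while ((receivedString = bufferedReader.readLine()) != null && activity != null) {
    chunk.add(receivedString);
    if (chunk.size() == 1000) {
        activity.onFileRowsRead(chunk, stmt); // hypothetical batch variant of OnFileRowRead
        chunk.clear();
    }
}
if (!chunk.isEmpty() && activity != null) {
    activity.onFileRowsRead(chunk, stmt); // flush the remainder
}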

Using limit () and offset () in QueryBuilder (ANDROID , ORMLITE)

@SuppressWarnings("deprecation")
public List<Picture> returnLimitedList(int offset, int end) {
    List<Picture> pictureList = new ArrayList<Picture>();
    int startRow = offset;
    int maxRows = end;
    try {
        QueryBuilder<Picture, Integer> queryBuilder = dao.queryBuilder();
        queryBuilder.offset(startRow).limit(maxRows);
        pictureList = dao.query(queryBuilder.prepare());
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return pictureList;
}
I have a table of Pictures in the database, and must return a limited list, 20 rows at a time.
But when I use, e.g., queryBuilder.offset(11).limit(30);
I cannot get the list limited to 20 rows.
The list comes back with only the limit applied; it's as if the offset always stays at 0, e.g. (0 - 30).
Is there any other way to return a limited list by initial index and end index?
Could anyone help me?
This question was asked two months ago, but I'll answer in case anyone stumbles on the same problem as I did.
There's a misunderstanding about what offset means in this case. Here is what the SQLite documentation says about it:
If an expression has an OFFSET clause, then the first M rows are omitted from the result set returned by the SELECT statement and the next N rows are returned, where M and N are the values that the OFFSET and LIMIT clauses evaluate to, respectively.
Source
Based on your query, you'll return 30 rows starting at row #11.
So the correct way is:
queryBuilder.offset(startRow).limit(20);
With limit being the number of rows that will be returned, not the ending row.
pictureList = dao.query(queryBuilder.prepare());
And the returned List will have its first value at pictureList.get(0).
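So, to page through the table 20 rows at a time, a sketch like this should work (assuming the same dao; recent ORMLite versions take Long arguments here):

// Page n (0-based): skip n * 20 rows, then return the next 20.
public List<Picture> getPage(int pageNumber) throws SQLException {
    QueryBuilder<Picture, Integer> queryBuilder = dao.queryBuilder();
    queryBuilder.offset((long) pageNumber * 20).limit(20L);
    return dao.query(queryBuilder.prepare());
}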
Edit: thanks to @Gray's help in the comments.

How to update Android textviews efficiently?

I am working on an Android app which encounters performance issues.
My goal is to receive strings from an AsyncTask and display them in a TextView. The TextView is initially empty, and each time the background task sends a string, I concatenate it to the current content of the TextView.
I currently use a StringBuilder to store the main string and each time I receive a new string, I append it to the StringBuilder and call
myTextView.setText(myStringBuilder.toString())
The problem is that the background process can send up to 100 strings per second, and my method is not efficient enough.
Redrawing the whole TextView every time is obviously a bad idea (time complexity O(N²)), but I'm not seeing another solution...
Do you know of an alternative to TextView which could do these concatenations in O(N) ?
As long as there is a newline between the strings, you could use a ListView to append the strings and hold the strings themselves in an ArrayList or LinkedList to which you append as the AsyncTask receives the strings.
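A rough sketch of that approach (assuming a ListView named listView, and that strings are delivered on the UI thread, e.g. in onProgressUpdate()):

ArrayAdapter<String> adapter = new ArrayAdapter<>(
        this, android.R.layout.simple_list_item_1, new ArrayList<String>());
listView.setAdapter(adapter);

// Later, for each received string (on the UI thread):
adapter.add(receivedString); // appends and refreshes the ListView itself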
You might also consider simply invalidating the TextView less frequently; say, 10 times a second. This would certainly improve responsiveness. Something like the following could work:
static long lastTimeUpdated = 0;

if (receivedString.length() > 0) {
    myStringBuilder.append(receivedString);
}
if ((System.currentTimeMillis() - lastTimeUpdated) > 100) {
    // setText accepts any CharSequence, so the builder can be passed directly
    myTextView.setText(myStringBuilder);
    lastTimeUpdated = System.currentTimeMillis();
}
If the strings come in bursts -- such that you have a delay between bursts greater than, say, a second -- then reset a timer every update that will trigger this code to run again to pick up the trailing portion of the last burst.
I finally found an answer with the help of havexz and Greyson here, and some code here.
As the strings were coming in bursts, I chose to update the UI every 100ms.
For the record, here's what my code looks like:
private static boolean output_upToDate = true;
/* Handles the refresh */
private Handler outputUpdater = new Handler();
/* Adjust this value for your purpose */
public static final long REFRESH_INTERVAL = 100; // in milliseconds
/* This object is used as a lock to avoid data loss in the last refresh */
private static final Object lock = new Object();

private Runnable outputUpdaterTask = new Runnable() {
    public void run() {
        // takes the lock
        synchronized (lock) {
            if (!output_upToDate) {
                // updates the outview
                outView.setText(new_text);
                // notifies that the output is up-to-date
                output_upToDate = true;
            }
        }
        outputUpdater.postDelayed(this, REFRESH_INTERVAL);
    }
};
and I put this in my onCreate() method:
outputUpdater.post(outputUpdaterTask);
Some explanations: when my app calls its onCreate() method, my outputUpdater Handler receives one request to refresh. But this task (outputUpdaterTask) posts itself a refresh request 100 ms later. The lock is shared with the process which sends the new strings and sets output_upToDate to false.
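For completeness, the producer side then looks something like this sketch (same lock, new_text and output_upToDate as above; the receiver method name is mine, and it is assumed to be called off the UI thread):

private void onStringReceived(String s) {
    synchronized (lock) {
        myStringBuilder.append(s);
        new_text = myStringBuilder.toString();
        output_upToDate = false; // the Handler task will refresh within 100 ms
    }
}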
Try throttling the updates. So instead of updating 100 times per second (the rate of generation), keep the 100 strings in a StringBuilder and update once per second.
The code should look like:
StringBuilder completeStr = new StringBuilder();
StringBuilder new100Str = new StringBuilder();
int counter = 0;

if (counter < 100) {
    new100Str.append(newString);
    counter++;
} else {
    counter = 0;
    completeStr.append(new100Str);
    new100Str = new StringBuilder();
    myTextView.setText(completeStr.toString());
}
NOTE: Code above is just for illustration so you might have to alter it as per your needs.

Insertion of thousands of contact entries using applyBatch is slow

I'm developing an application where I need to insert lots of Contact entries. At the current time approx 600 contacts with a total of 6000 phone numbers. The biggest contact has 1800 phone numbers.
Status as of today is that I have created a custom Account to hold the Contacts, so the user can select to see the contact in the Contacts view.
But the insertion of the contacts is painfully slow. I insert the contacts using ContentResolver.applyBatch. I've tried different sizes of the ContentProviderOperation list (100, 200, 400), but the total running time is approximately the same. To insert all the contacts and numbers takes about 30 minutes!
Most issues I've found regarding slow insertion in SQlite brings up transactions. But since I use the ContentResolver.applyBatch-method I don't control this, and I would assume that the ContentResolver takes care of transaction management for me.
So, to my question: Am I doing something wrong, or is there anything I can do to speed this up?
Anders
Edit:
@jcwenger:
Oh, I see. Good explanation!
So then I will have to first insert into the raw_contacts table, and then the data table with the name and numbers. What I'll lose is the back reference to the raw_id which I use in the applyBatch.
So I'll have to get all the id's of the newly inserted raw_contacts rows to use as foreign keys in the data table?
Use ContentResolver.bulkInsert(Uri url, ContentValues[] values) instead of applyBatch().
applyBatch (1) uses transactions and (2) it locks the ContentProvider once for the whole batch instead of locking/unlocking once per operation. Because of this, it is slightly faster than doing them one at a time (non-batched).
However, since each Operation in the Batch can have a different URI and so on, there's a huge amount of overhead. "Oh, a new operation! I wonder what table it goes in... Here, I'll insert a single row... Oh, a new operation! I wonder what table it goes in..." ad infinitum. Since most of the work of turning URIs into tables involves lots of string comparisons, it's obviously very slow.
By contrast, bulkInsert applies a whole pile of values to the same table. It goes, "Bulk insert... find the table, okay, insert! insert! insert! insert! insert!" Much faster.
It will, of course, require your ContentProvider to implement bulkInsert efficiently. Most do, unless you wrote it yourself, in which case it will take a bit of coding.
bulkInsert: For those interested, here is the code that I was able to experiment with. Pay attention to how we can avoid some allocations for int/long/floats :) this could save more time.
private int doBulkInsertOptimised(Uri uri, ContentValues values[]) {
    long startTime = System.currentTimeMillis();
    long endTime = 0;
    //TimingInfo timingInfo = new TimingInfo(startTime);

    SQLiteDatabase db = mOpenHelper.getWritableDatabase();

    DatabaseUtils.InsertHelper inserter =
            new DatabaseUtils.InsertHelper(db, Tables.GUYS);

    // Get the numeric indexes for each of the columns that we're updating
    final int guiStrColumn = inserter.getColumnIndex(Guys.STRINGCOLUMNTYPE);
    final int guyDoubleColumn = inserter.getColumnIndex(Guys.DOUBLECOLUMNTYPE);
    //...
    final int guyIntColumn = inserter.getColumnIndex(Guys.INTEGERCOLUMUNTYPE);

    db.beginTransaction();
    int numInserted = 0;
    try {
        int len = values.length;
        for (int i = 0; i < len; i++) {
            inserter.prepareForInsert();

            String guyID = (String) (values[i].get(Guys.GUY_ID));
            inserter.bind(guiStrColumn, guyID);

            // convert to double ourselves to save an allocation.
            double d = ((Number) (values[i].get(Guys.DOUBLECOLUMNTYPE))).doubleValue();
            inserter.bind(guyDoubleColumn, d);

            // getting the raw Object and converting it to int ourselves saves
            // an allocation (the alternative is ContentValues.getAsInteger, which
            // returns an Integer object)
            int status = ((Number) values[i].get(Guys.INTEGERCOLUMUNTYPE)).intValue();
            inserter.bind(guyIntColumn, status);

            inserter.execute();
        }
        numInserted = len;
        db.setTransactionSuccessful();
    } finally {
        db.endTransaction();
        inserter.close();

        endTime = System.currentTimeMillis();

        if (LOGV) {
            long timeTaken = (endTime - startTime);
            Log.v(TAG, "Time taken to insert " + values.length + " records was " + timeTaken +
                    " milliseconds or " + (timeTaken / 1000) + " seconds");
        }
    }
    getContext().getContentResolver().notifyChange(uri, null);
    return numInserted;
}
An example of how to override bulkInsert(), in order to speed up multiple inserts, can be found here.
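The general shape of such an override is (a minimal sketch, assuming a provider backed by a single table TABLE_NAME and an mOpenHelper field as in the snippet above):

@Override
public int bulkInsert(Uri uri, ContentValues[] values) {
    SQLiteDatabase db = mOpenHelper.getWritableDatabase();
    db.beginTransaction();
    try {
        for (ContentValues cv : values) {
            db.insert(TABLE_NAME, null, cv); // one transaction covers the whole batch
        }
        db.setTransactionSuccessful();
    } finally {
        db.endTransaction();
    }
    getContext().getContentResolver().notifyChange(uri, null);
    return values.length;
}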
@jcwenger At first, after reading your post, I thought that was the reason bulkInsert is quicker than applyBatch, but after reading the code of the Contacts Provider, I don't think so.
1. You said applyBatch uses transactions; yes, but bulkInsert also uses a transaction. Here is the code of it:
public int bulkInsert(Uri uri, ContentValues[] values) {
    int numValues = values.length;
    mDb = mOpenHelper.getWritableDatabase();
    mDb.beginTransactionWithListener(this);
    try {
        for (int i = 0; i < numValues; i++) {
            Uri result = insertInTransaction(uri, values[i]);
            if (result != null) {
                mNotifyChange = true;
            }
            mDb.yieldIfContendedSafely();
        }
        mDb.setTransactionSuccessful();
    } finally {
        mDb.endTransaction();
    }
    onEndTransaction();
    return numValues;
}
That is to say, bulkInsert also uses a transaction, so I don't think that's the reason.
2. You said bulkInsert applies a whole pile of values to the same table. I'm sorry, but I can't find the related code in the Froyo source. How did you find that? Could you tell me?
The reason, I think, is this:
bulkInsert uses mDb.yieldIfContendedSafely() while applyBatch uses
mDb.yieldIfContendedSafely(SLEEP_AFTER_YIELD_DELAY) /* SLEEP_AFTER_YIELD_DELAY = 4000 */
After reading the code of SQLiteDatabase.java, I found that if you pass a time to yieldIfContendedSafely it will do a sleep, but if you don't, it will not sleep. You can refer to the code below, which is a piece of SQLiteDatabase.java:
private boolean yieldIfContendedHelper(boolean checkFullyYielded, long sleepAfterYieldDelay) {
    if (mLock.getQueueLength() == 0) {
        // Reset the lock acquire time since we know that the thread was willing to yield
        // the lock at this time.
        mLockAcquiredWallTime = SystemClock.elapsedRealtime();
        mLockAcquiredThreadTime = Debug.threadCpuTimeNanos();
        return false;
    }
    setTransactionSuccessful();
    SQLiteTransactionListener transactionListener = mTransactionListener;
    endTransaction();
    if (checkFullyYielded) {
        if (this.isDbLockedByCurrentThread()) {
            throw new IllegalStateException(
                    "Db locked more than once. yielfIfContended cannot yield");
        }
    }
    if (sleepAfterYieldDelay > 0) {
        // Sleep for up to sleepAfterYieldDelay milliseconds, waking up periodically to
        // check if anyone is using the database. If the database is not contended,
        // retake the lock and return.
        long remainingDelay = sleepAfterYieldDelay;
        while (remainingDelay > 0) {
            try {
                Thread.sleep(remainingDelay < SLEEP_AFTER_YIELD_QUANTUM ?
                        remainingDelay : SLEEP_AFTER_YIELD_QUANTUM);
            } catch (InterruptedException e) {
                Thread.interrupted();
            }
            remainingDelay -= SLEEP_AFTER_YIELD_QUANTUM;
            if (mLock.getQueueLength() == 0) {
                break;
            }
        }
    }
    beginTransactionWithListener(transactionListener);
    return true;
}
I think that's the reason bulkInsert is quicker than applyBatch.
Any questions, please contact me.
I've got a basic solution for you: use "yield points" in the batch operation.
The flip side of using batched operations is that a large batch may lock up the database for a long time preventing other applications from accessing data and potentially causing ANRs ("Application Not Responding" dialogs.)
To avoid such lockups of the database, make sure to insert "yield points" in the batch. A yield point indicates to the content provider that before executing the next operation it can commit the changes that have already been made, yield to other requests, open another transaction and continue processing operations.
A yield point will not automatically commit the transaction, but only if there is another request waiting on the database. Normally a sync adapter should insert a yield point at the beginning of each raw contact operation sequence in the batch. See withYieldAllowed(boolean).
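In code, that means marking the first operation of each contact's sequence; a sketch (accountName and accountType are placeholders):

ops.add(ContentProviderOperation.newInsert(ContactsContract.RawContacts.CONTENT_URI)
        .withYieldAllowed(true) // let the provider commit and yield here if contended
        .withValue(ContactsContract.RawContacts.ACCOUNT_NAME, accountName)
        .withValue(ContactsContract.RawContacts.ACCOUNT_TYPE, accountType)
        .build());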
I hope this may be useful for you.
Here is an example of inserting the same amount of data within 30 seconds.
public void testBatchInsertion() throws RemoteException, OperationApplicationException {
    final SimpleDateFormat FORMATTER = new SimpleDateFormat("mm:ss.SSS");
    long startTime = System.currentTimeMillis();
    Log.d("BatchInsertionTest", "Starting batch insertion on: " + new Date(startTime));

    final int MAX_OPERATIONS_FOR_INSERTION = 200;
    ArrayList<ContentProviderOperation> ops = new ArrayList<>();
    for (int i = 0; i < 600; i++) {
        generateSampleProviderOperation(ops);
        if (ops.size() >= MAX_OPERATIONS_FOR_INSERTION) {
            getContext().getContentResolver().applyBatch(ContactsContract.AUTHORITY, ops);
            ops.clear();
        }
    }
    if (ops.size() > 0)
        getContext().getContentResolver().applyBatch(ContactsContract.AUTHORITY, ops);
    Log.d("BatchInsertionTest", "End of batch insertion, elapsed: " + FORMATTER.format(new Date(System.currentTimeMillis() - startTime)));
}

private void generateSampleProviderOperation(ArrayList<ContentProviderOperation> ops) {
    int backReference = ops.size();
    ops.add(ContentProviderOperation.newInsert(ContactsContract.RawContacts.CONTENT_URI)
            .withValue(ContactsContract.RawContacts.ACCOUNT_NAME, null)
            .withValue(ContactsContract.RawContacts.ACCOUNT_TYPE, null)
            .withValue(ContactsContract.RawContacts.AGGREGATION_MODE, ContactsContract.RawContacts.AGGREGATION_MODE_DISABLED)
            .build()
    );
    ops.add(ContentProviderOperation.newInsert(ContactsContract.Data.CONTENT_URI)
            .withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, backReference)
            .withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.StructuredName.CONTENT_ITEM_TYPE)
            .withValue(ContactsContract.CommonDataKinds.StructuredName.GIVEN_NAME, "GIVEN_NAME " + (backReference + 1))
            .withValue(ContactsContract.CommonDataKinds.StructuredName.FAMILY_NAME, "FAMILY_NAME")
            .build()
    );
    for (int i = 0; i < 10; i++)
        ops.add(ContentProviderOperation.newInsert(ContactsContract.Data.CONTENT_URI)
                .withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, backReference)
                .withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.Phone.CONTENT_ITEM_TYPE)
                .withValue(ContactsContract.CommonDataKinds.Phone.TYPE, ContactsContract.CommonDataKinds.Phone.TYPE_MAIN)
                .withValue(ContactsContract.CommonDataKinds.Phone.NUMBER, Integer.toString((backReference + 1) * 10 + i))
                .build()
        );
}
The log:
02-17 12:48:45.496 2073-2090/com.vayosoft.mlab D/BatchInsertionTest﹕ Starting batch insertion on: Wed Feb 17 12:48:45 GMT+02:00 2016
02-17 12:49:16.446 2073-2090/com.vayosoft.mlab D/BatchInsertionTest﹕ End of batch insertion, elapsed: 00:30.951
Just for the information of the readers of this thread: I was facing a performance issue even when using applyBatch(). In my case there were database triggers on one of the tables. I deleted the triggers of the table and boom, now my app inserts rows blazingly fast.
