PhoneGap SQLite Transaction Crash on Android

I need to insert a lot of data into a SQLite database, and I use transactions for it. Unfortunately, there is no way I can fill the tables without these transactions. My code works like this:
database.transaction(function(transaction) {
    for (var i = 0; i < 300; i++) {
        var sql = "INSERT INTO test (fielda, fieldb, fieldc) VALUES (111, 1, 23.53)";
        transaction.executeSql(sql, function(result) {
        });
    }
}, function() {
    database.transaction(function(transaction) {
        for (var i = 0; i < 300; i++) {
            var sql = "INSERT INTO test (fielda, fieldb, fieldc) VALUES (111, 1, 23.53)";
            transaction.executeSql(sql, function(result) {
            });
        }
    }, function() {
        // ...and so on...
        alert("done");
    }, function(error) {
        alert("error");
    });
}, function(error) {
    alert("error");
});
It works fast and without problems on my iPad 2 but crashes just as fast on my Galaxy Tab 3. The number of successive transactions doesn't seem to matter, only the total number of inserts they perform altogether. It always crashes around the 900th entry: in the 4th transaction with loops of i < 300, but only after 10 transactions with loops of i < 100.
I even tried running 3 transactions with one button click, waiting for over ten minutes, and then starting the 4th transaction by clicking another button. It still crashes.
Other strange things are happening as well:
When I call "DELETE FROM test" at some point before running the transactions, the app throws the error message "Cannot perform this operation because there is no current transaction." during the transactions, just before it crashes. But only then.
I already played around with every PRAGMA there is, like page_size, journal_mode, etc. It doesn't change a thing.

Related

Improve speed of Android app

In my Android program, I have 3 functions that are repeated continuously until the user exits the app. Currently, it is able to update a value (position) every second.
I was wondering whether having lots of if-else conditions in one of the functions will slow down the performance of my app, i.e. whether it might take much longer than a second for the updates to kick in.
Note that 2 of the if-else blocks will execute only once, and all of them contain very little code. Below is the code:
public void positionUpdated(Coordinate userPosition, int accuracy) {
    // GETS EXECUTED ONCE //
    if (nameList.size() == 0) { nameList = indoorsFragment.getZones(); }
    if (end == null) {
        for (int i = 0; i < nameList.size(); i++)
            if (nameList.get(i).equals(name)) end = nameList.get(i).getZonePoints().get(0);
    }
    // GETS EXECUTED ONCE //
    if (Math.abs(userPosition.x - end.x) > 500) {
        Toast.makeText(this, "WALK " + Math.abs(end.x - userPosition.x) / 1000 + " Meters", Toast.LENGTH_SHORT).show();
        return;
    }
    if (turn) {
        if ((userPosition.y - end.y) < 0) { Toast.makeText(this, "STOP AND TURN RIGHT", Toast.LENGTH_LONG).show(); turn = false; return; }
        if ((userPosition.y - end.y) > 0) { Toast.makeText(this, "STOP AND TURN LEFT", Toast.LENGTH_LONG).show(); turn = false; return; }
    }
    if (Math.abs(userPosition.y - end.y) < 500) Toast.makeText(this, "WALK", Toast.LENGTH_LONG).show();
}
Hope to receive some feedback regarding this, thanks.
Haziq
The number of statements doesn't really matter; each one executes in a matter of nanoseconds. What matters is whether you have many nested loops or deep recursion.
If you don't want much work to be done on the main (UI) thread, use an AsyncTask or a background Runnable, then update the UI back on the main thread.
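For illustration, a minimal sketch with an AsyncTask (computeDirections() and MainActivity are invented names):
// Sketch: do the heavy computation off the main thread, then show the
// result on the main thread. computeDirections() is a hypothetical method.
new AsyncTask<Void, Void, String>() {
    @Override
    protected String doInBackground(Void... params) {
        return computeDirections(); // runs on a background thread
    }
    @Override
    protected void onPostExecute(String message) {
        // onPostExecute runs on the main thread, so UI calls are safe here
        Toast.makeText(MainActivity.this, message, Toast.LENGTH_SHORT).show();
    }
}.execute();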

Count query with a large number of objects

I am using parse.com to populate a ListView in Android. Each item in the list view has a TextView that shows the like count and another one that shows the comment count.
Now, according to parse.com:
"For classes with over 1000 objects, count operations are limited by timeouts. They may routinely yield timeout errors or return results that are only approximately correct. Thus, it is preferable to architect your application to avoid this sort of count operation."
What would be the recommended/ideal way of going about it then?
What I did is I created a column called commentCount, and a column called likeCount. Then in afterSave, I modified the appropriate cell.
Parse.Cloud.afterSave("Activity", function(request) {
    Parse.Cloud.useMasterKey(); // bypasses ACL requirements
    // After commenting, increment commentCount
    if (request.object.get("type") == "comment") {
        var query = new Parse.Query("Posts");
        query.get(request.object.get("post").id, {
            success: function(post) {
                post.increment("commentCount", 1);
                post.save();
            },
            error: function(error) {
                console.error("Got an error " + error.code + " : " + error.message);
            }
        });
    }
});

ORMLite's createOrUpdate seems slow - what is normal speed?

Calling the ORMLite RuntimeExceptionDao's createOrUpdate(...) method in my app is very slow.
I have a very simple object (Item) with 2 ints (one is the generated id), a String and a double. I measure roughly how long it takes to update the object in the database 100 times with the code below. The log statement prints:
time to update 1 row 100 times: 3069
Why does it take 3 seconds to update an object 100 times, in a table with only 1 row? Is this the normal ORMLite speed? If not, what might be the problem?
RuntimeExceptionDao<Item, Integer> dao =
        DatabaseManager.getInstance().getHelper().getReadingStateDao();
Item item = new Item();
long start = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
    item.setViewMode(i);
    dao.createOrUpdate(item);
}
long update = System.currentTimeMillis();
Log.v(TAG, "time to update 1 row 100 times: " + (update - start));
If I create 100 new rows then the speed is even slower.
Note: I am already using ormlite_config.txt. It logs "Loaded configuration for class ...Item" so this is not the problem.
Thanks.
This may be the "expected" speed, unfortunately. Make sure you are using ORMLite version 4.39 or higher: before that, createOrUpdate(...) used a more expensive method to test for the existence of the object in the database beforehand. But I suspect this will be a minimal speed improvement.
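If you already know whether the row exists, a small sketch of the cheaper path using the plain Dao calls: create(...) once, then update(...), which skips the existence check entirely:
// Sketch: when the row is known to exist, update(...) avoids the extra
// work createOrUpdate(...) does to check for existence first.
dao.create(item); // insert the single row once
for (int i = 0; i < 100; i++) {
    item.setViewMode(i);
    dao.update(item); // one UPDATE per call, no existence check
}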
If I create 100 new rows then the speed is even slower.
By default SQLite is in auto-commit mode. One thing to try is to wrap your inserts (or your createOrUpdate calls) in the ORMLite Dao.callBatchTasks(...) method.
In my BulkInsertsTest Android unit test, the following doInserts(...) method inserts 1000 items. When I just call it:
doInserts(dao);
It takes 7.3 seconds in my emulator. If I call it using the callBatchTasks(...) method, which wraps a transaction around the call in Android SQLite:
dao.callBatchTasks(new Callable<Void>() {
    public Void call() throws Exception {
        doInserts(dao);
        return null;
    }
});
It takes 1.6 seconds. The same performance can be had by using the dao.setSavePoint(...) method. This starts a transaction but is not as good as the callBatchTasks(...) method because you have to make sure you end your own transaction:
DatabaseConnection conn = dao.startThreadConnection();
Savepoint savePoint = null;
try {
    savePoint = conn.setSavePoint(null);
    doInserts(dao);
} finally {
    // commit at the end
    conn.commit(savePoint);
    dao.endThreadConnection(conn);
}
This also takes ~1.7 seconds.

For loop in Android stopping short and restarting

The loop below seems to stop short and then restart. It is not inside another loop. The first Log call prints 36, so the outer for loop should run 36 times. The Log call inside the loop, which is meant to print the number of times the loop has run, prints "0" up to "4", meaning the loop only ran 5 times. Would there be any reason for this process to start over, so that the first Log call fires again and the loop again runs through only 5 times? This occurs twice according to my Logcat output.
ArrayList<RunData_L> rdUP = t.getMyPositiveRunData();
ArrayList<RunData_L> rdDOWN = t.getMyNegativeRunData();
Log.d("rdUP size", rdUP.get(0).getMyMeasurementData().size() + "");
for (int i = 0; i < rdUP.get(i).getMyMeasurementData().size(); i++) {
    Log.d("i", i + "");
    ArrayList<BigDecimal> tempUP = new ArrayList<BigDecimal>(), tempDOWN = new ArrayList<BigDecimal>();
    for (int j = 0; j < rdUP.size(); j++) {
        tempUP.add(rdUP.get(j).getMyMeasurementData().get(i));
        tempDOWN.add(rdDOWN.get(j).getMyMeasurementData().get(i));
    }
    pdUP.add(tempUP);
    pdDOWN.add(tempDOWN);
}
I suspect, as per my comment, that the use of rdUP.get(j) in the inner loop is causing the problem.
You first test the size of rdUP.get(i).getMyMeasurementData() in the outer loop as your bounding condition, so i runs from 0 to the size of rdUP.get(i).getMyMeasurementData(). Note that this bound is re-evaluated against a different element of rdUP on each iteration.
Your inner loop then says: go through each element of rdUP and get the i-th value from rdUP.get(j).getMyMeasurementData(). How do you know that the j-th rdUP has enough elements to satisfy your get(i)?
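A safer version of the loops might look like this (a sketch, assuming the first element's measurement list defines the intended number of iterations):
// Sketch: hoist the bound out of the loop condition and guard the
// inner accesses so shorter lists are skipped instead of crashing.
int measurementCount = rdUP.get(0).getMyMeasurementData().size();
for (int i = 0; i < measurementCount; i++) {
    ArrayList<BigDecimal> tempUP = new ArrayList<BigDecimal>();
    ArrayList<BigDecimal> tempDOWN = new ArrayList<BigDecimal>();
    for (int j = 0; j < rdUP.size(); j++) {
        if (i < rdUP.get(j).getMyMeasurementData().size()
                && i < rdDOWN.get(j).getMyMeasurementData().size()) {
            tempUP.add(rdUP.get(j).getMyMeasurementData().get(i));
            tempDOWN.add(rdDOWN.get(j).getMyMeasurementData().get(i));
        }
    }
    pdUP.add(tempUP);
    pdDOWN.add(tempDOWN);
}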

Insertion of thousands of contact entries using applyBatch is slow

I'm developing an application where I need to insert lots of Contact entries. At the current time that is approx. 600 contacts with a total of 6000 phone numbers. The biggest contact has 1800 phone numbers.
Status as of today is that I have created a custom Account to hold the Contacts, so the user can choose to see the contacts in the Contacts view.
But the insertion of the contacts is painfully slow. I insert the contacts using ContentResolver.applyBatch. I've tried different sizes of the ContentProviderOperation list (100, 200, 400), but the total running time is approx. the same. To insert all the contacts and numbers takes about 30 minutes!
Most issues I've found regarding slow insertion in SQLite bring up transactions. But since I use the ContentResolver.applyBatch method I don't control this, and I would assume that the ContentResolver takes care of transaction management for me.
So, to my question: Am I doing something wrong, or is there anything I can do to speed this up?
Anders
Edit:
#jcwenger:
Oh, I see. Good explanation!
So then I will have to first insert into the raw_contacts table, and then the data table with the names and numbers. What I'll lose is the back reference to the raw_id which I use in the applyBatch.
So I'll have to get all the ids of the newly inserted raw_contacts rows to use as foreign keys in the data table?
Use ContentResolver.bulkInsert(Uri url, ContentValues[] values) instead of applyBatch().
applyBatch() (1) uses transactions and (2) locks the ContentProvider once for the whole batch instead of locking/unlocking once per operation. Because of this, it is slightly faster than doing the operations one at a time (non-batched).
However, since each Operation in the batch can have a different URI and so on, there's a huge amount of overhead. "Oh, a new operation! I wonder what table it goes in... Here, I'll insert a single row... Oh, a new operation! I wonder what table it goes in..." ad infinitum. Since most of the work of turning URIs into tables involves lots of string comparisons, it's obviously very slow.
By contrast, bulkInsert applies a whole pile of values to the same table. It goes, "Bulk insert... find the table, okay, insert! insert! insert! insert! insert!" Much faster.
It will, of course, require your ContentProvider to implement bulkInsert efficiently. Most do, unless you wrote it yourself, in which case it will take a bit of coding.
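As a caller-side sketch (assuming a provider that implements it; the URI, column names, and PhoneEntry type are invented), all the rows go to one table in a single call:
// Sketch: build one ContentValues per row, then hand the whole array to
// the provider in a single bulkInsert call.
ContentValues[] rows = new ContentValues[entries.size()];
for (int i = 0; i < rows.length; i++) {
    PhoneEntry e = entries.get(i); // hypothetical data holder
    ContentValues v = new ContentValues();
    v.put("name", e.name);
    v.put("number", e.number);
    rows[i] = v;
}
int inserted = getContentResolver().bulkInsert(MyProvider.CONTENT_URI, rows);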
bulkInsert: For those interested, here is the code that I was able to experiment with. Pay attention to how we can avoid some allocations for ints/longs/floats :) as this could save more time.
private int doBulkInsertOptimised(Uri uri, ContentValues values[]) {
    long startTime = System.currentTimeMillis();
    long endTime = 0;
    //TimingInfo timingInfo = new TimingInfo(startTime);

    SQLiteDatabase db = mOpenHelper.getWritableDatabase();

    DatabaseUtils.InsertHelper inserter =
            new DatabaseUtils.InsertHelper(db, Tables.GUYS);

    // Get the numeric indexes for each of the columns that we're updating
    final int guyStrColumn = inserter.getColumnIndex(Guys.STRINGCOLUMNTYPE);
    final int guyDoubleColumn = inserter.getColumnIndex(Guys.DOUBLECOLUMNTYPE);
    //...
    final int guyIntColumn = inserter.getColumnIndex(Guys.INTEGERCOLUMUNTYPE);

    db.beginTransaction();
    int numInserted = 0;
    try {
        int len = values.length;
        for (int i = 0; i < len; i++) {
            inserter.prepareForInsert();

            String guyID = (String) (values[i].get(Guys.GUY_ID));
            inserter.bind(guyStrColumn, guyID);

            // convert to double ourselves to save an allocation
            double d = ((Number) (values[i].get(Guys.DOUBLECOLUMNTYPE))).doubleValue();
            inserter.bind(guyDoubleColumn, d);

            // getting the raw Object and converting it to int ourselves saves
            // an allocation (the alternative is ContentValues.getAsInteger, which
            // returns an Integer object)
            int status = ((Number) values[i].get(Guys.INTEGERCOLUMUNTYPE)).intValue();
            inserter.bind(guyIntColumn, status);

            inserter.execute();
        }
        numInserted = len;
        db.setTransactionSuccessful();
    } finally {
        db.endTransaction();
        inserter.close();

        endTime = System.currentTimeMillis();
        if (LOGV) {
            long timeTaken = (endTime - startTime);
            Log.v(TAG, "Time taken to insert " + values.length + " records was " + timeTaken +
                    " milliseconds or " + (timeTaken / 1000) + " seconds");
        }
    }
    getContext().getContentResolver().notifyChange(uri, null);
    return numInserted;
}
An example of how to override bulkInsert() in order to speed up multiple inserts can be found here.
#jcwenger At first, after reading your post, I thought that was the reason bulkInsert is quicker than applyBatch, but after reading the code of the Contacts Provider, I don't think so.
1. You said applyBatch uses transactions. Yes, but bulkInsert also uses transactions. Here is its code:
public int bulkInsert(Uri uri, ContentValues[] values) {
    int numValues = values.length;
    mDb = mOpenHelper.getWritableDatabase();
    mDb.beginTransactionWithListener(this);
    try {
        for (int i = 0; i < numValues; i++) {
            Uri result = insertInTransaction(uri, values[i]);
            if (result != null) {
                mNotifyChange = true;
            }
            mDb.yieldIfContendedSafely();
        }
        mDb.setTransactionSuccessful();
    } finally {
        mDb.endTransaction();
    }
    onEndTransaction();
    return numValues;
}
That is to say, bulkInsert also uses transactions, so I don't think that's the reason.
2. You said bulkInsert applies a whole pile of values to the same table. I'm sorry, but I can't find the related code in the Froyo source. How did you find that? Could you tell me?
The reason, I think, is this:
bulkInsert uses mDb.yieldIfContendedSafely() while applyBatch uses
mDb.yieldIfContendedSafely(SLEEP_AFTER_YIELD_DELAY) /* SLEEP_AFTER_YIELD_DELAY = 4000 */
After reading the code of SQLiteDatabase.java, I found that if you set a time in yieldIfContendedSafely it will sleep, but if you don't set the time it will not. You can refer to the code below, which is a piece of SQLiteDatabase.java:
private boolean yieldIfContendedHelper(boolean checkFullyYielded, long sleepAfterYieldDelay) {
    if (mLock.getQueueLength() == 0) {
        // Reset the lock acquire time since we know that the thread was willing to yield
        // the lock at this time.
        mLockAcquiredWallTime = SystemClock.elapsedRealtime();
        mLockAcquiredThreadTime = Debug.threadCpuTimeNanos();
        return false;
    }
    setTransactionSuccessful();
    SQLiteTransactionListener transactionListener = mTransactionListener;
    endTransaction();
    if (checkFullyYielded) {
        if (this.isDbLockedByCurrentThread()) {
            throw new IllegalStateException(
                    "Db locked more than once. yielfIfContended cannot yield");
        }
    }
    if (sleepAfterYieldDelay > 0) {
        // Sleep for up to sleepAfterYieldDelay milliseconds, waking up periodically to
        // check if anyone is using the database. If the database is not contended,
        // retake the lock and return.
        long remainingDelay = sleepAfterYieldDelay;
        while (remainingDelay > 0) {
            try {
                Thread.sleep(remainingDelay < SLEEP_AFTER_YIELD_QUANTUM ?
                        remainingDelay : SLEEP_AFTER_YIELD_QUANTUM);
            } catch (InterruptedException e) {
                Thread.interrupted();
            }
            remainingDelay -= SLEEP_AFTER_YIELD_QUANTUM;
            if (mLock.getQueueLength() == 0) {
                break;
            }
        }
    }
    beginTransactionWithListener(transactionListener);
    return true;
}
I think that's the reason bulkInsert is quicker than applyBatch. If you have any questions, please contact me.
Here is a basic solution for you:
use "yield points" in the batch operation.
The flip side of using batched operations is that a large batch may lock up the database for a long time preventing other applications from accessing data and potentially causing ANRs ("Application Not Responding" dialogs.)
To avoid such lockups of the database, make sure to insert "yield points" in the batch. A yield point indicates to the content provider that before executing the next operation it can commit the changes that have already been made, yield to other requests, open another transaction and continue processing operations.
A yield point will not automatically commit the transaction, but only if there is another request waiting on the database. Normally a sync adapter should insert a yield point at the beginning of each raw contact operation sequence in the batch. See withYieldAllowed(boolean).
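A minimal sketch of what that can look like (the loop and count are illustrative, not from the original answer):
// Sketch: mark the first operation of each raw-contact sequence as a
// yield point so the provider may commit and yield between contacts.
// contactCount is a hypothetical variable.
ArrayList<ContentProviderOperation> ops = new ArrayList<>();
for (int c = 0; c < contactCount; c++) {
    ops.add(ContentProviderOperation.newInsert(ContactsContract.RawContacts.CONTENT_URI)
            .withYieldAllowed(true) // yield point at the start of each sequence
            .withValue(ContactsContract.RawContacts.ACCOUNT_TYPE, null)
            .withValue(ContactsContract.RawContacts.ACCOUNT_NAME, null)
            .build());
    // ... this contact's name and phone rows follow without yield points ...
}
getContext().getContentResolver().applyBatch(ContactsContract.AUTHORITY, ops);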
I hope it may be useful for you.
Here is an example of inserting the same amount of data within 30 seconds.
public void testBatchInsertion() throws RemoteException, OperationApplicationException {
    final SimpleDateFormat FORMATTER = new SimpleDateFormat("mm:ss.SSS");
    long startTime = System.currentTimeMillis();
    Log.d("BatchInsertionTest", "Starting batch insertion on: " + new Date(startTime));

    final int MAX_OPERATIONS_FOR_INSERTION = 200;
    ArrayList<ContentProviderOperation> ops = new ArrayList<>();
    for (int i = 0; i < 600; i++) {
        generateSampleProviderOperation(ops);
        if (ops.size() >= MAX_OPERATIONS_FOR_INSERTION) {
            getContext().getContentResolver().applyBatch(ContactsContract.AUTHORITY, ops);
            ops.clear();
        }
    }
    if (ops.size() > 0)
        getContext().getContentResolver().applyBatch(ContactsContract.AUTHORITY, ops);
    Log.d("BatchInsertionTest", "End of batch insertion, elapsed: " + FORMATTER.format(new Date(System.currentTimeMillis() - startTime)));
}
private void generateSampleProviderOperation(ArrayList<ContentProviderOperation> ops) {
    int backReference = ops.size();
    ops.add(ContentProviderOperation.newInsert(ContactsContract.RawContacts.CONTENT_URI)
            .withValue(ContactsContract.RawContacts.ACCOUNT_NAME, null)
            .withValue(ContactsContract.RawContacts.ACCOUNT_TYPE, null)
            .withValue(ContactsContract.RawContacts.AGGREGATION_MODE, ContactsContract.RawContacts.AGGREGATION_MODE_DISABLED)
            .build()
    );
    ops.add(ContentProviderOperation.newInsert(ContactsContract.Data.CONTENT_URI)
            .withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, backReference)
            .withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.StructuredName.CONTENT_ITEM_TYPE)
            .withValue(ContactsContract.CommonDataKinds.StructuredName.GIVEN_NAME, "GIVEN_NAME " + (backReference + 1))
            .withValue(ContactsContract.CommonDataKinds.StructuredName.FAMILY_NAME, "FAMILY_NAME")
            .build()
    );
    for (int i = 0; i < 10; i++)
        ops.add(ContentProviderOperation.newInsert(ContactsContract.Data.CONTENT_URI)
                .withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, backReference)
                .withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.Phone.CONTENT_ITEM_TYPE)
                .withValue(ContactsContract.CommonDataKinds.Phone.TYPE, ContactsContract.CommonDataKinds.Phone.TYPE_MAIN)
                .withValue(ContactsContract.CommonDataKinds.Phone.NUMBER, Integer.toString((backReference + 1) * 10 + i))
                .build()
        );
}
The log:
02-17 12:48:45.496 2073-2090/com.vayosoft.mlab D/BatchInsertionTest﹕ Starting batch insertion on: Wed Feb 17 12:48:45 GMT+02:00 2016
02-17 12:49:16.446 2073-2090/com.vayosoft.mlab D/BatchInsertionTest﹕ End of batch insertion, elapsed: 00:30.951
Just for the information of the readers of this thread:
I was facing a performance issue even when using applyBatch().
In my case there were database triggers written on one of the tables.
I deleted the triggers of the table and boom!
Now my app inserts rows blazingly fast.
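If your schema allows it, here is a sketch of that idea (the trigger name, the recreate statement, and insertAllRows are invented):
// Sketch: temporarily drop a trigger around a bulk insert, then put it
// back. "items_audit" and CREATE_AUDIT_TRIGGER_SQL are hypothetical.
db.execSQL("DROP TRIGGER IF EXISTS items_audit");
try {
    insertAllRows(db); // your batched inserts
} finally {
    db.execSQL(CREATE_AUDIT_TRIGGER_SQL); // recreate the trigger afterwards
}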
