I'm new to programming with databases and not an expert in Android programming, so bear with me!
I have a database with two tables (A and B). I get a list of IDs from table A (1 to 100 rows), then query table B for each of those IDs, which gives me a total of 400 to 800 rows from table B.
This approach is not ideal for my app, as it takes 4 to 10+ seconds to process, where I would ideally want less than 1 second.
I'm trying to understand what would be best in a case like this.
Would having fewer rows but more content in each help?
The DB is approximately 15 MB. Would loading it all in the background be better (I guess not, as it would mean 5+ minutes of loading)?
What is most expensive / has the worst performance: queries, cursor iteration, or reading data from a field?
I have no specific indexes in my DB. Would creating some help? If so, how can I do that?
I currently have the code below collecting my data:
long start = System.currentTimeMillis();
while (cursorTableA.moveToNext()) {
    long id = cursorTableA.getLong(0);
    int paragraphNumber = cursorTableA.getInt(1);
    boolean isPoetry = (cursorTableA.getInt(2) != 0);
    Paragraph mParagraph = new Paragraph(id, paragraphNumber, isPoetry);

    // GET WORDS
    String selectionWords = DbContract.WordsEntry.CONNECTED_PARAGRAPH_ID + " = ?";
    String[] selectionWordsArgs = new String[]{ Long.toString(paragraphNumber) };
    String sortOrder = DbContract.WordsEntry.WORD_NUMBER + " ASC";
    Cursor cursorTableB = db.query(
            DbContract.WordsEntry.TABLE_NAME,
            DbContract.WordsEntry.DEFAULT_QUERY_COLUMNS_TO_RETURN,
            selectionWords, selectionWordsArgs, null, null, sortOrder
    );
    while (cursorTableB.moveToNext()) {
        String word = cursorTableB.getString(0);
        String thesaurusRef = cursorTableB.getString(1);
        String note = cursorTableB.getString(2);
        mParagraph.addWord(new Word(word, thesaurusRef, note));
    }
    cursorTableB.close();

    long finish = System.currentTimeMillis();
    timeElapsed = finish - start;
    System.out.println("DB Query => timeElapsed(s): " + (timeElapsed / 1000) + " timeElapsed(ms): " + timeElapsed);
}
I should add that my DB is read-only: I copy it on first execution to the data/data/.../databases folder and never write to it.
I suggest you use a JOIN of your two tables and get just one cursor to iterate over instead of getting two and nesting them. Try something like this:
final String MY_QUERY = "SELECT * FROM table_a a INNER JOIN table_b b ON a.id = b.other_id";
Cursor cursorTable = db.rawQuery(MY_QUERY, null);
while (cursorTable.moveToNext()) {
    String id = cursorTable.getString(0);
    String paragraphNumber = cursorTable.getString(1);
    boolean isPoetry = (cursorTable.getInt(2) != 0);
    String word = cursorTable.getString(3);
    String thesaurusRef = cursorTable.getString(4);
    String note = cursorTable.getString(5);
}
cursorTable.close();

long finish = System.currentTimeMillis();
timeElapsed = finish - start;
System.out.println("DB Query => timeElapsed(s): " + (timeElapsed / 1000) + " timeElapsed(ms): " + timeElapsed);
Of course, you will have to handle the logic of creating the Paragraph objects yourself, as you will now have a single cursor with as many rows as table B, grouped by "paragraph_id".
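Regarding the question about indexes: an index on the column you filter or join on usually speeds up this kind of lookup considerably. A minimal sketch, assuming the hypothetical table/column names from the query above (table_b, other_id) — adjust them to your actual schema and run it once, e.g. in your SQLiteOpenHelper's onCreate() or right after copying the database:
// Hypothetical names; replace table_b / other_id with your real table and join column.
db.execSQL("CREATE INDEX IF NOT EXISTS idx_table_b_other_id ON table_b (other_id)");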
I have a Person object that I store in a SQLite database.
Person has Age, FirstName, and LastName.
At a given moment, I want to find the youngest person (lowest Age).
How can I do this?
So far I have this:
public Person getYoungest(Cursor c) {
c.moveToFirst();
db.rawQuery("SELECT MIN(" + AGE + ") FROM " + PERSON_TABLE, null);
// Person youngestPerson = ???
// return youngestPerson;
}
However, I'm really confused about what rawQuery does. It doesn't return a Person object. Also, I'm not sure whether the query gives me the lowest age, or the record containing the lowest age (which is what I want). I'm very new to SQL, so this is all strange to me.
Any advice?
What you are currently doing is querying for the smallest age value in your dataset. If you want to get the whole data row instead, you need to use ORDER BY, like
SELECT * FROM ... ORDER BY age ASC LIMIT 1
which sorts the data by Age in ascending order. As you want just one row, we use LIMIT 1 to ensure that (and to make things faster). Note that many records may share the same age (including the lowest value), so you may want to extend the ORDER BY clause to fine-tune the sorting.
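A minimal sketch of running that query from Java and building the object, assuming a person table with age, first_name and last_name columns and a Person(int age, String firstName, String lastName) constructor (all of these names are assumptions, adjust to your schema):
public Person getYoungest(SQLiteDatabase db) {
    // Assumed table/column names; the column order below matches the SELECT list.
    Cursor c = db.rawQuery(
            "SELECT age, first_name, last_name FROM person ORDER BY age ASC LIMIT 1", null);
    Person youngest = null;
    if (c.moveToFirst()) {
        youngest = new Person(c.getInt(0), c.getString(1), c.getString(2));
    }
    c.close();
    return youngest; // null if the table is empty
}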
SQLite returns strings, numbers, and byte arrays.
They are not Java objects. You have to retrieve each value and initialize your Person in code using a constructor.
Or use a library such as Realm, which makes it easy to store Java objects.
Do not forget to improve the performance by using
SELECT * FROM ... LIMIT 1
as Marcin Orlowski said.
To retrieve the data, use this:
if (c.moveToFirst()) {
    int idColIndex = c.getColumnIndex("id");
    int nameColIndex = c.getColumnIndex("name");
    int emailColIndex = c.getColumnIndex("email");

    int id = c.getInt(idColIndex);
    String name = c.getString(nameColIndex);
    String email = c.getString(emailColIndex);
}
To create an object, use this:
new Person(<your values>);
db.rawQuery() returns a Cursor that points to the data returned by the SELECT statement, so you should do:
Cursor cursor = db.rawQuery("SELECT * FROM PERSON_TABLE ORDER BY age ASC", null);
if (cursor.moveToFirst()) // data was found and the cursor points to the smallest age
    cursor.getString(cursor.getColumnIndex("field_name"));
Sorry, I'm new. I have a table and need to get the ID of the row holding the first minimum value in the table. The table is organized so the values keep decreasing until they reach 0, and all subsequent values are equal to zero.
It is possible for none of the values to be zero, in which case I'd need the last ID. It is important that only one ID is returned, because of how I'm implementing it. This is the code I tried first, but I'm getting an error.
I did not try to handle the case of there being no 0s here, because I thought it might be easier to add an if statement where the method is used.
The error I get confuses me because it seems like I can't use FIRST when I thought I could, but here it is:
android.database.sqlite.SQLiteException: no such function: FIRST (code 1): , while compiling: SELECT FIRST (_id) FROM graph WHERE bac = 0;
My code:
public int getWhereZero(){
int zero = 0;
SQLiteDatabase db = getReadableDatabase();
String query = "SELECT FIRST (" + COLUMN_ID
+ ") FROM " + TABLE_GRAPH
+" WHERE " + COLUMN_BAC
+ " = 0;";
Cursor cursor = db.rawQuery(query, null);
if(cursor != null){
cursor.moveToFirst();
zero = cursor.getInt(cursor.getColumnIndex(COLUMN_ID));
cursor.close();
}
return zero;
}
SQLite doesn't have a FIRST() function. However, you can limit the number of rows returned to one using LIMIT, so sorting by the desired order will get the row you need:
SELECT _id FROM graph ORDER BY bac LIMIT 1;
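A sketch of how getWhereZero() from the question could be rewritten around that idea, reusing COLUMN_ID, COLUMN_BAC and TABLE_GRAPH; the secondary sort on the ID column is an assumption so that, among several zero rows, the first one wins (and when no value is zero, the row with the smallest value — the last row, since the values keep decreasing — is returned):
public int getWhereZero() {
    SQLiteDatabase db = getReadableDatabase();
    // Smallest bac first; ties broken by the lowest _id.
    String query = "SELECT " + COLUMN_ID + " FROM " + TABLE_GRAPH
            + " ORDER BY " + COLUMN_BAC + " ASC, " + COLUMN_ID + " ASC LIMIT 1";
    int zero = 0;
    Cursor cursor = db.rawQuery(query, null);
    if (cursor.moveToFirst()) {
        zero = cursor.getInt(0);
    }
    cursor.close();
    return zero;
}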
I'm having some issues working with a CursorAdapter.
In bindView(), I retrieve data in this way:
final String id = c.getString(c.getColumnIndexOrThrow(MySQLiteHelper.PROF_CONTACTS_KEY_ID));
final String name = c.getString(c.getColumnIndexOrThrow(MySQLiteHelper.PROF_CONTACTS_KEY_NAME));
Right after this code, I call
Log.e("Log",id+" <=> "+name);
But, because of some weird problem, I get as a result an ID shifted forward by 1.
This is the situation in the DB (pulled from the emulator and opened with SQLite Manager):
And this is the output:
With bigger numbers (>9), the IDs get even more mixed up: number 10 becomes number 1, number 13 becomes number 5, etc.
It wouldn't be a big problem, since the only thing not matching is the ID and all other info corresponds, but I have a details activity to which I pass the ID in order to show the user the detailed info.
This is the piece of code where I apply the adapter:
mCursor = mDb.rawGet("SELECT * FROM "+MySQLiteHelper.PROF_CONTACTS_TB_NAME+" LEFT JOIN "+
MySQLiteHelper.EXAMS_TB_NAME+" ON "+
MySQLiteHelper.PROF_CONTACTS_TB_NAME+"."+MySQLiteHelper.PROF_CONTACTS_KEY_COD_ESAME+"="+
MySQLiteHelper.EXAMS_TB_NAME+"."+MySQLiteHelper.EXAMS_KEY_COD
+ " ORDER BY " + MySQLiteHelper.PROF_CONTACTS_TB_NAME+"."+MySQLiteHelper.PROF_CONTACTS_KEY_ID);
if (mCursor.getCount() == 0) {
// error stuff.
} else {
String[] columns = new String[] {};
int[] to = new int[] {};
mDataAdapter = new CursorAdapterProfContacts(getSherlockActivity(), R.layout.item_prof_contact, mCursor, columns, to, 0);
mLvContacts.setAdapter(mDataAdapter);
}
Move the cursor to the first row after obtaining the cursor, like:
mCursor.moveToFirst()
Are you sure that you have _id correctly populated when you insert a value? You can extract the database if you use the emulator and open it with the SQLiteManager plugin for Firefox. Also, instead of querying everything with *, use the same projection column names as you use inside your bindView(); something is not matching here.
It was due to a name collision: _id can refer both to EXAMS and to PROF. SQLite chose EXAMS instead of PROF.
mCursor = mDb.rawGet("SELECT *, "+
MySQLiteHelper.PROF_CONTACTS_TB_NAME+"."+MySQLiteHelper.PROF_CONTACTS_KEY_ID+" AS idProf "+
" FROM "+MySQLiteHelper.PROF_CONTACTS_TB_NAME+" LEFT JOIN "+
MySQLiteHelper.EXAMS_TB_NAME+" ON "+
MySQLiteHelper.PROF_CONTACTS_TB_NAME+"."+MySQLiteHelper.PROF_CONTACTS_KEY_COD_ESAME+"="+
MySQLiteHelper.EXAMS_TB_NAME+"."+MySQLiteHelper.EXAMS_KEY_COD +
" ORDER BY " + MySQLiteHelper.PROF_CONTACTS_TB_NAME+"."+MySQLiteHelper.PROF_CONTACTS_KEY_ID);
And finally
final Long id = c.getLong(c.getColumnIndexOrThrow("idProf"));
This did the trick.
Name-collision errors should be thrown, as they are in standard SQL and MySQL.
Currently, I am using the following statement to create a table in an SQLite database on an Android device.
CREATE TABLE IF NOT EXISTS 'locations' (
'_id' INTEGER PRIMARY KEY AUTOINCREMENT, 'name' TEXT,
'latitude' REAL, 'longitude' REAL,
UNIQUE ( 'latitude', 'longitude' )
ON CONFLICT REPLACE );
The conflict clause at the end causes existing rows to be dropped when new rows with the same coordinates are inserted. The SQLite documentation contains further information about conflict clauses.
Instead, I would like to keep the former rows and just update their columns. What is the most efficient way to do this in an Android/SQLite environment?
As a conflict-clause in the CREATE TABLE statement.
As an INSERT trigger.
As a conditional clause in the ContentProvider#insert method.
... any better option you can think of
I would think it is more performant to handle such conflicts within the database. Also, I find it hard to rewrite the ContentProvider#insert method to cover the insert-or-update scenario. Here is the code of the insert method:
public Uri insert(Uri uri, ContentValues values) {
final SQLiteDatabase db = mOpenHelper.getWritableDatabase();
long id = db.insert(DatabaseProperties.TABLE_NAME, null, values);
return ContentUris.withAppendedId(uri, id);
}
When data arrives from the backend, all I do is insert it as follows:
getContentResolver().insert(CustomContract.Locations.CONTENT_URI, contentValues);
I have problems figuring out how to apply an alternative call to ContentProvider#update here. Additionally, this is not my favored solution anyway.
Edit:
@CommonsWare: I tried to implement your suggestion to use INSERT OR REPLACE. I came up with this ugly piece of code.
private static long insertOrReplace(SQLiteDatabase db, ContentValues values, String tableName) {
final String COMMA_SPACE = ", ";
StringBuilder columnsBuilder = new StringBuilder();
StringBuilder placeholdersBuilder = new StringBuilder();
List<Object> pureValues = new ArrayList<Object>(values.size());
Iterator<Entry<String, Object>> iterator = values.valueSet().iterator();
while (iterator.hasNext()) {
Entry<String, Object> pair = iterator.next();
String column = pair.getKey();
columnsBuilder.append(column).append(COMMA_SPACE);
placeholdersBuilder.append("?").append(COMMA_SPACE);
Object value = pair.getValue();
pureValues.add(value);
}
final String columns = columnsBuilder.substring(0, columnsBuilder.length() - COMMA_SPACE.length());
final String placeholders = placeholdersBuilder.substring(0, placeholdersBuilder.length() - COMMA_SPACE.length());
db.execSQL("INSERT OR REPLACE INTO " + tableName + "(" + columns + ") VALUES (" + placeholders + ")", pureValues.toArray());
// The last insert id retrieved here is not safe. Some other inserts can happen inbetween.
Cursor cursor = db.rawQuery("SELECT * from SQLITE_SEQUENCE;", null);
long lastId = INVALID_LAST_ID;
if (cursor != null && cursor.getCount() > 0 && cursor.moveToFirst()) {
lastId = cursor.getLong(cursor.getColumnIndex("seq"));
}
cursor.close();
return lastId;
}
When I check the SQLite database, however, rows with equal coordinates are still removed and re-inserted with new IDs. I do not understand why this happens and thought the reason was my conflict clause, but the documentation states the opposite:
The algorithm specified in the OR clause of an INSERT or UPDATE
overrides any algorithm specified in a CREATE TABLE. If no algorithm
is specified anywhere, the ABORT algorithm is used.
Another disadvantage of this approach is that you lose the ID value that an insert statement returns. To compensate, I finally found an option to ask for the last_insert_rowid, as explained in the posts of dtmilano and swiz. I am, however, not sure whether this is safe, since another insert can happen in between.
I can understand the perceived notion that it is best for performance to do all this logic in SQL, but perhaps the simplest (least code) solution is the best one in this case? Why not attempt the update first, and then use insertWithOnConflict() with CONFLICT_IGNORE to do the insert (if necessary) and get the row id you need:
public Uri insert(Uri uri, ContentValues values) {
final SQLiteDatabase db = mOpenHelper.getWritableDatabase();
String selection = "latitude=? AND longitude=?";
String[] selectionArgs = new String[] {values.getAsString("latitude"),
values.getAsString("longitude")};
//Do an update if the constraints match
db.update(DatabaseProperties.TABLE_NAME, values, selection, selectionArgs);
//This will return the id of the newly inserted row if no conflict
//It will also return the offending row without modifying it if in conflict
long id = db.insertWithOnConflict(DatabaseProperties.TABLE_NAME, null, values, CONFLICT_IGNORE);
return ContentUris.withAppendedId(uri, id);
}
A simpler solution would be to check the return value of update() and only do the insert if the affected count was zero, but then there would be a case where you could not obtain the id of the existing row without an additional select. This form of insert will always return to you the correct id to pass back in the Uri, and won't modify the database more than necessary.
If you want to do a large number of these at once, you might look at the bulkInsert() method on your provider, where you can run multiple inserts inside a single transaction. In this case, since you don't need to return the id of the updated record, the "simpler" solution should work just fine:
public int bulkInsert(Uri uri, ContentValues[] values) {
final SQLiteDatabase db = mOpenHelper.getWritableDatabase();
String selection = "latitude=? AND longitude=?";
String[] selectionArgs = null;
int rowsAdded = 0;
long rowId;
db.beginTransaction();
try {
for (ContentValues cv : values) {
selectionArgs = new String[] {cv.getAsString("latitude"),
cv.getAsString("longitude")};
int affected = db.update(DatabaseProperties.TABLE_NAME,
cv, selection, selectionArgs);
if (affected == 0) {
rowId = db.insert(DatabaseProperties.TABLE_NAME, null, cv);
if (rowId > 0) rowsAdded++;
}
}
db.setTransactionSuccessful();
} catch (SQLException ex) {
Log.w(TAG, ex);
} finally {
db.endTransaction();
}
return rowsAdded;
}
In truth, the transaction is what makes things faster, by minimizing the number of times the database is written out to the file; bulkInsert() just allows multiple ContentValues to be passed in with a single call to the provider.
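For illustration, a caller would then hand the whole batch to the provider in a single call; a sketch, assuming a hypothetical list of location objects with simple getters:
ContentValues[] batch = new ContentValues[locations.size()];
for (int i = 0; i < locations.size(); i++) {
    ContentValues cv = new ContentValues();
    cv.put("name", locations.get(i).getName());
    cv.put("latitude", locations.get(i).getLatitude());
    cv.put("longitude", locations.get(i).getLongitude());
    batch[i] = cv;
}
getContentResolver().bulkInsert(CustomContract.Locations.CONTENT_URI, batch);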
One solution is to create a view for the locations table with an INSTEAD OF trigger on the view, then insert into the view. Here's what that would look like:
View:
CREATE VIEW locations_view AS SELECT * FROM locations;
Trigger:
CREATE TRIGGER update_location INSTEAD OF INSERT ON locations_view FOR EACH ROW
BEGIN
INSERT OR REPLACE INTO locations (_id, name, latitude, longitude) VALUES (
COALESCE(NEW._id,
(SELECT _id FROM locations WHERE latitude = NEW.latitude AND longitude = NEW.longitude)),
NEW.name,
NEW.latitude,
NEW.longitude
);
END;
Instead of inserting into the locations table, you insert into the locations_view view. The trigger will take care of providing the correct _id value by using the sub-select. If, for some reason, the insert already contains an _id, the COALESCE will keep it and overwrite an existing row in the table.
You'll probably want to check how much the sub-select affects performance and compare that to other possible changes you could make, but it does allow you to keep this logic out of your code.
I tried some other solutions involving triggers on the table itself based on INSERT OR IGNORE, but it seems that BEFORE and AFTER triggers only fire if the row is actually inserted into the table.
You might find this answer helpful, which is the basis for the trigger.
Edit: Because BEFORE and AFTER triggers do not fire when an insert is ignored (which is exactly when we would have wanted to update instead), we need to rewrite the insert with an INSTEAD OF trigger. Unfortunately, those don't work on tables, so we have to create a view to use one.
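On the Android side, the only change would then be the name passed to insert(); a minimal sketch, assuming the view and trigger above have been created in your SQLiteOpenHelper's onCreate() (note that the row ID returned by insert() may not be meaningful here, since the actual insert is performed by the trigger body):
ContentValues values = new ContentValues();
values.put("name", "Somewhere");
values.put("latitude", 48.8566);
values.put("longitude", 2.3522);
// Insert into the view instead of the table; the INSTEAD OF trigger
// rewrites this into an INSERT OR REPLACE that preserves the existing _id.
db.insert("locations_view", null, values);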
INSERT OR REPLACE works just like ON CONFLICT REPLACE. It will delete the row if a row with the same unique column values already exists, and then it does an insert. It never does an update.
I would recommend you stick with your current solution: you create the table with the ON CONFLICT clause, but every time you insert a row and a constraint violation occurs, your new row will have a new _id, as the original row is deleted.
Or you can create the table without the ON CONFLICT clause and use INSERT OR REPLACE; you can use the insertWithOnConflict() method for that, but it is only available since API level 8, requires more coding, and leads to the same result as a table with the ON CONFLICT clause.
If you still want to keep your original row, meaning you want to keep the same _id, you will have to make two queries: the first one to insert the row, and a second to update it if the insertion failed (or vice versa). To preserve consistency, you have to execute the queries in a transaction.
db.beginTransaction();
try {
long rowId = db.insert(table, null, values);
if (rowId == -1) {
// insertion failed
String whereClause = "latitude=? AND longitude=?";
String[] whereArgs = new String[] {values.getAsString("latitude"),
values.getAsString("longitude")};
db.update(table, values, whereClause, whereArgs);
// now you have to get rowId so you can return correct Uri from insert()
// method of your content provider, so another db.query() is required
}
db.setTransactionSuccessful();
} finally {
db.endTransaction();
}
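The extra query mentioned in the comment could look roughly like this (a sketch, assuming the latitude/longitude pair is the unique key):
// Re-read the _id of the row that was just updated so the provider can return the correct Uri.
Cursor c = db.query(table, new String[] { "_id" },
        "latitude=? AND longitude=?",
        new String[] { values.getAsString("latitude"), values.getAsString("longitude") },
        null, null, null);
long rowId = -1;
if (c.moveToFirst()) {
    rowId = c.getLong(0);
}
c.close();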
Use insertWithOnConflict and set the last parameter (conflictAlgorithm) to CONFLICT_REPLACE.
Read more at the following links:
insertWithOnConflict documentation
CONFLICT_REPLACE flag
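For example (a sketch using the names from the question; note that with CONFLICT_REPLACE the conflicting row is deleted and re-inserted, so its _id changes, which is what the question was trying to avoid):
long id = db.insertWithOnConflict(DatabaseProperties.TABLE_NAME, null, values,
        SQLiteDatabase.CONFLICT_REPLACE);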
For me, none of the approaches work if I don't have an "_id".
You should first call update(); if the number of affected rows is zero, then insert it with CONFLICT_IGNORE:
String selection = MessageDetailTable.SMS_ID+" =?";
String[] selectionArgs = new String[] { String.valueOf(md.getSmsId())};
int affectedRows = db.update(MessageDetailTable.TABLE_NAME, values, selection,selectionArgs);
if(affectedRows<=0) {
long id = db.insertWithOnConflict(MessageDetailTable.TABLE_NAME, null, values, SQLiteDatabase.CONFLICT_IGNORE);
}
Use INSERT OR REPLACE.
This is the correct way to do it.