SQLiteStatement.simpleQueryForLong() behaves differently from what the documentation says - Android

I'm using the simpleQueryForLong() method in the following way:
mResuableStatment = DatabaseHandler.database.compileStatement(
"SELECT MIN(timestamp) FROM " + TABLE_LOCAITON_LOGGING);
long oldestTimestamp = mResuableStatment.simpleQueryForLong();
timestamp is the name of a column in the table TABLE_LOCAITON_LOGGING.
The database is open for reading and writing at that point.
According to the documentation, if there are no rows at all, this code is supposed to throw SQLiteDoneException. But what really happens is that when there are no rows, simpleQueryForLong() returns zero (without throwing an exception).
What was even stranger and more unexpected: when I actually wrap this code in a try/catch block, it throws SQLiteDoneException every time, even when there are rows in the table:
try {
    long oldestTimestamp = mResuableStatment.simpleQueryForLong();
} catch (SQLiteDoneException e) {
    e.printStackTrace();
}
This behavior was seen on a Nexus 5, Galaxy Nexus, LG G2, and HTC One X, all running Android KitKat.
I'm a little confused, because this forced me not to use the try/catch block: if it's used, the exception is always thrown regardless of whether there are rows or not - the reverse of what the documentation says.
It seems like on every device I tested, without the try/catch block, when there are no rows it returns zero.
Please help me understand the right way to do this and what I'm doing wrong - or whether the documentation is simply wrong.

Actually, I think you'll find you have it the wrong way around.
select min(something) ...
will never return zero rows. It will either return the minimum value (if the table has rows) or NULL (if the table has no rows). If your query were just select something, that could return zero rows when the table is empty, so that might be worth testing.
I've tested this with SQL Fiddle, entering the commands:
create table xyzzy(a int);
select a from xyzzy;
select min(a) from xyzzy;
The first SELECT returns zero rows; the second returns one row with a (null) value.
So I don't believe it's the empty table causing the exception. It may be the attempted conversion of (null) into a long.
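One way to see (and sidestep) that NULL directly is to run the query through a Cursor and test for NULL explicitly. A minimal sketch, assuming an open SQLiteDatabase named db and a locations table with a timestamp column (placeholder names, not from the question):
Cursor c = db.rawQuery("SELECT MIN(timestamp) FROM locations", null);
try {
    if (c.moveToFirst() && !c.isNull(0)) {
        long oldestTimestamp = c.getLong(0); // at least one row, so MIN() is a real value
    } else {
        // the table is empty: MIN(timestamp) came back as NULL
    }
} finally {
    c.close();
}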
How you "fix" this depends entirely on what you want to use the data for. Relying on a null row being converted to zero may or may not be what you want. For example, if there was a zero timestamp in the table, you would not be able to tell the difference between that and an empty table.
That may be acceptable, it depends on your business requirements.
If you need to distinguish, I would run two queries. The first would return count(*) from the table. That's guaranteed to be a single row containing a numeric value (no nulls). If that's zero, it means the table is empty.
If non-zero, then do the min(column_name) and you'll get a single row with the minimum timestamp. If you get zero at that point, you know it's because the minimum timestamp was zero. Unless of course there's a NULL timestamp in there, in which case you may have to do more checks. But I think that's unlikely if you've structured your schema correctly.
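A minimal sketch of that two-step check in Android Java, assuming an open SQLiteDatabase named db; the locations table and timestamp column are placeholder names:
SQLiteStatement countStmt = db.compileStatement("SELECT COUNT(*) FROM locations");
long rowCount = countStmt.simpleQueryForLong(); // COUNT(*) is never NULL

if (rowCount == 0) {
    // the table is empty: there is no oldest timestamp
} else {
    SQLiteStatement minStmt = db.compileStatement("SELECT MIN(timestamp) FROM locations");
    long oldestTimestamp = minStmt.simpleQueryForLong();
    // safe here: with at least one row, MIN(timestamp) is non-NULL
    // (assuming the timestamp column itself contains no NULLs)
}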
Note that I wouldn't normally suggest this in a multi-user database since it may lead to race conditions. But I think it's probably okay since there should only be one "user" of this database, your application.

Related

Does SQLite `NOT IN` parameter have any size limit?

I have an SQLite DB where I perform a query like
SELECT * FROM table WHERE col_name NOT IN ('val1', 'val2')
Basically I'm getting a huge list of values from the server, and I need to select the rows whose values are not present in that list.
Currently it's working fine with no issues. But the number of values from the server is becoming huge, as the server DB is updated frequently.
So I may get thousands of string values which I need to pass to the NOT IN clause.
My question is: will it cause any performance issue in the future? Does the NOT IN parameter have any size restriction (like a maximum of 10000 values you can check)?
Will it cause a crash at some point?
The official reference about the various limits in SQLite is https://www.sqlite.org/limits.html. I think "Maximum Length Of An SQL Statement" may relate to your case. The default value is 1,000,000 bytes, and it is adjustable.
Apart from that, I don't think there is any limit on the number of values in a NOT IN clause.
With more than a few values to test for, you're better off putting them in a table that has an index on the column holding them. Then things like
SELECT *
FROM table
WHERE col_name NOT IN (SELECT value_col FROM value_table);
or
SELECT *
FROM table AS t
WHERE NOT EXISTS (SELECT 1 FROM value_table WHERE value_col = t.col_name);
will be reasonably efficient no matter how many records are in value_table because that index will be used to find entries.
Plus, of course, it makes it a lot easier to reuse prepared statements, because you don't have to create a new one and re-bind every value each time you add a value to the set you need to check - you just insert it into value_table instead. (You are using prepared statements with placeholders for these values, right, and not trying to put their contents inline into a string?)
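A sketch of that approach on Android, assuming an open SQLiteDatabase named db; value_table, value_col, my_table, col_name, and serverValues are placeholder names:
// create the lookup table once, with an index so the subquery stays fast
db.execSQL("CREATE TABLE IF NOT EXISTS value_table (value_col TEXT)");
db.execSQL("CREATE INDEX IF NOT EXISTS idx_value_col ON value_table (value_col)");

// bulk-load the server values with one prepared statement in one transaction
db.beginTransaction();
try {
    SQLiteStatement insert = db.compileStatement(
            "INSERT INTO value_table (value_col) VALUES (?)");
    for (String value : serverValues) {
        insert.bindString(1, value);
        insert.executeInsert();
        insert.clearBindings();
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}

// one fixed query, no matter how many values were loaded
Cursor c = db.rawQuery("SELECT * FROM my_table WHERE col_name NOT IN "
        + "(SELECT value_col FROM value_table)", null);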
Yes, if you bind the values as ? parameters there is a default limit of 999 host parameters per statement, as reported in the official documentation: https://www.sqlite.org/limits.html#max_variable_number

Android-SQLite: How to deal with unique values?

I'm trying to write to a database, in my spec I had to ensure that there are no duplicates for a specific field. Great! I can just make the column unique.
But I have no idea how to deal with that afterwards. If I use the application and accidentally insert a new value which happens to already exist, the app just crashes. How do I check that the value already exists before I try to update the database?
I feel like an if statement would work, buuuuut - how do you scan every value in that column on Android anyway?
I assume you propose reading all rows in the table and, for each row, checking whether the value already exists: if it doesn't exist, insert; otherwise, handle the conflict.
Another way of doing it is the insertWithOnConflict() method. You can set various conflict resolution strategies, such as:
CONFLICT_ABORT
CONFLICT_FAIL
CONFLICT_IGNORE
CONFLICT_NONE
CONFLICT_REPLACE
CONFLICT_ROLLBACK
I don't have any idea of the complexity of this method, but it is probably much better than reading all rows and checking manually.
http://developer.android.com/reference/android/database/sqlite/SQLiteDatabase.html#insertWithOnConflict(java.lang.String,%20java.lang.String,%20android.content.ContentValues,%20int)
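A short sketch of that method with CONFLICT_IGNORE, assuming an open SQLiteDatabase named db and a users table whose email column has a UNIQUE constraint (placeholder names):
ContentValues values = new ContentValues();
values.put("email", "user@example.com");
values.put("name", "Alice");

// with CONFLICT_IGNORE a duplicate email does not throw: the insert is
// silently skipped, and in practice -1 is returned instead of a new row id
long rowId = db.insertWithOnConflict("users", null, values,
        SQLiteDatabase.CONFLICT_IGNORE);
if (rowId == -1) {
    // the value already existed: handle the duplicate here instead of crashing
}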

SQL interface like pattern?

I have two tables, SyncedComments and QueuedComments. The latter holds local comments until they are synced with a web server; when they are synced successfully, they get moved to the synced table. My application should be indifferent to each type. I load the comments through a CursorLoader, and they may be moved to the synced table while users are reading them. Let's say the user can also edit comments, perhaps while they are being moved, so the application should know where a comment is, regardless of its table.
To support this, I've thought of having a table with three columns: local_id, synced_id and queued_id. The local_id is persistent and simply serves as a reference to one of the two other IDs. When a comment is created, a new row is inserted with its synced_id set to NULL and its queued_id set to the ID it has been given; when a comment is moved, the queued_id is set to NULL and the synced_id is set. This way my application only needs to reference the local ID at all times.
How does this solution look? Any flaws? Could it be done smarter?
I would in the first place put all the comments in one table, with a flag for whether the comment is synchronized (actually, it would probably be the ID on the server, set to NULL until synchronized, then filled with the value obtained from the server). That takes you down to one table instead of three, makes it easier to show all comments (because you won't need a UNION), and above all avoids problems when a comment is synchronized while being shown, because the comment will not be moving anywhere. It also means fewer writes to the database file, so it causes less fragmentation and fewer writes to the flash device.
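A sketch of that single-table design on Android; the comments table and its body and server_id columns are placeholder names:
// one table; server_id is NULL until the comment has been synced
db.execSQL("CREATE TABLE comments ("
        + "_id INTEGER PRIMARY KEY, "
        + "body TEXT NOT NULL, "
        + "server_id INTEGER)");

// comments still waiting to be synced
Cursor queued = db.rawQuery("SELECT * FROM comments WHERE server_id IS NULL", null);

// after a successful sync, record the server's ID; the row never moves
ContentValues cv = new ContentValues();
cv.put("server_id", serverId);
db.update("comments", cv, "_id = ?", new String[]{ String.valueOf(localId) });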

SQLite: How to limit the number of rows based on the timestamp?

I successfully used the following BEFORE INSERT trigger to limit the number of rows stored in the SQLite database table locations. The database table acts as a cache in an Android application.
CREATE TRIGGER 'trigger_locations_insert'
BEFORE INSERT ON 'locations'
WHEN ( SELECT count(*) FROM 'locations' ) > '100'
BEGIN
    DELETE FROM 'locations' WHERE '_id' NOT IN
    (
        SELECT '_id' FROM 'locations' ORDER BY 'modified_at' DESC LIMIT '100'
    );
END
Meanwhile, I added a second trigger that allows me to INSERT OR UPDATE rows - the discussion on that topic can be found in another thread. The second trigger requires a VIEW on which each INSERT is executed.
CREATE VIEW 'locations_view' AS SELECT * FROM 'locations';
Since an INSERT is no longer executed on the TABLE locations but on the VIEW locations_view, the above trigger no longer works. If I apply the trigger to the VIEW, the following error message is thrown:
Failure 1 (cannot create BEFORE trigger on view: main.locations_view)
Question:
How can I change the above trigger to observe each INSERT on the VIEW - or do you recommend another way to limit the number of rows? I would prefer to handle this kind of operation within the database, rather than running frontend code on my client.
Performance issues:
Although the limiter (the above trigger) works in general, it performs less than optimally! Actually, the database actions take so long that an ANR is raised. As far as I can see, the reason is that the limiter is called every time an INSERT happens. To optimize the setup, the bulk INSERT should be wrapped in a transaction and the limiter should run right afterwards. Is this possible? If you would like to help, please place optimization comments concerning the bulk INSERT in the original question. Comments regarding the limiter are welcome here.
This type of trigger should work fine in conjunction with the other one. The problem appears to be that the SQL is unnecessarily quoting the _id field. It is selecting the literal string "_id" for every row and comparing that to the same literal string.
Removing the quotes around '_id' (both in the DELETE and in the sub-SELECT) should fix the problem.
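For illustration, here is the trigger with the quotes removed, wrapped in the Android call that would create it - a sketch, untested against the asker's schema. (The quoted 'modified_at' sort key and the quoted '100' limits are arguably the same class of bug, so they are unquoted here too; in SQLite, single quotes denote string literals, not identifiers.)
db.execSQL("CREATE TRIGGER trigger_locations_insert "
        + "BEFORE INSERT ON locations "
        + "WHEN (SELECT count(*) FROM locations) > 100 "
        + "BEGIN "
        + "  DELETE FROM locations WHERE _id NOT IN "
        + "    (SELECT _id FROM locations ORDER BY modified_at DESC LIMIT 100); "
        + "END");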

Are auto-increment & decrement integer fields available in an SQLite database?

I am fetching my data by id, which is an INTEGER PRIMARY KEY.
But after deleting any row, if I then make a SELECT query to show everything, my app force closes, because one id is missing.
I want the id to auto-increment and auto-decrement itself:
when I delete a record at the end (e.g. id=7) and then add a row, the new id must be 7, not 8. Likewise, when I delete a row in the middle (e.g. id=3), all the following rows should renumber to close the gap.
Your ideas can help me.
Most systems with auto-incrementing columns keep track of the last value inserted (or the next one to be inserted) and do not ever reissue a number (give the same number twice), even if the last number issued has been removed from the table.
Judging from what you are asking, SQLite is another such system.
If there is any concurrency in the system, then this is risky, but for a single-user, single-app-at-a-time system, you might get away with:
SELECT MAX(id_column) + 1 FROM YourTable
to find the next available value. Depending on how SQLite behaves, you might be able to embed that in the VALUES list of an INSERT statement:
INSERT INTO YourTable(id_column, ...)
VALUES((SELECT MAX(id_column) + 1 FROM YourTable), ...);
That may not work; you may have to do this as two operations. Note that if there is any concurrency, the two-statement form is a Bad Idea™. The primary key's unique constraint normally prevents disaster, but one of two concurrent statements will fail because it tries to insert a value that the other just inserted - so it has to retry and hope for the best. Clearly, a cell phone has less concurrency than, say, a web server, so the problem is correspondingly less severe. But be careful.
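On Android, one way to make the two-operation form safer is to wrap both steps in a transaction, which serializes writers on this database; a sketch with placeholder table and column names, using IFNULL to cover the empty-table case:
db.beginTransaction();
try {
    // compute the next id and insert in a single statement, inside the transaction
    db.execSQL("INSERT INTO YourTable (id_column, data_column) "
            + "VALUES ((SELECT IFNULL(MAX(id_column), 0) + 1 FROM YourTable), ?)",
            new Object[]{ data });
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}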
On the whole, though, it is best to let gaps appear in the sequence without worrying about it. It is usually not necessary to worry about them. If you must worry about gaps, don't let people make them in the first place. Or move an existing row to fill in the gap when you do a delete that creates one. That still leaves deletes at the end creating gaps when new rows are added, which is why it is best to get over the "it must be a contiguous sequence of numbers" mentality. Auto-increment guarantees uniqueness; it does not guarantee contiguity.
