A little help putting together a delete statement in SQLite using OrmLite - Android

I'm having trouble putting together a delete statement in my Android application (I am using OrmLite).
I have a table filled with records. Two of the fields are "dateCreated" (type Date) and "imageSize" (type int). In my code I have a method free(int size). This method has to delete the oldest records from my table whose combined "imageSize" is <= size.
For instance, if I get the parameter 1000 and each record has an imageSize of, let's say, 100, that means I have to delete the 10 oldest records.
Can someone please provide me with an optimal raw SQL statement, or even better, OrmLite code for this?
I would be most grateful.

Unfortunately, you can't do this with a single SQL statement. There is no way to say
select records until their sum is less than X
You could do multiple queries until you found the oldest records whose sum is less than X, but it would take a number of separate SQL calls.
I'd recommend selecting the oldest images with their sizes and then deleting the right number of them using a DELETE ... IN .... Here's the pseudocode:
while (true) {
    SELECT id, imageSize FROM yourtable ORDER BY dateCreated ASC LIMIT 10;
    walk the rows in order, accumulating ids while the running sum(imageSize) stays <= parameter
    DELETE FROM yourtable WHERE id IN (accumulated ids);
    break once the running sum would exceed the parameter, otherwise loop and get the next 10 (they are the next-oldest now that these are deleted)
}
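In OrmLite terms, the loop might look roughly like this. A minimal, untested sketch: it assumes an Image entity with id, dateCreated, and imageSize fields and a Dao<Image, Integer> named imageDao; all of those names are illustrative, not from the question.

import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.stmt.DeleteBuilder;

// assumes a field somewhere: private Dao<Image, Integer> imageDao;
public void free(int size) throws SQLException {
    long freed = 0;
    List<Integer> idsToDelete = new ArrayList<Integer>();
    boolean done = false;
    while (!done) {
        // next batch of oldest images: ascending dateCreated, skipping rows already collected
        List<Image> batch = imageDao.queryBuilder()
                .orderBy("dateCreated", true)
                .offset((long) idsToDelete.size())
                .limit(10L)
                .query();
        if (batch.isEmpty()) {
            break; // table exhausted before we hit the budget
        }
        for (Image image : batch) {
            if (freed + image.getImageSize() > size) {
                done = true; // adding this image would exceed the parameter
                break;
            }
            idsToDelete.add(image.getId());
            freed += image.getImageSize();
        }
    }
    if (!idsToDelete.isEmpty()) {
        // issues DELETE FROM image WHERE id IN (...)
        DeleteBuilder<Image, Integer> deleteBuilder = imageDao.deleteBuilder();
        deleteBuilder.where().in("id", idsToDelete);
        deleteBuilder.delete();
    }
}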

Try this: a correlated subquery can compute, for each row, the running total of imageSize over all rows that are at least as old, and delete the rows whose running total stays within the parameter:
DELETE FROM yourtable WHERE (SELECT SUM(t.imageSize) FROM yourtable t WHERE t.dateCreated <= yourtable.dateCreated) <= ?
Bind the same size parameter that your free() method receives.
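With OrmLite, a raw statement like this can be issued through the Dao. A sketch, assuming the same illustrative imageDao as in the sketch above:

// updateRaw handles raw DELETE/UPDATE statements and returns the affected-row count
int deleted = imageDao.updateRaw(
        "DELETE FROM yourtable WHERE (SELECT SUM(t.imageSize) FROM yourtable t "
                + "WHERE t.dateCreated <= yourtable.dateCreated) <= ?",
        String.valueOf(size));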


Does SQLite `NOT IN` parameter have any size limit?

I have an SQLite DB where I perform a query like
Select * from table where col_name NOT IN ('val1','val2')
Basically, I'm getting a huge list of values from the server and I need to select the rows whose values are not present in the given list.
Currently it's working fine with no issues, but the number of values from the server is becoming huge, as the server DB is updated frequently.
So I may get thousands of String values which I need to pass to the NOT IN clause.
My question is: will it cause any performance issues in the future? Does the NOT IN parameter have any size restriction (like a maximum of 10,000 values you can check)?
Will it cause a crash at some point?
The official reference about the various limits in SQLite is https://www.sqlite.org/limits.html. I think Maximum Length Of An SQL Statement may be relevant to your case: the default is 1,000,000 bytes, and it is adjustable.
Apart from that, I don't think there is any limit on the number of values in a NOT IN clause.
With more than a few values to test for, you're better off putting them in a table that has an index on the column holding them. Then things like
SELECT *
FROM table
WHERE col_name NOT IN (SELECT value_col FROM value_table);
or
SELECT *
FROM table AS t
WHERE NOT EXISTS (SELECT 1 FROM value_table WHERE value_col = t.col_name);
will be reasonably efficient no matter how many records are in value_table because that index will be used to find entries.
Plus, of course, it makes it a lot easier to reuse prepared statements: every time you add a value to the ones you need to check, you don't have to create a new statement and re-bind every value (you are using prepared statements with placeholders for these values, right, and not trying to put their contents inline into a string?). You just insert the value into value_table instead.
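On Android, that approach might look roughly like this. A sketch, assuming an open SQLiteDatabase db, a List<String> serverValues, and illustrative names (value_table, value_col, my_table, col_name):

// one-time setup: a lookup table with an index on the value column
db.execSQL("CREATE TABLE IF NOT EXISTS value_table (value_col TEXT)");
db.execSQL("CREATE INDEX IF NOT EXISTS value_table_idx ON value_table (value_col)");

// load the server-provided values, reusing one prepared statement
SQLiteStatement insert =
        db.compileStatement("INSERT INTO value_table (value_col) VALUES (?)");
db.beginTransaction();
try {
    for (String value : serverValues) {
        insert.bindString(1, value);
        insert.executeInsert();
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}

// the NOT EXISTS form from above; the index keeps each probe fast
Cursor cursor = db.rawQuery(
        "SELECT * FROM my_table AS t WHERE NOT EXISTS "
                + "(SELECT 1 FROM value_table WHERE value_col = t.col_name)",
        null);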
Yes, if you bind the values as ? parameters there is a default limit of 999 per statement (SQLITE_MAX_VARIABLE_NUMBER), as reported in the official documentation: https://www.sqlite.org/limits.html#max_variable_number
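If you stay with a parameterized NOT IN, the placeholder list has to be generated to match the value count, and the whole statement must stay under that limit. A sketch, assuming a List<String> values and an open SQLiteDatabase db (the placeholders() helper is mine, not part of SQLite or Android):

// builds "?,?,...,?" for n values; n must stay <= 999 by default
static String placeholders(int n) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < n; i++) {
        sb.append(i == 0 ? "?" : ",?");
    }
    return sb.toString();
}

String sql = "SELECT * FROM my_table WHERE col_name NOT IN ("
        + placeholders(values.size()) + ")";
Cursor c = db.rawQuery(sql, values.toArray(new String[0]));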

How can I select the last index of a column split with BigQuery

There are a lot of questions about splitting a BigQuery or MySQL column, but I can't find one that fits my situation.
I am processing a large (3rd-party) dataset that includes a freeform location field, which I need to normalize for my Android app. When I run a select, I'd like to split the column data by commas, take only the last segment, and trim it of whitespace.
So far I've come up with the following by Googling documentation:
SELECT RTRIM(LOWER(SPLIT(location, ',')[OFFSET(-1)])) FROM `users` WHERE location <> ''
But the -1 trick to take the last element does not work (with either OFFSET or ORDINAL). I can't use ARRAY_LENGTH on the same array inline, and I'm not exactly sure how to structure a nested query that knows the last index of the split for each row.
I might be approaching this from the wrong angle; I work with Android and NoSQL now, so I haven't used MySQL in a long time.
How do I structure this query correctly?
I'd like to split the column data by commas, take only the last segment ...
You can use the approach below (BigQuery Standard SQL):
SELECT ARRAY_REVERSE(SPLIT(location))[SAFE_OFFSET(0)]
Below is an example illustrating it:
#standardSQL
WITH `project.dataset.table` AS (
SELECT '1,2,3,4,5' location UNION ALL
SELECT '6,7,8'
)
SELECT location, ARRAY_REVERSE(SPLIT(location))[SAFE_OFFSET(0)] last_segment
FROM `project.dataset.table`
with the result:
Row  location   last_segment
1    1,2,3,4,5  5
2    6,7,8      8
For trimming, you can use LTRIM(RTRIM()) (or just TRIM()), like in:
SELECT LTRIM(RTRIM(ARRAY_REVERSE(SPLIT(location))[SAFE_OFFSET(0)]))
To get the last part of the split string, I use the LENGTH(string) - LENGTH(REPLACE(string, delimiter, '')) trick to count the delimiters; since OFFSET is zero-based, that count is exactly the offset of the last element:
SPLIT(<string>, '-')[OFFSET(LENGTH(<string>) - LENGTH(REPLACE(<string>, '-', '')))]

Does SQLite support reading a certain number of rows per query? [duplicate]

This question already has answers here:
How to get Top 5 records in SqLite?
(8 answers)
Closed 8 years ago.
My Android application has an activity that presents data from an SQLite database. The db table might contain a huge number of rows. For performance reasons, I want to load 20 rows from the db at a time, and when the user scrolls the listview to the end, read the next 20 rows.
So I want to use an SQL statement like this:
select * from mytable where id > N and count = 20;
I just wonder if SQLite supports this kind of "count = 20" feature to read at most 20 rows per query. If it is supported, what is the exact syntax?
Yes, it does. It is called LIMIT:
SELECT *
FROM mytable
WHERE id > N
LIMIT 20
You can also use the optional OFFSET clause to start at a certain row:
SELECT *
FROM mytable
WHERE id > N
LIMIT 20
OFFSET 100
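On Android you don't even have to hand-assemble the SQL, since SQLiteDatabase.query() takes a limit argument. A quick sketch with illustrative names (db, lastSeenId):

// fetch the next page of 20 rows after the last id the user has seen
Cursor page = db.query(
        "mytable",                                   // table
        null,                                        // all columns
        "id > ?",                                    // selection
        new String[] { String.valueOf(lastSeenId) }, // selection args
        null, null,                                  // groupBy, having
        "id",                                        // orderBy
        "20");                                       // limit

Paging on id > ? with an ORDER BY on id, rather than a growing OFFSET, also sidesteps the shifting-rows problem the next answer warns about.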
You can use LIMIT and OFFSET to specify a set of rows to return.
However:
This is risky if other threads may be updating the database, particularly if your query uses ORDER BY.
Use tools like Traceview to measure how long things actually take, and use that to determine the number of rows to fetch. 20 seems annoyingly small.
If your table "might contain huge number of rows", you should focus on a search interface rather than expecting people to browse linearly through a list long enough to warrant this sort of batching.

Android - easy/efficient way to maintain a "cumulative sum" for a SQLite column

What is the best way to maintain a "cumulative sum" of a particular data column in SQLite? I have found several examples online, but I am not 100% certain how I might integrate these approaches into my ContentProvider.
In previous applications, I have tried to maintain cumulative data myself, updating it each time I insert new data into the table. For example, with the sample data below, every time I added a new record with a value for score, I would manually update cumulative_score based on its value in the previous row.
_id score cumulative_score
1 100 100
2 50 150
3 25 175
4 25 200
5 10 210
However, this is far from ideal and becomes very messy when handling tables with many columns. Is there a way to somehow automate the process of updating cumulative data each time I insert/update records in my table? How might I integrate this into my ContentProvider implementation?
I know there must be a way to do this... I just don't know how. Thanks!
Probably the easiest way is with a SQLite trigger. That is the closest thing I know of to "automation". Just have an insert trigger that takes the previous row's cumulative sum, adds the current score, and stores the result in the new row's cumulative sum. Something like this (assuming _id is the column you are ordering on):
CREATE TRIGGER calc_cumulative_score AFTER INSERT ON tablename FOR EACH ROW
BEGIN
UPDATE tablename SET cumulative_score =
    IFNULL((SELECT cumulative_score
            FROM tablename
            WHERE _id = (SELECT MAX(_id) FROM tablename WHERE _id < new._id)), 0)
    + new.score
WHERE _id = new._id;
END;
Note that the inner SELECT MAX(_id) has to exclude the just-inserted row (the trigger fires AFTER INSERT, so new._id is already the maximum), and IFNULL covers the very first row, which has no predecessor.
Make sure that the trigger and the original insert are in the same transaction. For arbitrary updates of the score column, you would have to implement a recursive trigger that somehow finds the next highest id (maybe by selecting the minimum id among the rows with an id greater than the current one) and updates its cumulative sum.
If you are opposed to using triggers, you can do more or less the same thing in
the ContentProvider in the insert and update methods manually, though since
you're pretty much locked into SQLite on Android, I don't see much reason not to
use triggers.
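For the ContentProvider integration the question asks about, the natural place to install the trigger is the SQLiteOpenHelper that backs the provider, so that insert() needs no cumulative-sum code at all. A sketch under assumed names (ScoreDbHelper and the table layout are illustrative; the trigger is the one from the answer above):

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class ScoreDbHelper extends SQLiteOpenHelper {
    public ScoreDbHelper(Context context) {
        super(context, "scores.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE tablename ("
                + "_id INTEGER PRIMARY KEY AUTOINCREMENT, "
                + "score INTEGER NOT NULL, "
                + "cumulative_score INTEGER)");
        // install the trigger alongside the table; it fires on every insert
        db.execSQL("CREATE TRIGGER calc_cumulative_score AFTER INSERT ON tablename "
                + "FOR EACH ROW BEGIN "
                + "UPDATE tablename SET cumulative_score = "
                + "IFNULL((SELECT cumulative_score FROM tablename "
                + "WHERE _id = (SELECT MAX(_id) FROM tablename WHERE _id < new._id)), 0) "
                + "+ new.score "
                + "WHERE _id = new._id; "
                + "END");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // illustrative only; a real app would migrate existing data
        db.execSQL("DROP TRIGGER IF EXISTS calc_cumulative_score");
        db.execSQL("DROP TABLE IF EXISTS tablename");
        onCreate(db);
    }
}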
I assume you want to do this as an optimization, as otherwise you could just calculate the sum on demand (O(n) per read vs. O(1) maintained, so you'd have to consider how big n might get and how often you need the sums).

Android: SQLite FTS3 slows down when fetching next/previous rows

I have a SQLite db that at the moment has a few tables, the biggest of which has over 10,000 rows. This table has four columns: id, term, definition, category. I have used the FTS3 module to speed up searching, which helped a lot. However, now when I try to fetch the 'next' or 'previous' row from the table, it takes longer than it did before I started using FTS3.
This is how I create virtual table:
CREATE VIRTUAL TABLE profanity USING fts3(_id integer primary key,name text,definition text,category text);
This is how I fetch next/previous rows:
SELECT * FROM dictionary WHERE _id < "+id + " ORDER BY _id DESC LIMIT 1
SELECT * FROM dictionary WHERE _id > "+id + " ORDER BY _id LIMIT 1
When I run these statements on the virtual table:
NEXT term is fetched within ~300 ms,
PREVIOUS term is fetched within ~200 ms
When I do it with the normal table (the one created without FTS3):
NEXT term is fetched within ~3 ms,
PREVIOUS term is fetched within ~2 ms
Why is there such a big difference? Is there any way I can improve this speed?
EDITED:
I still can't get it to work!
The virtual table you've created is designed for full-text queries; it is not aimed at fast processing of standard queries that use the primary key in the WHERE condition.
In this case there is no index on your _id column, so SQLite probably performs a full table scan.
The next problem is your query itself - it's inefficient. Try something like this (untested):
SELECT * FROM dictionary WHERE _id = (select max(_id) from dictionary where _id < ?)
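From Android code, that lookup can be issued as a parameterized rawQuery instead of string concatenation. A small sketch, assuming an open SQLiteDatabase db:

// previous row: the highest _id below the current one, found in a single statement
Cursor previous = db.rawQuery(
        "SELECT * FROM dictionary WHERE _id = "
                + "(SELECT MAX(_id) FROM dictionary WHERE _id < ?)",
        new String[] { String.valueOf(id) });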
The next thing you can consider is a redesign of your app. Instead of loading one row at a time, fetch, say, 40, keep them in memory, and load more data in the background whenever fewer than n rows remain toward either end. A long SQL operation becomes invisible to the user even if it takes 3 s instead of 0.3 s.
If you're running LIMIT 1 to begin with, you may be able to remove the ORDER BY clause completely, which may help. I'm not familiar with FTS3, however.
You could also just flat out increment or decrement your id variable and issue WHERE _id = "+id+" LIMIT 1, which would make it a single lookup instead of a < or > comparison.
Edit: and now that I look back at what I typed, if you do it that way, you can remove LIMIT 1 completely, since _id is your pk and must be unique.
hey look, a raw where clause!
