As the title says, I'm trying to set a column value based on the sum of two other columns in the same table. I have a row in a table with the attributes "capital, income, mat_expense, other_expense, net_profit". I will be updating this row every time I sell a new product or register a payment: when I sell a product, I add the sale price to the "income" attribute; when I register a mat_expense (raw material expense), I add the new expense to that attribute, and the same for other_expense. My point is that I want to calculate the net_profit of my sale: if I sell $20 and spend $10 on raw material, I want my net_profit attribute to be 10, and I want the same calculation performed every time I update the table. I want to do the same thing with the "capital" attribute (income - (mat_expense + other_expense)). That is basically what I need to do. I've read that this kind of operation should be done by a "trigger" in SQLite, and I've been reading some posts, but I don't get how to fit them to my case. Can you give me a hand with this? Example:
| capital | income | mat_expense | other_expense | net_profit |
|    5    |   20   |     10      |       5       |     10     |
By the way, one more question: would it be possible to make a trigger that lets an attribute act as an accumulator? As I explained above, I'll be updating some attributes by adding new values. Every time I do that, I currently have to query the current value, save it in a variable, and then add the new value to it, which doesn't seem very efficient.
Thank you so much for reading, and I really appreciate any help you can give.
In SQL you can use expressions to define new columns, e.g.:

select
    income,
    mat_expense,
    other_expense,
    income - (mat_expense + other_expense) as capital
from your_table;

You will get a fourth column called 'capital'.
As for the second question - you should use such calculated virtual columns whenever possible. Expressions may be quite complex, may include SQL functions, and may even contain subqueries. For example, you can add a column with the minimal value from rows of another table correlated with the current row of the first table.
Generally, the SQL language is about transforming source sets of data into other representations - some tables into others.
When you can't calculate the result set in a single SQL statement, you may have to calculate intermediate temporary data/variables, perhaps via temp tables/cursors etc., but that should be the last resort. Still, we can't avoid it sometimes.
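For the concrete case in the question, the derived columns and the accumulator can both be handled by a single trigger. A minimal sketch (shown via Python's sqlite3 for brevity; the trigger and table names are made up, and net_profit is taken as income - mat_expense, matching the example figures):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE finances (
    capital       REAL,
    income        REAL,
    mat_expense   REAL,
    other_expense REAL,
    net_profit    REAL
);

-- Recompute the derived columns whenever one of the inputs changes.
-- The trigger only watches the input columns, so its own UPDATE of
-- net_profit/capital does not re-fire it.
CREATE TRIGGER recompute
AFTER UPDATE OF income, mat_expense, other_expense ON finances
FOR EACH ROW
BEGIN
    UPDATE finances SET
        net_profit = NEW.income - NEW.mat_expense,
        capital    = NEW.income - (NEW.mat_expense + NEW.other_expense)
    WHERE rowid = NEW.rowid;
END;

INSERT INTO finances VALUES (0, 0, 0, 0, 0);

-- The accumulator part needs no trigger at all: an UPDATE can read the
-- current value in place, so there is no need to SELECT it first.
UPDATE finances SET income        = income        + 20;
UPDATE finances SET mat_expense   = mat_expense   + 10;
UPDATE finances SET other_expense = other_expense + 5;
""")

row = conn.execute(
    "SELECT capital, income, mat_expense, other_expense, net_profit "
    "FROM finances").fetchone()
print(row)
```

Note how `SET income = income + 20` answers the second question directly: SQL lets the new value reference the old one, so no separate read into a variable is needed.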
Related
I have an SQLite DB where I perform a query like
Select * from table where col_name NOT IN ('val1','val2')
Basically I'm getting a huge list of values from the server, and I need to select the rows whose values are not present in that list.
Currently it's working fine, no issues. But the number of values from the server is becoming huge, as the server DB is updated frequently.
So I may get thousands of string values which I need to pass to the NOT IN clause.
My question is: will this cause any performance issues in the future? Does NOT IN have any size restriction (like a maximum of 10,000 values you can check)?
Will it cause a crash at some point?
This is an official reference about the various limits in SQLite. I think the Maximum Length Of An SQL Statement limit may relate to your case. The default value is 1,000,000 bytes, and it is adjustable.
Apart from this, I don't think any limit exists on the number of values in a NOT IN clause.
With more than a few values to test for, you're better off putting them in a table that has an index on the column holding them. Then things like
SELECT *
FROM table
WHERE col_name NOT IN (SELECT value_col FROM value_table);
or
SELECT *
FROM table AS t
WHERE NOT EXISTS (SELECT 1 FROM value_table WHERE value_col = t.col_name);
will be reasonably efficient no matter how many records are in value_table because that index will be used to find entries.
Plus, of course, it makes it a lot easier to re-use prepared statements, because you don't have to create a new one and re-bind every value each time you add a value to the set you need to check (you are using prepared statements with placeholders for these values, right, rather than putting their contents inline into a string?). You just insert the new value into value_table instead.
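A minimal sketch of the indexed-table approach (shown with Python's sqlite3; the table and column names follow the answer, and the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (col_name TEXT)")
conn.executemany("INSERT INTO data VALUES (?)",
                 [("a",), ("b",), ("c",), ("d",)])

# Load the server-supplied exclusion list into a table; the PRIMARY KEY
# gives it an index, so the NOT IN subquery can use an index lookup.
conn.execute("CREATE TEMP TABLE value_table (value_col TEXT PRIMARY KEY)")
server_values = ["b", "d"]  # imagine thousands of these
conn.executemany("INSERT OR IGNORE INTO value_table VALUES (?)",
                 [(v,) for v in server_values])

rows = conn.execute(
    "SELECT col_name FROM data "
    "WHERE col_name NOT IN (SELECT value_col FROM value_table)"
).fetchall()
print(rows)  # the rows whose values are not in the server list
```

Growing the exclusion list is now just another batch of inserts; the query text (and any prepared statement compiled from it) never changes.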
Yes, if you bind the values as parameters (`?` placeholders) there is a default limit of 999 host parameters, as reported in the official documentation: https://www.sqlite.org/limits.html#max_variable_number (values written literally into the SQL text are bounded only by the maximum statement length).
I have in my app a database with two tables: country and rights. Long story short, the db tells me whether a right (there are 10 rights in total) is legal or not in a specific country.
Now, I want the user to be able to search my db by criteria. I have a layout with checkboxes. If the user checks a box, it means he wants to see every country where that right is legal. For example, if he checks the boxes "criteria1" and "criteria6", he wants the list of every country where criteria1 and criteria6 are legal, and we don't care whether the other rights are legal or not.
I assigned values to the checkboxes (1 if legal, 0 if illegal, just like in my db) and pass all of them to the activity that displays the result of the search.
My problem is, I can't figure out how to search my database. I need to get only the countries where the selected criteria are equal to 1, but I don't know how to formulate my SQL request (since I never know which criteria are going to be checked). My request needs to cover only the criteria that have the value 1.
I had the idea of sending all my values to a function (which returns a cursor) where I execute a select statement if the value is equal to one, but I don't know how I could join the results of all my selects into one cursor. I also thought about using "CASE WHEN...", but it doesn't seem to work.
Does anyone have a clue how I could deal with my search?
If you need more details about my problem, please ask.
This guy here (https://www.youtube.com/watch?v=NGRV2qY9ZiU&list=PL200JxfhYIgCrrpH4rCz-uNfBTb5sng1e) has the right idea.
The clip may be a bit slow, but it does exactly what you want.
He builds a custom string based on whether each checkbox is checked, and removes the criterion from the string when it is unchecked.
To get what you want, you need to do a couple of things.
First, create a table with countries as rows and rights as columns. Add 1 if the right is present in a country and 0 if not. Get this into an SQLite database (e.g. import via CSV in DB Browser for SQLite, free software; don't forget to create the android_metadata table in the SQLite database - search online for this). Import the database into the app (there is plenty of documentation for this online).
Second, change the text entered in the if/else checkbox part of his code (he writes fruit names; you write e.g. "right1 = 1", or the exact condition the checkbox should apply to the column right1).
You also need to pay attention to selection.add and selection.remove (selection is an ArrayList which stores all your criteria for the search by column).
Third, you need to change the content of his finalSelection (View view).
Delete all he has written and just create two strings:
String final1 = android.text.TextUtils.join(" or ", selection);
String final2 = "select country from table where " + final1;
The string final2 is your key for a cursor with a rawQuery. Just create a cursor in the database and pass the key to it. This can be done easily.
PS the method android.text.TextUtils.join() is amazing :)
You can place the operator of choice there.
If you need more than one operator (and, or etc), you can create different ArrayLists which you fill in the if/else checkbox is filled and join later in the finalSelection.
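A minimal sketch of the joined-criteria query (shown with Python's sqlite3 for brevity; the table and column names are made up). One caveat: the question asks for countries where *all* checked rights are legal, which means joining with AND rather than OR:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE country_rights (country TEXT, right1 INTEGER, right6 INTEGER)")
conn.executemany("INSERT INTO country_rights VALUES (?, ?, ?)",
                 [("A", 1, 1),   # both rights legal
                  ("B", 1, 0),
                  ("C", 0, 1)])

# One fixed clause string per checked box (these are trusted constants,
# not user-typed text, so string-joining them is safe here).
selection = ["right1 = 1", "right6 = 1"]

query = "SELECT country FROM country_rights WHERE " + " AND ".join(selection)
countries = [row[0] for row in conn.execute(query)]
print(countries)  # only countries where every checked right is legal
```

Unchecked boxes simply contribute no clause, so the query automatically ignores the rights the user doesn't care about.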
Oh, btw, if you have too many checkboxes, you will get a warning in the XML file (layout has more than 80 views is bad for performance).
In order to get around that, you need to get to know grid views a bit better. After reading a few tutorials on the basic use of GridViews, a good start for checkboxes inside them is here.
It may seem like a lot, but you need to learn to use holders to get information out of the getView of the modified BaseAdapter.
If you want to understand it better, follow the arrPath.
It is a String[] filled with all the paths of images found inside the cursor (string values from the dataColumnIndex, which contains paths of images).
Within the onClick() listener of the Button, from the arrPath he extracts only the rows of the cursor that were selected by checkbox click (thumbnailsselection[i] is a boolean - with a value TRUE/FALSE for each row in the cursor).
The selected paths are placed in the selectImages String, separated by OR.
Scenario:
I am working with what I think is a fairly large SQLite database (around 20 MB) in my Android app, consisting of around 50 tables.
Most of these tables are linked by foreign keys, and a lot of the time, I need to retrieve information from two or more tables at a time. To illustrate an example:
Table1:
Id | Name | Attribute1 | Attribute2 | ForeignKey
1 | "Me" | SomeValue | AnotherVal | 49
2 | "A" | ... | ... | 50
3 | "B" | | | 49
Table2:
Id | Attribute3 | Attribute4 | Attribute5
49 | ThirdVal | FourthVal | FifthVal
50 | ... | ... | ...
Sometimes, there are more than two tables that link together in this way. Almost all of the time, there are more columns than those presented above, and there are usually around 1000 rows.
My aim is to display a few of the attributes from the database as items in a RecyclerView, but I will need to use both tables to retrieve these attributes.
My method:
Currently, I am using the android-sqlite-asset-helper library to copy this database (.db extension) from the assets folder into the app. When I recorded the time for this copying to happen, it completed in 732 ms, which is fine.
However, when I want to retrieve the data from two tables using the foreign key from the first table, it takes far too long. It took around 11.47 seconds when I tested this, and I want to speed this up.
The way I retrieve the data is that I read each row in the first table and put it into an object:
public static ArrayList<FirstItem> retrieveFirstItemList(Context context) {
    Cursor cursor = new DbHelper(context).getReadableDatabase()
            .query(DbHelper.TABLE_NAME, null, null, null, null, null, null);
    ArrayList<FirstItem> arrayList = new ArrayList<>();
    cursor.moveToFirst();
    while (!cursor.isAfterLast()) {
        // I read all the values from each column and put them into variables
        arrayList.add(new FirstItem(id, name, attribute1, attribute2, foreignKey));
        cursor.moveToNext();
    }
    cursor.close();
    return arrayList;
}
The FirstItem object would contain getter methods in addition to another used for getting the SecondItem object from the foreign key:
public SecondItem getSecondItem(Context context) {
    Cursor cursor = new SecondDbHelper(context).getReadableDatabase().query(
            SecondDbHelper.TABLE_NAME,
            null,
            SecondDbHelper.COL_ID + "=?",
            new String[] {String.valueOf(mForeignKey)},
            null, null, null);
    cursor.moveToFirst();
    // attribute3, attribute4 and attribute5 are read from the cursor here
    SecondItem secondItem = new SecondItem(mForeignKey, attribute3, attribute4, attribute5);
    cursor.close();
    return secondItem;
}
When I print values from both tables into the logcat (I have decided not to use any UI for now, to test database performance), I use something like this:
for (FirstItem firstItem : DBUtils.retrieveFirstItemList(this)) {
    Log.d("First item id", firstItem.getId());
    Log.d("Second item attr4", firstItem.getSecondItem(this).getAttribute4());
}
I suspect there is something wrong with this method as it needs to search through Table2 for each row in Table1 - I think it's inefficient.
An idea:
I have one other method I am considering using, however I do not know if it is better than my current solution, or if it is the 'proper' way to achieve what I want. What I mean by this is that I am unsure as to whether there is a way I could slightly modify my current solution to significantly increase performance. Nevertheless, here is my idea to improve the speeds of reading data from the database.
When the app loads for the first time, data from various tables of the SQLite database would be read then put into one SQLite database in the app. This process would occur when the app is run for the first time and each time the tables from the database are updated. I am aware that this would result in duplication of data across different rows, but it is the only way I see that would avoid me having to search multiple tables to produce a list of items.
// read values from the SQLite database and put them in arrays
ContentValues cv = new ContentValues();
// put values into variables
cv.put(COL_ID, id);
...
db.insert(TABLE_NAME, null, cv);
Since this process would also take a long time (as there are multiple rows), I was a little concerned that this would not be the best idea, however I read about transactions in some Stack Overflow answers, which would increase write speeds. In other words, I would use db.beginTransaction();, db.setTransactionSuccessful(); and db.endTransaction(); appropriately to increase the performance when rewriting the data to a new SQLite database.
So the new table would look like this:
Id | Name | Attribute1 | Attribute2 | Attribute3 | Attribute4 | Attribute5
1 | "Me" | SomeValue | AnotherVal | ThirdVal | FourthVal | FifthVal
2 | "A" | ... | ... | ... | ... | ...
3 | "B" | SomeValue | AnotherVal | ThirdVal | FourthVal | FifthVal
This means that although there would be more columns in the table, I would avoid having to search through multiple tables for each row in the first table, and the data would be more easily accessible too (for filtering and things like that). Most of the 'loading' would be done at the start, and hopefully sped up with methods for transactions.
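The batched rewrite described above can be sketched as follows (Python's sqlite3 for brevity; the table name and row count are made up, and the `with conn:` block corresponds to beginTransaction() / setTransactionSuccessful() / endTransaction() on Android):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE combined (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, "name%d" % i) for i in range(1000)]

# All inserts run inside a single transaction: the "with" block commits
# once at the end instead of paying per-statement commit overhead.
with conn:
    conn.executemany("INSERT INTO combined VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM combined").fetchone()[0]
print(count)
```

The speedup comes from committing once rather than per row; the per-insert cost inside the transaction is small.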
Overview:
To summarise, I want to speed up reading from an SQLite database with multiple tables, where I have to look through these tables for each row of the first table in order to produce the desired result. This takes a long time, and is inefficient, but I'm not sure if there is a way I can adjust my current method to greatly improve read speeds. I think I should 'load' the data when the app is first run, by reorganising the data from various tables into one table.
So I am asking, which of the two methods is better (mostly concerning performance)? Is there a way I can adjust my current method or is there something I am doing incorrectly? Finally, if there is a better way to do this than the two methods I have already mentioned, what is it and how would I go about implementing it?
A couple of things that you should try:
Optimise your loading. As far as I understood your current method, it runs into the N + 1 queries problem. You have to execute a query to get the first batch of data, and then another query for every row of the original result set, so you can fetch the related data. It's normal that you get a performance problem with that approach. I don't think it's scalable and I would recommend you move away from it. The easiest way is to use joins instead of multiple queries. This is referred to as eager loading.
Introduce appropriate indexes on your tables. If you are performing a lot of joins, you should really think about speeding them up. Indexes are the obvious choice here. Normally, primary key columns are indexed by default, but foreign keys are not. This means that you perform linear searches on your tables for each join, and this is slow. I would try to introduce indexes on your foreign key columns (and all columns that are used in joins). Try to measure the performance of a join before and after to see whether you have made any progress.
Consider using database views. They are quite useful when you have to perform joins often. When creating a view, you get a precompiled query and save quite a bit of time compared to assembling the join each time. You can try executing the query using joins and then against a view, and this will show how much time you save. The downside is that it is a bit harder to map your result set to a hierarchy of Java objects but, at least in my experience, the performance gain is worth it.
You can try and use some kind of lazy loading. Defer loading the related data unless it is being explicitly requested. This can be hard to implement, and I think that it should be your last resort, but it's an option nevertheless. You may get creative and leverage dynamic proxies or something like this to actually perform the loading logic.
To summarise, being smart with indexes / views should do the trick most of the time. Combine this with eager / lazy loading, and you should be able to get to the point where you are happy with your performance.
EDIT: Info on Indexes, Views and Android Implementation
Indexes and Views are not alternatives to the same problem. They have different characteristics and application.
When you apply an index to a column, you speed up the search on that column's values. You can think of it as a linear search vs. a tree search. This speeds up joins, because the database already knows which rows correspond to the foreign key value in question. Indexes have a beneficial effect on simple select statements as well, not only ones using joins, since they also speed up the evaluation of where clause criteria. They come with a catch, though: indexes speed up queries, but they slow down insert, update and delete operations (since the indexes have to be maintained as well).
Views are just precompiled and stored queries, whose result sets you can query just like a normal table. The gain here is that you don't need to compile and validate the query each time.
You should not limit yourself to just one of the two things. They are not mutually exclusive and can give you optimal results when combined.
As far as the Android implementation goes, there is not much to do. SQLite supports both indexes and views out of the box. The only thing you have to do is create them. The easiest way is to modify your database creation script to include CREATE INDEX and CREATE VIEW statements. You can combine the creation of a table with the creation of an index, or you can add it later manually if you need to update an already existing schema. Just check the SQLite manual for the appropriate syntax.
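A minimal sketch pulling the three suggestions together: one JOIN query instead of N + 1 lookups, an index on the foreign key column, and a view wrapping the join (Python's sqlite3 for brevity; the table, index and view names are made up, and the columns follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table2 (id INTEGER PRIMARY KEY, attribute3, attribute4, attribute5);
CREATE TABLE table1 (id INTEGER PRIMARY KEY, name, attribute1, attribute2,
                     foreign_key INTEGER REFERENCES table2(id));

-- Index the foreign key column, as the answer recommends.
CREATE INDEX idx_table1_fk ON table1(foreign_key);

-- A view wrapping the join; it can be queried like a table.
CREATE VIEW first_with_second AS
    SELECT t1.id, t1.name, t2.attribute4
      FROM table1 t1
      JOIN table2 t2 ON t1.foreign_key = t2.id;

INSERT INTO table2 VALUES (49, 'ThirdVal', 'FourthVal', 'FifthVal');
INSERT INTO table1 VALUES (1, 'Me', 'SomeValue', 'AnotherVal', 49);
""")

# One query replaces the per-row getSecondItem() lookups.
result = conn.execute(
    "SELECT name, attribute4 FROM first_with_second").fetchall()
print(result)
```

On Android the same CREATE INDEX / CREATE VIEW statements would go in the database creation script, and the view would be queried through a single Cursor.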
Maybe try this: https://realm.io/products/java/. I have never used it and know nothing about its performance, but it could be an approach that interests you... or not ;)
I've got two SQLite databases, each with a table that I need to keep synchronized by merging rows that have the same key. The tables are laid out like this:
CREATE TABLE titles ( name TEXT PRIMARY KEY,
chapter TEXT ,
page INTEGER DEFAULT 1 ,
updated INTEGER DEFAULT 0 );
I want to be able to run the same commands on each of the two tables, with the result that for pairs of rows with the same name, whichever row has the greater value in updated will overwrite the other row completely, and rows which do not have a match are copied across, so both tables are identical when finished.
This is for an Android app, so I could feasibly do the comparisons in Java, but I'd prefer an SQLite solution if possible. I'm not very experienced with SQL, so the more explanation you can give, the more it'll help.
EDIT
To clarify: I need something I can execute at an arbitrary time, to be invoked by other code. One of the two databases is not always present, and may not be completely intact when operations on the other occur, so I don't think a trigger will work.
Assuming that you have attached the other database to your main database:
ATTACH '/some/where/.../the/other/db-file' AS other;
you can first delete all records that are to be overwritten because their updated field is smaller than the corresponding updated field in the other table:
DELETE FROM main.titles
WHERE updated < (SELECT updated
FROM other.titles
WHERE other.titles.name = main.titles.name);
and then copy all newer and missing records:
INSERT INTO main.titles
SELECT * FROM other.titles
WHERE name NOT IN (SELECT name
FROM main.titles);
To update in the other direction, exchange the main/other database names.
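As a quick end-to-end check, the two statements above can be run from, say, Python's sqlite3 module (the file name and sample rows here are made up for illustration):

```python
import os
import sqlite3
import tempfile

SCHEMA = """CREATE TABLE titles ( name TEXT PRIMARY KEY,
                                  chapter TEXT ,
                                  page INTEGER DEFAULT 1 ,
                                  updated INTEGER DEFAULT 0 );"""

# Build the "other" database as a file so it can be ATTACHed.
other_path = os.path.join(tempfile.mkdtemp(), "other.db")
other = sqlite3.connect(other_path)
other.execute(SCHEMA)
other.executemany("INSERT INTO titles VALUES (?, ?, ?, ?)",
                  [("A", "ch2", 5, 2),   # newer than main's copy of "A"
                   ("B", "ch1", 1, 0)])  # missing from main
other.commit()
other.close()

main_db = sqlite3.connect(":memory:")
main_db.execute(SCHEMA)
main_db.executemany("INSERT INTO titles VALUES (?, ?, ?, ?)",
                    [("A", "ch1", 1, 1),
                     ("C", "ch3", 2, 0)])
main_db.execute("ATTACH ? AS other", (other_path,))

# Step 1: delete rows for which the other table holds a newer version.
# Step 2: copy across all rows now missing (both newer and absent ones).
main_db.executescript("""
DELETE FROM main.titles
 WHERE updated < (SELECT updated
                    FROM other.titles
                   WHERE other.titles.name = main.titles.name);
INSERT INTO main.titles
 SELECT * FROM other.titles
  WHERE name NOT IN (SELECT name FROM main.titles);
""")

rows = main_db.execute(
    "SELECT name, chapter, updated FROM main.titles ORDER BY name").fetchall()
print(rows)  # "A" replaced by the newer copy, "B" copied, "C" untouched
```

Running the same two statements with main/other exchanged brings the other database up to date as well.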
For this, you could in principle use a trigger, i.e.:

CREATE TRIGGER sync_trigger
AFTER INSERT OR UPDATE OF updated ON titles
REFERENCING NEW AS n
FOR EACH ROW
DECLARE updated_match;
DECLARE prime_name;
DECLARE max_updated;
BEGIN
    SET prime_name = n.name;
    ATTACH database2name AS db2;

    SELECT updated
      INTO updated_match
      FROM db2.titles t
     WHERE t.name = prime_name;

    IF updated_match IS NOT NULL THEN
        IF n.updated > updated_match THEN
            SET max_updated = n.updated;
        ELSE
            SET max_updated = updated_match;
        END IF;

        UPDATE titles
           SET updated = max_updated
         WHERE name = prime_name;

        UPDATE db2.titles
           SET updated = max_updated
         WHERE name = prime_name;
    END IF;
END sync_trigger;

Be warned that this is generic SQL-style pseudocode rather than valid SQLite: SQLite triggers support neither variables (DECLARE/SET) nor IF blocks, there is no AFTER INSERT OR UPDATE form, a trigger body cannot run ATTACH, and statements inside a trigger body cannot use qualified names referring to an attached database. So treat it as an idea of where to start rather than something you can paste in. You would need to assign a trigger like this to one database, exchanging "database2name" for the other database's name, and then assign it again to the other database, swapping "database2name" the other way.
Hope this helps.
Hope this helps.
What is the best way to maintain a "cumulative sum" of a particular data column in SQLite? I have found several examples online, but I am not 100% certain how I might integrate these approaches into my ContentProvider.
In previous applications, I have tried to maintain cumulative data myself, updating the data each time I insert new data into the table. For example, in the sample code below, every time I would add a new record with a value score, I would then manually update the value of cumulative_score based on its value in the previous row.
_id | score | cumulative_score
  1 |   100 |              100
  2 |    50 |              150
  3 |    25 |              175
  4 |    25 |              200
  5 |    10 |              210
However, this is far from ideal and becomes very messy when handling tables with many columns. Is there a way to somehow automate the process of updating cumulative data each time I insert/update records in my table? How might I integrate this into my ContentProvider implementation?
I know there must be a way to do this... I just don't know how. Thanks!
Probably the easiest way is with an SQLite trigger. That is the closest I know of to "automation". Just have an insert trigger that takes the previous cumulative sum, adds the current score and stores the result in the new row's cumulative sum. Something like this (assuming _id is the column you are ordering on):
CREATE TRIGGER calc_cumulative_score AFTER INSERT ON tablename FOR EACH ROW
BEGIN
    UPDATE tablename SET cumulative_score =
        COALESCE((SELECT cumulative_score
                    FROM tablename
                   WHERE _id = (SELECT MAX(_id) FROM tablename
                                 WHERE _id < NEW._id)), 0)
        + NEW.score
    WHERE _id = NEW._id;
END;

Note that in an AFTER INSERT trigger the new row is already present, so the previous-row lookup must exclude it (_id < NEW._id); the COALESCE handles the very first row, which has no predecessor.
Make sure that the trigger and the original insert run in the same transaction. For arbitrary updates of the score column, you would have to implement a recursive trigger that somehow finds the next highest id (maybe by selecting the min id in the set of rows with an id greater than the current one) and updates its cumulative sum.
If you are opposed to using triggers, you can do more or less the same thing in
the ContentProvider in the insert and update methods manually, though since
you're pretty much locked into SQLite on Android, I don't see much reason not to
use triggers.
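Here is a quick way to sanity-check the trigger approach end-to-end (a minimal Python sqlite3 sketch; the table name is made up, and the previous-row lookup excludes the row being inserted, since in an AFTER INSERT trigger the new row is already MAX(_id)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scores (_id INTEGER PRIMARY KEY,
                     score INTEGER,
                     cumulative_score INTEGER);

CREATE TRIGGER calc_cumulative_score AFTER INSERT ON scores FOR EACH ROW
BEGIN
    UPDATE scores SET cumulative_score =
        -- previous row's running total, or 0 for the first row
        COALESCE((SELECT cumulative_score FROM scores
                   WHERE _id = (SELECT MAX(_id) FROM scores
                                 WHERE _id < NEW._id)), 0)
        + NEW.score
    WHERE _id = NEW._id;
END;
""")

# Insert the scores from the question's example table one at a time.
for s in (100, 50, 25, 25, 10):
    conn.execute("INSERT INTO scores (score) VALUES (?)", (s,))
conn.commit()

sums = [row[0] for row in
        conn.execute("SELECT cumulative_score FROM scores ORDER BY _id")]
print(sums)
```

A ContentProvider built over this table needs no extra logic: each insert through it fires the trigger automatically.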
I assume you are wanting to do this as an optimization, as otherwise you could just calculate the sum on demand (O(n) vs O(1), so you'd have to consider how big n might get, and how often you need the sums).