My database has 6384 records and I am using the query below:
SELECT T.t_name, S.s_code, S.s_name, R.s_code, R.s_name, M.arrival_time, L.arrival_time, M.dest_time, M.train_id, S.id, R.id
FROM TRAIN_SCHEDULE M,
TRAIN_SCHEDULE L,
TRAIN T,
STATION S,
STATION R
WHERE S.s_name = 'Versova'
AND R.s_name = 'Ghatkopar'
AND M.arrival_time > '00:00:00'
AND M.arrival_time < L.arrival_time
AND M.train_id = L.train_id
AND M.dest_time = L.dest_time
AND T.id = M.train_id
AND S.id = M.station_id
AND R.id = L.station_id
This query takes 8 seconds to fetch the data.
I have also indexed my tables, but that only reduced the fetch time to 2 seconds.
Schema:
CREATE TABLE [STATION] (
[id] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
[s_code] VARCHAR(10) NOT NULL,
[s_name] VARCHAR(50) NOT NULL);
CREATE TABLE TRAIN_SCHEDULE(
id INT,
station_id INT,
train_id INT,
arrival_time NUM,
departure_time NUM,
dest_time NUM
);
CREATE TABLE TRAIN(id INT,t_name TEXT);
CREATE INDEX idx_arrival_time ON train_schedule (arrival_time);
CREATE INDEX idx_dest_time ON train_schedule (dest_time);
CREATE INDEX idx_id ON train (id);
How can I improve this?
You can check with EXPLAIN QUERY PLAN which indexes are being used.
In this query, the database needs to scan through the STATION table; an index on the name column would improve this (although not by much with such a small table):
CREATE INDEX Station_Name ON STATION(s_name);
Also, lookups on the TRAIN_SCHEDULE table are done over multiple columns.
The query optimizer cannot use more than one index per table instance, so you should create a multi-column index.
And a column with a non-equality comparison must come last (see the documentation):
CREATE INDEX Schedule_Station_Train_DestTime_ArrivalTime
ON TRAIN_SCHEDULE(station_id, train_id, dest_time, arrival_time);
Also execute ANALYZE once to help the optimizer pick the right index.
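For illustration, a rough sqlite3 sketch of those steps, using the indexes suggested above and a trimmed select list (the exact plan output will vary with the SQLite version):
CREATE INDEX Station_Name ON STATION(s_name);
CREATE INDEX Schedule_Station_Train_DestTime_ArrivalTime
    ON TRAIN_SCHEDULE(station_id, train_id, dest_time, arrival_time);
-- Gather statistics so the optimizer can choose between the available indexes.
ANALYZE;
-- Inspect the plan: SEARCH lines indicate index use, SCAN lines indicate full table scans.
EXPLAIN QUERY PLAN
SELECT M.arrival_time, L.arrival_time
FROM TRAIN_SCHEDULE M, TRAIN_SCHEDULE L, TRAIN T, STATION S, STATION R
WHERE S.s_name = 'Versova'
  AND R.s_name = 'Ghatkopar'
  AND M.arrival_time > '00:00:00'
  AND M.arrival_time < L.arrival_time
  AND M.train_id = L.train_id
  AND M.dest_time = L.dest_time
  AND T.id = M.train_id
  AND S.id = M.station_id
  AND R.id = L.station_id;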
I have an application where I detect the type of a particular column at run-time, on page load. Please refer to the code below:
public String fncCheckColumnType(String strColumnName) {
    db = this.getWritableDatabase();
    String strColumnType = "";
    Cursor typeCursor = db.rawQuery("SELECT typeof(" + strColumnName + ") FROM tblUsers", null);
    typeCursor.moveToFirst();
    strColumnType = typeCursor.getString(0);
    // close the cursor to avoid a leak
    typeCursor.close();
    return strColumnType;
}
The above method simply detects the type of the column named in 'strColumnName', and it returns the type correctly.
Now, I want to change the column type to TEXT if I receive INTEGER as the column type. For this, I tried the code below:
public String fncChangeColumnType(String strColumnName) {
    db = this.getWritableDatabase();
    String newType = "";
    Cursor changeCursor = db.rawQuery("ALTER TABLE tblUsers MODIFY COLUMN " + strColumnName + " TEXT", null);
    if (changeCursor != null && changeCursor.moveToFirst()) {
        newType = changeCursor.getString(0);
    }
    return newType;
}
But while executing the 'fncChangeColumnType' method, I get this error:
android.database.sqlite.SQLiteException: near "MODIFY": syntax error (code 1): , while compiling: ALTER TABLE tblUsers MODIFY COLUMN UserID TEXT
NOTE: I also replaced 'MODIFY' with 'ALTER', but I still get the same error.
Is this the right method to change the column type dynamically?
Please respond back if someone has a solution to this.
Thanks in advance.
In brief, the solution could be :-
Do nothing (i.e. take advantage of SQLite's flexibility)
Utilise CAST, e.g. CAST(mycolumn AS TEXT) (as used below)
Create a new table to replace the old table.
Explanations.
With SQLite there are limitations on what can be altered. In short, you cannot change a column: ALTER TABLE only allows you to rename a table or to add a column. As per :-
SQL As Understood By SQLite - ALTER TABLE
However, with the exception of a column that is an alias of the rowid column (one defined with ?? INTEGER PRIMARY KEY, ?? INTEGER PRIMARY KEY AUTOINCREMENT, or ?? INTEGER ... PRIMARY KEY(??), where ?? represents a valid column name), you can store any type of value in any type of column. For example, consider the following, which stores an INTEGER, a REAL, a TEXT, a date that ends up being TEXT, and a BLOB :-
CREATE TABLE IF NOT EXISTS example1_table (col1 BLOB);
INSERT INTO example1_table VALUES (1),(5.678),('fred'),(date('now')),(x'ffeeddccbbaa998877665544332211');
SELECT *, typeof(col1) FROM example1_table;
The result is that each value is stored exactly as supplied: typeof(col1) reports integer, real, text, text (for the date), and blob, even though the column was declared as BLOB.
As such, is there a need to change the column type at all?
If the above is insufficient, then your only option is to create a new table with the new column definitions, populate it (if required) from the original table, and then replace the original table with the new one (either a) drop the original and b) rename the new, or a) rename the original, b) rename the new, and c) drop the original).
e.g. :-
DROP TABLE IF EXISTS original;
CREATE TABLE IF NOT EXISTS original (mycolumn INTEGER);
INSERT INTO original VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(0);
-- The original table now exists and is populated
CREATE TABLE IF NOT EXISTS newtable (mycolumn TEXT);
INSERT INTO newtable SELECT CAST(mycolumn AS TEXT) FROM original;
ALTER TABLE original RENAME TO old_original;
ALTER TABLE newtable RENAME TO original;
DROP TABLE IF EXISTS old_original;
SELECT *,typeof(mycolumn) FROM original;
The result being that every one of the ten rows now stores its value as TEXT, with typeof(mycolumn) reporting text for each row.
I think the SQL query statement is wrong; try
ALTER TABLE tblUsers MODIFY COLUMN id TYPE integer USING (id::integer);
Instead of id, use your column name.
Hope this helps.
EDIT:
"ALTER TABLE tblUsers MODIFY COLUMN " + strColumnName + " TYPE integer USING (" + strColumnName + "::integer);"
We have a requirement where some fields in a table need to have the same value as their ID. Unfortunately, we currently have to insert a new record and then, if needed, run another update to set the duplicate field (ID_2) value to equal the ID.
Here is the Android Sqlite code:
mDb.beginTransaction();
try {
    // ... setting various fields here ...
    ContentValues contentValues = new ContentValues();
    contentValues.put(NAME, obj.getName());
    // now insert the record
    long objId = mDb.insert(TABLE_NAME, null, contentValues);
    obj.setId(objId);
    // id2 needs to be the same as id:
    obj.setId2(objId);
    // but we need to persist it, so we update it in a SECOND call
    StringBuilder query = new StringBuilder();
    query.append("update " + TABLE_NAME);
    query.append(" set " + ID_2 + "=" + objId);
    query.append(" where " + ID + "=" + objId);
    mDb.execSQL(query.toString());
    mDb.setTransactionSuccessful();
} finally {
    // endTransaction() must be called for the transaction to be committed
    mDb.endTransaction();
}
As you can see, we are making a second call to set ID_2 to the same value of ID. Is there any way to set it at INSERT time and avoid the second call to the DB?
Update:
The ID is defined as follows:
ID + " INTEGER PRIMARY KEY NOT NULL ," +
The algorithm used for autoincrementing columns is documented, so you could implement it manually in your code, and then use the new value for the INSERT.
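A minimal sketch of that idea, assuming the column is a rowid alias and no rows have been deleted (the MY_TABLE / ID / ID_2 / NAME names are illustrative, not taken from the question):
-- Without AUTOINCREMENT, SQLite normally assigns max(rowid)+1 as the next id,
-- so the same value can be computed and written to both columns in one INSERT.
INSERT INTO MY_TABLE (ID, ID_2, NAME)
SELECT COALESCE(MAX(ID), 0) + 1,
       COALESCE(MAX(ID), 0) + 1,
       'some name'
FROM MY_TABLE;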
The following is quite an ugly hack, but it may be possible:
with id_table as (
select coalesce(max(seq), 0) + 1 as id_column
from sqlite_sequence
where name = 'MY_TABLE'
)
insert into MY_TABLE(ID_1, ID_2, SOME, OTHER, COLUMNS)
select id_column, id_column, 'SOME', 'OTHER', 'VALUES'
from id_table
It only works if the table ID is an AUTOINCREMENT, and is therefore managed via the documented sqlite_sequence table.
I also have no idea what happens in the case of concurrent executions.
You could use an AFTER INSERT TRIGGER e.g.
Create your table (at least for this example) so that ID_2 is defined as INTEGER DEFAULT -1 (0 or any negative value would be OK), e.g.
CREATE TABLE IF NOT EXISTS triggertest (_id INTEGER PRIMARY KEY, name TEXT, id_2 INTEGER DEFAULT -1);
Then you could use something like the following (perhaps created straight after the table, or perhaps created just before it is needed and dropped once you are done with it) :-
CREATE TRIGGER triggertesting001
AFTER INSERT ON triggertest
BEGIN
UPDATE triggertest SET id_2 = `_id`
WHERE id_2 = -1;
END;
Drop using DROP TRIGGER IF EXISTS triggertesting001;
Example usage (testing):-
INSERT INTO triggertest (name) VALUES('Fred');
INSERT INTO triggertest (name) VALUES('Bert');
INSERT INTO triggertest (name) VALUES('Harry');
Result 1: with the trigger in place, the three inserted rows each end up with id_2 equal to _id.
Result 2 (trigger dropped, inserts run again): the three new rows keep the default id_2 of -1.
Result 3 (trigger re-created): same as above, since creating the trigger does not touch existing rows.
Result 4 (inserts run a third time): the trigger catches up, i.e. the 6 rows that still had id_2 = -1 get id_2 set to _id.
I'd strongly suggest reading SQL As Understood By SQLite - CREATE TRIGGER
Alternative solution
An alternative approach could be to simply :-
Before starting the transaction, retrieve mynextid from the table described below, then use it for both columns:
ContentValues contentValues = new ContentValues();
contentValues.put(ID, mynextid);
contentValues.put(ID_2, mynextid++);
contentValues.put(NAME, obj.getName());
Then, at the end of the transaction, update/store the value of mynextid in a simple single-column, single-row table.
i.e. you are managing the ids yourself (not too dissimilar to how SQLite manages ids when AUTOINCREMENT is specified).
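A rough SQL sketch of that bookkeeping table (the id_tracker / next_id names are hypothetical, not from the question):
-- One-time setup: a single-column, single-row table holding the next id to hand out.
CREATE TABLE IF NOT EXISTS id_tracker (next_id INTEGER NOT NULL);
INSERT INTO id_tracker (next_id)
SELECT 1 WHERE NOT EXISTS (SELECT 1 FROM id_tracker);
-- Before the transaction: read the value to use for both ID and ID_2.
SELECT next_id FROM id_tracker;
-- At the end of the transaction: persist the incremented value.
UPDATE id_tracker SET next_id = next_id + 1;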
I have an SQLite database containing 2 tables:
names
n_data
and this query:
select
n_data.id,n_data.value, count( n_data.id) as count
from
n_data
INNER JOIN names on names.id = n_data.name_id
group by
n_data.name_id
order by
n_data.id asc
In the activity I use a Cursor and a while loop:
while (res.moveToNext()) {
    System.out.println("id=>" + res.getString(0) + " count=>" + res.getString(2) + " =value=>" + res.getString(1));
}
but the result only shows the last row of each group. How can I get all rows for every group?
CREATE TABLE "names" (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT );
INSERT INTO names (id,name) VALUES
(1,'name_1'),
(2,'name_2'),
(3,'name_3'),
(4,'name_4');
CREATE TABLE "n_data" (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name_id TEXT,
value TEXT );
INSERT INTO n_data (id,name_id,value) VALUES
(1,'3','value_8'),
(2,'2','value_7'),
(3,'2','value_6'),
(4,'2','value_5'),
(5,'1','value_4'),
(6,'1','value_3'),
(7,'1','value_2'),
(8,'1','value_1'),
(9,'3','value_9');
The OP is satisfied by:
select
n_data.id,
group_concat(n_data.value) as 'all values',
count( n_data.id) as count
from
n_data INNER JOIN names
on names.id = n_data.name_id
group by n_data.name_id
order by n_data.id asc;
It uses group_concat(n_data.value) instead of n_data.value.
I.e. all the n_data.value entries that get counted by count(n_data.id) are concatenated.
Output (.headers on, .mode column and .width 3 32 6; SQLite 3.18.0 2017-03-28) :
id all values count
--- -------------------------------- ------
4 value_7,value_6,value_5 3
8 value_4,value_3,value_2,value_1 4
9 value_8,value_9 2
The tailored .width is needed; otherwise, for id 8, only 3 values are shown even though 4 are retrieved.
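For reference, these are the sqlite3 dot-commands mentioned above (the widths are simply what happened to fit this data):
.headers on
.mode column
.width 3 32 6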
I am working with GTFS data on Android (SQLite), and I would like to improve the performance of the SELECT queries on my database filled with GTFS data.
The query below selects the stop times associated with a route at a stop:
The first sub-query gets the daily stop times on Thursday.
The second gets all the exception stop times which are not valid for TODAY (2013-07-25).
The third one gets all the exception stop times which are only valid for TODAY (2013-07-25).
Then I remove the non-valid ones from the first sub-query and add the valid ones to it.
select distinct stop_times_arrival_time
from stop_times, trips, calendar
where stop_times_trip_id=trip_id
and calendar_service_id=trip_service_id
and trip_route_id='11821949021891616'
and stop_times_stop_id='3377699721872252'
and calendar_start_date<='20130725'
and calendar_end_date>='20130725'
and calendar_thursday=1
and stop_times_arrival_time>='07:40'
except
select stop_times_arrival_time
from stop_times, trips, calendar, calendar_dates
where stop_times_trip_id=trip_id
and calendar_service_id=trip_service_id
and calendar_dates_service_id = trip_service_id
and trip_route_id='11821949021891694'
and stop_times_stop_id='3377699720880977'
and calendar_thursday=1
and calendar_dates_exception_type=2
and stop_times_arrival_time > '07:40'
and calendar_dates_date = 20130725
union
select stop_times_arrival_time
from stop_times, trips, calendar, calendar_dates
where stop_times_trip_id=trip_id
and calendar_service_id=trip_service_id
and calendar_dates_service_id = trip_service_id
and trip_route_id='11821949021891694'
and stop_times_stop_id='3377699720880977'
and calendar_thursday=1
and calendar_dates_exception_type=1
and stop_times_arrival_time > '07:40'
and calendar_dates_date = 20130725;
It takes about 15 seconds to compute, which is very long.
I am sure there is a better way to do this, since I run 3 different queries (which are almost the same, by the way) and they take time.
Any idea how to improve it?
EDIT:
Here is the schema:
table|calendar|calendar|2|CREATE TABLE calendar (
calendar_service_id TEXT PRIMARY KEY,
calendar_monday INTEGER,
calendar_tuesday INTEGER,
calendar_wednesday INTEGER,
calendar_thursday INTEGER,
calendar_friday INTEGER,
calendar_saturday INTEGER,
calendar_sunday INTEGER,
calendar_start_date TEXT,
calendar_end_date TEXT
)
index|sqlite_autoindex_calendar_1|calendar|3|
table|calendar_dates|calendar_dates|4|CREATE TABLE calendar_dates (
calendar_dates_service_id TEXT,
calendar_dates_date TEXT,
calendar_dates_exception_type INTEGER
)
table|routes|routes|8|CREATE TABLE routes (
route_id TEXT PRIMARY KEY,
route_short_name TEXT,
route_long_name TEXT,
route_type INTEGER,
route_color TEXT
)
index|sqlite_autoindex_routes_1|routes|9|
table|stop_times|stop_times|12|CREATE TABLE stop_times (
stop_times_trip_id TEXT,
stop_times_stop_id TEXT,
stop_times_stop_sequence INTEGER,
stop_times_arrival_time TEXT,
stop_times_pickup_type INTEGER
)
table|stops|stops|13|CREATE TABLE stops (
stop_id TEXT PRIMARY KEY,
stop_name TEXT,
stop_lat REAL,
stop_lon REAL
)
index|sqlite_autoindex_stops_1|stops|14|
table|trips|trips|15|CREATE TABLE trips (
trip_id TEXT PRIMARY KEY,
trip_service_id TEXT,
trip_route_id TEXT,
trip_headsign TEXT,
trip_direction_id INTEGER,
trip_shape_id TEXT
)
index|sqlite_autoindex_trips_1|trips|16|
And here is the query plan:
2|0|0|SCAN TABLE stop_times (~33333 rows)
2|1|1|SEARCH TABLE trips USING INDEX sqlite_autoindex_trips_1 (trip_id=?) (~1 rows)
2|2|2|SEARCH TABLE calendar USING INDEX sqlite_autoindex_calendar_1 (calendar_service_id=?) (~1 rows)
3|0|3|SCAN TABLE calendar_dates (~10000 rows)
3|1|2|SEARCH TABLE calendar USING INDEX sqlite_autoindex_calendar_1 (calendar_service_id=?) (~1 rows)
3|2|0|SEARCH TABLE stop_times USING AUTOMATIC COVERING INDEX (stop_times_stop_id=?) (~7 rows)
3|3|1|SEARCH TABLE trips USING INDEX sqlite_autoindex_trips_1 (trip_id=?) (~1 rows)
1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (EXCEPT)
4|0|3|SCAN TABLE calendar_dates (~10000 rows)
4|1|2|SEARCH TABLE calendar USING INDEX sqlite_autoindex_calendar_1 (calendar_service_id=?) (~1 rows)
4|2|0|SEARCH TABLE stop_times USING AUTOMATIC COVERING INDEX (stop_times_stop_id=?) (~7 rows)
4|3|1|SEARCH TABLE trips USING INDEX sqlite_autoindex_trips_1 (trip_id=?) (~1 rows)
0|0|0|COMPOUND SUBQUERIES 1 AND 4 USING TEMP B-TREE (UNION)
Columns that are used for lookups should be indexed, but for a single (sub)query, it is not possible to use more than one index per table.
For this particular query, the following additional indexes would help:
CREATE INDEX some_index ON stop_times(
stop_times_stop_id,
stop_times_arrival_time);
CREATE INDEX some_other_index ON calendar_dates(
calendar_dates_service_id,
calendar_dates_exception_type,
calendar_dates_date);
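As a quick sanity check (a sketch; the exact wording of the plan differs between SQLite versions), re-running EXPLAIN QUERY PLAN on the stop_times lookup should now show the new index instead of an AUTOMATIC COVERING INDEX:
EXPLAIN QUERY PLAN
SELECT stop_times_arrival_time
FROM stop_times
WHERE stop_times_stop_id = '3377699720880977'
  AND stop_times_arrival_time > '07:40';
-- Roughly expected: SEARCH TABLE stop_times USING INDEX some_index
--                   (stop_times_stop_id=? AND stop_times_arrival_time>?)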
I have a query with a subquery that returns multiple rows.
I have a table with lists and a table with users. I created a many-to-many table between these two tables, called list_user.
LIST
id INTEGER
list_name TEXT
list_description TEXT
USER
id INTEGER
user_name TEXT
LIST_USER
id INTEGER
list_id INTEGER
user_id INTEGER
My query with subquery
SELECT * FROM user WHERE id = (SELECT user_id FROM list_user WHERE list_id = 0);
The subquery works (and I use it in code so the 0 is actually a variable) and it returns multiple rows. But the upper query only returns one row, which is pretty logical; I check if the id equals something and it only checks against the first row of the subquery.
How do I change my statement so I get multiple rows in the upper query?
I'm surprised the = works in SQLite; it would return an error in most databases. In any case, you want the IN operator:
SELECT *
FROM user
WHERE id IN (SELECT user_id FROM list_user WHERE list_id = 0);
For better performance, use this query:
SELECT LIST.ID,
       LIST.LIST_NAME,
       LIST.LIST_DESCRIPTION
FROM LIST,
     USER,
     LIST_USER
WHERE LIST.ID = LIST_USER.LIST_ID
  AND USER.ID = LIST_USER.USER_ID
  AND LIST_USER.LIST_ID = 0