Android SQLite comparison

I want a conditional statement that determines whether a table exists. Unfortunately, querying a missing table throws an uncaught exception (and crashes the app) before I can do a null comparison, check for false, etc.
What is the best way to determine from Java whether values in my SQL table exist?

"What is the best way to determine from Java if values in my sql table exist?"
SELECT EXISTS
(SELECT * FROM T WHERE mycolumn = {some value})
will return 1 if true or 0 if false, in SQLite.

Try running this query:
SELECT * FROM dbname.sqlite_master WHERE type='table';
This queries the sqlite_master catalogue table; the table you are looking for should be listed in the results.
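The same idea carries over to any SQLite binding. Here is a minimal sketch using Python's sqlite3 module that combines both answers: it runs SELECT EXISTS against sqlite_master, so a missing table never raises (table and function names are illustrative):

```python
import sqlite3

def table_exists(conn, table_name):
    # Query the schema catalogue rather than the table itself,
    # so a missing table never raises an exception.
    cur = conn.execute(
        "SELECT EXISTS (SELECT 1 FROM sqlite_master "
        "WHERE type='table' AND name=?)",
        (table_name,),
    )
    return cur.fetchone()[0] == 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes_table (quote TEXT)")
print(table_exists(conn, "quotes_table"))  # True
print(table_exists(conn, "missing"))       # False
```

Because the check is parameterised, the table name can come from user input without any injection risk.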

Related

How to insert only 3 columns data in room database table if we have more columns?

I have a project written in Kotlin. I want to insert data into different columns of the same table from different pages. I have specified these columns in the data class, but I get a null data error.
To make this insert process cleaner, should I split the table into two separate tables, or insert placeholder 'null' data and update these fields later?
In a database, such as SQLite (which Room is a wrapper around), the unit of insertion is a row.
A row always consists of the same set of columns. You cannot insert a column on its own; the set of columns only changes if you ALTER the table to add or remove one, and that change is reflected throughout the entire table.
If a column is added, a DEFAULT VALUE applies; this can be the implicit default of NULL or a specific value you declare.
Room with Kotlin applies a constraint (rule) of NOT NULL unless nulls are specifically allowed, e.g. with ?:
var theColumn: Int has the implicit NOT NULL
var theColumn: Int? does not have the implicit NOT NULL, so nulls can be stored
var theColumn: Int = -1 applies a default value of -1 if the field is not supplied a value when instantiating the object.
var theColumn: Int? = null applies null if the field is not supplied a value when instantiating the object.
Obviously fields may be altered before inserting the object, if var rather than val is used.
The data stored in the column can be interpreted to represent whatever you wish, often NULL will be used to represent a special situation such as no value.
If using an @Insert annotated function, then ALL columns are given the values obtained from the object or objects passed to the function. In Kotlin, whether NULLs can be used depends on the field definition or, in some cases, the @NonNull annotation.
@Insert marks what is termed a convenience method: it builds the underlying SQL and binds the values using the SQLite API.
However, if you want flexibility, then an @Query annotation with a suitable INSERT SQL statement can be used.
e.g. you could perhaps have a table that has 4 columns COL1, COL2, COL3 and COL4 and supply only some of the columns (the DEFAULT VALUE is applied to each omitted column if one is specified, otherwise NULL; but if there is a NOT NULL constraint a conflict is raised).
So to insert when supplying only two of the columns (COL2 and COL4) you could use:-
@Query("INSERT INTO theTable (COL2,COL4) VALUES(:valueForCol2,:valueForCol4)")
fun insertCol2AndCol4Only(valueForCol2: Int, valueForCol4: Int?)
Note that valueForCol4 could be NULL. However, whether a NULL results in a conflict depends upon how the field is defined in the @Entity annotated class.
Conflicts (breaking a rule) can be handled by SQLite, depending upon the type of the conflict. UNIQUE, PRIMARY KEY (which is really a UNIQUE conflict), CHECK (Room doesn't cater for CHECK constraints) and NOT NULL constraints can be handled in various ways at the SQLite level.
A common use of conflict handling is to IGNORE the conflict, in which case the action (INSERT or UPDATE) is ignored. In the case of INSERT the row is not inserted but SQLite ignores the conflict and doesn't issue an error.
So if, for example, COL4's field were var COL4: Int rather than var COL4: Int?, then the insert would fail and an SQLite exception would occur.
However, if instead
@Query("INSERT OR IGNORE INTO theTable (COL2,COL4) VALUES(:valueForCol2,:valueForCol4)")
were used and the COL4 field were defined as var COL4: Int (implied NOT NULL constraint), then passing NULL as valueForCol4 would mean the row is not inserted, but no failure would occur, as the NOT NULL conflict is ignored.
With the @Insert annotation you can define this conflict handling via the onConflict parameter, e.g. @Insert(onConflict = OnConflictStrategy.IGNORE).
You may wish to consider reading the following:-
The On Conflict Clause
INSERT
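Room generates plain SQLite underneath, so the partial-column insert and INSERT OR IGNORE behaviour described above can be sketched outside Android with Python's sqlite3 module (table and column names follow the hypothetical COL1..COL4 example; the DEFAULT on COL1 is added for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE theTable (COL1 TEXT DEFAULT 'x', COL2 INTEGER, "
    "COL3 INTEGER, COL4 INTEGER NOT NULL)"
)

# Insert only COL2 and COL4; COL1 gets its DEFAULT 'x', COL3 gets NULL.
conn.execute("INSERT INTO theTable (COL2, COL4) VALUES (?, ?)", (2, 4))

# Passing NULL for the NOT NULL column COL4 normally raises a conflict ...
try:
    conn.execute("INSERT INTO theTable (COL2, COL4) VALUES (?, ?)", (2, None))
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed: theTable.COL4

# ... but INSERT OR IGNORE silently skips the offending row instead.
conn.execute("INSERT OR IGNORE INTO theTable (COL2, COL4) VALUES (?, ?)", (2, None))
print(conn.execute("SELECT COUNT(*) FROM theTable").fetchone()[0])  # 1
```

Only the first row survives: the plain INSERT conflicts and raises, the OR IGNORE variant conflicts and is discarded without an error, which is exactly what OnConflictStrategy.IGNORE does in Room.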
To make this insert process cleaner, should I split the table into two separate tables, or insert placeholder 'null' data and update these fields later?
Note the above is only a summary; INTEGER PRIMARY KEY aka @PrimaryKey var id: Long? = null, or variations such as @PrimaryKey(autoGenerate = true) etc., have specifically not been discussed.
The database design could be handled either way. From the very limited description of the scenario, the most suitable approach cannot really be advised, although either could probably work.
Additional
Based upon the comment:-
For example, I'm going to add the features of a car to the database, but it could be a different type at a time. So on the first page, the type of car will be chosen, like off road, sedan, 4x4, hatchback.
Then perhaps consider having a feature table and a mapping table for a many-many relationship between a car and its features, as per my response:-
I would suggest that features be a table with a many-many relationship with the car. That is, a car could have 0-n features and a feature could be used by 0-n cars. The many-many relationship requires a third table, known by many terms such as an associative table/reference table/mapping table. Such a table has 2 core columns: one to map to the car and one to map to the feature, the primary key being a composite of both these columns.
Here's a basic example of how this could work from an SQLite basis:-
DROP INDEX IF EXISTS carFeatureMap_idxon_feature;
DROP TABLE IF EXISTS carFeatureMap;
DROP TABLE IF EXISTS car;
DROP TABLE IF EXISTS feature;
CREATE TABLE IF NOT EXISTS car (
carId INTEGER PRIMARY KEY,
carname TEXT /* and other columns */
);
CREATE TABLE IF NOT EXISTS feature (
featureId INTEGER PRIMARY KEY,
featureDescription TEXT
);
CREATE TABLE IF NOT EXISTS carFeatureMap (
carIdMap INTEGER REFERENCES car(carId) ON DELETE CASCADE ON UPDATE CASCADE,
featureIdMap INTEGER REFERENCES feature(featureId) ON DELETE CASCADE ON UPDATE CASCADE,
PRIMARY KEY(carIdMap, featureIdMap)
);
/* Should improve efficiency of mapping from a feature */
CREATE INDEX IF NOT EXISTS carFeatureMap_idxon_feature ON carFeatureMap(featureIdMap);
/* Add some features */
INSERT OR IGNORE INTO feature VALUES(100,'4x4'),(101,'Sedan'),(106,'Convertable'),(null /*<<<< featureId generated by SQLite*/ ,'Hatchback');
/*Report1 Output the features */
SELECT * FROM feature;
/* Add some cars */
INSERT OR IGNORE INTO car VALUES(10,'Car1'),(20,'Car2'),(30,'Car3');
/*Report2 Output the cars */
SELECT * FROM car;
/* add the mappings/relationships/associations between cars and features */
INSERT OR IGNORE INTO carFeatureMap VALUES (10,101) /* Car1 has Sedan */,(10,106) /* Car1 has Convertable */,(20,100) /* Car2 has 4x4 */;
/*Report3 Get the Cars with features cartesian product */
SELECT
car.carName,
featureDescription
FROM car
JOIN carFeatureMap ON car.carId=carFeatureMap.carIdMap
JOIN feature ON featureIdMap=featureId
;
/*Report4 Get the Cars with features with all the features concatenated, i.e. single output per car with features */
SELECT
car.carName,
group_concat(featureDescription,': ') AS allFeatures
FROM car
JOIN carFeatureMap ON car.carId=carFeatureMap.carIdMap
JOIN feature ON featureIdMap=featureId GROUP BY (carId)
;
/*Report5 Similar to the previous BUT if no features then output none so ALL cars are output */
SELECT
carName,
coalesce(
(
SELECT
group_concat(featureDescription)
FROM feature
JOIN carFeatureMap ON carFeatureMap.featureIdMap=featureId AND carFeatureMap.carIdMap=carId
),
'none'
) AS features
FROM car
;
/* Clean Up After Demo*/
DROP INDEX IF EXISTS carFeatureMap_idxon_feature;
DROP TABLE IF EXISTS carFeatureMap;
DROP TABLE IF EXISTS car;
DROP TABLE IF EXISTS feature;
Results from the demo code above:
Report1 - The features
Report2 - The cars
Report3 - Cars and features
Report4 - Cars and features 2
Report5 - Cars and features 3
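For a quick check outside Android, the demo can be exercised with Python's sqlite3 module. The sketch below trims the schema to its essentials (names match the SQL above) and runs the Report5 query, the one that outputs 'none' for cars without features:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE car (carId INTEGER PRIMARY KEY, carName TEXT);
CREATE TABLE feature (featureId INTEGER PRIMARY KEY, featureDescription TEXT);
CREATE TABLE carFeatureMap (
  carIdMap INTEGER REFERENCES car(carId) ON DELETE CASCADE,
  featureIdMap INTEGER REFERENCES feature(featureId) ON DELETE CASCADE,
  PRIMARY KEY (carIdMap, featureIdMap)
);
INSERT INTO feature VALUES (100,'4x4'),(101,'Sedan'),(106,'Convertable');
INSERT INTO car VALUES (10,'Car1'),(20,'Car2'),(30,'Car3');
INSERT INTO carFeatureMap VALUES (10,101),(10,106),(20,100);
""")

# Report5: every car, its features concatenated, 'none' when unmapped.
rows = conn.execute("""
SELECT carName,
       coalesce((SELECT group_concat(featureDescription)
                 FROM feature
                 JOIN carFeatureMap
                   ON carFeatureMap.featureIdMap = featureId
                  AND carFeatureMap.carIdMap = carId), 'none') AS features
FROM car
""").fetchall()
print(rows)
```

Car1 lists both of its features (group_concat order is unspecified), Car2 lists '4x4', and Car3, having no mappings, falls through coalesce to 'none'.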

Android SQLite additional WHERE condition after MATCH

In Android SQLite I have a table like this:
domainObjectId: String // like '9876543210'
name: String
description: String
I want to use FTS on this to search without worrying about diacritical marks; however, I also want to let the user select by typing part of the object ID (e.g. the last 4 characters).
My select looks like:
SELECT * FROM tabel LEFT JOIN tabel_fts ON tabel_fts.domainObjectId = tabel.domainObjectId WHERE tabel_fts MATCH '3210*' OR tabel.domainObjectId LIKE '%3210%'
But in return I get the error:
unable to use function MATCH in the requested context (code 1 SQLITE_ERROR)
Is it possible to add an additional condition to a select with MATCH?
Try moving the MATCH into a separate SELECT (a subquery):
SELECT * FROM tabel LEFT JOIN (SELECT * FROM tabel_fts WHERE tabel_fts.domainObjectId MATCH '3210*') AS tabel_fts WHERE tabel.domainObjectId LIKE '%3210%' OR tabel_fts.domainObjectId IS NOT NULL
By the way:
In your "WHERE tabel_fts" it seems you've missed a column name.
There is no "ON" condition in the tables JOIN, just "WHERE". Is that OK? Maybe it would be better to use UNION?
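The subquery approach can be tried quickly with Python's sqlite3 module, assuming the bundled SQLite was compiled with FTS4 support (table and sample data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Requires an SQLite build with the FTS4 extension enabled.
conn.executescript("""
CREATE TABLE tabel (domainObjectId TEXT, name TEXT, description TEXT);
CREATE VIRTUAL TABLE tabel_fts USING fts4(domainObjectId, name, description);
INSERT INTO tabel VALUES ('9876543210', 'Widget', 'A shiny widget');
INSERT INTO tabel_fts SELECT * FROM tabel;
""")

# MATCH is only valid directly against the FTS table, so run it in a
# subquery; the outer WHERE is then free to combine it with LIKE.
rows = conn.execute("""
SELECT tabel.* FROM tabel
LEFT JOIN (SELECT domainObjectId FROM tabel_fts
           WHERE tabel_fts MATCH 'widget*') AS m
       ON m.domainObjectId = tabel.domainObjectId
WHERE tabel.domainObjectId LIKE '%3210%'
   OR m.domainObjectId IS NOT NULL
""").fetchall()
print(rows)  # [('9876543210', 'Widget', 'A shiny widget')]
```

The row matches on both branches here; removing either condition still returns it, which is the OR semantics the question was after.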

Finding tables having columntype BLOB in sqlite

How can I find the tables having a column of BLOB type in SQLite? I need to get the names of the tables that have a BLOB-typed column, and then see the total number of records where the blob is not empty.
If you wanted tables that have a column defined as a blob then you could use
SELECT * FROM sqlite_master WHERE sql LIKE '%blob%';
as the basis for determining the tables, e.g. this would return a row for each table whose schema SQL contains 'blob'.
However, this does not necessarily find all values that are stored as blobs. This is because with the exception of the rowid column or an alias thereof, any type of value (blob included) can be stored in any column.
e.g. consider the following :-
DROP TABLE IF EXISTS not_a_blob_table;
CREATE TABLE IF NOT EXISTS not_a_blob_table (col1 TEXT, col2 INTEGER, col3 REAL, col4 something_or_other);
INSERT INTO not_a_blob_table VALUES
('test text',123,123.4567,'anything'), -- Insert using types as defined
(x'00',x'12',x'34',x'1234567890abcdefff00') -- Insert with all columns as blobs
;
SELECT typeof(col1),typeof(col2),typeof(col3),typeof(col4) FROM not_a_blob_table;
This results in typeof reporting 'blob' for every column of the second row, despite none of the columns being declared as BLOB:
If you want to find all blobs then you would need to process all columns from all rows of all tables based upon a check for the column type. This could perhaps be based upon :-
SELECT typeof(col1),typeof(col2),typeof(col3),typeof(col4),* FROM not_a_blob_table
WHERE typeof(col1) = 'blob' OR typeof(col2) = 'blob' OR typeof(col3) = 'blob' OR typeof(col4) = 'blob';
Using the table above, this would return only the 2nd row (the only row holding blobs):
A further complication is what you mean by "not empty": NULL, obviously, but what about x'00', or a default of zeroblob(0)?
zeroblob(N)
The zeroblob(N) function returns a BLOB consisting of N bytes of 0x00. SQLite manages these zeroblobs very efficiently. Zeroblobs can be used to reserve space for a BLOB that is later written using incremental BLOB I/O. This SQL function is implemented using the sqlite3_result_zeroblob() routine from the C/C++ interface.
If NULL, though, the value wouldn't have a type of blob; its type would be null, which could complicate matters when checking for all values stored as blobs.
You may wish to consider having a look at the code from Are there any methods that assist with resolving common SQLite issues?
as this could well be the basis for what you want.
You may also wish to have a look at typeof(X) and zeroblob(N).
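A whole-database scan along the lines described above can be sketched in Python's sqlite3 module: enumerate the tables from sqlite_master, get each table's columns via PRAGMA table_info, and count rows where any column's typeof() is 'blob' (a sketch; interpolating the names is safe here because they come from the catalogue, not from user input):

```python
import sqlite3

def find_blob_rows(conn):
    """Count rows per table holding at least one value stored as a blob."""
    counts = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type='table' AND name NOT LIKE 'sqlite_%'")]
    for table in tables:
        # Column names come from PRAGMA table_info (second field is the name).
        cols = [r[1] for r in conn.execute('PRAGMA table_info("%s")' % table)]
        where = " OR ".join("typeof(\"%s\") = 'blob'" % c for c in cols)
        counts[table] = conn.execute(
            'SELECT COUNT(*) FROM "%s" WHERE %s' % (table, where)).fetchone()[0]
    return counts

# Recreate the demo table: no column is declared BLOB, yet blobs fit anywhere.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE not_a_blob_table "
             "(col1 TEXT, col2 INTEGER, col3 REAL, col4 something_or_other)")
conn.execute("INSERT INTO not_a_blob_table "
             "VALUES ('test text', 123, 123.4567, 'anything')")
conn.execute("INSERT INTO not_a_blob_table "
             "VALUES (x'00', x'12', x'34', x'1234567890abcdefff00')")
print(find_blob_rows(conn))  # {'not_a_blob_table': 1}
```

Only the second row is counted, confirming that the check has to look at stored values, not declared column types.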

compileStatement throws Exception [duplicate]

Question: Is it possible to use a variable as your table name without having to use string constructors to do so?
Info:
I'm working on a project right now that catalogs data from a star simulation of mine. To do so I'm loading all the data into an SQLite database. It's working pretty well, but I've decided to add a lot more flexibility, efficiency, and usability to my db. I plan on later adding planetoids to the simulation, and wanted to have a table for each star. This way I wouldn't have to query a table of some 20m planetoids for the 1-4k in each solar system.
I've been told using string constructors is bad because it leaves me vulnerable to a SQL injection attack. While that isn't a big deal here as I'm the only person with access to these dbs, I would like to follow best practices. And also this way if I do a project with a similar situation where it is open to the public, I know what to do.
Currently I'm doing this:
cursor.execute("CREATE TABLE StarFrame"+self.name+" (etc etc)")
This works, but I would like to do something more like:
cursor.execute("CREATE TABLE StarFrame(?) (etc etc)",self.name)
though I understand that this would probably be impossible. I would settle for something like
cursor.execute("CREATE TABLE (?) (etc etc)",self.name)
If this is not at all possible, I'll accept that answer, but if anyone knows a way to do this, do tell. :)
I'm coding in python.
Unfortunately, tables can't be the target of parameter substitution (I didn't find any definitive source, but I have seen it on a few web forums).
If you are worried about injection (you probably should be), you can write a function that cleans the string before passing it. Since you are looking for just a table name, you should be safe just accepting alphanumerics, stripping out all punctuation, such as )(][;, and whitespace. Basically, just keep A-Z a-z 0-9.
def scrub(table_name):
    return ''.join(chr for chr in table_name if chr.isalnum())

scrub('); drop tables --')  # returns 'droptables'
For people searching for a way to use the table name as a variable, I got this from another reply to the same question here.
It said the following and it works. It's all quoted from mhawke:
You can't use parameter substitution for the table name. You need to add the table name to the query string yourself. Something like this:
query = 'SELECT * FROM {}'.format(table)
c.execute(query)
One thing to be mindful of is the source of the value for the table name. If that comes from an untrusted source, e.g. a user, then you need to validate the table name to avoid potential SQL injection attacks. One way might be to construct a parameterised query that looks up the table name from the DB catalogue:
import sqlite3

def exists_table(db, name):
    query = "SELECT 1 FROM sqlite_master WHERE type='table' AND name = ?"
    return db.execute(query, (name,)).fetchone() is not None
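Putting the two ideas together, validate the name against the catalogue first, then interpolate it, might look like this (the table and column names are illustrative):

```python
import sqlite3

def exists_table(db, name):
    # Parameterised lookup against the schema catalogue: injection-safe.
    query = "SELECT 1 FROM sqlite_master WHERE type='table' AND name = ?"
    return db.execute(query, (name,)).fetchone() is not None

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE stars (id INTEGER PRIMARY KEY, mass REAL)")

table = "stars"  # imagine this arrived from an untrusted source
if exists_table(db, table):
    # Safe to interpolate: the name was verified against the catalogue,
    # and double quotes delimit it as an identifier.
    rows = db.execute('SELECT * FROM "{}"'.format(table)).fetchall()
    print(rows)  # []
else:
    raise ValueError("unknown table: %s" % table)
```

An invented name never reaches the interpolated query; it fails the catalogue check first.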
I wouldn't separate the data into more than one table. If you create an index on the star column, you won't have any problem efficiently accessing the data.
Try with string formatting:
sql_cmd = '''CREATE TABLE {} (id, column1, column2, column3)'''.format(
    'table_name')
db.execute(sql_cmd)
Replace 'table_name' with your desired name.
To avoid hard-coding table names, I've used:
table = "sometable"
c = conn.cursor()
c.execute('''CREATE TABLE IF NOT EXISTS {} (
    importantdate DATE,
    somename VARCHAR
)'''.format(table))
c.execute('''INSERT INTO {} VALUES (?, ?)'''.format(table),
          (datetime.strftime(datetime.today(), "%Y-%m-%d"),
           myname))
As has been said in the other answers, "tables can't be the target of parameter substitution" but if you find yourself in a bind where you have no option, here is a method of testing if the table name supplied is valid.
Note: I have made the table name a real pig in an attempt to cover all of the bases.
import sys
import sqlite3
def delim(s):
    delims = "\"'`"
    use_delim = []
    for d in delims:
        if d not in s:
            use_delim.append(d)
    return use_delim

db_name = "some.db"
db = sqlite3.connect(db_name)
mycursor = db.cursor()

table = 'so""m ][ `etable'

delimiters = delim(table)
if len(delimiters) < 1:
    print("The name of the database will not allow this!")
    sys.exit()
use_delimiter = delimiters[0]
print("Using delimiter", use_delimiter)

mycursor.execute('SELECT name FROM sqlite_master WHERE (name = ?)', [table])
row = mycursor.fetchall()
valid_table = False
if row:
    print(table, "table name verified")
    valid_table = True
else:
    print(table, "Table name not in database", db_name)

if valid_table:
    try:
        mycursor.execute('INSERT INTO ' + use_delimiter + table + use_delimiter +
                         ' (my_data,my_column_name) VALUES (?,?)', (1, "Name"))
        db.commit()
    except Exception as e:
        print("Error:", str(e))
    try:
        mycursor.execute('UPDATE ' + use_delimiter + table + use_delimiter +
                         ' SET my_column_name = ? WHERE my_data = ?', ["ReNamed", 1])
        db.commit()
    except Exception as e:
        print("Error:", str(e))
db.close()
you can use something like this:
conn = sqlite3.connect('my.db')
createTable = '''CREATE TABLE %s ( /* columns */ );''' % dateNow
conn.execute(createTable)
Basically, this is for when you want to separate the data into several tables according to the current date, for example to monitor a system day by day.
createTable = '''CREATE TABLE %s ( /* columns */ );''' % dateNow means that you create a table named from the variable dateNow, which, depending on your language, you can define to hold the current date.
You can save your query in a .sql or .txt file and use the open().replace() method to substitute variables into any part of your query. Long-time reader but first-time poster, so I apologize if anything is off here.
SQL in yoursql.sql:
SELECT *
FROM yourdbschema.tablenm
SQL to run:
tablenm = 'yourtablename'
cur = connect.cursor()
query = cur.execute(open('yoursql.sql').read().replace('tablenm', tablenm))
You can pass a string as the SQL command:
import sqlite3
conn = sqlite3.connect('db.db')
c = conn.cursor()
tablename, field_data = 'some_table', 'some_data'
query = 'SELECT * FROM ' + tablename + ' WHERE column1 = "' + field_data + '"'
c.execute(query)

How to check the values of an android sqlite db table?

I really have two questions:
1/ how to read/check sqlite db table offline?
2/ Why is a sqlite table column of boolean type set to NULL?
ad 1/ I have downloaded sqlite3.exe and when I use the command .dump quotes_table I see all rows like
INSERT INTO quotes_table VALUES................
INSERT INTO quotes_table VALUES................
etc.
Is this correct? I was expecting to see just the values and not the queries.
ad 2/ I altered+updated the quotes_table with
db.execSQL("ALTER TABLE quotes_table ADD COLUMN quoteUsed boolean");
db.execSQL("UPDATE quotes_table SET quoteUsed = 0");
I read on http://www.sqlite.org/datatype3.html that SQLite uses 0 and 1 for boolean values, so I thought setting quoteUsed = 0 should be OK, but the value reads NULL.
How can I fix this?
Regards.
PS: FYI, I just installed the SQLite Manager add-on for Firefox and it makes things easier.
1/ how to read/check sqlite db table offline?
Normally, SQLite is used with local databases, so the database either exists or it doesn't; "offline" doesn't really apply.
2/ Why is a sqlite table column of boolean type set to NULL?
SQLite doesn't have a boolean data type; a column declared boolean gets NUMERIC affinity. Check Datatypes In SQLite Version 3.
NULL, as in other DBMSs, means nothing has been assigned.
when I use the command .dump quotes_table I see all rows like ...
.dump is used to extract all data from the database. Actually, it returns the SQL statements you can use to rebuild a new database from scratch.
To get the data only, use SQL queries (check the SELECT statement).
when I use the command .dump quotes_table I see all rows like
...
Is this correct as I was expecting to see just the values and not the queries?
Yes, .dump creates the SQL that would reproduce an identical database.
2/ I alter+updated the quotes_table by
...
I read on http://www.sqlite.org/datatype3.html that SQLite uses 0 and 1 for boolean values, so I thought setting quoteUsed = 0 should be OK, but the value reads NULL. How can I fix this?
Nothing is really wrong with the SQL. How are you reading the value?
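Indeed, the two statements do set the value. A quick check with Python's sqlite3 module (a sketch; the table contents are invented) shows the UPDATE leaves 0, not NULL, which suggests the NULL was read before the UPDATE ran or from a stale copy of the database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes_table (quote TEXT)")
conn.execute("INSERT INTO quotes_table VALUES ('hello')")

# Same two statements the question runs.
conn.execute("ALTER TABLE quotes_table ADD COLUMN quoteUsed boolean")
conn.execute("UPDATE quotes_table SET quoteUsed = 0")
conn.commit()

row = conn.execute(
    "SELECT quoteUsed, typeof(quoteUsed) FROM quotes_table").fetchone()
print(row)  # (0, 'integer')
```

The declared type boolean gives the column NUMERIC affinity, so the 0 is stored as an integer; nothing about the ALTER/UPDATE pair produces NULL.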