How to store Protobuf Java model in Room database?

Is it possible to store objects from a generated Protobuf Java class in a Room database? Do I need to serialize the objects and store the serialized versions inside of an Entity class?

You basically have two options: serialise the object and save it as a BLOB (byte[]), or create Entities that reflect the object's properties and, where those properties are themselves objects, likewise break them down. The latter approach is potentially more complicated, but it makes the data far more useful database-wise.
In short, to serialise or not to serialise is a choice that depends upon what is required. Do you just want to use the database for storage, or do you want to take advantage of the searchability that SQLite (and hence Room) offers?
As an example, let's say you have a contract for an address that has the following properties:-
postbox (int)
street (string)
city (string)
country (string or probably more correctly a reference to a country object)
then if you serialise the object you have a string of bytes (byte[]) and all of the data will be stored in a single column.
If you wanted to search for all addresses in a city, that would be both inefficient and relatively difficult.
If each property were saved in a column of its own, then a search for the city is straightforward, e.g. SELECT your_columns FROM the_table WHERE city = 'MyCity';
If the address object were serialised and saved, then you might be fortunate and be able to use SELECT serialised_address FROM the_table WHERE instr(serialised_address,hex('MyCity')) > 0;
but that depends upon how the serialisation generates the underlying data.
Underneath Room is SQLite, and here's a quick SQLite example that demonstrates the difference, assuming that the property markers are the property name (e.g. city) followed by =, e.g. the city MyCity is stored as city=MyCity converted to hex (bytes).
DROP TABLE IF EXISTS test1;
CREATE TABLE IF NOT EXISTS test1 (serialised_address BLOB);
CREATE TABLE IF NOT EXISTS test2 (postbox INTEGER, street TEXT, city TEXT, country TEXT);
INSERT INTO test1
VALUES(hex('postbox=1,street="The Street",city="MyCity",country="England"')),
(hex('postbox=1,street="The Street",city="NotMyCity",country="England"'))
;
INSERT INTO test2 VALUES(1,'The Street','MyCity','England'),(1,'The Street','NotMyCity','England');
SELECT *,null,null,null FROM test1 UNION SELECT * FROM test2;
SELECT 'S1',serialised_address,null,null,null FROM test1 WHERE instr(serialised_address,hex('MyCity')) > 0
UNION SELECT 'S2',serialised_address,null,null,null FROM test1 WHERE instr(serialised_address,hex('myCity')) > 0
UNION SELECT 'S3',* FROM test2 WHERE city = 'MyCity'
UNION SELECT 'S4',* FROM test2 WHERE city = 'mycity'
;
This creates 2 tables (Entities in Room speak).
Table test1 is for the serialised address object.
Table test2 stores the properties in individual columns.
Both tables are loaded with the equivalent data.
The first query gets the data from both tables, using nulls for the 3 columns that don't exist in the test1 table, i.e. all of test1's data is stored in its single column.
The second query looks for the city called MyCity expecting to just extract that 1 city.
When run, the first query returns 4 rows (i.e. 2 rows from each table), the test1 rows carrying all of their data in the single serialised_address column.
The second query, however, returns 3 rows, not the expected 2 (one per table). This is because the search through the byte array for MyCity has also found NotMyCity, while the two case-mismatched searches (S2 and S4) correctly return nothing.
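If simple storage is all that is required, the serialise option is straightforward for Protobuf, because generated message classes already provide toByteArray() and parseFrom(). Here is a minimal sketch of a Room TypeConverter along those lines; the class and message names (AddressConverters, AddressProto) are illustrative, not from the question :-
import androidx.room.TypeConverter;
import com.google.protobuf.InvalidProtocolBufferException;

public class AddressConverters {
    // Store the whole message in a single BLOB (byte[]) column.
    @TypeConverter
    public static byte[] fromAddress(AddressProto address) {
        return address == null ? null : address.toByteArray();
    }

    // Rebuild the message from the stored bytes.
    @TypeConverter
    public static AddressProto toAddress(byte[] bytes) {
        if (bytes == null) return null;
        try {
            return AddressProto.parseFrom(bytes);
        } catch (InvalidProtocolBufferException e) {
            throw new IllegalStateException("Stored bytes are not a valid AddressProto", e);
        }
    }
}
Registered via @TypeConverters(AddressConverters.class) on the @Database class, this lets an @Entity hold a field of the generated type while Room persists it as one BLOB column, with the searchability limitations demonstrated above.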

Related

How to model a relational database that stores order details?

I am making a restaurant POS app for android and I am trying to decide the best way to model the database for it using Room ORM that ensures maintainability. My database needs, among a lot of other things, to keep a record of all items sold within a transaction/order, as well as a log of the orders themselves and a list of the food products sold within the restaurant.
Considering the following tables (for brevity purposes I only include columns I think relevant to the question and may not illustrate all the information I will need to catalog), I can create a table that includes a log of all the orders ever placed and call it all_orders:
all_orders
-----------
id (PK)
order_details_id (FK) - referencing the PK from order_details table
date
notes
total
payment_type
I can also create a table that contains all the food products/dishes that the restaurant serves, and we’ll call it all_items:
all_items
---------
id (PK)
name
category
price
No problems there so far, but my current confusion lies here—how do I manage to keep a log of the actual food items sold within an order?
One approach I thought about was to create a table per order number, but creating tables dynamically is already a problem and having 60,000 tables at the end of the year will be a maintainability nightmare.
So my other possible solution is to create a table called order_details that will probably end up with hundreds of thousands of entries per year with the following columns:
order_details
-------------
id (PK)
item_id (FK) - referencing the PK from the all_items table
order_id (FK) - referencing the PK from the all_orders table
quantity_ordered
And when a user wants to pull up an order from say, last week, the program can use a join query that will produce the following to be displayed in the app’s UI:
order
---------
id (PK)
date (from the all_orders table)
name (from all_items)
category (from all_items)
price (from all_items)
total (from all_orders)
payment_type (from all_orders)
I am afraid that the order_details table is just too broad since it will contain hundreds of thousands of entries, and querying it for entries will be sluggish. I'm sure indexing it will help, but is this the correct approach to this problem? If not, is there a better, “best practice” solution? If possible something that focuses on grouping any order and its items together without just dumping all items from all orders into one table. Any help will be most appreciated.
Edit: This question is not a duplicate of this, and while helpful, the supplied link has not provided any additional context on what I am really asking about, nor is it entirely relevant to the answer I am after. I have bolded my last original paragraph, since my question is really about how I can model the above data; it isn't clear to me, based on my research, how to store actual order details attached to an order (many tutorials/similar questions I've come across fall short of thoroughly explaining the aforementioned).
The all_orders table would be superfluous, as it just repeats other data, which would be contrary to normalisation.
You probably want a category table rather than repeating the data (i.e. normalise categories).
Likewise, you probably also want a payment_type table (again to normalise).
Creating individual tables per order would probably just create a nightmare.
Aren't price and total the same? That said, totals can be derived when extracting the data, so there is no need to store such information at all.
As such, the following schema may be close to what you want :-
DROP TABLE IF EXISTS item;
DROP TABLE IF EXISTS category;
CREATE TABLE IF NOT EXISTS category (_id INTEGER PRIMARY KEY, category_name TEXT);
CREATE TABLE IF NOT EXISTS item (
_id INTEGER PRIMARY KEY,
item_name TEXT UNIQUE,
category_ref INTEGER REFERENCES category(_id) ON DELETE CASCADE ON UPDATE CASCADE,
item_price REAL
);
DROP TABLE IF EXISTS payment_type;
CREATE TABLE IF NOT EXISTS payment_type (
_id INTEGER PRIMARY KEY,
payment_type TEXT UNIQUE,
surcharge REAL
);
-- NOTE cannot call a table order as it is a keyword (not strictly true, but you would have to enclose the name, e.g. [order]).
DROP TABLE IF EXISTS customer_order;
CREATE TABLE IF NOT EXISTS customer_order (
_id INTEGER PRIMARY KEY,
customer_name TEXT,
date TEXT DEFAULT CURRENT_TIMESTAMP,
payment_type_ref INTEGER REFERENCES payment_type(_id) ON DELETE CASCADE ON UPDATE CASCADE
);
DROP TABLE IF EXISTS order_detail;
CREATE TABLE IF NOT EXISTS order_detail (
customer_order_ref INTEGER REFERENCES customer_order(_id) ON DELETE CASCADE ON UPDATE CASCADE,
item_ref REFERENCES item(_id) ON DELETE CASCADE ON UPDATE CASCADE,
quantity
);
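In Room terms, each of these tables would be an @Entity. A minimal sketch of the last two (each class in its own file), assuming androidx Room, with names following the schema above; note that Room insists on a primary key, hence the composite key on order_detail :-
import androidx.room.Entity;
import androidx.room.ForeignKey;
import androidx.room.PrimaryKey;

@Entity(tableName = "customer_order")
public class CustomerOrder {
    @PrimaryKey(autoGenerate = true)
    public long _id;
    public String customer_name;
    public String date;
    public long payment_type_ref; // FK to payment_type._id
}

@Entity(tableName = "order_detail",
        primaryKeys = {"customer_order_ref", "item_ref"},
        foreignKeys = @ForeignKey(entity = CustomerOrder.class,
                parentColumns = "_id",
                childColumns = "customer_order_ref",
                onDelete = ForeignKey.CASCADE,
                onUpdate = ForeignKey.CASCADE))
public class OrderDetail {
    public long customer_order_ref; // the order this line belongs to
    public long item_ref;           // the item ordered (FK to item._id omitted for brevity)
    public int quantity;
}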
Example
The following is native SQL that demonstrates the schema above :-
Part 1 adding (inserting) the data :-
INSERT INTO category (category_name) VALUES
('Fish'),('Beef'),('Chicken'),('Lamb'),('Sea Food')
;
INSERT INTO item (item_name, item_price, category_ref) VALUES
('Fish and Chips',11.30,1),
('Steak and Kidney Pudding',15.45,2),
('Lamb Chops, Mashed Potato and Gravy',17.40,4)
;
INSERT INTO payment_type (payment_type, surcharge) VALUES
('Master Card',0.05),('Visa',0.05),('Cash',0),('American Express',0.15)
;
INSERT INTO customer_order (customer_name, payment_type_ref) VALUES
('Fred',3),
('Mary',1),
('Tom',2),
('Jane',4)
;
INSERT INTO order_detail (customer_order_ref, item_ref, quantity) VALUES
(1,1,2),(1,2,1), -- Fred (id 1) orders 2 Fish and Chips (id 1) and 1 Steak and Kidney (id 2)
(2,3,10), -- Mary orders 10 Lamb chops
(3,2,1),(3,1,1),(3,3,1), -- Tom orders 1 of each
(4,1,1) -- Just Fish and chips for Jane
;
Part 2 - Extracting Useful (perhaps) Data
Here's an example of what you can do with SQL, including derived data (as suggested above) :-
SELECT
customer_name,
date,
group_concat(item_name||'('||quantity||')') AS items,
sum(item_price * quantity) AS items_price,
payment_type,
round(sum(item_price * quantity) * surcharge,2) AS surcharge,
round(sum(item_price * quantity) * (1 + surcharge),2) AS total_price
FROM customer_order
JOIN order_detail ON customer_order._id = order_detail.customer_order_ref
JOIN item ON order_detail.item_ref = item._id
JOIN payment_type ON customer_order.payment_type_ref = payment_type._id
GROUP BY customer_order._id -- Treats all data for an order as a single row allowing the use of aggregate functions on the groups e.g. sum, group_concat
;
Result :- one row per order, showing the customer, date, concatenated item list with quantities, the derived items price, the payment type, the derived surcharge and the derived overall total.
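In Room, much the same extraction could be exposed through a DAO; a minimal sketch, assuming the entities above and a small POJO whose fields match the query's output columns (all names illustrative) :-
import androidx.room.Dao;
import androidx.room.Query;
import java.util.List;

@Dao
public interface OrderDao {
    // One row per order: who ordered, when, what, and the derived price.
    @Query("SELECT customer_name, date, "
            + "group_concat(item_name||'('||quantity||')') AS items, "
            + "sum(item_price * quantity) AS items_price, payment_type "
            + "FROM customer_order "
            + "JOIN order_detail ON customer_order._id = order_detail.customer_order_ref "
            + "JOIN item ON order_detail.item_ref = item._id "
            + "JOIN payment_type ON customer_order.payment_type_ref = payment_type._id "
            + "GROUP BY customer_order._id")
    List<OrderSummary> getOrderSummaries();

    // Plain fields matching the query's output column names.
    class OrderSummary {
        public String customer_name;
        public String date;
        public String items;
        public double items_price;
        public String payment_type;
    }
}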

Finding tables having columntype BLOB in sqlite

How can I find the tables having a column of type BLOB in SQLite? I need to get the names of the tables that have a BLOB column, and then want to see the total number of records where the blob is not empty.
If you wanted tables that have a column defined as a blob, then you could use :-
SELECT * FROM sqlite_master WHERE sql LIKE '%blob%';
as the basis for determining the tables.
However, this does not necessarily find all values that are stored as blobs. This is because with the exception of the rowid column or an alias thereof, any type of value (blob included) can be stored in any column.
e.g. consider the following :-
DROP TABLE IF EXISTS not_a_blob_table;
CREATE TABLE IF NOT EXISTS not_a_blob_table (col1 TEXT, col2 INTEGER, col3 REAL, col4 something_or_other);
INSERT INTO not_a_blob_table VALUES
('test text',123,123.4567,'anything'), -- Insert using types as defined
(x'00',x'12',x'34',x'1234567890abcdefff00') -- Insert with all columns as blobs
;
SELECT typeof(col1),typeof(col2),typeof(col3),typeof(col4) FROM not_a_blob_table;
This results in the first row's types being text, integer, real and text, while every column of the second row has the type blob.
If you want to find all blobs then you would need to process all columns from all rows of all tables based upon a check for the column type. This could perhaps be based upon :-
SELECT typeof(col1),typeof(col2),typeof(col3),typeof(col4),* FROM not_a_blob_table
WHERE typeof(col1) = 'blob' OR typeof(col2) = 'blob' OR typeof(col3) = 'blob' OR typeof(col4) = 'blob';
Using the table above, this returns only the 2nd row (the one whose four values are all blobs).
A further complication is what you mean by not empty: null, obviously, is empty, but what about x'00'? Or what if you used a default of zeroblob(0)?
zeroblob(N)
The zeroblob(N) function returns a BLOB consisting of N bytes of 0x00. SQLite manages these zeroblobs very efficiently. Zeroblobs can be used to reserve space for a BLOB that is later written using incremental BLOB I/O. This SQL function is implemented using the sqlite3_result_zeroblob() routine from the C/C++ interface.
If null, though, then the value wouldn't have a type of blob; instead its type would be null, which could complicate matters when checking for all values stored as blobs.
You may wish to consider having a look at the code from Are there any methods that assist with resolving common SQLite issues?, as this could well be the basis for what you want.
You may also wish to have a look at typeof(X) and zeroblob(N).
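Putting the two parts together in Android Java, you could take each table found via sqlite_master and count the rows where a given column's stored value really is a non-empty blob; a minimal sketch (the method and class names are illustrative) :-
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class BlobCounter {
    // Counts rows whose column value really is a non-empty blob, using
    // typeof() to catch blobs stored in a column of any declared type and
    // length() (which returns the byte count for a blob) to skip empty ones.
    public static long countNonEmptyBlobs(SQLiteDatabase db, String table, String column) {
        String sql = "SELECT count(*) FROM " + table
                + " WHERE typeof(" + column + ") = 'blob' AND length(" + column + ") > 0";
        Cursor c = db.rawQuery(sql, null);
        try {
            return c.moveToFirst() ? c.getLong(0) : 0;
        } finally {
            c.close();
        }
    }
}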

Android SQLite insert JSON in TEXT column with escaping double quotes

I am using SQLite in an Android app which stores, amongst other things, a table that records the mood of the user. The table schema is shown below
CREATE TABLE moods
(
dow INTEGER,
tsn INTEGER,
lato INTEGER,
agitation TEXT DEFAULT '{}',
preoccupancy TEXT DEFAULT '{}',
tensity TEXT DEFAULT '{}',
taps TEXT DEFAULT '{}'
);
Unlike, say, Postgres, SQLite does not have a dedicated jsonb storage type. To quote
Backwards compatibility constraints mean that SQLite is only able to store values that are NULL, integers, floating-point numbers, text, and BLOBs. It is not possible to add a sixth "JSON" type.
SQLite JSON1 extension documentation
On the SQLite commandline processor I can populate this table using a simple INSERT statement such as this one
INSERT INTO moods (dow,tsn,lato,agitation,preoccupancy,tensity,taps)
VALUES(1,0,20,'{"A2":1}','{"P4":2}','{"T4":3}','{"M10":4}');
since it accepts both single and double quotes for strings.
In order to accomplish the same thing in my Hybrid Android app I do the following
String ag = JOString("A" + DBManage.agitation,1);
String po = JOString("P" + (DBManage.preoccupancy + 15),1);
String te = JOString("T" + DBManage.tensity,1);
which returns strings such as {"P4":2} etc. My next step is as follows
ContentValues cv = new ContentValues();
cv.put("dow",0);
cv.put("tsn",1);
cv.put("lato",2);
cv.put("agitation",ag);
cv.put("preoccupancy",po);
cv.put("tensity",te);
long rowid = db.insert("moods",null,cv);
which works. However, upon examining the stored values I find that what has gone in is in fact
{"dow":0,"tsn":5,"lato":191,"tensity":"{\"T0\":1}","agitation":"
{\"A1\":1}","preoccupancy":"{\"P15\":1}","taps":"{}"}]
Unless I am misunderstanding something here, the underlying ContentValues.put or SQLiteDatabase.insert implementation has taken it upon itself to escape the quoted strings. Perhaps this is to be expected, given that the benefit of going down the SQLiteDatabase.insert route with ContentValues is protection from SQL injection attacks. However, in the present instance it is being a hindrance, not a help.
Short of executing raw SQL via execSQL is there another way to get the JSON into the SQLite database table here?
I should add that simply replacing the double quotes with single quotes
String ag = JOString("A" + DBManage.agitation,1);
ag = ag.replaceAll("\"","'");
will not work since a subsequent attempt to extract JSON from the table
SELECT ifnull((select json_extract(preoccupancy,'$.P4') from moods where dow = '1' AND tsn = '0'),0);
would result in the error malformed JSON being emitted by the SQLite JSON1 extension.
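One way to check what the bound insert actually stored, as opposed to how an inspection tool displays it, is to read the column straight back; a minimal sketch, assuming the db handle and rowid from the insert above:
// If the insert bound {"A2":1}, that is exactly what getString() should
// return - parameter binding stores the text verbatim, with no escaping.
Cursor c = db.rawQuery("SELECT agitation FROM moods WHERE rowid = ?",
        new String[]{String.valueOf(rowid)});
if (c.moveToFirst()) {
    Log.d("MOODS", "stored agitation = " + c.getString(0));
}
c.close();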

SQLite language independent query of numbers

I have written an Android app for fun that has an internal SQLite database. After gathering a lot of data using an HTML parser, I found that the numbers saved as text in the database are written with English (Western) digits, so a query typed with Persian digits, as people in my country would do, returns nothing for numbers:
String q = "SELECT * FROM studentIDs WHERE field1 LIKE '%"+name+"%' OR field2 LIKE '%"+name+"%'";
While this works well on field1, which is a string, it won't work on field2, which holds numbers stored as strings. How should I perform a language-independent query on numbers?
I can't convert the characters from English to the other language, as I want support for both languages, and I can't change the column's type to integer because some records are English names.
Sorry about my English, and thanks in advance.
Since your data type is String, you can store any character sequence in it (depending on your SQLite encoding configuration, e.g. UTF-8), and SQLite doesn't care, and shouldn't care, about it.
You have a simple solution here:
Just write a simple mapper to run over any query before it goes to the database:
String mapToEn(String query) {
return query
.replace('۰', '0')
.replace('۱', '1')
.replace('۲', '2')
.replace('۳', '3')
.replace('۴', '4')
.replace('۵', '5')
.replace('۶', '6')
.replace('۷', '7')
.replace('۸', '8')
.replace('۹', '9');
}
And use it on your query or query parameters before executing against the database:
Cursor result = db.rawQuery(mapToEn(query), null); // db being your SQLiteDatabase
Edit:
Since you said
I can't change the column's type to integer because some records are English names
I thought the data in your field1 was a combination of numbers and characters; now that you have clarified that it only contains numeric or String data, you have another solution.
Database Schema Migration
Since your database schema doesn't match your requirements anymore, you have to make some changes to it. You need to differentiate the types of data you have entered, simply by adding two new columns, field_str and field_num.
Basically you should write a database migration responsible for converting the field1 column's data from String to Integer where it is an Integer, without losing any data. Here are the steps:
Add an Integer and a String column to your table, respectively field_num and field_str.
Iterate through the table, parse the Strings in field1, insert the parseable ones into the field_num column, and insert the unparseable ones into the field_str column.
Change your query accordingly.
Since SQLite's ALTER TABLE did not support dropping columns (before version 3.35), you either have to add the new columns to your existing table and leave the old data where it is, or you can create a new table and migrate all of your data to the new table:
Here is a hypothetical situation:
sqlite> .schema
CREATE TABLE some_table(
id INTEGER PRIMARY KEY,
field1 TEXT,
field2 TEXT
);
sqlite> SELECT * FROM some_table;
id          field1      field2
----------  ----------  ----------
0           1234        name
1           bahram      name
Now create another table
sqlite> CREATE TABLE new_some_table(
   ...> id INTEGER PRIMARY KEY,
   ...> field_str TEXT,
   ...> field_num INTEGER,
   ...> field2 TEXT
   ...> );
Now copy your data from the old table, putting purely numeric field1 values into field_num and the others into field_str (the original typeof(field1) check would not work here, as values inserted into a TEXT column have the type text):
sqlite> INSERT INTO new_some_table(id, field_str, field2)
   ...> SELECT id, field1, field2 FROM some_table WHERE field1 GLOB '*[^0-9]*';
sqlite> INSERT INTO new_some_table(id, field_num, field2)
   ...> SELECT id, CAST(field1 AS INTEGER), field2 FROM some_table WHERE field1 NOT GLOB '*[^0-9]*';
Now you can query your table based on what type of data you have.
Consider using an ORM which provides the migration tool, like Google's Room or dbflow.
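If you do move to Room, the same copy can be wrapped in one of its Migration objects; a minimal sketch of a version 1-to-2 migration, using the table and column names from the session above:
import androidx.room.migration.Migration;
import androidx.sqlite.db.SupportSQLiteDatabase;

static final Migration MIGRATION_1_2 = new Migration(1, 2) {
    @Override
    public void migrate(SupportSQLiteDatabase db) {
        // The same statements as the sqlite> session above, plus the
        // rename that retires the old table.
        db.execSQL("CREATE TABLE new_some_table("
                + "id INTEGER PRIMARY KEY, field_str TEXT, field_num INTEGER, field2 TEXT)");
        db.execSQL("INSERT INTO new_some_table(id, field_str, field2) "
                + "SELECT id, field1, field2 FROM some_table WHERE field1 GLOB '*[^0-9]*'");
        db.execSQL("INSERT INTO new_some_table(id, field_num, field2) "
                + "SELECT id, CAST(field1 AS INTEGER), field2 FROM some_table "
                + "WHERE field1 NOT GLOB '*[^0-9]*'");
        db.execSQL("DROP TABLE some_table");
        db.execSQL("ALTER TABLE new_some_table RENAME TO some_table");
    }
};
It would be registered with Room.databaseBuilder(...).addMigrations(MIGRATION_1_2).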
At a low level, a database stores its data in a file as encoded bytes; it is the database driver that interprets them and converts them into other data types as per the table schema.
So different languages' digits are also stored in the database as strings, no matter whether they are numbers or characters, and for each language you would have to put a converter in place.
So the idea, to make your database usable for your own and every other country's language, is to encode and decode: convert any country's numerals to English (Western) digits when saving into the database, and convert them back when reading out.
You would have to make such a conversion method for each language, or make your own library class so it can be used universally. Hope you find this useful.
My idea is to make a semi-translator for numbers: first check each char to see whether it is a digit in any language, using
static boolean isDigit(char ch) // from the java doc:
Determines if the specified character is a digit.
then use Character.getNumericValue(char).
The nice thing about getNumericValue(char) is that it also works with strings like "٥" and "५", where ٥ and ५ are the digit 5 in Eastern Arabic and Hindi/Sanskrit respectively.
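A minimal sketch of that idea (the method name is illustrative): normalise every Unicode digit to its Western 0-9 equivalent before building the query, leaving all other characters untouched:
// Maps any Unicode digit (Persian, Eastern Arabic, Devanagari, ...) to its
// 0-9 value via Character.getNumericValue; non-digits pass through as-is.
static String normaliseDigits(String s) {
    StringBuilder sb = new StringBuilder(s.length());
    for (int i = 0; i < s.length(); i++) {
        char ch = s.charAt(i);
        if (Character.isDigit(ch)) {
            sb.append(Character.getNumericValue(ch)); // e.g. '۵' -> 5
        } else {
            sb.append(ch);
        }
    }
    return sb.toString();
}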

Creating a 2D Array from a database table

Could someone show me the simplest way to pass the data from an SQLite table to a 2D array?
Let's assume a table with the columns: _id, date, value.
I assume I have to count the records in the table before I define the array.
The way that I would usually do this is by creating a small class (a struct-like holder) with the necessary properties (id, date, value in your case), and then having an array to hold the instances of that class; a sketch follows.
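A minimal sketch of that approach with Android's SQLite API, assuming an open SQLiteDatabase db and the table described in the question; using a List avoids having to count the records first:
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import java.util.ArrayList;
import java.util.List;

class Row {
    long id;
    String date;
    double value;
}

static List<Row> loadRows(SQLiteDatabase db) {
    List<Row> rows = new ArrayList<>();
    Cursor c = db.rawQuery("SELECT _id, date, value FROM the_table", null);
    try {
        while (c.moveToNext()) {          // one Row instance per table row
            Row r = new Row();
            r.id = c.getLong(0);
            r.date = c.getString(1);
            r.value = c.getDouble(2);
            rows.add(r);
        }
    } finally {
        c.close();
    }
    return rows;
}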
