One problem I have is with a 'like system', such as Facebook's like system.
I am building the program on a foreign hosting site that offers only MyISAM MySQL.
As you know, MyISAM MySQL has no transaction system,
so it cannot set keys like foreign keys to represent relations or to enforce referential integrity.
For the 'like system' I am building, I think the important part is how to represent and enforce that referential integrity.
For instance, say I create content A and it has 3 'Likes'. If the content is deleted, the Likes it already has must also be deleted.
But in MyISAM MySQL this is impossible, because MyISAM cannot enforce referential integrity between tables.
For this reason I regret the choice I made of MyISAM MySQL at the first step.
Even though I know InnoDB can handle these problems, I have already spent a lot of time on this project.
So how can I add referential integrity to my project (the like system), or is there any other method to solve this situation?
If you can't change the database engine to InnoDB and want to continue with MyISAM, triggers will be useful for you.
Syntax
CREATE TRIGGER trigger_name
BEFORE DELETE
ON table_name FOR EACH ROW
BEGIN
-- variable declarations
-- trigger code
END;
Example
DELIMITER //
CREATE TRIGGER delete_likes
BEFORE DELETE
ON tbl_content FOR EACH ROW
BEGIN
DELETE FROM tbl_likes WHERE content_id = OLD.id;
END//
DELIMITER ;
Let's say you have 2 tables named tbl_content and tbl_likes.
The column content_id in tbl_likes is a reference to the id column in tbl_content.
The sample trigger named delete_likes will fire before a row in tbl_content is deleted and will delete the related rows from tbl_likes.
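For reference, here is a minimal sketch of the table layouts this example assumes; only id and content_id come from the answer above, the other columns are illustrative:

CREATE TABLE tbl_content (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    body TEXT
) ENGINE=MyISAM;

CREATE TABLE tbl_likes (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    content_id INT NOT NULL  -- points at tbl_content.id by convention only; MyISAM cannot enforce it
) ENGINE=MyISAM;

With the trigger in place, deleting a content row also removes its likes:

DELETE FROM tbl_content WHERE id = 1;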
My understanding of SQLite transactions on Android is based largely on this article. The gist of it is that
if you do not wrap calls to SQLite in an explicit transaction it will
create an implicit transaction for you. A consequence of these
implicit transactions is a loss of speed.
That observation is correct - I started using transactions to fix just that issue: speed. In my own Android app I use a number of rather complex SQLite tables to store JSON data, which I manipulate via the SQLite JSON1 extension - I use SQLCipher, which has JSON1 built in.
At any given time I have to manipulate - insert, update or delete - rows in several tables. Given the complexity of the JSON, I do this with the help of temporary tables I create for each table manipulation. Each manipulation begins with SQL along the lines of
DROP TABLE IF EXISTS h1;
CREATE TEMP TABLE h1(v1 TEXT,v2 TEXT,v3 TEXT,v4 TEXT,v5 TEXT);
Some manipulations require just one temp table - which I usually call h1 - others need two, in which case I call them h1 and h2.
The entire sequence of operations in any single set of manipulations takes the form (sketched in SQL below):
begin transaction
  manipulate Table 1, which
    creates its own temp tables, h1[h2]
    extracts the relevant existing JSON from Table 1 into the temps
    manipulates h1[h2]
    performs inserts, updates, deletes in Table 1
  on to the next table, Table 2, where the same sequence is repeated
  continue with a variable list of such tables - never more than 5
end transaction
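Concretely, one pass of that loop looks something like this, with table1, its doc column, and the '$.title' path standing in for my real names:

BEGIN TRANSACTION;

-- Table 1: rebuild the scratch table and pull the relevant JSON into it
-- (table1, doc and '$.title' are placeholders)
DROP TABLE IF EXISTS h1;
CREATE TEMP TABLE h1(v1 TEXT,v2 TEXT,v3 TEXT,v4 TEXT,v5 TEXT);
INSERT INTO h1(v1)
SELECT json_extract(doc, '$.title') FROM table1;

-- manipulate h1, then write the results back into Table 1
UPDATE table1 SET doc = json_set(doc, '$.title', (SELECT v1 FROM h1 LIMIT 1));

-- Table 2 onwards: the same pattern repeats

COMMIT;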
My questions
does this sound like an efficient way to do things or would it be better to wrap each individual table operation in its own transaction?
it is not clear to me what happens to my DROP TABLE/CREATE TEMP TABLE calls. If I end up with h1[h2] temp tables that are pre-populated with data from manipulating Table(n - 1) when working with Table(n) then the updates on Table(n) will go totally wrong. I am assuming that the DROP TABLE bit I have is taking care of this issue. Am I right in assuming this?
I have to admit to not being an expert with SQL, even less so with SQLite, and quite a newbie when it comes to using transactions. The SQLite JSON extension is very powerful, but it introduces a whole new level of complexity when manipulating data.
The main reason to use transactions is to reduce the overheads of writing to the disk.
So if you don't wrap multiple changes (inserts, deletes and updates) in a transaction, then each will result in the database being written to disk, with the overheads involved.
If you wrap them in a transaction, the in-memory version will be written only when the transaction is completed (note that if using the SQLiteDatabase beginTransaction/endTransaction methods, as you should, you need to call the setTransactionSuccessful method before calling the endTransaction method).
That is, the SQLiteDatabase methods are different from doing this via pure SQL, where you'd begin the transaction and then end/commit it (without setTransactionSuccessful, the SQLiteDatabase methods automatically roll back the transaction).
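In pure SQL the pattern is simply the following (the table and column names are placeholders):

BEGIN TRANSACTION;
UPDATE table1 SET v1 = 'new value' WHERE rowid = 1; -- placeholder statements
DELETE FROM table2 WHERE rowid = 2;
COMMIT; -- or ROLLBACK; to discard all the changes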
That said, the statement :-
if you do not wrap calls to SQLite in an explicit transaction it will
create an implicit transaction for you. A consequence of these
implicit transactions is a loss of speed
basically reiterates :-
Any command that changes the database (basically, any SQL command
other than SELECT) will automatically start a transaction if one is
not already in effect. Automatically started transactions are
committed when the last query finishes.
from SQL As Understood By SQLite - BEGIN TRANSACTION; i.e. it's not Android-specific.
does this sound like an efficient way to do things or would it be
better to wrap each individual table operation in its own transaction?
Doing all the operations in a single transaction will be more efficient as there is just the single write to disk operation.
it is not clear to me what happens to my DROP TABLE/CREATE TEMP TABLE
calls. If I end up with h1[h2] temp tables that are pre-populated with
data from manipulating Table(n - 1) when working with Table(n) then
the updates on Table(n) will go totally wrong. I am assuming that the
DROP TABLE bit I have is taking care of this issue. Am I right in
assuming this?
Dropping the tables will ensure data integrity (i.e. you should, by the sound of it, do this); you could also use :-
CREATE TEMP TABLE IF NOT EXISTS h1(v1 TEXT,v2 TEXT,v3 TEXT,v4 TEXT,v5 TEXT);
DELETE FROM h1;
I am developing an Odoo 8 (OpenERP) application. That application uses a PostgreSQL database. In the Odoo 8 backend there is an 'add sale order' button. So, I want to know: how can I find out which data changed in the last 5 seconds? My need is that I want to insert data from a mobile app. Which tables changed? Is there a query to do that? Or another suggestion?
The database has 314 tables. Is there perhaps a third-party application like MONYog?
Any help is very appreciated.
For this kind of situation, the best way to manage it is:
Always use two columns in each table, "CreatedOn" and "LastUpdatedOn", and insert proper values into them: on creation, set both to the current time; then on every update, just set LastUpdatedOn to the current time. You will then easily get the data your requirement asks for.
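As a sketch, assuming PostgreSQL and illustrative table/column names:

CREATE TABLE my_table (
    id serial PRIMARY KEY,
    payload text,
    created_on timestamp NOT NULL DEFAULT current_timestamp,
    last_updated_on timestamp NOT NULL DEFAULT current_timestamp -- set to the current time on every UPDATE
);

-- rows that changed in the last 5 seconds:
SELECT * FROM my_table
WHERE last_updated_on > current_timestamp - interval '5 seconds';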
You could also add a function like this:
create or replace function notify_table_change() RETURNS TRIGGER
LANGUAGE PLPGSQL AS
$$
BEGIN
  -- pg_notify is used because plain NOTIFY cannot take a dynamic payload in PL/pgSQL
  PERFORM pg_notify('all_writes', TG_RELNAME);
  IF TG_OP = 'DELETE' THEN RETURN OLD; ELSE RETURN NEW; END IF;
END;
$$;
Then you could add the trigger for all inserts, updates, and deletes (but not truncate since I didn't handle that).
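For example, for one table (some_table is a placeholder; repeat this per table you want to watch):

CREATE TRIGGER some_table_notify
AFTER INSERT OR UPDATE OR DELETE ON some_table
FOR EACH ROW EXECUTE PROCEDURE notify_table_change();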
Then the client can:
LISTEN all_writes;
And will get live notifications of which tables are updated in real time.
Monitoring the database feels like a bit of a strange approach to the problem. Writing custom modules for odoo (and OpenERP) is very simple and straightforward. I'd create a module which triggers whatever you want to do.
Here is a brief example of the simplest OpenERP / odoo module:
from osv import osv

class my_custom_module(osv.osv):
    _inherit = 'sale.order'
    _name = 'sale.order'

    def create(self, cr, uid, vals, ctx={}):
        # your code here: whatever you want to do when a new sale.order object is created
        return super(my_custom_module, self).create(cr, uid, vals, ctx)

my_custom_module()
Does this help?
In a live commercial app, the method I use is a creation and update date/time stamp column.
Example
create table foo (
foo_id serial not null unique,
created_timestamp timestamp not null default current_timestamp,
updated_timestamp timestamp not null default current_timestamp
) with oids;
The trigger function to do the work (a skeleton; what I want to do goes in the body):
CREATE OR REPLACE FUNCTION check_table_to_do_update() RETURNS TRIGGER
LANGUAGE plpgsql AS
$$
BEGIN
  -- what I want to do
  RETURN NEW;
END;
$$;
The trigger on the table
CREATE TRIGGER check_update_foo
AFTER UPDATE ON foo
FOR EACH ROW
WHEN (OLD.updated_timestamp IS DISTINCT FROM NEW.updated_timestamp)
EXECUTE PROCEDURE check_table_to_do_update();
Anything else can put an unnecessary overhead on the system.
All the best
I've got two SQLite databases, each with a table that I need to keep synchronized by merging rows that have the same key. The tables are laid out like this:
CREATE TABLE titles ( name TEXT PRIMARY KEY,
chapter TEXT ,
page INTEGER DEFAULT 1 ,
updated INTEGER DEFAULT 0 );
I want to be able to run the same commands on each of the two tables, with the result that for pairs of rows with the same name, whichever row has the greater value in updated will overwrite the other row completely, and rows which do not have a match are copied across, so both tables are identical when finished.
This is for an Android app, so I could feasibly do the comparisons in Java, but I'd prefer an SQLite solution if possible. I'm not very experienced with SQL, so the more explanation you can give, the more it'll help.
EDIT
To clarify: I need something I can execute at an arbitrary time, to be invoked by other code. One of the two databases is not always present, and may not be completely intact when operations on the other occur, so I don't think a trigger will work.
Assuming that you have attached the other database to your main database:
ATTACH '/some/where/.../the/other/db-file' AS other;
you can first delete all records that are to be overwritten because their updated field is smaller than the corresponding updated field in the other table:
DELETE FROM main.titles
WHERE updated < (SELECT updated
FROM other.titles
WHERE other.titles.name = main.titles.name);
and then copy all newer and missing records:
INSERT INTO main.titles
SELECT * FROM other.titles
WHERE name NOT IN (SELECT name
FROM main.titles);
To update in the other direction, exchange the main/other database names.
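That is, the second pass is the same two statements with the databases swapped:

DELETE FROM other.titles
WHERE updated < (SELECT updated
                 FROM main.titles
                 WHERE main.titles.name = other.titles.name);

INSERT INTO other.titles
SELECT * FROM main.titles
WHERE name NOT IN (SELECT name
                   FROM other.titles);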
For this, you can use a trigger.
i.e.
CREATE TRIGGER sync_trigger
AFTER INSERT OR UPDATE OF updated ON titles
REFERENCING NEW AS n
FOR EACH ROW
DECLARE updated_match;
DECLARE prime_name;
DECLARE max_updated;
BEGIN
SET prime_name = n.name;
ATTACH database2name AS db2;
SELECT updated
INTO updated_match
FROM db2.titles t
WHERE t.name=prime_name;
IF updated_match is not null THEN
IF n.updated > updated_match THEN
SET max_updated=n.updated;
ELSE
SET max_updated=updated_match;
END IF;
UPDATE titles
SET updated=max_updated
WHERE name=prime_name;
UPDATE db2.titles
SET updated=max_updated
WHERE name=prime_name;
END IF;
END sync_trigger;
The syntax may be a little off. I don't use triggers all that often, and this is a fairly complex one, but it should give you an idea of where to start at least. You will need to assign this to one database, exchanging "database2name" for the other database's name, and then assign it again to the other database, swapping "database2name" out for the first database's name.
Hope this helps.
I have two tables, SyncedComments and QueuedComments. The latter holds local comments until they are synced with a web server; when they are synced successfully they get placed in the synced table. My application should be indifferent to each type. I load in the comments through a CursorLoader, and they may be moved to the synced table while users are reading them. Let's say the user can also edit comments, perhaps while they are being moved, so the application should know where the comment is, regardless of its table.
To support this, I've thought of having a table with 3 columns: local_id, synced_id and queued_id. The local_id is persistent and simply serves as a reference to either one of the two other IDs. When a comment is created, a new row is inserted with its synced_id set to NULL and its queued_id set to the ID it was given; when a comment is moved, the queued_id is set to NULL and the synced_id is set. This way my application only ever needs to reference the local ID.
How does this solution look? Any flaws? Could it be done smarter?
I would in the first place put all the comments in one table, with a flag for whether the comment is synchronized (actually it would probably be the ID on the server, set to NULL until synchronized and to the value obtained from the server afterwards). That takes you down to 1 table instead of 3, makes it easier to show all comments (because you won't need a union), and above all avoids problems when a comment is synchronized while being shown, because the comment will not be moving anywhere. It also does fewer writes to the database file, so it causes less fragmentation and fewer writes to the flash device.
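A minimal sketch of that single table (server_id is the flag/server ID described above; the other columns are illustrative):

CREATE TABLE comments (
    local_id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER, -- NULL until synced, then the ID obtained from the server
    body TEXT NOT NULL
);

-- comments still waiting to be synced:
SELECT * FROM comments WHERE server_id IS NULL;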
So my fundamentals of creating and manipulating databases are a bit messed up. My aim here is that whenever the app is launched, the user is allowed to specify a table name, and whatever data is then collected is put into that table.
However, I'm confused as to how to do this. Do I simply pass the value of a user entered variable as the table name in my contentprovider class and execute sqlite statements to create it?
I've read/am reading the documentation already, so if anyone has any insight or clarity, or even better, code snippets, it would be great.
Why not simply use one table, and create a value that stands for the current app session, and insert that value with each row? This would make your code simpler, and would still allow you to segregate/filter out the values from a particular app session. If you want to give the user the ability to enter the value (as you are giving them the ability to choose the table name), you'd just want to check whether that value had already been used, just as you would have to check whether the table name had already been used.
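A sketch of that idea (all names are illustrative):

CREATE TABLE collected_data (
    _id INTEGER PRIMARY KEY AUTOINCREMENT,
    session_name TEXT NOT NULL, -- the user-entered value, replacing a per-user table name
    value TEXT
);

-- check whether the value has been used before:
SELECT EXISTS (SELECT 1 FROM collected_data WHERE session_name = 'my_session');

-- everything from one session:
SELECT * FROM collected_data WHERE session_name = 'my_session';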