I have a sequence of operations: an insert, an after-insert trigger that performs an update, and a query. My problem is that the data returned by the query does not reflect the updates performed by the trigger (even though the data in the database is indeed updated).
I'm using PHP, PostgreSQL and the Propel 1.6 ORM (although this problem might be database- and ORM-agnostic).
I have the following sequence:
1. Use AJAX to insert a new row (a vote) in the "book_vote" table (with columns: 'book_id', 'book_score', 'voter_id').
2. Have a PostgreSQL AFTER INSERT trigger update the corresponding "vote_count" and "average_score" columns in the "book" table.
3. Make a query to get the new "vote_count" and "average_score" data for the "book" and send that data back to the client with AJAX (to display updated values after the vote).
This all happens within the same PHP session, and my problem is that I do not get the updated "book" values in the AJAX response. It seems as if the query is performed before the database trigger has run. Is there any way to ensure the query happens after the trigger?
It seems to me that you are doing a save on a Propel object, and you have an external trigger that modifies that row further. You need to be able to tell Propel that the object needs refreshing immediately after the insert/update... and thankfully Propel supports that directly. In your table element in your schema, do this:
<table name="book_vote" reloadOnInsert="true" reloadOnUpdate="true">
You can use either or both of the reload attributes, depending on requirements (in your case you'll probably just want the insert one). From there, just rebuild your model and it should work fine. More details here.
Addendum: as per discussion, the issue appears to be that you have already loaded the foreign row that you wish to reload, and as such it is being retrieved from the instance pool rather than the database. You've found that BookPeer::clearInstancePool() solved it for you - great! For bonus points, see if you can remove items from the pool individually - it is probably more efficient to allow the pool to run normally and to remove items one at a time, rather than to clear the pool for a whole table. Let us know if this is possible!
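For reference, the generated peer classes in Propel 1.x also expose removeInstanceFromPool(), so the more surgical version might look like this sketch (assuming a generated Book model with a single-column primary key):

// Remove just the one cached Book from Propel's instance pool, so the
// next lookup re-reads it from the database instead of the pool.
BookPeer::removeInstanceFromPool($book);   // accepts the object or its primary key

// This now hits the database and sees the trigger's changes.
$freshBook = BookQuery::create()->findPk($book->getId());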
Check your ORM's documentation for how to refetch data after an update.
There may be a per-column option for this, or you may have to invoke a method explicitly.
The same problem occurs on insert when a column has a database-side default value.
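In Propel 1.x, for instance, the refetch is a single method call; a minimal sketch (model and values are illustrative):

// Save the new row, then re-read it so that columns filled in by the
// database (defaults, triggers) are visible on the PHP object.
$vote = new BookVote();
$vote->setBookId(42);
$vote->setBookScore(5);
$vote->save();

$vote->reload();   // SELECTs the row again and overwrites the local values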
Got a big problem that's confusing as hell. I'm using Laravel (3.2.5 and now 3.2.7) with the Eloquent ORM to update a PostgreSQL database.
Here's what I'm doing:
- I have a db full of data
- I pull info from an external API to update my db full of data
- I run a script that puts the db data into arrays, does the same with the API data, and compares the two
- I fill a data object with an array full of changes
- I "save" it
- nothing happens -.-
// Compute only the key/value pairs that actually changed
$updateLinks = array_diff($dbLinkArray, $dbLinkArrayOriginal);
// Mass-assign the changed values onto the model
$dbLink->fill($updateLinks);
Log::info('1st LOG Original: '.$dbLinkArrayOriginal['link_text'].' New: '.$dbLinkArray['link_text']);
Log::info('2nd Log Dirty: '.implode(', ', $dbLink->get_dirty()));
$dbLink->save();
Log::info('3rd Log Supposed to be changed: '.implode(', ', array_keys($updateLinks)));
I employed some logging and the debug toolbar to figure out what happened. Here's the info:
- All the SQL queries run with the correct update information. When a query is run via phpPgAdmin, it updates as it should. The problem here is that the query updates EVERY column in the row instead of just the changes. Using "update" instead of "fill/save" creates the same problem.
- None of the table information ever gets updated.
- The 1st log shows that the link_text values aren't equal, which is okay because it shows the link_text needs to be updated. However, it's also a clear indicator that nothing was updated: the next time I run my script the log shows the same info, and just as many log events happen.
- The 2nd log shows that the ENTIRE object is dirty rather than just what was supposed to be updated. This is why the SQL updates every column.
- The 3rd log spits out exactly what's supposed to be updated: 3-5 columns max, and all in the correct format.
Any idea why, first of all, the database is not getting updated even though Laravel marks the SQL as being run and shows the correct query?
Also, any idea why the ENTIRE object is dirty and the query tries to update the entire object (23+ columns) instead of only the changes (3-5 columns)?
For your second question (why all columns update instead of just the dirty ones), the Laravel documentation states:
By default, all attribute key/value pairs will be stored during mass-assignment. However, it is possible to create a white-list of attributes that will be set. If the accessible attribute white-list is set then no attributes other than those specified will be set during mass-assignment.
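In Laravel 3 that white-list is the static $accessible property on the model - a minimal sketch, with placeholder column names:

class Link extends Eloquent {

    // Only these attributes may be set through mass-assignment (fill()),
    // so fill() no longer marks every column of the model dirty.
    public static $accessible = array('link_text', 'link_url', 'link_order');

}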
Does this help you?
Kind regards,
Hendrik
Basically, I am trying to create an interface that will tell an administrator "Hey, we ran this query, and we weren't so sure about it, so if it broke things click here to undo it".
The easiest way I can think to do this is to somehow figure out what tables and cells an identified "risky" query writes to, and store this data along with some bookkeeping data in a "backups" table, so that if necessary the fields can be repopulated with their original contents.
How do I go about figuring out which fields get overwritten by a particular (possibly complicated) MySQL command?
Edit: "risky" in terms of completing successfully but doing unwanted things, not in terms of throwing an error or failing and leaving the system in an inconsistent state.
I suggest the following things:
- add an AFTER UPDATE trigger to every table you want to monitor
- create a copy of every table (example: [yourtable]_backup) you want to monitor
- in all AFTER UPDATE triggers, add code: INSERT INTO yourtable_backup VALUES(OLD.field1, OLD.field2..., OLD.fieldN)
How it works: the AFTER UPDATE trigger detects an update of the table and backs up the old values into the backup table.
A note on storage engines: triggers work with both MyISAM and InnoDB tables, but if you also want to be able to roll changes back in a transaction you will need the InnoDB format.
You may add a timestamp field to the backup tables to know when each row was inserted.
Documentation: http://dev.mysql.com/doc/refman/5.5/en/create-trigger.html
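For illustration, the one-time setup could be scripted from PHP with PDO - a minimal sketch with placeholder table and column names (the trigger body is a single statement, so it can be sent through PDO without DELIMITER tricks):

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

// Backup table with the same structure as the monitored table,
// plus a timestamp so you know when each backup row was written.
$pdo->exec("CREATE TABLE yourtable_backup LIKE yourtable");
$pdo->exec("ALTER TABLE yourtable_backup ADD COLUMN backed_up_at DATETIME");

// On every update, stash the OLD row in the backup table.
// List every column of yourtable in place of field1..fieldN.
$pdo->exec("
    CREATE TRIGGER yourtable_au AFTER UPDATE ON yourtable
    FOR EACH ROW
      INSERT INTO yourtable_backup
      VALUES (OLD.field1, OLD.field2, OLD.fieldN, NOW())
");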
Is it possible to queue client requests for accessing a database in MySQL? I am trying to do this for concurrency management. MySQL locks can be used, but somehow I am not able to get the desired outcome.
Effectively what I am trying to do is:
INSERT something in a new row
SELECT a column from that row
Store that value in a variable
The issue comes up when two different clients INSERT at the same time, and so both clients' variables store the value of the last INSERT.
I worked out the following alternative, but it failed in a few test runs, and the bug is quite evident:
INSERT
LOCK Table
SELECT
Store
UNLOCK
Thanks!
My best guess is that you have an auto-increment column and want to get its value after inserting a row. One option is to use LAST_INSERT_ID() (details here and here).
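With PDO, for instance, a minimal sketch (table and column names are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

$stmt = $pdo->prepare("INSERT INTO items (name) VALUES (?)");
$stmt->execute(array('example'));

// LAST_INSERT_ID() is tracked per connection, so two clients inserting
// at the same time each get the id of their own insert - no locking needed.
$newId = $pdo->lastInsertId();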
If this is not applicable, then please post some more details. What exactly are you trying to do and what queries are being fired?
I have a feature in my project where I have to show the user a preview of how the page will look after submitting the form. For the preview, I set the related Propel object's fields from the form values and in the end do not save anything, as this is only a preview.
This works, but the previous values of the related table get deleted, and after the preview the related tables are not restored to their previous state since I never save the object. Is this a bug? I don't want to save any values to any table; I just want to use the object to show the preview.
Is there a proper way of doing this?
EDIT: I will rephrase this question. If I don't save the Propel object, will the changes affect the tables? Right now, if I don't save, the main table is unaffected, but the relations are affected and are not restored to their old values.
Eg: I have two tables, job and jobsectors with foreign key relationship. I do $job->addJobsector('someSector');
I don't save the object, but the previous value in jobsector is deleted and there is no new value.
Thanks
I resolved it. Whenever functions starting with initTablename() are used, it seems the previous values get deleted, so I just don't call these functions for the preview. And not saving the object will not store any data in the database.
Thanks
Try a transaction: start a transaction, save all the needed data, display what you need, and then roll the transaction back. That way nothing you did inside the transaction (saving, deleting) is permanently stored in the database.
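With Propel 1.6 that looks roughly like this (assuming the generated Job model from the question and a hypothetical renderPreview() helper):

$con = Propel::getConnection(JobPeer::DATABASE_NAME);
$con->beginTransaction();
try {
    $job->addJobsector('someSector');
    $job->save($con);                   // written inside the transaction only

    $previewHtml = renderPreview($job); // hypothetical: build the preview output
} catch (Exception $e) {
    // make sure a failure doesn't leave the transaction open
    $con->rollBack();
    throw $e;
}
$con->rollBack();   // discard everything - nothing is permanently stored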
I have a mysql database. What I'd like to do is perform an arbitrary action on it, and then figure out what changed. Something like this:
//assume connection to db already established
before();          // saves db state
perform_action();  // does stuff to db
diff();            // prints what happened
I'd want it to output something like:
Row added in table_0 [details]
Row added in table_1 [details]
Row modified in table_5 [details]
Row deleted in table_2 [details]
Any ideas?
To further clarify: You know how on stackoverflow, if you check a post's edits, you can see red lines/green highlights indicating what's been changed? I want something like that, but for mysql databases.
Instead of copying your whole database in order to save its state for a later diff, you might be better off using triggers:
http://dev.mysql.com/doc/refman/5.0/en/triggers.html
When you set up appropriate triggers, you can log changes to a table - for example, you can set up a trigger that automatically logs the old values and the new values for every update. To see the changes, query the table that the trigger fills.
Of course, the trigger is not restricted to changes made by your application, it will also log updates done by other applications. But this is also the case if you diff the old version of the database with the new version of the database.
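For example, a change-log trigger for one monitored column might look like this (table and column names are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

// Change-log table: one row per update, with the old and new values.
$pdo->exec("
    CREATE TABLE price_changes (
        product_id INT,
        old_price  DECIMAL(10,2),
        new_price  DECIMAL(10,2),
        changed_at DATETIME
    )
");

// Log the old and new price of every update, whoever made it.
$pdo->exec("
    CREATE TRIGGER products_log_price AFTER UPDATE ON products
    FOR EACH ROW
      INSERT INTO price_changes
      VALUES (OLD.id, OLD.price, NEW.price, NOW())
");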
I think normally your application would log any interesting changes as it makes them. Or you would set up history tables for everything with datetimes.
To do it the way you describe, you could dump the contents of the database into a file before and after your action and do a diff on the two files. In php, you can check out xdiff: http://us.php.net/manual/en/book.xdiff.php
If this is something you're doing only occasionally in controlled circumstances to test some queries you're not sure about, you can dump and diff on the command line.
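A rough sketch of the in-PHP variant, assuming the PECL xdiff extension is installed and mysqldump is on the path (credentials omitted):

// Dump the database before and after the action, then diff the dumps.
// --skip-extended-insert puts one row per INSERT line, so the diff is readable.
$before = shell_exec('mysqldump --skip-comments --skip-extended-insert mydb');

perform_action();   // the code under test, from the question

$after = shell_exec('mysqldump --skip-comments --skip-extended-insert mydb');

// Unified diff of the two dumps: added/removed INSERT lines show the changes.
echo xdiff_string_diff($before, $after);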
One way is to parse the log files, which will give you the exact SQL statements executed against your database. I'm not exactly sure how to separate the SQL statements made by your application from those of other applications, though (if that's the case for you).
The only thing I can think of is to do some combination of a few somewhat hacky things:
- Save a [temporary?] table of row IDs, to check for new rows. If you need to know what was in deleted or modified rows before, you'll need to copy the whole DB, which would be rather messy.
- Have each row carry a datestamp that gets modified on update; grab the rows whose updated datestamp is newer than when the analysis started.
- Put a layer between your application and the database (if you have something like the classic $db->query(), this is easy) and log the queries sent, which can then be examined - see the sketch below.
I suppose the real question is whether you want to know what queries are being executed against the DB, or what the queries you're running are actually doing.
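For the last option, a minimal sketch of such a logging layer (class name and log destination are made up):

class LoggingDb
{
    private $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    // Same shape as the classic $db->query(), but every statement is
    // appended to a log file before it reaches the database.
    public function query($sql, array $params = array())
    {
        file_put_contents('queries.log', date('c').' '.$sql."\n", FILE_APPEND);
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    }
}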