I have a feature in my project where I have to show the user a preview of how the page will look after submitting the form. For the preview, I set the related Propel object with the form values and, in the end, never save anything, since this is only a preview.
This works, but the previous values in the related table get deleted, and after the preview the related tables are not restored to their previous state, even though I never save the object. Is this a bug? I don't want to save any values to any table; I just want to use the object to show the preview.
Is there a proper way of doing this?
EDIT: Let me rephrase the question. If I don't save the Propel object, will the changes still hit the tables? Right now, if I don't save, the main table is unaffected, but the related tables are affected and are not restored to their old values when the object is not saved.
E.g.: I have two tables, job and jobsector, with a foreign key relationship. I do $job->addJobsector('someSector');
I don't save the object, yet the previous value in jobsector is deleted and no new value is inserted.
Thanks
I resolved it. Whenever the generated initTablename() functions are used, the previous values get deleted, so I simply don't call them for the preview. And not saving the object means nothing is written to the database.
Thanks
Try a transaction: start a transaction, save all the needed data, display what you need, then roll the transaction back. Everything you did inside the transaction (saving, deleting) is discarded, so nothing ends up in the database.
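A minimal sketch with Propel 1.x, reusing the job/jobsector names from the question (renderPreview() is a hypothetical rendering step):

$con = Propel::getConnection(JobPeer::DATABASE_NAME);
$con->beginTransaction();
try {
    $job->addJobsector($sector); // stage the preview data
    $job->save($con);            // write it so related queries can see it
    $preview = renderPreview($job);
    $con->rollBack();            // discard everything; nothing persists
} catch (Exception $e) {
    $con->rollBack();
    throw $e;
}

One side effect to be aware of: rolled-back inserts still consume auto-increment IDs.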
Hey guys, I have researched and tested a few methods for logging user activity, such as when a user updates his profile details or updates his status in a task.
What I need to log:
User ID from session
Table being updated
Field Name
Old Value
New Value
Timestamps
Method 1:
Run an additional query along with the insert/update/delete query to store details.
Method 2:
Using http://packalyst.com/packages/package/regulus/activity-log
In both of the above methods I have to write extra code for every create/update/delete. I would like to know whether there is a better way to handle this problem.
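For context, Method 1 boils down to something like the sketch below (Laravel 4 Eloquent; the Task model and the activity_log table with its columns are illustrative):

$task = Task::find($id);
$task->status = Input::get('status');

// One audit row per changed field, covering every item in the list above
foreach ($task->getDirty() as $field => $newValue) {
    DB::table('activity_log')->insert(array(
        'user_id'    => Auth::user()->id,          // user ID from session
        'table_name' => $task->getTable(),         // table being updated
        'field'      => $field,                    // field name
        'old_value'  => $task->getOriginal($field),
        'new_value'  => $newValue,
        'created_at' => date('Y-m-d H:i:s'),       // timestamp
    ));
}

$task->save();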
You want to store revisions of the data being manipulated by the user.
This calls for Revisionable.
Revisionable works via a trait implementation. For every action the user makes, the old and new value of each changed column are stored in a separate table. You can then query that table to get the changes made by the user.
Please note that the Revisionable version quoted above doesn't store INSERT actions.
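To give an idea of the wiring, a sketch (trait and accessor names follow the package README; double-check them against the version you install):

use Venturecraft\Revisionable\RevisionableTrait;

class Task extends Eloquent
{
    use RevisionableTrait; // updates are logged to the revisions table
}

// Reading the log back:
foreach ($task->revisionHistory as $revision) {
    echo $revision->fieldName();  // column that changed
    echo $revision->oldValue();   // previous value
    echo $revision->newValue();   // new value
    echo $revision->user_id;      // who made the change
    echo $revision->created_at;   // when
}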
A few days ago I created such a package which, unlike VentureCraft's, logs only static data: tables and values. No FKs, no model names, etc.
It also handles revisions differently, which makes it much easier to, for example, compare any two versions, since it doesn't log a single field change per row, but all the data involved per row.
Check this out: Sofa/Revisionable
It's pretty young and will be improved.
It's also not Eloquent-specific, but it works out of the box with Laravel 4. You simply download it, adjust the config if needed, add a few lines of code to your models, and it's ready to go.
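Hooking it up should be roughly this (illustrative only - the exact trait name and namespace depend on the package version, so follow the README):

class Job extends Eloquent
{
    // Assumed trait name/namespace; verify against Sofa/Revisionable's docs
    use \Sofa\Revisionable\Laravel\RevisionableTrait;
}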
I have a sequence of insert, AFTER INSERT trigger update, and query operations, and my problem is that the data returned by the query does not reflect the updates performed by the trigger (even though the data in the database is indeed updated).
I'm using PHP, PostgreSQL and Propel 1.6 ORM (although this problem might be database and ORM agnostic).
I have the following sequence:
Use AJAX to insert a new row (a vote) into the "book_vote" table (columns: 'book_id', 'book_score', 'voter_id').
A PostgreSQL AFTER INSERT trigger then updates the corresponding "vote_count" and "average_score" columns in the "book" table.
Query the new "vote_count" and "average_score" values for the "book" and send them back to the client via AJAX (to display the updated values after the vote).
This all happens within the same PHP session, and my problem is that I do not get the updated "book" values in the AJAX response. It seems as though the query runs before the database trigger does. Is there any way to ensure the query happens after the trigger?
It seems to me that you are doing a save on a Propel object, and you have an external trigger that modifies that row further. You need to be able to tell Propel that the object needs refreshing immediately after the insert/update... and thankfully Propel supports that directly. In your table element in your schema, do this:
<table name="book_vote" reloadOnInsert="true" reloadOnUpdate="true">
You can use either or both of the reload attributes, depending on your requirements (in your case you'll probably just want the insert one). From there, just rebuild your model and it should work fine. More details are in the Propel schema documentation.
Addendum: as per the discussion, the issue appears to be that you had already loaded the foreign row you wish to reload, so it was being served from the instance pool rather than the database. You've found that BookPeer::clearInstancePool() solved it for you - great! For bonus points, see if you can remove items from the pool individually: it is probably more efficient to let the pool run normally and remove items one at a time than to clear the pool for a whole table. Let us know if this is possible!
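For reference, Propel 1.6's generated peer classes do expose per-object removal, so the surgical version would look roughly like this (a sketch against the generated Book classes):

// Coarse: forget every pooled Book instance
BookPeer::clearInstancePool();

// Surgical: forget only the stale row; other cached instances survive.
// removeInstanceFromPool() accepts the object itself or its primary key.
BookPeer::removeInstanceFromPool($book);
$book = BookQuery::create()->findPk($bookId); // now re-read from the database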
See the ORM documentation for how to re-fetch data after an update.
There may be a per-column option, or you may have to invoke a method yourself.
The same problem occurs on insert when a column has a database-side default value.
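In Propel 1.6, for instance, the re-fetch call is reload() on the object; a sketch, assuming generated accessors for the columns in the question above:

$vote->save();    // the AFTER INSERT trigger updates the book row
$book->reload();  // re-read the row, picking up the trigger's changes
echo $book->getVoteCount();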
Got a big problem that's confusing as hell. I'm using Laravel (3.2.5 and now 3.2.7) and I'm using the Eloquent ORM for updating a database (PostgreSQL).
Here's what I'm doing:
I have a db full of data
I'm pulling info from an external API to update my db full of data
I run a script that loads the db data into arrays and does the same with the API data. These get compared
I fill a data object with an array full of changes
I "save" it
nothing happens -.-
// Diff the modified array against the original so only changed keys remain
$updateLinks = array_diff($dbLinkArray, $dbLinkArrayOriginal);
// Mass-assign just those changed attributes onto the model
$dbLink->fill($updateLinks);
Log::info('1st LOG Original: '.$dbLinkArrayOriginal['link_text'].' New: '.$dbLinkArray['link_text']);
// get_dirty() should list only the attributes fill() actually changed
Log::info('2nd Log Dirty: '.implode(', ', $dbLink->get_dirty()));
$dbLink->save();
Log::info('3rd Log Supposed to be changed: '.implode(', ', array_keys($updateLinks)));
I employed some logging and the debug toolbar to figure out wtf happened. Here's the info:
all the SQL queries run to update with the correct information. When the query is run via phpPgAdmin, it updates as it should. The problem here is that the query updates EVERY column in the row instead of just the changes. Using "update" instead of "fill/save" produces the same problem.
none of the table information gets updated, ever.
The 1st log shows that the link_text values aren't equal. That is fine, because it shows the link_text needs to be updated. However, it's also a clear indicator that nothing was updated the previous time I ran my script: the log shows the same info every time, and just as many log events happen.
The 2nd log shows that the ENTIRE object is dirty, rather than just what was supposed to be updated. This is why the SQL updates every column.
The 3rd log spits out exactly what's supposed to be updated: 3-5 columns max, and that's it. And all of it is in the correct format.
Any idea why, first of all, the database is not getting updated even though Laravel marks the SQL as being run and shows the correct query?
Also, any idea why the ENTIRE object is dirty and the query tries to update the entire object (23+ columns) instead of only the changes (3-5 columns)?
For your second question (why all columns update instead of just the dirty ones), the Laravel documentation states:
By default, all attribute key/value pairs will be stored during mass-assignment. However, it is possible to create a white-list of attributes that will be set. If the accessible attribute white-list is set, then no attributes other than those specified will be set during mass-assignment.
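In Laravel 3 that white-list is the static $accessible property on the Eloquent model; a minimal sketch, with illustrative column names:

class Link extends Eloquent
{
    // Only these attributes may be set via fill()/mass-assignment
    public static $accessible = array('link_text', 'link_url', 'link_order');
}

$dbLink->fill($updateLinks); // keys outside the white-list are now ignored
$dbLink->save();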
Does this help you?
Kind regards,
Hendrik
Basically, I am trying to create an interface that will tell an administrator "Hey, we ran this query, and we weren't so sure about it, so if it broke things click here to undo it".
The easiest way I can think to do this is to somehow figure out what tables and cells an identified "risky" query writes to, and store this data along with some bookkeeping data in a "backups" table, so that if necessary the fields can be repopulated with their original contents.
How do I go about figuring out which fields get overwritten by a particular (possibly complicated) MySQL command?
Edit: "risky" means completing successfully but doing unwanted things, not throwing an error or failing and leaving the system in an inconsistent state.
I suggest the following things:
- add an AFTER UPDATE trigger to every table you want to monitor
- create a copy of every table (example: [yourtable]_backup) you want to monitor
- in all AFTER UPDATE triggers, add code: INSERT INTO yourtable_backup VALUES(OLD.field1, OLD.field2..., OLD.fieldN)
How it works: the AFTER UPDATE trigger detects an update of the table and backs up the old values into the backup table.
Note: triggers fire on both InnoDB and MyISAM tables, but only InnoDB is transactional, so use InnoDB if the backup INSERT must succeed or fail together with the update it mirrors.
You may add a timestamp field to the backup tables to know when each row was inserted.
Documentation: http://dev.mysql.com/doc/refman/5.5/en/create-trigger.html
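Creating such a trigger from PHP could look like this (a sketch; the items table, its columns, and the credentials are illustrative, and items_backup is assumed to mirror the monitored table plus a changed_at column):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

// A single-statement trigger body needs no BEGIN...END or DELIMITER change
$pdo->exec(
    'CREATE TRIGGER items_after_update
     AFTER UPDATE ON items
     FOR EACH ROW
       INSERT INTO items_backup (id, name, label, changed_at)
       VALUES (OLD.id, OLD.name, OLD.label, NOW())'
);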
I am having a few issues when multiple people access a MySQL database and try to update tables with the same information.
I have a webpage written in PHP. The page runs a query to check whether certain data has already been entered into the database; if it hasn't, I proceed to insert it. The trouble is that if two people try at the same time, the check might say the data has not been entered yet, but by the time the insert takes place, the other person has already inserted it.
What is the best way to handle this scenario? Can I lock the database so that my queries are processed first and then another's?
Read up on database transactions. That's probably a better way to handle what you need than running LOCK TABLES.
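For instance, with PDO the check-then-insert becomes atomic when wrapped in a transaction with a locking read (a sketch; the items table, its columns, and the credentials are illustrative, and the table must be InnoDB):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // FOR UPDATE makes a concurrent transaction running the same check
    // wait until we commit, so both requests cannot pass the test at once
    $stmt = $pdo->prepare('SELECT id FROM items WHERE name = ? FOR UPDATE');
    $stmt->execute(array('foo'));

    if ($stmt->fetch() === false) {
        $insert = $pdo->prepare('INSERT INTO items (name, label) VALUES (?, ?)');
        $insert->execute(array('foo', 'bar'));
    }
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}

A UNIQUE index on the checked column is a sturdy complement: a racing insert then simply fails with a duplicate-key error you can catch.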
Manually locking tables is the worst thing you could ever do. What happens if the code to unlock them never runs (because the PHP fails, or the user never clicks the next step, walks away from the PC, etc.)?
A common mistake devs make in web apps is a datagrid full of text boxes of data to edit, with a save button per row or for the whole table. Obviously, if a person opens this on Friday and comes back on Monday, the data could be stale and they could be saving over newer data. One easy way to minimize this is to put an EDIT button on each row instead; clicking it loads an editing form, so they are loading fresh data and can only submit one row's changes at a time.
But even more importantly, you should include a datetime field as a hidden input, and when the user submits the data, compare that date against the row's current timestamp, decide how old is too old, and warn or deny the user accordingly.
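A sketch of that timestamp check (assuming a connected PDO handle in $pdo; the loaded_at hidden input and the items table are illustrative):

$stmt = $pdo->prepare('SELECT updated_at FROM items WHERE id = ?');
$stmt->execute(array($_POST['id']));
$currentUpdatedAt = $stmt->fetchColumn();

// loaded_at was written into the form when it was first rendered
if ($currentUpdatedAt !== $_POST['loaded_at']) {
    exit('This record was modified while you were editing. Please reload.');
}
// ...otherwise it is safe to proceed with the UPDATE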
You're looking for LOCK.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
This can be run as a simple mysql_query (or MySQLi::query/prepare).
I'd say it's better to lock the specific tables that need locking rather than the whole database, as otherwise nothing can be read at all. Note that LOCK TABLES has no wildcard form; each table must be named with a lock type. I think you're looking for something like:
-- LOCK TABLES requires a lock type (READ or WRITE), and it implicitly
-- commits any open transaction, so don't mix it with START TRANSACTION
LOCK TABLES items WRITE;
INSERT INTO items (name, label) VALUES ('foo', 'bar');
UNLOCK TABLES;
Or in PHP:
mysql_query('LOCK TABLES items WRITE');
mysql_query("INSERT INTO items (name, label) VALUES ('foo', 'bar')");
mysql_query('UNLOCK TABLES');
You could check whether the data has been changed before you apply an edit. That way, if someone edited the data while another person was preparing their own edit, the second person is informed about it.
Kind of like how Stack Overflow handles commenting.
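One common way to implement that check is a version column the UPDATE must match (a sketch; the table, columns, and $pdo connection are illustrative):

$stmt = $pdo->prepare(
    'UPDATE items SET label = ?, version = version + 1
     WHERE id = ? AND version = ?'
);
$stmt->execute(array($_POST['label'], $_POST['id'], $_POST['version']));

if ($stmt->rowCount() === 0) {
    // No row matched: someone else saved first, so this edit is stale
    exit('Your copy of this record is out of date. Reload and try again.');
}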