I have a many-to-many table into which I input some form info. I recently made this form dynamic, so that when an input element's value is changed, it is sent to the database via AJAX.
So my question is:
Is it faster to find the values that already exist, update them, create the ones that don't, and delete the ones that are no longer used, OR should I delete all of the values for the identifier and insert all of the new ones?
In response to a comment, here is an elaboration.
There is a form with about 10 fields, some mandatory, some not. Every time you access it, it generates a random identifier.
When a user starts filling in the form, each time an element loses focus the whole form is submitted through AJAX, and all of the values that are not empty are inserted into the many-to-many table.
The table has three fields: form identifier, element name, element value.
The question rephrased:
Do I delete all of the entries with the given form identifier, or do I try to find the existing fields and edit them?
It will require less code to delete all the existing relations and add the new ones:
Make sure you do this in a transaction.
Handle errors correctly.
Less code == fewer bugs and less developer time. So that is definitely faster.
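A minimal sketch of that approach, assuming the three-column table described in the question (table and column names here are only guesses):

START TRANSACTION;

-- Remove everything stored for this form's random identifier...
DELETE FROM form_values
 WHERE form_id = 'abc123';

-- ...then insert the current, non-empty values again.
INSERT INTO form_values (form_id, element_name, element_value)
VALUES ('abc123', 'name',  'John'),
       ('abc123', 'email', 'john@example.com');

COMMIT;   -- or ROLLBACK if any statement above failed (needs InnoDB or another transactional engine)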
I always delete all & insert in cases like this. I'd suspect that searching, editing, creating, and deleting would take more processing time.
You can also try looking at:
INSERT INTO table (field1, field2) VALUES ('Value1', 'Value2')
ON DUPLICATE KEY UPDATE field1 = 'Value1'
which will insert a new record or update an existing one. I'd still suspect the delete/insert approach to be faster, depending on the number of fields you'd be updating at any given time.
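Keep in mind that ON DUPLICATE KEY UPDATE only kicks in when a UNIQUE or PRIMARY key would be violated, so for the table described above that would mean a composite unique key over the identifier and the element name. A sketch (all names are placeholders):

ALTER TABLE form_values
  ADD UNIQUE KEY uq_form_element (form_id, element_name);

INSERT INTO form_values (form_id, element_name, element_value)
VALUES ('abc123', 'email', 'john@example.com')
ON DUPLICATE KEY UPDATE element_value = VALUES(element_value);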
I'm putting together a series of identical tables that consist only of dates submitted to the DB. The dates mark when a particular job was executed.
The tables work like dominoes: when a date is submitted in the update table/form, the next table assumes the old date through a BEFORE UPDATE trigger, and so on.
Each of these tables can be linked from the update table, and the point of them is to view a history of the work performed and when a particular job was executed. After 20 or so tables the older dates become somewhat irrelevant, but they should all eventually be archived in a final, 21st table which absorbs all the dates that keep getting updated.
This last table is updated via a trigger, but the old dates/values should be kept, perhaps separated by commas. In other words, while tables 1-20 each contain only one date per field (the date pushed down from the previous table), table 21 will list ALL the dates associated with that particular field that have been, or will be, passed down, so no OLD values are overwritten.
After extensive research I discovered that INSERT does not overwrite old data, but every attempt at writing a trigger with INSERT to this last table has failed. All tables share the same ID, "1". No new tables are created; this is a simple exercise in storing data, and yet this last step is elusive.
No previous answers on SO really helped. How do I do this simple job?
The UPDATE trigger that works for all the other tables, derived from a previous SO question, looks something like this:
BEGIN
    UPDATE work2 SET
        ins1  = OLD.ins1,
        insp1 = OLD.insp1,
        b1psp = OLD.b1psp,
        b1ptp = OLD.b1ptp,
        ..........................etc
    WHERE work2.id = OLD.id;
END
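What I imagine the final table needs instead is an UPDATE that appends the old value with CONCAT_WS rather than replacing it, something along these lines (just a rough sketch of the idea; the table and column names are placeholders based on the example above):

CREATE TRIGGER work20_to_archive BEFORE UPDATE ON work20
FOR EACH ROW
    UPDATE work21 SET
        -- CONCAT_WS skips NULLs, so the first date is stored without a leading comma
        ins1  = CONCAT_WS(',', ins1,  OLD.ins1),
        insp1 = CONCAT_WS(',', insp1, OLD.insp1)
    WHERE work21.id = OLD.id;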
There must be a simple solution, yet I'm not familiar enough with PHP to solve this. I'm using EasyPHP DevServer 14.1.
I am building a system, where you can create blog posts for your website.
In there I have an AJAX function which saves a draft of your post every two minutes.
That way you don't lose your work if your computer or internet connection crashes.
But right now it is saving a new row every time it auto-saves. How do I make it update the existing row instead of creating multiple rows?
I have already tried ON DUPLICATE KEY UPDATE, which didn't work. I think that is because it requires a unique field, and the only unique field in my database is the actual ID of the post/row.
This is the code I tried:
INSERT INTO blog (title, text, date)
VALUES ('$blog_title','$blog_text','".time()."')
ON DUPLICATE KEY UPDATE
title='$blog_title', text='$blog_text', modified_date='".time()."'
I have an idea: get the post/row ID when the post is auto-saved the first time; here I could use mysql_insert_id(). This ID could then be stored in a hidden input field, and when the form auto-saves again, the script will see that there already is a post/row with that ID and just update it instead of creating a new one.
Is that a good and safe solution, or should I do something else?
I can't seem to find a better one.
I have found some other similar questions, but they were using JSON, which I haven't worked with yet.
Hope someone can help me with this :)
When you create a new row, put the ID into a hidden field. Then the code to process the input can do:
if ($_POST['id']) {
    // update existing row
} else {
    // insert new row and put ID into hidden field
}
There's no need to use ON DUPLICATE KEY because you know from the input data whether it's intended as a new entry or an update.
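The two branches then come down to two plain statements (a sketch; the column names follow the question, and id is assumed to be an AUTO_INCREMENT primary key):

-- First auto-save: no id in the hidden field yet.
INSERT INTO blog (title, text, date)
VALUES ('draft title', 'draft text', UNIX_TIMESTAMP());   -- or pass time() from PHP as in the question

SELECT LAST_INSERT_ID();   -- put this value into the hidden field for the next save

-- Every later auto-save: the hidden field carries the id.
UPDATE blog
   SET title = 'draft title', text = 'draft text', modified_date = UNIX_TIMESTAMP()
 WHERE id = 123;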
I am experimenting to find the best and most efficient way to alter the data in a given table through a form using PHP.
The scenario is a list of items in a table: if you right-click -> edit an item, a request is made to MySQL for all of its data and the form fields are populated.
The user can change the data in any of the fields or leave it untouched, and then presses save, which sends everything back to PHP.
The easy way would be to just update all the columns regardless of whether or not they have changed, i.e.:
$this->model->set('name', 'some name string from the form', $itemId);
$this->model->set('price', 'number from the form', $itemId);
...etc...
So potentially I could change just the name and needlessly update the rest of the columns with the same data they already hold. (As a side question, does MySQL notice this and ignore the update behind the scenes?)
Would a good way to perform an intelligent update be to compare two arrays, one containing the original data and another with the data from the user? If the values at a given index don't match, then the field must have changed, so do the update?
i.e. a very simplified example:
if($submittedValues['name'] != $originalValues['name'])
{
...Update...
}
I guess you've answered your own question: you could compare the two arrays, either in your PHP code or in JavaScript, and instead of sending everything to the server, only send the changed values.
But in general I wouldn't worry about resetting all the data; writing all the fields again could well be faster than comparing old and new values in arrays. I would take much more care if I were making many queries to the database, but this is only one UPDATE query.
One thing that would be interesting to test: when the user leaves a field empty, the request will send an empty string, so the update will store an empty string where a NULL value would carry more meaning.
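If that matters, NULLIF can turn the empty strings into NULLs at update time (a sketch; table and column names are placeholders):

UPDATE items
   SET name        = NULLIF('some name string from the form', ''),
       description = NULLIF('', '')   -- an empty input becomes NULL instead of ''
 WHERE id = 42;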
I've got a PHP script pulling a file from a server and plugging the values in it into a Database every 4 hours.
This file can, and most likely will, change within the 4 hours (or whatever timeframe I finally choose). It's a list of properties and their owners.
Would it be better to go through the file, compare it to each DB entry, and update any that need it, or to create a temp table and then compare the two using an SQL query?
Neither.
What I'd personally do is run the INSERT command with ON DUPLICATE KEY UPDATE (assuming your table is properly designed and you are using at least one piece of information from your file as a UNIQUE key, which, based on your comment, you should be).
Reasons
Creating a temp table is a hassle.
Comparing is a hassle too. You need to select a record, compare it, update it if it differs, and so on; it's just a giant waste of time to compare each piece of info when there's a better way to do it.
It is so much easier to just insert everything you find; if a key clash occurs, that means the record already exists and most likely needs updating.
That way you take care of everything with a single query, your data integrity is preserved, and you can just keep filling your table or updating it with new records.
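A sketch of what that looks like for the property/owner file, assuming the property reference from the file is used as the UNIQUE key (all names here are placeholders):

ALTER TABLE properties
  ADD UNIQUE KEY uq_property_ref (property_ref);

-- Run for each line of the file (or batch the lines into one multi-row INSERT):
INSERT INTO properties (property_ref, owner)
VALUES ('AB-1234', 'Jane Smith')
ON DUPLICATE KEY UPDATE owner = VALUES(owner);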
I think it would be best to download the file and update the existing table, maybe using REPLACE or REPLACE INTO. "REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted." http://dev.mysql.com/doc/refman/5.0/en/replace.html
Presumably you have a list of columns that will have to match in order for you to decide that two entries refer to the same thing.
If you create a UNIQUE index over those columns, then you can use either INSERT ... ON DUPLICATE KEY UPDATE (manual) or REPLACE INTO ... (manual).
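The REPLACE INTO form of the same idea (a sketch, with the same placeholder names as above); note that REPLACE deletes the old row before inserting, so any columns you don't supply fall back to their defaults:

REPLACE INTO properties (property_ref, owner)
VALUES ('AB-1234', 'Jane Smith');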
I have a MySQL database where I am storing information that is entered from a PHP web page. I have a page that allows the user to view an existing row, and make changes and save them to the database. I want to know the best way to keep the original entries, as well as the new update and any subsequent updates.
My thought is to make a new table with the same columns as the first, with an additional timestamp field. When a user submits an update, the script would take the contents of the main table's row, and enter them into the archive table with a timestamp when it was done, and then enter in the new values to the main table. I'd also add a new field to the main table to specify whether or not the row has ever been edited.
This way, I can do a query of the main table and get the most current data, and I can also query the archive table to see the change history. Is this the best way to accomplish this, or is there a better way?
You can use triggers on update, delete, or insert to keep track of all changes, who made them and at what time.
Look up database audit tables. There are several methods; I like the active column, which gets set to 0 when you 'delete' or 'update' and a new record gets inserted, although it does make a headache for unique key checking. The alternative I've used is the one you have mentioned: a separate table.
As buckbova mentions, you can use a trigger to do the secondary insert on 'delete' or 'update'. Otherwise, manage it in your PHP code if you don't have that ability.
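A sketch of such a trigger, assuming an archive table with the same columns plus a change timestamp (all names are placeholders):

CREATE TRIGGER items_history BEFORE UPDATE ON items
FOR EACH ROW
    INSERT INTO items_archive (id, name, price, changed_at)
    VALUES (OLD.id, OLD.name, OLD.price, NOW());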
You don't need a second table. Just have a start and end date on each row. The row without an end date is the active record. I've built entire systems using this method, and just so long as you index the date fields, it's very fast.
When retrieving the current record, AND end_date IS NULL gets added to the WHERE clause.
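A sketch of that pattern (placeholder names); the UPDATE and INSERT would normally run together in a transaction:

-- Close the currently active version...
UPDATE items
   SET end_date = NOW()
 WHERE item_key = 42 AND end_date IS NULL;

-- ...and insert the new values as the active row.
INSERT INTO items (item_key, name, price, start_date, end_date)
VALUES (42, 'new name', 9.99, NOW(), NULL);

-- Retrieving the current record:
SELECT * FROM items WHERE item_key = 42 AND end_date IS NULL;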
In this situation, I would recommend keeping everything in one table after adding a few columns to it:
active / not active
ID of the person who saved these parameters
timestamp of when the row was added
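Roughly (a sketch; table and column names are placeholders):

ALTER TABLE entries
  ADD COLUMN is_active  TINYINT(1) NOT NULL DEFAULT 1,
  ADD COLUMN changed_by INT NULL,
  ADD COLUMN changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;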