OK, I have a checklist system that I am working on. There is a page that pulls data from the database and displays it as a checklist. I am trying to create a button that, when pushed, resets the database to a certain state. Basically, the button sends an AJAX call to a PHP page that executes an UPDATE query. The query is as follows:
UPDATE $table SET value='$value', comments='$comments', editedBy='$editedBy', editedDate='$editedDate' WHERE projectId='$projectId';
I set the variables first, of course; that's not my question. Just pretend they have data. My question is: how can I repeat this query so that every row in table x that has a projectId of n is updated? I'm guessing this involves a for loop?
SIDE NOTE: Since this query just sets value to false and blanks the comments, editedBy, and editedDate fields for every row in table x that has a projectId of n, is there a better way of doing this than the UPDATE query?
Thanks for any help!
As long as you don't specify a LIMIT in your UPDATE query, it will update every row that satisfies your WHERE clause.
Now, if you're updating the projects table and projectId is its primary key, you'll need to run a loop to update multiple projectIds. If you're not updating the projects table, then your UPDATE query will update every record whose foreign key matches the projectId you specified.
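For example, here is a minimal sketch of that single UPDATE with PDO. The table name checklist_items and the connection details are assumptions (a table name can't be bound as a parameter), so adjust them to your schema:

<?php
// Minimal sketch, assuming PDO and a fixed table name: one UPDATE resets
// every row with the given projectId, so no loop is needed.
$pdo = new PDO('mysql:host=localhost;dbname=checklist', 'user', 'pass');

$stmt = $pdo->prepare(
    'UPDATE checklist_items
        SET value = ?, comments = ?, editedBy = ?, editedDate = ?
      WHERE projectId = ?'
);
// Binding values also avoids the SQL injection risk of interpolating
// $value, $comments, etc. straight into the query string.
// $projectId comes from the AJAX request.
$stmt->execute(['false', '', '', '', $projectId]);

echo $stmt->rowCount() . ' rows reset';  // number of rows changed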
Does that help?
I am implementing a request mechanism where the user has to approve a request. For that I have implemented a temporary table and a main table. Initially, when the request is added, the data is inserted into the temporary table; on approval, it is copied to the main table.
The issue is that there will be more than 5k rows to move to the main table after approval, plus another 3-5 rows in the detail table (which stores the details) for each of those rows.
My current implementation is like this:
// Get the rows from the temporary table (batch_temp)
// Loop through the data
//     Insert the row into the main table (batch_main) and return the id
//     Get the detail rows from the temporary detail table (batch_temp_detail) using detail_tempid
//     Loop through the data
//         Insert the details into the detail table (batch_main_detail) with the main table id amount_id
//     End loop
// End loop
But this implementation would take at least 20k queries. Is there a better way to implement the same thing?
I tried to create an sqlfiddle but was unable to, so I have pasted the query at pgsql.privatepaste.com.
I'm sorry that I'm not familiar with PostgreSQL. My solution is in MySQL; I hope it helps, since the two are similar.
First, we should add one more field to your batch_main table to track the origin batch_temp record for each batch_main record.
ALTER TABLE `batch_main`
ADD COLUMN tempid bigint;
Then, on approval, we insert the 5k rows with a single query:
INSERT INTO batch_main
(batchid, userid, amount, tempid)
SELECT batchid, userid, amount, amount_id FROM batch_temp;
So, with each new batch_main record we have the id of its origin batch_temp record. Then insert the detail records:
INSERT INTO batch_main_detail (detail_amount, detail_mainid)
SELECT btd.detail_amount, bm.amount_id
FROM batch_temp_detail btd
INNER JOIN batch_main bm ON btd.detail_tempid = bm.tempid;
Done!
P.S.: I'm a bit confused by the way you name your fields, and since I don't know PostgreSQL and am only going by your syntax: can you use the same sequence for the primary key of both batch_temp and batch_main? If you can, there is no need to add the extra field.
Hope this helps,
You simply need to update your schema. Instead of having two tables, one main and one temporary, you should keep all the data in the main table but add a flag that indicates whether a given record is approved or not. Initially it is set to false; once approved, it is simply set to true, and the data can then be displayed on your website, etc. That way you will not need to write the data twice, or move it from one table to another.
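As a minimal sketch of that schema change, assuming MySQL/PDO (the flag name approved and the batchid filter are assumptions for illustration):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// One-time schema change: an approval flag that defaults to "not approved".
$pdo->exec('ALTER TABLE batch_main ADD COLUMN approved TINYINT(1) NOT NULL DEFAULT 0');

// On approval, flip the flag instead of copying thousands of rows.
$pdo->prepare('UPDATE batch_main SET approved = 1 WHERE batchid = ?')
    ->execute([$batchId]);

// The display page then selects only approved rows.
$rows = $pdo->query('SELECT * FROM batch_main WHERE approved = 1')->fetchAll();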
You haven't specified which RDBMS you are using, but a good old INSERT with a SELECT in it should do the trick in one command:
INSERT INTO main (field1, ..., fieldN) SELECT field1, ..., fieldN FROM temporary;
I am working on a database application with MySQL and PHP. At the moment I'm trying to get the changes caused by the last UPDATE. My first approach to the problem is:
getting the 'old' state with SELECT
doing the changes with UPDATE
getting the 'new' state with SELECT
comparing the arrays with PHP
That's three MySQL calls...
Is there any way to shorten this?
You could define a BEFORE UPDATE trigger that pushes an entire copy of the record to a history table, which can also contain any additional state data you wish to store (updated date, user, etc.).
This way you will have a complete revision history of what happened to which records, and it happens transparently. The only thing to remember is that you should drop any unique constraints from the history table.
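A rough sketch of such a trigger in MySQL; the live table items and every column name here are invented for illustration:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// History table: same business columns as the live table (no unique
// constraints), plus audit columns for when and by whom the change happened.
$pdo->exec('CREATE TABLE items_history (
    item_id    INT,
    name       VARCHAR(255),
    changed_at DATETIME,
    changed_by VARCHAR(255)
)');

// BEFORE UPDATE trigger: copy the old row into the history table on every update.
$pdo->exec('CREATE TRIGGER items_before_update
    BEFORE UPDATE ON items
    FOR EACH ROW
    INSERT INTO items_history (item_id, name, changed_at, changed_by)
    VALUES (OLD.id, OLD.name, NOW(), CURRENT_USER())');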
Hope this helps.
You can use the following hack with user variables:
UPDATE table SET
    col = (@oldValue := col),  -- capture the old value before it is overwritten
    col = newValue
WHERE id = 1234;
SELECT @oldValue;
Let me tell you how I do it.
When I update a row, I first fetch the row being updated; I call this the active record. Then I compare each column of the active record with the corresponding form field. That's how I know which columns have changed.
And if you want to store the changed columns, create a history table like this:
id (primary key)
tablename (which table I'm updating)
recordid (which row I'm updating)
column (which column has been changed)
oldvalue (active record value)
newvalue (form value, i.e. the updated value)
date (obvious)
user (who made this change)
After that, you can use your imagination to structure things however you want to use them.
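As a sketch, that table could look like this in MySQL. The types are assumptions, and column, date, and user are renamed (colname, changed_at, changed_by) to avoid clashing with reserved words and built-ins:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$pdo->exec('CREATE TABLE history (
    id         INT AUTO_INCREMENT PRIMARY KEY,  -- primary key
    tablename  VARCHAR(64),   -- which table is being updated
    recordid   INT,           -- which row is being updated
    colname    VARCHAR(64),   -- which column has been changed
    oldvalue   TEXT,          -- active record value
    newvalue   TEXT,          -- form (updated) value
    changed_at DATETIME,      -- when the change happened
    changed_by VARCHAR(255)   -- who made the change
)');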
Just looking for some tips and pointers for a small project I am doing. I have some ideas, but I am not sure they are best practice. I am using MySQL and PHP.
I have a table called nomsing in the database.
It has an integer primary key called row_id.
Then I have about 8 other tables referencing this table.
They are called nomplu, accsing, accplu, datsing, datplu, for instance.
Each has a column that references the primary key of nomsing.
Within my PHP code I have all the information to insert into the tables except one thing: the row_id primary key of the nomsing table. So PHP generates a series of inserts like the following.
INSERT INTO nomsing(word, postress, gender) VALUES ('велосипед', '8', 'mask');
INSERT INTO nomplu(word, postress, NOMSING?REFERENCE) VALUES ('велосипеды', '2', #the reference to the id of the first insert#);
There are more inserts, but this one gets the point across. The second insert should reference the auto-generated id from the first insert. I want this to work as a transaction, so all inserts should complete or none should.
One idea I have is to not auto-generate the id and instead generate it myself in PHP. That way I would know the id before the transaction, but then I would have to check whether the id is already in the db.
Another idea is to do the first insert, then query for the row id of that insert in PHP, and then make the second insert. Both should work, but neither seems like an optimal solution. I am not too familiar with database transaction features, so what would be the best approach in this case? I don't like the idea of inserting, then querying for the id, and then running the rest of the queries; it just seems very inefficient, or perhaps I am wrong.
Just insert a row into the master table. Then you can fetch the insert id (lastInsertId when on PDO) and use that to populate your other queries.
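A minimal sketch of that with PDO, wrapping everything in a transaction as the question asks. The foreign key column nomsing_id is an assumption, since the real name isn't shown above:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->beginTransaction();

try {
    // Insert the master row first.
    $pdo->prepare('INSERT INTO nomsing (word, postress, gender) VALUES (?, ?, ?)')
        ->execute(['велосипед', '8', 'mask']);

    // Fetch the auto-generated id of that row.
    $nomsingId = $pdo->lastInsertId();

    // Use it as the foreign key in each dependent table.
    $pdo->prepare('INSERT INTO nomplu (word, postress, nomsing_id) VALUES (?, ?, ?)')
        ->execute(['велосипеды', '2', $nomsingId]);

    $pdo->commit();    // all inserts succeed together...
} catch (Exception $e) {
    $pdo->rollBack();  // ...or none are applied
    throw $e;
}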
You could use the PHP version as given by JvdBerg, or MySQL's LAST_INSERT_ID(). I usually use the former.
See a similar SO question here.
You could add a new column to the nomsing table, called insert_order (or similar), with a default value of 0. Then, instead of generating one SQL statement per insert, create a bulk insert statement, e.g.
INSERT INTO nomsing(word, postress, gender, insert_order)
VALUES ('велосипед', '8', 'mask', 1), ('abcd', '9', 'hat', 2).....
You generate the insert_order number with a counter in your loop, starting at one. Then you can perform one SELECT on the table to get the ids, e.g.
SELECT row_id
FROM nomsing
WHERE insert_order > 0;
Now that you have all the IDs, you can do a bulk insert for your following queries. At the end of your script, just run an update to reset the insert_order column back to 0:
UPDATE nomsing SET insert_order = 0 WHERE insert_order > 0;
It may seem messy to add an extra column for this, but it gives a significant speed increase over performing one query at a time.
I need to update two tables in MySQL with PHP. The second table needs the ID of the row being inserted into the first table.
At the moment I have some PHP code that loops through this process for each of the items in an array:
Check if the record exists by attempting to get its ID.
If the record doesn't exist, insert it and get the last insert ID.
Update the second table using the ID we found as a foreign key.
This is very inefficient, as multiple database calls are made. I would rather store the data in two arrays, one for each table, then batch insert them when the loop is done. The problem is that I need the ID of the row in the first table before I can do this.
This is a problem I come across a lot. What is the most efficient / 'best practice' way of doing this?
Thank you
Create a stored procedure that inserts the whole hierarchy in one server call. Supply all parent-child records as XML and parse/insert the records inside the procedure (AFAIK MySQL has XML functions similar to MS SQL's). This results in the same number of INSERT statements, but they execute on the server side, which should improve performance. E.g.
exec MySp @myHierarchy = '<Recs><Parent Name="P1"><Child Name="C1" /><Child Name="C2"/></Parent></Recs>'
This is my db structure:
ID NAME SOMEVAL API_ID
1 TEST 123456 A123
2 TEST2 223232 A123
3 TEST3 918922 A999
4 TEST4 118922 A999
I'm filling it using a function that calls an API and gets some data from an external service.
On the first run I want to insert all the data I get back from the API. After that, each time I run the function, I just want to update the existing rows and insert any rows that came back from the API call but are not yet in the db.
So my initial thought for the update process is to go through each row I get from the API and run a SELECT to see if it already exists.
I'm just wondering if this is the most efficient way to do it, or whether it would be better to DELETE the relevant rows from the db and just re-insert them all.
NOTE: each batch of rows I get from the API has an API_ID, so when I say delete the rows, I mean something like DELETE FROM table WHERE API_ID = 'A999', for example.
If you are retrieving all the rows from the service, I recommend you drop all indexes, truncate the table, insert all the data, and then recreate the indexes.
If you are retrieving only some of the data from the service, I would drop all indexes, remove the relevant rows, insert the new rows, and then recreate all the indexes.
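For the full-refresh case, that might look roughly like this; the index name idx_api_id and the column names are assumptions:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Full refresh: drop secondary indexes, wipe the table, bulk load, reindex.
$pdo->exec('ALTER TABLE items DROP INDEX idx_api_id');
$pdo->exec('TRUNCATE TABLE items');

$insert = $pdo->prepare('INSERT INTO items (name, someval, api_id) VALUES (?, ?, ?)');
foreach ($apiRows as $row) {   // $apiRows: data returned by the service
    $insert->execute([$row['name'], $row['someval'], $row['api_id']]);
}

$pdo->exec('ALTER TABLE items ADD INDEX idx_api_id (api_id)');

For the partial case, swap the TRUNCATE for a DELETE FROM items WHERE api_id = ? covering the relevant batch.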
In such scenarios I'm usually going with:
start transaction
get row from external source
select local store to check if it's there
if it's there: update its values, remember local row id in list
if it's not there: insert it, remember local row id in list
at the end, delete all rows that are not in the remembered list of local row ids (a NOT IN clause if the count of ids allows for it, or other means if many rows may be deleted)
commit transaction
Why? Because usually I have local rows referenced by other tables, and deleting them all would break the references (not to mention DELETE CASCADE).
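A rough sketch of that flow with PDO; the items table, its columns, the (api_id, name) match key, and $batchId are all invented for illustration, and $apiRows stands in for the rows fetched from the external source:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->beginTransaction();

$batchId = 'A999';   // hypothetical batch identifier
$keptIds = [];

foreach ($apiRows as $row) {
    // Check the local store for this external row.
    $find = $pdo->prepare('SELECT id FROM items WHERE api_id = ? AND name = ?');
    $find->execute([$batchId, $row['name']]);
    $id = $find->fetchColumn();

    if ($id !== false) {
        // Already there: update its values.
        $pdo->prepare('UPDATE items SET someval = ? WHERE id = ?')
            ->execute([$row['someval'], $id]);
    } else {
        // Not there: insert it.
        $pdo->prepare('INSERT INTO items (name, someval, api_id) VALUES (?, ?, ?)')
            ->execute([$row['name'], $row['someval'], $batchId]);
        $id = $pdo->lastInsertId();
    }
    $keptIds[] = (int) $id;
}

// Delete local rows from this batch that the source no longer returns.
if ($keptIds) {
    $in = implode(',', array_fill(0, count($keptIds), '?'));
    $pdo->prepare("DELETE FROM items WHERE api_id = ? AND id NOT IN ($in)")
        ->execute(array_merge([$batchId], $keptIds));
}

$pdo->commit();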
I don't see any problem in performing a SELECT and then deciding between an INSERT or an UPDATE. However, MySQL can also perform so-called "upserts", where it inserts a row if it does not exist, or updates the existing row otherwise.
This SO answer shows how to do that.
I would recommend using INSERT...ON DUPLICATE KEY UPDATE.
If you use INSERT IGNORE instead, the row simply won't be inserted if it would produce a duplicate key on API_ID.
Add a unique key index on the API_ID column.
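A sketch of that in MySQL via PDO. Note that in the sample data above API_ID repeats, so as an assumption the unique key here spans (api_id, name) rather than api_id alone:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// One-time: the unique key that ON DUPLICATE KEY UPDATE fires on.
$pdo->exec('ALTER TABLE items ADD UNIQUE KEY uq_api_name (api_id, name)');

// Insert a new row, or update SOMEVAL if the key already exists.
$stmt = $pdo->prepare('INSERT INTO items (name, someval, api_id)
    VALUES (?, ?, ?)
    ON DUPLICATE KEY UPDATE someval = VALUES(someval)');
$stmt->execute(['TEST3', 918922, 'A999']);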
If you have all of the data returned from the API that you need to completely reconstruct the rows after you delete them, then go ahead and delete them, and insert afterwards.
Be sure, though, to do this in a transaction, and to use an engine that supports transactions properly, such as InnoDB, so that other clients of the database don't see rows missing from the table just because they are about to be updated.
For efficiency, insert as many rows as you can in a single query; it is much faster that way.
BEGIN;
DELETE FROM table WHERE API_ID = 'A987';
INSERT INTO table (NAME, SOMEVAL, API_ID) VALUES
('TEST5', 12345, 'A987'),
('TEST6', 23456, 'A987'),
('TEST7', 34567, 'A987'),
...
('TEST123', 123321, 'A987');
COMMIT;