Is it possible to UPDATE and then INSERT where a row exists in MySQL? I have this query:
$q = $dbc -> prepare("UPDATE accounts SET lifeforce = maxLifeforce, inHospital = 0 WHERE hospitalTime <= NOW() AND inHospital = 1");
$q -> execute();
How can I either get the primary key into an associative array to then do an insert for each item in the array, or do an UPDATE AND INSERT?
Or does it involve doing a SELECT to get all rows that match the criteria, then UPDATE, then INSERT using the array from the SELECT? That seems like rather a long way round.
Basically I need to INSERT onto another table using the same primary keys that get updated.
Or does it involve doing a SELECT to get all that match criteria, then UPDATE then INSERT using array from the select?
Yes, sorry, that's the main way.
Another approach is to add a column called (say) last_updated, set whenever you update the row. You can then use that column in the query that drives your insert. That has other advantages (I find last_updated columns useful for many things), but it's overkill if this is the only thing you'd ever use it for.
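A minimal sketch of that approach, assuming a last_updated column on accounts and a hypothetical history table named account_log (both names are placeholders):

```sql
-- capture one timestamp so both statements agree on "now"
SET @ts = NOW();

-- stamp the rows while updating them
UPDATE accounts
SET lifeforce = maxLifeforce, inHospital = 0, last_updated = @ts
WHERE hospitalTime <= @ts AND inHospital = 1;

-- drive the insert off exactly the rows just touched
INSERT INTO account_log (account_id, logged_at)
SELECT id, last_updated
FROM accounts
WHERE last_updated = @ts;
```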
Edited to add: another option, which just occurred to me, is to add a trigger to your accounts table that performs the insert you need. That's qualitatively different: it makes the insertion a property of accounts rather than a matter of application logic, but maybe that's what you want. Even the most extreme partisans of the put-all-constraints-in-the-database-so-application-logic-never-introduces-inconsistency camp are usually cautious about triggers; they're really not a good way to implement application logic, because they hide that logic somewhere no-one will think to look for it. But if the table you're inserting into is some sort of account_history table that keeps track of all changes to accounts, then a trigger might be the way to go.
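A sketch of what such a trigger could look like, assuming a hypothetical account_history table and an id primary key on accounts:

```sql
DELIMITER //

CREATE TRIGGER accounts_after_update
AFTER UPDATE ON accounts
FOR EACH ROW
BEGIN
  -- log only the transition out of hospital
  IF OLD.inHospital = 1 AND NEW.inHospital = 0 THEN
    INSERT INTO account_history (account_id, changed_at)
    VALUES (NEW.id, NOW());
  END IF;
END//

DELIMITER ;
```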
You can use a multiple table update as written in the manual: http://dev.mysql.com/doc/refman/5.0/en/update.html
If the second table needs an insert, you probably would have to do it manually.
You can use mysqli's insert_id (mysqli_insert_id):
http://php.net/manual/en/mysqli.insert-id.php
Also, when running consecutive queries like that, I'd recommend using transactions:
http://www.techrepublic.com/article/implement-mysql-based-transactions-with-a-new-set-of-php-extensions/6085922
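A rough sketch of wrapping the two statements in a transaction (the second table and its columns are placeholders to adapt to your schema):

```sql
START TRANSACTION;

UPDATE accounts
SET lifeforce = maxLifeforce, inHospital = 0
WHERE hospitalTime <= NOW() AND inHospital = 1;

-- the INSERT into the other table goes here, e.g.
-- INSERT INTO other_table (...) SELECT ... FROM accounts WHERE ...;

COMMIT;  -- or ROLLBACK; if something went wrong
```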
I'm facing a challenge that has never come up for me before, and I'm having trouble finding an efficient solution (likely because I'm not a trained programmer and don't know all the terminology).
The challenge:
I have a data feed which I need to use to maintain a MySQL database each day. Doing this requires checking whether each record exists, then updating or inserting accordingly.
That is simple enough by itself, but for thousands of records it seems very inefficient to run a query per record just to check whether it already exists in the database.
Is there a more efficient way than looping through my data feed and running an individual query for each record? Perhaps a way to somehow prepare them into one larger query (assuming that is a more efficient approach).
I'm not sure a code sample is needed here, but if there is any more information I can provide please just ask! I really appreciate any advice.
Edits:
@Sgt AJ - Each record in the data feed has a number of different columns, but they are indexed by an ID. I check against that ID in the database to see whether a record exists. In this situation I'm only updating one table, albeit a large one (30+ columns, mostly text).
What is the problem? If the problem is performance when checking, inserting and updating, use INSERT ... ON DUPLICATE KEY UPDATE:
INSERT INTO your_table
(email, country, reach_time)
VALUES ('mike@gmail.com', 'Italy', '2016-06-05 00:44:33')
ON DUPLICATE KEY UPDATE reach_time = '2016-06-05 00:44:33';
I assume that your key is email.
Old style, don't use:
IF email exists THEN
  UPDATE your_table SET
    reach_time = '2016-06-05 00:44:33'
  WHERE email = 'mike@gmail.com';
ELSE
  INSERT INTO your_table
  (email, country, reach_time)
  VALUES ('mike@gmail.com', 'Italy', '2016-06-05 00:44:33');
END IF;
It depends on how many 'feed' rows you have to load. If it's like 10, then doing them record by record (as shown by mustafayelmer) is probably not too bad. Once you get into the hundreds and above, I would highly suggest using a set-based approach. There is some overhead in creating and loading the staging table, but it is (very) quickly offset by the reduction in the number of queries that need to be executed and in round-trips over the network.
In short, what you'd do is :
-- create new, empty staging table
SELECT * INTO stagingTable FROM myTable WHERE 1 = 2
-- adding a PK to make JOIN later on easier
ALTER TABLE stagingTable ADD PRIMARY KEY (key1)
-- load the data either using INSERTS or using some other method
-- [...]
-- update existing records
UPDATE myTable
SET field1 = s.field1,
field2 = s.field2,
field3 = s.field3
FROM stagingTable s
WHERE s.key1 = myTable.key1
-- insert new records
INSERT myTable (key1, field1, field2, field3)
SELECT key1, field1, field2, field3
FROM stagingTable new
WHERE NOT EXISTS ( SELECT *
FROM myTable old
WHERE old.key1 = new.key1 )
-- get rid of staging table again
DROP TABLE stagingTable
to bring your data up to date.
Notes:
you might want to make the name of the stagingTable 'random' to avoid the situation where two 'loads' running in parallel start re-using the same table, giving all kinds of weird results (and errors). Since all this code is generated in PHP anyway, you can simply add a timestamp or something to the table name.
on MSSQL I would load all the data into the staging table using a bulk-insert mechanism. It can use bcp or BULK INSERT; .Net actually has the SqlBulkCopy class for this. Some quick googling shows me MySQL has mysqlimport, if you don't mind writing to a temp file first and then loading from there, or you could use this to do big INSERT blocks rather than inserting one by one. I'd avoid doing 10k inserts in one go, though; rather do them per 100 or 500 or so. You'll need to test what's most efficient.
PS: you'll need to adapt my syntax a bit here and there; like I said, I'm more familiar with MSSQL's T-SQL dialect. Also, you may be able to use the ON DUPLICATE KEY approach on the staging table directly, combining the UPDATE and INSERT into one command. (MSSQL uses MERGE for this, but it would look completely different, so I won't bother to include that here.)
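For reference, a MySQL-flavoured version of the same staging-table flow might look roughly like this (table and column names are placeholders, and key1 is assumed to be the unique key):

```sql
-- create a new, empty staging table with the same structure
CREATE TABLE stagingTable LIKE myTable;

-- load the data, e.g. with multi-row INSERTs or mysqlimport
-- [...]

-- update existing records (MySQL multi-table UPDATE syntax)
UPDATE myTable m
JOIN stagingTable s ON s.key1 = m.key1
SET m.field1 = s.field1,
    m.field2 = s.field2,
    m.field3 = s.field3;

-- insert new records
INSERT INTO myTable (key1, field1, field2, field3)
SELECT s.key1, s.field1, s.field2, s.field3
FROM stagingTable s
WHERE NOT EXISTS (SELECT 1 FROM myTable m WHERE m.key1 = s.key1);

-- get rid of the staging table again
DROP TABLE stagingTable;
```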
Good luck.
Let's say I have dynamic numbers with unique id's to them.
I'd like to insert them into database. But if I already have that certain ID (UNIQUE) I need to add to the value that already exists.
I've already tried using ON DUPLICATE KEY UPDATE, but it's not really working out. And selecting the old data so I can add to it and then update it is not efficient.
Is there any query that could do that?
Incrementing your value in your application does not guarantee you'll always have accurate results in your database because of concurrency issues. For instance, if two web requests need to increment the number with the same ID, depending on when the computer switches the processes on the CPU, you could have the requests overwriting each other.
Instead do an update similar to:
UPDATE `table` SET `number` = `number` + 1 WHERE `ID` = YOUR_ID
Check the return value from the statement. An update should return the number of rows affected, so if the value is 1, you can move on happy to know that you were as efficient as possible. On the other hand, if your return value is 0, then you'll have to run a subsequent insert statement to add your new ID/Value.
This is also the safest way to ensure concurrency.
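For completeness, MySQL can also fold both steps into one statement, which is likely what the question's ON DUPLICATE KEY UPDATE attempt was aiming at (table and column names here are placeholders):

```sql
-- insert the value, or add it to the existing value if the ID is taken
INSERT INTO totals (id, number)
VALUES (42, 5)
ON DUPLICATE KEY UPDATE number = number + VALUES(number);
```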
Hope this helps and good luck!
I did something different. Instead of updating the old values, I'm inserting new data and keeping the old, using certain unique keys so I don't get duplicates. To display the data I use a simple SELECT with a SUM() aggregate, grouped by ID. Works great; I just don't know if it's the most efficient way of doing it.
I'm still new and some of the right coding practices escape me. Documentation on this particular situation is weak, so I would like to get some advice/suggestions from you experts :) on the following.
I have an API that allows users to update 2 tables in one call. One is a SUMMARY table and the other is a DETAIL table with an FK to the SUMMARY table.
What I have my code doing is I do an UPSERT (insert/update) to the SUMMARY table, grab the insert_id and then delete the records from the DETAIL table, then insert the ones I need (referencing SUMMARY with the fk of course).
However, in the instance that there are no changes to SUMMARY data - insert_id returns 0. This seems expected as no row was updated/inserted.
So here is my question:
Should I be doing a full read of the tables and comparing data prior to this update/delete/insert attempt? Or is there another nifty way of grabbing the id of the SUMMARY that was a duplicate of the UPSERT attempt? I feel that my users will 'almost' ALWAYS be changing the SUMMARY and DETAIL data when using this API.
What is the correct coding practice here? Is the extra read worth it every time? Or should I read only if insert_id = 0?
Thoughts? My biggest problem is that I don't know what the magnitude difference of a read vs a write is here - especially since I don't believe the API will be called much without having changed values.
Again my options are:
1. Read the db and compare to see if there is a diff; insert/update accordingly.
2. Attempt the insert/update; if insert_id = 0, read the db to get the summary id for the details table; complete the process.
3. Attempt the insert/update; use ?something? to get the id of the summary record that was a duplicate (and prevented the insert/update); use that id to complete the steps.
If the id you need is an auto_increment field, go with option 4 (do everything inside the DB with one execute action) 100% of the time. This is the general SQL structure you need:
INSERT INTO summary (primaryKey, fieldA, fieldB)
VALUES (NULL, valueA, valueB)
ON DUPLICATE KEY UPDATE primaryKey = LAST_INSERT_ID(primaryKey), fieldA = VALUES(fieldA), fieldB = VALUES(fieldB);
If you then do SELECT LAST_INSERT_ID() it'll give you either the newly inserted id or, if duplicate, the duplicate entry's id. So do something like:
delete from detail where summary_id = LAST_INSERT_ID();
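Assembled, the whole call might look like this sketch (fieldA/fieldB/fieldC and the literal values are placeholders):

```sql
-- upsert the summary row; LAST_INSERT_ID(primaryKey) makes the
-- duplicate's id available even when no new row is inserted
INSERT INTO summary (primaryKey, fieldA, fieldB)
VALUES (NULL, 'a', 'b')
ON DUPLICATE KEY UPDATE primaryKey = LAST_INSERT_ID(primaryKey),
                        fieldA = VALUES(fieldA),
                        fieldB = VALUES(fieldB);

-- LAST_INSERT_ID() now holds the summary id, inserted or duplicate
DELETE FROM detail WHERE summary_id = LAST_INSERT_ID();

INSERT INTO detail (summary_id, fieldC)
VALUES (LAST_INSERT_ID(), 'c');
```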
At the companies I've worked for, option 1 is usually the one I've seen used if you're wanting to compare record by record. This can be implemented either in a stored proc or in the code itself. Depends on the context of what "same" means. If it's raw values, then sql is probably the easiest. If there's a context in addition to what the database has, you'll want to do it at the code level. Hope that helps.
I'm working on a PHP project involving web scraping, and my aim is to store the data in a MySQL database. I'm using a unique index across 3 columns of a 9-column table, and there are more than 5k records.
Should I check for unique data at the program level, e.g. putting values in arrays and comparing them before inserting into the database?
Is there any way I can speed up my database insertion?
Never ever create a duplicate table; this is an SQL anti-pattern and it makes it more difficult to work with your data.
Maybe PDO and prepared statements will give you a little boost, but don't expect wonders from them.
Multiple INSERT IGNORE statements may also give you a little boost, but again, don't expect wonders.
You should generate a multi-insert query, like so:
INSERT INTO database.table (columns) VALUES (values), (values), (values)
Keep in mind to stay under MySQL's max packet size.
This way the index file has to be updated only once.
You could create a duplicate of the table that you currently have, except with no indices on any field. Store the data in this table.
Then use events to move the data from the temp table into the main table. Once the data is moved to the main table, delete it from the temp table.
You can follow your updates with a trigger: do the update on the table, and write a trigger for that table.
Use PDO or the mysqli_* functions to speed up insertion into the database.
You could use "INSERT IGNORE" in your query. That way the record will not be inserted if any unique constraints are violated.
Example:
INSERT IGNORE INTO table_name SET name = 'foo', value = 'bar', id = 12345;
Is it possible, in one SQL statement, to insert a record, take the auto-increment id, and update one specific column of the same record with this auto-increment value?
Thanks in advance.
Strictly speaking you cannot do it in a single SQL statement (as others have already pointed out).
However, since you mention that you want to avoid making changes to legacy application let me clarify some options that might work for you.
If you had a trigger on the table that updated the second column, then issuing a single insert would give you what you want, and you might not need to change anything in the application.
If possible, you could rename the table and put a VIEW with the same name in its place. With such a simple view it might be transparent to your application (I'm not sure whether the VIEW would remain updatable with your framework, but generally speaking it should).
Finally, with the mysqli library you are free to issue multiple SQL statements in a single call to the database, which might be enough for you, depending on how exactly you define 'single statement'.
None of the above will ever be comparable to fixing the application, in terms of maintainability for the person who inherits your code.
Doing an insert automatically fills in the value for an auto_increment column (just define it to use AUTO_INCREMENT). There is no need to have the same value twice in one record.
Doing an UPDATE + INSERT together is not possible in a single query.
I found this article that may be of interest to you:
http://www.daniweb.com/forums/thread107837.html
They suggest it is possible to do the insert and update in one query.
They show a query like:
INSERT INTO table (FIELD) VALUES (value) ON DUPLICATE KEY UPDATE FIELD=value
I hope this helps, and to all the naysayers: anything is possible.
While I believe it is possible, your safest bet is probably to split this operation up into three stages.
I successfully did this on my own database locally with this code:
INSERT INTO status SET status_id = 5 ON DUPLICATE KEY UPDATE status_id = 5;
SELECT LAST_INSERT_ID();
You should be able to transform it to work for you.
You can write an AFTER INSERT trigger which takes MAX(id) and updates the record.
That's not possible at all.
You have to either do this separately or you may create a function/stored procedure to achieve this mission.
Multiple statements can be separated by a semicolon, but I believe you need to use a function in PHP to get the autoincrement value. Your best bet might be to use a stored procedure.