Well, basically I'm inserting into a relational MySQL database whose tables use the MyISAM storage engine, with PHP.
Let's say a user fills out a web form and posts it for insertion; the data they provide may belong to parts of several tables.
So using PHP I'll insert the values into one table, then into another, then into another, and so on.
But say I insert three lots of data into three tables, and on the fourth insertion the SQL fails. I then need to return an error message to the user, but I ALSO have to undo all the previous inserts.
I could simply delete all the previous inserts on failure...
However, I wondered if there was an easier way?
Something like handing the SQL queries to the MySQL engine so that it temporarily stores the data and SQL and, on command, runs through all the statements?
Any ideas?
Start with:
mysql_query("START TRANSACTION");
Then, if all of your inserts work successfully:
mysql_query("COMMIT");
Otherwise, if there is a failure somewhere:
mysql_query("ROLLBACK");
Done ^_^ I love this feature.
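Putting those pieces together, here's a minimal sketch of the whole flow. The tables (users, profiles, settings) and columns are made up for illustration, and it assumes a transactional engine (see the edit below):

// Sketch only: hypothetical tables, legacy mysql_* API as used above.
mysql_query("START TRANSACTION");

$ok = mysql_query("INSERT INTO users (name) VALUES ('Alice')");
$userId = $ok ? mysql_insert_id() : 0;   // id generated by the first insert
$ok = $ok && mysql_query("INSERT INTO profiles (user_id, bio) VALUES ($userId, 'Hello')");
$ok = $ok && mysql_query("INSERT INTO settings (user_id, theme) VALUES ($userId, 'default')");

if ($ok) {
    mysql_query("COMMIT");   // all three inserts become permanent together
} else {
    mysql_query("ROLLBACK"); // every insert since START TRANSACTION is undone
    echo "Insert failed: " . mysql_error();
}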
EDIT (following point made in a comment): This will only work in a database engine supporting transactions, and MyISAM is not one of them. I strongly recommend InnoDB, as it supports row-level locking, making your queries much less likely to encounter a lockup.
Hope this will help you:
http://www.icommunicate.co.uk/blog/-/myisam-transactions_20/
Try using SET AUTOCOMMIT=0 to disable autocommitting.
Also realize you must use a recent MySQL version which supports InnoDB tables.
Notice that when doing that you always need to use COMMIT to save changes:
START TRANSACTION;
UPDATE table SET summary='whatever' WHERE type=1;
COMMIT;
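From PHP, the autocommit variant is the same idea; a sketch (mytable is a stand-in name):

mysql_query("SET AUTOCOMMIT=0");  // this connection no longer auto-commits
mysql_query("UPDATE mytable SET summary='whatever' WHERE type=1");
// ... more statements ...
mysql_query("COMMIT");            // nothing is visible to others before this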
PHP does support transactions; use separate query statements for each command:
mysql_query("BEGIN");
mysql_query("COMMIT");
mysql_query("ROLLBACK");
Friend,
You can't use the MyISAM storage engine if you want to roll back after a failed step. If you can use InnoDB as your MySQL storage engine, then I can answer the question you asked here. Not only that, I can tell you what to write in PHP for MySQL, and share some sample code that will make the concept clear and easy for you. But first you have to confirm that you can change your tables' storage engine to InnoDB. If you want to change it but don't know how, feel free to ask; I will try my best.
If you want to ask me anything, just edit this answer and add your question at the end; you can add comments too.
Good luck, friend.
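For reference, switching an existing table to InnoDB is a single statement; a sketch (your_table is a placeholder, and it's wise to back the table up first):

// One-off conversion; repeat for every table that must join transactions.
mysql_query("ALTER TABLE your_table ENGINE=InnoDB");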
I have a table with user login information and registration too. So when two users try to add their details at the same time:
Will both writes clash, so that the table won't be updated?
Using threads for these writes seems like a bad idea, since a new thread would be created for each write and would clog the server. Is the server responsible for managing this on its own?
Is locking the table a good idea?
My back-end runs on PHP/Apache with MySQL (InnoDB) for the database.
Relational databases are designed to avoid these kinds of conditions. You don't need to worry about them unless you are designing your own relational database from scratch.
In short, just know this: Any time a write is initiated, there is a row-level lock. If another transaction wants to write to that same row, then it has to wait until the first transaction releases the lock. This is a fundamental part of relational databases. You don't need to add a lock because they've already thought of that :)
You can read more about how MySQL performs locks to avoid deadlocking and other transaction errors here.
If you're really paranoid about this, or perhaps you are doing multiple things when you register a user and need them done atomically, you might want to look at using transactions in MySQL. There's a decent write-up about transactions here: http://www.mysqltutorial.org/mysql-transaction.aspx
BEGIN;
-- do related reads/writes to the data
COMMIT;
Inside that "transaction", the connection sees a consistent view of the data, and blocks anyone else from messing with that view.
There are exceptions. The main one is
BEGIN;
SELECT ... FOR UPDATE;
-- fiddle with the values SELECTed
UPDATE ...;  -- and change those values
COMMIT;
The SELECT ... FOR UPDATE announces what should not be tampered with. If another connection wants to mess with the same rows, it will have to wait until your COMMIT, at which point it may find that things have changed and it will need to do something different. But, in general, this avoids a "deadlock" wherein two transactions step on each other so badly that one has to be rolled back.
With techniques like this, concurrency is prevented only briefly and relatively precisely. That is, if two connections are working with different rows, both can proceed; there is no need to "prevent concurrency".
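From PHP, that read-modify-write pattern might look like the following PDO sketch; the accounts table, its columns, and the connection details are all hypothetical:

$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $db->beginTransaction();

    // Announce the row we intend to change; others must wait until COMMIT.
    $stmt = $db->prepare('SELECT points FROM accounts WHERE id = ? FOR UPDATE');
    $stmt->execute(array(42));
    $points = (int) $stmt->fetchColumn();

    // Fiddle with the SELECTed value, then write it back.
    $upd = $db->prepare('UPDATE accounts SET points = ? WHERE id = ?');
    $upd->execute(array($points + 10, 42));

    $db->commit();
} catch (Exception $e) {
    $db->rollBack();   // give up cleanly if anything went wrong
    throw $e;
}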
I'm sorry, this is a very general question but I will try to narrow it down.
I'm new to this whole transaction thing in MySQL/PHP, but it seems pretty simple. I'm just using mysql, not mysqli or PDO. I have a script that seems to be rolling back some queries but not others. This is uncharted territory for me, so I have no idea what is going on.
I start the transaction with mysql_query('START TRANSACTION;'), which I understand disables autocommit at the same time. Then I have a lot of complex code, and whenever I run a query it is something like mysql_query($sql) or $error = "Oh noes!";. Periodically I call a function error_check(), which checks whether $error is non-empty; if it is, I do mysql_query('ROLLBACK;') and die($error). Later on in the code I have mysql_query('COMMIT;'). But if I run two queries and then purposely throw an error (I mean just set $error to something), it looks like the first query rolls back but the second one doesn't.
What could be going wrong? Are there some gotchas with transactions I don't know about? I don't have a good understanding of how these transactions start and stop, especially when you mix PHP into it...
EDIT:
My example was overly simplified: I actually have at least two transactions doing INSERT, UPDATE or DELETE on separate tables. But before I execute each of those statements, I back up the affected rows in corresponding "history" tables to allow undoing. It looks like the manipulation of the main tables gets rolled back, but the entries in the history tables remain.
EDIT2:
Doh! As I finished typing the previous edit it dawned on me...there must be something wrong with those particular tables...for some reason they were all set as MyISAM.
Note to self: Make sure all the tables use transaction-supporting engines. Dummy.
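One way to catch that mistake early is to ask MySQL which tables are still MyISAM; a sketch, assuming MySQL 5.0+ (for information_schema) and the same mysql_* API:

// List every MyISAM table in the current database.
$result = mysql_query(
    "SELECT TABLE_NAME FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = DATABASE() AND ENGINE = 'MyISAM'");
while ($row = mysql_fetch_row($result)) {
    echo "Not transaction-safe: {$row[0]}\n";
}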
I'd recommend using the mysqli or PDO functions rather than mysql, as they offer some worthwhile improvements—especially the use of prepared statements.
Without seeing your code, it is difficult to determine where the problem lies. Given that you say your code is complex, it is likely that the problem lies with your code rather than MySQL transactions.
Have you tried creating some standalone test scripts? Perhaps you could isolate the SQL statements from your application, and create a simple script which simply runs them in series. If that works, you have narrowed down the source of the problem. You can echo the SQL statements from your application to get the running order.
You could also try testing the same sequence of SQL statements from the MySQL client, or through PHPMyAdmin.
Are your history tables in the same database?
MySQL transactions work nicely through the mysqli API (with the classic mysql_* functions you have to send the SQL statements yourself). I have been using transactions; all I do is deactivate autocommit and run my SQL statements:
$mysqli->autocommit(FALSE);
SELECT, INSERT, and DELETE are all supported. As long as I'm using the same mysqli handle to call these statements, they are within the transaction wrapper. Nobody outside (not using the same mysqli handle) will see any data that you write/delete using INSERT/DELETE as long as the transaction is still open, so it's critical that you make sure every SQL statement is fired with that handle. Once the transaction is committed, the data is made available to other DB connections.
$mysqli->commit();
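An end-to-end sketch of that pattern (the queries and connection details are made up):

$mysqli = new mysqli('localhost', 'user', 'pass', 'test');
$mysqli->autocommit(FALSE);   // open the transaction wrapper on this handle

$ok = $mysqli->query("INSERT INTO log (msg) VALUES ('step 1')")
   && $mysqli->query("DELETE FROM queue WHERE id = 7");

if ($ok) {
    $mysqli->commit();        // now other connections can see the changes
} else {
    $mysqli->rollback();      // nothing ever becomes visible
}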
For fun I am replacing the mysqli extension in my app with PDO.
Once in a while I need to use transactions + table locking.
In these situations, according to the mysql manual, the syntax needs to be a bit different. Instead of calling START TRANSACTION, you do it like so...
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
(http://dev.mysql.com/doc/refman/5.0/en/lock-tables-and-transactions.html)
My question is: how does this interact with PDO::beginTransaction? Can I use PDO::beginTransaction in this case, or should I manually send the SQL ("SET autocommit = 0; ... etc.")?
Thanks for the advice,
When you call PDO::beginTransaction(), it turns off auto commit.
So you can do:
$db->beginTransaction();
$db->exec('LOCK TABLES t1 WRITE, t2 READ, ...');
# do something with tables
$db->commit();
$db->exec('UNLOCK TABLES');
After a commit() or rollBack(), the database will be back in auto commit mode.
I have spent a huge amount of time chasing this issue, and the PHP documentation in this area is vague at best. A few things I have found, running PHP 7 with a MySQL InnoDB table:
PDO::beginTransaction doesn't just turn off autocommit: having tested the answer provided by Olhovsky with code that fails, rollbacks do not work and there is no transactional behaviour. This means it can't be that simple.
Beginning a transaction may be locking the used tables. I eagerly await someone telling me I'm wrong about this, but here are the reasons it could be true: this comment, which shows a table being inaccessible after a transaction has started, without being locked; and this PHP documentation page, which slips this in at the end:
... while the transaction is active, you are guaranteed that no one else can make changes while you are in the middle of your work
To me this behaviour is quite smart, and it also provides enough wiggle room for PDO to cope with every database, which is after all the aim. If this is what is going on, though, it's massively under-documented and should have been called something else to avoid confusion with a true database transaction, which doesn't imply locking.
Charles' answer is probably the best if you are after certainty with a workload that requires high concurrency: do it by hand with explicit queries to the database, and then you can go by the database's own documentation.
Update
I have had a production server up and running using the PDO transaction functions for a while now, most recently against AWS's Aurora database (fully compatible with MySQL, but built to scale automatically, etc.). I have proven these two points to myself:
Transactions (purely the ability to commit all database changes together) work using PDO::beginTransaction(). In short, I know many scripts have failed halfway through their database selects/updates and data integrity has been maintained.
Table locking isn't happening; I've had an index-duplication error to prove it.
So, to further my conclusion, it looks like the behaviour of these functions changes based on the database engine (and possibly other factors). As far as I can tell, both from experience and from the documentation, there is no way to know programmatically what is going on... whoop...
In MySQL, beginning a transaction is different than turning off autocommit, due to how LOCK/UNLOCK TABLES works. In MySQL, LOCK TABLES commits any open transactions, but turning off autocommit isn't actually starting a transaction. MySQL is funny that way.
In PDO, starting a transaction using beginTransaction doesn't actually start a new transaction, it just turns off autocommit. In most databases, this is sane, but it can have side effects with MySQL's mentioned behavior.
You probably should not rely on this behavior and how it interacts with MySQL's quirks. If you're going to deal with MySQL's behavior around table locking and DDL, avoid beginTransaction: if you want autocommit off, turn it off by hand, and if you want to open a transaction, open one by hand.
You can freely mix the PDO API methods of working with transactions and SQL commands when not otherwise working with MySQL's oddities.
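Doing it by hand through PDO, following the manual's sequence quoted above, might look like this sketch (t1 and t2 are placeholders):

// Send the statements verbatim so PDO's own transaction
// bookkeeping never gets involved.
$db->exec("SET autocommit=0");
$db->exec("LOCK TABLES t1 WRITE, t2 READ");

// ... do something with tables t1 and t2 here ...

$db->exec("COMMIT");          // commit before releasing the locks
$db->exec("UNLOCK TABLES");
$db->exec("SET autocommit=1");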
I have 2 tables:
user_tb.username
user_tb.point
review_tb.username
review_tb.review
I am coding in PHP (CodeIgniter). I am trying to insert data into review_tb with the review the user submitted, and if that succeeds, I will award the user some points.
Well, this looks like a very simple process: first insert the review into review_tb with the username, use PHP to check whether the query executed successfully, and if it did, proceed to update the points in user_tb.
Yea, but here comes the problem. What if inserting into review_tb succeeds, but the second query, updating user_tb, fails? Can we somehow "undo" the review_tb query, i.e. revert the change we made to review_tb?
It's kind of like "all or nothing".
The purpose of this is to keep all the data across the database in sync. In real life we will be managing a database with more tables, inserting data into each table, where the inserts depend on each other.
Please give me some enlightenment on how to do this in PHP, CodeIgniter, or plain MySQL queries.
If you want "all or nothing" behavior for your SQL operations, you are looking for transactions; here is the relevant page from the MySQL manual: 12.4.1. START TRANSACTION, COMMIT, and ROLLBACK Syntax.
Wikipedia describes them this way:
A database transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:
To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.
To provide isolation between programs accessing a database concurrently. Without isolation the programs' outcomes are typically erroneous.
Basically:
you start a transaction
you do what you have to; i.e., your first insert and your update
if everything is OK, you commit the transaction
else, if there is any problem with any of your queries, you roll back the transaction, and it will cancel everything you did in that transaction.
There is a manual page about transactions and CodeIgniter here.
Note that, with MySQL, not every engine supports transactions; of the two most commonly used engines, MyISAM doesn't support them, while InnoDB does.
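Since you're on CodeIgniter, its database class wraps all of this. A minimal sketch, assuming CodeIgniter 2/3's trans_* API (the points math is illustrative):

$this->db->trans_start();     // begin automatic transaction mode

$this->db->insert('review_tb', array(
    'username' => $username,
    'review'   => $review,
));

$this->db->set('point', 'point + 10', FALSE);  // award some points
$this->db->where('username', $username);
$this->db->update('user_tb');

$this->db->trans_complete();  // commits, or rolls back if any query failed

if ($this->db->trans_status() === FALSE) {
    // both statements were rolled back; report the error to the user
}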
Can't you use transactions? If you did both inserts inside the same transaction, then either both succeed or neither does.
Try something like
BEGIN;
INSERT INTO review_tb(username, review) VALUES(x, y);
INSERT INTO user_tb(username, point) VALUES(x, y);
COMMIT;
Note that you need to use a database engine that supports transactions (such as InnoDB).
If you have InnoDB support, use it. When it's not possible, you can use code similar to the following:
$result = mysql_query("INSERT INTO ...");
if (!$result) return false;
$result = mysql_query("INSERT INTO somewhereelse");
if (!$result) {
    mysql_query("DELETE FROM ...");
    return false;
}
return true;
This cleanup might still fail, but it works whenever the insert query fails because of duplicates or constraints. For unexpected terminations, the only way is to use transactions.
Is it possible to do a simple count(*) query in a PHP script while another PHP script is doing insert...select... query?
The situation is that I need to create a table with ~1M or more rows from another table, and while inserting I do not want the user to feel the page is freezing. So I am trying to keep updating a count, but by running select count(*) from table while the insert is running in the background, I get only 0 until the insert is completed.
So is there any way to ask MySQL to return partial results first? Or is there a fast way to do a series of inserts with data fetched from a previous select query while keeping about the same performance as insert...select...?
The environment is PHP 4.3 and MySQL 4.1.
Without reducing performance? Not likely. With a little performance loss, maybe...
But why are you regularly creating tables and inserting millions of rows? If you do this only very seldom, can't you just warn the admin (presumably the only one allowed to do such a thing) that it takes a long time? If you're doing this all the time, are you really sure you're not doing it wrong?
I agree with Stein's comment that this is a red flag if you're copying 1 million rows at a time during a PHP request.
I believe that in a majority of cases where people are trying to micro-optimize SQL, they could get much greater performance and throughput by approaching the problem in a different way. SQL shouldn't be your bottleneck.
If you're doing a single INSERT...SELECT, then no, you won't be able to get intermediate results. In fact this would be a Bad Thing, as users should never see a database in an intermediate state showing only a partial result of a statement or transaction. For more information, read up on ACID compliance.
That said, the MyISAM engine may play fast and loose with this. I'm pretty sure I've seen MyISAM commit some but not all of the rows from an INSERT...SELECT when I've aborted it part of the way through. You haven't said which engine your table is using, though.
The other users can't see the insertion until it's committed. That's normally a good thing, since it makes sure they can't see half-done data. However, if you want them to see intermediate data, you could throw in an occasional call to COMMIT while you're inserting.
By the way - don't let anybody tell you to turn autocommit on. That's a HUGE time waster. I have a "delete and re-insert" job on my database that takes a third as long when I turn off autocommit.
Just to be clear, MySQL 4 isn't configured by default to use transactions. It uses the MyISAM table type, which locks the entire table for each insert, if I remember correctly.
Your best bet would be to use one of MySQL's bulk insertion mechanisms, such as LOAD DATA INFILE, as these are dramatically faster at inserting large amounts of data. As for the counting, you could break the inserts into N groups of 1000 (or Y), then divide your progress meter into N sections and update it after each group completes.
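A rough sketch of that chunked approach, assuming the source table has an auto-increment id column to key the batches on (all names hypothetical):

// Copy rows in batches of 1000, tracking progress between batches.
$total  = mysql_result(mysql_query("SELECT COUNT(*) FROM source_tb"), 0);
$copied = 0;
$lastId = 0;

do {
    mysql_query("INSERT INTO dest_tb (id, data)
                 SELECT id, data FROM source_tb
                 WHERE id > $lastId ORDER BY id LIMIT 1000");
    $rows = mysql_affected_rows();
    if ($rows > 0) {
        $copied += $rows;
        $lastId  = mysql_result(mysql_query("SELECT MAX(id) FROM dest_tb"), 0);
    }
    // record "$copied / $total" somewhere the progress page can poll
} while ($rows > 0);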
Edit: Another thing to consider is, if this is static data for a template, then you could use a "select into" to create a new table with the same data. Not sure what your application is, or the intended functionality, but that could work as well.
If you can get to the console, you can ask various status questions that will give you the information you are looking for. There's a command that goes something like SHOW PROCESSLIST.