Is it possible for a PHP script to be terminated between two MySQL queries?
For example user registration:
1st INSERT: I enter the basic first name, last name, and address in one table,
and
2nd INSERT: I enter the user's hashed password and salt in another table.
This operation requires two queries, and each record on its own is useless.
What if the PHP script terminates after executing the first query?
The user will just get a server error message, but one useless record will be left behind.
Any solutions?
EDIT:
My web host does not provide the InnoDB engine; only MyISAM is supported.
Use a transaction:
START TRANSACTION;
INSERT INTO foo ...;
INSERT INTO bar ...;
COMMIT;
If either INSERT fails, you ROLLBACK the transaction and you won't be left with "useless" records.
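The same all-or-nothing behaviour is visible from any client library. Below is a runnable sketch using Python's sqlite3 module purely for illustration (the table and column names are invented); with MySQL and InnoDB the pattern is identical, but note that MyISAM silently ignores transactions, so on a MyISAM-only host this approach will not protect you.

```python
import sqlite3

# In-memory SQLite stands in for MySQL/InnoDB; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)")
conn.execute("CREATE TABLE credentials (user_id INTEGER, pw_hash TEXT, salt TEXT)")
conn.commit()

try:
    with conn:  # opens a transaction: commits on success, rolls back on error
        conn.execute(
            "INSERT INTO users (first_name, last_name) VALUES (?, ?)",
            ("Alice", "Smith"),
        )
        # Simulate the script dying before the second INSERT runs.
        # (The credentials INSERT would follow here.)
        raise RuntimeError("terminated between the two INSERTs")
except RuntimeError:
    pass

# The first INSERT was rolled back, so no orphaned user row remains.
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 0
```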
If you use a transaction such as:
BEGIN;
INSERT INTO TBL(...) VALUES(...);
INSERT INTO TBL(...) VALUES(...);
COMMIT;
Everything is sent to the MySQL server in one go and then run as a batch, meaning your script can terminate as soon as the transfer is complete, rather than waiting on each individual query. It also has the added bonus of using less RAM and being much faster.
Let's say I insert some data into multiple different tables.
Table A:
Name
Address
Location
Table B:
Name
Address
Location
What is the chance of MySQL inserting into one table but not the other, if these were two different MySQL queries?
What I am trying to ask is: what is the chance of PHP or MySQL not inserting the data when all of the data is completely valid?
Can PHP or MySQL mess up in any way and miss a query, especially if I am running hundreds per second?
If so, how would I combat this?
Use a "database transaction".
A database transaction commits ALL or NONE of the operations you are performing, all at once.
If you have multiple INSERT, UPDATE, and/or DELETE operations that you would like to perform together, then you should:
Initiate a transaction.
Perform each one of the operations, one after the other.
Commit the transaction.
This way, if something fails in between, none of the operations take effect: nothing becomes permanent until the commit is executed.
I have a transaction like this (innoDB):
START TRANSACTION;
SELECT 1 FROM test WHERE id > 5; -- let's assume this returns 0 rows

-- some very long operation here

-- if the previous SELECT returned 0 rows, this INSERT will be executed
INSERT INTO test VALUES ...;
COMMIT;
Now the problem is that if several sessions execute this at the same time, they will all end up executing the INSERT: by the time the long task finishes in each session, every session has had plenty of time to do the SELECT, and it returned 0 rows for all of them, since no INSERT had yet been executed while the long task was running.
So basically, I need to somehow lock the whole test table (so other sessions can't read it and are forced to wait) after I execute START TRANSACTION, but I am not sure how: I can't use a LOCK TABLES test query, because that implicitly commits the transaction I have started.
I also cannot use SELECT ... FOR UPDATE, because that only prevents the existing rows from being modified; it won't prevent new rows from being inserted.
If you've got some long-running task which only needs to be run once, set a flag in a separate table to say that the task has started, and check that instead of the number of rows written to the target table, so that another instance of the service does not kick off the job twice.
This also has the advantage that you're not relying on the specifics of the task to know its status (so if the nature of the task changes, the integrity of your other code doesn't collapse), and you're not trying to work around the transaction by doing something horrible like locking an entire table. One of the points of using transactions is that it's not necessary to lock everything (different isolation levels can of course be used, but that's not the subject of this discussion).
Then set the flag back to false when the last bit of the task has finished.
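The flag idea can be sketched as follows: a runnable illustration using Python's sqlite3 (table, column, and job names are all invented). The key point is that a single conditional UPDATE is atomic, so only one session can flip the flag from 0 to 1; every other session sees zero affected rows and backs off.

```python
import sqlite3

# Invented schema: one row per job, with a "started" flag acting as the claim.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job_status (job TEXT PRIMARY KEY, started INTEGER, done INTEGER)")
conn.execute("INSERT INTO job_status VALUES ('nightly-import', 0, 0)")
conn.commit()

def try_claim(conn):
    """Return True if this worker claimed the job, False if someone already has."""
    cur = conn.execute(
        "UPDATE job_status SET started = 1 "
        "WHERE job = 'nightly-import' AND started = 0")
    conn.commit()
    return cur.rowcount == 1  # exactly one worker sees an affected row

first = try_claim(conn)   # this worker claims the job and runs the long task
second = try_claim(conn)  # a second worker finds it already claimed

# ... long-running task happens here, then mark it finished:
conn.execute("UPDATE job_status SET done = 1 WHERE job = 'nightly-import'")
conn.commit()
print(first, second)  # True False
```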
I am developing an app based on PHP/Laravel. There is a scenario where I have to insert a large number of rows into a MySQL table; it takes about 1-2 minutes to complete the insert once it is submitted by the user. I want to block another insert request while the previous one is still in progress.
So I made a log table: a value of 0 is inserted into the is_completed field before the bulk insert starts on the other table, and when the bulk-insert loop completes, I trigger another query that sets is_completed to 1 in the log table.
This way I can check whether the is_completed flag is set to 1 before allowing the user's next request, and otherwise reject it. I use a transaction for the bulk insert, and I clear the log when handling any exception that occurs.
Now the problem: if the server restarts while the bulk insert is in progress, the is_completed flag is left at 0 in the log table, and the user won't be able to submit data until the next day.
Does MySQL provide any kind of automation so that a query can run as soon as the DB server starts? If so, I will use it to clear the log table. Can anyone tell me how to do that, or suggest a better solution for my scenario?
You can use the init_file server option to run SQL statements on server startup.
https://dev.mysql.com/doc/refman/5.6/en/server-options.html#option_mysqld_init-file
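A minimal sketch: the config file path and startup-script path are assumptions, and the log table name is invented (the question only describes a log table with an is_completed field).

```ini
# /etc/my.cnf
[mysqld]
init_file=/etc/mysql/startup.sql
```

```sql
-- /etc/mysql/startup.sql
-- Clear any rows left stuck "in progress" by a crash mid-bulk-insert,
-- so users are not locked out after a server restart.
DELETE FROM bulk_insert_log WHERE is_completed = 0;
```

The file must be readable by the user the MySQL server runs as, and its statements run each time the server starts.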
INSERT INTO tbl (name) VALUES ('john'), ('bale'), ('ron');
Suppose a person runs this query, and another person in a different part of the world runs
INSERT INTO tbl (name) VALUES ('johnny'), ('baleton'), ('ronny');
a few seconds after the previous query, but before it completes on the server. Will the inserted values be consecutive, like this:
'john','bale','ron','johnny','baleton','ronny'
or may they not be? The tbl has id|name as its columns.
I believe each query in MySQL happens in a single transaction (if autocommit is enabled). If you manage your transactions yourself, the situation is even more obvious.
I believe that for this reason the records will be inserted in order.
Edit:
I assume this is all about autoincrement otherwise the question doesn't make sense as explained in comment under the original question.
I stand corrected. The doc states:
When accessing the auto-increment counter, InnoDB uses a special table-level AUTO-INC lock that it keeps to the end of the current SQL statement, not to the end of the transaction.
So yes, still in order: not for the scope of a transaction, but for a single SQL statement.
MySQL (at least with the InnoDB engine) will assign the next Auto Increment value (ai_max + ai_increment) to the first statement (not transaction) that reads the table on INSERT. So if another statement comes along, attempts to INSERT, and finalizes before the first, it gets the NEXT AI value (ai_max + 2*ai_increment), not the one assigned to the first statement.
This is about as "in order" as databases get.
For more information on MySQL InnoDB Auto Increment, see the MySQL developer's reference
I have no knowledge of locking whatsoever. I have been looking through some MySQL documentation and can't fully understand how this whole process goes about. What I need, is for the following events in my script to happen:
step 1) table user gets locked
step 2) my script selects two rows from table user
step 3) my script makes an update to table user
step 4) table user gets unlocked because the script is done
How do I go about this exactly? And what happens when another user runs this same script while the table is locked? Is there a way for the script to know when to proceed (when the table becomes unlocked)? I have looked into START TRANSACTION and SELECT ... FOR UPDATE, but the documentation is very unclear. Any help is appreciated. And yes, the table is InnoDB.
I believe what you are looking for is the SELECT ... FOR UPDATE syntax available for InnoDB tables. It will lock only the rows you want to update; you do need to wrap it in a transaction.
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html
For example, run your queries like this:
START TRANSACTION;
SELECT ... FOR UPDATE;
UPDATE ...;
COMMIT;
Eliminate step 2 by performing your select as part of your UPDATE call; then MySQL takes care of the rest. Only one write query can run at a time; the others will be queued behind it.