Under plain PHP 5.3, I have some code which uses MySQL to first delete some old records, record a log entry, perform a few tiny operations, and then add new replacement records.
The delete command looks like this:
DELETE FROM `rtable` WHERE `UserName`='%s';
And the add commands look like this:
INSERT INTO `table` (`UserName`,`Attribute`,`op`,`Value`) VALUES ('%s','%s','%s','%s');
Oddly, though, the insert commands appear not to execute when running normally; however, if I enable my debugger and step through one line at a time, it appears to work. Likewise, if I insert a two-second sleep after the delete command, it appears to work. I am therefore assuming that the insert commands are running -before- the delete commands, and thus the delete commands are also erasing the new records.
How can I get PHP to wait for the delete operation to finish before continuing to the insert commands?
That sounds really odd.
Do you happen to have a replicated database cluster?
Also, do you check the return value of the mysql_query or whatever command and print the error message (which of course is not recommended for scripts in production)?
I am not totally certain how PHP deals with processes and how consecutive queries are run, but if you want to be certain, you can encapsulate the delete in a transaction with PDO like this:
$dbh->beginTransaction();
$sth = $dbh->exec("DELETE FROM `rtable` WHERE `UserName`='%s'");
$dbh->commit();
// You could also pop a transaction around the inserts
// in case another page tries to do the same
$sth = $dbh->exec("INSERT INTO `table`
(`UserName`,`Attribute`,`op`,`Value`)
VALUES ('%s','%s','%s','%s')");
BTW: I took the liberty of correcting the single quotes to backticks in your queries.
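If the real goal is to make sure the DELETE and the INSERTs cannot interleave (with each other or with another request), you could also wrap both in one transaction and use prepared statements instead of sprintf-style placeholders. A rough sketch follows; the DSN, credentials, and row data are assumptions, and I have used `rtable` for both statements:
<?php
// Sketch only: connection details and $rows are placeholders.
$dbh = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$userName = 'someuser';
$rows = array(
    array('Attribute' => 'SomeAttribute', 'op' => ':=', 'Value' => 'SomeValue'),
);

try {
    $dbh->beginTransaction();

    // Delete the old records first...
    $del = $dbh->prepare("DELETE FROM `rtable` WHERE `UserName` = ?");
    $del->execute(array($userName));

    // ...then add the replacements inside the same transaction,
    // so nothing else can run between the two steps.
    $ins = $dbh->prepare("INSERT INTO `rtable` (`UserName`,`Attribute`,`op`,`Value`)
                          VALUES (?,?,?,?)");
    foreach ($rows as $r) {
        $ins->execute(array($userName, $r['Attribute'], $r['op'], $r['Value']));
    }

    $dbh->commit();
} catch (PDOException $e) {
    $dbh->rollBack(); // undo the delete if any insert fails
    echo 'Query failed: ' . $e->getMessage();
}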
I used to use
<?php
$sql = "insert into test (owner) values ('owen')";
$db->autocommit(false);
if (!$db->query($sql))
$db->rollback();
else
$db->commit();
$db->close();
?>
However, today I ran two PHP insert scripts against the same table, without any transaction handling. Each is as simple as:
<?php
$sql = "insert into test (owner) values ('owen')"; //the other php is the same but replacing 'owen' to 'huhu'
for ($i = 0; $i < 100 * 1000; $i++) {
$db->query($sql);
}
$db->close();
?>
I ran the two PHP files in two different consoles and got 200,000 records without any error. Does that mean manually using transactions is really not needed, since there are no conflicts?
You do not need transactions for this.
Transactions exist so that you can roll back half-finished changes to the database. These only occur when you have a set of multiple statements changing the database which might be interrupted in between. Then often only some of the statements have been executed, which might leave the database in a state that is not 'clean' from the application's point of view.
A simple and good example is a money transfer between two tables:
first the amount is removed from one table
then it is added to a second table
If this process is interrupted in between, the money has vanished. That is not intended; you probably want to be able to roll back.
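As a minimal sketch of that transfer with PDO (the accounts table and its columns are invented for illustration):
<?php
// Sketch only: the 'accounts' table and its columns are made up.
$dbh = new PDO('mysql:host=localhost;dbname=bank', 'user', 'pass');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $dbh->beginTransaction();

    // Remove the amount from the first account...
    $dbh->exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1");

    // ...and add it to the second. If we are interrupted in between,
    // the rollback below restores the original balances.
    $dbh->exec("UPDATE accounts SET balance = balance + 100 WHERE id = 2");

    $dbh->commit();
} catch (PDOException $e) {
    $dbh->rollBack();
    echo 'Transfer failed, nothing was changed: ' . $e->getMessage();
}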
In your case, however, all statements are 'atomic', meaning they either succeed or fail, but the database's state is always 'clean'. It does not matter here whether a single client or multiple clients run the statements.
I got a very weird problem with Oracle today.
I set up a new server with XAMPP for development, I activated MSSQL and Oracle, and everything was just fine until I tried to execute an UPDATE statement.
Every SELECT, INSERT, etc. is working fine with PHP 5.3.
I can also parse the statement and get a resource id back, but when I try to execute the statement, my whole site stops responding.
No error, nothing. Just a timeout until I restart Apache.
Here is the code... it's test code, so there should be no problem at all.
$conn = oci_connect('***', '***', '***');
$query ="UPDATE CHAR*** SET TPOS = 14, ID = 5, DIFF = 'J' WHERE ***NR = '3092308' AND LA*** = '5'";
echo $query;
echo '<br>';
echo $stid = oci_parse($conn, $query);
oci_execute($stid, OCI_DEFAULT);
oci_free_statement($stid);
Any hints or ideas? :-(
I already tried to reinstall the oracle instant client and another version. I am using 10g like our db at the moment.
best regards
pad
The row may be locked by another session. If this is the case, your session will hang until the other transaction ends (commit/rollback).
You should do a SELECT FOR UPDATE NOWAIT before attempting to update a row (pessimistic locking; see the sketch after the list below):
If the row is locked, you will get an error and can return a message to the user that this record is currently being updated by another session. In most cases an explicit message is preferable to indefinite waiting.
If the row is available, you will make sure no session modifies its content until you commit (and thus you will prevent any form of lost update).
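A minimal PHP/OCI8 sketch of that pessimistic-locking pattern; the connection details, table and column names below are placeholders, not the (obfuscated) ones from the question:
<?php
// Sketch only: 'mytable' and its columns are placeholders.
$conn = oci_connect('user', 'pass', 'db');

$nr = '3092308';

// Try to lock the row first. NOWAIT makes Oracle return an error
// (ORA-00054) immediately instead of hanging if another session holds the lock.
$lock = oci_parse($conn, "SELECT tpos FROM mytable WHERE nr = :nr FOR UPDATE NOWAIT");
oci_bind_by_name($lock, ':nr', $nr);

if (!@oci_execute($lock, OCI_DEFAULT)) {
    $e = oci_error($lock);
    echo 'This record is being edited by another session: ' . $e['message'];
} else {
    // We hold the lock now, so the update cannot block on this row.
    $upd = oci_parse($conn, "UPDATE mytable SET tpos = 14 WHERE nr = :nr");
    oci_bind_by_name($upd, ':nr', $nr);
    oci_execute($upd, OCI_DEFAULT);
    oci_commit($conn); // or oci_rollback($conn)
}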
There are other reasons why a simple update may take a long time, but they are less likely; for instance:
When you update an unindexed foreign key, Oracle needs to acquire a lock on the whole parent table for a short time. This may take a long time on a busy and/or large table.
There could be triggers on the table that perform additional work.
For further reading: pessimistic vs optimistic locking.
I'm curious whether this can be achieved, as I'm currently facing a bug and would like to see if putting a SELECT and an UPDATE in a transaction would fix it (if you're wondering why I'm not posting the code that causes the bug, it's because it's a complex environment and I can't post all the influencing factors).
Something related that I'm also interested in is whether you have ever experienced code that had an UPDATE query written after a SELECT query, yet the UPDATE got executed before the SELECT (with the possibility that the script might run twice ruled out).
It depends on what you mean by a transaction.
There are two types of transactions:
Implicit transactions: as in INSERT, UPDATE, SELECT, and DELETE statements; such statements carry no explicit transaction commands, and the database engine will roll back the whole statement if an error happens.
Explicit transactions: the statements enclosed inside the transaction are executed as a unit, and you either COMMIT the whole transaction or ROLLBACK it.
So you can't have both SELECT and UPDATE inside one query, but you can put them inside a transaction like:
START TRANSACTION;
SELECT * FROM tableName;
UPDATE table SET something = 'other something' WHERE thirdsomething = #s;
COMMIT;
Then put them in a stored procedure or a UDF.
Note that SELECT statements do not modify data, so you might not need to enclose the SELECT in a transaction; in your case, where only the UPDATE statement modifies data, you can just use a stored procedure without a transaction.
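If you would rather drive it from PHP than from a stored procedure, a rough PDO equivalent of the block above might look like this (the DSN and credentials are placeholders, and the table/column names loosely follow the SQL above):
<?php
// Sketch only: $s stands in for the question's #s placeholder.
$dbh = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$s = 42;

try {
    $dbh->beginTransaction();

    // Read inside the transaction...
    $rows = $dbh->query("SELECT * FROM tableName")->fetchAll(PDO::FETCH_ASSOC);

    // ...then update based on what was read; both are committed as one unit.
    $upd = $dbh->prepare("UPDATE tableName SET something = 'other something'
                          WHERE thirdsomething = ?");
    $upd->execute(array($s));

    $dbh->commit();
} catch (PDOException $e) {
    $dbh->rollBack();
    throw $e;
}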
I am trying to write some code that updates a MySQL table and then selects from that same table in the same page. However, I find that when I do the update query and then the select query, the select does not recognize the changes. If, however, I refresh the page, it recognizes the changes.
I first have an insert statement, something like this:
$query = "INSERT INTO matches (uid, win) VALUES ($uid, $win)";
mysql_query($query) or die(mysql_error() . ' in ' . $query);
Then, just after this, I have a select statement like
$query = "SELECT * FROM matches where uid = $uid";
$resultmain = mysql_query($query) or die(mysql_error() . ' in ' . $query);
Of course I simplified the queries, but that is the general idea. What happens is: the select statement will not recognize the update that was run immediately before it. However, if I reload the page and the select statement runs again after some time, it does recognize the change.
I googled for this and was very surprised not to come across anything yet. Is there any good way to force PHP to wait until the MySQL update query has finished before selecting? If not, I might just have to use JavaScript to automatically reload the page, but that sounds like a messy solution.
Any help would be greatly appreciated, this has been driving me crazy...
--Anthony
That should not happen. Maybe it’s a problem in your code, which you did not post?
Things that come to mind, which could be the problem:
The two queries are run on different connections to the MySQL database, and auto-commit is not enabled.
Thus, the first query would send the update but not commit it, the second query would read old data, and only after the page finishes (or later on) would the commit occur.
I'm not quite sure whether non-auto-committed changes are committed or rolled back when a PHP script ends, but it should be a rollback. Thus a later commit would also be needed in your code for this possible scenario to apply.
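A small sketch of what the fix for that scenario could look like, using the same legacy mysql_* API as the question (connection details are placeholders, and $uid/$win are assumed to hold safe values):
<?php
// Sketch only: run both queries on the same connection and commit explicitly,
// which is harmless if auto-commit is already on.
$link = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('mydb', $link);

$query = "INSERT INTO matches (uid, win) VALUES ($uid, $win)";
mysql_query($query, $link) or die(mysql_error($link) . ' in ' . $query);

mysql_query('COMMIT', $link); // make the change visible if auto-commit is off

$query = "SELECT * FROM matches WHERE uid = $uid";
$resultmain = mysql_query($query, $link) or die(mysql_error($link) . ' in ' . $query);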
Can the PHP function mysql_insert_id() return no result after processing an INSERT query in a MySQL db?
Just to clarify: there was a script run by cron on the production site. It contained a loop generating invoices for users. Each iteration consists of an INSERT query and a mysql_insert_id() call right after the query, to fetch the generated invoice number. For a set of iterations, the last inserted number was not fetched.
Can this be caused by high db server load, or by some other reason not linked to a problem on the PHP code side?
Any help would be appreciated!
Offhand, I can think of a few cases where MySQL wouldn't return the ID:
The table you're inserting into doesn't have an AUTO_INCREMENTed primary key.
You're inserting multiple rows at once.
You're calling mysql_insert_id() from a different connection than the one the INSERT query was executed on.
The INSERT query didn't succeed (for instance, it encountered a deadlock). Make sure you are checking the return value from mysql_query(), then use mysql_errno() and mysql_error().
MySQL docs have a full list of conditions and details on how this function works.
Of course, it's also possible there is a bug in MySQL, which would depend on which version of MySQL you are using.
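A short sketch of the error checking mentioned above, with the same legacy mysql_* API the script presumably uses (the table, columns, and connection details are invented):
<?php
// Sketch only: check the INSERT before trusting mysql_insert_id().
$link = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('billing', $link);

$ok = mysql_query("INSERT INTO invoices (user_id, amount) VALUES (1, 99.90)", $link);

if (!$ok) {
    // The INSERT failed (deadlock, constraint violation, ...), so there is
    // no id to fetch; log the reason instead of silently continuing.
    error_log('Invoice INSERT failed: ' . mysql_errno($link) . ' ' . mysql_error($link));
} else {
    // Must be called on the SAME connection the INSERT ran on.
    $invoiceId = mysql_insert_id($link);
}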
If you're running the commands through a shell script and run them both separately, as in:
mysql -e "insert into table ( field1 ) values ( 'val1' );" "database"
lastId=`mysql -e "select last_insert_id();" "database"`
then that won't work, as the second call makes a new connection to the server. You need to do something like the following, so that it is all done within a single database call / connection:
lastId=`mysql -e "
insert into table ( field1 ) values ( 'val1' );
select last_insert_id();
" "database"`
You'll need to look up the extra parameters required for the MySQL command to remove formatting and header row - I'm afraid I can't remember them off the top of my head!