I'm working with mysql and php and I'm attempting to test the error handling of a call, but I can't figure out why this doesn't give an error. I'm executing the following line:
if (! mysql_query("UPDATE Accounts SET disabled='0' WHERE id='15'")) { ... }
Here's the scenario... There is a table called 'Accounts', but there isn't a record with an id of 15 (which is the primary key). I have tried this from the command line and via a web browser, but this line executes without problems. I checked the php manual for this and here's a quote from their pages:
For other type of SQL statements, INSERT, UPDATE, DELETE, DROP, etc, mysql_query() returns TRUE on success or FALSE on error.
Why is this not generating an error? Any help would be greatly appreciated!
The query is not failing.
Just because the ID doesn't exist doesn't mean that the query fails. MySQL successfully looked for the record, found none, and didn't apply any action. This is quite different from what the !mysql_query check suggests: that would mean MySQL was unable to run your command at all.
Here your command ran successfully; it just didn't affect your table, because the row doesn't exist.
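For illustration, a minimal sketch (using the same old mysql_* API as the question, and assuming the connection already exists) of how to tell a real failure apart from a query that simply matched no rows:
$result = mysql_query("UPDATE Accounts SET disabled='0' WHERE id='15'");
if ($result === false) {
    // the query itself failed (syntax error, unknown column, ...)
    echo 'Query error: ' . mysql_error();
} elseif (mysql_affected_rows() === 0) {
    // the query ran fine, but no row matched the WHERE clause (or nothing changed)
    echo 'No record with id 15 was updated.';
} else {
    echo mysql_affected_rows() . ' record(s) updated.';
}
Note that mysql_affected_rows() also returns 0 when the matched row already held the new values, so it reports "nothing changed" rather than strictly "nothing matched".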
Your query simply updates no records.
This is not an error; it happens whenever the conditions in the WHERE clause are not met.
There are many ways to cause your query to fail. One of them would be to use a non-existing field:
UPDATE Accounts SET blablabla='0' WHERE id='15'
There's a difference between an empty result set and an error. A query that results in no changes is NOT an error; it's simply a valid result that happens to be empty. For example, this can never return anything:
mysql> select now() from dual where 1=0;
Empty set (0.01 sec)
but is still not an error. It's just an empty set. By comparison, this will always return one row:
mysql> select now() from dual where 1=1;
+---------------------+
| now()               |
+---------------------+
| 2013-05-03 09:51:19 |
+---------------------+
1 row in set (0.00 sec)
And then there are errors. This will not return an empty set, because the query itself failed outright:
mysql> select now() from dual where abc=def;
ERROR 1054 (42S22): Unknown column 'abc' in 'where clause'
I've read the online PHP manual, but I'm still not sure how these two functions work: mysqli::commit and mysqli::rollback.
The first thing I have to do is to:
$mysqli->autocommit(FALSE);
Then I make some queries:
$mysqli->query("...");
$mysqli->query("...");
$mysqli->query("...");
Then I commit the transaction consisting of these 3 queries by doing:
$mysqli->commit();
BUT in the unfortunate case that one of these queries does not work, do all 3 queries get cancelled, or do I have to call a rollback myself? I want all 3 queries to be atomic and treated as a single unit: if one query fails, then all 3 should fail and have no effect.
I'm asking because, in the comments on the manual page http://php.net/manual/en/mysqli.commit.php,
the user Lorenzo calls a rollback if one of the queries fails.
What's a rollback good for if the 3 queries are atomic? I don't understand.
EDIT: This is the code example I am doubtful about:
<?php
$all_query_ok=true; // our control variable
$mysqli->autocommit(false);
//we make 4 inserts, the last one generates an error
//if at least one query returns an error we change our control variable
$mysqli->query("INSERT INTO myCity (id) VALUES (100)") ? null : $all_query_ok=false;
$mysqli->query("INSERT INTO myCity (id) VALUES (200)") ? null : $all_query_ok=false;
$mysqli->query("INSERT INTO myCity (id) VALUES (300)") ? null : $all_query_ok=false;
$mysqli->query("INSERT INTO myCity (id) VALUES (100)") ? null : $all_query_ok=false; //duplicated PRIMARY KEY VALUE
//now let's test our control variable
$all_query_ok ? $mysqli->commit() : $mysqli->rollback();
$mysqli->close();
?>
I think this code is wrong because if any of the queries failed and $all_query_ok==false then you don't need to do a rollback because the transaction was not processed. Am I right?
No, the transaction does not keep track of whether a single SQL statement fails.
If a single SQL statement fails, only that statement is rolled back (as described in eggyal's answer below), but the transaction is still open. If you call commit now, the successful statements are not rolled back and you have just inserted "corrupted" data into your database. You can reproduce this easily:
m> CREATE TABLE transtest (id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(100) NOT NULL DEFAULT '',
CONSTRAINT UNIQUE KEY `uq_transtest_name` (name)) ENGINE=InnoDB;
Query OK, 0 rows affected (0.07 sec)
m> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
m> INSERT INTO transtest (name) VALUE ('foo');
Query OK, 1 row affected (0.00 sec)
m> INSERT INTO transtest (name) VALUE ('foo');
ERROR 1062 (23000): Duplicate entry 'foo' for key 'uq_transtest_name'
m> INSERT INTO transtest (name) VALUE ('bar');
Query OK, 1 row affected (0.00 sec)
m> COMMIT;
Query OK, 0 rows affected (0.02 sec)
m> SELECT * FROM transtest;
+----+------+
| id | name |
+----+------+
|  3 | bar  |
|  1 | foo  |
+----+------+
2 rows in set (0.00 sec)
You can see that the insertions of 'foo' and 'bar' were successful even though the second SQL statement failed; you can even see that the AUTO_INCREMENT value was increased by the faulty statement.
So you have to check the result of each query() call, and if one fails, call rollback to undo the otherwise successful queries. Lorenzo's code in the PHP manual therefore makes sense.
The only error which forces MySQL to roll back the transaction is a "transaction deadlock" (and this is specific to InnoDB, other storage engines may handle those errors differently).
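For completeness, here is a minimal sketch of the same check-and-rollback pattern using mysqli exception reporting instead of a control variable (the myCity table comes from the example above; the connection details are placeholders, and begin_transaction() needs PHP 5.5+):
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$mysqli = new mysqli('localhost', 'user', 'pass', 'test'); // placeholder credentials
try {
    $mysqli->begin_transaction();
    $mysqli->query("INSERT INTO myCity (id) VALUES (100)");
    $mysqli->query("INSERT INTO myCity (id) VALUES (200)");
    $mysqli->query("INSERT INTO myCity (id) VALUES (100)"); // duplicate key -> mysqli_sql_exception
    $mysqli->commit();   // only reached if every statement succeeded
} catch (mysqli_sql_exception $e) {
    $mysqli->rollback(); // undo the statements that did succeed
    echo 'Transaction failed: ' . $e->getMessage();
}
$mysqli->close();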
As documented under InnoDB Error Handling:
Error handling in InnoDB is not always the same as specified in the SQL standard. According to the standard, any error during an SQL statement should cause rollback of that statement. InnoDB sometimes rolls back only part of the statement, or the whole transaction. The following items describe how InnoDB performs error handling:
If you run out of file space in a tablespace, a MySQL Table is full error occurs and InnoDB rolls back the SQL statement.
A transaction deadlock causes InnoDB to roll back the entire transaction. Retry the whole transaction when this happens.
A lock wait timeout causes InnoDB to roll back only the single statement that was waiting for the lock and encountered the timeout. (To have the entire transaction roll back, start the server with the --innodb_rollback_on_timeout option.) Retry the statement if using the current behavior, or the entire transaction if using --innodb_rollback_on_timeout.
Both deadlocks and lock wait timeouts are normal on busy servers and it is necessary for applications to be aware that they may happen and handle them by retrying. You can make them less likely by doing as little work as possible between the first change to data during a transaction and the commit, so the locks are held for the shortest possible time and for the smallest possible number of rows. Sometimes splitting work between different transactions may be practical and helpful.
When a transaction rollback occurs due to a deadlock or lock wait timeout, it cancels the effect of the statements within the transaction. But if the start-transaction statement was START TRANSACTION or BEGIN statement, rollback does not cancel that statement. Further SQL statements become part of the transaction until the occurrence of COMMIT, ROLLBACK, or some SQL statement that causes an implicit commit.
A duplicate-key error rolls back the SQL statement, if you have not specified the IGNORE option in your statement.
A row too long error rolls back the SQL statement.
Other errors are mostly detected by the MySQL layer of code (above the InnoDB storage engine level), and they roll back the corresponding SQL statement. Locks are not released in a rollback of a single SQL statement.
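The documentation above says deadlocks and lock wait timeouts are normal and should be handled by retrying. As an illustration only, a minimal retry-loop sketch, assuming an existing $mysqli connection with exception reporting enabled and a hypothetical runMyTransaction() function that issues the actual statements:
$maxAttempts = 3;
for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
    try {
        $mysqli->begin_transaction();
        runMyTransaction($mysqli); // hypothetical: your INSERT/UPDATE statements
        $mysqli->commit();
        break;                     // success, stop retrying
    } catch (mysqli_sql_exception $e) {
        // InnoDB has already rolled back the whole transaction on a deadlock,
        // but calling rollback() here is harmless and also covers the timeout case.
        $mysqli->rollback();
        // 1213 = deadlock, 1205 = lock wait timeout: safe to retry
        if (!in_array($e->getCode(), array(1213, 1205)) || $attempt === $maxAttempts) {
            throw $e;              // other errors (or too many attempts): give up
        }
    }
}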
To begin with, I apologize if this has been asked already; I could not find anything, at least.
Anyway, I'm going to run a cron task every 5 minutes. The script loads 79 external pages, where each page contains ~200 values I need to check in the database (in total, say 15000 values). All of the values will be checked for existence in the database, and if they exist (say 10% do) I will use an UPDATE query.
Both queries are very basic, no INNER JOINs etc. It's the first time I'm using cron, and I'm already assuming I will get the response "don't use cron for that", but my host doesn't allow daemons.
The query goes as:
SELECT `id`, `date` FROM `users` WHERE `name` = xxx
And if there was a match, it will use an UPDATE query (sometimes with additional values).
The question is, will this overload my mysql server? If yes, what are the suggested methods? I'm using PHP if that matters.
If you are just running the same query over and over with different values, there are a few options. Off the top of my head, you can use WHERE name IN ('xxx','yyy','zzz','aaa','bbb', ...etc). Other than that, you could do a file import into another table and run a single query to do the insert/update.
Update:
//This is what I'm assuming your data looks like after loading/parsing all the pages.
//if not, it should be similar.
$data = array(
    'server 1' => array('aaa','bbb','ccc'),
    'server 2' => array('xxx','yyy','zzz'),
    'server 3' => array('111','222','333'));
//where the key is the name of the server and the value is an array of names.
//I suggest using a transaction for this.
mysql_query("SET AUTOCOMMIT=0");
mysql_query("START TRANSACTION");
//update online to 0 for all. This is why you need transactions. You will set online=1
//for all online below.
mysql_query("UPDATE `table` SET `online`=0");
foreach($data as $serverName=>$names){
    $sql = "UPDATE `table` SET `online`=1,`server`='{$serverName}' WHERE `name` IN ('".implode("','", $names)."')";
    $result = mysql_query($sql);
    //if the query failed, rollback all changes
    if(!$result){
        mysql_query("ROLLBACK");
        die("Mysql error with query: $sql");
    }
}
mysql_query("COMMIT");
About MySQL and a lot of queries
If you have enough rights on this server, you may try to increase the query cache.
You can do it in SQL or in the MySQL config file.
http://dev.mysql.com/doc/refman/5.1/en/query-cache-configuration.html
mysql> SET GLOBAL query_cache_size = 1000000;
Query OK, 0 rows affected (0.04 sec)
mysql> SHOW VARIABLES LIKE 'query_cache_size';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| query_cache_size | 999424 |
+------------------+--------+
1 row in set (0.00 sec)
Task scheduler in MySQL
If your updates can work solely on data that is stored in the database (no PHP variables involved), consider using an EVENT in MySQL instead of running SQL scripts from PHP.
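For example, a minimal sketch of such an event created from PHP. The users/online/last_seen names are placeholders; SET GLOBAL event_scheduler requires the SUPER privilege, and CREATE EVENT requires the EVENT privilege:
// enable the scheduler once (needs SUPER; often already set in my.cnf)
mysql_query("SET GLOBAL event_scheduler = ON");
// run the update inside MySQL every 5 minutes, no cron/PHP involved
mysql_query("
    CREATE EVENT IF NOT EXISTS mark_users_offline
    ON SCHEDULE EVERY 5 MINUTE
    DO
        UPDATE users SET online = 0
        WHERE last_seen < NOW() - INTERVAL 5 MINUTE
");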
I have made a database wrapper with extra functionality around the PDO system (yes, I know, a wrapper around a wrapper, but it is just PDO with some extra functionality). But I have noticed a problem.
The following doesn't work like it should:
<?php
var_dump($db->beginTransaction());
$db->query('
    INSERT INTO test
    (data) VALUES (?)
    ;',
    array(
        'Foo'
    )
);
print_r($db->query('
    SELECT *
    FROM test
    ;'
)->fetchAll());
var_dump($db->rollBack());
print_r($db->query('
    SELECT *
    FROM test
    ;'
)->fetchAll());
?>
The var_dump calls show that the beginTransaction and rollBack functions return true, so there are no errors.
I expected the first print_r call to show an array of N items and the second call to show N-1 items. But that isn't true; they both show the same number of items.
My $db->query(< sql >, < values >) call does nothing other than $pdo->prepare(< sql >)->execute(< values >) (with extra error handling, of course).
So I think either the transaction system of MySQL doesn't work, or PDO's implementation doesn't work, or I'm overlooking something.
Does anybody know what the problem is?
Check whether your table type is InnoDB. In short, you must check whether your database supports transactions.
Two possible problems:
The table is MyISAM, which doesn't support transactions. Use InnoDB.
Check to make sure auto-commit is OFF.
http://www.php.net/manual/en/pdo.transactions.php
I'm entering this as an answer, as a comment is too small to contain the following:
PDO is just a wrapper around the various lower-level database interface libraries. If the low-level library doesn't complain, neither will PDO. Since MySQL supports transactions, no transaction operations will return a syntax error or the like. You can use MyISAM tables within transactions, but any operations done on them will be performed as if auto-commit were still active:
mysql> create table myisamtable (x int) engine=myisam;
Query OK, 0 rows affected (0.00 sec)
mysql> create table innodbtable (x int) engine=innodb;
Query OK, 0 rows affected (0.00 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> insert into myisamtable (x) values (1);
Query OK, 1 row affected (0.00 sec)
mysql> insert into innodbtable (x) values (2);
Query OK, 1 row affected (0.00 sec)
mysql> rollback;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> select * from myisamtable;
+------+
| x    |
+------+
|    1 |
+------+
1 row in set (0.00 sec)
mysql> select * from innodbtable;
Empty set (0.00 sec)
mysql>
As you can see, even though a transaction was active, and some actions were performed on the MyISAM table, no errors were thrown.
MySQL doesn't support transactions on the MyISAM table type, which is unfortunately the default table type.
If you need transactions, you should switch to the InnoDB table type.
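If you are unsure which engine your table uses, here is a minimal sketch with plain PDO (not the wrapper from the question; the test table name is taken from the question) to check it and, if necessary, convert it:
// check the storage engine of the `test` table
$stmt = $pdo->prepare(
    "SELECT ENGINE FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = ?"
);
$stmt->execute(array('test'));
$engine = $stmt->fetchColumn();
if (strcasecmp($engine, 'InnoDB') !== 0) {
    // ALTER TABLE rewrites the whole table; back it up first on a production system
    $pdo->exec("ALTER TABLE test ENGINE=InnoDB");
}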
Another reason this may happen is that certain types of SQL statements cause an immediate, implicit commit. I had a large script that ran in a transaction but was being committed immediately, ignoring the transaction. I eventually found out that this was because any ALTER TABLE statement immediately causes a commit.
Types of statements that cause auto commits are:
Anything that modifies a table or the database, such as ALTER TABLE, CREATE TABLE, etc.
Anything that modifies table permissions, such as ALTER USER or SET PASSWORD
Anything that locks tables or starts a new transaction
Data loading statements
Administrative statements, such as ANALYZE TABLE, FLUSH, or CACHE INDEX
Replication control statements, such as anything to do with a slave or master
More info and a complete list can be found here: https://dev.mysql.com/doc/refman/8.0/en/implicit-commit.html
If you're having this problem only with a specific script and you're sure you're using InnoDB, you might want to look to see if any SQL statements in your script match these.
I have a table with various VARCHAR fields in MySQL. I would like to insert some user data from a form via PHP. Obviously, if I know the field lengths in PHP, I can limit the data length there with substr(). But that sort of violates DRY (the field length is stored in MySQL and as a constant in my PHP script). Is there a way for me to configure an INSERT so it automatically chops off excessively long strings, rather than failing?
edit: it's failing (or at least causing an exception) in PHP/PDO, when I have excessively long strings. Not sure what I have to do in PHP/PDO so it Does The Right Thing.
edit 2: Ugh. This is the wrong approach; even if I get it to work ok on INSERT, if I want to check for a duplicate string, it won't match properly.
Actually, MySQL truncates strings to the column width by default. It generates a warning, but allows the insert.
mysql> create table foo (str varchar(10));
mysql> insert into foo values ('abcdefghijklmnopqrstuvwxyz');
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+------------------------------------------+
| Level   | Code | Message                                  |
+---------+------+------------------------------------------+
| Warning | 1265 | Data truncated for column 'str' at row 1 |
+---------+------+------------------------------------------+
mysql> select * from foo;
+------------+
| str        |
+------------+
| abcdefghij |
+------------+
If you set the strict SQL mode, it turns the warning into an error and rejects the insert.
Re your comment: SQL mode is a MySQL Server configuration. It probably isn't PDO that's causing it, but on the other hand it's possible, because any client can set SQL mode for its session.
You can retrieve the current global or session sql_mode value with the following statements:
SELECT @@GLOBAL.sql_mode;
SELECT @@SESSION.sql_mode;
The default should be an empty string (no modes set). You can set SQL mode in your my.cnf file, with the --sql-mode option for mysqld, or using a SET statement.
Update: MySQL 5.7 and later sets strict mode by default. See https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html
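To see this behaviour from PHP, a minimal PDO sketch (the foo/str names come from the example above; $pdo is assumed to be an existing connection with ERRMODE_EXCEPTION enabled): in non-strict mode the insert succeeds and leaves warning 1265 behind, while in strict mode the same statement throws.
try {
    $stmt = $pdo->prepare("INSERT INTO foo (str) VALUES (?)");
    $stmt->execute(array('abcdefghijklmnopqrstuvwxyz'));
    // non-strict mode: the insert succeeded, but check for truncation warnings
    foreach ($pdo->query("SHOW WARNINGS")->fetchAll(PDO::FETCH_ASSOC) as $warning) {
        if ((int) $warning['Code'] === 1265) {
            echo "Truncated: {$warning['Message']}\n";
        }
    }
} catch (PDOException $e) {
    // strict mode: the statement is rejected instead (SQLSTATE 22001, data too long)
    echo 'Insert rejected: ' . $e->getMessage();
}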
You can use column metadata to check string length. (PHP Manual on PDOStatement->getColumnMeta)
Get metadata for the whole table this way:
$query = $conn->query("SELECT * FROM places");
$numcols = $query->columnCount();
$tablemeta = array();
for ($i = 0; $i < $numcols; $i++) {
    $colmeta = $query->getColumnMeta($i);
    $tablemeta[$colmeta['name']] = $colmeta;
}
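As a variation on the same idea, you could read the declared lengths once from information_schema and truncate in PHP before inserting. A sketch only: the places table comes from the example above, the name column is hypothetical, and mb_substr needs the mbstring extension.
$stmt = $conn->prepare(
    "SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
     FROM information_schema.COLUMNS
     WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = ?"
);
$stmt->execute(array('places'));
$maxlen = $stmt->fetchAll(PDO::FETCH_KEY_PAIR); // column name => max length (NULL for non-string columns)
// truncate user input to the column's declared length (CHARACTER_MAXIMUM_LENGTH is in characters)
$name = $_POST['name'];
if (!empty($maxlen['name'])) {
    $name = mb_substr($name, 0, (int) $maxlen['name']);
}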
You could query information_schema to get the length of the column and use that data to truncate before the insert, but that's more overhead than necessary. Just turn off strict SQL mode for this statement:
SET @prev_mode = @@sql_mode;
SET sql_mode = '';
INSERT blah....;
SET sql_mode = @prev_mode;
I have the following code:
http://www.nomorepasting.com/getpaste.php?pasteid=22987
If the PHPSESSID is not already in the table, the REPLACE INTO query works just fine. However, if the PHPSESSID exists, the call to execute() succeeds but sqlstate is set to 'HY000', which isn't very helpful; $_mysqli_session_write->errno and $_mysqli_session_write->error are both empty, and the data column doesn't update.
I am fairly certain that the problem is in my script somewhere, as manually executing the REPLACE INTO from mysql works fine regardless of whether or not the PHPSESSID is in the table.
Why are you trying to do your prepare in the session open function? I don't believe the write function is called more than once during a session, so preparing it in open doesn't gain you much; you might as well do it in your session write function.
Anyway, I believe you need some whitespace after the table name and before the column list. Without the whitespace, I believe MySQL would act as if you were trying to call a non-existent function named session().
REPLACE INTO session (phpsessid, data) VALUES(?, ?)
MySQL sees no difference between
'COUNT ()' and 'COUNT()'
Interesting, when I run the below in the mysql CLI I seem to get a different result.
mysql> select count (*);
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '*)' at line 1
mysql> select count(*);
+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)
REPLACE INTO executes 2 queries: first a DELETE then an INSERT INTO.
(So a new auto_increment is "By Design")
I'm also using REPLACE INTO for my database sessions, but I'm using MySQLi->query() in combination with MySQLi->real_escape_string() instead of MySQLi->prepare().
So as it turns out there are other issues with using REPLACE that I was not aware of:
Bug #10795: REPLACE reallocates new AUTO_INCREMENT (Which according to the comments is not actually a bug but the 'expected' behaviour)
As a result, my id field keeps getting incremented, so the better solution is to use something along the lines of:
INSERT INTO session(phpsessid, data) VALUES('{$id}', '{$data}')
ON DUPLICATE KEY UPDATE data='{$data}'
This also prevents any foreign key constraints from breaking and potentially causing data integrity problems.
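For reference, a minimal sketch of the same statement as a mysqli prepared statement, reusing the $_mysqli_session_write connection and the session(phpsessid, data) table from this thread, so $id and $data never need manual escaping:
$stmt = $_mysqli_session_write->prepare(
    "INSERT INTO session (phpsessid, data) VALUES (?, ?)
     ON DUPLICATE KEY UPDATE data = VALUES(data)"
);
$stmt->bind_param('ss', $id, $data);
$ok = $stmt->execute(); // true on success, false on failure
$stmt->close();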