Lots of queries - slow? - PHP

To begin with, I apologize if this has been asked already; I could not find anything, at least.
Anyway, I'm going to run a cron task every 5 minutes. The script loads 79 external pages, where each page contains ~200 values I need to check in the database (in total, say 15000 values). 100% of the values will be checked for existence in the database, and if they exist (say 10% do) I will run an UPDATE query.
Both queries are very basic, no JOINs etc. It's the first time I've used cron, and I'm already assuming I will get the response "don't use cron for that", but my host doesn't allow daemons.
The query goes as:
SELECT `id`, `date` FROM `users` WHERE `name` = 'xxx'
And if there was a match, it will use an UPDATE query (sometimes with additional values).
The question is: will this overload my MySQL server? If so, what are the suggested methods? I'm using PHP, if that matters.

If you are just running the same query over and over, there are a few options. Off the top of my head, you can use WHERE name IN ('xxx','yyy','zzz','aaa','bbb'...etc). Other than that, you could do a file import into a staging table and then run a single query to do the insert/update.
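For the staging-table option, a minimal sketch under stated assumptions: the staging table names_staging, the CSV path, and the `online`/`server` columns (used again in the update code below) are illustrative, not from the question.
CREATE TABLE names_staging (
    `name` VARCHAR(64) NOT NULL,
    `server` VARCHAR(32) NOT NULL
);

-- Bulk-load the ~15000 parsed values in one statement (file path is hypothetical).
LOAD DATA LOCAL INFILE '/tmp/scraped_names.csv'
INTO TABLE names_staging
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
(`name`, `server`);

-- One joined UPDATE replaces thousands of single-row SELECT/UPDATE pairs.
UPDATE `users` u
JOIN names_staging s ON s.`name` = u.`name`
SET u.`online` = 1, u.`server` = s.`server`;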
Update:
//This is what I'm assuming your data looks like after loading/parsing all the pages.
//If not, it should be similar.
$data = array(
    'server 1' => array('aaa', 'bbb', 'ccc'),
    'server 2' => array('xxx', 'yyy', 'zzz'),
    'server 3' => array('111', '222', '333')
);
//where the key is the name of the server and the value is an array of names.

//I suggest using a transaction for this.
mysql_query("SET AUTOCOMMIT=0");
mysql_query("START TRANSACTION");
//Update online to 0 for all. This is why you need transactions. You will set online=1
//for everyone online below.
mysql_query("UPDATE `table` SET `online`=0");
foreach ($data as $serverName => $names) {
    $sql = "UPDATE `table` SET `online`=1, `server`='{$serverName}' WHERE `name` IN ('" . implode("','", $names) . "')";
    $result = mysql_query($sql);
    //If the query failed, roll back all changes.
    if (!$result) {
        mysql_query("ROLLBACK");
        die("Mysql error with query: $sql");
    }
}
mysql_query("COMMIT");

About MySQL and lots of queries
If you have sufficient privileges on this server, you may try increasing the query cache.
You can do it in SQL or in the MySQL config file.
http://dev.mysql.com/doc/refman/5.1/en/query-cache-configuration.html
mysql> SET GLOBAL query_cache_size = 1000000;
Query OK, 0 rows affected (0.04 sec)
mysql> SHOW VARIABLES LIKE 'query_cache_size';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| query_cache_size | 999424 |
+------------------+--------+
1 row in set (0.00 sec)
Task scheduler in MySQL
If your updates operate only on data stored in the database (no PHP variables involved), consider using an EVENT in MySQL instead of running SQL scripts from PHP.
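A minimal sketch of the EVENT approach; the event name, the `online` column, and the staleness rule are illustrative, and it assumes the EVENT privilege plus an enabled scheduler (SET GLOBAL event_scheduler = ON):
CREATE EVENT mark_stale_users_offline
ON SCHEDULE EVERY 5 MINUTE
DO
    UPDATE `users` SET `online` = 0
    WHERE `date` < NOW() - INTERVAL 5 MINUTE;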

Related

How can I use an SQL query's result for the WHERE clause of another query?

Okay, basically I have a table that contains statements like:
incident.client_category = 1
incident.client_category = 8
incident.severity = 1
etc.
I would like to use the contents from this table to generate other tables that fulfill the conditions expressed in this one. So I would need to make it something like
SELECT * FROM incident WHERE incident.client_category = 1
But the last part of the WHERE has to come from the first table. Right now what I'm trying to do is something like
SELECT * FROM incident WHERE (SELECT condition FROM condition WHERE id = 1)
id = 1 stands for the condition's id. Right now I only want to work with ONE condition, for testing purposes. Is there a way to achieve this? Because if there isn't, I might have to just parse the first query's results through PHP into my incident query.
Engineering Suggestion - Normalize the DB
Storing a WHERE clause, like id = 10, in a field in a MySQL table is not a good idea. I recommend taking a look at MySQL normalization. You shouldn't store id = 10 as a varchar; rather, you should store something like an OtherTable_id column. This lets you use indices, optimize your DB, and get a ton of other features that you are deprived of by using fields as WHERE clauses.
But sometimes we need a solution asap, and we can't re-engineer everything! So let's take a look at making one...
Solution
Here is a solution that will work even on very old, v5.0 versions of MySQL. Set the variable using SET, prepare a statement using PREPARE, and execute it using EXECUTE. Let's set our query into a variable...
SET @query = CONCAT(
    "SELECT * FROM incident WHERE ",
    (SELECT `condition` FROM `condition` WHERE id = 1)
);
I know for a fact that this should work, because the following definitely works for me on my system (and requires no new tables or schema changes)...
SET #query = CONCAT("SELECT id FROM myTable WHERE id = ", (SELECT MAX(id) FROM myTable));
If I SELECT @query;, I get: SELECT id FROM myTable WHERE id = 1737901. Now, all we need to do is run this query!
PREPARE stmt1 FROM @query;
EXECUTE stmt1;
DEALLOCATE PREPARE stmt1;
Here we use PREPARE to build the query, EXECUTE to run it, and DEALLOCATE PREPARE to be ready for the next prepared statement. On my own example above, which anyone can test without DB schema changes, I got good, positive results; EXECUTE stmt1; gives me...
+---------+
| id      |
+---------+
| 1737901 |
+---------+
Here is one way to achieve your goal using what is called dynamic SQL. Be aware that this works only if the select from the condition table returns a single record. (Note: the syntax below is SQL Server's, not MySQL's.)
DECLARE @SQLSTRING nvarchar(4000),
        @condition nvarchar(500) -- change the size to whatever the condition column size is

SELECT @condition = condition
FROM condition
WHERE id = 1

SET @SQLSTRING = N'SELECT * FROM incident WHERE ' + @condition

EXEC sp_executesql @SQLSTRING
Since you have also tagged the question with PHP, I would suggest using that. Simply select the string from the condition table and use the result to build up a SQL query (as a string in PHP) including it. Then run the second query. Pseudo-code (skipping over what library/framework you're using to call the db):
$query = "select condition from condition where id = :id";
$condition = callDbAndReturnString($query, $id);
$query = "select * from incident where " . $condition;
$result = callDb($query);
However, be very careful. Where and how are you populating the possible values in the condition table? And how is your user choosing which one to use? You run the risk of opening yourself up to a second-order SQL injection attack if you allow users to generate values and store them there. Since you are using the value from the condition table as a string, you cannot parametrise the query using it as you (hopefully!) normally would. Depending on the queries you run and the possible values stored as conditions, there might be risk even if you just let users pick from a pre-built list. I would seriously ask myself whether this (saving parts of SQL queries as strings in another table) is the best approach. But if you decide it is, this should work.
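A minimal PDO sketch of that two-step approach; it assumes the incident/condition tables from the question, an existing $pdo connection, and (crucially) that the stored conditions are trusted:
//Step 1: fetch the stored condition; the id can be parametrised as normal.
$stmt = $pdo->prepare("SELECT `condition` FROM `condition` WHERE id = :id");
$stmt->execute(array(':id' => $id));
$condition = $stmt->fetchColumn();

//Step 2: the condition itself cannot be a placeholder, so it is concatenated
//into the SQL; this is safe ONLY if the condition table's contents are trusted.
$incidents = $pdo->query("SELECT * FROM incident WHERE " . $condition)
                 ->fetchAll(PDO::FETCH_ASSOC);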

PDO DELETE unexpectedly slow when working with millions of rows

I'm working with a MyISAM table that has about 12 million rows. A method is used to delete all records older than a specified date. The table is indexed on the date field. When run in code, the log shows that this takes about 13 seconds when there are no records to delete and about 25 seconds when there is one day's worth of records. When the same query is run in the mysql client (taking the query from SHOW PROCESSLIST while the code is running), it takes no time at all for no records, and about 16 seconds for a day's records.
The real-life problem is that this takes a long time when there are records to delete when run once a day, so running it more often seems logical. But I'd like it to exit as quickly as possible when there is nothing to do.
Method extract:
try {
    $smt = DB::getInstance()->getDbh()->prepare("DELETE FROM " . static::$table . " WHERE dateSent < :date");
    $smt->execute(array(':date' => $date));
    return true;
} catch (\PDOException $e) {
    // Some logging here removed to ensure a clean test
}
Log results when 0 rows for deletion:
[debug] ScriptController::actionDeleteHistory() success in 12.82 seconds
mysql client when 0 rows for deletion:
mysql> DELETE FROM user_history WHERE dateSent < '2013-05-03 13:41:55';
Query OK, 0 rows affected (0.00 sec)
Log results when 1 days results for deletion:
[debug] ScriptController::actionDeleteHistory() success in 25.48 seconds
mysql client when 1 days results for deletion:
mysql> DELETE FROM user_history WHERE dateSent < '2013-05-05 13:41:55';
Query OK, 672260 rows affected (15.70 sec)
Is there a reason why PDO is slower?
Cheers.
Responses to comments:
It's the same query on both, so the index is either being picked up or it's not. And it is.
EXPLAIN SELECT * FROM user_history WHERE dateSent < '2013-05-05 13:41:55';
+----+-------------+--------------+-------+---------------+-----------+---------+------+------+-------------+
| id | select_type | table        | type  | possible_keys | key       | key_len | ref  | rows | Extra       |
+----+-------------+--------------+-------+---------------+-----------+---------+------+------+-------------+
|  1 | SIMPLE      | user_history | range | date_sent     | date_sent | 4       | NULL |    4 | Using where |
+----+-------------+--------------+-------+---------------+-----------+---------+------+------+-------------+
MySQL and Apache are running on the same server for the purposes of this test. If you're getting at an issue of load, then mysql does hit 100% for the 13 seconds of the in-code query. With the mysql client query, it doesn't get a chance to register in top before the query is complete. I can't see how this is not something that PHP/PDO is adding to the equation, but I'm open to all ideas.
:date is a PDO placeholder, and the field name is dateSent, so there is no conflict with MySQL keywords. Still, using :dateSent instead causes the same delay.
I also already tried without using placeholders but neglected to mention this, so good call, thanks! Along the lines of this. Still the same delay with PHP/PDO:
DB::getInstance()->getDbh()->query("DELETE FROM user_history WHERE dateSent < '2013-05-03 13:41:55'")
And using placeholders in mysql client still shows no delay:
PREPARE test from 'DELETE FROM user_history WHERE dateSent < ?';
SET #datesent='2013-05-05 13:41:55';
EXECUTE test USING #datesent;
Query OK, 0 rows affected (0.00 sec)
It's a MyISAM table, so no transactions are involved on this one.
The value of $date differs to test for no deletions or one day's deletions, as shown in the queries run in the mysql client, which are taken from SHOW PROCESSLIST while the code is running. In this case it is not passed to the method and is derived from:
if (!isset($date)) {
    $date = date("Y-m-d H:i:s", strtotime(sprintf("-%d days", self::DELETE_BEFORE)));
}
And at this point the table schema may get called into question, so:
CREATE TABLE IF NOT EXISTS `user_history` (
    `userId` int(11) NOT NULL,
    `asin` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
    `dateSent` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`userId`,`asin`),
    KEY `date_sent` (`dateSent`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
It's a decent-sized website with lots of DB calls throughout. I see no evidence in the way the site performs in any other respect to suggest it is down to dodgy routing. Especially as I can watch this query in SHOW PROCESSLIST slowly creeping its way up to 13 seconds when run via PHP/PDO, while it takes no time at all when run in the mysql client (particularly in the no-records-to-delete case, which takes 13 seconds in PHP/PDO only).
Currently it is only this particular DELETE query that is in question. But I don't have another bulk DELETE statement like this anywhere else in this project, or any other project of mine that I can think of. So the question is particular to PDO DELETE queries on big-ish tables.
"Isn't that your answer then?" - No. The question is why does this take significantly longer in PHP/PDO compared to mysql client. The SHOW PROCESSLIST only shows this query taking time in PHP/PDO (for no records to be deleted). It takes no time at all in mysql client. That's the point.
Tried the PDO query without the try-catch block, and there is still a delay.
And trying with the mysql_* functions shows the same timings as using the mysql client directly. So the finger is pointing quite strongly at PDO right now. It could be my code that interfaces with PDO, but as no other queries have an unexpected delay, this seems less likely:
Method:
$conn = mysql_connect(****);
mysql_select_db(****);
$query = "DELETE FROM " . static::$table . " WHERE dateSent < '$date'";
$result = mysql_query($query);
Logs for no records to be deleted:
Fri May 17 15:12:54 [verbose] UserHistory::deleteBefore() query: DELETE FROM user_history WHERE dateSent < '2013-05-03 15:12:54'
Fri May 17 15:12:54 [verbose] UserHistory::deleteBefore() result: 1
Fri May 17 15:12:54 [verbose] ScriptController::actionDeleteHistory() success in 0.01 seconds
Logs for one day's records to be deleted:
Fri May 17 15:14:24 [verbose] UserHistory::deleteBefore() query: DELETE FROM user_history WHERE dateSent < '2013-05-07 15:14:08'
Fri May 17 15:14:24 [verbose] UserHistory::deleteBefore() result: 1
Fri May 17 15:14:24 [debug] ScriptController::apiReturn(): {"message":true}
Fri May 17 15:14:24 [verbose] ScriptController::actionDeleteHistory() success in 15.55 seconds
I tried again, avoiding calls to the DB singleton by creating a PDO connection in the method and using that, and there is the delay once again. There are no other delays with other queries that all use the same DB singleton, so it was worth a try, but I didn't really expect any difference:
$connectString = sprintf('mysql:host=%s;dbname=%s', '****', '****');
$dbh = new \PDO($connectString, '****', '****');
$dbh->exec("SET CHARACTER SET utf8");
$dbh->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
$smt = $dbh->prepare("DELETE FROM " . static::$table . " WHERE dateSent < :date");
$smt->execute(array(':date' => $date));
Calling method with time logger:
$startTimer = microtime(true);
$deleted = $this->apiReturn(array('message' => UserHistory::deleteBefore()));
$timeEnd = microtime(true) - $startTimer;
Logger::write(LOG_VERBOSE, "ScriptController::actionDeleteHistory() success in " . number_format($timeEnd, 2) . " seconds");
Added PDO::ATTR_EMULATE_PREPARES to DB::connect(). There is still the delay when deleting no records at all. I've not used this before, but it looks like the right format:
$this->dbh->setAttribute(\PDO::ATTR_EMULATE_PREPARES, false);
Current DB::connect(), though if there were general issues with this, surely it would affect all queries?
public function connect($host, $user, $pass, $name)
{
    $connectString = sprintf('mysql:host=%s;dbname=%s', $host, $name);
    $this->dbh = new \PDO($connectString, $user, $pass);
    $this->dbh->exec("SET CHARACTER SET utf8");
    $this->dbh->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
}
The indexes are shown above in the schema. If it were directly related to rebuilding the indexes after the deletion of the records, then mysql would take the same time as PHP/PDO. It doesn't; that is the issue. It's not that this query is slow (it's expected to take some time); it's that PHP/PDO is noticeably slower than queries executed in the mysql client or queries using the mysql lib in PHP.
Tried MYSQL_ATTR_USE_BUFFERED_QUERY, but there is still a delay.
DB is a standard singleton pattern. DB::getInstance()->getDbh() returns the PDO connection object created in the DB::connect() method shown above, i.e. DB::dbh. I believe I've proved that the DB singleton is not an issue, as there is still a delay when the PDO connection is created in the same method that executes the query (6 edits above).
I've found what is causing it, but right this minute I don't know why it is happening.
I've created a test SQL that creates a table with 10 million random rows in the right format, and a PHP script that runs the offending query. And it takes no time at all in PHP/PDO or mysql client. Then I change the DB collation from the default latin1_swedish_ci to utf8_unicode_ci and it takes 10 seconds in PHP/PDO and no time at all in mysql client. Then I change it back to latin1_swedish_ci and it takes no time at all in PHP/PDO again.
Tada!
Now if I remove this from the DB connection, it works fine with either collation. So there is some sort of problem here:
$dbh->exec("SET CHARACTER SET utf8");
I shall research more, then follow up later.
So...
This post explains where the flaw was.
Is "SET CHARACTER SET utf8" necessary?
Essentially, it was the use of:
$this->dbh->exec("SET CHARACTER SET utf8");
which should have been this in DB::connect():
$this->dbh->exec("SET NAMES utf8");
My fault entirely.
It seems to have had dire effects because the MySQL server needed to convert the query to match the collation of the DB. The post above gives much better details than I can.
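As an aside, on PHP 5.3.6 and later the connection charset can be declared directly in the PDO DSN, which avoids the SET round trip altogether. A minimal sketch against the DB::connect() shown above:
$connectString = sprintf('mysql:host=%s;dbname=%s;charset=utf8', $host, $name);
$this->dbh = new \PDO($connectString, $user, $pass);
//No SET CHARACTER SET / SET NAMES needed; the driver negotiates utf8 itself.
$this->dbh->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);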
If anyone needs to confirm my findings, this series of SQL queries will set up a test DB and allow you to check for yourself. Just make sure that the indexes are correctly enabled after the test data has been entered, because I had to drop and re-add them for some reason. It creates 10 million rows; maybe fewer will be enough to prove the point.
DROP DATABASE IF EXISTS pdo_test;
CREATE DATABASE IF NOT EXISTS pdo_test;
USE pdo_test;
CREATE TABLE IF NOT EXISTS test (
    `userId` int(11) NOT NULL,
    `asin` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
    `dateSent` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`userId`,`asin`),
    KEY `date_sent` (`dateSent`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
drop procedure if exists load_test_data;
delimiter #
create procedure load_test_data()
begin
    declare v_max int unsigned default 10000000;
    declare v_counter int unsigned default 0;
    while v_counter < v_max do
        INSERT INTO test (userId, asin, dateSent) VALUES (FLOOR(1 + RAND()*10000000), SUBSTRING(MD5(RAND()) FROM 1 FOR 10), NOW());
        set v_counter = v_counter + 1;
    end while;
end #
delimiter ;
ALTER TABLE test DISABLE KEYS;
call load_test_data();
ALTER TABLE test ENABLE KEYS;
# Tests - reconnect to mysql client after each one to reset previous CHARACTER SET
# Right collation, wrong charset - slow
SET CHARACTER SET utf8;
ALTER DATABASE pdo_test COLLATE='utf8_unicode_ci';
DELETE FROM test WHERE dateSent < '2013-01-01 00:00:00';
# Wrong collation, no charset - fast
ALTER DATABASE pdo_test COLLATE='latin1_swedish_ci';
DELETE FROM test WHERE dateSent < '2013-01-01 00:00:00';
# Right collation, right charset - fast
SET NAMES utf8;
ALTER DATABASE pdo_test COLLATE='utf8_unicode_ci';
DELETE FROM test WHERE dateSent < '2013-01-01 00:00:00';
Try to ANALYZE and OPTIMIZE the tables:
http://dev.mysql.com/doc/refman/5.5/en/optimize-table.html
http://dev.mysql.com/doc/refman/5.5/en/analyze-table.html
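For the table in question, that would be simply:
ANALYZE TABLE user_history;
OPTIMIZE TABLE user_history;
Both are supported directly for MyISAM tables; OPTIMIZE also reclaims the space left behind by large deletes.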

PHP MySQL Insert fail after DELETE

I've got two tables. One is Account, the other is Interest.
One account can have multiple Interests, and they can be edited.
Currently the process is to delete all Interests of this account, then insert the new interests.
The queries are:
"DELETE FROM Interests WHERE account_id='$id'"
"INSERT INTO Interests (account_id, interest_name) VALUES('$id', '$name')"
I run both queries when a user updates their account, but the insert fails; nothing is inserted into the table (p.s. the interests_id is auto_increment, and it was incremented, but there is nothing new in the table). When I comment out the delete query, the insert succeeds.
Does anyone know what I can do?
If you want to update your table records, use an UPDATE operation, like this:
UPDATE TABLE_NAME SET FIELD_NAME = 'VARIABLE_NAME'
WHERE PRIMARY_FIELD_NAME = 'VARIABLE_NAME';
You don't have to use those two queries; if you want to update data, simply use MySQL's UPDATE query, like this:
<?php
$query = "UPDATE Interests SET interest_name = '".$name."' WHERE account_id = '".$id."'" ;
mysql_query($query);
?>
If you want to update your table records, you can run an UPDATE operation like the following:
UPDATE Interests
SET interest_name = '$name'
WHERE account_id = '$id';
Try it; it may solve your problem.
If you have queries failing, you should capture the error and see what went wrong. In all MySQL APIs for PHP, a query that fails returns a status code to indicate this. Examples of checking this status code are easy to find in the docs, but most developers fail to check it.
Use transactions to ensure that both changes succeed together or neither is applied.
How to Decide to use Database Transactions
Definition of a transaction in MySQL: http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_transaction
Syntax for starting and committing transactions in MySQL: http://dev.mysql.com/doc/refman/5.5/en/commit.html
You need to use InnoDB. MyISAM does not support transactions. http://dev.mysql.com/doc/refman/5.5/en/innodb-storage-engine.html
In PHP, you need to stop using the old ext/mysql API and start using MySQLi or PDO.
http://php.net/manual/en/mysqli.quickstart.transactions.php
http://php.net/manual/en/pdo.begintransaction.php
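A minimal PDO sketch of the delete-then-insert wrapped in a transaction, using the names from the question; it assumes the Interests table has been converted to InnoDB and that $pdo is a PDO connection in exception mode:
try {
    $pdo->beginTransaction();

    $del = $pdo->prepare("DELETE FROM Interests WHERE account_id = :id");
    $del->execute(array(':id' => $id));

    $ins = $pdo->prepare("INSERT INTO Interests (account_id, interest_name) VALUES (:id, :name)");
    foreach ($names as $name) {
        $ins->execute(array(':id' => $id, ':name' => $name));
    }

    $pdo->commit();
} catch (PDOException $e) {
    //Neither the delete nor the inserts are applied if anything failed.
    $pdo->rollBack();
    error_log($e->getMessage());
}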
This happens because the queries are treated as two separate transactions, so the order of execution is not guaranteed.
The effect you are describing occurs because the insert is processed before the delete, so the interests_id is auto-incremented properly, but the row is then deleted by the delete statement.
You should change the query logic, or perform both queries in one single transaction.

SQL Server Query Slow from PHP, but FAST from SQL Mgt Studio - WHY?

I have a fast-running query (sub 1 sec) when I execute it in SQL Server Mgt Studio, but when I run the exact same query from PHP (against the same db instance) using FreeTDS v8 and mssql_query(), it takes much longer (70+ seconds).
The tables I'm hitting have an index on a date field that I'm using in the Where clause.
Could it be that PHP's mssql functions aren't utilizing the index?
I have also tried putting the query inside a stored procedure and executing the SP from PHP; the same time difference occurs.
I have also tried adding a WITH ( INDEX( .. ) ) clause on the table where that has the date index, but no luck either.
Here's the query:
SELECT
    1 History,
    h.CUSTNMBR CustNmbr,
    CONVERT(VARCHAR(10), h.ORDRDATE, 120) OrdDate,
    h.SOPNUMBE OrdNmbr,
    h.SUBTOTAL OrdTotal,
    h.CSTPONBR PONmbr,
    h.SHIPMTHD Shipper,
    h.VOIDSTTS VoidStatus,
    h.BACHNUMB BatchNmbr,
    h.MODIFDT ModifDt
FROM SOP30200 h
WITH (INDEX (AK2SOP30200))
WHERE
    h.SOPTYPE = 2 AND
    h.DOCDATE >= DATEADD(dd, -61, GETDATE()) AND
    h.VOIDSTTS = 0 AND
    h.MODIFDT = CONVERT(VARCHAR(10), DATEADD(dd, -1*@daysAgo, GETDATE()), 120);
Check which SET options are in effect; usually ARITHABORT is the culprit. It is ON in SSMS, but you might be connecting with it OFF.
Run this in SSMS while you are running your query and see what the first column is for the session that is connected from PHP:
select arithabort,* from sys.dm_exec_sessions
where session_id > 50
Run the SQL Profiler, and set up a trace and see if there are any differences between the two runs.
Using the LOGIN EVENT (and EXISTING CONNECTION) in SQL Profiler with the Text column will show the connection settings of a lot of important SET commands--Arithabort, Isolation Level, Quoted Identifier, and others. Compare and contrast these between the fast and slow connections to see if anything stands out.
SET ARITHABORT ON; in your session might improve query performance.
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-arithabort-transact-sql?view=sql-server-ver16
Always set ARITHABORT to ON in your logon sessions. Setting ARITHABORT to OFF can negatively impact query optimization, leading to performance issues.
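If ARITHABORT does turn out to be the difference, one workaround from the PHP side is to set it on the connection before running the query; a minimal sketch using the mssql extension from the question:
// Match SSMS's default session setting before issuing the real query.
mssql_query("SET ARITHABORT ON;");
$result = mssql_query($sql); // the slow SELECT from above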

Postgresql and PHP: is the currval a efficent way to retrieve the last row inserted id, in a multiuser application?

I'm wondering whether the way I retrieve the id of the last row inserted into a PostgreSQL table is efficient...
It works, obviously, but relying on the serial sequence's currval value could be problematic when many users are adding rows to the same table at the same time.
My current approach is:
$pgConnection = pg_connect('host=127.0.0.1 dbname=test user=myuser password=xxxxx')or die('cant connect');
$insert = pg_query("INSERT INTO customer (name) VALUES ('blabla')");
$last_id_query = pg_query("SELECT currval('customer_id_seq')");
$last_id_results = pg_fetch_assoc($last_id_query);
print_r($last_id_results);
pg_close($pgConnection);
Well, it's just a test atm.
But anyway, I can see 3 issues with this approach:
Relying on customer_id_seq: if two users do the same thing at the same time, could they both get the same id this way... or not?
I have to know the table's sequence name, because pg_get_serial_sequence doesn't work for me (I'm a newbie with PostgreSQL; it's probably a configuration issue).
Any suggestions/better ways?
p.s. I can't use PDO because it seems to lack a bit with transaction savepoints; I won't use Zend, and in the end I'd prefer to use the PHP pg_* functions (maybe I'll build up my own classes eventually).
EDIT:
@SpliFF (who deleted his answer): would this work better?
$pgConnection = pg_connect('host=127.0.0.1 dbname=test user=myuser password=xxxxx')or die('cant connect');
pg_query("BEGIN");
$insert = pg_query("INSERT INTO customer (name) VALUES ('blabla')");
$last_id_query = pg_query("SELECT currval('customer_id_seq')");
$last_id_results = pg_fetch_assoc($last_id_query);
print_r($last_id_results);
//do somethings with the new customer id
pg_query("COMMIT");
pg_close($pgConnection);
If you use a newer version of PostgreSQL (> 8.1), you should use the RETURNING clause of the INSERT (and UPDATE) command.
OTOH if you insist on using one of the sequence manipulation functions, please read the fine manual. A pointer: "Notice that because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did."
Insert and check currval(seq) inside one transaction. Before committing the transaction you'll see the currval(seq) for your own query, no matter who else inserted at the same time.
I don't remember the syntax exactly (read the manual; I last used pgsql about 3 years ago), but in general it looks like this:
BEGIN TRANSACTION;
INSERT ...;
SELECT currval('seq');
COMMIT;
e.g. INSERT INTO log (desc, user_id) VALUES ('drop her mind', 6) RETURNING id
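A minimal pg_* sketch of the RETURNING approach, using the customer table from the question:
$pgConnection = pg_connect('host=127.0.0.1 dbname=test user=myuser password=xxxxx');

//RETURNING hands back the generated id in the same round trip,
//with no race against other sessions' inserts.
$result = pg_query($pgConnection, "INSERT INTO customer (name) VALUES ('blabla') RETURNING id");
$row = pg_fetch_assoc($result);
echo $row['id'];

pg_close($pgConnection);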
