Adding a short sleep to a MySQL query to debug caching - PHP

I'd like to make a SQL query sleep for a couple of seconds to verify that my application caches the query result properly.
I tried adding SLEEP(2) to the SELECT query, which caused MySQL to hang until it was restarted.
I also tried adding a DO SLEEP(2); line before the actual query, which made PHP throw a "General Error" exception.
Here's some example code:
$sql = "SELECT ... HUGE LIST OF THINGS";
$result = $myCachedDatabase->query($sql); // Does this actually cache the query result? Or does it perform the query every time?
What I'd like is something along these lines:
$sql = "DELAY(5 seconds); SELECT ... HUGE LIST OF THINGS";
$result = $myCachedDatabase->query($sql); // First time it took 5s, second time it was instant - yay it gets cached!
The DELAY(5 seconds); part is what I'm looking for.
What is the best way to accomplish this?
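One way to check the cache without any SQL-side delay is to time the call from PHP - note that if the wrapper keys its cache on the SQL string, injecting SLEEP() would change the string and thus the cache entry anyway. A minimal sketch (timeCall and the usleep demo are illustrative; $myCachedDatabase is the hypothetical wrapper from the question):

```php
<?php
// Time an arbitrary callable; used to compare a first (uncached) and a
// second (hopefully cached) run of the same query.
function timeCall(callable $fn): float {
    $start = microtime(true);
    $fn();
    return microtime(true) - $start;
}

// With the wrapper from the question it would look like:
//   $first  = timeCall(fn() => $myCachedDatabase->query($sql));
//   $second = timeCall(fn() => $myCachedDatabase->query($sql));
// A cached second run should be near-instant compared to the first.

// Demonstration with a plain sleep standing in for a slow query:
$slow = timeCall(fn() => usleep(200000)); // ~0.2s "query"
$fast = timeCall(fn() => null);           // "cached" call
printf("slow: %.3fs, fast: %.3fs\n", $slow, $fast);
```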

Related

Sphinx How can i keep connection active even if no activity for longer time?

I was doing bulk inserts into the RealTime index using PHP with AUTOCOMMIT disabled, e.g.
// sphinx connection
$sphinxql = mysqli_connect($sphinxql_host.':'.$sphinxql_port,'','');
//do some other time consuming work
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
//do 50k updates or inserts
// Commit transaction
mysqli_commit($sphinxql);
and kept the script running overnight. In the morning I saw:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate
212334 bytes) in
So when I checked the nohup.out file closely, I noticed these lines:
PHP Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Memory usage before these lines was normal, but after them it started to increase until it hit the PHP memory_limit and the script died with the fatal error above.
in script.php , line 502 is
mysqli_query($sphinxql,$update_query_sphinx);
So my guess is that the Sphinx server closed the connection after a few minutes/hours of inactivity.
I have tried setting this in sphinx.conf:
client_timeout = 3600
restarted searchd with
systemctl restart searchd
and I am still facing the same issue.
So how can I keep the Sphinx server from dying on me when there is no activity for a longer time?
More info added:
I am getting data from MySQL in 50k chunks at a time and looping over each row to update it in the Sphinx RT index, like this:
//6mil rows update in mysql, so it takes around 18-20 minutes to complete this then comes this following part.
$subset_count = 50000;
$total_count_query = "SELECT COUNT(*) as total_count FROM content WHERE enabled = '1'";
$total_count = mysqli_query($conn, $total_count_query);
$total_count = mysqli_fetch_assoc($total_count);
$total_count = $total_count['total_count'];
$current_count = 0;
while ($current_count <= $total_count) {
    $get_mysql_data_query = "SELECT record_num, views, comments, votes FROM content WHERE enabled = 1 ORDER BY record_num ASC LIMIT $current_count, $subset_count";
    // sphinx start transaction
    mysqli_begin_transaction($sphinxql);
    if ($result = mysqli_query($conn, $get_mysql_data_query)) {
        /* fetch associative array */
        while ($row = mysqli_fetch_assoc($result)) {
            // sphinx escape whole array (custom helper, not a built-in)
            $escaped_sphinx = mysqli_real_escape_array($sphinxql, $row);
            // update data in sphinx index
            $update_query_sphinx = "UPDATE $sphinx_index
                SET
                    views = " . $escaped_sphinx['views'] . ",
                    comments = " . $escaped_sphinx['comments'] . ",
                    votes = " . $escaped_sphinx['votes'] . "
                WHERE
                    id = " . $escaped_sphinx['record_num'];
            mysqli_query($sphinxql, $update_query_sphinx);
        }
        /* free result set */
        mysqli_free_result($result);
    }
    // Commit transaction
    mysqli_commit($sphinxql);
    $current_count = $current_count + $subset_count;
}
So there are a couple of issues here, both related to running big processes.
MySQL server has gone away - This usually means that MySQL has timed out, but it could also mean that the MySQL process crashed due to running out of memory. In short, it means that MySQL has stopped responding, and didn't tell the client why (i.e. no direct query error). Seeing as you said that you're running 50k updates in a single transaction, it's likely that MySQL just ran out of memory.
Allowed memory size of 134217728 bytes exhausted - means that PHP ran out of memory. This also lends credence to the idea that MySQL ran out of memory.
So what to do about this?
The initial stop-gap solution is to increase memory limits for PHP and MySQL. That's not really solving the root cause, and depending on the amount of control you have (and knowledge you have) of your deployment stack, it may not be possible.
As a few people mentioned, batching the process may help. It's hard to say the best way to do this without knowing the actual problem that you're working on solving. If you can calculate, say, 10000 or 20000 records instead of 50000 in a batch, that may solve your problems. If that's going to take too long in a single process, you could also look into using a message queue (RabbitMQ is a good one that I've used on a number of projects), so that you can run multiple processes at the same time, each processing smaller batches.
If you're doing something that requires knowledge of all 6 million+ records to perform the calculation, you could potentially split the process up into a number of smaller steps, cache the work done "to date" (as such), and then pick up the next step in the next process. How to do this cleanly is difficult (again, something like RabbitMQ could simplify that by firing an event when each process is finished, so that the next one can start up).
So, in short, these are your two best options:
Throw more resources/memory at the problem everywhere that you can
Break the problem down into smaller, self-contained chunks.
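The second option can be sketched in plain PHP (names are illustrative; $work stands in for the real reconnect/transaction/update logic):

```php
<?php
// Process IDs in small, self-contained batches so that no single
// transaction has to hold tens of thousands of writes.
function processInChunks(array $ids, int $chunkSize, callable $work): int {
    $batches = 0;
    foreach (array_chunk($ids, $chunkSize) as $chunk) {
        // In the real script: ping/reconnect, begin a transaction,
        // run the updates for $chunk, then commit.
        $work($chunk);
        $batches++;
    }
    return $batches;
}

echo processInChunks(range(1, 50000), 10000, function (array $chunk) {
    // placeholder for the Sphinx updates
}), "\n"; // 5 batches of 10000
```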
You need to reconnect or restart the DB session just before mysqli_begin_transaction($sphinxql), something like this:
<?php
// Reconnect to Sphinx if it was disconnected due to a timeout (or force a reconnect).
function sphinxReconnect($force = false) {
    global $sphinxql_host;
    global $sphinxql_port;
    global $sphinxql;
    if ($force) {
        mysqli_close($sphinxql);
        $sphinxql = @mysqli_connect($sphinxql_host.':'.$sphinxql_port, '', '') or die('ERROR');
    } else {
        if (!mysqli_ping($sphinxql)) {
            mysqli_close($sphinxql);
            $sphinxql = @mysqli_connect($sphinxql_host.':'.$sphinxql_port, '', '') or die('ERROR');
        }
    }
}
//10mil+ rows update in mysql, so it takes around 18-20 minutes to complete this then comes this following part.
//reconnect to sphinx
sphinxReconnect(true);
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
//do your otherstuff
// Commit transaction
mysqli_commit($sphinxql);

Efficient query to just return row count

I have five different queries running on my About page, showing basic data like the number of news stories we have on the site. I am using queries like this:
$sql4 = "SELECT `ride_id` FROM `tpf_rides` WHERE `type` LIKE '%Roller Coaster%'" ;
$result4 = $pdo->query($sql4);
$coasters = $result4->rowCount();
but I wonder if there is a more efficient way. I've tried to minimize the load by only pulling IDs, but since I only need the count, can the load be lightened even more?
Also, these queries only really need to run once or twice per day, not every time the page is loaded. Can someone point me in the direction of setting this up? I've never had to do this before. Thanks.
Yes there is a more efficient way. Let the database do the counting for you:
SELECT count(*) as cnt
FROM `tpf_rides`
WHERE `type` LIKE '%Roller Coaster%';
If all the counts you are looking for are from the tpf_rides table, then you can do them in one query:
SELECT sum(`type` LIKE '%Roller Coaster%') as RollerCoaster,
sum(`type` LIKE '%Haunted House%') as HauntedHouse,
sum(`type` LIKE '%Ferris Wheel%') as FerrisWheel
FROM `tpf_rides`;
That would be even faster than running three different queries.
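With PDO, fetching all three counts then takes a single round trip. A sketch, demonstrated here against an in-memory SQLite database so it runs stand-alone (the SQL has the same shape as the MySQL query above; the table contents are made up):

```php
<?php
// Several counts in one query: each LIKE evaluates to 0 or 1 per row,
// so SUM() over it counts the matching rows.
$pdo = new PDO('sqlite::memory:');
$pdo->exec("CREATE TABLE tpf_rides (ride_id INTEGER, type TEXT)");
$pdo->exec("INSERT INTO tpf_rides VALUES
    (1, 'Steel Roller Coaster'), (2, 'Haunted House'),
    (3, 'Ferris Wheel'), (4, 'Wooden Roller Coaster')");

$row = $pdo->query("
    SELECT sum(type LIKE '%Roller Coaster%') AS RollerCoaster,
           sum(type LIKE '%Haunted House%')  AS HauntedHouse,
           sum(type LIKE '%Ferris Wheel%')   AS FerrisWheel
    FROM tpf_rides
")->fetch(PDO::FETCH_ASSOC);

printf("coasters: %d, houses: %d, wheels: %d\n",
    $row['RollerCoaster'], $row['HauntedHouse'], $row['FerrisWheel']);
// coasters: 2, houses: 1, wheels: 1
```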
If you want to run those queries only every now and then you need to keep the result stored somewhere. This can take a form of a pre-calculated sum you manage yourself or a simple cache.
Below is a very simple and naive cache implementation that should work reliably on Linux. Many things can be improved here, but maybe this will give you an idea of what you could do.
Note that this is not compatible with the query suggested by Gordon Linoff, which returns multiple counts.
The code has not been tested.
$cache_directory = "/tmp/";
$cache_lifetime = 86400; // time to keep cache in seconds. 24 hours = 86400sec
$sql4 = "SELECT count(*) FROM `tpf_rides` WHERE `type` LIKE '%Roller Coaster%'";
$cache_key = md5($sql4); // generate a semi-unique identifier for the query
$cache_file = $cache_directory . $cache_key; // generate full cache file path
if (!file_exists($cache_file) || time() > filemtime($cache_file) + $cache_lifetime) {
    // cache file doesn't exist or has expired
    $result4 = $pdo->query($sql4);
    $coasters = $result4->fetchColumn();
    file_put_contents($cache_file, $coasters); // store the result in a cache file
} else {
    // file exists and data is up to date
    $coasters = file_get_contents($cache_file);
}
I would strongly suggest you break this down into functions that take care of different aspects of the problem.
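A sketch of what that function split could look like (cachedValue and its signature are illustrative, not a standard API):

```php
<?php
// Wrap the cache-or-compute decision in one reusable function.
function cachedValue(string $key, int $lifetime, callable $compute): string {
    $file = sys_get_temp_dir() . '/' . md5($key);
    if (file_exists($file) && time() < filemtime($file) + $lifetime) {
        return file_get_contents($file);   // fresh cache hit
    }
    $value = (string) $compute();          // miss or expired: recompute
    file_put_contents($file, $value);
    return $value;
}

// Usage with the query from the question:
//   $coasters = cachedValue($sql4, 86400,
//       fn() => $pdo->query($sql4)->fetchColumn());
```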

How can I determine which query is taking the longest time to execute

I have a list of rules, and based on those rules I trigger multiple queries.
But one of the queries takes a while to run, and I am trying to figure out which one is taking too long so I can work out a solution.
I don't see why it is taking so long (more than 7 seconds): all of my queries are updates, inserts, deletes, and a couple of small selects.
Note: I am triggering this from a jQuery POST, so I currently get a timeout error because the execution time of the page is longer than 7 seconds. I can fix it by increasing the timeout value, but I want to fix the root of the problem; it should not take that long to execute a simple select, insert, update, or delete.
I am using PDO to execute queries, and my own class to connect to the tables and execute queries. If you would like to see my class, please follow this: How can I return LastInsertID from PDO within a method of a class
You can enable the MySQL Slow Query Log to see which DB queries are taking a long time
http://dev.mysql.com/doc/refman/5.5/en/slow-query-log.html
By default it will show queries that take longer than 10 seconds
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_long_query_time
I would suggest changing that parameter to 2 seconds.
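The parameter change can usually be applied at runtime from a client session with sufficient privileges, without editing my.cnf (a sketch; the log file path is an example):

```sql
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;   -- log anything slower than 2 seconds
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
```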
You could wrap the PDO class and log the execution time in the query() method. Something like this:
class MyPDO extends PDO {
    // extended query method that logs the time a query takes to execute
    public function query($sql, ...$args) {
        $start = microtime(true);
        $result = parent::query($sql, ...$args);
        printf("the query %s took: %s seconds\n",
            $sql,
            microtime(true) - $start);
        return $result; // pass the result set through to the caller
    }
}
Use the class by replacing PDO with MyPDO. Like this:
old code:
$connection = new PDO(...);
new code:
$connection = new MyPDO(...);
This is similar to the answer from hek2mgl but I'm including an explanation.
When I want to know how long anything takes (queries, functions, whatever), the most straightforward way is to take a timestamp just before you start and another as soon as you're done. Subtract the start time from the end time and you know exactly how many seconds have passed.
$start = time();
doStuff();
$end = time();
$duration_in_seconds = $end - $start;
echo "Function doStuff() took $duration_in_seconds seconds";
You can echo the duration to the screen or log it to a file, whatever is convenient.
If you need to be more precise you can use microtime() as hek2mgl has done.

joomla-php mysql not updating a record with data from previous query

I'm counting the right-answers field of a table and saving that calculated value to another table. For this I'm using two queries: the first is the count query, whose value I retrieve using loadResult(). After that I update another table with this value and the date/time. The problem is that in some cases the calculated value is not saved, only the date/time.
The queries look something like this:
$sql = 'SELECT count(answer)
FROM #_questionsTable
WHERE
answer = 1
AND
testId = '.$examId;
$db->setQuery($sql);
$rightAnsCount = $db->loadResult();
$sql = 'UPDATE #__testsTable
SET finish = "'.date('Y-m-d H:i:s').'", rightAns='.$rightAnsCount.'
WHERE testId = '.$examId;
$db->setQuery($sql);
$db->Query();
answer = 1 means that the question was answered correctly.
I think that when the 2nd query is executed the first one has not finished yet, but everywhere I read says that it waits for the first query to finish before going to the 2nd, and I don't know how to make the 2nd query wait for the 1st one to end.
Any help will be appreciated. Thanks!
A PHP MySQL query is synchronous, i.e. it completes before returning - Joomla!'s database class doesn't implement any sort of asynchronous or callback functionality.
While you are missing a ';', that wouldn't account for it working some of the time.
How is the rightAns column defined - e.g. what happens when your $rightAnsCount is 0?
Turn on Joomla!'s debug mode and check the SQL that's generated in the profile section; it looks something like this:
eg.
Profile Information
Application afterLoad: 0.002 seconds, 1.20 MB
Application afterInitialise: 0.078 seconds, 6.59 MB
Application afterRoute: 0.079 seconds, 6.70 MB
Application afterDispatch: 0.213 seconds, 7.87 MB
Application afterRender: 0.220 seconds, 8.07 MB
Memory Usage
8511696
8 queries logged.
SELECT *
FROM jos_session
WHERE session_id = '5cs53hoh2hqi9ccq69brditmm7'
DELETE
FROM jos_session
WHERE ( TIME < '1332089642' )
etc...
You may need to add a semicolon to the end of your SQL queries:
...testId = '.$examId.';';
Ah, something cppl mentioned is the key, I think: you may need to account for null values from your first query.
Changing this line:
$rightAnsCount = $db->loadResult();
To this might make the difference:
$rightAnsCount = $db->loadResult() ?: 0;
Basically setting it to 0 if there is no result (and without calling loadResult() twice, which would run the query again).
I am pretty sure you can do this in one query instead:
$sql = 'UPDATE #__testsTable
SET finish = NOW()
, rightAns = (
SELECT count(answer)
FROM #_questionsTable
WHERE
answer = 1
AND
testId = '.$examId.'
)
WHERE testId = '.$examId;
$db->setQuery($sql);
$db->Query();
You can also update all values in all rows in your table this way by slightly modifying your query, so you can do all rows in one go. Let me know if this is what you are trying to achieve and I will rewrite the example.

Php query MYSQL very slow. what possible to cause it?

I have a PHP page that queries a MySQL database and returns about 20000 rows, but the browser takes over 20 minutes to present them. I have added an index on my database and it is being used: the query time on the command line is about 1 second for the 20000 rows, yet in the web application it takes far longer. Does anyone know what is causing this problem, and a better way to improve it? Below is my PHP code to retrieve the data:
$query1 = "SELECT * FROM table WHERE Date BETWEEN '2010-01-01' AND '2010-12-31'";
$result1 = mysql_query($query1) or die('Query failed: ' . mysql_error());
while ($line = mysql_fetch_assoc($result1)) {
    echo "\t\t<tr>\n";
    $Data['Date'] = $line['Date'];
    $Data['Time'] = $line['Time'];
    $Data['Serial_No'] = $line['Serial_No'];
    $Data['Department'] = $line['Department'];
    $Data['Team'] = $line['Team'];
    foreach ($Data as $col_value) {
        echo "\t\t\t<td>$col_value</td>\n";
    }
    echo "\t\t</tr>\n";
}
Try adding an index to your date column.
Also, it's a good idea to learn about the EXPLAIN command.
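For the query in question that would look like this (a sketch; the interesting part of the output is the key column, which names the index MySQL actually chose):

```sql
EXPLAIN SELECT * FROM `table`
WHERE `Date` BETWEEN '2010-01-01' AND '2010-12-31';
```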
As mentioned in the comments above, 1 second is still pretty long for your results.
You might consider putting all your output into a single variable and then echoing the variable once the loop is complete.
Also, browsers wait for tables to be completely formed before showing them, so that will slow your results (at least slow the process of building the results in the browser). A list may work better - or better yet a paged view if possible (as recommended in other answers).
It's not PHP that's causing it to be slow, but the browser itself rendering a huge page. Why do you have to display all that data anyway? You should paginate the results instead.
Try constructing a static HTML page with 20,000 table elements. You'll see how slow it is.
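Pagination means each request fetches and renders only one page of rows. A sketch (pageQuery, the page size, and the hard-coded date range are illustrative):

```php
<?php
// Build the SQL for one page of results using LIMIT/OFFSET.
function pageQuery(int $page, int $perPage = 100): string {
    $page   = max(1, $page);            // clamp bad input to page 1
    $offset = ($page - 1) * $perPage;
    return sprintf(
        "SELECT * FROM `table` WHERE Date BETWEEN '2010-01-01' AND '2010-12-31' " .
        "ORDER BY Date LIMIT %d OFFSET %d",
        $perPage,
        $offset
    );
}

// Page 3 renders rows 201-300 instead of all 20,000:
echo pageQuery(3), "\n"; // ends with: LIMIT 100 OFFSET 200
```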
You can also improve that code:
while ($line = mysql_fetch_assoc($result1)) {
    echo "\t\t<tr>\n";
    foreach ($line as $col_value) {
        echo "\t\t\t<td>$col_value</td>\n";
        flush(); // optional, but gives your program a sense of responsiveness
    }
    echo "\t\t</tr>\n";
}
You could time each step of the script by echoing the time before and after connecting to the database, running the query, and outputting the code.
This will tell you how long the different steps take. You may find out that it is indeed the traffic causing the delay and not the query.
On the other hand, when you have a table with millions of records, retrieving 20000 of them can take a long time, even when it is indexed. 20 minutes is extreme, though...
