Some pages on my site can take a while to process, and during that time the MySQL connection times out and closes.
I'm not in a position to increase MySQL's timeout variable, so I'd therefore like to find a way to tell CodeIgniter to attempt to reconnect X times when it tries to run a query, before dying.
I've done this in non-CI projects by running all queries through a custom query function which (sketched below):
checks if the connection is there
if it isn't, tries to reconnect and stores the number of attempts in a global variable
runs the query
returns the results.
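For reference, a minimal sketch of that kind of wrapper in plain mysqli (the function name, credentials, and retry count are all placeholders, not CodeIgniter code):

<?php
// Illustrative reconnect-and-retry wrapper; not CodeIgniter-specific.
function query_with_retry($link, $sql, $maxAttempts = 3)
{
    for ($attempt = 0; $attempt < $maxAttempts; $attempt++) {
        // If the connection is alive, just run the query.
        if ($link && @mysqli_ping($link)) {
            return mysqli_query($link, $sql);
        }
        // Connection has gone away: try to reconnect (placeholder credentials).
        $link = @mysqli_connect('localhost', 'user', 'pass', 'db');
    }
    die("Gave up after $maxAttempts reconnect attempts");
}
?>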
What is the best method of doing something like this in CI?
Related
We have a site that gets a fair amount of traffic, and there is one query that seems to be executed every time any other query runs. It shows up in our query log because it is sometimes slow.
SELECT CHARACTER_SET_NAME FROM INFORMATION_SCHEMA.COLLATIONS WHERE COLLATION_NAME= 'latin1_swedish_ci';
It appears to happen inside the getCharsetName() function, which is called from the describe() function. Does anyone happen to know why this is being called, and is there a way to make it go away? It seems unnecessary to run it every single time.
Thanks
I have a mysqli_query() in a PHP script which usually takes about 10 ms to finish. However, sometimes it takes about 5000 ms. This query isn't important for the site to function properly, i.e. it can be ignored if it takes more than 10 ms.
Is this possible to do?
Something like:
<?php
$start = microtime(true); // record the time just before the query
$query = mysqli_query($connection, "(do something)");
// if the difference between now and $start is bigger than 10 ms,
// skip waiting for the $query result and go on
?>
If anything like this is possible, can we also cancel the running query, so that MySQL server knows we don't want the result anymore and stops processing it?
Since the query's result isn't critical, you should cache it instead of hitting the database every time.
PHP provides mysqlnd_qc (the "qc" stands for query cache), which works with APC and friends to cache the results of MySQL queries.
The other option is to cache the rendered template instead; Smarty has this built in. This means you render the page once, and then only re-render it after a fixed interval.
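For example, a hand-rolled version of the caching idea with APCu (requires APCu and mysqlnd; the key name and 60-second TTL are arbitrary choices):

<?php
// Cache the query result in APCu so most requests skip the database.
$rows = apcu_fetch('slow_query_result', $hit);
if (!$hit) {
    $result = mysqli_query($connection, "(do something)");
    $rows = mysqli_fetch_all($result, MYSQLI_ASSOC);
    apcu_store('slow_query_result', $rows, 60); // keep for 60 seconds
}
?>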
I have a DB with over 5 million rows, and for each row I have to do an HTTP POST to a server with some parameters, at a maximum rate of 500 concurrent connections. Each POST request takes about 12 seconds to process, so as old connections complete I have to open new ones and maintain ~500 connections. I then have to update the DB with the values returned from these web calls.
How do I make the web calls described above?
My app is in PHP. Can I use PHP, or should I switch to something else for this?
Actually, you can definitely do this with PHP using a technique called long polling. Basically, how it works is that the client machine pings the server and asks "Do you have anything for me?" If the server doesn't, instead of responding it holds onto the request and responds when it has something to send.
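A bare-bones long-polling endpoint might look something like this (check_for_updates() is a hypothetical helper standing in for your data source):

<?php
// Hold the request open until data appears or we time out.
set_time_limit(0);                    // don't let PHP kill the script
$deadline = time() + 30;              // give up after 30 seconds
while (time() < $deadline) {
    $data = check_for_updates();      // hypothetical: poll your data source
    if ($data) {
        echo json_encode($data);      // respond as soon as there's something
        exit;
    }
    usleep(250000);                   // wait 250 ms before checking again
}
echo json_encode(array());            // nothing arrived; respond empty
?>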
Long polling is a method that is used by both DrupalChat and the APE project (AJAX Push Engine).
http://drupal.org/project/drupalchat
http://www.ape-project.org/
Here is some more info on push tech: http://en.wikipedia.org/wiki/Push_technology and http://en.wikipedia.org/wiki/Comet_%28programming%29
And here is a stackoverflow post about it: How do I implement basic "Long Polling"?
Now, I have to say that 12 seconds is really dang long for a DB query to run. It sounds like either the query needs to be optimized or the DB does (or both). Have you normalized the database and set up good table and inter-table indexing?
Now, as for preventing DB update collisions, you need to use transactions (which both Postgres and newer versions of MySQL offer, along with most enterprise DB systems). Transactions allow you to roll back DB changes, reserve table IDs, and things like that.
http://en.wikipedia.org/wiki/Database_transaction
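With mysqli, that looks roughly like this (the table and column names are made up for illustration):

<?php
// Wrap the write in a transaction so it can be rolled back on failure.
mysqli_begin_transaction($connection);
$ok = mysqli_query($connection,
    "UPDATE jobs SET response = 'done' WHERE id = 42"); // made-up table/values
if ($ok) {
    mysqli_commit($connection);
} else {
    mysqli_rollback($connection); // undo the partial change
}
?>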
PHP isn't the right tool for long-running scripts, since by default it has a pretty short maximum execution time. You might look into using Python for this task. Also note that you can call external scripts from PHP (such as Python scripts) using the system() function, if the only reason you're using PHP is easy integration with a web front end.
However, you can do this in PHP with a cron job, by simply having your PHP script handle a single row at a time and having the cron job call the script every second. Just maintain the index into the table elsewhere (either elsewhere in the DB, or just write the number to a file).
If you want to saturate your 500-connection limit, have your script do 40 rows at a time: 40 rows per second is roughly 500 rows per 12 seconds.
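A sketch of that cron-driven worker (the offset file, table, and column names are assumptions):

<?php
// Runs once per cron tick: process the next 40 rows, then record our position.
$offsetFile = '/tmp/row_offset';                 // assumed location
$offset = (int) @file_get_contents($offsetFile); // 0 on first run
$result = mysqli_query($connection,
    "SELECT id, params FROM rows LIMIT $offset, 40"); // assumed schema
while ($row = mysqli_fetch_assoc($result)) {
    // do the HTTP POST for this row, then write the response back to the DB
}
file_put_contents($offsetFile, $offset + 40);    // remember where we stopped
?>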
I'm building a web application that allows users to run a query against one of two databases. They are only allowed to submit 1 query to each database at a time. Currently, I'm setting $_SESSION['runningAdvancedQuery'] (for instance) to let me know if they're running a query. This is to prevent someone from opening up a 2nd browser tab and hitting the resource again. It works great except...
If I start a query in a tab, then close that tab before it finishes, the flag does not get unset. Therefore, if I reopen the page, I cannot run any queries because it thinks I have one still running. Anyone have advice for getting around this?
Set this value not to, say, 1, but to a Unix timestamp, and check by comparing the last-query timestamp to the current time, requiring a minimum interval to pass before the next query can be executed. Remember to set the block time to a safe value: the longest time a query could take to execute. If the user closes his tab, he will be "unlocked" after a short time.
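Something like this, where 300 seconds stands in for the assumed worst-case query runtime:

<?php
// A lock that expires on its own, so a closed tab can't block the user forever.
$maxRuntime = 300; // longest time a query should ever take, in seconds
if (isset($_SESSION['runningAdvancedQuery'])
        && time() - $_SESSION['runningAdvancedQuery'] < $maxRuntime) {
    die('Your other query is not finished yet.');
}
$_SESSION['runningAdvancedQuery'] = time(); // store when this query started
// ... run the query ...
unset($_SESSION['runningAdvancedQuery']);   // release the lock on completion
?>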
<?php
ignore_user_abort(true); // keep the script running even if the tab is closed
if (!isset($_SESSION['runningAdvancedQuery'])) {
    $_SESSION['runningAdvancedQuery'] = true;
    // run query
    unset($_SESSION['runningAdvancedQuery']);
} else {
    // show error: "your other query isn't finished yet"
}
?>
Instead of setting $_SESSION['runningAdvancedQuery'] to true, you could set it to the output of SELECT CONNECTION_ID(); and check SHOW PROCESSLIST; to see whether it's still running. This would be an add-on to the other answers, though: especially when using persistent connections, other processes could have that connection ID in use.
As the same user, you are always allowed to see your own processes. So, to combine the other checks:
Store the timestamp, connection ID & the actual SQL query (I'd strip all whitespace & convert to either upper- or lowercase, so slight differences in presentation don't matter).
Of course use the ignore_user_abort() functionality, do a session_write_close() before the query, then restart the session & set the flag to finished after the query.
On a check for the running query, check whether
The connection-id is still present.
The timestamp plus the number of seconds the query has been running is reasonably close to the current time.
The query (lowercase, stripped whitespace) is about the same query as requested (use SHOW FULL PROCESSLIST; to get the whole query).
Optionally, take it a step further and give people the possibility to KILL QUERY <connection_id>; (only if the previous three checks all came back positive).
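Putting the processlist check together might look like this ($storedId and $storedSql would come from the session; the normalization matches the suggestion above):

<?php
// Check whether the stored connection is still executing the stored query.
function normalize_sql($sql)
{
    return strtolower(preg_replace('/\s+/', '', $sql)); // strip whitespace, lowercase
}
$stillRunning = false;
$result = mysqli_query($connection, 'SHOW FULL PROCESSLIST');
while ($row = mysqli_fetch_assoc($result)) {
    if ($row['Id'] == $storedId
            && normalize_sql((string) $row['Info']) === normalize_sql($storedSql)) {
        $stillRunning = true; // same connection, same query: it's still alive
    }
}
?>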
I have a small PHP framework that basically just loads a controller and a view.
An empty page loads in about 0.004 seconds, using microtime() at the beginning and end of execution.
Here's the "problem". If I do the following:
$link = @mysql_connect($server, $user, $pass, $link);
@mysql_select_db($database, $link);
the page loads in about 0.500 seconds. A whopping 12500% jump in time to render the empty page.
Is this normal, or am I doing something seriously wrong here? (I'm hoping for the latter.)
EDIT: Could someone say what a normal time penalty is for just connecting to a MySQL DB like above?
Error suppression with @ will slow down your script. Also, an SQL connection is reliant on the speed of the server, so if the server is slow to respond, your script will execute slowly.
Actually, I don't get what you're trying to do with the $link variable.
$link = @mysql_connect($server, $user, $pass, $link); is probably wrong and not doing what you want (i.e. nothing, in your example) unless you have more than one link to databases (advanced stuff). Note that the fourth parameter of mysql_connect() is the boolean new_link, not a link identifier.
The php.net documentation states:
new_link
If a second call is made to mysql_connect() with the same arguments, no new link will be established, but instead, the link identifier of the already opened link will be returned. The new_link parameter modifies this behavior and makes mysql_connect() always open a new link, even if mysql_connect() was called before with the same parameters. In SQL safe mode, this parameter is ignored.
On my web server, the load time is consistently about the same (4000 µs for the first connection and 600 µs for the second one).
Half a second to connect to a MySQL database is a bit slow, but not unusual either. If it's on another server on the network with an existing load, it might be absolutely normal.
I wouldn't worry too much about this problem.
(Oh, an old question! Never mind, still replying.)
Have you tried verifying your data integrity? Do a REPAIR TABLE and an OPTIMIZE TABLE on all tables. I know that when a table has gone corrupt, the connection can sometimes take an enormous amount of time or fail altogether.
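For example, per table (mytable is a placeholder name):

REPAIR TABLE mytable;
OPTIMIZE TABLE mytable;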
The reason may be that domain (DNS) resolution is slow.
skip-name-resolve
Add this to my.cnf, then restart mysqld.
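That is, in the [mysqld] section:

[mysqld]
skip-name-resolve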
And if you use skip-name-resolve, you can't use host names in MySQL user permissions; grants have to refer to IP addresses.