Increasing MySQL server timeout value - PHP

I have a find() call which sometimes takes a long time to complete, depending on the date range selected by the user. This can sometimes cause the server to time out (2006: MySQL server has gone away), causing the find() to fail. I have tried altering the timeout value using the following:
ini_set('mysql.connect_timeout', 5);
My presumption is that this is failing because I cannot override the server settings on the hosting package.
I was advised by the hosting company to use the following code:
SET @@session.wait_timeout=60
I would be very grateful for any advice on increasing the MySQL server timeout through CakePHP.
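For reference, a session-scoped statement like the one the host suggested can be issued from PHP immediately after connecting, before the long-running find(). This is only a sketch using plain mysqli with placeholder credentials; in CakePHP you would run it through the model's connection instead:

```php
<?php
// Placeholder credentials; substitute your own.
$mysqli = new mysqli('localhost', 'db_user', 'db_pass', 'my_database');

// Raise the idle timeout for this session only. Session variables
// do not require server-wide privileges, so this usually works
// even on shared hosting where you cannot touch my.cnf.
$mysqli->query('SET SESSION wait_timeout = 300');

// ... run the long-running query here ...
```

Note that wait_timeout governs how long the server keeps an idle connection open; it does not extend PHP's own execution limit.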

I suggest increasing the PHP request timeout rather than the MySQL timeout:
set_time_limit(250);

I think you should consider another approach.
PHP run under Apache will eventually time out; you can set the limit to a large value, but some hosts prohibit doing so.
Instead, you could submit the form via AJAX when the user requests the data, use a backend technology (e.g. node.js) to connect to MySQL and run the query, and then send the result back to the front end.


Website stops responding when a database backup is taken

I am using an AWS EC2 instance running Ubuntu, with a MySQL database of approximately 4 GB inside the instance. Whenever I dump the database, the website stops responding for the duration of the dump, about 15 to 20 seconds.
Please advise if anything works better than this backup procedure.
I think you forgot to turn off the lock-tables option. By default, MySQL takes table locks when doing a data export.
The locks aren't released until the export is complete, which explains why your website cannot do anything with those tables for about 15-20 seconds.
If you are taking the dump through MySQL Workbench, go to Advanced Options and uncheck lock-tables. (From the command line, mysqldump's --single-transaction option gives a consistent InnoDB dump without locking the tables.)
Please check max_execution_time and memory_limit in your php.ini file.
You can also use the set_time_limit() function.
Obviously, while the backup is running the database is busy with it and cannot serve other requests. You might consider something like RDS, which handles backup jobs for you behind the scenes; with a read replica you can also get rid of this timeout issue.
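For reference, the php.ini entries mentioned above look like this (the values are examples only, not recommendations):

```ini
; Maximum script run time in seconds (0 = unlimited, not recommended)
max_execution_time = 120
; Maximum memory a single script may allocate
memory_limit = 256M
```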

PHP - getting gateway timeout errors for certain POST requests on Apache 2

I am running my application on Laravel 5.1.27, on a server hosted at HostGator.
Most of the time my POST requests end in a gateway timeout error. I have RESTful APIs which allow users to send POST requests, and I am also using DataTables; its POST requests mostly end in timeout errors as well.
I've read many other threads but haven't succeeded in removing these errors. Everything works fine on my local machine, but on the server the timeouts occur.
Here are my live server specifications:
Any help/suggestions would be really appreciated.
Note: I am using a shared hosting plan, so I don't have root access on my server. Please keep this in mind while suggesting solutions.
Try using
<?php
set_time_limit (60);
?>
Set the number of seconds a script is allowed to run. If this is reached, the script returns a fatal error. The default limit is 30 seconds or, if it exists, the max_execution_time value defined in the php.ini.
The PHP default is 30 seconds, however your host may set this even lower.
If you change the 60 to 0, this tells PHP never to time out.
This is not recommended: if you have a leaky or looping script, it can wreak havoc on the server (and your host will probably disable your site until the script stops).

PHP MySQL is going away after ~60 seconds

First of all, I already went through some related posts, but without luck.
I followed the solution provided in MySQL server has gone away - in exactly 60 seconds
setting these values at the very beginning:
ini_set('mysql.connect_timeout', 300);
ini_set('default_socket_timeout', 300);
but it seems that the error persists.
The error occurs just before performing a query: the database class used for handling the MySQL operations performs a ping (mysqli_ping) in order to refresh the connection (I guess that's the point of pinging), but at a certain point, after ~60 seconds, it throws this warning:
Warning: mysqli_ping(): MySQL server has gone away in...
Is there something I'm missing?
UPDATED
I figured out where exactly the issue is.
I will explain further my workflow.
I establish two different DB connections: the first is just for retrieving data, and the second is used to insert all the data obtained (row by row). Since the second connection is the one performing operations, I thought it was the one producing the server-gone-away error, but it turns out the error is raised by the idle connection (the first one).
The workaround I made is to close the first connection (since it will no longer be used) just after the data has been queried.
The second connection has enough time to not reach a timeout.
The first connection is to a remote database and the second is to my local server, so I have complete control over the second one.
The weird thing about the connection to the remote MySQL server is that when I connect from my PHP script, it reaches the timeout after 60 seconds or so, but if I connect from the console, the connection does not time out. Do you know how I can avoid that timeout (server has gone away)? As I said above, I already have a workaround, but I'd like to know why it times out after ~60 seconds from PHP, whereas from the console I can stay connected for hours.
Those configs that you're setting change the client connection, but this problem is on the server side: it's the server that is closing the connection. To change this behaviour you must change the value of wait_timeout in my.cnf.
Sources:
http://dev.mysql.com/doc/refman/5.7/en/gone-away.html
http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout
Just one more thing: changing this may not be the best thing you can do. Try to improve your query first.
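If you do have control over the server, the change described above would be a my.cnf entry like the following (the value is illustrative; mysqld must be restarted, or the variable set via SET GLOBAL, for it to take effect):

```ini
[mysqld]
# Seconds the server waits for activity on a non-interactive
# connection before closing it (the default is 28800, i.e. 8 hours;
# some hosts lower it drastically).
wait_timeout = 300
```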

Timeout faster with memcache

I'm trying to force PHP's Memcache extension to timeout almost immediately if a memcached server I'm connecting to isn't available (for whatever reason). I'd like to throw an exception in this case (which will be handled somewhere else).
I've been searching and trying different things without any luck. I'm adding servers (only one for now) to the pool with the standard:
$this->memcache->addServer ( $server['host'], $server['port'] );
I then killed the memcached daemon (I also tried a wrong port and host) and opened my page. It just loads for a very long time, and then nginx comes back with a 504 Gateway Time-out error.
How can I tell the memcache client to try for, say, 1 second and then give up, at which point I should be able to detect the timeout somehow?
The bottom line is that if our memcached server goes down, I'd like to display a user-friendly error page (already working for uncaught exceptions) as soon as possible, and not make the user wait 30 seconds before seeing a generic server error.
Just call:
Memcache::getServerStatus() or
Memcache::getExtendedStats()
Also, this question is pretty much identical to yours.
Reduce the value of the max_failover_attempts memcache module configuration parameter; the default is too high.
You can also specify a timeout as the 3rd parameter to the connect() method:
$memcache->connect('memcache_host', 11211, $timeout);
However, the default timeout should already be 1 second.
Another place to look is the TCP timeout parameters in the OS.
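Putting those suggestions together, one possible sketch follows. It assumes the pecl memcache extension (not memcached), the host and port are placeholders, and whether getServerStatus() reliably reflects a dead server may depend on the extension version:

```php
<?php
$memcache = new Memcache();

// addServer() parameters: host, port, persistent, weight,
// connect timeout in seconds. A 1-second timeout keeps a dead
// server from stalling the whole page.
$memcache->addServer('memcache_host', 11211, true, 1, 1);

// getServerStatus() returns 0 when the server is marked offline,
// so we can fail fast with our own exception and show the
// friendly error page instead of waiting for a gateway timeout.
if ($memcache->getServerStatus('memcache_host', 11211) === 0) {
    throw new RuntimeException('memcached server is unavailable');
}
```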

Execution Creates "MySQL server has gone away" Error

I am using a PHP script which is meant to execute continuously for a long time (a one-time mapping script), inserting data into a MySQL database. The script works fine; however, eventually (after a minute or two) it fails with the error:
MySQL server has gone away
I have changed the ini settings for both the MySQL connection timeout and the PHP script execution timeout, but neither changed the outcome.
I have made a VERY similar script in the past that ran on the same server for long periods without ever hitting this error.
I thank you for your time; hopefully your help can solve this problem for me and for any other frustrated scripter who comes across this post in the future.
There are many reasons for this to happen: timeouts, packets that are too big, etc.
Please check this
Did you restart mysqld after the config changes?
Do you have enough memory, so that it's not being killed by the OOM killer?
UPDATE: here is the solution; you need to set wait_timeout:
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
Check max_allowed_packet in the MySQL server settings. The client cannot send packets larger than this, or the MySQL server will close the connection.
This is only relevant to data inserts where the query string gets very long. It does not affect SELECTs, as the server automatically enlarges the sending packet.
A good size would be the size of your data multiplied by two. The multiplication is needed because data is often escaped before sending, and a header and possibly a footer are added to the SQL query.
The purpose of max_allowed_packet is to control server memory usage and to limit DoS attacks.
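For illustration, the server-side setting discussed above is another my.cnf entry (64M is only an example; per the sizing advice above, use roughly twice your largest insert):

```ini
[mysqld]
# Largest packet the server will accept from a client.
# An INSERT bigger than this closes the connection with
# "MySQL server has gone away".
max_allowed_packet = 64M
```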
