I am running into a peculiar problem: Apache closes the connection after 12 seconds or so, which leads to a "connection reset by peer" message in the browser.
I am on CentOS 5 Linux, using Apache 2 / PHP 5.x / mod_gzip (PHP with eAccelerator).
I tested some variations:
Usually I print all the HTML output as the last step. The connection is always closed when the processing time goes above 12 seconds.
If the print happens sooner (< 12 secs), the connection is not closed and I get the page in the browser.
If I print something regularly (every second or so), the connection is not closed even if the processing time goes above 12 secs.
What could be the possible issue here? Any suggestions on fixing this issue?
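For reference, the "print something regularly" variation above is roughly this (a minimal sketch; do_chunk_of_work() is a hypothetical stand-in for my actual processing, not the real code):
<?php
// Sketch: emit a little output while the long job runs so the
// connection is never silent for the full processing time.
$done = false;
while (!$done) {
    $done = do_chunk_of_work(); // hypothetical helper doing one slice of the job
    echo ' ';                   // print something small...
    flush();                    // ...and push it to the client right away
}
// ...then print the full HTML output at the end as usual
?>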
Edit - More details:
The Apache access log shows status code 200.
The TimeOut directive is set; its value is 60.
php.ini: max_execution_time is set to 30 secs.
Client and server are on different machines, with a direct connection and no proxies in between. (Edit 2: the ISP routes all requests through its proxy.)
Apache is standalone.
On the software side,
What status code is logged in access.log?
Do you (perchance) have a Timeout directive in your httpd.conf (or inside any other files that may be included from httpd.conf)?
What is max_execution_time configured to be in php.ini?
Is your Apache being used as a reverse-proxy, or is it stand-alone?
On the network side,
Are the server and your client (browser PC) on the same machine, or is there a proxy, firewall or router in-between?
Related
How do I increase the Apache timeout directive in .htaccess? I have a LONG $_POST['script'] that probably takes a user 10 minutes to fill in all the data. The problem is that if it takes too long, the page times out or something and it goes to a "webpage not found" error page. Would increasing the Apache timeout directive in .htaccess be the answer I'm looking for? I guess it's set to 300 seconds by default, but I don't know how to increase that, or whether that's even what I should do... Either way, how do I increase the default time? Thank you.
If you have long-running server-side code, I don't think it actually falls into a 404 as you said ("it goes to a webpage is not found error page"); the browser should report a request timeout error instead.
You may do two things:
Increase the timeout in the CGI/server-side engine:
PHP : http://www.php.net/manual/en/info.configuration.php#ini.max-execution-time - default is 30 seconds
In php.ini:
max_execution_time = 60
Increase apache timeout - default is 300 (in version 2.4 it is 60).
In your httpd.conf (in server config or vhost config)
TimeOut 600
Note that the first setting allows your PHP script to run longer; it will not interfere with the network timeout. The second setting modifies the maximum amount of time the server will wait for certain events before failing a request.
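If you cannot edit php.ini (e.g. on shared hosting), the PHP limit can often also be raised per directory from .htaccess, assuming PHP runs as mod_php and overrides are allowed (the value here is just an example):
php_value max_execution_time 600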
Sorry, I'm not sure whether you are using PHP for server-side processing, but if you provide more info I can be more precise.
Just in case this helps anyone else:
If you're going to be adding the TimeOut directive and your website uses multiple vhosts (e.g. one for port 80, one for port 443), don't forget to add the directive to all of them!
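For example, a minimal sketch (the ServerName values are placeholders and the rest of each vhost is omitted):
<VirtualHost *:80>
    ServerName example.com
    TimeOut 600
</VirtualHost>
<VirtualHost *:443>
    ServerName example.com
    TimeOut 600
</VirtualHost>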
This solution is for Litespeed Server (Apache as well)
Add the following to your .htaccess (mod_rewrite must be enabled, hence the RewriteEngine line):
RewriteEngine On
RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]
Litespeed reference
I have some cron jobs set up on Linux through wget; those jobs run once every 24 hours. All the jobs basically call APIs, pull the data, and store it in the database. Now the issue is that some API calls are very slow and take a long time to respond, which eventually ends up producing the error below.
--2017-07-24 06:00:02--  http://wwwin-cam-stage.cisco.com/cron/mh.php
Resolving wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)... 171.70.100.25
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying.

--2017-07-24 06:05:03--  (try: 2)  http://wwwin-cam-stage.cisco.com/cron/mh.php
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying.

--2017-07-24 06:10:05--  (try: 3)  http://wwwin-cam-stage.cisco.com/cron/mh.php
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/html]
Saving to: ‘mh.php.6’

0K                                                       0.00 =0s

2017-07-24 06:14:58 (0.00 B/s) - ‘mh.php.6’ saved [0/0]
Though on the third try it returned 200 OK, the actual data gets messed up because the first and second tries timed out.
How can I change the timeout settings to an unlimited or very high value, so the job completes in one try without errors like "Connection reset by peer"?
In the case of wget, you can set the network timeout explicitly, e.g.:
wget --timeout 10 http://url
EDIT
Or
If you are asking about the keep-alive behaviour of the Linux machine, this might help.
On Red Hat Linux, modify the following kernel parameter by editing the /etc/sysctl.conf file, then restart the network daemon (/etc/rc.d/init.d/network restart).
"Connection reset by peer" is the TCP/IP equivalent of slamming the
phone back on the hook. It's more polite than merely not replying,
leaving one hanging. But it's not the FIN-ACK expected of the truly
polite TCP/IP converseur.
Code:
# Decrease the default value for tcp_keepalive_time (in /etc/sysctl.conf the key needs the full prefix)
net.ipv4.tcp_keepalive_time = 1800
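After editing /etc/sysctl.conf, the new value can also be applied immediately without a restart (as root):
sysctl -p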
EDIT
-T seconds
‘--timeout=seconds’
Set the network timeout to seconds seconds. This is equivalent to specifying ‘--dns-timeout’, ‘--connect-timeout’, and ‘--read-timeout’, all at the same time.
When interacting with the network, Wget can check for timeout and abort the operation if it takes too long. This prevents anomalies like hanging reads and infinite connects. The only timeout enabled by default is a 900-second read timeout. Setting a timeout to 0 disables it altogether. Unless you know what you are doing, it is best not to change the default timeout settings.
All timeout-related options accept decimal values, as well as subsecond values. For example, ‘0.1’ seconds is a legal (though unwise) choice of timeout. Subsecond timeouts are useful for checking server response times or for testing network latency.
‘--dns-timeout=seconds’
Set the DNS lookup timeout to seconds seconds. DNS lookups that don’t complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries.
‘--connect-timeout=seconds’
Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.
‘--read-timeout=seconds’
Set the read (and write) timeout to seconds seconds. The “time” of this timeout refers to idle time: if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted. This option does not directly affect the duration of the entire download.
Of course, the remote server may choose to terminate the connection sooner than this option requires. The default read timeout is 900 seconds.
Source Wget Manual.
See this wget Manual page for more information.
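Putting that together for the cron job in the question, something like this would disable wget's own network timeout and avoid the retries that mangled the data (the URL is the one from the question; whether the resets actually come from a client-side timeout is an assumption):
wget --timeout=0 --tries=1 http://wwwin-cam-stage.cisco.com/cron/mh.php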
I am running a websockets server using https://github.com/ghedipunk/PHP-Websockets/blob/master/websockets.php on an Ubuntu 16 box with PHP7
After 256 users connect to the websocket, it stops taking connections and I can't figure out why. In the client, I get a 1006 error code (connection was closed abnormally (locally) by the browser implementation) and no further information. The websockets request doesn't appear to make it to the websockets server (which normally echoes "Client Connected" right after a socket connection is made).
In the connect() function, one of the things I do is echo the count of the number of users, sockets and overall memory usage to the log. This problem occurs whenever the user count hits 256 (at which point the socket count is 257 and memory usage around 4Mb). The fact that it happens at 256 makes me think that a limit is being hit somewhere, but I can't find that limit. If I restart the websockets server, everything works fine again.
From my investigation so far, I have tried and checked:
ulimit (says it's unlimited)
MySQL connection limit (was set to default, now is 1000, but that didn't help)
Increased the PHP memory limit (because why not, just to see)
SOMAXCONN is set to 128, so I don't think this is the problem, but I would have to recompile PHP to test it. I haven't tried this yet.
Apache: The message I get when the problem occurs is: [proxy:error] [pid 16785] (111)Connection refused: AH00957: WS: attempt to connect to 10.0.0.240:9000 (websockets.mydomain.com) failed, which doesn't tell me much about anything. Apache is running MPM prefork and I have increased the spare servers and MaxRequestWorkers
I am open to any suggestions as to where to look next or how to get more detail out of the "Connection Refused" error log from apache!
Thanks
It's Apache that's holding you up. Try setting the following in your conf file...
MaxClients 512
ServerLimit 512
(you must set both)
Of course, you can use whatever numbers work for you. In mpm-prefork, you should be able to go to 20,000 but that really shouldn't be necessary.
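Note that on Apache 2.4 MaxClients was renamed to MaxRequestWorkers, so the prefork equivalent would look roughly like this (512 is just the example value from above):
<IfModule mpm_prefork_module>
    ServerLimit        512
    MaxRequestWorkers  512
</IfModule>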
I'm currently writing a PHP script which accesses a CSV file on a remote server, processes the data, then writes it to the local MySQL database. Because there is so much data to process and insert into the database (50,000 lines), the script takes longer than 60 seconds to run. The problem I have is that the script times out after 60 seconds.
To make sure it's not a MySQL issue, I created another script that enters an infinite loop, and it too times out at 60 seconds.
I have tried increasing/changing the following settings on the Ubuntu server but it hasn't helped:
max_execution_time
max_input_time
mysql.connect_timeout
default_socket_timeout
the TimeOut value in the apache2.conf file.
Could it possibly be an issue because I'm accessing the PHP file from a web browser? Do web browsers have timeout limits?
Any help would be appreciated.
The simplest and least intrusive way to get over this limit is to add this line to your script:
ini_set('max_execution_time', -1);
That way you are only amending the execution time for this script, and not for all PHP scripts, which would be the case if you amended either of the two php.ini files.
When you were trying to amend the php.ini file, I would guess you were amending the wrong one; there are two, one used only by the PHP CLI and one used by PHP running with Apache.
For future reference, to find the actual file used by PHP under Apache, just do:
<?php
phpinfo();
?>
And look for Loaded Configuration File
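From the command line, php --ini lists the ini files as well, though that shows the CLI configuration, which may differ from the one Apache loads:
php --ini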
I finally worked out the reason the request times out. The problem lies with having virtual server hosting.
The request from the web browser is sent to the hosting server which then directs the request to the virtual server (acts like a separate server). Because the hosting server doesn't get a response back from the virtual server after 60 seconds, it times out and sends a response back to the web browser saying exactly this. Meanwhile, the virtual server is still processing the script.
When the virtual server finally finishes processing the script, it is too late as the hosting server has already returned a timeout error to the front-end user.
Because the hosting server is used to host many virtual servers (for multiple different users), it is generally not possible to change the timeout settings on this server.
So, final verdict: The timeout error cannot be avoided with virtual hosting. If this is a serious issue, you may need to look into getting dedicated server hosting.
Michael,
Your problem probably comes from the PHP side, not from the web browser accessing it.
Did you try putting the following lines at the beginning of your PHP file ?
set_time_limit(0);
ini_set ('max_execution_time', 0);
PHP has two configuration files, one for Apache and one for the CLI, which explains why you don't get a timeout when running the script on the command line. The phpinfo() you gave me shows max_execution_time at 6000.
See the set_time_limit() documentation.
For CentOS8, the below settings worked for me:
sed -i 's/default_socket_timeout = 60/default_socket_timeout = 6000/g' /etc/php.ini
sed -i 's/max_input_time = 60/max_input_time = 30000/g' /etc/php.ini
sed -i 's/max_execution_time = 30000/max_execution_time = 60000/g' /etc/php.ini
echo "Timeout 6000" >> /etc/httpd/conf/httpd.conf
Restarting apache the usual way isn't good enough anymore. You have to do this now:
systemctl restart httpd php-fpm
Synopsis:
If the script (PHP function) takes 61 seconds or more, you will get a gateway timeout error. The term "gateway" here refers to the PHP worker, meaning the worker timed out because that's how it was configured. It has nothing to do with networking.
php-fpm is a new service in CentOS 8. From what I gathered from the internet (I have not verified this myself), it basically keeps executables (workers) running in the background, waiting for you to give them PHP scripts to execute. The time saving is that the executables are already running, so you suffer no start-up penalty.
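If php-fpm itself is the worker cutting requests off, the usual knob is request_terminate_timeout in the pool configuration (on CentOS 8 typically /etc/php-fpm.d/www.conf; the path and value below are assumptions for illustration), followed by the same restart as above:
request_terminate_timeout = 6000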
My PHP script executes a program on my server (IIS 7.5).
It takes about 10 minutes to execute, but I get the error below in the browser.
How can I resolve this?
Error:
HTTP Error 500.0 - Internal Server Error
C:\php\php-cgi.exe - The FastCGI process exceeded configured request timeout
Module FastCgiModule
Notification ExecuteRequestHandler
Handler FastCGI
Error Code 0x80070102
php.ini settings:
fastcgi.impersonate = 1
fastcgi.logging = 0
cgi.fix_pathinfo=1
cgi.force_redirect = 0
max_execution_time = 0
upload_max_filesize = 20M
memory_limit = 128M
post_max_size = 30M
C:\Windows\System32\inetsrv\config\applicationHost.config file settings for FastCGI:
<fastCgi>
<application
fullPath="C:\php\php-cgi.exe" activityTimeout = "3600" requestTimeout = "300" />
</fastCgi>
This is a quick explanation of what is going on. When you use a CGI/FastCGI configuration for PHP, the web server (in this case IIS) routes requests that require PHP processing to the PHP process, which runs separately from the web server.
Generally, to prevent connections from getting stuck open and waiting (in case the PHP process happens to crash), the web server will only wait a set amount of time for the PHP process to return a result (usually 30-60 seconds).
In your configuration you have this:
requestTimeout = "300"
300 seconds = 5 minutes. IIS cancels the request because yours takes 10 minutes to complete. The simple fix is to increase the timeout to 600 or greater.
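For example, taking the <fastCgi> block from the question and just raising the limit (900 here is an arbitrary illustrative value, i.e. 15 minutes):
<fastCgi>
    <application fullPath="C:\php\php-cgi.exe" activityTimeout="3600" requestTimeout="900" />
</fastCgi>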
Now, running a script for 10 minutes over an HTTP request is not a good design pattern. Generally, HTTP works best with short-lived requests, because timeouts can exist in any part of the chain (server, proxy, or client) and the script could be accidentally interrupted.
So, when a web application has a long-running job like this, the best way to run it is via the console or a job queue.
There is one more setting I found from this source that helped me with the same problem I was having.
Copied and pasted:
Open Server Manager
At Server Level (not default Web site)
Double click FastCGI Settings
open PHP.EXE listed there
Monitor changes to file php.ini
activity timeout default is 60s - change to 600s or whatever
This can be solved by adjusting the FastCGI configuration.
Go to "C:\Windows\System32\inetsrv\" and edit the "fcgiext.ini" file:
[PHP]
ExePath=C:\xampp\php\php-cgi.exe
MonitorChangesTo=C:\xampp\php\php.ini
ActivityTimeout=3600
IdleTimeout=3600
RequestTimeout=3600
Make sure to place ActivityTimeout, IdleTimeout and RequestTimeout inside the [PHP] section as shown above.