How to change URL timeout settings on a Linux web server - PHP

I have some cron jobs set up on Linux through wget; those jobs run once every 24 hours. All the jobs basically call APIs, pull data and store it in the database. The issue is that some API calls are very slow and take a long time to respond, which eventually ends up producing the error below.
--2017-07-24 06:00:02--  http://wwwin-cam-stage.cisco.com/cron/mh.php
Resolving wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)... 171.70.100.25
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying.

--2017-07-24 06:05:03--  (try: 2)  http://wwwin-cam-stage.cisco.com/cron/mh.php
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying.

--2017-07-24 06:10:05--  (try: 3)  http://wwwin-cam-stage.cisco.com/cron/mh.php
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/html]
Saving to: ‘mh.php.6’

0K                                                       0.00 =0s

2017-07-24 06:14:58 (0.00 B/s) - ‘mh.php.6’ saved [0/0]
Though the third try returned 200 OK, the actual data ends up wrong because the first and second tries timed out.
How can I change the URL timeout settings to an unlimited or very high value, so that the job completes in one try and without errors like
(Connection reset by peer)?

wget --timeout 10 http://url
This can be used in the case of wget; --timeout sets the network timeout in seconds (see the manual excerpt below).
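For the cron job in the question, a larger value, or 0 to disable the timeout altogether (see the manual excerpt below), combined with a limited number of retries could look like this sketch. The 06:00 schedule and the -O /dev/null redirection are assumptions on my part, the latter added so repeated runs don't pile up mh.php.N copies:
# crontab entry: run daily at 06:00, no wget timeout, at most 2 tries, discard the response body
0 6 * * * wget --timeout=0 --tries=2 -O /dev/null http://wwwin-cam-stage.cisco.com/cron/mh.php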
EDIT
Or
If you are asking about the keep-alive behaviour of the Linux machine, this might help.
On Red Hat Linux, modify the following kernel parameter by editing the /etc/sysctl.conf file, and restart the network daemon (/etc/rc.d/init.d/network restart).
"Connection reset by peer" is the TCP/IP equivalent of slamming the
phone back on the hook. It's more polite than merely not replying,
leaving one hanging. But it's not the FIN-ACK expected of the truly
polite TCP/IP converseur.
Code:
# Decrease the default value for tcp_keepalive_time
net.ipv4.tcp_keepalive_time = 1800
EDIT
-T seconds
‘--timeout=seconds’
Set the network timeout to seconds seconds. This is equivalent to specifying ‘--dns-timeout’, ‘--connect-timeout’, and ‘--read-timeout’, all at the same time.
When interacting with the network, Wget can check for timeout and abort the operation if it takes too long. This prevents anomalies like hanging reads and infinite connects. The only timeout enabled by default is a 900-second read timeout. Setting a timeout to 0 disables it altogether. Unless you know what you are doing, it is best not to change the default timeout settings.
All timeout-related options accept decimal values, as well as subsecond values. For example, ‘0.1’ seconds is a legal (though unwise) choice of timeout. Subsecond timeouts are useful for checking server response times or for testing network latency.
‘--dns-timeout=seconds’
Set the DNS lookup timeout to seconds seconds. DNS lookups that don’t complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries.
‘--connect-timeout=seconds’
Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.
‘--read-timeout=seconds’
Set the read (and write) timeout to seconds seconds. The “time” of this timeout refers to idle time: if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted. This option does not directly affect the duration of the entire download.
Of course, the remote server may choose to terminate the connection sooner than this option requires. The default read timeout is 900 seconds.
Source: the Wget manual.
See this wget Manual page for more information.

Related

PHP Websockets server stops accepting connections after 256 users

I am running a websockets server using https://github.com/ghedipunk/PHP-Websockets/blob/master/websockets.php on an Ubuntu 16 box with PHP 7.
After 256 users connect to the websocket, it stops accepting connections and I can't figure out why. On the client I get a 1006 error code (connection was closed abnormally (locally) by the browser implementation) and no further information. The websockets request doesn't appear to make it to the websockets server (which normally echoes "Client Connected" right after a socket connection is made).
In the connect() function, one of the things I do is echo the count of the number of users, sockets and overall memory usage to the log. This problem occurs whenever the user count hits 256 (at which point the socket count is 257 and memory usage is around 4 MB). The fact that it happens at 256 makes me think that a limit is being hit somewhere, but I can't find that limit. If I restart the websockets server, everything works fine again.
From my investigation so far, I have tried and checked:
ulimit (says it's unlimited)
MySQL connection limit (was set to default, now is 1000, but that didn't help)
Increased the PHP memory limit (because why not, just to see)
SOMAXCONN is set to 128, so I don't think this is the problem, but I would have to recompile PHP to test it. I haven't tried this yet.
Apache: The message I get when the problem occurs is: [proxy:error] [pid 16785] (111)Connection refused: AH00957: WS: attempt to connect to 10.0.0.240:9000 (websockets.mydomain.com) failed, which doesn't tell me much about anything. Apache is running MPM prefork and I have increased the spare servers and MaxRequestWorkers
I am open to any suggestions as to where to look next or how to get more detail out of the "Connection Refused" error log from apache!
Thanks
It's Apache that's holding you up. Try setting the following in your conf file...
MaxClients 512
ServerLimit 512
(you must set both)
Of course, you can use whatever numbers work for you. With mpm_prefork you should be able to go up to 20,000, but that really shouldn't be necessary.
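As a rough illustration only (the surrounding values are assumptions, not taken from the asker's configuration), the prefork section could end up looking something like this; on Apache 2.4 the directive is called MaxRequestWorkers, MaxClients being the older 2.2 name:
<IfModule mpm_prefork_module>
    StartServers          10
    MinSpareServers       10
    MaxSpareServers       20
    ServerLimit          512
    # MaxClients on Apache 2.2
    MaxRequestWorkers    512
    MaxRequestsPerChild 4096
</IfModule>
ServerLimit matters because its prefork default is 256, which silently caps MaxRequestWorkers/MaxClients and matches the 256-connection ceiling you are seeing; that is why both directives must be set.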

502 Bad gateway error exact after 30 seconds

On my page there is a script which takes a long time to execute fully. While it is processing, I get a 502 Bad Gateway error after 30 seconds. I have searched for this and it seems to be related to the KeepAlive feature of Apache. I've tried a few things to keep the connection alive, such as:
set_time_limit(-1);
header("Connection: Keep-Alive");
header("Keep-Alive: timeout=600, max=100");
ini_set('max_execution_time', 30000);
ini_set('memory_limit', '-1');
I have also called an Ajax function that hits a page on the server every 5 seconds, but nothing has worked for me.
I'm using PHP + MySQL + Apache on a Linux server.
If you are using some type of hosting, it is quite possible that between your client and your server there is a proxy or a load balancer with its connection time limit set to 30 seconds. It's quite a common setup.
Try to investigate the logs to find out which service returns the 502.
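One way to narrow it down is to time the request from another machine and see whether the 502 arrives at almost exactly 30 seconds. A minimal sketch using PHP's cURL extension (the URL is a placeholder for the slow page):
<?php
// Time the request and report the status code; a 502 at ~30.0 s points at a
// proxy/load-balancer limit rather than at the PHP script itself.
$ch = curl_init('http://example.com/slow-script.php');   // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 0);                     // no client-side timeout
curl_exec($ch);
printf("HTTP %d after %.1f seconds\n",
    curl_getinfo($ch, CURLINFO_HTTP_CODE),
    curl_getinfo($ch, CURLINFO_TOTAL_TIME));
curl_close($ch);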

Browser shows time out while Server process is still running

I am having following problem:
I am running a BIG memory process, but I have divided the memory load into smaller chunks, so there is no CPU time-out issue.
On the server I am creating .xml files of around 100 KB each, and 100+ of them will be created.
The main problem is that the browser shows a response time-out, and IE shows a .php file download message just above the status bar.
During this, the server-side process is still running and continuously creating .xml files in incremental order, so there is no issue there.
I have the following php.ini configuration:
max_execution_time = 10000 ; Maximum execution time of each script, in seconds
max_input_time = 10000 ; Maximum amount of time each script may spend parsing request data
memory_limit = 2000M ; Maximum amount of memory a script may consume
; Maximum allowed size for uploaded files.
upload_max_filesize = 2000M
I am viewing my site in IE, and I am using ZSCE with PHP 5.3.
Can anybody point me in the right direction on this issue?
Edit:
I have uploaded an image of the time-out and of the prompt asking to download the .php file.
Edit 2:
Let me briefly explain my execution flow:
I have one PHP file with objects of class hierarchies, which starts by executing Function1() from each class hierarchy.
I have a class file.
First, Function1() is executed, which contains the logic for creating the XML files in chunks.
Second, Function2() is executed, which displays the output generated by Function1().
Everything is done through the class hierarchies, so I can't terminate the execution of Function1() part-way through; only after it has finished is Function2() called.
Edit 3:
This is specifically for @hakre.
You asked some cross-questions, and I agree with some of your points, but let me describe the issue in more detail.
At first I was loading 100+ MB of XML files at a time; that's why the memory on my local setup was exhausted, everything on the machine hung, and the CPU was using most of its resources.
I then divided the big XML files into smaller ones (now I load a single XML file at a time and unload it after use). This saved me from the memory overload and CPU issues on the local setup.
Now my backend process runs with no CPU or memory issues, but the problem is the browser timeout. I even tried cURL, but with my current structure it doesn't seem to fit because of the class hierarchy: I have a set of classes in a hierarchy, and they all execute their Process functions first and only then their Output functions. So until the Process functions have finished, the Output functions don't come into the picture, and that's why the browser shows a timeout.
I also followed the instructions suggested by @vortex and had a little success, but not what I am looking for. The reason I could not implement cURL is that my Process function creates all the required XML files in one go, so it takes too much time before anything can be output to the browser. While the Process function is running, no output can be sent to the client until it has completed.
cURL Output:
URL....: myurl
Code...: 200 (0 redirect(s) in 0 secs)
Content: text/html Size: -1 (Own: 433) Filetime: -1
Time...: 60.437 Start # 60.437 (DNS: 0 Connect: 0.016 Request: 0.016)
Speed..: Down: 7 (avg.) Up: 0 (avg.)
Curl...: v7.20.0
Contents of the test.txt file:
* About to connect() to mylocalhost port 80 (#0)
* Trying 127.0.0.1... * connected
* Connected to mylocalhost (127.0.0.1) port 80 (#0)
> GET myurl HTTP/1.1
Host: mylocalhost
Accept: */*
< HTTP/1.1 200 OK
< Date: Tue, 06 Aug 2013 10:01:36 GMT
< Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8o
< X-Powered-By: PHP/5.3.9-ZS5.6.0 ZendServer
< Set-Cookie: ZDEDebuggerPresent=php,phtml,php3; path=/
< Cache-Control: private
< Transfer-Encoding: chunked
< Content-Type: text/html
<
* Connection #0 to host mylocalhost left intact
* Closing connection #0
Disclaimer: The accepted answer to this question was chosen based on the first small success it gave. The solution from @hakre is also feasible when this type of question occurs. Right now no answer has fully fixed my problem, only partially. Hakre's answer is also the more detailed one for anyone looking for more information about this type of issue.
Assuming you have made all the server-side modifications to dodge a server timeout (I saw pretty much everything explained above), in order to dodge the browser timeout it is crucial that you do something like this:
<?php
set_time_limit(0);        // no PHP execution time limit
error_reporting(E_ALL);
ob_implicit_flush(TRUE);  // flush automatically after every output call
ob_end_flush();           // flush and close the default output buffer
I can tell you from experience that Internet Explorer doesn't have any issues as long as you output some content to it every now and then. I run a 30 GB database update every day (it takes around 2-4 hours), and Opera seems to be the only browser that ignores the content output.
If you don't set ob_implicit_flush, you need to do an ob_flush() after every piece of content.
References
ob_implicit_flush
ob_flush
If you don't use ob_implicit_flush at the top of your script as I wrote earlier, you need to do something like:
<?php
echo 'dummy text or execution stats';
ob_flush();
within your execution loop
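Putting the two snippets together, a minimal sketch of such a long-running script is shown below; generate_xml_chunk() and the chunk count are hypothetical stand-ins for the asker's Function1() work, not something from the original answer:
<?php
set_time_limit(0);        // no PHP execution time limit
ignore_user_abort(true);  // keep going even if the client disconnects
ob_implicit_flush(true);
ob_end_flush();

$totalChunks = 100;                              // hypothetical number of XML files
for ($i = 1; $i <= $totalChunks; $i++) {
    generate_xml_chunk($i);                      // hypothetical helper: writes one ~100 KB XML file
    echo "chunk $i of $totalChunks done<br>\n";  // periodic output keeps the browser waiting
    flush();                                     // push it to the client straight away
}
echo "all done";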
1. I am running a BIG memory process, but I have divided the memory load into smaller chunks, so there is no CPU time-out issue.
Now that's a wild guess. How did you find out it was a CPU time-out issue in the first place? Did you even? If yes, what does your test show now? If not, how do you now test that this is not a time-out issue?
Although you state that a certain issue won't occur, you don't prove it, and many questions remain open. That invites guessing, which is counter-productive for trouble-shooting (which is what you are doing here).
What you write here only means that you wrote code to chunk the memory use; however, that is not a test for CPU time-out issues. Writing the code is one part, testing it is the other. Don't mix the two, and don't draw wild conclusions. Issues are established by tests, otherwise they didn't happen.
So much for your first point, just to show you that when troubleshooting you should look for facts (monitor, test, profile, step-debug), not run on assumptions. This is crucial, otherwise you look in the wrong places and ask the wrong questions.
From what you describe about how the client (browser) behaves, this is not a time-out issue per se. The problem you've got is that the gap between the header response and the body response is taking too long for the taste of your browser. One browser assumes a time-out (a boundary value has been triggered, and this looks more correct to me), and the other browser assumes something is still coming, so why not save it.
So you merely have a processing issue here. Please consult the manual of your internet browsers (HTTP clients) for the configuration values you can change to alter this behaviour. E.g. monitor with a curl request on the command line how long the request actually takes, then configure your browser not to time out when connecting to that server within the amount of time you just measured. For example, if you're using Internet Explorer: http://www.ehow.com/how_6186601_change-internet-timeout-options.html or if you're using Mozilla Firefox: http://forums.mozillazine.org/viewtopic.php?f=7&t=102322&start=0
As you didn't show any code on the server side, I assume you want to solve this problem with client settings. Curl will help you measure the number of seconds such a request takes. Use the -v (verbose) switch to obtain detailed information about the request.
In case you don't want to solve this on the client, curl will still help you measure important data and easily reproduce any underlying server-related timing issue. So you should go for curl on the command line in any case, especially as looking into the response headers might reveal what triggers the (again) esoteric Internet Explorer behaviour. Again, the -v switch reveals both the request and the response headers.
If you'd like to automate such tests with a PHP script, that is also possible with the PHP cURL extension. This has been outlined in:
Php - Debugging Curl
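As a rough sketch of such an automated test with the PHP cURL extension (the URL mirrors the anonymised one from the output above and is only a placeholder), measuring both the time to first byte and the total time:
<?php
// Roughly what `curl -v` plus timing gives you, from a PHP script.
$ch = curl_init('http://mylocalhost/myurl');     // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);          // keep the response headers in the result
curl_setopt($ch, CURLOPT_TIMEOUT, 0);            // no client-side time limit
$raw = curl_exec($ch);

printf("HTTP %d\n", curl_getinfo($ch, CURLINFO_HTTP_CODE));
printf("time to first byte: %.3f s\n", curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME));
printf("total time:         %.3f s\n", curl_getinfo($ch, CURLINFO_TOTAL_TIME));

// The response headers are everything up to the first blank line.
echo substr($raw, 0, strpos($raw, "\r\n\r\n")), "\n";
curl_close($ch);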
The problem is with your web server, not the browser.
If you're using Apache, you need to adjust the Timeout value in httpd.conf or in your virtual host configs.
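For instance (the value is only an illustration, not a recommendation):
# httpd.conf: allow up to 10 minutes of silence before Apache gives up on the request
Timeout 600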
You have 3 pages:
Process - creates the XML files and then updates a database value saying that the process is done
A PHP page that returns {true} or {false} based on the status of the process-completion database value
An Ajax front end, polling page 2 every few seconds to check whether the process is done or not (a minimal sketch of page 2 follows below)
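A rough sketch of page 2, assuming a jobs table with a done flag; the table, column and credentials are made up for illustration:
<?php
// status.php - hypothetical polling endpoint for the Ajax front end
$db = new mysqli('localhost', 'user', 'password', 'mydb');   // placeholder credentials
$result = $db->query('SELECT done FROM jobs WHERE id = 1');  // hypothetical jobs table
$row = $result->fetch_assoc();
echo ($row && $row['done']) ? 'true' : 'false';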
Long Polling
I have had this issue several times while reading a large CSV file and putting it into a database. I solved it by dividing the reading and inserting into smaller parts: I created a new table to log how much data had been read and inserted, and the page then reloads itself and continues from that position. You could do the same by creating one XML file per attempt, then reloading the page and starting with the next one. This way the memory used by the browser is refreshed.
Hope it will help.
Is it possible to send some output to the browser from the script while it's still processing, even white space? If so, do it; it should reset the timeout counter.
If that's not possible, you have to increase the timeout of IE in the registry:
HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings
You need ReceiveTimeout; if it's not there, create it as a DWORD and set the value in milliseconds.
What is a "CPU time out issue"?
The right way to solve the problem is to run the heavy stuff asynchronously, in a seperate session group (not the webserver process tree).
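One common way to do that from PHP on Linux is sketched below; worker.php and the log path are hypothetical placeholders for the heavy XML-generation job, and the page itself returns immediately:
<?php
// Detach the heavy job from the web server's process tree:
// setsid gives it its own session group, nohup plus the trailing & keep it running after we return.
exec('setsid nohup php /path/to/worker.php > /tmp/worker.log 2>&1 &');
echo 'Job started; poll a status page to see when it has finished.';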
Try including set_time_limit(0); in your PHP script page.
The following links might help you:
http://php.net/manual/en/function.set-time-limit.php
http://php.net/manual/en/function.ignore-user-abort.php

Could someone explain a little about this statement about the mysqli close function?

Listed here on the mysqli documentation site, one of the comments says
You should always use the mysqli_kill() function before mysqli_close() to actually close and free up the TCP socket being used by PHP. Neither garbage collection after script execution nor mysqli_close() kills the TCP socket on its own. The socket would otherwise remain in a 'wait' state for approximately 30 seconds, and any additional page loads/connection attempts would only add to the total number of open TCP connections. This wait time does not appear to be configurable via PHP settings.
Also, as of this version, mysqli-created links cannot be "deactivated", and will continue to accumulate in process memory until the PHP server or process is restarted, essentially making mysqli.max_links = -1 required.
Could someone explain what this means, whether mysqli.max_links should be set (and if so, how), and whether I should be using mysqli_kill()?
I don't find that rational; MySQL can be connected to via a socket on localhost.
Be careful using mysqli::kill before mysqli::close.
Killing the thread before actually closing the connection will leave the connection open! And depending on your max_connections and max_user_connections (by default the same), this could result in a "Max connections reached for ** user" message.
From: http://www.php.net/manual/en/mysqli.kill.php
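In other words, the usual pattern is simply to close the connection when you are done; a minimal sketch with placeholder credentials:
<?php
$db = new mysqli('localhost', 'user', 'password', 'mydb');  // placeholder credentials
// ... run queries ...
$db->close();   // frees the connection; no mysqli_kill() needed in normal use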

Apache/PHP closes connection after short time (12 secs)

I am getting a peculiar problem: Apache closes the connection after 12 seconds or so, which leads to a "connection reset by peer" message in the browser.
I am on Linux CentOS 5, using Apache 2/PHP 5.x/mod_gzip (PHP with eAccelerator).
I tested some variations:
Usually, I print all the HTML output as the last step. The connection is always closed when the processing time goes above 12 seconds.
If the print happens sooner (< 12 secs), the connection is not closed and I get the page in the browser.
If I print something regularly (every second or so), the connection is not closed even if the processing time goes above 12 secs.
What could be the possible issue here? Any suggestions on fixing this issue?
Edit - More details:
The Apache access log shows status code 200.
A Timeout directive is set; the value is 60.
php.ini: max_execution_time is set to 30 secs.
Client and server are on different machines. It is a direct connection (no proxies in between). Edit 2: The ISP routes all requests through its proxy.
Apache is standalone.
On the software side,
What status code is logged in access.log?
Do you (per-chance) have a Timeout directive in your httpd.conf (or inside any other files that may be included from httpd.conf)?
What is max_execution_time configured to be in php.ini?
Is your Apache being used as a reverse-proxy, or is it stand-alone?
On the network side,
Are the server and your client (browser PC) on the same machine, or is there a proxy, firewall or router in-between?
