I would like to increase the timeout of one PHP site on nginx so I don't get "504 Gateway timeout" errors. I've tried set_time_limit but it doesn't work. I've found some solutions based on modifying configuration files (e.g. Prevent nginx 504 Gateway timeout using PHP set_time_limit()), but in my case I shouldn't modify those files. Is there another way?
Thanks for any help.
First, you have to edit your Nginx configuration files to change fastcgi_read_timeout. There is no getting around that, you have to change that setting.
I'm not sure why you say "I shouldn't modify these files in my case". I suspect the reason is that you want to change the timeout for one of your websites, but not the others. The best way I've found to accomplish that is to go ahead and set fastcgi_read_timeout to a very long timeout (the longest you would want for any of your sites).
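For example, in the location block that hands PHP requests to PHP-FPM, it might look something like this (a sketch; the socket path and the 600-second value are assumptions, not your actual config):
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;
    # the longest timeout you would want for ANY of your sites
    fastcgi_read_timeout 600s;
}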
But you won't really be counting on that timeout; instead, let PHP handle the timeouts. Edit your PHP's php.ini and set max_execution_time to a reasonable amount of time that you want to use for most of your websites (maybe 30 seconds?).
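In php.ini that is a single line (30 is just an illustration):
; default execution cap for every site served by this PHP
max_execution_time = 30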
Now, to use a longer timeout for a particular website or web page, call the set_time_limit() function at the beginning of any PHP script you want to allow to run longer. That's really the only easy way to have a different setting for some websites but not others on an Nginx / PHP-FPM setup. Other ways of changing the timeout are difficult to configure because of the way PHP-FPM shares pools of PHP processes among multiple websites on the server.
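For example, at the top of the one script that legitimately needs longer (300 seconds is only an illustration):
<?php
// Allow this particular script 5 minutes instead of the php.ini default.
set_time_limit(300);
// ... long-running work ...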
I wrote some code to zip 400 files from my website, but when I run it, it takes a lot of time (and that's OK); the problem is that if it takes too long, the PHP script stops working.
How am I supposed to zip 4000 files without my website crashing? Maybe I need to create a progress bar?
Hmm.. help? :)
Long work (like zipping 4000 files, sending emails, ...) should not be done in PHP scripts that will keep the user's browser waiting.
Your user may cancel loading the page, and even if they don't, it's not great to have an Apache thread tied up for a long time.
Setting up a pool of workers outside of Apache to do this kind of work asynchronously is usually the way to go. Have a look at tools like RabbitMQ and Celery.
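As a rough sketch of the hand-off, here is what enqueueing a zip job could look like using the php-amqplib client for RabbitMQ (the library choice, queue name, and payload are my assumptions; a separate worker process would consume the queue and build the archive):
<?php
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Connect to a local RabbitMQ broker (credentials are placeholders).
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// Declare a durable queue for zip jobs.
$channel->queue_declare('zip_jobs', false, true, false, false);

// Publish the job and return to the user immediately.
$payload = json_encode(array('directory' => '/path/to/files'));
$channel->basic_publish(new AMQPMessage($payload), '', 'zip_jobs');

$channel->close();
$connection->close();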
In a PHP installation, you have the max_execution_time directive, which is defined in your php.ini file. This directive sets the maximum execution time of any of your PHP scripts, so you might want to increase it or set it to infinite (no time limit). You have two ways of doing that: you can modify your php.ini, but that's not always available. You can also use the set_time_limit() function or the ini_set() function. Note that depending on your hosting service, you may not be able to do any of this.
I think you should look at PHP's set_time_limit() or the max_execution_time setting:
ini_set('max_execution_time', 300);
// or if safe_mode is off in your php.ini
set_time_limit(0);
Try to find reasonable settings for zipping your whole bundle, and set the values accordingly.
The PHP interpreter's execution time is limited, by default, to a certain value; that's why it stops suddenly. If you change that setting at the beginning of your PHP script, it will work better. Try it!
You could also invoke the php executable in CLI mode to handle that process, using functions like shell_exec().
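A minimal sketch of that idea (the worker script path is a placeholder):
<?php
// Launch the zip work as a background CLI process so the web request
// returns immediately; stdout/stderr are discarded.
shell_exec('php /path/to/zip_worker.php > /dev/null 2>&1 &');
echo 'Zipping started; check back in a few minutes.';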
We are trying to implement a long-polling-based notification service in our company's ERP, similar to Facebook notifications.
Technologies used:
PHP with the timeout set to 60 seconds and a 1-second sleep in each iteration of the loop.
jQuery for AJAX handling.
Apache as web server.
After nearly a month of coding, we went to production. A few minutes after deployment we had to roll everything back. It turned out that our server (8 cores) couldn't handle long requests from 20 employees using ~5 browser tabs each.
For example: a user opens 3 tabs with our ERP, each with one long-polling AJAX request. Opening a 4th tab is impossible - it hangs until one of the previous 3 is killed (and its AJAX request therefore stops).
'Apache limitations,' we thought, so we went googling. I found some info about Apache's MPM modules and configs, so I gave it a try. Our server uses the prefork MPM, as apachectl -l showed us. So I changed a few lines in the config to look something like this:
<IfModule mpm_prefork_module>
StartServers 1
MinSpareServers 16
MaxSpareServers 32
ServerLimit 50
MaxClients 50
MaxRequestsPerChild 0
</IfModule>
The funny thing is, it works on my local machine with a similar config. On the server, it looks like Apache ignores the config, because with MinSpareServers set to 16, it launches 8 after a restart. We have no idea what to do.
Passerby, in the first comment on my previous post, pointed me in the right direction: check whether we were hitting the browser's limit on concurrent connections to a single server.
As it turns out, each browser has such a limit, and you can't change it (as far as I know).
We came up with a workaround.
Let's assume that I was getting AJAX data from
http://domain.com/ajax
To avoid hitting the browser's connection limit, each long-polling AJAX request connects to a random subdomain, like:
http://31289.domain.com/ajax
http://43289.domain.com/ajax
and so on. There's a wildcard on the DNS server pointing *.domain.com to domain.com, and the subdomain is a unique random number generated by JS in each tab.
For more information, check out this thread.
There were also some problems with AJAX Same-Origin security, but we managed to work around them using appropriate headers on both the JS and PHP sides.
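On the PHP side, the header handling looked roughly like this (a sketch, not our exact code; the domain pattern is an assumption):
<?php
// Echo the Origin header back only if it matches our main domain
// or one of the random numeric subdomains.
$origin = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';
if (preg_match('#^https?://(\d+\.)?domain\.com$#', $origin)) {
    header('Access-Control-Allow-Origin: ' . $origin);
    header('Access-Control-Allow-Credentials: true');
}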
If you want to know more about headers, check it out here on StackOverflow, and here on Mozilla Developer's page. Thanks!
I have successfully implemented a LAMP setup with long polling. Two things to keep in mind. First, PHP's internal execution clock on Linux counts execution time, not wall time, so it is not advanced by usleep(); raising max_execution_time should only be needed for rare edge cases where obtaining the data takes longer than normal, or possibly on a Windows setup. In addition, with long polling, bear in mind that once you go over 20+ seconds you are vulnerable to browser timeouts.
Secondly, you will need to verify that your sessions aren't locking things up (if sessions are being used): PHP locks the session file for the duration of a request, so a long-polling request will block every other request from the same session until the lock is released.
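To illustrate both points, a bare-bones long-polling handler might look like this (check_for_updates() is a hypothetical helper; the 20-second cap follows the browser-timeout advice above):
<?php
session_start();
$userId = $_SESSION['user_id']; // assumes the login stored this
// Release the session lock right away, or every other request from
// the same user (other tabs!) will block until this script exits.
session_write_close();

$start = time();
while (time() - $start < 20) {
    $data = check_for_updates($userId); // hypothetical data check
    if ($data !== null) {
        echo json_encode($data);
        exit;
    }
    usleep(250000); // sleep 250 ms; on Linux this does not count
                    // against max_execution_time
}
echo json_encode(array()); // no news; the client re-polls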
Apache really shouldn't have any issue with what you are looking to do, though I will admit that web servers like nginx, or an AJAX-specific web server, may handle concurrent connections better. If you could post your code for the AJAX handler, we might be able to figure out where the problem is.
If you utilize subdomains or, as other threads have suggested, multiple web servers on separate ports, remember that you may encounter JavaScript same-origin security issues.
I say: don't change the Apache config until you encounter an issue and have exhausted all other options. Be careful with PHP sessions, and make sure the AJAX waits for a response before sending another request ;)
I'm troubleshooting a series of recurring errors: WordPress database error MySQL server has gone away for query ...
I think I've found a solution here, but it's a few years old and I want to better understand MySQL's wait_timeout and its relationship to WordPress before I start monkeying with core files or reconfiguring my server. (I'm on a virtual dedicated server, so I do have the option to change wait_timeout on the server.)
I checked by running SHOW VARIABLES; from phpMyAdmin and wait_timeout is currently set to 35. That seems low to me, but I don't fully understand what it does. I'm considering changing it to 600.
My main question is whether this is a responsible thing to do or not. But I think that broader question can be divided into smaller parts:
1. Do I have the option to override this setting with PHP (WordPress)?
2. What is the optimal setting for a medium-to-large WordPress site?
3. Are there any WordPress configuration options or filters that I could use to change the setting without modifying core files?
Thanks.
Basically, wait_timeout is how long MySQL will hold an idle, non-interactive connection open before closing it.
So increasing it to 600 seconds could solve your problem. However, if you set it to 600 seconds and lots of people hit a slow page on your site at the same time, you can reach the point where MySQL starts refusing connections; Apache will then queue requests until it, too, starts refusing them, and your server takes a dive.
My suggestion would be to find out why a single request is taking over 35 seconds, because, to be honest, that seems like a rather long load time for a single page of a blog.
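On question 1: you can raise the timeout for WordPress's own connection without touching my.cnf or core files, for example from a small plugin (a sketch; whether this is wise is exactly the capacity trade-off described above):
<?php
// Raise wait_timeout for this database connection only; every other
// client keeps the server-wide default.
global $wpdb;
$wpdb->query('SET SESSION wait_timeout = 600');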
I'm currently working on an upload script supporting larger uploads (~50 MB) and I have very rapidly run into a problem! I'm using a traditional POST request with a form uploading the file to a temp location, then moving it with PHP. Naturally, I've updated my php.ini file to support somewhat larger-than-default files, and files around 15 MB upload just fine!
The main problem is due to my hosting company. They let scripts time out after 60 seconds, meaning that POST requests taking longer than 60 seconds to complete will die before the temp file reaches the PHP script, and this naturally yields an internal server error.
Not being able to crank up the timeout on the server (after heated debates), I'm considering my options. Is there a way to bump the request, or somehow refresh it to notify the server and reset the timer? Or are there alternative upload methods that don't time out?
There are a few things you could consider. Each has a cost, and you'll need to determine which one is least costly.
1. Get a new hosting company. This may be your best solution.
2. Design a rather complex client-side system that breaks the upload into multiple chunks and submits them via AJAX (see the sketch after this list). This is ugly, especially since it is only useful for getting around a host's rule.
I'd really research #1.
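For what option 2 could look like on the receiving end, here is a rough sketch (the parameter names and paths are assumptions, and locking, validation, and cleanup are all left out):
<?php
// Each AJAX request delivers one chunk plus its index; chunks are
// appended in order and the file is finalized after the last one.
$chunkIndex  = (int) $_POST['chunk'];
$totalChunks = (int) $_POST['chunks'];
$uploadId    = basename($_POST['upload_id']); // client-generated id

$partFile = '/tmp/upload_' . $uploadId . '.part';
file_put_contents($partFile, file_get_contents($_FILES['file']['tmp_name']), FILE_APPEND);

if ($chunkIndex === $totalChunks - 1) {
    rename($partFile, '/var/uploads/' . basename($_POST['filename']));
}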
With great difficulty. By far your easiest option is to dump the hard-headed host and pick one that actually lets you be productive. I personally use TSOHost - I've been with them for over a year and a half and have so far had absolutely no reason to complain (not even a slight annoyance).
Are you really sure it's a timeout issue? My first idea: the transfer failed due to a configuration limit set in the web server's php.ini file. You need to change it there, or set it locally in your script.
# find these in the php.ini used by your configuration
memory_limit = 96M
post_max_size = 64M
upload_max_filesize = 64M
Or directly in your script:
ini_set('memory_limit', '96M');
// Note: post_max_size and upload_max_filesize are PHP_INI_PERDIR settings,
// so ini_set() cannot change them at runtime; set them in php.ini,
// .htaccess (mod_php), or a .user.ini file instead.
ini_set('post_max_size', '64M');
ini_set('upload_max_filesize', '64M');
I was asked to help troubleshoot someone's website. It is written in PHP, on a Linux box, using an Apache server and MySQL, and I have never worked with any of these before (except maybe Linux in school).
I got most of the issues fixed (most code is really the same no matter what language it is), however there is still one page that times out when processing huge files. I'm fairly sure the problem is a timeout somewhere, but I have no idea where all the PHP timeouts would be.
I have adjusted max_execution_time, max_input_time, mysql.connect_timeout, default_socket_timeout, and realpath_cache_ttl in php.ini, but it still times out after about 10 minutes. What other settings might exist that I could increase to try and fix this?
As a side note, I'm aware that 10 minutes is generally not desirable for processing a file; however, this section of the site is only used by one person once or twice a week, and she doesn't mind, provided the process finishes as expected (and I really don't want to rewrite someone else's bad code in a language I don't understand, for a process I don't understand).
EDIT: The SQL process finishes in the background; it's just the webpage itself that times out.
Per Frank Farmer's suggestion, I added flush() to the code and it works now. Definitely a browser timeout, thanks Frank!
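For reference, the change amounted to something like this (process_row() stands in for the real work; the every-100-rows interval is arbitrary):
<?php
foreach ($rows as $i => $row) {
    process_row($row); // the actual per-row processing
    if ($i % 100 === 0) {
        echo ' ';               // trickle a little output
        if (ob_get_level() > 0) {
            ob_flush();         // flush PHP's output buffer if one is active
        }
        flush();                // push it to the browser, resetting its timeout
    }
}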
You can use set_time_limit(); if you set it to zero, the script should not time out at all.
This would be placed within your script, not in any config file, etc.
Edit: Try changing Apache's timeout settings. In the config, look for the TimeOut directive (it should be the same for Apache 2.x and Apache 1.3.x); once changed, restart Apache and check it.
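For example, in httpd.conf (600 is just an illustration):
# wait up to 10 minutes instead of the 300-second default
TimeOut 600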
Edit 3:
Did you go to the link I provided? It lists the default there, which is 300 seconds (5 minutes). Also, if the setting IS NOT in the config file, you CAN add it.
According to the docs:
The TimeOut directive currently defines the amount of time Apache will wait for three things:
The total amount of time it takes to receive a GET request.
The amount of time between receipt of TCP packets on a POST or PUT request.
The amount of time between ACKs on transmissions of TCP packets in responses.
So it is possible it doesn't relate, but try it and see.