I was asked to help troubleshoot someone's website. It is written in PHP, on a Linux box, using an Apache server and MySQL, and I have never worked with any of these before (except maybe Linux in school).
I got most of the issues fixed (most code is really the same no matter what language it is written in); however, there is still one page that times out when processing huge files. I'm fairly sure the problem is a timeout somewhere, but I have no idea where all the PHP timeouts would be.
I have adjusted max_execution_time, max_input_time, mysql.connect_timeout, default_socket_timeout, and realpath_cache_ttl in php.ini but it is still timing out after about 10 minutes. What other settings might exist that I could increase to try and fix this?
As a side note, I'm aware that 10 minutes is generally not desirable for processing a file; however, this section of the site is only used by one person once or twice a week, and she doesn't mind, provided the process finishes as expected (and I really don't want to rewrite someone else's bad code in a language I don't understand, for a process I don't understand).
EDIT: The SQL process finishes in the background; it's just the webpage itself that times out.
Per Frank Farmer's suggestion, I added flush() to the code and it works now. Definitely a browser timeout, thanks Frank!
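For reference, a minimal sketch of that flush-based approach, assuming a hypothetical loop over chunks of the large file (process_chunk() and $chunks are placeholders, not the site's actual code):

<?php
// Let the script itself run as long as it needs to.
set_time_limit(0);

foreach ($chunks as $chunk) {
    process_chunk($chunk);   // placeholder for the real work

    // Send a little output after each chunk so the connection is never idle,
    // which keeps the browser (and any proxy in between) from timing out.
    echo ' ';
    if (ob_get_level() > 0) {
        ob_flush();          // flush PHP's output buffer if one is active
    }
    flush();                 // push the output to the web server / browser
}
echo 'Done.';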
You can use set_time_limit(); if you set it to zero, the script should not time out at all.
This would be placed within your script, not in any config file, etc.
Edit: Try changing Apache's timeout settings. In the config, look for the TimeOut directive (it should be the same for Apache 2.x and Apache 1.3.x); once changed, restart Apache and check again.
Edit 3:
Did you go to the link I provided? It lists the default there, which is 300 seconds (5 minutes). Also, if the setting IS NOT in the config file, you CAN add it.
According to the docs:
The TimeOut directive currently defines the amount of time Apache will wait for three things:
The total amount of time it takes to receive a GET request.
The amount of time between receipt of TCP packets on a POST or PUT request.
The amount of time between ACKs on transmissions of TCP packets in responses.
So it is possible it doesn't relate, but try it and see.
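For example, to raise it to 20 minutes (the value here is arbitrary, just for illustration), add or change this in httpd.conf and restart Apache:

# default is 300 seconds (5 minutes)
TimeOut 1200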
Related
I wrote some code to zip 400 files from my website, but when I run it, it takes a lot of time (and that is OK), but if it takes too long, the PHP script stops working.
How am I supposed to zip 4000 files without my website crashing? Maybe I need to create a progress bar?
Hmm.. help? :)
Long work (like zipping 4000 files, sending emails, ...) should not be done in PHP scripts that keep the user's browser waiting.
Your user may cancel loading the page, and even if they don't, it's not great to have an Apache thread tied up for a long time.
Setting up a pool of workers outside of Apache to do this kind of work asynchronously is usually the way to go. Have a look at tools like RabbitMQ and Celery.
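As one minimal sketch of that idea (the table name, columns, and connection details here are made up): the web request only records the job and returns immediately, and a separate CLI worker or cron script picks up pending rows and does the actual zipping.

<?php
// In the web request: enqueue the job, don't do the work here.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO zip_jobs (path, status) VALUES (?, ?)');
$stmt->execute(['/uploads/batch-42', 'pending']);
echo 'Your zip is being prepared; you will get a link when it is ready.';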
In a PHP installation, you have the directive max_execution_time, which is defined in your php.ini file. This directive sets the maximum execution time of any of your PHP scripts, so you might want to increase it or set it to infinite (no time limit). There are two ways of doing that: you can modify your php.ini, but that's not always possible, or you can use the set_time_limit() function or the ini_set() function. Note that, depending on your hosting service, you may not be able to do any of this.
I think you should look at PHP's set_time_limit() or max_execution_time settings:
ini_set('max_execution_time', 300);
// or if safe_mode is off in your php.ini
set_time_limit(0);
Try to find reasonable settings for zipping your whole bundle, and set the values accordingly.
The PHP interpreter's own execution time is limited, by default, to a certain value. That's why it stops suddenly. If you change that setting at the beginning of your PHP script, it will work better; try it!
You could also invoke the PHP executable in CLI mode to handle that process, with functions like shell_exec().
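A rough sketch of that approach (the worker script name and paths are hypothetical). The trailing & and the output redirection are what let the web request return immediately instead of waiting, and the CLI SAPI has no execution time limit by default, so the worker isn't affected by the web server's timeouts:

<?php
// Kick off the long-running zip job in a separate CLI PHP process.
$dir = escapeshellarg('/path/to/files');
shell_exec("nohup php /var/www/scripts/zip_files.php $dir > /dev/null 2>&1 &");
echo 'Zipping started in the background.';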
I would like to increase the timeout of one PHP site on nginx so I don't get a "504 Gateway Timeout". I've tried set_time_limit(), but it doesn't work. I've found some solutions based on modifying configuration files (e.g. Prevent nginx 504 Gateway timeout using PHP set_time_limit()), but in my case I shouldn't modify those files. Is there another way?
Thanks for any efforts.
First, you have to edit your Nginx configuration files to change fastcgi_read_timeout. There is no getting around it; you have to change that setting.
I'm not sure why you say "I shouldn't modify these files in my case". I think your reason might be that you want to change the timeout for one of your websites but not the others. The best way I've found to accomplish that is to go ahead and set fastcgi_read_timeout to a very long timeout (the longest you would want for any of your sites).
But you won't really be counting on that timeout; instead, let PHP handle the timeouts. Edit your php.ini and set max_execution_time to a reasonable amount of time that you want to use for most of your websites (maybe 30 seconds?).
Now, to use a longer timeout for a particular website or web page, call the set_time_limit() function at the beginning of any script you want to allow to run longer. That's really the only easy way to have a different setting for some websites but not others on an Nginx / PHP-FPM setup. Other ways of changing the timeout are awkward to configure because of the way PHP-FPM shares pools of PHP worker processes among multiple websites on the server.
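Roughly, the combination looks like this (the 600-second value and the socket path are just examples):

# nginx: in the location block that passes PHP to PHP-FPM
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;
    include fastcgi_params;
    fastcgi_read_timeout 600;   # the longest you would ever allow
}

; php.ini: sensible default for most sites
max_execution_time = 30

<?php
// at the top of the one script that needs longer
set_time_limit(600);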
We are trying to implement a long-polling based notification service in our company's ERP, similar to Facebook notifications.
Technologies used:
PHP with the timeout set to 60 seconds and a 1-second sleep in each iteration of the loop (a rough sketch follows this list).
jQuery for AJAX handling.
Apache as web server.
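A rough sketch of the kind of long-polling endpoint described above (fetch_new_notifications() is a placeholder for however the notifications are actually looked up):

<?php
// ajax/poll.php - long-polling endpoint (sketch)
set_time_limit(70);                 // a bit above the 60-second polling window
session_start();
$userId = $_SESSION['user_id'];
session_write_close();              // release the session lock immediately

$start = time();
while (time() - $start < 60) {      // poll for up to 60 seconds
    $notifications = fetch_new_notifications($userId);   // hypothetical helper
    if ($notifications) {
        header('Content-Type: application/json');
        echo json_encode($notifications);
        exit;
    }
    sleep(1);                       // 1-second sleep per iteration
}
echo json_encode([]);               // nothing new; the client re-polls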
After nearly a month of coding, we went to production. A few minutes after deployment we had to roll everything back. It turned out that our server (8 cores) couldn't handle long requests from 20 employees using ~5 browser tabs each.
For example: a user opened 3 tabs with our ERP, with one long-polling AJAX request in each tab. Opening a 4th tab is impossible - it hangs until one of the previous 3 is killed (and its AJAX request is therefore stopped).
"Apache limitations", we thought, so we went googling. I found some info about Apache's MPM modules and configs, so I gave it a try. Our server uses the prefork MPM, as apachectl -l showed us. So I changed a few lines in the config to look something like this:
<IfModule mpm_prefork_module>
StartServers 1
MinSpareServers 16
MaxSpareServers 32
ServerLimit 50%
MaxClients 150
MaxClients 50%
MaxRequestsPerChild 0
</IfModule>
The funny thing is, it works on my local machine with a similar config. On the server, it looks like Apache ignores the config, because with MinSpareServers set to 16, it launches 8 after a restart. We have no idea what to do.
Passerby, in the first comment on the previous post, pointed me in a good direction: checking whether we had hit the maximum number of browser connections to one server.
As it turns out, each browser has such a limit, and you can't change it (as far as I know).
We made a workaround to make it work.
Let's assume that I was getting AJAX data from
http://domain.com/ajax
To avoid hitting the maximum number of browser connections, each long-polling AJAX request connects to a random subdomain, like:
http://31289.domain.com/ajax
http://43289.domain.com/ajax
and so on. There's a wildcard on the DNS server pointing *.domain.com to domain.com, and the subdomain is a unique random number generated by JS in each tab.
For more information, check out this thread.
There were also some problems with AJAX same-origin security, but we managed to work them out using appropriate headers on both the JS and PHP sides.
If you want to know more about headers, check it out here on StackOverflow, and here on Mozilla Developer's page. Thanks!
I have successfully implemented a LAMP setup with long polling. Two things to keep in mind. First, PHP's internal execution clock on Linux is not advanced by the usleep() function, so raising the maximum execution time should only be needed for rare edge cases where obtaining the data takes longer than normal, or possibly for a Windows setup. In addition, with long polling, bear in mind that once you go over 20+ seconds, you are vulnerable to browser timeouts.
Secondly, you will need to verify that your sessions aren't locking up (if sessions are being used).
Apache really shouldn't have any issue with what you are trying to do, though I will admit that web servers like nginx, or an AJAX-specific web server, may handle the concurrent connections better. If you could post your code for the AJAX handler, we might be able to figure out where the problem is.
If you utilize subdomains, or, as other threads have suggested, multiple web servers on separate ports, remember that you may encounter JavaScript domain security issues.
I'd say don't change the Apache config until you encounter an issue and have exhausted all other options; be careful with PHP sessions, and make sure the AJAX waits for a response before sending another request ;)
I launched a website about a week ago and sent out an email blast to a mailing list telling everyone the site was live. Right after that, the website went down and the general error log was flooded with "exceeded process limit" errors. Since then, I've tried to really clean up a lot of the code and minimize database connections, but I still see that error about once a day in the log. What could be causing it? I called the web host and they said it had something to do with my code, but they couldn't point me in any direction as to what was wrong or which page was causing the error. Can anyone give me any more information? For instance, what is a process, and how many processes should I have?
Wow. Big question.
Obviously, you're maxing out your Apache child worker processes. To get a rough idea of how many you can create, use top to get the rough memory footprint of one httpd process. If you are using WordPress or another CMS, it could easily be 50-100 MB each (if you're using the PHP module for Apache). Then, assuming the machine is only used for web serving, take your total memory, subtract a chunk for OS use, and divide that by 100 MB (in this example). That's the maximum number of worker processes you can have. Set it in your httpd.conf. Once you do this and restart Apache, monitor top and make sure you don't start swapping memory. If you do, you have set the number of workers too high.
If anything else is running, like a MySQL server, make space for that before you compute the number of workers you can have. If the resulting number is small, to roughly quote a great man, "you're gonna need a bigger boat". Just kidding. If you see really high memory usage for an httpd process, like over 100 MB, you can lower the max requests per child to shorten the life of each httpd process. That can help clean up bloated workers.
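As a worked example with made-up numbers: on a box with 8 GB of RAM, reserving 1 GB for the OS and 1 GB for MySQL leaves 6 GB; at roughly 100 MB per Apache child, that is about 60 workers, so the prefork settings might look like:

<IfModule mpm_prefork_module>
    # never fork more children than memory allows (about 60 in this example)
    ServerLimit          60
    MaxClients           60
    # recycle children periodically so bloated ones get cleaned up
    MaxRequestsPerChild  1000
</IfModule>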
Another area to look at is the response time for each request: how long does a request take? For a quick check, use the Firebug plugin for Firefox and look at the "Net" tab to see how long your initial request takes to respond (not images and such). If for some reason requests are taking more than 1 or 2 seconds to respond, that's a big problem, as you get a sort of logjam. The cause could be PHP code or MySQL queries taking too long to respond. To address this, if you're using WordPress, make sure to use a good caching plugin to lower the load on MySQL.
Honestly, though, unless you're simply underusing memory by having too few workers, optimizing your Apache isn't something easily addressed in a short post without details on your server (memory, CPU count, etc.) and your httpd.conf settings.
Note: if you don't have server access you'll have a hard time figuring out memory usage.
The process limit is typically something enforced by shared hosting providers, and generally has to do with the number of processes executing under your account. This will typically equate to the number of connections made to your server at once (assuming one PHP process per connection).
There are many factors at play. You should find out what that limit is from your hosting provider, and then find a new one that can handle your load.
I have everything set up and running with KalturaCE & Drupal, and the server (Ubuntu 8.04 + Apache 2 + PHP 5 + MySQL) is working fine.
The issue I am having here is hard to classify.
When I play two videos from my site at the same time, the second video, which I started later, doesn't start until the first completes its buffering. I did some HTTP watching and found that both of the entries request the file with a URL like this:
/kalturace/p/1/sp/100/flvclipper/entry_id/xxxxxx/flavor/1/version/100000
so the first video I played receives a 302 redirect response to a URL like this:
/kalturace/p/1/sp/100/flvclipper/entry_id/xxxxxxx/flavor/1/version/100000/a.flv?novar=0
and starts buffering and playing, while the second video, which I started later, just waits for a response until the first video finishes its buffering; only then does the second video receive its 302 redirect and start buffering.
My question is: why can't both videos buffer concurrently? Obviously, that is what I need.
Your help is highly anticipated and much appreciated.
PHP file-based sessions lock the session file while a request is active. If you intend to make parallel requests like this, you'll have to make sure each script closes the session as soon as possible (i.e. after writing out any changes) with session_write_close() to keep the lock time to a minimum.
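A minimal sketch of that pattern (the session key and the surrounding script are hypothetical):

<?php
// Read what you need from the session, then release the lock right away.
session_start();
$allowed = !empty($_SESSION['logged_in']);   // example of a value you might need
session_write_close();                       // parallel requests can now proceed

if (!$allowed) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ...the long-running clipping/streaming work continues here...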
I found the solution to my problem, and it was indeed what Marc suggested in the first place, but there was a problem with a MySQL timeout as well.
So putting session_write_close() in both files, in the right places, solved my problem.
For a complete overview please visit the suggested thread at http://www.kaltura.org/videos-not-playing-simultaneously-0
I posted several suggestions at http://drupal.org/node/1002144 for the same question, this is one of them:
In Apache terms, a possible cause might be MaxClients (that is, the capacity of your Apache server to respond to multiple requests). If that Apache setting, or the server's capacity, is low, the server may be dropping your connections into its backlog until the first connection is completed. You can test this using ordinary large files that take some time to download from the server - that will establish whether it's KalturaCE or Apache causing the issue. To see this, you'd either need a site which is already at its MaxClients limit (i.e. not an unused dev site), or a very low MaxClients setting (like 1!).
Another suggestion posted at http://drupal.org/node/1002144 for the same question -
It's possible that this behaviour is due to a MySQL query lock. If KalturaCE's SQL locks a table until the first request is completed (I have no reason to believe this is the case; I'm just floating possible causes), then a second request might hang like this.
I'm not familiar enough with CE to say whether this is the case, but you could easily debug it on the server during the request, using mtop, to see if this is what's actually happening.
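If mtop isn't available, the same check can be done with SHOW FULL PROCESSLIST, either from the mysql client or from a quick throwaway script like this (credentials are placeholders):

<?php
$db = new mysqli('localhost', 'user', 'pass');
$result = $db->query('SHOW FULL PROCESSLIST');
while ($row = $result->fetch_assoc()) {
    // Look for threads whose State mentions a lock while the first video is streaming.
    printf("%s | %s | %s\n", $row['Id'], $row['State'], $row['Info']);
}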