I am hoping you all can help me out a little. I have spent about 7 hours trying to find an answer and have tried many things so far.
I have a PHP script that is used to sync files and database data between 2 servers. Before you ask: this process is necessary for this project and must stay in place.
The script basically finds all files in a directory that have changed in the last 72 hours and SFTPs them to the other server, replacing any files needed. It then creates a copy of the backing database, removes certain tables/rows, changes others and exports a .sql file. It then SFTPs this .sql file to the other server and calls an include on a file on the 2nd server that imports the .sql file replacing the existing database with updated data.
All of this works...
The issue is that no matter what changes I make to the Apache config the script always gives me a 503 error after 30 seconds, every time (between 30.02 and 30.04 seconds to be precise). However, the PHP script continues to run and successfully completes all operations, including writing to the log file, in about 60-61 seconds. There is nothing in the Apache logs referencing any kind of error at all either.
I have checked all .conf files used and none of them mention a 30 second timeout. In my httpd.conf I have added these lines:
TimeOut 300
ProxyTimeOut 300
KeepAlive On
KeepAliveTimeout 60
I also set max_execution_time and memory_limit in the PHP script to 120 and 2048M, respectively, to rule those out during testing.
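For reference, here is a minimal sketch of how I set those overrides at the top of the script (assuming the host allows changing them at runtime):

```php
<?php
// Sketch: raise PHP's own limits at the top of the sync script to rule
// them out (assumes these directives are changeable at runtime).
ini_set('max_execution_time', '120'); // seconds; equivalent to set_time_limit(120)
ini_set('memory_limit', '2048M');
```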
The page is supposed to display a success message to the user with a report of what was changed/updated, but with the 503 error I am not able to do this. So I am looking to get rid of this 503 so the page can properly display the end result of the sync. I am not too familiar with Apache configuration, to be honest, so any ideas on what could cause this or where to look would be much appreciated!
Thanks in advance!
After trying many, many things I was able to find out what the specific cause was. It turns out this was caused by the proxy timing out. Here is a link to the answer that explained what to add to the vhost conf file.
In short, here is the answer for future visitors:
For the latest versions of httpd and mod_proxy_fcgi you can simply add
timeout= to the end of the ProxyPassMatch line, e.g.:
ProxyPassMatch ^/(.+\.php.*)$ fcgi://127.0.0.1:9000/<docroot>/$1 timeout=1800
For older versions it was a little more complicated,
e.g.:
<Proxy fcgi://127.0.0.1:9000>
ProxySet timeout=1800
</Proxy>
ProxyPassMatch ^/(.+\.php.*)$ fcgi://127.0.0.1:9000/<docroot>/$1
Not sure if you tried this, but I think you may need to adjust max_execution_time in the php.ini that Apache uses. On many distributions it defaults to 30.
http://php.net/manual/en/info.configuration.php#ini.max-execution-time
I am facing a pretty common situation judging from the questions either here on SO or in the SilverStripe forums: file uploads fail.
However, my situation seems to stem from an issue that I haven't met yet on the Web; from reading other questions and many blog articles or forum threads, I have ruled out:
Permission problems
upload_max_filesize and post_max_size in the PHP configuration (both set to 8M)
LimitRequestBody in the Apache Configuration (default value of 0, meaning "unlimited")
I have ruled these out for many reasons, one of them being that the uploads sometimes work: in a test of three consecutive uploads, some succeeded and some failed.
I have also started a thread on the SilverStripe forums for this problem, but I have little hope of having luck solving the problem there.
I have set up breakpoints in the Upload, UploadField and File classes, and stepped through the code for hours without succeeding in identifying the cause of the error.
My finding so far is that any file above 128 kiB causes an internal server error. Any file below this size threshold gets uploaded as expected.
All logs (Apache, PHP, SilverStripe) are totally mute when this error occurs.
A permission issue seems very unlikely because:
PHP runs in Fast-CGI mode as a user (web1) created by ISPConfig
Apache runs as user apache:apache
I have added apache to the user's groups, so that groups web1 gives me web1 : client1 sshusers and groups apache gives me apache : apache ispapps ispconfig client1
the upload folder (assets) is owned by web1:client1 and has permissions 775
the temporary upload folder (upload_tmp_dir) is owned by web1:client1 and permissions are 775.
I believe what I'm looking for is a way to get information about where and why the uploads fail. Is it possible to set Apache's LogLevel to "debug" or "trace"?
NOTE: an entry in the "Similar Questions" list led me to this answer, which hints at SSLRenegBufferSize defaulting to exactly 128 kiB. Unfortunately, whether the protocol is HTTPS or HTTP has no influence: the problem shows up either way.
[EDIT] I had later on set the LogLevel directive to trace but I still had no message about this error in the server logs.
Quick googling took me to the following articles:
Debian Jessie - Apache2 / PHP 5.6, can't upload more than 128kb
https://wordpress.org/support/topic/cant-upload-images-larger-than-128kb-http-error
Those suggest checking the value of the FcgidMaxRequestLen setting.
This doesn't answer how to debug such failures in general, but it solves the original issue.
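For example, raising the limit might look like this in the Apache configuration (the 10 MiB value is just an example; the stock default of 131072 bytes is exactly 128 KiB, which matches the observed threshold):

```apache
# Default FcgidMaxRequestLen is 131072 bytes (128 KiB), matching the
# observed cutoff. Raise it, e.g. to 10 MiB:
<IfModule mod_fcgid.c>
    FcgidMaxRequestLen 10485760
</IfModule>
```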
I've set up an Apache server with WordPress, and after installing several plugins I noticed page load times went up to 30 seconds or more. So I followed several guides to fine-tune and speed up Apache by removing modules, enabling deflate, changing worker processes, etc.
One of the changes I made was removing mod_php and using PHP-FPM through mod_fastcgi. Afterwards I noticed several bizarre errors: W3 Total Cache reported that .htaccess was not writable, despite the fact that it belongs to the same user and group (I even made it world-writable, 777 permissions), and Minify can't work because it can't write any changes to .htaccess.
Not only that but Minify is giving off 2 more bizarre messages
Minify Auto encountered an error. The filename length value is most likely too high for your host. It is currently 150. The plugin is trying to solve the issue for you
To which it sits there trying to fix and then says
Minify Auto does not work properly. Try using Minify Manual instead or try another Minify cache method. You can also try a lower filename length value manually on settings page by checking "Disable the Minify Auto automatic filename test"
The compatibility check also produces strange messages, claiming that a number of modules that are in fact loaded aren't detected. Some quick research showed that modules are simply difficult to detect through FastCGI, but I wonder whether the plugin works at all given that it can't detect them.
Any help would be appreciated
W3 Total Cache 'Minify Auto' under Apache/PHP-FPM
I experienced the same issue with W3 Total Cache (W3TC) and its 'Minify Auto' feature under Apache with PHP-FPM.
Problem in brief
When PHP is invoked in FastCGI mode, some CGI variables such as SCRIPT_NAME and PATH_INFO are not always set to the values expected by script developers. In my case, the value of SCRIPT_NAME was the path of the php5-fcgi executable (/usr/lib/cgi-bin/php5-fcgi), rather than that of the PHP script itself.
The minify module code in the W3TC plugin expects SCRIPT_NAME to be set correctly, and fails when it isn't.
Solution
The php.ini directive, cgi.fix_pathinfo, works around this 'CGI variables' issue when enabled. In my case, I had disabled this setting, and reenabling it resulted in generation of the correct SCRIPT_NAME and resolved the minification issue.
Instructions for a Debian/Ubuntu system
To reenable, change the setting in /etc/php5/fpm/php.ini:
cgi.fix_pathinfo = 1
And reload the php-fpm service:
sudo service php-fpm reload
Caveat
Note there have been security concerns dating from 2010 regarding the use of the cgi.fix_pathinfo setting in misconfigured Nginx sites (see here for details), however I haven't been able to reproduce this under an Apache setup.
Since PHP 5.3.9, a (poorly documented) new FPM configuration directive, security.limit_extensions has been introduced. This defaults to limiting execution to .php files only, and as far as I can tell this should mitigate the historical security issues.
Problem in detail (for those who care)
The broken CGI variables cause a problem in the W3TC function that derives the cache directory path.
This in turn causes the minify .htaccess file to be written to disk with the malformed cache path in the RewriteBase directive.
In my case, it was:
RewriteBase inify/
Rather than:
RewriteBase /wp-content/cache/minify/
This affects the subsequent rewrite rules, which ultimately prevents the minification code (which relies on these rules) from being invoked correctly.
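To illustrate the truncation (this is a hypothetical reconstruction, not the actual W3TC code): if a prefix whose length was computed from the broken SCRIPT_NAME is stripped from the cache path, you get exactly the kind of mangled value shown above:

```php
<?php
// Hypothetical illustration (not actual W3TC code): stripping a prefix
// whose length was derived from the wrong SCRIPT_NAME truncates the path.
$cache_path = '/wp-content/cache/minify/';
$wrong_prefix_len = 19; // length computed from the bad SCRIPT_NAME value
echo substr($cache_path, $wrong_prefix_len); // prints "inify/"
```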
I am new to Heroku, and I am getting the warning below in the server logs. I have just uploaded my PHP scripts to Heroku through git, and I am not sure where my MaxClients setting lives either.
server reached MaxClients setting, consider raising the MaxClients settings
I saw in many posts to change MaxClients in httpd.conf. I am not familiar with the Apache server folder structure. Can you please let me know in which path I should keep httpd.conf, and what its format is? You can also point me to any posts that explain it well.
I found the answer: I followed the usage instructions in github.com/heroku/heroku-buildpack-php.
My PHP code turned out to live in /app.
I then created an httpd.conf file inside /app/apache/conf.
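In case it helps others, a MaxClients override for the prefork MPM could look something like this (the numbers are only examples; pick values that fit your dyno's memory):

```apache
# Example only: cap the number of prefork worker processes.
<IfModule prefork.c>
    StartServers  5
    MaxClients    20
</IfModule>
```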
My company has a large hosting account, but it's not managed by us and we can't see the configuration files. I want to replicate this feature on our local test server.
I'm new in my company and want to start debugging applications to fix some minor and major issues for clients, but the number of files is so big that a single error_log file is huge, and there are many people working on this, so each time I check the log (every 30 seconds to a minute) it has hundreds of new lines.
I don’t know if this is set up on Apache, through .htaccess files or in php.ini.
I am talking about PHP errors, but I don't know if this is set in PHP, Apache, or maybe using a third-party library.
I'm not talking about pointing error_log at a specific folder. I'm talking about errors being logged in each script's own folder.
Example: I create a folder named test1. Inside it I make some buggy PHP script that throws some errors. When I run the script I can see an error_log file created in the folder. So it works on the fly.
I have tried to ask the hosting company support how they do this, but they haven’t answered me.
I don't know if this could be some kind of cPanel setting (by the way, the hosting support staff don't understand this question either, but well... usually level 1 support can't handle technical stuff).
I found it.
You have to set the error_log directive in php.ini to a relative file name; the right-hand side is the file name you want for the log:
error_log = error_log
This will generate a PHP error log in the folder where the executed script is located.
I'll try to explain.
Script test.php in folder /www/site/lib:
include "./db_conn.php";
If the file db_conn.php is not located in the same directory, this will fire a warning and an error. Usually these would go to the server's/vhost's log, but with this directive you get an error_log file under the /www/site/lib directory.
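A quick way to see this behaviour from the CLI (the temp directory here is just for the demo; in a web context the file appears next to the running script):

```php
<?php
// Demo: with log_errors on and error_log set to a relative file name,
// PHP writes the log relative to the current working directory.
ini_set('log_errors', '1');
ini_set('error_log', 'error_log');   // relative name, as in php.ini above
chdir(sys_get_temp_dir());
trigger_error('demo warning', E_USER_WARNING);
var_dump(file_exists(sys_get_temp_dir() . '/error_log')); // bool(true)
```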
Why was I looking for this? Well, as I wrote, I'm working on a huge application with thousands of files, many of which fire warnings, notices, etc. I'm not the only one on the project, and the site-wide error_log file was so huge it was hard to keep track of debugging progress for one or a few files. Now I just have to watch the logs in the directories where I'm working.
You can manage the logs by adding this to your vhost configuration (note that the ErrorLog directive is not allowed in .htaccess):
ErrorLog /path/to/a/writable/directory/error.log
For more information, have a look at this article on advanced PHP error handling via htaccess.
To do this in PHP, edit file php.ini and add the line
error_log = /path/to/where/you/want/your/php-errors.log
Then restart the web server. This should give you PHP errors, and only PHP errors, to that file. Provided, of course, that the scripts aren't written to throw away all errors.
To do it in Apache, you can use SetEnvIf to set an environment variable on any request ending in .php, and then log all requests carrying that variable to a dedicated file. E.g.:
SetEnvIf Request_URI "\.php$" phplog
CustomLog /path/to/php.log env=phplog
To use this in multiple folders, make one environment variable per folder and one CustomLog per folder, like this:
SetEnvIf Request_URI "/folder1/.*\.php$" log_folder1
CustomLog /path/to/folder1/php.log env=log_folder1
SetEnvIf Request_URI "/folder2/.*\.php$" log_folder2
CustomLog /path/to/folder2/php.log env=log_folder2
How do I show a full list of running PHP scripts on a Linux server? I only see the httpd service, or a PID, but not the specific PHP source file. I need to analyze which script takes the most memory and fix it. Thanks
You have two options:
Log all URLs that are requested from your server and that end up in PHP scripts being executed
You can use a PHP feature, which allows you to add a PHP script to any Apache request that is sent to PHP. You enable it by adding this to your root .htaccess:
php_value auto_prepend_file append.php
In append.php you add a logging feature that records the URL requested, the time it took to generate the response, and the peak memory used. If you write this to a tab-separated file, you can import it into a DB table and see what is really happening on your server.
More info here: http://www.electrictoolbox.com/php-automatically-append-prepend/
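A minimal append.php along those lines might look like this (the file name and log path are just examples):

```php
<?php
// Hypothetical append.php: logs timestamp, URL, elapsed seconds and peak
// memory as one tab-separated line per request.
$GLOBALS['__req_start'] = microtime(true);

function log_request($start, $logfile) {
    $line = implode("\t", array(
        date('c'),
        isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '(cli)',
        round(microtime(true) - $start, 4),
        memory_get_peak_usage(true),
    )) . "\n";
    file_put_contents($logfile, $line, FILE_APPEND);
}

// Runs after the main script finishes, even on fatal errors.
register_shutdown_function('log_request', $GLOBALS['__req_start'], '/tmp/php-requests.tsv');
```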
Debug what Apache is doing, using strace
You basically start Apache under strace. This traces the system calls Apache, and subsequently PHP, perform. Watch out: there is a lot of noise in the output.
More info here: http://bobcares.com/blog/?p=103
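For example, a typical invocation might look like this (the PID placeholder and syscall filters are illustrative; you need root, and the exact worker layout depends on your MPM):

```shell
# Attach to a running Apache worker and follow its child processes;
# restrict to file and process syscalls to cut down the noise.
strace -f -tt -e trace=file,process -p <apache-worker-pid> -o /tmp/apache.strace
```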