Magento: Getting 500 internal server error when creating invoices - php

I'm facing a big problem with my current Magento shop. When I create an invoice through the backend I get a 500 internal server error (after a long loading process). The server logs don't show anything; I looked into /var/log/apache2/error.log and there's nothing related. The error didn't show up on my dev machine, but since I moved the shop to our live server it occurs all the time.
What I tried so far:
Checked the file and folder permissions
Enabled Mage::setIsDeveloperMode(true); and ini_set('display_errors', 1);
Still no errors or logs
Deleted local.xml and generated a new one
Increased memory limit
Increased max execution time
Cleared cache
Checked .htaccess file, everything seems fine
Ran a script to check if everything matches the Magento requirements
This has been keeping me busy for a couple of days now... and I don't know where to start, because the server doesn't even output an error in the logs. How can I force the server to log the error in the corresponding file?
Do you have any other ideas what I could try to get rid of the error?
I've also attached my php.ini file, maybe that helps.
PHP.ini http://pastebin.com/9BWQRHTu
PHP Version and OS: PHP Version 5.3.2-1ubuntu4.21
Env: Virtual Private Server

Increase memory_limit from 128M to 256M or 512M.
Clear your browser cache and cookies. Do you encounter the same "500 server error" in another browser?
You said the loading process is long: measure the exact time a couple of times, and if it consistently matches the max_execution_time set in php.ini, increase max_execution_time.
Find out why it is taking so long to create an invoice by using a debugger. Most probably a module you have installed has a problem, like an infinite loop, or it triggers an action that takes a lot of time, for example reindexing everything each time. Creating an invoice shouldn't take this long, so the problem is probably in code, not in server settings.
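To force the error to surface at all (as asked in the question), one option is to turn on verbose error handling directly in the shop's index.php while debugging. This is a minimal sketch assuming a standard Magento 1 entry point; the error_log path is only illustrative:
// index.php (Magento 1.x) - temporary debugging changes only
error_reporting(E_ALL | E_STRICT);
ini_set('display_errors', 1);                               // show errors in the browser
ini_set('log_errors', 1);                                   // and write them to a log file
ini_set('error_log', __DIR__ . '/var/log/php_errors.log');  // illustrative path
require __DIR__ . '/app/Mage.php';
Mage::setIsDeveloperMode(true);   // let Magento throw exceptions instead of hiding notices
umask(0);
Mage::run();
Magento itself also writes exceptions to var/log/exception.log and crash reports to var/report/, which are worth checking alongside the Apache log.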

I looked into the wrong error_log file; as I'm using Plesk, the correct error_log was located in /var/www/vhosts/xxx.com/statistics/logs/ and not in /var/log/apache2/.
The error in there was
[Mon Nov 04 14:37:13 2013] [warn] [client xxx] mod_fcgid: read data timeout in 45 seconds, referer
This led me to fcgid.conf (/etc/apache2/mods-available/fcgid.conf), where I had to increase the following values:
FcgidIdleTimeout 3600
FcgidProcessLifeTime 7200
FcgidMaxProcesses 64
FcgidMaxProcessesPerClass 8
FcgidMinProcessesPerClass 0
FcgidConnectTimeout 300
FcgidIOTimeout 180
FcgidInitialEnv RAILS_ENV production
FcgidIdleScanInterval 10
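For the new values to take effect, Apache has to be restarted after editing fcgid.conf; on Ubuntu/Debian that would typically be something like:
sudo service apache2 restart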

Related

Wordpress/Godaddy - cURL error 28: Operation timed out after 10001 milliseconds with 0 bytes received

I have followed most of the questions here, tried changing memory_limit, upload_max_filesize, post_max_size, max_execution_time, max_input_time, through .htaccess and php.ini file, but I'm still getting the same error.
When I asked GoDaddy support, they simply gave a scripted response, stating that there is a problem with my plugins and that I should deactivate them and see.
Currently, GoDaddy support suggests adding the following configuration to php.ini and deactivating the plugins, saying that will resolve it.
memory_limit 5000M
upload_max_filesize 3000M
post_max_size 3000M
max_execution_time 3000
max_input_time 3000
But this error has been present since a fresh WordPress installation. So will deactivating all the plugins really lead to a resolution? Any suggestions?
Because of this, I'm getting connection timeouts and am unable to take a backup through the admin.
Also, I'm on shared hosting. Site - 247btl.com
After a lot of playing around with php settings, this error was solved with the following PHP settings:
max_execution_time = 30
max_input_time = 300
memory_limit = 128M
post_max_size = 32M
upload_max_filesize = 32M
I believe the problem was due to the max_execution_time setting. Most of the guides suggested increasing it to 1000 along with increasing memory_limit, but that would lead to long load times. I tried this on a Hostinger-hosted website as well and it seems to work very well.
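Before tweaking the values further, it's worth verifying which settings are actually in effect, since shared hosts often ignore a stray php.ini or .htaccess override. A tiny check script, assuming you can drop a file into the web root (the filename is arbitrary):
<?php
// check-limits.php - open it in the browser and compare the output
// with what you set in php.ini / .htaccess.
foreach (array('memory_limit', 'upload_max_filesize', 'post_max_size',
               'max_execution_time', 'max_input_time') as $directive) {
    echo $directive . ' = ' . ini_get($directive) . "<br>\n";
}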
Try updating WordPress to the latest version if you haven't already done so.
If the problem is still there, contact your hosting company and ask the support team to check the following:
Whether your server is running the latest versions of PHP and the cURL library.
Whether the server memory limit settings can be increased.
The cURL error can also be a DNS-related issue. Your hosting company might need to switch the DNS configuration to OpenDNS: https://www.howtogeek.com/164981/how-to-switch-to-opendns-or-google-dns-to-speed-up-web-browsing/
Ask your host if there is some limitation with wp-cron, or if loopback is disabled.
Ask your host if there is a firewall or security module (e.g. mod_security) that could block the outgoing cURL requests.
You can also install the Query Monitor plugin and check the status of the HTTP API calls on the admin page where the error is displayed.
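One way to reproduce the loopback check outside of Site Health is to call the site's own REST endpoint through the WordPress HTTP API. A rough sketch, e.g. run via WP-CLI's wp eval-file; the file name and the 10-second timeout are only illustrative, mirroring the error message above:
<?php
// loopback-test.php - run with: wp eval-file loopback-test.php
$response = wp_remote_get( rest_url(), array( 'timeout' => 10 ) );
if ( is_wp_error( $response ) ) {
    // Prints the same kind of message as Site Health, e.g. "cURL error 28: ..."
    echo 'Loopback failed: ' . $response->get_error_message() . PHP_EOL;
} else {
    echo 'Loopback OK, HTTP ' . wp_remote_retrieve_response_code( $response ) . PHP_EOL;
}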
In my case this was caused by the plugin "Contact Form by BestWebSoft".
If you find yourself in the same situation, disable the plugins one by one and refresh /wp-admin/site-health.php to check whether the error is still there.
As explained here, Getting "An active PHP session was detected" critical warning in wordpress, this is due to a badly developed plugin.
Be aware that this issue can be caused by the use of PHP's session_start() function. We had an issue where a developer had written the following:
add_action('init', function () {
    if (!session_id()) {
        session_start();
    }
});
This will kill the loopback and also disrupt REST API communications, which results in the cURL error 28 mentioned above. Sometimes the reason for the failure is not complicated.
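If a plugin or theme really does need a session, a common mitigation (not part of the original answer, but worth noting) is to release the session lock as soon as the data has been read, so loopback and REST requests are no longer blocked:
add_action('init', function () {
    if (!session_id()) {
        session_start();
    }
    // Read anything needed from $_SESSION here, then release the lock so
    // concurrent requests (loopback, REST API, cron) are not held up.
    session_write_close();
});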
For Hostinger, go to your hosting dashboard and open the PHP Configuration option from the Advanced menu.
After that, go to the PHP Options tab.
Then scroll down and change the max execution time value to 300 (300 seconds = 5 minutes). After changing the value, save the settings and the problem should be solved.

504 Gateway Timeout after 120 seconds

My script sometimes takes more than 2 minutes, and then I get a "504 Gateway Timeout" error.
Following are the php server settings
max_execution_time 6000 (Local), 120 (Master)
max_input_time 120
memory_limit -1
I also tried using set_time_limit(0) but I get the same error.
I also tried creating a php.ini in the root directory with max_input_time=-1, but the value is not updating.
When I check the Network section in the browser console, the script times out after 1.1 minutes with the 504 gateway timeout error.
I checked phpinfo(); the Server API is FastCGI. On my local system (WAMP) the script runs fine, and there the Server API is Apache mod_php. Could that be the issue?
Please guide me on how to solve this issue.
Any help will be appreciated.

mod_fcgid read data timeout - Premature end of script headers

The websites of one of my Plesk users can't be accessed. The server reports a 500 Internal Server Error; the error_log for that user shows a bunch of
[warn] mod_fcgid: read data timeout in 60 seconds
[error] Premature end of script headers: index.php
The DocumentRoot contains a normal WordPress installation. Other sites running the same WP version, using the same DB server and PHP + extensions, run fine. A <?php phpinfo(); ?> runs fine as well. Calling php index.php from the CLI returns the webpage, but it is a bit too slow for an idle Xeon E5-2620 server with 64 GB RAM.
Are there any known problems? How can I debug this further?
Some more system info:
PHP 5.6.24 (tried 5.4 as well)
Plesk 12.5.30
EDIT: The problem occurs intermittently. Right now, no 500 error is returned and the site loads fine (a bit slowly). I increased memory_limit, just to be sure it isn't a config limitation.
You can try to increase FcgidIOTimeout as described here: https://kb.plesk.com/en/121251
Since Plesk 11.5, the FcgidIOTimeout parameter is set to the same value as the max_execution_time PHP parameter in the domain's PHP settings.
You can also try one of the PHP-FPM handlers instead of FastCGI, because mod_fcgid has a lot of internal performance limitations which can't be avoided.
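If you stay on the FastCGI handler, the directive can also be raised directly in the module configuration, the same fcgid.conf shown in the first answer above; the value below is only an example:
FcgidIOTimeout 300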
The problem was caused by a rogue file_get_contents in some scripts.
I looked through the error log for the 1st appearance of the error message, and found a file created exactly when the error message first appeared - only 2 years earlier...
WordPress Site hacked? Suspicious PHP file
So I removed the malware ( detailed write-up at https://talk.plesk.com/threads/debugging-premature-end-of-script-headers.338956/ ), rebooted the Server and the error is now gone.
Technical detail: the error turned up because the server distributing the malware is now offline. The file_get_contents("http... call timed out, so the local script failed and returned the error message.
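As a side note, a remote fetch like that can be bounded with a stream context timeout, so a dead host fails fast instead of hanging until mod_fcgid kills the request. A minimal sketch; URL and timeout are illustrative:
$context = stream_context_create(array(
    'http' => array('timeout' => 5),   // give up after 5 seconds instead of hanging
));
$body = @file_get_contents('http://example.com/resource', false, $context);
if ($body === false) {
    error_log('Remote fetch failed or timed out');
}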

Erratic 500 error on Codeigniter app

My CI app has been working well so far.
However I noticed that when a longer SQL query is requested (for example on the home page where around 50 blog posts are shown) there is a serious problem.
Sometimes the page loads fine. Unpredictably, as I reload that same page - with no change in content - the browser keeps hanging until I get back an Apache 500 error. This happens on multiple browsers.
CI error logs show nothing. PHP error logs show nothing.
I've noticed this is not an issue with smaller queries (i.e., 20 posts), but I'm unsure whether that has anything to do with the problem; after all, it does load the 50 posts on some attempts.
I know this is hard to explain in detail, but if anyone could give me any pointers on how to debug I'd be very grateful. Glad to add any info.
The app is running on a Plesk 9 RHEL server, PHP 5.3.8, MySQL 5.5.17, CI 2.1.0.
php error log file
-rw-rw-r-- 1 apache apache 0 May 19 10:46 php_errors.log
php.ini info (local value / master value)
error_log: /var/log/php_errors.log / /var/log/php_errors.log
log_errors: On / On
Use the sparks Debug-Toolbar here: http://getsparks.org/packages/Debug-Toolbar/versions/HEAD/show
Then watch the times that your queries take to load, view your memory etc. Slowly increase your post count from 20 to 30 to 50 to 100 etc until the error occurs - and see if something sticks out.
I suspect a PHP timeout is occurring, either because you have the timeout value configured too low (it should be around 230), or your query is really poorly written and inefficient, causing the server to take too long to return the result for a larger query.
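If installing the spark isn't an option, CodeIgniter 2's built-in profiler gives similar per-request data (query times, memory, execution time). A minimal sketch for the controller behind the home page; the controller, model and view names are just placeholders:
class Blog extends CI_Controller {

    public function index()
    {
        // Appends a report with query times, memory usage and total execution
        // time to the rendered page, so a slow query stands out immediately.
        $this->output->enable_profiler(TRUE);

        $this->load->model('blog_model');                  // hypothetical model
        $data['posts'] = $this->blog_model->get_posts(50); // hypothetical method
        $this->load->view('home', $data);
    }
}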

How to recover crashed Symfony App due to PHP error? PHP 5.3.1 / Apache 2.2.14

I am getting this strange error on PHP 5.3.1 on Apache 2.2.14. I went through the forums and most suspect it is a PHP memory-leak issue. While I upgrade PHP, I wanted to know if there is a way for the server to recover - to be restarted programmatically. Currently I have to manually stop and start Apache.
Warning: Attempt to assign property of non-object in D:\xampp_ext\xampp\htdocs\soki\test.soki.com\symfony\lib\autoload\sfCoreAutoload.class.php on line 38
Warning: require(/config/sfProjectConfiguration.class.php) [function.require]: failed to open stream: No such file or directory in D:\xampp_ext\xampp\htdocs\soki\test.soki.com\symfony\lib\autoload\sfCoreAutoload.class.php on line 99
Fatal error: require() [function.require]: Failed opening required '/config/sfProjectConfiguration.class.php' (include_path='.;d:\xampp_ext\xampp\php\PEAR') in D:\xampp_ext\xampp\htdocs\soki\test.soki.com\symfony\lib\autoload\sfCoreAutoload.class.php on line 99
I'm using PHP 5.3.1 on Apache 2.2.14
PHP is run by Apache, so if Apache has indeed crashed, you can't easily use PHP to start Apache again unless PHP is being called by some other program besides Apache (example: running PHP as a cron job/scheduled task by calling PHP.exe [script.php] directly).
I'm assuming by the path in your error message that you're using a Windows environment (obviously). You don't want a script or program in the background starting Apache over and over if the process isn't running; that's messy. You could but that is nowhere near ideal.
How do you even know Apache is crashing? The errors you provided are errors from PHP, not Apache. If you refresh the page and still see errors (or anything besides a Connection Failure/Forcefully rejected/etc.) then Apache is still running.
Double-check that Apache is indeed no longer running
Check error logs in Apache's logs folder
Report back with your findings
Assuming that you are using mod_php, it requires the Apache prefork worker model. There is a configuration setting you can use to implement a work-around:
With the MaxRequestsPerChild directive you can set the number of requests that will be processed before a child process is recycled. Workaround: set it to a low value, so that Apache recycles its children more often.
<IfModule prefork.c>
StartServers 2
MinSpareServers 3
MaxSpareServers 3
ServerLimit 30
MaxClients 30
MaxRequestsPerChild 200
</IfModule>
Please keep in mind that this is only a dirty hack and not a serious solution to your problem.
I read jillions of blog posts about the error, and my best option was to upgrade to PHP 5.3, which resolved the problem. It was definitely due to a memory leak somewhere that was preventing additional files from being loaded.
