I am developing a small-ish app in PHP. At some point a few days ago, PHP started to segfault on my dev machine (not a prod server) when I try to load and process some data in the app.
I attached gdb to try to see where the segfault happens, but got basically nowhere. The backtrace contains a bunch of call_func-esque frames (about 50), and the last function was an mpm_prefork one. I thought it was mpm_prefork's fault, so I installed php-fpm and tried using mpm_event instead, but it still segfaults.
I noticed that if I remove all data from the database (I'm using SQLite), it no longer segfaults, so I tried exporting the data into a MySQL database instead, but it still segfaults.
In conclusion, it would seem the segfault occurs when the data from the db are loaded and processed for output.
I am using PHP 5.5.12 and Apache 2.4.9, both of which are (I believe) the newest versions.
My question is: can I somehow get PHP's stack printed when the segfault occurs? Or could I perhaps have PHP log each function call somewhere, to see which call leads to the crash?
Also, does PHP have a stack limit? Am I possibly calling too many functions? (As far as I remember, there's no recursion involved in the process.)
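The closest I've come up with myself is a poor-man's trace via tick functions, something like this sketch (the log path is just an example; the declare() has to go at the top of the entry script):

declare(ticks=1);

register_tick_function(function () {
    $frames = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 2);
    $caller = isset($frames[1]['function']) ? $frames[1]['function'] : '{main}';
    // Append the function currently executing, so the tail of the log
    // shows roughly where the process died.
    file_put_contents('/tmp/php-trace.log', $caller . "\n", FILE_APPEND);
});

But I'd be happy to hear of something less invasive.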
Thank you.
Related
Recently we updated our server from PHP 5.6 to PHP 7.4. Since this upgrade we have been experiencing some very strange behaviour in some scripts.
The scripts themselves are fully PHP 7 compatible; no errors whatsoever are printed or logged when the problem occurs.
What happens is as follows:
A script is started and calls several functions. When one function simply takes too long to finish, the main script stops; there is no error or output indicating that something is or went wrong.
It does not matter whether we run this script via a GUI or via the CLI; the result is the same either way.
The script stops/breaks (on the CLI you are back at the prompt) every time a called function (it does not matter which function) simply takes too long to finish. As mentioned, the cause is NOT a PHP code error; the code is valid.
When the same script is run using PHP 5.6, it keeps waiting until the called function has finished and then continues normally, as it is supposed to.
It looks like there is a (new) setting somewhere in PHP 7 that limits the execution time a called function may take; otherwise I cannot explain this behaviour. The problem is finding out exactly which setting this is and how to change it; the obvious settings we have already changed.
Does anyone have an idea where to look or search for this kind of setting?
The system is running on CentOS 8 with PHP 7.4.13 (or PHP 5.6). With an older PHP version (7.2) the problem is the same; only PHP 5.6 does not have this problem at all.
Error reporting is turned on, and no errors are logged whatsoever. If we run a script on PHP 7.4, it stops from the CLI after a short period (1-2 minutes); when running the same script on PHP 5.6, it runs for a long time (which it should in this case). Our developers found that when a function that calls another function to check whether an email account exists (a HELO check) takes longer than 2-3 seconds, the entire PHP script is stopped on 7.4, whereas PHP 5.6 just waits longer and runs the entire script.
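To compare the two installations, we dump the timeout-related settings with a snippet like this (the list of keys is just the ones we could think of):

foreach (array('max_execution_time', 'max_input_time', 'default_socket_timeout') as $key) {
    printf("%s = %s\n", $key, ini_get($key));
}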
I have read a lot about this error in different forums but still cannot find a solution.
My production server currently has:
Apache: 2.4.25, PHP: 5.6.40
Using the PHPSpreadsheet library (version 1.8.2), different Excel files are generated. For various reasons (large amounts of information, and styles applied to the sheet's cells), an xlsx file of at most 3 MB is exported after quite a long execution time (it can reach 30-40 minutes).
All of this worked until we moved to our new server and dockerized: the OS is CentOS 7, with one Docker container for Apache/PHP and another for MySQL.
Now, when generating these "large" files, the error.log shows the following: [core:notice] [pid 8] AH00052: child pid 15829 exit signal Segmentation fault (11)
I tried to replicate the error using virtual machines on my local machine and on another one: I installed CentOS 7, dockerized them, and even copied the same configuration files from the production server. However, on these machines the error does not appear and the file downloads correctly.
At this point I am desperate. I have tried every solution the internet offers while attempting to replicate the error before trying anything on production, but the error simply cannot be reproduced.
I appreciate any information and help with this. I know that at this point PHP 5.6 is well past its end of life, but I cannot commit the work time to an upgrade if in the end it is not going to solve this incident.
It seems like a possible fix, and definitely an optimization, would be to get away from having an httpd child tied to anything that runs that long. I'd expect something like this to fail: the list of things that would want to time out such a request is long. I mean, is the user really sitting there for 40 minutes?
What about having your web interface trigger a PHP script (CLI)?
exec('php slowasf.php arg1 arg2 >/dev/null 2>/dev/null &');
You could look at a message queue to track progress. This is more of a long comment than an answer, maybe, but then your question lacks code!
Finally, did you try giving it more RAM?
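A minimal sketch of the idea (the file names, paths and progress mechanism are all illustrative, not taken from your setup):

// kickoff.php -- the web request only starts the worker and returns at once
$jobId = uniqid('export_', true);
exec(sprintf('php worker.php %s > /dev/null 2>&1 &', escapeshellarg($jobId)));
echo json_encode(array('job' => $jobId));

// worker.php -- the long PHPSpreadsheet export, free of httpd time-outs
// $jobId = $argv[1];
// ... build and save the spreadsheet here ...
// file_put_contents('/tmp/' . $jobId . '.progress', '50'); // update as you go

The web UI can then poll the progress file (or a message queue entry) instead of holding an httpd child open for 40 minutes.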
Last year I purchased an encrypted script which runs two cron jobs. A month ago the crons stopped working, and when I talked to the hosting company they said it's a script problem. The PHP cron file works fine without any errors when visited in a browser, but the script provider told me this issue should be fixed by the hosting service and refused to help!
Here is the command used; it runs every 10 minutes: /home/username/public_html/cron/cron.php
cPanel Version 64.0 (build 24)
Apache Version 2.4.25
PHP Version 5.6.30
My question is: is it true that upgrading the PHP version can affect cron jobs, and how can I solve this?
Thanks
In short, yes, upgrading PHP can affect your scripts -- the crons aren't run by Apache or PHP; the crons run at the OS level.
Your PHP upgrade is most likely affecting the crons in one of two ways:
The upgrade was large, like PHP 5.6 to PHP 7.0, and there's a deprecation warning somewhere (which will show up in the crons' log), or the script is running some code that's now fully deprecated; most likely a query or a class/method named after a reserved word. Your logs will have more info; just make sure you have debugging turned on, otherwise your errors will be suppressed.
The new PHP settings from the upgrade have disabled some of the leniency of the older PHP version, such as getting away with empty or unassigned variables, and now your script is running into errors (e.g. using a variable that doesn't exist, such as $_REQUEST['something'], which would have been empty before but now raises an error that affects the rest of the script); see the sketch below.
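A minimal illustration of that second case (the index name is just an example):

// Old code that relied on a missing index being silently empty:
$value = $_REQUEST['something']; // notice/warning when the index is absent

// Version-agnostic guard that preserves the old behaviour:
$value = isset($_REQUEST['something']) ? $_REQUEST['something'] : '';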
To fix this you need to know what the problem is. The easiest way is to check the log files that crons often create. If you don't get those with your host, ask them for access, or ask them to send you a copy of the error being generated; a quick Google search on the error will tell you what the problem is. But without knowing more about the script or the error log, you probably won't get a better answer.
The old command works; it was just me. I copy-pasted from my old backup and forgot the php at the start of the command! Nothing has changed; the command should be like this, for example: php /home/username/public_html/cron/cron.php
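So the full crontab entry (every 10 minutes, as above) looks like:

*/10 * * * * php /home/username/public_html/cron/cron.php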
I have a Laravel 4 web project that implements a Laravel command.
When run in the development Homestead VM, it runs to completion (about 40 seconds total time).
However, when run on the production server, it quits with a 'killed' output on the command line.
At first I thought it was the max_execution_time in the CLI php.ini, so I set it to 0 (for unlimited time).
How can I find out what is killing my command?
I run it in an SSH terminal using the standard artisan invocation:
php artisan commandarea:commandname
Does Laravel 4 have a command time limit somewhere?
The VPS is an Ubuntu 4.10 machine with MySQL, nginx and php-fpm.
So, firstly, thank you to everyone who pointed me in the right direction regarding PHP and Laravel memory-usage tracking.
I have answered my own question in the hope that it will benefit Laravel devs in the future, as my solution was hard to find.
After typing 'dmesg' to show system messages, I found that the PHP script was being killed by Linux.
So I added memory-logging calls before and after each of the key areas of my script:
Log::info('Memory now at: ' . memory_get_peak_usage());
Then I ran the script while watching the log output and also the output of the 'top' command.
I found that even though my methods were ending and the variables were going out of scope, the memory was not being freed.
Things that I tried that DIDN'T make any difference in my case:
unset($varname) on variables after I had finished with them, hoping to get GC to kick in
adding gc_enable() at the beginning of the script and then adding gc_collect_cycles() calls after a significant number of vars are unset
disabling MySQL transactions, thinking maybe they were memory intensive; they weren't
Now, the odd thing was that none of the above made any difference. My script was still using 150 MB of RAM by the time it was killed!
The solution that actually worked:
Now, this is definitely a Laravel-specific solution.
But my script's purpose is basically to parse a large XML feed and then insert thousands of rows into MySQL using the Eloquent ORM.
It turns out that Laravel creates logging information and objects to help you see the query performance.
By turning this off with the following 'magic' call, I got my script down from 150 MB to around 20 MB!
This is the 'magic' call:
DB::connection()->disableQueryLog();
I can tell you by the time I found this call, I was grasping at straws ;-(
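For context, this is roughly where the calls sit in my command (the class name and bodies here are a simplified sketch, not my real code):

class ParseFeedCommand extends Illuminate\Console\Command
{
    protected $name = 'commandarea:commandname';

    public function fire()
    {
        // Stop Laravel 4 from keeping every executed query in memory.
        DB::connection()->disableQueryLog();

        Log::info('Memory at start: ' . memory_get_peak_usage());

        // ... parse the XML feed and insert rows via Eloquent here ...

        Log::info('Memory at end: ' . memory_get_peak_usage());
    }
}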
A process may be killed for several reasons:
Out of Memory
There are two ways to trigger this error: exceed the amount of memory allocated to the PHP script in php.ini, or exceed the available system memory. Check the PHP error log and php.ini file to rule out the first possibility, and check the dmesg output for the second.
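A quick runtime check for the first case might look like this sketch:

// Compare the script's peak usage against the configured limit.
printf(
    "memory_limit = %s, peak usage = %d bytes\n",
    ini_get('memory_limit'),
    memory_get_peak_usage(true)
);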
Exceeded the execution time-out limit
In your post you indicate that you disabled the timeout via the max_execution_time setting, but I have included it here for completeness. Be sure that the setting in php.ini is correct, and (for those using a web server instead of a CLI script) restart the web server to ensure that the new configuration is active.
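You can also confirm the effective value from inside the script itself, for example:

echo ini_get('max_execution_time'), "\n"; // 0 means no limit; the CLI SAPI defaults to 0
set_time_limit(0); // and disable the limit at runtime, to be safe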
An error in the stack
If your script is error-free and not encountering either of the above problems, ensure that your system is running as expected. When using a web server, restart the web server software, check the error logs for unexpected output, and stop or upgrade related daemons as needed.
Had this issue on a Laravel/Spark project; just wanted to share in case others run into it.
Try a refresh/restart of your dev server if running Vagrant or Ubuntu before more aggressive approaches.
I accidentally ran an install of dependency packages on a Vagrant server, and I also removed and replaced a mirrored folder repeatedly during install errors. My error was on Laravel/Spark 4.x. I was able to run migrations on other projects, but on one particular project I was getting 'killed' very quickly (within a ~300 ms timeframe) for nearly all commands. Reading other users' reports, I was dreading having to hunt down the issue or corruption. In my case, a quick Vagrant reload did the trick, and the 'killed' issue was resolved.
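That is, from the host machine:

vagrant reload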
Firstly, forgive me if my terminology isn't entirely accurate. I have only limited knowledge of this subject, but I will try my best to convey the problems we are having. My server administrator is trying to deploy PHP 5.5.9 on a live server. Originally the intention was to install PHP 5.4.x, but we opted for the latest version instead (a manual compile is required regardless, due to the OS).
The OS is openSUSE 12.1 and the server is a Plesk server (Plesk version 11.0.9) with Apache 2.2.1. This particular OS does not have the ability to update PHP automatically, so everything has to be done manually. Since we didn't want to risk screwing up the server (currently running PHP 5.3.8), we opted to install a second version of PHP alongside the current one. The instructions we followed are outlined here: http://kb.parallels.com/en/114753
After numerous failed attempts due to missing libraries during compilation, we were finally able to compile PHP 5.5.9 without error and then proceeded to run tests with 'make test'.
Unfortunately, the test results came back with 32 failures, and 20% of the total tests were skipped. A total of 13011 tests were run, 10410 of which completed. The TEST SUMMARY can be downloaded from here: http://uploaded.net/file/v6ug55l8
Anyway, deciding we might as well give it a try, we applied the changes indicated in the first link above to the vhost.conf. However, it didn't work, and the vhost then returned Internal Server Errors for every page, regardless of script or extension. Sadly, the error logs do not indicate any actual errors, only a whole ton of internal server errors recorded by mod_security. We did notice a huge number of these in the error log: Warning: SuexecUserGroup directive requires SUEXEC wrapper. But that doesn't seem to be related, as the same error goes back several weeks.
So we're stuck, without any idea what to do next. Our next attempt will be to try to compile PHP 5.4.x instead, in case something is bumping heads with 5.5.9...
Any and all advice will be appreciated. As per the opening statement, I'm not an expert here, so if you need any additional information about the machine and its server, feel free to ask. Thank you for your attention!
Problem solved. The vhost's CGI-BIN needed to be chmod 755 and not 775.
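In other words (the vhost path here is illustrative):

chmod 755 /var/www/vhosts/example.com/cgi-bin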