In one of my personal projects I use a simple queue implemented with the SysV msg_* functions. Specifically, msg_receive() is used to block until a message arrives on the queue. Once in a while, this wait fails with return error code 43 and no error text.
I can trigger error 43 deliberately: if I run two processes (you can see the script on GitHub), the error is expected. But if I run a single process under supervisor, I get the same error.
I created a simple script (on GitHub) that is executed by supervisor and waits to receive a message; sometimes msg_receive() returns error code 43. Is there anything that cleans up these resources?
I have no idea what causes this, but here is what I tried:
I checked that supervisor runs only one PHP process and that the PID stays the same.
I tested it on 32-bit Raspbian on ARM (PHP 7.3.19) and on 64-bit Ubuntu (PHP 7.4.9); both behave the same.
Thank you for help.
I hope the behavior is well described.
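For context on where to look: on Linux, errno 43 is EIDRM ("identifier removed"), which msg_receive() reports when the queue it is blocked on is deleted out from under it, e.g. by msg_remove_queue() in another script, by an ipcrm call, or by some cleanup at restart. A minimal consumer sketch that treats EIDRM as "reattach and retry" rather than fatal; the errno value, queue key, and message size here are assumptions, not the asker's code:

```php
<?php
// Sketch: survive EIDRM (errno 43 on Linux) by reattaching to the queue,
// since msg_get_queue() recreates it if it no longer exists.
const EIDRM_LINUX = 43; // "identifier removed" -- the queue was deleted

function receiveLoop($key)
{
    $queue = msg_get_queue($key); // attaches, creating the queue if absent
    while (true) {
        $ok = msg_receive($queue, 0, $type, 65536, $msg, true, 0, $err);
        if ($ok) {
            // ... handle $msg ...
            continue;
        }
        if ($err === EIDRM_LINUX) {
            // Queue vanished while we were blocked: reattach and keep going.
            $queue = msg_get_queue($key);
            continue;
        }
        trigger_error("msg_receive failed with errno $err", E_USER_WARNING);
        break;
    }
}
```

If this is what happens under supervisor, the thing to hunt for is whichever process or deploy script removes the queue (msg_remove_queue() or ipcrm) while the consumer is still waiting on it.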
Related
Recently we updated our server from PHP 5.6 to PHP 7.4, and since this upgrade we are experiencing some very strange behaviour in some scripts.
The scripts themselves are fully PHP 7 compatible; no errors whatsoever are printed or logged when the problem occurs.
What happens is as follows:
A script is started and calls several functions. When one function simply takes too long to finish, the main script stops; there is no error or output indicating that something went wrong.
It does not matter whether we run the script through a GUI or via the CLI; the result is the same in both.
The script stops/breaks (on the CLI you are back at the prompt) every time a called function (no matter which function) takes too long to finish. As mentioned, the cause is NOT a PHP code error; the code is valid.
When the same script is run using PHP 5.6, it keeps waiting until the called function finishes and then continues normally, as it is supposed to.
It looks like there is a (new) setting somewhere in PHP 7 that limits how long a called function may run; otherwise I cannot explain this behaviour. The problem is finding out exactly which setting this is and how to change it; the obvious settings we have already changed.
Does anyone have an idea where to look or search for this kind of setting?
The system is running CentOS 8 with PHP 7.4.13 (or PHP 5.6); with an older PHP version (7.2) the problem is the same, and only PHP 5.6 does not have the problem at all.
Error reporting is turned on; no errors are logged whatsoever. If we run a script on PHP 7.4, it stops from the CLI after a short period (1-2 minutes). When running the same script on PHP 5.6, it runs for a long time (which it should in this case). Our developers found that when a function that calls another function to check whether an email account exists (a HELO check) takes longer than 2-3 seconds, the entire PHP script is stopped on 7.4, whereas PHP 5.6 simply waits longer and runs the entire script.
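PHP 7 did not add a per-function time limit, so a silent stop like this usually comes down to a timeout whose default or enforcement changed between versions. A hedged starting point: print the usual suspects on both the 5.6 and 7.4 boxes and diff them, rather than asserting a diagnosis. The host and port of the HELO-style probe below are placeholders:

```php
<?php
// Settings that can silently cut off long-running calls. None of these is
// asserted as *the* cause; compare their values across the two PHP versions.
foreach (array('max_execution_time', 'default_socket_timeout', 'mysqlnd.net_read_timeout') as $key) {
    printf("%-26s %s\n", $key, var_export(ini_get($key), true));
}

// For a HELO-style check, pin explicit timeouts instead of relying on the
// defaults: a connect timeout on fsockopen() and a read timeout on the
// stream. Host and port below are placeholders.
$errno = 0; $errstr = '';
$fp = @fsockopen('127.0.0.1', 25, $errno, $errstr, 5.0); // 5 s connect timeout
if ($fp !== false) {
    stream_set_timeout($fp, 10); // 10 s read timeout for the SMTP dialogue
    fclose($fp);
}
```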
This question already has answers here:
"[notice] child pid XXXX exit signal Segmentation fault (11)" in apache error.log [closed]
(3 answers)
Closed 2 years ago.
I have read a lot about this error in different forums but still cannot find a solution.
My production server currently has:
Apache 2.4.25, PHP 5.6.40
Using the PhpSpreadsheet library, version 1.8.2, different Excel files are generated. For various reasons (large amounts of data and styles applied to the sheet cells), an xlsx file of at most 3 MB is exported with quite a long execution time (it can reach 30-40 minutes).
All of this worked until we moved to our new server and dockerized: its OS is CentOS 7, with one Docker container for Apache/PHP and another for MySQL.
Now, when generating these "large" files, the error.log trace shows the following error: [core:notice] [pid 8] AH00052: child pid 15829 exit signal Segmentation fault (11)
I tried to replicate the error using virtual machines on my local machine and on another one: I installed CentOS 7 and dockerized them, and even copied the exact configuration files from the production server. However, on those machines the error does not appear, and the file downloads correctly.
At this point I am desperate. Before trying any solution on the production server, I have tried every approach from the internet to replicate the error, but it simply cannot be reproduced.
I appreciate any information and help with this. I know that at this point PHP 5.6 is well past its end of life, but I cannot commit the work time to an upgrade if, in the end, it is not going to solve this incident.
It seems like a possible fix, and definitely an optimization, would be to get away from having an httpd child tied to anything that runs that long. I'd expect something like this to fail: the list of things that would want to time out such a request is long. I mean, is the user really sitting there for 40 minutes?
What about having your web interface trigger a PHP script (CLI)?
exec('php slowasf.php arg1 arg2 >/dev/null 2>&1 &');
You could look at a message queue to track progress. This is maybe more of a long comment than an answer, but then your question lacks code!
Finally, did you try giving it more RAM?
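To sketch the hand-off concretely (the script name and progress-file path are placeholders, echoing the exec() line above only in spirit): the web request starts the worker detached and returns immediately, and the UI polls a small status file that the worker updates as it goes.

```php
<?php
// Hypothetical hand-off: detach the long export, then let the browser poll.
$progress = sys_get_temp_dir() . '/export_progress.json';
file_put_contents($progress, json_encode(array('state' => 'queued', 'pct' => 0)));

// Fire-and-forget worker (commented out in this sketch; the worker would
// rewrite $progress as it runs and drop the finished xlsx somewhere
// downloadable):
// exec('php slowasf.php arg1 arg2 >/dev/null 2>&1 &');

// What the polling endpoint returns to the UI:
$status = json_decode(file_get_contents($progress), true);
echo $status['state'], "\n"; // still "queued" until the worker overwrites it
```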
OS: Windows 8.1
Laravel-version: 5.3.15
Hello,
I'm having a bit of trouble getting Laravel's scheduler to run properly on my machine and STAY RUNNING in a way where I can see some sort of output to indicate that it's actually working. Every time I run php artisan schedule:run it calls any listed commands once and then just dies. It doesn't repeat or anything. Example attempt below:
The Code:
The Result When Trying to Run the Scheduler:
What I've tried:
I've tried multiple suggested fixes from the link below, some of which are also referenced throughout Stack Overflow, including adding a batch file, setting PHP's path in my system variables, etc., as suggested in the link below by Kryptonit3:
https://laracasts.com/discuss/channels/general-discussion/running-schedulerun-on-windows
However, nothing works. Even in the "Windows Task Scheduler" tool it will just run once, end any further processing in the CLI, and then close the window. There is no actual indication that the scheduler is running in the background indefinitely (sort of like a background job, only without the queues).
Questions:
1 - With no indication (the echoed "I am Scheduled"), how do I know whether this is actually working?
2 - Why does this command die after it successfully completes once?
3 - Does my Laravel version have anything to do with the scheduler not running permanently when I try using php artisan schedule:run in the CLI or the Windows Task Scheduler?
4 - Mainly, what I'm trying to accomplish here is that every minute the system scans a single DB table's field. How do I accomplish this?
5 - Can someone please give me some clarification on this, or point me in the right direction, so that I can get this thing really running indefinitely without it just ending after the first run?
Note: I do not want to use Laravel Forge or any other external service for something so simple. I feel that would be overkill and unnecessary.
After doing some rather serious digging, I found that everything Kryptonit3 said in the step-by-step answer I referenced was correct. However, I did not know that I needed to restart my computer after creating a new task with the Windows Task Scheduler; I thought this would happen automatically. It would have saved me two days of debugging had I known, but regardless, if someone else comes across this problem, they will know to reboot the computer.
Reference:
https://laracasts.com/discuss/channels/general-discussion/running-schedulerun-on-windows
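For later readers: schedule:run exiting after one pass is by design. It is a one-shot dispatcher meant to be invoked every minute by an external scheduler (cron, or a Windows Task Scheduler task set to repeat each minute); it fires whatever is due and terminates. A dev-only sketch of keeping it firing from a single console window; the command and interval are parameters so nothing here depends on artisan being installed:

```php
<?php
// Dev-only loop standing in for cron / Task Scheduler: invoke the one-shot
// dispatcher, wait, repeat. In production you would let the OS scheduler
// call `php artisan schedule:run` once per minute instead of looping here.
function runSchedulerLoop($command, $intervalSeconds, $iterations)
{
    for ($i = 0; $i < $iterations; $i++) {
        passthru($command); // runs any tasks that are due right now, then returns
        if ($i + 1 < $iterations) {
            sleep($intervalSeconds); // normally 60
        }
    }
}

// e.g. runSchedulerLoop('php artisan schedule:run', 60, PHP_INT_MAX);
```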
I get a strange PHP bug on a PHP 5.6 / Symfony 2.7 project, running on a CentOS6 server through Apache.
I have a Symfony console command running as a service which launches some other console commands every 2 seconds. I use the Symfony Process component to launch the sub-processes and have timeout management.
Everything is done to avoid launching parallel processes from the main command.
The issue I have is that sometimes the PHP console commands don't stop after finishing their work. If I launch the commands by hand, everything runs correctly on the PHP side, but I don't get the prompt back after the PHP statements have finished unless I press Ctrl+C.
The issue happened a lot when the PHP version was 5.5; now, with PHP 5.6, it (only) happens randomly. When it happens, I can see a lot of stuck PHP sub-processes, probably launched by the main command.
I just can't find any explanation, since the PHP commands don't raise any error. The console just gets stuck, waiting for something to finish.
Does anybody have a possible solution to this issue?
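Not a root cause, but a containment sketch: whatever keeps the children alive (unclosed pipes, an unreaped child, something in the shutdown path), the parent can enforce a wall-clock budget and kill overruns, which is what Process::setTimeout() plus checkTimeout() do in the Symfony component. Here is the same idea with core proc_open so it runs standalone; the commands are placeholders:

```php
<?php
// Run a shell command with a hard wall-clock budget; kill it if it overruns.
// Returns the exit code, -1 if it had to be killed, or null on launch failure.
function runWithTimeout($cmd, $timeoutSeconds)
{
    $pipes = array();
    $proc = proc_open($cmd, array(1 => array('pipe', 'w'), 2 => array('pipe', 'w')), $pipes);
    if (!is_resource($proc)) {
        return null;
    }
    $deadline = microtime(true) + $timeoutSeconds;
    while (true) {
        $status = proc_get_status($proc);
        if (!$status['running']) {
            break; // finished on its own; exitcode is valid on this call
        }
        if (microtime(true) > $deadline) {
            proc_terminate($proc, 9); // SIGKILL: the child overran its budget
            break;
        }
        usleep(100000); // poll every 100 ms (this sketch ignores output draining)
    }
    foreach ($pipes as $pipe) {
        fclose($pipe);
    }
    proc_close($proc);
    return $status['running'] ? -1 : $status['exitcode'];
}
```

With the Symfony 2.7-era component, the equivalent is $process->setTimeout(30), polling $process->checkTimeout() while the process runs, and an explicit $process->stop() once it is done.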
I am developing a small-ish app in PHP. At some point a few days ago, PHP started to segfault on my dev machine (not a prod server) when I try to load and process some data in the app.
I attached gdb to try to see where the segfault happens, but basically got nowhere. The backtrace contains a bunch of call_func-esque frames (about 50), and the last function was an mpm_prefork thing. I thought it was mpm_prefork's fault, so I installed php-fpm and tried using mpm_event instead, but it still segfaults.
I noticed that if I remove all data from the database (I'm using SQLite), it no longer segfaults, so I tried exporting the data into a MySQL DB, but it still segfaults.
In conclusion, it would seem the segfault occurs when the data from the db are loaded and processed for output.
I am using PHP 5.5.12 and Apache 2.4.9, both of which are (I believe) newest versions.
My question is: can I somehow get PHP's stack printed when the segfault occurs? Or perhaps could I have PHP log each function call somewhere, to see which function call leads to the crash?
Also, does PHP have a stack limit? Am I possibly calling too many functions? (As far as I remember, there's no recursion involved in the process.)
Thank you.
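On both questions, some hedged notes: PHP itself has no fixed call-stack limit (deep recursion eventually exhausts the C stack and segfaults; with Xdebug installed, xdebug.max_nesting_level caps nesting instead), and Xdebug's function traces (xdebug_start_trace()) are the usual way to log every call. A dependency-free approximation using tick functions, where the log path and the sample workload are placeholders:

```php
<?php
// A crude PHP-level call trace without Xdebug: declare(ticks=1) fires the
// handler after nearly every statement; we log the enclosing function and
// flush immediately, so after a segfault the log's last lines show roughly
// where execution died.
declare(ticks=1);

$log = fopen(sys_get_temp_dir() . '/php_trace.log', 'w');
register_tick_function(function () use ($log) {
    $bt = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 2);
    $fn = isset($bt[1]['function']) ? $bt[1]['function'] : '{main}';
    fwrite($log, $fn . "\n");
    fflush($log); // make sure the line hits disk before any crash
});

// Placeholder workload standing in for "load rows and process for output".
function loadRows() { return range(1, 3); }
foreach (loadRows() as $row) {
    $tmp = $row * 2; // processing step
}
```

After a crash, the tail of the log names the last user-level function that was executing. At the C level, gdb with the .gdbinit shipped in the PHP source tree (its zbacktrace macro) can print the PHP-level stack from a core dump.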