I'm hitting a strange PHP bug on a PHP 5.6 / Symfony 2.7 project running on a CentOS 6 server through Apache.
I have a Symfony console command running as a service which launches some other console commands every 2 seconds. I use the Symfony Process component to launch the sub-processes and have timeout management.
Everything is done to avoid launching parallel processes from the main command.
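The launch-with-timeout pattern described above can be sketched with plain proc_open (an assumption for illustration; the real project uses the Symfony Process component, whose setTimeout()/wait() does the same job):

```php
<?php
// Sketch: run a sub-process with a hard timeout so a child that never
// exits cannot hang the parent command. Returns the child's exit code,
// or -1 if it had to be killed.
function runWithTimeout($cmd, $timeout)
{
    $pipes = [];
    $proc = proc_open($cmd, [1 => ['pipe', 'w'], 2 => ['pipe', 'w']], $pipes);
    $start = microtime(true);
    while (true) {
        $status = proc_get_status($proc);
        if (!$status['running']) {             // child exited on its own
            foreach ($pipes as $p) { fclose($p); }
            proc_close($proc);
            return $status['exitcode'];
        }
        if (microtime(true) - $start > $timeout) {
            proc_terminate($proc, 9);          // SIGKILL a stuck child
            foreach ($pipes as $p) { fclose($p); }
            proc_close($proc);
            return -1;                         // sentinel: timed out
        }
        usleep(100000);                        // poll every 100 ms
    }
}
```

One caveat the sketch leaves out for brevity: a child that writes a lot of output can block on a full pipe before it can exit, so real code should also drain $pipes while polling.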
The issue is that sometimes the PHP console commands don't stop after finishing their work. If I launch the commands by hand, everything runs correctly on the PHP side, but I don't get the prompt back after the PHP statements finish unless I press Ctrl+C.
The issue happened frequently when the PHP version was 5.5, but now with PHP 5.6 it (only) happens randomly. When it does, I can see a lot of stuck php sub-processes, probably launched by the main command.
I just can't find any explanation, since the php commands don't raise any errors. The console simply gets stuck, waiting for something to finish.
Does anybody have a possible solution to this issue?
Related
Recently we updated our server from PHP 5.6 to PHP 7.4, and since this upgrade we have been experiencing some very strange behaviour in some scripts.
The scripts themselves are fully PHP 7 compatible; no errors whatsoever are printed or logged when the problem occurs.
What happens is as follows:
A script is started and calls several functions. When one function simply takes too long to finish, the main script stops; there is no error or output indicating that anything went wrong.
It does not matter whether we run the script via a GUI or via the CLI; the result is the same in both cases.
The script stops/breaks (on the CLI you are back at the prompt) every time a called function (it does not matter which function) simply takes too long to finish. As mentioned, the cause is NOT a PHP code error; the code is valid.
When the same script is run using PHP 5.6, it keeps waiting until the called function is finished and then continues normally, as it is supposed to.
It looks like there is a (new) setting somewhere in PHP 7 that limits the time a called function may run; otherwise I cannot explain this behaviour. The problem is figuring out which setting this is exactly and how to change it; the obvious settings we have already changed.
Does anyone have an idea where to look or search for this kind of setting?
The system is running on CentOS 8 using PHP 7.4.13 (or PHP 5.6). With an older PHP 7 version (7.2) the problem is the same; only PHP 5.6 does not have this problem at all.
Error reporting is turned on, and no errors are logged whatsoever. If we run a script on PHP 7.4, it stops from the CLI after a short period (1-2 minutes). When running the same script on PHP 5.6, it runs for a long time (which it should in this case). Our developers found that when a function which calls another function to check whether an email account exists (a HELO check) takes longer than 2-3 seconds, the entire PHP script is stopped under 7.4, whereas PHP 5.6 just waits longer and runs the entire script.
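One concrete place to start looking, sketched below under the assumption that the script dies while blocked on the network during the HELO check: dump the timeout-related ini values with the exact binary the failing script uses, since CLI defaults differ from the web SAPI.

```php
<?php
// Quick check of the timeout-related settings whose values can differ
// between PHP installs; run this with the same binary as the failing script.
foreach (['max_execution_time', 'max_input_time', 'default_socket_timeout'] as $key) {
    printf("%s = %s\n", $key, ini_get($key));
}
// On the CLI, max_execution_time defaults to 0 (unlimited), so a script
// killed mid-network-call is more likely hitting default_socket_timeout,
// which caps blocking socket reads (60 seconds by default):
ini_set('default_socket_timeout', '300'); // assumption: 5 minutes is enough
```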
I have an XML database that I want to manage independently of users on my website. Looking into the matter, it appears that I should write a daemon script to manage my database. That is all fine and dandy, but I feel like I'm opening a can of worms. I wanted to write my daemon script in PHP, so I looked into PCNTL, but I quickly learned that PCNTL is not suited for web servers. So now I am stumped. How can I get a daemon to run on my server? Do I need to learn another language? I only want to write my own scripts, but I feel lost. I would prefer to write my daemon in PHP as I am familiar with the language.
I have been researching everything from PCNTL, CLI, SO questions, numerous articles on daemon processes... etc
I am running PHP 5.6.32 (CLI) on Windows 7, with Apache via XAMPP 5.6.32.
EDIT: I also have Windows set up to run PHP from the command prompt.
There's nothing wrong with running a PHP daemon; however, it's not the fastest thing, especially before version 7.0. You can proceed in two ways:
Using cron jobs: if you're on a Unix system, crontab will be fine. This way you specify the interval at which the system automatically executes the specified script, which then exits.
A true daemon: first change max_execution_time to 0 (infinite) in php.ini, then make set_time_limit(0); the first call in your daemon (it only needs to run once). However, if something fails, such as an uncaught thrown error, the script will exit and you will need to start it again manually. If you wrap the work in try...catch inside a while loop, be careful that a permanently failing job doesn't spin endlessly. Execute the script with php -f daemon.php.
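A minimal sketch of the daemon loop from the second option (PHP 7+ syntax; $work is a hypothetical callable standing in for the real job, and $maxIterations exists only so the loop can be exercised in a test; production code would pass PHP_INT_MAX):

```php
<?php
// Daemon-loop sketch: keep running the job at a fixed interval and
// survive individual failures instead of exiting on the first one.
set_time_limit(0); // lift the execution-time limit once, at startup

function runDaemon(callable $work, $intervalSeconds, $maxIterations)
{
    $failures = 0;
    for ($i = 0; $i < $maxIterations; $i++) {
        try {
            $work();
        } catch (Throwable $e) { // Throwable also covers PHP 7 engine Errors
            $failures++;         // log and keep going instead of dying
            error_log('daemon iteration failed: ' . $e->getMessage());
        }
        sleep($intervalSeconds); // avoid a tight busy loop
    }
    return $failures;
}
```

In production this would be invoked as `runDaemon(function () { doWork(); }, 2, PHP_INT_MAX);` from the script started with php -f daemon.php, ideally under a process supervisor so a crash still gets restarted.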
I'm running PHP CLI commands inside an Ubuntu VM via SSH. The problem is that certain commands have started causing segmentation faults.
Oddly, it only happens when the command is run via SSH. If I run it directly inside the VM, there's no problem. Also, it doesn't happen on another machine running the same VM.
When I run the PHP command (via SSH from the host machine), the only thing I see in the logs on the VM is this:
Mar 22 14:53:00 local kernel: [ 1868.863475] php[4967]: segfault at 7ff60c200000 ip 00007ff6202b284e sp 00007ffce7f19448 error 4 in libc-2.23.so[7ff620213000+1c0000]
This seems to indicate that the problem is actually not with PHP, but with the standard library. If that's the case, how can I go about debugging this, and/or how should I submit a bug report?
Details, if they're relevant:
My host machine is also Ubuntu, and the VM is DrupalVM using the beet/box base box. The command I'm actually running is blt setup (see https://github.com/acquia/blt), which calls drush cex inside the VM via SSH.
It's this drush cex command that recently started causing the seg fault.
First option:
strace mycommand
...and watch what happens
Second option:
gdb mycommand
...and step through it
Third option:
gather the dumps and analyze them.
- https://stackoverflow.com/questions/17965/how-to-generate-a-core-dump-in-linux-on-a-segmentation-fault
The first one is usually sufficient if you have some intuition about what's going on. In your case it's probably something silly like terminal capability negotiation or some other not-really-core-relevant thing, but you never know... it could be serious.
The second option is the most complicated.
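For the third option, most distributions ship with core dumps disabled, so they have to be enabled before reproducing the crash. A typical sequence (the failing command is a placeholder; where the core file lands depends on kernel.core_pattern):

```shell
ulimit -c unlimited                # allow core files in this shell
cat /proc/sys/kernel/core_pattern  # shows where/how the core file is named
php the-failing-command.php        # reproduce the segfault
gdb "$(command -v php)" core       # load the dump; `bt` prints a C backtrace
```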
Last year I purchased an encrypted script which runs two cron jobs. A month ago the crons stopped working, and when I talked to the hosting company they said it's a script problem. The PHP cron file works fine without any errors when visited in a browser, but the script provider told me this issue should be fixed by the hosting service and refused to help!
Here is the command used; it runs every 10 minutes: /home/username/public_html/cron/cron.php
cPanel Version 64.0 (build 24)
Apache Version 2.4.25
PHP Version 5.6.30
My question: is it true that upgrading the PHP version will affect the cron jobs, and how can I solve this?
Thanks
In short, yes, upgrading PHP can affect your scripts -- the crons aren't run by Apache or PHP; the crons run at the OS level.
Your PHP upgrade is most likely affecting the crons in one of two ways:
The upgrade was large, like PHP 5.6 to PHP 7.0, and there's a deprecation warning somewhere (which will show up in the crons' log), or the script is running some code that's now fully removed; most likely a query or a class/method named after a reserved word. Your logs will have more info; just make sure you have debugging turned on, otherwise your errors will be suppressed.
The new PHP settings from the upgrade have disabled some behaviour an older PHP version allowed, such as getting away with empty or unassigned variables, and now your script is running into errors (i.e. using a variable that doesn't exist, such as $_REQUEST['something'], which would previously have been empty but now raises an error that affects the rest of the script).
To fix this you need to know what the problem is. The easiest way is to access the log files that crons often create. If you don't get those with your host, ask them for them, or ask them to send you a copy of the error being generated -- a quick Google search on the error will tell you what the problem is. But without knowing more about the script or the error log, you probably won't get a better answer.
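Separately from the PHP errors above, it's worth double-checking the crontab entry itself: a bare path to a .php file only works if the file is executable and has a shebang, so the safe form names the interpreter explicitly. A typical entry (the php binary path is an example; adjust it to the host):

```shell
# Every 10 minutes, run the script under the PHP CLI binary and keep a
# log of its output for debugging (paths are examples):
*/10 * * * * /usr/local/bin/php /home/username/public_html/cron/cron.php >> /home/username/cron.log 2>&1
```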
The old command is working; it was just me. I copy-pasted from my old backup and forgot the php at the start of the command! Nothing has changed; the command should look like this, e.g.: php /home/username/public_html/cron/cron.php
OS: Windows 8.1
Laravel-version: 5.3.15
Hello,
I'm having a bit of trouble getting Laravel's scheduler to run properly on my machine and STAY RUNNING in a way where I can see some sort of output indicating that it's actually working. Every time I run php artisan schedule:run it calls any listed commands once and then just dies. It doesn't repeat or anything. Example attempt below:
The Code:
The Result When Trying to Run the Scheduler:
What I've tried:
I've tried multiple suggested fixes from the link below, some of which are also referenced throughout Stack Overflow, including adding a batch file, setting PHP's path in my system variables, etc., as suggested in the link below by Kryptonit3:
https://laracasts.com/discuss/channels/general-discussion/running-schedulerun-on-windows
However, nothing works. Even in the Windows Task Scheduler tool it will just run once, end any further processing in the CLI, and then close the window. There is no actual indication that the scheduler is running in the background indefinitely (sort of like a background job, only without the queues).
Questions:
1 - With no indication (the echoed "I am Scheduled"), how do I know whether this is actually working?
2 - Why does attempting to run this command die after it successfully completes once?
3 - Does my Laravel version have anything to do with the scheduler not running permanently when I try php artisan schedule:run in the CLI or Windows Task Scheduler?
4 - Mainly, what I'm trying to accomplish here is that every minute the system scans a single DB table's field. How do I accomplish this?
5 - Can someone please give me some clarification on this, or point me in the right direction so that I can get this thing really running indefinitely without it just ending after the first run?
Note: I do not want to use Laravel Forge or any other external service for something so simple. That would be overkill and unnecessary, I feel.
After doing some rather serious digging, I found that everything Kryptonit3 said in the step-by-step answer I referenced was correct. However, I did not know I needed to restart my computer after creating a new task with the Windows Task Scheduler; I thought this would happen automatically. It would have saved me two days of debugging had I known, but regardless, if someone else comes across a similar problem they will know to reboot the computer.
Reference:
https://laracasts.com/discuss/channels/general-discussion/running-schedulerun-on-windows
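For anyone landing here later: php artisan schedule:run is designed to fire whatever tasks are due at that moment and then exit, so something external (cron on Unix, or the Windows Task Scheduler task from the answer above) has to invoke it every minute. With that in place, the once-a-minute DB scan from question 4 is just a scheduled command. A sketch of app/Console/Kernel.php using Laravel 5.3's documented scheduling API (`table:scan` is a hypothetical artisan command name):

```php
<?php
// Sketch of app/Console/Kernel.php (Laravel 5.3-era API). This does not
// keep a process alive by itself: each schedule:run invocation checks
// what is due, runs it once, and exits.
namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // `table:scan` is a hypothetical command that reads the DB field;
        // ->everyMinute() marks it as due on every schedule:run pass.
        $schedule->command('table:scan')->everyMinute();
    }
}
```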