Supervisor not restarting Horizon Queues - php

So I might be misunderstanding something here, and I'm not 100% sure whether it's a Horizon problem or a Supervisor problem, but I was looking to see if anyone had experienced this before.
Essentially (and I don't think this is just limited to Horizon processes) it appears that Supervisor just doesn't attempt to restart any processes, or at least not all of the time.
I've currently got it running on 8 different servers (some on our own dedicated hosting, others Digital Ocean droplets, and an AWS EC2 instance).
There's a variety of Laravel, PHP and Supervisor versions in play (from Laravel 5.7 with Supervisor 4.0.4 to Laravel 7.2.1 with Supervisor 3.4.0, which seems to be the latest version available in the CentOS repository and might need updating anyway).
This is an example of the config file I'm using, which is essentially the same as the one provided in the Laravel docs:
[program:horizon]
process_name=%(program_name)s
command=/usr/bin/php /var/www/<domain>/artisan horizon
autostart=true
autorestart=true
user=centos
redirect_stderr=true
stdout_logfile=/var/www/<domain>/log/horizon.log
stopwaitsecs=3600
I don't believe the issue is memory-related, as usage is usually pretty low at the time and I'm not seeing any errors in that regard.
The log file is usually just the same thing over and over again:
Horizon started successfully.
I assume that's from whenever it has actually managed to restart, or from when I've restarted it manually.
I've read another question on here suggesting the user had the wrong command in the config, but I've verified mine and used full paths to make sure it's correct.
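For what it's worth, this is roughly how I check the process state and Supervisor's own log when it happens (just a sketch; the log path assumes a fairly default CentOS install, and "horizon" is the program name from the config above):
# Ask Supervisor what state it thinks the program is in (RUNNING, FATAL, BACKOFF, ...)
sudo supervisorctl status horizon
# Tail the program's output as Supervisor captures it
sudo supervisorctl tail -f horizon
# supervisord's main log records spawn/exit/giving-up events
sudo tail -n 100 /var/log/supervisor/supervisord.log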
Hopefully, someone here has more experience with this than I do?

Related

QLess worker code caching

I have a Symfony application on my Debian 8 Vagrant box which uses QLess (https://github.com/seomoz/qless) for background tasks. The QLess background workers are run by Supervisor.
I have an issue where the job handlers (Symfony commands) are somehow cached.
It's not an NFS problem; the source files are identical on guest and host.
The caching problem can only be solved by restarting the QLess workers.
opcache.enable_cli is also set to false in php.ini
Do you have any idea what can cause this problem?
The reason turned out to be very simple. The worker is a long-running PHP process, so all the libraries stay loaded in memory from the moment the worker starts; that's why I couldn't see the changes.
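So the practical fix is simply to restart the workers whenever the code changes, e.g. as a post-deploy step. A rough sketch, assuming the workers are defined under a Supervisor group called qless (that name is made up, substitute whatever your config uses):
# Restart every process in the (hypothetical) qless group so the new code gets loaded
sudo supervisorctl restart qless:*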

Do drush commands cause an outage?

With our CD process, we have configured the following drush commands to be executed after the code sync on the servers:
drush @hostname rr
drush @hostname cc all
drush @hostname fra -y
drush @hostname updb -y
Now I want to know whether executing the above commands causes an outage.
Thanks
This depends largely on what code you push exactly. The more custom the code, the more likely something may break. I've seen a lot of sites running similar commands as part of their deployment routine without a problem. Most likely it's drush cc all that may abort due to memory limit exhaustion, but this won't break your site.
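If you do hit the memory limit there, you can raise it for just that invocation; one way to do it (a sketch, and the 512M figure is arbitrary) is:
# Run the cache clear with a higher PHP memory limit for this command only
php -d memory_limit=512M $(which drush) @hostname cc all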
To ensure your commands will run successfully in your live environment, I'd recommend implementing some sort of continuous integration, for example CircleCI (1,500 build minutes free per month) or Travis CI (free for open source projects). Here is an example: https://github.com/leymannx/drupal-circleci-behat. Though it's for Drupal 8, I guess you'll get the idea.
With that, you basically set up your site from scratch inside a temporary, configurable environment (Docker), import a dummy database, run your commands, maybe run some tests (Behat), and ONLY when everything went fine is the site deployed to the live server, where your deployment commands run again.
Depending on how often those commands run and how big the site is, they can put a strain on the server and cause an outage. If they only run on deployment they can still cause an outage depending on a range of factors, but that is easier to control, for example by deploying at a time when there isn't much traffic.
Check out a list of drush commands at drupalreference.com

Supervisor alternatives on shared hosting

I have deployed my app on the shared host HostGator. I have successful SSH access; however, I can't install Supervisor to manage queue processing, as the command sudo apt-get install supervisor always returns errors. I contacted support and was told that I can't run sudo commands on the shared-hosting "cloud" plan and would have to move to a VPS or dedicated server, which I can't do at this time.
My question is: is there any alternative I can use to manage the queue processing without Supervisor, or another way to work around this? Has anybody been through this and found a solution?
I was thinking of setting up a cron job that runs php artisan queue:work every morning, but is this good practice?
Thanks in advance, any help is appreciated.
For anyone out there who is going through something similar: I managed this with cron jobs that run every hour, but you have to migrate the failed-jobs table to make sure that failed jobs are handled and don't stop the queue processing. Even though this is not ideal, it works. The ideal solution is to consider VPS hosting, or to combine Laravel Forge with DigitalOcean, for example, as mentioned in the Laravel docs.
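For reference, the kind of crontab entry I mean looks something like this (the paths are placeholders for your own layout, and --stop-when-empty, available in more recent Laravel versions, makes the worker exit once the queue is drained so the hourly runs don't stack up):
# Hourly queue run; adjust the PHP binary and app paths to your hosting account
0 * * * * /usr/local/bin/php /home/<user>/<app>/artisan queue:work --stop-when-empty >> /home/<user>/<app>/storage/logs/queue.log 2>&1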
Cheers.

Laravel queued jobs don't appear in new relic if worker runs as daemon

I noticed that the queued jobs do not appear in New Relic as transactions of any kind.
After digging for a bit I found that if I run my artisan queue workers "straight" they appear just fine, but if I run them as daemons (that's what I have set for my artisan queue:work commands in the supervisord config) they do not.
Why does it work that way? Is there anything that can be done about it?
I want to keep them with --daemon set to avoid bootstrapping the framework for every single job. However, being able to see what's going on in New Relic is important as well.
Scheduled commands and regular http requests seem to be tracked just fine.
I'm running Laravel 5.2 on several forge servers with both php 5.6 and 7.0.
Thank you
New Relic added out-of-the-box instrumentation support for Laravel Queues as an experimental feature in version 6.6.0. Check if your agent version is at least 6.6.0 and then add this property to your newrelic.ini:
newrelic.feature_flag=laravel_queue
For more info, check out the release notes:
https://docs.newrelic.com/docs/release-notes/agent-release-notes/php-release-notes/php-agent-660169
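A quick way to check which agent version is loaded (assuming your CLI PHP uses the same configuration as the workers) is to ask PHP for the extension version:
# Prints the New Relic extension version, or nothing if the extension isn't loaded
php -r 'echo phpversion("newrelic"), PHP_EOL;'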

Laravel queue listener with supervisord results in core dump

I'm using Laravel 4 and its queue implementation for some asynchronous tasks, and I want to use supervisord to manage the worker processes (basically artisan queue:listen), like I've done in the past with other frameworks.
When I run supervisord manually, it starts up the worker processes just fine, but when I run it as a service, the worker processes die immediately with this message:
2013-07-25 09:51:32,162 INFO spawned: 'myproject' with pid 4106
2013-07-25 09:51:32,875 INFO exited: myproject (terminated by SIGSEGV (core dumped); not expected)
There's no stdout or stderr output.
Here's the supervisord configuration for the workers (nothing fancy):
[program:myproject]
command=php artisan queue:listen --queue=queue_name iron --env=staging
directory=/home/myuser/myproject
stdout_logfile=/var/log/supervisord/myproject.stdout
stderr_logfile=/var/log/supervisord/myproject.stderr
The server it's running on is a CentOS 6.4 64-bit box with PHP 5.3.25 from cPanel/WHM (not my choice; it's a server that was idle and about which we can't do much).
Any ideas on what could be causing the problem?
I had this issue a few months back. For the life of me I can't accurately remember what the solution was, but I'm reasonably sure my issue was that I needed to at least create the log files for it to write to; it wouldn't create them itself.
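In other words, something along these lines before starting the service (the paths match the config above; adjust them and the ownership to whatever user supervisord runs as):
# Pre-create the log directory and files referenced by the [program:myproject] section
sudo mkdir -p /var/log/supervisord
sudo touch /var/log/supervisord/myproject.stdout /var/log/supervisord/myproject.stderr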
I know it is an old thread, but I came across this issue after Laravel had been working just fine.
There was a compiled.php in the bootstrap folder. I deleted it and everything worked again (I knew that file caused issues in Laravel 5).
Maybe it will be helpful for someone.
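Instead of deleting the file by hand, Artisan also has a command that removes the compiled class file, which should amount to the same thing; run it from the project root:
php artisan clear-compiled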
