I have a Symfony application on my Debian 8 Vagrant box which uses QLess (https://github.com/seomoz/qless) for background tasks. The QLess background workers are run by Supervisor.
I have an issue where the job handlers (Symfony commands) are somehow cached.
It's not an NFS problem; the source files are identical on guest and host.
The caching problem can only be solved by restarting the QLess workers.
opcache.enable_cli is also set to false in php.ini.
Do you have any idea what could cause this problem?
The reason turned out to be very simple. The worker is a long-running PHP process: all the classes and libraries are loaded into memory when the worker starts, which is why I couldn't see the changes.
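For anyone hitting the same thing, the practical fix is to restart the workers on every deploy so the long-running PHP processes pick up the new code. A minimal sketch, assuming the workers are defined under a Supervisor program group called qless-worker (that name is hypothetical; use whatever your supervisord config actually defines):
# Restart all QLess worker processes after deploying new code
sudo supervisorctl restart qless-worker:*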
I'm working on a "legacy" Symfony application (it uses Symfony 4, but it's no longer maintained). The problem is that the cache folder grows every day, reaching 50 GB after a few months.
It's running in the DEV environment, but since the original developer left the company we would like to just "patch" the problem by cleaning the cache after a given time instead of changing the environment to a production one (which could lead to different problems and might not solve the cache issue anyway), just like rotating Symfony logs, where you can configure Symfony to log to a different file each day and remove old files automatically.
There is no ready-made way to do this from within the application.
Just clear the cache every now and then (bin/console cache:clear). You could even schedule this as a cron job to run overnight; the process usually takes only a few seconds at most (see the example below).
Make sure you run the command as the same user that runs the application (e.g. www-data); if you run the command as root, the cache will be warmed up with the wrong permissions.
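A minimal crontab sketch, assuming the project lives under /var/www/myproject (a hypothetical path) and the application runs as www-data:
# /etc/cron.d/symfony-cache-clear: clear the Symfony cache nightly at 03:00,
# running as www-data so the warmed cache ends up with the right permissions
0 3 * * * www-data /usr/bin/php /var/www/myproject/bin/console cache:clear --env=dev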
Mind you, running a production system in development mode is inherently dangerous, as it's more likely to leak configuration data in unexpected situations.
So I might be misunderstanding something here, and I'm not 100% sure whether it's a Horizon problem or a Supervisor problem, but I was looking to see if anyone has experienced this before.
Essentially (and I don't think this is just limited to Horizon processes) it appears that Supervisor just doesn't attempt to restart any processes (at least, not all of the time).
I've currently got it running on 8 different servers (some our own dedicated hosting, others Digital Ocean droplets, and an AWS EC2 instance).
There are a variety of Laravel, PHP, and Supervisor versions (from Laravel 5.7 with Supervisor 4.0.4 to Laravel 7.2.1 with Supervisor 3.4.0, which seems to be the latest version available in the CentOS repository and might need updating anyway).
This is an example config file that I'm using, which is essentially the same as the one provided in the Laravel docs.
[program:horizon]
process_name=%(program_name)s
command=/usr/bin/php /var/www/<domain>/artisan horizon
autostart=true
autorestart=true
user=centos
redirect_stderr=true
stdout_logfile=/var/www/<domain>/log/horizon.log
stopwaitsecs=3600
I don't believe the issue to be memory related as usage is usually pretty low at the time and I'm not seeing errors in that regard.
The log file is usually just the same thing over and over again:
Horizon started successfully.
Which I assume is from whenever it has actually managed to restart, or when I've done it manually.
I've read another question on here that suggested that the user had the wrong command in the config, but I've verified this and used full paths to make sure it's correct.
Hopefully, someone here has more experience with this than I do?
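For reference, a quick way to see what Supervisor itself thinks happened is to check the process state and its own log. A short sketch, assuming the program is named horizon as in the config above (adjust the log path if your supervisord.conf points elsewhere):
# Show Supervisor's view of the process: state, uptime, last exit status
sudo supervisorctl status horizon
# supervisord's own log records every spawn/exit attempt and the reason for it
sudo tail -n 50 /var/log/supervisor/supervisord.log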
With our CD process, we have configured the following Drush commands to be executed after the code sync on the servers:
drush @hostname rr
drush @hostname cc all
drush @hostname fra -y
drush @hostname updb -y
Now I want to know whether executing the above commands causes an outage.
Thanks
This depends largely on exactly what code you push. The more custom the code, the more likely something may break. I've seen a lot of sites running similar commands as part of their deployment routine without a problem. Most likely it's drush cc all that may abort due to memory limit exhaustion, but this won't break your site.
To ensure your commands will run successfully in your live environment, I'd recommend implementing some sort of continuous integration, for example CircleCI (1,500 free build minutes per month) or TravisCI (free for open-source projects). Here is an example: https://github.com/leymannx/drupal-circleci-behat. Though it's for Drupal 8, I guess you'll get the idea.
With that you basically set up your site from scratch inside a temporary, configurable server (Docker), import a dummy database, run your commands, maybe run some tests (Behat), and then ONLY when everything went fine is the site deployed to the live server, where your deployment commands run again.
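A rough sketch of the kind of script such a pipeline could run against the throwaway environment before anything is deployed; the @ci alias, the dump file, and the Behat profile are hypothetical, and the commands mirror the ones from the question:
#!/usr/bin/env bash
set -euo pipefail
# Rebuild the temporary site from a dummy/sanitised database dump
drush @ci sql-drop -y
drush @ci sql-cli < dummy-database.sql
# Run the same commands you run on deployment, so failures surface here first
drush @ci rr
drush @ci cc all
drush @ci fra -y
drush @ci updb -y
# Optionally run Behat tests against the rebuilt site
vendor/bin/behat --profile=ci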
Depending on how often those commands run and how big the site is, they can put a strain on the server and cause an outage. If this only happens on deployment it can still cause an outage depending on a range of factors, but that can be controlled more easily, for example by scheduling the deployment for a time when there isn't much traffic.
Check out a list of drush commands at drupalreference.com
I'm using Laravel 4 and its queue implementation for some asynchronous tasks, and I want to use supervisord to manage the worker processes (basically artisan queue:listen), like I've done in the past with other frameworks.
When I run supervisord manually, it starts up the worker processes just fine, but when I run it as a service, the worker processes die immediately with this message:
2013-07-25 09:51:32,162 INFO spawned: 'myproject' with pid 4106
2013-07-25 09:51:32,875 INFO exited: myproject (terminated by SIGSEGV (core dumped); not expected)
There's no stdout or stderr output.
Here's the supervisord configuration for the workers (nothing fancy):
[program:myproject]
command=php artisan queue:listen --queue=queue_name iron --env=staging
directory=/home/myuser/myproject
stdout_logfile=/var/log/supervisord/myproject.stdout
stderr_logfile=/var/log/supervisord/myproject.stderr
The server it's running on is a 64-bit CentOS 6.4 with PHP 5.3.25 from cPanel/WHM (not my choice; it's a server that was idle and about which we can't do much).
Any ideas on what could be causing the problem?
I had this issue a few months back. For the life of me I can't accurately remember what the solution was, but I'm reasonably sure my issue was that I needed to create the log files for it to write to; it wouldn't create them itself.
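If that is indeed the cause, something along these lines should take care of it (the paths match the config above; adjust the ownership to whatever user supervisord writes the logs as):
# Create the log directory and the log files Supervisor expects to write to
sudo mkdir -p /var/log/supervisord
sudo touch /var/log/supervisord/myproject.stdout /var/log/supervisord/myproject.stderr
# supervisord usually runs as root when started as a service; change this if yours doesn't
sudo chown root:root /var/log/supervisord/myproject.stdout /var/log/supervisord/myproject.stderr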
I know it is an old thread; I came across this issue after Laravel had been working just fine.
There was a compiled.php in the bootstrap folder. I deleted it and everything worked fine (I knew it caused issues in Laravel 5).
Maybe it will be helpful for someone.
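For what it's worth, Artisan can remove that file for you with the clear-compiled command, and restarting the supervised workers afterwards makes the long-running processes pick the change up (the program name myproject is taken from the config above):
# Remove bootstrap/compiled.php
php artisan clear-compiled
# Restart the queue workers so they reload the code
sudo supervisorctl restart myproject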
Let's say you have a project running on Apache. I use Capistrano to deploy new code and update httpd.conf and other configuration files, and I then reload all of my services (reloading the configs).
How is rollback managed? I wouldn't assume cap rollback would put the old configs in place and reload. Is this possible? Can you show me an example?
Is there a better way of managing configuration?
Capistrano comes with built-in recipes to manage Rails application rollbacks. They may work for your PHP/Apache deployment...but if they don't, you can easily write your own Cap recipes in Ruby. You'll have to try it out on a test server to see how it works.
I ended up making my own hooks into deploy_code and on_rollback that copied the Apache conf from the repository and reloaded Apache.
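In case it helps, the hook body essentially boils down to something like this, run on the server for both a deploy and a rollback (the release path and config locations are hypothetical; CentOS-style paths shown):
# Copy the Apache config shipped with the currently linked release into place,
# sanity-check it, and reload Apache
sudo cp /var/www/myproject/current/config/httpd.conf /etc/httpd/conf/httpd.conf
sudo apachectl configtest && sudo service httpd reload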