beanstalkd default wal directory - php

We are using Laravel with beanstalkd for queuing in our app. I am stuck at the point where this command:
php artisan queue:listen
keeps receiving jobs; in other words, there are a lot of jobs and I have no idea how to clear the queue.
The problem is that I started beanstalkd using this command:
beanstalkd -z 1024*1024
which does not specify a WAL (binlog) directory.
I have been searching for the whole of last week for a way to clear the beanstalkd work queue, but have found nothing.
Tips
I am running this on Mac OS X Yosemite.
Restarting the beanstalkd service did not solve it.
I don't store jobs in the database, so the flush command is not the one. (I don't know whether Laravel does that without me knowing, but I don't think so.)
I delete the jobs when I am done with them, but the app generates a lot of jobs.

If you didn't use the -b option, then restarting beanstalkd (again without -b) should help, since without a binlog the jobs exist only in memory.
Now, if restarting for some reason doesn't work for you and you're using Laravel 5.x, you can consider installing the artisan-beans package and using the php artisan beans:purge command to clean up your queue.
UPDATE: Since you're on Laravel 4.2, you can install the dependency-free CLI tool beanstool instead. Here's how to install v0.2.0 on OS X:
wget https://github.com/tyba/beanstool/releases/download/v0.2.0/beanstool_v0.2.0_darwin_amd64.tar.gz
tar -xvzf beanstool_v0.2.0_darwin_amd64.tar.gz
cp beanstool_v0.2.0_darwin_amd64/beanstool /usr/local/bin/
and then run this in bash
for i in {1..N}; do beanstool delete -t default --state=ready; done
Replace N with the number of jobs you want to delete at once and default with the name of your queue (tube).
If you wonder how many jobs are currently in the queue, run
beanstool stats
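One caveat with the deletion loop above: bash performs brace expansion before variable expansion, so {1..N} only works when N is replaced with a literal number. A small sketch of the behavior, with seq as the variable-friendly alternative (the beanstool call is shown as an echo placeholder):

```shell
N=3
# {1..$N} is NOT expanded by bash: brace expansion runs before $N is
# substituted, so the braces stay literal and only $N is replaced.
echo {1..$N}                 # prints: {1..3}
# seq works with a variable bound; each pass would call beanstool once:
for i in $(seq 1 "$N"); do
  echo "would run: beanstool delete -t default --state=ready (pass $i)"
done
```

In practice you would replace the echo in the loop body with the actual beanstool delete command.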

Besides the accepted answer, I found another solution, which is to stop (CTRL+C) the command:
beanstalkd -z 1048576
and start it again with the -b option:
beanstalkd -z 1048576 -b ~/btd_data
This also solved the problem.

Related

Cron running "docker exec" makes my server freeze

I need help to solve a problem with a cron running a docker exec command.
After setting up this cron job, my server sometimes stops responding: no web requests are handled and no SSH connection is possible. I have to restart the server to get it back. This usually happens 3 or 4 times per day.
My cron is setup in my host's crontab :
* * * * * docker exec -w /home/current myphpapp-container bash -c "php artisan schedule:run >> storage/logs/schedule.log"
I'm fairly sure the cron job is at fault, because I never had this problem before installing it, and it doesn't occur when I disable the cron script.
Docker version is "18.06.3-ce".
The container is a "php:8.0-fpm".
OS is "Debian GNU/Linux 8 (jessie)".
I searched syslog and other logs but did not find anything interesting. My cron runs every minute, but I don't even see the load increasing progressively over time. I'm a bit stuck...
Do you have any ideas? Where should I look to find relevant logs?
OK, finally, it looks like a problem related to Docker Engine 18.06. I created a fresh server with an up-to-date OS and Docker Engine, and the problem is gone.
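Independent of the engine upgrade, a cron job that execs into a container every minute can pile up overlapping runs if one of them hangs. A defensive sketch of the same crontab entry, assuming flock and timeout are available on the host (the lock file path is an arbitrary choice):

```shell
# crontab entry (sketch): flock -n skips this run if the previous one still
# holds the lock, and timeout 55 kills a run that hangs past ~one minute.
* * * * * flock -n /tmp/schedule.lock timeout 55 docker exec -w /home/current myphpapp-container bash -c "php artisan schedule:run >> storage/logs/schedule.log"
```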

How to restart supervisor for a Laravel deployment?

I currently use a cron job to call php artisan queue:work --once every minute to work on my jobs queue in production.
I would like to use supervisor instead to handle my queues.
In the docs, in the supervisor-configuration section, it states:
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process. You may gracefully restart all of the workers by issuing the queue:restart command:
php artisan queue:restart
This command will instruct all queue workers to gracefully "die" after they finish processing their current job so that no existing jobs are lost. Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
I don't understand the last sentence. So let's say I have installed and configured Supervisor as described there, and I manually logged into the server over SSH and started Supervisor:
sudo supervisorctl start laravel-worker:*
Do I need to call php artisan queue:restart on deployment? If so, that will only kill the current workers; how do I tell Supervisor to restart the queue workers? Do I need to call sudo supervisorctl restart laravel-worker:* in the deployment after php artisan queue:restart?
I struggled with this for quite some time. It is a little confusing, and the docs tend to point to each other rather than explain how the whole system works.
The whole point of installing supervisor on your server is to automate the queue process within any Laravel apps running on the machine. If you look at the example file on the help page you linked to, all it is doing is going into a specific Laravel instance and starting the queue.
When Supervisor starts, it is essentially running
php artisan queue:work
within your Laravel folder (note that there is no queue:start command; workers are started with queue:work). Supervisor owns the process, but you still have control to restart the queue, either through sudo supervisorctl restart laravel-worker:* or through php artisan queue:restart within a single Laravel folder. You do not need to ask Supervisor for a restart if you are manually restarting the queue with the artisan command; that would be redundant. You can test this by restarting the queue and checking that code changes are picked up, or by looking at the queue itself to see that all workers restart.
The 'gotcha' with all of this is that if you introduce new code and deploy it, you must remember to restart the queue for that instance.
To make things more complicated, but eventually simpler, take a look at Laravel Horizon, which basically takes the place of Supervisor in a way that is a bit easier to maintain.
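For reference, a minimal Supervisor program definition along the lines of the linked docs might look like this; the paths, program name, and process count are assumptions to adapt to your app:

```ini
; /etc/supervisor/conf.d/laravel-worker.conf (sketch)
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/myapp/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=4
user=www-data
redirect_stderr=true
stdout_logfile=/var/www/myapp/storage/logs/worker.log
```

The autorestart=true line is what closes the loop the quoted docs describe: php artisan queue:restart makes the workers exit gracefully, and Supervisor immediately starts fresh ones running the new code.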

php artisan command runs out of memory with 32 GB of RAM

I have a DigitalOcean droplet with 32 GB of RAM and 12 CPUs (resized for the occasion).
I run a php artisan command (Laravel 4.2) which never completes. What could be happening?
And this is the error message after more than 10 minutes:
I answered my own question, with the help of #aynber and #MartinBarker.
I solved the problem by dropping the current database, updating Composer (running composer update locally and pushing the composer.lock containing the updates to the server), running composer install on the server with sudo (using sudo with Composer is not recommended), and running sudo php artisan.
Then edit the php.ini file and increase your memory limit.
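A sketch of that last step, assuming the CLI reads its own php.ini (the value shown is an example; -1 removes the limit entirely):

```ini
; php.ini (CLI) – raise the memory ceiling for long-running artisan commands
memory_limit = 512M
```

For a one-off run you can also override it without editing the file: php -d memory_limit=-1 artisan <command>.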

Laravel jobs pushed to Amazon SQS but not processing

I'm running Laravel 5.3 and attempting to test a queue job, with my queue configured to use Amazon SQS. My app is able to push a job onto the queue, and I can see the job in SQS, but it stays there and is never processed. I've tried running php artisan queue:work, queue:listen, and queue:work sqs; none of them pops the job off the queue. I'm testing this locally with Homestead. Is there a trick to processing jobs from SQS?
I faced the same problem. I was using Supervisor, and this worked for me:
First, mention the queue driver (sqs) in the worker command:
command=php /var/www/html/artisan queue:work sqs --tries=3
Then ran these commands:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart all
Posting this just in case it helps anyone.
The instructions in the following post worked for me: https://blog.wplauncher.com/sqs-queue-in-laravel-not-working/. In essence, make sure you do the following:
create your standard queue in SQS
update your config/queue.php file to use your SQS credentials (I would suggest adding more env vars to your .env file and referencing them in this file)
update the QUEUE_DRIVER in your .env so it's set to QUEUE_DRIVER=sqs
update your supervisor configuration file (typically /etc/supervisor/conf.d/laravel-worker.conf)
update and restart supervisor (see the 3 commands mentioned by #Dijkstra)
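To illustrate the second step, the sqs connection in a Laravel 5.3 config/queue.php follows the stock shape below; the env variable names here are my own suggestion, matching the advice to keep credentials in .env:

```php
// config/queue.php – entry in the 'connections' array (sketch)
'sqs' => [
    'driver' => 'sqs',
    'key'    => env('SQS_KEY'),
    'secret' => env('SQS_SECRET'),
    'prefix' => env('SQS_PREFIX'), // e.g. https://sqs.us-east-1.amazonaws.com/your-account-id
    'queue'  => env('SQS_QUEUE', 'default'),
    'region' => env('SQS_REGION', 'us-east-1'),
],
```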

Make Capistrano use an alias on the server when running scripts

I have the following problem using Capistrano with laravel:
My hosting provider does not provide a CLI PHP version via php, but only via a usr/bin/local/.../PHP-CLI command.
I created an alias for it in my .bash_profile, so running composer install from the CLI is no problem.
However, Capistrano does not load this alias (as far as I understand, because it starts in a very basic shell: http://capistranorb.com/documentation/faq/why-does-something-work-in-my-ssh-session-but-not-in-capistrano/), so I get an error from the Composer scripts, e.g. php artisan.
However, on my dev machine I need to keep it as php, since that is the binary name there.
How can I best solve this problem? Any more info you need? Thanks.
Just in case it helps, this is how I call the script:
desc 'Composer install'
task :composer_install do
  on roles(:app), in: :groups, limit: 1 do
    execute "/usr/local/bin/php5-56STABLE-CLI composer.phar install --working-dir #{fetch(:release_path)}"
    execute "cp #{fetch(:deploy_to)}/shared/.env #{fetch(:release_path)}/.env"
  end
end
It sounds like your scenario is the perfect fit for Capistrano's "command map" feature, as documented here: https://github.com/capistrano/sshkit#the-command-map.
Here are the two main takeaways:
Write your Capistrano execute commands so that the binary name (php) is a separate argument. This will allow it to be substituted using the command map. For example:
execute :php, "composer.phar install --working-dir #{fetch(:release_path)}"
In your Capistrano deployment config, tell the command map how to substitute the :php command, like this:
SSHKit.config.command_map[:php] = "/usr/local/bin/php5-56STABLE-CLI"
If you want this substitution to affect all deployment environments, place it in deploy.rb. If it only applies to your production environment, then put it in production.rb.
Okay, my current workaround is the following:
In your Capistrano deploy.rb, in the task that runs after the deploy is updated:
desc 'Composer install'
task :composer_install do
  on roles(:app), in: :groups, limit: 1 do
    execute "/usr/local/bin/php5-56STABLE-CLI /path/to/composer.phar install --working-dir #{fetch(:release_path)} --no-scripts"
    execute "cd #{fetch(:release_path)} && /usr/local/bin/php5-56STABLE-CLI artisan clear-compiled"
    execute "cd #{fetch(:release_path)} && /usr/local/bin/php5-56STABLE-CLI artisan optimize"
  end
end
after "deploy:updated", "deploy:composer_install"
I am not 100% sure the artisan clear-compiled call is needed. Anyway, those two are Composer scripts that would normally be invoked by Composer, but the --no-scripts flag keeps them from being called, so the install does not fail. When calling them from Capistrano, I can easily choose which PHP to use, as you can see.
However if anyone has a better solution, please let me know.
