Gearman (using PHP) - Possible to send job / message to ALL workers?

I'm new to Gearman, but I understand the general concepts. I realize that this isn't something that you would normally want to do... But I was wondering if there is a way to send a "job" to ALL workers?
I have a script that monitors my workers and respawns them when they die. I would like to be able to send out a job that says "die," when I want to kill / respawn all worker processes.
Is this possible? Thanks!

There are a couple of ways that you can go about this.
The easiest way is to send the "kill" job once for every worker that you have. Once they've all been killed, respawn them. The downside of this method is that you have to wait until all your workers are dead before you can begin respawning. If your existing script respawns workers immediately, you'll run into problems here.
Another method is to register a unique task for each of your workers. If, for example, you have two workers, register a task "kill_001" for the first worker and "kill_002" for the second worker. To kill your workers, determine the unique jobs to send ("kill_001", "kill_002") and then send them out. Respawned workers should register new unique tasks, i.e. don't reuse "kill_001" while a worker registered under that name is still alive. Although this method requires a bit more work, it allows you to respawn your workers without downtime.
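For illustration, here is a rough sketch of that second approach, assuming the PECL gearman extension; the server address, the job names and the handleJob() callback are placeholders, not part of the original answer, and the worker and monitor parts belong in separate scripts.
<?php
// Worker side (e.g. worker.php): register the normal job plus a unique kill task.
$killTask = 'kill_' . getmypid();                  // unique per worker process

$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('do_work', 'handleJob');      // normal job handler (placeholder)
$worker->addFunction($killTask, function () {
    exit(0);                                       // terminate only this worker
});

while ($worker->work()) {
    // keep processing jobs until the kill task arrives
}

// Monitor side (separate script): send the kill job to every unique task name it tracks.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
foreach (['kill_1234', 'kill_1235'] as $task) {    // names recorded by the monitor script
    $client->doBackground($task, '');
}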

Related

Laravel - how to move jobs from one Redis queue to another Redis queue?

For example I have two queues "high" and "low".
I have 1000 jobs in "high" queue and 0 in "low" queue.
Now I want to move for example 500 jobs from "high" queue to "low" queue.
For a start it would be fine to find out how to move all jobs, not only half of them.
I can get all jobs with this command:
\Redis::lrange('queues:' . $name, 0, -1);
But how to move them, any idea?
To sum up the discussion in the comments, here are some recommendations and additional information.
Manually adjusting the Redis queue
It is not recommended to alter the Redis queue by hand. Instead, let the queue workers handle the queue that has an unexpectedly high load of work. You can also spawn additional queue workers temporarily to get the work done faster.
Maybe take the unbalanced queue loads into account when working on future features though.
Fixing the queue work load
To fix the queue work load, there are a number of solutions. What they have in common is that we share resources between the individual queues. The only difference is how this is achieved.
For the following options, I'll use a very basic example. Imagine a simple cloud application where users can buy some computation power (for whatever). To make things more interesting, the users of the application can also buy a priority queue ticket, which guarantees them priority processing. In other words, their requests should (but don't have to) be processed with priority.
1. Rescaling the queue worker processes
One way to share resources is to up- and down-scale queue worker processes based on the workload. This means we reduce the queue worker processes for one queue so that we have the resources for additional queue workers for the other queue.
In our example, we would probably expect our users to choose the non-priority processing 9 times out of 10, because they do not want to pay extra for faster processing. This means we would normally have 9 work items in the low-priority queue for every work item in the high-priority queue. For the priority processing to make sense, we would need something like 3 queue worker processes per queue. We would start the processes like this:
3x php artisan queue:work --queue=high
3x php artisan queue:work --queue=low
If the high-priority queue suddenly has a lot more work items (e.g. caused by a sale of the priority queue ticket), we would need to rescale our queue workers accordingly. To do so, we would have to manually kill some of the --queue=low worker processes and start more of the --queue=high workers.
Because this is quite cumbersome to do by hand (and we devs also need to sleep once in a while), there is a solution for this called Laravel Horizon. When configured properly, with the balance mode set to auto, Horizon will make sure that queues with higher work loads get more attention than queues with lower work loads. In other words, Horizon will try to achieve equal waiting times across queues.
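As a point of reference, a Horizon supervisor with automatic balancing is configured in config/horizon.php roughly like this; the supervisor name, process counts and tries here are illustrative values, not taken from the question:
// config/horizon.php (excerpt) - illustrative values only
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection'   => 'redis',
            'queue'        => ['high', 'low'],
            'balance'      => 'auto',
            'minProcesses' => 1,
            'maxProcesses' => 6,
            'tries'        => 3,
        ],
    ],
],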
2. Let queue workers work on multiple queues
A less complex approach is to let queue workers listen and work on multiple queues. This can be done by passing multiple, comma-separated queues to the --queue parameter: --queue=high,low
When doing so, we instruct the queue worker to handle work items on the high queue with priority over the work items on the low queue. That means the worker will always clear the high queue entirely before taking work from the low queue. If, after processing a job on the low queue, the worker finds an item on the high queue, it will jump back to that queue. So each time the worker looks for a new work item, it first looks at the high queue and, if there are no work items there, moves on to the low queue.
For our example above, we could deploy the following 6 queue workers:
4x php artisan queue:work --queue=high,low
2x php artisan queue:work --queue=low
In this case we would have (proportionally) a lot more worker resources for the high queue than for the low queue. But those workers would help out the workers of the low queue whenever they have no priority work to do.
We could of course also add the high queue as fallback to our low queue workers:
4x php artisan queue:work --queue=high,low
2x php artisan queue:work --queue=low,high
This way they would be able to help our high priority queue workers in case of work overloads like in the sale situation described above.
Conclusion
Simple solutions are often better. When your work loads do not explode all of a sudden and you can predict them quite well, go for the second approach (but use Supervisor so that you don't have to start the workers manually all the time, and to make sure they restart in case of a crash).
If, on the other hand, your work loads vary a lot and you cannot really tell in advance how many queue workers you need, Horizon may be worth a look. Personally, I also use Horizon for rather simple projects because it is easy to set up and it takes the work of thinking about loads off my hands.

RabbitMQ basic_get with multiple consumers

I'm moving some resource-intensive functionality currently running on a cron to a RabbitMQ queue. I'm wary of having long-running PHP consumer scripts, so I'm thinking of doing the following:
Jobs are added to the queue at the start of the day.
A cron runs a command which starts a consumer.
The consumer uses basic_get to get a job, processes the job, acknowledges the job and then exits.
The cron runs again and the next job is processed.
I have a couple of questions around how well this will work.
If I decide to fire up 2 workers via the cron (running the command twice) and the first job is still being processed, and hasn't been acknowledged, would RabbitMQ ever send the same job to the second worker?
I've noticed that basic_consume will be more performant since there's no round trip when receiving each job. Is it possible to use basic_consume rather than basic_get without having to worry about the workers being left to run for too long?
The first part:
No, it would not. That would happen only if the first consumer dies without acking the message - then the message gets requeued and the next consumer gets it.
The second part:
You should use basic_consume because it's faster, asynchronous and generally better. The retrieval method you use has nothing to do with how long the consumers run.
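For illustration, a minimal consumer along these lines, assuming the php-amqplib client; the queue name 'jobs', the processJob() helper, the credentials and the 60-second idle timeout are assumptions for this sketch, not part of the original answer:
<?php
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// Hand this consumer only one unacknowledged message at a time.
$channel->basic_qos(null, 1, null);

$callback = function (AMQPMessage $msg) use ($channel) {
    processJob($msg->getBody());                     // hypothetical job handler
    $channel->basic_ack($msg->getDeliveryTag());     // acknowledge after successful processing
};
$channel->basic_consume('jobs', '', false, false, false, false, $callback);

// Consume until the queue has been idle for 60 seconds, then exit cleanly.
while ($channel->is_consuming()) {
    try {
        $channel->wait(null, false, 60);
    } catch (AMQPTimeoutException $e) {
        break;
    }
}

$channel->close();
$connection->close();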

Managing workers with RabbitMQ

I have implemented RabbitMQ in my current PHP application to handle asynchronous jobs that are processed by workers. My current problem is how to monitor the workers and scale them up or down. Also, I want to add error handling in case all the workers die. I have thought of the following two ways but don't know which one is better:
At the producer end, I would analyze the RabbitMQ queue size. If the queue size (the list of pending tasks) is above a threshold, I would create one new worker every time the producer script executes, but before that I would check the server load (using the Linux command uptime); only if the server load is below a threshold would the new worker be created. At the consumer end (in worker.php), I would apply the same method to scale up the workers, and I would also check whether the script has been idle for a given time (i.e. there is no pending task in the RabbitMQ queue) and, if so, have it die automatically (to automate scaling down of workers).
The second method is to use a background process or cron to monitor and scale the workers up or down. But I don't want to rely on cron (I have had very bad experiences with it) or on a background process, because if the background process crashes for some reason there is no way to recover from it.
Please help.
I wouldn't recommend bothering to scale them down to nothing when there's no work to be done. The worker that's left (if you want to scale back to 1) will simply wait for something else to consume and it's not an expensive operation.
In terms of determining whether to scale up, I'd recommend leveraging the RabbitMQ Management HTTP API (http://hg.rabbitmq.com/rabbitmq-management/raw-file/3646dee55e02/priv/www-api/help.html). You can use the queue-related endpoints via a GET request to get information about queues, including how many entries are currently waiting to be processed.
With that info, you can decide to scale if it either hits a certain threshold, or keeps increasing with every check for a certain amount of time, or something similar. This can be done from the consumer side.
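For example, a small helper around that GET request might look roughly like this; the host, port, credentials, queue name and the 100-job threshold are assumptions for the sketch (the default vhost '/' is URL-encoded as %2F):
<?php
// Ask the RabbitMQ management API how deep a queue currently is.
function queueDepth(string $queue): int
{
    $url = 'http://localhost:15672/api/queues/%2F/' . urlencode($queue);
    $context = stream_context_create([
        'http' => ['header' => 'Authorization: Basic ' . base64_encode('guest:guest')],
    ]);
    $data = json_decode(file_get_contents($url, false, $context), true);

    return $data['messages'] ?? 0;                   // ready + unacknowledged messages
}

// Example scaling decision: start another worker once the backlog passes a threshold.
if (queueDepth('jobs') > 100) {
    exec('php worker.php > /dev/null 2>&1 &');       // hypothetical worker script
}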
In terms of error handling, I would recommend encapsulating the RabbitMQ connection aspect of your workers such that if a RabbitMQ exception occurs the connection is re-established from scratch and continues.
If it's a more serious type of exception that isn't RabbitMQ-related, you may need to catch it at such a level where the worker basically spawns a new worker before it dies. Then of course there are other types of exceptions (out of memory conditions, for example), where it really isn't feasible to try to continue and your program should just completely die.
It is very difficult to answer your question with any degree of accuracy since there are many aspects of the context which are not included.
How long do the tasks take to execute?
Why do you want to scale up/down? Why don't you have threads waiting for load in the first place?
That being said, coming from the world of Erlang and functional programming (Erlang being the language used to power RabbitMQ), I would like to suggest the concept of a SUPERVISOR thread. This thread would have the following responsibilities:
Spawn threads depending on the load/qty of requests
Discard threads depending on the load/qty of requests
Monitor the child threads and re-launch them as required, reprocessing the same messages if necessary or discarding them
The supervisor thread should be as simple as possible and should be built in such a way that it simply loops, sleeps and checks whether all the threads that need to be alive actually are - it can then check the load and spawn or kill workers as needed. In other words, it spawns more or stops spawning depending on your needs.
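PHP has no native threads, but the same supervisor loop can be approximated with processes via the pcntl extension. The following is only a rough sketch under that assumption; worker.php and the process count are placeholders:
<?php
// Supervisor sketch: keep a fixed pool of worker processes alive.
$desired  = 4;                                       // number of workers to keep running
$children = [];

while (true) {
    // Reap any workers that have exited.
    while (($pid = pcntl_waitpid(-1, $status, WNOHANG)) > 0) {
        unset($children[$pid]);
    }

    // Re-spawn until we are back at the desired count.
    while (count($children) < $desired) {
        $pid = pcntl_fork();
        if ($pid === 0) {
            pcntl_exec(PHP_BINARY, ['worker.php']);  // child: become the worker
            exit(1);
        }
        $children[$pid] = true;
    }

    sleep(5);   // check again shortly; $desired could be adjusted from queue depth here
}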
You could easily use an exchange to send messages to both the supervisor and the worker queues, where the supervisor would then be able to keep a record/count of the messages in the queue without having to write polling code against the server; it would simply listen to its own queue. You can increment/decrement the counter in the supervisor thread and manage everything from there.
Hope this helps.
See: http://docs.dotcloud.com/guides/daemons/
Regrettably I don't program in PHP and therefore cannot give you PHP-specific assistance; this is however the programming pattern that I recommend you use. If PHP doesn't allow multi-threaded programming and/or threads, I would highly recommend using a language that does, since you will not be able to scale and use the full power of the local machine unless you use multiple threads. As for the supervisor crashing: if you keep minimal work in the supervisor and delegate all responsibilities to child threads, the risk of a supervisor crash is minimal.
Perhaps this will help:
Philosophy:
http://soapatterns.org/design_patterns/service_agent
PHP-specific:
http://www.quora.com/PHP-programming-language-1/Is-there-an-actor-framework-for-php

beanstalkd - what happens to reserved, but not completed jobs?

I've created a PHP script that reads from beanstalkd and processes the jobs. No problems there.
The last thing I've got to do is just write an init script for it, so it can run as a service.
However, this has now raised another question for me. When trying to stop the service, the one obvious way of doing it would be to try and kill the process. However, if I do that, what will happen to the job, if the PHP script was halfway through processing it? So the job was reserved, but the script never succeeded or failed (to delete or bury respectively), what happens?
My guess is that the TTR will expire, and then it gets put back to the ready queue?
And bonus 2nd question, any hints on how to better manage stopping the PHP service?
When a worker process (beanstalk client) opens a connection with beanstalkd and reserves a job, the job stays in the "reserved" state until the client issues a delete/release command or the job times out.
If the worker process terminates abruptly, its connection with beanstalkd gets closed and the server immediately releases all the jobs that were reserved on that particular connection.
Ref: http://groups.google.com/group/beanstalk-talk/browse_thread/thread/232d0cac5bebe30f?hide_quotes=no#msg_efa0109e7af4672e
Any job that runs out of time and is not buried or touched goes back into the ready queue to be reserved again.
I've posted elsewhere about using Supervisord and shell scripts to run workers. It has the advantage that, most of the time, you probably don't mind waiting a little while as jobs finish cleanly. You can have Supervisord kill the bash scripts that wrap the worker script; when the worker script itself has finished, it simply exits, as there is nothing left to restart it.
Another way is to put a highest-priority (0) message into a tube that the workers listen on; it will have the worker first delete the message and then exit. I set up the shell scripts to check for a specific return value (from exit($val);), and they would then exit their loop as well.
I've used these techniques for Beanstalkd and also AWS:SQS queue runners for some time, dealing with millions of jobs per day running through the system.
If your job is too valuable to lose, you can also use pcntl to wait until the job finishes and then restart/shut down your worker. I've managed to handle all suitable pcntl signals to release the job back to the tube.
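As an illustration, a worker built on the Pheanstalk client might handle a termination signal roughly like this; the tube name, the processJob() helper and the 5-second reserve timeout are assumptions, and the sketch presumes Pheanstalk 4.x plus the pcntl extension:
<?php
require __DIR__ . '/vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = Pheanstalk::create('127.0.0.1');
$pheanstalk->watch('jobs');

$shutdown = false;
pcntl_signal(SIGTERM, function () use (&$shutdown) { $shutdown = true; });

while (true) {
    pcntl_signal_dispatch();                         // let a pending SIGTERM flip the flag
    if ($shutdown) {
        break;                                       // stop between jobs, never mid-job
    }

    $job = $pheanstalk->reserveWithTimeout(5);
    if ($job === null) {
        continue;                                    // nothing to do, loop and re-check
    }

    try {
        processJob($job->getData());                 // hypothetical job handler
        $pheanstalk->delete($job);                   // success: remove the job
    } catch (Throwable $e) {
        $pheanstalk->release($job);                  // failure: back to the ready queue
    }
}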

Infrastructure for Running your Zend Queue Receiver

I have a simple messaging queue set up and running using the Zend_Queue object hierarchy. I'm using a Zend_Queue_Adapter_Db back-end. I'm interested in using this as a job queue, to schedule things for processing at a later time. They're jobs that don't need to happen immediately, but should happen sooner rather than later.
Is there a best-practices/standard way to set up your infrastructure to run jobs? I understand the code for receiving a message from the queue, but what's not so clear to me is how to run the program that does the receiving. A cron that receives n messages on the command line, run once a minute? A cron that fires off multiple web requests, each web request running the receiver script? Something else?
Tangential bonus question. If I'm running other queries with Zend_Db, will the message queue queries be considered part of that transaction?
You can do it like a thread pool. Create a command line php script to handle the receiving. It should be started by a shell script that automatically restarts the process if it dies. The shell script should not start the process if it is already running (use a $pid.running file or similar). Have cron run several of these every 1-10 minutes. That should handle the receiving nicely.
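A minimal version of that command-line receiver, using the Zend_Queue Db adapter from the question, might look roughly like this; the connection details, the queue name and handleMessage() are placeholders:
<?php
// Receiver sketch: pull a batch of messages, process them, then exit.
$options = [
    'name'          => 'jobs',
    'driverOptions' => [
        'host'     => '127.0.0.1',
        'username' => 'dbuser',
        'password' => 'secret',
        'dbname'   => 'app',
        'type'     => 'pdo_mysql',
    ],
];

$queue = new Zend_Queue('Db', $options);

// Receive up to 10 messages, handle each one, and remove it from the queue.
foreach ($queue->receive(10) as $message) {
    handleMessage($message->body);                   // hypothetical job handler
    $queue->deleteMessage($message);
}
// The script exits here; the shell wrapper/cron starts it again on the next run.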
I wouldn't have the cron fire a web request unless your cron is on another server for some strange reason.
Another way to use this would be to have some background process creating data, and web users consume it as they naturally browse the site. A report generator might work this way. Company-wide reports are available to all users, but you don't want them all generating this db/time-intensive report. So you create a queue and process the requests one at a time, possibly removing duplicates. All users can view the report(s) when ready.
According to the docs, it doesn't look like the queue is even using the same connection as your other Zend_Db queries. But of course the best way to find out is to run a simple test.
EDIT
The multiple lines in the cron are for concurrency. Each line represents a worker for the pool. I was not clear earlier: you don't want the pid as the identifier, you want to pass an identifier as a parameter.
/home/byron/run_queue.sh Process1
/home/byron/run_queue.sh Process2
/home/byron/run_queue.sh Process3
The bash script would check for the $process.running file; if it finds it, it exits.
Otherwise:
Create the $process.running file.
Start the php process. Block/wait until it finishes.
Delete the $process.running file.
This allows the php script to die without the pool losing a worker.
If the queue is empty, the php script exits immediately and is started again by the next invocation of cron.
