How do I connect PhalconPHP as a consumer to RabbitMQ?
As I understand it, I need a background process running so that PhalconPHP can listen for events/messages from RabbitMQ and process time-intensive tasks (sending mail, writing to logs).
What would fire the consumer (in Phalcon)? Maybe supervisord?
I found an article that says to just run php worker.php containing a listen method:
http://www.sitepoint.com/php-rabbitmq-advanced-examples/
While just running php worker.php will work, if you don't use a supervisor service and just use a while(1) loop sent to the background, there is no way to handle the process dying.
supervisord is recommended because you need to daemonize the process and ensure that it is restarted if it dies or if the system is rebooted.
You might also want to look into Upstart. It can achieve the same goal.
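For reference, a minimal supervisord program entry for such a worker might look like this (the program name and paths are assumptions; adjust them to your setup):

    [program:rabbitmq-worker]
    command=php /var/www/app/worker.php
    ; restart the worker if it dies or after a reboot
    autostart=true
    autorestart=true
    ; on shutdown, give the worker time to finish its current message
    stopwaitsecs=30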
Related
I'm creating a socket server with ReactPHP and I need it to run forever.
I also have a control panel where I have to check whether the process is running, and where I can stop or start it (or restart it).
I don't know how to achieve this.
My plan was:
With the play button: start the process via shell_exec with simply "php script.php".
With the stop button I can go two ways: 1. set a timer in the loop that checks every 5 seconds whether a file (like "stop.lock") exists in the folder, and stop the process if it does; 2. save the process PID in the database, so clicking the stop button just kills the process.
Checking online status: I can make another script that tries to connect to the IP/port; if it succeeds the server is online, if not (5-second timeout) it is offline (see the sketch after this question).
I also want the script to stay in listening status at all times, so how can I make it auto-start if, for example, I have to restart my server?
I was thinking about a cron job that tries to connect to the server every minute; if it fails, it just runs shell_exec('php script.php') again.
What is the best way to handle all of this? (The server OS is CentOS 7.)
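For what it's worth, the online-status check described above can be as small as a socket connect with a timeout; a sketch (the host and port are placeholders):

    <?php
    // Prints "online" if something accepts a TCP connection on host:port.
    $socket = @fsockopen('127.0.0.1', 8080, $errno, $errstr, 5); // 5-second timeout
    if ($socket === false) {
        echo "offline ($errstr)\n";
    } else {
        echo "online\n";
        fclose($socket);
    }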
As @Volker said, just stop the loop if you want to stop it gracefully. You could periodically check for a file or query a table, but that's not a great way.
A nicer flow would be to listen for an admin message telling the server to stop. Of course, you should take care to authenticate who is allowed to stop the server. This way it stops without waiting for an interval to elapse, and you avoid the overhead of periodically polling your filesystem or database.
Another cool way could be to use RabbitMQ or a similar queue service. You just listen to your queue server, and you can send a message from your script to RabbitMQ and from there to your server.
Good luck!
Edit: If you are running your server with systemd, a great way of handling this could be to listen for a system signal and gracefully stop the application. Take a look at addSignal; you can handle a kill by PID, but also a signal sent through systemd.
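A minimal sketch of that signal-handling approach, assuming the ReactPHP event loop and the pcntl extension (the socket-server setup itself is omitted):

    <?php
    require 'vendor/autoload.php';

    $loop = React\EventLoop\Factory::create();

    // ... set up the ReactPHP socket server on $loop here ...

    // systemd (and a plain kill <pid>) sends SIGTERM on stop.
    $loop->addSignal(SIGTERM, function () use ($loop) {
        // Finish any pending work, then stop the loop gracefully.
        $loop->stop();
    });

    $loop->run();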
To handle graceful shutdown versus long-running streamed responses, I've created an acquire/release-like mechanism.
When a handler starts streaming a long response it acquires a lock, and when streaming is done it releases it (it's just an array of uniqid() values).
The server can decide to wait if there are active locks.
I use supervisor to handle start/stop with a SIGTERM signal.
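A rough sketch of that mechanism (the function names are made up for illustration):

    <?php
    $locks = [];

    // A handler calls this when it starts streaming a long response.
    function acquire(array &$locks): string {
        $id = uniqid('', true);
        $locks[$id] = true;
        return $id;
    }

    // ...and this when the stream is finished.
    function release(array &$locks, string $id): void {
        unset($locks[$id]);
    }

    // On SIGTERM the server stops accepting connections, then polls this
    // until it returns true before actually exiting.
    function canShutDown(array $locks): bool {
        return count($locks) === 0;
    }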
I have been working on a PHP MVC web application using CodeIgniter and need to process some long-running tasks.
I checked several options (RabbitMQ, Gearman, IronMQ, etc.) and decided to use Gearman because of its simplicity. I went through the samples and tutorials on gearman.org, which show how to start a GearmanWorker using worker.php.
My concern is: in an MVC architecture, where is this GearmanWorker initiated and started?
Is it started through a controller method, OR
do we need to initiate the GearmanWorker from the CLI (console)? If it's started from the CLI, then how do we handle the case where the already-started worker has stopped for some reason when we call GearmanClient->do('some task')?
A similar question, but it was not clear enough for me.
I wouldn't recommend starting the worker from a controller. You can start several workers distributed over the network and use the 'workers' text command for monitoring purposes. gearmand dispatches a job to the next idle worker.
If job execution takes long, SUBMIT_JOB_BG may be a good option for you, to avoid web-server timeouts.
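A minimal sketch of that split, assuming the PHP gearman extension (the function name, payload, and file layout are illustrative):

    <?php
    // worker.php -- started from the CLI (e.g. under supervisord), not from a controller.
    $worker = new GearmanWorker();
    $worker->addServer('127.0.0.1', 4730);
    $worker->addFunction('send_mail', function (GearmanJob $job) {
        $payload = json_decode($job->workload(), true);
        // ... perform the long-running task here ...
    });
    while ($worker->work());

    <?php
    // In the controller: queue the job in the background (SUBMIT_JOB_BG under
    // the hood), so the web request returns immediately.
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);
    $client->doBackground('send_mail', json_encode(['to' => 'user@example.com']));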
I'm looking to build a distributed video-encoding cluster of a few dozen machines. I've never worked with a message queue before, but the two I started playing around with were Gearman and Beanstalkd.
Beanstalk seems to be a lot simpler and easier to use than Gearman, but it's not as feature-rich.
One thing I don't understand is: how do you spawn new workers on all the servers? I plan to use PHP. Is it as simple as running worker.php in the CLI with "&" and just having it sit there waiting for work?
I noticed Gearman doesn't actually kill the process after a job is done, but Beanstalk does, so I have to restart the script after every job, on every server.
Currently I'm more inclined to use Beanstalk; the general flow of things I planned was:
Run a cron job every minute on each server that checks whether the pre-defined number of workers is running. If there are fewer than there should be, spawn new worker processes. Each process will take roughly 2-30 minutes.
Maybe I have a flaw in my logic here? Let me know what would be a "better" or "proper" way of doing this.
Terminology I will use, just to try to be clear...
There is the concept of a producer and a consumer. The producer generates jobs that are put on a queue (i.e. the beanstalk service), which is then read by a consumer.
There are multiple ways to write a consumer. You can either run the task every X time frame via a cron job, or have a consumer running in a while(1) loop via PHP (or what have you); a sketch of the loop style follows below.
Where to install the service really depends on what you are going after. For me, I normally install the service either on the consumer(s) or on its own separate box (the latter sometimes being overkill, depending on your needs).
If you want durability on the queue side, you should use Beanstalk's binlog parameter (-b <directory>). If something happens to your beanstalk service, this will allow you to restart it with minimal (if any) loss of data in the queues. Durability on the producer side can come from having multiple queues to try against.
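A sketch of the while(1)-style consumer, assuming the pda/pheanstalk client library (v4 API) and a tube name of my own choosing:

    <?php
    require 'vendor/autoload.php';

    use Pheanstalk\Pheanstalk;

    $queue = Pheanstalk::create('127.0.0.1');
    $queue->watch('encode'); // the tube name is an assumption

    while (true) {
        $job = $queue->reserve(); // blocks until a job is available
        try {
            $payload = json_decode($job->getData(), true);
            // ... run the encoding task ...
            $queue->delete($job);  // done: remove the job from the queue
        } catch (\Throwable $e) {
            $queue->release($job); // failed: put it back for another worker
        }
    }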
I have a PHP project (Symfony2) that uses RabbitMQ. I use it as a simple message queue to delay some jobs (sending mails, importing data from APIs). The consumers run on the webserver and their code is part of the webserver repo; they are deployed together with the web code.
The questions are:
How do I start the consumers as daemons and make sure they always run?
When deploying the app, how do I shut down the consumers "gracefully", so that they stop consuming new messages but finish processing the one they have started?
If it's at all relevant: for deployment I use Capifony.
Thank you!
It may be worth looking at something like supervisord, which is written in Python. I've used it before for running workers for Gearmand, a job queue that fulfils a similar role to the way you're using RabbitMQ.
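On the graceful-shutdown part of the question, one common pattern (a sketch, assuming a recent php-amqplib and the pcntl extension; the queue name and credentials are placeholders) is to trap SIGTERM, finish the in-flight message, and only then exit. supervisord sends SIGTERM on stop and waits stopwaitsecs before killing:

    <?php
    require 'vendor/autoload.php';

    use PhpAmqpLib\Connection\AMQPStreamConnection;
    use PhpAmqpLib\Message\AMQPMessage;

    pcntl_async_signals(true);
    $shouldStop = false;
    pcntl_signal(SIGTERM, function () use (&$shouldStop) {
        $shouldStop = true; // only set a flag; the current message finishes first
    });

    $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    $channel = $connection->channel();
    $channel->queue_declare('mails', false, true, false, false);
    $channel->basic_qos(null, 1, null); // at most one unacked message at a time

    $channel->basic_consume('mails', '', false, false, false, false,
        function (AMQPMessage $msg) {
            // ... send the mail ...
            $msg->ack();
        }
    );

    while (!$shouldStop && $channel->is_consuming()) {
        try {
            $channel->wait(null, false, 5); // wake up regularly to re-check the flag
        } catch (\PhpAmqpLib\Exception\AMQPTimeoutException $e) {
            // no message within 5s; loop around and check $shouldStop
        }
    }

    $channel->close();
    $connection->close();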
I have a website written in PHP (CakePHP) where certain resource-intensive tasks are handled by a background process. This is done through the Beanstalkd message queue. I need some way to retrieve the status of that background process so I can monitor it with Monit.
The background process is a CakePHP Shell (just a PHP CLI script) that communicates with Beanstalkd. It simply does a reserve() on Beanstalkd and waits for a new message. When it gets a message, it processes it. I want some way of monitoring this process with Monit so that it can restart the background process if something has gone wrong.
What I have been thinking about so far is writing a PHP CLI script that drops a message into Beanstalkd. The background process picks up the message and somehow communicates its internal status back to the CLI script. But how? Sockets? Shared memory? Some other IPC method?
Or am I overcomplicating this, and is there a much easier way to monitor such a process with Monit?
Thanks in advance!
Here's what I ended up doing.
The CLI script connects to beanstalkd, creates a new queue (tube) and starts watching it. Then it drops a highest-priority message into the queue that the background daemon is watching. That message contains the name of the new queue that the CLI script is monitoring.
The background process receives this message almost immediately (because it is highest priority), generates a status message and puts it in the queue that the CLI script is watching. The CLI script receives it and then closes the queue.
When the CLI script does not get a response within 30 seconds, it exits with an error indicating that the background daemon is (most likely) hung.
I tied all this into Monit. Monit can now check that the background daemon is running (via the pidfile and process list) and verify that it is actually still processing messages (by using the CLI tool to test that it responds to status requests).
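For reference, a sketch of what that CLI health check can look like, assuming the pda/pheanstalk library (the tube names are illustrative):

    <?php
    require 'vendor/autoload.php';

    use Pheanstalk\Pheanstalk;

    $pheanstalk = Pheanstalk::create('127.0.0.1');
    $replyTube  = 'status-reply-' . uniqid();

    // Highest priority (0) so the daemon sees this before ordinary jobs.
    $pheanstalk->useTube('worker-control')->put($replyTube, 0);

    // Wait up to 30 seconds for the daemon to answer on our private tube.
    $job = $pheanstalk->watchOnly($replyTube)->reserveWithTimeout(30);

    if ($job === null) {
        fwrite(STDERR, "daemon did not respond\n");
        exit(1); // Monit treats the non-zero exit status as a failed check
    }

    echo $job->getData(), PHP_EOL;
    exit(0);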
There is probably a plugin for Monit or Nagios to connect, run the stats, and report if there are 'too many'. There isn't a 'protocol' already written for that, but it doesn't appear to be exceedingly difficult to modify an existing text-based one (like NNTP, or SMTP) to do what you want. It does mean writing it in C though, by the looks of it.
From a CLI PHP script, I would go about it through one (or both) of two different methods.
1/ Drop a (low-ish) priority message into the queue, and make sure it comes back within a few seconds. Putting it into a dedicated queue, and making sure there's nothing in there before you add it, would be a good addition as well.
2/ Perform a 'stats' command and see how many jobs are waiting: 'current-jobs-ready' (see the sketch below).
To get the information back to a website (either way), you can write it to a file, or into something like Memcached, which gets read and acted upon.
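A sketch of the 'stats' variant in PHP, again assuming pda/pheanstalk (the tube name and threshold are assumptions):

    <?php
    require 'vendor/autoload.php';

    use Pheanstalk\Pheanstalk;

    $pheanstalk = Pheanstalk::create('127.0.0.1');
    $stats = $pheanstalk->statsTube('encode');

    // Fail the check if too many jobs are piling up unconsumed.
    $ready = (int) $stats['current-jobs-ready'];
    exit($ready > 100 ? 1 : 0);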