ReactPHP: Running server with start/stop commands

I'm creating a socket server with ReactPHP and I need it to run forever.
I also have a control panel where I need to check whether the process is running, and from which I can stop, start, or restart it.
I don't know how to achieve this.
My plan was:
Start button: launch the script with shell_exec('php script.php').
Stop button: I see two options: 1) add a timer to the loop that checks every 5 seconds whether a file (like "stop.lock") exists in a folder, and stops the process if it does; 2) save the process PID in the database, so clicking the stop button simply kills that process.
Checking online status: I can write another script that tries to connect to the IP/port; if the connection succeeds the server is online, if it fails (5-second timeout) it is offline.
I also want the script to stay listening at all times, so how can I make it auto-start if, for example, I have to restart my server?
I was thinking of a cron job that tries to connect to the server every minute; if it fails, it would just run shell_exec('php script.php') again.
What is the best way to handle all of this? (The server OS is CentOS 7.)
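
For illustration, the port-check status script described in the question can be just a few lines; a minimal sketch, where the host, port, and timeout are placeholders:

<?php
// status.php - probe the socket server; exit 0 if online, 1 if offline.
$conn = @fsockopen('127.0.0.1', 8080, $errno, $errstr, 5); // 5-second timeout
if ($conn === false) {
    echo "offline: $errstr ($errno)\n";
    exit(1);
}
fclose($conn);
echo "online\n";
exit(0);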

As #Volker said, just stop the loop if you want to stop the server gracefully. You could periodically check for a file or query a table, but that's not a great approach.
A nicer flow is to have the server listen for an admin message telling it to stop. Of course, you should take care to authenticate whoever is allowed to stop the server. This way it stops without waiting for a polling interval to elapse, and you avoid the overhead of periodically querying your filesystem or database.
Another option is RabbitMQ or a similar queue service: the server listens on a queue, your control script sends a message to RabbitMQ, and from there it reaches your server.
Good luck!
Edit: If you are running your server with systemd, a great way to handle this is to listen for a system signal and gracefully stop the application. Take a look at addSignal; it lets you handle a kill sent by PID, but also signals sent through systemd.
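
For example, a minimal sketch of that approach, assuming react/event-loop v1.2+ (for the static Loop class) and ext-pcntl for signal support:

<?php
// Gracefully stop the server when systemd (or kill) sends SIGTERM.
require __DIR__ . '/vendor/autoload.php';

use React\EventLoop\Loop;

Loop::addSignal(SIGTERM, function () {
    echo "Got SIGTERM, stopping the loop...\n";
    Loop::stop(); // the loop ends and the script exits cleanly
});

// ... set up your socket server and handlers here ...

Loop::run();

With a systemd unit, systemctl stop sends SIGTERM by default, so no extra configuration is needed for this to work.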

To handle graceful shutdown versus long-running streamed responses, I've created an acquire/release mechanism.
When a handler starts streaming a long response it acquires a lock, and when streaming is done it releases it (it's just an array of uniqid() values).
The server can decide to wait while there are active locks.
I use Supervisor to handle start/stop with a SIGTERM signal.
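
A minimal sketch of that lock registry (the class and method names are invented for illustration):

<?php
// In-memory registry of active streaming responses: just an array of
// uniqid() values, as described above.
final class StreamLocks
{
    /** @var array<string, true> */
    private array $locks = [];

    public function acquire(): string
    {
        $id = uniqid('', true);   // one lock per long-running response
        $this->locks[$id] = true;
        return $id;
    }

    public function release(string $id): void
    {
        unset($this->locks[$id]);
    }

    public function isIdle(): bool
    {
        return $this->locks === []; // safe to shut down when empty
    }
}

On SIGTERM the server can check isIdle() periodically and defer shutdown until every active stream has released its lock.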

Related

Sending a synchronous signal/message to a long running PHP script

I have a long running PHP script that I'm attempting to convert into a systemd daemon.
While planning the daemon, I figured that I could simply send SIGTERM/SIGUSR1/SIGUSR2/etc. signals to my script to restart/reload it when necessary using the kill command, but after reading through the systemd documentation, I noticed this bit in the "ExecReload" section:
Note however that reloading a daemon by sending a signal (as with the example line above) is usually not a good choice, because this is an asynchronous operation and hence not suitable to order reloads of multiple services against each other. It is strongly recommended to set ExecReload= to a command that not only triggers a configuration reload of the daemon, but also synchronously waits for it to complete.
So, while my script runs just fine and the daemon itself works properly using kill to signal various events (I don't have, and most likely won't ever have, another daemon that depends on this one), the quote above got me thinking about alternatives for sending a synchronous message to the daemon.
The only thing that I could think of so far is:
Open a local socket in the daemon and listen for messages on it
Execute any supported action when receiving a message
Send an OK message back to the sender once the action is complete
Is there a better/recommended/optimal way of achieving this?
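
For illustration, the client half of the socket approach listed above might look like this sketch; the socket path and the "reload"/"OK" protocol are invented for the example:

<?php
// reload-client.php - usable as ExecReload=: sends "reload" to the daemon
// over a local socket and blocks until the daemon confirms completion.
$sock = @stream_socket_client('unix:///run/mydaemon.sock', $errno, $errstr, 10);
if ($sock === false) {
    fwrite(STDERR, "connect failed: $errstr ($errno)\n");
    exit(1);
}
fwrite($sock, "reload\n");
$reply = trim((string) fgets($sock)); // blocks until the daemon answers
fclose($sock);
exit($reply === 'OK' ? 0 : 1); // systemd sees a synchronous success/failure

Because the client only exits after the daemon replies, systemd's requirement that ExecReload= "synchronously waits" for the reload to complete is satisfied.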

How to stop SIGTERM and SIGKILL?

I need to run a huge process that takes 10+ minutes. I maxed out max_execution_time, but in my error logs I get a SIGTERM and then a SIGKILL.
I read a little about SIGTERM and SIGKILL and that they come from the daemon, but I didn't figure out how to stop this from happening. I just need to disable it for one night.
Rather than trying to ignore the signals, you need to find out who sends them and why. If you're starting PHP from the command line, no one will send that signal, and your script will have all the time it needs.
But if you're actually starting this process in response to an HTTP request, it's probably the web server or the FastCGI manager that limits how long it waits for the script to finish. It may also simply kill the script because the client connection (between the user's browser and the HTTP server) has been terminated.
So the important question to ask yourself is: what is the source of that signal, and how can its timeout be increased? Please also provide details about how you start this script and what platform you're running on.
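
One way to investigate is to log when the signal actually arrives and correlate it with your web server or FPM logs; a rough sketch (requires ext-pcntl, so CLI only):

<?php
// Log the arrival of SIGTERM so it can be matched against server logs.
// Note: SIGKILL cannot be caught, blocked, or ignored by any process.
pcntl_async_signals(true); // PHP 7.1+; no declare(ticks=1) needed

pcntl_signal(SIGTERM, function () {
    error_log('SIGTERM received at ' . date('c'));
    exit(1); // clean up and exit before a follow-up SIGKILL arrives
});

// ... long-running work ...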

How to reliably start and stop a long running process (think days) that outlives the script that started it?

I have a CLI PHP app with a command that starts a background process (a Selenium server) and then exits:
php app.php server:start #this should return immediately
The app also needs to be able to stop the background process in a later invocation:
php app.php server:stop
Since the server process outlives the start/stop script, I cannot manage it by keeping an open file descriptor to it.
I could store the PID of the process on the file system during start, and kill that PID in stop. But if the stop command is run after the background process has died on its own, I risk killing a process that I did not start, because the OS might have reused the PID for some other process.
Right now my approach is to store not just the PID of the background process, but also its start time and the command used. It works, but it is hard to make it behave consistently across different platforms (I need Linux, Mac, and Windows).
Can you think of a better way to implement such behaviour?
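
As a point of comparison, on Linux the start-time check can read /proc directly; a sketch (Linux only, and it assumes the starttime field was recorded alongside the PID when the server was launched):

<?php
// Verify that a PID still belongs to the process we started (Linux only).
function isOurProcess(int $pid, string $expectedStartTime): bool
{
    $stat = @file_get_contents("/proc/$pid/stat");
    if ($stat === false) {
        return false; // no such process anymore
    }
    // The comm field (2nd) may contain spaces, so split after the last ')'.
    // starttime is field 22 overall, i.e. index 19 of the remaining fields.
    $fields = explode(' ', substr($stat, strrpos($stat, ')') + 2));
    return ($fields[19] ?? '') === $expectedStartTime;
}

Since starttime is measured in clock ticks since boot, a reused PID will almost never match both the PID and the recorded start time.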

beanstalkd - what happens to reserved, but not completed jobs?

I've created a PHP script that reads from beanstalkd and processes the jobs. No problems there.
The last thing I need to do is write an init script for it, so it can run as a service.
However, this raises another question. The obvious way to stop the service would be to kill the process. But if I do that while the PHP script is halfway through processing a job, what happens? The job was reserved, but the script never succeeded or failed (never deleted or buried it), so what happens to it?
My guess is that the TTR will expire and the job then gets put back in the ready queue?
And a bonus second question: any hints on how to better manage stopping the PHP service?
When a worker process (a beanstalkd client) opens a connection with beanstalkd and reserves a job, the job stays in the "reserved" state until the client issues a delete/release command or the job times out.
If the worker process terminates abruptly, its connection with beanstalkd is closed and the server immediately releases all jobs that were reserved on that connection.
Ref: http://groups.google.com/group/beanstalk-talk/browse_thread/thread/232d0cac5bebe30f?hide_quotes=no#msg_efa0109e7af4672e
Any job that runs out of time and is not buried or touched goes back into the ready queue to be reserved.
I've posted elsewhere about using Supervisord and shell scripts to run workers. This has the advantage that most of the time you probably don't mind waiting a little while as jobs finish cleanly. You can have Supervisord kill the bash script that runs a worker; when the worker script itself finishes, it simply exits and is not restarted.
Another way is to put a highest-priority (0) message into a tube that the workers listen on; a worker that receives it first deletes the message and then exits (sketched below). I set up the shell scripts to check for a specific return value (from exit($val);) and exit their own loop as well.
I've used these techniques with Beanstalkd and also AWS SQS queue runners for some time, handling millions of jobs per day through the system.
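
A sketch of that "poison pill" pattern, assuming the pda/pheanstalk client (v3-style API); the tube name, the EXIT payload, and exit code 97 are conventions invented for this example:

<?php
// Worker loop that exits cleanly when it reserves the shutdown message.
require __DIR__ . '/vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = new Pheanstalk('127.0.0.1');

while (true) {
    $job = $pheanstalk->watch('worker-tube')->reserve();

    if ($job->getData() === 'EXIT') {
        $pheanstalk->delete($job); // consume the pill: stops exactly one worker
        exit(97);                  // the wrapper script checks for 97 and stops looping
    }

    // ... process $job->getData() here ...
    $pheanstalk->delete($job);
}

// To stop a worker, push the pill at the highest priority (0):
//   $pheanstalk->useTube('worker-tube')->put('EXIT', 0);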
If your job is too valuable to lose, you can also use pcntl to wait until the job finishes and then restart or shut down your worker. I've managed to handle all the relevant pcntl signals to release the job back to the tube.
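
A rough sketch of that pcntl variant (same pheanstalk assumption as above): trap SIGTERM, and if it arrives while a job is reserved, release the job before exiting:

<?php
// Release the reserved job back to the tube on SIGTERM (requires ext-pcntl).
require __DIR__ . '/vendor/autoload.php';

use Pheanstalk\Pheanstalk;

pcntl_async_signals(true);

$shuttingDown = false;
pcntl_signal(SIGTERM, function () use (&$shuttingDown) {
    $shuttingDown = true; // checked between processing steps
});

$pheanstalk = new Pheanstalk('127.0.0.1');

while (!$shuttingDown) {
    $job = $pheanstalk->watch('worker-tube')->reserve();
    if ($shuttingDown) {
        $pheanstalk->release($job); // back to the ready queue, nothing lost
        break;
    }
    // ... process the job ...
    $pheanstalk->delete($job);
}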

Checking the status of my PHP beanstalkd background processes

I have a website written in PHP (CakePHP) where certain resource-intensive tasks are handled by a background process, via the Beanstalkd message queue. I need some way to retrieve the status of that background process so I can monitor it with Monit.
The background process is a CakePHP Shell (just a PHP CLI script) that communicates with Beanstalkd. It simply does a reserve() on Beanstalkd and waits for a new message. When it gets a message, it processes it. I want some way of monitoring this process with Monit so that it can restart the background process if something has gone wrong.
What I have been thinking about so far is writing a PHP CLI script that drops a message into Beanstalkd. The background process picks up the message and somehow communicates its internal status back to the CLI script. But how? Sockets? Shared memory? Some other IPC method?
Or am I perhaps overcomplicating this, and is there a much easier way to monitor such a process with Monit?
Thanks in advance!
Here's what I ended up doing in the end.
The CLI script connects to beanstalkd, creates a new queue (tube) and starts watching it. Then it drops a highest-priority message into the queue that the background daemon is watching. That message contains the name of the new queue that the CLI script is monitoring.
The background process receives this message almost immediately (because it is highest priority), generates a status message, and puts it in the queue that the CLI script is watching. The CLI script receives it and then closes the queue.
When the CLI script does not get a response within 30 seconds, it exits with an error indicating that the background daemon is (most likely) hung.
I tied all this into Monit. Monit can now check that the background daemon is running (via the pidfile and process list) and verify that it is actually still processing messages (by using the CLI tool to test that it responds to status requests).
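
A sketch of that request/reply check, assuming the pda/pheanstalk client (v3-style API, where reserve() accepts a timeout); the tube names and payloads are made up:

<?php
// monit-check.php - ask the daemon for a status report via beanstalkd.
require __DIR__ . '/vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = new Pheanstalk('127.0.0.1');

// Watch a one-off reply tube, then tell the daemon where to answer,
// at the highest priority (0) so the request jumps the queue.
$replyTube = 'status-' . uniqid();
$pheanstalk->watch($replyTube)->ignore('default');
$pheanstalk->useTube('daemon-control')->put($replyTube, 0);

$reply = $pheanstalk->reserve(30); // wait up to 30 seconds, as described above
if ($reply === false) {
    fwrite(STDERR, "no response: daemon is probably hung\n");
    exit(1);
}
echo $reply->getData(), "\n"; // e.g. "OK, processed 1234 jobs"
$pheanstalk->delete($reply);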
There is probably a plugin for Monit or Nagios that can connect, run the stats command, and report if there are 'too many'. There isn't a ready-made 'protocol' for that, but it doesn't appear exceedingly difficult to modify an existing text-based one (like NNTP or SMTP) to do what you want. It does mean writing it in C though, by the looks of it.
From a CLI PHP script, I would go about it using one (or both) of two methods:
1/ Drop a (low-ish priority) message into the queue and make sure it comes back within a few seconds. Putting it into a dedicated queue, and making sure there's nothing already in there before you add it, would be a good addition as well.
2/ Perform a 'stats' command and see how many jobs are waiting: 'current-jobs-ready'.
To get the information back to a website (either way), you can write to a file, or to something like Memcached, which gets read and acted upon.
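
Option 2 is even shorter; a sketch using statsTube() (the tube name and alert threshold are placeholders):

<?php
// Alert if the ready backlog grows too large (pda/pheanstalk, v3-style API).
require __DIR__ . '/vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = new Pheanstalk('127.0.0.1');
$stats = $pheanstalk->statsTube('worker-tube');
$ready = (int) $stats['current-jobs-ready'];

if ($ready > 100) { // tune the threshold to your workload
    fwrite(STDERR, "backlog too large: $ready jobs ready\n");
    exit(1);
}
echo "OK ($ready jobs ready)\n";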
