How can I check whether the cron service is running from a PHP script?
I want a PHP script that checks whether the cron service is running and, if not, notifies the admin via email so they can take immediate action.
Thank you
Depending on your OS, there are three approaches (all of which add considerable performance overhead, but that might be acceptable for your app):
Check the process list - You can execute a console command to check the list of running processes (see the sketch after this list). I don't think this is possible on Windows, but it's no problem on Linux. Take EXTRA care to filter any and all variables used there, as running console commands can be a big security risk.
Running files - Create a file at the start of your script and check for its existence. I think this is how most (even non-PHP) processes check whether they are already running. Performance loss and security issues should be minimal, though you have to make sure the file is properly removed, even in case of an error.
Info in storage - Like the file solution, you can write information into your database or another storage system. The performance loss is slightly bigger than file I/O, but if you already have a DB connection for your script, it might be worth it. It also makes it easier to store more information about your current process or to add logging.
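A minimal sketch of the process-list approach, assuming a Linux host where pgrep is available and the daemon is named cron or crond; the admin address is a placeholder:

// Check the process list for a cron daemon (assumes Linux + pgrep).
$output = shell_exec('pgrep -x cron; pgrep -x crond');
if (trim((string) $output) === '') {
    // No cron process found: notify the admin (requires a working mail() setup).
    mail('admin@example.com', 'cron is down', 'No cron process found on ' . php_uname('n'));
}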
Related
We are working on a project where we use Node.js in the background as a socket server to continuously respond to a web application. Sometimes, somehow, some process stops on its own.
We would like to know how we can check all the processes running under forever.
We are using sudo forever list to list all processes. Is there any way to use this command (forever list) in a .sh (shell script) file to check whether a specific process like responsclient is working or not? If that particular process is not working, we need to start it.
There are several solutions that will ensure that your service is always running.
One of them is even called forever. Here is an overview prepared by the Express team.
However, for production services I recommend Passenger. The result is almost the same, but with much greater scalability. For example, you can configure it so that another instance is added automatically.
"Almost", because it is designed to ensure the availability of HTTP services, not the constant operation of an application.
BTW: your service stops because you have an uncaught exception.
Update
If you insist on forever, then: (We're talking about the same forever?)
Make sure that forever is run by the same user; forever keeps separate managers for different users.
Make sure you save your data in the same place (an automatic run, e.g. by cron, differs from a manual startup, particularly in the variables in env).
forever has a --pidFile option - then it is very easy to check whether the process is working.
Also, ps aux | grep node should be your big friend.
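The question asks for a shell script; since the rest of this thread is PHP-centric, here is the same check as a PHP CLI sketch. The process name responsclient comes from the question, the script path is a placeholder, and forever's list output format varies by version, so we only search it for the name:

// Run from cron: restart responsclient if forever no longer lists it.
$list = (string) shell_exec('forever list');
if (strpos($list, 'responsclient') === false) {
    shell_exec('forever start /path/to/responsclient.js'); // placeholder path
}

Remember the caveat above: run this as the same user that started forever, since each user has a separate forever manager.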
No, I do not have it combined. When I started to have problems, I switched to Passenger. In the end that worked out well, because I set up professional monitoring in less time than it would have taken to figure out how to combine the points above.
I have a website, created using PHP and running on Apache. I want a subscriber to be able to log in and start a process on the server. They can then log out or close the browser without interrupting the process. Later they can log in and see the progress or see the results of the original process. What is the best way to accomplish this (having the process run until completion, after the browser is closed)?
Just looking for someone to point me in the right direction. A few people mentioned Gearman.
Gearman would be an ideal candidate, and I would use it for exactly the purpose you describe. It has everything you need out of the box to meet your requirements ("backgrounding" a long-running, CPU-bound process to another machine, e.g. video encoding).
There is a Gearman PHP library, but you can write your worker code in a different language if it's better suited to doing the work.
For reporting progress information, I recommend having the worker write to Redis or Memcached - some kind of temporary storage that your web server can also access.
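A minimal worker sketch of that pattern, assuming the pecl gearman and memcached extensions; the job name encode_video, the progress key, and the server addresses are illustrative assumptions:

// Worker: performs the job and publishes progress to Memcached.
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('encode_video', function (GearmanJob $job) use ($memcached) {
    $videoId = $job->workload();
    for ($percent = 0; $percent <= 100; $percent += 10) {
        // ... do one slice of the actual encoding work here ...
        $memcached->set('progress_' . $videoId, $percent); // frontend reads this
    }
    return 'done';
});
while ($worker->work());

The web request side would call GearmanClient::doBackground('encode_video', $videoId) and return immediately; a later page load just reads the progress key from Memcached.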
Check out the simple PHP example on the Gearman site. For learning, I recommend setting up a lab environment that contains 3 separate VMs: one for your web server (the client), one for the Gearman job queue (the server), and another for processing jobs (the workers).
I have to make sure a certain PHP script (started by a web request) does not run more than once simultaneously.
With binaries, it is quite easy to check if a process of a certain binary is already around.
However, a PHP script may be run via several pathways, e.g. CGI, FastCGI, inside web server modules, etc., so I cannot use system commands to find it.
So how can I reliably check whether another instance of a certain script is currently running?
The exact same strategy is used as one would choose with local applications:
The process manages a "lock file".
You define a static location in the file system. Upon script startup you check whether a lock file exists at that location; if so, you bail out. If not, you first create that lock file, then proceed. During teardown of your script you delete that lock file again. Such a lock file is a simple passive file; only its existence is of interest, often not its content. This is a standard procedure.
You can win extra candy points if you use the lock file not only as a passive semaphore but also store the process ID of the generating process in it. That allows subsequent attempts to verify whether that process actually still exists or has crashed in the meantime. That matters because such a crash would leave a stale lock file behind, thus creating a deadlock.
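A minimal sketch of that PID-in-lockfile idea, assuming a CLI or posix-enabled environment (posix_kill needs the posix extension) and a placeholder lock path:

$lockFile = '/tmp/myscript.lock'; // placeholder location

if (file_exists($lockFile)) {
    $pid = (int) file_get_contents($lockFile);
    // Signal 0 checks for process existence without actually signalling.
    if ($pid > 0 && posix_kill($pid, 0)) {
        exit("Another instance (PID $pid) is already running.\n");
    }
    // Otherwise the lock is stale (crashed process): fall through and take over.
}
file_put_contents($lockFile, (string) getmypid());
register_shutdown_function(function () use ($lockFile) {
    @unlink($lockFile); // tear down the lock even after fatal errors
});

// ... the actual work goes here ...

Note that the exists/create pair is not atomic; if two requests can race, wrapping the check in flock() on the lock file closes that gap.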
To work around the issue discussed in the comments - which correctly states that in some of the scenarios in which PHP scripts are used in a web environment, a process ID by itself may not be enough to reliably test whether a given task has been successfully and completely processed - one could use a slightly modified setup:
The incoming request does not directly trigger the task-performing PHP script itself, but merely a wrapper script. That wrapper manages the lock file whilst delegating the actual task to a sub-request to the HTTP server. This allows the controlling wrapper script to use the additional information of the request state. If the task-performing PHP script really crashes without prior notice, the requesting wrapper knows about it: each request is terminated with a specific HTTP status code, which allows deciding whether the task-performing request terminated normally or not. That setup should be reliable enough for most purposes. The chance of the trivial wrapper script crashing or being terminated falls into the area of a system failure, which is something no locking strategy can reliably handle.
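A rough sketch of such a wrapper, assuming the curl extension; the lock path and the task URL are placeholders:

$lockFile = '/tmp/task.lock';                 // placeholder path
$taskUrl  = 'http://localhost/run-task.php';  // placeholder sub-request target

if (file_exists($lockFile)) {
    exit("Task already in progress.\n");
}
file_put_contents($lockFile, date('c'));

// Delegate the real work to a sub-request and observe its outcome.
$ch = curl_init($taskUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

unlink($lockFile);
if ($status !== 200) {
    // The task request died or failed; the wrapper knows and can alert or retry.
}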
As PHP does not always provide a reliable way of file locking (it depends on how the script is run, e.g. CGI, FastCGI, server modules, and the configuration), some other mechanism for locking should be used.
The PHP script can, for example, invoke another PHP interpreter in its CLI variant. That provides a unique PID that can be used for locking. The PID should then be stored in a lock file, which can be checked for staleness by querying whether a process with that PID is still around.
Maybe it is also possible to do all tasks needing the lock inside a shell script. Shell scripts also provide a unique PID and release it reliably after exit. A shell script may also use a unique filename that can be checked to see whether it is still running.
Semaphores (http://php.net/manual/de/book.sem.php) could also be used; these are explicitly managed by the PHP interpreter to reflect a script's lifetime. They seem to work quite well, but there is not much information around about how reliable they are in case of premature script death.
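A minimal sketch of the semaphore approach, assuming the sysvsem extension (and PHP >= 5.6.1 for the non-blocking flag):

// Derive a System V IPC key from this file; 'l' is an arbitrary project id.
$key = ftok(__FILE__, 'l');
$sem = sem_get($key, 1); // allow at most one holder

// Non-blocking acquire: bail out if another instance holds the semaphore.
if (!sem_acquire($sem, true)) {
    exit("Another instance is running.\n");
}

// ... the actual work goes here ...

sem_release($sem);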
Also keep in mind that external processes launched by a PHP script may continue executing even if the script ends. For example, a user abort under FastCGI releases passthru processes, which carry on working even though the client connection is closed. They may be killed later once enough output has accumulated, or not at all.
So such external processes have to be locked as well, which can't be done by the PHP-acquired semaphores alone.
Here's the situation: we have a bunch of Python scripts continuously doing stuff and ultimately writing data to MySQL, and we need a log to analyse the error rate and script performance.
We also have a PHP front end that interacts with the MySQL data, and we also need to log user actions so that we can analyse their behaviour and compute some scoring functions.
So we thought of having a MySQL table for each case (one for the "Python scripts" log and one for the "user actions" log).
Ideally, we would write to these log tables asynchronously, for performance and low-latency reasons. Is there a way to do so in Python (we are using the Django ORM) and in PHP (we are using the Yii framework)?
Are there any better approaches for solving this problem ?
Update:
For the user actions (web UI), we are now considering loading the Apache log into MySQL, with the relevant session info, automatically through simple Apache configuration.
There are (AFAIK) only two ways to do anything asynchronously in PHP:
Fork the process (requires pcntl_fork)
exec() a process and release it by (assuming *nix) appending > /dev/null & to the end of the command string.
Both of these approaches result in a new process being created, albeit temporarily, so whether this affords any performance increase is debatable and depends highly on your server environment - I suspect it would make things worse, not better. If your database is very heavily loaded (and therefore the thing that is slowing you down), you might get a faster result from dumping the log messages to a file and having a daemon script that crawls it for things to enter into the DB - but again, whether this would help is debatable.
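A sketch of the second approach: hand the log line to a background CLI script and return immediately. Here log_writer.php is a hypothetical script that performs the INSERT, and $userId/$action stand in for whatever the caller has:

// Fire-and-forget: the redirect plus trailing "&" detach the process on *nix,
// so the web request does not wait for the database write.
$line = json_encode(['user' => $userId, 'action' => $action, 'ts' => time()]);
exec('php /path/to/log_writer.php ' . escapeshellarg($line) . ' > /dev/null 2>&1 &');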
Python supports multi-threading which makes life a lot easier.
You could open a raw Unix or network socket to a logging service that caches messages and writes them to disk or database asynchronously. If your PHP and Python processes are long-running and generate many messages per execution, keeping an open socket would be more performant than making separate HTTP/database requests synchronously.
You'd have to measure it against appending to a file (open once, then lock, seek, write, and unlock while running, and close at the end) to see which is faster.
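A minimal sketch of the socket idea, assuming a hypothetical log collector listening on UDP port 5140 on localhost (address, port, and message format are all assumptions):

// UDP is connectionless and effectively fire-and-forget: the send does not
// block on the collector, which batches lines into MySQL asynchronously.
$sock = stream_socket_client('udp://127.0.0.1:5140', $errno, $errstr);
if ($sock !== false) {
    fwrite($sock, json_encode(['event' => 'page_view', 'ts' => time()]) . "\n");
    fclose($sock);
}

For a long-running process, as noted above, you would open the socket once and keep it open rather than reconnecting per message.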
I am developing a website that requires a lot of background processes for the site to run, for example a queue, a video encoder, and a few other types of background processes. Currently I have these running as PHP CLI scripts that contain:
while (true) {
    // some code: do one batch of work here
    sleep($someAmountOfSeconds); // then wait before polling again
}
OK, these work fine and everything, but I was thinking of setting them up as daemons, which would give them an actual process ID that I can monitor; I could also run them in the background without keeping a terminal open all the time.
I would like to know if there is a better way of handling these. I was also thinking about cron jobs, but some of these processes need to loop every few seconds.
Any suggestions?
Creating a daemon which you can make calls to and ask questions of would seem the sensible option. It depends on whether your host permits such things; especially if you require it to do work every few seconds, an OS-based service/daemon seems far more sensible than anything else.
You could create a daemon in PHP, but in my experience this is a lot of hard work and the result is unreliable due to PHP's memory management and error handling.
I had the same problem: I wanted to write my logic in PHP but have it daemonised by a stable program that could restart the PHP script if it failed, so I wrote The Fat Controller.
It's written in C, runs as a daemon and can run PHP scripts, or indeed anything. If the PHP script ends for whatever reason, The Fat Controller will restart it. This means you don't have to take care of daemonising or error recovery - it's all handled for you.
The Fat Controller can also do lots of other things such as parallel processing which is ideal for queue processing, you can read about some potential use cases here:
http://fat-controller.sourceforge.net/use-cases.html
I've done this for 5 years, using PHP to run background tasks, and it's no different from doing it in any other language. Just use cron and lock files. The lock file will prevent multiple instances of your script from running.
It's also important to monitor your code. One check I always do to prevent stale lock files from blocking scripts is a second cron job that checks whether the lock file is older than a few minutes and whether an instance of the PHP script is actually running; if not, it removes the lock file.
Using this technique allows you to set your CRON to run the script every minute without issues.
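A sketch of that watchdog check, assuming the main script writes its PID into a lock file at a placeholder path (posix_kill needs the posix extension):

// Second cron job: clear a stale lock so the main script can run again.
$lockFile = '/tmp/worker.lock'; // placeholder; must match the main script
if (file_exists($lockFile) && time() - filemtime($lockFile) > 300) {
    $pid = (int) file_get_contents($lockFile);
    // Lock older than 5 minutes and no live process with that PID: stale.
    if ($pid <= 0 || !posix_kill($pid, 0)) {
        unlink($lockFile);
    }
}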
Use the System_Daemon package from PEAR.
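A hedged sketch of what that typically looks like; the method names follow the PEAR System_Daemon documentation as I remember it, so verify them against the package before relying on this:

require_once 'System/Daemon.php';

// Name the daemon, then detach from the terminal.
System_Daemon::setOption('appName', 'myqueueworker'); // placeholder name
System_Daemon::start();

while (!System_Daemon::isDying()) {
    // ... process one batch of work here ...
    System_Daemon::iterate(2); // sleep between iterations
}

System_Daemon::stop();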
One solution (that I really need to try myself, as I may need it) is to use cron, but have the process loop for five minutes or so, and get cron to kick it off every five minutes. As each new one starts, the previous one should be finishing (or close to finishing).
Bear in mind that the two may overlap a bit, and so you need to ensure that this doesn't cause a clash (e.g. writing to the same video file). Some simple inter-process communication may be useful, even if it is just writing to a PID file in the temp directory.
This approach is a bit low-tech but helps avoid PHP hanging onto memory over the longer term - a sort of built-in task restart!