I am trying to run my PHP scripts from Gearman worker code, but I also want to monitor them: if they take longer than their expected run time, I want to kill them. Each script has to run on a schedule (say, every 10 minutes); the Gearman client picks the scripts that are ready to run and sends them to a Gearman worker.
I tried the following options:
1) Tried using an independent script, a normal PHP script which monitors the running processes.
But this normal script will not inform Gearman that a job got killed, so Gearman thinks the killed job is still running.
That made me think I have to synchronize the monitoring process and the running of the PHP scripts within the same worker.
These jobs also need to be restarted, and the client takes care of that.
2) I am running my PHP scripts using the following command:
cd /home/amehrotra/include/core/background; php $workload;
(this is blocking: the worker does not go to the next line until the script finishes execution).
I tried using exec, but exec does not execute the scripts:
exec ("/usr/bin/php /home/amehrotra/include/core/background/$workload >/dev/null &");
3) Tried running 2 workers, one for running the PHP script and another for monitoring, but the Gearman client does not connect to two workers.
Not the coolest plan, but try using a database as the central place from which everything is controlled.
It will take some resources and time from your workers, but that is the cost of making the setup manageable.
Each worker will need to check for commands (stop/restart) assigned to it via the db, and it can also save some status data into the db so you can see what is happening.
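As an illustration of that idea, here is a minimal sketch (the job_control table, its columns, and the connection details are assumptions, not a fixed design): the worker launches the script with proc_open so it can kill it when the db says stop or the expected run time is exceeded, and it always knows the job's real state to report back to Gearman.

<?php
// Minimal sketch: run one job via proc_open, poll a hypothetical
// job_control table for a "stop" command, and kill the job if it
// exceeds its expected run time. Table and column names are assumptions.
$pdo      = new PDO('mysql:host=localhost;dbname=jobs', 'user', 'pass');
$workload = 'some_script.php';          // comes from the Gearman job
$timeout  = 600;                        // expected max run time: 10 minutes

$proc  = proc_open("php /home/amehrotra/include/core/background/$workload",
                   [1 => ['file', '/dev/null', 'w'], 2 => ['file', '/dev/null', 'w']],
                   $pipes);
$start = time();

while (true) {
    $status = proc_get_status($proc);
    if (!$status['running']) {
        break;                          // the script finished on its own
    }
    $cmd = $pdo->query("SELECT command FROM job_control WHERE job_name = 'some_script'")
               ->fetchColumn();
    if ($cmd === 'stop' || (time() - $start) > $timeout) {
        proc_terminate($proc);          // kill it; the worker can now tell Gearman
        break;
    }
    sleep(5);                           // don't hammer the database
}
proc_close($proc);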
I have developed a script which should execute continuously and sequentially. Now on creating a cron job for this script, it keeps executing asynchronously.
1) I kept a while loop in the script and thought of executing it once, so I used @reboot to execute the script once. But if Apache crashes, the script will not start executing again on its own.
2) Using a * * * * * cron entry, it executes the script but spawns multiple instances, and the cron runs eventually all overlap on the server.
I have run out of ideas on how to execute a server script continuously and sequentially, even if the Apache server restarts.
You're asking for:
a script which should execute continuously and sequentially
That's the definition of a daemon. You can use upstart to easily create a daemon from your PHP code.
Here is a good article explaining how to create a daemon with upstart and node.js (easy to adapt to a PHP script): http://kvz.io/blog/2009/12/15/run-nodejs-as-a-service-on-ubuntu-karmic/
cron is for repeating jobs on a timed basis (min interval: 1 minute). It's intended for scripts that eventually exit and need to be restarted from scratch. Sounds like your script is essentially trying to be a daemon - started once, then left running permanently.
Instead of using cron to repeatedly start new copies of the script, simply start your script ONCE via an init script, e.g. /etc/rc.local.
There are some ideas for checking whether a script is running in this question, and you could have a cron job run every couple of minutes to ensure it's running, starting it if not.
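For the overlapping-cron part of the question, a common guard is a flock-based lock, sketched below (the lock file path is an assumption): a second copy started by cron exits immediately if one is already running.

<?php
// Single-instance guard: if cron starts a second copy while the first
// is still working, the second exits at once. Lock path is an assumption.
$fp = fopen('/tmp/myscript.lock', 'c');
if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB)) {
    exit;                        // another instance already holds the lock
}
// ... the long-running, sequential work goes here ...
flock($fp, LOCK_UN);
fclose($fp);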
In the past, I ran a bunch of scripts each as a separate cron job. Now I'd like to run a controller script with one cron job, then have that call the scripts separately (and in parallel, all at the same time), so I don't have to create a new cron job every time I add another script.
I looked up pcntl_fork() but we don't have that installed. Can fsockopen() do this as well?
A few questions:
I saw this example, http://phplens.com/phpeverywhere/?q=node/view/254, that uses fsockopen(). Will this allow me to run PHP scripts in parallel? Note, the scripts don't interact, but I would still like to know if any of them exited prematurely with an error.
Secondly, the scripts I'm running aren't externally accessible; they are internal only. The script was previously run like so: php -f /path/to/my/script1.php. It's not a web-accessible path. Would the example in #1 work with this, or only with web-accessible paths?
Thanks for any advice you can offer.
You can use proc_open to run multiple processes without waiting for each process to finish.
You will have a process handle, you can terminate each process at any time and you can read the standard output of each process.
You can also communicate via pipes, which is optional.
Passing php /your/path/to/script.php param1 "param2 x" as the first parameter means starting a separate PHP process.
proc_open (see Example #1 in the PHP manual)
Ultimately you will want to use an infinite while loop plus usleep (or sleep) to avoid maxing out the CPU. Break when all processes finish, or after you have killed them.
Edit: you can know if a process has exited prematurely.
Edit2: a simpler way of doing the above is popen
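A minimal sketch of the proc_open approach, using the path style from the question (the second script name is a placeholder): each script is launched without waiting, then a polling loop watches proc_get_status() and reports any script that exited with a non-zero code.

<?php
// Launch several PHP scripts in parallel and watch them finish.
$scripts = ['/path/to/my/script1.php', '/path/to/my/script2.php'];
$procs   = [];

foreach ($scripts as $script) {
    // send output to /dev/null; use pipes instead if you want to read it
    $spec = [1 => ['file', '/dev/null', 'w'], 2 => ['file', '/dev/null', 'w']];
    $procs[$script] = proc_open("php -f $script", $spec, $pipes);
}

while ($procs) {
    foreach ($procs as $script => $proc) {
        $status = proc_get_status($proc);
        if (!$status['running']) {
            // exitcode is only valid the first time 'running' reports false
            if ($status['exitcode'] !== 0) {
                echo "$script exited prematurely with code {$status['exitcode']}\n";
            }
            proc_close($proc);
            unset($procs[$script]);
        }
    }
    usleep(200000);              // 0.2s pause to avoid maxing out the CPU
}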
Please correct me if I'm wrong, but if I understand things correctly, the solution Tiberiu-Ionut Stan proposed implies that starting the processes with proc_open and waiting for them to finish would not run as a cron script, but as part of a continuously running program/service, right?
As far as I understand cron jobs, the controller script user920050 was thinking of using would be started by cron on a schedule, and each new instance would launch the processes all over again, wait for them to finish, and probably run in parallel with other cron-launched instances of the controller script.
Recently I've been researching the use of Beanstalkd with PHP. I've learned quite a bit but have a few questions about the setup on a server, etc.
Here is how I see it working:
1) I install Beanstalkd and any dependencies (such as libevent) on my Ubuntu server. I then start the Beanstalkd daemon (which should basically run at all times).
2) Somewhere in my website (such as when a user performs some action), tasks get added to various tubes within the Beanstalkd queue.
3) I have a bash script (such as the following one) that is run as a daemon and basically executes a PHP script:
#!/bin/sh
php worker.php
4) The worker script would have something like this to execute the queued up tasks:
while (true) {
    // block until a job arrives on the 'test' tube
    $job = $this->pheanstalk->watch('test')->ignore('default')->reserve();
    $job_encoded = json_decode($job->getData(), false);
    $done_jobs[] = $job_encoded;
    $this->log('job:'.print_r($job_encoded, 1));
    // remove the job from the queue once it has been handled
    $this->pheanstalk->delete($job);
}
Now here are my questions based on the above setup (correct me if I'm wrong about it):
Say I have the task of importing an RSS feed into a database or something. If 10 users do this at once, they'll all be queued up in the "test" tube. However, they'd then only be executed one at a time. Would it be better to have 10 different tubes all executing at the same time?
If I do need more tubes, does that then also mean that I'd need 10 worker scripts? One for each tube, all running concurrently with basically the same code except for the string literal in the watch() function.
If I run that script as a daemon, how does that work? Will it constantly be executing the worker.php script? That script loops until the queue is empty theoretically, so shouldn't it only be kicked off once? How does the daemon decide how often to execute worker.php? Is that just a setting?
Thanks!
If the worker isn't taking too long to fetch the feed, it will be fine. You can run multiple workers if required to process more than one at a time. I've got a system (currently using Amazon SQS, but I've done similar with BeanstalkD before), with up to 200 (or more) workers pulling from the queue.
A single worker script (the same script running multiple times) should be fine - the script can watch multiple tubes at the same time, and the first one available will be reserved. You can also use the job-stat command to see where a particular $job came from (which tube), or put some meta-information into the message if you need to tell each type from another.
A good example of running a worker is described here. I've also added supervisord (also, a useful post to get started) to easily start and keep running a number of workers per machine (I run shell scripts, as in the first link). I would limit the number of times the worker loops, and also put a number into the reserve() call so it waits a few seconds, or more, for the next job to become available, instead of spinning out of control in a tight loop that never pauses, even when there is nothing to do.
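As a sketch of those two safeguards, based on the worker code from the question (this assumes a Pheanstalk version where reserve() accepts a timeout in seconds; newer versions use reserveWithTimeout() instead):

<?php
// Worker loop with an iteration limit and a reserve() timeout, so it
// neither runs forever nor spins when the tube is empty.
$max_loops = 100;                // then exit; the shell script re-runs us

for ($i = 0; $i < $max_loops; $i++) {
    $job = $this->pheanstalk->watch('test')->ignore('default')->reserve(10);
    if ($job === false) {
        continue;                // no job arrived within 10 seconds
    }
    $job_encoded = json_decode($job->getData(), false);
    $this->log('job:'.print_r($job_encoded, 1));
    $this->pheanstalk->delete($job);
}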
Addendum:
The shell script would be run as many times as you need (the link shows how to have it re-run as required with exec $0). Whenever the PHP script exits, the shell script re-runs it.
Apparently there's a Django app to show some stats, but it's trivial enough to connect to the daemon, get a list of tubes, and then get the stats for each tube - or just counts.
Is there any way to ensure some process is always running?
Let's say i need this to be running:
php -f myscript.php -param1=value1
Now I do it this way:
The process is launched, and right after that its PID is written to a file (myProcess.pid).
Then I schedule a cron job which periodically tries to launch the process again, every 5 minutes.
The launcher is actually a bash script which first checks whether myProcess.pid exists and, if it does, whether the PID in that file is really running; if it is not, it launches the process and rewrites myProcess.pid with the new PID.
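For reference, a PHP equivalent of that wrapper might look like the sketch below (using the file and command from the question; posix_kill() with signal 0 just tests whether the PID is alive, and requires the posix extension):

<?php
// Sketch of the cron-launched wrapper: relaunch myscript.php only if
// the PID recorded in myProcess.pid is no longer alive.
$pidFile = 'myProcess.pid';
if (file_exists($pidFile)) {
    $pid = (int) file_get_contents($pidFile);
    if ($pid > 0 && posix_kill($pid, 0)) {
        exit;                               // still running, nothing to do
    }
}
// start in the background and capture the new PID
// ($! is the shell's last-background-process id)
exec('php -f myscript.php -param1=value1 > /dev/null 2>&1 & echo $!', $out);
file_put_contents($pidFile, trim($out[0]));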
There are several problems with that solution:
What about the "blackout" time period between "cron checks"?
What about the period? Isn't running a check every 5 minutes an unnecessary use of CPU time?
What if the file myProcess.pid is compromised, or just deleted by someone or something? Then the bash script launches the process again, even though it's already running.
What if the process dies and another process takes over the exact same PID?
Does any better approach exist?
You'll need some kind of monitoring application like
http://supervisord.org/
http://blogs.nopcode.org/brainstorm/2011/04/21/supervisord-one-process-to-rule-them-all/
Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
Alternatively, you can create some daemonized php code:
http://simas.posterous.com/writing-a-php-daemon-application
http://kevin.vanzonneveld.net/techblog/article/create_daemons_in_php/
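Those two articles cover it in depth; the core of a daemonized PHP script is just a fork plus a new session, roughly like this sketch (assumes the pcntl and posix extensions are enabled):

<?php
// Minimal daemonization sketch: fork, let the parent exit, and detach
// the child from the controlling terminal.
$pid = pcntl_fork();
if ($pid === -1) {
    exit("fork failed\n");
} elseif ($pid > 0) {
    exit(0);                     // parent exits; the child lives on
}
posix_setsid();                  // become session leader, no terminal

while (true) {
    // ... the actual work of the daemon goes here ...
    sleep(5);                    // don't busy-loop
}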
I have a process I'd like to be able to run in the background by starting up a Gearman Client any time.
I've found success by opening two SSH connections to my server: in one I start the worker, and in the other I run the client. This produces the desired output.
The problem is that I'd like to have a worker constantly running in the background, so I can just call up a client whenever I need the process done. But as soon as I close the terminal in which the worker PHP file is running, a call to the client no longer works - the worker seems to die.
Is there a way to have the worker run constantly in the background, so calling a new client will work without having to start up a new worker?
Thanks!
If you want a program to keep running even after its parent is dead (i.e. you've closed your terminal), you must invoke it with nohup:
nohup your-command &
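For example, applied to a worker script like the one in the question (the file name and log path are placeholders):

nohup php /path/to/worker.php > /tmp/worker.log 2>&1 &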
Quoting the relevant Wikipedia page I linked to:
nohup is a POSIX command to ignore the HUP (hangup) signal, enabling the command to keep running after the user who issues the command has logged out. The HUP (hangup) signal is by convention the way a terminal warns depending processes of logout.
For another (possibly more) interesting solution, see the following article : Dæmonize Your PHP.
It points to Supervisord, which makes sure a process is still running, relaunching it if necessary.
Is there a way to have the worker run constantly in the background, so calling a new client will work without having to start up a new worker?
Supervisor!
The 2009 PHP Advent Calendar has a quick article on using Supervisor (and other tricks) to create constantly-running PHP scripts without having to deal with the daemonization process in PHP itself.