I'm making a scan server for my company, which will be used to launch scans from tools such as Nessus, Nmap, Nikto, etc. I've written the pages in PHP, but I need control over the subsequent processes (spawned with nohup and backgrounded with &), as I need to perform various actions once the scans have completed (such as emailing them, downloading reports off the Nessus server, etc.).
I was advised on here to create a Python daemon that the PHP pages communicate with. I've googled endlessly, but I can't find anything that explains the logic behind the communication from a beginner's perspective (coding a daemon will be my most advanced project yet). I'm aware of IPC and Unix domain sockets, for example, but I'm not sure how I can employ them in my situation. As such, I'm after some advice or pointers as to what I should do.
I was thinking I could create a Python script with a while loop that constantly checks whether the process has terminated and, when it does, performs the appropriate post-termination action. The script would be daemonized so it runs in the background, and I would call it from the PHP pages with the PID as a parameter, which I could read with the argparse module, for example.
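Something along these lines is roughly what I'm picturing (just a sketch: it polls with os.kill(pid, 0) to test whether the PID still exists, and the post-processing step is a placeholder):

    import argparse
    import os
    import time

    def pid_alive(pid):
        try:
            os.kill(pid, 0)          # signal 0 doesn't kill; it only checks the PID exists
            return True
        except OSError:
            return False

    parser = argparse.ArgumentParser()
    parser.add_argument("pid", type=int)
    args = parser.parse_args()

    while pid_alive(args.pid):
        time.sleep(5)                # poll every few seconds

    # placeholder: email the results, download the report, etc.
    print("scan process %d has finished" % args.pid)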
Am I on the right lines in terms of logic - or are there better solutions?
Any help, or just something to google, is much appreciated! Thanks
I think something like gearman would certainly make it easier to implement this.
Gearman is a job server which lets you start jobs, query if the job is still running and fetch the output of the job (as text).
It supports PHP and Python (among others).
(This answer made me feel like a salesman).
So your plan is: PHP spawns nmap & a watchdog. The watchdog keeps polling for nmap to finish running and then does some post-processing once it's done.
Slightly cleaner would be:
PHP spawns a 'process manager' (which you also write). This process manager is basically a program that executes nmap in a child process, waits for this child process to finish (using the 'wait' system call, which, for example, in C looks like: http://linux.die.net/man/2/wait), and does the post-processing.
It will also be more efficient because a 'wait' will probably be cheaper than repeatedly checking if the PID has terminated.
If you like python more than C, python has subprocess management too:
http://docs.python.org/library/subprocess.html
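In Python, the same process-manager idea can be sketched roughly like this (the post-processing function is just a placeholder for emailing the report, downloading results, and so on):

    import subprocess

    def post_process(report_path):
        # placeholder: email the report, push results into a database, etc.
        print("scan finished, report at %s" % report_path)

    def run_scan(target, report_path="report.xml"):
        # start nmap as a child process
        proc = subprocess.Popen(["nmap", "-oX", report_path, target])
        # wait() blocks until the child exits (the Python counterpart of wait(2))
        if proc.wait() == 0:
            post_process(report_path)

    if __name__ == "__main__":
        run_scan("127.0.0.1")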
Related
Would appreciate some help understanding typical best practices in carrying out a series of tasks using Gearman in conjunction with PHP (among other things).
Here is the basic scenario:
A user uploads a set of image files through a web-based interface. The PHP code responding to the POST request generates an entry in a database for each file (mostly with null values in the columns), queues a Gearman job for each file to do the analysis, generates a status page, and exits.
The Gearman worker gets a job for a file and starts a relatively long-running analysis. The result of that analysis is a set of parameters that need to be inserted back into the database record for that file.
My question is, what is the generally accepted method of doing this? Should I use a callback that will ultimately kick off a different php script that is going to do the modification, or should the worker function itself do the database modification?
Everything is currently running on the same machine; I'm planning on using Gearman for background scheduling, rather than for scaling by farming out to different machines, but in any case any of the functions could connect to the database wherever it is.
Any thoughts appreciated; just looking for some insights on how this typically gets structured and what might be considered best practice.
Are you sure you want to use Gearman? I only ask because it was the de facto PHP job server about 15 years ago but hasn't been a reliable solution for quite some time. I am not sure if things have drastically improved in the last 12 months, but the last time I evaluated Gearman, it wasn't production-capable.
Now, on to the questions.
what is the generally accepted method of doing this? Should I use a callback that will ultimately kick off a different php script that is going to do the modification, or should the worker function itself do the database modification?
You are going to follow this general pattern with any job queue:
Collect a unit of work. In your case, it will be 1 of the images and any information about who that image belongs to, user id, etc.
Submit the work to the job queue with this information.
Job Queue's worker process picks up the work, and starts processing it. This is where I would create records in the database as you can opt to not create them on job failure.
The job queue is going to track which jobs have completed and, usually, the status of completion. If you are using Gearman, this is the gearmand process. You also need something to pick up work and process it; I will refer to this as the job worker. The job worker is where the concurrency happens, which is what I think you were referring to when you said "kick off a different php script."
You can just kick off a PHP script at an interval (with supervisord or a cron job) for a kind of poll & fork approach; a rough sketch of that idea follows after the links below. It's not the most efficient approach, but it doesn't sound like it will really matter for your application's use case. You could also use pcntl_fork or pthreads in PHP to get more control over your concurrent processes and implement a worker pool pattern, but that is much more complicated than just firing off a script. If you are interested in trying to implement some concurrency in PHP, I have a proof-of-concept job worker for beanstalkd available on GitHub that implements a worker pool with both fork and pthreads. I have also included a couple of other resources on the subject of concurrency.
Job Worker (pthreads)
Job Worker (fork)
PHP Daemon Example
PHP IPC Example
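As a rough illustration of the poll & fork approach mentioned above, here is a sketch in Python (purely illustrative; fetch_pending_jobs and process_job are hypothetical stand-ins for reading your queue table and running the analysis):

    import multiprocessing
    import time

    def fetch_pending_jobs():
        # hypothetical: read pending rows from your queue/database table
        return []

    def process_job(job):
        # hypothetical: run the long analysis and write the results back to the DB
        pass

    def worker_loop(poll_interval=5, max_workers=4):
        pool = multiprocessing.Pool(processes=max_workers)
        while True:
            for job in fetch_pending_jobs():
                pool.apply_async(process_job, (job,))   # hand the job to a worker process
            time.sleep(poll_interval)

    if __name__ == "__main__":
        worker_loop()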
Running an infinite loop in a cron job. Suppose I have written a PHP-based script to run on my server using a cron job, and I want to use an infinite loop in that PHP script. Any ideas for running an infinite loop in a cron job?
Infinite looping applications are usually called daemons. They are system services that offer some kind of constant processing and/or the readiness to accept some potential incoming processing activities.
Gearman is a system daemon you can install that can handle various tasks you give it. It's a complex tool that allows many things, but it could be used to implement what you need.
PHP::Gearman is a Gearman client that talks to the Gearman daemon and sends tasks to the daemon specifying the conditions under which the task must be executed.
The limitations that @Jeffrey emphasized about PHP are true, because PHP was designed as a share-nothing architecture (one page load equals one script execution - each page load works under its own data context).
Perhaps System Daemon (a PEAR package) may assist in overcoming some or all of the limitations mentioned above. I haven't used it, so I can't tell you much more about it, but it's as good a place to start as any.
My app takes a loooong list of URLs and splits it into X chunks (where X = $threads), so that I can start a thread.php for each chunk and assign its URLs to it. Each one then makes GET and POST requests to retrieve data.
I am using this:
for ($x = 1; $x <= $threads; $x++) {
    $pid[] = exec("/path/bin/php thread.php <options> > /dev/null & echo \$!");
}
For "threading" (I know its not really threading, is it forking or what?), I save the pids into a file for later checking if N thread is running and to stop them.
Now I want to move out from php, I was thinking about using python because I'd like to learn more about it.
How can I achieve this kind of "threading" with python? (or ruby)
Or is there a better way to launch multiple background threads in Python or Ruby that run in parallel (at the same time)?
The threads don't need to communicate with each other or with a main thread; they are independent. They make HTTP requests and interact with a MySQL db, and they may need to access/modify the same table entries (I haven't thought about this or how I will solve it yet).
The app works with "projects", each project has a "max threads" variable and I use a web interface to control it (so I could still use php for the interface [starting/stopping threads] in the new app).
I wanted to use
from threading import Thread
in Python, but I've been told those threads won't run in parallel, only one at a time.
The app is intended to run on linux web servers.
Any suggestion will be appreciated.
For Python 2.6+, consider the multiprocessing module:
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows
For Python 2.5, the same functionality is available via pyprocessing.
In addition to the example at the links above, here are some additional links to get you started:
multiprocessing Basics
Communication between processes with multiprocessing
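For the URL-splitting use case in the question, a minimal multiprocessing.Pool sketch might look like this (the URL list and the per-URL work are placeholders; on Python 2, urllib2 plays the role of urllib.request):

    import multiprocessing
    import urllib.request   # urllib2 on Python 2

    def process_url(url):
        # placeholder: do the GET/POST requests and any MySQL work for one URL
        return url, len(urllib.request.urlopen(url).read())

    if __name__ == "__main__":
        urls = ["http://example.com/a", "http://example.com/b"]   # hypothetical list
        pool = multiprocessing.Pool(processes=4)                  # roughly your "max threads"
        for url, size in pool.map(process_url, urls):             # runs in parallel processes
            print("%s -> %d bytes" % (url, size))
        pool.close()
        pool.join()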
You don't want threading. You want a work queue like Gearman that you can send jobs to asynchronously.
It's worth noting that this is a cross-platform, cross-language solution. There are bindings for many languages (including Python and PHP) provided officially, and many more unofficially with a bit of work with Google.
The original intent is effectively load balancing, but it works just as well with only one machine. Basically, you can create one or more Workers that listen for Jobs. You can control the number of Workers and the types of Jobs they can listen for.
If you insert five Jobs into the queue at the same time, and there happen to be five Workers waiting, each Worker will be handed one of the Jobs. If there are more Jobs than Workers, the Jobs get handled sequentially. Your Client (the thing that submits Jobs) can either wait for all of the Jobs it's created to complete, or it can simply place them in the queue and continue on.
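A minimal sketch of that flow in Python, assuming the python-gearman package and a gearmand running on localhost:4730 (the task name and URLs are made up; in practice the client and worker would live in separate scripts):

    import gearman

    GEARMAN_HOSTS = ["localhost:4730"]

    # client side: queue one background job per URL and return immediately
    client = gearman.GearmanClient(GEARMAN_HOSTS)
    for url in ["http://example.com/1", "http://example.com/2"]:
        client.submit_job("fetch_url", url, background=True)

    # worker side: run several copies of this to process several jobs in parallel
    def fetch_url(gearman_worker, gearman_job):
        url = gearman_job.data        # the payload the client submitted
        # placeholder: do the GET/POST work and database updates for this URL
        return "done: " + url

    worker = gearman.GearmanWorker(GEARMAN_HOSTS)
    worker.register_task("fetch_url", fetch_url)
    worker.work()                     # blocks forever, picking up jobs as they arrive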
I know PHP is not multithreaded, but I talked with a friend about this: if I have a large algorithmic problem I want to solve with PHP, isn't the solution simply to use the "curl_multi_xxx" interface and start n HTTP requests on the same server? This is what I would call PHP-style multithreading.
Are there any problems with this in the typical webserver environment? The master request, which is waiting on "curl_multi_exec", shouldn't count any of that time against its maximum runtime or memory limit.
I have never seen this promoted anywhere as a solution to prevent a script from being killed by overly restrictive admin settings for PHP.
If I add this as a feature to a popular PHP system, will there be server admins hiring a Russian mafia hitman to get revenge for this hack?
If I add this as a feature to a popular PHP system, will there be server admins hiring a Russian mafia hitman to get revenge for this hack?
No, but it's still a terrible idea, if for no other reason than that PHP is supposed to render web pages, not run big algorithms. I see people trying to do this in ASP.Net all the time. There are two proper solutions.
Have your PHP script spawn a process that runs independently of the web server and updates a common data store (probably a database) with information about the progress of the task, which your PHP scripts can access (see the sketch below).
Have a constantly running daemon that checks for jobs in a common data store; the PHP scripts issue jobs to it and view the progress of currently running jobs there.
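A minimal sketch of the first option, using Python and sqlite3 purely for illustration (the jobs table, its columns, and the work loop are all made up); the idea is that your PHP pages can read the progress column while the job runs:

    import sqlite3
    import time

    def run_big_job(job_id, db_path="progress.db"):
        db = sqlite3.connect(db_path)
        db.execute("CREATE TABLE IF NOT EXISTS jobs (id INTEGER PRIMARY KEY, progress INTEGER)")
        db.execute("INSERT OR REPLACE INTO jobs (id, progress) VALUES (?, 0)", (job_id,))
        db.commit()
        for step in range(1, 101):
            time.sleep(0.1)           # stand-in for one slice of the real algorithm
            db.execute("UPDATE jobs SET progress = ? WHERE id = ?", (step, job_id))
            db.commit()               # the PHP pages poll this row to show progress

    if __name__ == "__main__":
        run_big_job(1)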
By using curl, you are adding a network timeout dependency into the mix. Ideally you would run everything from the command line to avoid timeout issues.
PHP does support forking (pcntl_fork). You can fork some processes and then monitor them with something like pcntl_waitpid. You end up with one "parent" process to monitor the children it spawned.
Keep in mind that while one process can start up, load everything, and then fork, you can't share things like database connections, so each forked process should establish its own. I've used forking for up to 50 processes.
If forking isn't available for your install of PHP, you can spawn a process as Spencer mentioned. Just make sure you spawn the process in such a way that it doesn't stop processing of your main script. You also want to get the process ID so you can monitor the spawned processes.
exec("nohup /path/to/php.script > /dev/null 2>&1 & echo $!", $output);
$pid = $output[0];
You can also use the above exec() setup to spawn a process started from a web page and get control back immediately.
Out of curiosity - what is your "large algorithmic problem" attempting to accomplish?
You might be better to write it as an Amazon EC2 service, then sell access to the service rather than the package itself.
Edit: you now mention "mass emails". There are already services that do this, they're generally known as "spammers". Please don't.
Lothar,
As far as I know, PHP doesn't run as a persistent service the way its competitors do, so there is no way for PHP to know how much time has passed unless you're constantly interrupting the process to check the time elapsed. So, IMO, no, you can't do that in PHP :)
I'm working on a PHP web interface that will receive huge traffic. Some insert/update requests will contain images that will have to be resized to some common sizes to speed up their further retrieval.
One way to do it is probably to set up some asynchronous queue on the server, e.g. set up a table in a db with a task queue that gets populated by the PHP requests, and let some other process on the server watch the table and process any waiting tasks. How would you do that? What would be the proper environment for that long-running process? Java, or maybe something lighter would do?
If what you're doing is really high volume then what you're looking for is something like beanstalkd. It is a distributed work queue processor. You just put a job on the queue and then forget about it.
Of course then you need something at the other end reading the queue and processing the work. There are multiple ways of doing this.
The easiest is probably to have a cron job that runs sufficiently often to read the work queue and process the requests. Alternatively you can use some kind of persistent daemon process that is woken up by work becoming available.
The advantage of this kind of approach is you can tailor the number of workers to how much work needs to get done, and beanstalkd handles distributed processing (in the sense that the listeners can be on different machines).
You may set up a cron task that checks the queue table. The script that handles the actions waiting in the queue can be written in e.g. PHP, so you don't have to change implementation languages.
I use Perl for long-running processes, in combination with beanstalkd. The nice thing is that the beanstalkd client for Perl has a blocking reserve method. This way it uses almost no CPU time when there is nothing to do, but when it has a job to do, it will automatically start processing it. Very efficient.
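The same blocking-reserve idea in Python might look roughly like this (assuming the beanstalkc client and a beanstalkd on its default port; handle() is a placeholder for the real image work):

    import beanstalkc

    def handle(payload):
        # placeholder: resize the image, update the database row, etc.
        print("processing %s" % payload)

    def work_forever():
        queue = beanstalkc.Connection(host="localhost", port=11300)
        while True:
            job = queue.reserve()     # blocks, using almost no CPU, until a job arrives
            handle(job.body)
            job.delete()              # acknowledge so the job is not handed out again

    if __name__ == "__main__":
        work_forever()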
You would want to create a daemon that "sleeps" for a period of time and then checks the database for items to process. Once it finds items to process, it processes them and then checks again as soon as it is done; if there are no more, it goes back to sleep.
You can create daemons in any language, including PHP.
Alternatively, you can just have PHP execute a script and continue on. So that PHP won't wait for the script to finish before continuing, execute it in the background.
exec("nohup /usr/bin/php -f /path/to/script/script.php > /dev/null 2>&1 &");
Although you have to be careful with that, since you could end up with too many processes running in the background, because there is no queueing.
You could use a service like IronWorker to do image processing in the background and take the load off your servers. Since it's a service, you won't need to manage or set up anything else, and it will scale with you as you grow, so if you can do one image with it, you can scale to millions of images with zero effort.
Here's an article on how to do a bunch of image processing transformations:
http://dev.iron.io/solutions/image-processing/
The examples there are in Ruby, but you could do the same stuff with PHP pretty easily.