How to set up Laravel to start and stop Python scripts - php

I am testing my Python script macro.py, which runs in a loop, from the terminal.
My plan is to code a Laravel application so that I can set up multiple instances of macro.py running on the server, and these can be started and stopped at any time.
Is this possible without too much difficulty?

Invoke an external script the same way it is mentioned in this question. It's very important to store the process PID somewhere persistent (a database or a file). The stored PID can later be used to stop the process with the kill -9 <pid> command.
Be careful: if your Python script "breaks" in the background (between the start and stop actions called from your application), there is a chance that another process will be assigned the same PID number! I recommend storing the process startup time as well as the PID. Before killing the process, do an additional check against the stored startup time, and kill the process only if it matches (otherwise assume that the process stopped unexpectedly and show appropriate information in the user interface).
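A minimal sketch of the start side of that advice, assuming a Linux host, a python interpreter on the PATH, and a hypothetical /tmp/macro.<pid>.json file for the stored state:

// Start macro.py detached from PHP; `echo $!` prints the PID of the
// backgrounded command, so shell_exec() returns immediately.
$pid = (int) trim((string) shell_exec('nohup python macro.py > /dev/null 2>&1 & echo $!'));

// Capture the process startup time as well (Linux/Mac: ps -o lstart=).
$startedAt = trim((string) shell_exec('ps -o lstart= -p ' . $pid));

// Persist both values; a database row per macro.py instance works just
// as well as a file.
file_put_contents("/tmp/macro.$pid.json", json_encode([
    'pid'        => $pid,
    'started_at' => $startedAt,
]));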

Related

Start a php script and then stop it within a php script

I will try to summarize my problem in order to make it understandable.
I have a script serverHandler.php that can start multiple servers using another script, server.php.
So I start a new server like this:
$server = shell_exec("php server.php");
So now I will have a server.php script running in the background until I manually kill it.
Is there a way to manage the kill of this server directly within the serverHandler.php script, like this?
// Start the script server.php
$server = shell_exec("php server.php");
// Stop the script that runs in the background
// So the server will be stopped
killTask($server);
Shell management of tasks is typically done using the ID of a process (PID). In order to kill the process, you must keep track of this PID and then provide it to your kill command. If your serverHandler is a command line script then keeping a local copy of the PID could suffice, but in a web interface over HTTP/HTTPS you would need to send back the PID so it could be managed.
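Note that shell_exec("php server.php") as written blocks until server.php exits, so to get a PID back at all the command has to be backgrounded. A minimal sketch, assuming a POSIX shell (killTask() is the hypothetical helper from the question):

// Background server.php and echo its PID; shell_exec() now returns
// immediately with the PID instead of blocking until the server exits.
$pid = (int) trim((string) shell_exec('php server.php > /dev/null 2>&1 & echo $!'));

// The killTask() helper from the question could then be:
function killTask(int $pid): void
{
    shell_exec('kill ' . $pid);
}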
Using a stateless language like PHP for this is not recommended, however. Retrieving process information, determining whether a given process is one of the server processes you previously dispatched, and other fine details become unnecessarily complicated and, if you're not careful, error-prone and potentially even dangerous.
Better would be to use a stateful language like Java or Python for managing these processes. By using a single point of access with a maintained state, you can have several threads "waiting" on these processes so that:
you know for certain which PIDs are expected to be valid at all times,
you can avoid the need for excessive PID validation,
you minimize the security risks of bad PID validation,
you know if these processes end prematurely, so you can remove them from the list of expected processes automatically, and
you can keep track of which PID is associated with which server instance.
Use the right tools for the problem you're trying to solve. PHP really isn't the tool for this particular problem (your servers can be written in PHP, but use a different language for your serverHandler to avoid headaches).

How to reliably start and stop a long-running process (think days) that outlives the script that started it?

I have a CLI php app with a command that starts a background process (a Selenium server) and then exits:
php app.php server:start #this should return immediately
The app also needs to be able to stop the background process in a later invocation:
php app.php server:stop
Since the server process outlives the start/stop script, I cannot manage it by keeping a file descriptor to it open.
I could store the PID of the process on the file system during start and kill that PID in stop. But if the stop command is run after the background process has died on its own, I risk killing a process that I did not start, because the PID might have been reused by the OS for some other process.
Right now my approach is to store not just the PID of the background process, but also the background process' start time and command used. It's OK, but it is hard to make it work consistently across different platforms (I need Linux, Mac, and Windows).
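In sketch form, the stop side of that check looks like this on Linux/Mac (the file name and fields are just illustrative; Windows needs a different process query, which is where the cross-platform pain comes in):

// Stored by server:start alongside the PID.
$saved = json_decode(file_get_contents('/tmp/selenium.pid.json'), true);

// Re-read the live process's start time (Linux/Mac; Windows would need
// something like wmic instead).
$liveStart = trim((string) shell_exec('ps -o lstart= -p ' . (int) $saved['pid']));

if ($liveStart !== '' && $liveStart === $saved['started_at']) {
    // Same PID and same start time: still the process we started.
    shell_exec('kill ' . (int) $saved['pid']);
}
// Otherwise the PID was reused or the process already died; kill nothing.
unlink('/tmp/selenium.pid.json');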
Can you think of a better way to implement such a behaviour?

Server-side execution of a PHP script via webpage

First of all sorry to post a question that seems to have been flogged to death on SO before. However, none of the questions I have reviewed helped me to solve my specific problem.
I have built a web application that runs an extensive data processing routine in PHP (i.e. MySQL queries, calculations, etc.).
Depending on the amount of data fed to the app this processing can take quite a long time so the script needs to run server-side and independently from the web front-end.
There is a problem, however. It seems I cannot control the script execution time limit as long as the script is invoked via CGI.
When I run the script via SSH and the command line it works fine for however long it takes to process the data.
But if I use the exec() command in a PHP script called via the web server, I always end up with the error "End of script output before headers" after approximately 45 seconds.
Rather than having to fiddle with server settings (a nightmare in terms of portability), I would like to find a solution that kicks off the script independently of CGI.
Any suggestions?
Don't execute the long script directly from the website (AKA, directly from Apache) because, as you've mentioned, it will block until it finishes and potentially time out. Instead, use the website to schedule a job (an execution of the long script) to be run immediately.
Here is a basic outline of how you can potentially do this:
Create a new, small database to store job requests, with fields such as job_id, processing_status, and run_start_time
Create some Ajax that hits your server and writes a "job request" to this jobs database, set to execute immediately.
Add a crontab script or bot that periodically watches for new jobs. If it finds a job that is yet to be processed but has passed its run_start_time, run it using exec() or some other command executor, as sketched after this list. This way the command won't time out, because it is being run not by Apache but by the cron daemon.
When the command finishes, update the jobs database saying that processing is finished.
From your website, write a frontend that allows the user to see if the requested job is finished yet. Once it finishes, it displays some kind of "Done" indicator or something similar.
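A sketch of the watcher from the crontab step, assuming the jobs table above plus a hypothetical command column holding the shell command to run (connection details are placeholders):

// Cron worker, run every minute: pick up due jobs and execute them.
$pdo = new PDO('mysql:host=localhost;dbname=jobs', 'user', 'pass');

$jobs = $pdo->query(
    "SELECT job_id, command FROM jobs
     WHERE processing_status = 'pending' AND run_start_time <= NOW()"
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($jobs as $job) {
    $pdo->prepare("UPDATE jobs SET processing_status = 'running' WHERE job_id = ?")
        ->execute([$job['job_id']]);

    // Runs under the cron daemon, not Apache/CGI, so no 45-second limit.
    exec($job['command']);

    $pdo->prepare("UPDATE jobs SET processing_status = 'finished' WHERE job_id = ?")
        ->execute([$job['job_id']]);
}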

How to set a timer for a C program being called from a PHP script?

I have a textarea in my webpage into which the user pastes a C program. On the server side, I save this code to a file appropriately. I use the shell_exec() function to call gcc to compile the C program. This works fine, and so does the execution part.
But what if the user (un)intentionally submits an infinite loop? When I use the function -
shell_exec("./a.out")
the program goes into an infinite loop. How do I break out of such a loop from the PHP script itself? Is there a way?
Use ulimit to limit the CPU usage. Note that this is per process, so if the user's program continually forks new processes, you may be in trouble.
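For example, from PHP (a sketch; ulimit -t caps CPU seconds, so a busy infinite loop gets killed by the kernel, though a program that merely sleeps would not be caught):

// Give a.out at most 5 CPU-seconds; the kernel kills it past that.
$output = shell_exec("bash -c 'ulimit -t 5; exec ./a.out' 2>&1");
// GNU coreutils' timeout caps wall-clock time instead: timeout 5 ./a.out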
Another method would be to have a wrapper process that monitors and kills all its child processes, and let that start the a.out. It depends on whether you can trust your "clients" or not (e.g. are they your good friends, or is this a school project or a public website?); your paranoia level should increase with the threat level.
If you want more refined security, run the process via ssh in a virtual machine. Then just kill the virtual machine after X seconds, and start a fresh one from a saved snapshot. (You could have a pool of VMs ready to run, so the user doesn't have to wait for a VM to load.)

Infrastructure for Running your Zend Queue Receiver

I have a simple messaging queue set up and running using the Zend_Queue object hierarchy. I'm using a Zend_Queue_Adapter_Db back-end. I'm interested in using this as a job queue, to schedule things for processing at a later time. They're jobs that don't need to happen immediately, but should happen sooner rather than later.
Is there a best-practices/standard way to set up your infrastructure to run jobs? I understand the code for receiving a message from the queue, but what's not so clear to me is how to run the program that does that receiving. A cron job that receives n messages on the command line, run once a minute? A cron job that fires off multiple web requests, each web request running the receiver script? Something else?
Tangential bonus question. If I'm running other queries with Zend_Db, will the message queue queries be considered part of that transaction?
You can do it like a thread pool. Create a command line php script to handle the receiving. It should be started by a shell script that automatically restarts the process if it dies. The shell script should not start the process if it is already running (use a $pid.running file or similar). Have cron run several of these every 1-10 minutes. That should handle the receiving nicely.
I wouldn't have the cron fire a web request unless your cron is on another server for some strange reason.
Another way to use this would be to have some background process creating data, and web users consume it as they naturally browse the site. A report generator might work this way. Company-wide reports are available to all users, but you don't want them all generating this DB- and time-intensive report. So you create a queue and process the requests one at a time, possibly removing duplicates. All users can view the report(s) when ready.
According to the docs, it doesn't look like Zend_Queue is even using the same connection as your other Zend_Db queries. But of course the best way to find out is to make a simple test.
EDIT
The multiple lines in the cron are for concurrency; each line represents a worker for the pool. I was not clear before: you don't want the PID as the identifier, you want to pass a worker name as a parameter.
/home/byron/run_queue.sh Process1
/home/byron/run_queue.sh Process2
/home/byron/run_queue.sh Process3
The bash script would check for the $process.running file; if it finds it, it exits.
Otherwise:
Create the $process.running file.
Start the php process. Block/wait until finished.
Delete the $process.running file.
This allows the php script to die without causing the pool to lose a worker.
If the queue is empty, the php script exits immediately and is started again by the next invocation of cron.
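The wrapper described above is a bash script; purely as an illustration of the same lock-file pattern, here is an equivalent sketch in PHP (receive_messages.php stands in for your Zend_Queue receiver):

// Usage: php run_queue.php Process1
$worker = $argv[1] ?? 'Process1';
$lock   = "/tmp/$worker.running";

// Another copy of this worker is still busy; let it finish.
if (file_exists($lock)) {
    exit(0);
}

touch($lock);
try {
    // Block until the receiver drains the queue and exits.
    passthru('php receive_messages.php');
} finally {
    // Remove the lock even if the receiver dies, so the pool keeps its worker.
    unlink($lock);
}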
