We are writing a PHP script which creates virtual machines via a RESTful API call. That part is quite easy. Once the request to create the VM is sent to the server, the API immediately returns with essentially "Machine queued to be created...". When we create a virtual machine, we insert a record into a MySQL database with the VM label and DATE-CREATED-STARTED. That record also has a field DATE-CREATED-FINISHED, which is NULL.
LABEL       DATE-CREATED-STARTED    DATE-CREATED-FINISHED
test-vm-1   2011-05-14 12:00:00     NULL
So here is our problem. How do we spin off a PHP worker, on the initial request, that checks the status of the queued virtual machine every 10 seconds and, when the virtual machine is up and running, updates DATE-CREATED-FINISHED? Keep in mind, the initial API request immediately returns "Machine queued to be created." and then exits. The PHP worker needs to be doing the 10-second check in the background.
Can your server not fire a request once the VM has been created?
Eg.
PHP script requests the server via your API to create a new VM.
PHP script records start time and exits. VM in queue on server waits to be created.
Server finally creates the VM and calls an update-tables PHP script.
That way you have no polling, no cron scripts, no background threads, etc. But only if your system can work this way. Otherwise I'd look at setting up a cron script as mentioned by @dqhendricks, or if possible a background script as @Savas Alp mentioned.
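A minimal sketch of such an update-tables callback, assuming the provider can POST the VM label back to you when provisioning finishes (the connection details and the virtual_machines table name are assumptions; the column names follow the question):
<?php
// vm_created_callback.php - hypothetical endpoint the VM provider hits
// (e.g. POST label=test-vm-1) once a machine finishes provisioning.
if (!isset($_POST['label'])) {
    http_response_code(400);
    exit('Missing label');
}

$pdo = new PDO('mysql:host=localhost;dbname=vms', 'user', 'pass');
$stmt = $pdo->prepare(
    'UPDATE virtual_machines
        SET `DATE-CREATED-FINISHED` = NOW()
      WHERE `LABEL` = ? AND `DATE-CREATED-FINISHED` IS NULL'
);
$stmt->execute(array($_POST['label']));
echo 'OK';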
If your hosting allows, create a PHP CLI program and execute it in the background like the following.
<?php
while (true)
{
sleep(10);
// Do the checks etc.
}
?>
And run it like the following command:
php background.php &   # assuming you're using Linux
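If the worker has to be started from the initial web request itself, one hedged option on a Linux host where exec() is allowed is to launch it detached, so the request can return "Machine queued to be created..." immediately (the script path and log location are assumptions):
<?php
// Launch the polling worker detached from the web request so the
// "Machine queued to be created..." response can return immediately.
$label = escapeshellarg('test-vm-1');
exec("nohup php /path/to/background.php {$label} > /tmp/vm-worker.log 2>&1 &");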
If your hosting does not allow running background jobs, you must utilize every opportunity to do this check, like doing it at the beginning of every PHP page request. To help facilitate this, after creating a virtual machine, the resulting page may refresh itself every 10 seconds!
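As a rough sketch of that fallback, with check_pending_vms() being a hypothetical helper that polls the API and fills in DATE-CREATED-FINISHED:
<?php
// Run the status check on every page load, then ask the browser to
// reload this page in 10 seconds until the VM is ready.
check_pending_vms(); // hypothetical helper: poll the API, update the DB
header('Refresh: 10');
echo 'Your virtual machine is being created...';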
As a variant, you can use a Tasks module; here is a sample of the task code:
class VMCheck extends \Tasks\Task
{
    protected $vm_name;

    public function add($vm_name)
    {
        $this->getStorage()->store(__CLASS__, $vm_name, true);
    }

    public function execute()
    {
        do
        {
            $check = CheckAPI_call($this->vm_name); // your checking code here
            sleep(10);
        }
        while (empty($check));
    }

    public function restore($data)
    {
        $this->vm_name = $data;
    }
}
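Queuing a check for a freshly requested machine would then look roughly like this, assuming the Tasks module lets you instantiate a task directly and persist it with the add() method shown above:
// Hypothetical usage: persist a check task for the VM we just requested.
$task = new VMCheck();
$task->add('test-vm-1');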
I'm currently running an Apache server (2.2) on my local machine (Windows) which I'm using to run some PHP scripts to take care of some tedious work. One of the scripts involves a ton of moving, resizing, and downloading/uploading of files to another server. I would very much like the script to run constantly so that I don't have to babysit it by starting it up again every time it times out.
set_time_limit(0);
ignore_user_abort(1);
Both are set in my script, but after about 30 minutes to an hour the script stops and I get a 504 Gateway Time-out message in my browser. Is there something I'm missing in Apache or PHP to prevent the timeout? Or should I be running the script a different way?
Or should I be running the script a different way?
Definitely. You should run your script from the command line (CLI).
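On a Windows/XAMPP setup that might look like the following (the paths are assumptions); note that the CLI SAPI has no max_execution_time by default, so no gateway timeout applies:
C:\xampp\php\php.exe -f C:\xampp\htdocs\my-long-script.php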
If I had to implement something like this I would use 2 different scripts:
A. process_controller.php
B. process.php
The workflow should be:
the user calls script A using a browser
script A starts script B using system() or exec() and passes it a "process token" via the command line
script B writes its execution status into a shared space: a file named after the token, a database table, in general something that script A can also read using the token as a reference
the page served by script A contains a polling AJAX call that asks script A for the status of the process for the given token
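A rough sketch of the controller side (script A), assuming a Unix-like host for the backgrounding syntax and a file named after the token as the shared space; process.php and the status format are assumptions:
<?php
// process_controller.php - hypothetical controller (script A)
session_start();
$token = session_id();                                  // one process per user
$statusFile = sys_get_temp_dir() . "/status_{$token}.txt";

if (isset($_GET['action']) && $_GET['action'] === 'getStatus') {
    // Polled by the AJAX snippet below: return whatever script B last wrote.
    echo file_exists($statusFile) ? file_get_contents($statusFile) : 'not started';
    exit;
}

// Default action: launch script B in the background, passing it the token.
exec('php process.php ' . escapeshellarg($token) . ' > /dev/null 2>&1 &');
echo 'Process started with token ' . htmlspecialchars($token);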
Ajax polling:
<script>
    var $myToken; // set this to the token returned by process_controller.php when the process was started
    function ajaxPolling()
    {
        $.get('process_controller.php?action=getStatus&token=' + $myToken, function(data) {
            $('.result').html(data);
        });
    }
    setInterval("ajaxPolling()", 60 * 1000); // every minute
</script>
There are some considerations about the communication between the 2 processes, depending on how many instances of script B you want to be able to run in parallel:
Just one: you don't need a random/unique token
One per user: session_start(); $token = session_id();
More than one per user: session_start(); $token = session_id().microtime();
If you need to run it from your browser, you should make sure that there is no PHP execution time limit set in the php.ini file, and also that there is no limit set in mod_php (or whatever you are using) under Apache.
Use php's system() to call a shell script which starts a service/background task.
I noticed that when I have an endless worker I cannot profile PHP shell scripts, because when the worker is killed it doesn't send the probe data.
What changes should I make?
This happens when you are trying to profile a worker which is running an endless loop. In this case you have to manually edit your code to either remove the endless loop or instrument it to manually call the close() method of the probe (https://blackfire.io/doc/manual-instrumentation).
That's because the data is sent to the agent only when the close() method is called (it is called automatically at the end of the program unless you kill it).
You can manually instrument some code by using the BlackfireProbe class that comes bundled with Blackfire's probe:
// Get the probe main instance
$probe = BlackfireProbe::getMainInstance();
// start profiling the code
$probe->enable();
// Calling close() instead of disable() stops the profiling and forces the collected data to be sent to Blackfire:
// stop the profiling
// send the result to Blackfire
$probe->close();
As with auto-instrumentation, profiling is only active when the code is run through the Companion or the blackfire CLI utility. If not, all calls are converted to noops.
I don't know, maybe in 2015 the following page did not exist, but now you can do the profiling in the following way: https://blackfire.io/docs/24-days/17-php-sdk
use Blackfire\Client;
use Blackfire\LoopClient;
use Blackfire\Profile\Configuration as ProfileConfig;

$profileConfig = new ProfileConfig();

$blackfire = new LoopClient(new Client(), 10);
$blackfire->setSignal(SIGUSR1);
$blackfire->attachReference(7);
$blackfire->promoteReferenceSignal(SIGUSR2);

for (;;) {
    $blackfire->startLoop($profileConfig);
    consume();
    $blackfire->endLoop();
    usleep(400000);
}
Now you can send the SIGUSR1 signal to the worker process and LoopClient will start profiling. It will profile 10 iterations of consume() and then send the resulting probe. After that it will stop profiling.
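Sending the signal from a shell might look like this, with 12345 standing in for the worker's actual PID:
kill -USR1 12345   # 12345 = PID of the worker process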
I'm using CakePHP 2.3.8 and I'm using the CakePHP Queue Plugin.
I'm inserting data in a queue to be processed by a shell. I can process the queue (which runs a shell), and it runs properly. However, when I process the queue the page hangs until the queue is done processing. The queue can take a while to process because it makes external requests, and if there are failures I have some retries and waiting periods. Basically, I need to find a way to process the queue in the background so the user doesn't have to wait for it.
This is my code right now
App::uses('Queue', 'Queue.Lib');
//queue up the items!
foreach($queue_items as $item){
$task_id = Queue::add("ExternalSource ".$item, 'shell');
$this->ExternalSource->id = $item;
$this->ExternalSource->saveField('queue_task_id', $task_id['QueueTask']['id']);
}
//now process the queue
Queue::process();
echo "Done!";
I've also tried calling the shell directly but the same thing happens. I have to wait until it's done being processed.
Is there any way to call the shell so that the user doesn't have to wait for it to finish being processed? I'd prefer not to do it with a cronjob checking frequently.
Edit
I've also tried using exec but it doesn't seem to be working:
$exec = exec('/Applications/XAMPP/xamppfiles/htdocs/app/Console/cake InitiateQueueProcess');
var_dump($exec);
The dump shows string '' (length=0). When I change exec to exec('pwd'); it shows the directory. I can use that exact path to call the shell from the terminal, so I know it's correct. I also changed the permissions but still nothing happens. The shell doesn't run.
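One hedged workaround that also keeps the page from hanging is to detach the shell from the web request and redirect its output to a log so failures become visible (the path is the one from the question; the log location is an assumption):
// Run the CakePHP shell detached from the web request so the queue is
// processed in the background; capture output for debugging.
exec('/Applications/XAMPP/xamppfiles/htdocs/app/Console/cake InitiateQueueProcess'
   . ' >> /tmp/queue_process.log 2>&1 &');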
I have a PHP script that runs on a shared hosting environment server. This PHP script takes a long time to run. It may take 20 to 30 minutes to finish a run. It is a recurring background process. However I do not have control over when the process starts (it could be triggered every five minutes, or every three hours, no one knows).
Anyway, at the beginning of this script I would like to detect whether the previous process is still running. If the earlier run has not finished, then I would not run the script again. If it is not running, then I run the new process.
In other words, here is a pseudo code. Let's call the script abc.php
1. Start script abc.php
2. Check if an older version of abc.php is still running. If it is running, then terminate
3. If it is not running, then continue with abc.php and do your work which might take 30 minutes or more
How can I do that? Please keep in mind this is shared hosting.
UPDATE: I was thinking of using a DB detection mechanism. So, when the script starts, it would set a value in a DB as 'STARTED=TRUE'; when done, it would set 'STARTED=FALSE'. However this solution is not proper, because there is no guarantee that the script will terminate properly. It might get interrupted, and therefore may not update the STARTED value to FALSE. So the DB solution is out of the question. It has to be process detection of some sort, or maybe a different solution that I did not think of. Thanks.
If this is a CGI process, I would try using exec + ps, if the latter is available in your environment. A quick SO search turns up this answer: https://stackoverflow.com/a/7182595/177920
You'll need to have a script that is responsible for (and separate from) checking whether your target script is running, of course; otherwise you'll always see that your target script is running, based on the order of operations in your pseudo code.
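A rough sketch of that check, assuming ps and grep are available on the shared host (abc.php is the script from the question; checker.php and its path are assumptions):
<?php
// checker.php - hypothetical wrapper: only start abc.php when no other
// instance shows up in the process table.
exec("ps aux | grep '[a]bc.php'", $output);   // [a] trick: grep won't match itself
if (count($output) > 0) {
    exit("abc.php is already running\n");
}
exec('php /path/to/abc.php > /dev/null 2>&1 &');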
You can implement a simple locking mechanism: create a tmp lock file when the script starts and check beforehand whether the lock file already exists. If it does, don't run the script; if it doesn't, create the lock file and run the script. At the end of a successful run, delete the lock file so that it will run properly next time.
if(!locked()) {
lock();
// your code here
unlock();
} else {
echo "script already running";
}
function lock() { file_put_contents("write.lock", 'running'); }
function locked() { return file_exists("write.lock"); }
function unlock() { return unlink("write.lock"); }
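The one weakness of a plain lock file, as the question's UPDATE points out, is that a crashed run leaves the file behind. A hedged alternative that still works on shared hosting is flock(), because the operating system releases the lock automatically when the process dies (the lock file name is an assumption):
<?php
// abc.php - guard the whole run with an advisory lock.
$fp = fopen(__DIR__ . '/abc.lock', 'c');
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    // Another instance holds the lock, so a previous run is still going.
    exit("Previous run still in progress\n");
}

// ... the 20-30 minute job goes here ...

// The lock is released automatically when the script ends or the process is killed.
flock($fp, LOCK_UN);
fclose($fp);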