I am using a PHP page to back up my database and it works perfectly.
My problem (my future problem, I'd better say) is that the cron job on my hosting only lets a function run if a response is returned within 5 seconds; otherwise it fails.
My DB is quite small so far, and I am not sure how long the backup takes, but surely less than 5 seconds since it is working.
But when my DB grows, I reckon issues will start.
How can I keep my function running beyond 5 seconds?
Would an async function solve this?
Hm, do you have shell access?
I made my index.php and the master class it calls detect CLI mode, like so:
if (php_sapi_name() === 'fpm-fcgi') {
    define('WEB_MODE', true);
    define('CLI_MODE', false);
} else if (php_sapi_name() === 'cli') {
    define('WEB_MODE', false);
    define('CLI_MODE', true);
} else {
    define('WEB_MODE', true);
    define('CLI_MODE', false);
}
Then, in my script that is executed by the HTTP request, I do it like so:
$this->asyncShellExecution('/usr/bin/php ' . ROOT_PATH . '/php/index.php -m houseKeeping');

private function asyncShellExecution(string $str)
{
    // Redirect all output and background the command so exec()
    // returns immediately instead of blocking the HTTP request.
    exec($str . " > /dev/null 2>/dev/null &");
}
In CLI mode I call new asyncHouseKeeping():
class asyncHouseKeeping
{
    function __construct()
    {
        $args = getopt("m:t:i:o:l:s:f:");
        if ($args['m'] == 'houseKeeping') {
            $this->doHouseKeeping();
        } else {
            exit;
        }
    }
}
That way I get the CPU- and time-critical image optimization process (run after images are uploaded via XHR) out of the blocking code, and it does its work afterwards.
Setting:
I have a WordPress site but disabled wp_cron to have full control over cron:
define('DISABLE_WP_CRON', true);
In crontab -e, I have the following cron job:
*/2 * * * * /usr/bin/php /var/www/cron/mycron.php init >> /var/log/error.log 2>&1
mycron.php has a simple function:
if (!empty($argv[1])) {
    switch ($argv[1]) {
        case 'init':
            cron_test();
            break;
    }
}
function cron_test() {
    $time = date(DATE_RFC822, time());
    write_log("Start:" . $time); // outputs debug to my own log file
}
function write_log($log) {
    if (true === WP_DEBUG) {
        if (is_array($log) || is_object($log)) {
            error_log(print_r($log, true));
        } else {
            error_log($log);
        }
    }
}
Note that I included mycron.php in functions.php for WP:
require_once('parts/mycron.php');
Error log:
In my error.log for the cron, I have the following error:
PHP Warning: Use of undefined constant WP_DEBUG - assumed 'WP_DEBUG'
So my best guess is that there is some sort of disconnect between cron and WP.
What I am trying to do:
mycron.php will use many WordPress functions that I need. How do I make the cron script recognize WordPress functions and constants such as WP_DEBUG?
Any help will be much appreciated.
Thanks!
You need to load the WordPress core manually to use its functions in a custom script:
require_once("../../../../wp-load.php");
This is also answered in depth here:
How to include Wordpress functions in custom .php file?
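Because a cron script runs outside the web context, a relative path like the one above can break. A minimal sketch, assuming a WordPress install at /var/www/html (the path is an assumption; adjust it to your setup):

<?php
// mycron.php, run from cron. The absolute install path below is an assumption.
require_once '/var/www/html/wp-load.php'; // boots WordPress: constants and functions

// WP_DEBUG and the rest of WordPress are now defined:
if (defined('WP_DEBUG') && WP_DEBUG) {
    error_log('cron bootstrap OK: ' . date(DATE_RFC822));
}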
I was trying to use the Server-Sent Events mechanism in my project. (This is like long polling on steroids.)
The example under the "Sending events from the server" subtitle works beautifully: a few seconds after disconnection, the Apache process is killed. That method works fine.
BUT! If I try to use RabbitMQ, Apache doesn't kill the process after the browser disconnects from the server (es.close()). The process stays alive as-is and gets killed only when the Docker container restarts.
connection_aborted and connection_status don't work at all: connection_aborted returns only 0, and connection_status returns CONNECTION_NORMAL even after disconnect. This happens only when I use RabbitMQ; without RMQ these functions work well.
ignore_user_abort(false) doesn't work either.
Code example:
<?php

use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AbstractConnection;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use PhpAmqpLib\Message\AMQPMessage;

class RequestsRabbit
{
    protected $rabbit;

    /** @var AMQPChannel */
    protected $channel;

    public $exchange = 'requests.events';

    public function __construct(AbstractConnection $rabbit)
    {
        $this->rabbit = $rabbit;
    }

    public function getChannel()
    {
        if ($this->channel === null) {
            $channel = $this->rabbit->channel();
            $channel->exchange_declare($this->exchange, 'fanout', false, false, false);
            $this->channel = $channel;
        }
        return $this->channel;
    }

    public function send($message)
    {
        $channel = $this->getChannel();
        $message = json_encode($message);
        $channel->basic_publish(new AMQPMessage($message), $this->exchange);
    }

    public function subscribe(callable $callable)
    {
        $channel = $this->getChannel();
        list($queue_name) = $channel->queue_declare('', false, false, true, false);
        $channel->queue_bind($queue_name, $this->exchange);

        $callback = function (AMQPMessage $msg) use ($callable) {
            call_user_func($callable, json_decode($msg->body));
        };
        $channel->basic_consume($queue_name, '', false, true, false, false, $callback);

        while (count($channel->callbacks)) {
            if (connection_aborted()) {
                break;
            }
            try {
                $channel->wait(null, true, 5);
            } catch (AMQPTimeoutException $exception) {
            }
        }
        $channel->close();
        $this->rabbit->close();
    }
}
What happens:
Browser establishes an SSE connection to the server: var es = new EventSource(url);
Apache2 spawns a new process to handle this request.
PHP declares a new queue and connects to it.
Browser closes the connection: es.close()
Apache2 doesn't kill the process and it stays as-is. The RabbitMQ queue will not be deleted. If I do some reconnections, it spawns a bunch of processes and a bunch of queues (1 reconnection = 1 process = 1 queue).
I close all tabs: the processes stay alive. I close the browser: same situation.
Looks like some kind of PHP bug. Or an Apache2 one?
What I use:
Latest Docker and docker-compose
php:7.1.12-apache or php:5.6-apache image (this happens on both versions of PHP)
Please, help me to figure out what's going on...
P.S. Sorry for my English. If you can find a mistake or typo, point to it in the comments. I'll be very grateful :)
You don't say if you're using send() or subscribe() (or both) during your server-sent events. Assuming you're using subscribe(), there is no bug. This loop:
while (count($channel->callbacks)) {
    if (connection_aborted()) {
        break;
    }
    try {
        $channel->wait(null, true, 5);
    } catch (AMQPTimeoutException $exception) {
    }
}
will run until the process is killed or the connection is closed remotely by RabbitMQ. This is normal when listening for queued messages. If you need to stop the loop at some point, you can set a variable to check in the loop, or throw an exception when the SSE stream is ended (although I find this awkward).
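A hedged follow-up sketch: PHP only notices a closed connection when it actually tries to send output, so emitting a periodic SSE heartbeat (a comment line starting with ':', which clients ignore) before checking connection_aborted() tends to make the check reliable again:

while (count($channel->callbacks)) {
    // The failed write after a disconnect is what makes
    // connection_aborted() start returning 1.
    echo ": heartbeat\n\n";
    @ob_flush();
    @flush();
    if (connection_aborted()) {
        break;
    }
    try {
        $channel->wait(null, true, 5);
    } catch (AMQPTimeoutException $exception) {
        // expected: no message within 5 s, loop again
    }
}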
I am not very familiar with JavaScript and not sure how to handle alerts in a PHP script when using PhantomJS.
This is my code:
$this->clickcontrol(Constants::LINK, 'delete', false);
$this->acceptAlert();
So how should I change this to handle alerts in PhantomJS?
This looks like PHPUnit's PHPUnit_Extensions_Selenium2TestCase.
When faced with this, I created the following function (I put it into a common base test case class myself, but it can also live in your test class):
protected function waitForAlert($expectedText, $timeout = 10000)
{
    $this->waitUntil(
        function () use ($expectedText) {
            if ($this->alertText() == $expectedText) {
                return true;
            }
        },
        $timeout
    );
    $this->acceptAlert();
}
Then in the test itself you can use it as such:
$this->waitForAlert('You need a complete profile');
If there is no alert, it will fail after the set timeout.
Hope this helps ;)
Is there a way you can abort a block of code if it's taking too long in PHP? Perhaps something like:
// Set the max time to 2 seconds
$time = new TimeOut(2);
$time->startTime();
sleep(3);
$time->endTime();
if ($time->timeExpired()) {
    echo 'This function took too long to execute and was aborted.';
}
It doesn't have to be exactly like above, but are there any native PHP functions or classes that do something like this?
Edit: Ben Lee's answer with pcntl_fork would be the perfect solution, except that it's not available on Windows. Is there any other way to accomplish this with PHP that works on both Windows and Linux but doesn't require an external library?
Edit 2: XzKto's solution works in some cases, but not consistently, and I can't seem to catch the exception no matter what I try. The use case is detecting a timeout for a unit test: if the test times out, I want to terminate it and then move on to the next test.
You can do this by forking the process and then using the parent process to monitor the child process. pcntl_fork is the function that forks the process, so you have two nearly identical programs in memory running in parallel. The only difference is that in one process, the parent, pcntl_fork returns a positive integer corresponding to the process id of the child process; in the other process, the child, pcntl_fork returns 0.
Here's an example:
$pid = pcntl_fork();
if ($pid == 0) {
    // this is the child process
} else {
    // this is the parent process, and we know the child process id is in $pid
}
That's the basic structure. Next step is to add a process expiration. Your stuff will run in the child process, and the parent process will be responsible only for monitoring and timing the child process. But in order for one process (the parent) to kill another (the child), there needs to be a signal. Signals are how processes communicate, and the signal that means "you should end immediately" is SIGKILL. You can send this signal using posix_kill. So the parent should just wait 2 seconds then kill the child, like so:
$pid = pcntl_fork();
if ($pid == 0) {
    // this is the child process
    // run your potentially time-consuming method
} else {
    // this is the parent process, and we know the child process id is in $pid
    sleep(2);                  // wait 2 seconds
    posix_kill($pid, SIGKILL); // then kill the child
}
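A refinement, sketched under the same pcntl assumptions: polling pcntl_waitpid() with WNOHANG lets the parent stop as soon as the child finishes on its own, instead of always sleeping the full two seconds:

$pid = pcntl_fork();
if ($pid == 0) {
    // child: run the potentially time-consuming work, then exit
    exit(0);
} else {
    // parent: poll for up to 2 seconds instead of sleeping blindly
    $deadline = time() + 2;
    do {
        $res = pcntl_waitpid($pid, $status, WNOHANG); // 0 = still running
        if ($res == $pid) {
            break; // child finished by itself
        }
        usleep(100000); // check ten times per second
    } while (time() < $deadline);
    if ($res != $pid) {
        posix_kill($pid, SIGKILL);    // timed out: kill the child
        pcntl_waitpid($pid, $status); // reap the zombie
    }
}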
You can't really do that if your script pauses on one command (for example sleep()), short of forking, but there are a lot of workarounds for special cases: asynchronous queries if your program pauses on a DB query, proc_open if it pauses on some external execution, etc. Unfortunately they are all different, so there is no general solution.
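For the proc_open case, a minimal sketch of the idea (the command and timings here are placeholders):

// Run the blocking external command in a child process and give up after 2 s.
$proc  = proc_open('sleep 10', [1 => ['pipe', 'w']], $pipes);
$start = microtime(true);
while (proc_get_status($proc)['running']) {
    if (microtime(true) - $start > 2) {
        proc_terminate($proc); // timeout reached: kill the external command
        echo "Timeouted!";
        break;
    }
    usleep(100000); // poll ten times per second
}
proc_close($proc);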
If your script waits on a long loop/many lines of code, you can do a dirty trick like this:
declare(ticks=1);

class Timouter {

    private static $start_time = false,
                   $timeout;

    public static function start($timeout) {
        self::$start_time = microtime(true);
        self::$timeout    = (float) $timeout;
        register_tick_function(array('Timouter', 'tick'));
    }

    public static function end() {
        unregister_tick_function(array('Timouter', 'tick'));
    }

    public static function tick() {
        if ((microtime(true) - self::$start_time) > self::$timeout) {
            throw new Exception;
        }
    }
}

//Main code
try {
    //Start timeout
    Timouter::start(3);

    //Some long code to execute that you want to set timeout for.
    while (1);
} catch (Exception $e) {
    Timouter::end();
    echo "Timeouted!";
}
but I don't think it is very good. If you specify the exact case, I think we can help you better.
This is an old question and has probably been solved many times by now, but for people looking for an easy way to solve this problem, there is a library now: PHP Invoker.
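A usage sketch from memory, worth checking against the library's README (it relies on the pcntl extension, so it is Unix-only, and the signature shown is an assumption):

use SebastianBergmann\Invoker\Invoker;
use SebastianBergmann\Invoker\TimeoutException;

$invoker = new Invoker;
try {
    // invoke(callable, arguments, timeout in seconds) -- assumed signature
    $invoker->invoke(function () { sleep(3); }, [], 2);
} catch (TimeoutException $e) {
    echo 'This function took too long to execute and was aborted.';
}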
You can use the declare construct to abort if the execution time exceeds the limit. http://www.php.net/manual/en/control-structures.declare.php
Here is a code example of how to use it:
define("MAX_EXECUTION_TIME", 2); # seconds
$timeline = time() + MAX_EXECUTION_TIME;
function check_timeout()
{
if( time() < $GLOBALS['timeline'] ) return;
# timeout reached:
print "Timeout!".PHP_EOL;
exit;
}
register_tick_function("check_timeout");
$data = "";
declare( ticks=1 ){
# here the process that might require long execution time
sleep(5); // Comment this line to see this data text
$data = "Long process result".PHP_EOL;
}
# Ok, process completed, output the result:
print $data;
With this code you will see the timeout message.
If you want to get the long process result inside the declare block, you can just remove the sleep(5) line or increase the max execution time declared at the start of the script.
What about set_time_limit(), if you are not in safe mode?
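A caveat with that approach, shown as a small sketch: on Linux, set_time_limit() counts only the time PHP itself spends executing, so blocking calls slip through the limit:

set_time_limit(2); // fatal "Maximum execution time exceeded" after 2 s of PHP execution
sleep(3);          // NOT aborted on Linux: time spent in sleep(), DB queries,
                   // and system() calls does not count toward the limit
echo "still alive";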
Cooked this up in about two minutes; I forgot to call $time->startTime(), so I don't really know exactly how long it took ;)
class TimeOut {

    private $limit;
    private $old;
    private $new;

    public function __construct($time = 0)
    {
        $this->limit = $time;
    }

    public function startTime()
    {
        $this->old = microtime(true);
    }

    public function checkTime()
    {
        $this->new = microtime(true);
    }

    public function timeExpired()
    {
        $this->checkTime();
        return ($this->new - $this->old > $this->limit);
    }
}
And a short demo, mirroring the question's example:
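$time = new TimeOut(2); // limit: 2 seconds
$time->startTime();
sleep(3);               // the "work"
if ($time->timeExpired()) {
    echo 'This function took too long to execute and was aborted.';
}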
I don't really get what your endTime() call does, so I made checkTime() instead, which also serves no real purpose but to update the internal values. timeExpired() calls it automatically because it would sure stink if you forgot to call checkTime() and it was using the old times.
You can also move the pausing code into a second script executed via a cURL call with a timeout set. The other obvious solution is to fix the cause of the pause.
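A minimal sketch of that cURL approach; slow.php is a hypothetical second script containing the pausing code:

$ch = curl_init('http://localhost/slow.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 2); // give up after 2 seconds
$result = curl_exec($ch);
if ($result === false && curl_errno($ch) === CURLE_OPERATION_TIMEOUTED) { // error 28
    echo 'The block took too long and was aborted.';
}
curl_close($ch);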
Here is my way to do that, thanks to the other answers:
<?php

class Timeouter
{
    private static $start_time = FALSE, $timeout;

    /**
     * @param integer $seconds   Time in seconds
     * @param null    $error_msg
     */
    public static function limit($seconds, $error_msg = NULL): void
    {
        self::$start_time = microtime(TRUE);
        self::$timeout    = (float) $seconds;
        register_tick_function([ self::class, 'tick' ], $error_msg);
    }

    public static function end(): void
    {
        unregister_tick_function([ self::class, 'tick' ]);
    }

    public static function tick($error): void
    {
        if ((microtime(TRUE) - self::$start_time) > self::$timeout) {
            throw new \RuntimeException($error ?? 'Your code took too much time.');
        }
    }

    public static function step(): void
    {
        usleep(1);
    }
}
Then you can use it like this:
<?php

try {
    //Start timeout
    Timeouter::limit(2, 'Your code is heavy. Sorry.');

    //Some long code to execute that you want to set a timeout for.
    declare(ticks = 1) {
        foreach (range(1, 100000) as $x) {
            Timeouter::step(); // Not always necessary
            echo $x . "-";
        }
    }
    Timeouter::end();
} catch (Exception $e) {
    Timeouter::end();
    echo $e->getMessage(); // 'Your code is heavy. Sorry.'
}
I made a script in PHP using pcntl_fork and a lock file to control the execution of external calls, killing the children after the timeout:
#!/usr/bin/env php
<?php
if (count($argv) < 4) {
    print "\n\n\n";
    print "./fork.php PATH \"COMMAND\" TIMEOUT\n"; // TIMEOUT IN SECS
    print "Example:\n";
    print "./fork.php /root/ \"php run.php\" 20";
    print "\n\n\n";
    die;
}

$PATH     = $argv[1];
$LOCKFILE = $argv[1] . $argv[2] . ".lock";
$TIMEOUT  = (int) $argv[3];
$RUN      = $argv[2];

chdir($PATH);

$fp = fopen($LOCKFILE, "w");
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    print "Already Running\n";
    exit();
}

$tasks = [
    "kill",
    "run",
];

function killChilds($pid, $signal) {
    exec("ps -ef| awk '\$3 == '$pid' { print \$2 }'", $output, $ret);
    if ($ret) return 'you need ps, grep, and awk';
    foreach ($output as $t) {
        if ($t != $pid && $t != posix_getpid()) {
            posix_kill($t, $signal);
        }
    }
}

$pidmaster = getmypid();
print "Add PID: " . (string) $pidmaster . " MASTER\n";

foreach ($tasks as $task) {
    $pid      = pcntl_fork();
    $pidslave = posix_getpid();
    if ($pidslave != $pidmaster) {
        print "Add PID: " . (string) $pidslave . " " . strtoupper($task) . "\n";
    }
    if ($pid == -1) {
        exit("Error forking...\n");
    } else if ($pid == 0) {
        execute_task($task);
        exit();
    }
}

while (pcntl_waitpid(0, $status) != -1);
echo "Do stuff after all parallel execution is complete.\n";
unlink($LOCKFILE);

function execute_task($task_id) {
    global $pidmaster;
    global $TIMEOUT;
    global $RUN;

    if ($task_id == 'kill') {
        print("SET TIMEOUT = " . (string) $TIMEOUT . "\n");
        sleep($TIMEOUT);
        print("FINISHED BY TIMEOUT: " . (string) $TIMEOUT . "\n");
        killChilds($pidmaster, SIGTERM);
        die;
    } elseif ($task_id == 'run') {
        ###############################################
        ### START EXECUTION CODE OR EXTERNAL SCRIPT ###
        ###############################################
        system($RUN);
        ################################
        ###           END            ###
        ################################
        killChilds($pidmaster, SIGTERM);
        die;
    }
}
Test script, run.php:
<?php
$i = 0;
while ($i < 25) {
    print "test... $i\n";
    $i++;
    sleep(1);
}
I have successfully connected Gearman to an existing PHP project, using supervisord to ensure that the workers are running, and it has produced pretty good results!
I have a critical issue, however: the setCompleteCallback is not firing at all.
Split up somewhat like this:
Client
$client = new GearmanClient();
$client->addServer();
$client->setCompleteCallback(
    array('LDPE_Service_AWSConnect_Transfer_Target', 'transferComplete'));

// push core to S3 bucket
$target = new LDPE_Service_AWSConnect_Transfer_Target($transaction->id,
    "/usr/local/include/LDP/", LDPE_Service_S3::BUCKET_CORE);

// push S3 bucket to instances
foreach ($aws_target_list as $dns) {
    $target->addChildRequest(
        new LDPE_Service_AWSConnect_Transfer_Target($transaction->id,
            null, LDPE_Service_S3::BUCKET_CORE, $dns)
    );
}

$client->addTaskBackground('transferStart', serialize($target));
$client->runTasks();
Worker
(basically bootstraps a Zend Framework environment, and loads the exec functions)
include 'bootstrap.php';
ini_set('memory_limit', -1);

$worker = new GearmanWorker();
$worker->addServer();
$worker->addFunction('transferStart', array(
    'LDPE_Service_AWSConnect_Transfer_Target', 'transferStart'));

while ($worker->work()) {
    switch ($worker->returnCode()) {
        case GEARMAN_SUCCESS:
            break;
        default:
            echo "ERROR RET: " . $worker->returnCode() . "\n";
            exit;
    }
}
Finally, here's the LDPE_Service_AWSConnect_Transfer_Target class that contains all of the heavy lifting. I've pruned out all of the logic; the complete callback doesn't fire at all.
Implementation Methods
class LDPE_Service_AWSConnect_Transfer_Target {

    public static function transferStart(GearmanJob $job)
    {
        $workload = $job->workload();
        $target   = unserialize($workload);

        echo "transferStart/begin [ " .
            $target->getShortRepresentation() . " ]\n";

        // perform a series of actions

        echo "transferStart/complete [ " .
            $target->getShortRepresentation() . " ]\n";

        return serialize($target);
    }

    public static function transferComplete(GearmanTask $task)
    {
        echo "transferComplete/begin\n";
        $workload      = $task->data();
        $parent_target = unserialize($workload);
        echo "transferComplete/complete\n";
    }
}
To be clear, then: the "transferStart/begin" and "transferStart/complete" strings are correctly printed to the logs; however, "transferComplete/begin" is never fired. What's going on?
Thanks!
Alex
It seems as though the callbacks don't fire when tasks are run in background mode.
Try setting the callback after your call to the process function:
$client->addTaskBackground('my_task', 'payload');
$client->setCompleteCallback('complete');
$client->runTasks();
I had tried that; it really boiled down to having the client run as a Gearman task itself. The client was being invoked as part of a browser-invoked page, and the callback wasn't being honored in that context. The solution was to move the client that schedules the callbacks into a Gearman-run method: I added a "scheduleXXXX" function to the worker, which pretty much called the flow above. This function received the "normal" function's input, serialized.
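A rough sketch of that arrangement, with scheduleTransfer as a hypothetical job name standing in for the asker's scheduleXXXX (the Gearman calls are the standard extension ones, but the wiring here is an assumption, not the asker's exact code):

// Worker side: the browser-facing page now only queues a job; this
// long-running CLI worker executes the client flow, so the complete
// callback has a persistent process to fire in.
$worker->addFunction('scheduleTransfer', function (GearmanJob $job) {
    $client = new GearmanClient();
    $client->addServer();
    $client->setCompleteCallback(
        array('LDPE_Service_AWSConnect_Transfer_Target', 'transferComplete'));
    // foreground task: runTasks() blocks until it finishes,
    // giving the complete callback a chance to run
    $client->addTask('transferStart', $job->workload());
    $client->runTasks();
});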