Use Ev and RoadRunner together? - php

app_example.php
I'm trying to combine an example RoadRunner app with Ev.
The idea is to use Ev to update a global variable, which is then used in the HTTP response.
Ev::run(Ev::RUN_NOWAIT);
does not seem to have any effect.
Ev::run();
works, but then Ev finishes before the HTTP request is handled.
I would like Ev to keep executing periodically while HTTP requests are being handled,
at the same time.
<?php

use Spiral\RoadRunner;
use Nyholm\Psr7;

include "vendor/autoload.php";

$worker = RoadRunner\Worker::create();
$psrFactory = new Psr7\Factory\Psr17Factory();
$psr7 = new RoadRunner\Http\PSR7Worker($worker, $psrFactory, $psrFactory, $psrFactory);

$global_variable = 0;

$w = new EvTimer(2, 1, function ($w) {
    global $global_variable;
    $global_variable++;
    echo "is called every second, is launched after 2 seconds\n";
    echo "iteration = ", Ev::iteration(), PHP_EOL;
    // Stop the watcher after 5 iterations
    Ev::iteration() == 5 and $w->stop();
    // Stop the watcher if further calls cause more than 10 iterations
    Ev::iteration() >= 10 and $w->stop();
});

Ev::run(Ev::RUN_NOWAIT);
# Ev::run();

while (true) {
    try {
        $request = $psr7->waitRequest();
        if (!($request instanceof \Psr\Http\Message\ServerRequestInterface)) { // Termination request received
            break;
        }
    } catch (Exception $ex) {
        $psr7->respond(new Psr7\Response(400)); // Bad Request
        continue;
    }

    try {
        // Application code logic
        $psr7->respond(new Psr7\Response(200, [], 'Hello RoadRunner!' . $global_variable));
    } catch (Exception $ex) {
        $psr7->respond(new Psr7\Response(500, [], 'Something Went Wrong!'));
    }
}
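One pattern that is sometimes suggested (not from the original post, and untested here) is to pump the Ev loop non-blockingly from inside the worker loop, so any timer callbacks that are due get a chance to run between requests. A minimal sketch, assuming the same setup as above:

// Sketch: pump Ev between requests instead of once before the loop.
// Assumes the same $psr7 worker and $global_variable as above.
while (true) {
    // Run any Ev callbacks that are due, without blocking.
    Ev::run(Ev::RUN_NOWAIT);

    try {
        $request = $psr7->waitRequest();
        if (!($request instanceof \Psr\Http\Message\ServerRequestInterface)) {
            break;
        }
    } catch (Exception $ex) {
        $psr7->respond(new Psr7\Response(400));
        continue;
    }

    // Run due callbacks again right before building the response,
    // so $global_variable is as fresh as possible.
    Ev::run(Ev::RUN_NOWAIT);

    try {
        $psr7->respond(new Psr7\Response(200, [], 'Hello RoadRunner!' . $global_variable));
    } catch (Exception $ex) {
        $psr7->respond(new Psr7\Response(500, [], 'Something Went Wrong!'));
    }
}

Note that waitRequest() blocks until a request arrives, so with this approach the timer callbacks only fire around requests; for timers that truly run concurrently with request handling, a loop-driven HTTP server (like the ReactPHP example below) or a separate process is needed.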

<?php
use React\EventLoop\Loop;

require __DIR__ . '/vendor/autoload.php';

$http = new React\Http\HttpServer(function (Psr\Http\Message\ServerRequestInterface $request) {
    return React\Http\Message\Response::plaintext(
        "Hello World!\n"
    );
});

$socket = new React\Socket\SocketServer('127.0.0.1:8080');
$http->listen($socket);

echo "Server running at http://127.0.0.1:8080" . PHP_EOL;

Loop::addPeriodicTimer(5, function () {
    $memory = memory_get_usage() / 1024;
    $formatted = number_format($memory, 3) . 'K';
    echo "Current memory usage: {$formatted}\n";
    # here is my own little logic to get data from db
    # to update global variables
});
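Because the ReactPHP handler and the periodic timer run inside the same event loop, they can share state through a by-reference closure capture instead of a global. A minimal sketch ($counter is a made-up name standing in for the data loaded from the db):

<?php
use React\EventLoop\Loop;

require __DIR__ . '/vendor/autoload.php';

$counter = 0; // hypothetical shared value, updated by the timer below

$http = new React\Http\HttpServer(function (Psr\Http\Message\ServerRequestInterface $request) use (&$counter) {
    // The handler sees whatever value the timer last wrote.
    return React\Http\Message\Response::plaintext("Hello World! counter={$counter}\n");
});

$socket = new React\Socket\SocketServer('127.0.0.1:8080');
$http->listen($socket);

Loop::addPeriodicTimer(5, function () use (&$counter) {
    // In the real script this would pull fresh data from the database.
    $counter++;
});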

Related

Server-side PHP event page not loading when using while loop

I have a file named handler.php which reads data from a text file and pushes it to a client page.
Relevant client code:
<script>
if (typeof(EventSource) !== "undefined") {
    var source = new EventSource("handler.php");
    source.onmessage = function(event) {
        var textarea = document.getElementById("subtitles");
        textarea.value += event.data;
        textarea.scrollTop = textarea.scrollHeight;
    };
} else {
    document.getElementById("subtitles").value = "Server-sent events not supported.";
}
</script>
Handler.php code:
$id = 0;
$event = 'event1';
$oldValue = null;

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('X-Accel-Buffering: no');

while(true){
    try {
        $data = file_get_contents('liveData.txt');
    } catch(Exception $e) {
        $data = $e->getMessage();
    }
    if ($oldValue !== $data) {
        $oldValue = $data;
        echo 'id: ' . $id++ . PHP_EOL;
        echo 'event: ' . $event . PHP_EOL;
        echo 'retry: 2000' . PHP_EOL;
        echo 'data: ' . json_encode($data) . PHP_EOL;
        echo PHP_EOL;
        @ob_flush();
        @flush();
        sleep(1);
    }
}
When using the loop, handler.php is never loaded so the client doesn't get sent any data. In the Chrome developer network tab, handler.php is shown as "Pending" and then "Cancelled". The file itself stays locked for around 30 seconds.
However, if I remove the while loop (as shown below), handler.php is loaded and the client does receive data (only once, even though the liveData.txt file is constantly updated).
Handler.php without loop:
$id = 0;
$event = 'event1';
$oldValue = null;

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('X-Accel-Buffering: no');

try {
    $data = file_get_contents('liveData.txt');
} catch(Exception $e) {
    $data = $e->getMessage();
}

if ($oldValue !== $data) {
    $oldValue = $data;
    echo 'id: ' . $id++ . PHP_EOL;
    echo 'event: ' . $event . PHP_EOL;
    echo 'retry: 2000' . PHP_EOL;
    echo 'data: ' . json_encode($data) . PHP_EOL;
    echo PHP_EOL;
    @ob_flush();
    @flush();
}
I'm using SSE as I only need one-way communication (so websockets are probably overkill) and I really don't want to use polling. If I can't sort this out, I may have to.
The client side of the SSE connection looks OK as far as I can tell - though I moved the var textarea..... outside of the onmessage handler.
UPDATE: I should have looked closer but the event to monitor is event1 so we need to set an event listener for that event.
<script>
if (typeof(EventSource) !== "undefined") {
    var url = 'handler.php';
    var source = new EventSource(url);
    var textarea = document.getElementById("subtitles");
    source.addEventListener('event1', function(e){
        textarea.value += e.data;
        textarea.scrollTop = textarea.scrollHeight;
        console.info(e.data);
    }, false);
} else {
    document.getElementById("subtitles").value = "Server-sent events not supported.";
}
</script>
As for the SSE server script I tend to employ a method like this
<?php
/* make sure the script does not timeout */
set_time_limit( 0 );
ini_set('auto_detect_line_endings', 1);
ini_set('max_execution_time', '0');

/* start fresh */
ob_end_clean();

/* utility function for sending SSE messages */
function sse( $evtname='sse', $data=null, $retry=1000 ){
    if( !is_null( $data ) ){
        echo "event:".$evtname."\r\n";
        echo "retry:".$retry."\r\n";
        echo "data:" . json_encode( $data, JSON_FORCE_OBJECT | JSON_HEX_QUOT | JSON_HEX_TAG | JSON_HEX_AMP | JSON_HEX_APOS );
        echo "\r\n\r\n";
    }
}

$id = 0;
$event = 'event1';
$oldValue = null;

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('X-Accel-Buffering: no');

while( true ){
    try {
        $data = @file_get_contents( 'liveData.txt' );
    } catch( Exception $e ) {
        $data = $e->getMessage();
    }
    if( $oldValue !== $data ) {
        /* data has changed or first iteration */
        $oldValue = $data;
        /* send the sse message */
        sse( $event, $data );
        /* make sure all buffers are cleansed */
        if( @ob_get_level() > 0 ) for( $i=0; $i < @ob_get_level(); $i++ ) @ob_flush();
        @flush();
    }
    /*
        sleep each iteration regardless of whether the data has changed or not....
    */
    sleep(1);
}

if( @ob_get_level() > 0 ) {
    for( $i=0; $i < @ob_get_level(); $i++ ) @ob_flush();
    @ob_end_clean();
}
?>
When using the loop, handler.php is never loaded so the client doesn't
get sent any data. In the Chrome developer network tab, handler.php is
shown as "Pending" and then "Cancelled". The file itself stays locked
for around 30 seconds.
This is because the webserver (Apache), the browser, or even PHP itself cancels the request when there is no response within 30 seconds.
So I suspect the flushing does not work. Try to actively start and end the buffer without using the @-suppressed functions, so you get a clue when there is an error.
// Start output buffer
ob_start();
// Write content
echo '';
// Flush output buffer
ob_end_flush();
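Applied to the SSE loop from the question, that explicit buffer handling might look roughly like this (a sketch, with no @ suppression so buffer errors become visible, and the sleep moved outside the if):

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('X-Accel-Buffering: no');

$id = 0;
$oldValue = null;

while (true) {
    $data = file_get_contents('liveData.txt');
    if ($oldValue !== $data) {
        $oldValue = $data;
        echo "id: " . $id++ . "\n";
        echo "event: event1\n";
        echo "retry: 2000\n";
        echo "data: " . json_encode($data) . "\n\n";
        // Drain every open output buffer, then push the SAPI buffer to the client.
        while (ob_get_level() > 0) {
            ob_end_flush();
        }
        flush();
    }
    sleep(1); // sleep every iteration, not only when the data changed
}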
I think you have a problem with the way the web works. The PHP code doesn't run in your browser - it just creates something that the web server hands off to the browser over the wire.
Once the page is loaded from the server that's it. You will need to implement something that polls for changes.
One way I've done this is to put the page in a loop that refreshes and therefore fetches the page again with the new data every second or so (but this could seriously overload your server if there's a lot of folks on that page).
The only other solution is to use push technology and a javascript framework that can take the push and repopulate the relevant parts of the page, or a javascript loop on a timer that pulls the data.
(Posted solution on behalf of the question author).
Success! While debugging for the nth time, I decided to go back to basics and start again. I scrapped the loop and reduced the PHP code to a bare minimum, but kept the client-side code RamRaider provided. And now it all works wonderfully! And by playing around with the retry value, I can specify exactly how often data is pushed.
PHP (server side):
<?php
$id = 0;
$event = 'event1';
$oldValue = null;

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('X-Accel-Buffering: no');

try {
    $data = file_get_contents('liveData.txt');
} catch(Exception $e) {
    $data = $e->getMessage();
}

if ($oldValue !== $data) {
    $oldValue = $data;
    echo 'id: ' . $id++ . PHP_EOL;
    echo 'event: ' . $event . PHP_EOL;
    echo 'retry: 500' . PHP_EOL;
    echo "data: {$data}\n\n";
    echo PHP_EOL;
    @ob_flush();
    @flush();
}
?>
Javascript (client side):
<script>
if (typeof(EventSource) !== "undefined") {
    var url = 'handler.php';
    var source = new EventSource(url);
    var textarea = document.getElementById("subtitles");
    source.addEventListener('event1', function(e){
        textarea.value += e.data;
        textarea.scrollTop = textarea.scrollHeight;
        console.info(e.data);
    }, false);
} else {
    document.getElementById("subtitles").value = "Server-sent events not supported.";
}
</script>

How to use multi-threading with this

I've been reading up on multi-threading with PHP, but I'm having a tough time integrating it into my command-line PHP script.
I read multithreading
and multithread foreach.
But I'm really not sure. Any thoughts on how to apply multi-threading here? The reason I need multi-threading is that Telnet takes forever (see the shell script). But I can't write to my DB concurrently ($stmt2). I'm looping through my list of devices with $stmt->fetch.
Maybe I should run a task with just the telnet/shell script call in it, like this example:
$task = new class extends Thread {
    private $response;

    public function run()
    {
        $content = file_get_contents("http://google.com");
        preg_match("~<title>(.+)</title>~", $content, $matches);
        $this->response = $matches[1];
    }
};

$task->start() && $task->join();
var_dump($task->response); // string(6) "Google"
But I'm getting this error when I try to add it to my code below:
PHP Parse error: syntax error, unexpected T_CLASS in /opt/IBM/custom/NAC_Dslam/calix_swVerThreaded.php on line 100
This is the line:
$task = new class ...
My script looks like this:
$stmt = $mysqli->prepare("SELECT ip, model FROM TableD WHERE vendor = 'Calix' AND model in ('C7','E7') AND sw_ver IS NULL LIMIT 6000"); //AND ping_reply IS NULL AND software_version IS NULL
$stmt->bind_result($ip, $model); //list of ip's
if(!$stmt->execute())
{
    //err
}

$stmt2 = $mysqli2->prepare("UPDATE TableD SET sw_ver = ?
                            WHERE vendor = 'Calix'
                            AND ip = ? ");
$stmt2->bind_param("ss", $software, $ip);

while($stmt->fetch()) {
    //initializing var's
    if(pingAddress($ip)=="alive") { //Ones that don't ping are dead to us.
        ///////this is the part that takes forever and should be multi-threaded/////
        //Call shell script to telnet to calix dslam and get version for that ip
        if($model == "C7"){
            $task = new class extends Thread {
                private $itsOutput;
                public function run()
                {
                    exec ("./calix_C7_swVer.sh $ip", $itsOutput); //takes forever/telnet
                                                                  //in shell script. Can't
                                                                  //be fixed. Each time I
                                                                  //call this script it's a
                                                                  //different ip
                }
            };
            $task->start() && $task->join();
            var_dump($task->itsOutput); //should be returned output above //takes forever to telnet
            //$output = $task->itsOutput;
            $output2 = array_reverse($output, true);
            if (!(preg_grep("/DENY/", $output2))){
                $found = preg_grep("/COMPLD/", $output2);
                $ind = key($found);
                $version = explode(",", $output[$ind+1]);
                if(strlen($version[3])>=1) { //if sw ver came back in an acceptable size
                    $software = $version[3];
                    $software = trim($software,'"'); //trim double quote (usually is there)
                    print "sw ver after trim: " . $software . "\n";
                    if(!$stmt2->execute()) { //write sw version to netcool db
                        $tempErr = "Failed to insert into dslam_elements_nac: " . $stmt2->error;
                        printf($tempErr . "\n"); //show mysql execute error if exists
                        $err->logThis($tempErr);
                    }
                    if(!$stmtX->execute()) { //commit it
                        $tempErr = "Failed to commit dslam_elements_nac: " . $stmtX->error;
                        printf($tempErr . "\n"); //show mysql execute error if exists
                        $err->logThis($tempErr);
                    }
                } //we got a version back
                else { //version not retrieved
                    //error processing
                } //didn't get sw ver
            } //not deny
        } //c7
        else if($model == "E7") {
            exec ("./calix_E7_swVer.sh $ip", $output);
            $output2 = array_reverse($output, true);
            if (!(preg_grep("/DENY/", $output2))){
                $found = preg_grep("/yes/", $output2);
                $ind = key($found);
                $version = explode(" ", $output[$ind]);
                if(strlen($version[5])>=1) { //if sw ver came back in an acceptable size
                    $software = $version[5];
                    print "sw ver after trim: " . $software . "\n";
                    if(!$stmt2->execute()) { //write sw version to netcool db
                        $tempErr = "Failed to insert into dslam_elements_nac: " . $stmt2->error;
                        printf($tempErr . "\n"); //show mysql execute error if exists
                        $err->logThis($tempErr);
                    }
                    if(!$stmtX->execute()) { //commit it
                        //err processing
                    }
                } //we got a version back
                else { //version not retrieved
                    //handle it
                } //didn't get sw ver
            } //not deny
        } //e7
    } //if alive
} //while
Update
I'm trying this (pcntl_fork), but it doesn't seem to be quite what I need, because when I sleep(30), which I think is similar to my shell script call, the other processes don't continue and do the next one.
<?php
declare(ticks = 1);

$max = 10;
$child = 0;

$res = array("aabc", "bcd", "cde", "eft", "ggg", "hhh", "iii", "jjj", "kkk", "lll", "mmm", "nnn", "ooo", "ppp", "qqq", "aabc", "bcd", "cde", "eft", "ggg", "hhh", "iii", "jjj", "kkk", "lll", "mmm", "nnn", "ooo", "ppp", "qqq");

function sig_handler($signo) {
    global $child;
    switch ($signo) {
        case SIGCHLD:
            //echo "SIGCHLD received\n";
            // clean up zombies
            $pid = pcntl_waitpid(-1, $status, WNOHANG);
            $child -= 1;
            //exit;
    }
}

pcntl_signal(SIGCHLD, "sig_handler");

//$website_scraper = new scraper();

foreach($res as $r){
    while ($child >= $max) {
        sleep(5); //echo " - sleep $child \n";
        //pcntl_waitpid(0,$status);
    }
    $child++;
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Could not fork:\n");
    }
    elseif ($pid) {
        // we're in the parent fork, dont do anything
    }
    else {
        //example of what a child process could do:
        print "child process stuff \n";
        sleep(30);
        //$website_scraper -> scraper("http://foo.com");
        exit;
    }
    while(pcntl_waitpid(0, $status) != -1) { //////???
        $status = pcntl_wexitstatus($status);
        echo "child $status completed \n";
    }
    print "did stuff \n";
}
?>
I've been reading up on multi-threading with PHP
Don't. PHP threading has very limited utility, as it cannot be used in a web server environment. It can only be used in command-line scripts.
The author of the PHP pthreads extension has written:
pthreads v3 is restricted to operating in CLI only: I have spent many years trying to explain that threads in a web server just don't make sense, after 1,111 commits to pthreads I have realised that, my advice is going unheeded.
So I'm promoting the advice to hard and fast fact: you can't use pthreads safely and sensibly anywhere but CLI.
If you need to communicate with multiple network devices in parallel, consider using stream_select to perform asynchronous I/O, or running multiple PHP processes as part of a worker queue to manage the connections.
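As a rough illustration of the "multiple processes" suggestion (an assumption on my part, not part of the original answer): the slow telnet shell scripts from the question can be started in parallel with proc_open and their output collected afterwards, while the database writes stay serial in the parent. This simple version reads each child's output in turn once the whole chunk has been started, rather than multiplexing with stream_select:

// Sketch: start the slow shell script for a chunk of IPs at once, then collect
// each child's output. All processes in a chunk run concurrently, so the wall
// time is roughly that of the slowest one.
function runSwVerBatch(array $ips, $maxParallel = 10)
{
    $results = array();
    foreach (array_chunk($ips, $maxParallel) as $chunk) {
        $procs = array();
        $pipes = array();
        foreach ($chunk as $ip) {
            $spec = array(
                1 => array('pipe', 'w'),              // capture stdout
                2 => array('file', '/dev/null', 'w'), // discard stderr
            );
            // Same script name as in the question (C7 variant shown here).
            $procs[$ip] = proc_open('./calix_C7_swVer.sh ' . escapeshellarg($ip), $spec, $pipes[$ip]);
        }
        // Read each child's stdout; this blocks per child, but they are all
        // already running, so the chunk finishes in parallel.
        foreach ($chunk as $ip) {
            $results[$ip] = stream_get_contents($pipes[$ip][1]);
            fclose($pipes[$ip][1]);
            proc_close($procs[$ip]);
        }
    }
    return $results; // raw output per IP, to be parsed and written to the DB serially
}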

How to do a catch all in Slim framework?

I'm making a short URL service using Slim PHP to take care of my routing. I can define any route just fine, but I don't know how to react to /<code here> instead of having it take me to the index page of the project.
This is my code:
<?php
require 'vendor/autoload.php';

use ShortUrls\ShortUrls;

error_reporting(E_ALL);
ini_set('display_errors', true);

$app = new \Slim\Slim(array(
    "view" => new \Slim\Views\Smarty()
));

$view = $app->view();
$view->parserDirectory = dirname(__FILE__) . 'vendor/smarty/smarty/libs';
$view->parserCompileDirectory = dirname(__FILE__) . '/compiled';
$view->parserCacheDirectory = dirname(__FILE__) . '/cache';
$view->setTemplatesDirectory(dirname(__FILE__) . '/lib/templates/');

\ShortUrls\Config::init_config();

$app->get('/', function ($hash) {
    try {
    } catch (ResourceNotFoundException $e) {
        echo '404';
    }
    $short = new ShortUrls();
    if ($hash) {
        if ($short_url = $short->get_url_by_hash($hash)) {
            print '<pre>';
            print_r($short_url);
            print '</pre>';
        }
    } else {
        $short->create_short_url("http://www.locovsworld.com");
        // $app->render('layout.tpl', array('test' => 'Hello'));
    }
    global $app;
    print_r($app->request()->params());
    echo 'done';
});

$app->run();
Remember: / == index, and /9082ABC could be a short URL that I have to query from the database and redirect the client to.
I already got the answer; it's the following ...
$app->get('/(:hash)', function ($hash = null) {
    // ...
});
I am sorry to bother you guys :(
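Putting the lookup and the redirect together, a Slim 2 catch-all might look roughly like this (a sketch: get_url_by_hash() and create_short_url() are the question's own methods, and the 'url' array key is an assumption):

$app->get('/(:hash)', function ($hash = null) use ($app) {
    $short = new ShortUrls();
    if ($hash) {
        // Look the hash up and send the visitor to the stored URL.
        if ($short_url = $short->get_url_by_hash($hash)) {
            $app->redirect($short_url['url'], 301); // 'url' key is an assumption
        }
        $app->notFound();
    }
    // No hash given: fall through to the index / creation logic.
    $short->create_short_url("http://www.locovsworld.com");
    echo 'done';
});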

php and soap... possible to catch errors?

The script runs on PHP 4 with the nusoap library:
require_once('nusoap/lib/nusoap.php');
ini_set("soap.wsdl_cache_enabled", "0");

$client = new soapclient("some-url", true);
$err = $client->getError();
if ($err)
{
    header("Location: error-page");
    exit();
}
My question is this: in case an error is detected, is it possible to wait for 1-2 seconds (something like sleep(2);) and then try to re-establish the SOAP connection? And for future reference, how can I get all possible errors and build cases for them? For example, for some errors wait and re-initialize the connection, for other errors log the reason to the DB, and for the rest just redirect to a general error page.
You do know how to program, right? Just drop the code into a loop:
$retries = 3; // how many times to retry the connection
$sleep = 2;   // number of seconds to sleep in-between retries
$i = 1;

while (TRUE) {
    $client = new soapclient("some-url", true);
    if ( ! $client->getError()) {
        break; // break out of the loop on success
    } elseif ($i === $retries) {
        header("Location: error-page");
        exit();
    }
    sleep($sleep);
    ++$i;
}
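As for handling different errors differently: nusoap's getError() only gives back a message string, so about the best you can do is branch on its contents. A rough sketch (the matched keywords and the logError() helper are assumptions, not part of nusoap):

$retries = 3;
$sleep = 2;
$err = null;

for ($i = 1; $i <= $retries; $i++) {
    $client = new soapclient("some-url", true);
    $err = $client->getError();
    if (!$err) {
        break; // connection established
    }
    if (stripos($err, 'wsdl') !== false) {
        // Looks like a bad or unreachable WSDL: log it and give up immediately.
        logError($err); // hypothetical DB logging helper
        header("Location: error-page");
        exit();
    }
    // Anything else is treated as transient (timeouts, socket hiccups): wait and retry.
    sleep($sleep);
}

if ($err) {
    // Still failing after all retries: fall back to the generic error page.
    header("Location: error-page");
    exit();
}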

Gearman with multiple servers and php workers

I'm having a problem with Gearman workers running on multiple servers which I can't seem to solve.
The problem occurs when a worker server is taken offline (rather than the worker process being cancelled), and it causes all other worker processes to error and fail.
Example with just 1 client and 2 workers -
Client:
$client = new GearmanClient ();
$client->addServer ('192.168.1.200');
$client->addServer ('192.168.1.201');
$job = $client->do ('generate_tile', serialize ($arrData));
Worker:
$worker = new GearmanWorker ();
$worker->addServer ('192.168.1.200');
$worker->addServer ('192.168.1.201');
$worker->addFunction ('generate_tile', 'generate_tile');

while (1)
{
    if (!$worker->work ())
    {
        switch ($worker->returnCode ())
        {
            default:
                echo "Error: " . $worker->returnCode () . ': ' . $worker->error () . "\n";
                break;
        }
    }
}

function generate_tile ($job) { ... }
The worker code is being run on 2 separate servers. When every server is up and running both workers execute jobs as expected. When one of the worker processes is cancelled, the other worker executes all jobs as expected.
However, when the server with the cancelled worker process is shutdown and taken completely offline, requests to the client script hang and the remaining worker process does not pick up any jobs.
I get the following set of errors from the remaining worker process:
Error: 46: gearman_con_wait:timeout reached
Error: 46: gearman_con_wait:timeout reached
Error: 4: gearman_con_flush:write:110
Error: 46: gearman_con_wait:timeout reached
Error: 4: gearman_con_flush:write:113
Error: 4: gearman_con_flush:write:113
Error: 4: gearman_con_flush:write:113
....
When I start up the other server, without starting the worker process on it, the remaining worker process immediately jumps into life and executes any remaining jobs.
It seems clear to me that I need some code in the worker process to cope with any servers that may be offline, but I cannot see how to do this.
Many thanks,
Andy
Our tests with multiple gearman servers show that if the last server in the list (192.168.1.201 in your case) is taken down, the workers stop executing in the way you are describing. (Also, the workers grab jobs from the last server; they process jobs on .200 only if there are no jobs on .201.)
It seems that this is a bug with the linked list in the gearman server, which has been reported as fixed multiple times, yet the bug persists in all available versions of gearman. Sorry, I know that's not a solution, but we had the same problem and didn't find one. (If someone can provide a working solution for this problem, I agree to give a large bounty.)
Further to @Darhazer's comment above. We found that as well and solved it like this:
// Gearman workers show a strong preference for servers at the end of a list so randomize the order
$worker = new GearmanWorker();
$s2 = explode(",", Configure::read('workers.servers'));
shuffle($s2);
$servers = implode(",", $s2);
$worker->addServers($servers);
We run 6 to 10 workers at any time, and expire them after they've completed x requests.
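The "expire them after they've completed x requests" part can be done with a plain job counter in the worker loop. A minimal sketch (the limit of 100 is arbitrary), reusing the shuffled $servers list from the snippet above:

$worker = new GearmanWorker();
$worker->addServers($servers); // shuffled server list from above
$worker->addFunction('generate_tile', 'generate_tile');

$jobsDone = 0;
$maxJobs = 100; // arbitrary recycle threshold

while ($jobsDone < $maxJobs) {
    if ($worker->work()) {
        $jobsDone++;
    } elseif ($worker->returnCode() != GEARMAN_SUCCESS) {
        echo "Error: " . $worker->returnCode() . ': ' . $worker->error() . "\n";
    }
}
// Exiting here lets a process supervisor start a fresh worker in its place.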
I use this class, which keeps track of which jobs work on which servers. It hasn't been thoroughly tested; I just wrote it now. I've pasted an edited version, so there might be a typo or some such, but otherwise it appears to solve the issue.
<?php
class MyGearmanClient {
    static $server = "server1,server2,server3";
    static $server_array = false;
    static $workingServers = false;
    static $gmclient = false;
    static $timeout = 5000;
    static $defaultTimeout = 5000;

    static function randomServer() {
        return self::$server_array[rand(0, count(self::$server_array) - 1)];
    }

    static function getServer($job = false) {
        if (self::$server_array == false) {
            self::$server_array = explode(",", self::$server);
            self::$workingServers = array();
        }
        $serverList = array();
        if ($job) {
            if (array_key_exists($job, self::$workingServers)) {
                foreach (self::$server_array as $server) {
                    if (array_key_exists($server, self::$workingServers[$job])) {
                        if (self::$workingServers[$job][$server]) {
                            $serverList[] = $server;
                        }
                    } else {
                        $serverList[] = $server;
                    }
                }
                if (count($serverList) == 0) {
                    # All servers have failed, need to insert all the servers again and retry.
                    $serverList = self::$workingServers[$job] = self::$server_array;
                }
                return $serverList[rand(0, count($serverList) - 1)];
            } else {
                return self::randomServer();
            }
        } else {
            return self::randomServer();
        }
    }

    static function serverWorked($server, $job) {
        self::$workingServers[$job][$server] = $server;
    }

    static function serverFailed($server, $job) {
        self::$workingServers[$job][$server] = false;
    }

    static function Connect($server = false, $job = false) {
        if ($server) {
            self::$server = self::getServer();
        }
        self::$gmclient = new GearmanClient();
        self::$gmclient->setTimeout(self::$timeout);
        # add the default job server
        self::$gmclient->addServer($server = self::getServer($job));
        return $server;
    }

    static function Destroy() {
        self::$gmclient = false;
    }

    static function Client($name, $vars, $timeout = false) {
        if (is_int($timeout)) {
            self::$timeout = $timeout;
        } else {
            self::$timeout = self::$defaultTimeout;
        }
        do {
            $server = self::Connect(false, $name);
            $value = self::$gmclient->do($name, $vars);
            $return_code = self::$gmclient->returnCode();
            if (!$value) {
                $error_message = self::$gmclient->error();
                if ($return_code == 47) {
                    self::serverFailed($server, $name);
                    if (count(self::$server_array) > 1) {
                        // ADDED SINGLE SERVER LOOP AVOIDANCE
                        // echo "Timeout on server $server, trying another server...\n";
                        continue;
                    } else {
                        return false;
                    }
                }
                echo "ERR: $error_message ($return_code)\n";
            }
            # printf("Worker has returned\n");
            $short_value = substr($value, 0, 80);
            switch ($return_code)
            {
                case GEARMAN_WORK_DATA:
                    echo "DATA: $short_value\n";
                    break;
                case GEARMAN_SUCCESS:
                    self::serverWorked($server, $name);
                    break;
                case GEARMAN_WORK_STATUS:
                    list($numerator, $denominator) = self::$gmclient->doStatus();
                    echo "Status: $numerator/$denominator\n";
                    break;
                case GEARMAN_TIMEOUT:
                    // self::Connect();
                    // Fall through
                default:
                    echo "ERR: $error_message " . self::$gmclient->error() . " ($return_code)\n";
                    break;
            }
        } while ($return_code != GEARMAN_SUCCESS);

        $rv = unserialize($value);
        return $rv["rv"];
    }
}
# Example usage:
# $rv = MyGearmanClient::Client("Function", $args);
?>
Since 'addServer' from the gearman client is not working properly, this code chooses a job server randomly and, if it fails, tries the next one; this way you can balance the load.
// job servers
$jobservers = array('192.168.1.1', '192.168.1.2');

// prepare gearman client
$gmclient = new GearmanClient();

// shuffle job servers (deliver jobs equally by server)
shuffle($jobservers);

// add job servers
foreach($jobservers as $jobserver) {
    // add random jobserver
    $gmclient->addServer($jobserver);
    // check server state; if ok, end foreach
    if (@$gmclient->ping('ping')) break;
    // if the connection fails, reset the client
    $gmclient = new GearmanClient();
}
Solution tested and working ok.
$client = new GearmanClient();
if (!$client->addServer("11.11.65.73", 4730))
    $client->addServer("11.11.65.79", 4730);
