In a Silex application running on HHVM I have set up a dummy event listener on KernelEvents::TERMINATE:
$app['dispatcher']->addListener(
    KernelEvents::TERMINATE,
    function () use ($app) {
        usleep(10000000); // 10 seconds
        $app['logger']->alert("I AM REGISTERED!");
    }
);
I was expecting my application to render the response as fast as possible, within a second, and only after 10 s did I expect the message "I AM REGISTERED!" to appear in my log.
Yet, strangely, the response is sent only after the listener has run, meaning the listener blocks the response for 10 s and I see both the response and the log message at the same time.
What is going on here?
I find it odd because in Application.php it appears that send() is called before terminate():
vendor/silex/silex/src/Silex/Application.php:
/**
 * Handles the request and delivers the response.
 *
 * @param Request|null $request Request to process
 */
public function run(Request $request = null)
{
    if (null === $request) {
        $request = Request::createFromGlobals();
    }

    $response = $this->handle($request);
    $response->send();
    $this->terminate($request, $response);
}
The Symfony2 docs about HttpKernel, which Silex uses as well, say:
Internally, the HttpKernel makes use of the fastcgi_finish_request PHP
function. This means that at the moment, only the PHP FPM server API
is able to send a response to the client while the server's PHP
process still performs some tasks. With all other server APIs,
listeners to kernel.terminate are still executed, but the response is
not sent to the client until they are all completed.
And fastcgi_finish_request is not currently supported by HHVM.
Hence, the response will not be sent until all kernel.terminate listeners have completed.
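For reference, this is roughly what Symfony's Response::send() relies on (a sketch, not the exact vendor code): on PHP-FPM the fastcgi_finish_request() branch hands the response to the client immediately, while on HHVM, where the function does not exist, nothing leaves the server until the listeners are done.
// Rough sketch of the logic at the end of Response::send()
$this->sendHeaders();
$this->sendContent();
if (function_exists('fastcgi_finish_request')) {
    // PHP-FPM only: the client already has the full response here,
    // so kernel.terminate listeners can take as long as they want.
    fastcgi_finish_request();
} elseif ('cli' !== PHP_SAPI) {
    flush(); // best effort on other SAPIs; the output may still be buffered
}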
PHP is not asynchronous, so while event handling is possible through the use of callbacks, as soon as an event fires, the control flow of the process is dedicated to it.
Frameworks tend to make sending the response body the last action they take, in case any headers still have to be modified.
As you mentioned, the content is being sent/echoed before the TERMINATE event is fired, but that's not the whole story.
It depends on how your server is set up. If, for example, you have gzip enabled in Apache (very common), then Apache will buffer all content until PHP has finished executing (and then gzip it and send it). You mentioned that you're on HHVM, which could also be the problem - it might not flush the content itself until execution is complete.
Either way, the best solution is to... well... not sleep. I'm assuming that you're sleeping to give the database a chance to flush to disk (10 seconds is a really long time to wait for that, though). If that's not the case, then finding a decent solution won't be easy until we can understand why you need to wait that long.
I was wondering if there is a way to have a ReactPHP HTTP server handle requests asynchronously. I set up a very basic HTTP server using the documentation (https://github.com/reactphp/http)
HTTPServer.php
<?php
$httpServer = new React\Http\Server(
    function (Psr\Http\Message\ServerRequestInterface $request) {
        $responseData = $request->getUri()->getPath();
        if ($responseData == "/testSlow") { sleep(5); } // simulate a slow response time
        if ($responseData == "/testFast") { sleep(1); } // simulate a fast response time
        return new React\Http\Message\Response(
            "200",
            array("Access-Control-Allow-Headers" => "*", "Access-Control-Allow-Origin" => "*", "Content-Type" => "application/json"),
            json_encode($responseData)
        );
    }
);
$socketServer = new React\Socket\Server("0.0.0.0:31");
$httpServer->listen($socketServer);
?>
It seems to be working fine, but synchronously: if I send a request to the /testSlow path and then immediately to the /testFast path, the slow one always finishes first after 5 seconds, and only once it has finished does the fast one start and finish after 1 second.
Am I missing some additional setup?
ReactPHP's event loop handles requests asynchronously, not in parallel. That means there is only one running process, and a call to sleep() hangs this process, i.e. prevents the event loop from handling the next requests. So, in asynchronous apps (in Node.js as well) it is common practice to move heavy processing to dedicated processes.
I am not a ReactPHP expert, so I cannot provide a complete working example, but I can point out the root cause of the problem. I would recommend reading this awesome blog: https://sergeyzhuk.me/reactphp-series, and this article in particular: https://sergeyzhuk.me/2018/05/04/reactphp-child-processes
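That said, here is a rough sketch of the non-blocking idea (not a full child-process example): instead of calling sleep(), return a promise and let a timer resolve it, so the loop stays free to serve /testFast while /testSlow is still "waiting". This assumes a recent react/http, react/promise and react/event-loop with the static Loop facade:
<?php
use Psr\Http\Message\ServerRequestInterface;
use React\EventLoop\Loop;
use React\Http\Message\Response;
use React\Promise\Promise;

$httpServer = new React\Http\Server(function (ServerRequestInterface $request) {
    $path  = $request->getUri()->getPath();
    $delay = ($path == "/testSlow") ? 5 : 1; // simulated processing time

    // Return a promise and resolve it from a timer: the event loop keeps
    // running and can serve other requests while the timer is pending.
    return new Promise(function ($resolve) use ($delay, $path) {
        Loop::addTimer($delay, function () use ($resolve, $path) {
            $resolve(new Response(
                200,
                array("Content-Type" => "application/json"),
                json_encode($path)
            ));
        });
    });
});

$socketServer = new React\Socket\Server("0.0.0.0:31");
$httpServer->listen($socketServer);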
Hi, I'm trying to execute a LONG RUNNING request (action) in the background.
function actionRequest($id){
    //execute very long process here in background but continue redirect
    Yii::app()->user->setFlash('success', "Currently processing your request you may check it from time to time.");
    $this->redirect(array('index', 'id'=>$id));
}
What I'm trying to achieve is to NOT have the user wait for the request to be processed, since it generally takes 5-10 min and the request usually hits a timeout; even if I set the timeout longer, waiting 5-10 minutes isn't a good user experience.
So I want to return to the page immediately, notifying the user that his/her request is being processed, while he/she can still browse and do other stuff in the application, and can then come back to the page and see that the request was processed.
I've looked into the Yii extension backjob. It works: the redirect is executed immediately (somehow a background request), but when doing other things, like navigating the site, pages don't load; it seems that the request is still there, and I cannot continue using the application until it is finished.
A similar extension, runactions, promises the same thing, but I could not even get it to work; it says it 'touches a url', like a fire-and-forget job, but it doesn't work.
I've also tried to look into message queuing services like Gearman and RabbitMQ, but they are really highly technical; I couldn't even install Gearman on my Windows machine, so "farming" services won't work for me. Some answers to background processing include CRON and AJAX, but that doesn't sound too good, plus there are a lot of issues.
Is there any other workaround for asynchronous background processing? I've really searched hard for this, and I'm really not looking for advanced/sophisticated solutions like "farming out work to several machines" and the like. Thank you very much!
If you want to be able to run asynchronous jobs via Yii, you may not have a choice but to dabble with some AJAX in order to retrieve the status of the job asynchronously. Here are high-level guidelines that worked for me. Hopefully this will assist you in some way!
Setting up a console action
To run background jobs, you will need to use Yii's console component. Under /protected/commands, create a copy of your web controller that has your actionRequest() (e.g. /protected/commands/BulkCommand.php).
This should allow you to go in your /protected folder and run yiic bulk request.
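If it helps, the console command could look roughly like this (a skeleton with hypothetical parameter names, matching the yiic bulk request call used further down):
// /protected/commands/BulkCommand.php - hypothetical skeleton
class BulkCommand extends CConsoleCommand
{
    // invoked by: yiic bulk request --arg1=... --arg2=... --arg3=...
    public function actionRequest($arg1, $arg2, $arg3)
    {
        // the long-running work from the web controller's actionRequest()
        // goes here, free of the web server's request timeout
    }
}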
Keep in mind that if you have not created a console application before, you will need to set up its configuration similar to how you've done it for the web application. A straight copy of /protected/config/main.php into /protected/config/console.php should do 90% of the job.
Customizing an extension for running asynchronous console jobs
What has worked for me is using a combination of two extensions: CConsole and TConsoleRunner. TConsoleRunner uses popen to run shell scripts, which worked for me on Windows and Ubuntu. I simply merged its run() code into CConsole as follows:
public function popen($shell, $redirectOutput = '')
{
    $shell = $this->resolveCommandLine($shell, false, $redirectOutput);

    $ret = self::RETURN_CODE_SUCCESS;

    if (!$this->displayCommands) {
        ob_start();
    }

    if ($this->isWindows()) {
        pclose(popen('start /b '.$shell, 'r'));
    } else {
        pclose(popen($shell.' > /dev/null &', 'r'));
    }

    if (!$this->displayCommands) {
        ob_end_clean();
    }

    return $ret;
}

protected function isWindows()
{
    if (PHP_OS == 'WINNT' || PHP_OS == 'WIN32')
        return true;
    else
        return false;
}
Afterwards, I changed CConsole's runCommand() to the following:
public function runCommand($command, $args, $async = false, &$outputLines = null, $executor = 'popen')
{
    ...
    switch ($executor) {
        ...
        case 'popen':
            return $this->popen($shell);
        ...
    }
}
Running the asynchronous job
With the above set up, you can now use the following snippet of code to call the yiic bulk request command we created earlier.
$console = new CConsole();
$console->runCommand('bulk request', array(
    '--arg1="argument"',
    '--arg2="argument"',
    '--arg3="argument"',
));
You would insert this in your original actionRequest().
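In other words, actionRequest() could end up looking something like this (argument names are hypothetical):
function actionRequest($id)
{
    // fire-and-forget: the console command does the heavy lifting
    $console = new CConsole();
    $console->runCommand('bulk request', array('--id=' . $id));

    Yii::app()->user->setFlash('success', "Currently processing your request, you may check it from time to time.");
    $this->redirect(array('index', 'id' => $id));
}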
Checking up on the status
Unfortunately, I'm not sure what kind of work your bulk request is doing. For myself, I was gathering a whole bunch of files and putting them in a folder. I knew going in how many files I expected, so I could easily create a controller action that checks how many files have been created so far and gives a % status as a simple division.
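As a rough illustration (the action name and file pattern are made up), such a status action can be very small:
// Hypothetical progress check: compare files created so far against the expected total.
public function actionStatus($id, $expected)
{
    $files = glob(Yii::app()->runtimePath . "/bulk/{$id}/*");
    $done  = $files ? count($files) : 0;
    echo CJSON::encode(array(
        'progress' => min(100, (int) round($done / max(1, (int) $expected) * 100)),
    ));
}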
I've recently discovered EventSource. YUI3 has a Gallery module that normalises the behaviour and provides a fallback, which is what I've chosen to go with in my example, as I use that framework already.
So I've searched about quite a bit, read many blogs, posts and examples, all of which show pretty much the same thing: How to set up basic SSE events. I now have 6 examples of open/message/error/close events firing.
What I don't have (what I'd hoped this link was going to give me) is an example of how to fire SSE events which are more useful to my application; I'm trying one called 'update'.
Here's is my basic test page: http://codefinger.co.nz/public/yui/eventsource/test.php (it might as well be an html file, there's no php code in here yet)
And here's the 'message.php' in the EventSource constructor:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache'); // recommended to prevent caching of event data.

/**
 * Constructs the SSE data format and flushes that data to the client.
 *
 * @param string $id  Timestamp/id of this connection.
 * @param string $msg Line of text that should be transmitted.
 */
function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

while (true) {
    $serverTime = time();
    sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
    sleep(10);
}

// I was hoping calling this file with a param might allow me to fire an event,
// which it does dutifully, but no browsers register the 'data : update' - though
// I do see the response in Firebug.
if ($_REQUEST['cmd']) {
    sendMsg($serverTime, $_REQUEST['cmd']);
}
?>
From the live example above, you can see that I've tried to use YUI's io module to send a request, with a param, to fire my 'update' event when I click the 'update' button. It seems to work, as you can see in Firebug's Net panel, but my event isn't handled (I realise the script above will run that loop again; I just want to get my event handled in connected browsers, then I'll remove/clean up).
Am I doing this part wrong? Or is there something more fundamental I'm doing wrong? I'm trying to push events in response to my UI's state changing.
This SO question seemed to come close, @tomfumb commented that his next question was going to be "how to send new events to the client after the initial connection is made - now I see that the PHP just has to never stop executing." But surely I'd only send events as they happen... and not continuously...
There are several issues in your approach:
The server-side code that reads the cmd parameter is unreachable because of the infinite loop that sends event data to the client.
You are trying to send an event from the client to the server. It is in the specification name - Server-Sent Events - the server is the sender and the client is the receiver of events. You have options here:
Use the appropriate specification for the job called Web Sockets which is a two-way communication API
Write the logic that makes the desired type of communication possible
If you choose to stay with the SSE API, I see two possible scenarios:
Reuse the same EventSource connection and store a pool of connections on the server. When the user sends a subsequent XMLHttpRequest with the update command, get the EventSource connection from the pool that was made by this visitor and send a response over it that specifies your custom event type (the default type is message). It is important to avoid entering the infinite loop that would open another EventSource stream towards the client, because the client would not handle it - that request was made with XMLHttpRequest and not with EventSource.
Make all requests with EventSource. Before making a new EventSource request, close the previous one - you can do this from the client or from the server. On the server, check the parameters and then send data to the client.
Also, you can use XMLHttpRequest with (long) polling and thus avoid the need for EventSource at all. Given the simplicity of your example, I can't see a reason to mix the two types of requests.
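A rough sketch of how the SSE endpoint could emit a custom 'update' event type (a simplification of the first scenario: instead of a connection pool, the XMLHttpRequest hands the command over via shared storage, APC here):
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

// Emits one SSE block; with an "event:" line the client must listen via
// source.addEventListener('update', ...) instead of onmessage.
function sendEvent($event, $id, $data) {
    echo "event: $event" . PHP_EOL;
    echo "id: $id" . PHP_EOL;
    echo "data: $data" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

while (true) {
    // a separate endpoint called via XMLHttpRequest would do: apc_store('cmd', 'update');
    if ($cmd = apc_fetch('cmd')) {
        apc_delete('cmd');
        sendEvent('update', time(), $cmd);
    } else {
        sendEvent('message', time(), 'server time: ' . date('h:i:s'));
    }
    sleep(2);
}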
I have a simple problem. I use PHP on the server side and produce HTML output. My site shows the status of another server. So the flow is:
Browser user goes on www.example.com/status
Browser contacts www.example.com/status
PHP server receives the request and asks for the status on www.statusserver.com/status
PHP receives the data, transforms it into readable HTML output and sends it back to the client
Browser user can see the status.
Now, I've created a singleton class in PHP which accesses the status server only every 8 seconds. So it updates the status every 8 seconds. If a user requests an update in between, the server returns the locally (on www.example.com) stored status.
That's nice, isn't it? But then I did an easy test and started 5 browser windows to see if it works. Here it comes: the PHP server created a singleton instance for each request. So now 5 clients are requesting the status from the status server every 8 seconds - this means I have 5 calls to the status server every 8 seconds instead of one!
Isn't there a possibility to provide only one instance to all users within an Apache server? That would solve the problem in case 1000 users are connecting to www.example.com/status....
thx for any hints
=============================
EDIT:
I already use caching on the hard drive:
public function getFile($filename)
{
    $diff = (time() - filemtime($filename));
    //echo "diff:$diff<br/>";
    if ($diff > 8) {
        //echo 'greater than 8<br/>';
        self::updateFile($filename);
    }

    if (is_readable($filename)) {
        try {
            $returnValue = @ImageCreateFromPNG($filename);
            if ($returnValue == '') {
                sleep(1);
                return self::getFile($filename);
            } else {
                return $returnValue;
            }
        } catch (Exception $e) {
            sleep(1);
            return self::getFile($filename);
        }
    } else {
        sleep(1);
        return self::getFile($filename);
    }
}
This is the call in the singleton. I ask for a file and save it on the hard drive, but all the requests call it at the same time and start requesting the status server.
I think the only solution would be a standalone application which updates the file every 8 seconds... All requests should just read the file and no longer be able to update it.
This standalone application could be a Perl script or something similar...
PHP requests are handled by different processes, and each of them has its own state; there isn't any resident process like in other web development frameworks. You should handle that behavior directly in your class, using for instance some caching.
The method which queries the server status should have this logic:
public function getStatus() {
    if (!$status = $cache->load()) {
        // cache miss
        $status = // do your query here
        $cache->save($status); // store the result in cache
    }
    return $status;
}
In this way only one request out of X will fetch the real status. The X value depends on your cache configuration.
Some cache library you can use:
APC
Memcached
Zend_Cache which is just a wrapper for actual caching engines
Or you can store the result in a plain text file and, on every request, check the mtime of the file itself and rewrite it if more than xx seconds have passed.
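For example, the plain-file variant could look like this (the method and helper names are placeholders; 8 is the interval from your question):
public function getStatus() {
    $file = '/tmp/status_cache.json';
    // rewrite the cached status at most once every 8 seconds
    if (!is_file($file) || (time() - filemtime($file)) > 8) {
        $status = $this->queryStatusServer(); // hypothetical helper doing the real request
        file_put_contents($file, json_encode($status), LOCK_EX);
    }
    return json_decode(file_get_contents($file), true);
}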
Update
Your code is pretty strange, why all those sleep calls? Why a try/catch block when ImageCreateFromPNG does not throw?
You're asking a different question. Since PHP is not an application server and cannot store state across processes, your approach is correct. I suggest you use APC (it uses shared memory, so it would be at least 10x faster than reading a file) to share the status across different processes. With this approach your code could become:
public function getFile($filename)
{
    $latest_update = apc_fetch('latest_update');
    if (false == $latest_update) {
        // cache expired or first request
        apc_store('latest_update', time(), 8); // 8 is the ttl in seconds
        // fetch file here and save on local storage
        self::updateFile($filename);
    }

    // here you can process the file
    return $your_processed_file;
}
With this approach the code in the if part will be executed by two different processes only if a process is blocked just after the if line, which should not happen because it is almost an atomic operation.
Furthermore, if you want to guarantee it, you could use something like semaphores to handle that, but it would be an oversized solution for this kind of requirement.
Finally, imho 8 seconds is a small interval; I'd use something bigger, at least 30 seconds, but this depends on your requirements.
As far as I know it is not possible in PHP. However, you surely can serialize and cache the object instance.
Check out http://php.net/manual/en/language.oop5.serialization.php
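A minimal sketch of that idea, assuming APC is available and StatusChecker is your singleton class (both names here are placeholders for whatever you actually use):
// Rebuild the instance from cache if possible, otherwise create and store it.
$cached  = apc_fetch('status_checker');
$checker = ($cached !== false) ? unserialize($cached) : new StatusChecker();

// ... use $checker ...

apc_store('status_checker', serialize($checker), 8); // 8 second ttl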
I'm working with the cURL implementation in PHP and leveraging curl_multi_init() and curl_multi_exec() to execute batches of requests in parallel. I've been doing this for a while, and understand this piece of it.
However, the request bodies contain a signature that is calculated with a timestamp. From the moment this signature is generated, I have a limited window of time to make the request before the server will reject the request once it's made. Most of the time this is fine. However, in some cases, I need to do large uploads (5+ GB).
If I batch requests into a pool of 100, 200, 1000, 20000, or anything in-between, and I'm uploading large amounts of data to the server, the initial requests that execute will complete successfully. Later requests, however, won't have started until after the timestamp in the signature expires, so the server rejects those requests out-of-hand.
The current flow I'm using goes something like this:
Do any processing ahead of time.
Add the not-yet-executed cURL handles to the batch.
Let cURL handle executing all of the requests.
Look at the data that came back and parse it all.
I'm interested in finding a way to execute a callback function that can generate a signature on-demand and update the request body at the moment that PHP/cURL goes to execute that particular request. I know that you can bind a callback function to a cURL handle that will execute repeatedly while the request is happening, and you have access to the cURL handle all along the way.
So my question is this: Is there any way to configure an onBefore and/or onAfter callback for a cURL handle? Something that can execute immediately before the cURL executes the request, and then something that can execute immediately after the response comes back so that the response data can be parsed.
I'd like to do something a bit more event oriented, like so:
Add a not-yet-executed cURL handle to the batch, assigning a callback function to execute when cURL (not myself) executes the request (both before and after).
Take the results of the batch request and do whatever I want with the data.
No, this isn't possible with the built in functions of cURL. However, it would be trivial to implement a wrapper around the native functions to do what you want.
For instance, vaguely implementing the Observer pattern:
<?php
class CurlWrapper {
    private $ch;
    private $listeners = array();

    public function __construct($url) {
        $this->ch = curl_init($url);
        $this->setopt(CURLOPT_RETURNTRANSFER, true);
    }

    public function setopt($opt, $value) {
        $this->notify('setopt', array('option' => $opt, 'value' => $value));
        curl_setopt($this->ch, $opt, $value);
    }

    public function setopt_array($opts) {
        $this->notify('setopt_array', array('options' => $opts));
        curl_setopt_array($this->ch, $opts);
    }

    public function exec() {
        $this->notify('beforeExec', array());
        $ret = curl_exec($this->ch);
        $this->notify('afterExec', array('result' => $ret));
        return $ret;
    }

    public function attachListener($event, $fn) {
        if (is_callable($fn)) {
            $this->listeners[$event][] = $fn;
        }
    }

    private function notify($event, $data) {
        if (isset($this->listeners[$event])) {
            foreach ($this->listeners[$event] as $listener) {
                $listener($this, $data);
            }
        }
    }
}

$c = new CurlWrapper('http://stackoverflow.com');
$c->setopt(CURLOPT_HTTPGET, true);

$c->attachListener('beforeExec', function($handle, $data) {
    echo "before exec\n";
});

$result = $c->exec();
echo strlen($result), "\n";
You can add event listeners (which must be callables) to the object with attachListener, and they will automatically be called at the relevant moment.
Obviously you would need to do some more work to this to make it fit your requirements, but it isn't a bad start, I think.
Anything to do with cURL is not advanced PHP. It's "advanced mucking about".
If you have these huge volumes of data going through cURL I would recommend not using cURL at all (actually, I would always recommend not using cURL)
I'd look into a socket implementation. Good ones aren't easy to find, but not that hard to write yourself.
Ok, so you say that the requests are parallelized, I'm not sure exactly what that means, but that's not too important.
As an aside, I'll explain what I mean by asynchronous. If you open a raw TCP socket, you can call the socket_set_blocking function on the connection; this means that read/write operations don't block. You can take several of these connections and write a small amount of data to each of them in a loop; this way you are sending your requests "at once".
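Something along these lines (the host names and request line are placeholders):
// Open several non-blocking connections, then write a request to each in turn,
// so the transfers are all started "at once".
$conns = array();
foreach (array('example.com', 'example.org') as $host) {
    $fp = stream_socket_client("tcp://{$host}:80", $errno, $errstr, 5);
    stream_set_blocking($fp, false); // reads/writes no longer block
    $conns[$host] = $fp;
}
foreach ($conns as $host => $fp) {
    fwrite($fp, "GET /status HTTP/1.1\r\nHost: {$host}\r\nConnection: close\r\n\r\n");
}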
The reason I asked whether you have to wait until the whole message is consumed before the endpoint validates the signature is that even if cURL is sending the requests "all at once", there's always a possibility that the time it takes to upload will mean that the validation fails. Presumably it's slower to upload 2000 requests at once than to upload 5, so you'd expect more failures in the former case? Similarly, if your requests are processed synchronously (i.e. one at a time), then you'll see the same error for the same reason, although in this case it's the later requests that are expected to fail. Maybe you need to think about the data upload rate required to upload a message of a particular size within a particular time frame, and then try to calculate an optimum multi-payload size. Perhaps the best approach is the simplest: upload one at a time and calculate the signature just before each upload?
A better approach might be to put the signature in a message header; this way the signature can be read earlier in the upload process.
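A sketch of the one-at-a-time approach, with the signature computed immediately before each upload and sent in a header (the sign() helper and the header name are hypothetical):
foreach ($uploads as $upload) {
    // compute the signature as late as possible so its validity window
    // covers the whole transfer of this single request
    $signature = sign($upload['body'], time()); // hypothetical signing helper

    $ch = curl_init($upload['url']);
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $upload['body'],
        CURLOPT_HTTPHEADER     => array('X-Signature: ' . $signature),
        CURLOPT_RETURNTRANSFER => true,
    ));
    $result = curl_exec($ch);
    curl_close($ch);
}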