I am trying to force PHP to never spend more than a set amount of time connecting and sending information to a web service. I am using the code below, and for some reason it will run past the time allowed and I can't figure out why. For example, if I set it to 30 seconds, in some cases it takes much longer. I need a way to kill the connection with certainty if it takes too long.
TIA,
Will
$intTimeout = $this->getTimeout();
$client = new SoapClient("{$this->url}?WSDL", array('trace' => 0, 'connection_timeout' => $intTimeout));
try {
    $this->response = $client->someService($objPerson);
} catch (Exception $e) {
}
$this->restoreTimeout();

function getTimeout()
{
    $intTimeout = @$this->timeout ? $this->timeout : 60;
    $_SESSION['old_sock'] = ini_get('default_socket_timeout');
    ini_set('default_socket_timeout', $intTimeout);
    return $intTimeout;
}

function restoreTimeout()
{
    $intCurSockTime = @$_SESSION['old_sock'];
    if ($intCurSockTime) ini_set('default_socket_timeout', $intCurSockTime);
}
The problem is that default_socket_timeout only counts the time spent waiting for the socket to respond; once a response starts arriving, PHP will wait for it indefinitely.
This is a good example using curl with a timeout as a fallback: http://www.darqbyte.com/2009/10/21/timing-out-php-soap-calls/
Similar question: PHP SoapClient Timeout
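For reference, the approach in the linked article boils down to overriding SoapClient::__doRequest so the HTTP request goes through curl, where CURLOPT_TIMEOUT caps the entire transfer. A minimal sketch (the class name and defaults are my own, assuming SOAP 1.1 over HTTP):

```php
<?php
// Route every SOAP call through curl so CURLOPT_TIMEOUT can enforce a
// hard limit on the whole request, not just the initial socket wait.
class SoapClientWithTimeout extends SoapClient
{
    public $timeout = 30; // hard limit in seconds (assumed default)

    public function __doRequest($request, $location, $action, $version, $one_way = 0)
    {
        $ch = curl_init($location);
        curl_setopt_array($ch, array(
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => $request,
            CURLOPT_HTTPHEADER     => array(
                'Content-Type: text/xml; charset=utf-8',
                'SOAPAction: "' . $action . '"',
            ),
            CURLOPT_CONNECTTIMEOUT => $this->timeout, // time allowed to connect
            CURLOPT_TIMEOUT        => $this->timeout, // time allowed for the whole transfer
        ));
        $response = curl_exec($ch);
        if ($response === false) {
            $error = curl_error($ch);
            curl_close($ch);
            throw new SoapFault('HTTP', "curl request failed: $error");
        }
        curl_close($ch);
        return $response;
    }
}
```

Because the timeout lives inside __doRequest, it applies to every call made through the client, with no ini_set juggling needed.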
I was trying to use the Server-Sent Events mechanism in my project (this is like long polling on steroids).
The example under the "Sending events from the server" subtitle works beautifully: a few seconds after the browser disconnects, the Apache process is killed. This method works fine.
BUT! If I try to use RabbitMQ, Apache doesn't kill the process after the browser disconnects (es.close()). The process stays as is and is only killed after the Docker container restarts.
connection_aborted and connection_status don't work at all: connection_aborted returns only 0 and connection_status returns CONNECTION_NORMAL even after the disconnect. This happens only when I use RabbitMQ; without RabbitMQ these functions work well.
ignore_user_abort(false) doesn't work either.
Code example:
<?php

use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AbstractConnection;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use PhpAmqpLib\Message\AMQPMessage;

class RequestsRabbit
{
    protected $rabbit;

    /** @var AMQPChannel */
    protected $channel;

    public $exchange = 'requests.events';

    public function __construct(AbstractConnection $rabbit)
    {
        $this->rabbit = $rabbit;
    }

    public function getChannel()
    {
        if ($this->channel === null) {
            $channel = $this->rabbit->channel();
            $channel->exchange_declare($this->exchange, 'fanout', false, false, false);
            $this->channel = $channel;
        }
        return $this->channel;
    }

    public function send($message)
    {
        $channel = $this->getChannel();
        $message = json_encode($message);
        $channel->basic_publish(new AMQPMessage($message), $this->exchange);
    }

    public function subscribe(callable $callable)
    {
        $channel = $this->getChannel();
        list($queue_name) = $channel->queue_declare('', false, false, true, false);
        $channel->queue_bind($queue_name, $this->exchange);

        $callback = function (AMQPMessage $msg) use ($callable) {
            call_user_func($callable, json_decode($msg->body));
        };
        $channel->basic_consume($queue_name, '', false, true, false, false, $callback);

        while (count($channel->callbacks)) {
            if (connection_aborted()) {
                break;
            }
            try {
                $channel->wait(null, true, 5);
            } catch (AMQPTimeoutException $exception) {
            }
        }

        $channel->close();
        $this->rabbit->close();
    }
}
What happens:
Browser establishes an SSE connection to the server: var es = new EventSource(url);
Apache2 spawns a new process to handle this request.
PHP generates a new queue and connects to it.
Browser closes the connection with es.close().
Apache2 doesn't kill the process and it stays as is. The RabbitMQ queue is not deleted. If I reconnect a few times, it spawns a bunch of processes and a bunch of queues (1 reconnection = 1 process = 1 queue).
I close all tabs -- the processes stay alive. I close the browser -- same situation.
Looks like some kind of PHP bug. Or an Apache2 bug?
What I use:
The latest Docker and docker-compose
php:7.1.12-apache or php:5.6-apache images (this happens on both versions of PHP)
Please, help me to figure out what's going on...
P.S. Sorry for my English. If you can find a mistake or typo, point to it in the comments. I'll be very grateful :)
You don't say whether you're using send() or subscribe() (or both) during your server-sent events. Assuming you're using subscribe(), there is no bug. This loop:
while (count($channel->callbacks)) {
    if (connection_aborted()) {
        break;
    }
    try {
        $channel->wait(null, true, 5);
    } catch (AMQPTimeoutException $exception) {
    }
}
will run until the process is killed or the connection is closed remotely by RabbitMQ. This is normal when listening for queued messages. If you need to stop the loop at some point, you can set a variable to check inside the loop, or throw an exception when the SSE stream ends (although I find this awkward).
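One thing worth checking: connection_aborted() only reports a dead client after PHP actually tries to write to the socket, and a loop that just waits on RabbitMQ never writes anything. A hedged sketch of a heartbeat helper (the function name is illustrative, not part of php-amqplib):

```php
<?php
// Returns false once the client has disconnected. PHP only learns the
// socket is closed when a write fails, so push a heartbeat first.
function sse_heartbeat_alive()
{
    echo ": heartbeat\n\n";   // SSE comment line; EventSource ignores it
    @ob_flush();
    flush();
    return !connection_aborted();
}

// Inside subscribe(), the wait loop would then look like:
// while (count($channel->callbacks)) {
//     if (!sse_heartbeat_alive()) {
//         break;                       // client is gone
//     }
//     try {
//         $channel->wait(null, true, 5);
//     } catch (AMQPTimeoutException $e) {
//         // no message in this interval; heartbeat again
//     }
// }
```

With a write attempted on every pass, the loop notices the closed EventSource within one wait interval instead of hanging until the container restarts.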
I am using the raw blockchain API, and the docs say that I can make 1 request every 10 seconds. How would I make sure that I don't go over this limit? I'd prefer to keep it server-side with PHP. Thank you for the response.
After each call to the API, add 10 seconds to your internal time counter so you know when the next call will be allowed.
class ApiRequest
{
    // Property initializers cannot call functions such as time(),
    // so start at 0 (which allows the first request immediately).
    private $nextRequestTime = 0;

    private function allowRequest()
    {
        $local_time = time();
        if ($local_time >= $this->nextRequestTime) {
            $this->nextRequestTime = $local_time + 10;
            return true;
        }
        return false;
    }

    public function doRequest($request)
    {
        if ($this->allowRequest()) {
            // process the $request...
        }
    }
}
When ApiRequest::allowRequest() returns false, you know that you should process the request later.
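Note that in a typical web setup the object above lives only for one request, so the counter resets every time a page is loaded. A minimal sketch of persisting the next-allowed timestamp in a file instead (the file path and function name are assumptions):

```php
<?php
// Keep the next-allowed timestamp on disk so the 10-second limit
// survives across separate PHP requests. flock() serializes access
// when several requests arrive at once.
function allowApiRequest($stateFile = '/tmp/api_next_request', $interval = 10)
{
    $fp = fopen($stateFile, 'c+');      // create if missing, don't truncate
    if ($fp === false) {
        return false;                   // fail closed if state can't be tracked
    }
    flock($fp, LOCK_EX);
    $next = (int) stream_get_contents($fp);
    $now = time();
    $allowed = $now >= $next;
    if ($allowed) {
        ftruncate($fp, 0);
        rewind($fp);
        fwrite($fp, (string) ($now + $interval));
    }
    flock($fp, LOCK_UN);
    fclose($fp);
    return $allowed;
}
```

The same idea works with APCu, a database row, or any other store that outlives a single request.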
I've built an IRC bot using a PHP bot framework called Philip (https://github.com/epochblue/philip). When the command !hello is sent into the chat by anyone, the bot should say "hello..." into the channel, wait 45 seconds, say "foo", wait 15 seconds, then say "bar" (I know that doesn't make any sense, just trying to get this code to work).
Here's the code that I've tried so far:
Attempt #1
$bot->onChannel('/^!hello$/', function($event) {
    $event->addResponse(Response::msg($event->getRequest()->getSource(), "hello..."));
    $now = time();
    while ($now + 45 > time()) {
        // busy-wait for 45 seconds
    }
    $event->addResponse(Response::msg($event->getRequest()->getSource(), "foo"));
    while ($now + 60 > time()) {
        // busy-wait until 60 seconds have passed in total
    }
    $event->addResponse(Response::msg($event->getRequest()->getSource(), "bar"));
});
Attempt #2
$bot->onChannel('/^!hello$/', function($event) {
    $event->addResponse(Response::msg($event->getRequest()->getSource(), "hello..."));
    sleep(45);
    $event->addResponse(Response::msg($event->getRequest()->getSource(), "foo"));
    sleep(15);
    $event->addResponse(Response::msg($event->getRequest()->getSource(), "bar"));
});
With both of those attempts, the bot waits the full 60 seconds before outputting anything at all. So instead of sending one message, waiting, sending another message, waiting, and sending a third message, it waits the entire 60 seconds and then sends all three messages at once.
Any idea as to how I can get this to work as I would like it to?
Thanks
If you take a look at the fwrite() docs, it has the following tidbit:
Note:
Writing to a network stream may end before the whole string is written. Return value of fwrite() may be checked:
<?php
function fwrite_stream($fp, $string) {
    for ($written = 0; $written < strlen($string); $written += $fwrite) {
        $fwrite = fwrite($fp, substr($string, $written));
        if ($fwrite === false) {
            return $written;
        }
    }
    return $written;
}
So in your Philip::send method, you could incorporate a similar solution to hold the method until the fwrite to the socket is completed, and return a success boolean, etc.
sleep() freezes the thread you are currently in.
You should either do the message sending in a new thread (and sleep there, so the main thread doesn't freeze) using pthreads, or you should use a timer.
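For the timer option, a minimal sketch of the idea: queue up (send-at, message) pairs and check them on each pass of the bot's main loop instead of blocking. None of these names are part of the Philip API:

```php
<?php
// Schedule messages for later delivery without blocking the event loop.
// The bot's loop calls due() on every tick and sends whatever is ready.
class DelayedMessages
{
    private $pending = array();

    public function schedule($delaySeconds, $message)
    {
        $this->pending[] = array(time() + $delaySeconds, $message);
    }

    // Returns (and removes) every message whose send time has arrived.
    public function due()
    {
        $ready = array();
        foreach ($this->pending as $i => $item) {
            if (time() >= $item[0]) {
                $ready[] = $item[1];
                unset($this->pending[$i]);
            }
        }
        return $ready;
    }
}
```

For the !hello command you would schedule "foo" at 45 seconds and "bar" at 60 seconds, and the bot stays responsive to other commands in between.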
I am using curl_multi to send emails out in a rolling curl script similar to this one, but I added a CURLOPT_TIMEOUT of 10 seconds and a CURLOPT_CONNECTTIMEOUT of 20 seconds:
http://www.onlineaspect.com/2009/01/26/how-to-use-curl_multi-without-blocking/
While testing it, I reduced the timeouts to 1 ms using CURLOPT_TIMEOUT_MS and CURLOPT_CONNECTTIMEOUT_MS respectively, just to see how it handles a timeout. But the timeout kills the entire curl process. Is there a way to continue with the other requests even if one times out?
Thanks.
-devo
https://github.com/krakjoe/pthreads
<?php
class Possibilities extends Thread {
    public $url;
    public $response;

    public function __construct($url){
        $this->url = $url;
    }

    public function run(){
        /*
         * Or use curl; this is quicker for an example ...
         * Note: Thread::join() returns a boolean, so the result is
         * stored in a member rather than returned from run().
         */
        $this->response = file_get_contents($this->url);
    }
}

$threads = array();
$urls = get_my_urls_from_somewhere();

foreach ($urls as $index => $url) {
    $threads[$index] = new Possibilities($url);
    $threads[$index]->start();
}

foreach ($threads as $index => $thread) {
    $threads[$index]->join();
    if ($threads[$index]->response) {
        /** good, got a response */
    } else { /** we do not care **/ }
}
?>
My guess is you are using curl_multi because it's the only option for concurrent execution of the code sending out emails. If that is the case, I do not suggest you use anything like the code above; I suggest you thread the calls to mail() directly, as this will be far faster and more efficient.
But now you know, you can thread in PHP .. enjoy :)
I'm developing a system consisting of a frontend built on the CakePHP framework and a Java-based backend. Communication between these two ecosystems is carried out by sending JSON messages from a CakePHP controller to a RabbitMQ broker. When a message is consumed, the result is sent back to the frontend.
Now I need to consume the message and push the result from the controller to the user's browser. For the PHP part I'm using php-amqplib, but it needs an infinite loop when listening for new messages:
$channel->basic_consume('AMQP.COMMAND.OUTPUT.QUEUE',
    'consumer',
    false,
    false,
    false,
    false,
    array($this, 'processMessage'));

function shutdown($ch, $conn){
    $ch->close();
    $conn->close();
}

register_shutdown_function('shutdown', $channel, $conn);

while (count($channel->callbacks)) {
    $read = array($conn->getSocket()); // add here other sockets that you need to attend
    $write = null;
    $except = null;
    if (false === ($num_changed_streams = stream_select($read, $write, $except, 60))) {
        /* Error handling */
    } elseif ($num_changed_streams > 0) {
        $channel->wait();
    }
}
In my controller this provokes Apache to throw an error because the maximum execution time of 30 seconds is exceeded.
I really need help here. What's the best solution for listening for new messages and then pushing the result to the view?
Thanks
Cheers.
I highly recommend converting this to an AJAX-based infrastructure and refactoring your code so that:
CakePHP makes an AJAX call to load the page every x seconds
The AJAX URL gets the remaining elements from the queue and outputs them
Your code doesn't look complete, so I can't refactor it fully, but you could change the AJAX URL to do something like this:
if (count($channel->callbacks)) {
    $read = array($conn->getSocket()); // add here other sockets that you need to attend
    $write = null;
    $except = null;
    if (false === ($num_changed_streams = stream_select($read, $write, $except, 60))) {
        /* Error handling */
    }
}
and close the channel when done.
Your other option, if you really want to use push, is to use web sockets. Do a search, or this tutorial might help you get started.
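For the AJAX route, a sketch of what the polled endpoint could do: drain whatever is currently queued with basic_get(), which returns null on an empty queue instead of blocking. Connection details and the queue name are assumptions carried over from the question:

```php
<?php
// Non-blocking AJAX endpoint: pull everything currently in the queue
// and return it as JSON, then close the connection and exit.
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;

$conn = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $conn->channel();

$messages = array();
// basic_get() returns null when the queue is empty, so this never blocks.
while (($msg = $channel->basic_get('AMQP.COMMAND.OUTPUT.QUEUE')) !== null) {
    $messages[] = json_decode($msg->body, true);
    $channel->basic_ack($msg->delivery_info['delivery_tag']);
}

$channel->close();
$conn->close();

header('Content-Type: application/json');
echo json_encode($messages);
```

Each poll finishes well inside Apache's 30-second execution limit, because the request never waits for a message to arrive.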