I have been using the repejota/phpnats library to develop a NATS client that subscribes to a particular channel. But after connecting and receiving a few messages, once there is about 30 seconds of idle time it disconnects itself without any interruption. My Node.js client works fine with the same NATS server, however.
Here is how I am subscribing...
$c->subscribe(
    'foo',
    function ($message) {
        echo $message->getBody();
    }
);
$c->wait();
Any suggestions/help?
Thanks!
Was this just the default PHP timeout killing it off?
Maybe something like this:
ini_set('max_execution_time', 180); // gives about 3 minutes for example
By default, PHP scripts can't live forever, as PHP should rather be considered stateless. This is by design: the default life span is 30 seconds (hosters usually extend that to 180 seconds, but that's not really relevant here). You can extend that time yourself by setting max_execution_time to any value (with 0 meaning "forever"), but that's not recommended unless you know you want it. If not, a commonly used approach is to make the script invoke itself (i.e. via a GET request), often passing some params so the invoked script can resume where the caller finished.
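For a worker like this NATS subscriber, a minimal sketch of lifting the limits would look like the following (assuming a CLI script; note that idle blocking reads are also subject to PHP's default_socket_timeout, 60 seconds by default, which can look like a silent disconnect):
<?php
// Sketch only: lift the limits that commonly kill a long-running consumer.
set_time_limit(0);                        // 0 = run forever (same as max_execution_time = 0)
ini_set('default_socket_timeout', '-1');  // -1 = never time out idle blocking socket reads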
$options = new ConnectionOptions();
$options->setHost('127.0.0.1')->setPort(4222);
$client = new Connection($options);
$client->connect(-1);
You need to set the connect parameter to -1, which disables the timeout so the client waits forever instead of disconnecting when idle.
I use the PHP Chilkat component, and some functions sometimes take a long time.
imap->Disconnect(): maximum time detected - 60 seconds.
When I just do imap = null, it still takes 60 seconds.
I guess when it is destructed, it also disconnects inside the Chilkat component.
How can I prevent the long execution time? It makes the application slow.
Can I just kill the connection immediately?
Why does this:
selectMailbox() - maximum time detected - 68 seconds.
take so long?
closeMailbox() - 10 seconds.
I have set ReadTimeout = 2, but the measured execution time was 5 seconds.
This is the code:
$time = microtime(true);             // start timing
$this->imap->put_ReadTimeout(2);     // should limit reads to 2 seconds
$this->imap->Disconnect();
$this->imap = null;
var_dump(microtime(true) - $time);   // print the elapsed time
If your IMAP server takes a long time to respond, then the IMAP client can't make the IMAP server respond any faster. Perhaps the IMAP server is overloaded at particular times.
You can set the Imap.ReadTimeout property to a smaller value. The default value is 30 seconds. Let's say you set ReadTimeout = 5. This will tell Chilkat to abandon the connection/session if the IMAP server does not send a response within 5 seconds. The good part is that your function will return after 5 seconds. The bad part is that your session will be lost and you'll need to re-connect, re-authenticate, and re-select the mailbox. Maybe this is OK for the call to Disconnect; it's probably not OK for the call to SelectMailbox.
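A minimal sketch of that trade-off, assuming the Chilkat PHP API's get_/put_ property accessors as used in the question's code:
$oldTimeout = $imap->get_ReadTimeout();  // remember the current read timeout
$imap->put_ReadTimeout(5);               // give Disconnect at most ~5 seconds
$imap->Disconnect();                     // returns quickly even if the server hangs
$imap->put_ReadTimeout($oldTimeout);     // restore before re-connecting later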
Context:
I'm making a PHP websocket server (here) running as a daemon. It obviously has a main loop listening for socket connections and incoming data, so I can't just create another loop with a sleep(x_number_of_seconds); in it, because that would freeze my whole server.
I can't execute an external script with a cron job or fork a new process either (I guess), because I have to be in the scope of my server class to send data to the connected client sockets.
Does anyone know a magic trick to achieve this in PHP? :/
Some crazy ideas:
Keeping track of the last loop execution time with microtime(true) and comparing it with the current time on each loop; if it's about my desired X-second interval, execute the method... which would result in a very drunk and inconsistent interval loop.
Run a JavaScript setInterval() in a browser that communicates with my server through a websocket and tells it to execute my method... I said they were crazy ideas!
Additional info about what I'm trying to achieve:
I'm making a little online game (RPG-like) in which I would like to add some NPCs that update their behaviour every X seconds.
Are there other ways of achieving this? Am I missing something? Should I rewrite my server in Node.js?
Thanks a lot for the help !
A perfect alternative doesn't seem to exist, so I'll use my crazy solution #1:
$this->last_tick_time = microtime(true);
$this->tick_interval = 1;
$this->tick_counter = 0;

while (true)
{
    // loop code here...
    $t = microtime(true) - $this->last_tick_time;
    if ($t >= $this->tick_interval)
    {
        $this->on_server_tick(++$this->tick_counter);
        // carry the overflow so a late tick makes the next one come sooner
        $this->last_tick_time = microtime(true) - ($t - $this->tick_interval);
    }
}
Basically, if the time elapsed since the last server tick is greater than or equal to my desired tick interval, execute the on_server_tick() method. And most importantly: we subtract the time overflow to make the next tick happen sooner if this one happened too late. This way we fill the gaps, and in the end, if the socket_select timeout is set to 1 second, we will never have a gap greater than 1.99999999+ seconds.
I also keep track of the tick counter, so I can use modulo (%) to execute code on multiple intervals, like this:
protected function on_server_tick($counter)
{
    if ($counter % 5 == 0)
    {
        // 5 second interval
    }
    if ($counter % 10 == 0)
    {
        // 10 second interval
    }
}
which covers all my needs! :D
Don't worry PHP, I won't replace you with Node.js, you're still my friend.
It looks to me like the websocket framework you are using is too primitive to let your server do other useful things while waiting for connections from clients. The only call to PHP's socket_select() function is hard-coded to a one-second timeout, and it does nothing when the time runs out. It really ought to allow a callback or an outer loop.
Look at the http://php.net/manual/en/function.socket-select.php manual page. The last parameter is a timeout. socket_select() waits for incoming data on a socket or until the timeout is up, which sounds like exactly what you want to do, but the library has no provision for it. Then look at how the library uses it in core/classes/SocketServer.php.
I'm assuming you call run() and then it just never returns to your calling code until it gets a message on the socket, which prevents you from doing anything.
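If you can modify the library (or roll your own loop), a minimal sketch of the idea, where $clients is a hypothetical array of connected client sockets:
$next_tick = microtime(true) + 1.0;
while (true) {
    $read = $clients;   // socket_select() filters this down to the readable ones
    $write = null;
    $except = null;
    // Wake up after at most 1 second even if no data arrives.
    if (socket_select($read, $write, $except, 1) > 0) {
        // handle the readable sockets left in $read here
    }
    if (microtime(true) >= $next_tick) {
        // periodic work goes here (e.g. the on_server_tick() from the accepted answer)
        $next_tick += 1.0;
    }
}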
I have a simple script that redirects to the mobile version of a website if it detects that the user is browsing on a mobile phone. It uses the Tera-WURFL webservice to accomplish that, and it will be hosted separately from the Tera-WURFL install itself. I want to protect it in case the Tera-WURFL hosting goes down. In other words, if my script takes more than a second to run, stop executing it and just redirect to the regular website. How do I do that effectively (so that the CPU is not overly burdened by the script)?
EDIT: It looks like the TeraWurflRemoteClient class has a timeout property. Read below. Now I need to find out how to use it in my script, so that it redirects to the regular website in case of a timeout.
Here is the script:
// Instantiate a new TeraWurflRemoteClient object
$wurflObj = new TeraWurflRemoteClient('http://my-Tera-WURFL-install.pl/webservicep.php');
// Define which capabilities you want to test for. Full list: http://wurfl.sourceforge.net/help_doc.php#product_info
$capabilities = array("product_info");
// Define the response format (XML or JSON)
$data_format = TeraWurflRemoteClient::$FORMAT_JSON;
// Call the remote service (the first parameter is the User Agent - leave it as null to let TeraWurflRemoteClient find the user agent from the server global variable)
$wurflObj->getCapabilitiesFromAgent(null, $capabilities, $data_format);
// Use the results to serve the appropriate interface
if ($wurflObj->getDeviceCapability("is_tablet") || !$wurflObj->getDeviceCapability("is_wireless_device") || $_GET["ver"] == "desktop") {
    header('Location: http://website.pl/'); // default index file
} else {
    header('Location: http://m.website.pl/'); // where to go
}
?>
And here is the source of TeraWurflRemoteClient.php that is being included. It has an optional timeout argument, as mentioned in the documentation:
// The timeout in seconds to wait for the server to respond before giving up
$timeout = 1;
The TeraWurflRemoteClient class has a timeout property, and it is 1 second by default, as I see in the documentation.
So this script won't run for longer than a second.
Try setting a very short timeout on the HTTP request to Tera-WURFL inside their class, so that if the response doesn't come back in 2-3 seconds, you consider the check to be false and show the full website.
Where to set the shorter timeout varies depending on the transport you use to make the HTTP request. For example, with cURL you can set a timeout for the HTTP request.
After this, reset the HTTP request timeout back to what it was so that you don't affect any other code.
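For instance, a minimal cURL sketch (the URL is the webservice endpoint from the question; the 2-second limits are just examples):
$ch = curl_init('http://my-Tera-WURFL-install.pl/webservicep.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);    // max seconds to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 2);           // max seconds for the whole request
$response = curl_exec($ch);
curl_close($ch);
if ($response === false) {
    // timeout or failure: treat the check as false and show the full website
    header('Location: http://website.pl/');
    exit;
}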
Also, I found this while researching the problem; you might want to give it a read, though I would say stay away from forking unless you are very well aware of how it works.
And just now Adelf posted that the TeraWurflRemoteClient class has a timeout of 1 second by default, so that solves your problem, but I will post my answer anyway.
I'm trying to make a proxy-like page that forwards an AJAX request to a SOAP server.
The browser sends 2 requests to the same page (i.e. server.php with different query string) every 10 seconds.
The server makes a SOAP call to the SOAP server depending on the query string.
All is working fine.
Then I put a sleep (40 seconds) in the SOAP server to simulate a slow response, and I also put a timeout on the caller to abort the call after some seconds.
server.php (pseudo code):
$timeout = 10;
ini_set("default_socket_timeout", $timeout);
$id = $_GET['id'];
$wsdl = 'http://soapserver/wsdl';
$client = new SoapClient($wsdl, array('connection_timeout' => $timeout));
print($client->getQuote($id));
If the browser sends an AJAX request to http://myserver/server.php?id=IBM, the request stops after the timeout I set.
If I make a second call before the first one stops, the second one does not respect the timeout.
i.e.
Request:
GET http://myserver/server.php?id=IBM
and after 1 second
GET http://myserver/server.php?id=AAP
Response:
after 10 seconds:
No data
after 20 seconds:
No data
I also tried not using PHP SOAP and using cURL instead, but I got the same results.
I also tried opening 3 tabs in my browser and calling:
http://myserver/server.php?id=IBM
http://myserver/server.php?id=AAP
http://myserver/server.php?id=MSX
The first one stops after 10 seconds, the second after 20 seconds and the third after 30 seconds.
Is this normal behaviour, or am I missing something?
Thanks in advance
You are probably starting sessions, and session_start() blocks a second call to it until the other request has 'freed' the session (in other words: has finished and will not write any data to the session anymore). For time-consuming requests, don't start a session if you don't need one, and if you DO need one, get all the data you need and then call session_write_close() BEFORE doing the time-consuming thing. If you need to write to the session afterwards, just call session_start() again.
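Applied to the server.php pseudo code above ($wsdl and $timeout as defined there), a minimal sketch would look like this:
session_start();                 // only if you really need the session at all
$id = $_GET['id'];
session_write_close();           // release the session lock before the slow call

$client = new SoapClient($wsdl, array('connection_timeout' => $timeout));
print($client->getQuote($id));   // the slow part no longer blocks parallel requests

// If you need to write to the session afterwards:
// session_start();
// $_SESSION['last_id'] = $id;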
I have a simple problem. I use PHP on the server side and produce HTML output. My site shows the status of another server. So the flow is:
Browser user goes to www.example.com/status
Browser contacts www.example.com/status
The PHP server receives the request and asks for the status on www.statusserver.com/status
PHP receives the data, transforms it into readable HTML output and sends it back to the client
Browser user can see the status.
Now, I've created a singleton class in PHP which accesses the status server only every 8 seconds. So it updates the status every 8 seconds; if a user requests an update in between, the server returns the locally (on www.example.com) stored status.
That's nice, isn't it? But then I did an easy test and opened 5 browser windows to see if it works. Here it comes: the PHP server created a singleton instance for each request. So now 5 clients each request the status from the status server every 8 seconds. This means that every 8 seconds I get 5 calls to the status server instead of one!
Isn't there a possibility to provide only one instance to all users within an Apache server? That would solve the problem in case 1000 users are connecting to www.example.com/status...
Thanks for any hints
=============================
EDIT:
I already use caching on the hard drive:
public function getFile($filename)
{
    $diff = (time() - filemtime($filename));
    //echo "diff:$diff<br/>";
    if ($diff > 8) {
        //echo 'greater than 8<br/>';
        self::updateFile($filename);
    }
    if (is_readable($filename)) {
        try {
            $returnValue = @ImageCreateFromPNG($filename);
            if ($returnValue == '') {
                sleep(1);
                return self::getFile($filename);
            } else {
                return $returnValue;
            }
        } catch (Exception $e) {
            sleep(1);
            return self::getFile($filename);
        }
    } else {
        sleep(1);
        return self::getFile($filename);
    }
}
This is the call in the singleton. I ask for a file and save it on the hard drive, but all the requests call it at the same time and start requesting the status server.
I think the only solution would be a standalone application which updates the file every 8 seconds... All requests would then just read the file and no longer be able to update it.
This standalone could be a Perl script or something similar...
PHP requests are handled by different processes, and each of them has its own state; there isn't any resident process like in other web development frameworks. You should handle that behaviour directly in your class, for instance with some caching.
The method which queries the server status should have this logic:
public function getStatus() {
    if (!$status = $cache->load()) {
        // cache miss
        $status = /* do your query here */;
        $cache->save($status); // store the result in cache
    }
    return $status;
}
In this way, only one request out of X will fetch the real status. The X value depends on your cache configuration.
Some cache library you can use:
APC
Memcached
Zend_Cache, which is just a wrapper for actual caching engines
Or you can store the result in a plain text file and, on every request, check the file's mtime and rewrite the file if more than xx seconds have passed.
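A minimal sketch of that file-based variant (the function name and cache path are hypothetical; the URL is the status server from the question):
function getStatus() {
    $file = '/tmp/status.cache';  // hypothetical cache file
    $ttl  = 8;                    // seconds before the cached status is stale
    if (!is_file($file) || time() - filemtime($file) > $ttl) {
        $status = file_get_contents('http://www.statusserver.com/status');
        file_put_contents($file, $status, LOCK_EX); // rewrite, refreshing the mtime
    }
    return file_get_contents($file);
}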
Update
Your code is pretty strange. Why all those sleep calls? Why a try/catch block when ImageCreateFromPNG does not throw?
You're asking a different question. Since PHP is not an application server and cannot store state across processes, your approach is correct. I suggest you use APC (it uses shared memory, so it would be at least 10x faster than reading a file) to share the status across different processes. With this approach your code could become:
public function getFile($filename)
{
$latest_update = apc_fetch('latest_update');
if (false == $latest_update) {
// cache expired or first request
apc_store('latest_update', time(), 8); // 8 is the ttl in seconds
// fetch file here and save on local storage
self::updateFile($filename);
}
// here you can process the file
return $your_processed_file;
}
With this approach, the code in the if branch will be executed by two different processes only if a process is interrupted just after the if line, which should not happen because it is almost an atomic operation.
Furthermore, if you want to guarantee that, you could use something like semaphores, but it would be an oversized solution for this kind of requirement.
Finally, IMHO 8 seconds is a small interval; I'd use something bigger, at least 30 seconds, but this depends on your requirements.
As far as I know it is not possible in PHP. However, you surely can serialize and cache the object instance.
Check out http://php.net/manual/en/language.oop5.serialization.php
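A minimal sketch of that idea (the class name and cache path are hypothetical):
$file = '/tmp/status_checker.obj';          // hypothetical cache path
$checker = is_file($file) ? unserialize(file_get_contents($file)) : false;
if (!$checker instanceof StatusChecker) {   // StatusChecker is a hypothetical class
    $checker = new StatusChecker();         // first request, or stale/corrupt cache
}
// ... use $checker, then persist it for the next request:
file_put_contents($file, serialize($checker));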