I have a simple script that redirects to the mobile version of a website if it detects that the user is browsing on a mobile phone. It uses the Tera-WURFL webservice to accomplish that, and it will be hosted separately from the Tera-WURFL installation itself. I want to protect it against Tera-WURFL hosting downtime: if my script takes more than a second to run, it should stop executing and just redirect to the regular website. How do I do that efficiently, so that the script doesn't put unnecessary load on the CPU?
EDIT: It looks like the TeraWurflRemoteClient class has a timeout property; read below. Now I need to find out how to use it in my script so that it redirects to the regular website in case of a timeout.
Here is the script:
<?php
// Include the remote client (its source is quoted below)
require_once 'TeraWurflRemoteClient.php';

// Instantiate a new TeraWurflRemoteClient object
$wurflObj = new TeraWurflRemoteClient('http://my-Tera-WURFL-install.pl/webservicep.php');

// Define which capabilities you want to test for. Full list: http://wurfl.sourceforge.net/help_doc.php#product_info
$capabilities = array("product_info");

// Define the response format (XML or JSON)
$data_format = TeraWurflRemoteClient::$FORMAT_JSON;

// Call the remote service (the first parameter is the User Agent - leave it as null
// to let TeraWurflRemoteClient read the user agent from the server global variable)
$wurflObj->getCapabilitiesFromAgent(null, $capabilities, $data_format);

// Use the results to serve the appropriate interface
if ($wurflObj->getDeviceCapability("is_tablet") || !$wurflObj->getDeviceCapability("is_wireless_device") || $_GET["ver"] == "desktop") {
    header('Location: http://website.pl/');   // default index file
} else {
    header('Location: http://m.website.pl/'); // where to go
}
?>
And here is the relevant part of TeraWurflRemoteClient.php that is being included. It has an optional timeout argument, as mentioned in the documentation:
// The timeout in seconds to wait for the server to respond before giving up
$timeout = 1;
The TeraWurflRemoteClient class has a timeout property, and it is 1 second by default, as I see in the documentation.
So this script won't run for longer than a second.
Try achieving this by setting a very short timeout on the HTTP request to Tera-WURFL inside their class, so that if the response doesn't come back within 2-3 seconds, you treat the check as false and show the full website.
Where to set a shorter timeout depends on the transport you use to make the HTTP request; with cURL, for example, you can set a timeout on the request itself.
Afterwards, reset the HTTP request timeout back to what it was so that you don't affect any other code.
I also found this while researching the topic; you might want to give it a read, though I would stay away from forking unless you are very well aware of how it works.
And Adelf has just posted that the TeraWurflRemoteClient class has a timeout of 1 second by default, so that solves your problem, but I will post my answer anyway.
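Since the remote client's timeout already defaults to 1 second, the remaining piece is falling back to the regular website when the webservice can't be reached. This is only a rough sketch: whether the client throws an exception, fills an errors array, or simply returns empty capabilities on timeout depends on the TeraWurflRemoteClient version, so the try/catch failure check below is an assumption to verify against your copy of the class:
<?php
require_once 'TeraWurflRemoteClient.php';

$isMobile = false;
try {
    // timeout defaults to 1 second in the client source quoted above
    $wurflObj = new TeraWurflRemoteClient('http://my-Tera-WURFL-install.pl/webservicep.php');
    $capabilities = array("product_info");
    $data_format  = TeraWurflRemoteClient::$FORMAT_JSON;
    $wurflObj->getCapabilitiesFromAgent(null, $capabilities, $data_format);
    $isMobile = $wurflObj->getDeviceCapability("is_wireless_device")
        && !$wurflObj->getDeviceCapability("is_tablet");
} catch (Exception $e) {
    // webservice unreachable or timed out: fall back to the regular site
    $isMobile = false;
}

if (!$isMobile || (isset($_GET["ver"]) && $_GET["ver"] == "desktop")) {
    header('Location: http://website.pl/');   // regular website
} else {
    header('Location: http://m.website.pl/'); // mobile version
}
exit;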
I have a PHP script making requests to some web site. I run this script from the command line, so no web server on my side is involved: just pure PHP and a shell.
The response is split into pages, so I need to make multiple requests to get all the data in one script run. Obviously, the request URL is identical except for one parameter. Nothing complicated:
$base_url = '...';
$pages = ...;  // a number I receive elsewhere
$delay = ...;  // a delay to avoid too many requests
$p = 0;
while ($p < $pages) {
    $url = $base_url . "&some_param=$p";
    ... // Here cURL takes its turn because of cookies
    sleep($delay);
    $p++;
}
The pages I get this way all look the same, like the first one that was requested. (So I just get the same list repeated as many times as there are pages.)
I decided that this happens because of some caching on the web server's end which persists despite an additional random parameter I pass. Closing and reinitializing the cURL session doesn't help either.
I also noticed that if I quickly change the initial $p value manually (so the requests start from a different page) and then launch the script again, the result changes. I do this faster than the $delay value.
It means that two different requests made from the same script run give the same result, while two different requests made from two different script runs give different results, regardless of the delay between the requests. So it can't just be caching on the responding side.
I tried to work around that and wrapped the actual request in a separate script which I run using exec() from the main script. So there is (or should be, I assume) a separate shell instance for every single page request, and those requests should not share any kind of cache between them.
Despite that, I keep getting the same page again. The code looks something like this:
$pages = ...;
$delay = ...;
$p = 0;
$command_stub = 'php get_single_page.php';
while ($p < $pages) {
    $command = $command_stub . " $p";
    exec($command, $response);
    // $response is the same again for different $p's
    sleep($delay);
    $p++;
}
If I again change the starting page manually in the script, I get a result for that page over and over again, until I change it once more, and so on. Several minutes may pass between two runs of the main script, and it still yields identical results until I switch the number by hand.
I can't comprehend why this is happening. Can somebody explain it?
The short answer is no: cURL certainly doesn't retain anything between executions unless configured to do so (e.g. by setting a cookie file).
I suspect the server expects a session token of some sort (a cookie or another HTTP header is my guess). Without the session token it will just ignore the request for subsequent pages.
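If that is what's happening, persisting the cookies (and making one initial request so the server can hand out its session token) usually fixes it. A minimal sketch, assuming the site uses a session cookie; the first "landing page" request is an assumption about how the token gets issued:
<?php
$base_url  = '...';                                  // as in the question
$pages     = 10;                                     // example value
$delay     = 1;                                      // seconds between requests
$cookieJar = tempnam(sys_get_temp_dir(), 'cookies'); // cookie storage for this run

$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEJAR,  $cookieJar);    // write received cookies here
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar);    // and send them back on later requests

// hit the landing page once so the server can set its session cookie
curl_setopt($ch, CURLOPT_URL, $base_url);
curl_exec($ch);

for ($p = 0; $p < $pages; $p++) {
    curl_setopt($ch, CURLOPT_URL, $base_url . "&some_param=$p");
    $page = curl_exec($ch);
    // ... parse $page here ...
    sleep($delay);
}
curl_close($ch);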
I have been using the library repejota/phpnats to develop a NATS client that can subscribe to a particular channel. But after connecting, receiving a few messages and sitting idle for some 30 seconds, it disconnects itself without any interruption. However, my Node.js client works fine with the same NATS server.
Here is how I am subscribing...
$c->subscribe(
'foo',
function ($message) {
echo $message->getBody();
}
);
$c->wait();
Any suggestions/help???
Thanks!
Was this just the default PHP timeout killing it off?
Maybe something like this:
ini_set('max_execution_time', 180); // gives about 3 minutes for example
By default, PHP scripts can't live forever, as PHP should rather be considered stateless. This is by design; the default life span is 30 seconds (hosts usually extend that to 180 seconds, but that's not really relevant here). You can extend that time yourself by setting max_execution_time to any value (with 0 meaning "forever"), but that's not recommended unless you know that's what you want. If not, a commonly used approach is to make the script invoke itself (e.g. via a GET request), passing some parameters so the invoked script can resume where the caller finished.
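For completeness, the self-invocation pattern mentioned above could look roughly like this; process_items(), more_work_remaining() and the offset parameter are made-up placeholders for illustration:
<?php
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$batch  = 100; // how many items one invocation handles

process_items($offset, $batch); // placeholder for the real work

if (more_work_remaining($offset + $batch)) { // placeholder check
    // hand the rest off to a fresh request before max_execution_time is hit
    header('Location: ' . $_SERVER['PHP_SELF'] . '?offset=' . ($offset + $batch));
    exit;
}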
use Nats\Connection;
use Nats\ConnectionOptions;

$options = new ConnectionOptions();
$options->setHost('127.0.0.1')->setPort(4222);
$client = new Connection($options);
$client->connect(-1);
You need to pass -1 as the connect() parameter.
I have noticed that a few websites, such as hypem.com, show a "You didn't get served" error message when the site is busy, rather than just letting people wait, time out or refresh and aggravate what is probably a server load issue.
We are too loaded to process your request. Please click "back" in your
browser and try what you were doing again.
How is this achieved before the server becomes overloaded? It sounds like a really neat way to manage user expectations if a site happens to get overloaded, whilst also giving the site time to recover.
Another option is this:
$load = sys_getloadavg();
if ($load[0] > 80) {
    header('HTTP/1.1 503 Too busy, try again later');
    die('Server too busy. Please try again later.');
}
I got it from PHP's site, http://php.net/sys_getloadavg, although I'm not sure what the values returned by sys_getloadavg represent.
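For what it's worth, sys_getloadavg() returns the 1-, 5- and 15-minute system load averages, so a fixed threshold of 80 only makes sense on a very large machine. Here is a sketch that scales the threshold with the number of CPU cores; the nproc call is a Linux-only assumption:
<?php
$load  = sys_getloadavg();                 // [1 min, 5 min, 15 min] load averages
$cores = (int) trim(shell_exec('nproc'));  // Linux only; adjust for your platform
$cores = $cores > 0 ? $cores : 1;

// refuse new requests once the 1-minute load exceeds roughly twice the core count
if ($load[0] > $cores * 2) {
    header('HTTP/1.1 503 Service Unavailable');
    header('Retry-After: 60');
    die('Server too busy. Please try again later.');
}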
You could simply create a 500.html file and have your webserver use that whenever a 50x error is thrown.
E.g. in your Apache config:
ErrorDocument 500 /errors/500.html
Or use a PHP shutdown function to check whether the request timeout (which defaults to 30s) has been reached and, if so, redirect to or render something static (so that rendering the error itself cannot cause problems).
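A sketch of that shutdown-function idea: register_shutdown_function() also runs after a "Maximum execution time exceeded" fatal error, so the last error can be inspected there. Detecting the timeout by matching the error message text is a heuristic, and the /errors/500.html path is the one from the Apache example above:
<?php
register_shutdown_function(function () {
    $error = error_get_last();
    // an E_ERROR whose message starts with "Maximum execution time" means the request timed out
    if ($error !== null
        && $error['type'] === E_ERROR
        && strpos($error['message'], 'Maximum execution time') === 0) {
        if (!headers_sent()) {
            header('HTTP/1.1 503 Service Unavailable');
        }
        // keep the error page static so rendering it cannot itself time out
        readfile(__DIR__ . '/errors/500.html');
    }
});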
Note that most sites where you'll see a "This site is taking too long to respond" message are effectively generating that message with javascript.
This may be due to the database connection timing out, but that assumes your server has a bigger DB load than CPU load when times get tough. If that is the case, you can make your DB connector show the message if no connection is established within 1 second.
You could also use a quick query to the logs table to find out how many hits per second there are, and automatically stop responding beyond a certain point in order to preserve QoS for the others. In this case, you would have to set that level manually, based on server logs. An alternative method can be seen in the Drupal throttle module.
Another alternative would be to use the Apache status page to get information on how many child processes are free and to throttle if there are none left, as per @giltotherescue's answer to this question.
You can restrict the maximum number of connections in the Apache configuration too.
Refer to:
http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
http://www.howtoforge.com/configuring_apache_for_maximum_performance
This is not a strictly PHP solution, but you could do like Twitter, i.e.:
serve a mostly static HTML and Javascript app from a CDN or another server of yours
the calls to the actual heavy server-side work (PHP functions/APIs in your case) are done via AJAX from one of your static JS files
this way you can set a timeout on your AJAX calls and return a notice like "Seems like loading tweets may take longer than expected".
You can use the PHP tick function to detect when the server hasn't responded within a specified amount of time, and then display an error message. Basic usage:
<?php
$connection = false;

function checkConnection($connectionWaitingTime = 3)
{
    // check connection & time
    global $time, $connection;
    if (($t = (time() - $time)) >= $connectionWaitingTime && !$connection) {
        echo "<p>Server not responding for <strong>$t</strong> seconds !!</p>";
        die("Connection aborted");
    }
}

register_tick_function("checkConnection");
$time = time();

declare (ticks=1)
{
    require 'yourapp.php'; // load your main app logic
    $connection = true;
}
The require line stands in for your app logic; in the original demo it was a while(true) loop, just to simulate a loaded server. To implement this on your site, replace that line with your own page logic, e.g. dispatching an event or a front controller action.
The $connectionWaitingTime parameter of the checkConnection function is set to time out after 3 seconds, but you can change that to whatever you want.
I have a simple problem. I use PHP on the server side and produce HTML output. My site shows the status of another server, so the flow is:
The user goes to www.example.com/status in the browser
The browser contacts www.example.com/status
The PHP server receives the request and asks for the status on www.statusserver.com/status
PHP receives the data, transforms it into readable HTML output and sends it back to the client
The user can see the status.
Now, I've created a singleton class in PHP which accesses the status server only every 8 seconds, so it updates the status every 8 seconds. If a user requests an update in between, the server returns the locally (on www.example.com) stored status.
That's nice, isn't it? But then I ran an easy test and opened 5 browser windows to see if it works. Here's the thing: the PHP server created a singleton instance for each request. So now 5 clients request the status from the status server every 8 seconds. This means I have 5 calls to the status server every 8 seconds instead of one!
Isn't there a possibility to share a single instance across all users within an Apache server? That would solve the problem in case 1000 users are connecting to www.example.com/status...
Thanks for any hints.
=============================
EDIT:
I already use caching on the hard drive:
public function getFile($filename)
{
    $diff = (time() - filemtime($filename));
    //echo "diff:$diff<br/>";
    if ($diff > 8) {
        //echo 'greater than 8<br/>';
        self::updateFile($filename);
    }
    if (is_readable($filename)) {
        try {
            $returnValue = @ImageCreateFromPNG($filename);
            if ($returnValue == '') {
                sleep(1);
                return self::getFile($filename);
            } else {
                return $returnValue;
            }
        } catch (Exception $e) {
            sleep(1);
            return self::getFile($filename);
        }
    } else {
        sleep(1);
        return self::getFile($filename);
    }
}
This is the call in the singleton. I request a file and save it on the hard drive, but all the requests call it at the same time and start querying the status server.
I think the only solution would be a standalone application which updates the file every 8 seconds; all requests would then just read the file and no longer be able to update it.
This standalone process could be a Perl script or something similar...
PHP requests are handled by different processes, and each of them has its own state; there isn't any resident process like in other web development frameworks. You should handle that behaviour directly in your class, for instance with some caching.
The method which queries the server status should have this logic:
public function getStatus() {
    if (!$status = $cache->load()) {
        // cache miss
        $status = // do your query here
        $cache->save($status); // store the result in cache
    }
    return $status;
}
In this way only one request out of X will fetch the real status. The X value depends on your cache configuration.
Some cache library you can use:
APC
Memcached
Zend_Cache which is just a wrapper for actual caching engines
Or you can store the result in a plain text file, check the mtime of the file on every request, and rewrite it if more than xx seconds have passed.
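A minimal sketch of that plain-file variant, assuming a hypothetical query_status_server() function and a placeholder cache path:
<?php
function getStatus()
{
    $cacheFile = '/tmp/status.cache'; // placeholder path
    $ttl       = 8;                   // seconds, as in the question

    if (is_readable($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return file_get_contents($cacheFile); // cache hit
    }

    $status = query_status_server();  // placeholder for the real remote query
    file_put_contents($cacheFile, $status, LOCK_EX);
    return $status;
}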
Update
Your code is pretty strange; why all those sleep calls? And why a try/catch block when ImageCreateFromPNG does not throw?
You're now asking a different question. Since PHP is not an application server and cannot store state across processes, your approach is correct. I suggest you use APC (it uses shared memory, so it should be at least 10x faster than reading a file) to share the status across different processes. With this approach your code could become:
public function getFile($filename)
{
    $latest_update = apc_fetch('latest_update');
    if (false == $latest_update) {
        // cache expired or first request
        apc_store('latest_update', time(), 8); // 8 is the TTL in seconds
        // fetch the file here and save it on local storage
        self::updateFile($filename);
    }
    // here you can process the file
    return $your_processed_file;
}
With this approach, the code in the if block will be executed by two different processes only if a process is interrupted right after the if line, which should not happen because it is almost an atomic operation.
Furthermore, if you want to guarantee that, you could use something like semaphores, but that would be an oversized solution for this kind of requirement.
Finally, IMHO 8 seconds is a small interval; I'd use something bigger, at least 30 seconds, but this depends on your requirements.
As far as I know it is not possible in PHP. However, you can certainly serialize and cache the object instance.
Check out http://php.net/manual/en/language.oop5.serialization.php
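A tiny illustration of that approach; the Status class and the cache path are made up for the example:
<?php
class Status
{
    public $value;
    public $fetchedAt;
}

$cacheFile = '/tmp/status.ser'; // placeholder path

// at the end of one request: store the instance
$status = new Status();
$status->value     = 'OK';
$status->fetchedAt = time();
file_put_contents($cacheFile, serialize($status), LOCK_EX);

// in another request: restore it
$restored = unserialize(file_get_contents($cacheFile));
echo $restored->value;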
Consider the following scenario:
http://www.restserver.com/example.php returns some content that I want to work with in my web-application.
I don't want to load it using ajax (SEO issues etc.)
My page takes 100ms to generate, the REST resource also takes 100ms to be loaded.
We assume that the 100ms generation time of my website occurs before I begin working with the REST resource. What comes after that can be neglected.
Example Code:
Index.php of my website
<?php
do_some_heavy_mysql_stuff(); // takes 100 ms
get_rest_resource(); // takes 100 ms
render_html_with_data_from_mysql_and_rest(); // takes neglectable amount of time
?>
Website will take ~200ms to generate.
I want to turn this into:
<?php
Restclient::initiate_rest_loading(); // takes 0ms
do_some_heavy_mysql_stuff(); // takes 100 ms
Restclient::get_rest_resource(); // takes 0 ms because 100 ms have already passed since initiation
render_html_with_data_from_mysql_and_rest(); // takes neglectable amount of time
?>
Website will take ~100ms to generate.
To accomplish this I thought about using something like this:
(I am pretty sure this code will not work, because this question is all about asking how to accomplish this and whether it's possible; I just thought some naive code could demonstrate it best.)
class Restclient {
    public static $buffer;
    public static $handle;

    public static function initiate_rest_loading() {
        // open resource
        self::$handle = fopen("http://www.restserver.com/example.php", "r");
        // set to non-blocking so fgets will return immediately
        stream_set_blocking(self::$handle, 0);
        // initiate loading, but return immediately to continue website generation
        fgets(self::$handle, 40960);
    }

    public static function get_rest_resource() {
        // set stream to blocking again because now we really want the data
        stream_set_blocking(self::$handle, 1);
        // get the data and save it so templates can work with it
        self::$buffer = fgets(self::$handle, 40960);
    }
}
So, final questions:
Is this possible and how?
What do I have to keep an eye on (internal buffer overflows, stream lengths, etc.)?
Are there better methods?
Does this work well with HTTP resources?
Any input is appreciated!
I hope I explained it understandably. If anything is unclear, please leave a comment so I can rephrase it!
As "any input is appreciated", here is mine:
What you want is called asynchronous processing (you want to do something while something else is being done "in the background").
To solve your problem, I thought of this:
Separate do_some_heavy_mysql_stuff and get_rest_resource into two different PHP scripts.
Use cURL's "multi" ability to do simultaneous requests. Please check:
curl_multi_init and related PHP functions
Simultaneous HTTP requests in PHP with cURL
This way, you can run both scripts at the same time. Using cURL's multi features, you can call http://example.com/do_some_heavy_mysql_stuff.php and http://example.com/get_rest_resource.php simultaneously, and then work with the results as soon as they're available.
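A minimal curl_multi sketch along those lines; the two URLs are the hypothetical endpoints mentioned above:
<?php
$urls = array(
    'http://example.com/do_some_heavy_mysql_stuff.php',
    'http://example.com/get_rest_resource.php',
);

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5); // don't wait forever for either endpoint
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// drive both requests in parallel
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // avoid busy-waiting
} while ($running > 0);

// collect the results once both are done
$results = array();
foreach ($handles as $url => $ch) {
    $results[$url] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);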
These are my first thoughts, and I'm sharing them with you. Maybe there are different and more interesting approaches... Good luck!