In my routes.php I have a debug filter like so:
Route::filter('debug', function() {
    if (App::environment() !== 'dev') { return; }
    error_log("\n\n\n\n REQUEST NO. " . $staticRequestCount++ . "\n\n");
    // log the request headers
    // log the request body
});
I'm a noob in both PHP and Laravel. Is it possible to create a static requestCount variable as above which keeps increasing all the time until you restart the server (or similar)?
In PHP it's not possible to share a variable across different requests without using external storage. Each request is served by a separate process or thread, depending on the Apache worker implementation, so the code won't be able to share a common in-memory variable to serve as a counter.
You can do it by writing the counter value to a cache. Check out APC or memcached.
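For example, a rough sketch using APC's atomic increment (this assumes the APC extension is enabled; with APCu you would use the apcu_* equivalents):
Route::filter('debug', function() {
    if (App::environment() !== 'dev') { return; }
    apc_add('request_count', 0);        // create the counter if it does not exist yet
    $count = apc_inc('request_count');  // atomically increment and read it
    error_log("\n\n\n\n REQUEST NO. " . $count . "\n\n");
    // log the request headers
    // log the request body
});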
I don't think it's possible. You cannot detect from PHP whether the server was restarted. But you can simply save such a counter into a file, read it from the file each time you run your filter, increase it and save the modified value. Of course it won't be automatically deleted (or reset to 0) if the server is restarted.
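A minimal sketch of that file-based counter (the path is just an example; flock() keeps concurrent requests from clobbering each other):
$counterFile = '/tmp/request_count.txt'; // arbitrary location
$fp = fopen($counterFile, 'c+');         // open for read/write, create if missing
if ($fp && flock($fp, LOCK_EX)) {        // lock so concurrent requests don't race
    $count = (int) stream_get_contents($fp) + 1;
    ftruncate($fp, 0);
    rewind($fp);
    fwrite($fp, (string) $count);
    flock($fp, LOCK_UN);
    fclose($fp);
    error_log("REQUEST NO. " . $count);
}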
I have a PHP script that connects to my other server using file_get_contents, and then retrieves and displays the data.
// authorize connection to the ext. server
$xml_data = file_get_contents("http://server.com/connectioncounts");

$doc = new DOMDocument();
$doc->loadXML($xml_data);

// variables to check for name / connection count
$wmsast = $doc->getElementsByTagName('Name');
$wmsasct = $wmsast->length;

// start the loop that fetches and displays each name
for ($sidx = 0; $sidx < $wmsasct; $sidx++) {
    $strname = $wmsast->item($sidx)->getElementsByTagName("WhoIs")->item(0)->nodeValue;
    $strctot = $wmsast->item($sidx)->getElementsByTagName("Sessions")->item(0)->nodeValue;

    /**************************************
    Display only one instance of their name.
    strpos will check to see if the string contains a _ character
    **************************************/
    if (strpos($strname, '_') !== FALSE) {
        // null. ignoring any duplicates
    } else {
        // Leftovers. This section contains the names that are only the BASE (no _jibberish, etc)
        echo $sidx . " <b>Name: </b>" . $strname . " Sessions: " . $strctot . "<br />";
    } // end display base check
} // end name loop
From the client side, I'm calling this script with jQuery's load() and triggering it on mousemove().
$(document).mousemove(function(event){
    $('.xmlData').load('./connectioncounts.php').fadeIn(1000);
});
And I've also experimented with setInterval, which works just as well:
var auto_refresh = setInterval(
    function () {
        $('.xmlData').load('./connectioncounts.php').fadeIn("slow");
    }, 1000); // refresh, 1000 milli = 1 second
It all works and the contents appear in "real time", but I can already notice an effect on performance and it's just me using it.
I'm trying to come up with a better solution but falling short. The problem with what I have now is that each client would be forcing the script to initiate a new connection to the other server, so I need a solution that will consistently keep the information updated without involving the clients making a new connection directly.
One idea I had was to use a cron job that executes the script, and modify the PHP to write the contents to a cache. Then I could simply get the contents of that cache from the client side. This would mean that there is only one connection being made instead of forcing a new connection every time a client wants the data.
The only problem is that the cron would have to be run frequently, like every few seconds. I've read about people running cron this much before, but every instance I've come across isn't making an external connection each time as well.
Is there any option for me other than cron to achieve this or in your experience is that good enough?
How about this:
When the first client reads your data, you retrieve them from the remote server and cache them together with a timestamp.
When the next clients read the same data, you check how old the cached contents are, and only if they are older than 2 seconds (or whatever) do you access the remote server again.
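A rough sketch of that idea using a local file as the cache (the file name and the 2-second threshold are just placeholders):
// connectioncounts.php - serve the remote XML from a short-lived local cache (sketch)
$cacheFile = __DIR__ . '/connectioncounts.cache'; // arbitrary cache location
$maxAge    = 2;                                   // seconds before we refetch

if (!is_file($cacheFile) || time() - filemtime($cacheFile) > $maxAge) {
    // cache is missing or stale: hit the remote server once and store the result
    $xml_data = file_get_contents("http://server.com/connectioncounts");
    file_put_contents($cacheFile, $xml_data, LOCK_EX);
} else {
    // cache is fresh: this client causes no remote connection at all
    $xml_data = file_get_contents($cacheFile);
}
// ... continue with the DOMDocument parsing from the question ...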
Make yourself familiar with APC as a global storage. Once you have fetched the file, store it in the APC cache and set a timeout. You then only need to connect to the remote server when the page is not in the cache or is outdated.
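For example, something along these lines (a sketch assuming the APC extension; the key name and TTL are arbitrary):
$xml_data = apc_fetch('connectioncounts_xml');
if ($xml_data === false) {
    // not cached yet or the TTL expired: contact the remote server once
    $xml_data = file_get_contents("http://server.com/connectioncounts");
    apc_store('connectioncounts_xml', $xml_data, 2); // keep it for 2 seconds
}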
Mousemove: are you sure? That generates gazillions of parallel requests unless you set a client-side flag to stop issuing further AJAX queries.
I have a PHP script that has to reload a page on the client (server push) when something specific happens on the server. So I have to listen for changes. My idea is to have a text file that contains the number of page loads for the current page. So I would like to monitor the file and as soon as it is modified, to use server push in order to update the content on the client. The question is how to track the file for changes in PHP?
You could do something like:
<?php
$lastMtime = 0;
while (true) {
    clearstatcache();            // stat results are cached, so clear them each iteration
    $file = stat('/file');
    if ($file['mtime'] != $lastMtime) {
        $lastMtime = $file['mtime'];
        // ... Do Something Here ...
    }
    sleep(1);
}
This will continuously look for a change in the modified time of a file every second. If you don't constrain it you could kill your disk IO and may need to adjust your ulimit.
This will check your file for a change:
<?php
$current_contents = "";

function checkForChange($filepath) {
    global $current_contents;
    $new_contents = file_get_contents($filepath);
    if (strcmp($new_contents, $current_contents) !== 0) {
        $current_contents = $new_contents;
        return true;
    }
    return false;
}
But that will not solve your problem. The PHP file that serves the client finishes executing before the rendered HTML is sent to the client. That client will need to call back to some PHP file to check for a change... and since that is also an HTTP request, the script will finish executing and forget anything held in memory.
In order to properly solve this, you'll probably have to back off the idea of checking a file. Either the server needs to know when and how to contact currently connected clients, or those clients need to poll a lightweight service at a regular interval.
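As an illustration of that last option, a lightweight service could be as small as an endpoint that reports the counter file's modification time, which the client polls at a regular interval (the file name here is made up):
// check.php - minimal polling endpoint sketch
$file = __DIR__ . '/pageloads.txt'; // the counter file described in the question
header('Content-Type: application/json');
echo json_encode(array(
    'mtime' => is_file($file) ? filemtime($file) : 0,
));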
This is sort of hacky, but what about creating a cron job that pulls in the page, stores it in a cache or table, and then simply comparing it every 30 seconds?
I have a simple script that redirects to the mobile version of a website if it finds that the user is browsing on a mobile phone. It uses the Tera-WURFL web service to accomplish that, and it will be placed on different hosting than Tera-WURFL itself. I want to protect it in case of Tera-WURFL hosting downtime. In other words, if my script takes more than a second to run, stop executing it and just redirect to the regular website. How do I do this effectively (so that the CPU would not be overly burdened by the script)?
EDIT: It looks like the TeraWurflRemoteClient class has a timeout property. Read below. Now I need to find out how to include it in my script, so that it redirects to the regular website in case of this timeout.
Here is the script:
// Instantiate a new TeraWurflRemoteClient object
$wurflObj = new TeraWurflRemoteClient('http://my-Tera-WURFL-install.pl/webservicep.php');
// Define which capabilities you want to test for. Full list: http://wurfl.sourceforge.net/help_doc.php#product_info
$capabilities = array("product_info");
// Define the response format (XML or JSON)
$data_format = TeraWurflRemoteClient::$FORMAT_JSON;
// Call the remote service (the first parameter is the User Agent - leave it as null to let TeraWurflRemoteClient find the user agent from the server global variable)
$wurflObj->getCapabilitiesFromAgent(null, $capabilities, $data_format);
// Use the results to serve the appropriate interface
if ($wurflObj->getDeviceCapability("is_tablet") || !$wurflObj->getDeviceCapability("is_wireless_device") || $_GET["ver"] == "desktop") {
    header('Location: http://website.pl/'); // default index file
} else {
    header('Location: http://m.website.pl/'); // where to go
}
?>
And here is the source of TeraWurflRemoteClient.php that is being included. It has an optional timeout argument, as mentioned in the documentation:
// The timeout in seconds to wait for the server to respond before giving up
$timeout = 1;
The TeraWurflRemoteClient class has a timeout property, and it is 1 second by default, as I see in the documentation.
So this script won't execute for longer than a second.
Try achieving this by setting a very short timeout on the HTTP request to TeraWurfl inside their class, so that if the response doesn't come back in like 2-3 secs, consider the check to be false and show the full website.
The place to look for setting a shorter timeout might vary depending on the transport you use to make your HTTP request. In cURL, for example, you can set a timeout on the HTTP request.
After this, reset your HTTP request timeout back to what it was so that you don't affect any other code.
Also, I found this while researching; you might want to give it a read, though I would say stay away from forking unless you are very well aware of how things work.
And just now Adelf posted that the TeraWurflRemoteClient class has a timeout of 1 second by default, so that solves your problem, but I will post my answer anyway.
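If you do end up making the HTTP request through cURL yourself, the timeout options look roughly like this (the URL is the one from the question; the thresholds are arbitrary):
$ch = curl_init('http://my-Tera-WURFL-install.pl/webservicep.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1); // give up connecting after 1 second
curl_setopt($ch, CURLOPT_TIMEOUT, 2);        // give up entirely after 2 seconds
$response = curl_exec($ch);
curl_close($ch);
if ($response === false) {
    // service unreachable or too slow: fall back to the full website
    header('Location: http://website.pl/');
    exit;
}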
I have noticed that a few websites, such as hypem.com, show a "You didn't get served" error message when the site is busy, rather than just letting people wait, time out or refresh, which would aggravate what is probably already a server load issue.
We are too loaded to process your request. Please click "back" in your
browser and try what you were doing again.
How is this achieved before the server becomes overloaded? It sounds like a really neat way to manage user expectation if a site happens to get overloaded whilst also giving the site time to recover.
Another option is this:
$load = sys_getloadavg();
if ($load[0] > 80) {
header('HTTP/1.1 503 Too busy, try again later');
die('Server too busy. Please try again later.');
}
I got it from PHP's site, http://php.net/sys_getloadavg, although I'm not sure what the values returned by sys_getloadavg represent (they appear to be the system load averages over the last 1, 5 and 15 minutes).
You could simply create a 500.html file and have your webserver use that whenever a 50x error is thrown.
I.e. in your apache config:
ErrorDocument 500 /errors/500.html
Or use a PHP shutdown function to check whether the request timeout (which defaults to 30s) has been reached and, if so, redirect to or render something static (so that rendering the error itself cannot cause problems).
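A rough sketch of that shutdown-function idea (the error page path is illustrative, and the threshold simply assumes the default 30s max_execution_time):
$start = microtime(true);
register_shutdown_function(function () use ($start) {
    $elapsed = microtime(true) - $start;
    // if we got close to max_execution_time, assume the request timed out
    if ($elapsed >= (int) ini_get('max_execution_time') - 1) {
        // serve a pre-rendered static page so the error page itself cannot fail
        readfile(__DIR__ . '/errors/500.html');
    }
});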
Note that most sites where you'll see a "This site is taking too long to respond" message are effectively generating that message with javascript.
This may be to do with the database connection timing out, but that assumes that your server has a bigger DB load than CPU load when times get tough. If this is the case, you can make your DB connector show the message if no connection happens for 1 second.
You could also use a quick query to the logs table to find out how many hits/second there are and automatically not respond to any more after a certain point in order to preserve QOS for the others. In this case, you would have to set that level manually, based on server logs. An alternative method can be seen here in the Drupal throttle module.
Another alternative would be to use the Apache status page to get information on how many child processes are free, and to throttle if there are none left, as per #giltotherescue's answer to this question.
You can restrict the maximum number of connections in the Apache configuration too...
Refer to:
http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
http://www.howtoforge.com/configuring_apache_for_maximum_performance
This is not a strictly PHP solution, but you could do like Twitter, i.e.:
serve a mostly static HTML and Javascript app from a CDN or another server of yours
the calls to the actual heavy-lifting server-side functions/APIs (PHP in your case) are done via AJAX from one of your static JS files
so you can set a timeout on your AJAX calls and return a "Seems like loading tweets may take longer than expected"-like notice.
You can use PHP's tick functions to detect when the page hasn't finished loading within a specified amount of time, and then display an error message. Basic usage:
<?php
$connection = false;

function checkConnection($connectionWaitingTime = 3)
{
    // check connection & time
    global $time, $connection;
    if (($t = (time() - $time)) >= $connectionWaitingTime && !$connection) {
        echo ("<p> Server not responding for <strong>$t</strong> seconds !! </p>");
        die("Connection aborted");
    }
}

register_tick_function("checkConnection");
$time = time();

declare (ticks=1)
{
    require 'yourapp.php'; // load your main app logic
    $connection = true;
}
The while(true) in the original example was just there to simulate a loaded server. To implement the script in your site, remove that while statement and add your page logic (e.g. a dispatch event or front controller action), as the require 'yourapp.php' line above indicates.
And the $connectionWaitingTime in the checkConnection function is set to time out after 3 seconds, but you can change that to whatever you want.
I have a simple problem. I use PHP on the server side and output HTML. My site shows the status of another server. So the flow is:
Browser user goes on www.example.com/status
Browser contacts www.example.com/status
PHP server receives the request and asks for the status on www.statusserver.com/status
PHP receives the data, transforms it into readable HTML output and sends it back to the client
Browser user can see the status.
Now, I've created a singleton class in PHP which accesses the status server only every 8 seconds, so it updates the status every 8 seconds. If a user requests an update in between, the server returns the locally (on www.example.com) stored status.
That's nice, isn't it? But then I did an easy test and started 5 browser windows to see if it works. Here it comes: the PHP server created a singleton instance for each request. So now 5 clients each request the status from the status server every 8 seconds; this means that every 8 seconds I have 5 calls to the status server instead of one!
Isn't there a possibility to provide only one instance to all users within an Apache server? That would solve the problem in case 1000 users are connecting to www.example.com/status....
thx for any hints
=============================
EDIT:
I already use caching on the hard drive:
public function getFile($filename)
{
    $diff = (time() - filemtime($filename));
    //echo "diff:$diff<br/>";
    if ($diff > 8) {
        //echo 'greater than 8<br/>';
        self::updateFile($filename);
    }

    if (is_readable($filename)) {
        try {
            $returnValue = @ImageCreateFromPNG($filename);
            if ($returnValue == '') {
                sleep(1);
                return self::getFile($filename);
            } else {
                return $returnValue;
            }
        } catch (Exception $e) {
            sleep(1);
            return self::getFile($filename);
        }
    } else {
        sleep(1);
        return self::getFile($filename);
    }
}
This is the call in the singleton. I request a file and save it on the hard drive, but all the requests call it at the same time and start requesting the status server.
I think the only solution would be a standalone application which updates the file every 8 seconds... All requests should then just read the file and no longer be able to update it.
This standalone could be a Perl script or something similar...
PHP requests are handled by different processes, and each of them has its own state; there isn't any resident process like in other web development frameworks. You should handle that behavior directly in your class, for instance with some caching.
The method which queries the server status should have this logic:
public function getStatus() {
    // $cache is assumed to be an instance of one of the caching libraries listed below
    if (!$status = $cache->load()) {
        // cache miss
        $status = $this->queryStatusServer(); // hypothetical helper: do your real query here
        $cache->save($status);                // store the result in cache
    }
    return $status;
}
In this way, only one request out of X will fetch the real status; the value of X depends on your cache configuration (the TTL).
Some cache libraries you can use:
APC
Memcached
Zend_Cache which is just a wrapper for actual caching engines
Or you can store the result in a plain text file, and on every request check the mtime of the file itself and rewrite it if more than xx seconds have passed.
Update
Your code is pretty strange: why all those sleep calls? Why a try/catch block when ImageCreateFromPNG does not throw?
You're now asking a different question. Since PHP is not an application server and cannot store state across processes, your approach is correct. I suggest you use APC (it uses shared memory, so it would be at least 10x faster than reading a file) to share the status across different processes. With this approach your code could become:
public function getFile($filename)
{
    $latest_update = apc_fetch('latest_update');
    if (false == $latest_update) {
        // cache expired or first request
        apc_store('latest_update', time(), 8); // 8 is the ttl in seconds
        // fetch file here and save on local storage
        self::updateFile($filename);
    }
    // here you can process the file
    return $your_processed_file;
}
With this approach, the code in the if branch will be executed by two different processes only if a process is blocked just after the if line, which should not happen because it is almost an atomic operation.
Furthermore, if you want to guarantee that, you should use something like semaphores to handle it, but that would be an oversized solution for this kind of requirement.
Finally, IMHO 8 seconds is a small interval; I'd use something bigger, at least 30 seconds, but this depends on your requirements.
As far as I know it is not possible in PHP. However, you surely can serialize and cache the object instance.
Check out http://php.net/manual/en/language.oop5.serialization.php
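A minimal illustration of that idea: serialize the instance to disk once, and later requests just unserialize it (the path and the StatusChecker class are hypothetical stand-ins for your singleton):
$cacheFile = '/tmp/status_instance.cache'; // arbitrary location

if (is_file($cacheFile)) {
    $status = unserialize(file_get_contents($cacheFile));
} else {
    $status = new StatusChecker();         // hypothetical class standing in for your singleton
    file_put_contents($cacheFile, serialize($status), LOCK_EX);
}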