I have a log file maintained by a PHP script. The PHP script is subject to parallel processing. I cannot get the flock() mechanism to work on the log file: in my case, flock() does not prevent the log file shared by PHP scripts running in parallel from being accessed at the same time and being sometimes overwritten.
I want to be able to read a file, do some processing, modify the data and write it back, without the same code running in parallel on the server doing the same thing at the same time. The read-modify-write has to happen in sequence.
On one of my shared hostings (OVH France), it does not work as expected: the counter $c ends up with the same value in different iframes, which should not be possible if the lock works as expected (and it does work on another shared hosting).
Any suggestions to make this work, or for an alternative method?
Googling "read modify write" php, "fetch and add", or "test and set" did not provide useful information: all the solutions found rely on a working flock().
Here is some standalone demo code to illustrate. It generates a number of parallel requests from the browser to the server and displays the results. A malfunction is easy to spot visually: if your web server does not support flock(), like one of mine, the counter value and the number of log lines will be the same in some frames.
<!DOCTYPE html>
<html lang="en">
<title>File lock test</title>
<style>
iframe {
width: 10em;
height: 300px;
}
</style>
<?php
$timeStart = microtime(true);
if ($_GET) { // iframe
// GET
$time = $_GET['time'] ?? 'no time';
$instance = $_GET['instance'] ?? 'no instance';
// open file
// $mode = 'w+'; // no read
// $mode = 'r+'; // does not create file, we have to lock file creation also
$mode = 'c+'; // read, write, create
$fhandle = fopen(__FILE__ .'.rwtestfile.txt', $mode) or exit('fopen');
// lock
flock($fhandle, LOCK_EX) or exit('flock');
// go to start of file (optional here, but some modes require it)
rewind($fhandle);
// read file (or default initial value if new file)
$fcontent = fread($fhandle, 10000) ?: ' 0'; // '?:' (not 'or'), so the default actually applies to a new, empty file
// counter value from previous write is last integer value of file
$c = strrchr($fcontent, ' ') + 1;
// new line for file
$fcontent .= "<br />\n$time $instance $c";
// reset once in a while
if ($c > 20) {
$fcontent = ' 0'; // avoid long content
}
// simulate other activity
usleep(rand(1000, 2000));
// start of file
rewind($fhandle);
// write
fwrite($fhandle, $fcontent) or exit('fwrite');
// truncate (in the unexpected case the file is now shorter)
ftruncate($fhandle, ftell($fhandle)) or exit('ftruncate');
// close
fclose($fhandle) or exit('fclose');
// echo
echo "instance:$instance c:$c<br />";
echo $timeStart ."<br />";
echo microtime(true) - $timeStart ."<br />";
echo $fcontent ."<br />";
} else {
echo 'File lock test<br />';
// iframes that will be requested in parallel, to check flock
for ($i = 0; $i < 14; $i++) {
echo '<iframe src="?instance='. $i .'&time='. date('H:i:s') .'"></iframe>'."\n";
}
}
There is a warning about flock() limitations in the PHP manual for flock(), but it concerns ISAPI (Windows) and FAT (Windows). My server configuration is:
PHP Version 7.2.5
System: Linux cluster026.gra.hosting.ovh.net
Server API: CGI/FastCGI
A way to do an atomic test-and-set instruction in PHP is to use mkdir(). It is a bit strange to use a directory for that instead of a file, but mkdir() will create a directory or return false (plus a suppressible warning) if it already exists. File functions like fopen(), fwrite(), and file_put_contents() do not test and set in a single instruction.
<?php
// lock
$fnLock = __FILE__ .'.lock'; // lock directory filename
$lockLooping = 0; // counter can be used for tuning depending on lock duration
do {
if (@mkdir($fnLock, 0777)) { // mkdir is a test-and-set command
$lockLooping = 0;
} else {
$lockLooping += 1;
clearstatcache(); // filemtime() results are cached, refresh before re-checking
$lockAge = time() - filemtime($fnLock);
if ($lockAge > 10) {
rmdir($fnLock); // robustness, in case a lock was not erased
} else {
// wait without consuming CPU before try again
usleep(rand(2500, 25000)); // random to avoid parallel process conflict again
}
}
} while ($lockLooping > 0);
// do stuff under atomic protection
// don't take too long, because parallel processes are waiting for the unlock (rmdir)
$content = file_get_contents($protected_file_name); // example read
$content .= ' modified'; // example modify
file_put_contents($protected_file_name, $content); // example write
// unlock
rmdir($fnLock);
Using files for data management coordinated only by PHP request handlers, you are heading for a world of pain - you've only just dipped your toes in the water so far.
Using LOCK_EX, your writer needs to wait for any (and every) instance of LOCK_SH to be released before it will acquire the lock. Here you are setting flock to block until the lock can be acquired. On a relatively busy system, the writer could be blocked indefinitely. There is no priority queuing of locks on most OSes that would place any subsequent reader requesting the lock behind a process waiting for a write lock.
A further complication is that you can only use flock() on an open file handle, meaning that opening the file and acquiring the lock is not atomic. Further, you need to flush the stat cache in order to determine the age of the file after acquiring the lock.
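For example (a minimal sketch; $path and $fh stand for the lock file's path and its open handle, which are assumptions, not names from the original):
flock($fh, LOCK_EX);               // may have blocked for a long time
clearstatcache(true, $path);       // drop the cached stat result
$age = time() - filemtime($path);  // now reflects the file's current mtime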
Any writes to the file (even using file_put_contents()) are not atomic. So in the absence of exclusive locking you can't be sure that nobody will read a partial file.
In the absence of additional components (e.g. a daemon providing a lock queuing mechanism, or a caching reverse proxy in front of the web server, or a relational database) then your only option is to assume that you cannot ensure exclusive access and use atomic operations to semaphore the file, something like:
$lock_age=time()-@filectime(dirname(CACHE_FILE) . "/lock");
if (@filemtime(CACHE_FILE)<time()-CACHE_TTL) {
    // cache is stale (or missing): try to become the single writer
    if (is_dir(dirname(CACHE_FILE) . "/lock") && $lock_age>MAX_LOCK_TIME) {
        rmdir(dirname(CACHE_FILE) . "/lock"); // break an abandoned lock
    }
    if (@mkdir(dirname(CACHE_FILE) . "/lock")) {
        $content=generate_content(); // might want to add specific timing checks around this
        file_put_contents(CACHE_FILE, $content);
        rmdir(dirname(CACHE_FILE) . "/lock");
    } else if (is_dir(dirname(CACHE_FILE) . "/lock")) {
        // another process is already regenerating the file: wait, then read its output
        $snooze=max(1, MAX_LOCK_TIME-$lock_age);
        sleep($snooze);
        $content=file_get_contents(CACHE_FILE);
    } else {
        $content=file_get_contents(CACHE_FILE);
    }
} else {
    $content=file_get_contents(CACHE_FILE);
}
(note that this is a really ugly hack)
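One common mitigation for the non-atomic writes mentioned above is to write to a temporary file and rename() it into place; on POSIX filesystems a rename() within one filesystem atomically replaces the target. A sketch, under that assumption:
$tmp = CACHE_FILE . '.' . getmypid() . '.tmp'; // per-process temporary name
file_put_contents($tmp, $content);             // readers never open this file
rename($tmp, CACHE_FILE);                      // readers see the old or the new file, never a partial one
This does not serialize the writers, but it does guarantee that no reader ever gets a half-written file.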
There is one fopen() mode that does test-and-set: the x mode.
x Create and open for writing only; place the file pointer at the beginning of the file. If the file already exists, the fopen() call will fail by returning FALSE and generating an error of level E_WARNING. If the file does not exist, attempt to create it.
The fopen($filename, 'x') behaviour is the same as mkdir()'s and it can be used in the same way:
<?php
// lock
$fnLock = __FILE__ .'.lock'; // lock file filename
$lockLooping = 0; // counter can be used for tuning depending on lock duration
do {
if ($lockHandle = @fopen($fnLock, 'x')) { // test-and-set command
$lockLooping = 0;
} else {
$lockLooping += 1;
clearstatcache(); // filemtime() results are cached, refresh before re-checking
$lockAge = time() - filemtime($fnLock);
if ($lockAge > 10) {
unlink($fnLock); // robustness, in case a lock was not erased (unlink, not rmdir: the lock is a file here)
} else {
// wait without consuming CPU before try again
usleep(rand(2500, 25000)); // random to avoid parallel process conflict again
}
}
} while ($lockLooping > 0);
// do stuff under atomic protection
// don't take too long, because parallel processes are waiting for the unlock (rmdir)
$content = file_get_contents($protected_file_name); // example read
$content .= ' modified'; // example modify
file_put_contents($protected_file_name, $content); // example write
// unlock
fclose($lockHandle);
unlink($fnLock);
It is a good idea to test this, e.g. using the code in the question.
Many people rely on locking as documented, but surprises may appear during test or production under load (parallel requests from one browser may be enough).
Following the good advice on this link:
How to keep checking for a file until it exists, then provide a link to it
The loop will never end if the file is never created.
In a perfect system that should not happen, but if it does, how would one exit from that loop?
I have a similar case:
/* More code above */
// writing on the file
$csvfile = $foldername.$date.$version.".csv";
$csv = fopen( $csvfile, 'w+' );
foreach ($_POST['lists'] as $pref) {
fputcsv($csv, $pref, ";");
}
// close and wait IO creation
fclose($csv);
sleep(1);
// Running the Java
$exec = shell_exec("/usr/bin/java -jar $app $csvfile");
sleep(3);
$xmlfile = preg_replace('/\\.[^.\\s]{3,4}$/', '.xml', $csvfile);
if (file_exists("$csvfile") && (file_exists("$xmlfile"))){
header("Location:index.php?msg");
exit;
}
else if (!file_exists("$csvfile")){
header("Location:index.php?msgf=".basename($csvfile)." creation failed!");
exit;
}
else if (!file_exists("$xmlfile")){
header("Location:index.php?msgf=".basename($xmlfile)." creation failed!");
exit;
}
//exit;
} // Just the end
?>
( Yes, bad idea to pass variables in the url.. I got that covered )
I use sleep(N); because I know the Java program takes only a short time to create the file, and the same goes for the CSV on the PHP side.
How can I improve the check on the file, so that it waits only as long as necessary before reporting whether the file was or was not created?
After reading your comments, I think asking for "the best loop" isn't the right way to get a better answer.
The linked script just gives a good approach for when a script expects a file: it will wait until the file is created, possibly forever (the creator is expected to ensure the file actually appears).
Better than that, you could allow a specific period in which to determine whether the file exists or not.
If after the shell_exec the Java program didn't create the file (which I think is almost impossible, but it's just a thought), you could use code like the following:
$cycles = 0;
while (!($isFileCreated = file_exists($filename)) && $cycles < 1000) {
$cycles++;
usleep(1);
}
if (!$isFileCreated)
{
//some action
//throw new RuntimeException("File doesn't exist");
}
//another action
The script above will wait until the file is created or until it reaches a particular number of cycles (it's better to speak of cycles than microseconds, because I can't ensure that each cycle executes in exactly one microsecond). The number of cycles can be raised if you need more time.
Why can I not read a file locked with LOCK_EX? I am still able to write to it.
I wanted to know what happens if one process locks a file (with LOCK_SH or LOCK_EX) and another process tries to read this file or write to it while ignoring the lock entirely. So I made a little script that has 3 functionalities:
Locking: Opens the target file, writes to it, locks the file (with specified lock), writes to it again, sleeps 10 seconds, unlocks it and closes it.
Reading: Opens the target file, reads from it and closes it.
Writing: Opens the target file, writes to it and closes it.
I tested it by having two consoles side by side and doing the following:
FIRST CONSOLE | SECOND CONSOLE
-----------------------------+-----------------------
php test lock LOCK_SH | php test read
php test lock LOCK_SH | php test write
php test lock LOCK_EX | php test read
php test lock LOCK_EX | php test write
LOCK_SH seems to have no effect at all, because the first process as well as the second process can read and write to the file. If the file is being locked with LOCK_EX by the first process, both processes can still write to it, but only the first process can read. Is there any reasoning behind this?
Here is my little test program (tested on Windows 7 Home Premium 64-bit):
<?php
// USAGE: php test [lock | read | write] [LOCK_SH | LOCK_EX]
// The first argument specifies whether
// this script should lock the file, read
// from it or write to it.
// The second argument is only used in lock-mode
// and specifies whether LOCK_SH or LOCK_EX
// should be used to lock the file
// Reads $file and logs information.
function r ($file) {
echo "Reading file\n";
if (($buffer = @fread($file, 64)) !== false)
echo "Read ", strlen($buffer), " bytes: ", $buffer, "\n";
else
echo "Could not read file\n";
}
// Sets the cursor to 0.
function resetCursor ($file) {
echo "Resetting cursor\n", #fseek($file, 0, SEEK_SET) === 0 ? "Reset cursor" : "Could not reset cursor", "\n";
}
// Writes $str to $file and logs information.
function w ($file, $str) {
echo "Writing \"", $str, "\"\n";
if (($bytes = @fwrite($file, $str)) !== false)
echo "Wrote ", $bytes, " bytes\n";
else
echo "Could not write to file\n";
}
// "ENTRYPOINT"
if (($file = @fopen("check", "a+")) !== false) {
echo "Opened file\n";
switch ($argv[1]) {
case "lock":
w($file, "1");
echo "Locking file\n";
if (@flock($file, constant($argv[2]))) {
echo "Locked file\n";
w($file, "2");
resetCursor($file);
r($file);
echo "Sleeping 10 seconds\n";
sleep(10);
echo "Woke up\n";
echo "Unlocking file\n", #flock($file, LOCK_UN) ? "Unlocked file" : "Could not unlock file", "\n";
} else {
echo "Could not lock file\n";
}
break;
case "read":
resetCursor($file);
r($file);
break;
case "write":
w($file, "3");
break;
}
echo "Closing file\n", #fclose($file) ? "Closed file" : "Could not close file", "\n";
} else {
echo "Could not open file\n";
}
?>
That's a very good question, but also a complex one because it depends on a lot of conditions.
We have to start with another pair of locking types - advisory and mandatory:
Advisory locking simply gives you "status flags" by which you know whether a resource is locked or not.
Mandatory locking enforces the locks, regardless of whether you're checking these "status flags".
... and that should answer your question, but I'll continue in order to explain your particular case.
What you seem to be experiencing is the behavior of advisory locks - there's nothing preventing you from reading or writing to a file, no matter if there is a lock for it or if you even checked for one.
However, you will find a note in the PHP manual for flock(), saying the following:
flock() uses mandatory locking instead of advisory locking on Windows. Mandatory locking is also supported on Linux and System V based operating systems via the usual mechanism supported by the fcntl() system call: that is, if the file in question has the setgid permission bit set and the group execution bit cleared. On Linux, the file system will also need to be mounted with the mand option for this to work.
So, if PHP uses mandatory locking on Windows and you've tested this on Windows, either the manual is wrong/outdated/inaccurate (I'm too lazy to check for that right now) or you have to read this big red warning on the same page:
On some operating systems flock() is implemented at the process level. When using a multithreaded server API like ISAPI you may not be able to rely on flock() to protect files against other PHP scripts running in parallel threads of the same server instance!
flock() is not supported on antiquated filesystems like FAT and its derivates and will therefore always return FALSE under this environments (this is especially true for Windows 98 users).
I don't believe it is even possible that your php-cli executable somehow spawns threads for itself, so there's the option that you're using a filesystem that simply doesn't support locking.
My guess is that the manual isn't entirely accurate and you do in fact get advisory locks on Windows, because you are also experiencing a different behavior between LOCK_EX (exclusive lock) and LOCK_SH (shared lock) - it doesn't make sense for them to differ if your filesystem just ignores the locks.
And that brings us to the difference between exclusive and shared locks or LOCK_EX and LOCK_SH respectively. The logic behind both is based around writing, but there's a slight difference ...
Exclusive locks are (generally) used when you want to write to a file, because only one process at a time may hold an exclusive lock over the same resource. That gives you safety, because no other process would read from that resource while the lock-holder writes to it.
Shared locks are used to ensure that a resource is not being written to while you read from it. Since no process is writing to the resource, it is not being modified and is therefore safe to read from for multiple processes at the same time.
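As a sketch of how the two lock types are meant to be combined (advisory locking only works when every participating process calls flock(); the file name data.txt is just an example):
// reader: any number of processes may hold LOCK_SH simultaneously
$f = fopen('data.txt', 'r');
flock($f, LOCK_SH);
$data = stream_get_contents($f);
flock($f, LOCK_UN);
fclose($f);

// writer: LOCK_EX is granted only once all shared/exclusive locks are released
$f = fopen('data.txt', 'c');
flock($f, LOCK_EX);
ftruncate($f, 0);
fwrite($f, "new contents");
flock($f, LOCK_UN);
fclose($f);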
I created a script for use with my website that is supposed to erase the oldest entry in cache when a new item needs to be cached. My website is very large with 500,000 photos on it and the cache space is set to 2 GB.
These functions are what cause the trouble:
function cache_tofile($fullf, $c)
{
error_reporting(0);
if(strpos($fullf, "/") === FALSE)
{
$fullf = "./".$fullf;
}
$lp = strrpos($fullf, "/");
$fp = substr($fullf, $lp + 1);
$dp = substr($fullf, 0, $lp);
$sz = strlen($c);
cache_space_make($sz);
mkdir($dp, 0755, true);
cache_space_make($sz);
if(!file_exists($fullf))
{
$h = @fopen($fullf, "w");
if(flock($h, LOCK_EX))
{
ftruncate($h, 0);
rewind($h);
$tmo = 1000;
$cc = 1;
$i = fputs($h, $c);
while($i < strlen($c) || $tmo-- > 1)
{
$c = substr($c, $i);
$i = fwrite($h, $c);
}
flock($h, LOCK_UN);
fclose($h);
}
}
error_reporting(7);
}
function cache_space_make($sz)
{
$ct = 0;
$cf = cachefolder();
clearstatcache();
$fi = shell_exec("df -i ".$cf." | tail -1 | awk -F\" \" '{print \$4}'");
if($fi < 1)
{
return;
}
if(($old = disk_free_space($cf)) === false)
{
return;
}
while($old < $sz)
{
$ct++;
if($ct > 10000)
{
error_log("Deleted over 10,000 files. Is disk screwed up?");
break;
}
$fi = shell_exec("rm \$(find ".$cf."cache -type f -printf '%T+ %p\n' | sort | head -1 | awk -F\" \" '{print \$2}');");
clearstatcache();
$old = disk_free_space($cf);
}
}
cachefolder() is a function that returns the correct folder name with a / appended to it.
When the functions are executed, the CPU usage for Apache is between 95% and 100%, and other services on the server are extremely slow to access during that time. I also noticed in WHM that cache disk usage is at 100% and refuses to drop until I clear the cache. I was expecting more like maybe 90-ish%.
What I am trying to do with the cache_tofile function is attempt to free disk space in order to create a folder then free disk space to make the cache file. The cache_space_make function takes one parameter representing the amount of disk space to free up.
In that function I use system calls to find the oldest file in the directory tree of the entire cache, because I was unable to find native PHP functions to do so.
The cache file format is as follows:
/cacherootfolder/requestedurl
For example, if one requests http://www.example.com/abc/def then, from both functions, the folder to be created is abc and the file is def, so the full path on the system will be:
/cacherootfolder/abc/def
If one requests http://www.example.com/111/222 then the folder 111 and the file 222 are created:
/cacherootfolder/111/222
In both cases, each file contains the same content as what the user requests based on the URL (example: /cacherootfolder/111/222 contains the same content as one would see when viewing source at http://www.example.com/111/222).
The intent of the caching system is to deliver all web pages at optimal speed.
My question then is: how do I prevent the system from locking up when the cache is full? Is there better code I can use than what I provided?
I would start by replacing the || in your code by &&, which was most likely the intention.
Currently, the loop will always run at least 1000 times - I very much hope the intention was to stop trying after 1000 times.
Also, drop the ftruncate and rewind.
From the PHP Manual on fopen (emphasis mine):
'w' Open for writing only; place the file pointer at the beginning of the file and truncate the file to zero length. If the file does not exist, attempt to create it.
So your truncate is redundant, as is your rewind.
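To make that concrete, here is a sketch of the write path with those fixes applied, reusing the question's $fullf and $c:
$h = @fopen($fullf, "w");   // "w" already truncates and puts the pointer at the start
if ($h !== false && flock($h, LOCK_EX)) {
    $tmo = 1000;
    $written = 0;
    while ($written < strlen($c) && $tmo-- > 0) {   // && : stop when done OR out of tries
        $i = fwrite($h, substr($c, $written));
        if ($i === false) { break; }
        $written += $i;
    }
    flock($h, LOCK_UN);
    fclose($h);
}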
Next, review your shell_exec calls.
The one outside the loop doesn't seem too much of a bottleneck to me, but the one inside the loop...
Let's say you have 1'000'000 files in that cache folder.
find will happily list all of them for you, no matter how long it takes.
Then you sort that list.
And then you flush 999'999 entries of that list down the toilet, and only keep the first one.
Then you do some stuff with awk that I don't really care about, and then you delete the file.
On the next iteration, you'll only have to go through 999'999 files, of which you discard only 999'998.
See where I'm going?
I consider calling shell scripts out of pure convenience bad practice anyway, but if you do it, do it as efficiently as possible, at least!
Do one shell_exec without head -1, store the resulting list in a variable, and iterate over it.
Although it might be better to abandon shell_exec altogether and instead program the corresponding routines in PHP (one could argue that find and rm are machine code, and therefore faster than code written in PHP to do the same task, but there sure is a lot of overhead for all that IO redirection).
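For instance, here is a sketch of a native replacement (the helper name oldest_file is mine, and it assumes the same $cf.'cache' root as the question):
function oldest_file($dir) {
    $oldest = null;
    $oldestTime = PHP_INT_MAX;
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($it as $file) {           // walks the whole tree once, no sorting needed
        if ($file->isFile() && $file->getMTime() < $oldestTime) {
            $oldestTime = $file->getMTime();
            $oldest = $file->getPathname();
        }
    }
    return $oldest;
}
// inside the eviction loop:
if (($victim = oldest_file($cf . 'cache')) !== null) { unlink($victim); }
This keeps only one pathname in memory instead of a sorted listing of the entire tree.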
Please do all that, and then see how bad it still performs.
If the results are still unacceptable, I suggest you put in some code to measure the time certain parts of those functions require (tip: microtime(true)) or use a profiler, like XDebug, to see where exactly most of your time is spent.
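For a quick measurement without a profiler, something like this around the suspected hotspot already narrows things down:
$t0 = microtime(true);
cache_space_make($sz);   // the call under suspicion
error_log(sprintf("cache_space_make: %.3f s", microtime(true) - $t0));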
Also, why did you turn off error reporting for that block? Looks more than suspicious to me.
And as a little bonus, you can get rid of $cc since you're not using it anywhere.
I have found this script
Quick and easy flood protection?
and I have turned it into a function.
Works great for the most part. From time to time I see an error:
[<a href='function.unlink'>function.unlink</a>]: No such file or directory
on this line:
else if ($diff>3600) { unlink($path); } // If first request was more than 1 hour, new ip file
Apparently some IP files are getting deleted for some reason?
I have tried to find the logic error, but I'm not good at all at that. Maybe somebody could help.
The function:
function ht_request_limiter() {
if (!isset($_SERVER['REMOTE_ADDR'])) { return; } // Maybe its impossible, however we check it first
if (empty($_SERVER['REMOTE_ADDR'])) { return; } // Maybe its impossible, however we check it first
$path = '/home/czivbaby/valuemarket.gr/ip-sec/'; // I use a function to validate a path first and return if false...
$path = $path.$_SERVER['REMOTE_ADDR'].'.txt'; // Real file path (filename = <ip>.txt)
$now = time(); // Current timestamp
if (!file_exists($path)) { // If first request or new request after 1 hour / 24 hour ban, new file with <timestamp>|<counter>
if ($handle = fopen($path, 'w+')) {
if (fwrite($handle, $now.'|0')) { chmod($path, 0700); } // Chmod to prevent access via web
fclose($handle);
}
}
else if (($content = file_get_contents($path)) !== false) { // Load existing file
$content = explode('|',$content); // Create paraset [0] -> timestamp [1] -> counter
$diff = (int)$now-(int)$content[0]; // Time difference in seconds from first request to now
if ($content[1] == 'ban') { // If [1] = ban we check if it was less than 24 hours and die if so
if ($diff>86400) { unlink($path); } // 24 hours in seconds.. if more delete ip file
else {
header("HTTP/1.1 503 Service Unavailable");
exit("Your IP is banned for 24 hours, because of too many requests.");
}
}
else if ($diff>3600) { unlink($path); } // If first request was more than 1 hour, new ip file
else {
$current = ((int)$content[1])+1; // Counter + 1
if ($current>200) { // We check rpm (request per minute) after 200 request to get a good ~value
$rpm = ($current/($diff/60));
if ($rpm>10) { // If there was more than 10 rpm -> ban (if you have a request all 5 secs. you will be banned after ~17 minutes)
if ($handle = fopen($path, 'w+')) {
fwrite($handle, $content[0].'|ban');
fclose($handle);
// Maybe you like to log the ip once -> die after next request
}
return;
}
}
if ($handle = fopen($path, 'w+')) { // else write counter
fwrite($handle, $content[0].'|'.$current .'');
fclose($handle);
}
}
}
}
Your server is processing two (or more) requests at the same time from the same client, and the script does not seem to handle this (completely normal) situation correctly. Web browsers download multiple objects from a server in parallel in order to speed up browsing. It's quite likely that, every now and then, a browser does two requests which then end up executing in parallel so that two copies of that script end up at the same unlink() call at roughly the same time. One succeeds in deleting the file, and the other one gives the error message.
Even if your server has a single CPU, the operating system will be happily providing multitasking by context switching between multiple PHP processes which are executing the same PHP script at the same time for the same client IP address.
The script should probably use file locking (http://php.net/manual/en/function.flock.php) to lock the file while working on it. Or simply ignore the unlink() error (by placing an @ in front of the unlink), but other concurrency problems are likely to come up.
The script should:
Open the file for reading and writing using $f = fopen($filename, 'r+');
Lock the opened file using the file handle. The flock($f, LOCK_EX) call will block and wait if some other process already has a lock.
Read file contents.
Decide what to do (increment counter, refuse to service).
Seek to the beginning of the file with fseek($f, 0, SEEK_SET), empty it with ftruncate($f, 0), then rewrite the contents, or unlink() the file if necessary.
Close the file handle with fclose($f), which also releases the lock on it and lets another process continue with step 3.
The pattern is the same in every programming language; a minimal sketch follows below.
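Here is a minimal sketch of those steps in PHP, assuming the question's "<timestamp>|<counter>" file format; it resets the contents under the lock instead of calling unlink(), which sidesteps the original error (the ban logic is omitted for brevity):
$f = fopen($path, 'r+');
if ($f === false) { return; }            // no file yet: create it as in the original code
flock($f, LOCK_EX);                      // blocks while another process holds the lock
list($first, $counter) = explode('|', stream_get_contents($f));
if (time() - (int)$first > 3600) {       // window older than 1 hour: start fresh
    $first = time();
    $counter = 0;
}
ftruncate($f, 0);                        // empty the file ...
fseek($f, 0, SEEK_SET);                  // ... and rewrite it from the beginning
fwrite($f, $first . '|' . ((int)$counter + 1));
fclose($f);                              // also releases the lock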
I have a simple caching system like this:
if (file_exists($cache)) {
echo file_get_contents($cache);
// if we get here just as $cache is being deleted, there is nothing to display
}
else {
// PHP process
}
We regularly delete outdated cache files, e.g. deleting all caches older than 1 hour. Although this process is very fast, I am thinking that a cache file could be deleted right between the if statement and the file_get_contents call.
I mean: when the if statement checks the existence of the cache file, it exists; but when file_get_contents tries to read it, it is no longer there (deleted by the simultaneous cache-deleting process).
file_get_contents locks the file, which guards against an in-progress delete during the read itself. But the file can still be deleted after the if statement has sent the PHP process into the first branch (before file_get_contents starts).
Is there an approach to avoid this? Or should the cache-deleting system work differently?
NOTE: I have not faced any practical problem; catching this event is not very probable, but it is logically possible and should happen under heavy load.
Luckily, file_get_contents() returns FALSE on error, so you could quick-bake it like:
if (FALSE !== ($buffer = file_get_contents($cache))) {
echo $buffer;
return;
}
// PHP process
or similar. It's a bit quick and dirty, considering you will want to place the @ operator to hide any warning about a non-existent file:
if (FALSE !== ($buffer = @file_get_contents($cache))) {
The other alternative would be to lock; however, that might prevent your cache-deletion process from deleting the file while you have it locked.
What remains is handling staleness yourself. That means reading the file's modification time in PHP and checking it against the deletion threshold (5 minutes is just an example): if it is older, you know the file is already stale and due to be replaced with fresh content, so re-create it. Otherwise read the file in, which is probably better done with readfile() rather than file_get_contents() plus echo.
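A rough sketch of that approach (the 5-minute threshold and the generate_page() helper are examples, not part of the original code):
clearstatcache();
if (is_file($cache) && (time() - filemtime($cache)) < 300) {
    readfile($cache);                   // still fresh: stream it straight to the client
} else {
    $content = generate_page();         // hypothetical content generator
    file_put_contents($cache, $content);
    echo $content;
}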
On failure, file_get_contents returns false, so what about this:
if (($output = file_get_contents($filename)) === false){
// Do the processing.
$output = 'Generated content';
// Save cache file
file_put_contents($filename, $output);
}
echo $output;
By the way, you may want to consider using fpassthru, which is more memory-efficient, especially for larger files. Using file_get_contents on large files (> 100 MB) will probably cause problems (depending on your configuration).
<?php
$fp = @fopen($filename, 'rb');
if ($fp === false){
// Generate output
} else {
fpassthru($fp);
fclose($fp);
}