Is the include command of PHP buffered? - php

I have some problems in my script with including some files via include.
I can reduce it to a small code example:
<?php
$res = 0;
include "test.txt";
echo "RES => $res".PHP_EOL;
file_put_contents('./test.txt','<?php $res='.($res+1).';'.PHP_EOL);
include "test.txt";
echo "RES => $res".PHP_EOL;
I'm expecting an output of:
RES => 0
RES => 1
// On the next call I'm expecting ...
RES => 1
RES => 2
But what I'm getting:
RES => 0
RES => 0
Even the next call gives the same result (RES => 0). Only when I rerun the script 1-2 seconds later do I get an increment of RES.
So my question: is the include statement of PHP buffered? I haven't seen anything about buffering in the PHP documentation. What is the problem with my example?

It depends on whether or not you have an opcode cache installed. If you do, the script is loaded from memory after the first time.
I'm not sure about the behavior without an opcode cache. PHP may load the file from disk each time you call include. You could find out with strace et al. You'll probably reap the benefits of a filesystem cache even if PHP does go back to disk on subsequent invocations of include.
Generally I would encourage use of an OPCode cache.
EDIT
I now see you're changing the content of the file before the second include... I tried your example from the CLI and it works as you expect. Try it on your server via the CLI. If it works (which it should), then there's a good chance you have an opcode cache enabled and its particular configuration is preventing the expected behavior.
You should also verify that Apache is writing out the updated file as you expect. When you write to disk with file_put_contents, you could also log each version of the generated file. Something like this after your existing file_put_contents call:
// For logging
file_put_contents('./test-' . time() . '.txt','<?php $res='.($res+1).';'.PHP_EOL);

When running the example from the command line or with the opcache disabled, the script works fine.
Disabling the opcache gives correct results.
A workaround with an enabled opcache would be something like this (it prevents caching of the file):
//Instead of include "test.txt";
//we include the part manually via eval
$cont = file_get_contents("test.txt");
//Strip off the leading <?php and eval the string
eval(substr($cont, 5));
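Another possible workaround (my own suggestion, not from the original answer) is to explicitly drop the cached opcodes with opcache_invalidate() before re-including the file. A minimal sketch, with an illustrative file name; the function_exists() guard keeps it runnable when the opcache extension is not loaded:

```php
<?php
// Sketch: force-invalidate the opcache entry before re-including the file.
$file = __DIR__ . '/test.txt';
file_put_contents($file, '<?php $res = 1;' . PHP_EOL);
include $file;

file_put_contents($file, '<?php $res = 2;' . PHP_EOL);
if (function_exists('opcache_invalidate')) {
    opcache_invalidate($file, true); // force: drop the cached opcodes for this file
}
include $file;
echo "RES => $res" . PHP_EOL;
unlink($file);
```

With the entry invalidated, the second include re-compiles the file, so $res picks up the new value.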

Related

PHP filesize() showing old filesize with a file inside a windows shared (network) folder

I have the following script that runs to read new content from a file:
<?php
clearstatcache();
$fileURL = "\\\\saturn\extern\seq_ws.csv";
$fileAvailable = file_exists($fileURL);
$bytesRead = file_get_contents("bytes.txt");
if($fileAvailable){
    $fileSize = filesize($fileURL);
    //Statuses: 1 = partial read, 2 = complete read, 0 = no read, -1 = file not found. Followed by !!
    if($bytesRead < $fileSize){
        //$bytesRead till $fileSize bytes read from the file.
        $content = file_get_contents($fileURL, NULL, NULL, $bytesRead);
        file_put_contents("bytes.txt", ((int)$bytesRead + strlen($content)));
        echo "1!!$content";
    }else if($bytesRead > $fileSize){
        //File edit or delete detected, whole file read again.
        $content = file_get_contents($fileURL);
        file_put_contents("bytes.txt", strlen($content));
        echo "2!!$content";
    }else if($bytesRead == $fileSize){
        //No new data found, no action taken.
        echo "0!!";
    }
}else{
    //File delete detected, reading whole file when available.
    echo "-1!!";
    file_put_contents("bytes.txt", "0");
}
?>
It works perfectly when I run it and does what is expected.
When I edit the file from the same PC as my server, it works instantly and returns the correct values.
However, when I edit the file from another PC, my script takes about 4-6 seconds to read the correct file size.
I added clearstatcache(); at the top of my script because I think it's a caching issue. But the strange thing is that when I change the file from the server PC it responds instantly, yet from another PC it doesn't.
On top of that, as soon as the other PC changes the file, I see the change in Windows, with the new file size and content, but for some reason it takes Apache about 4-6 seconds to detect the change. In those 4-6 seconds it receives the old file size from before the change.
So I have the following questions:
Is the file size information cached anywhere, maybe either on the Apache server or inside Windows?
If question 1 applies, is there any way to remove or disable this caching?
Is it possible this isn't a caching problem?
I think PHP on your local PC has development settings.
So I suggest checking php.ini for this parameter: realpath_cache_ttl
Which is:
realpath_cache_ttl integer
Duration of time (in seconds) for which to cache realpath
information for a given file or directory.
For systems with rarely changing files,
consider increasing the value.
To test it, run phpinfo() both locally and on the server to check that value:
<?php phpinfo();
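If you only need these two values rather than the full phpinfo() dump, a quick sketch using ini_get() (values will of course differ per installation):

```php
<?php
// Print the realpath cache settings for this PHP installation.
echo 'realpath_cache_ttl: ' . ini_get('realpath_cache_ttl') . PHP_EOL;
echo 'realpath_cache_size: ' . ini_get('realpath_cache_size') . PHP_EOL;
```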

About PHP parallel file read/write

I have a file on a website. A PHP script modifies it like this:
$contents = file_get_contents("MyFile");
// ** Modify $contents **
// Now rewrite:
$file = fopen("MyFile","w+");
fwrite($file, $contents);
fclose($file);
The modification is pretty simple. It grabs the file's contents and adds a few lines. Then it overwrites the file.
I am aware that PHP has a function for appending contents to a file rather than overwriting it all over again. However, I want to keep using this method since I'll probably change the modification algorithm in the future (so appending may not be enough).
Anyway, I was testing this out, making like 100 requests. Each time I call the script, I add a new line to the file:
First call:
First!
Second call:
First!
Second!
Third call:
First!
Second!
Third!
Pretty cool. But then:
Fourth call:
Fourth!
Fifth call:
Fourth!
Fifth!
As you can see, the first, second and third lines simply disappeared.
I've determined that the problem isn't the contents string modification algorithm (I've tested it separately). Something is messed up either when reading or writing the file.
I think it is very likely that the issue is when the file's contents are read: if $contents, for some odd reason, is empty, then the behavior shown above makes sense.
I'm no expert with PHP, but perhaps the fact that I performed 100 calls almost simultaneously caused this issue. What if there are two processes, and one is writing the file while the other is reading it?
What is the recommended approach for this issue? How should I manage file modifications when several processes could be writing/reading the same file?
What you need to do is use flock() (file lock).
What I think is happening is that your script grabs the file while the previous script is still writing to it. Since the file is still being written to, it doesn't yet exist at the moment PHP grabs it, so PHP gets an empty string, and once the later process is done it overwrites the previous file.
The solution is to have the script usleep() for a few milliseconds when the file is locked and then try again. Just be sure to put a limit on how many times your script can retry.
NOTICE:
If another PHP script or application accesses the file, it may not necessarily use/check for file locks. This is because file locks are often seen as an optional extra, since in most cases they aren't needed.
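A minimal sketch of that retry idea (the file name, retry cap, and sleep interval are illustrative assumptions, not from the answer): take a non-blocking exclusive lock, and if another process holds it, usleep() and try again up to a capped number of attempts.

```php
<?php
// Append a line under an exclusive lock, retrying briefly while the file is locked.
function appendWithLock(string $path, string $line, int $maxTries = 50): bool
{
    $fh = fopen($path, 'c'); // create if missing, do not truncate
    if ($fh === false) {
        return false;
    }
    for ($try = 0; $try < $maxTries; $try++) {
        if (flock($fh, LOCK_EX | LOCK_NB)) { // non-blocking lock attempt
            fseek($fh, 0, SEEK_END);         // append to the end
            fwrite($fh, $line . PHP_EOL);
            fflush($fh);
            flock($fh, LOCK_UN);             // release the lock
            fclose($fh);
            return true;
        }
        usleep(10000); // wait 10 ms before the next attempt
    }
    fclose($fh);
    return false; // gave up after $maxTries attempts
}

$path = sys_get_temp_dir() . '/flock_demo.txt';
@unlink($path);
appendWithLock($path, 'First!');
appendWithLock($path, 'Second!');
$contents = file_get_contents($path);
echo $contents;
@unlink($path);
```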
So the issue is parallel access to the same file: while one instance is writing to the file, another instance is reading it before the file has been updated.
Luckily, PHP has a mechanism for locking the file so that no one can read from it until the lock is released and the file has been updated:
flock()
can be used, and the documentation is here.
You need to create a lock so that any concurrent requests will have to wait their turn. This can be done using the flock() function. You will have to use fopen(), as opposed to file_get_contents(), but it should not be a problem:
$file = 'file.txt';
$fh = fopen($file, 'r+');
if (flock($fh, LOCK_EX)) { // Get an exclusive lock
    $data = fread($fh, filesize($file)); // Get the contents of the file
    // Do something with the data here to produce $newData...
    ftruncate($fh, 0); // Empty the file
    rewind($fh); // Move the file pointer back to the start before writing
    fwrite($fh, $newData); // Write new data to the file
    fclose($fh); // Close handle and release lock
} else {
    die('Unable to get a lock on file: '.$file);
}

Bug in my caching code

Here's my code:
$cachefile = "cache/ttcache.php";
if(file_exists($cachefile) && ((time() - filemtime($cachefile)) < 900))
{
    include($cachefile);
}
else
{
    ob_start();
    /*resource-intensive loop that outputs
      a listing of the top tags used on the website*/
    $fp = fopen($cachefile, 'w');
    fwrite($fp, ob_get_contents());
    fflush($fp);
    fclose($fp);
    ob_end_flush();
}
This code seemed like it worked fine at first sight, but I found a bug, and I can't figure out how to solve it. Basically, it seems that after I leave the page alone for a period of time, the cache file empties (either that, or when I refresh the page, it clears the cache file, rendering it blank). Then the conditional sees the now-blank cache file, sees its age as less than 900 seconds, and pulls the blank cache file's contents in place of re-running the loop and refilling the cache.
I catted the cache file in the command line and saw that it is indeed blank when this problem exists.
I tried setting it to 60 seconds to replicate this problem more often and hopefully get to the bottom of it, but it doesn't seem to replicate if I am looking for it, only when I leave the page and come back after a while.
Any help?
In the caching routines that I write, I almost always check the filesize, as I want to make sure I'm not spewing blank data, because I rely on a bash script to clear out the cache.
if(file_exists($cachefile) && (filesize($cachefile) > 1024) && ((time() - filemtime($cachefile)) < 900))
This assumes that your outputted cache file is > 1024 bytes, which it usually will be if it's anything relatively large. Adding a lock file would be useful as well, as noted in the comments above, to avoid multiple processes trying to write to the same cache file.
You can double-check the file size with the filesize() function; if it's too small, act as if the cache were stale.
If there's no PHP in the file, you may want to use readfile() for performance reasons to just spit the file back out to the end user.
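Putting the size check together with the existing freshness check, a sketch of the guard (the 1024-byte threshold and the temp-file path are illustrative assumptions):

```php
<?php
// Treat a missing, too-small, or too-old cache file as stale.
$cachefile = sys_get_temp_dir() . '/ttcache.php';
file_put_contents($cachefile, 'short'); // simulate a truncated cache file

$fresh = file_exists($cachefile)
    && filesize($cachefile) > 1024
    && (time() - filemtime($cachefile)) < 900;

echo $fresh ? 'use cache' : 'rebuild cache', PHP_EOL;
unlink($cachefile);
```

With the truncated file above, the size check fails and the branch falls through to rebuilding the cache instead of serving blank content.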

How can I optimize this simple PHP script?

This first script gets called several times for each user via an AJAX request. It calls another script on a different server to get the last line of a text file. It works fine, but I think there is a lot of room for improvement. I am not a very good PHP coder, so I am hoping that with the help of the community I can optimize this for speed and efficiency:
AJAX POST Request made to this script
<?php
session_start();
$fileName = $_POST['textFile'];
$result = file_get_contents($_SESSION['serverURL']."fileReader.php?textFile=$fileName");
echo $result;
?>
It makes a GET request to this external script which reads a text file
<?php
$fileName = $_GET['textFile'];
if (file_exists('text/'.$fileName.'.txt')) {
    $lines = file('text/'.$fileName.'.txt');
    echo $lines[sizeof($lines)-1];
} else {
    echo 0;
}
?>
I would appreciate any help. I think there is more improvement that can be made in the first script. It makes an expensive function call (file_get_contents); well, at least I think it's expensive!
This script should limit the locations and file types that it's going to return.
Think of somebody trying this:
http://www.yoursite.com/yourscript.php?textFile=../../../etc/passwd (or something similar)
Try to find out where the delays occur: does the HTTP request take long, or is the file so large that reading it takes long?
If the request is slow, try caching results locally.
If the file is huge, then you could set up a cron job that extracts the last line of the file at regular intervals (or at every change), and save that to a file that your other script can access directly.
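The cron job could be as simple as the following sketch (the file names are illustrative assumptions); the web-facing endpoint then only has to read the small sidecar file:

```php
<?php
// Extract the last line of a (possibly large) file into a small
// sidecar file that the AJAX endpoint can read directly.
$source  = sys_get_temp_dir() . '/seq_log.txt';
$sidecar = sys_get_temp_dir() . '/seq_log.last';

file_put_contents($source, "line one\nline two\nlast line\n"); // demo data

$lines = file($source, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$last  = ($lines === false || $lines === []) ? '' : end($lines);
file_put_contents($sidecar, $last);

echo file_get_contents($sidecar), PHP_EOL;
unlink($source);
unlink($sidecar);
```

For truly huge files you would seek backwards from the end instead of reading the whole file with file(), as one of the other answers sketches.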
readfile() is your friend here:
it reads a file on disk and streams it to the client.
script 1:
<?php
session_start();
// added basic argument filtering
$fileName = preg_replace('/[^A-Za-z0-9_]/', '', $_POST['textFile']);
$fileName = $_SESSION['serverURL'].'text/'.$fileName.'.txt';
if (file_exists($fileName)) {
    // script 2 could be pasted here
    //for the entire file
    //readfile($fileName);
    //for just the last line
    $lines = file($fileName);
    echo $lines[count($lines)-1];
    exit(0);
}
echo 0;
?>
This script could be further improved by adding caching to it, but that is more complicated.
The very basic caching could be:
script 2:
<?php
$lastModifiedTimeStamp = filemtime($fileName);
if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])) {
    $browserCachedCopyTimestamp = strtotime(preg_replace('/;.*$/', '', $_SERVER['HTTP_IF_MODIFIED_SINCE']));
    if ($browserCachedCopyTimestamp >= $lastModifiedTimeStamp) {
        header("HTTP/1.0 304 Not Modified");
        exit(0);
    }
}
header('Content-Length: '.filesize($fileName));
header('Expires: '.gmdate('D, d M Y H:i:s \G\M\T', time() + 604800)); // (3600 * 24 * 7)
header('Last-Modified: '.gmdate('D, d M Y H:i:s \G\M\T', $lastModifiedTimeStamp));
?>
First things first: do you really need to optimize this? Is it the slowest part of your use case? Have you used xdebug to verify that? If you have, read on:
You cannot really optimize the first script usefully: if you need an HTTP request, you need an HTTP request. Skipping the HTTP request could be a performance gain, though, if it is possible (i.e. if the first script can access the same files the second script would operate on).
As for the second script: reading the whole file into memory does look like some overhead, but that is negligible if the files are small. The code looks very readable; I would leave it as is in that case.
If your files are big, however, you might want to use fopen() and its friends fseek() and fread():
# Do not forget to sanitize the file name here!
# An attacker could demand the last line of your password
# file or similar! ($fileName = '../../passwords.txt')
$filePointer = fopen($fileName, 'r');
$i = 1;
$chunkSize = 200;
# Read 200 byte chunks from the file and check if the chunk
# contains a newline
do {
    fseek($filePointer, -($i * $chunkSize), SEEK_END);
    $line = fread($filePointer, $i++ * $chunkSize);
} while (($pos = strrpos($line, "\n")) === false);
return substr($line, $pos + 1);
If the files are unchanging, you should cache the last line.
If the files are changing and you control the way they are produced, it might or might not be an improvement to reverse the order lines are written, depending on how often a line is read over its lifetime.
Edit:
Your server could figure out what it wants to write to its log, put it in memcache, and then write it to the log. The request for the last line could then be fulfilled from memcache instead of a file read.
The most probable source of delay is that cross-server HTTP request. If the files are small, the cost of fopen/fread/fclose is nothing compared to the whole HTTP request.
(Not long ago I used HTTP to retrieve images to dynamically generate image-based menus. Replacing the HTTP request with a local file read reduced the delay from seconds to tenths of a second.)
I assume that the obvious solution of accessing the file server's filesystem directly is out of the question. If not, then it's the best and simplest option.
If not, you could use caching. Instead of getting the whole file, you just issue a HEAD request and compare the timestamp to a local copy.
Also, if you are ajax-updating a lot of clients based on the same files, you might consider looking at using comet (meteor, for example). It's used for things like chats, where a single change has to be broadcasted to several clients.

Is the same file tokenized every time I include it?

This question is about the PHP parsing engine.
When I include a file multiple times in a single runtime, does PHP tokenize it every time or does it keep a cache and just run the compiled code on subsequent inclusions?
EDIT: More details: I am not using an external caching mechanism and I am dealing with the same file being included multiple times during the same request.
EDIT 2: The file I'm trying to include contains procedural code. I want it to be executed every time I include() it, I am just curious if PHP internally keeps track of the tokenized version of the file for speed reasons.
You should use a PHP bytecode cache such as APC. That will accomplish what you want, to re-use a compiled version of a PHP page on subsequent requests. Otherwise, PHP reads the file, tokenizes and compiles it on every request.
By default, the file is parsed every time it is (really) included, even within the same PHP instance.
But there are opcode caches, e.g. APC:
<?php
$i = 'include_test.php';
file_put_contents($i, '<?php $x = 1;');
include $i;
echo $x, ' ';
file_put_contents($i, '<?php $x = 2;');
include $i;
echo $x, ' ';
This prints 1 2. (OK, weak proof: PHP could check whether the file's mtime has changed, and I think that is what APC does. But without a cache, PHP really doesn't.)
Look at include_once(); a plain include will include (and execute) the file again.
Also, if you are using objects, look at __autoload().
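A small sketch of the difference (the temp-file name is an illustrative assumption): include_once() skips a file it has already pulled in during the request, while a plain include executes it again.

```php
<?php
// Demonstrate include_once vs. include within one request.
$f = sys_get_temp_dir() . '/once_test.php';
file_put_contents($f, '<?php $x = ($x ?? 0) + 1;');

include_once $f; // runs: $x becomes 1
include_once $f; // skipped: already included once
include $f;      // runs again: $x becomes 2

echo $x, PHP_EOL;
unlink($f);
```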
I just wrote a basic test, much like VolkerK's. Here's what I tested:
<?php
file_put_contents('include.php','<?php echo $i . "<br />"; ?>');
for($i = 0; $i<10; $i++){
    include('include.php');
    if($i == 5){
        file_put_contents('include.php','<?php echo $i+$i; echo "<br />"; ?>');
    }
}
?>
This generated the following:
0
1
2
3
4
5
12
14
16
18
So, unless it caches based on the mtime of the file, it seems PHP parses every include. You would likely want to use include_once() instead of a standard include(). Hope that helps!
