Generate PHP source code based on a PHP array

I've got a non-modifiable function which takes several seconds to finish.
The function returns an array of objects. The result only changes about once per day.
To speed things up I wanted to cache the result using APC, but the hosting provider (shared hosting environment) does not offer any memory caching solutions (APC, memcache, ...).
The only solution I found was using serialize() to store the data into a file and then unserializing the data again.
What about generating PHP source code out of the array? Later I could simply call
require 'data.php';
to get the data into a predefined variable.
Thanks!
UPDATE: Storing the resulting .html is not an option because the output is user-dependent.

Do you mean something like this?
// File: data.php
<?php
return array(
    32,
    42
);
// Another file
$result = include 'data.php';
var_dump($result);
This is already possible. To update your file, you can use something like this
file_put_contents('data.php', '<?php return ' . var_export($array, true) . ';');
Update:
However, there is also nothing wrong with serialize()/unserialize() and storing the serialized array into a file.
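For completeness, here is a minimal sketch of the serialize() approach (the cache path, the one-day TTL, and slowFunction() are placeholders for your setup):
$cacheFile = __DIR__ . '/cache/result.dat';
if (is_file($cacheFile) && filemtime($cacheFile) > time() - 86400) {
    // Cache is less than a day old: reuse it
    $result = unserialize(file_get_contents($cacheFile));
} else {
    $result = slowFunction(); // stand-in for the non-modifiable function
    file_put_contents($cacheFile, serialize($result));
}
One point in favor of serialize() here: it handles arbitrary objects out of the box, whereas var_export() produces code that requires each class to implement __set_state() before it can be re-imported. Since your function returns an array of objects, that difference matters.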

Why not just cache the resulting html page that is generated? You could do that fairly simply:
// Check to see if cached file exists
// You could run a cron job to delete this at a certain time
// or have the cache file expire after a set amount of time
if (file_exists('cache.html')) {
    include('cache.html');
    exit;
}
ob_start(); // start capturing output buffer
// do output
$output = ob_get_contents();
$handle = fopen('cache.html', 'w');
fwrite($handle, $output);
fclose($handle);
ob_end_flush();
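If you would rather have the cache expire on its own than rely on a cron job, a small variation on the check above does it (the one-hour TTL is just an example):
if (file_exists('cache.html') && filemtime('cache.html') > time() - 3600) {
    include('cache.html');
    exit;
}
// otherwise fall through and regenerate the cache as above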

You could just write the answers to a database, and use the function arguments as the key.
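A rough sketch of that idea, assuming a PDO connection and a MySQL cache table (the table and column names are made up):
// Assumed schema:
// CREATE TABLE result_cache (cache_key CHAR(32) PRIMARY KEY, data LONGTEXT, created_at INT);
$key = md5(serialize($args)); // $args = the function's arguments
$stmt = $pdo->prepare('SELECT data FROM result_cache WHERE cache_key = ?');
$stmt->execute(array($key));
if ($row = $stmt->fetch()) {
    $result = unserialize($row['data']);
} else {
    $result = call_user_func_array('slowFunction', $args); // the expensive call
    $ins = $pdo->prepare('REPLACE INTO result_cache (cache_key, data, created_at) VALUES (?, ?, ?)');
    $ins->execute(array($key, serialize($result), time()));
}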

Related

Cache dynamic JavaScript generated by PHP

I use JShrink with a custom function to combine 8 uncompressed JavaScript files to a single compressed (minified) one, like this:
<?php
// Filename: js.php
header('Content-type: text/javascript');
require_once '../JShrink.php';
function concatenateFiles($files)
{
    $buffer = '';
    foreach ($files as $file) {
        $buffer .= file_get_contents(__DIR__ . '/' . $file);
    }
    return $buffer;
}
$js = concatenateFiles([
    'core.min.js',
    'promise.js',
    'welcome.js',
    'imagesloaded.js',
    'cropper.js',
    'translate.js',
    'custom.js',
    'masonry.js',
]);
$output = \JShrink\Minifier::minify($js);
echo $output;
Then I call this php file in my index page footer:
<script type="text/javascript" src="<? echo $url ?>/js/js.php"></script>
It is not being cached.
I modify my JS code daily and don't want to keep combining the files manually, but I also need a way to get the echoed JS code cached: only that code, not every PHP file on the server.
How can I do this, and how would the cache purge process be?
Thanks in advance.
In theory, you need to send a header("...") with a proper expiration. In practice, that's not working reliably. You can spend your life googling for proper examples of "Cache-Control" and "Expires:" and none of what you find will work. So I suggest you read this:
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching
ETags are the modern solution to tell the browser when your resource has changed - or not.
If the cache file doesn't exist, or if any of the source files' modification timestamps is later than the cache's, render the output and save it to the cache; then echo the cache (or the rendered result).
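Putting both suggestions together, a sketch for js.php: rebuild the minified file only when a source file is newer than the cached copy, and send Last-Modified so browsers can revalidate (the cache path is illustrative; $files is the array from the question):
$cacheFile = __DIR__ . '/cache/combined.min.js';

// Newest modification time among the source files
$newest = 0;
foreach ($files as $file) {
    $newest = max($newest, filemtime(__DIR__ . '/' . $file));
}

// Re-minify only when a source file changed since the cache was written
if (!is_file($cacheFile) || filemtime($cacheFile) < $newest) {
    file_put_contents($cacheFile, \JShrink\Minifier::minify(concatenateFiles($files)));
    clearstatcache(); // so the filemtime() below sees the fresh write
}

header('Content-Type: text/javascript');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime($cacheFile)) . ' GMT');
readfile($cacheFile);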

About PHP parallel file read/write

I have a file on a website. A PHP script modifies it like this:
$contents = file_get_contents("MyFile");
// ** Modify $contents **
// Now rewrite:
$file = fopen("MyFile","w+");
fwrite($file, $contents);
fclose($file);
The modification is pretty simple. It grabs the file's contents and adds a few lines. Then it overwrites the file.
I am aware that PHP has a function for appending contents to a file rather than overwriting it all over again. However, I want to keep using this method since I'll probably change the modification algorithm in the future (so appending may not be enough).
Anyway, I was testing this out, making like 100 requests. Each time I call the script, I add a new line to the file:
First call:
First!
Second call:
First!
Second!
Third call:
First!
Second!
Third!
Pretty cool. But then:
Fourth call:
Fourth!
Fifth call:
Fourth!
Fifth!
As you can see, the first, second and third lines simply disappeared.
I've determined that the problem isn't the contents string modification algorithm (I've tested it separately). Something is messed up either when reading or writing the file.
I think it is very likely that the issue is when the file's contents are read: if $contents, for some odd reason, is empty, then the behavior shown above makes sense.
I'm no expert with PHP, but perhaps the fact that I performed 100 calls almost simultaneously caused this issue. What if there are two processes, and one is writing the file while the other is reading it?
What is the recommended approach for this issue? How should I manage file modifications when several processes could be writing/reading the same file?
What you need to do is use flock() (file lock)
What I think is happening is that your script grabs the file while a previous request is still writing to it. Opening with "w+" truncates the file immediately, so a concurrent file_get_contents() can read an empty string, and once the later process finishes it overwrites what the earlier one wrote.
The solution is to have the script usleep() for a few milliseconds when the file is locked and then try again; just be sure to put a limit on how many times your script can try (see the sketch below).
NOTICE:
If another PHP script or application accesses the file, it may not necessarily use/check for file locks. This is because file locks are often seen as an optional extra, since in most cases they aren't needed.
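A minimal sketch of the retry idea described above, using a non-blocking lock and a capped number of attempts (the numbers are arbitrary):
$fh = fopen('MyFile', 'c+'); // 'c+' opens for read/write without truncating
$attempts = 0;
while (!flock($fh, LOCK_EX | LOCK_NB)) {
    if (++$attempts > 50) {   // give up after ~0.5 s
        die('Could not get a lock on MyFile');
    }
    usleep(10000);            // wait 10 ms, then try again
}
// ... read, modify and rewrite the file here ...
flock($fh, LOCK_UN);
fclose($fh);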
So the issue is parallel access to the same file: one instance reads the file while another is still writing to it, before the update has finished.
Luckily PHP has a mechanism for locking a file so that no one can read from it until the lock is released and the file has been updated: flock(). See the PHP documentation for details.
You need to create a lock, so that any concurrent requests will have to wait their turn. This can be done using the flock() function. You will have to use fopen(), as opposed to file_get_contents(), but it should not be a problem:
$file = 'file.txt';
$fh = fopen($file, 'r+');
if (flock($fh, LOCK_EX)) {               // Get an exclusive lock
    $data = fread($fh, filesize($file)); // Get the contents of the file
    // Do something with the data here...
    ftruncate($fh, 0);                   // Empty the file
    rewind($fh);                         // Move the pointer back to the start before writing
    fwrite($fh, $newData);               // Write new data to the file
    fclose($fh);                         // Close handle and release lock
} else {
    die('Unable to get a lock on file: ' . $file);
}

fetch templates from database/string

I store my templates as files, and would like the option to also store them in a MySQL db.
My template system
// Method of the Template class, where $file is a path to a file
function fetch() {
    ob_start();
    if (is_array($this->vars)) extract($this->vars);
    include($file);
    $contents = ob_get_contents();
    ob_end_clean();
    return $contents;
}
function set($name, $value) {
    $this->vars[$name] = is_object($value) ? $value->fetch() : $value;
}
usage:
$tpl = new Template('path/to/template');
$tpl->set('titel', $titel);
Template example:
<h1><?= $titel ?></h1>
<p>Lorem ipsum...</p>
My approach
Selecting the template from the database as a string,
so what I get is something like $tpl = '<h1><?= $titel ?>...';
Now I would like to pass it to the template system, so I extended my constructor and the fetch function:
function fetch() {
    if (is_array($this->vars)) extract($this->vars);
    ob_start();
    if (is_file($file)) {
        include($file);
    } else {
        // first idea: eval($file);
        // second idea: print $file;
    }
    $contents = ob_get_contents();
    ob_end_clean();
    return $contents;
}
'eval' gives me a parse error, because it interprets the whole string as PHP, not just the PHP parts.
'print' is really strange: it doesn't print the stuff between the PHP tags, but I can see it in the page's source code. The PHP functions are simply being ignored.
So what should I try instead?
Maybe not the best solution, but it's simple and it should work:
fetch your template from the db
write a file with the template
include this file
(optional: delete the file)
If you add a timestamp column to your template table, you can use the filesystem as a cache. Just compare the timestamps of the file and the database row to decide if it's sufficient to reuse the file.
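A rough sketch of that idea, assuming a PDO connection and a templates table with name, content and updated_at columns (all hypothetical):
$stmt = $pdo->prepare('SELECT content, updated_at FROM templates WHERE name = ?');
$stmt->execute(array($name));
$row = $stmt->fetch();

$file = sys_get_temp_dir() . '/tpl_' . md5($name) . '.php';

// Rewrite the cached file only when the DB row is newer than the file
if (!is_file($file) || filemtime($file) < strtotime($row['updated_at'])) {
    file_put_contents($file, $row['content']);
}

// $file can now be handed to the existing file-based fetch()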
If you prepend '?>' to your eval, it should work.
<?php
$string = 'hello <?php echo $variable; ?>';
$variable = "world";
eval('?>' . $string);
But you should know that eval() is a rather slow thing. Its resulting op-code cannot be cached in APC (or similar). You should find a way to cache your templates on disk. For one you wouldn't have to pull them from the database every time they're needed. And you could make use of regular op-code caching (done transparently by APC).
Every time I see some half-baked home-grown "template engine", I ask myself why the author did not rely on one of the many existing template engines out there. Most of them have already solved most of the problems you could possibly have. Smarty (and Twig, phpTAL, …) make it a real charm to pull template sources from wherever you like (while trying to maintain optimal performance). Do you have any special reasons for not using one of these?
I would do pretty much the same thing as tweber except I would prefer depending on the local file timestamps rather than the DB.
Something like this: each file has a TTL (expiration time) of, let's say, 60 seconds. The real reason is to avoid hitting the DB too hard or too often needlessly; you'll quickly realize just how much faster filesystem access is compared to the network and MySQL, especially if the MySQL instance is running on a remote server.
# Implement a function that fetches the contents of the file
# (the key here is the filename) from the DB and saves them to disk.
function fetchFreshCopy($filename) {
    # mysql_connect(); ...
}

# Inside the fetch() method:
if (is_array($this->vars)) extract($this->vars);
ob_start();

# First check if the file exists already
if (file_exists($file)) {
    # Now check the file's modification timestamp to see if it has expired:
    $mod_timestamp = filemtime($file);
    if ((time() - $mod_timestamp) >= 60) {
        # The file has expired; fetch a fresh copy from the DB
        # and save it to disk.
        fetchFreshCopy($file);
    }
} else {
    # The file doesn't exist at all; fetch and save it!
    fetchFreshCopy($file);
}

include($file);
$contents = ob_get_contents();
ob_end_clean();
return $contents;
Cheers, hope that's useful!

How can I optimize this simple PHP script?

This first script gets called several times for each user via an AJAX request. It calls another script on a different server to get the last line of a text file. It works fine, but I think there is a lot of room for improvement; I am not a very good PHP coder, so I am hoping that with the help of the community I can optimize this for speed and efficiency:
AJAX POST Request made to this script
<?php session_start();
$fileName = $_POST['textFile'];
$result = file_get_contents($_SESSION['serverURL']."fileReader.php?textFile=$fileName");
echo $result;
?>
It makes a GET request to this external script which reads a text file
<?php
$fileName = $_GET['textFile'];
if (file_exists('text/' . $fileName . '.txt')) {
    $lines = file('text/' . $fileName . '.txt');
    echo $lines[sizeof($lines) - 1];
} else {
    echo 0;
}
?>
I would appreciate any help. I think there is more improvement that can be made in the first script. It makes an expensive function call (file_get_contents); well, at least I think it's expensive!
This script should limit the locations and file types that it's going to return.
Think of somebody trying this:
http://www.yoursite.com/yourscript.php?textFile=../../../etc/passwd (or something similar)
Try to find out where the delays occur: does the HTTP request take long, or is the file so large that reading it takes long?
If the request is slow, try caching results locally.
If the file is huge, then you could set up a cron job that extracts the last line of the file at regular intervals (or at every change), and save that to a file that your other script can access directly.
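For example, the cron job could be a tiny PHP CLI script (paths are placeholders; assumes a Unix host where tail is available, which stays fast even on huge files):
<?php
// crontab: * * * * * php /path/to/extract-last-line.php
$last = shell_exec('tail -n 1 /path/to/big-file.txt');
file_put_contents('/path/to/last-line.txt', $last);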
readfile() is your friend here: it reads a file on disk and streams it to the client.
script 1:
<?php
session_start();
// Added basic argument filtering
$fileName = preg_replace('/[^A-Za-z0-9_]/', '', $_POST['textFile']);
$fileName = $_SESSION['serverURL'] . 'text/' . $fileName . '.txt';

if (file_exists($fileName)) {
    // Script 2 could be pasted here

    // For the entire file:
    //readfile($fileName);

    // For just the last line:
    $lines = file($fileName);
    echo $lines[count($lines) - 1];

    exit(0);
}
echo 0;
?>
This script could be further improved by adding caching to it, but that is more complicated. A very basic version could be:
script 2:
<?php
$lastModifiedTimeStamp = filemtime($fileName);

if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])) {
    $browserCachedCopyTimestamp = strtotime(preg_replace('/;.*$/', '', $_SERVER['HTTP_IF_MODIFIED_SINCE']));
    if ($browserCachedCopyTimestamp >= $lastModifiedTimeStamp) {
        header("HTTP/1.0 304 Not Modified");
        exit(0);
    }
}

header('Content-Length: ' . filesize($fileName));
header('Expires: ' . gmdate('D, d M Y H:i:s \G\M\T', time() + 604800)); // (3600 * 24 * 7)
header('Last-Modified: ' . date('D, d M Y H:i:s \G\M\T', $lastModifiedTimeStamp));
?>
First things first: Do you really need to optimize that? Is that the slowest part in your use case? Have you used xdebug to verify that? If you've done that, read on:
You cannot really optimize the first script usefully: if you need an HTTP request, you need an HTTP request. Skipping the HTTP request could be a performance gain, though, if it is possible (i.e. if the first script can access the same files the second script would operate on).
As for the second script: reading the whole file into memory does look like some overhead, but that is negligible if the files are small. The code looks very readable; I would leave it as is in that case.
If your files are big, however, you might want to use fopen() and its friends fseek() and fread()
# Do not forget to sanitize the file name here!
# An attacker could demand the last line of your password
# file or similar! ($fileName = '../../passwords.txt')
$filePointer = fopen($fileName, 'r');
$i = 1;
$chunkSize = 200;

# Read chunks from the end of the file until a chunk
# contains a newline
do {
    fseek($filePointer, -($i * $chunkSize), SEEK_END);
    $line = fread($filePointer, $i++ * $chunkSize);
} while (($pos = strrpos($line, "\n")) === false);

return substr($line, $pos + 1);
If the files are unchanging, you should cache the last line.
If the files are changing and you control the way they are produced, it might or might not be an improvement to reverse the order lines are written, depending on how often a line is read over its lifetime.
Edit:
Your server could figure out what it wants to write to its log, put it in memcache, and then write it to the log. The request for the last line could then be fulfilled from memcache instead of a file read.
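A sketch of that idea with the Memcached extension (assuming it is available on the host; the key name is made up):
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Wherever the log line is written:
$mc->set('lastline:' . $fileName, $line);

// In the script answering the AJAX request:
$last = $mc->get('lastline:' . $fileName);
if ($last === false) {
    // Cache miss: fall back to reading the file as before
}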
The most probable source of delay is that cross-server HTTP request. If the files are small, the cost of fopen/fread/fclose is nothing compared to the whole HTTP request.
(Not long ago I used HTTP to retrieve images to dynamically generate image-based menus. Replacing the HTTP request with a local file read reduced the delay from seconds to tenths of a second.)
I assume that the obvious solution of accessing the file server filesystem directly is out of the question. If not, then it's the best and simplest option.
If not, you could use caching. Instead of getting the whole file, you just issue a HEAD request and compare its timestamp to that of a local copy.
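A sketch of that HEAD-request idea (it assumes the remote fileReader.php is changed to send a Last-Modified header; the cache path is illustrative):
$url = $_SESSION['serverURL'] . "fileReader.php?textFile=$fileName";
$localCopy = 'cache/' . md5($url);

// Issue a HEAD request only
stream_context_set_default(array('http' => array('method' => 'HEAD')));
$headers = get_headers($url, 1);
$remoteTime = isset($headers['Last-Modified']) ? strtotime($headers['Last-Modified']) : 0;

if ($remoteTime && is_file($localCopy) && filemtime($localCopy) >= $remoteTime) {
    echo file_get_contents($localCopy);  // local copy is still current
} else {
    stream_context_set_default(array('http' => array('method' => 'GET')));
    $data = file_get_contents($url);     // fetch a fresh copy
    file_put_contents($localCopy, $data);
    echo $data;
}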
Also, if you are ajax-updating a lot of clients based on the same files, you might consider looking at using comet (meteor, for example). It's used for things like chats, where a single change has to be broadcasted to several clients.

Streaming output to a file and the browser

So, I'm looking for something more efficient than this:
<?php
ob_start();
include 'test.php';
$content = ob_get_contents();
file_put_contents('test.html', $content);
echo $content;
?>
The problems with the above:
Client doesn't receive anything until the entire page is rendered
File might be enormous, so I'd rather not have the whole thing in memory
Any suggestions?
Interesting problem; don't think I've tried to solve this before.
I'm thinking you'll need to have a second request going from your front-facing PHP script to your server. This could be a simple call to http://localhost/test.php. If you use fopen-wrappers, you could use fread() to pull the output of test.php as it is rendered, and after each chunk is received, output it to the screen and append it to your test.html file.
Here's how that might look (untested!):
<?php
$remote_fp = fopen("http://localhost/test.php", "r");
$local_fp = fopen("test.html", "w");

while ($buf = fread($remote_fp, 1024)) {
    echo $buf;
    fwrite($local_fp, $buf);
}

fclose($remote_fp);
fclose($local_fp);
?>
A better way to do this is to use the first two parameters accepted by ob_start: output_callback and chunk_size. The former specifies a callback to handle output as it's buffered, and the latter specifies the size of the chunks of output to handle.
Here's an example:
$output_file = fopen('test.html', 'w');
if ($output_file === false) {
    // Handle error
}

$write_ob_to_file = function($buffer) use ($output_file) {
    fwrite($output_file, $buffer);
    // Output the string as-is
    return false;
};

ob_start($write_ob_to_file, 4096);
include 'test.php';
ob_end_flush();
fclose($output_file);
In this example, the output buffer will be flushed (sent) for every 4096 bytes of output (and once more at the end by the ob_end_flush call). Each time the buffer is flushed, the callback $write_ob_to_file will be called and passed the latest chunk. This gets written to test.html. The callback then returns false, meaning "output this chunk as is". If you wanted to only write the output to file and not to PHP's output stream, you could return an empty string instead.
Pix0r's answer is what you want unless you actually need it "included" rather than just executed. For example, if you set up login information before including test.php, it will not be available to the file if you fetch it with fopen.
If you genuinely need it included, then what you have is the simplest method; but if you want constant output, you'll need to actually write test.php in a manner that both outputs and stores the information as it goes. As far as I know there's no way to both collect the buffer and output it as you go.
Here you go: X-Sendfile. Use mod_xsendfile to send files efficiently; it's really easy.
