Cache dynamic JavaScript generated by PHP

I use JShrink with a custom function to combine eight uncompressed JavaScript files into a single compressed (minified) one, like this:
<?php
// Filename: js.php
header('Content-type: text/javascript');
require_once '../JShrink.php';

function concatenateFiles($files)
{
    $buffer = '';
    foreach ($files as $file) {
        $buffer .= file_get_contents(__DIR__ . '/' . $file);
    }
    return $buffer;
}

$js = concatenateFiles([
    'core.min.js',
    'promise.js',
    'welcome.js',
    'imagesloaded.js',
    'cropper.js',
    'translate.js',
    'custom.js',
    'masonry.js',
]);

$output = \JShrink\Minifier::minify($js);
echo $output;
Then I load this PHP file in my index page footer:
<script type="text/javascript" src="<? echo $url ?>/js/js.php"></script>
It is not being cached.
I modify my JS code daily and I don't want to keep combining the files manually, but I also need a way to get the echoed JS code cached: only that output, not every PHP file on the server.
How can I do this, and how would the cache purge process work?
Thanks in advance.

In theory, you need to send a header() with a proper expiration. In practice, that often doesn't work properly. You can spend your life googling for proper examples of "Cache-Control" and "Expires" headers, and none of what you find will work. So I suggest you read this:
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching
ETags are the modern solution to tell the browser when your resource has changed, or not.
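For example, here is a minimal sketch of ETag revalidation in js.php; the shortened file list and the one-day lifetime are assumptions, not part of the original answer:
<?php
// js.php: answer 304 when the browser's cached copy is still current.
$files = ['core.min.js', 'promise.js']; // ... the full list from the question
// Derive the ETag from the source files' modification times.
$mtimes = array_map(function ($f) { return filemtime(__DIR__ . '/' . $f); }, $files);
$etag = '"' . md5(implode(',', $mtimes)) . '"';
header('Content-Type: text/javascript');
header('ETag: ' . $etag);
header('Cache-Control: max-age=86400'); // ask the browser to revalidate after a day
if (isset($_SERVER['HTTP_IF_NONE_MATCH']) && trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    http_response_code(304); // nothing changed, no body needed
    exit;
}
// ... otherwise concatenate, minify and echo as before.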

If the cache file doesn't exist, or if any of the source files' modification timestamps is later than the cache's, render the output and save it to the cache; then echo the cache or the rendered result.
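A minimal sketch of that logic, reusing the file list from the question (the cache path is an assumption):
<?php
// Rebuild the combined file only when a source file is newer than the cache.
$sources   = ['core.min.js', 'custom.js']; // ... the full list from the question
$cacheFile = __DIR__ . '/cache/combined.min.js';
$rebuild = !file_exists($cacheFile);
if (!$rebuild) {
    foreach ($sources as $src) {
        if (filemtime(__DIR__ . '/' . $src) > filemtime($cacheFile)) {
            $rebuild = true;
            break;
        }
    }
}
if ($rebuild) {
    $js = '';
    foreach ($sources as $src) {
        $js .= file_get_contents(__DIR__ . '/' . $src);
    }
    file_put_contents($cacheFile, \JShrink\Minifier::minify($js));
}
header('Content-Type: text/javascript');
readfile($cacheFile); // purging the cache = simply deleting the cache file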

Related

PHP file_get_contents to get jquery min code

I am writing a script that goes through all my .js files and minifies them into one .php file to be included on the site. I just run this script after I have edited some JS and want to upload it to the live site.
The issue: I cannot load the content of jquery-2.1.4.min.js using file_get_contents. I have tried changing the name of the file to jquery.js, and that did not help. I do not have any complex JavaScript in the other files (just random strings), but they open fine.
With the code:
if (!file_get_contents($filename)) {
    die("dammit");
}
I get the response "dammit". All other files are fine, though, so I know the file name and path are correct. One of the weird things is that no errors come up (I have used error_reporting(-1); to make sure they would).
Is anyone else able to get the file contents of jQuery? Any ideas what would cause this, and whether it will be a problem with other JavaScript or CSS?
As requested, here is the full code:
$buffer = $jsStartBuffer;
// get a list of files in the folder (only .js files)
$fileArray = array();
if (is_dir($jsMakeFile["SourcePath"])) {
    if ($dh = opendir($jsMakeFile["SourcePath"])) {
        while (($file = readdir($dh)) !== false) {
            $file_parts = pathinfo($jsMakeFile["SourcePath"].$file);
            if ($file_parts['extension'] == "js") {
                $fileArray[] = $file;
            }
        }
    }
}
print_r($fileArray);
foreach ($fileArray as $nextRawFile) {
    $buffer .= file_get_contents($jsMakeFile["SourcePath"].$nextRawFile);
    if (!file_get_contents($jsMakeFile["SourcePath"].$nextRawFile)) {
        die("dammit");
    }
    echo $jsMakeFile["SourcePath"].$nextRawFile;
}
$buffer .= $jsEndBuffer;
echo $buffer;
$buffer = \JShrink\Minifier::minify($buffer);
file_put_contents($jsMakeFile["finalFile"]["path"].$jsMakeFile["finalFile"]["name"], $buffer);
When I put other .js files in there it is fine (I even tried lightbox.min.js and it worked!). I have tried a few different versions of jquery.min and they all seem to fail.
OK, solution found. It has to do with the actual file as downloaded from the jQuery site.
The way I solved it was:
- Go to the jQuery site and, instead of downloading the required file, open it in a new tab/window
- Copy all the content in this window
- Create a new file where required and name it as required
- Paste the content into this file and save it
This new file can now be read by file_get_contents. I would imagine this solution would help if you are trying to work with jQuery (and other) files in PHP in any way and having issues.
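One trap worth noting alongside this: the check !file_get_contents(...) cannot distinguish a failed read from content that merely evaluates to false (an empty file, or the string "0"). A stricter check looks like this:
$contents = file_get_contents($filename);
if ($contents === false) {
    die("could not read $filename"); // an actual read failure
}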

PHP: headers already sent when using fwrite but not when using fputcsv

I know the theory behind this error; however, it is now driving me crazy again. I'm using Tonic in my application. With this library you redirect all traffic to your dispatch.php script, which then executes the appropriate Resource, and that Resource returns a Response which is displayed (output) by dispatch.php.
The output method of Response looks like this:
/**
 * Output the response
 */
public function output()
{
    foreach ($this->headers as $name => $value) {
        header($name.': '.$value, true, $this->responseCode());
    }
    echo $this->body;
}
So AFAIK this tells us that you can't write anything to the PHP output in your Resource.
I now have a Resource that dynamically generates a CSV from an input CSV and outputs it to the browser (it converts one column of the data to a different format).
$csv = fopen('php://output', 'w');
// sets header
$response->__set('contentDisposition:', 'attachment; filename="' . $fileName . '"');
while (($line = fgetcsv($filePointer, 0, ",", '"')) !== FALSE) {
    // generate line
    fputcsv($csv, $line);
}
fclose($filePointer);
return $response;
This works 100% fine: no issue with headers, and the correct file is generated and offered for download. This is already confusing, because we are writing to the output before the headers are set? What does fputcsv actually do?
I have a second resource that does a similar thing but it outputs a custom file format (text file).
$output = fopen('php://output', 'w');
// sets header
$response->__set('contentDisposition:', 'attachment; filename="' . $fileName . '"');
while (($line = fgetcsv($filePointer, 0, ",", '"')) !== FALSE) {
    // generate a record (multiple lines) not shown / snipped
    fwrite($output, $record);
}
fclose($filePointer);
return $response;
The only difference is that it uses fwrite instead of fputcsv, and bang:
headers already sent by... // line number = fwrite()
This is very confusing! IMHO it should actually fail in both cases? Why does the first one work? How can I get the second one to work?
(I can generate a huge string containing the whole file, put that into the response's body, and it works. However, the files can be rather big (up to 50 MB), and hence I want to avoid this.)
$record is not set, generating an error of level NOTICE. If you have display_errors enabled, PHP will put this error in the output before sending the headers.
Turn display_errors off and keep an eye on your logs instead.
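In configuration terms that means something like this (a minimal sketch; put it in your bootstrap or set the equivalent php.ini directives):
ini_set('display_errors', '0'); // never print errors into the response body
ini_set('log_errors', '1');     // send them to the error log instead
error_reporting(E_ALL);         // still report everything, including notices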
Here is my solution. I'm not going to mark it as the answer for a while, because maybe someone comes up with something better (simpler) than this.
First, a comment about fwrite and fputcsv:
fputcsv has a completely different source and not much in common with fwrite (it does not call fwrite internally; it's a separate function in the C source code). Since I don't know C, I can't tell why they behave differently, but they do.
Solution:
The generated files can be "large" depending on the input, and hence generating the whole file by string concatenation and keeping it in memory isn't a great solution.
I googled a bit and found mod_xsendfile for Apache. It works by setting a custom header in PHP containing the path of the file to be sent to the user. The module then removes that custom header and sends the file as the response.
The problem with mod_xsendfile is that it is not compatible with mod_rewrite, which I use too: you will get 404 errors. To solve this you need to add
RewriteCond %{REQUEST_FILENAME} !-f
in the appropriate place in the Apache config (don't rewrite if the request is for an actual, physically existing file). However, that's not enough: you need to set the X-Sendfile header in a PHP script that was not rewritten, i.e. an actually existing PHP file.
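Put together, the relevant Apache configuration might look like this (a sketch; the dispatch.php rewrite rule and the paths are assumptions based on the Tonic setup described above):
# mod_xsendfile (assumed installed and loaded)
XSendFile On
XSendFilePath /path/to/generatedFiles
# Front-controller rewrite, skipping physically existing files
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ dispatch.php [QSA,L]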
So, at the end of the \Tonic\Resource class generating the file, I redirect to a script of the kind outlined above:
$response->__set('location', $url . "?fileName=" . urlencode($fileName));
$response->code = \Tonic\Response::FOUND;
return $response;
In the download script we redirect to in the above snippet, just do this (validation omitted):
$filePath = trim($_GET['fileName']);
header('X-Sendfile: ' . $filePath);
header('Content-Disposition: attachment; filename="' . $filePath . '"');
and the browser will display a download dialog for the generated file.
You will also need to create a cron job to delete the generated files:
/usr/bin/find /path/to/generatedFiles/ -type f -mmin +60 -exec rm {} +
This will delete all files older than 60 minutes in the directory /path/to/generatedFiles.
I use Ubuntu Server, so you can add it to the file
/etc/cron.daily/standard
or create a new file in that directory, or create a new file in /etc/cron.hourly containing that command.
Note:
I name the generated files after the SHA-1 hash of the input CSV file, so the name is unique, and if someone repeats the same request several times in a short period you can just return the already-generated file a second time.
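A sketch of that naming scheme (the generator helper is hypothetical):
// Name the generated file after the SHA-1 of the input CSV; reuse it if present.
$hash    = sha1_file($inputCsvPath);
$outFile = '/path/to/generatedFiles/' . $hash . '.txt';
if (!file_exists($outFile)) {
    generateCustomFormat($inputCsvPath, $outFile); // hypothetical generator
}
// then redirect to the download script with $outFile, as shown above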

fetch templates from database/string

I store my templates as files, and would also like the option of storing them in a MySQL database.
My template system:
// function of the Template class, where $file is a path to a file
function fetch() {
    ob_start();
    if (is_array($this->vars)) extract($this->vars);
    include($file);
    $contents = ob_get_contents();
    ob_end_clean();
    return $contents;
}

function set($name, $value) {
    $this->vars[$name] = is_object($value) ? $value->fetch() : $value;
}
usage:
$tpl = & new Template('path/to/template');
$tpl->set('titel', $titel);
Template example:
<h1><?=$titel?></h1>
<p>Lorem ipsum...</p>
My approach
Selecting the template from the database as a string,
what I get is something like $tpl = "<h1><?=$titel?>...";
Now I would like to pass it to the template system, so I extended my constructor and the fetch function:
function fetch() {
    if (is_array($this->vars)) extract($this->vars);
    ob_start();
    if (is_file($file)) {
        include($file);
    } else {
        // first idea: eval($file);
        // second idea: print $file;
    }
    $contents = ob_get_contents();
    ob_end_clean();
    return $contents;
}
'eval' gives me a parsing exception, because it interprets the whole string as PHP, not just the PHP part.
'print' is really strange: it doesn't print the stuff between the PHP tags, but I can see it in the source code of the page; the PHP parts are being ignored.
So what should I try instead?
Maybe not the best solution, but it's simple and it should work:
- fetch your template from the db
- write a file with the template
- include this file
- (optional: delete the file)
If you add a timestamp column to your template table, you can use the filesystem as a cache. Just compare the timestamps of the file and the database row to decide whether it's sufficient to reuse the file.
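A minimal sketch of that idea (the table and column names are assumptions):
// Cache a DB-stored template on disk; rewrite it when the DB row is newer.
$cacheFile = __DIR__ . '/tpl_cache/' . $name . '.php';
$stmt = $pdo->prepare('SELECT body, UNIX_TIMESTAMP(updated_at) AS ts FROM templates WHERE name = ?');
$stmt->execute([$name]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);
if (!file_exists($cacheFile) || filemtime($cacheFile) < $row['ts']) {
    file_put_contents($cacheFile, $row['body']);
}
include $cacheFile; // now behaves exactly like a file-based template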
If you prepend '?>' to the string you eval, it should work:
<?php
$string = 'hello <?php echo $variable; ?>';
$variable = "world";
eval('?>' . $string);
But you should know that eval() is rather slow: its resulting op-code cannot be cached in APC (or similar). You should find a way to cache your templates on disk. For one thing, you wouldn't have to pull them from the database every time they're needed, and you could make use of regular op-code caching (done transparently by APC).
Every time I see a half-baked, home-grown "template engine", I ask myself why the author did not rely on one of the many existing template engines out there. Most of them have already solved most of the problems you could possibly have. Smarty (and Twig, phpTAL, …) make it a real charm to pull template sources from wherever you like (while trying to maintain optimal performance). Do you have any special reasons for not using one of these?
I would do pretty much the same thing as tweber, except I would prefer depending on the local file timestamps rather than the DB.
Something like this: each file has a TTL (expiration time) of, let's say, 60 seconds. The real reason is to avoid hitting the DB too hard or too often needlessly; you'll quickly realize just how much faster filesystem access is compared to the network and MySQL, especially if the MySQL instance is running on a remote server.
# implement a function that gets the contents of the file (the key here is the filename)
# from the DB and saves it to disk.
function fetchFreshCopy($filename) {
    # mysql_connect(); ...
}

if (is_array($this->vars)) extract($this->vars);
ob_start();
# first check if the file exists already
if (file_exists($file)) {
    # now check the timestamp of the file's creation to know whether it has expired:
    $mod_timestamp = filemtime($file);
    if ((time() - $mod_timestamp) >= 60) {
        # the file has expired; fetch a fresh copy from the DB and save it to disk
        fetchFreshCopy($file);
    }
} else {
    # the file doesn't exist at all; fetch and save it!
    fetchFreshCopy($file);
}
include($file);
$contents = ob_get_contents();
ob_end_clean();
return $contents;
} # closes the surrounding fetch() method
Cheers, hope that's useful.

Serving php as css/js: Is it fast enough? What drawbacks are there?

I've recently started getting into the area of optimizing performance and load times client-side: compressing CSS/JS, gzipping, paying attention to YSlow, etc.
I'm wondering, while trying to achieve all these micro-optimizations, what are the pros and cons of serving PHP files as CSS or JavaScript?
I'm not entirely sure where the bottleneck is, if there is one. I would assume that, between an identical CSS and PHP file, the "pure" CSS file would be slightly faster, simply because it doesn't need to parse PHP code. However, in a PHP file you have more control over the headers, which may be more important(?).
Currently I'm doing a filemtime() check on a "trigger" file, and with some PHP voodoo writing a single compressed CSS file from it, combined with several other files in a defined group. This creates a file like css/groupname/301469778.css, which the PHP template catches and uses to update the HTML tags with the new file name. It seemed like the safest method, but I don't really like the server cache filling up with junk CSS files after several edits. I also don't bother doing this for small "helper" CSS files that are only loaded for certain pages.
1. If 99% of my output is generated by PHP anyway, what's the harm (if any) in using PHP to directly output CSS/JS content? (assuming there are no PHP errors)
2. If using PHP, is it a good idea to mod_rewrite the files to use the css/js extension for any edge cases of browser misinterpretation? Can't hurt? Not needed?
3. Are there any separate guidelines/methods for CSS and JavaScript? I would assume they would be equal.
4. Which is faster: a single CSS file with several @imports, or a PHP file with several readfile() calls?
5. In what other ways does using PHP affect speed?
6. Once the file is cached in the browser, does it make a difference anymore?
I would prefer to use PHP with .htaccess because it is much simpler, but in the end I will use whatever method is best.
OK, so here are your direct answers:
1. No harm at all, as long as your code is fine. The browser won't notice any difference.
2. No need for mod_rewrite; browsers usually don't care about the URL (and often not even about the MIME type).
3. CSS files are usually smaller, and often one file is enough, so there is less need to combine. Be aware that combining files from different directories affects images referenced in the CSS, as their paths remain relative to the CSS URL.
4. readfile() will definitely be faster, as @import requires multiple HTTP requests and you want to reduce those as much as possible.
5. For a single HTTP request, PHP may be slightly slower. But you lose the possibility of combining files unless you do that offline.
6. No, but browser caches are unreliable, and an improper web server config may cause the browser to unnecessarily re-fetch the URL.
It's impossible to give you a much more concrete answer, because it depends a lot on your project details.
We are developing really large DHTML/AJAX web applications with about 2+ MB of JavaScript code, and they still load quickly with some optimizations:
- Try to reduce the number of script URLs included. We use a simple PHP script that loads a bunch of .js files and sends them in one go to the browser (all concatenated). This will load your page a lot faster when you have a lot of .js files, as we do, since the overhead of setting up an HTTP connection is usually much higher than actually transferring the content itself. Note that the browser needs to download JS files synchronously.
- Be cache friendly. Our HTML page is also generated via PHP, and the URL to the scripts contains a hash that depends on the file modification times. The PHP script above that combines the .js files then checks the HTTP cache headers and sets a long expiration time, so that the browser does not even have to load any external scripts the second time the user visits the page.
- GZIP-compress the scripts. This will reduce your code by about 90%. We don't even have to minify the code (which makes debugging easier).
So, yes, using PHP to send CSS/JS files can improve the loading time of your page a lot, especially for large pages.
EDIT: You may use this code to combine your files:
function combine_files($list, $mime) {
    if (!is_array($list))
        throw new Exception("Invalid list parameter");
    ob_start();
    $lastmod = filemtime(__FILE__);
    foreach ($list as $fname) {
        $fm = @filemtime($fname);
        if ($fm === false) {
            $msg = $_SERVER["SCRIPT_NAME"].": Failed to load file '$fname'";
            if ($mime == "application/x-javascript") {
                echo 'alert("'.addcslashes($msg, "\0..\37\"\\").'");';
                exit(1);
            } else {
                die("*** ERROR: $msg");
            }
        }
        if ($fm > $lastmod)
            $lastmod = $fm;
    }
    //--
    $if_modified_since = preg_replace('/;.*$/', '',
        $_SERVER["HTTP_IF_MODIFIED_SINCE"]);
    $gmdate_mod = gmdate('D, d M Y H:i:s', $lastmod) . ' GMT';
    $etag = '"'.md5($gmdate_mod).'"';
    if (headers_sent())
        die("ABORTING - headers already sent");
    if (($if_modified_since == $gmdate_mod) or
        ($etag == $_SERVER["HTTP_IF_NONE_MATCH"])) {
        if (php_sapi_name() == 'CGI') {
            header("Status: 304 Not Modified");
        } else {
            header("HTTP/1.0 304 Not Modified");
        }
        exit();
    }
    header("Last-Modified: $gmdate_mod");
    header("ETag: $etag");
    fc_enable_gzip();
    // Cache-Control
    $maxage = 30*24*60*60; // 30 days (version support in the HTML code!)
    $expire = gmdate('D, d M Y H:i:s', time() + $maxage) . ' GMT';
    header("Expires: $expire");
    header("Cache-Control: max-age=$maxage, must-revalidate");
    header("Content-Type: $mime");
    echo "/* ".date("r")." */\n";
    foreach ($list as $fname) {
        echo "\n\n/***** $fname *****/\n\n";
        readfile($fname);
    }
}

function files_hash($list, $basedir="") {
    $temp = array();
    $incomplete = false;
    if (!is_array($list))
        $list = array($list);
    if ($basedir != "")
        $basedir = "$basedir/";
    foreach ($list as $fname) {
        $t = @filemtime($basedir.$fname);
        if ($t === false)
            $incomplete = true;
        else
            $temp[] = $t;
    }
    if (!count($temp))
        return "ERROR";
    return md5(implode(",", $temp)) . ($incomplete ? "-INCOMPLETE" : "");
}

function fc_compress_output_gzip($output) {
    $compressed = gzencode($output);
    $olen = strlen($output);
    $clen = strlen($compressed);
    if ($olen)
        header("X-Compression-Info: original $olen bytes, gzipped $clen bytes ".
            '('.round(100/$olen*$clen).'%)');
    return $compressed;
}

function fc_compress_output_deflate($output) {
    $compressed = gzdeflate($output, 9);
    $olen = strlen($output);
    $clen = strlen($compressed);
    if ($olen)
        header("X-Compression-Info: original $olen bytes, deflated $clen bytes ".
            '('.round(100/$olen*$clen).'%)');
    return $compressed;
}

function fc_enable_gzip() {
    if (isset($_SERVER['HTTP_ACCEPT_ENCODING']))
        $AE = $_SERVER['HTTP_ACCEPT_ENCODING'];
    else
        $AE = $_SERVER['HTTP_TE'];
    $support_gzip = !(strpos($AE, 'gzip') === FALSE);
    $support_deflate = !(strpos($AE, 'deflate') === FALSE);
    if ($support_gzip && $support_deflate) {
        $support_deflate = $PREFER_DEFLATE; // note: $PREFER_DEFLATE must be defined by you
    }
    if ($support_deflate) {
        header("Content-Encoding: deflate");
        ob_start("fc_compress_output_deflate");
    } else {
        if ($support_gzip) {
            header("Content-Encoding: gzip");
            ob_start("fc_compress_output_gzip");
        } else {
            ob_start();
        }
    }
}
Use files_hash() to generate a unique hash string that changes whenever your source files change, and combine_files() to send the combined files to the browser. So, use files_hash() when generating the HTML code for the <script> tag, and combine_files() in the PHP script that is loaded via that tag. Just place the hash in the query string of the URL.
<script language="JavaScript" src="get_the_code.php?hash=<?=files_hash($list_of_js_files)?>"></script>
Make sure you specify the same $list in both cases.
You're talking about serving static files via PHP; there's really little point doing that, since it's always going to be slower than Apache serving a normal file. A CSS @import will be quicker than PHP's readfile(), but the best performance will be gained by serving one minified CSS file that combines all the CSS you need to use.
It sounds like you're on the right track, though. I'd advise pre-processing your CSS and saving it to disk. If you need to set special headers for things like caching, just do this in your VirtualHost directive or .htaccess file.
To avoid lots of cached files you could use a simple file-naming convention for your minified CSS. For example, if your main CSS file is called main.css and it references reset.css and forms.css via @import, the minified version could be called main.min.css.
When this file is regenerated it simply replaces the old one. If you include a reference to that file in your HTML, you could send the request to PHP if the file doesn't exist, combine and minify the files (via something like YUI Compressor), and save the result to disk, so it is served via normal HTTP for all future requests.
When you update your CSS, just delete the main.min.css version and it will automatically be regenerated.
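A sketch of that regenerate-on-miss flow (file names follow the example above; the minifier call stands in for something like YUI Compressor):
// css.php: serve main.min.css, rebuilding it if it has been deleted.
$minified = __DIR__ . '/main.min.css';
if (!file_exists($minified)) {
    $css = '';
    foreach (['reset.css', 'forms.css', 'main.css'] as $f) {
        $css .= file_get_contents(__DIR__ . '/' . $f);
    }
    file_put_contents($minified, minify_css($css)); // hypothetical minifier
}
header('Content-Type: text/css');
readfile($minified);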
You can do the preprocessing with an ANT build. Sorry, the post is German, but I've tried translate.google.com and it worked fine :-) So you can use the post as a tutorial to achieve better performance...
I would preprocess the files and save them to disk, just like simonrjones said. Caching and the like should be done by the dedicated elements: the Apache web server, HTTP headers, and the browser.
While slower, one advantage / reason you might have to do this is to put dynamic content into the files on the server, but still have them appear to be JS or CSS from the client's perspective.
Like this, for example, passing the environment from PHP to JavaScript (quoted, so the value arrives as a JS string):
var environment = "<?=getenv('APPLICATION_ENV');?>";
// More JS code here ...

Generate php source code based on php array

I've got a non-modifiable function which takes several seconds to finish.
The function returns an array of objects. The result only changes about once per day.
To speed things up I wanted to cache the result using APC, but the hosting provider (a shared hosting environment) does not offer any memory-caching solutions (APC, memcache, ...).
The only solution I found was using serialize() to store the data in a file and then unserializing it back again.
What about generating PHP source code out of the array? Later I could simply call
require 'data.php';
to get the data into a predefined variable.
Thanks!
UPDATE: Storing the resulting .html is not an option because the output is user-dependent.
Do you mean something like this?
<?php
// File: data.php
return array(
    32,
    42
);

// Another file
$result = include 'data.php';
var_dump($result);
This is already possible. To update your file, you can use something like this:
file_put_contents('data.php', '<?php return ' . var_export($array, true) . ';');
Update:
However, there is also nothing wrong with serialize()/unserialize() and storing the serialized array in a file.
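For completeness, a sketch of the serialize()-to-file variant with the once-per-day rebuild the question describes (the slow function's name is a placeholder):
// Cache the slow function's result in a file for one day.
$cacheFile = __DIR__ . '/cache/result.ser';
if (file_exists($cacheFile) && time() - filemtime($cacheFile) < 86400) {
    $result = unserialize(file_get_contents($cacheFile));
} else {
    $result = slowNonModifiableFunction(); // placeholder for the real call
    file_put_contents($cacheFile, serialize($result));
}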
Why not just cache the resulting HTML page that is generated? You could do that fairly simply:
// Check to see if the cached file exists.
// You could run a cron job to delete this at a certain time,
// or have the cache file expire after a set amount of time.
if (file_exists('cache.html')) {
    include('cache.html');
    exit;
}
ob_start(); // start capturing the output buffer
// ... do output ...
$output = ob_get_contents();
$handle = fopen('cache.html', 'w');
fwrite($handle, $output);
fclose($handle);
ob_end_flush();
You could also just write the results to a database, using the function arguments as the key.
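A sketch of that approach (the table layout and the function name are assumptions):
// DB-backed cache keyed on the serialized function arguments.
$key = md5(serialize($args));
$stmt = $pdo->prepare('SELECT value FROM result_cache WHERE cache_key = ?');
$stmt->execute([$key]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);
if ($row) {
    $result = unserialize($row['value']);
} else {
    $result = slowNonModifiableFunction(); // placeholder for the real call
    $ins = $pdo->prepare('REPLACE INTO result_cache (cache_key, value) VALUES (?, ?)');
    $ins->execute([$key, serialize($result)]);
}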
