I have a script called "image.php" that is used to count impressions and then print the image.
This script is called in this way:
<img src="path/image.php?id=12345" />
It's used very often by my users; I see thousands of requests per day.
So I am trying to understand the best way to output the image at the end of this script:
Method 1 (currently in use):
header("Content-type: $mime"); //$mime is found with getimagesize function
readfile("$image_url");
exit;
Method 2 (pretty sure this is the slowest):
header("Content-type: $mime");
echo file_get_contents("$image_url");
exit;
Method 3:
header('Location: '.$image_url);
exit();
Is method 3 better / faster than method 1?
OK, first of all: Method 3 is way faster, because it redirects to the original file.
The first two methods need file access to read the file, and they don't use the browser cache!
Also, since you store the rendered images, you can better let Apache handle your static files.
Apache is way faster than PHP, and it uses the right browser caching (3 or 4 times faster wouldn't be a surprise).
What happens is that when you request a static file, Apache sends the Last-Modified header.
If your client requests the same image again, it sends the If-Modified-Since header with that same date. If the file hasn't changed, your server responds with a 304 Not Modified header and no data, which saves you a lot of I/O operations (besides that, the ETag header is also used).
For your impression count you could create a cronjob that parses your Apache access logs, so the end user won't even notice it. But in your case it's easier to count the impressions in your script and then redirect.
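A minimal sketch of what that counting-then-redirecting image.php could look like; the stats table and column names are made up, and $image_url is resolved from the id the same way your current script does it:

// image.php?id=12345: count the impression, then let Apache serve the file
$id = isset($_GET['id']) ? (int)$_GET['id'] : 0;

// hypothetical impression counter; any fast increment will do here
$pdo = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');
$pdo->prepare('UPDATE images SET impressions = impressions + 1 WHERE id = ?')
    ->execute(array($id));

header('Location: ' . $image_url); // Method 3: redirect to the static file
exit;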
Essentially, readfile() reads the file directly into the output buffer, while file_get_contents() loads the whole file into memory as a string. When you then echo the result, the data is copied from memory into the output buffer, so it is handled twice, making it roughly two times slower than readfile().
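If you want to see that difference yourself, a quick sketch; large_image.jpg is a placeholder for any big local file, and you would run it once per branch and compare the reported peaks:

header('Content-Type: image/jpeg');
if (isset($_GET['method']) && $_GET['method'] === 'fgc') {
    echo file_get_contents('large_image.jpg'); // whole file held in memory as a string first
} else {
    readfile('large_image.jpg'); // streamed to the output buffer in chunks
}
error_log('peak memory: ' . memory_get_peak_usage(true));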
I want to force the download of a PDF, DOC or DOCX file.
With the following code, PDF files get opened in the browser tab instead of being downloaded.
I have a table with a download link in every row, and I want the file to download on click of that link.
foreach ($a as $id => $item) {
    // Note: substr($item['f_resume'], 63) is the file name
    echo '<tr><td><a href="http://staging.experiencecommerce.com/ecsite-v3/uploads/' . substr($item['f_resume'], 63) . '">';
    echo '</a></td><td>' . $item['f_date'] . '</td></tr>';
}
I went through some questions on SO with the same problem and tried their solutions, but in vain.
When I include the solution inside the foreach, the page downloads a file on load; when I place the solution outside, the PHP script itself gets downloaded.
Where am I going wrong?
You can set headers that will force downloading:
header('Content-Type: application/force-download');
header('Content-Disposition: attachment; filename="filenamehere.pdf"');
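Note that the headers only tell the browser what to do; in a PHP script you would typically follow them with the file body, e.g. via readfile(). A rough sketch, with a placeholder path:

$path = '/path/to/filenamehere.pdf'; // placeholder

header('Content-Type: application/force-download');
header('Content-Disposition: attachment; filename="filenamehere.pdf"');
header('Content-Length: ' . filesize($path));
readfile($path); // stream the file body after the headers
exit;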
If you're not using PHP to provide the content of those files, you can set the headers using e.g. .htaccess (requires mod_headers):
<FilesMatch "\.pdf$">
    FileETag None
    <IfModule mod_headers.c>
        Header set Content-Type "application/force-download"
    </IfModule>
</FilesMatch>
After our whole chat session I think we can leave this answer here, just for future reference:
As seen in your initial post, once you click the link you relinquish all control to the browser, and it will treat the file as it sees fit. Usually this involves finding whatever application or plugin the system has to handle your file.
Whenever you want to force the download of a file, all you have to do is divorce the presentation itself from the task at hand. In this particular case:
1 - Create a new script that will identify the file via the parameters passed and force the download of it, as seen in the examples at php.net/manual/en/function.readfile.php.
2 - Rework the presentation so the links no longer point to the file itself, but to the new script with the appropriate parameters (like, for example, download_file.php?file_id=#FILE_ID#).
3 - Handle the case in which the file cannot be found with, for example, die("The file could not be found") before setting the headers.
One word of advice: do not use the file location as a parameter! Use instead something that you can look up in a database to then retrieve the file location. If you pass the file location itself as a parameter, nothing is stopping me from doing this:
http://yoursite.com/download_file.php?file=download_file.php
http://yoursite.com/download_file.php?file=index.php
http://yoursite.com/download_file.php?file=whatever_file_there_is
Under the right circumstances, such as autodetection of the content type for the requested file, that would allow me to read your code and exploit any possible flaws.
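Putting the three steps and that advice together, a minimal sketch of such a download_file.php; the files table, its columns and the connection details are all hypothetical:

// download_file.php?file_id=123: an id is looked up, never a raw path
$id = isset($_GET['file_id']) ? (int)$_GET['file_id'] : 0;

// hypothetical lookup table mapping ids to paths outside user control
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare('SELECT path, name FROM files WHERE id = ?');
$stmt->execute(array($id));
$file = $stmt->fetch(PDO::FETCH_ASSOC);

if (!$file || !is_file($file['path'])) {
    die("The file could not be found"); // step 3: fail before any headers are sent
}

header('Content-Type: application/force-download');
header('Content-Disposition: attachment; filename="' . $file['name'] . '"');
header('Content-Length: ' . filesize($file['path']));
readfile($file['path']); // step 1: force the download, as in the php.net readfile examples
exit;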
A second and final note of advice: PHP can only output one thing at a time. If you want it to output a website, you can't output a PDF file afterwards. That's why, among other reasons, you divorce the different tasks at hand, and also why everything went awry when you tried including the download script directly after each link was printed.
If it helps, imagine PHP not as your usual real-time programming language but as a printer: it will print everything you tell it to and serve it in reasonably sized chunks. There's no stopping it until the end is reached, and no exploring two opposite branching code paths unless you call the script again under the appropriate conditions.
Hope the chat helped you.
I have a ton of data to send to the browser, maybe 100 MB or so. I've chunked it up into smaller files so I can simulate streaming. Let's say I have 200 files of 500 KB each. I build an array of the 200 files in JavaScript, then loop over it and make an Ajax call for each. It works fine. Then I wanted to improve it, so I gzipped everything on the server, and the chunks went down to about 20% of their original size. My Ajax call requests the following file:
fileserver.php?file=/temp/media_C46_20110719_113332_ori-0.js.gz
In fileserver.php, I have, very simply:
$filepath = isset($_GET['file']) ? $_GET['file'] : '';
if ($filepath != '') {
    if (substr($filepath, -2, 2) == 'gz') {
        header("Content-Type: text/plain");
        header("Content-Length: " . (string)(filesize($filepath)));
        header("Content-Encoding: gzip");
        readfile($filepath);
    }
    else {
        header("Location: " . $filepath);
    }
}
Again, this totally works. The problem is that it takes forever! Looking at the network tab in Chrome, it takes 15 seconds or so to get a 100 KB chunk. I can download that file directly in less than a second, and the PHP script above should take virtually no time to run. I know the client (browser) needs to spend a bit of time inflating the content, but it's got to be less than a second. So what's taking 15 seconds? Are there any other tools I can use to check this out?
I know I could set those headers in Apache instead, but I don't have access to that, and doing it in PHP is functionally equivalent, right? Are those the correct headers to set?
I just figured out the problem. The filesize() function wasn't getting the correct path, so it was printing as blank. I fixed that to send the correct value, and it works much, much faster now.
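For future readers, a small guard would have surfaced this immediately: only send Content-Length when filesize() actually returns a number, instead of letting a failed lookup print as blank:

$size = @filesize($filepath);
if ($size === false) {
    header('HTTP/1.1 404 Not Found'); // wrong path: fail loudly, don't send an empty length
    exit;
}
header('Content-Length: ' . $size);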
My PHP script outputs the contents of an .sql file after it has been called by a POST request from my Delphi desktop client.
Here is what is happening:
My desktop client sends a POST request to my PHP script.
The script then calls mysqldump and generates a file, xdb_backup.sql.
The script then does include "xdb_backup.sql";, which prints the dump and returns it to the desktop client, after which the script deletes the SQL file.
The problem is that the size of the SQL file can vary (for testing, I generated one that is 6 MB). I would like my desktop client to be able to show progress, but the PHP script does not expose its size, so I have no Progressbar.Max value to assign.
How can I make my PHP script let the client know how big the file is before the whole transfer is over?
Note: Downloading the SQL file is not an option, as the script has to destroy it. :)
You would do
$fsize = filesize($file_path);
where $file_path is the path to the generated file xdb_backup.sql, to get the file size on the server, and then return the headers with the following line attached:
header("Content-Length: " . $fsize);
Take a look at http://www.hotscripts.com/forums/php/47774-download-script-not-sending-file-size-header-corrupt-files-since-using-remote-file-server.html which explains a PHP download script.
You have to send a Content-Length header using the header() function, something like this:
header('Content-Length: '.filesize('yourfile.sql'));
You may want to send the file using readfile instead of include.
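A minimal sketch combining both suggestions with the cleanup your question requires (the file name is taken from your description):

$path = 'xdb_backup.sql';

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path)); // lets the client size its progress bar

readfile($path); // streams the raw bytes; include would execute any PHP code inside the dump
unlink($path);   // the dump has to be destroyed afterwards
exit;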
You can set the Content-Length header with the size of xdb_backup.sql
Question about this helper http://codeigniter.com/user_guide/helpers/download_helper.html
If, for example, program.exe weighs 4 GB, will it take a lot of PHP memory to read and deliver that file?
$data = file_get_contents("/path/to/program.exe"); // Read the file's contents
$name = 'software.exe';
force_download($name, $data);
The force_download function just sets the proper HTTP headers to make the client's browser download the file. So it won't open the file, just pass its URL to the client.
Check the helper source code, if you need: https://bitbucket.org/ellislab/codeigniter-reactor/src/31b5c1dcf2ed/system/helpers/download_helper.php
Edit: I'd suggest creating your own version of the helper and, instead of using strlen to get the file size, using the PHP function filesize, which takes only the file name as an argument and returns the size in bytes.
More info, at http://www.php.net/manual/en/function.filesize.php
Yea... that could get... bad...
file_get_contents reads the entire contents of a file into a string. For large files, that can get, well, bad. I would look into readfile. Remember, too: since CI automatically caches when you load a view, there will be no discernible benefit to readfile if it is used in a CI view. It would almost be better to handle this with an external script, or by outputting directly from the controller and not calling the view at all.
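If the view layer is the obstacle, a sketch of one way to stream straight from a controller method with readfile() instead of the helper; the path and name below are placeholders:

$path = '/path/to/program.exe'; // placeholder
$name = 'software.exe';

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $name . '"');
header('Content-Length: ' . filesize($path)); // filesize() never loads the file itself

readfile($path); // streamed in chunks instead of built up as a 4 GB string
exit;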
I have a dynamically generated PHP image that I need to write to a file to serve later. My problem is that I need this image to carry appropriate expiration headers. There is a massive number of these, and their headers vary individually, file by file, making .htaccess controls not an option.
I can write expiration headers if I'm outputting the image directly to the browser with this:
header("Content-Type: image/jpeg");
header('Expires: "' . gmdate("D, d M Y H:i:s", $expirationDate) . '"');
imagepng($image, NULL);
Or I can write the image to a file to be used later with this:
imagepng($image, $filepath);
But I can't for the life of me figure out how to combine those two and write the image to a file while including its expiration headers. How would you go about writing an image file with an expires header?
I think your best bet is to serve the file just as you are, something like:
header("Content-Type: image/jpeg");
header('Expires: "' . gmdate("D, d M Y H:i:s",
$expirationDate) . '"');
imagepng($image, NULL);
Sure, you're using PHP to serve a static file, but the Expires header is going to limit repeat requests.
Update: Since $image is a generated file, on the first request generate and save the image, then output it. On subsequent requests, just output the already generated image. Essentially, the Expires header controls the browser's cache, while you need to implement some kind of caching on the server to avoid generating the same output multiple times.
So you're looking at two different kinds of caching. You can do them in the same script or with a combination of two scripts; however you want, really.
Unless you can set a standard Expires header with Apache (which you say you can't, since it varies), I believe this is your best, if not only, choice.
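A minimal sketch of those two caching layers working together; generate_image() and $imageId stand in for however you currently build $image:

$cachePath = '/var/cache/images/' . $imageId . '.png'; // hypothetical cache location

if (!file_exists($cachePath)) {
    $image = generate_image($imageId); // hypothetical generator
    imagepng($image, $cachePath);      // server-side cache: render once, reuse after
    imagedestroy($image);
}

header('Content-Type: image/png');
header('Expires: ' . gmdate('D, d M Y H:i:s', $expirationDate) . ' GMT'); // browser cache
readfile($cachePath);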
Of course there is the convoluted and complex way:
Set up mod_rewrite to send requests for missing images to your php script.
Append some session id to the image request (so it's unique to the browser).
Have the php script send the expire header, and the image content.
Have the php script link the real static image to the session specific image name.
Or something like that. I'd just serve them all up using php.
Update: Or use mod_asis from VolkerK's great answer.
If you really want to store both the headers and the content in files on the server you could use mod_asis:
In the server configuration file, associate files with the send-as-is handler e.g.
AddHandler send-as-is asis
The contents of any file with a .asis extension will then be sent by Apache to the client with almost no changes. In particular, HTTP headers are derived from the file itself according to mod_cgi rules, so an asis file must include valid headers, and may also use the CGI Status: header to determine the HTTP response code.
Your PHP script would then write both the headers and the content to files that are handled as send-as-is by the Apache webserver.
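A sketch of that writing side, with the expiry and path as placeholders; note the blank line that terminates the header block of an .asis file:

// render the image into memory, then store headers and body as one .asis file
ob_start();
imagepng($image);
$png = ob_get_clean();

$headers = "Content-Type: image/png\n"
         . 'Expires: ' . gmdate('D, d M Y H:i:s', $expirationDate) . " GMT\n"
         . 'Content-Length: ' . strlen($png) . "\n\n"; // blank line ends the headers

file_put_contents($filepath . '.asis', $headers . $png);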
Perhaps all you have to do is exactly... nothing, except write the image data to disk.
Depending on the web server you're using, some caching mechanisms work out of the box for static files (which you would create with the PHP script).
If you're using apache's httpd take a look at http://httpd.apache.org/docs/2.2/mod/core.html#fileetag and http://httpd.apache.org/docs/2.2/caching.html. By default httpd will also send a last-modified header and it supports If-Modified-Since request headers.
When your PHP script changes the image files, the ETag changes as well and/or the If-Modified-Since condition is met, and httpd sends the data. Otherwise it only sends a response saying "nothing has changed" to the client.