Is there an alternative to php readfile() in safe mode server? - php

I host my site on shared hosting, which recently switched the server to safe mode (without even notifying me).
I use a feature that downloads files from the server using PHP's readfile() function.
Now, in safe_mode, this function is no longer available.
Is there a replacement or a workaround so that the file can still be downloaded by the user?
Thanks

As I wrote in the comments, readfile() is disabled by including it in the disable_functions php.ini directive. It has nothing to do with safe mode. Check which functions are disabled and see whether you can use another filesystem function to do what readfile() does.
To see the list of disabled functions, use:
var_dump(ini_get('disable_functions'));
You might use:
// for any file
$file = fopen($filename, 'rb');
if ($file !== false) {
    fpassthru($file);
    fclose($file);
}

// for any file, if fpassthru() is disabled
$file = fopen($filename, 'rb');
if ($file !== false) {
    while (!feof($file)) {
        echo fread($file, 4096);
    }
    fclose($file);
}

// for small files;
// this should not be used for large files, as it loads the whole file into memory
$data = file_get_contents($filename);
if ($data !== false) {
    echo $data;
}

// if and only if everything else fails, there is a *very dirty* alternative;
// this is *dirty* mainly because it "explodes" the data into "lines" as if it were
// textual data
$data = file($filename);
if ($data !== false) {
    echo implode('', $data);
}

I assume you are using readfile() to load remote files, since you said "from the server". If that is correct, your problem is not safe mode but that opening URLs with the normal PHP file functions is no longer permitted (the allow_url_fopen setting has been disabled).
In that case, you can use PHP's cURL functions to download files. (Note that file_get_contents() goes through the same URL wrappers, so it is only an alternative while allow_url_fopen is enabled.)
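A minimal sketch of the cURL approach, assuming the file really is fetched from a remote URL (the URL, filename, and MIME type below are placeholders, and error handling is kept to a bare minimum):
<?php
// Sketch: fetch a remote file with cURL and pass it on to the user.
$url = 'http://www.example.com/file.zip'; // placeholder URL

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
$data = curl_exec($ch);

if ($data === false) {
    exit('Download failed: ' . curl_error($ch));
}
curl_close($ch);

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="file.zip"');
header('Content-Length: ' . strlen($data));
echo $data;
?>
Keep in mind that CURLOPT_RETURNTRANSFER holds the whole response in memory; for very large files, writing to a file handle via CURLOPT_FILE is the usual alternative.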

Related

Why doesn't fread (PHP) read the entire file?

I have this PHP code that loads a local file:
$filename = "fille.txt";
$fp = fopen($filename, "rb");
$content = fread($fp, 25699);
fclose($fp);
print_r($content);
With this code I can see all the contents of the file. But when I change $filename to an external link, like:
$filename = "https:/.../texts/fille.txt";
I can't see all the contents of the file; it appears cut off. What's the problem?
The fread() function can be used for network operations, but network connections work differently than filesystem operations. A network cannot read a bigger file in a single attempt; that is not how typical networks work. Instead they are packet based, so data arrives in chunks.
And if you take a look at the documentation of the function you are using, you will see that:
Reading stops as soon as one of the following conditions is met:
[...]
a packet becomes available or the socket timeout occurs (for network streams)
[...]
So what you observe is actually documented behavior. You need to keep reading packets in a loop, until you receive an EOF, to get the whole file.
Take a look yourself: https://www.php.net/manual/en/function.fread.php
And further down in that documentation you will see this example:
Example #3 Remote fread() examples
<?php
$handle = fopen("http://www.example.com/", "rb");
if (FALSE === $handle) {
    exit("Failed to open stream to URL");
}
$contents = '';
while (!feof($handle)) {
    $contents .= fread($handle, 8192);
}
fclose($handle);
?>

php file and fopen functions not working

I'm trying to learn about creating web bots and I'm working my way through a book called Webbots, Spiders, and Screen Scrapers by Michael Schrenk. In the book he gives example code for a basic bot that downloads a webpage. I have copied the code exactly as it is in the book (sans comments):
<?
$target = "http://www.schrenk.com/nostarch/webbots/hello_world.html";
$downloaded_page_array = file($target);
for($xx=0; $xx<count($downloaded_page_array); $xx++)
echo $downloaded_page_array[$xx];
?>
I put this code in a PHP file and uploaded it to my site. When I navigate to it in the browser, however, nothing happens. It just loads a blank page. No content.
Earlier I tried another snippet that the author provided; again, this one was copied EXACTLY from the book. Only with this one I didn't get a blank page: the page just tried to load until it eventually timed out. I never got the correct content back:
$target = "http://www.schrenk.com/nostarch/webbots/hello_world.html";
$file_handle = fopen($target, "r");
while (!feof($file_handle))
echo fgets($file_handle, 4096);
fclose($file_handle);
I have checked the URL to make sure the file exists, and it does. I have no idea why this wouldn't work. I've read through how to use the file() and fopen() functions in PHP, but from what I can tell they are both being used correctly. What am I doing wrong here?
Accessing URLs via fopen() is a bad idea. It requires you to have allow_url_fopen enabled in your PHP config, which opens the door to a vast number of exploits (hosters disable it for a reason).
Try using cURL functions instead: they will give you much more flexibility and control. The PHP documentation gives you some great examples to start with.
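For instance, a minimal cURL version of the book's hello-world fetch might look like the sketch below (an illustrative, untested sketch using the same URL from the question):
<?php
// Sketch: fetch a page with cURL instead of file()/fopen().
$target = "http://www.schrenk.com/nostarch/webbots/hello_world.html";

$ch = curl_init($target);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the page instead of printing it
curl_setopt($ch, CURLOPT_TIMEOUT, 30);          // don't hang forever
$page = curl_exec($ch);

if ($page === false) {
    echo 'cURL error: ' . curl_error($ch);
} else {
    echo $page;
}
curl_close($ch);
?>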
Use fread($file_handle, 4096), not fgets($file_handle, 4096):
$target = "http://www.schrenk.com/nostarch/webbots/hello_world.html";
$file_handle = fopen($target, "r");
while (!feof($file_handle))
    echo fread($file_handle, 4096);
fclose($file_handle);
Then later, if you want to create a new file from the extracted text:
// extracting text operation
$target = "http://www.schrenk.com/nostarch/webbots/hello_world.html";
$file_handle = fopen($target, "r");
$getText = fread($file_handle, 4096);
fclose($file_handle);
// writing file operation
$writeHandle = fopen("folder/text.txt", "w"); // file will be created if it does not exist
$writeFile = fwrite($writeHandle, $getText);
fclose($writeHandle);
First, you should put error_reporting(E_ALL); ini_set('display_errors', '1'); at the top of your script to enable displaying errors, as AbraCadaver mentioned in his comment.
A likely reason is that allow_url_fopen is disabled on your hosting.
This option enables the URL-aware fopen wrappers that enable accessing URL object like files. Default wrappers are provided for the access of remote files using the ftp or http protocol, some extensions like zlib may register additional wrappers.
See: http://php.net/manual/en/filesystem.configuration.php#ini.allow-url-fopen
You can check that via:
var_dump(ini_get('allow_url_fopen'));
Your script requires it to be enabled to run correctly.
file_get_contents() can also be used to load a URL, but note that it relies on the same allow_url_fopen setting; if that setting is off, use cURL instead.
<?php
$homepage = file_get_contents('http://www.example.com/');
echo $homepage;
?>
See: http://php.net/manual/en/function.file-get-contents.php

Why does readfile() exhaust PHP memory?

I've seen many questions about how to efficiently use PHP to download files rather than allowing direct HTTP requests (to keep files secure, to track downloads, etc.).
The answer is almost always PHP readfile().
Downloading large files reliably in PHP
How to force download of big files without using too much memory?
Best way to transparently log downloads?
BUT, although it works great during testing with huge files, when it's on a live site with hundreds of users, downloads start to hang and PHP memory limits are exhausted.
So what is it about how readfile() works that causes memory to blow up so bad when traffic is high? I thought it's supposed to bypass heavy use of PHP memory by writing directly to the output buffer?
EDIT: (To clarify, I'm looking for a "why", not a "what can I do". I think that Apache's mod_xsendfile is the best way to circumvent this.)
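(For reference, the mod_xsendfile approach boils down to handing the file path to Apache via a header; a minimal sketch, assuming the module is installed and XSendFile is enabled, with a placeholder path:)
<?php
// Sketch: let Apache serve the file via mod_xsendfile instead of PHP.
// Requires mod_xsendfile to be installed and "XSendFile On" in the Apache config.
$file = '/path/to/protected/file.zip'; // placeholder path

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="file.zip"');
header('X-Sendfile: ' . $file); // Apache picks this up and streams the file itself
exit;
?>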
Description
int readfile ( string $filename [, bool $use_include_path = false [, resource $context ]] )
Reads a file and writes it to the output buffer*.
PHP has to read the file and write it to the output buffer.
So, for a 300 MB file, no matter how you implement it (in many small segments or in one big chunk), PHP eventually has to read through all 300 MB of the file.
If multiple users have to download the file at the same time, there will be a problem.
(On a single server, hosting providers limit the memory given to each hosting account. With such limited memory, buffering large files is not a good idea.)
I think using a direct link to download the file is a much better approach for big files.
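If a direct link is not acceptable (the files must stay protected), one workaround is to stream the file in small chunks and flush each chunk, so PHP never holds the whole file in memory or in an output buffer. A rough sketch (the function name, chunk size, and headers here are just illustrative):
<?php
// Sketch: stream a file to the client in small chunks, flushing as we go.
function stream_download($path, $chunkSize = 8192)
{
    $fp = fopen($path, 'rb');
    if ($fp === false) {
        return false;
    }

    header('Content-Type: application/octet-stream');
    header('Content-Length: ' . filesize($path));

    while (!feof($fp)) {
        echo fread($fp, $chunkSize);
        flush(); // push this chunk to the client before reading the next one
    }

    fclose($fp);
    return true;
}
?>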
If you have output buffering on, then use ob_end_flush() right before the call to readfile():
header(...);
ob_end_flush();
#readfile($file);
As mentioned here: "Allowed memory .. exhausted" when using readfile, the following block of code at the top of the PHP file did the trick for me.
It checks whether PHP output buffering is active and, if so, turns it off.
if (ob_get_level()) {
    ob_end_clean();
}
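If several output buffers happen to be nested, a loop variant (same idea, just repeated) clears all of them:
while (ob_get_level()) {
    ob_end_clean();
}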
You might want to turn off output buffering altogether for that particular location, using PHP's output_buffering configuration directive.
Apache example:
<Directory "/your/downloadable/files">
...
php_admin_value output_buffering "0"
...
</Directory>
"Off" as the value seems to work as well, while it really should throw an error. At least according to how other types are converted to booleans in PHP. *shrugs*
I came up with this idea in the past (as part of my library) to avoid high memory usage:
function suTunnelStream( $sUrl, $sMimeType, $sCharType = null )
{
    $f = @fopen( $sUrl, 'rb' );
    if( $f === false ) {
        return false;
    }

    $b = false; // headers sent yet?
    $u = true;  // result of the last fread()

    while( $u !== false && !feof( $f ) )
    {
        $u = @fread( $f, 1024 );
        if( $u !== false )
        {
            if( !$b )
            {
                // Send headers only once, right before the first chunk goes out.
                $b = true;
                suClearOutputBuffers();
                suCachedHeader( 0, $sMimeType, $sCharType, null,
                    !suIsValidString( $sCharType )
                        ? ( 'content-disposition: attachment; filename="' . suUniqueId( $sUrl ) . '"' )
                        : null );
            }
            echo $u; // stream the chunk directly to the client
        }
    }

    @fclose( $f );
    return ( $b && $u !== false );
}
Maybe this can give you some inspiration.
Well, readfile() is a memory-intensive function. Instead of using it, I would pipe users to a static server that has a specific rule set in place to control downloads.
If that's not an option, add more RAM to satisfy the load or introduce a queuing system that gracefully controls server usage.

Best way to download a file in PHP

Which would be the best way to download a file from another domain in PHP?
i.e. A zip file.
The easiest one is file_get_contents(); a more advanced way would be with cURL, for example. You can store the data on your hard drive with file_put_contents().
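In its simplest form that could look like this (the URL and local filename are placeholders; note that the whole file is held in memory):
<?php
// Sketch: download a remote zip and store it locally in one go.
$data = file_get_contents('http://www.example.com/file.zip');
if ($data !== false) {
    file_put_contents('local_name_of_file.zip', $data);
}
?>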
Normally, the fopen functions work for remote files too, so you could do the following to circumvent the memory limit (but it's slower than file_get_contents()):
<?php
$remote = fopen("http://www.example.com/file.zip", "rb");
$local = fopen("local_name_of_file.zip", 'w');
while (!feof($remote)) {
    $content = fread($remote, 8192);
    fwrite($local, $content);
}
fclose($local);
fclose($remote);
?>
Copied from here: http://www.php.net/fread
You can do this with a single line of code:
copy(URL, destination);
This function returns TRUE on success and FALSE on failure.
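For example (the URL and destination are placeholders; copy() also understands URLs when allow_url_fopen is enabled):
<?php
// Sketch: fetch a remote file and save it locally with copy().
if (copy('http://www.example.com/file.zip', 'local_name_of_file.zip')) {
    echo 'Download complete';
} else {
    echo 'Download failed';
}
?>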

What's a better way to write this function? It gets a remote file and copies it locally, in PHP

So yeah, I'm working on a Windows system, and while this works locally, I know it will break on other people's servers. What's a cross-platform way to do the same as this?
function fetch($get, $put) {
    file_put_contents($put, file_get_contents($get));
}
I don't see why that would fail unless the other computer is on PHP 4. What you would need to do to make that backwards compatible is add functionality to provide replacements for file_get_contents and file_put_contents:
if (version_compare(phpversion(), '5', '<')) {
    function file_get_contents($file) {
        // mimic functionality here
    }
    function file_put_contents($file, $data) {
        // mimic functionality here
    }
}
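For reference, a minimal sketch of what those replacements might look like, using only PHP 4-era file functions (simplified: no flags or stream-context support, and the whole file is held in memory):
<?php
if (!function_exists('file_get_contents')) {
    function file_get_contents($file) {
        // Read the whole file via fopen()/fread() and return it as a string.
        $fp = fopen($file, 'rb');
        if ($fp === false) {
            return false;
        }
        $data = '';
        while (!feof($fp)) {
            $data .= fread($fp, 8192);
        }
        fclose($fp);
        return $data;
    }
}

if (!function_exists('file_put_contents')) {
    function file_put_contents($file, $data) {
        // Write the string to the file, returning the number of bytes written.
        $fp = fopen($file, 'wb');
        if ($fp === false) {
            return false;
        }
        $bytes = fwrite($fp, $data);
        fclose($fp);
        return $bytes;
    }
}
?>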
Here would be the solution using simple file operations:
<?php
$file = "http://www.domain.com/thisisthefileiwant.zip";
$hostfile = fopen($file, 'r');
$fh = fopen("thisisthenameofthefileiwantafterdownloading.zip", 'w');
while (!feof($hostfile)) {
    $output = fread($hostfile, 8192);
    fwrite($fh, $output);
}
fclose($hostfile);
fclose($fh);
?>
Ensure your directory has write permissions enabled. (CHMOD)
Therefore, a replacement for your fetch($get, $put) would be:
function fetch($get, $put) {
    $hostfile = fopen($get, 'r');
    $fh = fopen($put, 'w');
    while (!feof($hostfile)) {
        $output = fread($hostfile, 8192);
        fwrite($fh, $output);
    }
    fclose($hostfile);
    fclose($fh);
}
Hope it helped! =)
Cheers,
KrX
Shawn's answer is absolutely correct; the only thing is that you need to make sure your $put variable is a valid path on either the Windows server or the Unix server.
Well, when I read your question I understood that you want to bring a file from a remote server to your own server locally; this can be done with the FTP extension for PHP:
http://www.php.net/manual/en/function.ftp-fget.php
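A rough sketch of that approach (host, credentials, and paths below are placeholders):
<?php
// Sketch: download a file over FTP instead of the HTTP wrappers.
$conn = ftp_connect('ftp.example.com');
if ($conn && ftp_login($conn, 'username', 'password')) {
    ftp_pasv($conn, true); // passive mode is usually friendlier to firewalls
    if (ftp_get($conn, 'local_copy.zip', '/remote/path/file.zip', FTP_BINARY)) {
        echo 'File downloaded';
    } else {
        echo 'Download failed';
    }
    ftp_close($conn);
}
?>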
If this is not what you intend, I believe what Shawn says is correct.
Otherwise, tell me in the comments and I'll help you more.
If the fopen wrappers are not enabled, the cURL extension could be an option: http://php.net/curl
