PHP file download stops

I am writing a basic license-validated PHP download script. The target file is about 50MB. The download works for some users, but others can't finish it; sometimes retrying works.
Here is the script:
$method = $_GET['method'];
if ($method == "webdownload") {
    $airlineid = $_GET['airline'];
    $sql = "SELECT * FROM airlines WHERE airlineid='$airlineid'";
    $result = mysql_query($sql) or die(mysql_error());
    $row = mysql_fetch_array($result);
    if ($row['licensekey'] == "")
        die("Invalid airline id");
    $filename = $row['code'] . '_installer.exe';
    $file_path = '../resources/application/files/' . $row['airlineid'] . '/' . $row['clientversion'] . '/application_installer.exe';
    if ($row['licensestate'] != "OK")
        die("The license associated with this download has been deauthorized.");
    if (!is_file($file_path))
        die("The file associated with this version for this airline appears to be invalid.");
    // download code here - it runs once only; if refreshed it will not allow it
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . $filename . '"');
    header('Content-Length: ' . filesize($file_path));
    header('Content-Transfer-Encoding: binary');
    header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
    header('Pragma: public');
    //header('X-Sendfile: ' . $file_path); I tried this - it had no effect and I want the portability.
    $file = @fopen($file_path, "rb");
    while (!feof($file)) {
        $buffer = fread($file, 1024 * 8);
        print($buffer);
        flush();
    }
    fclose($file);
}
EDIT: Upon advice given here, I discovered that this script, among others, is highly vulnerable to SQL injection. I have replaced direct variable interpolation in my SQL with this function:
function secure_string($raw) {
    $sid = strtolower($raw);
    $sid = str_replace("'", "_SINGLE_QUOTE", $sid);
    $sid = str_replace('"', '_DOUBLE_QUOTE', $sid);
    $cmd = array("insert", "select", "union", "delete", "modify",
                 "replace", "update", "create", "alter");
    foreach ($cmd as $keyword) {
        $sid = str_replace($keyword, "_SQL_COMMAND", $sid);
    }
    return $sid;
}
Is that sufficient to block SQL injection?
EDIT2: I now use this function in conjunction with PDO prepared statements to eliminate this exploit. Thanks 100x for letting me learn this lesson without disastrous results.
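For anyone landing here later, a minimal sketch of the prepared-statement lookup. The helper name fetch_airline is mine; the table and column names are taken from the question, and the PDO connection setup is assumed to exist elsewhere:

```php
<?php
// Look up an airline row by id with a bound parameter. A prepared
// statement keeps the user-supplied value out of the SQL text, so no
// manual escaping or keyword filtering is needed.
function fetch_airline(PDO $pdo, string $airlineid)
{
    $stmt = $pdo->prepare('SELECT * FROM airlines WHERE airlineid = ?');
    $stmt->execute([$airlineid]);
    return $stmt->fetch(PDO::FETCH_ASSOC); // false when no row matches
}
```

Calling fetch_airline($pdo, $_GET['airline']) replaces the string-built query; an injection attempt such as ' OR '1'='1 is bound as a literal value and simply matches no row.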

readfile() reads a file and sends it straight to the output in one call, which will probably avoid whatever is interrupting your loop. Use it instead of the fopen() and print() loop you have at the bottom.
Another solution is to see if your server has mod_xsendfile, as this takes the downloading out of PHP and into Apache internals.
Edit: I notice you say you've tried X-Sendfile. It might be a better option if you can get it working.
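As a sketch of what this answer suggests (the send_file wrapper name is mine; the header values are carried over from the question):

```php
<?php
// Stream a file to the client with readfile(), which runs the
// read/echo loop internally and uses constant memory.
function send_file(string $file_path, string $filename): void
{
    set_time_limit(0);           // let large transfers finish
    while (ob_get_level() > 0) { // drop buffers so nothing is prepended
        ob_end_clean();
    }

    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . $filename . '"');
    header('Content-Length: ' . filesize($file_path));

    readfile($file_path);
    exit;
}
```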


Download a file with php and polymer

I'm having some trouble with this one. I have found some helpful scripts on the web and have been modifying them for my needs. However, I can't seem to download a file: the server responds with the contents of the file, but the browser doesn't download it. I am using Polymer 1.0+ on the client side and PHP on the server side. The client-side code to download a file is as follows:
<!--THIS IS THE HTML SIDE-->
<iron-ajax
id="ajaxDownloadItem"
url="../../../dropFilesBackend/index.php/main/DownloadItem"
method="GET"
handle-as="document"
last-response="{{downloadResponse}}"
on-response="ajaxDownloadItemResponse">
</iron-ajax>
//THIS IS THE JAVASCRIPT THAT WILL CALL THE "iron-ajax" ELEMENT
downloadItem:function(e){
this.$.ajaxDownloadItem.params = {"FILENAME":this.selectedItem.FILENAME,
"PATH":this.folder};
this.$.ajaxDownloadItem.generateRequest();
},
The server side code is as follows (the url is different because I do some url modification to get to the correct script):
function actionDownloadItem() {
    valRequestMethodGet();
    $username = $_SESSION['USERNAME'];
    if (validateLoggedIn($username)) {
        $itemName = arrayGet($_GET, "FILENAME");
        $path = arrayGet($_GET, "PATH");
        $downloadItem = CoreFilePath();
        $downloadItem .= "/" . $_SESSION['USERNAME'] . $path . $itemName;
        DownloadFile($downloadItem);
    } else {
        echo "Not Logged In.";
    }
}
function DownloadFile($filePath) {
    //ignore_user_abort(true);
    set_time_limit(0); // disable the time limit for this script
    //touch($filePath);
    //chmod($filePath, 0775);
    if ($fd = fopen($filePath, "rb")) {
        $fsize = filesize($filePath); // this returns 12
        $path_parts = pathinfo($filePath); // basename = textfile.txt
        $ext = strtolower($path_parts["extension"]); // this returns txt
        $header = headerMimeType($ext); // this returns text/plain
        header('Content-Disposition: attachment; filename="' . $path_parts["basename"] . '"'); // 'attachment' forces a file download
        header("Content-Type: $header");
        header("Content-Length: $fsize");
        header("Cache-Control: private"); // use this to open files directly
        while (!feof($fd)) {
            $buffer = fread($fd, 2048);
            echo $buffer;
        }
        fclose($fd); // moved inside the if: $fd is only a resource when fopen succeeded
    }
}
Any help on this one would be greatly appreciated.
First you will need a write handle:
$pathToSave = '/home/something/something.txt';
$writeHandle = fopen($pathToSave, 'wb');
Then, while you are reading the download, write each chunk to the file instead of echoing it:
fwrite($writeHandle, fread($fd, 2048));
Finally, after the writing is finished, close the handle:
fclose($writeHandle);
I've omitted error checking; you should implement your own.
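Putting the three fragments together (copy_in_chunks is a hypothetical name; in practice the built-in stream_copy_to_stream() does this same loop for you):

```php
<?php
// Read the source handle chunk by chunk and write each chunk to the
// destination file instead of echoing it to the browser.
function copy_in_chunks(string $sourcePath, string $destPath): bool
{
    $readHandle  = fopen($sourcePath, 'rb');
    $writeHandle = fopen($destPath, 'wb');
    if ($readHandle === false || $writeHandle === false) {
        return false; // error handling kept minimal, as in the answer
    }
    while (!feof($readHandle)) {
        fwrite($writeHandle, fread($readHandle, 2048));
    }
    fclose($readHandle);
    fclose($writeHandle);
    return true;
}
```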

MySQL BLOB to XLS using PHP without making a file on the server?

I'm trying to extract a BLOB from MySQL and send it to the requester without saving it on the server. I've gotten it to work with PDF files, but some of our clients want XLS files. When getting the XLS file, the downloaded file is garbage. In HxD it looks like an extra 11 bytes are being put on the front of the file.
Here is my code, both working and not working:
function blob_download_xls() {
    $mysqli = openMySQLconnetion();
    $sql = "SELECT * FROM Uploads;";
    $results = $mysqli->query($sql);
    $row = $results->fetch_assoc();
    $bytes = $row['filedata'];
    header('Content-Type: application/vnd.ms-excel');
    header('Content-Disposition: attachment; filename="report.xls"');
    print $bytes;
}
function blob_download_pdf() {
    $mysqli = openMySQLconnetion();
    $sql = "SELECT * FROM Uploads;";
    $results = $mysqli->query($sql);
    $row = $results->fetch_assoc();
    $bytes = $row['filedata'];
    header("Content-Type: application/pdf");
    header('Content-Disposition: inline; filename="report.pdf"');
    print $bytes;
}
Any idea what I'm doing wrong?
I asked this question with a new account without realizing that I already had an account here. I've solved the problem, and it was a silly mistake.
When writing my function.php, I had put a closing tag '?>' at the end of the file. After this tag came a 0x0D 0x0A (CRLF) line break and then nine 0x20 bytes (spaces): those are the 11 extra bytes. I removed the closing tag and now it works perfectly.
The correct way to deal with this is to die() immediately after printing the bytes, before the server has a chance to output any HTML that may follow the PHP code.
function blob_download_xls() {
    $mysqli = openMySQLconnetion();
    $sql = "SELECT * FROM Uploads;";
    $results = $mysqli->query($sql);
    $row = $results->fetch_assoc();
    $bytes = $row['filedata'];
    header('Content-Type: application/vnd.ms-excel');
    header('Content-Disposition: attachment; filename="report.xls"');
    print $bytes;
    die();
}
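A defensive variant, if you cannot audit every included file for stray output after a closing '?>': discard whatever is already buffered before printing the payload. This only helps when output buffering is active (e.g. output_buffering enabled in php.ini), so the junk was captured rather than already sent:

```php
<?php
// Discard stray buffered output (such as the CRLF-plus-spaces that
// followed the closing tag here) before sending the binary payload.
// ob_get_length() returns false when no buffer is active, so the
// guard is safe either way.
if (ob_get_length() > 0) {
    ob_clean(); // empty the buffer but keep buffering on
}
header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment; filename="report.xls"');
```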

Why Doesn't This PHP Download Script Work?

Here is a simple script I have written to limit downloads to one at a time per user (i.e. if they are downloading a file, they cannot download another one until they cancel the current download or it finishes).
ignore_user_abort(true);
$local_file = $_GET['filename'];
$download_file = explode("/", $local_file);
$download_file = $download_file[count($download_file) - 1];
// set the download rate limit (value is in kilobytes per second)
$download_rate = 100;
if (file_exists($local_file) && is_file($local_file)) {
    $ip = visitor_ip();
    if (!are_downloading($ip)) {
        header('Cache-Control: private');
        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . filesize($local_file));
        header('Content-Disposition: attachment; filename="' . $download_file . '"');
        flush();
        $file = fopen($local_file, "rb");
        log_downloader($ip);
        while (!feof($file)) {
            if (!connection_aborted()) {
                // send the current file part to the browser
                print fread($file, round($download_rate * 1024));
                // flush the content to the browser
                flush();
                // sleep one second
                sleep(1);
            } else {
                break;
            }
        }
        clear_downloader($ip);
        fclose($file);
    } else {
        die('<span style="color:#DDDDDD">Due to server limitations you may only download one file at a time. Please cancel or wait for your current download to finish before trying again. Click here to return.</span>');
    }
} else {
    die('Error: The file ' . $local_file . ' does not exist!');
}
function visitor_ip() {
    if (isset($_SERVER['HTTP_X_FORWARDED_FOR']))
        $TheIp = $_SERVER['HTTP_X_FORWARDED_FOR'];
    else
        $TheIp = $_SERVER['REMOTE_ADDR'];
    return trim($TheIp);
}
function are_downloading($ip) {
    $query = "select * from downloaders where ip_addr='$ip'";
    $result = mysql_query($query);
    $num_rows = mysql_num_rows($result);
    return $num_rows > 0;
}
function log_downloader($ip) {
    $query = "insert into downloaders (ip_addr) values ('$ip')";
    $result = mysql_query($query);
}
function clear_downloader($ip) {
    $query = "delete from downloaders where ip_addr='$ip'";
    $result = mysql_query($query);
}
When I test it out, it works fine, but for a lot of people, their IP never gets cleared out of the database - even when they have finished downloading/cancelled a file. Why don't the IPs get deleted?
The problem was that with big downloads the MySQL connection went away ("MySQL server has gone away"); I simply had to reconnect inside the clear_downloader function, and now it works fine.
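For reference, a sketch of that fix: open a fresh connection inside clear_downloader() instead of reusing the handle opened before the (possibly very long) transfer. It is sketched here with PDO, since the mysql_* API used above has long been removed from PHP; the DSN is a placeholder:

```php
<?php
// Delete the downloader's IP on a fresh connection; the connection
// opened before the transfer may have timed out during a long
// download ("MySQL server has gone away").
function clear_downloader(string $dsn, string $ip): void
{
    $db = new PDO($dsn); // for MySQL, also pass user/password arguments
    $stmt = $db->prepare('DELETE FROM downloaders WHERE ip_addr = ?');
    $stmt->execute([$ip]);
}
```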

Download script that downloads the page itself when no ID is specified, what's wrong?

I coded a script so that when users want to download a file, it shows an advert first and then starts the download, passing the ID of the file via $_GET.
The problem is that if I reach the page with no ID specified (download_file.php instead of download_file.php?id=1, for instance), the page starts downloading itself.
<?php
require("/membri/lostlife/mysql.php");

// Variables:
$id = $_GET["id"];
$result = mysql_query("SELECT * FROM Setting WHERE ID = $id");
$row = mysql_fetch_array($result);
$downloads = $row["Downloads"] + 1;

switch ($_GET["action"]) {
    case "download":
        // Download the file:
        header("Content-Type: application/zip");
        header("Content-Disposition: attachment; filename=\"$row[Filename]\"");
        readfile("/membri/lostlife/setting/$row[Filename]");
        // Update the database:
        mysql_query("UPDATE Setting SET Downloads = $downloads WHERE ID = $id");
        break;
    default:
        header("Refresh: 5; url=?id=$id&action=download");
}
?>
That's my code. What's wrong with it?
Also, the default branch of your switch sends a Refresh header, so whenever the action is not 'download' the page refreshes itself with action=download appended.
I would do it this way:
require("/membri/lostlife/mysql.php");

$id = $_GET["id"];
$action = $_GET["action"];

// make sure the id is present and numeric (an integer check can be done in different ways)
if (!empty($id) && is_numeric($id)) {
    $query = mysql_query("SELECT Downloads, Filename FROM Setting WHERE ID = $id");
    $row = mysql_fetch_assoc($query);
    $download = $row['Downloads'];
    $filename = $row['Filename'];
    if ($action == "download") {
        header("Content-Type: application/zip");
        header("Content-Disposition: attachment; filename=\"" . $filename . "\"");
        readfile("/membri/lostlife/setting/" . $filename);
    }
} else {
    die("No ID found");
}
Are you also trying to update something? Right now your script reads Downloads in the SELECT and writes back that value plus one; if all you want is to count downloads, you can let the UPDATE do the increment itself.
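On that counting point, the increment can be done atomically in SQL without reading the old value first. A sketch, using PDO rather than the mysql_* API of the question; count_download is my naming, while the table and column names are from the question:

```php
<?php
// Increment the download counter atomically in SQL instead of
// SELECTing the old value and writing back old + 1 from PHP,
// which can lose counts under concurrent requests.
function count_download(PDO $db, int $id): void
{
    $stmt = $db->prepare('UPDATE Setting SET Downloads = Downloads + 1 WHERE ID = ?');
    $stmt->execute([$id]);
}
```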

Fastest way possible to read contents of a file

OK, I'm looking for the fastest possible way to read the entire contents of a file in PHP, given a filepath on the server; these files can also be huge, so it's very important that it does a read-only pass as fast as possible.
Is reading it line by line faster than reading the entire contents at once? I remember reading that loading the entire contents can produce errors for huge files. Is this true?
If you want to load the full content of a file into a PHP variable, the easiest (and probably fastest) way is file_get_contents.
But if you are working with big files, loading the whole file into memory might not be such a good idea: you'll probably end up with a memory_limit error, as PHP will not allow your script to use more memory than the configured memory_limit allows.
So, even if it's not the fastest solution, reading the file line by line (fopen + fgets + fclose) and working with those lines on the fly, without loading the whole file into memory, might be necessary...
file_get_contents() is the most optimized way to read files in PHP; however, since you're reading files into memory, you're always limited by the amount of memory available.
You can issue ini_set('memory_limit', -1) if you have the right permissions, but you'll still be limited by the amount of memory available on your system; this is common to all programming languages.
The only solution is to read the file in chunks. For that you can use file_get_contents() with the fourth and fifth arguments ($offset and $maxlen, specified in bytes):
string file_get_contents(string $filename[, bool $use_include_path = false[, resource $context[, int $offset = -1[, int $maxlen = -1]]]])
Here is an example where I use this technique to serve large download files:
public function Download($path, $speed = null)
{
    if (is_file($path) === true)
    {
        set_time_limit(0);

        while (ob_get_level() > 0)
        {
            ob_end_clean();
        }

        $size = sprintf('%u', filesize($path));

        // no $speed given: send the whole file in one chunk,
        // otherwise convert the KB/s limit into a chunk size in bytes
        $speed = ($speed === null) ? $size : intval($speed) * 1024;

        header('Expires: 0');
        header('Pragma: public');
        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . $size);
        header('Content-Disposition: attachment; filename="' . basename($path) . '"');
        header('Content-Transfer-Encoding: binary');

        for ($i = 0; $i < $size; $i = $i + $speed)
        {
            // ph()->HTTP->Flush() / Sleep() are this framework's helpers
            // for flushing a chunk to the client and pausing one second
            ph()->HTTP->Flush(file_get_contents($path, false, null, $i, $speed));
            ph()->HTTP->Sleep(1);
        }

        exit();
    }

    return false;
}
Another option is to use the less optimized fopen(), feof(), fgets() and fclose() functions, especially if you care about getting whole lines at once. Here is another example I provided in another Stack Overflow question, for importing large SQL queries into the database:
function SplitSQL($file, $delimiter = ';')
{
    set_time_limit(0);

    if (is_file($file) === true)
    {
        $file = fopen($file, 'r');

        if (is_resource($file) === true)
        {
            $query = array();

            while (feof($file) === false)
            {
                $query[] = fgets($file);

                if (preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1)
                {
                    $query = trim(implode('', $query));

                    if (mysql_query($query) === false)
                    {
                        echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
                    }
                    else
                    {
                        echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
                    }

                    while (ob_get_level() > 0)
                    {
                        ob_end_flush();
                    }

                    flush();
                }

                if (is_string($query) === true)
                {
                    $query = array();
                }
            }

            return fclose($file);
        }
    }

    return false;
}
Which technique you use will really depend on what you're trying to do (as you can see with the SQL import function and the download function), but you'll always have to read the data in chunks.
$file_handle = fopen("myfile", "r");
while (!feof($file_handle)) {
    $line = fgets($file_handle);
    echo $line;
}
fclose($file_handle);
Open the file and stores in $file_handle as reference to the file itself.
Check whether you are already at the end of the file.
Keep reading the file until you are at the end, printing each line as you read it.
Close the file.
You could use file_get_contents
Example:
$homepage = file_get_contents('http://www.example.com/');
echo $homepage;
Use fpassthru or readfile.
Both use constant memory with increasing file size.
http://raditha.com/wiki/Readfile_vs_include
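A minimal fpassthru() sketch (the path is a placeholder): it streams from the handle's current position to EOF without ever building the whole file as a PHP string.

```php
<?php
// Stream a file to output with constant memory, regardless of size.
$handle = fopen('/path/to/large.bin', 'rb'); // placeholder path
if ($handle !== false) {
    fpassthru($handle); // writes from the current position to EOF
    fclose($handle);
}
```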
foreach (new SplFileObject($filepath) as $lineNumber => $lineContent) {
    echo $lineNumber . "==>" . $lineContent;
    // process your operations here
}
Reading the whole file in one go is faster.
But huge files may eat up all your memory and cause problems. Then your safest bet is to read line by line.
If you're not worried about memory or file size:
$lines = file($path);
$lines is then an array containing the file's lines.
You could try cURL (http://php.net/manual/en/book.curl.php).
Although you might want to check; it has its limits as well.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch); // whole page as a string
curl_close($ch);
