Save a remote file that pushes headers to force download - PHP

I can download remote files using PHP, but how do you download from a link that pushes headers out? I mean, you can click on some links and they will force a download and present you with a dialog box to save the file. How can I download and save this sort of file using PHP?
Any examples or links to tutorials would be great, since I couldn't find anything useful on this topic.
Thank you for any help.
Updated and [SOLVED]
<?php
set_time_limit(300);

// Remote file to download
$remoteFile = $_GET['url'];

$file = fopen($remoteFile, "r");
if (!$file) {
    echo "<p>Unable to open remote file.</p>\n";
    exit;
}

// Read the whole remote file into memory
$line = '';
while (!feof($file)) {
    $line .= fgets($file, 4096);
}

//readfile($line);
file_put_contents('here2.mp4', $line);
fclose($file);
?>

I just tried to reproduce the situation. Gubmo is right: this download method works for me with both the Content-Type: application/octet-stream and Content-Type: application/force-download headers.
As explained here, HTTP 410 means that the URL requested by the client is no longer available from that system. This is not a 'never heard of it' response, but a 'does not live here any more' response. Maybe they have some kind of anti-leech system.
This should be investigated. If they need cookies, stream_context_create() can help. Or maybe they check the Referer header. But I am almost sure the problem is not in the headers.
Hope this helps.
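For reference, a minimal sketch of sending a Referer and a cookie through a stream context, in case that is what their anti-leech check expects; the header values and URL below are placeholders:
// Hypothetical request context; replace the Referer and Cookie with real values.
$context = stream_context_create(array(
    'http' => array(
        'method' => 'GET',
        'header' => "Referer: http://example.com/page.html\r\n" .
                    "Cookie: PHPSESSID=abc123\r\n",
    ),
));

$fin = fopen('http://example.com/download/file.zip', 'r', false, $context);
if (!$fin) {
    die("Unable to open remote file");
}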
UPDATE: The sample code you asked about.
// File to download -- application/octet-stream
$remoteFile = 'http://dev/test/remote/send.php';
// File to download -- application/force-download
$remoteFile = 'http://chtyvo.org.ua/authors/Skriabin_Kuzma/Ya_Pobieda_i_Berlin.rtf.zip';
// File to store
$localFile = 'kuzma.zip';

$fin = fopen($remoteFile, "r");
if (!$fin) {
    die("Unable to open remote file");
}
$fout = fopen($localFile, "w");
if (!$fout) {
    die("Unable to open local file");
}
while (!feof($fin)) {
    $line = fgets($fin, 1024);
    fwrite($fout, $line, 1024);
}
fclose($fout);
fclose($fin);
Same as yours.

You can do it the same way you download your other remote files. Those “force download” header values just tell user agents that would otherwise display the data inline to download it instead. That makes no difference to your script, since it cannot display the data anyway.
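For example, assuming allow_url_fopen is enabled, something as small as this should save a force-download URL; the URL and local filename below are placeholders:
// Placeholder URL that answers with Content-Disposition: attachment
$remoteFile = 'http://example.com/get.php?file=video';
if (!copy($remoteFile, 'local.mp4')) {
    echo "Download failed\n";
}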

Related

Live video streaming from recording files over HTTP based on PHP

I want to organize live streaming over HTTP, based on PHP, from files that are still being recorded. INTRODUCTION: On the streaming server I write video to a local file (local_file.mpg), and when a request is received from a client I start streaming to it from $start_byte = filesize("local_file.mpg")-10MB; The local_file.mpg is still being written, and the PHP script keeps reading it and flushing. PROBLEM: I stream it via HTTP Range with the following headers:
header('HTTP/1.1 206 Partial Content');
header("Content-Type: video/mpeg");
header('Content-Length: '.($seek_end - $seek_start));
header('Content-Range: bytes '.$seek_start.'-'.$seek_end.'/'.$size);
And flushing as follows:
while (!feof($fp)) {
    $buf_size = 1024*8;
    $pos = ftell($fp);
    if ($pos >= $item["to_byte"]) {
        fclose($fp);
        break;
    }
    if ($pos + $buf_size > $item["to_byte"]) {
        $buf_size = $item["to_byte"] - $pos;
    }
    if ($buf_size > 0) {
        echo fread($fp, $buf_size);
    }
    flush();
    ob_flush();
}
I open it via VLC or FFplay, but it only plays up to the moment at which the stream was requested. This is to be expected, because we determine the size of the file and provide it to the requesting side. But artificially increasing the file size, for example $size = filesize("local_file.mpg")+999999999999;, does not help either, because video players request new data too early, before it has been recorded, and playback still stops at the moment the stream was requested.
1. Please advise how to correctly organize live streaming from files that are still recording, over HTTP, in PHP. 2. Is it possible to do this with the HTTP Range mechanism, or should I use another approach?
UPDATE: Based on this question, I tried the following code:
<?php
$file = "online.mpg";

function flush_buffers() {
    ob_end_flush();
    ob_flush();
    flush();
    ob_start();
}

header('Content-Type: video/mpeg');
$stream = fopen($file, "rb");
fseek($stream, (filesize($file) - 10000000), SEEK_SET);
while (1) {
    $response = fread($stream, 8192);
    echo $response;
    flush_buffers();
}
fclose($stream);
exit();
?>
It works well via FFplay, but via VLC it plays no more than a minute and then stops. Please advise how to make it work in VLC as well.
Do you have a time limit for PHP execution? If yes, change it to unlimited with:
set_time_limit(0);

Download a remote file and rename the file- & filetype on client-side

I am working on a project where the client side should start a download from a remote URL.
I currently have it working with cURL, but the problem is that it downloads to the server, not in the client's browser (asking where to save it).
The URL looks like this:
http://r13---sn-5hn7snee.googlevideo.com/videoplayback?expire=138181&mv=u&ipbits=0&clen=7511717&fexp=929447,913430,930802,916612,902546,937417,913434,936916,934022,936921,936923&ms=au&mt=1396164&itag=247&key=yt5&source=youtube&ip=2a00:7143:100:1:225:90ff:fe52:b3ad&sparams=clen,dur,gir,id,ip,ipbits,itag,lmt,source,upn,expire&signature=DBDCE9EFFC094C0FC653610C8FAAF367EA942CC4C6EF3971A186C&upn=0JQJ6yqyYPo&sver=3&gir=yes&lmt=13960601932&dur=130.133&id=17f0fd8b&size=1280x720,type=video/mp4
When downloading this just from the URL, the filename will be 'videoplayback' without any file extension, so I need a custom file name and .mp4 as the file type (which works with the cURL code below), but it writes to the server instead of triggering a client-side download.
function download($file_source, $file_target) {
    $rh = fopen($file_source, 'rb');
    $wh = fopen($file_target, 'w+b');
    if (!$rh || !$wh) {
        return false;
    }
    while (!feof($rh)) {
        if (fwrite($wh, fread($rh, 4096)) === FALSE) {
            return false;
        }
        echo ' ';
        flush();
    }
    fclose($rh);
    fclose($wh);
    return true;
}
$result = download('http://r13---sn-5hn7snee.googlevideo.com/videoplayback?expire=138181&mv=u&ipbits=0&clen=7511717&fexp=929447,913430,930802,916612,902546,937417,913434,936916,934022,936921,936923&ms=au&mt=1396164&itag=247&key=yt5&source=youtube&ip=2a00:7143:100:1:225:90ff:fe52:b3ad&sparams=clen,dur,gir,id,ip,ipbits,itag,lmt,source,upn,expire&signature=DBDCE9EFFC094C0FC653610C8FAAF367EA942CC4C6EF3971A186C&upn=0JQJ6yqyYPo&sver=3&gir=yes&lmt=13960601932&dur=130.133&id=17f0fd8b&size=1280x720,type=video/mp4');
if (!$result)
throw new Exception('Download error...');
So my question is how I can change this so it renames the file, changes the extension to .mp4, and then downloads the file on the client side.
Thanks
First output the correct headers to the client so it knows how to handle the file, and then echo the file contents. So try this:
<?php
// ... fetch the file using CURL and store it (in memory or on file system) ...
// Set the content type
header('Content-type: application/pdf');
// It will be called downloaded.pdf
header('Content-Disposition: attachment; filename="downloaded.pdf"');
// ... echo/output the file contents to the browser...
?>
Check out "Example #1 Download dialog" in the PHP documentation.
Also, you mention you are using cURL, but I don't see it anywhere in your code.
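If you do want to use cURL, a rough sketch of the whole flow could look like this; the remote URL, temp-file handling, and output filename are placeholders, not part of the original code:
// Fetch the remote file with cURL into a temporary file (placeholder URL).
$remoteUrl = 'http://example.com/videoplayback';
$tmpFile   = tempnam(sys_get_temp_dir(), 'dl');

$fp = fopen($tmpFile, 'w+b');
$ch = curl_init($remoteUrl);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
curl_close($ch);
fclose($fp);

// Replay it to the client with a custom name and extension.
header('Content-Type: video/mp4');
header('Content-Disposition: attachment; filename="myvideo.mp4"');
header('Content-Length: ' . filesize($tmpFile));
readfile($tmpFile);
unlink($tmpFile);
exit;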

Save large files from php stdin

Please advise me on the most efficient way to save large files from PHP's stdin.
An iOS developer sends large video content to my server, and I have to store it in files.
I read the stdin stream with the video data and write it to a file, for example like this:
$http_raw_post_data = '';
$handle = fopen("php://input", "rb");
while (!feof($handle)) {
    $http_raw_post_data .= fread($handle, 8192);
}
Which function is better to use: file_get_contents, fread, or something else?
I agree with @hek2mgl that treating this as a multipart form upload would make the most sense, but if you can't alter your input interface, then you can use file_put_contents() on a stream instead of looping through the file yourself.
$handle = fopen("php://input", "rb");
if (false === file_put_contents("outputfile.dat", $handle)) {
    // handle error
}
fclose($handle);
It's cleaner than iterating through the file, and it might be faster (I haven't tested).
Don't use file_get_contents, because it would attempt to load the entire content of the file into a string.
FROM THE PHP DOCS:
file_get_contents() is the preferred way to read the contents of a file into a string. It will use memory mapping techniques if supported by your OS to enhance performance.
I'm sure you just want to create the movie file on your server; this is a more efficient way:
$in  = fopen("php://input", "rb");
$out = fopen('largefile.dat', 'w');
while (!feof($in)) {
    fwrite($out, fread($in, 8192));
}
fclose($in);
fclose($out);
If you use nginx as your web server, I would recommend the nginx upload module, which supports resumable uploads.

Closing incoming connection in Apache/PHP

I've got a script which receives large file uploads via a PUT request. These files have some processing done on the fly as they are uploading. Sometimes we can detect that a file is invalid in the first few bytes, so we die() with an error message. The only problem is that the client still sends the rest of the data, which is a huge waste. Is there a way to shut down the incoming connection?
Code:
$fp = fopen('php://input', 'rb');

// Do some data checking here
if( <invalid> ) {
    fclose($fp);
    die('Error');
}
stream_socket_shutdown looked like it might do the job but it has no effect.
Is there any way to do this? Even if I have to write an extension just for this?
You may want to give the following a shot and see if this terminates the connection properly:
$fp = fopen('php://input', 'rb');

// Do some data checking here
if( <invalid> ) {
    fclose($fp);
    header("Content-Length: 0");
    header("Connection: close");
    flush();
    die('Error');
}

Downloading large files reliably in PHP

I have a PHP script on a server to send files to recipients: they get a unique link and then they can download large files. Sometimes there is a problem with the transfer and the file is corrupted or never finishes. I am wondering if there is a better way to send large files.
Code:
$f = fopen(DOWNLOAD_DIR.$database[$_REQUEST['fid']]['filePath'], 'r');
while (!feof($f)) {
    print fgets($f, 1024);
}
fclose($f);
I have seen functions such as http_send_file and http_send_data, but I am not sure if they will work.
What is the best way to solve this problem?
Regards
erwing
Chunking files is the fastest/simplest method in PHP, if you can't or don't want to use something a bit more professional like cURL, mod_xsendfile on Apache, or a dedicated download script.
$filename = $filePath.$filename;
$chunksize = 5 * (1024 * 1024); // 5 MB (= 5 242 880 bytes) per chunk of the file.

if (file_exists($filename)) {
    set_time_limit(300);
    $size = intval(sprintf("%u", filesize($filename)));

    header('Content-Type: application/octet-stream');
    header('Content-Transfer-Encoding: binary');
    header('Content-Length: '.$size);
    header('Content-Disposition: attachment;filename="'.basename($filename).'"');

    if ($size > $chunksize) {
        $handle = fopen($filename, 'rb');
        while (!feof($handle)) {
            print(@fread($handle, $chunksize));
            ob_flush();
            flush();
        }
        fclose($handle);
    } else {
        readfile($filename);
    }
    exit;
} else {
    echo 'File "'.$filename.'" does not exist!';
}
Ported from richnetapps.com / NeedBee. Tested on 200 MB files, on which readfile() died, even with the maximum allowed memory limit set to 1G, five times the size of the downloaded file.
BTW: I also tested this on files >2GB, but PHP only managed to write the first 2GB of the file and then broke the connection. File-related functions (fopen, fread, fseek) use INT internally, so you ultimately hit the 2GB limit. The solutions mentioned above (i.e. mod_xsendfile) seem to be the only option in this case.
EDIT: Make 100% sure that your file is saved in UTF-8. If you omit that, downloaded files will be corrupted. This is because this solution uses print to push chunks of the file to the browser.
If you are sending truly large files and are worried about the impact this will have, you could use the X-Sendfile header.
See the Stack Overflow question using-xsendfile-with-apache-php and the how-to at blog.adaniels.nl: how-i-php-x-sendfile/
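With mod_xsendfile installed and enabled, the PHP side can shrink to a few lines like the following sketch (the path and filename are placeholders); Apache then streams the file itself:
$path = '/var/files/bigfile.zip';   // placeholder path, ideally outside the docroot
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="bigfile.zip"');
header('X-Sendfile: ' . $path);
exit;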
The best solution would be to rely on lighty or Apache, but if it has to be PHP, I would use PEAR's HTTP_Download (no need to reinvent the wheel, etc.); it has some nice features, like:
Basic throttling mechanism
Ranges (partial downloads and resuming)
See the intro/usage docs.
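A rough usage sketch of HTTP_Download, based on its PEAR documentation; the file path and download name are placeholders, and exact method signatures may differ between releases:
require_once 'HTTP/Download.php';

$dl = new HTTP_Download();
$dl->setFile('/path/to/large_file.zip');                            // placeholder path
$dl->setContentType('application/zip');
$dl->setContentDisposition(HTTP_DOWNLOAD_ATTACHMENT, 'large_file.zip');
$dl->send();                                                         // handles ranges/resuming for you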
We've been using this in a couple of projects and it works quite fine so far:
/**
 * Copy a file's content to php://output.
 *
 * @param string $filename
 * @return void
 */
protected function _output($filename)
{
    $filesize = filesize($filename);
    $chunksize = 4096;
    if ($filesize > $chunksize) {
        $srcStream = fopen($filename, 'rb');
        $dstStream = fopen('php://output', 'wb');
        $offset = 0;
        while (!feof($srcStream)) {
            $offset += stream_copy_to_stream($srcStream, $dstStream, $chunksize, $offset);
        }
        fclose($dstStream);
        fclose($srcStream);
    } else {
        // stream_copy_to_stream behaves strangely when filesize > chunksize.
        // It seems to never hit the EOF.
        // On the other hand, file_get_contents() is not scalable.
        // Therefore we only use file_get_contents() on small files.
        echo file_get_contents($filename);
    }
}
For downloading files, the easiest way I can think of would be to put the file in a temporary location and give the user a unique URL that they can download via regular HTTP.
As part of generating these links, you could also remove files that are more than X hours old.
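A minimal sketch of that idea, with placeholder directory names and a placeholder 12-hour expiry: copy the file under a random name inside a web-accessible directory when generating the link, and prune old files on each run:
$sourceFile  = '/data/files/report.zip';         // placeholder source file
$downloadDir = __DIR__ . '/downloads/';          // placeholder web-accessible directory

$token      = bin2hex(random_bytes(16));         // PHP 7+; uniqid() could stand in on older versions
$publicName = $token . '_' . basename($sourceFile);
copy($sourceFile, $downloadDir . $publicName);
$link = 'http://example.com/downloads/' . $publicName;

// Remove anything older than 12 hours while we are here.
foreach (glob($downloadDir . '*') as $old) {
    if (filemtime($old) < time() - 12 * 3600) {
        unlink($old);
    }
}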
Create a symbolic link to the actual file and make the download link point at the symbolic link. Then, when the user clicks on the DL link, they'll get a file download from the real file but named from the symbolic link. It takes milliseconds to create the symbolic link and is better than trying to copy the file to a new name and download from there.
For example:
<?php
// validation code here

$realFile = "Hidden_Zip_File.zip";
$id = "UserID1234";

if ($_COOKIE['authvalid'] == "true") {
    $newFile = sprintf("myzipfile_%s.zip", $id); // creates: myzipfile_UserID1234.zip
    system(sprintf('ln -s %s %s', $realFile, $newFile), $retval);
    if ($retval != 0) {
        die("Error getting download file.");
    }
    $dlLink = "/downloads/hiddenfiles/".$newFile;
}

// rest of code
?>
<a href="<?php echo $dlLink; ?>">Download File</a>
That's what I did because Go Daddy kills the script after about 2 minutes 30 seconds of running; this prevents that problem and hides the actual file.
You can then set up a cron job to delete the symbolic links at regular intervals.
This whole process then sends the file to the browser, and it doesn't matter how long it runs, since it's not a script.
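A sketch of the cleanup script such a cron job might run; the directory and the one-hour age threshold are placeholders. It only removes symbolic links, never the hidden real files:
$dir = '/var/www/downloads/hiddenfiles/';        // placeholder directory
foreach (glob($dir . 'myzipfile_*.zip') as $entry) {
    if (!is_link($entry)) {
        continue;                                // never touch the real files
    }
    $info = lstat($entry);                       // lstat() inspects the link itself, not its target
    if ($info !== false && $info['mtime'] < time() - 3600) {
        unlink($entry);                          // drop links older than an hour
    }
}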
When I have done this in the past I've used this:
set_time_limit(0); //Set the execution time to infinite.
header('Content-Type: application/exe'); //This was for a LARGE exe (680MB) so the content type was application/exe
readfile($fileName); //readfile will stream the file.
These 3 lines of code will do all the work of the download: readfile() will stream the entire specified file to the client. Be sure to set an infinite time limit, or you may run out of time before the file has finished streaming.
If you are using lighttpd as a web server, an alternative for secure downloads would be to use mod_secdownload. It needs server configuration, but you let the web server handle the download itself instead of the PHP script.
Generating the download URL would look like this (taken from the documentation), and it could of course be generated only for authorized users:
<?php
$secret = "verysecret";
$uri_prefix = "/dl/";

# filename
# please note file name starts with "/"
$f = "/secret-file.txt";

# current timestamp
$t = time();
$t_hex = sprintf("%08x", $t);
$m = md5($secret.$f.$t_hex);

# generate link
printf('<a href="%s%s/%s%s">%s</a>',
       $uri_prefix, $m, $t_hex, $f, $f);
?>
Of course, depending on the size of the files, using readfile() as proposed by Unkwntech is excellent. And using X-Sendfile as proposed by garrow is another good option, also supported by Apache.
header("Content-length:".filesize($filename));
header('Content-Type: application/zip'); // ZIP file
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="downloadpackage.zip"');
header('Content-Transfer-Encoding: binary');
ob_end_clean();
readfile($filename);
exit();
I'm not sure this is a good idea for large files. If the thread for your download script runs until the user has finished the download, and you're running something like Apache, just 50 or more concurrent downloads could crash your server, because Apache isn't designed to run large numbers of long-running threads at the same time. Of course I might be wrong, if the Apache thread somehow terminates and the download sits in a buffer somewhere while the download progresses.
I have used the following snippet, found in the comments of the PHP manual entry for readfile():
function _readfileChunked($filename, $retbytes = true) {
    $chunksize = 1*(1024*1024); // how many bytes per chunk
    $buffer = '';
    $cnt = 0;

    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }

    while (!feof($handle)) {
        $buffer = fread($handle, $chunksize);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }

    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}
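One possible way to call this helper, with a placeholder file path and header values:
$file = '/path/to/large_file.zip';               // placeholder path
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));
_readfileChunked($file);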
I had the same problem; it was solved by adding this before starting the session:
session_cache_limiter('none');
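For clarity, a minimal sketch of the call order this implies (the filename and path are placeholders): the limiter is set before session_start() and before any download headers go out:
session_cache_limiter('none');   // must run before session_start()
session_start();

header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="archive.zip"');  // placeholder name
readfile('/path/to/archive.zip');                                    // placeholder path
exit;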
This is tested on files over 200 MB in size, on a server with a 256 MB memory limit.
header('Content-Type: application/zip');
header("Content-Disposition: attachment; filename=\"$file_name\"");
set_time_limit(0);

$file = @fopen($filePath, "rb");
while (!feof($file)) {
    print(@fread($file, 1024*8));
    ob_flush();
    flush();
}
