Okay, I have a problem that I hope you can help me fix.
I am running a server that stores very large video files, some up to 650 MB. I need a user to be able to request a page and have it download the file to their machine. I have tried everything. A plain readfile() request hangs for about 90 seconds before quitting and gives me a "No data received" error (code 324). A chunked readfile script that I found on several websites doesn't even start a download. FTP-through-PHP solutions did nothing but give me errors when I tried to get the file. And the only cURL solutions I have found just create another file on my server, which is not what I need.
To be clear, I need the user to be able to download the file to their computer, not have it copied to the server.
I don't know if this code is garbage or if it just needs a tweak or two, but any help is appreciated!
<?php
// The requested filename comes straight from the query string.
$fn = $_GET["fn"];
echo $fn."<br/>";

$url  = $fn;       // remote URL of the video
$path = "dl".$fn;  // local path on *this* server

// cURL writes the response body into $fp, so this just copies the remote
// file onto the server instead of sending it to the visitor's browser.
$fp = fopen($path, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
I wouldn't recommend serving large binary files using PHP, or any other scripting technology for that matter. They were never designed for this -- you can use Apache, nginx, or whatever standard HTTP server you have on the back end. If you still need to use PHP, then you should probably check out readfile_chunked.
http://php.net/readfile#48683
and here's a great tutorial.
http://teddy.fr/blog/how-serve-big-files-through-php
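For reference, here is a minimal sketch of that chunked approach, assuming the file lives on disk and that $file and the 8 KB chunk size are just placeholders you would adapt:

<?php
$file = '/path/to/video.mp4'; // placeholder path to the video on disk

header('Content-Type: video/mp4');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));

// Make sure PHP itself is not buffering the output.
while (ob_get_level()) {
    ob_end_clean();
}

// Send the file a small chunk at a time instead of loading it all at once.
$handle = fopen($file, 'rb');
while (!feof($handle)) {
    echo fread($handle, 8192);
    flush();
}
fclose($handle);
?>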
good luck.
readfile() doesn't buffer. However, PHP itself might buffer. Turn output buffering off:
while (ob_get_level())
ob_end_clean();
readfile($file);
Your web server might buffer too. Turn that off as well. How you do it depends on the web server, and why it's buffering.
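For example, if nginx is sitting in front of PHP (an assumption about your setup), it honours a response header that turns its buffering off for that request; with Apache, buffering such as mod_deflate compression has to be disabled in the server config instead:

// Ask nginx (FastCGI/proxy) not to buffer this particular response.
header('X-Accel-Buffering: no');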
I see two problems that can happen:
First: your web server may be closing the connection on a timeout. You should look at the web server config.
Second: a timeout in cURL itself. I recommend reading this post.
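For the cURL side, a rough sketch of loosening the timeouts; the URL, destination path, and values are only illustrative:

$fp = fopen('/path/to/local-copy.mp4', 'w');      // placeholder destination
$ch = curl_init('http://example.com/video.mp4');  // placeholder source URL
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);  // seconds allowed to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 3600);       // seconds allowed for the whole transfer
curl_exec($ch);
curl_close($ch);
fclose($fp);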
Related
I am using file_get_contents in PHP to get information from a client's collections on contentDM. CDM has an API so you can get that info by making php queries, like, say:
http://servername:port/webutilities/index.php?q=function/arguments
It has worked pretty well thus far, across computers and operating systems. However, this time things work a little differently.
http://servername/utils/collection/mycollectionname/id/myid/filename/myname
For this query I fill in mycollectionname, myid, and myname with relevant values. myid and mycollectionname have to exist in the system, obviously. However, myname can be anything you want. When you run the query, it doesn't return a web page or anything to your browser. It just automatically downloads a file with myname as the name of the file, and puts it in your local /Downloads folder.
I DON'T WISH TO DOWNLOAD THIS FILE. I just want to read the contents of the file it returns directly into PHP as a string. The file I am trying to get just contains XML data.
file_get_contents works to get the data in that file if I use it with PHP 7 and Apache on my laptop running Ubuntu. But on my desktop, which runs Windows 10 and XAMPP (Apache and PHP 5), I get this error (I've replaced sensitive data with ###):
Warning:
file_get_contents(###/utils/collection/###/id/1110/filename/1111.cpd):
failed to open stream: No such file or directory in
D:\Titus\Documents\GitHub\NativeAmericanSCArchive\NASCA-site\api\update.php
on line 18
My coworkers have been unable to help me, so I am curious whether anyone here can confirm or deny whether this is an operating-system issue or a PHP-version issue, and whether there's a solid alternative method that is likely to work in PHP 5 on both Windows and Ubuntu.
file_get_contents() is a simple screwdriver. It's very good for getting data with simple GET requests where the headers, HTTP request method, timeout, cookie jar, redirects, and other important things do not matter.
fopen() with a stream context or cURL with setopt are powerdrills with every bit and option you can think of.
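For instance, a small sketch of file_get_contents with a stream context, just to show a few of those knobs without switching to cURL; the URL and values are placeholders:

$context = stream_context_create(array(
    'http' => array(
        'method'          => 'GET',
        'timeout'         => 30,                           // seconds
        'header'          => "Accept: application/xml\r\n",
        'follow_location' => 1,
    ),
));
$data = file_get_contents('http://example.com/utils/collection/name/id/1/filename/f.xml', false, $context);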
In addition to this, due to some recent website hacks, we had to secure our sites more. In doing so, we discovered that file_get_contents failed to work, whereas cURL still would.
I'm not 100% sure, but I believe this php.ini setting may have been blocking the file_get_contents request.
; Disable allow_url_fopen for security reasons
allow_url_fopen = 0
Either way, our code now works with curl.
References:
http://25labs.com/alternative-for-file_get_contents-using-curl/
http://phpsec.org/projects/phpsecinfo/tests/allow_url_fopen.html
So, you can solve this problem by using the PHP cURL extension. Here is an example that does the same thing you were trying:
function curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response body instead of printing it
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$url = 'your_api_url';
$data = curl($url);
And finally you can check your data with print_r($data). Hope it works for you and helps you understand.
Reference : http://php.net/manual/en/book.curl.php
I need to download large files to my server. I have a dedicated server with a 100 Mbps line, but it's taking too much time to download an 8 MB file. I use the code below. Is there any class to download files quickly, one that chunks the file and downloads it really fast?
<?php
$url = 'http://www.example.com/a-large-file.zip';
$path = '/path/to/a-large-file.zip';
$fp = fopen($path, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
Edit: the file is an MP4 file.
If your line is as quick as you say, then it's likely to be a bottleneck at the other end.
Don't forget, file transfer speed is affected not only by your download speed, but also by the other end's upload speed.
As such, it's unlikely there is anything you can do to improve this. Certainly no class or code will help you improve it by more than a few milliseconds - your best bet is to look at the network.
Specifically, check the number of concurrent connections from your end - it's all well and good having a decent line, but if it's being used for 100 connections, it's always going to be slower than one being used for 1 connection.
Likewise, check the download from another machine/server - if it's just as slow from a different server, then it's almost certainly a bottleneck at the other end.
I'm a novice, so I'll try and do my best to explain a problem I'm having. I apologize in advance if there's something I left out or is unclear.
I'm serving an 81MB zip file outside my root directory to people who are validated beforehand. I've been getting reports of corrupted downloads or an inability to complete the download. I've verified this happening on my machine if I simulate a slow connection.
I'm on shared hosting running Apache-Coyote/1.1.
I get a network timeout error. I think my host might be killing the downloads if they take too long, but they haven't confirmed it either way.
I thought I was maybe running into a memory limit or time limit, so my host installed the Apache module XSendFile. The headers in the file that handles the download after validation are set this way:
<?php
set_time_limit(0);
$file = '/absolute/path/to/myzip/myzip.zip';
header("X-Sendfile: $file");
header("Content-type: application/zip");
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
Any help or suggestions would be appreciated. Thanks!
I would suggest taking a look at this comment:
http://www.php.net/manual/en/function.readfile.php#99406
Particularly if you are using Apache. If not, the code in the link above should still be helpful:
I started running into trouble when I had really large files being sent to clients with really slow download speeds. In those cases, the script would time out and the download would terminate with an incomplete file. I am dead-set against disabling script timeouts - any time that is the solution to a programming problem, you are doing something wrong - so I attempted to scale the timeout based on the size of the file. That ultimately failed, though, because it was impossible to predict the speed at which the end user would be downloading the file, so it was really just a best guess and inevitably we still get reports of script timeouts.

Then I stumbled across a fantastic Apache module called mod_xsendfile ( https://tn123.org/mod_xsendfile/ (binaries) or https://github.com/nmaier/mod_xsendfile (source)). This module basically monitors the output buffer for the presence of special headers, and when it finds them it triggers Apache to send the file on its own, almost as if the user requested the file directly. PHP processing is halted at that point, so there are no timeout errors regardless of the size of the file or the download speed of the client. And the end client gets the full benefits of Apache sending the file, such as an accurate file size report and a download status bar.

The code I finally ended up with is too long to post here, but in general it uses the mod_xsendfile module if it is present, and if not the script falls back to using the code I originally posted. You can find some example code at https://gist.github.com/854168
EDIT
Just to have a reference for code that does the "chunking" (Link to Original Code):
<?php
function readfile_chunked($filename, $type = 'array') {
    $chunksize = 1 * (1024 * 1024); // how many bytes per chunk
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    // Note: this collects the whole file in memory, chunk by chunk;
    // it does not stream the chunks to the client.
    $lines = ($type === 'string') ? '' : array();
    while (!feof($handle)) {
        switch ($type) {
            case 'array':
                // Returns an array of lines, like file()
                $lines[] = fgets($handle, $chunksize);
                break;
            case 'string':
                // Returns one string, like file_get_contents()
                $lines .= fread($handle, $chunksize);
                break;
        }
    }
    fclose($handle);
    return $lines;
}
?>
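For completeness, a rough sketch of the "use mod_xsendfile if it is loaded, otherwise serve the file from PHP" pattern described above. The apache_get_modules() check only works when PHP runs as an Apache module, so treat the detection as an assumption about the environment:

<?php
$file = '/absolute/path/to/myzip/myzip.zip'; // placeholder path

header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');

if (function_exists('apache_get_modules')
        && in_array('mod_xsendfile', apache_get_modules())) {
    // Let Apache stream the file itself; PHP's work ends with this header.
    header('X-Sendfile: ' . $file);
} else {
    // Fall back to serving it from PHP, e.g. with readfile() or chunked output.
    header('Content-Length: ' . filesize($file));
    readfile($file);
}
?>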
I have a script that pulls URLs from the database and downloads them (pdf or jpg) to a local file.
Code is:
$cp = curl_init($remote_url);
$fp = fopen($dest_temp, "w");
#curl_setopt($cp, CURLOPT_FILE, $fp);
#curl_setopt($ch, CURLOPT_HEADER, TRUE);
curl_exec($cp);
curl_close($cp);
fclose($fp);
If the remote file is there, it works fine. If the remote file is not there, it just bombs and the browser hangs forever.
What's the best approach to handling this? Should I somehow ping for the file first, or can I set options above that will handle it? I tried setting timeouts, but it had no effect.
This is my first experience using cURL.
I used to use wget much as you're using cURL and got frustrated with the lack of ability to know what is going on, because it's essentially calling out to an external program.
I use Perl's WWW::Mechanize, and the link below is a PHP version which might be a bit more robust for dealing with such instances.
http://www.compasswebpublisher.com/php/www-mechanize-for-php
Hope this helps.
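If you do stay with cURL, a sketch along these lines may at least make it fail fast instead of hanging; CURLOPT_FAILONERROR, the timeout values, and the HTTP-code check are suggestions rather than anything specific to your setup:

$remote_url = 'http://example.com/file.pdf'; // placeholder
$dest_temp  = '/tmp/file.pdf';               // placeholder

$cp = curl_init($remote_url);
$fp = fopen($dest_temp, 'w');
curl_setopt($cp, CURLOPT_FILE, $fp);
curl_setopt($cp, CURLOPT_FAILONERROR, true);   // treat HTTP errors such as 404 as failures
curl_setopt($cp, CURLOPT_CONNECTTIMEOUT, 10);  // give up if the connection cannot be made
curl_setopt($cp, CURLOPT_TIMEOUT, 300);        // give up if the transfer takes too long

$ok   = curl_exec($cp);
$code = curl_getinfo($cp, CURLINFO_HTTP_CODE);
curl_close($cp);
fclose($fp);

if (!$ok || $code !== 200) {
    unlink($dest_temp); // remove the empty or partial file
    // handle the missing remote file here
}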
My script downloads only 2 MB. I think it is because of some sort of restriction on this server. My script resides on this server and is supposed to download files (like pictures and movies) from the web. But movies fail; only 2 MB get downloaded to the server. Can I do anything about it?
Try adding:
php_value upload_max_filesize 15M
To your .htaccess (if you're using Apache, at least). That is 15 MB; the default is usually set to 2 MB.
This has worked fine for me, but ftrotter is correct in pointing out that you may also hit time limits.
If your PHP is not running in safe mode you can try calling:
set_time_limit($seconds);
To change the length of time your PHP script is allowed to run. Otherwise you will have to change max_execution_time in your php.ini if you can.
There are several problems you might be having here.
Take a close look at your php.ini file.
You want to be certain, for instance, that you have sufficiently large values for the settings below (a quick sketch follows the list):
max_execution_time
memory_limit
you might even consider
default_socket_timeout
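For instance, something along these lines, either in php.ini or at the top of the script; the values are only illustrative:

ini_set('memory_limit', '256M');           // illustrative value
ini_set('default_socket_timeout', '600');  // seconds
set_time_limit(0);                         // effectively removes max_execution_time for this script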
It would help us if you included any error the script gives, or reports to syslog (or the Windows equivalent).
HTH,
-FT
Also be aware that if you're on shared hosting, some providers will kill off long running processes without warning.
You need to read files in small chunks, not the whole file in one file_get_contents call. Provide an example of how you download them and we will help modify the script to remove the memory-consuming parts.
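For example, a rough sketch of chunked copying with plain streams, assuming allow_url_fopen is enabled and with placeholder URL and paths:

$src = fopen('http://example.com/big-movie.mp4', 'rb'); // remote source (placeholder)
$dst = fopen('/path/to/local-copy.mp4', 'wb');          // local destination (placeholder)

while (!feof($src)) {
    // Copy 1 MB at a time so the whole file never sits in memory.
    fwrite($dst, fread($src, 1024 * 1024));
}

fclose($src);
fclose($dst);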
This is how I do it:
$filename = 'file';
$fp = fopen($filename,'w+');
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $the_link);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
curl_close($ch);
fclose($fp);
Also, I have changed max_execution_time and upload_max_filesize via the .htaccess file.
Yes, but you need to write the script so it can resume. This is pretty straightforward: use CURLOPT_RANGE to continue after the server first cuts the transfer off.
Then call your downloading page from time to time via a cron job, and use flock() around the cURL code to see whether the download is still in progress or has terminated.
It is very unlikely that you can download large files in one HTTP response.
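A rough sketch of that resume idea, assuming the remote server honours range requests; the URL, paths, and lock file are placeholders:

$url  = 'http://example.com/a-large-file.zip'; // placeholder
$path = '/path/to/a-large-file.zip';           // placeholder
$lock = fopen($path . '.lock', 'c');

// Skip this cron run if a previous download is still in progress.
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit;
}

$done = file_exists($path) ? filesize($path) : 0;

$fp = fopen($path, 'a');                      // append to whatever is already there
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_RANGE, $done . '-'); // resume from the last downloaded byte
curl_exec($ch);
curl_close($ch);
fclose($fp);

flock($lock, LOCK_UN);
fclose($lock);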