Hello, I want to download a remote zip file, which is about 8 MB. I wrote this simple script:
set_time_limit(0);
$zip = file_get_contents('http://web.tld/folder/download/getfile.do?filename=file.zip&_lang=Lang');
file_put_contents('zip_files/file.zip',$zip);
It works, but the stored file is not 8 MB, only 52 KB.
It's the same if I use:
set_time_limit(0);
$url = 'http://web.tld/folder/download/getfile.do?filename=file.zip&_lang=Lang';
$path = 'zip_files/file.zip';
/* get and save remote data without exceeding php memory limit */
$fp = fopen($path, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
So maybe I have to use some stream option? Thank you!
PS: I tried the Snoopy library (http://sourceforge.net/projects/snoopy/) and it's also the same, only 52 KB :P
include "libs/Snoopy-2.0/Snoopy.class.php";
$snoopy = new Snoopy;
$snoopy->submit($url);
print $snoopy->results;
Look inside the saved file (use any text editor): it's probably not a zip at all, just an error page telling you the URL is wrong or something similar.
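If you want to check what came back without opening the file by hand, here is a small sketch (reusing the URL from the question) that inspects the HTTP status and Content-Type before saving anything:
<?php
$url = 'http://web.tld/folder/download/getfile.do?filename=file.zip&_lang=Lang';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // the server may redirect before serving the file
$body = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);
// Note: some servers label archives application/octet-stream; adjust the check as needed
if ($status !== 200 || stripos((string)$type, 'zip') === false) {
    // Probably an HTML error or login page rather than the archive
    echo "Got HTTP $status with Content-Type $type\n";
} else {
    file_put_contents('zip_files/file.zip', $body);
}
?>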
<?php
file_put_contents("10gb.zip", fopen("http://website.website/10GB.zip", 'r'));
echo "File Downloaded!";
I am using this code to download files from a URL to my server. But when I run my code, my hosting server's memory goes into the red! -_- and my download gets stuck at 3.79 GB.
Is there any limit on downloading big files? I want to download more than 50 GB across 5 processes. Is that possible?
I would go for streaming when dealing with large files rather than copying them directly.
From the example provided here: http://php.net/manual/en/function.stream-copy-to-stream.php
you can try:
<?php
function pipe_streams($in, $out)
{
    // Copy 8 KB at a time so the whole file never sits in memory
    while (!feof($in))
        fwrite($out, fread($in, 8192));
}

// pipe_streams() expects stream resources, not URLs, so fopen() both ends first
$in  = fopen("http://website.website/10GB.zip", 'r');
$out = fopen("10gb.zip", 'w');
pipe_streams($in, $out);
fclose($in);
fclose($out);
?>
Or use cURL (http://php.net/manual/en/book.curl.php):
<?php
$url = "http://website.website/10GB.zip";
$path = "10gb.zip";
$fp = fopen($path, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
Check https://www.sitepoint.com/performant-reading-big-files-php/ for more streaming options.
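For completeness, the built-in stream_copy_to_stream() from the manual page linked above does the same chunked copy as pipe_streams(); a minimal sketch with the same example URL:
<?php
$in  = fopen('http://website.website/10GB.zip', 'r');
$out = fopen('10gb.zip', 'w');
stream_copy_to_stream($in, $out); // copies chunk by chunk, not all at once
fclose($in);
fclose($out);
?>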
I have an image on another server, but when I get this image with the file_get_contents() function it returns a Not Found error. I am using:
file_put_contents(destination_path, file_get_contents(another_server_path));
Please help me if there is another way to get this image.
Try this.
The problem is the special characters in the URL: you have to percent-encode the special characters in the URL's basename before requesting the file.
$imgfile = 'http://www.lagrolla.com.au/image/m fr 137 group.jpg';
$destinationPath = '/path/to/folder/';
$filename = basename($imgfile);
$imgfile = str_replace($filename, '', $imgfile) . rawurlencode($filename);
copy($imgfile, $destinationPath . $filename);
Another way to copy a file from another server is to use cURL:
$ch = curl_init('http://www.lagrolla.com.au/image/data/m%20fr%20137%20group.jpg');
$destinationPath = '/path/to/folder/filenameWithNoSpaces.jpg';
$fp = fopen($destinationPath, 'wb');
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);
Note: it is bad practice to save images with spaces in the file name, so you should save the file under a proper name.
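For example, one way to build a space-free local name from the remote basename (a small sketch; the underscore replacement is my choice):
// Decode the remote basename, then replace whitespace before saving locally
$filename = preg_replace('/\s+/', '_', rawurldecode(basename($imgfile)));
// e.g. 'm%20fr%20137%20group.jpg' becomes 'm_fr_137_group.jpg'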
I am looking for a function that gets the metadata of a .mp3 file from a URL (NOT a local .mp3 file on my server).
Also, I don't want to install http://php.net/manual/en/id3.installation.php or anything similar on my server.
I am looking for a standalone function.
Right now I am using this function:
<?php
function getfileinfo($remoteFile)
{
$url=$remoteFile;
$uuid=uniqid("designaeon_", true);
$file="../temp/".$uuid.".mp3";
$size=0;
$ch = curl_init($remoteFile);
//==============================Get Size==========================//
$contentLength = 'unknown';
$ch1 = curl_init($remoteFile);
curl_setopt($ch1, CURLOPT_NOBODY, true);
curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch1, CURLOPT_HEADER, true);
curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch1);
curl_close($ch1);
if (preg_match('/Content-Length: (\d+)/i', $data, $matches)) { // /i: header names may be lowercase
$contentLength = (int)$matches[1];
$size=$contentLength;
}
//==============================Get Size==========================//
if (!$fp = fopen($file, "wb")) {
echo 'Error opening temp file for binary writing';
return false;
} else if (!$urlp = fopen($url, "r")) {
echo 'Error opening URL for reading';
return false;
}
try {
$to_get = 65536; // 64 KB
$chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
$got = 0; $data = null;
// Grab the first 64 KB of the file
while (!feof($urlp) && $got < $to_get) {
$data .= fgets($urlp, $chunk_size);
$got += $chunk_size;
}
fwrite($fp, $data);
// Grab the last 64 KB of the file, if we know how big it is
if ($size > 0) {
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
curl_exec($ch);
}
// Now $fp should hold the first and last 64 KB of the file!!
fclose($fp);
fclose($urlp);
} catch (Exception $e) {
fclose($fp);
fclose($urlp);
echo 'Error transferring file using fopen and cURL !!';
return false;
}
$getID3 = new getID3; // requires the getID3 library to be included
$filename=$file;
$ThisFileInfo = $getID3->analyze($filename);
getid3_lib::CopyTagsToComments($ThisFileInfo);
unlink($file);
return $ThisFileInfo;
}
?>
This function downloads 64 KB from the URL of an .mp3 file, then returns the metadata array by using the getID3 library (which works on local .mp3 files only), and then deletes the 64 KB it downloaded.
The problem with this function is that it is way too slow by nature (it downloads 64 KB per .mp3; imagine 1000 mp3 files).
To make my question clear: I need a fast standalone function that reads the metadata of a remote .mp3 file by URL.
Yeah, well what do you propose? How do you expect to get data if you don't get data? There is no way to have a generic remote HTTP server send you that ID3 data. Really, there is no magic. Think about it.
What you're doing now is already pretty solid, except that it doesn't handle all versions of ID3 and won't work for files with more than 64 KB of ID3 tags. What I would do to improve it is to use multi-cURL.
There are several PHP classes available that make this easier:
https://github.com/jmathai/php-multi-curl
$mc = EpiCurl::getInstance();
$results[] = $mc->addUrl(/* your stream URL here */); // run this in a loop, 10 at a time or so
foreach ($results as $result) {
// Do something with the data.
}
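If you'd rather not depend on a library, the same idea works with PHP's native curl_multi_* functions. A rough sketch (assuming $urls is an array of your .mp3 URLs; servers that ignore the Range header will send whole files):
<?php
$mh = curl_multi_init();
$handles = [];
foreach ($urls as $i => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_RANGE, '0-65535'); // request only the first 64 KB
    curl_multi_add_handle($mh, $ch);
    $handles[$i] = $ch;
}
// Drive all transfers in parallel until every handle is finished
do {
    curl_multi_exec($mh, $running);
    if (curl_multi_select($mh) === -1) {
        usleep(100000); // avoid busy-looping if select() fails
    }
} while ($running > 0);
$chunks = [];
foreach ($handles as $i => $ch) {
    $chunks[$i] = curl_multi_getcontent($ch); // leading bytes to feed to getID3
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>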
I'm trying to run a PHP script (from a Linux server) that downloads a file through a direct download link and saves it on my server.
Here is the script I'm using:
<?php
$url = 'http://download.maxmind.com/app/geoip_download?edition_id=108&date=20131015&suffix=zip&license_key=XXXXXXXXXXX';
$path = '/apps/test/';
$fp = fopen($path, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
For some reason it doesn't work for me. Any suggestions?
You need to verify that the ports are open on your firewall, then use the command below
(--content-disposition also makes wget save the file under its original name):
shell_exec("wget -P /apps/birst/php_test_scripts/ --content-disposition 'https://download.maxmind.com/app/geoip_download?edition_id=108&suffix=zip&license_key=XXXXXXXX'");
Why don't you just use:
shell_exec("wget -P /target/directory/ http://download.link.com/download.zip");
Try this. Your $path points at a directory; fopen() and file_put_contents() need a full file name, which is why your script fails:
$url = 'http://download.maxmind.com/app/geoip_download?edition_id=108&date=20131015&suffix=zip&license_key=XXXXXXXXXXX';
$path = '/apps/test/';
$filepath = $path .'file.zip';
$data = file_get_contents($url);
file_put_contents($filepath, $data);
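The same one-line fix applies if you keep your original cURL version: $path must name a file, not a directory, because fopen('/apps/test/', 'w') fails. A sketch (the file name is mine):
<?php
$url = 'http://download.maxmind.com/app/geoip_download?edition_id=108&date=20131015&suffix=zip&license_key=XXXXXXXXXXX';
$path = '/apps/test/file.zip'; // a full file name, not just the directory
$fp = fopen($path, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // the download link may redirect
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>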
I am using FPDF to extract info from a PNG file. Unfortunately, the server has fopen disabled. Can anyone recommend a good way of getting around this? Any help would be much appreciated. Thanks in advance!
function _parsepng($file)
{
// Extract info from a PNG file
$f = fopen($file,'rb');
if(!$f)
$this->Error('Can\'t open image file: '.$file);
$info = $this->_parsepngstream($f,$file);
fclose($f);
return $info;
}
You can try curl but usually if your hosting company disables one they also disable the other.
I really don't know if it will work :-)
but you may try fsockopen:
http://www.php.net/manual/en/function.fsockopen.php
file:// can be used as the protocol.
Hope this helps.
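For what it's worth, an untested sketch of the fsockopen() route (host and path below are placeholders), fetching over plain HTTP and stripping the headers by hand:
<?php
$host = 'example.com';          // placeholder host
$path = '/images/picture.png';  // placeholder path
$fp = fsockopen($host, 80, $errno, $errstr, 10);
if ($fp) {
    fwrite($fp, "GET $path HTTP/1.0\r\nHost: $host\r\nConnection: close\r\n\r\n");
    $response = '';
    while (!feof($fp)) {
        $response .= fread($fp, 8192);
    }
    fclose($fp);
    // The body starts after the blank line that ends the HTTP headers
    $body = substr($response, strpos($response, "\r\n\r\n") + 4);
    file_put_contents('/tmp/myfile.png', $body);
}
?>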
Ended up using a cURL workaround. Thanks everyone for the input!
function _parsepng($file)
{
// Download the remote PNG to a local temp file with cURL
$ch = curl_init($file);
$fp = fopen('/tmp/myfile.png', 'wb');
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_FILE, $fp);
$image_data = curl_exec($ch);
curl_close($ch);
fclose($fp); // close the write handle so the download is flushed to disk
// Extract info from a PNG file
$f = fopen('/tmp/myfile.png','rb');
if(!$f)
$this->Error('Can\'t open image file: '.$file);
$info = $this->_parsepngstream($f,$file);
fclose($f);
return $info;
}