Uploading a remote URL to a server - PHP

I am using the following code to upload remote files to my server. It works great where a direct download link is given, but recently I have noticed that a few websites give script-generated links (e.g. a .php URL) as the download link; when I click such a link the file starts downloading to my PC, yet even the HTML source of that page does not show the direct link.
Here is my code:
<form method="post">
<input name="url" size="50" />
<input name="submit" type="submit" />
</form>
<?php
if (!isset($_POST['submit'])) die();

$destination_folder = 'mydownloads/';
$url = $_POST['url'];
$newfname = $destination_folder . basename($url);

// Open the remote file for reading and the local file for writing
$file = fopen($url, "rb");
if ($file) {
    $newf = fopen($newfname, "wb");
    if ($newf) {
        // Copy in 8 KB chunks
        while (!feof($file)) {
            fwrite($newf, fread($file, 1024 * 8));
        }
        fclose($newf);
    }
    fclose($file);
}
?>
It works great for all links where the download link is direct. For example, if I give it
http://priceinindia.org/muzicpc/48.php?id=415508
it will upload the music file, but the file name will be 48.php?id=415508, while the actual MP3 file is stored at
http://lq.mzc.in/data48-2/37202/Appy_Budday_(Videshi)-Santokh_Singh(www.Mzc.in).mp3
So if I could get the actual destination URL, the name would be Appy_Budday_(Videshi)-Santokh_Singh(www.Mzc.in).mp3.
In short, I want to get the actual download URL.

You should use the cURL library for this: http://php.net/manual/en/book.curl.php
An example of how to use cURL is provided in the manual (at that link). Before you close the connection, call curl_getinfo() (http://php.net/manual/en/function.curl-getinfo.php) and specifically ask for CURLINFO_EFFECTIVE_URL, which is what you want.
<?php
// Create a cURL handle
$ch = curl_init('http://www.yahoo.com/');
// Return the transfer as a string, and follow redirects so the
// effective URL reflects the final location
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
// Execute
$fileData = curl_exec($ch);
// Check if any error occurred
if (!curl_errno($ch)) {
    $effectiveURL = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
}
// Close handle
curl_close($ch);
?>
(You can also use cURL to write directly to a file: use the CURLOPT_FILE option. Also in the manual.)
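A minimal sketch of that CURLOPT_FILE approach (the URL and target path are placeholders, and error handling is omitted):
<?php
// Hypothetical source URL and local target path
$url = 'http://www.yahoo.com/';
$fp = fopen('mydownloads/page.html', 'wb');

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);            // stream the response body straight into $fp
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects before writing
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>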

The problem is that the original URL is redirecting. You want to catch the URL it is being redirected to: read the response headers and then use basename($redirect_url) as your file name.
+1 for Robbie's suggestion of using cURL.
If you run (from the command line)
[username@localhost ~]$ curl http://priceinindia.org/muzicpc/48.php?id=415508 -I
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.0.10
Date: Wed, 19 Sep 2012 07:31:18 GMT
Content-Type: text/html
Connection: keep-alive
X-Powered-By: PHP/5.3.10
Location: http://lq.mzc.in/data48-2/37202/Appy_Budday_(Videshi)-Santokh_Singh(www.Mzc.in).mp3
You can see the Location header here is the new URL.
In PHP, try something like:
$ch = curl_init('http://priceinindia.org/muzicpc/48.php?id=415508');
curl_setopt($ch, CURLOPT_HEADER, 1);             // include headers in the output
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);     // return the response instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // don't follow the redirect
$c = curl_exec($ch); // execute
echo curl_getinfo($ch, CURLINFO_HTTP_CODE);    // will echo the HTTP code, 302 for a temporary move
echo curl_getinfo($ch, CURLINFO_REDIRECT_URL); // the URL being redirected to
You want the Location part of the header; CURLINFO_REDIRECT_URL returns it directly when redirects are not followed (CURLINFO_EFFECTIVE_URL only changes once redirects are actually followed).
EDIT 3... or 4?
Right, I see what's happening. You actually want to follow the Location URL and then echo the effective URL, without downloading the file. Try:
$ch = curl_init('http://priceinindia.org/muzicpc/48.php?id=415508');
curl_setopt($ch, CURLOPT_NOBODY, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$c = curl_exec($ch); //execute
echo curl_getinfo($ch, CURLINFO_EFFECTIVE_URL); // url being redirected to
When I run this, my output is
[username@localhost ~]$ php test.php
http://lq.mzc.in/data48-2/37202/Appy_Budday_(Videshi)-Santokh_Singh(www.Mzc.in).mp3
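Putting the two steps together with the original download loop, a rough sketch (untested; it assumes the final URL is publicly readable and that allow_url_fopen is enabled):
<?php
$url = $_POST['url'];

// Step 1: resolve the redirect chain without downloading the body
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
$effectiveUrl = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
curl_close($ch);

// Step 2: download using the real file name (query string stripped)
$newfname = 'mydownloads/' . basename(parse_url($effectiveUrl, PHP_URL_PATH));
$file = fopen($effectiveUrl, 'rb');
if ($file) {
    $newf = fopen($newfname, 'wb');
    if ($newf) {
        while (!feof($file)) {
            fwrite($newf, fread($file, 1024 * 8));
        }
        fclose($newf);
    }
    fclose($file);
}
?>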

Related

Download Multiple Google Drive Files as a Zipped File

Once the user logs in to the portal, a list of PDF reports is displayed.
To download the reports on demand, the user can check/uncheck the box associated with each report.
For instance:
There are 10 reports in the list. The user has selected 7 reports and clicked Download. This workflow should result in the download of a single zipped file comprising all the selected reports (7), rather than downloading each file individually.
The 10 reports in the above example are stored in Google Drive. We store the Google download URL in the database, and using this download URL we need to accomplish the aforesaid result.
I tried the Google Drive API Quickstart reference. Error: 403 hit at the second attempt to save the files to the file system.
A PHP cURL implementation failed with a 403 status code at the third round of running the script.
Basically, the plan was to save each selected file inside a folder in the file system, then zip the folder and download the zip (see the sketch after the code below).
Here is what I have tried recently:
<?php
define('SAVE_REPORT_DIR', getcwd() . '/pathtosave/' . time());

function fs_report_save($fileUrl)
{
    static $counter = 1;
    if (!file_exists(SAVE_REPORT_DIR)) {
        mkdir(SAVE_REPORT_DIR, 0777, true);
    }
    // The path & filename to save to. The counter keeps names unique
    // even when several files are saved within the same second.
    $saveTo = SAVE_REPORT_DIR . '/' . time() . '_' . $counter++ . '.pdf';
    // Open file handler.
    $fp = fopen($saveTo, 'w+');
    // If $fp is FALSE, something went wrong.
    if ($fp === false) {
        throw new Exception('Could not open: ' . $saveTo);
    }
    // Create a cURL handle.
    $ch = curl_init($fileUrl);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    // Pass our file handle to cURL.
    curl_setopt($ch, CURLOPT_FILE, $fp);
    // Timeout if the file doesn't download after 20 seconds.
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);
    // Execute the request.
    curl_exec($ch);
    // If there was an error, throw an Exception.
    if (curl_errno($ch)) {
        throw new Exception(curl_error($ch));
    }
    // Get the HTTP status code.
    $statusCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    // Close the cURL handler.
    curl_close($ch);
    // Close the file handler.
    fclose($fp);
    if ($statusCode == 200) {
        echo 'File: ' . $saveTo . '. Downloaded!<br>';
    } else {
        echo "Status Code: " . $statusCode;
    }
}

$reports = array(
    'https://drive.google.com/uc?id=a&export=download',
    'https://drive.google.com/uc?id=b&export=download',
    'https://drive.google.com/uc?id=c&export=download'
);
foreach ($reports as $report) {
    fs_report_save($report);
}
?>
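For the final zip-and-download step of the plan, a minimal sketch using PHP's ZipArchive (assuming the PDFs have already been saved under SAVE_REPORT_DIR as above):
<?php
$zipPath = SAVE_REPORT_DIR . '.zip';
$zip = new ZipArchive();
if ($zip->open($zipPath, ZipArchive::CREATE) === true) {
    // Add every saved PDF, stored without the directory prefix
    foreach (glob(SAVE_REPORT_DIR . '/*.pdf') as $pdf) {
        $zip->addFile($pdf, basename($pdf));
    }
    $zip->close();
    // Stream the archive to the browser
    header('Content-Type: application/zip');
    header('Content-Disposition: attachment; filename="reports.zip"');
    header('Content-Length: ' . filesize($zipPath));
    readfile($zipPath);
}
?>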
Please give me a direction to accomplish this.
Thanks
As @DalmTo has said, the API is not going to let you download multiple files in bulk as a zip. What you can do is create a folder inside Drive and download that folder as a zip.
There is a lot more information in this answer by @Tanaike:
Download Folder as Zip Google Drive API

Correct PHP way to check if external image exists?

I know that there are at least 10 identical questions with answers, but none of them seems to work for me flawlessly. I'm trying to check whether an internal or external image exists (i.e. is the image URL valid?).
fopen($url, 'r') fails unless I use @fopen():
Warning: fopen(http://example.com/img.jpg) [function.fopen]: failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found in file.php on line 21
getimagesize($img) fails when the image doesn't exist (PHP 5.3.8):
Warning: getimagesize() [function.getimagesize]: php_network_getaddresses: getaddrinfo failed
cURL fails because it isn't supported by some servers (although it's present almost everywhere).
file_exists() fails because it doesn't work with external URLs and can't possibly check whether we're dealing with an image.
These four methods are the most common answers to such questions, and all of them are wrong. What would be the correct way to do this?
"getimagesize($img) fails when image doesn't exist": I am not sure you understand what you want here.
FROM THE PHP DOC
The getimagesize() function will determine the size of any given image file and return the dimensions along with the file type and a height/width text string to be used inside a normal HTML IMG tag, and the corresponding HTTP content type.
On failure, FALSE is returned.
Example
$img = array("http://i.stack.imgur.com/52Ha1.png", "http://example.com/img.jpg");
foreach ($img as $v) {
    echo $v, getimagesize($v) ? " = OK \n" : " = Not valid \n";
}
Output
http://i.stack.imgur.com/52Ha1.png = OK
http://example.com/img.jpg = Not valid
getimagesize works just fine
PHP 5.3.19
PHP 5.4.9
Edit
@Paul: but your question is essentially saying "How do I handle this so I won't get an error when there's an error condition?". And the answer to that is "you can't". All these functions trigger an error when there is an error condition, so (if you don't want the error) you suppress it. None of this should matter in production, because you shouldn't be displaying errors anyway ;-) – DaveRandom
This code is actually for checking any file, but it does work for images!
$url = "http://www.myfico.com/Images/sample_overlay.gif";
$header_response = get_headers($url, 1);
if (strpos($header_response[0], "404") !== false) {
    // FILE DOES NOT EXIST
} else {
    // FILE EXISTS!!
}
function checkExternalFile($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true); // HEAD-style request, no body
    curl_exec($ch);
    $retCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $retCode;
}
$fileExists = checkExternalFile("http://example.com/your/url/here.jpg");
// $fileExists >= 400 = not found
// $fileExists == 200 = found
If you're using PHP >=5.0.0 you can pass an additional parameter into fopen to specify context options for HTTP, among them whether to ignore failure status codes.
$contextOptions = array( 'http' => array('ignore_errors' => true));
$context = stream_context_create($contextOptions);
$handle = fopen($url, 'r', false, $context);
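With ignore_errors set, fopen() succeeds even on a 404, and the status can then be read from the $http_response_header variable that PHP's HTTP wrapper fills in. A minimal sketch (the image URL is a placeholder):
$contextOptions = array('http' => array('method' => 'HEAD', 'ignore_errors' => true));
$context = stream_context_create($contextOptions);
$handle = @fopen('http://example.com/img.jpg', 'r', false, $context);
if ($handle !== false) {
    $status = 0;
    foreach ($http_response_header as $line) {
        // Keep the last status line, in case redirects were followed
        if (preg_match('#^HTTP/\S+\s+(\d{3})#', $line, $m)) {
            $status = (int) $m[1];
        }
    }
    echo ($status === 200) ? 'image exists' : 'image missing';
    fclose($handle);
}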
Use fsockopen, connect to the server, send a HEAD request and see what status you get back.
The only time you need to be aware of problems is if the domain doesn't exist.
Example code:
$file = "http://example.com/img.jpg";
$path = parse_url($file);
$fp = @fsockopen($path['host'], isset($path['port']) ? $path['port'] : 80);
if (!$fp) {
    echo "Failed to connect... Either server is down or host doesn't exist.";
} else {
    fputs($fp, "HEAD " . $file . " HTTP/1.0\r\n"
             . "Host: " . $path['host'] . "\r\n\r\n");
    $firstline = fgets($fp);
    list(, $status, $statustext) = explode(" ", $firstline, 3);
    if ($status == 200) echo "OK!";
    else echo "Status " . $status . " " . $statustext . "...";
}
You can use the PEAR HTTP_Request2 package for this. You can find it here.
Here is an example. The example expects that you have installed or downloaded the HTTP_Request2 package properly. It uses the old-style socket adapter, not cURL.
<?php
require_once 'HTTP/Request2.php';
require_once 'HTTP/Request2/Adapter/Socket.php';

$request = new HTTP_Request2(
    $your_url,
    HTTP_Request2::METHOD_GET,
    array('adapter' => new HTTP_Request2_Adapter_Socket())
);

switch ($request->send()->getStatus()) {
    case 404:
        echo 'not found';
        break;
    case 200:
        echo 'found';
        break;
    default:
        echo 'needs further attention';
}
I found try/catch to be the best solution for this. It is working fine for me.
try {
    list($width, $height) = getimagesize($h_image->image_url);
} catch (Exception $e) {
}
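One caveat: getimagesize() emits a warning rather than throwing, so the catch block above only fires if warnings have been converted to exceptions, for example with an error handler like this sketch:
// Convert PHP warnings/notices into exceptions so try/catch can see them
set_error_handler(function ($severity, $message, $file, $line) {
    throw new ErrorException($message, 0, $severity, $file, $line);
});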
I know you wrote "without cURL", but still, somebody may find this helpful:
function curl_head($url) {
    $ch = curl_init($url);
    //curl_setopt($ch, CURLOPT_USERAGENT, 'Your user agent');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_HEADER, 1); # get headers
    curl_setopt($ch, CURLOPT_NOBODY, 1); # omit body
    //curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 1); # do SSL check
    //curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2); # verify domain within cert
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); # follow "Location" redirects
    //curl_setopt($ch, CURLOPT_TIMEOUT_MS, 700); # dies after 700ms
    $result = curl_exec($ch);
    curl_close($ch);
    return $result;
}
print_r(curl_head('https://www.example.com/image.jpg'));
print_r(curl_head('https://www.example.com/image.jpg'));
You will see something like HTTP/1.1 200 OK or HTTP/1.1 404 Not Found in the returned headers. You can also make multiple parallel requests with curl_multi.
There are multiple steps; there is no single solution:
Validate the URL
Check whether the file is available (can be done directly with step 3)
Download the image into a temporary file
Use getimagesize() to check the size of the image
For this kind of work you can catch the exceptions and handle them to work out your answer. In this case you could even suppress errors, because it's expected that the trick might fail; that way you handle the errors deliberately.
It's not possible to do a 100% check without downloading the actual image, so steps 1 and 2 are required, while 3 and 4 are optional for a more definitive answer. A sketch of all four steps follows.
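A sketch of those four steps in one helper (the function name and details are illustrative, not a definitive implementation):
function imageExistsAndValid($url) {
    // Step 1: validate the URL syntactically
    if (filter_var($url, FILTER_VALIDATE_URL) === false) {
        return false;
    }
    // Steps 2 and 3: fetch the file; a failure covers "not available"
    $data = @file_get_contents($url);
    if ($data === false) {
        return false;
    }
    $tmp = tempnam(sys_get_temp_dir(), 'img');
    file_put_contents($tmp, $data);
    // Step 4: confirm the payload really is an image
    $info = @getimagesize($tmp);
    unlink($tmp);
    return $info !== false;
}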

PHP cURL: working script, need a little added

This is my code:
function get_remote_file_to_cache() {
    $sites_array = array("http://www.php.net", "http://www.engadget.com", "http://www.google.se", "http://arstechnica.com", "http://wired.com");
    $the_site = $sites_array[rand(0, 4)];
    $curl = curl_init();
    $fp = fopen("rr.txt", "w");
    curl_setopt($curl, CURLOPT_URL, $the_site);
    curl_setopt($curl, CURLOPT_FILE, $fp);
    curl_exec($curl);
    curl_close($curl);
    fclose($fp); // close the file handle so the contents are flushed
}

$cache_file = 'rr.txt';
$cache_life = 15; // caching time, in seconds
$filemtime = @filemtime($cache_file);
if (!$filemtime or (time() - $filemtime >= $cache_life)) {
    ob_start();
    echo file_get_contents($cache_file);
    ob_get_flush();
    echo " <br><br><h1>Writing to cache</h1>";
    get_remote_file_to_cache();
} else {
    echo "<h1>Reading from cache file:</h1><br> ";
    ob_start();
    echo file_get_contents($cache_file);
    ob_get_flush();
}
Everything works as it should, with no problems or surprises, and as you can see it's pretty simple code. But I am new to cURL and would just like to add one check to the code, and I don't know how:
Is there any way to check that the file fetched from the remote site is not a 404 (Not Found) page or similar, but came back with status code 200 (successful)?
So basically: only write to the cache file if the file has status code 200.
Thanks!
To get the status code from a cURL handle, use curl_getinfo() after curl_exec():
$status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
But the cached file will be overwritten as soon as
$fp = fopen("rr.txt", "w");
is called, regardless of the HTTP code. This means that to update the cache only when the status is 200, you need to read the contents into memory or write to a temporary file, and only write the real file if the status is 200.
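A minimal sketch of that temporary-file approach, reusing the names from the script above (untested):
function get_remote_file_to_cache() {
    $sites_array = array("http://www.php.net", "http://www.engadget.com", "http://www.google.se", "http://arstechnica.com", "http://wired.com");
    $the_site = $sites_array[rand(0, 4)];

    // Download into a temporary file first
    $tmp = tempnam(sys_get_temp_dir(), 'cache');
    $fp = fopen($tmp, "w");
    $curl = curl_init($the_site);
    curl_setopt($curl, CURLOPT_FILE, $fp);
    curl_exec($curl);
    $status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
    curl_close($curl);
    fclose($fp);

    // Replace the cache only on a successful response
    if ($status == 200) {
        rename($tmp, "rr.txt");
    } else {
        unlink($tmp);
    }
}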
It is also a good idea to call
touch('rr.txt');
before executing cURL, so that a request arriving before the current operation finishes will not also try to fetch the page.
Try this after curl_exec
$httpCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);

PHP server to server file request

I have script-1 on server A, where the user asks for a file.
I have script-2 on server B (the file repository), where I check that the user can access it and return the correct file (I'm using Smart File Download, http://www.zubrag.com/scripts/download.php).
I've tried cURL and file_get_contents, and I've changed the Content headers in various ways, but I still wasn't able to download the file.
This is my request:
$request = "http://mysite.com/download.php?f=test.pdf";
and it works fine.
What should I call in script-1 to force the file to be downloaded?
Some of my tries:
This works, but I don't know how to handle unauthorized or broken downloads:
header('Content-type: application/pdf');
$handle = fopen($request, "r");
if ($handle) {
    while (!feof($handle)) {
        $buffer = fgets($handle, 4096);
        echo $buffer;
    }
    fclose($handle);
}
This prints the PDF source (not the rendered document) straight into the browser (I think it's a header problem):
$c = curl_init();
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_URL, $request);
$contents = curl_exec($c);
curl_close($c);
if ($contents) return $contents;
else return FALSE;
This generates a white page:
file_get_contents($request);
To force a download, add
header('Content-Disposition: attachment');
Note that Content-Disposition is no longer part of the core HTTP/1.1 spec (it is now defined separately, in RFC 6266); see the first answer to "Uses of content-disposition in an HTTP response header".
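For example, to also suggest a filename to the browser (test.pdf here is a placeholder):
header('Content-Type: application/pdf');
header('Content-Disposition: attachment; filename="test.pdf"');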
Without your code I don't know what you've tried, but you need to get the contents of the file via cURL and then save it to your server. Something like...
$url = 'http://website.com/file.pdf';
$path = '/tmp/file.pdf';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$contents = curl_exec($ch);
curl_close($ch);
file_put_contents($path, $contents);
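For large files, a variant of the same idea that streams straight to disk instead of buffering the whole file in memory (same placeholder paths):
$url = 'http://website.com/file.pdf';
$path = '/tmp/file.pdf';

$fp = fopen($path, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp); // write the body directly to $fp
curl_exec($ch);
curl_close($ch);
fclose($fp);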
If you want to download files from an FTP server, you can use PHP's File Transfer Protocol (FTP) extension. Please find the code below:
<?php
$SERVER_ADDRESS = "";
$SERVER_USERNAME = "";
$SERVER_PASSWORD = "";
$conn_id = ftp_connect($SERVER_ADDRESS);

// login with username and password
$login_result = ftp_login($conn_id, $SERVER_USERNAME, $SERVER_PASSWORD);

$server_file = "test.pdf"; // FTP server file path
$local_file = "new.pdf";   // local server file path

##----- DOWNLOAD $SERVER_FILE AND SAVE IT TO $LOCAL_FILE --------##
if (ftp_get($conn_id, $local_file, $server_file, FTP_BINARY)) {
    echo "Successfully written to $local_file\n";
} else {
    echo "There was a problem\n";
}
ftp_close($conn_id);
?>
Download the file with cURL, then check http://php.net/function.readfile
It shows how to force a download.
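A rough sketch of that combination, fetching with cURL and then forcing the download with readfile() (the paths and MIME type are assumptions):
$tmpFile = '/tmp/test.pdf';

// Fetch the remote file to a local temporary path
$fp = fopen($tmpFile, 'w');
$ch = curl_init($request);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_exec($ch);
curl_close($ch);
fclose($fp);

// Tell the browser to download rather than display it
header('Content-Type: application/pdf');
header('Content-Disposition: attachment; filename="test.pdf"');
header('Content-Length: ' . filesize($tmpFile));
readfile($tmpFile);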
SOLVED
I ended up simply redirecting the request with:
header("Location: $request");

Are there any open-source PHP web proxies ready to use?

I need a PHP web proxy that reads HTML, shows it to the user, and rewrites all the links, so that when the user clicks the next link the proxy handles the request again. Just like this code, but it should additionally rewrite all the links:
<?php
// Set your return content type
header('Content-type: text/html');
// Website url to open
$daurl = 'http://www.yahoo.com';
// Get that website's content
$handle = fopen($daurl, "r");
// If there is something, read and return
if ($handle) {
while (!feof($handle)) {
$buffer = fgets($handle, 4096);
echo $buffer;
}
fclose($handle);
}
?>
I hope I have explained this well. I'm asking so as not to reinvent the wheel.
An additional question: will this kind of proxy handle content like Flash?
For an open source solution, check out PHProxy. I've used it in the past and it seemed to work quite well from what I can remember.
It will sort of work, but you need to rewrite any relative path to absolute, and I think cookies won't work in this case. Use cURL for these operations...
function curl($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $output = curl_exec($ch);
    curl_close($ch); // the original returned before closing, so this line never ran
    return $output;
}
$url = "http://www.yahoo.com";
echo curl($url);
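As for the rewrite step, a rough sketch that routes href attributes back through a hypothetical proxy.php?url= entry point (a regex is simplistic compared to a real HTML parser, but it shows the idea):
function rewrite_links($html, $baseUrl) {
    return preg_replace_callback(
        '#href="([^"]+)"#i',
        function ($m) use ($baseUrl) {
            $target = $m[1];
            // Make relative URLs absolute against the page's base URL
            if (!preg_match('#^https?://#i', $target)) {
                $target = rtrim($baseUrl, '/') . '/' . ltrim($target, '/');
            }
            // Send the link back through the proxy
            return 'href="proxy.php?url=' . urlencode($target) . '"';
        },
        $html
    );
}

echo rewrite_links(curl("http://www.yahoo.com"), "http://www.yahoo.com");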
