I've got this script for saving album art from Deezer to my server. The album art URL is fine; you can try it yourself. The script does create a file, but it's not the image I expect, just a corrupted file. I'm guessing it has something to do with the 301 redirect you get when you visit the original link from the API, but I don't know how to solve that problem if that's the cause.
<?php
// Deezer
$query = 'https://api.deezer.com/2.0/search?q=madonna';
$file = file_get_contents($query);
$parsedFile = json_decode($file);
$albumart = $parsedFile->data[0]->artist->picture;
$artist = $parsedFile->data[0]->artist->name;
$dir = dirname(__FILE__).'/albumarts/'.$artist.'.jpg';
file_put_contents($dir, $albumart);
?>
Two issues:
1) $albumart contains a URL (in your case http://api.deezer.com/2.0/artist/290/image), not image data. You need to call file_get_contents() on that URL as well.
<?php
// Deezer
$query = 'https://api.deezer.com/2.0/search?q=madonna';
$file = file_get_contents($query);
$parsedFile = json_decode($file);
$albumart = $parsedFile->data[0]->artist->picture;
$artist = $parsedFile->data[0]->artist->name;
$dir = dirname(__FILE__).'/albumarts/'.$artist.'.jpg';
file_put_contents($dir, file_get_contents($albumart)); // << Changed this line
?>
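One thing to watch in this version: $artist goes straight into the file path, so an artist name containing a slash or other special characters would break the write. A quick hardening step (illustrative, not part of the original answer):
$safeArtist = preg_replace('/[^A-Za-z0-9 _-]/', '_', $artist);
$dir = dirname(__FILE__).'/albumarts/'.$safeArtist.'.jpg';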
2) The redirect may be a problem (as you suggest). To get around that, use curl functions.
// Get file using curl.
// NOTE: you can add other options, read the manual
$ch = curl_init($albumart);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow the redirect to the actual image
$data = curl_exec($ch);
curl_close($ch);
// Save output
file_put_contents($dir, $data);
Note: you should use the curl functions for fetching content from external URLs as a matter of principle. It's safer and gives you better control. Some hosts also block access to external URLs through file_get_contents anyway.
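For completeness, here is a minimal sketch of that download with basic error checking added (the error handling is illustrative, not part of the original answer):
$ch = curl_init($albumart);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$data = curl_exec($ch);
if ($data === false) {
    // curl_error() describes the failure (DNS error, timeout, ...)
    die('Download failed: ' . curl_error($ch));
}
curl_close($ch);
file_put_contents($dir, $data);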
Why not get the headers for the file? The headers contain the redirect.
$headerdata = get_headers($albumart);
echo($headerdata[4]); // show the redirect (for testing)
$actualloc = str_replace("Location: ", "", $headerdata[4]); // strip the 'Location: ' prefix, leaving the real URL
file_put_contents($dir, file_get_contents($actualloc)); // fetch the image from the real URL and save it
I think it's the 4th record in the headers; if not, check it with a print_r($headerdata);.
This will give you the proper URL of the image file.
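Rather than guessing the index, you can pass a second argument to get_headers() so it returns an associative array keyed by header name (a sketch; it assumes a single redirect, since with multiple redirects the 'Location' entry becomes an array):
$headerdata = get_headers($albumart, 1); // 1 = associative array
$actualloc = $headerdata['Location'];    // the redirect target, no string surgery needed
file_put_contents($dir, file_get_contents($actualloc));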
I am uploading files that I need to attach to an email via the Office 365 API.
What I need is the content of each file in a variable WITHOUT storing/saving the file. How can I do that?
foreach ($request->filesToUpload as $file) {
    $originalName = $file->getClientOriginalName(); // working
    $content = $file->getContent(); // <- I need this, but not working
    //$this->addAttachment($content, $originalName) // logic for later
}
Access the contents of the file like so:
$content = file_get_contents(Input::file('nameAttribute')->getRealPath());
or, inside your loop:
$content = file_get_contents($file->getRealPath());
Get the real path with the object's methods and you can interact with it like any other file.
Waqas Bukhary,
You get that result because getContent() is not a method of UploadedFile; check the documentation.
But you have the path, so you can always read the content, like:
$path = $file->path();
$handle = fopen($path, 'r');
$content = fread($handle, filesize($path));
fclose($handle);
You can also use the request's file() method if you know the name of the file field; check the documentation.
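For instance (a sketch; it assumes the field is named filesToUpload as in the question and holds a single file rather than an array):
$uploaded = $request->file('filesToUpload');
$content = file_get_contents($uploaded->getRealPath());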
I am making a web application that grabs a page from the internet using cURL and updates another page accordingly. I have been doing this by saving the HTML from cURL and then parsing it on the other page. The issue: I can't figure out what permissions I should use for the text file. I don't keep it in my public HTML folder, since I don't want any of the website's users to be able to see it; I only want them to see the parsed result on the site.
Here is the cUrl code:
$perfidlist = "" ;
$sourcefile = "../templates/textfilefromsite.txt";
$trackerfile = "../templates/trackerfile.txt";
//CURL REQUEST 1 OF 2
$ch = curl_init("http://www.website.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
<more cUrl options omitted>
ob_start();
$curl2 = curl_exec($ch);
ob_end_clean();
curl_close($ch);
//WRITING FILES
$out = fopen($sourcefile, "r");
$oldfiletext = fread($out, filesize($sourcefile));
fclose($out);
$runcode = 1 ;
And the part where I save the text file:
/* only write the file if the site has changed */
if (strcmp($oldfiletext, $curl2) !== 0)
{
    $out = fopen($sourcefile, "w");
    fwrite($out, $curl2);
    fclose($out);
    $tracker = fopen($trackerfile, "a+");
    fwrite($tracker, date('Y/m/d H:i:s')."\n");
    fclose($tracker);
    $runcode = 1;
}
I am receiving an error at that last '$out = fopen($sourcefile, "w");' line that says:
Warning: fopen(../templates/textfilefromsite.txt): failed to open stream: Permission denied in /usr/share/nginx/templates/homedir.php on line 72
Any ideas?
The issue was with file/folder permissions. I ended up changing the file's permissions to '666', meaning '-rw-rw-rw-', and it worked.
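For reference, a minimal sketch of checking and loosening the permissions from PHP itself (0666 is the octal form of 666; giving ownership of the folder to the web server user instead would be the stricter fix):
// Make the file world-writable if the web server user can't write to it yet
if (file_exists($sourcefile) && !is_writable($sourcefile)) {
    chmod($sourcefile, 0666); // -rw-rw-rw-
}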
Hi, I want to download some 250 files from a URL, and they are in a sequence. I am almost done with it! The only problem is the structure of my URL:
http://lee.kias.re.kr/~newton/sann/out/201409//SEQUENCE1.prsa
The id is in sequence, but the file name "SEQUENCE1.prsa" follows the pattern "SEQUENCE?.prsa".
Is there any way I can specify this file name pattern in my code? Also, there are other files in the folder, but only one with the ".prsa" extension.
Code:
<?php
// Source URL pattern
//$sourceURLOriginal = "http://www.somewebsite.com/document{x}.pdf";
$sourceURLOriginal = " http://lee.kias.re.kr/~newton/sann/out/201409/{x}/SEQUENCE?.prsa";
// Destination folder
$destinationFolder = "C:\\Users\\hp\\Downloads\\SOP\\ppi\\RSAdata";
// Destination file name pattern
$destinationFileNameOriginal = "doc{x}.txt";
// Start number
$start = 7043;
// End number
$end = 7045;
$n = 1;
// From start to end
for ($i = $start; $i <= $end; $i++) {
    // Replace source URL parameter with number
    $sourceURL = str_replace("{x}", $i, $sourceURLOriginal);
    // Destination file name
    $destinationFile = $destinationFolder . "\\" .
        str_replace("{x}", $i, $destinationFileNameOriginal);
    // Read from URL, write to file
    file_put_contents($destinationFile,
        file_get_contents($sourceURL)
    );
    // Output progress
    echo "File #$i complete\n";
}
?>
It works if I specify the URL directly!
Error:
Warning: file_get_contents( http://lee.kias.re.kr/~newton/sann/out/201409/7043/SEQUENCE?.prsa): failed to open stream: Invalid argument in C:\xampp\htdocs\SOP\download.php on line 37
File #7043 complete
It's making the files, but they are empty!
A way to download each whole folder (named with the id in sequence) would also work for me, but how do you download a whole folder into a folder?
It may be that the file_get_contents() function is not working on your server.
Try this code:
function url_get_contents($Url) {
    if (!function_exists('curl_init')) {
        die('CURL is not installed!');
    }
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $Url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $output = curl_exec($ch);
    curl_close($ch);
    return $output;
}
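Dropped into the loop from the question in place of file_get_contents(), that would look like (a sketch, reusing the question's variables):
file_put_contents($destinationFile, url_get_contents($sourceURL));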
Here you go.
I didn't test the whole file_get_contents/file_put_contents part, but if you say it's adding the files (albeit blank), then I assume it still works here...
Everything else works fine. I left a var_dump() in so you can see what the return looks like.
I did what I suggested in my comment: open the folder, parse the file list, grab the filename you need.
Also, I don't know if you read my original comments, but $sourceURLOriginal has an extra space at the beginning, which might have been giving you an issue.
<?php
$start = 7043;
$end = 7045;
$sourceURLOriginal = "http://lee.kias.re.kr/~newton/sann/out/201409/";
$destinationFolder = 'C:\Users\hp\Downloads\SOP\ppi\RSAdata';
for ($i = $start; $i <= $end; $i++) {
    // Fetch the directory listing for this id
    $contents = file_get_contents($sourceURLOriginal.$i);
    // Pull every href out of the listing
    preg_match_all("|href=[\"'](.*?)[\"']|", $contents, $hrefs);
    $file_list = array();
    if (empty($hrefs[1])) continue;
    // Drop the first five links (sort/parent-directory links in the index page)
    unset($hrefs[1][0], $hrefs[1][1], $hrefs[1][2], $hrefs[1][3], $hrefs[1][4]);
    $file_list = array_values($hrefs[1]);
    var_dump($file_list);
    // Find the one file with 'prsa' in its name
    foreach ($file_list as $index => $file) {
        if (strpos($file, 'prsa') !== false) {
            $needed_file = $index;
            break;
        }
    }
    file_put_contents($destinationFolder.'\doc'.$i.'.txt',
        file_get_contents($sourceURLOriginal.$i.'/'.$file_list[$needed_file])
    );
}
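The unset() of the first five entries is tied to the exact layout of that server's index page; a slightly more defensive variant (a sketch under the same assumptions) matches on the extension instead:
foreach ($hrefs[1] as $file) {
    // Take the first link that ends in .prsa, wherever it sits in the listing
    if (substr($file, -5) === '.prsa') {
        file_put_contents($destinationFolder.'\doc'.$i.'.txt',
            file_get_contents($sourceURLOriginal.$i.'/'.$file)
        );
        break;
    }
}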
I'm trying to obtain data from my Adobe Media Server. For instance, when I navigate with my browser to this URL:
http://misite.com:1111/admin/getLiveStreamStats?auser=myuname&apswd=mypwd&appInst=live&stream=srd
(misite.com stands in for localhost), I get the contents shown in my browser. Now I'm trying to get those contents inside my PHP file:
$url = 'http://misite.com:1111/admin/ping?auser=myuname&apswd=mypwd';
$contents = file_get_contents($url);
echo $contents;
//OR:
print($contents);
But this gives me only a blank page. I've checked the page source and it comes back empty. What should I do? Any suggestions?
$url = 'http://misite.com:1111/admin/ping?auser=myuname&apswd=mypwd';
$contents = file_get_contents($url);
echo $contents;
This has no file to refer to; notice that ping is not ping.php. Compare it with a URL that points at an actual script:
$url = 'http://example.com/somephpscript.php?auser=myuname&apswd=mypwd';
$contents = file_get_contents($url);
echo $contents;
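If the URL really is correct, a blank page usually means the request is failing silently. A minimal debugging sketch (illustrative, not part of the original answer) makes the failure visible:
// Surface the warnings that a blank page would otherwise hide
error_reporting(E_ALL);
ini_set('display_errors', '1');
$contents = file_get_contents($url);
if ($contents === false) {
    var_dump(error_get_last()); // shows why the stream failed to open
}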
Hi, I want to save the source code of http://stats.pingdom.com/w984f0uw0rey to a directory on my website:
<?php
if (!copy("http://stats.pingdom.com/w984f0uw0rey", "stats.html")) {
    echo("failed to copy file");
}
?>
but this does not work either for me:
<?php
$homepage = file_get_contents('http://stats.pingdom.com/w984f0uw0rey');
echo $homepage;
?>
But I cannot figure out how to do it. Thanks!
Use:
<?php
file_put_contents('w984f0uw0rey.html', file_get_contents('http://stats.pingdom.com/w984f0uw0rey'));
?>
Be sure that the script has write privileges to the current directory.
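A quick way to verify that from the script itself (a sketch):
if (!is_writable(getcwd())) {
    die('No write permission in ' . getcwd());
}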
Use file_get_contents().
The best variant in PHP is to use stream_copy_to_stream:
$url = 'http://stats.pingdom.com/w984f0uw0rey';
$file = "/downloads/stats.html";
$src = fopen($url, 'r');   // read from the remote URL
$dest = fopen($file, 'w'); // write to the local file
echo stream_copy_to_stream($src, $dest) . " bytes copied.\n";
If you need to add HTTP options like headers, use context options with the fopen call, as shown below. It's likely you need to set a user agent and similar headers so that the other website's server believes you're a browser.
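A minimal sketch of that (the User-Agent string is illustrative; any browser-like value works):
$context = stream_context_create(array(
    'http' => array(
        'header' => "User-Agent: Mozilla/5.0 (compatible; MyFetcher/1.0)\r\n",
    ),
));
$src = fopen($url, 'r', false, $context);  // the context applies to the HTTP request
$dest = fopen($file, 'w');
echo stream_copy_to_stream($src, $dest) . " bytes copied.\n";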