I am using file_get_contents in PHP to get information from a client's collections on CONTENTdm. CDM has an API, so you can get that info by making queries like, say:
http://servername:port/webutilities/index.php?q=function/arguments
It has worked pretty well thus far, across computers and operating systems. However, this time things work a little differently.
http://servername/utils/collection/mycollectionname/id/myid/filename/myname
For this query I fill in mycollectionname, myid, and myname with relevant values. myid and mycollectionname have to exist in the system, obviously. However, myname can be anything you want. When you run the query, it doesn't return a web page or anything to your browser; it just automatically downloads a file with myname as its name and puts it in your local /Downloads folder.
I DON'T WISH TO DOWNLOAD THIS FILE. I just want to read the contents of the file it returns directly into PHP as a string. The file I am trying to get just contains xml data.
file_get_contents works to get the data in that file when I use it with PHP 7 and Apache on my laptop running Ubuntu. But on my desktop, which runs Windows 10 and XAMPP (Apache and PHP 5), I get this error (I've replaced sensitive data with ###):
Warning:
file_get_contents(###/utils/collection/###/id/1110/filename/1111.cpd):
failed to open stream: No such file or directory in
D:\Titus\Documents\GitHub\NativeAmericanSCArchive\NASCA-site\api\update.php
on line 18
My coworkers have been unable to help me, so I'm curious whether anyone here can confirm or deny that this is an operating-system issue or a PHP-version issue, and whether there's a solid alternative method that is likely to work in PHP 5 on both Windows and Ubuntu.
file_get_contents() is a simple screwdriver. It's fine for simple GET requests where the headers, HTTP request method, timeout, cookie jar, redirects, and other important things do not matter.
fopen() with a stream context, or cURL with setopt, are power drills with every bit and option you can think of.
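To illustrate the stream-context side of that comparison, here is a minimal sketch of file_get_contents() with a context that sets a timeout and a header. The URL follows the pattern from the question; the timeout and Accept header values are assumptions you would adjust:

$context = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 10,                           // seconds before giving up
        'header'  => "Accept: application/xml\r\n",
    ),
));

$xml = file_get_contents('http://servername/utils/collection/mycollectionname/id/myid/filename/myname', false, $context);
if ($xml === false) {
    // Request failed (DNS, timeout, HTTP error, allow_url_fopen disabled, ...)
    echo 'Request failed';
}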
In addition to this, due to some recent website hacks, we had to secure our sites more. In doing so, we discovered that file_get_contents stopped working, while cURL still worked.
I'm not 100% sure, but I believe this php.ini setting may have been blocking the file_get_contents requests:
; Disable allow_url_fopen for security reasons
allow_url_fopen = 0
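If you want to confirm that at runtime rather than guessing, ini_get() will tell you whether URL wrappers are enabled before you rely on file_get_contents(). A small sketch, assuming $url already holds the address you want to fetch:

// Sketch: check whether URL wrappers are enabled, fall back to cURL otherwise.
if (filter_var(ini_get('allow_url_fopen'), FILTER_VALIDATE_BOOLEAN)) {
    $data = file_get_contents($url);
} else {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    $data = curl_exec($ch);
    curl_close($ch);
}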
Either way, our code now works with curl.
References:
http://25labs.com/alternative-for-file_get_contents-using-curl/
http://phpsec.org/projects/phpsecinfo/tests/allow_url_fopen.html
So, you can solve this problem by using PHP's cURL extension. Here is an example that does the same thing you were trying:
function curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response as a string instead of printing it
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$url = 'your_api_url';
$data = curl($url);
And finally you can check your data with print_r($data). Hope it works for you.
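If $data comes back as false, curl_error() will tell you why. A sketch of the same helper with basic error reporting added (the function name curl_with_errors is just an illustrative choice):

function curl_with_errors($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($ch);
    if ($data === false) {
        // curl_error() explains why the transfer failed (DNS, timeout, SSL, ...)
        echo 'cURL error: ' . curl_error($ch);
    }
    curl_close($ch);
    return $data;
}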
Reference: http://php.net/manual/en/book.curl.php
Related
This only happens on my web server, not on my local system. I have a cURL request like this:
ini_set('display_errors', 1);
error_reporting(E_ALL);
$url = 'http://***.***.***.***:8080/api_v1/oauth/token';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_URL,$url);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
$response = curl_exec($ch);
This makes the page load for a while and then it just returns a white screen. I can't get it to show errors, output, or anything else.
Whenever I change the URL to another URL (existing or not), I get proper errors, or output if the URL makes sense, as long as the URL does not contain any dots or colons...
Is there some restriction on usage, or a cURL option I am missing?
I have no control over the target URL; I need to consume the API in the ip:port form.
UPDATE
The problem is not related to the target URL or the data coming back: the same problem occurs when I enter a URL that makes no sense at all, as long as it doesn't contain . or :
I guess it is a setting on the web server, since all my tests work fine on localhost (MAMP).
Unfortunately I have no access to any logs or files except the ones I upload myself (one.com web hosting).
UPDATE 2
Turns out my hosting provider is blocking all outgoing traffic to literal IP addresses and to ports other than 80 and 443.
Cancelled my subscription and am moving to a decent provider now.
Thanks for the help.
As @Quasimodo suggests, I'd take a look in the log file if I were you. If you're on an Ubuntu server running Apache, look at /var/log/apache2/error.log. A neat trick is to open a terminal and run:
tail -f /var/log/apache2/error.log
This keeps a running stream of the log in the terminal. You can then trigger the failing cURL request (in your browser), go back to the terminal, and see what new and juicy errors you have received.
It's most likely some configuration on your server, so it would be helpful if you posted a couple of specs from that server, such as:
- Which web server you're using (Apache, Nginx, other)
- PHP version
... You can find all of this information easily using phpinfo().
My best guess is that you need to enable php_curl in your server configuration, but that is a buck-wild cowboy shot from the hip.
Addition 1
I can see that you've just edited the question (it thinks for a while and then gives a blank screen). I'd say your cURL request might be trying to load a large amount of data, and your PHP configuration has a cap at 128 MB (or something).
I'd check phpinfo() for these two values:
max_input_vars
memory_limit
to see if either of them is suspiciously low.
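You can also read both values directly without digging through the full phpinfo() output; a quick sketch:

// Quick check of the two limits mentioned above.
echo 'max_input_vars: ' . ini_get('max_input_vars') . "\n";
echo 'memory_limit: '   . ini_get('memory_limit') . "\n";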
I was curious to know whether there is a way to download a file from SERVER A and put it on SERVER B, where SERVER A has the ability to dynamically change what's in the downloaded file.
The point is that I'm trying to build an error handler for a tool, to be used when a file that is a needed part of the tool goes missing. It would be like WordPress realizing there is a file missing on your site, and your site sending a request to wordpress.com to get the missing files, like this:
(SERVER B): PHP spits out error on include
(SERVER B): PHP tries to get a file installer for the missing files from SERVER A by saying SERVER B is missing FILE A, FILE B, FILE C, etc...
For the step above I was thinking it could be done using this:
file_put_contents("missing_installer.php", fopen("http://SERVER_A.com/mi_inst_installer.php?query-asking-for-missing-item(s)=missing-item", 'r'));
NOTE (only if you don't understand what the above code does): the above code is supposed to tell SERVER A's PHP file, mi_inst_installer.php, to spit out data (the installer) and put it into the file missing_installer.php on SERVER B.
(SERVER B): PHP installs missing files using the newly obtained missing_installer.php
Any ideas on what to do?
You can also use cURL to handle the dynamic output on SERVER A's side:
function curl($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response as a string
    $return = curl_exec($ch);
    curl_close($ch);
    return $return;
}

file_put_contents('missing_installer.php', curl('http://SERVER_A.com/mi_inst_installer.php?query-asking-for-missing-item(s)=missing-item'));
Jacky's answer is good only if allow_url_fopen is set to 1.
Also use PHP's reference for cURL transfer options (it lets you customize how the request is sent and/or returned), and it's a good idea to get used to how cURL works generally; see the other PHP reference (Client URL Library).
Try something like this:
$mycontent = file_get_contents('http://SERVER_A.com/mi_inst_installer.php?query-asking-for-missing-item(s)=missing-item');
file_put_contents('missing_installer.php', $mycontent);
You need to get the contents of the remote file first (using file_get_contents()) and then pass them as the second parameter of file_put_contents().
According to the description of the Google Custom Search API, you can invoke it using the GET verb of the REST interface, as in this example:
GET https://www.googleapis.com/customsearch/v1?key=INSERT-YOUR-KEY&cx=017576662512468239146:omuauf_lfve&q=lectures
I set up my API key and custom search engine, and when I pasted my test query directly into my browser it worked fine, and I got the JSON displayed to me.
Then I tried to invoke the API from my PHP code by using:
$json = file_get_contents("$url") or die("failed");
where $url was the same one that worked in the browser, but my PHP code died when trying to open it.
After that I tried with curl, and it worked. The code was this:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$body = curl_exec($ch);
Questions:
How come file_get_contents() didn't work and curl did?
Could I use fsockopen for this as well?
Question 1:
First, you should check the ini setting allow_url_fopen; AFAIK this is the only reason why file_get_contents() wouldn't work. The deprecated safe_mode may also cause this.
Also, based on your comment: you have to add http:// to the URL when using it with filesystem functions. It's a wrapper that tells PHP you want to make an HTTP request; without it, the function thinks you want to open ./google.com (just as it would google.txt).
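In other words, the scheme is what makes the difference (a contrived illustration, assuming allow_url_fopen is on):

// Without the scheme, PHP treats the argument as a local path:
$a = file_get_contents('google.com');        // tries to open a local file called "google.com"
// With the http:// wrapper, PHP performs an HTTP request:
$b = file_get_contents('http://google.com'); // fetches the page over HTTP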
Question 2:
Yes, you can build almost any cURL request with sockets.
My personal opinion is that you should stick with cURL because:
timeout settings
handles all possible HTTP states
easy and detailed configuration (there is no need for detailed knowledge of HTTP headers)
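For illustration, here is a minimal sketch of a few of those options (timeouts, redirects, status inspection); the key and cx values are placeholders, just as in the question:

$ch = curl_init('https://www.googleapis.com/customsearch/v1?key=INSERT-YOUR-KEY&cx=YOUR-CX&q=lectures');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body instead of printing it
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);     // give up connecting after 5 seconds
curl_setopt($ch, CURLOPT_TIMEOUT, 15);           // abort the whole transfer after 15 seconds
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects automatically
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE); // check the HTTP status code
curl_close($ch);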
file_get_contents will probably rewrite your request after resolving the IP, ending up with the equivalent of:
file_get_contents("xxx.yyy.www.zzz/app1",...)
Many servers will deny you access if you address them by IP in the request.
With cURL this problem doesn't exist: it resolves the hostname but leaves the request as you set it, so the server doesn't respond rudely.
This could be the "cause", too.
1) Why are you using the quotes when calling file_get_contents?
2) As was mentioned in the comments, file_get_contents requires allow_url_fopen to be enabled in your php.ini.
3) You could use fsockopen, but you would have to handle HTTP requests/responses manually, which would be to reinvent the wheel when you have cURL. The same goes for socket_create.
4) Regarding the title of this question: cURL is more customizable and better suited to complex HTTP transactions than file_get_contents. That said, working with stream contexts lets you configure quite a lot for your file_get_contents calls. I still think cURL is more complete, since it gives you, for instance, the possibility of working with multiple parallel handles.
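For completeness, the parallel handles mentioned in (4) look roughly like this with curl_multi. This is only a sketch; the two URLs are placeholders:

// Sketch: fetch two URLs in parallel with curl_multi.
$urls = array('http://example.com/a', 'http://example.com/b'); // placeholder URLs
$mh = curl_multi_init();
$handles = array();

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all handles until every transfer has finished.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);

foreach ($handles as $ch) {
    $body = curl_multi_getcontent($ch); // response body for this handle
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);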
I have a script that pulls URLs from the database and downloads them (pdf or jpg) to a local file.
Code is:
$cp = curl_init($remote_url);
$fp = fopen($dest_temp, "w");
curl_setopt($cp, CURLOPT_FILE, $fp);   // write the response straight into the local file
#curl_setopt($cp, CURLOPT_HEADER, TRUE);
curl_exec($cp);
curl_close($cp);
fclose($fp);
If the remote file is there, it works fine. If the remote file is not there, it just bombs and the browser hangs forever.
What's the best approach to handling this? Should I somehow ping for the file first, or can I set options above that will handle this? I tried setting timeouts but it had no effect.
This is my first experience using cURL.
I used to use wget much as you're using cURL, and got frustrated by the inability to know what was going on, because it's essentially calling out to an external program.
I use Perl's WWW::Mechanize, and the link below is a PHP version, which might be a bit more robust for dealing with cases like this.
http://www.compasswebpublisher.com/php/www-mechanize-for-php
Hope this helps.
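If you'd rather stay with plain cURL, one way to avoid the hang on a missing remote file is to set explicit timeouts and check the HTTP status after the transfer. This is only a sketch; the URL and paths are placeholders for your own values:

// Sketch: download to a local file, but bail out cleanly if the remote file is missing.
$remote_url = 'http://example.com/file.pdf'; // placeholder
$dest_temp  = '/tmp/file.pdf';               // placeholder

$fp = fopen($dest_temp, 'w');
$cp = curl_init($remote_url);
curl_setopt($cp, CURLOPT_FILE, $fp);
curl_setopt($cp, CURLOPT_FAILONERROR, true);  // treat HTTP status >= 400 as a failure
curl_setopt($cp, CURLOPT_CONNECTTIMEOUT, 5);  // don't hang on unreachable hosts
curl_setopt($cp, CURLOPT_TIMEOUT, 30);        // cap the whole transfer
$ok     = curl_exec($cp);
$status = curl_getinfo($cp, CURLINFO_HTTP_CODE);
curl_close($cp);
fclose($fp);

if (!$ok || $status >= 400) {
    unlink($dest_temp); // remove the empty/partial file and handle the "not found" case
}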
We are using shared hosting and the following features are disabled:
file_uploads = Off
allow_url_fopen = Off
allow_url_include = Off
We are unable to change hosting and need to figure out some workarounds. The hosting company is also not able or willing to enable these features.
For example:
We are calling one server from another in order to get content. So we do an include, but since URL file includes are disabled, we are not sure what options we have to get the content onto the second server and store it there using some kind of cache.
We fully control the content server (dedicated), so we can do whatever is necessary there; we are just not sure if there is an easy solution to the problem.
Since you're looking to retrieve remote content, the easiest way will be to write the fetch functionality yourself with something like cURL (php.net/curl).
Have you tried something like this:
http://www.humanumbrella.com/2007/12/08/how-to-download-a-remote-file-in-php-and-then-save-it/
It depends on how locked down the server is. The given examples (using curl functions or fsockopen) should not be hampered by the restrictions you mentioned.
You can solve your problem like this:
a) Create a mechanism on the dedicated server to fetch any file (plus some kind of key-based authentication and restrictions on the paths files can be fetched from); a sketch of such an endpoint follows the example below.
e.g. a URL such as get_file?path=/path/to/file&key=security_key
b) Write a function to fetch this as if it were a local file:
function fetch_file($path) {
    $ch = curl_init("http://www.example.com/get_file?path=$path&key=security_key");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the file contents as a string
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
    $output = curl_exec($ch);
    curl_close($ch);
    return $output;
}
Then you can eval() the returned string, which would be like including the file:
eval(fetch_file($path));
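On the dedicated server, the endpoint described in (a) could look roughly like this. This is only a sketch: the script name get_file.php, the allowed base directory, and the key value are all assumptions you would replace with your own:

// get_file.php on the dedicated server (hypothetical name) -- a sketch only.
$allowed_base = '/var/www/shared'; // assumption: only serve files under this directory
$secret_key   = 'security_key';    // assumption: shared secret matching the fetch_file() URL

if (!isset($_GET['key'], $_GET['path']) || $_GET['key'] !== $secret_key) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// Resolve the requested path and make sure it stays inside the allowed directory.
$real = realpath($allowed_base . '/' . $_GET['path']);
if ($real === false || strpos($real, $allowed_base) !== 0) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

readfile($real); // stream the file contents back to the caller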
Another way to write to the server, if PHP file uploads are disabled, is to FTP the file onto your server and include it.
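If you go the FTP route, PHP's built-in FTP functions cover the upload step. A sketch; the host, credentials, and paths are placeholders:

// Sketch: push a file to the shared host over FTP (all values are placeholders).
$conn = ftp_connect('ftp.example.com');
if ($conn && ftp_login($conn, 'username', 'password')) {
    ftp_pasv($conn, true); // passive mode is often required on shared hosts
    ftp_put($conn, '/public_html/remote_copy.php', 'local_copy.php', FTP_ASCII);
    ftp_close($conn);
}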