Stuff like //google.com does not work with file_get_contents. Is there a solution that does not require adding the protocol to the string?
You can't use file_get_contents with //google.com because what it actually does is treat it as file:///google.com. When you use // in your web browser, the browser substitutes the protocol of the page you're currently on: if you were on https://mywebsite.com and linked to //google.com, what it would actually request is https://google.com. That being said, you need to do file_get_contents('http://google.com');
I think that providing the protocol is necessary when loading a resource with file_get_contents. This function is evaluated on the server, not in the client browser, where the automatic protocol detection with // works.
Why can't you pass the protocol?
If you still need to decide the protocol based on the actual request to the server, you can check the $_SERVER environment variables: if the request came in over HTTPS, use https, otherwise http.
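A minimal sketch of that idea, assuming the script is served by a web server that populates $_SERVER['HTTPS']:
// Use the scheme of the current request, then prepend it to the protocol-relative URL.
$scheme = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
$url = '//google.com';
$html = file_get_contents($scheme . ':' . $url);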
$output = file_get_contents("http://www.canadapost.ca/cpc2/addrm/hh/current/indexa/caONu-e.asp");
var_dump($output);
HTTP 505 Status means the webserver does not support the HTTP version used by the client (in this case, your PHP program).
What version of PHP are you running, and what HTTP/Web package(s) are you using in your PHP program?
[edit...]
Some servers deliberately block some browsers -- your code may "look like" a browser that the server is configured to ignore. I would particularly check the user agent string that your code is passing along to the server.
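For example, a rough sketch of sending a friendlier user agent with file_get_contents (the exact user agent string is just a placeholder):
$context = stream_context_create(array(
    'http' => array(
        'header' => "User-Agent: Mozilla/5.0 (compatible; MyFetcher/1.0)\r\n", // placeholder UA
    ),
));
$output = file_get_contents("http://www.canadapost.ca/cpc2/addrm/hh/current/indexa/caONu-e.asp", false, $context);
var_dump($output);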
Check in your PHP installation (php.ini file) whether allow_url_fopen is enabled.
If not, any calls to file_get_contents will fail.
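A quick way to check it from code (small sketch):
// "1" (or another truthy value) means remote URLs can be opened with the file functions.
var_dump(ini_get('allow_url_fopen'));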
It works fine for me.
That site could be blocking the server that you're using to access it.
When you run the URL from your browser, your own ISP is used to get the information and display in your browser. But when you run from PHP, the ISP of your web host is used to get the information, then it passes it back to you.
Maybe you can do this to check and see what kind of headers it's returning for you:
$headers=get_headers("http://www.canadapost.ca/cpc2/addrm/hh/current/indexa/caONu-e.asp");
print_r($headers);
According to the description of the Google Custom Search API you can invoke it using the GET verb of the REST interface, like with the example:
GET https://www.googleapis.com/customsearch/v1?key=INSERT-YOUR-KEY&cx=017576662512468239146:omuauf_lfve&q=lectures
I set up my API key and custom search engine, and when I pasted my test query directly into my browser it worked fine; I got the JSON file displayed to me.
Then I tried to invoke the API from my PHP code by using:
$json = file_get_contents("$url") or die("failed");
Where $url was the same one that worked in the browser, but my PHP code was dying when trying to open it.
After that I tried with curl, and it worked. The code was this:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$body = curl_exec($ch);
Questions:
How come file_get_contents() didn't work and curl did?
Could I use fsocket for this as well?
Question 1:
First you should check the ini setting allow_url_fopen; AFAIK this is the only reason why file_get_contents() shouldn't work. The deprecated safe_mode may also cause this.
Oh, based on your comment: you have to add http:// to the URL when using it with the file system functions. The scheme is a wrapper that tells PHP you need to make an HTTP request; without it, the function thinks you want to open ./google.com (the same as it would google.txt).
Question 2:
Yes, you can build almost any cURL request with sockets.
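For illustration, a bare-bones GET over a socket might look roughly like this (placeholder host and path; no TLS, redirect, or chunked-encoding handling, and for an https URL such as the Google API you would need ssl:// on port 443 with the OpenSSL extension):
$fp = fsockopen('www.example.com', 80, $errno, $errstr, 10); // placeholder host
if ($fp) {
    fwrite($fp, "GET /path HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n");
    $response = '';
    while (!feof($fp)) {
        $response .= fgets($fp, 1024);
    }
    fclose($fp);
    // $response still includes the raw headers; you have to split them off yourself.
}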
My personal opinion is that you should stick with cURL (see the sketch after this list) because it gives you:
timeout settings
handling of all possible HTTP states
easy, fine-grained configuration (no deep knowledge of HTTP headers required)
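As a rough sketch of that kind of configuration (the timeout values and user agent are just placeholders):
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,           // return the body instead of printing it
    CURLOPT_CONNECTTIMEOUT => 5,              // give up connecting after 5 seconds
    CURLOPT_TIMEOUT        => 10,             // give up on the whole transfer after 10 seconds
    CURLOPT_FOLLOWLOCATION => true,           // follow redirects
    CURLOPT_USERAGENT      => 'MyClient/1.0', // placeholder user agent
));
$body = curl_exec($ch);
if ($body === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);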
file_get_contents will probably rewrite your request after resolving the IP, so you end up with the same thing as:
file_get_contents("xxx.yyy.www.zzz/app1",...)
Many servers will deny you access if you address them by IP in the request.
With cURL this problem doesn't exist. It resolves the hostname but leaves the request as you set it, so the server does not refuse the request.
This could be the "cause", too.
1) Why are you using the quotes when calling file_get_contents?
2) As it was mentioned in the comment, file_get_contents requires allow_url_fopen to be enabled on your php.ini.
3) You could use fsockopen, but you would have to handle the HTTP requests and responses manually, which would be reinventing the wheel when you have cURL. The same goes for socket_create.
4) Regarding the title of this question: cURL is more customizable and better suited to complex HTTP transactions than file_get_contents. It should be mentioned, though, that stream contexts let you apply many settings to your file_get_contents calls. Still, I think cURL is more complete, since it gives you, for instance, the possibility of working with multiple parallel handles, as sketched below.
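For instance, a rough sketch of those parallel handles (the URLs are placeholders):
$urls = array('http://example.com/a', 'http://example.com/b'); // placeholder URLs
$mh = curl_multi_init();
$handles = array();
foreach ($urls as $u) {
    $ch = curl_init($u);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
$running = null;
do {
    curl_multi_exec($mh, $running);   // drive all transfers
    curl_multi_select($mh);           // wait for activity instead of busy-looping
} while ($running > 0);
$bodies = array();
foreach ($handles as $ch) {
    $bodies[] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);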
I have an SSL certificate on my web site. When images on a page are loaded from another site, it triggers warnings along the lines of "the page contains both secure and nonsecure items", so you have to press OK, or you see a "broken" SSL indicator in the browser. One way to avoid those warnings is to serve the page over http instead of https, correct?
But, as far as I know, there is another way to get rid of the warnings using PHP or just JavaScript: I believe the images can be downloaded to a temporary folder on my server and then served as https images.
Could anybody tell me the best way to do that?
Browsing the forum didn't help me much.
Thank you.
So, how do I load
<?php echo '<img src="http://www.not_my_site.com/image.jpg" alt="">'; ?>
with no warnings on my page https://my_site.com/index.php?
You cannot suppress the warning, as it's a browser thing.
The only way would be to wrap those calls using an https call on your site. Something like:
<?php echo '<a href="https://my_site.com/external.php?resource=http://www.not_my_site.com/image.jpg" alt="">'; ?>
You will have to write the external.php script to make the request on the client's behalf, and then return the content over your existing SSL connection. You only NEED to do this for external HTTP-only resources.
The process would work as follows:
1) The end user's web browser makes an HTTPS request to your external.php script.
2) Check for a saved copy of the resource. If you've got it cached, skip to step 6, returning the cached resource.
3) Your server forwards the call on to the HTTP resource specified in the resource parameter.
4) The remote server responds to the request.
5) Save a copy of the resource for caching.
6) Your external.php script then returns that response over the SSL connection.
The web browser only makes 1 request, your web server just has to make an additional one.
This is the only way you'll be able to get rid of the message.
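A very rough sketch of what external.php could look like (no caching shown; the host check is there so the script can't be abused as an open proxy, and the content type is assumed to be JPEG):
<?php
// external.php -- fetch an external HTTP resource and return it over the existing HTTPS connection.
$resource = isset($_GET['resource']) ? $_GET['resource'] : '';
if (strpos($resource, 'http://www.not_my_site.com/') !== 0) {
    header('HTTP/1.1 403 Forbidden');   // only allow the external host you expect
    exit;
}
$data = file_get_contents($resource);   // step 3: forward the call to the HTTP resource
if ($data === false) {
    header('HTTP/1.1 502 Bad Gateway');
    exit;
}
header('Content-Type: image/jpeg');     // assumes a JPEG; detect the real type if needed
echo $data;                             // step 6: return the response over the SSL connection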
It looks even simpler to retrieve the image directly: use cURL to download the remote image file.
It happens because you're making non-secure (HTTP) calls from a secured page (HTTPS).
Try changing your code to:
<?php echo '<img src="https://www.not_my_site.com/image.jpg" alt="">'; ?>
I'm attempting to use cURL inside PHP to grab a page from my own web server. The page is pretty simple and just has some plain text output. However, it returns null. I can successfully retrieve other pages on other domains, and other pages on my own server, with it. I can see the page in the browser just fine, and I can grab it with command-line wget just fine; it's only when I try to grab that one particular page with cURL that it comes up null. We can't use file_get_contents because our host has it disabled.
Why in the world would this different behavior be happening?
Found the issue. I was putting my URL somewhere other than curl_init(), and that place was truncating the query string. Once I moved it back to curl_init(), it worked.
Try setting curl's user agent. Sometimes hosts will block "bots" by blocking things like wget or curl - but usually they do this just by examining the user agent.
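For example (the user agent string here is just a placeholder):
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; MyScript/1.0)'); // placeholder UA
$page = curl_exec($ch);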
You should check the output of curl_error() and also take a look at your logfiles for the http server.
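Something along these lines (the variable names are just for illustration):
$page = curl_exec($ch);
if ($page === false) {
    echo 'cURL error (' . curl_errno($ch) . '): ' . curl_error($ch);
}
echo 'HTTP status: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE);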
Any idea why fopen would time out for a file if it is on my server and I know the URL is correct?
Update: sorry, I should have mentioned this is in PHP.
The code is:
fopen($url, 'r');
It works if I put in a relative path for the file, but not if $url is a URL on my own server (it does work for google.com, though). Thanks for the help.
Alnitak's answer was right. The problem only appears when I access my own server's files through the ethernet interface. How can I fix this? I need to be able to access the file via the ethernet interface because the URL is generated dynamically (it comes from a WordPress CMS, so the URL doesn't technically exist as a file on my server).
you can use
ini_set('default_socket_timeout',2);
before opening $url with fopen. This sets the default socket connection timeout (in seconds) for connections that do not respond.
stream_set_timeout sets the timeout on a stream that has already been established via fopen or the socket-opening functions.
Try this; it may be helpful for you.
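Putting both together, a small sketch (the 2-second values are arbitrary):
ini_set('default_socket_timeout', 2); // limit the time spent waiting for the connection
$fp = fopen($url, 'r');
if ($fp) {
    stream_set_timeout($fp, 2);       // limit the time spent reading from the stream
    $contents = stream_get_contents($fp);
    $meta = stream_get_meta_data($fp);
    if ($meta['timed_out']) {
        echo 'The read timed out.';
    }
    fclose($fp);
}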
It appears that you're trying to download a file from your own server using the HTTP protocol from a program running on that same server?
If so, the timeout problem is likely to be web server or network configuration related. Timeouts normally only happen because either:
the server really is taking a long time to send back the answer, or
the TCP connection is being blocked
For example, it may be that your local firewall rules only permit access to www.example.com if those queries come from the ethernet interface, but a locally made connection would try to go via the loopback interface.
maybe your "allow_url_fopen" is set to "Off"
check your php.ini file or phpinfo()
If you are trying to get the HTML of a URL, I suggest using curl instead of fopen.
fopen is best used with local files, because it does not "know" how to deal with the idiosyncrasies of a network resource.
Check the comments on the documentation of fopen. There's a whole lot of gold in there.
Took me ages to solve this, but here I found it, thanks to Alnitak. Opening the file with localhost in the URL instead of the hostname was what did the trick for me.
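In case it helps anyone else, a sketch of that workaround (assuming a name-based virtual host, so the original host name still has to be sent in the Host header; www.example.com is a placeholder):
$context = stream_context_create(array(
    'http' => array(
        'header' => "Host: www.example.com\r\n", // placeholder: your site's real host name
    ),
));
// Connect over the loopback interface, but still ask for the right virtual host.
$contents = file_get_contents('http://localhost/path/to/page', false, $context);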