I have the following code:
$url = 'http://' . $host . ':' . $port . $params;
$results = file( $url );
I am getting the following error:
file(http://someurl.com/asd/asd): failed to open stream: HTTP request failed! HTTP/1.0 403 Forbidden
But when I access the URL (stored in the $url variable) directly in a browser, it works perfectly fine. Why am I getting this problem when accessing it from PHP?
This error message:
HTTP/1.0 403 Forbidden
...is actually coming directly from the server you're trying to access, so it's not a problem with PHP reaching the outside file.
It's more a matter of convincing the server to give you the file just as it would give it to a web browser.
There are a couple of issues I would check:
Is the file password-protected? A web browser will save the password you enter, and allow you to re-access the URL without entering the password again, but PHP doesn't know what password to use.
Perhaps the server is restricting access from non-browsers by examining the user-agent string?
Perhaps the server is restricting access unless you're referred from another page on the same site?
One thing you could do is use an online HTTP header viewer tool to request the file directly. You can experiment with different headers, user agents, etc., to see if you can reproduce the problem.
If I had to guess, my money would be on the first problem: a required password. There's no way to know without seeing the actual URL, though.
If you do figure out what's blocking your access, read up on the HTTP wrapper, stream contexts, and particularly the HTTP stream context options to find out how to supply the missing information from PHP. There are ways to specify a password, a user agent, a Referer header, or any arbitrary HTTP header you want.
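For example, here is a minimal sketch that supplies all three through a stream context; every value below is a placeholder you would swap for whatever the server actually expects:
$auth = base64_encode('someuser:somepass');  // placeholder credentials
$opts = [
    'http' => [
        // All of these are placeholders: a browser-like user agent,
        // a referring page, and HTTP Basic authentication.
        'user_agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'header'     => "Referer: http://" . $host . "/\r\n" .
                        "Authorization: Basic " . $auth . "\r\n",
    ],
];
$context = stream_context_create($opts);
$results = file($url, 0, $context);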
Related
I am using file_get_contents() to fetch the contents of a page. It was working perfectly, but it suddenly stopped working and started to show the error below:
"Warning: file_get_contents(https://uae.souq.com/ae-en/apple-iphone-x-with-facetime-256gb-4g-lte-silver-24051446/i/): failed to open stream: HTTP request failed! in /home/xxx/xxxx/xxx/index.php on line 6.
So I tried the same code on my local server, and it worked perfectly. Then I tried it on another server, and it worked perfectly there too. So I contacted the hosting provider; they said the problem is with the URL, which may be preventing the access. Then I tried another URL (https://www.w3schools.com/) and it fetched the contents without any error.
Now I am really confused about what the problem is. If the problem were with my server, other URLs shouldn't have worked. And if the problem were with the URL, it shouldn't have worked on the second server or my local server.
Here is the test code:
<?php
$html= file_get_contents("https://uae.souq.com/ae-en/apple-iphone-x-with-facetime-256gb-4g-lte-silver-24051446/i/");
echo $html;
?>
What is the problem here? Even if the problem is with the URL or the server, why was it working perfectly earlier?
It sounds like that site (souq.com) has blocked your server. The block may be temporary or it may be permanent. This may have happened because you made too many requests in a short time, or did something else that looked "suspicious," which triggered a mechanism that prevents misbehaving robots from scraping the site.
You can try again after a while. Another thing you can try is setting the User-Agent request header to impersonate a browser. You can find out how to do that here: PHP file_get_contents() and setting request headers
If your intention is to make a well-behaved robot, you should set the User-Agent header to something that identifies the request as coming from a bot, and follow the rules the site specifies in its robots.txt.
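Either way, here is a minimal sketch of setting that header through a stream context; the User-Agent string shown impersonates a browser, and the comment notes what a bot-identifying value might look like:
// Sketch only: impersonating a browser via the User-Agent header.
// A well-behaved bot would instead send something self-identifying,
// e.g. 'MyNewsBot/1.0 (+https://example.com/bot-info)'.
$context = stream_context_create([
    'http' => [
        'header' => "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\n",
    ],
]);
$html = file_get_contents(
    "https://uae.souq.com/ae-en/apple-iphone-x-with-facetime-256gb-4g-lte-silver-24051446/i/",
    false,
    $context
);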
I was trying to get RSS info from a website the other day, but when I tried to load it using PHP it returned a 403 error.
This was my PHP code:
<?php
$rss = file_get_contents('https://hypixel.net/forums/-/index.rss');
echo $rss;
?>
And the error I got was:
failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden
I should add that loading it normally in a browser works just fine, but when I try loading it using PHP or any other server-side method, it won't work.
Some people don't like servers accessing their stuff. They provide a service intended for human consumers, not bots. Therefore they may include code that checks whether the request really comes from a human using a web browser, a check your naïve PHP script fails. That's why the third party returns a 403 Forbidden error, indicating that your program is forbidden from accessing the resource.
There are ways around this, of course, depending on how the check is implemented. The most obvious is to send a User-Agent header pretending to be a browser. But servers may do cleverer checks than that, and it's morally questionable.
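If you decide to try it anyway, here is a minimal sketch using PHP's cURL extension; the User-Agent value is just an example browser signature:
// Sketch only: fetch the feed while pretending to be a browser.
$ch = curl_init('https://hypixel.net/forums/-/index.rss');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body as a string
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)');
$rss = curl_exec($ch);
curl_close($ch);
echo $rss;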
I am trying to simply use file_get_contents() to get the content of http://www.google.com/search?hl=en&tbm=nws&authuser=0&q=Pakistan with the same code on 2 different servers. One is getting everything fine while the other is getting a 403 error. I can't figure out what exactly the reason is. I used phpinfo() on both servers.
One difference I observe is that one uses Apache 2 while the other uses another HTTP server named LiteSpeed V6.6. But I don't know if or how that affects file_get_contents(). For more detail, you can see their phpinfo() pages linked below.
The phpinfo() of the server where file_get_contents() gets a 403: http://zavahost.com/newsreader/phpinfo.php
The phpinfo() of the server where it works fine: http://162.243.5.14/info.php
I would be thankful if someone could tell me what is affecting file_get_contents(). Please let me know if you have any ideas.
403 is a Forbidden error. That means you lack sufficient permission to access the content on that server. I'm not sure whether this is because the remote server refuses requests from your hosting provider's network, but it could also be denying the request based on header information it has flagged as unauthorized.
Try the answer on this post: php curl: how can i emulate a get request exactly like a web browser? It shows how to curl the same data from the server that is getting the 403.
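A rough sketch of that approach, using header values that are typical browser defaults rather than anything this particular server is known to require:
// Sketch: emulate a browser GET request; all header values are illustrative.
$ch = curl_init('http://www.google.com/search?hl=en&tbm=nws&authuser=0&q=Pakistan');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow redirects like a browser would
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)');
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language: en-US,en;q=0.5',
]);
$html = curl_exec($ch);
if ($html === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);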
Hey guys,
I developed a website on my local Apache setup on my Mac. I'm making two requests to external domains. One goes out to geoplugin.net to get the current geolocation.
This works just fine on my local setup. However, when I transfer the files to my real server, the website prints the following:
Warning: file_get_contents(http://www.geoplugin.net/php.gp?ip=185.43.32.341) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.0 403 Forbidden in /home/.sites/74/site484/web/testsite/wp-content/themes/test/header.php on line 241
What can I do here? What am I doing wrong?
Furthermore, I'm making a cURL request on my website that doesn't retrieve data either. Both work fine on my local MAMP setup.
Any ideas?
The server responds with a "403 Forbidden" status code. So file_get_contents() works fine, but the server you are trying to access (or a proxy or something in between) doesn't allow it.
This can have many reasons. For example (as a comment on the question suggests), you may be banned or blocked because of too many requests.
HTTP/1.0 403 Forbidden
means you are not allowed to access these files! Try adding a user agent header.
You need to create an account at geoplugin.com and subscribe your domain to use the web service without limitation; then you will stop receiving the 403 Forbidden error. Don't worry about costs: it's a free service. I'm using it on three sites.
Try urlencoding the query string. I would also recommend using the cURL extension.
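A minimal sketch of both suggestions together, reusing the IP from the question:
// Sketch: build the query string with properly encoded parameters,
// then fetch it with cURL.
$ip  = '185.43.32.341';  // value taken from the question
$url = 'http://www.geoplugin.net/php.gp?' . http_build_query(['ip' => $ip]);

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);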
That is because geoPlugin is limited to 120 lookups per minute.
http://www.geoplugin.com/premium
So any website feature based on this solution can suddenly break.
I would recommend using both www.geoplugin.net/json.gp?ip={ip} and freegeoip.net/json/{ip}: check whether the first one returns null (meaning the limit has been reached) and, if so, fall back to the other.
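A rough sketch of that fallback; the helper name is my own, and the response field names (geoplugin_countryName, country_name) should be verified against the actual responses:
// Sketch: try geoPlugin first, fall back to freegeoip if the lookup failed.
function lookupCountry($ip) {
    $data = json_decode(@file_get_contents(
        'http://www.geoplugin.net/json.gp?ip=' . urlencode($ip)), true);
    if (!empty($data['geoplugin_countryName'])) {
        return $data['geoplugin_countryName'];
    }
    // geoPlugin came back empty (rate limit reached?), so try freegeoip.
    $data = json_decode(@file_get_contents(
        'https://freegeoip.net/json/' . urlencode($ip)), true);
    return isset($data['country_name']) ? $data['country_name'] : null;
}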
I need to open the following URL:
$file = "http://en.wikipedia.org/w/api.php?action=parse&page=Kundapura&prop=text&format=xml";
$fp = fopen($file, "r");
but I am getting a warning: HTTP request failed! 403 Forbidden
The default PHP user agent is blocked; see Wikimedia's User-Agent policy for details. You can easily enough change your user agent using ini_set at the top of your script, like this:
ini_set("user_agent", "Testing for http://stackoverflow.com/questions/5509640");
Do note that the English Wikipedia forbids downloading many pages via the API (offering database dumps instead), and that automated processes that actually edit the wiki are forbidden unless approved. See their bot policy for details.
What do you need the file handle for? If you just need the output, you can try file_get_contents() instead, and then load and manipulate the result as a string instead of a file.
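Putting both answers together, a minimal sketch (the user-agent string is a placeholder; per the policy it should identify your project and include contact information):
// Set a descriptive user agent first, per Wikimedia's User-Agent policy.
// The string below is a placeholder: use your own project name and contact info.
ini_set('user_agent', 'MyWikiReader/1.0 (https://example.com/contact)');

$xml = file_get_contents(
    'http://en.wikipedia.org/w/api.php?action=parse&page=Kundapura&prop=text&format=xml');

// The request asked for format=xml, so the result can be parsed as XML:
$doc = simplexml_load_string($xml);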
but I am getting a warning: HTTP request failed! 403 Forbidden
The 403 error is coming from their server.
Chances are that you or someone on the IP address or block you are using has been aggressively banned from using the Wikipedia API. You will need to contact a responsible admin at Wikipedia to investigate.