I was using file_get_contents() to fetch information from an external URL. It was working perfectly on the server before, but now it fails on the server with no changes to the code. It keeps giving me the error: failed to open stream: Connection timed out.
I have tested it on localhost and it works perfectly. I have checked the allow_url_fopen option; it is still On.
So, what could be the reason(s)?
file_get_contents() does not work well at all for fetching remote files and should not be used for that. It does not cope with slow network connections or redirects, and it does not return error codes. You should use cURL instead to fetch remote files.
There is an example in the manual for curl_exec:
http://us3.php.net/manual/en/function.curl-exec.php
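A minimal sketch of fetching a remote URL with cURL along the lines of that manual page might look like this (the URL is a placeholder, and the timeout values are arbitrary choices to adjust for your environment):

```php
<?php
// Fetch a remote URL with cURL instead of file_get_contents().
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects automatically
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);   // give up connecting after 10 seconds
curl_setopt($ch, CURLOPT_TIMEOUT, 30);          // give up on the whole transfer after 30 seconds

$body = curl_exec($ch);
if ($body === false) {
    // Unlike file_get_contents(), cURL reports why the request failed.
    echo 'cURL error: ' . curl_error($ch) . "\n";
} else {
    echo 'HTTP ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . ', '
        . strlen($body) . " bytes\n";
}
curl_close($ch);
```

The key difference from file_get_contents() is that you get curl_error() and the HTTP status code back instead of just false and a warning.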
The problem:
file_get_contents('http://google.com'); works fine (works for any external url)
file_get_contents('http://my.server.org/existing_file.php'); returns nothing, and var_dump($http_response_header); returns an empty array when loading any local url.
I get the same empty response using 127.0.0.1 and localhost in place of the domain name.
I've tried/checked:
I've tried using cURL instead as suggested here - I get the same result (works for external URLs, empty response for local).
allow_url_fopen=On and cURL is enabled in php.ini according to phpinfo() - this must be ok, as I can load external URLs without a hitch.
I cannot use include instead, as I need to load a URL with variables to pass to a PDF renderer.
Is there some other php.ini setting that needs to change? Or could it be some other problem? I think it is a configuration issue because my code works fine locally in MAMP, but not on the production server.
Why I am doing this at all:
It's all tied up in a WordPress functions.php file. I'm basically loading a URL like http://my.server.org/path/to/file/?pdf-template, which I then check against the path to load the correct rendering template, which then feeds the rendered PHP to DOMPDF to generate a PDF file on the fly. Ultimately it means the client can append /pdf/ to any URL in their WordPress site and get a PDF version.
What's odd is that this used to work fine on their server, and the person in charge of the server at the client says nothing has changed in the configuration. They did change web host in the last month though, which probably has something to do with it.
Update
I managed to pull this from the error log
Notice: file_get_contents(): send of 2 bytes failed with errno=104 Connection reset by peer in /home/path/to/my/file.php on line 19
The problem also appears to be intermittent - sometimes file_get_contents works, sometimes it fails with the above error. There doesn't seem to be a pattern.
Could this be a firewall problem?
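For intermittent failures like this, one way to narrow things down is to retry the request a few times and capture the exact error each attempt produces. A rough diagnostic sketch (the URL is the failing local one from above; the timeout and retry counts are arbitrary):

```php
<?php
// Retry file_get_contents() a few times and record the exact warning,
// to see whether the "Connection reset by peer" error is transient.
$url = 'http://my.server.org/existing_file.php';
$context = stream_context_create(['http' => ['timeout' => 5]]);

for ($attempt = 1; $attempt <= 3; $attempt++) {
    $body = @file_get_contents($url, false, $context);
    if ($body !== false) {
        echo "Attempt $attempt succeeded (" . strlen($body) . " bytes)\n";
        break;
    }
    // The @ suppresses the warning but error_get_last() still records it.
    $err = error_get_last();
    echo "Attempt $attempt failed: " . ($err['message'] ?? 'unknown') . "\n";
    sleep(1); // brief pause before retrying
}
```

If every failure is errno=104, and external URLs are unaffected, a firewall or loopback restriction on the new host is a plausible suspect.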
I am simply trying to use file_get_contents() to get the content of http://www.google.com/search?hl=en&tbm=nws&authuser=0&q=Pakistan with the same code on 2 different servers. One gets everything fine, while the other gets a 403 error. I can't work out what the reason is. I used phpinfo() on both servers.
One difference I observe is that one uses Apache2 while the other uses an HTTP server called LiteSpeed V6.6, but I don't know whether that affects file_get_contents(). For more detail you can see their phpinfo() pages linked below.
The phpinfo() for the server where file_get_contents() gets the 403: http://zavahost.com/newsreader/phpinfo.php
The phpinfo() for the server where it works fine: http://162.243.5.14/info.php
I would be thankful if someone could tell me what is affecting file_get_contents(). Any ideas?
403 is a Forbidden error. It means the remote server understood the request but refuses to authorize it: you lack sufficient permission to access the content on that server. I'm not sure whether it's your hosting provider's address being blocked, but it could also be that the remote server rejects requests based on header information it has flagged as unauthorized.
Try using the answer on this post: php curl: how can i emulate a get request exactly like a web browser? to cURL the same data from the server that is getting the 403.
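A hedged sketch of that approach, sending browser-like headers with cURL so the remote server cannot distinguish the request from a normal browser by its user agent alone (the user agent string and extra headers here are illustrative, not canonical):

```php
<?php
// Fetch the same URL while sending browser-like headers, to test whether
// the 403 is triggered by PHP's default (empty) user agent.
$url = 'http://www.google.com/search?hl=en&tbm=nws&authuser=0&q=Pakistan';
$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_USERAGENT      => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    CURLOPT_HTTPHEADER     => [
        'Accept: text/html,application/xhtml+xml',
        'Accept-Language: en-US,en;q=0.9',
    ],
]);
$body = curl_exec($ch);
echo 'HTTP ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
curl_close($ch);
```

If the status changes from 403 with these headers, the remote server is filtering on headers; if it stays 403, the block is more likely based on the server's IP address.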
Well, this is a problem I have never seen before.
I am trying to stream an FTP file using PHP's fopen() and feof() in order to copy it from one server to my own. This works fine when using normal http:// URLs. However, when using the ftp:// protocol, I'm receiving the following error:
fopen(ftp://.../streaming/planted2.mp4) [function.fopen]: failed to open stream: FTP server reports 550 /streaming/planted2.mp4: not a plain file.
Bear in mind that I have confirmed the URL is correct.
If I pop it into my browser's search bar, it always loads correctly.
Following this error, any attempt to use feof() or fread() on the wrapper results in an error complaining that the respective function expects a resource, but that a boolean is being provided instead. This would not be the case if fopen() were not failing.
As the files are quite large (several gigabytes at times) streaming is mandatory. Also, due to the configuration of the server, I need a way to loop through each chunk in order to buffer some output. If I don't, the server holds up. It's a weird configuration on Rackspace's behalf. That's why I'm using feof().
So, without further ado, my question is this: What does the 550 error mean, and what is a "plain file"? If it is a problem with the configuration of the server I am attempting to download from, what are my options, given my limitations?
EDIT
I have determined this is a server issue. However, the problem is still unresolved.
I contacted my boss and our server administrator, and the server guy told me to test this out on a different Windows-based server instead of the Linux-based one I was playing with. My script works with the Windows server, so I have confirmed my script is not in error.
Unfortunately, my boss still wants me to figure out the problem, and find out why it's not working on the Linux box. I have absolutely no idea why, and don't know where to look. Any hints would be greatly appreciated!
I've just come across this issue when trying to get a file from an SCO Unix server.
Server File Location: /home/user/data/myfile.csv
I can put the below into any browser and it gets the file.
ftp://user:password@host/data/myfile.csv
However if I run the below, I get the same error as you
$f = fopen("ftp://user:password@host/data/myfile.csv", "r");
However, if I put the full path into fopen - it works fine.
$f = fopen("ftp://user:password@host/home/user/data/myfile.csv", "r");
I'm not sure if this will fix it for you, but works for me.
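Putting the full path together with the chunked streaming the question describes, a rough sketch of the copy loop might look like this (credentials, host, paths, and the chunk size are all placeholders):

```php
<?php
// Stream an FTP file to a local file in chunks, using the full
// server-side path rather than the path relative to the FTP home dir.
$src = fopen('ftp://user:password@host/home/user/data/myfile.csv', 'r');
if ($src === false) {
    die("Could not open FTP stream\n");
}
$dst = fopen('/tmp/myfile.csv', 'w');
while (!feof($src)) {
    $chunk = fread($src, 8192); // read 8 KB at a time to keep memory use flat
    if ($chunk === false) {
        break;
    }
    fwrite($dst, $chunk);
}
fclose($src);
fclose($dst);
```

Because the data moves through a fixed-size buffer, this works for multi-gigabyte files that could never be loaded whole with file_get_contents().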
Hey guys,
I developed a website on my local Apache setup on my Mac. I'm using two requests to foreign domains. One goes out to geoplugin.net to get the current geolocation.
This works just fine on my local setup. However, when I transfer the files to my real server, the website prints the following:
Warning: file_get_contents(http://www.geoplugin.net/php.gp?ip=185.43.32.341) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.0 403 Forbidden in /home/.sites/74/site484/web/testsite/wp-content/themes/test/header.php on line 241
What can I do here? What am I doing wrong?
Furthermore, I'm using a cURL request on my website which also fails to retrieve data. Both work fine on my local MAMP setup.
Any ideas?
The server responds with a "403 Forbidden" status code. So file_get_contents() works fine, but the server you are trying to access (or a proxy or something in between) doesn't allow it.
This can have many reasons. For example (as in the comments on the question), you are banned or blocked (because of too many requests), or something similar.
HTTP/1.0 403 Forbidden
means you are not allowed to access these files! Try adding a user agent header.
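A sketch of adding that header with a stream context (the user agent string and the IP are placeholders; ignore_errors is set so the body and status line come back even on an error response):

```php
<?php
// Send a User-Agent header with file_get_contents() via a stream context,
// in case the remote server rejects PHP's default (empty) user agent.
$context = stream_context_create([
    'http' => [
        'header'        => "User-Agent: Mozilla/5.0 (compatible; MySite/1.0)\r\n",
        'ignore_errors' => true, // return the body even on 4xx/5xx responses
    ],
]);
$data = file_get_contents('http://www.geoplugin.net/php.gp?ip=8.8.8.8', false, $context);
// $http_response_header is populated by the call; the first entry is the
// status line, e.g. "HTTP/1.1 200 OK" or "HTTP/1.0 403 Forbidden".
var_dump($http_response_header[0] ?? null);
```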
You need to create an account at geoplugin.com and subscribe your domain to use the web service without limitation; then you will stop receiving the 403 Forbidden error. Don't worry about costs, it's a free service; I'm using it on three sites.
Try urlencoding the query string.
I would also recommend using the cURL extension.
That is because geoPlugin is limited to 120 lookups per minute.
http://www.geoplugin.com/premium
So any website feature based on this service can suddenly break.
I would recommend using both www.geoplugin.net/json.gp?ip={ip} and freegeoip.net/json/{ip}: check whether the first one returns null (meaning the limit has been reached), and if so fall back to the other.
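A sketch of that fallback, assuming JSON responses from both services (the geoplugin_countryName field used to detect a useful response is an assumption to verify against the actual payload):

```php
<?php
// Try geoPlugin first; if it returns nothing useful (e.g. the 120
// lookups/minute limit was hit), fall back to freegeoip.
function geolocate(string $ip): ?array
{
    $data = @file_get_contents("http://www.geoplugin.net/json.gp?ip={$ip}");
    $decoded = ($data !== false) ? json_decode($data, true) : null;
    if (!empty($decoded['geoplugin_countryName'])) {
        return $decoded; // primary lookup succeeded
    }
    // Primary lookup failed or the rate limit was reached; try the fallback.
    $data = @file_get_contents("http://freegeoip.net/json/{$ip}");
    return ($data !== false) ? json_decode($data, true) : null;
}
```

Note the two services use different field names in their responses, so the caller has to normalize whichever payload comes back.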
When my script parses an RSS feed from my local Apache server, it parses fine, but when I upload the script to the remote hosting server it gives the error:
Warning: Magpie RSS: Failed to fetch (url) and cache is off in magpierss-0.6/rss_fetch.inc
on line 231.
I have searched for possible answers, and the suggestions all revolve around enabling caching or changing the cache lifetime. I don't think that is the problem; it looks like the issue is with the remote hosting Apache server, or access being denied to my host.
Can anyone help?
Check that Magpie can open the RSS URL; usually this requires cURL being enabled or allow_url_fopen turned on on your server. Also make sure that Magpie is allowed to create its cache files (if any).
This could be caused by a bad URL, the server being busy, or remote file access not being allowed by your host.
The $errormsg = "Failed to fetch $url "; line in the source is indicative of a problem getting to the host.
Try checking access from the remote host: if you have SSH and curl enabled, just do
curl (url)
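For example, a quick check from the remote host's shell might look like this (the feed URL is a placeholder for the one Magpie fails to fetch):

```shell
# Fetch only the response headers to see the status code the feed URL returns
curl -sSI http://example.com/feed.rss

# Or fetch verbosely to watch redirects, DNS, and connection problems
curl -v http://example.com/feed.rss
```

A 403 or a connection timeout here would confirm the problem is host access rather than Magpie's cache settings.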