I am trying to run a PHP script via a cron job, and about half the time I get the following warning:
PHP Warning: file_get_contents(http://url.com): failed to open stream: HTTP request failed! in /path/myfile.php on line 285
The program continues to run after that, which makes me think it is not a timeout problem or a memory issue (the timeout is set to 10 minutes and the memory limit to 128M), but the variable where I store the result of that function call ends up empty. The weird part is that I am making several other calls to this same website with other URL parameters and they never have a problem. The only difference with this call is that the file it downloads is about 70 MB, while the others are all around 300 KB.
Also, I never get this warning if I SSH into the web server and run the PHP script manually, only when it is run from cron.
I have also tried using cURL instead of file_get_contents, but then I run out of memory.
Thanks, any help here would be appreciated.
Perhaps the remote server on URL.com is sometimes timing out or returning an error for that particular (large) request?
I don't think you should be trying to store 70 MB in a variable.
You can configure cURL to download directly to a file. Something like:
$file = fopen('my.file', 'w');
$c = curl_init('http://url.com/whatever');
curl_setopt($c, CURLOPT_FILE, $file); // stream the response body straight to $file
curl_exec($c);
curl_close($c);
fclose($file);
If nothing else, cURL should give you much better errors about what's going wrong.
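For example, replacing the bare curl_exec($c) call above with a checked one (a small sketch) surfaces the failure reason:
$ok = curl_exec($c);
if ($ok === false) {
    // read the error before curl_close() discards it
    echo 'cURL error ' . curl_errno($c) . ': ' . curl_error($c);
}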
From another answer: double-check that this issue isn't occurring some of the time with the URL parameters you're using:
Note: If you're opening a URI with special characters, such as spaces, you need to encode the URI with urlencode() - http://docs.php.net/file_get_contents
Related
I am running Windows Server and it hosts my PHP files.
I am using "file_get_contents()" to call another PHP script and return the results. (I have also tried cURL with the same result)
This works fine. However, if I execute my script and then re-execute it almost straight away, I get an error:
"Warning: file_get_contents(http://...x.php): failed to open stream: HTTP request failed!"
So this works fine if I leave a minute or two between calls to this PHP file via the browser. But after a successful attempt, if I retry too quickly, it fails. I have even changed the URL in the line "$html = file_get_contents($url, false, $context);" to point at an empty file that simply prints out a line, and the HTTP stream still doesn't open.
What could be preventing me from opening a new HTTP stream?
I suspect my server is blocking further outgoing streams but cannot find out where this would be configured in IIS.
Any help on this problem would be much appreciated.
**EDIT:** In the script, I am calling a Java program that takes around 1.5 minutes, and it is after this that I call the PHP script that fails.
Also, when it fails, the page hangs for quite some time. During this time, if I open another connection to the initial PHP page, the previous (still hanging) page then completes. It seems like a connection timeout somewhere.
I have set the timeout appropriately in IIS Manager and in PHP.
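For what it's worth, a minimal sketch of building a $context with an explicit HTTP timeout (120 is a placeholder; pick something longer than the Java job's runtime):
$context = stream_context_create(array(
    'http' => array(
        'timeout' => 120, // seconds file_get_contents() will wait for a response
    ),
));
$html = file_get_contents($url, false, $context);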
I am using file_get_contents in PHP to get information from a client's collections on contentDM. CDM has an API, so you can get that info with queries like, say:
http://servername:port/webutilities/index.php?q=function/arguments
It has worked pretty well thus far, across computers and operating systems. However, this time things work a little differently.
http://servername/utils/collection/mycollectionname/id/myid/filename/myname
For this query I fill in mycollectionname, myid, and myname with relevant values. myid and mycollectionname have to exist in the system, obviously. However, myname can be anything you want. When you run the query, it doesn't return a web page or anything to your browser; it just automatically downloads a file with myname as the name of the file and puts it in your local /Downloads folder.
I DON'T WISH TO DOWNLOAD THIS FILE. I just want to read the contents of the file it returns directly into PHP as a string. The file I am trying to get just contains XML data.
file_get_contents works to get the data in that file if I use it with PHP 7 and Apache on my laptop running Ubuntu. But on my desktop, which runs Windows 10 and XAMPP (Apache and PHP 5), I get this error (I've replaced sensitive data with ###):
Warning: file_get_contents(###/utils/collection/###/id/1110/filename/1111.cpd): failed to open stream: No such file or directory in D:\Titus\Documents\GitHub\NativeAmericanSCArchive\NASCA-site\api\update.php on line 18
My coworkers have been unable to help, so I am curious whether anyone here can confirm or deny that this is an operating-system issue or a PHP-version issue, and whether there's a solid alternative method likely to work in PHP 5 on both Windows and Ubuntu.
file_get_contents() is a simple screwdriver. It's very good for getting data via simple GET requests where the headers, HTTP request method, timeout, cookie jar, redirects, and other important things do not matter.
fopen() with a stream context, or cURL with curl_setopt(), are power drills with every bit and option you can think of.
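For example, a sketch of what a stream context adds (the URL and payload here are placeholders): it lets file_get_contents() send a POST with custom headers and a timeout, none of which a bare call controls:
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => http_build_query(array('key' => 'value')),
        'timeout' => 30, // seconds
    ),
));
$response = file_get_contents('http://example.com/api', false, $context);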
In addition to this, due to some recent website hacks, we had to secure our sites further. In doing so, we discovered that file_get_contents failed to work, where cURL still would.
I'm not 100% sure, but I believe this php.ini setting may have been blocking the file_get_contents request.
; Disable allow_url_fopen for security reasons
allow_url_fopen = 0
Either way, our code now works with curl.
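If you want to confirm that setting is the culprit, ini_get() reads it at runtime; a quick sketch:
if (!ini_get('allow_url_fopen')) {
    // file_get_contents() cannot open http:// URLs here; fall back to cURL
}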
References:
http://25labs.com/alternative-for-file_get_contents-using-curl/
http://phpsec.org/projects/phpsecinfo/tests/allow_url_fopen.html
So, you can solve this problem by using the PHP cURL extension. Here is an example that does the same thing you were trying:
function curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response body instead of printing it
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$url = 'your_api_url';
$data = curl($url);
And finally you can check your data with print_r($data). I hope it works for you.
Reference: http://php.net/manual/en/book.curl.php
I notice that when I use file_get_contents I seem to be using more bandwidth than I should. For example:
file_get_contents('https://example.com',false,$ctx,0,99000);
This will cause my network RX to jump up about 1.6 MB (just using ifconfig and comparing before and after). I would think it should only jump by about 99 KB, because I've specified that with the 99000?
file_get_contents is a rather buggy function in PHP. Consider using cURL and following this solution:
how to set a maximum size limit to php curl downloads
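For reference, a sketch of that size-limit technique, assuming PHP 5.5+ (where the progress callback receives the cURL handle as its first argument); the 99000 cap mirrors the question:
$ch = curl_init('https://example.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_BUFFERSIZE, 8192);  // small buffer so the callback fires often
curl_setopt($ch, CURLOPT_NOPROGRESS, false); // the progress callback is off by default
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION,
    function ($ch, $downloadSize, $downloaded, $uploadSize, $uploaded) {
        // returning a non-zero value aborts the transfer
        return ($downloaded > 99000) ? 1 : 0;
    }
);
$data = curl_exec($ch); // false if the callback aborted the download
curl_close($ch);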
Okay, I have a problem that I hope you can help me fix.
I am running a server that stores very large video files, some up to 650 MB. I need a user to be able to request this page and have the file download to their machine. I have tried everything: a plain readfile() call hangs for about 90 seconds before quitting with a "No data received" error (code 324); a chunked-readfile script that I found on several websites doesn't even start a download; FTP-through-PHP solutions did nothing but give me errors when I tried to get the file; and the only cURL solutions I have found just create another file on my server. That is not what I need.
To be clear, I need the user to be able to download the file to their computer, not to the server.
I don't know if this code is garbage or if it just needs a tweak or two, but any help is appreciated!
<?php
// Problem: this copies the remote file to a path on the server ("dl" . $fn);
// it never streams anything to the visitor's browser.
$fn = $_GET["fn"];
echo $fn . "<br/>";
$url = $fn;
$path = "dl" . $fn;
$fp = fopen($path, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp); // write the download to $fp on the server
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
I wouldn't recommend serving large binary files with PHP, or any other scripting technology for that matter. They were never designed for this -- use Apache, Nginx, or whatever standard HTTP server you have on the back end. If you still need to use PHP, then you should probably check out readfile_chunked:
http://php.net/readfile#48683
And here's a great tutorial:
http://teddy.fr/blog/how-serve-big-files-through-php
Good luck.
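For reference, a minimal sketch of that chunked approach (readfile_chunked is not a built-in; the name comes from the manual comment linked above, and the 1 MB chunk size is an arbitrary choice):
function readfile_chunked($filename, $chunkSize = 1048576)
{
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        // send one chunk and flush it to the client before reading the next
        echo fread($handle, $chunkSize);
        flush();
    }
    return fclose($handle);
}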
readfile() doesn't buffer. However, PHP itself might buffer, so turn output buffering off:
while (ob_get_level()) {
    ob_end_clean(); // discard every active output buffer
}
readfile($file);
Your web server might buffer too. Turn that off as well; how you do it depends on the web server and why it's buffering.
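On Apache with mod_php, for instance, something like this sketch (no-gzip is the conventional environment variable that tells mod_deflate not to compress, and therefore buffer, the response):
if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1'); // stop mod_deflate buffering this response
}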
I see two problems that could be happening:
First: your web server may be closing the connection due to a timeout. You should look at the web server config.
Second: a timeout in cURL. I recommend looking at this post.
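For reference, the relevant cURL timeout options look like this ($ch is an existing cURL handle; the values are placeholders):
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // max seconds to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 600);       // max seconds for the entire transfer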
This has been bugging me for literally hours already.
I can't seem to figure out why PHP cURL won't download data to a file. (CURLOPT_FILE is set to a local file.) I am not getting any data; I periodically check the file size of the destination file and it is always zero. To give you some background, I am downloading a 90 KB JPEG file (for testing purposes).
This works on my local computer (XP) but not on the site I am working on (Windows Server 2003).
I did several tests which made the scenario even weirder.
I disabled CURLOPT_FILE to print the data returned by cURL to standard output, and the binary data was printed.
Having experienced blocked websites before (since the server implements access control), I tried accessing the file from Internet Explorer, and I was able to see it.
Having experienced blocked downloads before, I tried downloading the file from Internet Explorer, and it downloaded.
The file is created by fopen('', 'w') but the size remains 0. Despite this successful file creation, I thought maybe PHP had a problem with filesystem write privileges, so I set the exe to be runnable even by non-admin users. Still no download.
Has this ever occurred to anybody?
Any pointers will be appreciated. I am really stuck.
Thank you.
Here are the cURL options I set:
$connection = curl_init($src);
// If these are not set, curl_exec outputs data.
// If these are set, curl_exec does not send any data to the file
// pointed to by $file_handler. $file_handler is not null
// because it is opened as write (a non-existing file is created).
curl_setopt($connection, CURLOPT_RETURNTRANSFER, true);
curl_setopt($connection, CURLOPT_FILE, $file_handler);
PS: I'm doing these tests using the command line and not the browser.
You might not have permissions to write to the file.
I don't think you have to set CURLOPT_RETURNTRANSFER, and if you are running from the command line, be sure to run PHP with admin rights. I'm not sure how it works on Windows, but on Linux I always sudo every command-line script I run.
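A minimal sketch of the download without CURLOPT_RETURNTRANSFER (when both options are set, curl_exec() can end up returning the data instead of writing it to the CURLOPT_FILE handle); the destination path here is hypothetical:
$file_handler = fopen('local.jpg', 'w'); // hypothetical destination path
$connection = curl_init($src);
curl_setopt($connection, CURLOPT_FILE, $file_handler); // stream the body to the file
curl_exec($connection);
curl_close($connection);
fclose($file_handler);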
Also, if PHP safe mode is on, be sure to give the directory the same owner (UID) as the PHP file. Hmh, but since you can create the file (with 0 filesize), it might have nothing to do with rights... Could you check the *open_basedir* PHP setting on your server? If it is set, cURL is not allowed to use the file protocol. Did you check your server's log files? Maybe there is an error.
You may need to figure out which user runs your PHP. If the user running the PHP script (the one that calls php) is not authorized to write to the directory of the file, or to /path/to/file, you may need to adjust your file permissions.