Set fopen() deadline with WordPress on Google App Engine - PHP

I have WordPress 3.8 running on Google App Engine. Everything works fine except the PayPal return page with the s2Member® plugin. I think it's related to an fopen() or URL fetch error.
The Server Scan By: s2Member® (http://www.s2member.com/kb/server-scanner) in my application reports the following issue:
[ERROR] cURL Extension / Or fopen() URL: One or more HTTP connection tests failed against localhost. Cannot connect to self over HTTP — possible DNS resolution issue. Can't connect to: http://foto-box.appspot.com
In order to run s2Member®, your installation of PHP needs one of the
following...
Either the cURL extension for remote communication via PHP (plus the OpenSSL extension for PHP).
Or, set: allow_url_fopen = on in your php.ini file (and enable the OpenSSL extension for PHP).
The App Engine log reports:
PHP Warning: file_get_contents(http://foto-box.appspot.com): failed to open stream: Request deadline exceeded in /base/data/home/apps/s~foto-box/3.372404596384852247/wordpress/s2-server-scanner.php on line 1002
I know there is no cURL on App Engine, but fopen() should work by default.
How exactly do I modify the deadline to figure out whether that is the problem?
Where exactly do I have to include
deadline=60
or
$options = ["http" => ["timeout" => 60]];
$context = stream_context_create($options);
$data = file_get_contents("http://foo.bar", false, $context);
in my WordPress or App Engine files to increase the timeout? php.ini, index.php, ... or wp-config.php?

I had a look at the script - you can change the timeout on line 1000. It is currently 5 seconds; change it to something like 30 seconds.
if(is_resource($_fopen_test_resource = stream_context_create(array('http' => array('timeout' => 5.0, 'ignore_errors' => FALSE)))))
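For example, a sketch of that same line with only the timeout value raised to 30 seconds (everything else from the scanner unchanged):
if(is_resource($_fopen_test_resource = stream_context_create(array('http' => array('timeout' => 30.0, 'ignore_errors' => FALSE)))))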
P.S. It might be a good idea not to run arbitrary scripts that you download from the internet - just sayin.

Related

Intermittently failing to open stream (HTTP request)

I am running Windows Server and it hosts my PHP files.
I am using "file_get_contents()" to call another PHP script and return the results. (I have also tried cURL with the same result)
This works fine. However, if I execute my script and then re-execute it almost straight away, I get an error:
"Warning: file_get_contents(http://...x.php): failed to open stream: HTTP request failed!"
So this works fine if I leave a minute or two between calling this PHP file via the browser. But after a successful attempt, if I retry too quickly, then it fails. I have even changed the URL in the line "$html = file_get_contents($url, false, $context);" to an empty file that simply prints out a line, and the HTTP stream still doesn't open.
What could be preventing me to open a new HTTP stream?
I suspect my server is blocking further outgoing streams but cannot find out where this would be configured in IIS.
Any help on this problem would be much appreciated.
EDIT: In the script, I am calling a Java program that takes around 1.5 minutes, and it is after this that I then call the PHP script that fails.
Also, when it fails, the page hangs for quite some time. During this time, if I open another connection to the initial PHP page, then the previous page (still hanging) completes. It seems like a connection timeout somewhere.
I have set the timeout appropriately in IIS Manager and in PHP.
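For what it's worth, a minimal sketch of putting an explicit read timeout on the failing call (assuming $url as in the question), so a hung request fails fast instead of hanging the page:
// explicit read timeout so a stalled response errors out quickly
$context = stream_context_create(array(
    'http' => array('timeout' => 10) // seconds to wait for the response
));
$html = file_get_contents($url, false, $context);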

Quick and reliable HTTP call from PHP script

I am adding some external HTTP calls (for internal status monitoring) to a large PHP application which is already very complex and prone to errors. The HTTP call should be made quickly and without raising errors/exceptions. It is okay for HTTP calls to fail.
My first thought was to use cURL, but it is not installed on the server. It would let me suppress errors, set timeouts and prevent unnecessary blocking if the status server is unreachable/slow.
I know of several built-in PHP functions which can make an HTTP request (and these are enabled on the server) - file(), file_get_contents(), http_get() - and I can prefix the call with @ to suppress errors. But if the monitoring server is unreachable, the call will hang the script for a number of seconds. Is there a way to set a timeout?
You can set a timeout, as the documentation/comments of file_get_contents() say:
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1
    )
));
file_get_contents("http://example.com/", false, $ctx);
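If the call must also never raise a warning, here is a minimal sketch combining the short timeout with error suppression (the status URL is a placeholder):
// fire-and-forget status ping that tolerates failure
$ctx = stream_context_create(array(
    'http' => array(
        'timeout'       => 1,    // give up after one second
        'ignore_errors' => true, // return the body even on 4xx/5xx responses
    ),
));
// @ suppresses the warning when the monitoring server is unreachable
$response = @file_get_contents("http://status.example.com/ping", false, $ctx);
if ($response === false) {
    // it is okay for this call to fail; just carry on
}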

PHP SOAP awfully slow: Response only after reaching fastcgi_read_timeout

The constructor of the SOAP client in my WSDL web service is awfully slow; I get the response only after the fastcgi_read_timeout value in my Nginx config is reached. It seems as if the remote server is not closing the connection. I also need to set it to a minimum of 15 seconds, otherwise I get no response at all.
I already read similar posts here on SO, especially this one:
PHP SoapClient constructor very slow and its linked threads, but I still cannot find the actual cause of the problem.
This is the part which takes 15+ seconds:
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl");
It seems to be slow only when called from my PHP script, because the file opens instantly when accessed from any of the following locations:
wget from my server which is running the script
SoapUI or Postman (but I don't know if they had cached it before)
opening the URL in a browser
Ports 80 and 443 in the firewall are open. Following the suggestion from another thread, I found two workarounds:
Loading the WSDL from a local file => fast
Enabling the WSDL cache and using the remote URL => fast
But I'd still like to know why it doesn't work with the original URL.
It seems as if the web service does not close the connection; in other words, I get the response only after reaching the timeout set in my server config. I tried setting keepalive_timeout 15; in my Nginx config, but it does not work.
Is there any SOAP/PHP parameter which forces the server to close the connection?
I was able to reproduce the problem and found a solution (it works, though maybe not the best) in the accepted answer of a question linked from the one you referenced:
PHP: SoapClient constructor is very slow (takes 3 minutes)
As per the answer, you can adjust the HTTP headers using the stream_context option.
$client = new SoapClient(
    "https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl",
    array(
        'stream_context' => stream_context_create(
            array(
                'http' => array(
                    'protocol_version' => '1.0',
                    'header' => 'Connection: Close'
                )
            )
        )
    )
);
More information on the stream_context option is documented at http://php.net/manual/en/soapclient.soapclient.php
I tested this using PHP 5.6.11-1ubuntu3.1 (cli)
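A related knob worth knowing: the WSDL download itself honours PHP's default_socket_timeout, so lowering it bounds how long the constructor can hang (a sketch that limits the damage, not a fix for the root cause):
// bound the WSDL fetch: fail after 5 seconds instead of waiting for fastcgi_read_timeout
ini_set('default_socket_timeout', 5);
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl");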

Problems with PHP file_get_contents() on RHEL 6

Ok, I have two Linux boxes running behind a proxy server. Both boxes are set to bypass the filtering by connecting on port 801.
Box A - Fedora Core 12 / PHP 5.3.1
Box B - RHEL 6 / PHP 5.3.3
On Box A I am able to use file_get_contents() to connect to an external site.
<?php
$opts = array(
    'http' => array(
        'proxy' => 'tcp://10.136.132.1:801',
        'request_fulluri' => true
    )
);
$cxContext = stream_context_set_default($opts);
echo file_get_contents("http://www.google.com");
This results in Google's homepage being displayed.
On Box B I run the same code, but get this error:
Warning: file_get_contents(http://www.google.com): failed to open stream: Permission denied
Both boxes are on the same network and behind the same proxy server. Is there a setting I am missing in Apache or PHP that will allow file_get_contents to work on Box B?
It sounds like you have SELinux enabled, it blocks any outgoing connections by Apache by default. Try running this in your shell as root:
setsebool -P httpd_can_network_connect on
More info on SELinux booleans can be found here:
http://wiki.centos.org/TipsAndTricks/SelinuxBooleans
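To verify the boolean before and after the change (assuming the standard SELinux tools are installed):
getsebool httpd_can_network_connect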
That could be because Google blocks requests that appear to come from a bot or script.
Maybe Box A sets additional headers when requesting Google.
Try opening another website on Box B.

file_get_contents returns empty string

I hesitated to ask this question because it looks weird.
But anyway.
Just in case someone has already encountered the same problem...
Filesystem functions (fopen, file, file_get_contents) behave very strangely with the http:// wrapper:
It seemingly works: no errors are raised and fopen() returns a resource.
It returns no data for all certainly working URLs (e.g. http://google.com/):
file() returns an empty array, file_get_contents() returns an empty string, and fread() returns false.
For all intentionally wrong URLs (e.g. http://goog973jd23le.com/) it behaves exactly the same, save for a little [supposedly domain lookup] timeout, after which I get no error (while I should!) but an empty string.
allow_url_fopen is turned on.
curl (both the command-line and PHP versions) works fine, all other utilities and applications work fine, and local files open fine.
This error seems inapplicable because in my case it fails for every URL and host.
php-fpm 5.2.11
Linux version 2.6.35.6-48.fc14.i686 (mockbuild#x86-18.phx2.fedoraproject.org)
I fixed this issue on my server (running PHP 5.3.3 on Fedora 14) by removing the --with-curlwrapper flag from the PHP configuration and rebuilding it.
Sounds like a bug. But just for posterity, here are a few things you might want to debug.
allow_url_fopen: already tested
PHP under Apache might behave differently than PHP-CLI, and would hint at chroot/selinux/fastcgi/etc. security restrictions
local firewall: unlikely since curl works
user-agent blocking: this is quite common actually, websites block crawlers and unknown clients
transparent proxy from your ISP, which either mangles or blocks (PHP user-agent or non-user-agent could be interpreted as malware)
PHP stream wrapper problems
Anyway, first let's prove that PHP's stream handlers are functional:
<?php
if (!file_get_contents("data:,ok")) {
    die("Houston, we have a stream wrapper problem.");
}
Then try to see if PHP makes real HTTP requests at all. First open netcat on the console:
nc -l 8000
And debug with just:
<?php
print file_get_contents("http://localhost:8000/hello");
And from here you can try to communicate with PHP; see if anything comes back when you vary the response. Enter an invalid response first into netcat. If no error is thrown, your PHP package is borked.
(You might also try communicating over a "tcp://.." handle then.)
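A minimal sketch of that tcp:// check, assuming netcat is still listening on port 8000:
<?php
// bypass the http:// wrapper and speak HTTP by hand over a raw socket
$fp = stream_socket_client("tcp://localhost:8000", $errno, $errstr, 5);
if ($fp) {
    fwrite($fp, "GET /hello HTTP/1.0\r\n\r\n");
    echo stream_get_contents($fp);
    fclose($fp);
}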
Next up is experimenting with http stream wrapper parameters. Use http://example.com/ literally, which is known to work and never blocks user agents.
$context = stream_context_create(array("http" => array(
    "method" => "GET",
    "header" => "Accept: xml/*, text/*, */*\r\n",
    "ignore_errors" => false,
    "timeout" => 50,
)));
print file_get_contents("http://www.example.com/", false, $context, 0, 1000);
I think ignore_errors is very relevant here. But check out http://www.php.net/manual/en/context.http.php and specifically try to set protocol_version to 1.1 (you will get a chunked and misinterpreted response, but at least we'll see if anything returns).
If even this remains unsuccessful, then try to hack the http wrapper.
<?php
ini_set("user_agent" , "Mozilla/3.0\r\nAccept: */*\r\nX-Padding: Foo");
This will not only set the User-Agent, but also inject extra headers. If there is a processing issue with constructing the request within the http stream wrapper, then this could eventually catch it.
Otherwise try disabling any Zend extensions, Suhosin, PHP xdebug, APC and other core modules. There could be interference. Otherwise this is potentially an issue specific to the Fedora package. Try a new version, see if it persists on your system.
When you use the http stream wrapper, PHP creates an array for you called $http_response_header after file_get_contents() (or any of the other f-family functions) is called. This contains useful info on the state of the response. Could you do a var_dump() of this array and see if it gives you any more info on the response?
It's a really weird error that you're getting. The only thing I can think of is that something else on the server is blocking the http requests from PHP, but then I can't see why cURL would still be ok...
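For example, a quick way to see what (if anything) came back, using a known-good test URL:
$body = @file_get_contents("http://www.example.com/");
// populated by the http wrapper; stays undefined if no response ever arrived
var_dump(isset($http_response_header) ? $http_response_header : null);
var_dump($body);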
Is the http stream registered in your PHP installation? Look for "Registered PHP Streams" in your phpinfo() output. Mine says "https, ftps, compress.zlib, compress.bzip2, php, file, glob, data, http, ftp, phar, zip".
If there is no http, set allow_url_fopen to on in your php.ini.
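You can also check from code rather than phpinfo():
// lists every registered wrapper; "http" should be among them
var_dump(stream_get_wrappers());
var_dump(in_array('http', stream_get_wrappers()));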
My problem was solved by dealing with the SSL context:
$arrContextOptions = array(
    "ssl" => array(
        "verify_peer" => false,
        "verify_peer_name" => false,
    ),
);
$context = stream_context_create($arrContextOptions);
$jsonContent = file_get_contents("https://www.yoursite.com", false, $context);
Note that this disables certificate verification entirely, so treat it as a diagnostic rather than a production setting.
What does a test with fsockopen tell you?
Is the test isolated from other code?
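If you want to try that, a minimal fsockopen sketch that fetches a page without going through the http:// wrapper:
// raw HTTP GET via fsockopen, with a 5-second connect timeout
$fp = fsockopen("www.example.com", 80, $errno, $errstr, 5);
if (!$fp) {
    die("fsockopen failed: $errstr ($errno)");
}
fwrite($fp, "GET / HTTP/1.0\r\nHost: www.example.com\r\nConnection: Close\r\n\r\n");
while (!feof($fp)) {
    echo fgets($fp, 1024);
}
fclose($fp);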
I had the same issue in Windows after installing XAMPP 1.7.7. Eventually I managed to solve it by adding the following line to php.ini (while having allow_url_fopen = On):
extension=php_openssl.dll
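To confirm the extension actually loaded after restarting the server, a quick check:
var_dump(extension_loaded('openssl'));              // should print bool(true)
var_dump(in_array('https', stream_get_wrappers())); // https should now be registered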
Use the PHP_Compat reimplementation of file_get_contents() at http://pear.php.net/reference/PHP_Compat-latest/__filesource/fsource_PHP_Compat__PHP_Compat-1.6.0a2CompatFunctionfile_get_contents.php.html - rename it, and test whether the error occurs with this rewritten function.
