is mod_userdir preventing localhost http request? - php

I'm using cPanel on a CentOS 8 server. Some of my PHP APIs need to make an HTTP request to another PHP API on the same server through localhost. I have already tried to make the call using:
$opts = array('http' => array('method' => "GET", 'header' => "Content-type: application/x-www-form-urlencoded"));
$file = file_get_contents("http://127.0.0.1/<path to api file from public_html>", false, stream_context_create($opts));
but I get a "file not found" error. The point is that mod_userdir is turned off in cPanel, and when I try to turn it on, it says: "Detected mod_ruid2 in use. Using both mod_userdir and mod_ruid2 is not a supported configuration."
Is the problem caused by mod_userdir being turned off? If so, how do I turn it on?
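For debugging, it can help to look at the status line the local request actually returns. A minimal sketch (the same call as above, with the path placeholder kept as-is; ignore_errors is an addition so the body is fetched even on an error status):
$opts = array('http' => array(
    'method'        => "GET",
    'header'        => "Content-type: application/x-www-form-urlencoded",
    'ignore_errors' => true, // fetch the body even when the status is not 200
));
$file = file_get_contents("http://127.0.0.1/<path to api file from public_html>", false, stream_context_create($opts));
// $http_response_header is populated by the http:// wrapper after the call;
// its first entry is the status line, e.g. "HTTP/1.1 404 Not Found".
if (isset($http_response_header)) {
    echo $http_response_header[0], PHP_EOL;
}
var_dump($file);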

Related

PHP cURL request fails on https call with errno 56: "Received HTTP code 403 from proxy after CONNECT"

Server: CentOS 8 (fully updated)
PHP Version: 7.4 (FPM)
I am trying to make a call with cURL (and I also tried Guzzle) and I am getting the same problem, but only with HTTPS URLs. HTTP URLs work fine with no problems.
Here is some sample code:
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_SSL_VERIFYPEER => false,
    CURLOPT_CAINFO => BASEPATH . "/data/cacert.pem",
    CURLOPT_URL => $url
));
$response = curl_exec($curl);
The exact cURL error is: Received HTTP code 403 from proxy after CONNECT
The Apache virtual host is not using any proxy. I think this is somehow being stopped between Apache and PHP-FPM. The log output is not helpful. I have ruled out SELinux and mod_security as the issue; I disabled both and still get the same result.
Another question mentions adding the following to https_proxy.conf
ProxyRequests On
AllowCONNECT 443 563 5000
But this has not yielded any results, which makes sense because I am not using an HTTP proxy in any of my HTTP configurations.
Well, I found out what the problem was. I had an unused environment variable in my .env file which was setting a proxy that I usually use on my dev machine. The script was failing on the production machine.
It turns out cURL and Guzzle automatically pick up those proxy environment variables for their requests, which caused the proxy connection refusal. I removed the variable and everything works as expected.
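If removing the variable is not an option, a possible workaround is to override the proxy on the handle itself. A minimal sketch (assuming a plain cURL handle; $url stands for the HTTPS URL from the question):
// Explicitly ignore any proxy picked up from the environment (http_proxy / https_proxy / HTTPS_PROXY).
$curl = curl_init($url);
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_PROXY          => '', // an empty proxy string disables proxy use for this handle
    // alternatively, CURLOPT_NOPROXY => '*' is documented to bypass the proxy for every host
));
$response = curl_exec($curl);
curl_close($curl);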
Live and learn.

file_get_contents not working on CentOS 7

I was using PHPOffice's template processing functionality with Laravel, and it worked on Ubuntu, Windows and Mac. The functionality stopped working when I migrated to a CentOS server. I figured out that PHPOffice calls file_get_contents to open the template file, and that call is failing.
So I started testing the file_get_contents function.
echo file_get_contents('http://my-server-ip-address/api/public/storage/templates/template.docx');
Error,
ErrorException (E_WARNING)
file_get_contents(http://my-server-ip-address/api/public/storage/templates/template.docx): failed to open stream: Connection timed out
php.ini configuration:
allow_url_fopen = On (I also tried allow_url_fopen = 1)
I also tried setting:
user_agent = "PHP"
I cannot switch the call to a cURL-based approach, as this is handled internally by the PHPOffice Composer package. Is there a solution for this?
I can access the file directly on the browser. No issues with that.
Edit:
echo file_get_contents('http://my-ip/api/public/storage/templates/template.docx');
echo file_get_contents('http://my-domain/api/public/storage/templates/template.docx');
echo file_get_contents('https://www.google.com/');
echo file_get_contents('http://localhost/api/public/storage/templates/template.docx');
Here, 1 and 2 do not work from the same server that the IP/domain points to, but they work from any other system, including my local machine.
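To confirm where it breaks, a rough TCP-level check can be run from the server itself (a sketch; the host name and port 80 are placeholders for the values tested above):
$errno = 0;
$errstr = '';
$fp = fsockopen('my-server-ip-address', 80, $errno, $errstr, 5); // 5-second timeout
if ($fp === false) {
    echo "Cannot connect: $errno $errstr\n"; // e.g. a timeout, matching the warning above
} else {
    echo "TCP connection works\n";
    fclose($fp);
}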
Summary
The same issue occurs with wget and cURL.
The server can't reach itself using its IP or domain, but other systems can communicate with the server.
The server can reach itself only as localhost.
You can give this a try; I hope it helps you out.
$url = 'https://example.com';
$arrContextOptions = array(
    'http' => array(
        'method' => "GET",
        'header' => "Content-Type: text/html; charset=utf-8"
    ),
    "ssl" => array(
        "verify_peer" => false,
        "verify_peer_name" => false,
        "allow_self_signed" => true, // or "allow_self_signed" => false,
    ),
);
$response = file_get_contents($url, false, stream_context_create($arrContextOptions));
You can read more about stream contexts in the PHP manual: https://www.php.net/manual/en/function.file-get-contents.php

Steps for debugging php Curl on windows

I had a lot of trouble figuring out why my PHP cURL API worked fine on a Mac using MAMP but would not work under Windows.
I asked for debugging tips or useful information for finding cURL configuration issues under Windows.
The accepted answer contains a list of the steps that helped me get cURL working on Windows 7 32-bit.
If cURL still doesn't work, you can use file_get_contents to make POST requests. It works on all hosting providers, all operating systems, and locally.
$url = 'WhateverUrlYouWant';
$postdata = http_build_query(
    array(
        'id' => '202',
        'form' => 'animal',
        // .....
    )
);
$opts = array('http' =>
    array(
        'method'  => 'POST',
        'header'  => 'Content-type: application/x-www-form-urlencoded',
        'content' => $postdata
    )
);
$context = stream_context_create($opts);
$result = file_get_contents($url, false, $context);
echo $result;
Here is a list of steps for debugging cURL:
Check in the phpinfo() output that cURL IS enabled.
Verify that the extension_dir path is properly set in php.ini and that extension=php_curl.dll is uncommented.
Check that the environment variables are properly set as per: http://php.net/manual/en/faq.installation.php#faq.installation.addtopath
Run deplister.exe ext\php_curl.dll to verify all dependencies are correctly satisfied.
If all of the above works, check the output of CURLOPT_VERBOSE (see the sketch after this list), as per #hp95's suggestion in the following thread: No Response getting from Curl Request to https server
If you are reaching a site that uses SSL, check the following post: HTTPS and SSL3_GET_SERVER_CERTIFICATE:certificate verify failed, CA is OK
And fix it like this:
https://snippets.webaware.com.au/howto/stop-turning-off-curlopt_ssl_verifypeer-and-fix-your-php-config/
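As a rough illustration of the CURLOPT_VERBOSE step above (a sketch; the URL is a placeholder), the verbose trace can be captured into a stream and printed:
$curl = curl_init('https://example.com/');
$verboseLog = fopen('php://temp', 'w+');
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_VERBOSE        => true,
    CURLOPT_STDERR         => $verboseLog, // verbose output goes here instead of the server's STDERR
));
$response = curl_exec($curl);
if ($response === false) {
    echo 'cURL error: ' . curl_error($curl) . PHP_EOL;
}
rewind($verboseLog);
echo stream_get_contents($verboseLog); // the full handshake/request trace
curl_close($curl);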
Reverse
curl_close($curl);
print_r($response);
** this ordering is broken **
to
print_r($response);
curl_close($curl);
** print your results BEFORE closing, which destroys (empties) $response **

cURL - Protocol https not supported or disabled in libcurl (post PHP update)

I know there are dozens of questions about this particular topic, but the usual suggestions don't really seem to help. I have checked the server under the root account and the specific account I'm working with and in both cases (if they even can be different) SSL is listed as a feature and HTTPS is listed as a protocol.
My cURL functions were working fine until yesterday when we upgraded PHP from ~5.1/5.2 to 5.4.26. My assumption was that PHP and/or cURL were compiled without SSL support, but that doesn't seem to be the case.
If it helps, the functions are calling Appcelerator's cloud services. This is one of the login functions, the first to throw the "trying to get property of non-object" error because $res is false:
function login() {
    $url = 'https://api.cloud.appcelerator.com/v1/users/login.json?key=<MY_APP_KEY>';
    $options = array(
        CURLOPT_RETURNTRANSFER => TRUE,
        CURLOPT_POST => TRUE,
        CURLOPT_POSTFIELDS => array(
            'login' => '<MY_APP_LOGIN>',
            'password' => '<MY_APP_PASSWORD>'
        )
    );
    $curl_session = curl_init($url);
    curl_setopt_array($curl_session, $options);
    $res = curl_exec($curl_session);
    curl_close($curl_session);
    $this->session_id = json_decode($res)->meta->session_id;
}
Is it possible that even though SSL and HTTPS are listed, they're not actually in effect? Is there a way to check and, if necessary, fix that?
One possible problem is that you have more than one libcurl version installed on your machine, so your check as root returns info about a global installation while your PHP environment runs a different version.
In your PHP program you can instead run curl_version() and see what it reports regarding SSL support, versions, etc.
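For example, a minimal sketch of such a check (the keys used are those documented for curl_version()):
$info = curl_version();
echo $info['version'], PHP_EOL;     // libcurl version string
echo $info['ssl_version'], PHP_EOL; // e.g. "OpenSSL/1.0.1e" when SSL is compiled in
var_dump((bool) ($info['features'] & CURL_VERSION_SSL)); // true if SSL support is built in
var_dump(in_array('https', $info['protocols']));         // true if the https protocol is enabled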

SOAP-ERROR: Parsing WSDL: Couldn't load from - but works on WAMP

This works fine on my WAMP server, but doesn't work on the Linux master server!?
try {
    $client = new SoapClient('http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl', ['trace' => true]);
    $result = $client->checkVat([
        'countryCode' => 'DK',
        'vatNumber' => '47458714'
    ]);
    print_r($result);
}
catch(Exception $e) {
    echo $e->getMessage();
}
What am I missing here?! :(
SOAP is enabled
Error
SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl' : failed to load external entity "http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl"
Call the URL from PHP
Calling the URL from PHP returns an error:
$wsdl = file_get_contents('http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl');
echo $wsdl;
Error
Warning: file_get_contents(http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl): failed to open stream: HTTP request failed! HTTP/1.0 503 Service Unavailable
Call the URL from command line
Calling the URL from the Linux command line returns HTTP 200 with an XML response:
curl http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl
For some versions of PHP, SoapClient does not send HTTP user agent information. Which PHP versions do you have on the server vs. your local WAMP?
Try setting the user agent explicitly, using a stream context as follows:
try {
    $opts = array(
        'http' => array(
            'user_agent' => 'PHPSoapClient'
        )
    );
    $context = stream_context_create($opts);
    $wsdlUrl = 'http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl';
    $soapClientOptions = array(
        'stream_context' => $context,
        'cache_wsdl' => WSDL_CACHE_NONE
    );
    $client = new SoapClient($wsdlUrl, $soapClientOptions);
    $checkVatParameters = array(
        'countryCode' => 'DK',
        'vatNumber' => '47458714'
    );
    $result = $client->checkVat($checkVatParameters);
    print_r($result);
}
catch(Exception $e) {
    echo $e->getMessage();
}
Edit
There actually seem to be some issues with the web service you are using. The combination of HTTP over IPv6 and a missing HTTP User-Agent string seems to give the web service problems.
To verify this, try the following on your linux host:
curl -A '' -6 http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl
This IPv6 request fails.
curl -A 'cURL User Agent' -6 http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl
This IPv6 request succeeds.
curl -A '' -4 http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl
curl -A 'cURL User Agent' -4 http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl
Both of these IPv4 requests succeed.
Interesting case :) I guess your Linux host resolves ec.europa.eu to its IPv6 address, and that your version of SoapClient did not add a user agent string by default.
Security issue: This answer disables security features and should not be used in production!
Try this; I hope it helps:
$options = [
    'cache_wsdl' => WSDL_CACHE_NONE,
    'trace' => 1,
    'stream_context' => stream_context_create(
        [
            'ssl' => [
                'verify_peer' => false,
                'verify_peer_name' => false,
                'allow_self_signed' => true
            ]
        ]
    )
];
$client = new SoapClient($url, $options);
This issue can be caused by the libxml entity loader having been disabled.
Try running libxml_disable_entity_loader(false); before instantiating SoapClient.
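A minimal sketch of that, placed right before creating the client (note that libxml_disable_entity_loader() is deprecated as of PHP 8.0, where it is no longer needed):
// Re-enable the external entity loader in case other code disabled it globally.
libxml_disable_entity_loader(false);
$client = new SoapClient('http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl');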
This may be helpful for someone, although it is not a precise answer to this question.
My SOAP URL has a non-standard port (9087, for example), the firewall blocked that request, and I got this error each time:
ERROR - 2017-12-19 20:44:11 --> Fatal Error - SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://soalurl.test:9087/orawsv?wsdl' : failed to load external entity "http://soalurl.test:9087/orawsv?wsdl"
I allowed the port in the firewall and that solved the error!
Try changing
$client = new SoapClient('http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl', ['trace' => true]);
to
$client = new SoapClient('http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl', ['trace' => true, 'cache_wsdl' => WSDL_CACHE_MEMORY]);
Also (whether that works or not), check to make sure that /tmp is writeable by your web server and that it isn't full.
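A quick way to check that from PHP (a minimal sketch; the cache directory defaults to the soap.wsdl_cache_dir ini setting, usually /tmp):
$cacheDir = ini_get('soap.wsdl_cache_dir') ?: sys_get_temp_dir();
var_dump($cacheDir);                  // where WSDL files are cached
var_dump(is_writable($cacheDir));     // must be writeable by the web server user
var_dump(disk_free_space($cacheDir)); // and not full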
Try enabling the openssl extension in your php.ini if it is disabled.
This way I could access the web service without needing any extra arguments, i.e.:
$client = new SoapClient($url);
None of the above worked for me, so after a lot of research I ended up pre-downloading the WSDL file, saving it locally, and passing that file as the first parameter to SoapClient.
It is worth mentioning that file_get_contents($serviceUrl) returned an empty response for me, while the URL opened fine in my browser. That is probably why SoapClient also could not load the WSDL document. So I downloaded it with the PHP cURL library. Here is an example:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $serviceUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$wsdl = curl_exec($ch);
curl_close($ch);
$wsdlFile = '/tmp/service.wsdl';
file_put_contents($wsdlFile, $wsdl);
$client = new SoapClient($wsdlFile);
You can of course implement your own caching policy for the WSDL file, so it won't be downloaded on each request; a rough sketch follows.
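For instance, a sketch of such a policy (re-download only when the local copy is missing or older than one day; $serviceUrl is the same variable as above):
$wsdlFile = '/tmp/service.wsdl';
if (!file_exists($wsdlFile) || filemtime($wsdlFile) < time() - 86400) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $serviceUrl);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    $wsdl = curl_exec($ch);
    curl_close($ch);
    if ($wsdl !== false) {
        file_put_contents($wsdlFile, $wsdl); // refresh the cached copy
    }
}
$client = new SoapClient($wsdlFile);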
A 503 means the functions are working and you're getting a response from the remote server denying you. If you ever try to cURL Google results, the same thing happens, because they can detect the user agent used by file_get_contents and cURL and block those user agents as a result. It's also possible that the server you're accessing from has had its IP address blacklisted for such practices.
There are mainly three common reasons why these commands wouldn't work like the browser does in a remote situation:
1) The default user agent has been blocked (see the sketch after this list).
2) Your server's IP block has been blocked.
3) The remote host has proxy detection.
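For the user-agent case, a minimal sketch (the agent string is arbitrary; this mirrors the stream-context approach shown earlier, but for file_get_contents):
$context = stream_context_create(array(
    'http' => array(
        'method'     => 'GET',
        'user_agent' => 'Mozilla/5.0 (compatible; MyScript/1.0)', // any non-empty, descriptive agent
    ),
));
$wsdl = file_get_contents('http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl', false, $context);
The IP-block and proxy-detection cases can only be resolved on the remote side.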
After hours of analysis, reading tons of logs and searching the internet, I finally found the problem.
If you use Docker and PHP 7.4 (my case), you probably get this error because the default security level in OpenSSL is too high for the WSDL certificate, even if you disable verification and allow self-signed certificates in the SoapClient options.
You need to lower the seclevel in /etc/ssl/openssl.cnf from DEFAULT@SECLEVEL=2 to DEFAULT@SECLEVEL=1, or just add this to your Dockerfile:
RUN sed -i "s|DEFAULT@SECLEVEL=2|DEFAULT@SECLEVEL=1|g" /etc/ssl/openssl.cnf
Source: https://github.com/dotnet/runtime/issues/30667#issuecomment-566482876
You can verify it by running this in the container:
curl -A 'cURL User Agent' -4 https://ewus.nfz.gov.pl/ws-broker-server-ewus/services/Auth?wsdl
Before that change I got the error:
SSL routines:tls_process_ske_dhe:dh key too small
It was solved for me this way: every company whose host you are calling has a firewall, and this error occurs when your source IP is not allowed by that firewall. Contact the server administrator to add your IP, or make sure the target IP is defined in the server's firewall whitelist.
I use the AdWords API, and sometimes I have the same problem.
My solution is to add
ini_set('default_socket_timeout', 900);
in the file
vendor\googleads\googleads-php-lib\src\Google\AdsApi\AdsSoapClient.php line 65
and in the
vendor\googleads-php-lib\src\Google\AdsApi\Adwords\Reporting\v201702\ReportDownloader.php line 126
ini_set('default_socket_timeout', 900);
$requestOptions['stream_context']['http']['timeout'] = "900";
The Google package overwrites the default php.ini parameter.
Sometimes the page could connect to 'https://adwords.google.com/api/adwords/mcm/v201702/ManagedCustomerService?wsdl' and sometimes not.
If the page connects once, the WSDL cache will contain that page, and the program will be OK until the code refreshes the cache...
Adding ?wsdl at the end and calling the method:
$client->__setLocation('url?wsdl');
helped me.
I must have read all the questions about this over two days. None of the answers worked for me.
In my case I was lacking the cURL module for PHP.
Be aware that just because you can use cURL in the terminal, it does not mean that you have the PHP cURL module installed and active.
There was no error showing about it, not even in /var/log/apache2/error.log.
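A quick check from PHP itself (a minimal sketch; note that the CLI and Apache/FPM can load different php.ini files, so run it in the same SAPI that fails):
var_dump(extension_loaded('curl'));     // false means the PHP module is missing or not enabled
var_dump(function_exists('curl_init'));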
How to install the module (replace the version number with the appropriate one):
sudo apt install php7.2-curl
sudo service apache2 reload
I had the same problem.
From local machines everything worked (WAMP + PHP 5.5.38 or Vagrant + PHP 7.4), but from the production Linux server I got the error:
SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl' : failed to load external entity "http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl"
Using the Redirect Path plugin in Chrome I discovered a permanent redirect to HTTPS, but changing the URL to HTTPS didn't help.
Status Code | URL | IP | Page Type | Redirect Type | Redirect URL
301 | http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl | 147.67.210.30 | server_redirect | permanent | https://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl
200 | https://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl | 147.67.210.30 | normal | none | none
After a few attempts at different code solutions, our server provider helped me. He discovered a problem with IP forwarding over IPv6.
http://ec.europa.eu/
Pinging ec.europa.eu [147.67.34.30] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
The recommendation was to use stream_context_create with a socket option binding to 0:0; this forces IPv4.
// https://www.php.net/manual/en/context.socket.php
$streamContextOptions = [
    'socket' => [
        'bindto' => '0:0'
    ],
];
$streamContext = stream_context_create($streamContextOptions);
$soapOptions = [
    'stream_context' => $streamContext
];
$service = new SoapClient('https://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl', $soapOptions);
I had a similar error because I accidentally removed the [ServiceContract] attribute from my contract, yet the service host was still opening successfully. Blunders happen.
You may also try to see whether the endpoint supports HTTPS.
The solution below worked for me:
1- Go to php.ini; on Ubuntu with Apache it is at /etc/php/7.4/apache2 (note: replace 7.4 with your PHP version).
2- Remove the ; from the line ;extension=openssl to uncomment it.
3- Restart your web server: sudo service apache2 restart
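To verify the change took effect, a small sketch run through the web server:
var_dump(extension_loaded('openssl'));              // should be true after the restart
var_dump(in_array('https', stream_get_wrappers())); // the https:// wrapper needs openssl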
