I have a web application that consumes multiple SOAP APIs. I'm running PHP (7.2.34) and using SoapClient to connect to the API. Recently my hosting provider, MediaTemple, performed some mandatory updates, and directly after that, for reasons I am unable to determine, one of the APIs now takes 5+ seconds to complete a __soapCall() when the request is initiated from the browser on a site running PHP 7.2.
I have multiple subdomains on my server: 2 are running PHP 7.2 and the others PHP 8.0. It is only when a request is initiated from the browser, and only on the subdomains running PHP 7.2, that I experience these delays. I've tested making calls to this API through the terminal, with Postman, and through the subdomains running PHP 8.0, and in none of those setups is there any delay to speak of.
Here's the code I have been using without issues for some time now (at least 2-3 years).
$soapClient = new SoapClient($wsdl_location, [
    'location'           => $location,
    'connection_timeout' => 5,
    'cache_wsdl'         => WSDL_CACHE_DISK,
    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS,
]);

$arguments = []; // array of arguments to pass with the request
$response = json_decode(json_encode($soapClient->__soapCall($endpoint, [$arguments])), true);
I have tried decreasing connection_timeout, deleting the cached WSDL files, and adding the options below to the SoapClient options, but this last attempt actually seemed to increase the call time by a second or two.
'stream_context' => stream_context_create([
    'http' => [
        'protocol_version' => '1.1',
        'header'           => 'Connection: Close',
    ],
])
The thing that has me baffled the most is that this issue only affects 1 API and only occurs when called from a browser on a website running PHP 7.2.
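For what it's worth, here is a rough sketch of how I would break the timing down between loading the WSDL and making the call itself (same placeholder variables as in the code above; WSDL_CACHE_NONE is used only to force a fresh WSDL fetch while testing):
// Rough timing sketch (not production code): separate the time spent
// loading the WSDL from the time spent in the actual SOAP call.
// $wsdl_location, $location, $endpoint and $arguments are the same
// placeholders as above.
$t0 = microtime(true);
$soapClient = new SoapClient($wsdl_location, [
    'location'           => $location,
    'connection_timeout' => 5,
    'cache_wsdl'         => WSDL_CACHE_NONE, // force a fresh WSDL fetch for this test
    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS,
]);
$t1 = microtime(true);
$response = json_decode(json_encode($soapClient->__soapCall($endpoint, [$arguments])), true);
$t2 = microtime(true);
error_log(sprintf('WSDL load: %.3fs, SOAP call: %.3fs', $t1 - $t0, $t2 - $t1));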
Related
We have a Windows Server 2008 R2 machine running a web application built on Zend Framework 2, PHP 5.4.34 and Apache 2.4.3, which also hosts a SOAP web service.
But after migrating the source code to a Windows Server 2019 machine with a newer PHP 7.4.24, the web service appears to be broken and the following error message is shown:
[WSDL] SOAP-ERROR: Parsing WSDL: Couldn't load from 'HTTPS://xxx/xxx/xxx/AuthenticateTaskService/V1/auth?wsdl' : failed to load external entity
The code where the webservice is called:
$soapAuth = new SoapClient("HTTPS://xxx/xxx/xxx/AuthenticateTaskService/V1/auth?wsdl", array(
    'soap_version' => SOAP_1_2,
    'trace'        => 1,
    'exceptions'   => true,
    'style'        => SOAP_DOCUMENT,
    'use'          => SOAP_LITERAL,
    'encoding'     => 'UTF-8'
));
var_dump($soapAuth);
I have tried the web service both on the newer server and on the older server; both produce the same error.
Does anybody know what might be the cause here?
I've searched plenty of Stack Overflow and other internet pages, but can't seem to pinpoint the exact cause. I was thinking it might be because of the newer PHP version?
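One thing that may help narrow it down (just a diagnostic sketch, using the same masked URL from the error message): fetching the WSDL URL directly on the new server, outside of SoapClient, usually surfaces the transport error hidden behind "failed to load external entity".
// Diagnostic sketch: fetch the WSDL URL directly to see the underlying
// error. The URL is the same masked one from the error message.
$wsdlUrl = "https://xxx/xxx/xxx/AuthenticateTaskService/V1/auth?wsdl";
$body = @file_get_contents($wsdlUrl);
if ($body === false) {
    var_dump(error_get_last()); // typically a DNS, proxy or SSL/TLS verification problem
}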
About 1.5 years ago I wrote an integration with UPS (United Parcel Service) that generates shipping labels. I stopped using UPS in January and am now coming back to this integration.
Recently I've been constantly seeing "error fetching http headers" on almost every other request I make to the UPS API. Their production server returns this error usually every other request (but when it does, it keeps doing it 2-3 times in a row), and their development server returns this error constantly. My requests are reasonably spaced (at least 10 seconds apart).
The application is written in PHP (v5.3.28) and uses SoapClient with UPS's WSDL files.
From the experience of others with similar problems, it looks like it has something to do with UPS not wanting to keep a connection alive and my server persisting it for some reason.
So far I've tried:
increasing the default_socket_timeout and max_execution_time
passing keep_alive = 0 and high value connection_timeout to the constructor of the SoapClient
writing a wrapper around the SoapClient to destroy the connection right after a __call() and __soapCall() (roughly like the sketch after this list)
turning off HTTP Keep-Alive in IIS' common headers (please don't judge)
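The wrapper I mean is nothing fancy, roughly the sketch below: it simply builds a fresh SoapClient for every call so no connection can be reused between requests (the class name is made up).
// Rough sketch of the wrapper idea (class name is made up): a fresh
// SoapClient per call so no connection is ever reused between requests.
class OneShotSoapClient
{
    private $wsdl;
    private $options;

    public function __construct($wsdl, array $options = array())
    {
        $this->wsdl = $wsdl;
        $this->options = $options;
    }

    public function call($operation, array $arguments = array())
    {
        $client = new SoapClient($this->wsdl, $this->options);
        $result = $client->__soapCall($operation, array($arguments));
        unset($client); // drop the client (and its underlying socket) immediately
        return $result;
    }
}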
I'm running out of ideas on how to approach this problem or even where to continue diagnosing it. It seems that most people on SO were able to solve their issues with increasing the socket_timeout, but that did not help me.
Any pointers are appreciated.
// EDIT
I've figured it out. I was able to talk an admin into upgrading to PHP 5.6, and with this piece of code I was able to solve the problem:
$client = new SoapClient("wsdl_goes_here", array(
    'keep_alive'     => 0,        // do not reuse the connection between requests
    'soap_version'   => SOAP_1_1,
    'trace'          => 1,
    'stream_context' => stream_context_create(array(
        'ssl' => array(           // skip SSL peer verification
            'verify_peer'      => false,
            'verify_peer_name' => false
        )
    ))
));
A little background first: for some reason, making curl calls inside my Vagrant machine works only if I use the --tlsv1.2 option; without it I get:
cURL error 35: SSL connect error (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
So I've put that option into the configuration file, ~/.curlrc, so that every time I run curl https://myapi.com on the command line the option is applied automatically, and it works fine.
However, I am currently playing with Guzzle 6, which uses curl to make API calls in the background. I assumed that the curl Guzzle uses would pick up the same configuration file, ~/.curlrc, but apparently not, because I'm again getting: cURL error 35: SSL connect error.
This is the code that I'm using:
$client = new HttpClient(['defaults' => [
    'verify' => false
]]);

$response = $client->request('GET', 'https://myapi.com', ['curl' => [
    CURLOPT_SSLVERSION => 6,
]]);
As you can see, I even tried passing the TLSv1.2 value to curl (value 6 maps to TLSv1.2 according to the PHP curl documentation), but still nothing. Anybody have an idea what could be wrong here?
EDIT: yeah, just confirmed that Guzzle uses some other curl binary. I moved the original one to another location and can no longer access it from the command line, but after that Guzzle still returns the same error.
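To see which curl Guzzle actually ends up using, here is a quick diagnostic sketch: the PHP curl extension reports its own libcurl and SSL backend via curl_version(), and ~/.curlrc is only read by the command-line tool, not by the extension.
// Diagnostic sketch: show which libcurl / SSL backend the PHP curl
// extension (the one Guzzle uses) is built against. ~/.curlrc is only
// read by the command-line curl tool, never by the extension.
$info = curl_version();
printf("libcurl %s, SSL backend: %s\n", $info['version'], $info['ssl_version']);

// The magic number 6 used for CURLOPT_SSLVERSION is the same as the
// named constant:
var_dump(CURL_SSLVERSION_TLSv1_2 === 6); // bool(true)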
I am currently using AWS Elastic Beanstalk to launch a LAMP environment. Because Elastic Beanstalk is a multiple-instance environment, $_SESSION is not configured to work correctly, and it is recommended to use the DynamoDB Session Handler. This works fine for me with the following code inserted prior to session_start():
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Session\SessionHandler;

$dynamoDb = DynamoDbClient::factory(array(
    'key'    => 'XXXX',
    'secret' => 'XXXX',
    'region' => 'us-east-1'
));

$sessionHandler = SessionHandler::factory(array(
    'dynamodb_client' => $dynamoDb,
    'table_name'      => 'sessions',
));

$sessionHandler->register();
But this does not work app-wide and is causing issues getting phpMyAdmin up and running. How do I make this work app-wide?
AFAIK, there is no way to configure a custom session handler from php.ini alone; to use the DynamoDB Session Handler, you must bootstrap it somehow. For an app with multiple entry points, this presents a challenge. One idea you could try is using the auto_prepend_file INI setting to run the bootstrap code, as sketched below.
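A minimal sketch of that idea, reusing the bootstrap code from the question (the file name and paths below are only examples):
// session-bootstrap.php -- file name and paths are examples only.
// Enable it for every request via an INI setting, e.g. in php.ini:
//   auto_prepend_file = /var/app/current/session-bootstrap.php

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Session\SessionHandler;

require '/var/app/current/vendor/autoload.php';

$dynamoDb = DynamoDbClient::factory(array(
    'key'    => 'XXXX',
    'secret' => 'XXXX',
    'region' => 'us-east-1'
));

$sessionHandler = SessionHandler::factory(array(
    'dynamodb_client' => $dynamoDb,
    'table_name'      => 'sessions',
));

$sessionHandler->register();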
I'm a newbie to SOAP in PHP, so I apologize if I'm not precise in my description.
I have working SOAP clients consuming WSDLs on a provider's remote server (e.g. www.remoteaddress.com/wsdl/webservice.wsdl). I was wondering if I could speed up that first WSDL call (before it gets into the cache) by downloading the WSDL from the remote server and uploading it locally to the same folder that contains the PHP script that makes the call.
php.net says...
$client = new SoapClient(null, array(
    'location' => "http://localhost/soap.php",
    'uri'      => "http://test-uri/",
    'style'    => SOAP_DOCUMENT,
    'use'      => SOAP_LITERAL
));
So, a few questions please: does the location always have to be an HTTP address, or can a local path on the Apache server be used (so as to reference a folder at a higher level than public_html)? In other words, how do I reference in "location" the folder containing the locally uploaded WSDL? Would this speed things up? And if the local WSDL is in a public directory on my server, does that pose some sort of security risk? I tried some combinations with localhost as above, but none worked (the sketch below shows the kind of setup I mean).
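For concreteness, this is the kind of setup I'm asking about; the local path and the endpoint URL are placeholders, not my real values.
// Sketch of the setup in question (placeholder path and URL): the first
// argument points at a locally saved copy of the WSDL, while 'location'
// stays on the provider's real endpoint.
$client = new SoapClient('/home/myuser/wsdl/webservice.wsdl', array(
    'location'   => 'http://www.remoteaddress.com/soap/endpoint',
    'cache_wsdl' => WSDL_CACHE_DISK
));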
Thanks in advance for your help,
Pablo