Problems with PHP SOAP server caching response

I've successfully created client and server SOAP objects... but I'm having real problems with what I think is caching on the server side. I'm disabling all caching in both client and server scripts with:
ini_set("soap.wsdl_cache", "0");
ini_set("soap.wsdl_cache_ttl", "0");
ini_set("soap.wsdl_cache_enabled", "0");
But I seem to get exactly the same response from the server no matter what I do. I've changed the object names, changed the WSDL name, and even appended a timestamp to the object names so they are never the same from call to call. Then suddenly, after about 10 or 20 minutes, the response updates and I get a different result. I've checked phpinfo() and it says the cache TTL is a day (globally), so whatever is expiring after 10-20 minutes is clearly shorter-lived than that.
Any ideas about killing off any kind of caching?

You can try passing the options to SOAP objects:
$client = new SoapClient("some.wsdl", array('cache_wsdl' => WSDL_CACHE_NONE));
$server = new SoapServer("some.wsdl", array('cache_wsdl' => WSDL_CACHE_NONE));
If this does not help, try clearing the WSDL cache file. On Linux it is usually in the /tmp folder, and its name starts with wsdl-. If clearing this file does not help, maybe some other cache is in play? Is it just SoapServer, or are additional libraries involved?
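If you would rather clear those files from PHP than by hand, here is a minimal sketch, assuming the default soap.wsdl_cache_dir of /tmp (check your php.ini if yours differs):
<?php
// Delete every cached WSDL file from the SOAP cache directory
$cacheDir = ini_get('soap.wsdl_cache_dir');
if ($cacheDir === false || $cacheDir === '') {
    $cacheDir = '/tmp'; // assumed default cache directory
}
foreach (glob($cacheDir . '/wsdl-*') as $file) {
    unlink($file);
}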

I had the same issue, and trying to set:
new SoapClient("some.wsdl", array('cache_wsdl' => WSDL_CACHE_NONE))
did nothing.
In the end, I located the /tmp folder the server used to cache the WSDL file and just deleted it. Fixed!
That /tmp folder was not inside my virtual domain, but at the root of the server's directory tree.

Related

PHP CLI corrupts executed file

Problem description
Sometimes after running PHP CLI, the primary executed PHP file is erased. It's simply gone.
The file is missing / corrupted. Undelete (NetBeans History, revert delete) still shows the file, but it is not possible to recover it. It is also not possible to replace / reinstate the file under the identical filename.
Trial and error attempts
The issue occurs on three different computers, all running Windows 10 or Windows 11.
I've used different versions of PHP (php-7.3.10-Win32-VC15-x64, php-8.1.4-Win32-vs16-x64).
The code does no file I/O at all. It uses a ReactPHP event loop, listening on a websocket ("server_worklistobserver.php"):
<?php
require __DIR__ . '/vendor/autoload.php';
$context = array(
    'tls' => array(
        'local_cert' => "certificate.crt",
        'local_pk' => "certificate.key",
        'allow_self_signed' => true, // Set to false in production
        'verify_peer' => false
    )
);
// Set up websocket server
$loop = React\EventLoop\Factory::create();
$application = new Ratchet\App('my.domain.com', 8443, 'tls://0.0.0.0', $loop, $context);
// Set up controller component
$controller = new WorklistObserver();
$application->route('/checkworklist', $controller, array('*'));
$application->run();
die('-server stopped-');
The disappearance happens when PHP execution is cancelled, either by Ctrl-Break or, when running as a service, by a service stop/restart.
Execution is started with php.exe server_worklistobserver.php in a DOS box (cmd).
Running with administrator permissions has no effect. A hard disk scan finds no issues. The problem is rather persistent but not regular; it seems to happen by chance.
Associated PHP files are left intact.
The issue has never occurred on the Apache driven PHP execution.
Please help
What could I do differently? Has anyone had a similar experience? I can't find anything like this on the internet...
Thanks in advance.
Thanks so much, Chris! Indeed, I found the entries in my virus scanner's quarantine. I've added an exception rule and don't expect this to happen again.

XMPP (XMPPHP) session won't start

Fellows, I'm working on a new server and, at first, it all looked good. The ejabberd web admin runs OK and I was even able to create a user through that interface.
The situation is that the same application that ran fine on my previous server now freezes while waiting for the session to start, at this line:
$this->lnk->processUntil('session_start');
The $this->lnk->connect(); call works fine, but it seems the session can't be established. Any suggestions on where or what I should look at first?
Notes:
The XMPP application has been set up the same way as it was on the old server.
Here is the whole code:
$this->lnk = new XMPPHP_XMPP(
    $this->config['host'],
    $this->config['port'],
    $this->config['username'],
    $this->config['password'],
    $this->config['service'],
    $this->config['domain'],
    $printlog = false,
    $loglevel = XMPPHP_Log::LEVEL_VERBOSE
);
$this->lnk->useEncryption(true);
$this->lnk->connect();
$this->lnk->processUntil('session_start');
The issue was caused by $this->lnk->useEncryption(true);. Since my new server didn't have proper SSL/TLS settings, this line made the code freeze.
Possible fixes are disabling encryption or correcting your SSL/TLS configuration.
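For example, a minimal sketch of the first workaround (assuming an unencrypted connection is acceptable in your environment):
// Workaround: skip TLS until the server's SSL/TLS configuration is fixed
$this->lnk->useEncryption(false);
$this->lnk->connect();
$this->lnk->processUntil('session_start');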

PHP SOAP awfully slow: Response only after reaching fastcgi_read_timeout

The constructor of the SOAP client in my WSDL web service is awfully slow: I get the response only after the fastcgi_read_timeout value in my Nginx config is reached; it seems as if the remote server is not closing the connection. I also need to set the timeout to at least 15 seconds, otherwise I get no response at all.
I have already read similar posts here on SO, especially PHP SoapClient constructor very slow and its linked threads, but I still cannot find the actual cause of the problem.
This is the part which takes 15+ seconds:
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl");
It seems to be slow only when called from my PHP script, because the file opens instantly when accessed from any of the following:
wget from my server which is running the script
SoapUI or Postman (but I don't know whether they had cached it beforehand)
opening the URL in a browser
Ports 80 and 443 are open in the firewall. Following a suggestion from another thread, I found two workarounds:
Loading the wsdl from a local file => fast
Enabling the wsdl cache and using the remote URL => fast
But still I'd like to know why it doesn't work with the original URL.
It seems as if the web service does not close the connection; in other words, I get the response only after reaching the timeout set in my server config. I tried setting keepalive_timeout 15; in my Nginx config, but it does not help.
Is there any SOAP/PHP parameter which forces the server to close the connection?
I was able to reproduce the problem, and found a solution (it works, though maybe not the best one) in the accepted answer of a question linked from the one you referenced:
PHP: SoapClient constructor is very slow (takes 3 minutes)
As per the answer, you can adjust the HTTP headers using the stream_context option.
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl", array(
    'stream_context' => stream_context_create(
        array('http' =>
            array(
                'protocol_version' => '1.0',
                'header' => 'Connection: Close'
            )
        )
    )
));
More information on the stream_context option can be found in the SoapClient documentation: http://php.net/manual/en/soapclient.soapclient.php
I tested this using PHP 5.6.11-1ubuntu3.1 (cli)

file_get_contents returns empty string

I hesitated to ask this question because it looks weird. But anyway, just in case someone has already encountered the same problem...
The filesystem functions (fopen, file, file_get_contents) behave very strangely with the http:// wrapper:
It seemingly works: no errors are raised, and fopen() returns a resource.
It returns no data for known-good URLs (e.g. http://google.com/):
file() returns an empty array, file_get_contents() returns an empty string, and fread() returns false.
For intentionally wrong URLs (e.g. http://goog973jd23le.com/) it behaves exactly the same, except for a small delay (presumably the domain lookup), after which I get no error (although I should!) but an empty string.
allow_url_fopen is turned on.
curl (both the command-line and PHP versions) works fine, all other utilities and applications work fine, and local files open fine.
That error seems inapplicable here, because in my case it fails for every URL and host.
php-fpm 5.2.11
Linux version 2.6.35.6-48.fc14.i686 (mockbuild@x86-18.phx2.fedoraproject.org)
I fixed this issue on my server (running PHP 5.3.3 on Fedora 14) by removing the --with-curlwrappers flag from the PHP configuration and rebuilding PHP.
Sounds like a bug. But just for posterity, here are a few things you might want to debug.
allow_url_fopen: already tested
PHP under Apache might behave differently than PHP-CLI, which would hint at chroot/selinux/fastcgi/etc. security restrictions
local firewall: unlikely since curl works
user-agent blocking: this is quite common actually, websites block crawlers and unknown clients
transparent proxy from your ISP, which either mangles or blocks (PHP user-agent or non-user-agent could be interpreted as malware)
PHP stream wrapper problems
Anyway, first let's prove that PHP's stream handlers are functional:
<?php
if (!file_get_contents("data:,ok")) {
    die("Houston, we have a stream wrapper problem.");
}
Then try to see whether PHP makes real HTTP requests at all. First open netcat on the console:
nc -l 8000
And debug with just:
<?php
print file_get_contents("http://localhost:8000/hello");
From here you can try to communicate with PHP and see whether anything returns as you vary the response. Enter an invalid response into netcat first. If no error is thrown, your PHP package is borked.
(You might also try communicating over a "tcp://.." handle then.)
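For instance, once PHP has connected you could paste a minimal raw response into the netcat window (plain HTTP/1.0 text, nothing PHP-specific), then press Ctrl-C so the connection closes and PHP sees the end of the body:
HTTP/1.0 200 OK
Content-Type: text/plain

hello back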
Next up is experimenting with the http stream wrapper parameters. Use http://example.com/ literally; it is known to work and never blocks user agents.
$context = stream_context_create(array("http" => array(
    "method" => "GET",
    "header" => "Accept: xml/*, text/*, */*\r\n",
    "ignore_errors" => false,
    "timeout" => 50,
)));
print file_get_contents("http://www.example.com/", false, $context, 0, 1000);
I think ignore_errors is very relevant here. Also check http://www.php.net/manual/en/context.http.php and specifically try setting protocol_version to 1.1 (you may get a chunked and misinterpreted response, but at least we'll see whether anything returns).
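As a sketch, that experiment could look like this (protocol_version and the Connection: close header are simply the knobs to play with; both are documented http context options):
$context = stream_context_create(array("http" => array(
    "method" => "GET",
    "protocol_version" => 1.1,
    "header" => "Connection: close\r\n", // avoid hanging on a keep-alive connection
)));
print file_get_contents("http://www.example.com/", false, $context);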
If even this remains unsuccessful, then try to hack the http wrapper.
<?php
ini_set("user_agent" , "Mozilla/3.0\r\nAccept: */*\r\nX-Padding: Foo");
This will not only set the User-Agent, but also inject extra headers. If there is a problem with constructing the request inside the http stream wrapper, this might just catch it.
Otherwise try disabling any Zend extensions, Suhosin, Xdebug, APC and other core modules; there could be interference. Failing that, this is potentially an issue specific to the Fedora package. Try a newer version and see whether it persists on your system.
When you use the http stream wrapper, PHP creates an array called $http_response_header after file_get_contents() (or any of the other f-family functions) is called. It contains useful info on the state of the response. Could you var_dump() this array and see whether it gives you any more info?
It's a really weird error you're getting. The only thing I can think of is that something else on the server is blocking HTTP requests from PHP, but then I can't see why cURL would still work...
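A quick way to inspect it might look like this ($http_response_header materializes in the calling scope right after the call):
$body = file_get_contents("http://www.example.com/");
var_dump($http_response_header); // status line plus all response headers
var_dump(strlen($body));         // how many bytes actually arrived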
Is http stream registered in your PHP installation? Look for "Registered PHP Streams" in your phpinfo() output. Mine says "https, ftps, compress.zlib, compress.bzip2, php, file, glob, data, http, ftp, phar, zip".
If there is no http, set allow_url_fopen to on in your php.ini.
My problem was solved by dealing with SSL:
$arrContextOptions = array(
    "ssl" => array(
        "verify_peer" => false,
        "verify_peer_name" => false,
    ),
);
$context = stream_context_create($arrContextOptions);
$jsonContent = file_get_contents("https://www.yoursite.com", false, $context);
What does a test with fsockopen tell you?
Is the test isolated from other code?
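In case it helps, here is a self-contained sketch of such a test: it talks HTTP by hand over a plain TCP socket, bypassing the http:// wrapper entirely:
<?php
$fp = fsockopen("www.example.com", 80, $errno, $errstr, 10);
if (!$fp) {
    die("Socket error $errno: $errstr");
}
// Send a minimal HTTP/1.0 request by hand
fwrite($fp, "GET / HTTP/1.0\r\nHost: www.example.com\r\nConnection: close\r\n\r\n");
while (!feof($fp)) {
    echo fgets($fp, 1024);
}
fclose($fp);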
I had the same issue on Windows after installing XAMPP 1.7.7. Eventually I managed to solve it by adding the following line to php.ini (while keeping allow_url_fopen = On):
extension=php_openssl.dll
Take the userland reimplementation of file_get_contents() from PEAR's PHP_Compat package (http://pear.php.net/reference/PHP_Compat-latest/__filesource/fsource_PHP_Compat__PHP_Compat-1.6.0a2CompatFunctionfile_get_contents.php.html), rename it, and test whether the error also occurs with this rewritten function.

Force re-cache of WSDL in PHP

I know how to disable the WSDL cache in PHP, but what about forcing a re-cache of the WSDL?
This is what I tried: I ran my code with caching disabled, and the new methods showed up as expected. Then I re-enabled caching, but for some reason my old non-working WSDL showed up again. So: how can I force my new WSDL to overwrite the old cached copy?
I guess that when you disable caching it also stops writing to the cache, so when you re-enable the cache the old cached copy is still there and still valid. You could try (with caching enabled):
ini_set('soap.wsdl_cache_ttl', 1);
I used a time-to-live of one second because I think a value of zero disables the cache entirely but does not remove the entry. You will probably only want to add that line when you want to kill the cached copy.
In my php.ini there's an entry which looks like this:
soap.wsdl_cache_dir="/tmp"
In /tmp, I found a bunch of files named wsdl-[some hexadecimal string]
I can flush the cached wsdl files with this command:
rm /tmp/wsdl-*
Delete the old WSDL from the cache.
I'd try
$limit = ini_get('soap.wsdl_cache_limit');
ini_set('soap.wsdl_cache_limit', 0);
ini_set('soap.wsdl_cache_limit', $limit);
Or possibly set soap.wsdl_cache_ttl to 0 and back
