Force re-cache of WSDL in PHP

I know how to disable the WSDL cache in PHP, but what about forcing a re-cache of the WSDL?
This is what I tried: I ran my code with caching disabled, and the new methods showed up as expected. Then I re-enabled caching, but for some reason my old, non-working WSDL showed up again. So: how can I force my new WSDL to overwrite the old cached copy?

I guess that when you disable caching it also stops writing to the cache, so when you re-enable the cache the old cached copy is still there and still considered valid. You could try (with caching enabled):
ini_set('soap.wsdl_cache_ttl', 1);
I put in a time-to-live of one second because I think a value of zero disables the cache entirely rather than removing the entry. You will probably only want that line in place while you want to kill the cached copy.
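For context, here is roughly how that might look as a one-off script; treat it as a sketch rather than a drop-in fix, and note the service URL is a placeholder, not something from the question:
<?php
// Run once with caching enabled: a 1-second TTL makes the stale cache entry
// expire, so constructing the client fetches and re-caches the current WSDL.
// Remove the ini_set() calls again afterwards.
ini_set('soap.wsdl_cache_enabled', '1');
ini_set('soap.wsdl_cache_ttl', '1');
$client = new SoapClient('https://example.com/service?wsdl'); // placeholder URL
var_dump($client->__getFunctions()); // the new methods should now be listed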

In my php.ini there's an entry which looks like this:
soap.wsdl_cache_dir="/tmp"
In /tmp, I found a bunch of files named wsdl-[some hexadecimal string]
I can flush the cached wsdl files with this command:
rm /tmp/wsdl-*
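If you would rather do the same from PHP (for example in a deploy script), something along these lines should work; it assumes the default "wsdl-" filename prefix and reads the cache directory from the ini setting:
<?php
// Remove all cached WSDL files from the configured cache directory,
// falling back to /tmp if soap.wsdl_cache_dir is not set.
$cacheDir = ini_get('soap.wsdl_cache_dir') ?: '/tmp';
foreach (glob($cacheDir . '/wsdl-*') ?: array() as $file) {
    unlink($file);
}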

Delete the old WSDL from the cache.

I'd try
$limit = ini_get('soap.wsdl_cache_limit');
ini_set('soap.wsdl_cache_limit', 0);
ini_set('soap.wsdl_cache_limit', $limit);
Or possibly set soap.wsdl_cache_ttl to 0 and back

Related

Does PHP's require() function cache its results in PHP 5.6?

In a PHP script intended to work on generic shared LAMP hosting, I am using require() as a function to read data from a file. The data file looks like this:
<?php
# storage.php
return [
// Rotating secrets
'last_rand' => '532e89355b78aafdb85f5f01f0eed20440d6bd9e0a2d6ae1bd17be4e1d7d21c7bb7a822a2077e3f4',
];
And I read it like this:
$data = require($root . '/storage.php');
$lastRand = $data['last_rand'];
In my script, I will read information from a configuration file using this approach. In some cases, the operation will re-write a new config file (using file_put_contents()) and then re-read it, again using require().
This has worked solidly so far, on a variety of LAMP hosts, but today I found a web host where the require() does not seem to notice changes in the file. This host is using PHP 5.6, whereas all the others I have tested are using 7+.
What's extremely odd is that require() gets the old contents of the file even though file_get_contents() can see the new version, as if require() is doing its own internal caching.
Failed fix attempts
I have even tried waiting for require() to catch up, in case some underlying cache needed time to expire:
$ok = true;
$t = microtime(true);
while ($oldRandom != $this->getFileService()->requirePhp($this->getStoragePath())['last_rand'])
{
    $elapsedTime = microtime(true) - $t;
    if ($elapsedTime > 4) // Wait 4 seconds
    {
        $ok = false;
        break;
    }
    usleep(1000);
}
That did not work either (the timeout just expires with no change in the results brought back from require()).
However, upon the next run of the PHP script, require() suddenly sees the new file contents.
I've also tried to delete the storage.php file before re-reading it, and that has no effect either! I was really expecting that to help.
Future actions
So, I don't understand this behaviour at all. To work around it I could:
rather than modifying a new config, saving it to file and then reloading it from file, I could refactor my code so that I save the new config to memory, thus avoiding having to reload it
refactor so that I use file_get_contents() rather than require() (for rather dull reasons it's not as convenient, but I'll prefer whatever works reliably).
However, these both feel like I am ignoring a bug that I should investigate. I happen to know also that this host (a free one) is nearly always under heavy CPU load. It's a 64-bit server running Linux on a rather old 2.6 kernel, and phpinfo() indicates the CGI/FastCGI server API is in use.
Can anything be done to mitigate this behaviour?
More attempts
Some helpful commenters suggest this problem could be related to opcaching, which would fit, given that require() is exactly the kind of operation an opcode cache intercepts. I've added this code after the file_put_contents() that re-writes the config:
$reset = opcache_reset(); // Returns true
$invalid = opcache_invalidate($this->getStoragePath()); // Returns false
However, it makes no difference - require() stubbornly reads the same value. I have confirmed this by doing a require() before and after the file write, and also a file_get_contents() to get the true contents.
When I originally encountered this error, I was hesitant to work around it, since it felt like a critical bug that ought not be ignored. However, now that the culprit is likely to be opcaching (and thus related to require() specifically), I think working around it by keeping values in memory is not a bad solution. I shall have to remember that require() is only good for one call per HTTP request, even if that does not hold true for all PHP installations.
I am conscious also that if I did try to work around this, I would have to contend with a number of opcache invalidation mechanisms, which is more complexity than I am comfortable with.
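For anyone else who lands here, one more thing worth trying right after the rewrite is a forced invalidation plus a stat-cache clear; whether it helps depends entirely on the host's opcache settings, and the path and contents below are illustrative, not taken from my real script:
<?php
// Sketch: rewrite the config, then ask both the opcode cache and the stat
// cache to forget the old copy before re-reading it.
$path = __DIR__ . '/storage.php'; // illustrative path
$newContents = "<?php\nreturn ['last_rand' => '" . md5(uniqid('', true)) . "'];\n";
file_put_contents($path, $newContents);
if (function_exists('opcache_invalidate')) {
    opcache_invalidate($path, true); // force = true, even if timestamps look unchanged
}
clearstatcache(true, $path);
$data = require $path;
var_dump($data['last_rand']);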
I ran into the same problem with a config file; the only solution that works for me so far is to change the name of the config file on each modification:
1 - load the config file
// get the config file name
$config_file = glob('config/config*.php')[0] ?? null;
// load the settings
if ($config_file) $settings = require_once($config_file);
2 - save the config file when needed
...
$data = ['var' => 'value', 'other-var' => 'other value'];
$vars = var_export($data, TRUE);
// new file name
$file_name = 'config/config-' . time() . '.php';
file_put_contents($file_name, "<?php\n return $vars;");
// delete the old config file
if ($config_file) @unlink($config_file);

Using PHP APC cache in CLI mode using dumpfiles

I've recently started using the APC cache on our servers. One of the most important parts of our product is a CLI (cron/scheduled) process, whose performance is critical. Typically the batch job consists of running some 16-32 processes in parallel for about an hour (they "restart" every few minutes).
By default, using the APC cache in CLI mode is a waste of time, because the opcode cache is not retained between individual invocations. But APC also contains the apc_bin_dumpfile() and apc_bin_loadfile() functions.
I was thinking these two functions might be used to make APC effective in CLI mode by compiling everything once outside the batch job, storing it in a single dump file, and having the individual processes load that dump file.
Does anybody have experience with such a scenario, or can you give good reasons why it will or will not work? Could any significant gains reasonably be had, either in memory use or performance? What pitfalls are lurking in the shadows?
Disclaimer: As awesome as APC is when it works in the CLI (and it is awesome), it can be equally frustrating. Use it with a healthy load of patience and be thorough; step away from the problem if you're spinning your wheels. Keep in mind you are working with a cache: when it seems like it's doing nothing, it really is doing nothing. Delete the dump file and start with just the basics; if that doesn't work, forget it and try a new machine or a new OS. If it is working, make a copy, then expand functionality piece by piece (there are loads of things that won't work). Each time it still works, commit or make a copy; add another piece, test again, and as a sanity check re-test the copies that were working before. Clichés or not: if at first you don't succeed, try, try again, but you can't keep doing the same thing expecting new results.
Ready? This is what you've been waiting for:
Enable apc for cli
apc.enable-cli=1
it is not ideal to create, populate and destroy the APC cache on every CLI request
- previous answer by unknown poster since removed.
You're absolutely right, that sucks. Let's fix it, shall we?
If you try to use APC under the CLI while it is not enabled, you will get warnings,
something like:
PHP Warning: apc_bin_loadfile(): APC is not enabled,
apc_bin_loadfile not available.
PHP Warning: apc_bin_dumpfile(): APC is not enabled,
apc_bin_dumpfile not available.
Warning: I suggest you don't enable CLI mode in php.ini; it is not worth the frustration. You are going to forget you did it and have numerous other headaches with other scripts. Trust me, it's not worth it; use a launcher script instead (see below).
apc_loadfile and apc_dumpfile in cli
As per the comment by mightye, we need to disable apc.stat or you will get warnings,
something like:
PHP Warning: apc_bin_dumpfile(): Excluding some files from apc_bin_dump[file].
Cached files must be included using full path with apc.stat=0.
launcher script - php-apc.sh
We will use this script to launch our APC-enabled scripts (e.g. ./php-apc.sh apc-cli.php) instead of changing the settings in php.ini directly.
#!/bin/sh
php -d apc.enable_cli=1 -d apc.stat=0 "$1"
Ready for the basic functionality? Sure you are =)
basic APC persisted - apc-cli.php
<?php
/** check if the dump file exists; you don't want to use file_exists() */
if (false !== $dump_file = stream_resolve_include_path('apc.dump')) {
    /** so where were we? let's have a look, shall we */
    if (false !== apc_bin_loadfile($dump_file)) {
        /** fetch what was stored last run, just for fun */
        if (false !== $value = apc_fetch('my.awesome.apc.store')) {
            echo "$value from apc\n";
        }
    }
}
/** store what gets fetched the next run, just for fun */
apc_store('my.awesome.apc.store', 'awesome in cli');
/** what a schlep, let's not do that all over again, shall we */
apc_bin_dumpfile(array(), null, 'apc.dump');
Notice: Why not use file_exists()? Because file_exists() == stat(), and we want to reap the reward of apc.stat=0. So: work within the include path; use absolute rather than relative paths, as returned by stream_resolve_include_path(); avoid include_once/require_once in favour of their non-*_once counterparts; and check your stat usage when not using APC (muchos important, señor) with the help of a StreamWrapper that echoes calls to its url_stat() method.
(A suitable StreamWrapper is outside the scope of this discussion; see url_stat.)
The smoke test
Using the launcher execute the basic script
./php-apc.sh apc-cli.php
A whole bunch of nothing should happen; that's what we want, right? Why else would you use a cache? If it did output anything, then it didn't work, sorry.
There should now be a dump file called apc.dump; see if you can find it. If you can't find it, then it didn't work, sorry.
Good, we have the dump file and there were no errors, so let's run it again.
./php-apc.sh apc-cli.php
What you want to see:
awesome in cli from apc
Success! =)
There are few things in PHP as satisfying as a working APC implementation.
nJoy!
I would definitely not use it in the CLI: when the process restarts, it's almost as if APC was never running in the first place!
The better way to use APC is to have it running on the web server itself all the time; that way, since it stays active, it will actually do what it's supposed to do!
I tried it with curl and APC, and it works.
Use this command in the CLI:
curl --data "param1=value2" http://testsite.com/test.php
It will post data to test.php, and you write your code in that file.

How to check if APC opcode cache is working fine in PHP?

I am using PHP with APC cache enabled:
apc.cache_by_default => On
apc.enabled => On
apc.ttl => 7200
Now, how can I know whether it is using the opcode cache 100% of the time?
For example, let us say that I have this PHP file:
<?php
echo "Hi there";
?>
Now after running this file, let us change it to echo "Bye there";
Shouldn't it echo "Hi there" since the TTL of 7200 seconds is not over yet? Am I right? If so, why does it echo "Bye there"? And if I am wrong how can I force it to use the opcode cache even after changing the file?
The simplest way that I could find to tell whether APC is working was to create a new PHP file containing this code...
<pre><?php
print_r(apc_cache_info());
It dumps the contents of apc_cache_info() to the screen (be careful, on a large, live site this could be lots of data!).
Every time you reload this PHP file, you should see num_hits increase, meaning that the opcode cache was used. A miss indicates that APC had to recompile the file from source (usually done on every change).
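If you prefer to check just the counters rather than dump everything, a minimal version (assuming the APC extension is loaded) might look like this:
<?php
// Print only the hit/miss counters from the opcode cache info.
$info = apc_cache_info();
printf("hits: %d, misses: %d\n", $info['num_hits'], $info['num_misses']);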
For a nicer interface to this information you can use the apc.php file that comes with APC. I copied this to my website directory using this console command (your folder locations may differ)...
cp /usr/share/doc/php-apc/apc.php /usr/share/nginx/html/apc-stats.php
Running this file in your browser gives you nice colours and graphs!
See this link for further info:
http://www.electrictoolbox.com/apc-php-cache-information/
I don't think you'll want to do it in production, but you could always use apc_cache_info().
function is_file_cached($file) {
    $info = apc_cache_info();
    foreach ($info['cache_list'] as $cache) {
        if ($cache['filename'] == $file) return true;
    }
    return false;
}
Note that this will iterate over every single file that's cached checking for the specified one, so it's not efficient.
As for your specific question: APC automatically invalidates the cache entry for a file when the file changes. So when you edit the file, APC silently detects this and serves the new version. You can disable this behaviour by setting apc.stat = 0.
Normally APC checks whether the requested file has been modified since it was cached. You can control this with apc.stat.

session_start hangs

For a few hours now, our server has been hanging every time session_start() is called.
For testing purposes I created a script which looks like this:
<?php
session_start();
?>
Calling it from the console hangs, and it can't even be stopped with Ctrl-C; only kill -9 works. The same happens when calling it via Apache. /var/lib/php/session/ stays empty, but the permissions are absolutely fine: www can write to it and also has read permission on all parent folders.
According to the admins, no changes were made on the server and there is no special session handler registered. The server is CentOS 4 or 5, and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas; any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many possible reasons for this; here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly for whatever reason, it causes session_start() to hang indefinitely on any future script executions.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x').
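As a rough illustration of that workaround, here is a deliberately minimal file-based handler (not the original poster's code; it skips locking and garbage collection entirely):
<?php
// Minimal sketch: the write step opens with 'w' (truncate or create) rather
// than 'x' (fail if the file already exists), so a leftover session file
// cannot block future writes. Not production-ready as-is.
$savePath = session_save_path() ?: sys_get_temp_dir();
session_set_save_handler(
    function ($path, $name) { return true; },               // open
    function () { return true; },                           // close
    function ($id) use ($savePath) {                        // read
        return (string) @file_get_contents("$savePath/sess_$id");
    },
    function ($id, $data) use ($savePath) {                 // write
        $fp = fopen("$savePath/sess_$id", 'w');             // 'w', not 'x'
        if ($fp === false) {
            return false;
        }
        fwrite($fp, $data);
        fclose($fp);
        return true;
    },
    function ($id) use ($savePath) {                        // destroy
        @unlink("$savePath/sess_$id");
        return true;
    },
    function ($maxlifetime) { return true; }                // gc
);
session_start();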
B. Never point the session entropy file at /dev/random (whether in php.ini or via ini_set(), as below); this will cause session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
C.
session_start() needs a directory to write to.
You can get Apache plus PHP running under a normal user account. Apache will then of course have to listen on a port other than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created
Otherwise, your scripts will seem to 'hang' on session_start().
If this helps:
In my scenario, session_start() was hanging at the same time I was using the XDebug debugger within PHPStorm, the IDE, on Windows. I found that there was a clear cause: Whenever I killed the debug session from within PHPStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5x64, PHP 5.2.10-1. A clean ANSI file in the root with nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
OK, I faced the same problem on two machines: a Mac mini running XAMPP and a Windows 10 machine running XAMPP.
On both, PHP spent forever in session_start(). Both run PHP 7.x.x.
I found that the session file was locked for reading and writing, so I added code to make PHP read the session file and immediately release the lock when done:
<?php
session_start([
'read_and_close' => true,
]);
?>
or
<?php
//For PHP 5.x
session_start();
session_write_close();
?>
After this, PHP unlocks the session file and the problem is solved.
The problem:
I've experienced (and fixed) the problem where file-based sessions hang the request, and where database-based sessions get out of sync by storing out-of-date session data (for example, storing the session saves in the wrong order).
This is caused by any subsequent request that loads a session (simultaneous requests): AJAX, a video embed where the video file is delivered via a PHP script, a dynamic resource file (such as a script or CSS) delivered via a PHP script, and so on.
With file-based sessions, file locking prevents the session from being written, causing a deadlock between the simultaneous request threads.
With database-based sessions, the last request thread to complete becomes the most recent save; so, for example, a video delivery script will complete long after the page request and overwrite the since-updated session with old session data.
The fix:
If your AJAX or resource delivery script doesn't need to use sessions, the easiest fix is simply to remove session usage from it.
Otherwise you'd best make yourself a coffee and do the following:
Write or employ a session handler (if you are not already doing so), as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples are available via a Google search).
In your session handler's write() function, prepend the following code ...
// processes may declare their session as read only ...
if (!empty($_SESSION['no_session_write'])) {
    unset($_SESSION['no_session_write']);
    return true;
}
In your AJAX or resource delivery PHP script, add this code (after the session is started) ...
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately, if you need simultaneous requests that each load a session, it is required. A hypothetical end-to-end sketch follows below.
NOTE: if your AJAX or resource delivery script does actually need to write/save data, then you need to do it somewhere other than the session, such as a database.
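Pulled together, a hypothetical write() method using that flag might look like the sketch below (the handler class name is made up, and parent::write() stands in for your real storage logic):
<?php
// Sketch only: the marker check sits at the top of write(), so a request that
// declared itself read-only never persists its possibly stale session data.
class MySessionHandler extends SessionHandler
{
    public function write($id, $data): bool
    {
        // processes may declare their session as read only ...
        if (!empty($_SESSION['no_session_write'])) {
            unset($_SESSION['no_session_write']);
            return true;
        }
        return parent::write($id, $data); // placeholder for your real storage logic
    }
}
session_set_save_handler(new MySessionHandler(), true);
session_start();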
Just put session_write_close() before session_start(),
as below:
<?php
session_write_close();
session_start();
.....
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas: I had session_start() dying only in particular cases and scripts. The reason my sessions were dying was ultimately that I was storing a lot of data in them after a particularly intensive script, and the call to session_start() was exhausting the memory_limit setting in php.ini.
After increasing memory_limit, those session_start() calls no longer killed my script.
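If you cannot edit php.ini on your host, a per-script bump before the session starts may be enough; the value below is just an example:
<?php
// memory_limit is changeable at runtime (PHP_INI_ALL), so this takes effect
// before session_start() unserializes the large session payload.
ini_set('memory_limit', '256M'); // example value; size it to your actual session data
session_start();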
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case it seems it was the NFS share that was locking the session; after restarting the NFS server and enabling only one node of web clients, the sessions worked normally.
Yet another few cents that might help someone. In my case I was storing complex data with several different class objects in $_SESSION, and session_start() couldn't handle the unserialization because not every class was loaded at session_start(). The solution in my case was to serialize/JSON-encode the data before saving it into $_SESSION and to reverse the process after getting the data back out of the session.
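A minimal sketch of that approach (the key name and data here are purely illustrative):
<?php
session_start();
// Writing: store a JSON string instead of raw objects, so session_start()
// never needs the classes available in order to unserialize the data.
$cartData = array('items' => array(array('sku' => 'A1', 'qty' => 2)));
$_SESSION['cart'] = json_encode($cartData);
// Reading (typically on a later request): decode back into a plain array.
$cartData = json_decode(isset($_SESSION['cart']) ? $_SESSION['cart'] : 'null', true);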

How to clear APC cache entries?

I need to clear all APC cache entries when I deploy a new version of the site.
APC.php has a button for clearing all opcode caches, but I don't see buttons for clearing all User Entries, or all System Entries, or all Per-Directory Entries.
Is it possible to clear all cache entries via the command-line, or some other way?
You can use the PHP function apc_clear_cache.
Calling apc_clear_cache() will clear the system cache and calling apc_clear_cache('user') will clear the user cache.
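In script form that is simply the following (on APCu, a single apcu_clear_cache() call replaces both):
<?php
apc_clear_cache();        // system (opcode) cache
apc_clear_cache('user');  // user cache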
I don't believe any of these answers actually works for clearing the APC cache from the command line. As Frank Farmer commented above, the CLI runs in a process separate from Apache.
My solution for clearing from the command line was to write a script that copies an APC-clearing script to the web directory, accesses it, and then deletes it. The script is restricted to being accessed from localhost.
apc_clear.php
This is the file that the script copies to the web directory, accesses, and deletes.
<?php
if (in_array(@$_SERVER['REMOTE_ADDR'], array('127.0.0.1', '::1')))
{
    apc_clear_cache();
    apc_clear_cache('user');
    apc_clear_cache('opcode');
    echo json_encode(array('success' => true));
}
else
{
    die('SUPER TOP SECRET');
}
Cache clearing script
This script copies apc_clear.php to the web directory, accesses it, then deletes it. It is based on a Symfony task. In the Symfony version, the calls are made to Symfony's wrappers around copy and unlink, which handle errors. You may want to add checks that they succeed.
copy($apcPaths['data'], $apcPaths['web']); // 'data' is a non-web-accessible directory
$url = 'http://localhost/apc_clear.php'; // use a domain name as necessary
$result = json_decode(file_get_contents($url), true); // decode as an array so the key check below works
if (isset($result['success']) && $result['success'])
{
    //handle success
}
else
{
    //handle failure
}
unlink($apcPaths['web']);
I know it's not for everyone, but: why not do a graceful Apache restart?
For example, in the case of CentOS/Red Hat Linux:
sudo service httpd graceful
Ubuntu:
sudo service apache2 graceful
This is not stated in the documentation, but to clear the opcode cache you must do:
apc_clear_cache('opcode');
EDIT: This seems to only apply to some older versions of APC..
No matter what version you are using, you can't clear the mod_php or FastCGI APC cache from a PHP CLI script, since the CLI script runs in a different process from mod_php or FastCGI. You must call apc_clear_cache() from within the process (or a child process) whose cache you want to clear. Using curl to run a simple PHP script is one such approach.
If you are running on an NGINX / PHP-FPM stack, your best bet is probably to just reload php-fpm:
service php-fpm reload (or whatever your reload command may be on your system)
If you want to clear the APC cache from the command line (use sudo if you need it):
APCu
php -r "apcu_clear_cache();"
APC
php -r "apc_clear_cache(); apc_clear_cache('user'); apc_clear_cache('opcode');"
Another possibility for command-line usage, not yet mentioned, is to use curl.
This doesn't solve your problem for all cache entries if you're using the stock apc.php script, but it could call an adapted script or another one you've put in place.
This clears the opcode cache:
curl --user apc:$PASSWORD "http://www.example.com/apc.php?CC=1&OB=1&`date +%s`"
Change the OB parameter to 3 to clear the user cache:
curl --user apc:$PASSWORD "http://www.example.com/apc.php?CC=1&OB=3&`date +%s`"
Put both lines in a script and call it with $PASSWORD set in your environment.
apc_clear_cache() only works within the same PHP SAPI whose cache you want cleared. If you have PHP-FPM and want to clear the APC cache, you have to do it through one of your PHP scripts, NOT the command line, because the two caches are separate.
I have written CacheTool, a command-line tool that solves exactly this problem: with one command you can clear your PHP-FPM APC cache from the command line (it connects to php-fpm for you and executes the APC functions).
It also works for opcache.
See how it works here: http://gordalina.github.io/cachetool/
As described in the APC documentation, to clear the cache run:
php -r 'function_exists("apc_clear_cache") ? apc_clear_cache() : null;'
If you want to monitor the results via JSON, you can use this kind of script:
<?php
$result1 = apc_clear_cache();
$result2 = apc_clear_cache('user');
$result3 = apc_clear_cache('opcode');
$infos = apc_cache_info();
$infos['apc_clear_cache'] = $result1;
$infos["apc_clear_cache('user')"] = $result2;
$infos["apc_clear_cache('opcode')"] = $result3;
$infos["success"] = $result1 && $result2 && $result3;
header('Content-type: application/json');
echo json_encode($infos);
As mentioned in other answers, this script will have to be called via HTTP or curl, and it will have to be secured (by IP, token, ...) if it is exposed in the web root of your application.
If you run FPM under Ubuntu, you need to run the command below (checked on 12 and 14):
service php5-fpm reload
The stable version of APC has an option to clear the cache in its own interface. To clear those entries you must log in to the APC interface.
APC has an option to set a username and password in the apc.php file.
apc.ini
apc.stat = "1" will force APC to stat (check) the script on each request to determine if it has been modified. If it has been modified it will recompile and cache the new version.
If this setting is off, APC will not check, which usually means that to force APC to recheck files, the web server will have to be restarted or the cache will have to be manually cleared. Note that FastCGI web server configurations may not clear the cache on restart. On a production server where the script files rarely change, a significant performance boost can be achieved by disabling stats.
The new APC admin interface has options to add/clear the user cache and the opcode cache. One interesting feature is the ability to add/refresh/delete directories from the opcode cache.
APC Admin Documentation
A good solution for me was simply to stop using the outdated user cache entries after a deploy.
If you add a prefix to each of your keys, you can change the prefix whenever the data structure of the cache entries changes. This gives you the following behaviour on deploy (a sketch follows below):
Outdated cache entries are not used after deploying changed structures
The whole cache is not cleared on deploy, so your page is not slowed down
Some old cache entries can be reused after reverting your deploy (if the entries haven't already been removed automatically)
APC removes old cache entries when they expire or when cache space runs out
This is possible for the user cache only.
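A minimal sketch of that prefixing idea (the constant and helper names are made up for illustration):
<?php
// Bump CACHE_PREFIX whenever a deploy changes the structure of cached values;
// old entries are simply never read again, and APC evicts them over time.
define('CACHE_PREFIX', 'v42:'); // e.g. derived from a release tag
function cache_set($key, $value, $ttl = 3600) {
    return apc_store(CACHE_PREFIX . $key, $value, $ttl);
}
function cache_get($key) {
    return apc_fetch(CACHE_PREFIX . $key);
}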
Create an APC.php file:
foreach (array('user', 'opcode', '') as $v) {
    apc_clear_cache($v);
}
Run it from your browser.
My work-around for a Symfony build having a lot of instances on the same server:
Step 1.
Create a trigger or something similar to set a file flag (e.g. a Symfony command), then create the marker file ..
file_put_contents('clearAPCU', 'yes sir i can buggy');
Step 2.
In the index file, at the start, add the clearing code and remove the marker file.
if (file_exists('clearAPCU')) {
    apcu_clear_cache();
    unlink('clearAPCU');
}
Step 3.
Run the app.
TL;DR: delete cache files at /storage/framework/cache/data/
I enabled APC but it wasn't installed (and also couldn't be installed), so it threw Call to undefined function Illuminate\Cache\apc_store().
"Ok, I'd just disable it and it should work".
Well, nope. Then I got stuck with Call to undefined function Illuminate\Cache\apc_delete()
We had a problem with APC and symlinks to symlinks to files: it seems to ignore changes in the files themselves. Somehow performing touch on the file itself helped. I can't tell what the difference is between modifying a file and touching it, but somehow it was necessary...
