CakePHP 1.2 - Cache::delete doesn't work in prod - php

I have a curious problem. I have a shell file run by cron every 15 minutes to gather different RSS data. I use the Cache class in CakePHP to save the result like this:
echo 'Update cache...';
Cache::delete('AggregatedNews.getHome');
Cache::delete('AggregatedNews.getHome.fr');
Cache::delete('AggregatedNews.getHome.en');
Cache::write('AggregatedNews.getHome',$this->AggregatedNews->getHome());
Cache::write('AggregatedNews.getHome.fr',$this->AggregatedNews->getHome(array('AggregatedNews.language'=>'fr')));
Cache::write('AggregatedNews.getHome.en',$this->AggregatedNews->getHome(array('AggregatedNews.language'=>'en')));
echo 'Cache updated!';
This code works well on my computer and in the dev environment on the server, but in production nothing happens. Even if I manually delete the cache file to see whether Cache::write works, the result is the same. Does anybody have an idea?
Thanks!

The most likely reason is that you forgot to set write permissions on the tmp folder.
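If it helps, here is a quick diagnostic sketch you can run as the same user the cron job uses. The path is an assumption; adjust it to your install (CakePHP 1.2 keeps its file cache under app/tmp/cache by default):

```php
<?php
// Diagnostic sketch: confirm the user running the cron shell can write
// to CakePHP's cache directory. The path below is an assumption.
function cacheDirWritable($dir)
{
    return is_dir($dir) && is_writable($dir);
}

$dir = '/var/www/app/tmp/cache'; // hypothetical path -- adjust to yours
echo cacheDirWritable($dir)
    ? "OK: $dir is writable\n"
    : "Not writable by '" . get_current_user() . "': $dir\n";
```

If this prints the "Not writable" line under cron but not under your own account, the cron user simply lacks permissions on tmp.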

Related

PHP filemtime() not working - cache problem?

I am calling filemtime() from a PHP file executed by POST from a JavaScript/HTML app. It returns the same timestamp for a separate test HTML file every two seconds, even when I edit the test file with a text editor and can see its modification time (DTM) change in the local file system.
If I reload the entire app (Ctrl+F5), the reported timestamp stays the same. At times (once after 4 hours) the timestamp changes, but I don't know what makes that happen.
The PHP part of my code looks like this:
clearstatcache(true, $FileArg);
$R = filemtime($FileArg);
if ($R === false)
    echo "error: file not found";
else
    echo $R;
This code is called via synchronous Ajax, given only its PHP file name, using setInterval every 2 seconds.
Windows 10 Home, Apache 2.4.33 running locally for HTTP access, PHP 7.0.30.
ADDED:
The behavior is the same in Firefox, Chrome, Opera, and Edge.
The results are being cached: http://php.net/manual/en/function.filemtime.php
Note: The results of this function are cached. See clearstatcache() for more details.
It almost sounds like Windows is doing some write caching...
stat() on the other hand has an additional note:
Note:
Note that time resolution may differ from one file system to another.
It might be worth checking the stat() output.
edit
Maybe it's a bug, or Windows not playing nice, but you could also use shell_exec with the Windows command that shows the file's modification time.
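For what it's worth, the stat-cache behaviour is easy to reproduce and clear in isolation; a minimal sketch:

```php
<?php
// Sketch: filemtime() results are cached per process; clearstatcache()
// with a specific filename drops just that cached stat entry, so the
// next filemtime() call re-reads the file system.
$file = tempnam(sys_get_temp_dir(), 'mt');

touch($file, time() - 100);       // backdate the file
clearstatcache(true, $file);
$before = filemtime($file);

touch($file, time());             // simulate an edit in another process
clearstatcache(true, $file);      // without this, $after may be stale
$after = filemtime($file);

echo $after > $before ? "mtime updated\n" : "stale mtime\n";
unlink($file);
```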
News: it turned out to be an ordinary bug in my app. I had copied my Ajax call and forgotten to edit it to point at the test file, so it applied to one of my app files instead, and the DTM only changed when I edited that app file (FTAdjust.js).
When I specify the correct test file, the DTM updates just fine each time I edit it in another process.
It can sometimes be hard to find one's own bug even when it stares one in the face! I kept looking everywhere else but where the mistake was.
Is there a way to delete a thread from Stack Overflow, since it is irrelevant to others?

Magento 1.9 custom cache issue

I have an issue with Magento Custom Cache.
I have an Observer method launched by cron, in which I write a value to the cache:
Mage::app()->saveCache($visitorsCount, 'cached_google_analytics_visitors_count', [], $twoDaysInSeconds);
The value is successfully saved and I'm able to extract it from the cache at this point. And the files
mage---4ae_CACHED_GOOGLE_ANALYTICS_VISITORS_COUNT
and
mage---internal-metadatas---4ae_CACHED_GOOGLE_ANALYTICS_VISITORS_COUNT
are there too.
Now it's time to extract the value from the cache in my block, so I do it this way:
$visitorsCount = Mage::app()->loadCache('cached_google_analytics_visitors_count');
But it returns false. I've investigated, and the reason is that there is no CACHED_GOOGLE_ANALYTICS_VISITORS_COUNT entry in metadatasArray in the Zend_Cache_Backend_File class, even though the metadata file exists.
What's more, metadatasArray does contain this value while I'm writing to the cache.
I hope you can help.
Regards, Nikolay
I've found the reason for the error:
cron was running as a different user than the web server, so the PHP process didn't have permission to read the metadata file. I launched cron as the www-data user and it works correctly now.
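A quick way to catch this class of problem from the web-server side is a readability check (hedged sketch; the cache path below is hypothetical):

```php
<?php
// Sketch: verify a cache file written by one user (e.g. the cron user)
// is readable by the current process (e.g. the web-server user).
function cacheFileReadable($file)
{
    return is_file($file) && is_readable($file);
}

// hypothetical Magento var/cache path -- adjust to your install
$file = '/var/www/magento/var/cache/mage---internal-metadatas---4ae_CACHED_GOOGLE_ANALYTICS_VISITORS_COUNT';
if (!cacheFileReadable($file)) {
    echo "Cannot read $file (owner uid: "
        . (is_file($file) ? fileowner($file) : 'missing') . ")\n";
}
```

Run it once as the web-server user; if the file exists but is not readable, you have the same cron/web-server user mismatch.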

PHP Reload Time

Hello, I have a problem with my PHP. I'm working in two ways:
I upload a file to my FTP server
I save it locally and run it with MAMP (OS X)
Either way, after I save/upload the new file, it takes about 2-5 minutes until I can see the changes.
Example:
Old PHP:
<?php
echo "test";
?>
New PHP:
<?php
echo "test2";
?>
So I save the second file, but it can take about 2-5 minutes until I see the second text.
Can I change something in my PHP configuration, or is there another way to code in PHP?
This sounds like a caching problem. Try hitting Cmd+Shift+R* and see if the changes are instant then. If so, see this answer for how to disable the cache and prevent the problem.
Also, as loveNoHate points out in the comments, it is possible that this is a server- or ISP-side caching problem. Because you have the same problem running it locally on MAMP, however, it sounds like a browser issue.
* The Mac OS X shortcut. For future visitors: use Ctrl+F5 on Windows.
Since this is a Mac, you might want to do the following:
For Safari: Opt+Cmd+E to clear the cache, then Cmd+R to refresh.
For Chrome: hold down Cmd and Shift, then press R.

phpmyadmin token mismatch for long time idle

I installed phpMyAdmin 4.0.4.1 in my local development environment and set auth_type to config. I also provide the authentication credentials with these settings:
$cfg['Servers'][$i]['auth_type'] = 'config';
$cfg['Servers'][$i]['host'] = 'localhost';
$cfg['Servers'][$i]['password'] = 'somepassword';
But after it has been idle for a while, clicking any link shows a "token mismatch" error. Is there any way to increase its TTL, or to keep it alive permanently?
The picture above shows the error.
I solved this annoying problem by following the instructions below:
open /etc/php5/apache2/php.ini
find ;session.save_path = "/tmp" (this line may also look like ;session.save_path = "/var/lib/php5")
remove first semicolon from this line
restart apache by executing sudo service apache2 restart
FYI: I'm working on Ubuntu 12.04 with apache2, php5 and phpMyAdmin 4.0.5, so on different systems and servers the file path may differ a little.
In case of any trouble, check that the directory from step 2 is writable by the server.
Good luck.
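For reference, the change in php.ini amounts to uncommenting the line (the value shown is the Ubuntu default; yours may differ):

```ini
; before: commented out, so PHP may fall back to a path the server
; cannot actually use
;session.save_path = "/tmp"

; after
session.save_path = "/tmp"
```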
In the file libraries/common.inc.php, at line 1076, delete this part:
/*
 * There is no point in even attempting to process
 * an ajax request if there is a token mismatch
 */
if (isset($response) && $response->isAjax() && $token_mismatch) {
    $response->isSuccess(false);
    $response->addJSON(
        'message',
        PMA_Message::error(__('Error: Token mismatch'))
    );
    exit;
}
For me this seemed to be caused by my root partition being full, and I guess the error was triggered by PHP being unable to write to the session directory.
I had to turn my cookies on in my browser and it worked for me. (Using MAMP on OSX)
After doing everything recommended here and elsewhere with no success, I found out that my /tmp was full.
To check it, just run df from the command line.
It reports file system disk space usage.
In my case I had to remove some files to free space in /tmp, and the error was gone.
Clearing your browser cache should make it work again.
To stop this issue, delete the "tmp" folder and create a new one called "tmp", or just clear its contents.
Try using another browser, e.g. IE; if that works, remove any suspect Chrome extensions.
For me, pageXray was the problem.
ISSUE RESOLVED:
I just cleared the browsing history and data for the last 7 days. That solved the problem for me.
Try it.
I spent 2-3 days trying to solve this problem via Stack Overflow, but didn't find any working solution for my case.
Finally I solved it: I was running phpMyAdmin from localhost in Chrome, but after running it from Firefox the problem was gone.
So I think it was a cookie problem, not phpMyAdmin; you should try another browser.

session_start hangs

Since a few hours ago, our server hangs every time you call session_start().
For testing purposes I created a script which looks like this:
<?php
session_start();
?>
Calling it from the console hangs, and it can't even be stopped with Ctrl+C; only kill -9 works. The same happens when calling it via Apache. /var/lib/php/session/ stays empty, but permissions are absolutely fine: www can write, and it also has read permissions on all parent folders.
According to the admins, no changes were made on the server, and no special code is registered for sessions. The server is CentOS 4 or 5, and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas; any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many reasons for that, here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly for whatever reason, it causes session_start() to hang indefinitely on any future script executions.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x')
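A minimal sketch of such a handler (the class name and on-disk layout are my assumptions, not PHP internals); the key detail is that write() opens with 'w', which truncates an existing session file, whereas 'x' would fail once the file already exists:

```php
<?php
// Sketch of a file-based session handler whose write() uses fopen 'w'
// (overwrite) instead of 'x' (fail if the file exists).
class OverwritingSessionHandler implements SessionHandlerInterface
{
    private $path;

    public function open(string $path, string $name): bool
    {
        $this->path = rtrim($path, '/') ?: sys_get_temp_dir();
        return true;
    }

    public function close(): bool
    {
        return true;
    }

    public function read(string $id): string|false
    {
        $file = $this->path . '/sess_' . $id;
        return is_file($file) ? (string) file_get_contents($file) : '';
    }

    public function write(string $id, string $data): bool
    {
        // 'w' overwrites; 'x' would return false here as soon as the
        // session file exists, leaving the lock stuck.
        $fp = fopen($this->path . '/sess_' . $id, 'w');
        if ($fp === false) {
            return false;
        }
        fwrite($fp, $data);
        fclose($fp);
        return true;
    }

    public function destroy(string $id): bool
    {
        $file = $this->path . '/sess_' . $id;
        if (is_file($file)) {
            unlink($file);
        }
        return true;
    }

    public function gc(int $max_lifetime): int|false
    {
        return 0; // pruning of old session files is omitted in this sketch
    }
}

session_set_save_handler(new OverwritingSessionHandler(), true);
```

Register it before session_start(); the second argument makes PHP call session_write_close() automatically at shutdown.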
B. Never use the following (setting the session entropy file to "/dev/random"); it will cause session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
C.
session_start() needs a directory to write to.
You can get Apache plus PHP running in a normal user account. Apache will then of course have to listen on a port other than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created
Otherwise, your scripts will seem to 'hang' on session_start().
If this helps:
In my scenario, session_start() was hanging at the same time I was using the XDebug debugger within PHPStorm, the IDE, on Windows. I found that there was a clear cause: Whenever I killed the debug session from within PHPStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5 x64, PHP 5.2.10-1. A clean ANSI file in the web root containing nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
OK, I faced the same problem on two machines: a Mac mini with XAMPP and a Windows 10 machine with XAMPP.
On both, PHP took forever to run session_start(); both were on PHP 7.x.
I found that the session file was locked for reading and writing, so I added code to make PHP read the session file and immediately unlock it when done:
<?php
session_start([
    'read_and_close' => true,
]);
?>
or
<?php
// For PHP 5.x
session_start();
session_write_close();
?>
After this, PHP unlocks the session file and the problem is solved.
The problem:
I've experienced (and fixed) the problem where file-based sessions hang the request, and database-based sessions get out of sync by storing out-of-date session data (for example, storing each session save in the wrong order).
This is caused by any subsequent request that loads a session (simultaneous requests): Ajax, a video embed where the video file is delivered via a PHP script, a dynamic resource file (like a script or CSS) delivered via a PHP script, etc.
With file-based sessions, file locking prevents session writing, causing a deadlock between the simultaneous request threads.
With database-based sessions, the last request thread to complete becomes the most recent save; so, for example, a video delivery script will complete long after the page request and overwrite the since-updated session with old session data.
The fix:
If your Ajax or resource delivery script doesn't need to use sessions, it's easiest to just remove session usage from it.
Otherwise you'd best make yourself a coffee and do the following:
Write or employ a session handler (if not already doing so), as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples are available via a Google search).
In your session handler's write() function, prepend this code:
// processes may declare their session as read only ...
if (!empty($_SESSION['no_session_write'])) {
    unset($_SESSION['no_session_write']);
    return true;
}
In your Ajax or resource delivery PHP script, add the following code (after the session is started):
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately, if you need simultaneous requests that each load a session, it is required.
NOTE: if your Ajax or resource delivery script does actually need to write/save data, then you need to do it somewhere other than the session, such as a database.
Just put session_write_close(); before session_start();, as below:
<?php
session_write_close();
session_start();
.....
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas, I had a session_start() that died only in particular cases and scripts. The reason my session was dying was ultimately that I was storing a lot of data in it after a particularly intensive script, and the call to session_start() was exhausting the memory_limit setting in php.ini.
After increasing 'memory_limit', those session_start() calls no longer killed my script.
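For reference, the change is a single php.ini line (the value is an example; size it to your session data):

```ini
; example value -- pick one large enough for your unserialized session data
memory_limit = 256M
```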
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case it seems it was the NFS share locking the session; after restarting the NFS server and enabling only one web client node, sessions worked normally.
Yet another few cents that might help someone. In my case I was storing complex data with several different class objects in $_SESSION, and session_start() couldn't handle the unserialization because not every class was loaded at session_start(). The solution in my case was to serialize/JSONify the data before saving it into $_SESSION and to reverse the process after getting the data out of the session.
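A minimal sketch of that pattern (the key names are hypothetical): only a JSON string goes into $_SESSION, so session_start() never has to unserialize objects whose classes aren't loaded yet.

```php
<?php
// Sketch: keep $_SESSION free of class instances by encoding to JSON on
// write and decoding after session_start(). Key names are hypothetical.
$data = ['cart' => ['sku-1' => 2, 'sku-9' => 1], 'locale' => 'en'];

// writing: encode before putting it in the session
$_SESSION['checkout'] = json_encode($data);

// reading (after session_start()): decode back to a plain array
$restored = json_decode($_SESSION['checkout'], true);
```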
