APC doesn't cache files for me, only user data. When I tested on localhost, APC cached all the files I used, but it doesn't work on my shared hosting. Is this a configuration issue?
These are the stats from my apc.php (APC 3.0.19):
In the picture above, APC isn't using any memory.
This is what phpinfo() gives me:
On localhost, I only access http://localhost/test.php, and APC caches localhost/test.php (type: file) immediately. But on shared hosting, I don't see it caching files. It can cache variables if I store them, but not files:
apc_add('APC TEST', '123');
echo apc_fetch('APC TEST'); // this works
I want APC to cache test.php when I access test.php.
Is there a configuration setting that prevents APC from caching files, or is this a limitation of shared hosting?
In response to your comment "APC is enabled, apc.cache_by_default = 1, and PHP is set up with CGI (I checked phpinfo())": that's the problem. If you run PHP over CGI, a new PHP process is created on every page load. Since APC is bound to the PHP process, it is instantiated anew on every page access too, so it obviously has no data in it. Your user-cache example only works because you store and fetch the variable within a single page load.
So: APC cannot work with PHP over CGI. Use FastCGI instead (it keeps the processes alive, which makes the cache work and is generally faster).
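One quick way to confirm which SAPI you're on is to print it from a test script (a minimal sketch; under plain CGI this typically prints "cgi", under FastCGI "cgi-fcgi"):

```php
<?php
// Prints the SAPI PHP was started with, e.g. "cgi", "cgi-fcgi",
// "apache2handler" or "cli". APC's opcode cache only survives between
// requests when the SAPI keeps the PHP process alive.
echo php_sapi_name(), "\n";
```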
APC in CGI mode on shared hosting is generally not feasible, although it may be possible. Depending on your application, it may also be a security risk. As nikic said, you should be able to get it working with FastCGI, but even that is not easy, depending on your host. Here is a detailed account from someone who got it working; it may give you some help in trying to get it to work in CGI mode:
FastCGI with a PHP APC Opcode Cache
If your hosting is set up with PHP in FastCGI mode, APC may not work. Can you check this with a standard phpinfo() page?
Edit: I stand corrected; the accepted answer is right. I confused CGI with FastCGI. CGI will not work. But I want to note that even FastCGI is not that great with opcode caching.
I have noticed that multiple users per day are being assigned the same session_id. I am using PHP 7.2 now, but looking back through the history of user sessions, this has been happening since I was using PHP 5.4.
I am just using php defaults of session_start(), no custom session handler.
I have read that the session_id is a combination of the client IP and time, but given that I am using a load balancer, could that be limiting the randomness of the IP addresses?
What is the proper way to increase the uniqueness of session_ids to prevent collisions when using a load balancer?
If you are using Nginx, you may want to check whether FastCGI micro-caching is enabled and disable it. This has caused such errors before, as noted in a bug report on PHP.net for PHP 7.1 running under nginx:
Bug #75496 Session ID Collision happened few times
After the first case [of collision] we changed the hash-entropy PHP settings in php.ini so the session_id is now 48 chars, but it didn't help prevent the second case.
Solution:
FastCGI micro-caching in nginx was caching 200 responses together with session cookies.
Of course, maybe it was set up wrongly on our server, but it definitely has nothing to do with PHP.
Please see:
https://bugs.php.net/bug.php?id=75496
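For reference, the dangerous pattern looks roughly like this (a hypothetical fragment, not the bug reporter's actual config). By default nginx refuses to cache responses that carry Set-Cookie; telling it to ignore that header makes it cache responses cookies and all, so one user's freshly issued session ID gets replayed to other clients:

```nginx
# DANGEROUS: micro-caching PHP responses while ignoring Set-Cookie.
# A cached 200 response is served to every client, session cookie included.
location ~ \.php$ {
    fastcgi_cache microcache;
    fastcgi_cache_valid 200 10s;
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;  # <- remove Set-Cookie
    fastcgi_pass unix:/run/php-fpm.sock;
}
```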
You've got a few options here (I'm assuming you're running PHP-FPM on each web node, though options 2 and 3 would probably also work running PHP as an Apache module):
1) Keep your load-balanced web nodes on Apache and point each one at the same upstream PHP-FPM server. Obviously the third box running the single PHP-FPM process might need to be beefier, since it's handling PHP parsing for both web nodes.
2) Change the session storage to point to a file location (probably an NFS or SMB share) that both servers can access. I'm not sure I've ever done this, honestly, but it seems like it would work. Really, your web files should probably be on an NFS/SMB share already, so you can deploy changes to a single location.
3) Spin up a Redis server and have both web nodes' PHP-FPM processes use it for sessions.
Option three is probably the best in most cases.
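If you go with option three, the php.ini side is just two directives (this assumes the phpredis extension is installed; the host name is a placeholder):

```ini
session.save_handler = redis
session.save_path = "tcp://redis.internal:6379"
```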
We have PHP5 FPM set up under Nginx, and we use Memcached as our session handler:
session.save_handler=memcached
My expectation is that, without fail (notwithstanding some fatal error like the death of our Memcached server), all sessions should make it to Memcached and explicitly NOT to disk.
However, upon inspecting our application, I've found sessions both in Memcached AND in /var/lib/php5/fpm/.
Some troubleshooting:
1) We are definitely getting new sessions set in Memcached. However, some sessions that I found on disk don't appear in Memcached.
2) The timestamps on the file-based sessions are definitely recent; there are files from the current minute.
3) Permissions on the files are those of the installation user, not root.
4) Despite point 3 above, there are SOME files owned by the root user and group. This I find weird: why would there be sessions owned by root? It would mean that any other user trying to read such a file (which has 0600 permissions, btw) would fail.
So, I guess my questions amount to:
Is there any scenario in which it is valid for new session files to be created on disk despite the fact that we use Memcached?
Any idea why we'd have session files that have a root ownership?
For context: I'm researching very sporadic session-expiry issues. After increasing Memcached memory limits and concurrent connections (which ultimately fixed a large number of the instances), we're still experiencing a small number of session expiries. Anyway, that is simply context; it might not be important.
The session files were created by php-cli, started by cron. The CLI configuration differs from the FPM one and uses the default file session handler.
Edit
Importantly, the cron job must either be hitting a piece of code that manually starts the session,
OR
the configuration directive session.auto_start must be set to true for PHP5-cli.
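A quick way to confirm this is to print the effective handler from both SAPIs: run the script below once with php on the command line and once through FPM, and compare (a minimal sketch):

```php
<?php
// Each SAPI loads its own php.ini, so the handler can differ:
// FPM may report "memcached" while the CLI reports the default "files".
printf("SAPI=%s handler=%s\n", PHP_SAPI, ini_get('session.save_handler'));
```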
I'm having a problem when we run an upgrade for our web application.
After the upgrade script completes and we access the web app via the browser, we get file-not-found errors on require_once() because we shifted some files around and PHP still has the old directory structure cached.
If we wait the default 120 seconds for the realpath_cache_ttl to expire, everything resolves itself, but this is not acceptable for obvious reasons.
So I tried using clearstatcache, with limited success. I created a separate file (clearstatcache.php) that only calls this function (it's a one-line file) and placed a call to it in our install script via curl:
<?php
clearstatcache(true); // true also clears the realpath cache
This does not seem to work; however, if I call this file via the browser, it immediately begins to work.
I'm running PHP version 5.3.
I started looking at the differences in request headers between my browser and curl, and the only thing I can see that might matter is the PHPSESSID cookie.
So my question is: does the current PHPSESSID matter (I don't think it should)? Am I doing something wrong with my curl call? I am using:
curl -L http://localhost/clearstatcache.php
EDIT: Upon further research, I've decided this probably has something to do with multiple Apache processes running. clearstatcache will only clear the cache of the current Apache process; when the browser makes a request, a different Apache process serves it, and that process still has the old cache.
Given that the cache is part of the Apache child process thanks to mod_php, your solution here is probably going to be restarting the Apache server.
If you were using FastCGI (under Apache or another web server), the solution would probably be restarting whatever process manager you were using.
This step should probably become part of your standard rollout plan. Keep in mind that there may be other caches that you might also need to clear.
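In practice that deploy step is a one-liner at the end of the upgrade script (service and binary names below are assumptions; adjust them to your distribution and init system):

```shell
# mod_php: gracefully restart Apache so every child process drops its
# realpath/stat caches without killing in-flight requests
apachectl graceful

# FastCGI/FPM setups: reload the PHP process manager instead
service php5-fpm reload
```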
I have set up an internal proxy of sorts using curl and PHP. The setup is like this:
The proxy server is a rather cheap VPS (which has slow disk i/o at times). All requests to this server are handled by a single index.php script. The index.php fetches data from another, fast server and displays to the user.
The data transfer between the two servers is very fast, and the bottleneck is only the disk I/O on the proxy server. Since there is only one index.php, I want to know:
1) How do I ensure that index.php is permanently "cached" in Apache on the proxy server? (Googling for "php cache", I found many custom solutions that cache the data output by PHP. I want to know if there are any pre-built modules in Apache that will cache the PHP script itself.)
2) Is the data fetched from the backend server always in RAM/cache on the proxy server (assuming there is enough memory)?
3) Does apache read any config files or other files from disk when handling requests?
4) Does Apache wait for logs to be written to disk before serving the content? If so, I will disable logging on the proxy server (or is there a way to ensure content is served first, without waiting for logs to be written)?
Basically, I want to eliminate disk i/o all together on the 'proxy' server.
Thanks,
JP
1) Install APC (http://pecl.php.net/apc). This will compile your PHP script once and keep it in shared memory for the lifetime of the webserver process (or a given TTL).
2) If your script fetches data and does not cache/store it on the filesystem, it will be in RAM, yes, but only for the duration of the request. PHP uses a 'share-nothing' strategy, which means all memory is released after a request. If you do cache data on the filesystem, consider using memcached (http://memcached.org/) instead to bypass file I/O.
3) If you have .htaccess support enabled, Apache will search for those files in each path leading to your PHP file. See "Why can't I disable .htaccess in Apache?" for more info.
4) Not 100% sure, but it probably does wait.
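The cache-aside pattern from point 2 can be sketched like this. ArrayCache is an in-process stand-in with the same get()/set() shape as the real Memcached class, so the snippet runs anywhere, and the backend fetch is a hypothetical callback:

```php
<?php
// In-process stand-in for Memcached: same get()/set() call shape.
class ArrayCache {
    private $data = array();
    public function get($key) {
        return isset($this->data[$key]) ? $this->data[$key] : false;
    }
    public function set($key, $value, $ttl) {
        $this->data[$key] = $value; // TTL ignored in this stand-in
    }
}

// Cache-aside: only hit the (slow) backend on a cache miss.
function fetch_via_cache($cache, $key, $fetch) {
    $body = $cache->get($key);
    if ($body === false) {
        $body = $fetch();              // e.g. curl to the fast backend
        $cache->set($key, $body, 60);  // keep for 60 seconds
    }
    return $body;
}

$cache = new ArrayCache();
$calls = 0;
$fetch = function () use (&$calls) { $calls++; return 'payload'; };
echo fetch_via_cache($cache, 'page:/', $fetch), "\n"; // backend hit
echo fetch_via_cache($cache, 'page:/', $fetch), "\n"; // served from cache
```

With the real extension you would swap ArrayCache for `new Memcached()` plus `addServer()`, leaving fetch_via_cache unchanged, and no file I/O is involved on either path.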
Why not use something like Varnish, which is explicitly built for this type of task and does not carry the overhead of Apache?
I would recommend tinyproxy for this purpose.
It does everything you want, very efficiently.
We have been using multiple web servers for our CakePHP application.
The problem is that there are two cache directories: server 2 clears its cache before doing any insertion into its database, but server 1 doesn't know the database has changed, so server 1's cache is not cleared.
When a new web request comes to server 2, it creates new cache files and returns good results. But when it comes to server 1, it shows the same old results :(.
I wonder, is there any way we can share the cache directory among the various physical servers without compromising performance?
We may add more web servers, so please recommend a good long-term solution.
Thanks for reading
I would look into using Memcache as your caching mechanism. You can set up the Memcache daemon (memcached) on one server and then connect both servers to the one cache. See core.php for setup details. Of course, you will also have to install the memcache PHP extension and the daemon, but it will be worth it.
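The core.php change would look roughly like this (a sketch in CakePHP 1.3-era syntax; the server address and prefix are placeholders):

```php
// app/config/core.php: point the default cache at the shared memcached daemon
Cache::config('default', array(
    'engine'  => 'Memcache',
    'servers' => array('cache01.internal:11211'),
    'prefix'  => 'myapp_',
));
```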
The easiest way would be to serve the cache directory from an NFS (file) server; you wouldn't have to change anything else.
For performance, I'd also stick with Memcache as bucho said, but it will likely require you to change your application code.