Caching issue with CakePHP on a distributed server architecture - php

We have been using multiple web servers for our CakePHP application.
The problem is that there are two cache directories: server 2 clears its cache before doing any insert into its database, but server 1 doesn't know the database has changed, so server 1's cache is never cleared.
When a new web request comes to server 2, it creates fresh cache files and returns correct results. But when it comes to server 1, it shows the same stale results.
I wonder whether there is any way to share the cache directory among the physical servers without compromising performance.
We may add more web servers, so please recommend a good long-term solution for this.
Thanks for reading

I would look into using Memcache as your caching mechanism. You can set up the Memcache daemon (memcached) on one server and then connect both web servers to that one cache. See core.php for setup details. Of course you will also have to install the memcache PHP extension and the daemon, but it will be worth it.
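For what it's worth, here is a minimal sketch of the shared cache configuration in app/Config/core.php, assuming CakePHP 2.x and a hypothetical memcached host at 10.0.0.5; adjust names to your setup:

Cache::config('default', array(
    'engine'   => 'Memcache',                // needs the pecl memcache extension
    'servers'  => array('10.0.0.5:11211'),   // the one daemon both web servers point at
    'prefix'   => 'app_',                    // avoid key collisions with other apps
    'duration' => '+1 hours',
));

Because both web servers now read and write the same keys, a cache clear on server 2 is immediately visible to server 1.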

The easiest way would be to serve the cache directory from an NFS (file) server; you wouldn't have to change anything else in the application.
For performance I'd also lean toward Memcache, as bucho said, but it will likely require changes to your application code.
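If you go the NFS route, you can either mount the share directly over app/tmp/cache or point the file cache engine at the shared path explicitly. A rough sketch of the latter, assuming CakePHP 2.x and a hypothetical mount point /mnt/shared_cache that both web servers see:

Cache::config('default', array(
    'engine' => 'File',
    'path'   => '/mnt/shared_cache/',   // NFS mount shared by all web servers
    'mask'   => 0666,                   // let either server delete the other's cache files
));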

Related

Memcached and PHP sessions on multiple servers

I would like to know how memcached manages caching for PHP sessions. I want to design a PHP app that scales out, where each HTTP/PHP server includes a memcached layer (for the DB, app cache, and session caching). But since memcached doesn't replicate its data, when a user comes to webserver1 they won't see the same session on webserver2.
Do memcached1 and memcached2 need to be replicated to handle PHP sessions?
Thanks in advance.
Regards.
While I agree there is no real question here, we can try to help the OP understand how memcache works.
Memcache is an in-memory cache, and how you set it up depends on your current infrastructure.
For instance, if you only have one web server, you could install memcache on that same machine, along with the database layer. This improves performance because the site can get data from memcache (in memory) rather than from the database (on disk, and slower to read). Used this way it works well, but as your site needs more performance or scalability you would probably start up a cluster of web servers behind a load balancer.
This is when things can get a bit tricky. You have all these machines and you are thinking that you need memcache on every machine, so how do we replicate these instances? The simple answer is: you don't. If you have multiple web servers, the best method is to put memcache on its own server (or cluster behind a load balancer); that way every web server hits the same address for the memcache server(s).
You do not need to worry about keeping anything in sync, because memcache works by hashing each key to decide which server in the pool it is assigned to (when you have a cluster of memcache servers).
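To illustrate the hashing point: the client library decides which server in the pool owns a key, so every web server that uses the same server list looks in the same place. A rough sketch with the pecl memcached extension and two hypothetical cache hosts:

$mc = new Memcached();
$mc->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
$mc->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true); // keeps the key-to-server mapping stable
$mc->addServers(array(
    array('10.0.0.5', 11211),
    array('10.0.0.6', 11211),
));
// "user:42" hashes to exactly one of the two servers, and it is the same one
// no matter which web server runs this code, so nothing needs replicating.
$mc->set('user:42', array('name' => 'example'), 300);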
Based on this question it would appear that you would need to do one of the following:
1.) Read up on system architecture
2.) Hire someone to architect your systems layer.
My best suggestion would be to use a single server for your memcache instance and point the web servers at it, rather than trying to run memcache on each of the web servers.
Joseph.
I understand your point; I have already tested the architecture with a separate memcached server (and Redis too). My intention is to pack the application server into a unit (Docker) and then measure load parameters to deploy new instances, so the infrastructure can scale out.
I found this.
https://www.digitalocean.com/community/tutorials/how-to-share-php-sessions-on-multiple-memcached-servers-on-ubuntu-14-04
thanks for your reply!
regards.

PHP session on a new LAMP Linux server

Short introduction: I have one server running a LAMP stack with a login form, and two other servers with different IPs hosted in the same data center, so I can set up private networking between the servers.
To accomplish:
User 1 logs in on webserver 1, which starts a PHP session on webserver 2,
without giving access to webserver 3.
User 2 logs in on webserver 1, which starts a PHP session on webserver 3,
without giving access to webserver 2.
I have been looking into this and I think it can be done with memcache?
But I'm not sure; any help on how this setup could be configured would be greatly appreciated.
When you start using multiple application servers they should be stateless: no database, no sessions, no caches. All of these resources should be moved to their own servers that can be accessed by any part of the system that needs them.
Your choice of session store very much depends on your needs. If you have very low traffic you can get away with storing sessions in the database; otherwise look into something like Redis for high-performance storage.
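For reference, with the phpredis extension the switch to a shared Redis session store is just two ini settings; a sketch assuming a hypothetical Redis host at 10.0.0.7:

// Early in the front controller, before session_start(); every web server
// must point at the same Redis host for sessions to be shared.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://10.0.0.7:6379');
session_start();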
For sure you'll have to set up a server where session information can be stored and shared between the web servers. Memcache is a very good choice. Another option is a database server like MySQL, but that will perform poorly.
You can configure PHP to store session information in memcache simply by installing the memcache extension and modifying the .ini files. This is pretty easy.
The more difficult task will be passing the session ID between servers, because a session started on WS1 must be restored on WS2 but not on WS3 (see the sketch below).
To help you with this, please describe in more detail what the login and server-access flow looks like.
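To give a rough picture of the session-ID part, assuming both web servers already point session.save_handler/save_path at the same memcache instance: WS1 only has to hand the session ID to WS2 (a cookie on a shared domain, or a signed parameter), and WS2 can resume it. The parameter name below is purely hypothetical:

// On WS2: resume a session that WS1 created in the shared memcache store.
$incomingId = isset($_GET['sid']) ? $_GET['sid'] : null;  // however WS1 hands it over
if ($incomingId !== null && preg_match('/^[A-Za-z0-9,-]{22,64}$/', $incomingId)) {
    session_id($incomingId);   // tell PHP which stored session to load
}
session_start();               // whatever WS1 put in $_SESSION is now visible here
// Keeping WS3 out is an authorization check in your application code; the shared
// store itself cannot restrict which web server is allowed to read a session.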

Memcached on multiple servers in PHP

I have the following question regarding the memcached module in PHP:
Intro:
We're using the module to prevent the same queries from being sent to the database server, on a site with 500+ users at any given moment.
Sometimes (very rarely) the memcached process becomes defunct and all active users start sending queries to the database, so everything stops working.
Question:
I know that memcached supports multiple servers, but I want to know what happens when one of them dies. Is there some sort of balancer in the background that can say "Oh, server 1 is dead. I'll send everything to server 2 until server 1 comes back online", or is the load spread equally across all of them?
Possible solutions:
I need to know this because, if it isn't supported, our sysadmin can set up the current memcached server as a load balancer and balance the load between several other servers.
Should I ask him to set up the load balancing manually, or is this feature supported by default, and what are the risks of each approach?
Thank you!
You add multiple servers in your PHP script, not in Memcache's configuration.
When you use Memcached::addServers(), you can specify a weight for every server. In your case, you might give one Memcache server a higher weight than the other and have the second act only as a failover.
Using Memcached::setOption(), you can set how often a connection should be retried and set the timeout. If you know your Memcache servers die a lot, it might be worth setting these lower than the defaults, but it shouldn't be necessary.
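A sketch of what that could look like, with a heavily weighted primary and a lower-weight failover, plus timeouts tuned so a dead daemon doesn't stall page loads (hostnames are placeholders):

$mc = new Memcached();
$mc->addServers(array(
    array('memcache-primary.internal', 11211, 80),   // weight 80
    array('memcache-failover.internal', 11211, 20),  // weight 20
));
$mc->setOption(Memcached::OPT_CONNECT_TIMEOUT, 50);          // ms before giving up on a connect
$mc->setOption(Memcached::OPT_SERVER_FAILURE_LIMIT, 2);      // failures before a server is marked dead
$mc->setOption(Memcached::OPT_REMOVE_FAILED_SERVERS, true);  // eject dead servers from the pool
$mc->setOption(Memcached::OPT_RETRY_TIMEOUT, 30);            // seconds before retrying a dead server

Note that weights only shift how keys are distributed; if the primary drops out, keys that lived on it simply become cache misses that get rebuilt on the remaining server, so treat the cache as disposable data.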

PHP Sessions to handle Multiple Servers

All,
I have a PHP 5 web application written with Zend Framework and MVC. It is installed on two servers with the same setup: server X has PHP 5/MySQL/Apache and server Y has the same. We don't have a common DB server between the two.
The application works when accessed individually over HTTPS on server X and on server Y. But when we turn on load balancing with both servers up, the sessions get lost.
How can I make sure my sessions persist across servers? Should I maintain my DB on a third server and write sessions to it? If so, what's the easiest and most secure way to do it?
Thanks
memcached is a popular way to solve this problem. You just need to get it up and running (easy) and update your php.ini file to tell PHP to use memcached as the session storage.
In php.ini you would modify (the host/port below is a placeholder for your memcached server):
session.save_handler = memcache
session.save_path = "tcp://10.0.0.5:11211"
For the general idea: PHP Sessions in Memcached.
There are any number of tutorials on setting up the Zend session handler to work with memcached. Take your pick.
"Should I maintain my db on a third server and write sessions to it?"
Yes, one way to handle it is to have a third machine running the database that both web servers use for the application. I've done that for several projects in the past and it's worked well. The question with that approach is whether the bottleneck is at the web servers or at the database. If it's at the database, you won't see much improvement from load balancing the web servers; you may instead need to think about mirroring schemes for the database.
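If you do go the third-server database route, PHP (5.4+) lets you plug in a custom save handler; a condensed sketch assuming a hypothetical sessions table (id, data, modified) on that shared MySQL host, with error handling trimmed:

class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;
    public function __construct(PDO $pdo) { $this->pdo = $pdo; }
    public function open($savePath, $name) { return true; }
    public function close() { return true; }
    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';
    }
    public function write($id, $data)
    {
        $stmt = $this->pdo->prepare('REPLACE INTO sessions (id, data, modified) VALUES (?, ?, NOW())');
        return $stmt->execute(array($id, $data));
    }
    public function destroy($id)
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE id = ?');
        return $stmt->execute(array($id));
    }
    public function gc($maxlifetime)
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE modified < (NOW() - INTERVAL ? SECOND)');
        return $stmt->execute(array($maxlifetime));
    }
}

// Both web servers use the same shared database host (placeholder credentials).
$pdo = new PDO('mysql:host=db.internal;dbname=app', 'app_user', 'secret');
session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();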
Another option is to use the sticky sessions feature on your load balancer. What this will do is keep users on certain servers. So when user 1 comes to the site, they will be directed to server X. Every subsequent request will also be directed to server X. This allows you to not worry about persisting sessions between servers, as each user will continue to be directed to the server they have their session on.
The one downside of this is that when you take a web server out of the pool, half the users with a session will be logged out. So the effectiveness of this solution depends on how often you take servers out of the pool.

PHP Code on Separate Server From Apache?

This is something I've never seen done, and I'm not turning anything up in my research, but my boss is interested in the idea. We are looking at load-balancing options and wonder whether it is possible to have Apache and PHP installed on multiple servers, managed by a load balancer, but keep all the actual PHP code on one server, with the various Apache servers pointing to that one central code base?
NFS mounts, for instance, are certainly possible, but I wouldn't recommend them. A lot of the advantage of load balancing is lost, and you're reintroducing a single point of failure. For syncing code, an rsync cron job handles itself very nicely, or a manual rsync can be done on deployment.
What is the reason you would want this central code base? I'm about 99% sure there is a better solution than a single server dishing out code.
I believe it is possible. To add to Wrikken's answer, I can imagine NFS could be a good choice. However, there are some drawbacks and caveats. For one, when Apache tries to access files on an NFS share that has gone away (connection dropped, host failed, etc.) very bad things happen: Apache locks up and keeps trying to retrieve the file. The processes attempting to access the share do not die, for whatever reason, and it becomes necessary to reboot the server.
If you do wind up doing this, I would recommend an opcode cache such as APC. APC will cache the pre-processed PHP locally and eliminate round trips to your storage. Just be prepared to clear the opcode cache whenever you update your application.
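Clearing it can be a one-line step in the deployment process; a sketch assuming the classic apc extension (the newer APCu uses apcu_clear_cache() instead), and note it has to run through the web SAPI since the CLI keeps a separate cache:

// deploy-hook.php: flush the caches after the new code is in place
if (function_exists('apc_clear_cache')) {
    apc_clear_cache();        // opcode cache
    apc_clear_cache('user');  // user/data cache
}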
PHP has to run under something that acts as a web processor; Apache is the most popular. I've done NFS mounts across servers without problems. Chances are that if NFS is down, the network is down. But it doesn't take long to rsync files across servers to replicate them, and that is really a better idea.
I'm not sure what your content is like, but you can separate static files like JavaScript, CSS, and images onto their own server. lighttpd is a good, lightweight web server for things like this. Then you end up with a "dedicated" PHP server. You don't even need a load balancer for this setup.
Keep in mind that PHP stores sessions on the local file system by default. So if you are using sessions, you need to make sure users always return to the same server; otherwise you need to do something like storing sessions in memcache.
