If Server 1 has both my database and Memcached, as well as www.website1.co.uk, the site will work fine.
But what if I have the following scenario:
Server 1 - Database - Memcached - website1.co.uk
Server 2 - website2.co.uk
Server 3 - website3.co.uk
How would I set this up so website2 and website3 can both connect to, read from, and write to the Memcached instance on Server 1? (They can already connect to Server 1 and read and write to the DB without Memcached.)
Would I need to install Memcached on Server 2 and Server 3 just to be able to connect?
I've never used memcache before so it's a learning experience.
If Servers 1 to 3 are on the same network, you could install memcached on each of the application servers without worry, because Memcached is designed for a clustered architecture. This simply means that you can run as MANY instances as you want, but your application 'sees' them as one giant memory cache.
To paraphrase from the memcached project wiki:
// in your configuration file:
$MEMCACHE_SERVERS = array(
    "10.1.1.1", // web1
    "10.1.1.2", // web2
    "10.1.1.3", // web3
);

// at the 'bootstrapping' phase of your app somewhere:
$memcache = new Memcache();
foreach ($MEMCACHE_SERVERS as $server) {
    $memcache->addServer($server);
}
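Once the pool is configured, the client hashes each key to one of the servers in the list, so every web server reads and writes through the same logical cache. A minimal usage sketch (the key name and $profileData are just examples):

// store for 5 minutes; the client decides which server in the pool holds this key
$memcache->set('user:42:profile', $profileData, 0, 300);

// any web server configured with the same server list can read it back
$profile = $memcache->get('user:42:profile');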
Is your question related to scaling? If so:
I've seen some people say to have your cache server on the DB server itself. IMHO, this is not very effective as you would want to give your DB server as much physical RAM as you can possibly afford (depending on how large your web app is in terms of traffic and load).
I would allocate a portion of memory on each of the application servers (Server 2 and Server 3) for caching purposes. That way, if you want to scale out, you just provision one more application server, check out your source code, and add it to your network. The size of your memory cache then grows in a (more or less) linear manner as you add more application servers to your server pool.
All of the above obviously assumes the servers are on one network.
Related
I'm gathering sports data every minute with PHP scripts and storing it in Redis. It's all done on one Ubuntu 16.04 server. Let's call it the collector server.
My goal is to make that Redis-generated database available to our customers. The DB will be read-only for our customers.
The way we connect customers' servers to our Redis content is by pointing them directly at the Redis host:port of that collector server. If all our clients wanted to access the DB, I'm afraid the collector server would get stuck (40+ customers)...
That Redis content is updated every minute, and we are the owners of the customers' servers and content.
Is there anything to set up in Redis, or another way to have 40+ external servers reading the same Redis content without killing the collector server?
Before scaling, I recommend that you benchmark your application against Redis with real and/or simulated load - a single Redis server can handle an impressive load (see https://redis.io/topics/benchmarks), so you may be over-engineering this.
That said, to scale reads only, read about Redis' replication. If you want to scale writes as well, read about Redis cluster.
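To illustrate the read-scaling route: a replica configured to follow the collector can serve the customer-facing reads while the collector keeps taking writes. A minimal sketch using the phpredis extension - the host names are placeholders, and the replica is assumed to already follow the collector via Redis' replicaof/slaveof directive:

// writes go to the collector (master); 'collector.internal' is a placeholder host
$master = new Redis();
$master->connect('collector.internal', 6379);
$master->set('scores:latest', json_encode($scores)); // $scores: freshly collected data (illustrative)

// reads come from a read-only replica that follows the collector
$replica = new Redis();
$replica->connect('replica1.internal', 6379);        // placeholder host
$latest = json_decode($replica->get('scores:latest'), true);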
+1 for Itamar's answer. But one more important thing to keep in mind: letting your customers connect to your Redis instance directly is dangerous and should be avoided.
They will have your host:port and password, and they will be able to connect, write, modify, delete, and even shut down the server or change your password.
It is not scalable, and you'll probably notice that when it is already too late and too hard to change.
Some customers might have trouble connecting through routers and firewalls with the non-standard TCP port.
You should have one or more app servers that do the Redis communication for your customers.
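A sketch of what that app-server layer could look like: a small read-only PHP endpoint that exposes only the keys you choose, so the Redis host, port, and password never leave your infrastructure (the endpoint, whitelist, and key names are made up for illustration):

// read-only proxy: customers call this endpoint instead of Redis itself
$allowedKeys = array('scores:latest', 'fixtures:today'); // illustrative whitelist

$key = isset($_GET['key']) ? $_GET['key'] : '';
if (!in_array($key, $allowedKeys, true)) {
    http_response_code(404);
    exit;
}

$redis = new Redis();
$redis->connect('127.0.0.1', 6379); // Redis stays private to this server/network
// $redis->auth('secret');          // the password never reaches customers

header('Content-Type: application/json');
echo $redis->get($key);             // assumes values are stored as JSON strings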
I'm using Rackspace Cloud Servers. I have installed NGINX with PHP and Memcache.
When the web server is approaching capacity, I plan to clone the server and then add a load balancer on top, i.e. two servers with one load balancer managing the traffic between them. All this is done automatically using the Rackspace API.
However, I'm lost as to what will happen to Memcache. I would then have two Memcache servers, so the cache would no longer work as expected, since there would essentially be two separate caches.
Is it possible to just install Memcached on a dedicated server and have my main web server access it, so that when I move to the load-balanced setup, i.e. two web servers, they would both reference the same Memcached server?
Yes, you can have a single Memcached server and have all Memcache clients connect to and use it (rather than local installs of Memcached). You can use two Memcached servers if data inconsistency is acceptable and the cost of calculating any stored data twice is acceptable to you. It'll save you time in the short term, but ultimately it will probably complicate things.
In relation to Rackspace, make sure you're using the private direct IP address Rackspace gives you to network across machines instead of the external WAN IP. This will be faster, more secure, and won't count against your bandwidth allocation.
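With the single shared instance, both web servers are simply configured with the same server entry, pointing at the private address of the cache box (the IP and cached variable below are placeholders):

// identical on web1 and web2 (both behind the load balancer)
$memcache = new Memcache();
$memcache->addServer('10.180.0.5', 11211); // placeholder: private IP of the dedicated Memcached box

$memcache->set('homepage:html', $html, 0, 60); // whichever server rendered the page...
$cached = $memcache->get('homepage:html');     // ...the other server sees the same cached entry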
We have our database servers separate from our web server. The database servers are replicated (we know there is overhead here). Even with replication turned off, however, performance for a large number of queries in a PHP script is 4 times slower than on our staging server, which has the DB and Apache on the same machine. I realize that network latency and other network issues mean there is no way they will be equal, but our production servers are far more powerful and our production network is all on gigabit switches. We have tuned MySQL as best we can, but the performance marker is still at 4x slower. We are running nginx with Apache proxies and replicated MySQL DBs. UCarp is also running. What are some areas to look at for improving the performance? I would be happy with twice as slow on production.
It's difficult to do much more than stab in the dark given your description, but here are some starting points to try independently, which will hopefully narrow down the cause:
Move your staging DB to another host
Add your staging host to the production pool and remove the others
Profile your PHP script to ensure it's the queries causing the delay (a quick timing sketch follows this answer)
Use an individual MySQL server rather than via your load balancer
Measure a single query to the production pool and the staging server from the MySQL client
Run netperf between your web server and your DB cluster
Profile the web server with [gb]prof
Profile a MySQL server receiving the query with [gb]prof
If none of these illuminate anything other than the expected degradation due to the remote host, then please provide a reproducible test case and your full MySQL config (with sensitive data redacted.) That will help someone more skilled in MySQL assist you ;)
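For the PHP profiling point above, even a crude timer around the query calls will tell you whether the time is really going into MySQL or somewhere else. A rough sketch assuming mysqli, with placeholder credentials - run the same script against staging and production and compare:

// crude query timing; host, credentials and query are placeholders
$db = new mysqli('db.internal', 'user', 'pass', 'appdb');

$start  = microtime(true);
$result = $db->query('SELECT 1');            // substitute one of your real, slower queries here
$ms     = (microtime(true) - $start) * 1000;

error_log(sprintf('query took %.2f ms', $ms));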
Not every web request on a web site will (if properly designed) need a MySQL connection. Most likely, if you are requiring a connection on every HTTP request, your application will not scale and will start having issues very quickly.
Do more caching at the app server so you hit MySQL less often, e.g. use memcache (a cache-aside sketch follows this list).
Try to use persistent connections from the application to your MySQL servers.
Use MySQL data compression.
Minimize data transferred (limit your SELECTs, use column names instead of "*" in SELECT statements).
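A cache-aside sketch for the memcache suggestion above, assuming a Memcached instance reachable from the app server; the host, key, and helper function are placeholders:

$memcache = new Memcache();
$memcache->addServer('10.0.0.10', 11211);   // placeholder cache host

$key  = 'report:daily';
$data = $memcache->get($key);
if ($data === false) {
    // cache miss: hit MySQL once, then keep the result for 5 minutes
    $data = fetch_report_from_mysql();      // hypothetical helper wrapping your query
    $memcache->set($key, $data, 0, 300);
}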
Shamanic tuning:
Make sure that nothing slows down the network at the MySQL servers: big firewall rule sets, network filters, etc.
Add another (client-inaccessible) network interface for the app server and the MySQL server.
Tune the network connection between the app server and MySQL. Sometimes you can win several ms by creating hard-coded network routes.
Don't expect any of the above to work miracles, though - if the network connection itself is slow, none of the above will significantly speed it up.
I'm running a Pressflow site with over 40,000 unique visitors a day and almost 80,000 records in node_revision, and my site hangs randomly, giving a 'site offline' message. I have moved my DB to InnoDB and it still continues. I'm using my-huge.cnf as my MySQL config. Please advise me on a better configuration and the reasons for all this. I'm running on a dedicated server with more than 300 GB of disk and 4 GB of RAM.
The my-huge.cnf file was tuned for a "huge" server by the standards of a decade ago, but it barely qualifies as a reasonable production configuration now. I would check other topics related to MySQL tuning, and especially consider using a tool like Varnish (since you're already on Pressflow) to cache anonymous traffic.
I suspect that you are having excessive connections to the database server, which can exhaust your server's RAM. This is very likely to be the case if you are running Apache in pre-fork mode with PHP as an Apache module using persistent connections, and using the same server to serve images, CSS, JavaScript and other static content.
If that is the case, the way to go is to move the static content to a separate multi-threaded web server like lighttpd or nginx. That will stop Apache from forking too many processes that end up making PHP establish too many persistent connections, which exhaust your RAM.
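If persistent connections turn out to be part of the problem, note that with mysqli persistence is opt-in via a 'p:' host prefix, so dropping the prefix gives ordinary connections that are released when the request ends (host and credentials below are placeholders):

// persistent: each Apache child keeps its own MySQL connection open between requests
// $db = new mysqli('p:db.internal', 'user', 'pass', 'appdb');

// non-persistent: the connection is closed when the request finishes
$db = new mysqli('db.internal', 'user', 'pass', 'appdb');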
I have 2 load-balanced web servers and a DB server. Each one has 6 GB of RAM dedicated to memcache.
On the 2 web servers, I'm having issues with memcache where they sometimes don't seem to have access to the same pool of data.
Currently I have it set up so that each of the 2 web servers connects to localhost first and then adds the other 2 servers to the pool. Should I keep the connection string the same and have both of them connect to the DB server's memcache instance first, and then add themselves to the pool afterwards in the same order?
The order of the memcached servers in your list is important. Also important: never use "localhost". The key hashes are built based on the pool of servers you have provided; if your server lists differ (or are in a different order), the hashes come out differently.
http://code.google.com/p/memcached/wiki/NewConfiguringClient#Configuring_Servers_Consistently
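In practice that means every web server should build its client from exactly the same ordered list, using real addresses rather than 'localhost'. A sketch with placeholder IPs:

// identical on web1 and web2; the order matters because keys are hashed across this list
$MEMCACHE_SERVERS = array(
    '192.168.0.1', // web1 (use its real IP even on web1 itself, never 'localhost')
    '192.168.0.2', // web2
    '192.168.0.3', // DB server
);

$memcache = new Memcache();
foreach ($MEMCACHE_SERVERS as $server) {
    $memcache->addServer($server, 11211);
}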