Using Memcache on Load Balanced Servers - php

I'm using Rackspace Cloud Servers. I have installed NGINX with PHP and Memcache.
When the web server approaches capacity, I plan to clone it and add a load balancer in front, i.e. two servers with one load balancer distributing traffic between them. All of this is done automatically using the Rackspace API.
However, I'm lost as to what happens to Memcache. After cloning I would have two Memcache servers, so the cache would no longer behave as a single consistent store.
Is it possible to install Memcache on a dedicated server and have my main web server access it, so that when I move to a load-balanced setup, i.e. two web servers, they would both reference the same Memcache server?

Yes, you can run a single Memcached server and have all Memcache clients connect to it (rather than to local installs of Memcached). You can keep two Memcached servers if the data inconsistency is acceptable and the cost of computing any cached value twice is acceptable to you. That will save you time in the short term, but ultimately it will probably complicate things.
In relation to Rackspace, make sure you're using the private IP address Rackspace gives you for traffic between machines instead of the external WAN IP. This will be faster, more secure, and won't count against your bandwidth allocation.
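On the PHP side this is all it takes; a minimal sketch assuming the php-memcached extension and a made-up private (ServiceNet) IP:

```php
<?php
// Both web servers point at the same dedicated Memcached box over
// the private network. 10.180.0.5 is a made-up private IP; 11211 is
// memcached's default port.
$mc = new Memcached();
$mc->addServer('10.180.0.5', 11211);

$value = $mc->get('expensive_result');
if ($value === false && $mc->getResultCode() === Memcached::RES_NOTFOUND) {
    $value = compute_expensive_result(); // hypothetical helper
    $mc->set('expensive_result', $value, 300); // cache for 5 minutes
}
```

Because both web servers use the identical server list, they see the same cache regardless of which one the load balancer picks.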

Improve response time when database is on a dedicated server

Overview
I have a Laravel 9 application hosted with Digital Ocean. I use Laravel Forge to handle provisioning and management of the servers. I've created two separate servers for my production environment: one to host my Laravel application code and another for the database, which runs MySQL 8. These two servers are networked together and communicate over their VPC-assigned private IP addresses.
Problem
I initially provisioned one server to host my application. This single server hosted both the Laravel application code and database. I have an endpoint that I hit to measure the response time for my application.
With one server that hosts the codebase and database the average response time was: ~70ms
When I hit the same endpoint again but with my two dedicated servers the average response time was: ~135ms
Other endpoints in my application also have a significant increase in response time when the database lives on a dedicated server vs a single server that houses everything.
Things I have done
All database queries have been optimized. (n+1, etc.)
Both networked servers are in the same region.
Both networked servers' resource usage (CPU, RAM) is low and not capping out.
I've turned on Laravel's database config "sticky" option with no noticeable improvements.
I've enabled PHP OPcache for PHP 8.1.
Questions
How can I achieve a faster response time when my database is on a separate server than my codebase?
Am I sacrificing performance for scalability with dedicated servers?
TLDR
I'm experiencing slower response times in my Laravel application when the codebase and database run on separate dedicated servers vs hosting everything on one server.
Are your servers in the same data center and on the same VLAN?
Are you sure that you are connecting with your private VLAN IP address?
Some latency is expected if you need to connect to a database on another server. Have you tried to ping between the servers to see what the latency is?
Do you really need to have the web server and the database on separate servers? If so, I would probably try Digital Ocean's managed database. I have used that for several projects and it works great.
Q: How can I achieve a faster response time when my database is on a separate server than my codebase?
A: If hosted in the same data center, the connection latency should be 30ms or less. Tested between AWS RDS and EC2 instances. Your mileage could vary depending on host.
Q: Am I sacrificing performance for scalability with dedicated servers?
A: It's standard practice to host databases separately from your application. It would be unrealistic to do otherwise for bigger projects. You can soften the impact by selectively caching data that doesn't change regularly on the main server. Unfortunately, PHP is not particularly good at this kind of fine tuning, so you might be out of luck.
I can tell you that I currently run a central MySQL RDS instance that many Ubuntu EC2 instances communicate with. While the queries take around 30ms, smart use of caching gives the majority of my web requests a 30ms response time in their own right. I do have the advantage of using NodeJS, which is always doing things in the background without needing a request before performing work.
You may unfortunately find that you're running into one of the limitations of PHP.
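To illustrate the selective-caching suggestion above, here's a rough sketch in Laravel; the key name, TTL, and query are made up:

```php
<?php
// Serve rarely-changing data from the cache instead of crossing the
// network to MySQL on every request. Cache::remember only runs the
// closure on a cache miss.
use Illuminate\Support\Facades\Cache;

$posts = Cache::remember('popular_posts', 600, function () {
    // Executed only when 'popular_posts' is absent or expired.
    return \App\Models\Post::orderByDesc('views')->limit(10)->get();
});
```

With the application's cache driver set to a local store (e.g. file or a local Redis), cache hits never pay the cross-server round trip at all.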

redis: 40+ servers reading the same redis content

I'm gathering sports data every minute with PHP scripts and storing it in Redis. It all runs on one Ubuntu 16.04 server; let's call it the collector server.
My goal is to make that Redis database available to our customers. The DB will be read-only for our customers.
The way we connect customers' servers to our Redis content is by pointing them directly at the Redis host:port of the collector server. If all our clients accessed the DB, I'm afraid the collector server would get stuck (40+ customers)...
That Redis content is updated every minute, and we own the customers' servers and their content.
Is there a setup in Redis, or some other way, to have 40+ external servers reading the same Redis content without killing the collector server?
Before scaling, I recommend that you benchmark your application against Redis with real and/or simulated load - a single Redis server can handle an impressive load (see https://redis.io/topics/benchmarks) so you may be over engineering this.
That said, to scale reads only, read about Redis' replication. If you want to scale writes as well, read about Redis cluster.
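As a rough sketch of the read-scaling idea, assuming phpredis and made-up IPs: each replica would carry `replicaof 10.0.0.1 6379` in its redis.conf (replicas are read-only by default), customer reads go to the replicas, and the collector keeps all writes:

```php
<?php
// Writes go to the collector (master); customer reads hit a replica,
// so read traffic never loads the collector. IPs are made up.
$master = new Redis();
$master->connect('10.0.0.1', 6379);   // collector server

$replica = new Redis();
$replica->connect('10.0.0.2', 6379);  // read-only replica

$master->set('scores:latest', json_encode(['game' => 1, 'score' => '2-1']));
$latest = $replica->get('scores:latest'); // replicated within milliseconds
```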
+1 for Itamar's answer. But one more important thing to keep in mind: letting your customers connect to your Redis instance directly is dangerous and should be avoided.
They will have your host:port and password, and they will be able to connect, write, modify, delete, and even shut down the server or change your password.
It is not scalable, and you'll probably notice that when it is already too late and too hard to change.
Some customers might also have trouble connecting through routers and firewalls on the non-standard TCP port.
You should have an app server (or servers) that does the Redis communication on behalf of your customers.
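For instance, a minimal sketch of such an app server endpoint, assuming phpredis and a made-up key namespace; customers hit this over HTTP and never learn the Redis host, port, or password:

```php
<?php
// Thin read-only gateway in front of Redis. Redis itself stays bound
// to localhost, so customers cannot write, delete, or SHUTDOWN it.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$key = $_GET['key'] ?? '';
// Whitelist the key pattern so clients can only read the sports data.
if (!preg_match('/^sports:[a-z0-9:_-]+$/i', $key)) {
    http_response_code(400);
    exit('invalid key');
}

header('Content-Type: application/json');
echo json_encode(['key' => $key, 'value' => $redis->get($key)]);
```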

Use memcache on another IP

OK, here's my problem: I have one main server with a measly 128MB of RAM. I also have a few other servers, but they can't support certain things, which makes them unusable as a web server (for technical reasons). The thing is, these servers have 4GB of RAM each. I want to put them to good use as memcached buckets.
Is this possible?
Of course you will think I am crazy for not just using a 4GB server, but I can't: the service provider blocks certain ports (port 25 is the one causing me issues, as my web application needs to send mail).
I am using PHP. Please tell me, if this can work, what I need to install given that memcached won't be running on my web server.
Also, what ports will I have to forward?
Thanks in advance.
If your application needs memory, run memcached on those servers anyway! Use the server that has full access to the internet as nothing more than a gateway to them.
You can do this in a variety of ways: simple routing, NAT, proxying, or even just mapping some ports through.
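On the PHP side you'd install the php-memcached extension on the web server and run memcached on the 4GB boxes; memcached listens on TCP 11211 by default, so that's the port to open or forward. A sketch with made-up LAN IPs:

```php
<?php
// Use the 4GB machines as remote memcached buckets from the 128MB
// web server. IPs are made up; 11211 is memcached's default port.
$mc = new Memcached();
$mc->addServers([
    ['192.168.1.10', 11211, 50], // host, port, weight
    ['192.168.1.11', 11211, 50],
]);

$mc->set('greeting', 'hello from a remote bucket', 60);
var_dump($mc->get('greeting'));
```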

PHP/MySql clusters

I am currently planning a web application, and I want to design it so that it can eventually run on a cluster.
The cluster would be made up of a PHP web cluster, a MySQL cluster, and a standalone storage unit (maybe a cluster of that too; I really don't know how that works).
I want to know whether the code will be different from when PHP and MySQL are on the same machine, and if so, what would be different?
The fact that the web and database servers are on different physical machines wouldn't change your code at all. The only place you'd need to change anything is where you connect to the database: replace the localhost reference with the IP address or hostname of the database server.
A clustered web server may need a different approach to storing sessions. If you have multiple web servers behind a load balancer, consecutive requests from the same session may end up on different servers, so you should store session data somewhere central, like a shared memcache instance (see the sketch below).
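A minimal sketch of central session storage, assuming the php-memcached extension and a made-up IP for the shared memcache box:

```php
<?php
// Store PHP sessions in a central memcached instance so every web
// server behind the load balancer sees the same session data.
// 10.0.0.20 is a made-up IP.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '10.0.0.20:11211');

session_start();
$_SESSION['user_id'] = 42; // now visible from any server in the cluster
```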
Apart from a few of those issues, you should be fine regarding the web server.
As far as I know, MySQL and clustering are not an easy match. Although I wasn't really involved in the process, I know there was a lot of trouble getting two database servers to run together in our environment, and even now they are not really clustered. They synchronize, but only one is actively used while the other is a fallback server.

Does it matter which memcache server I connect to first?

I have 2 load-balanced web servers and a DB server. Each one has 6GB of RAM dedicated to memcache.
On the 2 web servers, I'm having issues with memcache where, at times, they don't seem to have access to the same pool of data.
Currently, each of the 2 web servers connects to localhost first and then adds the other 2 servers to the pool. Should I make the connection string identical on both, having each connect to the DB server's memcache instance first and then add the web servers to the pool in the same order?
The order of the memcached servers in your list is important, and so is never using "localhost". The key hashes are computed from the pool of servers you provide: if the server lists differ between clients, the hashes come out differently and keys map to different servers.
http://code.google.com/p/memcached/wiki/NewConfiguringClient#Configuring_Servers_Consistently
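In practice that means both web servers should build the pool from the exact same ordered list of network addresses. A sketch with made-up IPs, using the php-memcached extension:

```php
<?php
// Identical, ordered server list deployed to BOTH web servers.
// Never "localhost": it names a different machine on each box and
// changes the key hashing. IPs are made up.
$servers = [
    ['10.0.0.2', 11211], // web server 1's memcached
    ['10.0.0.3', 11211], // web server 2's memcached
    ['10.0.0.4', 11211], // DB server's memcached
];

$mc = new Memcached();
// Consistent (ketama) hashing keeps key placement stable across clients.
$mc->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
$mc->addServers($servers);
```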
