PHP/MySQL clusters - php

I am currently planning a web application and I want to design it so that it can eventually run on a cluster later.
The cluster would be made of a PHP web cluster, a MySQL cluster, and a standalone storage unit (maybe a cluster of those too; I really don't know how that works).
I want to know whether the code will be different from when PHP and MySQL are on the same machine, and if so, what would be different.

The fact that the web and database servers are on different physical machines wouldn't change your code in any meaningful way. The only place you'd need to change anything is where you connect to the database: replace the localhost reference with the IP address or hostname of the database server.
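For example, a minimal sketch with PDO, where the hostname, database name, and credentials are placeholders:

```php
<?php
// Same code either way; only the host in the DSN changes.
// Single-machine setup: host=localhost (or a unix socket).
// Dedicated DB server: point at its hostname or private IP instead.
$pdo = new PDO(
    'mysql:host=db.internal.example;dbname=app;charset=utf8mb4', // placeholder host
    'app_user',
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);
```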

A clustered web server may need a different approach to storing sessions. If you have multiple web servers behind a load balancer, consecutive requests from the same session may end up on different servers, so you should store the session data in a shared place, such as a central Memcached instance.
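A minimal sketch of that idea, assuming the PECL memcached extension is installed and a shared Memcached instance is reachable on the private network (the address below is a placeholder); the same settings can of course live in php.ini instead:

```php
<?php
// Every web server behind the load balancer stores sessions in the same
// central Memcached instance instead of on its own local disk.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '10.0.0.5:11211'); // placeholder private address

session_start();
$_SESSION['visits'] = ($_SESSION['visits'] ?? 0) + 1;
```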
Apart from a few of those issues, you should be fine regarding the web server.
As far as I know, MySQL and clustering are not the best of friends. Although I wasn't really involved in the process, I know there has been a lot of trouble getting two database servers to run together in our environment, and even now they are not really clustered. They synchronize, but only one is actively used while the other is a fallback server.

Related

Improve response time when database is on a dedicated server

Overview
I have a Laravel 9 application which is hosted with Digital Ocean. I use Laravel Forge to handle provisioning of the servers, management, etc. I've created two separate servers for my production environment. One to host my Laravel application code and another for the database which runs MySQL 8. These two servers are networked together and communicate over their VPC assigned private IP addresses.
Problem
I initially provisioned one server to host my application. This single server hosted both the Laravel application code and database. I have an endpoint that I hit to measure the response time for my application.
With one server that hosts the codebase and database the average response time was: ~70ms
When I hit the same endpoint again but with my two dedicated servers the average response time was: ~135ms
Other endpoints in my application also have a significant increase in response time when the database lives on a dedicated server vs a single server that houses everything.
Things I have done
All database queries have been optimized (N+1 issues, etc.).
Both networked servers are in the same region.
Both networked servers' resource usage (CPU, RAM) is low and not capping out.
I've turned on Laravel's database config "sticky" option with no noticeable improvement (see the config sketch after this list).
I've enabled PHP OPcache for PHP 8.1.
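For reference, the "sticky" option belongs to Laravel's read/write connection configuration; with a single database host the read and write connections are identical, so it would not change anything, which may explain the lack of improvement. A sketch of config/database.php with placeholder private IPs:

```php
<?php
// config/database.php (excerpt) - hypothetical hosts for illustration.
return [
    'connections' => [
        'mysql' => [
            'driver'   => 'mysql',
            'read'     => ['host' => ['10.0.0.20']], // replica / read host
            'write'    => ['host' => ['10.0.0.10']], // primary / write host
            'sticky'   => true, // reuse the write connection after a write in the same request
            'database' => env('DB_DATABASE'),
            'username' => env('DB_USERNAME'),
            'password' => env('DB_PASSWORD'),
            'charset'  => 'utf8mb4',
        ],
    ],
];
```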
Questions
How can I achieve a faster response time when my database is on a separate server than my codebase?
Am I sacrificing performance for scalability with dedicated servers?
TLDR
I'm experiencing slower response times in my Laravel application when the codebase and database run on separate dedicated servers vs hosting everything on one server.
Are your servers in the same data center and on the same VLAN?
Are you sure that you are connecting with your private VLAN IP address?
Some latency is expected if you need to connect to a database on another server. Have you tried to ping between the servers to see what the latency is?
Do you really need to have the web server and the database on separate servers? If so, I would probably try Digital Ocean's managed databases. I have used them for several projects and they work great.
Q: How can I achieve a faster response time when my database is on a separate server than my codebase?
A: If hosted in the same data center, the connection latency should be 30ms or less. Tested between AWS RDS and EC2 instances. Your mileage could vary depending on host.
Q: Am I sacrificing performance for scalability with dedicated servers?
A: It's standard practice to host databases separately from your application. It would be unrealistic to do otherwise for bigger projects. You can soften the impact by selectively caching data that doesn't change regularly on the main server. Unfortunately, because PHP starts fresh on every request, it is not particularly good at this kind of in-process fine tuning, so you might be out of luck without an external cache (or an extension such as APCu).
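As an illustration of that caching idea in a Laravel 9 app (the cache key, TTL, and table are made up):

```php
<?php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

// Serve rarely-changing data from the local cache so most requests
// never pay the cross-server round trip to MySQL.
$plans = Cache::remember('billing.plans', 600, function () {
    // Only runs on a cache miss; the result is cached for 600 seconds.
    return DB::table('plans')->orderBy('price')->get();
});
```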
I can tell you that I currently run a central MySQL RDS instance that many Ubuntu EC2 instances communicate with. While the queries take around 30ms, smart use of caching gives the majority of my web requests a 30ms response time in their own right. I do have the advantage of using NodeJS, which is always doing things in the background without needing a request before performing work.
You may unfortunately find that you're running into one of the limitations of PHP.

Can a computer run a localhost (MAMP etc.) as a backup if the internet is not available - and update the server database?

I have been looking into creating a custom MySQL point-of-sale system so that there is one centralised database for inventory levels between multiple stores, online sales, etc.
The biggest problem I see is the unlikely event that the internet drops out in the brick-and-mortar stores. If this were to happen, could it be set up so that the POS system runs off a local MySQL database on that computer (using MAMP or something similar) and then, once the internet is available again, automatically syncs the databases to update sales and inventory levels?
Regarding how the POS system itself would be accessed without internet: I was thinking that the POS system would run on the server when the internet is available, and then when the connection drops out it would run from files stored on the machine, pointing to the local database on that machine.
Yes, a minimal and viable solution would just be to have all of the POS data entered locally as well as on the remote database; the local copy then serves as a sort of backup in case anything happens to the central DB.
As far as automating the 'fix' of the central DB after an outage, maybe the best way is to have the central system request sales data from the local DBs of each store. If the workflow is set up like this, then you don't really have to do anything 'special' about internet outages.
The problem here is obviously writes. You can use replication to always have a local readable copy of the database, but it's tricky to have multiple masters when using replication. I haven't used MySQL Cluster, but it may be what you need.
But since the problem is writes, you can possibly implement the writing part of the POS system as a service you send messages to. When the network is down, queue the messages and send them when it comes back online.
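A rough sketch of that queue-and-replay idea in PHP, where the table layout and column names are entirely hypothetical:

```php
<?php
// Always write sales to the local database first so the till keeps working
// offline; replay anything unsynced whenever the central server is reachable.

function recordSale(PDO $localDb, ?PDO $centralDb, array $sale): void
{
    $localDb->prepare('INSERT INTO pending_sales (payload, synced) VALUES (:p, 0)')
            ->execute([':p' => json_encode($sale)]);

    if ($centralDb !== null) {
        flushPendingSales($localDb, $centralDb);
    }
}

function flushPendingSales(PDO $localDb, PDO $centralDb): void
{
    $pending = $localDb->query('SELECT id, payload FROM pending_sales WHERE synced = 0');

    foreach ($pending as $row) {
        $sale = json_decode($row['payload'], true);

        $centralDb->prepare('INSERT INTO sales (store_id, sku, qty, total) VALUES (?, ?, ?, ?)')
                  ->execute([$sale['store_id'], $sale['sku'], $sale['qty'], $sale['total']]);

        $localDb->prepare('UPDATE pending_sales SET synced = 1 WHERE id = ?')
                ->execute([$row['id']]);
    }
}
```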
An easier solution may actually be to always ensure network stability. Set up some mobile (GSM/3G) connection for failover and possibly even a standard POTS telephone line as well.
MySQL Master/Slave Replication would seem the logical approach.
Your MAMP code works directly against the local (slave) database, and when the MAMP server has access to the master database, the databases stay synchronised. If the slave loses access to the internet, it's still working locally, and when the internet connection is restored it will synchronise again with the master.
With a little care in the database design (particularly auto-increment primary keys), you can run multiple local/slave servers, each with its own local store data, while the master is a repository of data across all stores.
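One common way to take that care with auto-increment keys is to give each store's server its own increment and offset, so locally generated ids never collide when the data is brought together. A sketch (these variables normally live in my.cnf; SET GLOBAL needs elevated privileges, and the values here are illustrative):

```php
<?php
// Hypothetical: allow up to 10 stores; store #1 generates ids 1, 11, 21, ...
$pdo = new PDO('mysql:host=127.0.0.1;dbname=pos', 'pos_user', 'secret');
$pdo->exec('SET GLOBAL auto_increment_increment = 10');
$pdo->exec('SET GLOBAL auto_increment_offset = 1'); // store #2 would use 2, and so on
```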

Usage of MySQL when the internet is offline

I'm using a self-made customer system written in PHP, running with a local MySQL database.
Now I have a second computer at a different location which has to use this database too, so I put the MySQL database on a server reachable through the internet.
My problem now is that the first computer often has problems with its internet connection, and then the program will not work. But it has to work every time!
I do not know how I should handle this problem.
A local database and one on the internet, but how should I merge them?
Should I make a local DB per computer and merge them together into one?
I also want to change the framework behind this system to Symfony2, so is there a way to solve this problem with that framework (e.g. Doctrine)?
Thanks for your help!
Update:
My limitation is the internet connection on the first computer, which cannot be eliminated.
If you really have the limitations of (1) not being able to move the database off the machine with a bad connection and (2) not being able to fix the bad connection, you are going to have to keep some sort of local instance on the second machine.
I would try to set up master-master replication from the first machine with the bad connection to the second machine. I'm not sure how reliable this will be, considering the replication will be failing often due to the first machine's bad connection. This problem may be exacerbated if one or both machines are using old versions of MySQL. MySQL 5.5, for example, can be configured to actively monitor replication connectivity.
If the majority of your application does READS instead of WRITES, perhaps you could install Memcached (or something similar) on the second machine so that the application can pull data from local memory without requiring a connection to the MySQL server.
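A minimal read-through cache sketch along those lines, assuming the memcached extension and a Memcached daemon running locally on the second machine (table, keys, and TTL are made up):

```php
<?php
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211); // local cache on the second machine

function getCustomer(Memcached $cache, PDO $remoteDb, int $id): ?array
{
    $key = "customer:$id";

    $customer = $cache->get($key);
    if ($customer !== false) {
        return $customer; // served from local memory, no remote MySQL round trip
    }

    // Cache miss: fall back to the remote database and remember the result.
    $stmt = $remoteDb->prepare('SELECT * FROM customers WHERE id = ?');
    $stmt->execute([$id]);
    $customer = $stmt->fetch(PDO::FETCH_ASSOC) ?: null;

    $cache->set($key, $customer, 300); // keep for 5 minutes
    return $customer;
}
```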
There are a few ways to achieve what you want (although maybe not exactly how you described), but the best way is definitely to host the database on a server that doesn't have internet connectivity problems. Look for hosting that allows remote MySQL connections.

Using Memcache on Load Balanced Servers

I'm using Rackspace Cloud Servers. I have installed NGINX with PHP and Memcache.
When the web server is approaching capacity, I plan to clone the server and then add a load balancer on top of it, i.e. two servers with one load balancer managing the traffic between the two. All this is done automatically using the Rackspace API.
However, I'm lost as to what is going to happen to Memcache. I would then have two Memcache servers, so the cache would no longer work as expected, since there would now be, essentially, two separate Memcache instances.
Is it possible to just install Memcache on a dedicated server and then have my main web server access it, so that when I move to the load-balanced setup, i.e. two web servers, they would both reference the same Memcache server?
Yes, you can have a single Memcached server and have all Memcache clients connect to and use it (rather than local installs of Memcached). You can use two Memcached servers if data inconsistency is acceptable and the cost of calculating any stored data twice is acceptable to you. It'll save you time in the short term, but ultimately it will probably complicate things.
In relation to Rackspace, make sure you're using the private direct IP address Rackspace gives you to network across machines instead of the external WAN IP. This will be faster, more secure, and won't count against your bandwidth allocation.
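A minimal sketch of that setup with the PHP memcached extension, where the private address is a placeholder:

```php
<?php
// Both load-balanced web servers point at the same dedicated Memcached box
// over the private network, so they share one cache.
$cache = new Memcached();
$cache->addServer('10.180.0.7', 11211); // placeholder private address, not the public WAN IP

$cache->set('homepage:featured', ['sku-1042', 'sku-2177'], 120);
$featured = $cache->get('homepage:featured');
```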

Can a local intranet application (built on PHP) query a MySQL database stored in an offsite location?

I have a local intranet application which runs off a basic WAMP server in our offices. Every morning, one of our team members manually syncs our internal MySQL DB with our external MySQL DB (where our online enrollments occur). If a change is made during the day on the intranet application, it is not reflected on the external DB until the following day.
I am wondering if it is possible to (essentially) tunnel to an external MySQL connection from, say, a WAMP or XAMPP server within our offices and work in 'real time'.
Anybody had any luck or advice?
Yes
Replication enables data from one MySQL database server (the master) to be replicated to one or more MySQL database servers (the slaves). Replication is asynchronous - slaves need not be connected permanently to receive updates from the master. This means that updates can occur over long-distance connections and even over temporary or intermittent connections such as a dial-up service. Depending on the configuration, you can replicate all databases, selected databases, or even selected tables within a database.
If you use the external server directly, performance is likely to suffer. A Gigabit LAN might be a thousand times faster than your Internet connection - particularly the upload speed of an ADSL connection.
Just make your internal application use the database from the external one. You may need to add permission on the external server to allow connections from your internal server's IP, but otherwise this is just like having a web server and separate DB server that need to access each other.
Can't really tell you how to do this here - it all depends on your specific configuration, something that I would think is a little complicated (and too specialized) to figure out on SO.
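That said, a rough sketch of the general idea, with all hostnames, IPs, and credentials as placeholders:

```php
<?php
// On the external MySQL server, allow a dedicated user from the office's IP, e.g.:
//   CREATE USER 'intranet'@'203.0.113.10' IDENTIFIED BY 'secret';
//   GRANT SELECT, INSERT, UPDATE ON enrollments.* TO 'intranet'@'203.0.113.10';

// The intranet application then connects to the external host instead of localhost.
$mysqli = new mysqli('db.example.com', 'intranet', 'secret', 'enrollments', 3306);

if ($mysqli->connect_error) {
    // Expect higher latency and occasional failures over the WAN, so handle them.
    exit('Could not reach the external database: ' . $mysqli->connect_error);
}
```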
