How to improve MySQL performance on a network - php

We have our database servers separate from our web server. The database servers are replicated (we know there is overhead here). Even with replication turned off, however, a large number of queries in a PHP script runs 4 times slower than on our staging server, which has the DB and Apache on the same machine. I realize that network latency and other network issues mean there is no way they will be equal, but our production servers are considerably more powerful and our production network is all on gigabit switches. We have tuned MySQL as best we can, but we are still at the 4x-slower mark. We are running nginx in front of Apache proxies and replicated MySQL DBs; UCARP is also running. What are some suggestions for areas to look at for improving performance? I would be happy with twice as slow in production.

It's difficult to do much more than stab in the dark given your description, but here are some starting points to try independently, which will hopefully narrow down the cause:
Move your staging DB to another host
Add your staging host to the production pool and remove the others
Profile your PHP script to ensure it's the queries causing the delay
Query an individual MySQL server directly rather than going through your load balancer
Measure a single query to the production pool and the staging server from the MySQL client (see the timing sketch after this list)
Run netperf between your web server and your DB cluster
Profile the web server with [gb]prof
Profile a MySQL server receiving the query with [gb]prof
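For the single-query measurement, something like the following minimal PHP sketch can be run against both the production pool and the staging server (hostname and credentials are placeholders):

```php
<?php
// Time a batch of trivial queries to isolate network round-trip cost.
// Hostname and credentials are placeholders.
$db = new mysqli('db.production.internal', 'app', 'secret', 'appdb');

$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    $db->query('SELECT 1');
}
$avgMs = (microtime(true) - $start) * 1000 / 100;

printf("average query round trip: %.3f ms\n", $avgMs);
```

If the per-query round trip on production is several milliseconds higher than staging, the network path (or the load balancer) is the first suspect rather than MySQL itself.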
If none of these illuminate anything other than the expected degradation due to the remote host, then please provide a reproducible test case and your full MySQL config (with sensitive data redacted). That will help someone more skilled in MySQL assist you ;)

Not every web request to a properly designed web site needs a MySQL connection. If you are requiring a connection on every HTTP request, your application will not scale and will start having issues very quickly.
Do more caching at the application server to hit MySQL less often, e.g. with memcached (see the sketch after this list).
Use persistent connections from the application to your MySQL servers.
Use MySQL protocol compression between client and server.
Minimize data transferred (LIMIT your selects, use explicit column names instead of "*" in SELECT statements).
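To make the first three points concrete, here is a minimal sketch combining memcached caching, a persistent mysqli connection, and protocol compression; the server addresses, credentials, and cache key are all hypothetical:

```php
<?php
// Check the local cache first so most requests never touch MySQL.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$users = $cache->get('active_users');
if ($users === false) {
    // The "p:" host prefix asks mysqli for a persistent connection;
    // MYSQLI_CLIENT_COMPRESS enables protocol compression.
    $db = mysqli_init();
    $db->real_connect('p:db.internal', 'app', 'secret', 'appdb',
                      3306, null, MYSQLI_CLIENT_COMPRESS);

    // Select only the columns you need instead of "*".
    $result = $db->query('SELECT id, name FROM users WHERE active = 1 LIMIT 100');
    $users  = $result->fetch_all(MYSQLI_ASSOC);

    $cache->set('active_users', $users, 60); // cache for 60 seconds
}
```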
Shamanic tuning:
Make sure that nothing slows down the network on the MySQL servers: big firewall rulesets, network filters, etc.
Add another (client-inaccessible) network interface dedicated to traffic between the application server and the MySQL server.
Tune the network connection between the application server and MySQL. Sometimes you can win several ms by creating hardcoded network routes.
That said, if the network connection itself is slow, none of the above will speed it up significantly.

Related

Improve response time when database is on a dedicated server

Overview
I have a Laravel 9 application which is hosted with Digital Ocean. I use Laravel Forge to handle provisioning of the servers, management, etc. I've created two separate servers for my production environment: one to host my Laravel application code and another for the database, which runs MySQL 8. These two servers are networked together and communicate over their VPC-assigned private IP addresses.
Problem
I initially provisioned one server to host my application. This single server hosted both the Laravel application code and database. I have an endpoint that I hit to measure the response time for my application.
With one server that hosts the codebase and database the average response time was: ~70ms
When I hit the same endpoint again but with my two dedicated servers the average response time was: ~135ms
Other endpoints in my application also have a significant increase in response time when the database lives on a dedicated server vs a single server that houses everything.
Things I have done
All database queries have been optimized. (n+1, etc.)
Both networked servers are in the same region.
Both networked servers' resource usage (CPU, RAM) is low and not capping out.
I've turned on Laravel's database config "sticky" option with no noticeable improvement (see the config sketch after this list).
I've enabled PHP OPcache for PHP 8.1.
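For reference, the sticky option only has an effect alongside split read/write hosts in config/database.php; with a single database server it changes nothing, which may explain the lack of improvement. A minimal sketch of that configuration (IPs and credentials are hypothetical):

```php
// config/database.php (excerpt): split read/write hosts with "sticky".
'mysql' => [
    'driver' => 'mysql',
    'read' => [
        'host' => ['10.0.0.2'],
    ],
    'write' => [
        'host' => ['10.0.0.1'],
    ],
    // After a write, subsequent reads in the same request use the
    // write connection, avoiding replication-lag surprises.
    'sticky'   => true,
    'database' => 'app',
    'username' => 'forge',
    'password' => env('DB_PASSWORD'),
],
```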
Questions
How can I achieve a faster response time when my database is on a separate server from my codebase?
Am I sacrificing performance for scalability with dedicated servers?
TLDR
I'm experiencing slower response times in my Laravel application when the codebase and database run on separate dedicated servers vs hosting everything on one server.
Are your servers in the same data center and on the same VLAN?
Are you sure that you are connecting with your private VLAN IP address?
Some latency is expected if you need to connect to a database on another server. Have you tried to ping between the servers to see what the latency is?
Do you really need to have the web server and the database on separate servers? If so, I would probably try Digital Ocean's managed database. I have used that for several projects and it works great.
Q: How can I achieve a faster response time when my database is on a separate server from my codebase?
A: If hosted in the same data center, the connection latency should be 30ms or less. Tested between AWS RDS and EC2 instances. Your mileage could vary depending on host.
Q: Am I sacrificing performance for scalability with dedicated servers?
A: It's standard practice to host databases separately from your application; it would be unrealistic to do otherwise for bigger projects. You can soften the impact by selectively caching data that doesn't change regularly on the application server (see the sketch below). Unfortunately, PHP is not particularly good at this kind of fine-tuning, so you might be out of luck.
I can tell you that I currently run a central MySQL RDS instance that many Ubuntu EC2 instances communicate with. While the queries take around 30ms, smart use of caching gives the majority of my web requests a 30ms response time in their own right. I do have the advantage of using NodeJS, which is always doing things in the background without needing a request before performing work.
You may unfortunately find that you're running into one of the limitations of PHP.
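That said, the selective caching mentioned above is cheap to try in Laravel. A minimal sketch; the key, TTL, and query are illustrative, and \App\Models\Plan is a hypothetical model:

```php
use Illuminate\Support\Facades\Cache;

// Serve rarely-changing data from the application server's cache
// instead of crossing the network to MySQL on every request.
$plans = Cache::remember('plans.active', 600, function () {
    return \App\Models\Plan::where('active', true)->get();
});
```

Endpoints whose data changes infrequently can drop most of their ~65ms network penalty this way, since the query only runs once per TTL window.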

scaling PHP application with MySQL replication without PHP code change?

I am planning to increase my site's performance by adding another MySQL server alongside the current one, because the current server is too busy.
Is it possible to scale a PHP application with MySQL replication without PHP code changes? I mean, all queries will be sent to the master and the master will distribute the load between itself and the slave.
Is there any easy way to send all write queries to the master and distribute read queries between the master and the slave?
I think you need to put a load balancer / proxy between your db servers and clients (your code). Example solutions are:
HAProxy: http://haproxy.1wt.eu/
MySQL Proxy: https://launchpad.net/mysql-proxy
If you don't want to do the "load balancing" manually, you might want to look into MySQL Proxy.
I think you should also optimize your application's code (PHP) and then you should optimize your architecture.
First of all, check your MySQL queries; the MySQL slow query log can help you. If you have connection issues (MySQL server has gone away, too many connections, etc.) you should manage your application's connection pooling mechanism.
Beyond those steps, the answer (I think) is that you can set up MySQL master-master replication. Once replication is set up cleanly, you can put a load balancer (HAProxy) in front of it.
You have two MySQL nodes (server A and server B, both of them masters).
You can configure HAProxy with server A as master and server B as backup. All your MySQL operations go to server A via HAProxy, and your data is automatically synced to server B.
When server A goes down, HAProxy sends all queries to server B automatically.
You can also configure HAProxy so that server A receives all write queries and server B receives all read queries.
In all of these cases your code should connect to MySQL via HAProxy, as in the sketch below.
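A minimal haproxy.cfg sketch of the master/backup setup described above; the addresses are hypothetical, and the haproxy_check MySQL user must be created on both servers for the health check to work:

```
listen mysql
    bind *:3306
    mode tcp
    option mysql-check user haproxy_check
    server serverA 10.0.0.1:3306 check
    server serverB 10.0.0.2:3306 check backup
```

Your PHP code then connects to the HAProxy address instead of a MySQL server directly, so failover requires no code change.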

Apart from MySQL concurrent connection limit, does PHP/Apache play any role on dropping connections?

If MySQL is dropping connections in a PHP application, and the MySQL connection limit is set above the number of concurrent users in the application, which other factors can contribute to this behavior? Also, analyzing Moodle logs (the only app running on these servers), yesterday I had 4 times more activity and it didn't drop anything, but today there were times when the number of dropped connections was frustrating.
My main question here is why the database is rejecting connections when it didn't before with 4 times more activity (everything is the same; I didn't change anything in between).
Some background: I've got 2 servers contracted at my hosting:
shared server running Debian Linux/PHP 5.3 (FastCGI)
VPS running Debian Linux/MySQL 5.1.39
On this environment I'm running only Moodle 1.9.12 (using ADOdb and persistent connections for the database connection), with the PHP part on the shared server and the database on the VPS. I suspect that, because PHP runs on a shared server, other hosting accounts are affecting me (I mean the database rejecting connections; I really don't care about RAM/CPU).
Reading about the issue, I've seen in places that persistent connections don't work well with PHP as CGI/FastCGI, and that if both servers are on the same LAN it doesn't really matter whether you use persistent connections, because the connection is going to be quick anyway. So now I'm using non-persistent connections. I guess that may be part of the problem, but I fail to understand why it worked with more load. Which PHP/Apache settings are involved here?
Since your database and web server are on two different machines, there are two other possible causes: the network in between, and the network layer of the operating system.
If you did not change anything in your configuration and it worked with higher loads before, it is more likely to be an issue with the network connectivity between the two machines. Since the database machine is a VPS you also don't know how much real network load it is handling. If your ISP has competent support personnel (which unfortunately isn't always the case) it can't hurt to ask them if they have an explanation.
The same goes for your "shared" web server. While it is unlikely, it is not impossible that it is an issue of too many connections on that machine.
It would also help to know how exactly you are measuring dropped connections. If you are looking at MySQL's aborted-connections counter, it is not necessarily a measure of an actual problem: http://dev.mysql.com/doc/refman/5.1/en/communication-errors.html. Even a user aborting a page load may increase this counter.
If PHP throws an error because it could not connect to the server, or lost the connection to the server during a query, then it is a real issue.
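A quick way to read those counters from PHP, assuming a reachable MySQL server (hostname and credentials are placeholders):

```php
<?php
// Inspect MySQL's aborted-connection counters.
$db  = new mysqli('db.example.com', 'app', 'secret');
$res = $db->query("SHOW GLOBAL STATUS LIKE 'Aborted%'");
while ($row = $res->fetch_assoc()) {
    // Aborted_clients: connections the client closed improperly.
    // Aborted_connects: failed attempts to connect to the server.
    echo $row['Variable_name'], ': ', $row['Value'], PHP_EOL;
}
```

Watching these two counters during a busy period tells you whether clients are dying mid-request (Aborted_clients) or being refused up front (Aborted_connects), which point to different causes.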

pressflow innodb database hanging

I'm running a Pressflow site with over 40,000 unique visitors a day and almost 80,000 records in node_revision, and my site hangs randomly, giving a 'site offline' message. I have moved my database to InnoDB and it still continues. I'm using my-huge.cnf as my MySQL config. Please advise me on a better configuration and the reasons for all this. I'm running on a dedicated server with more than 300 GB of disk and 4 GB of RAM.
The my-huge.cnf file was tuned for a "huge" server by the standards of a decade ago, but it barely qualifies as a reasonable production configuration now (a sketch of a more modern starting point follows). I would check other topics related to MySQL tuning and, since you're already on Pressflow, especially consider using a tool like Varnish to cache anonymous traffic.
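For comparison, a minimal my.cnf sketch of a more current starting point for a 4 GB dedicated database box; these values are illustrative assumptions, not drop-in production tuning, and need validating against your workload:

```
[mysqld]
# Give InnoDB roughly half the RAM on a dedicated DB server.
innodb_buffer_pool_size = 2G
innodb_log_file_size    = 256M
innodb_flush_method     = O_DIRECT
max_connections         = 200
table_open_cache        = 1024
```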
I suspect that you are having excessive connections to the database server, which can exhaust your server's RAM. This is very likely to be the case if you are running Apache in pre-fork mode with PHP as an Apache module using persistent connections, and using the same server to serve images, CSS, JavaScript and other static content.
If that is the case, the way to go is to move the static content to a separate multi-threaded web server like lighttpd or nginx. That will stop Apache from forking so many processes that PHP ends up establishing too many persistent connections and exhausting your RAM.

Can local intranet application (built on php) query mysql database stored in offsite location?

I have a local intranet application which runs off a basic WAMP server in our offices. Every morning, one of our team members manually syncs our internal MySQL DB with our external MySQL DB (where our online enrollments occur). If a change is made during the day on the intranet application, it is not reflected on the external DB until the following day.
I am wondering if it is possible to (essentially) tunnel to an external MySQL connection from a WAMP or XAMPP server within our offices and work in real time.
Anybody had any luck or advice?
Yes
Replication enables data from one MySQL database server (the master) to be replicated to one or more MySQL database servers (the slaves). Replication is asynchronous - slaves do not need to be connected permanently to receive updates from the master. This means that updates can occur over long-distance connections and even over temporary or intermittent connections such as a dial-up service. Depending on the configuration, you can replicate all databases, selected databases, or even selected tables within a database.
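If you go the replication route, the replica is pointed at the master with something like the following; the host, user, and binary log coordinates are placeholders, and which side acts as master depends on where your writes happen:

```sql
-- Run on the replica (slave) after creating a replication user on
-- the master and noting its binary log position.
CHANGE MASTER TO
    MASTER_HOST     = 'external.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
SHOW SLAVE STATUS\G
```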
If you use the external server directly, performance is likely to suffer. A Gigabit LAN might be a thousand times faster than your Internet connection - particularly the upload speed of an ADSL connection.
Just make your internal application use the database from the external one. You may need to add permission on the external server to allow connections from your internal server's IP, but otherwise this is just like having a web server and a separate DB server that need to access each other.
Can't really tell you how to do this here - it all depends on your specific configuration, something that I would think is a little complicated (and too specialized) to figure out on SO.