I have a PHP application running as a Docker container based on Alpine Linux inside Kubernetes. Everything went well until I tried removing the container with the test database and replacing it with Azure PostgreSQL. This led to a significant latency increase, from under 250 ms to above 1500 ms.
According to the profiler, most of the time is spent in the PDO constructor, which establishes the connection to the database. The SQL queries themselves, once the connection is established, run in about 20 ms.
I tried using the IP address instead of the hostname, and it was still slow.
I tried connecting from the container using psql, and it was slow (see the full command below).
I tried DNS resolution using bind-tools, and it was fast.
The database runs in the same region as the Kubernetes nodes; I even tried the same resource group and different network settings, and nothing helped.
I tried requiring/disabling SSL mode on both the client and the server.
I tried repeatedly running 'select 1' over an already established connection, and it was fast (average 1.2 ms, median 0.9 ms; see the full query below).
What can cause such latency?
How can I further debug/investigate this issue?
The psql command used to test the connection:
psql "sslmode=disable host=host dbname=postgres user=user#host.postgres.database.azure.com password=password" -c "select 1"
Query speed
\timing
SELECT 1;
\watch 1
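To separate the two costs in PHP itself, a minimal script along these lines can time the constructor and a query independently (host, user, and password are placeholders):
<?php
// Time connection setup separately from query execution.
// Host, user, and password below are placeholders.
$start = microtime(true);
$pdo = new PDO("pgsql:host=host.postgres.database.azure.com;dbname=postgres", "user@host", "password");
printf("connect: %.1f ms\n", (microtime(true) - $start) * 1000);

$start = microtime(true);
$pdo->query("SELECT 1");
printf("query:   %.1f ms\n", (microtime(true) - $start) * 1000);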
As far as I can tell, it is caused by the Azure-specific authentication layer on top of PostgreSQL. Unfortunately, Azure support was not able to help from their side.
Using a connection pool (PgBouncer) solves the problem. It is another piece of infrastructure we have to maintain (Dockerfile, config/secret management, etc.), which we had hoped to outsource to the cloud provider.
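For anyone taking the same route: once PgBouncer runs as a sidecar, the only application change is pointing the DSN at it. A minimal sketch, assuming PgBouncer listens on 127.0.0.1:6432 inside the pod (host, port, and credentials are assumptions, not the original config):
<?php
// Connect to the local PgBouncer sidecar instead of Azure directly;
// PgBouncer keeps the expensive upstream connections open and reuses them.
// Host, port, and credentials here are assumptions.
$pdo = new PDO("pgsql:host=127.0.0.1;port=6432;dbname=postgres", "user@host", "password");
$pdo->query("SELECT 1");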
Related
I have a MySQL database running on Google Cloud. It has SSL enforced, and so I use certificates and keys to connect to it:
On the command line, I use: mysql --ssl-ca=/path/to/server-ca.pem --ssl-cert=/path/to/client-cert.pem --ssl-key=/path/to/client-key.pem --host=ip-address --user=username --password
With PDO, I use the PDO::MYSQL_ATTR_SSL_CA, PDO::MYSQL_ATTR_SSL_CERT and PDO::MYSQL_ATTR_SSL_KEY options to indicate the required files.
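Concretely, the PDO setup looks something like this (host, paths, and credentials are placeholders):
<?php
// PDO connection using the SSL options described above.
// Host, paths, and credentials are placeholders.
$dbh = new PDO("mysql:host=ip-address;dbname=mydb", "username", "password", [
    PDO::MYSQL_ATTR_SSL_CA   => "/path/to/server-ca.pem",
    PDO::MYSQL_ATTR_SSL_CERT => "/path/to/client-cert.pem",
    PDO::MYSQL_ATTR_SSL_KEY  => "/path/to/client-key.pem",
]);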
Both connections work fine, except that the PDO connection in PHP is very slow. The CLI method takes milliseconds to connect and execute a query, while the PDO method takes 5-10 times longer for the same number of connections. I tried both methods from the same machine, so it doesn't seem to be a hardware/network issue. Could PDO be causing issues here?
I'm using Laravel in case that might be relevant.
Update: things I've tried
Run any other PHP script (that doesn't include a MySQL connection) on the same server: perfectly fast.
Run a PHP script that connects to a database on 127.0.0.1 / localhost and performs a query: perfectly fast.
Connect and query using the MySQL CLI (as already mentioned in the question): perfectly fast, although it's hard to verify exactly how fast, so I could be imagining it.
Connect and query through PHP/PDO from different machines using all the same settings: very slow, just like on the original machine I tried it on.
So the only thing I haven't tried yet is turning off SSL/TLS. Unfortunately, I cannot do that with this instance for security reasons. Also, based on the fact that an SSL/TLS connection using the CLI is very fast, I'm concluding that it must be related to something PHP- or PDO-specific.
I'm going to do some debugging myself and will add any relevant results once I have them.
I ended up opting for a Google Cloud Compute VM to host my PHP and connecting through a private IP. This appears to be working.
I'm no longer sure PDO was actually slower than the MySQL CLI; it may have just seemed that way.
I've noticed that while reasonably fast, the connection to the database (Google's MySQL-compatible Cloud SQL) is a large part of the request. I'm using PDO for the abstraction.
So, since that's the case, the obvious solution is to enable PHP's PDO persistent connections (a minimal sketch follows the list below).
To my understanding, and I've verified this in PHP's source code (links below), these work as follows:
when you connect with the persistent flag on, PHP caches the connection, using a hash of the connection string, username, and password as the key
when you try to re-connect in another request, it checks whether a persistent connection exists, then checks the liveness of the connection (which is driver-specific; the MySQL version is what runs in my case) and kills the cached version if it fails the test
if the cached version was killed, a new connection is created and returned; otherwise you get to skip the overhead of creating the connection (around 30x faster creation, based on Xdebug profiles run directly on development versions in the cloud)
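A minimal sketch of enabling this (DSN and credentials are placeholders):
<?php
// PDO::ATTR_PERSISTENT makes PHP cache the connection, keyed on
// DSN + username + password, and reuse it across requests.
$pdo = new PDO("mysql:host=cloudsql-host;dbname=mydb", "user", "password", [
    PDO::ATTR_PERSISTENT => true,
]);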
Everything sounds good so far? I'm not sure how all of this works when, say, you have a cached connection and two requests hit it (stress testing doesn't appear to cause issues), but otherwise it sounds okay and works fine in testing.
Well, here's what happens in the real world after some time passes...
Once a connection "dies", PDO stalls the entire request for 60 s or more. I believe this happens after maybe an hour or more; for a short while everything works just fine and PDO connects to Cloud SQL super fast. I've tried several ways to at least keep the stall under 1 s, but with no result: ini_set on the socket timeout won't affect it, the expires flag on PDO is ignored I believe, and exception and status checks for "has gone away" are useless since it stalls while making the connection, etc. I assume the connection most likely "expires", but the reason is unknown to me. I assume Cloud SQL drops it, since it's not in "show processlist;", but it's possible I'm not looking at this correctly.
Is there any secret sauce that makes PDO persistent connections work with Cloud SQL for more than a brief time?
Are persistent connections to Cloud SQL not supported?
You haven't described where your application is running (e.g. Compute Engine, App Engine, etc.), so I will make an educated guess based on the symptoms described.
It's likely your TCP keepalive time is set too high on the application host. You can change the settings via these instructions.
Assuming a Linux host, the following command will show your current setting, in seconds:
cat /proc/sys/net/ipv4/tcp_keepalive_time
TCP connections without any activity for a period of time may be dropped anywhere along the path between the application host and Cloud SQL, depending on how everything in the communication path is configured.
TCP keepalive sends a periodic "ping" on idle connections to work around this problem/feature.
Cloud SQL supports long-running connections.
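To confirm that idle connections really are being dropped before you tune keepalive, a rough probe from PHP can help (the DSN, credentials, and idle intervals below are arbitrary placeholders):
<?php
// Connect once, go idle for increasing periods, and check whether a
// later query stalls or fails. DSN and credentials are placeholders.
$pdo = new PDO("mysql:host=cloudsql-host;dbname=mydb", "user", "password");
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

foreach ([60, 300, 900] as $idle) {
    sleep($idle);
    $start = microtime(true);
    try {
        $pdo->query("SELECT 1");
        printf("after %ds idle: OK in %.1f ms\n", $idle, (microtime(true) - $start) * 1000);
    } catch (PDOException $e) {
        printf("after %ds idle: failed in %.1f ms (%s)\n",
            $idle, (microtime(true) - $start) * 1000, $e->getMessage());
        break;
    }
}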
I am trying to automate the backups of our MySQL databases.
The databases are hosted on shared servers and can be restarted at any time, which means cron jobs won't persist (according to support at my web host).
I am currently running the jobs manually via MySQL Workbench at given intervals.
I am trying to automate this process, but I cannot fathom how to do it. There seem to be no options in MySQL Workbench, and Google yields nothing.
I have attempted to run mysqldump on my local machine, with a view to creating some kind of script to do this from my machine. But when trying to connect I get an error - mysqldump: Got error: 2049: Connection using old (pre-4.1.1) authentication protocol refused (client option 'secure_auth' enabled) - which I can't seem to disable at the server end or override at my end.
Any advice?
The standard automatic backup for MySQL is the so-called "MySQL Enterprise Backup" (MEB). MySQL Workbench works with MEB, but as the name indicates, this solution is only available for MySQL Enterprise servers.
Another solution would be to run a cron job on the target server (using mysqldump). Missed jobs will be executed after the system is up again, so this is a reliable solution too.
If you cannot install a cron job on the target machine, then you will have no choice but to trigger the backup manually, either with MySQL Workbench or on the command line.
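If you do end up scripting the command-line route, a minimal PHP wrapper around mysqldump could look like this (host, credentials, and paths are placeholders, and it assumes mysqldump and gzip are available on the host):
<?php
// Dump one database to a timestamped, gzipped file.
// Host, credentials, and paths below are placeholders.
$host = 'db.example.com';
$user = 'backupuser';
$pass = 'secret';
$db   = 'mydb';
$file = sprintf('/path/to/backups/%s-%s.sql.gz', $db, date('Ymd-His'));

$cmd = sprintf(
    'mysqldump --host=%s --user=%s --password=%s %s | gzip > %s',
    escapeshellarg($host), escapeshellarg($user),
    escapeshellarg($pass), escapeshellarg($db), escapeshellarg($file)
);
exec($cmd, $output, $status);
exit($status === 0 ? 0 : 1);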
The goal is simple: clients send HTTP requests to query data and update records by some keys. Peak request rate: 500/sec (the higher the better, but the main thing is to meet this requirement while keeping the system simple and using fewer machines).
What I've done: nginx + php-cgi (PHP) serves the HTTP requests; the PHP code uses Thrift RPC to retrieve data from a DB proxy, which is used only to query and update the DB (MySQL). The DB proxy uses a MySQL connection pool and Thrift's TNonblockingServer. (In my country there are two ISPs; the DB proxy will be deployed on multi-ISP machines, as will the DB, while in my experience the web servers can be deployed on single-ISP machines.)
What troubles me: when I stress test (>500/sec), I find "TSocket: Could not connect to 172.19.122.32:9090 (Connection refused [111])" in the PHP log. I think it may be caused by running out of ports (possibly an incorrect conclusion). So I planned to use a Thrift connection pool to reduce the number of Thrift connections, but there is no connection pool in PHP (there seems to be some DB connection pooling technology), and PHP does not support the feature.
So I think maybe the project was designed the wrong way from the beginning (e.g. using PHP and Thrift). Is there a good way to solve this based on what I've done? I expect most people will doubt my awkward scheme; a new scheme would help a lot.
Thanks.
"TSocket: Could not connect to 172.19.122.32:9090 (Connection refused [111])" from php log shows the ports running out because of too many short connections in a short time. So I config the tcp TIME_WAIT status to recycle port in time using:
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_tw_recycle=1
It works!
What troubled me is solved, but changing these kernel parameters affects NAT. It's not a perfect solution; I think a better design for this system is still open for discussion.
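One direction still worth discussing: the Apache Thrift PHP library's TSocket can use a persistent socket (its third constructor argument switches it to pfsockopen), which would reduce connection churn at the source instead of recycling ports in the kernel. A sketch, assuming a recent Thrift PHP library and a generated client; the namespaces and the DB-proxy address are assumptions:
<?php
// Reuse the TCP connection to the DB proxy across requests via a
// persistent socket, instead of opening a new one per request.
// Namespaces and the address below assume a recent Apache Thrift PHP library.
use Thrift\Transport\TSocket;
use Thrift\Transport\TBufferedTransport;
use Thrift\Protocol\TBinaryProtocol;

$socket    = new TSocket('172.19.122.32', 9090, true); // third arg: persistent
$transport = new TBufferedTransport($socket);
$protocol  = new TBinaryProtocol($transport);
$transport->open();
// ... call the generated Thrift client with $protocol here ...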
I'm using the following to connect to a MySQL database from localhost:
<?php
// Connect to the local test database and return the PDO handle.
function testdb_connect ()
{
    $dbh = new PDO("mysql:host=localhost;dbname=test", "testuser", "testpass");
    return ($dbh);
}
?>
However, when I try to connect to this database (running on ec2-12-34-56-78.compute-1.amazonaws.com) from a different server, using the following code:
$dbh = new PDO("mysql:host=ec2-12-34-56-78.compute-1.amazonaws.com;dbname=test", "testuser", "testpass");
I'm unable to connect.
Is it possible to connect to a remote database on an ec2 instance with php pdo?
How would I pass an authentication parameter (e.g. a private key)?
You should probably consider using RDS for your database rather than running it on EC2, unless you have a very unusual database that requires a high degree of customization (e.g. clustered configurations). Running on an EBS-backed volume (which you would need in order to persist the physical data files) subjects you to slow disk I/O. If you are not running EBS-backed EC2, then your data is transient and cannot be considered to be on reliable physical storage. If that is OK for your design (you only need transient info in your database), then you would probably be even better served by putting your information into ElastiCache or some other form of in-memory cache.
RDS uses MySQL (well, you can also opt for Oracle). You would access it EXACTLY like you would access your own MySQL server: same PHP abstraction, same SQL, almost everything the same (you don't get root access, but rather a form of super-user access). RDS also provides easy-to-implement (i.e. push-button) configuration for Multi-AZ (a high-availability, synchronously updated standby), replication slaves, DB instance resizing, and data snapshots.
In either case (RDS or EC2), you need to make sure that your EC2 or RDS security groups allow access from the EC2 instances (or other servers) that host your application. In the EC2-only case, you could either place the servers in the same security group and open port 3306 within that group, or, better, create two security groups (one for the app and one for the DB). In the DB security group, open port 3306 (or whatever port you are using) to the security group(s) the app server(s) belong to.
For RDS, you would need an EC2 security group for the app server(s) and a DB security group for the RDS instance. You would then grant access to the app server security group in the RDS security configuration.
I don't know the specifics of how this might work with AWS but the first thing I would do is get an SSH tunnel running between the machines.
Then PHP/PDO would basically just think that you're connecting to a local database. In my experience it also makes the connection faster to establish as it doesn't have to do a DNS lookup to find the remote server... quite a big deal when you think that every PHP page load might have to connect to the remote DB.
I use this on intranets when an application needs to manage data stored on a remote database and it works like a champ.
I find SSH tunnels perfectly stable but I use a program called autossh to attempt to reconnect SSH tunnels when they go down.
For completeness, here's the command I use to start autossh so it establishes and maintains a particular SSH tunnel. I've added it here because I found the autossh docs pretty confusing when working out which options I wanted.
autossh -M 0 -f -L3307:127.0.0.1:3306 -p 22 -N -f username@xxx.xxx.xxx.xxx
This forwards port 3307 on your web server to port 3306 on the remote DB server, so in PHP you would connect to 3307. You could use 3306 if you wanted; I chose local port 3307 in case you have a local MySQL as well as the remote one. The -p switch is the port SSH is running on on the remote machine.
You can add this command to /etc/rc.local (on CentOS at least) to establish the SSH tunnel on server start.
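The PDO side then looks like a plain local connection (credentials are placeholders):
<?php
// Connect to the local end of the SSH tunnel; traffic is forwarded
// to port 3306 on the remote DB server. Credentials are placeholders.
$dbh = new PDO("mysql:host=127.0.0.1;port=3307;dbname=test", "testuser", "testpass");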
Hope this helps!