MySQL connection fast through the mysql CLI, but slow using PDO (PHP)

I have a MySQL database running on Google Cloud. It has SSL enforced, and so I use certificates and keys to connect to it:
On the command line, I use: mysql --ssl-ca=/path/to/server-ca.pem --ssl-cert=/path/to/client-cert.pem --ssl-key=/path/to/client-key.pem --host=ip-address --user=username --password
With PDO, I use the PDO::MYSQL_ATTR_SSL_CA, PDO::MYSQL_ATTR_SSL_CERT and PDO::MYSQL_ATTR_SSL_KEY options to indicate the required files.
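For reference, a minimal sketch of that PDO setup (host, database name, credentials, and certificate paths are placeholders) mirroring the CLI flags above:

```php
<?php
// Sketch: PDO over SSL to a Cloud SQL MySQL instance (placeholder values).

function buildMysqlDsn(string $host, string $db): string
{
    return sprintf('mysql:host=%s;dbname=%s;charset=utf8mb4', $host, $db);
}

function connectWithSsl(string $host, string $db, string $user, string $pass): PDO
{
    return new PDO(buildMysqlDsn($host, $db), $user, $pass, [
        // Same three files the mysql CLI is given via --ssl-ca/--ssl-cert/--ssl-key.
        PDO::MYSQL_ATTR_SSL_CA   => '/path/to/server-ca.pem',
        PDO::MYSQL_ATTR_SSL_CERT => '/path/to/client-cert.pem',
        PDO::MYSQL_ATTR_SSL_KEY  => '/path/to/client-key.pem',
        PDO::ATTR_ERRMODE        => PDO::ERRMODE_EXCEPTION,
    ]);
}
```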
Both connections work fine, except that the PDO connection in PHP is very slow. The CLI method takes milliseconds to connect and execute queries, while the PDO method takes 5-10 times longer for the same number of connections. I tried both methods from the same machine, so it doesn't seem to be a hardware/network issue. Could PDO be causing issues here?
I'm using Laravel in case that might be relevant.
Update: things I've tried
Run any other PHP script (that doesn't include a MySQL connection) on the same server: perfectly fast.
Run a PHP script that connects to a database on 127.0.0.1 / localhost and performs a query: perfectly fast.
Connect and query using the MySQL CLI (as already mentioned in the question): perfectly fast, although it's hard to verify exactly how fast, so I could be imagining it.
Connect and query through PHP/PDO from different machines using all the same settings: very slow, just like on the original machine I tried it on.
So the only thing I haven't tried yet is turning off SSL/TLS. Unfortunately, I cannot do that with this instance for security reasons. Also, given that an SSL/TLS connection using the CLI is very fast, I'm concluding that it must be related to something PHP- or PDO-specific.
I'm going to do some debugging myself and will add any relevant results once I have them.

I ended up opting for a Google Cloud Compute VM to host my PHP and then connect through a private IP. It appears that this is working.
I'm not sure if PDO was actually slower than the MySQL CLI anymore, it may have just seemed so.

Related

PHP postgres failover

I'm running PHP web app that uses PDO to connect to postgres (https://github.com/fusionpbx/fusionpbx/blob/bc1e163c898ea2e410787f8e938ccbead172aa5a/resources/classes/database.php#L202).
I'm running a failover cluster and so basically I just put 2 hosts names and my connection string looks like this:
"pgsql:host=host1,host2 port=5432 dbname=fusionpbx user=fusionpbx password=password target_session_attrs=read-write"
This works OK: if host1 is a standby, host2 is selected with very little delay. The only issue is when host1 is unreachable or down. In that case PDO (or the driver?) always tries host1 first, waits 30s until it times out, and then goes to host2. The fact that host1 is unavailable doesn't seem to be remembered.
I found 3 workarounds:
add PDO::ATTR_TIMEOUT=2 when creating the PDO object. Crude, I know, but it at least serves as a temporary workaround in case of failure until I figure out the right solution.
externally monitor Postgres and reorder the nodes in the connection string, always putting the active node first. I'm starting to think this is the least invasive.
PDO::ATTR_PERSISTENT => true - I've tested this and at first glance it works quite nicely, but given that I'm not really a PHP guy and the application is a third-party application rather than mine, I'm reluctant to make such an impactful change.
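A sketch combining the first two workarounds: loop over the hosts in PHP with a short login timeout instead of relying on libpq's multi-host DSN. Host names and credentials are placeholders; the real app builds its DSN in the database.php linked above. Note this loop drops the target_session_attrs=read-write check, so a per-host probe (e.g. SELECT pg_is_in_recovery()) would be needed to fully reproduce it.

```php
<?php
// Sketch: manual Postgres failover with a short per-host timeout (placeholder values).

function pgFailoverConnect(array $hosts, string $db, string $user, string $pass): PDO
{
    $lastError = null;
    foreach ($hosts as $host) {
        $dsn = sprintf('pgsql:host=%s;port=5432;dbname=%s', $host, $db);
        try {
            // The short timeout keeps a dead host from stalling the request for 30s.
            return new PDO($dsn, $user, $pass, [PDO::ATTR_TIMEOUT => 2]);
        } catch (PDOException $lastError) {
            // Connection failed; fall through and try the next host.
        }
    }
    throw $lastError ?? new RuntimeException('no hosts configured');
}
```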
Maybe someone could share their experience? I'm quite surprised how little can be found about this on the net. Also, on the same box I'm running Lua scripts that also connect to the same Postgres in the same way, and they seem to have no problem handling this scenario. It's the same version of libpq, since it's the same Linux box, and I'm not adding anything specific to the connection string.

PDO establishing connection to Azure PostgreSQL is very slow

I have a PHP application running as a Docker container based on Alpine Linux inside Kubernetes. Everything went well until I tried removing the container with the test database and replacing it with Azure PostgreSQL. This led to a significant latency increase, from under 250ms to above 1500ms.
According to the profiler, most of the time is spent in the PDO constructor, which establishes the connection to the database. The SQL queries themselves, once the connection is established, run in about 20ms.
I tried using an IP address instead of the hostname and it was still slow.
I tried connecting from the container using psql and it was slow (see full command below).
I tried DNS resolution using bind-tools and it was fast.
The database runs in the same region as the Kubernetes nodes; I even tried the same resource group and different network settings, and nothing helped.
I tried requiring/disabling SSL mode on both client and server.
I tried repeatedly running 'select 1' inside an already established connection and it was fast (average 1.2ms, median 0.9ms) (see full query below).
What can cause such a latency?
How can I further debug/investigate this issue?
psql command used to test the connection:
psql "sslmode=disable host=host dbname=postgres user=user@host.postgres.database.azure.com password=password" -c "select 1"
Query speed
\timing
SELECT 1;
\watch 1
As far as I can tell, it is caused by Azure-specific authentication on top of PostgreSQL. Unfortunately, Azure support was not able to help from their side.
Using a connection pool (PgBouncer) solves this problem. It is another piece of infrastructure we have to maintain (Dockerfile, config/secret management, etc.), which we had hoped to outsource to the cloud provider.
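For anyone debugging a similar case, a small sketch for separating connection latency from query latency in PHP itself, as the profiler did (the DSN and credentials in the commented usage are placeholders):

```php
<?php
// Sketch: time a callable in milliseconds to isolate where the latency lives.

function timeMs(callable $fn): float
{
    $start = hrtime(true);           // monotonic clock, nanoseconds
    $fn();
    return (hrtime(true) - $start) / 1e6;
}

// With a reachable server, this separates the two costs:
// $connectMs = timeMs(function () use (&$pdo, $dsn, $user, $pass) {
//     $pdo = new PDO($dsn, $user, $pass);
// });
// $queryMs = timeMs(fn () => $pdo->query('SELECT 1')->fetch());
```

If $connectMs dominates while $queryMs stays low, the problem is in connection establishment (handshake/auth), not in the network path for queries.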

How do you get PDO persistent connections to Google's Cloud SQL to work?

I've noticed that, while reasonably fast, the connection to the database (Google's Cloud SQL, the MySQL-compatible one) is a large part of the request. I'm using PDO for the abstraction.
So, since that's the case the obvious solution is to enable PHP's PDO persistent connections.
To my understanding, and I've verified this in PHP's source code (links below), the way these work is as follows:
when you connect with the persistent flag on, PHP caches the connection using a hash of the connection string, username and password for the key
when you try to re-connect in another request, it checks if a persistent connection exists, then checks the liveness of the connection (which is driver-specific; the MySQL version is what is executed in my case) and kills the cached version if it fails the test
if the cached version was killed, a new connection is created and returned; otherwise you get to skip the overhead of creating the connection (around a 30x faster creation process, based on Xdebug profiles executed directly on devel versions in the cloud)
Everything sounds good so far? I'm not sure how all of this works when, say, you have a cached connection and two requests hit it (stress testing doesn't appear to cause issues), but otherwise it sounds okay, and in testing it works fine.
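In code, opting in is a single attribute; the DSN, username, and password here are placeholders. As described above, PHP keys the cached handle on a hash of exactly that triple:

```php
<?php
// Sketch: enabling a persistent PDO connection (placeholder values).
// The DSN + username + password form the cache key for the pooled handle.
function persistentConnect(): PDO
{
    return new PDO(
        'mysql:host=cloud-sql-ip;dbname=app',
        'user',
        'password',
        [PDO::ATTR_PERSISTENT => true]
    );
}
```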
Well, here's what happens in the real world after some time passes...
Once a connection "dies", PDO will stall the entire request for 60s or more. I believe this happens after maybe an hour or more; so for a short while everything works just fine and PDO connects super fast to Cloud SQL. I've tried several ways to at least keep the stall under 1s, but to no avail (setting the socket timeout via ini_set won't affect it, the expires flag on PDO is ignored I believe, and exception/status checks for "has gone away" are useless since it stalls while making the connection, etc.). I assume the connection most likely "expires", but the reason is unknown to me. I assume Cloud SQL drops it, since it's not in "show processlist;", but it's possible I'm not looking at it correctly.
Is there any secret sauce that makes PDO persistent connections work with Cloud SQL for more than a brief time?
Are persistent connections to Cloud SQL not supported?
You haven't described where your application is running (e.g. Compute Engine, App Engine, etc), so I will make an educated guess based on the described symptoms.
It's likely your TCP keepalive time is set too high on the application host. You can change the settings via these instructions.
Assuming a Linux host, the following command will show your current setting, in seconds:
cat /proc/sys/net/ipv4/tcp_keepalive_time
TCP connections without any activity for a period of time may be dropped anywhere along the path between the application host and Cloud SQL, depending on how everything in the communication path is configured.
TCP keepalive sends a periodic "ping" on idle connections to work around this problem/feature.
Cloud SQL supports long-running connections.

Connect to Oracle database on a different server from PHP

Hello, I have a database engine sitting on a remote server, while my webserver is present locally. I have worked mostly with a client-server architecture where the server hosts both the webserver and the database engine. Now I need to connect to an Oracle database situated on a different server.
Can anybody give me any suggestions? I believe odbc_connect might not work. Do I use the OCI8 drivers? How would I connect to my database server?
Also, I would have a very high number of database calls going back and forth, so is it good to go with a persistent connection, or do I still use individual database calls?
If you're using ODBC, then you need to use the PHP's ODBC driver rather than the OCI8 driver. Otherwise, you need the Oracle client installed on your webserver (even if it's just Oracle's Instant Client) and then you can use OCI8.
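With OCI8, a remote connection can be sketched like this, using Oracle's Easy Connect syntax so no tnsnames.ora entry is needed. The host, port, service name, and credentials are placeholders, and the oci8 extension plus at least the Instant Client must be installed:

```php
<?php
// Sketch: querying a remote Oracle database via OCI8 (placeholder values).
function oracleSelect(): void
{
    // Easy Connect string: //host:port/service_name
    $conn = oci_connect('username', 'password', '//db-host:1521/ORCL');
    if ($conn === false) {
        $e = oci_error();
        throw new RuntimeException($e['message']);
    }
    $stmt = oci_parse($conn, 'SELECT 1 FROM dual');
    oci_execute($stmt);
    oci_fetch_all($stmt, $rows);
    oci_close($conn);
}
```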
EDIT
Personally, I wouldn't recommend persistent connections. While there is a slowdown when connecting to a database (especially a remote database), persistent connections can cause more issues if you have a high hit count (exceeding the number of persistent connections available), or if there's a network hiccup of any kind that leaves orphaned connections on the database, and potentially orphaned persistent connections on the webserver as well.
The Oracle client is available for each platform. In summary, it is a collection of the files needed to talk to Oracle, plus a command-line utility. Just go to oracle.com and Downloads.

Connecting to external MySQL DB from a web server not running MySQL

While I've been working with MySQL for years, this is the first time I've run across this very newbie-esque issue. Due to a client demand, I must host their website files (PHP) on an IIS server that is not running MySQL (instead, it is running MSSQL). However, I have developed the site using a MySQL database which is located on an external host (Rackspace Cloud). Obviously, my mysql_connect function is now bombing because MySQL is not running on localhost.
Question: Is it even possible to hit an external MySQL database if localhost is not running MySQL?
Apologies for the rookie question, and many thanks in advance.
* To clarify, I know how to connect to a remote MySQL server, but it is the fact that my IIS web server is not running ANY form of MySQL (neither server nor client) that is giving me trouble. Put another way, phpinfo() does not return anything about MySQL. *
Yes, you can use a MySQL database that's not on the same machine as Apache+PHP.
Basically, you'll connect from PHP to MySQL via a network connection (TCP-based, I suppose), which means:
MySQL must be configured to listen for, and accept, connections on the network interface
Which means configuring MySQL to do that
And granting the required privileges to your MySQL user, so they can connect from a remote server
And PHP must be able to connect to the server hosting MySQL.
Note, though, that having MySQL on a server that's far away might not be great for performance: each SQL query will have to go through the network, and this could take some time...
If phpinfo() is not returning anything about MySQL, you need to install the MySQL extension for PHP. The easiest way to do that is probably to just upgrade PHP to the latest version. If not, there is a .DLL file that you will need.
http://www.php.net/manual/en/mysql.installation.php
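A quick scripted equivalent of scanning phpinfo() output for MySQL support, which needs no server at all:

```php
<?php
// True if the corresponding MySQL client extension is compiled in or loaded.
var_dump(extension_loaded('pdo_mysql'));
var_dump(extension_loaded('mysqli'));
```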
You will need to install the MySQL extensions. This link should help: http://php.net/manual/en/install.windows.extensions.php
The MySQL server has nothing to do with PHP itself. What "MySQL support" in PHP means is that it's been compiled with (or has loaded a module) something that implements the MySQL client interface. For Windows, it'd be 'mysql.dll', and on Unix-ish systems it'd be 'mysql.so'. Once those are loaded, the various MySQL interfaces (mysql_xxx(), mysqli_xxx(), PDO, MDB2, etc...) will be able to access any MySQL server anywhere, as long as you have the proper connection string.