Oracle Database connections always miss the DRCP cache - PHP

I have configured Oracle to use DRCP and set up PHP to connect to the pooled connection, but when I pull the connection stats there are always zero hits, while the number of requests and the number of misses continue to climb.
CCLASS_NAME                   NUM_REQUESTS  NUM_HITS  NUM_MISSES  NUM_WAITS  WAIT_TIME  CLIENT_REQ_TIMEOUTS
BIGTUNACAN.drcp_pooling_test          9828         0        9828          6          0                    0
My connection in tnsnames.ora is using SERVER = POOLED, and my php.ini sets the connection class (oci8.connection_class) to drcp_pooling_test.
I'm at a loss as to why cached connections are never used.
TNS entry below:
TESTPOOL.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test-db.bigtunacan.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = TEST)
      (SERVER = POOLED)
    )
  )

The problem won't be the use of SID (although you should still change that to a SERVICE_NAME, since you otherwise lose a bunch of inherent functionality), but that you are using oci_connect(). Use oci_pconnect() instead. This is a guess, since you didn't include a test case.
NUM_HITS is the "total number of times client requests found matching pooled servers and sessions in the pool", but oci_connect() has to recreate the session, so it won't register a hit. See Table 11 on page 261 of The Underground PHP and Oracle Manual, which says that oci_connect() "Gets a pooled server from the DRCP pool and creates a brand new session." You will get some benefit from reusing a pooled server, but not the full benefit that oci_pconnect() can give you.
However, you should step back and really review why you want DRCP. If you're not already using oci_pconnect(), then your PHP connection calls will be slow. Change to oci_pconnect(). You may then be able to reduce the number of Apache processes needed, which will reduce the number of concurrent connections needed. Implement other best practices, such as using bind variables (see the sketch below). Only if your database host doesn't have enough memory to handle all the open connections concurrently should you then move to Shared Servers or DRCP. DRCP is a pooling solution, so there is some overhead (and a small amount of extra administration).
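A minimal sketch of both suggestions, assuming the TESTPOOL.WORLD alias above and that php.ini sets oci8.connection_class = drcp_pooling_test; the credentials, table and column names here are placeholders:
<?php
// oci_pconnect() reuses both the pooled server AND the session across
// requests, so NUM_HITS should start climbing once it is in place.
$conn = oci_pconnect('myuser', 'mypassword', 'TESTPOOL.WORLD');
if (!$conn) {
    $e = oci_error();
    trigger_error(htmlentities($e['message'], ENT_QUOTES), E_USER_ERROR);
}
// Bind variables instead of string concatenation (the best practice noted above).
$stid = oci_parse($conn, 'SELECT name FROM products WHERE id = :id');
$id = 42; // placeholder value
oci_bind_by_name($stid, ':id', $id);
oci_execute($stid);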

Related

PHP MySQL persistent connection not reused (opens more than one connection per fpm-worker)

I'm facing really weird behaviour while testing persistent connections from PHP to MySQL. I have a small script that looks like this:
<?php
$db = new mysqli('p:db-host','user','pass','schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
My setup is:
OS: CentOS 7.3
PHP/7.1.18
php-fpm
nginx/1.10.2
MySQL-5.6.30
I tried to do some requests with ab:
$ ab -c 100 -n 500 http://mysite/my_test_script.php
PHP-FPM was configured to have 150 workers ready, and I saw what I was expecting: 150 established connections to MySQL, which stayed open after ab finished. I launched ab once again, and the behaviour was still the same: 150 connections, no new connections were opened. All fine. Then I created a script which did the exact same requests, same IP, same HTTP headers, but used curl to make the request, and BOOM, I had 300 connections on MySQL instead of 150. I launched the script again and still had 300 connections. Subsequent runs of the same script didn't increase the number of connections. Has anyone ever faced anything like this? Does anyone know what could make PHP open more connections than needed? Am I missing something obvious?
If it's not clear what I'm asking, please comment below and I will try to explain my problem better.
P.S. I tried this with PDO too; same behaviour.
EDIT: My tests were not accurate
After further testing I noticed that my first tests were not accurate. I was in a multi-tenant environment, and connections to different schemas were initialized when I launched ab. In my case the PHP documentation was a bit misleading; it says:
PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).
http://php.net/manual/en/features.persistent-connections.php
Maybe it's obvious to everyone else, I don't know; it was not for me. Passing the 4th parameter to mysqli made PHP consider the connections not identical. Once I changed my code to something like this:
<?php
$db = new mysqli('p:db-host','user','pass');
$db->select_db('schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
The application started to behave as I expected: one connection per worker.

PHP memcache not closing connections?

I am using PHP-memcache on various web servers to connect to memcache servers.
I connect like this:
$memcache = new Memcache;
$memcache->addServer('memcache_host', 11211);
$memcache->addServer('memcache_host2', 11211);
Then I fetch or set the data using get and set.
It works fine in most cases, but if something slows down, I see a sudden increase in memcache connections, which creates issues.
I think this is because addServer creates persistent connections by default and may not be closing them quickly after serving the request.
A similar issue has been reported here also.
So please let me know: is this only because of the default behaviour of the addServer function? Should I use non-persistent connections by passing false as the third argument to addServer, as in the sketch below?
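Something like this, I assume (hosts as above; the key is a placeholder):
<?php
$memcache = new Memcache;
// The third argument to addServer() is $persistent; false disables pooling.
$memcache->addServer('memcache_host', 11211, false);
$memcache->addServer('memcache_host2', 11211, false);
$value = $memcache->get('some_key'); // placeholder key
$memcache->close();                  // explicitly release the connections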
Because open memcached connections might be kept by the kernel in keepalive mode if not explicitly closed by the client, lowering the following parameters might help. Note that they affect every other TCP connection on the machine, such as SSH, so setting tcp_keepalive_time too low is not a good idea.
Create the following file:
vim /etc/sysctl.d/low-tcp-timeout.conf
# Keep connections in keepalive for 600 seconds. Default 7200s = 2h.
net.ipv4.tcp_keepalive_time = 600
# 0 probes. Default 9
net.ipv4.tcp_keepalive_probes = 0
# Default 75 seconds between each probe
net.ipv4.tcp_keepalive_intvl = 75
and run sysctl -p /etc/sysctl.d/low-tcp-timeout.conf (or sysctl --system) to apply these values; plain sysctl -p only reloads /etc/sysctl.conf.
You can also have a look at net.ipv4.tcp_fin_timeout.

PostgreSQL PDO very slow connect

We are facing a performance issue with our web server. We are using an Apache server (2.4.4) with PHP 5.4.14 (it's a UniServer package) and a PostgreSQL 9.2 database. It's on a Windows system (can be XP, 7 or Server…).
The problem is that responses from the web server are too slow; we did some profiling and found that the database connection takes around 20 ms (milliseconds).
We are using PDO like this:
$this->mConnexion = new \PDO("pgsql:host=127.0.0.1;dbname=", $pUsername, $pPassword, array(\PDO::ATTR_PERSISTENT => false));
We have measured the time like this:
echo "Connecting to db <br>";
$time_start = microtime(true); // pass true to get a float instead of a string
$this->mConnexion = new \PDO(…
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Connecting to db done in $time sec<br>";
We made a test with ATTR_PERSISTENT set to true and the connection was much faster: the code reports a connection time of 2e-5 seconds, versus 0.020 s with ATTR_PERSISTENT set to false.
Is 20 ms a normal value (and do we have to move to persistent connections)?
We also made a test with MySQL: connection time for a non-persistent connection is around 2 ms.
We have these options set in the postgresql.conf configuration file:
listen_addresses = '*'
port = 5432
max_connections = 100
ssl = off
shared_buffers = 32MB
EDIT
We do not use persistent connections (yet) because there are some drawbacks: if a script fails, the connection can be left in a bad state (so we will have to manage those cases, and that's what we will have to do…; one possible guard is sketched below). I would like more points of view on this database connection time before switching directly to persistent connections.
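A minimal sketch of one such guard, assuming the pgsql PDO driver and a placeholder dbname; a persistent handle handed back by PDO could still be inside a transaction that a failed script left open, so roll it back before use:
$db = new \PDO("pgsql:host=127.0.0.1;dbname=mydb", $pUsername, $pPassword,
    array(\PDO::ATTR_PERSISTENT => true));
// Guard against a stale persistent handle: undo anything a previous
// request left open before issuing new work.
if ($db->inTransaction()) {
    $db->rollBack();
}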
To answer Daniel Vérité's question: SSL is off (I had already checked this option during my earlier research on the subject).
@Daniel: I tested on an Intel Core 2 Extreme CPU X9100 @ 3.06 GHz with 4 GB of RAM.
Try using a Unix-domain socket by leaving the host empty. It's a little bit faster. (Sketch below.)
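A minimal sketch of that suggestion ("mydb" is a placeholder; note that Unix-domain sockets only exist on Unix-like systems, so this does not apply while the stack runs on Windows):
// With no host= in the DSN, libpq connects over the local Unix-domain socket.
$db = new \PDO("pgsql:dbname=mydb", $pUsername, $pPassword);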

Difference between PHP SQL Server Driver and SQLCMD when running queries

Why is it that the SQL Server PHP driver has problems with long-running queries?
Every time I have a query that takes a while to run, I get the following errors from sqlsrv_errors(), in this order: Shared Memory failure, Communication Link Failure, Timeout failure.
But if I try the same query with SQLCMD.exe, it comes back fine. Does the PHP SQL Server driver have somewhere that a no-timeout can be set?
What's the difference between running queries via SQLCMD and via the PHP driver?
Thanks all for any help.
Typical usage of the PHP driver to run a query:
function already_exists(){
    $model_name = trim($_GET['name']);
    include('../includes/db-connect.php'); // defines $serverName and $monitor_name
    $connectionInfo = array('Database' => $monitor_name);
    $conn = sqlsrv_connect($serverName, $connectionInfo);
    if ($conn === false) {
        return false;
    }
    // Parameterised query instead of string concatenation (avoids SQL injection).
    $tsql = "SELECT model_name FROM slr WHERE model_name = ?";
    $queryResult = sqlsrv_query($conn, $tsql, array($model_name));
    $exists = ($queryResult !== false) && (sqlsrv_has_rows($queryResult) === true);
    sqlsrv_close($conn); // in the original this line sat after the returns and was unreachable
    return $exists;
}
SQLCMD has no query execution timeout by default. PHP does. I assume you're using mssql_query? If so, the default timeout for queries through this API is 60 seconds. You can override it by modifying the configuration property mssql.timeout.
See more on the configuration of the MSSQL driver in the PHP manual.
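A sketch of that override from code (the value is a placeholder; it can equally be set in php.ini):
ini_set('mssql.timeout', '600'); // per-query timeout in seconds; default is 60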
If you're not using mssql_query, can you give more details on exactly how you're querying SQL Server?
Edit [based on comment]
Are you using sqlsrv_query then? Looking at the documentation, it should wait indefinitely, although you can override that (see the sketch below). How long does it wait before it seems to time out? You might want to time it and see if it's consistent. If not, can you provide a code snippet (edit your question) to show how you're using the driver?
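A sketch of setting a per-statement timeout through the sqlsrv options array (600 is a placeholder number of seconds; 0 means wait indefinitely):
$options = array('QueryTimeout' => 600);
$stmt = sqlsrv_query($conn, $tsql, array(), $options);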
If MSDTC is getting involved (and I don't know how you can ascertain this), then there's a 60-second timeout on that by default. This is configured in the Component Services administration tool and lives in a different place depending on the version of Windows.
SQL Server 2005 limits the maximum number of TDS packets to 65,536 per connection (a limit that was removed in SQL Server 2008). As the default PacketSize for the SQL Server Native Client (ODBC layer) is 4K, the PHP driver has a de-facto transfer limit of 256MB per connection. When attempting to transfer more than 65,536 packets, the connection is reset at TDS protocol level. Therefore, you should make sure that the BULK INSERT is not going to push through more than 256 MB of data; otherwise the only alternative is to migrate your application to SQL Server 2008.
From the MSDN Forums:
http://social.msdn.microsoft.com/Forums/en-US/sqldriverforphp/thread/4a8d822f-83b5-4eac-a38c-6c963b386343
PHP itself has several different timeout settings that you can control via php.ini. The one that often causes problems like you're seeing is max_execution_time (see also set_time_limit()). If these limits are exceeded, PHP simply kills the process without regard for ongoing activities (like a running DB query).
There is also a setting, memory_limit, that does as its name suggests. If the memory limit is exceeded, PHP just kills the process without warning. A sketch of lifting both limits for a single script follows.
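A sketch of lifting both limits for one long-running script (the values are placeholders; doing it in code keeps php.ini unchanged for everything else):
set_time_limit(0);               // 0 = no max_execution_time for this script
ini_set('memory_limit', '512M'); // placeholder; raise only as far as needed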
Good luck.

MySQL processlist filled with "Sleep" entries leading to "Too many connections"?

I'd like to ask for your help with a longstanding issue with PHP/MySQL connections.
Every time I execute a SHOW PROCESSLIST command, it shows me about 400 idle (Status: Sleep) connections to the database server, coming from our 5 web servers.
That was never much of a problem (and I didn't find a quick solution) until recently, when traffic increased; since then MySQL repeatedly reports the "Too many connections" problem, even though 350+ of those connections are in the Sleep state. Also, a server can't get a MySQL connection even when there are sleeping connections belonging to that same server.
All those connections vanish when an Apache server is restarted.
The PHP code used to create the database connections uses the plain "mysql" module, the "mysqli" module, PEAR::DB and the Zend Framework DB adapter (different projects). NONE of the projects uses persistent connections.
Raising the connection limit is possible, but doesn't seem like a good solution, since it's 450 now and there are only 20-100 "real" connections at a time anyway.
My question:
Why are there so many connections in the Sleep state, and how can I prevent that?
-- Update:
The number of Apache requests running at a time never exceeds 50 concurrent requests, so I guess there is a problem with closing the connections, or Apache keeps the port open without a PHP script attached, or something (?)
my.cnf, in case it's helpful:
innodb_buffer_pool_size = 1024M
max_allowed_packet = 5M
net_buffer_length = 8K
read_buffer_size = 2M
read_rnd_buffer_size = 8M
query_cache_size = 512M
myisam_sort_buffer_size = 128M
max_connections = 450
thread_cache = 50
key_buffer_size = 1280M
join_buffer_size = 16M
table_cache = 2048
sort_buffer_size = 64M
tmp_table_size = 512M
max_heap_table_size = 512M
thread_concurrency = 8
log-slow-queries = /daten/mysql-log/slow-log
long_query_time = 1
log_queries_not_using_indexes
innodb_additional_mem_pool_size = 64M
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table
Basically, you get connections in the Sleep state when:
a PHP script connects to MySQL
some queries are executed
then, the PHP script does some stuff that takes time
without disconnecting from the DB
and, finally, the PHP script ends
which means it disconnects from the MySQL server
So you generally end up with many processes in a Sleep state when you have a lot of PHP processes that stay connected without actually doing anything on the database side.
A basic idea, then: make sure you don't have PHP processes that run for too long, or force them to disconnect as soon as they no longer need the database, as in the sketch below.
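A minimal sketch of that early-disconnect pattern (placeholder credentials and query; the point is that close() runs before the slow work, not after it):
<?php
$db = new mysqli('db-host', 'user', 'pass', 'schema');
$rows = array();
$res = $db->query('SELECT id, name FROM users'); // placeholder query
while ($row = $res->fetch_assoc()) {
    $rows[] = $row; // copy everything we need out of MySQL first
}
$db->close(); // free the MySQL thread right away
// ... long-running processing of $rows happens here, with no DB connection ...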
Another thing that I often see when there is some load on the server:
There are more and more requests coming to Apache
which means many pages to generate
Each PHP script, in order to generate a page, connects to the DB and does some queries
These queries take more and more time, as the load on the DB server increases
Which means more processes keep stacking up
A solution that can help is to reduce the time your queries take -- optimizing the longest ones.
The above solutions, like running the query
SET session wait_timeout=600;
will only work until MySQL is restarted. For a persistent solution, edit my.cnf and add the following under [mysqld]:
wait_timeout=300
interactive_timeout = 300
Where 300 is the number of seconds you want.
Increasing the max_connections variable will not solve the problem.
We were experiencing the same situation on our servers. This is what happens:
A user opens a page/view that connects to the database and queries it; the queries are not yet finished when the user leaves the page or moves to some other page.
So the connection that was opened remains open, and the number of connections keeps increasing if more users connect to the DB and do something similar.
You can lower MySQL's interactive_timeout, which is 28800 (8 hours) by default, to 1 hour:
SET interactive_timeout=3600;
Before increasing the max_connections variable, you should check how many non-interactive connections you have by running the SHOW PROCESSLIST command.
If you have many sleeping connections, you should decrease the value of the wait_timeout variable to close non-interactive connections after they have waited for some time.
To show the wait_timeout value:
SHOW SESSION VARIABLES LIKE 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout | 28800 |
+---------------+-------+
The value is in seconds, meaning that a non-interactive connection can stay up for as long as 8 hours.
To change the value of the wait_timeout variable:
SET session wait_timeout=600;
Query OK, 0 rows affected (0.00 sec)
After 10 minutes, if the connection is still sleeping, MySQL or MariaDB drops it.
Alright, so after trying every solution out there to solve this exact issue on a WordPress blog, I might have done something either really stupid or genius... With no idea why there was an increase in MySQL connections, I used the PHP script below in my header to kill all sleeping processes.
So every visitor to my site helps in killing the sleeping processes.
<?php
$result = mysql_query("SHOW PROCESSLIST");
while ($myrow = mysql_fetch_assoc($result)) {
    if ($myrow['Command'] == "Sleep") {
        mysql_query("KILL {$myrow['Id']}");
    }
}
?>
So I was running 300 PHP processes simultaneously and was getting a rate of between 60 and 90 per second (my process involves 3 queries). I upped it to 400 and the rate fell to about 40-50 per second. I dropped it to 200 and am back to between 60 and 90!
So my advice to anyone with this problem is to experiment with running fewer processes rather than more, and see if it improves. Less memory and CPU will be used, so the processes that do run will have greater capacity and the speed may improve.
Look into persistent MySQL connections: I connected using mysqli("p:$HOSTNAME") and had Laravel database.php settings like:
'options' => [
    PDO::ATTR_PERSISTENT => true,
],
For some reason, for some time, I believed it was smart to keep connections persistent, as I thought my applications would share them. They didn't. They just opened connections and left them unused until they timed out.
After I gave up my mad dream of persistence, I went from 120-150+ connections from several hosts to only a handful, most of the time actually just one (the one that runs SHOW PROCESSLIST).
