Memcached Performance - php

I feel that Memcached on my website is slower than MySQL queries. Please see the screenshot of my website's performance that I got from New Relic.
I don't know how to optimize Memcached on my CentOS server. Please see the configuration and performance screenshots of Memcached. I feel the number of total connections is high.
Please see the live stats below.
The following is how I use Memcached in my website
<?php
class dataCache {
    // Store a value in Memcache, connecting lazily on first use.
    function setMemData($key, $var, $flag = false, $expire = 36000) {
        global $memcache;
        if (!$memcache) {
            $memcache = new Memcache;
            $memcache->connect('localhost', 11211) or die("Could not connect");
        }
        if ($result = $memcache->set($key, $var, $flag, time() + $expire)) {
            return TRUE;
        } else {
            return FALSE;
        }
    }

    // Fetch a value from Memcache; returns FALSE on a miss.
    function getMemData($key) {
        global $memcache;
        if (!$memcache) {
            $memcache = new Memcache;
            $memcache->connect('localhost', 11211) or die("Could not connect");
        }
        if ($data = $memcache->get($key)) {
            return $data;
        } else {
            return FALSE;
        }
    }

    // Delete a key from Memcache.
    function delMemData($key) {
        global $memcache;
        if (!$memcache) {
            $memcache = new Memcache;
            $memcache->connect('localhost', 11211) or die("Could not connect");
        }
        if ($result = $memcache->delete($key)) {
            return TRUE;
        } else {
            return FALSE;
        }
    }
}
And at the end of each PHP page, I close the connection like this:
if (isset($memcache)) {
    $memcache->close();
}
Do I need more memory for this server? How can I reduce the number of connections? Any suggestions to improve the performance?
--------------EDIT------------------------
As the comments mentioned, the current connections are 9 and the total connections are 671731, so the number of connections might not be a problem.

Here are a few ways to make this go faster.
Your total connections figure is actually how many connections have ever been created to memcached, not how many are open right now.
First, use the memcached PHP extension, NOT the memcache extension. They are entirely different, and the memcache extension is pretty much deprecated to death. The memcached extension uses libmemcached, which is considerably faster and has better features (like the binary protocol, better timeouts, and UDP).
Second, use persistent connections. For your workload these should be entirely sufficient, and they reduce the cost of constantly reconnecting to memcached.
Third, use multi get/set/delete/etc. If you know you will need 10 memcache keys in your request, ask for all 10 at once. This can give you a big performance increase if you are looping over memcache requests at any point.
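As an illustration (not code from the question), here is a minimal sketch of the second and third suggestions using the memcached extension; the persistent ID, server address, and key names are placeholders:
<?php
// Passing a persistent ID keeps the underlying libmemcached connection alive across requests.
$mc = new Memcached('my_pool');
if (!count($mc->getServerList())) {
    // Only add servers once; a persistent instance remembers its server list.
    $mc->addServer('localhost', 11211);
}
// Fetch several keys in one round trip instead of looping over get().
$values = $mc->getMulti(array('user:42', 'user:42:profile', 'site:settings'));
// setMulti() stores several key/value pairs at once; 3600 is the TTL in seconds.
$mc->setMulti(array('a' => 1, 'b' => 2), 3600);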
Another caveat is that New Relic is historically bad at reporting time spent in memcache. It can misreport time spent in PHP as time spent in memcache because of how they instrument the Zend engine.
As for why your memcache and MySQL performance are so close: you are most likely running rather simple queries, so the time spent on memcache and on MySQL is comparable. Memcache is extremely fast, but it is also a network hop, and that hop is usually the largest part of the time spent waiting for memcache. You could try running memcache locally, or use APC instead of memcache if you really only have one server.
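If you do go the single-server route, here is a rough sketch of that in-process approach using APCu (the userland successor to the old APC cache); the key, TTL, and loader function are made up for the example:
<?php
// Everything stays in local shared memory, so there is no network hop at all.
if (false === ($settings = apcu_fetch('site:settings'))) {
    $settings = load_settings_from_db(); // hypothetical loader for the example
    apcu_store('site:settings', $settings, 300); // cache for 5 minutes
}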

Related

"Could not connect. Too many connections" Error in MySQLi

I have the following code
function openDBConn($params){
    $conn_mode = $params['conn_mode'];
    $db_conn = $params['db_conn'];
    //create connections
    if(empty($db_conn->info)) {
        $db_conn = new mysqli("localhost", $user, $password, "database");
        $db_conn->set_charset('UTF8');
        $mysqli_error = $db_conn->connect_error;
    }
    if($mysqli_error !== NULL){
        die('Could not connect <br/>'. $mysqli_error);
    }else{
        return $db_conn;
    }
}

//close db connection
function closeDBConn( $params ){
    $db_conn = $params['db_conn'];
    $db_conn->close;
}

//used as below
$db_conn = openDBConn();
save_new_post( $post_txt, $db_conn );
closeDBConn( array('db_conn'=>$db_conn) );
From time to time, I get the "Could not connect. Too many connections" error.
This tends to happen when I have Google bot scanning my website.
This problem seems to have started ever since I upgraded from MySQL to MySQLi.
Is there any advice on how to ensure all connections are closed?
Thanks
You need to increase the number of connections allowed by your MySQL server (the default is only 100, and typically each page load consumes one connection).
Edit /etc/my.cnf
max_connections = 250
Then restart MySQL
service mysqld restart
http://major.io/2007/01/24/increase-mysql-connection-limit/
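If you want to confirm the current limit and how close you are getting to it, a quick check can be run from PHP; SHOW VARIABLES and SHOW STATUS are standard MySQL statements, and the credentials below are placeholders:
<?php
// Read-only queries to see the configured limit and the peak usage so far.
$db = new mysqli("localhost", "user", "password");
$row = $db->query("SHOW VARIABLES LIKE 'max_connections'")->fetch_assoc();
echo "max_connections: " . $row['Value'] . "\n";
$row = $db->query("SHOW STATUS LIKE 'Max_used_connections'")->fetch_assoc();
echo "peak connections used: " . $row['Value'] . "\n";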
Some hosters have a hard limit on how many open database connections you are allowed to have. You may want to contact your hoster to find out how many you are allowed to open. For websites with high traffic, more connections can be helpful.
Do you have access to the server directly, or is it a hosted solution?
If you have direct access, you can check the MySQL config files to see how many connections are allowed and increase the limit.
If you don't, you might want to contact your web host about increasing the limit and see if they will comply.

How can I get php pdo code to keep retrying to connect if there are too many open connections?

I have an issue that has only cropped up now. I am on a shared web hosting plan that has a maximum of 10 concurrent database connections. The web app has dozens of queries, some PDO, some mysql_*.
Loading one page in particular peaks at 5-6 concurrent connections, meaning it takes a minimum of 2 users loading it at the same time to produce an error for one or both of them.
I know this is inefficient, and I'm sure I can cut that down quite a bit, but my idea at the moment is to move the PDO code into a function and just pass in a query string and an array of variables, then have it return an array (partly to tidy my code).
THE ACTUAL QUESTION:
How can I get this function to keep retrying until it manages to execute, and hold up the script that called it (and any script that might have called that one) until it manages to execute and return its data? I don't want things executing out of order; I am happy with code being delayed for a second or so during peak times.
Since someone will ask for code, here's what I do at the moment. I have this in a file on its own so I have a central place to change the connection parameters. The if statement is merely to remove the need to keep changing the parameters when I switch from my test server to the live server:
$dbtype = "mysql";
$server_addr = $_SERVER['SERVER_ADDR'];
if ($server_addr == '192.168.1.10') {
    $dbhost = "localhost";
} else {
    $dbhost = "xxxxx.xxxxx.xxxxx.co.nz";
}
$dbname = "mydatabase";
$dbuser = "user";
$dbpass = "supersecretpassword";
I 'include' that file at the top of a function
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
Then I run commands like this, all on the one connection:
$sql = "select * from tbl_sub_cargo_cap where sub_model_sk = ?";
$capq = $pdo_conn->prepare($sql);
$capq->execute(array($sk_to_load));
while ($caprow = $capq->fetch(PDO::FETCH_ASSOC)) {
    //stuff
}
You shouldn't need 5-6 concurrent connections for a single page; each page should really only ever use one connection. I'd try to re-architect whatever part of your application is causing multiple connections on a single page.
However, you should be able to catch a PDOException when the connection fails (see the documentation on connection management), and then retry some number of times.
A quick example:
<?php
$retries = 3;
while ($retries > 0)
{
    try
    {
        $dbh = new PDO("mysql:host=localhost;dbname=blahblah", $user, $pass);
        // Do query, etc.
        $retries = 0;
    }
    catch (PDOException $e)
    {
        // Should probably check $e is a connection error, could be a query error!
        echo "Something went wrong, retrying...";
        $retries--;
        usleep(500000); // Wait 0.5s between retries.
    }
}
10 concurrent connections is A LOT. It can serve 10-15 online users easily.
Heavy effort is needed to exhaust them.
So there is something wrong with your code.
There are 2 main reasons for it:
slow queries take too much time, so serving one hit holds one MySQL connection for too long;
multiple connections are opened from every script.
The former has to be investigated, but the latter is simple:
Do not mix mysql_* and PDO in one script: you are opening 2 connections at a time.
When using PDO, open the connection only once and then use it throughout your code, as in the sketch below.
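As an illustration only (the answer suggests global $pdo; the static helper shown here is one alternative way to get the same single shared connection), with placeholder credentials taken from the question:
<?php
// Create the PDO object once and hand back the same instance to every caller.
function getPDO() {
    static $pdo = null; // persists between calls within the same request
    if ($pdo === null) {
        $pdo = new PDO("mysql:host=localhost;dbname=mydatabase", "user", "supersecretpassword");
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }
    return $pdo;
}
// Every query in the script now shares one connection.
$capq = getPDO()->prepare("select * from tbl_sub_cargo_cap where sub_model_sk = ?");
$capq->execute(array($sk_to_load));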
Reducing the number of connections in one script is the only way to go.
If you have multiple instances of the PDO class in your code, you would need to add the timeout-handling code you want to every call, so heavy code rewriting is required anyway.
Replace these new instances with global $pdo; instead. It will take the same amount of time, but it will be a permanent solution, not a temporary patch like the one you are asking for.
Please be sensible.
PHP automatically closes all the connections at the end of the script, so you don't have to close them manually.
Having only one connection throughout one script is a common practice used by developers all around the world. You can use it without any doubts.
If you have a transaction open and want to log something to the database at the same time, you sometimes need 2 connections in one script.

PHP memcache connect

I have a page where a few thousand users can hit a method at the same time. I have the following code where I connect every time. Since this will go to a separate memcache server, will this cause slowdowns? Is there a way to connect just once and reuse that connection? Do I have to close the connection after every request?
$primary_connected = $memcache_primary->connect($primary_memcache_server, 11211);
if($primary_connected){
    $data = $memcache_primary->get($key);
    if ($data != NULL) {
        return $data;
    }
}
else{
    /////Get data from database
}
If you are using the PHP Memcached class (the one with the d on the end, not Memcache) then yes, you can open a persistent connection.
You can pass a persistent ID to the constructor, which will open a persistent connection; subsequent instances that use the same persistent ID will reuse that connection.
$memcached = new Memcached('method_name_or_persistent_identifier');
$memcached->addServer(...);
// use it
Hope that helps.
See Memcached::__construct() for more details.

Memcache + Mysqli Couldn't fetch mysqli_result

I'm trying to add some speed to a project I'm working on using memcache. However, I'm having a problem with the following:
public function sql_query($query){
    $memcache = new Memcache;
    $memcache->connect('localhost', 11211) or die ("Could not connect");
    $memkey = md5($query);
    $get_result = $memcache->get($memkey);
    if ($get_result){
        return $get_result;
    } else {
        $q = $this->mysqli->query($query);
        $memcache->set($memkey, $q, false, 120) or die ("Failed to save data at the server");
        if ($this->mysqli->error){
            App::Error($this->mysqli->error);
        }
        return $q;
    }
}
It is certainly putting something into memcached, but I get this error when pulling it back out and using it as I would if it hadn't come from the cache:
"Warning: mysqli_result::fetch_assoc(): Couldn't fetch mysqli_result"
Am I missing something? I'm sure I've used memcache before in a similar way without any issues.
You cannot store all data types in memcached (or other data stores); one kind that most often does not work is resources, for example a database link or a result identifier.
You stored such a value, and on another request you pulled it out again. But since the database is already in a different state, that resource no longer works, and you get the error.
Instead you need to store the result data from the query; that's static data you can put into memcache, as sketched below.
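A minimal sketch of that fix, assuming the same $memcache and $this->mysqli objects from the question and that the mysqlnd driver is available for fetch_all():
$memkey = md5($query);
$rows = $memcache->get($memkey);
if ($rows === false) {
    // Cache miss: run the query and convert the mysqli_result into a plain array...
    $result = $this->mysqli->query($query);
    $rows = $result->fetch_all(MYSQLI_ASSOC);
    // ...then cache the array, which serializes cleanly, unlike a result resource.
    $memcache->set($memkey, $rows, false, 120);
}
return $rows;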

php memcache client performance

I am benchmarking an app I made which uses repcached (memcached with replication) for storing objects and taking some load off the db.
While benchmarking the index page I run
ab -c 400 -n 5000 http://mysite
When I use just one memcache server with
list($server, $port) = explode(':', $settings->memcached_servers[0]);
$this->link = new Memcache();
$this->link->connect($server, (int) $port);
I get 1000 reqs/sec
When I put more than one server in the pool with
$this->link = new Memcache();
foreach($settings->memcached_servers as $server){
    list($server, $port) = explode(':', $server);
    $this->link->addServer($server, (int) $port, 0, 10);
}
I only get 300 reqs/sec
The difference is huge
Any idea why?
I really need to have 2 servers for redundancy, but performance is also crucial.
Is it normal to have such a huge difference?
Basically, the index page makes just 2 calls to the DB, each getting just one row, so the rows are cached while running the test.
But I am surprised to see memcached fall so much behind in the test.
Well, it seems that the culprit was the weight parameter of addServer.
Changing it to 1 for the first server and 2 for the second did it, and performance is now the same.
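For reference, a sketch of the pool setup with explicit weights; in Memcache::addServer() the third argument is the persistent flag and the fourth is the weight (the host names here are placeholders):
$this->link = new Memcache();
// Arguments: host, port, persistent, weight.
$this->link->addServer('memcache1.example.com', 11211, true, 1);
$this->link->addServer('memcache2.example.com', 11211, true, 2);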
