I have a page where a few thousand users can hit a method at the same time. I currently have the following code, where I connect every time. Since this goes to a separate memcache server, will this cause slowdowns? Is there a way to connect just once and reuse that connection? Do I have to close the connection after every request?
$primary_connected = $memcache_primary->connect($primary_memcache_server, 11211);
if ($primary_connected) {
    $data = $memcache_primary->get($key);
    if ($data != NULL) {
        return $data;
    }
} else {
    // Get data from database
}
If you are using the PHP memcached class (the one with the d on the end, not memcache), then yes, you can open a persistent connection.
You can pass a persistent ID to the constructor, which will open a persistent connection; subsequent instances constructed with the same persistent ID will reuse that connection.
$memcached = new Memcached('method_name_or_persistent_identifier');
$memcached->addServer(...);
// use it
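A fuller sketch of the same idea (the persistent ID, host, and key below are illustrative, not from your code):
// 'cache_pool' is an arbitrary persistent ID; all requests that should
// share the connection must use the same one.
$memcached = new Memcached('cache_pool');
// With a persistent ID the server list is cached too, so guard against
// re-adding the same server on every request:
if (!count($memcached->getServerList())) {
    $memcached->addServer('primary_memcache_server_host', 11211); // illustrative host
}
$key = 'example_key'; // illustrative key
$data = $memcached->get($key);
if ($data === false && $memcached->getResultCode() === Memcached::RES_NOTFOUND) {
    // Cache miss: fall back to the database
}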
Hope that helps.
See Memcached::__construct() for more details.
I'm having an issue using the Couchbase SDK for PHP.
If I try to establish a connection, it takes ages...
Here is some code:
$old = microtime(true);
$db = new Couchbase(...);
echo microtime(true) - $old . "\n";
$old = microtime(true);
$db->get(...);
echo microtime(true) - $old;
The output is this:
2.2835459709167 (couchbase establishing)
0.0011978149414062 (get command)
Why does the connection to Couchbase take so long?
The initial connection does take a while, but there is a flag for using persistent connections with the Couchbase() object. It's the last parameter. Generally, it's a good idea to set it to true.
The project is considering setting it to true by default in a future release.
Check which value you're using for the server host. If you use, for example:
$cb = new Couchbase("couchbase_hostname:8091", "user", "pass", "default" , true);
the issue may be DNS resolution for "couchbase_hostname". Try passing the host IP instead. You didn't paste the whole script, so I can't tell which value you're passing.
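If you want to rule DNS in or out, timing the lookup on its own is a quick test (the hostname below is the placeholder from the example above):
// Time just the DNS lookup; if this accounts for most of the 2.28s,
// switching to the IP (or fixing the hosts file) will solve it.
$old = microtime(true);
$ip = gethostbyname('couchbase_hostname'); // placeholder hostname
echo 'DNS lookup: ' . (microtime(true) - $old) . "s, resolved to $ip\n";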
I have an issue that has only cropped up now. I am on a shared web hosting plan that has a maximum of 10 concurrent database connections. The web app has dozens of queries, some PDO, some mysql_*.
Loading one page in particular peaks at 5-6 concurrent connections, meaning it takes only 2 users loading it at the same time to throw an error for one or both of them.
I know this is inefficient and I'm sure I can cut that down quite a bit, but my idea at the moment is to move the PDO code into a function and just pass in a query string and an array of variables, then have it return an array (partly to tidy my code).
THE ACTUAL QUESTION:
How can I get this function to keep retrying until it manages to execute, and hold up the script that called it (and any script that might have called that one) until it manages to execute and return its data? I don't want things executing out of order; I am happy with code being delayed for a second or so during peak times.
Since someone will ask for code, here's what I do at the moment. I have this in a file on its own so I have a central place to change connection parameters. The if statement is merely to remove the need to continuously change the parameters when I switch from my test server to the live server.
$dbtype = "mysql";
$server_addr = $_SERVER['SERVER_ADDR'];
if ($server_addr == '192.168.1.10') {
$dbhost = "localhost";
} else {
$dbhost = "xxxxx.xxxxx.xxxxx.co.nz";
}
$dbname = "mydatabase";
$dbuser = "user";
$dbpass = "supersecretpassword";
I 'include' that file at the top of a function
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
then run commands like this all on the one connection
$sql = "select * from tbl_sub_cargo_cap where sub_model_sk = ?";
$capq = $pdo_conn->prepare($sql);
$capq->execute(array($sk_to_load));
while ($caprow = $capq->fetch(PDO::FETCH_ASSOC)) {
//stuff
}
You shouldn't need 5-6 concurrent connections for a single page; each page should really only ever use 1 connection. I'd try to re-architect whatever part of your application is causing multiple connections on a single page.
However, you should be able to catch a PDOException when the connection fails (documentation on connection management), and then retry some number of times.
A quick example,
<?php
$retries = 3;
while ($retries > 0)
{
    try
    {
        $dbh = new PDO("mysql:host=localhost;dbname=blahblah", $user, $pass);
        // Do query, etc.
        break; // Success, stop retrying.
    }
    catch (PDOException $e)
    {
        // Should probably check $e is a connection error, could be a query error!
        echo "Something went wrong, retrying...";
        $retries--;
        usleep(500000); // Wait 0.5s between retries (usleep() takes microseconds).
    }
}
10 concurrent connections is A LOT. It can easily serve 10-15 simultaneous users.
It takes heavy traffic to exhaust them, so there is something wrong with your code.
There are 2 main reasons for it:
slow queries take too much time and thus serving one hit uses one mysql connection for too long.
multiple connections opened from every script.
The former has to be investigated, but the latter is simple:
Do not mix mysql_ and PDO in one script: you are opening 2 connections at a time.
When using PDO, open connection only once and then use it throughout your code.
Reducing the number of connections in one script is the only way to go.
If you have multiple instances of the PDO class in your code, you will need to add the timeout-handling code you want to every call, so heavy code rewriting is required anyway.
Replace these new instances with global $pdo; instead. It will take the same amount of time, but it will be a permanent solution, not a temporary patch.
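A minimal sketch of what that looks like (the function name is illustrative; the include file and query are the ones from your question):
// Runs once per request, near the top of the entry script:
include 'db_connection_params.php';
$pdo = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);

// Any function that needs the database reuses the same connection:
function load_cargo_caps($sk_to_load)
{
    global $pdo; // one shared connection instead of a new PDO per call
    $capq = $pdo->prepare("select * from tbl_sub_cargo_cap where sub_model_sk = ?");
    $capq->execute(array($sk_to_load));
    return $capq->fetchAll(PDO::FETCH_ASSOC);
}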
Please be sensible.
PHP automatically closes all the connections at the end of the script; you don't have to care about closing them manually.
Having only one connection throughout one script is common practice, used by developers all around the world. You can use it without any doubts. Just use it.
One caveat: if you have a transaction open and want to log something in the database, you sometimes need 2 connections in one script, as sketched below.
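For example (a hedged sketch; the table names are illustrative):
// Main connection runs the transaction; a second connection writes the
// log entry so it survives even if the transaction is rolled back.
$pdo    = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
$logPdo = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass); // second connection
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    $pdo->exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1"); // illustrative query
    $pdo->commit();
} catch (PDOException $e) {
    $pdo->rollBack();
    // The log INSERT goes through the second connection, so it is not
    // swallowed by the rollback above.
    $logPdo->prepare("INSERT INTO error_log (message) VALUES (?)")
           ->execute(array($e->getMessage()));
}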
I'm currently writing a PHP application and I noticed that my page loads kind of slowly: it takes about 2 seconds (2.0515811443329 to be exact).
I've tracked down the bottleneck: it's the part where I create a PDO connection to my MySQL database.
My connect() method doesn't do any exotic stuff. It simply looks like this:
public function connect ( $database, $host, $username, $password )
{
try
{
$this->db = new \PDO("mysql:dbname=".$database.";host=".$host, $username, $password);
if ( !$this->db )
{
throw new \Exception('Failed to connect to the database!');
}
$this->db->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
}
catch ( \Exception $e )
{
echo '<strong>Exception: </strong>'.$e->getMessage();
return false;
}
return true;
}
So when I comment out the call to the connect() method, my page loads in 0.035506010055542 seconds.
This is a huge difference. I can imagine that creating a database connection takes some time, but more than 1.5 seconds? I'm not sure if this is normal.
If it is normal for it to take that long, is there a way to store the database connection, like putting it in a session? Actually, as far as I know, storing it in a session isn't possible, but it would be the ideal solution: storing the connection somewhere until the user closes his browser.
Anyway, is there a problem with my PDO / MySQL setup? And can I simply store the connection resource somehow, so that I don't have to reconnect to my database every time for every new page?
PS: I'm doing this all on localhost (Windows).
You're probably making the connection with 'localhost' as the address. Try changing that to '127.0.0.1'; that should fix the problem. On Windows, 'localhost' is often resolved to the IPv6 address ::1 first, and the fallback to IPv4 is what costs the extra time.
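In your connect() method that is a one-line change; everything else stays the same:
// '127.0.0.1' skips the name lookup (and the IPv6 ::1 attempt) entirely
$this->db = new \PDO("mysql:dbname=".$database.";host=127.0.0.1", $username, $password);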
You can create a persistent connection to database using PDO. From the manual
Many web applications will benefit from making persistent connections to database servers. Persistent connections are not closed at the end of the script, but are cached and re-used when another script requests a connection using the same credentials. The persistent connection cache allows you to avoid the overhead of establishing a new connection every time a script needs to talk to a database, resulting in a faster web application.
An example:
<?php
$dbh = new PDO('mysql:host=localhost;dbname=test', $user, $pass, array(
PDO::ATTR_PERSISTENT => true
));
?>
Just want to apologise in advance for writing so much text. Here is the problem: I use a persistent connection to connect to the database, with a wait_timeout of 60 seconds, and I store session data in a MySQL table. The problem I have is that the sessions just don't seem to use their own rows; each page refresh keeps starting a new session instead of using the old one. What is more, the persistent connections mentioned earlier keep starting new processes instead of using their own as they should. Since these two problems seem to have the same origin, I decided to put them here together. My PHP code is the following:
mysql_pconnect('localhost', 'root') or die('Could not connect: ' . mysql_error());
mysql_set_charset('utf8');
mysql_select_db('azgoth') or die('Could not choose DB: ' . mysql_error());
session_set_cookie_params(3600,'/','www.azgoth',FALSE,TRUE);
session_set_save_handler('_open','_close','_read','_write','_destroy','_clean');
function _open() {
    return true;
}
function _close() {
    return true;
}
function _read($id) {
    $id = mysql_real_escape_string($id);
    if ($result = mysql_query("SELECT data FROM sess_en WHERE id='$id'")) {
        if (mysql_num_rows($result)) {
            $record = mysql_fetch_assoc($result);
            return $record['data'];
        }
    }
    return '';
}
function _write($id, $data) {
    $access = $_SERVER['REQUEST_TIME'];
    $id = mysql_real_escape_string($id);
    $access = mysql_real_escape_string($access);
    $data = mysql_real_escape_string($data);
    return mysql_query("REPLACE INTO sess_en VALUES('$id', '$access', '$data')");
}
function _destroy($id) {
    $id = mysql_real_escape_string($id);
    return mysql_query("DELETE FROM sess_en WHERE id='$id'");
}
function _clean($max) {
    $old = $_SERVER['REQUEST_TIME'] - $max;
    $old = mysql_real_escape_string($old);
    return mysql_query("DELETE FROM sess_en WHERE access<'$old'");
}
session_start();
Any ideas on what could be causing this issue?
EDIT: I thought at first that it was just in my head, but I can now confirm it: this weird thing keeps happening randomly. It usually does, but sometimes (rarely, in fact) it doesn't.
The new-session-every-time problem is probably caused by the domain parameter you passed to session_set_cookie_params(). You passed www.azgoth, presumably because you have more than one top-level domain (TLD) and you want the cookies to be shared across all of them. This is not allowed: with what you have set, the TLD would be azgoth, which is not (currently) a real TLD; the cookie is therefore invalid and will never be sent back to the server, ergo a new session is started every time.
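Assuming the site's real registrable domain were, say, azgoth.com (the question doesn't show the actual TLD), a valid call would look like this:
// '.azgoth.com' is a registrable domain plus its subdomains; a bare
// 'azgoth' is treated as a TLD and the cookie is rejected by browsers.
session_set_cookie_params(3600, '/', '.azgoth.com', false, true);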
The persistent DB problem is probably server configuration related. The PHP manual states, on the page for mysql_pconnect():
Note, that these kind of links only work if you are using a module version of PHP. See the Persistent Database Connections section for more information.
...and...
Using persistent connections can require a bit of tuning of your Apache and MySQL configurations to ensure that you do not exceed the number of connections allowed by MySQL.
I have a Master - Slave setup for a web application written in PHP. I have a pool of slaves I use for reading, and a Master that is used for writes (and for reads if a write has been sent during this request). I would like to incorporate an automated system for removing crashed servers from the read pool. Currently I am using:
foreach ($readers as $reader)
{
    $fp = @fsockopen($reader['host'], 3306, $errno, $errstr, 1);
    if (!$fp)
    {
        // Remove from pool
    }
    unset($fp);
}
My primary question: is there a more reliable method? I have had quite a few false positives (and vice versa) because this doesn't actually check for a MySQL server, just for something listening on port 3306. Is there a way to check for a MySQL server without raising an exception, which is the behaviour of the PDO and MySQLi extensions in PHP?
You could just use mysql_connect() and check the result for false, and close the connection right away on success. You can make a dummy account with no privileges for that if you like.
That's really the only reliable way, especially if you want to distinguish a running MySQL server from any other random process listening on port 3306.
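A hedged sketch of that check, adapting the loop from the question (the dedicated no-privilege account credentials are illustrative):
foreach ($readers as $reader)
{
    // A real MySQL server will complete the handshake; a random process
    // listening on 3306 will not.
    $link = @mysql_connect($reader['host'] . ':3306', 'healthcheck', 'healthcheck_pass');
    if ($link === false)
    {
        // Remove from pool
    }
    else
    {
        mysql_close($link);
    }
}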
You could use mysql_ping() to check whether a DB connection you currently have open is still alive.
Here is the example posted at http://www.php.net/manual/en/function.mysql-ping.php
<?php
set_time_limit(0);
$conn = mysql_connect('localhost', 'mysqluser', 'mypass');
$db = mysql_select_db('mydb');
/* Assuming this query will take a long time */
$result = mysql_query($sql);
if (!$result) {
echo 'Query #1 failed, exiting.';
exit;
}
/* Make sure the connection is still alive, if not, try to reconnect */
if (!mysql_ping($conn)) {
echo 'Lost connection, exiting after query #1';
exit;
}
mysql_free_result($result);
/* So the connection is still alive, let's run another query */
$result2 = mysql_query($sql2);
?>
The best way to check whether any service is alive is to actually use it. So for MySQL, try to connect and execute some fast query; for a web server, try to fetch some file; for PHP, try to fetch some simple script...
For a MySQL master/slave setup, one of the solutions is to actually check the state of the replication. You can check how many transactions the slave is behind the master and decide to stop using that slave when/while it has old data. (I don't run replication myself, but I think you need to compare the variables Read_Master_Log_Pos and Relay_Log_Pos.) A sketch of this idea follows below.
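A hedged sketch using SHOW SLAVE STATUS, with Seconds_Behind_Master as a simpler alternative to comparing log positions (the monitoring account and the lag threshold are illustrative; the account needs the REPLICATION CLIENT privilege):
$link = @mysql_connect($reader['host'], 'monitor', 'monitor_pass');
if ($link && ($res = mysql_query('SHOW SLAVE STATUS', $link))) {
    $status = mysql_fetch_assoc($res);
    if ($status['Slave_IO_Running'] !== 'Yes'
        || $status['Slave_SQL_Running'] !== 'Yes'
        || (int)$status['Seconds_Behind_Master'] > 30) { // 30s is an arbitrary cutoff
        // Slave is broken or lagging: remove it from the read pool
    }
    mysql_close($link);
} else {
    // Could not connect or query: remove from pool
}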