Mongo fatal error - php

I am using PHP and MongoDB. I want to fetch user details using userkey, which is a unique key. My Mongo query is:
$obj = $mongo->user;
$filter = array(
    'userkey' => $value
);
$exist = $obj->findone($filter);
When I execute this query, I get the following error:
Fatal error: Maximum execution time of 30 seconds exceeded in line 5
i.e. $exist = $obj->findone($filter); is the line that triggers the error.
How can I solve this problem?
Can someone please help?

Can you get any mongo queries to work, or is it just this one that fails? It could be something with your connection to the database. I believe that "findone()" will wait forever on a stalled connection. I would recommend using "find()" instead, and giving it a low timeout, like 100ms. This may help.
$cursor = $collection->find($query,$fields)->timeout(100);
$record = $cursor->getNext();
Obviously, you will want to wrap this in a try/catch block to capture the timeout exception. Be sure to use the latest mongo drivers (1.2.10 or better), because earlier versions of the php mongo drivers would segfault on a connection timeout.
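For illustration, a minimal sketch of that pattern with the legacy Mongo driver might look like the following; the database and collection names are placeholders, and it assumes $mongo is an already connected Mongo/MongoClient instance:
$collection = $mongo->selectDB('mydb')->selectCollection('user'); // placeholder names
$query = array('userkey' => $value);

try {
    // find() returns the cursor immediately; the timeout applies when
    // results are actually fetched (e.g. by getNext()).
    $cursor = $collection->find($query)->timeout(100); // 100 ms
    $record = $cursor->getNext();
    var_dump($record);
} catch (MongoCursorTimeoutException $e) {
    // the server did not answer within 100 ms
    echo "Query timed out: " . $e->getMessage() . "\n";
}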

Related

2006 MySQL server has gone away while saving object

I'm getting the error "General error: 2006 MySQL server has gone away" when saving an object.
I'm not going to paste the full code since it is far too complicated and I can explain with this example, but first a bit of context:
I'm executing a function from the command line using Phalcon tasks. The task creates an object from a model class, and that object calls a casperjs script that performs some actions on a web page. When it finishes, it saves some data; that is where I sometimes get "MySQL server has gone away", but only when the casperjs run takes a bit longer.
Task.php
function doSomeAction(){
    $object = Class::findFirstByName("test");
    $object->performActionOnWebPage();
}
In Class.php
function performActionOnWebPage(){
    $result = exec("timeout 30s casperjs somescript.js");
    if ($result) {
        $anotherObject = new AnotherClass();
        $anotherObject->value = $result->value;
        $anotherObject->save();
    }
}
It seems like the $anotherObject->save(); call is affected by how long exec("timeout 30s casperjs somescript.js"); takes to return, when it shouldn't be.
It's not a matter of the data being saved, since it sometimes fails and sometimes saves successfully with the same input; the only difference I see is the time casperjs takes to return a value.
It seems as if, for some reason, Phalcon keeps the MySQL connection open during the whole execution of the Class.php function, provoking the timeout when casperjs takes too long. Does this make any sense? Could you help me fix it or find a workaround?
The problem seems to be that either you are trying to fetch more data in a single packet than your MySQL config allows, or your wait_timeout value is not set appropriately for what your code requires.
Check your wait_timeout and max_allowed_packet values; you can check with the commands below:
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
Then increase these values as needed in your my.cnf (Linux) or my.ini (Windows) config file and restart the MySQL service.
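For illustration only, the corresponding my.cnf fragment might look like this; the exact values below are assumptions and should be tuned to your data and workload:
[mysqld]
# how long an idle connection may live (seconds); 28800 is the MySQL default
wait_timeout       = 28800
# maximum size of a single query/result packet
max_allowed_packet = 64M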

PHP/MongoDB: find() is ignoring timeout setting

Setting the MongoDB global timeout etc. is still ignored by find() queries in my PHP script.
I'd like a findOne({...}) or find({...}) lookup to wait at most 20 ms for the DB server before timing out.
How can I make sure that PHP does not treat this setting as a soft limit? It is still ignored, and answers are processed even 5 seconds later.
Is this a PHP Mongo driver bug?
Example:
MongoCursor::$timeout = 20;
$nosql_server = new Mongo('mongodb://user:pw@' . implode(",", $arr_replicas), array("replicaSet" => "gmt", "timeout" => 10)) OR troubles("too slow to connect");
$nosql_db = $nosql_server->selectDB('aDB');
$nosql_collection_mcol = $nosql_db->mcol;
$testFind = $nosql_collection_mcol->find(array('crit' => 123));
// If PHP honoured MongoCursor::$timeout, I'd expect the previous line to be skipped,
// or a Mongo timeout exception to be thrown, if the DB does not have the find result
// cursor ready within 20 ms.
// However, I reach this line only after seconds, without an exception and without the
// previous line being skipped, whenever the DB has some lock or delay.
In the PHP documentation for MongoCursor::$timeout, the following is given as the explanation of the cursor timeout:
Causes methods that fetch results to throw a
MongoCursorTimeoutException if the query takes longer than the
specified number of milliseconds.
I believe that the timeout is referring to the operations performed on the cursor (e.g. getNext()).
Do not do this:
MongoCursor::$timeout=20;
That sets a static property, and it won't do you any good AFAIK.
What you need to realize is that in your code example, $testFind is the MongoCursor object. So, in the snippet you gave, you should add this after everything else in order to set the timeout on the $testFind cursor:
$testFind->timeout(100);
NOTE: If you want to deal with $testFind as an array, you need to do:
$testFindArray = iterator_to_array($testFind);
That one threw me for a loop for a while. Hope this helps someone.
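Putting it together, a minimal sketch with the legacy driver might look like this; the collection and criteria come from the question, and the 20 ms value is the asker's stated target:
try {
    $testFind = $nosql_collection_mcol->find(array('crit' => 123));
    $testFind->timeout(20);                        // per-cursor timeout in milliseconds
    $testFindArray = iterator_to_array($testFind); // results are actually fetched here
} catch (MongoCursorTimeoutException $e) {
    // the server did not deliver results within 20 ms
    troubles("find() timed out");
}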
Pay attention to the readPreference attribute; a short example of setting it follows the list below. The possible values are:
MongoClient::RP_PRIMARY
MongoClient::RP_PRIMARY_PREFERRED
MongoClient::RP_SECONDARY
MongoClient::RP_SECONDARY_PREFERRED
MongoClient::RP_NEAREST
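For example, with the legacy MongoClient driver the preference can be set on the connection or overridden per collection; the connection string and names below are taken from the question and are otherwise placeholders:
$client = new MongoClient(
    'mongodb://user:pw@' . implode(",", $arr_replicas),
    array(
        "replicaSet"     => "gmt",
        "readPreference" => MongoClient::RP_NEAREST, // read from the closest member
    )
);

// ...or override it for a single collection
$collection = $client->selectDB('aDB')->mcol;
$collection->setReadPreference(MongoClient::RP_SECONDARY_PREFERRED);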

Getting "Maximum execution time" error intermittently

I am facing this problem; firstly, I would like to say that I am very new to PHP and MySQL.
Fatal error: Maximum execution time of 30 seconds exceeded in
.........................\cdn\wished.php on line 3
I don't know what is wrong with line 3; it only gives the error sometimes. Here's my code:
<?php
// wished.php
$CheckQuery = mysql_query("SELECT * FROM users WHERE ownerid='$user->id'");
$wished = 0;
while ($row = mysql_fetch_assoc($CheckQuery))
{
    // echo $row['fname']."<br/>";
    $wished++;
}
echo $wished;
?>
It worked perfectly when I ran this on localhost with XAMPP. As soon as I hosted my app on my domain and used their database, it started giving this error.
Thanks everyone :)
The issue is that the SQL query is taking too long, and your production PHP configuration has the PHP script time limit set too low.
At the beginning of the script you can add more time to the PHP time limit:
http://php.net/manual/en/function.set-time-limit.php
set_time_limit(60);
for example, to allow 60 seconds in total (30 more than the default), or use 0 to let the PHP script continue running with no limit.
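A minimal placement sketch, assuming the wished.php file from the question:
<?php
// wished.php
set_time_limit(60); // must run before the long-running query starts

$CheckQuery = mysql_query("SELECT * FROM users WHERE ownerid='$user->id'");
// ... rest of the script unchanged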
If your production database is different than your development DB (and I'm assuming production has way more data) then it might be a really expensive call to get everything from the user table.

mysql Link to server lost, unable to reconnect

I have a MySQL daemon coded, and I use mysql_pconnect to connect to MySQL, but after a while the daemon stops functioning properly and I get the following error:
Link to server lost, unable to reconnect
What I do is the following:
while (true)
{
    $connect = mysql_pconnect(...);
    $db = mysql_select_db(...);
}
What can I do to prevent this? I need the MySQL connection to stay up for the whole duration of the daemon, which may be forever.
You have a few choices.
But just as a precursor, you should try to move away from relying on mysql_* and start using PDO instead.
Anyway...
When you open a MySQL connection, it will stay "available" until the wait timeout has expired. What you are doing is constantly creating a new connection (without ever closing the previous one). This means you will hit either the server's MySQL connection limit or your Unix box's socket limit very quickly.
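As a sketch of that point (not the final solution further down), the connection should be created once, outside the loop; the credentials and database name here are placeholders:
// connect once, before the daemon's main loop
$connect = mysql_pconnect($host, $user, $pass);
$db      = mysql_select_db('mydb', $connect);

while (true) {
    // do the daemon's work here, re-using $connect
}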
Wait Timeout
You should first check with your server to see what this timeout is set to.
You might want to consider increasing it in your my.cnf if it is terribly low. The default period is 28800 seconds (if I recall correctly).
You can check by issuing this query:
SHOW variables like "%wait_timeout%"
You can then change the value in your my.cnf to increase it, or you can also set it using:
SET @@GLOBAL.wait_timeout=288000
SELECT 1
Okay, now, with a reasonable timeout set, the "typical" way to make sure that you don't get the "MySQL server has gone away" message is to just do a SELECT 1. This will run another query and make sure that the connection is held open.
And, the benefit of using the PDO is that you can then catch PDOExceptions which might be thrown when a SELECT 1 fails.
This means, that you can then try and connect again, if an exception is thrown, and then check again. If you can't connect after that then you should probably kill your daemon.
Here is some code... you would clearly have to create your PDO object using a valid connection string.
// $pdo holds the connection and a PDO object.
// (This assumes PDO::ATTR_ERRMODE is set to PDO::ERRMODE_EXCEPTION so failures throw.)
try {
    $pdo->query("SELECT 1");
} catch (PDOException $e) {
    // MySQL has gone away.
    $pdo = null;
    $pdo = new PDO($connectionString);
    echo "Mysql has gone away - But we attempted to reconnect\n";
    try {
        $pdo->query("SELECT 1");
    } catch (PDOException $e) {
        echo "Mysql has failed twice - Kill Daemon\n";
        throw $e;
    }
}
This is the solution that I would probably take.
You can easily substitute the SELECT 1 with your own query that you actually want to use. This is just an example.

PHP MySQLi timeout on locked tables?

I have a weird problem with mysqli timeout options. Here you go:
I am using mysqli_init() and real_connect() in order to set MYSQLI_OPT_CONNECT_TIMEOUT:
$this->__mysqli = mysqli_init();
if (!$this->__mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 1))
    throw new Exception('Timeout settings failed');
$this->__mysqli->real_connect($host, $user, $pass, $db);
....
Then I run a query against a locked table (LOCK TABLES users WRITE) and it just hangs, ignoring all my settings, even:
set_time_limit(1);
ini_set('max_execution_time',1);
ini_set('default_socket_timeout',1);
ini_set('mysql.connect_timeout',1);
I understand why set_time_limit(1) and max_execution_time are ignored, but why are the other timeouts, and especially MYSQLI_OPT_CONNECT_TIMEOUT, ignored, and how do I solve this?
I am using PHP 5.3.1 on Windows and Linux boxes. Please help.
MYSQLI_OPT_CONNECT_TIMEOUT seems to configure the timeout at connect-time :
connection timeout in seconds
(supported on Windows with TCP/IP
since PHP 5.3.1)
Here, you are trying to execute a query on a locked table... which means you have a query that takes a lot of time (like forever); but you are already connected to the database.
So, what should be configured is not the connection timeout ; but some "query timeout".
Not sure how to set that "query timeout", though...
Maybe the MYSQLI_CLIENT_INTERACTIVE flag for mysqli_real_connect could help, in one way or another?
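If you want to try that suggestion, the flag goes in the last argument of real_connect(); this is only a sketch based on the question's code, and note that the flag mainly makes the server apply interactive_timeout instead of wait_timeout to the session:
$this->__mysqli = mysqli_init();
if (!$this->__mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 1))
    throw new Exception('Timeout settings failed');

// host, user, password, dbname, port, socket, flags
$this->__mysqli->real_connect($host, $user, $pass, $db, null, null, MYSQLI_CLIENT_INTERACTIVE);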
There's innodb_lock_wait_timeout. But as the name suggests it's only for InnoDB tables.
The timeout in seconds an InnoDB transaction may wait for a row lock before giving up. The default value is 50 seconds. A transaction that tries to access a row that is locked by another InnoDB transaction will hang for at most this many seconds before issuing the following error:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
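Depending on your MySQL version it can also be changed at runtime; a small sketch, assuming the $this->__mysqli connection from the question and an InnoDB table:
// give up waiting for a row lock after 5 seconds (InnoDB only);
// on older servers this variable may only be settable globally
$this->__mysqli->query("SET SESSION innodb_lock_wait_timeout = 5");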
In addition to Pascal MARTIN's answer:
PHP is sleeping until the query completes, so anything you've configured for PHP is ignored. If the query ever does return, then PHP will wake up and continue processing, at which point it will realise it has run out of execution time and end abruptly. Will it release the locks it has acquired? Maybe.
One solution would be to implement your own locking schema, e.g.
$qry = "UPDATE mydb.mylocks SET user='$pid' WHERE tablename='$table_to_lock' AND user IS NULL";
$basetime = time();
$nottimedout = 5;
do {
    mysql_query($qry);
    $locked = mysql_affected_rows();
    if (!$locked && $nottimedout--) sleep(1);
} while (!$locked && $nottimedout);
if ($nottimedout) {
    // do stuff here
}
mysql_query("UPDATE mydb.mylocks SET user=NULL WHERE tablename='$table_to_lock' AND user='$pid'");
I guess a neater solution would be to implement this as a trigger on the table - but my MySQL PL/SQL is a bit ropey.
C.
