I'm getting the error "General error: 2006 MySQL server has gone away" when saving an object.
I'm not going to paste the real code since it is way too complicated, and I can explain it with this example, but first a bit of context:
I'm executing a function via the command line using Phalcon tasks. The task creates an object from a Model class, and that object calls a CasperJS script that performs some actions on a web page; when it finishes, it saves some data. This is where I sometimes get "MySQL server has gone away", but only when the CasperJS script takes a bit longer.
Task.php
function doSomeAction()
{
    $object = Class::findFirstByName("test");
    $object->performActionOnWebPage();
}
In Class.php
function performActionOnWebPage()
{
    $result = exec("timeout 30s casperjs somescript.js");
    if ($result) {
        $anotherObject = new AnotherClass();
        $anotherObject->value = $result->value;
        $anotherObject->save();
    }
}
It seems like the $anotherObject->save(); call is affected by how long exec("timeout 30s casperjs somescript.js"); takes to return, when it shouldn't be.
It's not a matter of the data being saved, since it both fails and saves successfully with the same input; the only difference I see is the time CasperJS takes to return a value.
It seems as if, for some reason, Phalcon keeps the MySQL connection open during the whole execution of the function in Class.php, provoking the timeout when CasperJS takes too long. Does this make any sense? Could you help me fix it or find a workaround?
The problem seems to be that either you are trying to fetch more data in a single packet than your MySQL config allows, or your wait_timeout value is not set appropriately for what your code requires.
Check your wait_timeout and max_allowed_packet values; you can check them with the commands below:
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
Increase these values as needed in your my.cnf (Linux) or my.ini (Windows) config file and restart the MySQL service.
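If changing the server configuration is not an option, another possible workaround is to re-establish the connection on the PHP side after the long exec() call, right before saving. A rough sketch (it assumes the connection is registered as the "db" service in Phalcon's DI container, which may differ in your app):
use Phalcon\Di;

function performActionOnWebPage()
{
    $result = exec("timeout 30s casperjs somescript.js");

    if ($result) {
        // The connection sat idle for the whole CasperJS run and may have been
        // closed by the server (wait_timeout), so drop it and reconnect before saving.
        $db = Di::getDefault()->getShared('db');
        $db->close();
        $db->connect();

        $anotherObject = new AnotherClass();
        $anotherObject->value = $result; // note: exec() returns the last line of output as a string
        $anotherObject->save();
    }
}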
Related
I am using a timer function in MATLAB to continuously execute a certain script. Within this script, I am using urlread to retrieve data from web services, which works like a charm.
I am now trying to use urlread to execute a simple HTTP request within this script to insert data into a MySQL database. Thus, I simply specify the URL string and define the value to be passed to the PHP script.
Code within the script being executed in the timer function:
db_url = 'http://someurl/update.php?value=';
db_url = strcat(db_url,num2str(value));
urlread(db_url);
clear db_url
My problem is the following: when I run the timer, it works fine for one execution, but then stops, displaying the following error:
"Either this URL could not be parsed or the protocol is not supported."
What is going wrong? When I check my MySQL database, I see that one new row has been added, which means it generally works; it just won't execute multiple times within the timer.
Any idea what is going wrong? Many thanks in advance!
I figured out what the problem was. The value variable is an array that grows in size with each iteration. Thus, what I needed to do was specify value(end), like so:
db_url = 'http://someurl/update.php?value=';
db_url = strcat(db_url,num2str(value(end)));
urlread(db_url);
clear db_url
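For reference, a hypothetical sketch of what an update.php endpoint like this might look like on the PHP side (the DSN, credentials, table and column names are all assumptions):
<?php
// update.php - hypothetical endpoint; DSN, credentials and table name are assumptions
$value = isset($_GET['value']) ? $_GET['value'] : null;

if ($value === null || !is_numeric($value)) {
    http_response_code(400);
    exit('missing or invalid value');
}

$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// insert the value passed in the query string
$stmt = $pdo->prepare('INSERT INTO readings (value) VALUES (?)');
$stmt->execute(array($value));

echo 'ok';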
I coded a function to help me handle transactions with files in CodeIgniter.
Today I was trying this code:
function ($db_trans_func, $context) {
    if (is_callable($db_trans_func)) {
        $context = $db_trans_func($context);
        FirePHP_::info_(time(), "After Db trans");
    }
}
That is just a snippet from my helper. The problem is that when this code runs, and the execution of $db_trans_func takes a long time, PHP seems to move on to the next call, the FirePHP_ logging line, before the previous line has finished.
That seems abnormal to me, because normally the lines should run one after the other.
Can anyone help me solve this problem? How can I tell PHP not to run
FirePHP_::info_(time(), "After Db trans");
until
$context = $db_trans_func($context);
has finished executing?
I'm not entirely clear, but my assumption is:
db_trans_func is running some function against the DB (such as beginning a transaction)
you are comparing the PHP call FirePHP_::info_(time(), "After Db trans"); against the time recorded in the DB, or similar
In other words, you have a function that DOES fire first in PHP, then a second one. They ARE running consecutively; BUT the DB work takes longer, of course, so its effect is seen afterwards. In effect, these are different threads running asynchronously.
Does that make sense to you, and is it possible?
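One quick way to confirm the two calls really do run sequentially is to time the blocking call itself. A minimal sketch, reusing the names from the snippet above:
$start = microtime(true);
$context = $db_trans_func($context);   // this call blocks until the callable returns
$elapsed = microtime(true) - $start;

FirePHP_::info_($elapsed, "Seconds spent inside db_trans_func");
FirePHP_::info_(time(), "After Db trans"); // only reached once the call above has returned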
Setting the MongoDB global timeout etc. is still ignored by find() queries in my PHP script.
I'd like a findOne({...}) or find({...}) lookup to wait at most 20 ms for the DB server before timing out.
How can I make sure that PHP does not treat this setting as a soft limit? It is still ignored, and the script is still processing answers even 5 seconds later.
Is this a PHP mongo driver bug?
Example:
MongoCursor::$timeout = 20;
$nosql_server = new Mongo('mongodb://user:pw@' . implode(",", $arr_replicas) . '', array("replicaSet" => "gmt", "timeout" => 10)) OR troubles("too slow to connect");
$nosql_db = $nosql_server->selectDB('aDB');
$nosql_collection_mcol = $nosql_db->mcol;
$testFind = $nosql_collection_mcol->find(array('crit' => 123));
// If PHP considered MongoCursor::$timeout, I'd expect the previous line to be skipped or to throw a mongo/timeout exception if the DB does not have the find result cursor ready within 20 ms.
// However, I arrive at this line after seconds, without an exception, whenever the DB has some lock or delay, and without the previous line being skipped.
In the PHP documentation for $timeout the following is the explanation for the cursor timeout:
Causes methods that fetch results to throw a
MongoCursorTimeoutException if the query takes longer than the
specified number of milliseconds.
I believe that the timeout is referring to the operations performed on the cursor (e.g. getNext()).
Do not do this:
MongoCursor::$timeout=20;
That just sets a static property and, AFAIK, won't do you any good.
What you need to realize is that in your code example, $testFind is the MongoCursor object. So in the snippet you gave, add the following after everything else in order to set the timeout on the $testFind cursor:
$testFind->timeout(100);
NOTE: If you want to deal with $testFind as an array, you need to do:
$testFindArray = iterator_to_array($testFind);
That one threw me for a loop for a while. Hope this helps someone.
Pay attention to the readPreference attribute. The possible values are:
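Putting it together with the question's snippet, a minimal sketch for the legacy Mongo driver (the 20 ms value is simply the figure from the question):
$testFind = $nosql_collection_mcol->find(array('crit' => 123));
$testFind->timeout(20); // per-cursor timeout in milliseconds

try {
    foreach ($testFind as $doc) {
        // process $doc
    }
} catch (MongoCursorTimeoutException $e) {
    // the server did not deliver results within 20 ms
}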
MongoClient::RP_PRIMARY
MongoClient::RP_PRIMARY_PREFERRED
MongoClient::RP_SECONDARY
MongoClient::RP_SECONDARY_PREFERRED
MongoClient::RP_NEAREST
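For example, with the legacy driver the preference can be set on the database (or collection) object; the value used here is only an illustration:
// read from a secondary when one is available, otherwise fall back to the primary
$nosql_db->setReadPreference(MongoClient::RP_SECONDARY_PREFERRED);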
I have set up a cron job to run a script daily. The script pulls a list of IDs out of a database, loops through each one to get more data from the database, and generates an XML file based on the data retrieved.
This seems to have run fine for the first few days; however, the list of IDs is getting bigger, and today I noticed that not all of the XML files have been generated. It seems to be random IDs that have not run. I have manually run the script to generate the XML for some of the missing IDs individually, and they ran without any issues.
I am not sure how to locate the problem, as the cron job is definitely running but not always generating all of the XML files. Any ideas on how I can pinpoint this problem and quickly find out which files have not been generated?
I thought perhaps I could add timestart and timeend fields to the database and write these values at the start and end of each XML generation run; that way I could see what had run and what hadn't. But I wondered if there was a better way.
set_time_limit(0);

//connect to database
$db = new msSqlConnect('dbconnect');

$select = "SELECT id FROM ProductFeeds WHERE enabled = 'True' ";
$run = mssql_query($select);

while ($row = mssql_fetch_array($run)) {
    $arg = $row['id'];
    //echo $arg . '<br />';
    exec("php index.php \"$arg\"", $output);
    //print_r($output);
}
My suggestion would be to add some logging to the script. A simple
error_log("Passing ID:".$arg."\n",3,"log.txt");
can give you some info on whether the ID is being passed. If you find that it is, you can introduce logging to index.php to further evaluate the problem.
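A hypothetical sketch of what that logging inside index.php could look like (the log file name and format are assumptions):
// at the top of index.php
$id = isset($argv[1]) ? $argv[1] : 'unknown';
error_log("[" . date('Y-m-d H:i:s') . "] start ID " . $id . "\n", 3, "log.txt");

// ... existing XML generation for $id ...

error_log("[" . date('Y-m-d H:i:s') . "] end ID " . $id . "\n", 3, "log.txt");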
By the way, can you explain why you are using exec() to run a PHP script? Why not execute a function in the loop? This could well be the source of the problem.
With exec() I think the process may keep running in the background while the loop continues, so you could really choke your server that way; maybe that's worth looking into as well. I think this also depends on how the output is handled:
Note: If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
Maybe some other users can comment on this.
It turned out that Apache was timing out, so it had nothing to do with using a function versus exec().
I know people usually complain about scripts not working, but here is a case where one keeps working even when I want it to stop.
I have a CSV parser that analyzes lines and inserts entries into a DB table. I am using PDO and Zend Framework for the project. The code works fine... too fine, in fact.
public function save()
{
$memory_limit = ini_get('memory_limit');
ini_set('memory_limit', '512M');
$sql = "
INSERT INTO my_table (
date_start,
timeframe,
type,
country_to,
country_from,
code,
weight,
value
) VALUES (?,?,?,?,?,?,?,?)
ON DUPLICATE KEY UPDATE
weight = VALUES(weight),
value = VALUES(value)
";
if ($this->test_mode) {
echo $sql;
return;
}
$stmt = new Zend_Db_Statement_Pdo($this->_db, $sql);
foreach($this->parsed_data as $entry){
$stmt->execute(array_values($entry));
$affected_rows = $stmt->rowCount();
if ($affected_rows){
$this->_success = true;
}
}
unset($this->parsed_data, $stmt, $sql);
ini_set('memory_limit', $memory_limit);
}
The script takes several seconds to complete, as I am parsing a big file. The problem appears when I try to stop the script, with ESC or even by closing the page. The script does not stop until it finishes inserting all entries. Not even an Apache reload fixes this; a restart would probably do it.
I don't think this is normal behaviour, and maybe I am doing something wrong, so I am asking for suggestions.
Thanks.
UPDATE
ignore_user_abort is off (the default behaviour), so a user abort should be taken into account.
I'm pretty sure that's standard PHP behaviour - the browser going away doesn't mean PHP will stop processing the script. (Although restarting Apache, etc. will achieve this goal.)
To change this behaviour, you can use ignore_user_abort.
That said, "PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client", which I suspect may be the issue you're experiencing.
See the above link and the PHP runtime configuration information for more info.
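A minimal sketch of how that detection could be wired into the insert loop from the question (this is an illustration, not the original code; the echo/flush is what gives PHP the chance to notice the disconnect):
ignore_user_abort(false); // the default: let PHP stop when the client disconnects

foreach ($this->parsed_data as $entry) {
    // PHP only notices the disconnect when it tries to send output to the client
    echo ' ';
    flush();

    if (connection_aborted()) {
        break; // the client went away: stop inserting
    }

    $stmt->execute(array_values($entry));
}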
It is not wrong. Your attempts won't work because:
ESC - it is totally unrelated to the workings of a page; most browsers don't actually react to it.
Closing (or refreshing) the page - again, not related. The SERVER is doing something, and PHP will NOT stop when the client side stops; the server can't actually know whether the client closed or refreshed the page.
Apache reload - won't kill the forked PHP process.
An Apache restart WOULD do it - this kills the PHP processes. It is kind of troublesome, though.
The way to handle this (if the long execution is undesirable) is to set an execution time limit using the PHP function set_time_limit(), or to make the parsing more efficient (if it is not already).
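For example, at the top of save() (the 60-second value is purely illustrative):
set_time_limit(60); // let PHP abort the import if it runs longer than 60 seconds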