Close a php connection - php

I have a cron job running the following script every 5 minutes. However, it seems that the script doesn't close the connection once it has run. How can I close the connection in this script?
function __construct($config)
{
    $this->server = $config['server'];
    $this->certificate = $config['certificate'];
    $this->passphrase = $config['passphrase'];

    // Create a connection to the database.
    $this->pdo = new PDO(
        'mysql:host=' . $config['db']['host'] . ';dbname=' . $config['db']['dbname'],
        $config['db']['username'],
        $config['db']['password'],
        array());

    // If there is an error executing database queries, we want PDO to
    // throw an exception.
    $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // We want the database to handle all strings as UTF-8.
    $this->pdo->query('SET NAMES utf8');
}

// This is the main loop for this script. It polls the database for new
// messages, sends them to APNS, sleeps for a few seconds, and repeats this
// forever (or until a fatal error occurs and the script exits).
function start()
{
    writeToLog('Connecting to ' . $this->server);
    if (!$this->connectToAPNS())
        exit;

    while (true)
    {
        // Do at most 20 messages at a time. Note: we send each message in
        // a separate packet to APNS. It would be more efficient if we
        // combined several messages into one packet, but this script isn't
        // smart enough to do that. ;-)
        $stmt = $this->pdo->prepare('SELECT * FROM messages WHERE time_sent IS NULL LIMIT 20');
        $stmt->execute();
        $messages = $stmt->fetchAll(PDO::FETCH_OBJ);

        foreach ($messages as $message)
        {
            if ($this->sendNotification($message->id, $message->token, $message->payload))
            {
                $stmt = $this->pdo->prepare('UPDATE messages SET time_sent = NOW() WHERE id = ?');
                $stmt->execute(array($message->id));
            }
            else // failed to deliver
            {
                $this->reconnectToAPNS();
            }
        }

        unset($messages);
        sleep(5);
    }
}

Unless I've misread, you are launching every 5 minutes a script that never exits (it runs an infinite loop). So you're stacking up instances until Earth (or, more modestly, your server) eventually explodes.
To answer your question: PHP automatically frees all resources, including DB connections, when script execution ends.
When a script runs for a very long time (like forever), or there are very specific memory considerations, you can manually free resources using unset($foo) or $foo = null.[1]
DB connections can be closed and freed this way too, with unset($this->pdo).
[1] see also What's better at freeing memory with PHP: unset() or $var = null
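Applied to the polling script above, a minimal sketch of an explicit teardown, assuming a stop() method is added to the same class (writeToLog() is the script's own helper, and the property name follows the constructor shown):

```php
// Sketch: dropping every reference to the PDO object closes the
// underlying MySQL connection; unset() and assigning null are
// equivalent here.
function stop()
{
    // Prepared statements hold a reference to the connection, so any
    // statement handles still stored on the object should be released
    // before the connection handle itself.
    $this->pdo = null; // or: unset($this->pdo);
    writeToLog('Database connection closed');
}
```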

Related

Systems running out of memory when PostgreSQL connection is established

I have built an API that is called 20 times per second to perform a function that establishes a connection to PostgreSQL, but every time the call is made the system memory fills up. I am a novice with PostgreSQL and any assistance will be appreciated. Below is my PHP code:
try
{
    $input = file_get_contents("php://input");
    $pgconn = new PgSql();
    $selectRecords = "SELECT messageid, msisdn, smsmessage, serviceid, isbilled, linkid FROM sdpmtn_mt.smssendingtable WHERE priority = 0 LIMIT 1";
    foreach ($pgconn->getRows($selectRecords) as $rows) {
        $msisdn = $rows->msisdn;
        $message = $rows->smsmessage;
        $serviceid = $rows->serviceid;
        $isbilled = $rows->isbilled;
        $linkid = $rows->linkid;
        $msgid = $rows->messageid;
    }
}
catch (Exception $e)
{
    print $e->getMessage();
}
I am calling this API twenty times per second. Please advise on any configuration changes I should make.
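No answer appears in the thread, but one likely culprit is that each of the 20 calls per second opens a connection that is never closed. A minimal sketch using the stock pgsql extension (the `PgSql` class in the question is custom, so this substitutes `pg_connect()`/`pg_close()`; the credentials are placeholders):

```php
<?php
// Open one connection, run the query, and close the connection before
// the request ends, so handles do not accumulate across calls.
$conn = pg_connect("host=localhost dbname=sdp user=app password=secret"); // placeholder credentials
if ($conn === false) {
    exit("connection failed");
}
$result = pg_query($conn,
    "SELECT messageid, msisdn, smsmessage, serviceid, isbilled, linkid
       FROM sdpmtn_mt.smssendingtable WHERE priority = 0 LIMIT 1");
while ($row = pg_fetch_object($result)) {
    $msisdn = $row->msisdn;
    $message = $row->smsmessage;
}
pg_free_result($result);
pg_close($conn); // explicitly release the connection
```

At 20 calls per second, a persistent connection (pg_pconnect()) or an external pooler such as PgBouncer is usually a better fix than opening a fresh connection per request.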

Is there a way for PHP PDO to detect if a T-SQL database is being restored?

I'd like my PHP script (using PDO) to detect whether or not a target database is in the middle of a restore process other than waiting several minutes for a response from a failed connection.
My database connection code eventually returns the message below if a database is being restored, but it happens because the connection fails and it takes several minutes to respond when this happens. Searching on StackOverflow and Google doesn't seem to find anything that fits my need, nor does searching through PHP's documentation.
function getParameterizedPDOConnection($host = false, $overrideOptions = []) {
    include(INCLUDE_DIR . "/itrain.config.php");
    $host = strtolower($_SERVER["HTTP_HOST"]);
    if (count($overrideOptions) > 0) {
        $configOptions["host"][$host] = $overrideOptions;
    }
    $sthUserName = $configOptions["userName"];
    $pwd = $configOptions["pwd"];
    $addr = $configOptions["host"][$host]["addr"];
    $db = $configOptions["host"][$host]["db"];
    try {
        $pdo = new PDO("sqlsrv:Server=$addr;Database=$db;ConnectionPooling=0", $sthUserName, $pwd, array(PDO::ATTR_ERRMODE => PDO::ERRMODE_SILENT));
        return $pdo;
    } catch (PDOException $e) {
        return "Database connection failure: " . $e->getMessage();
    }
}
Returns: "42000",927,"[Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Database 'Workforce' cannot be opened. It is in the middle of a restore.
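One possible approach (a sketch, not from the thread): connect to master, which stays online during a restore, and check sys.databases.state_desc, which reads 'RESTORING' while a restore is in progress. A short LoginTimeout in the DSN keeps the probe itself from hanging; $addr and the credentials follow the function above:

```php
<?php
// Probe the database state via master instead of connecting to the
// restoring database itself (which is the connection that takes
// minutes to fail).
function isRestoring($addr, $user, $pwd, $dbName) {
    $pdo = new PDO(
        "sqlsrv:Server=$addr;Database=master;LoginTimeout=5;ConnectionPooling=0",
        $user, $pwd,
        array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
    $stmt = $pdo->prepare("SELECT state_desc FROM sys.databases WHERE name = ?");
    $stmt->execute(array($dbName));
    return $stmt->fetchColumn() === 'RESTORING';
}

if (isRestoring($addr, $sthUserName, $pwd, 'Workforce')) {
    // skip the real connection attempt and retry later
}
```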

Best practices when using PHP with Firebird or Interbase

The documentation for php-interbase is good - but not complete. In particular, there's no complete examples for working with Firebird. So how would you do it?
Basic guidelines.
Choosing between ibase_connect() and ibase_pconnect(): the less time connections stay open, the fewer the possible conflicts and the easier maintenance and backups become. Unless connecting to the database is "expensive" in terms of processing time (you're performing large amounts of real-time reads/writes), use ibase_connect() as needed.
Always use explicit transactions. Always. It's simple - assume every call to ibase_prepare() or ibase_query() requires a transaction handle - never a "raw" connection handle.
Always follow a transaction with either a ibase_commit() or a ibase_rollback() as appropriate.
Basic template for a read operation:
// These would normally come from an include file...
$db_path = '/var/lib/firebird/2.5/data/MyDatabase.fdb';
$db_user = 'SYSDBA';
$db_pass = 'masterkey';

// use php error handling
try {
    $dbh = ibase_connect( $db_path, $db_user, $db_pass );
    // Failure to connect
    if ( !$dbh ) {
        throw new Exception( 'Failed to connect to database because: ' . ibase_errmsg(), ibase_errcode() );
    }
    $th = ibase_trans( $dbh, IBASE_READ+IBASE_COMMITTED+IBASE_REC_NO_VERSION );
    if ( !$th ) {
        throw new Exception( 'Unable to create new transaction because: ' . ibase_errmsg(), ibase_errcode() );
    }
    $qs = 'select FIELD1, FIELD2 from SOMETABLE order by FIELD1';
    $qh = ibase_query( $th, $qs );
    if ( !$qh ) {
        throw new Exception( 'Unable to process query because: ' . ibase_errmsg(), ibase_errcode() );
    }
    $rows = array();
    while ( $row = ibase_fetch_object( $qh ) ) {
        $rows[] = $row->FIELD1;
    }
    // $rows[] now holds the results, if there were any.
    // Even though nothing was changed, the transaction must be
    // closed. Commit vs Rollback is a question of style, but Commit
    // is encouraged, and there shouldn't be an error for a
    // read-only commit...
    if ( !ibase_commit( $th ) ) {
        throw new Exception( 'Unable to commit transaction because: ' . ibase_errmsg(), ibase_errcode() );
    }
    // Good form would dictate error traps for these next two...
    // ...but these are the least likely to break.
    // Release PHP memory for the result set, and formally
    // close the database connection.
    ibase_free_result( $qh );
    ibase_close( $dbh );
} catch ( Exception $e ) {
    echo "Caught exception: $e\n";
}
// do whatever you need to do with $rows[] here...

How to solve General error: 2006 MySQL server has gone away

I'm doing an operation that inserts hundreds of records into a MySQL database.
After inserting exactly 176 records I get this error:
[PDOException] SQLSTATE[HY000]: General error: 2006 MySQL server has gone away
Any ideas of how could I solve it?
The process is with PHP.
I would venture to say the problem is with wait_timeout. It is set to 30 seconds on my shared host, and on my localhost it is set to 28800.
I found that I can change it for the session, so you can issue the query: SET session wait_timeout=28800
UPDATE: The OP determined that the variable interactive_timeout also needed to be changed. This may or may not be needed for everyone.
The code below shows the setting before and after the change to verify that it has been changed.
So, set wait_timeout=28800 (and interactive_timeout = 28800) at the beginning of your query and see if it completes.
Remember to insert your own db credentials in place of DB_SERVER, DB_USER, DB_PASS, DB_NAME
UPDATE: Also, if this does work, be clear about what you are doing by setting wait_timeout higher. Setting it to 28800 means 8 hours, which is a lot.
The following is from this site. It recommends setting wait_timeout to 300 - which I will try and report back with my results (after a few weeks).
wait_timeout variable represents the amount of time that MySQL will
wait before killing an idle connection. The default wait_timeout
variable is 28800 seconds, which is 8 hours. That's a lot.
I've read in different forums/blogs that putting wait_timeout too low
(e.g. 30, 60, 90) can result in MySQL has gone away error messages. So
you'll have to decide for your configuration.
<?php
$db = new db();

$results = $db->query("SHOW VARIABLES LIKE '%timeout%'", TRUE);
echo "<pre>";
var_dump($results);
echo "</pre>";

$results = $db->query("SET session wait_timeout=28800", FALSE);
// UPDATE - this is also needed
$results = $db->query("SET session interactive_timeout=28800", FALSE);

$results = $db->query("SHOW VARIABLES LIKE '%timeout%'", TRUE);
echo "<pre>";
var_dump($results);
echo "</pre>";

class db {
    public $mysqli;

    public function __construct() {
        $this->mysqli = new mysqli(DB_SERVER, DB_USER, DB_PASS, DB_NAME);
        if (mysqli_connect_errno()) {
            exit();
        }
    }

    public function __destruct() {
        $this->disconnect();
        unset($this->mysqli);
    }

    public function disconnect() {
        $this->mysqli->close();
    }

    function query($q, $resultset) {
        /* create a prepared statement */
        if (!($stmt = $this->mysqli->prepare($q))) {
            echo("Sql Error: " . $q . ' Sql error #: ' . $this->mysqli->errno . ' - ' . $this->mysqli->error);
            return false;
        }
        /* execute query */
        $stmt->execute();
        if ($stmt->errno) {
            echo("Sql Error: " . $q . ' Sql error #: ' . $stmt->errno . ' - ' . $stmt->error);
            return false;
        }
        if ($resultset) {
            $result = $stmt->get_result();
            for ($set = array(); $row = $result->fetch_assoc();) {
                $set[] = $row;
            }
            $stmt->close();
            return $set;
        }
    }
}
Thanks @mseifert.
Your idea worked by doing the same with two variables:
interactive_timeout and wait_timeout
I copied the config from a local database:
SHOW VARIABLES LIKE '%timeout%'
(Screenshots comparing the local and remote db values are omitted here.)
I did this inside the connect and disconnect, and it worked:
// Note: the legacy mysql_* extension used here was removed in PHP 7;
// the same statements work with mysqli or PDO.
mysql_query("SET SESSION interactive_timeout = 28800;");
$result = mysql_query("SHOW VARIABLES LIKE 'interactive_timeout';");
$row = mysql_fetch_array($result);
$interactive_timeout = $row["Value"];
echo("interactive_timeout" . " = " . $interactive_timeout . "\n");
mysql_query("SET SESSION wait_timeout = 28800;");
$result = mysql_query("SHOW VARIABLES LIKE 'wait_timeout';");
$row = mysql_fetch_array($result);
$wait_timeout = $row["Value"];
echo("wait_timeout" . " = " . $wait_timeout . "\n");
Surprisingly it worked with GoDaddy.
I will accept your answer as valid, @mseifert, since you gave me the original idea.
Thanks a lot.
Let us hope this is useful in the future to solve the 2006 MySQL error for other developers.
In my case, when I got this error on the client side, the server side reported:
(Got a packet bigger than 'max_allowed_packet' bytes)
So I increased the value of max_allowed_packet, and so far, no more issues.
On Google Cloud Platform, I edited the DB, added a database flag, and set the value to
max_allowed_packet=134217728
(which is 2^27 = 128M), since you can only input numbers there.
On regular instances, you can follow the documentation here:
https://dev.mysql.com/doc/refman/8.0/en/packet-too-large.html
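On a self-managed server, a sketch of checking and raising the value at runtime (the credentials are placeholders; SET GLOBAL needs the SUPER privilege, only affects new connections, and is lost on restart, so make it permanent in my.cnf):

```php
<?php
// Inspect the current limit, then raise it for new connections.
$pdo = new PDO('mysql:host=localhost', 'root', 'secret'); // placeholder credentials
$row = $pdo->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch(PDO::FETCH_ASSOC);
echo $row['Value'], "\n"; // current value, in bytes
$pdo->exec("SET GLOBAL max_allowed_packet = 134217728"); // 128M
```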
Assume your code is:
// your code
$pdo = db::connection()->getPdo();
$stmt = $pdo->prepare($sql);
$result = $stmt->execute($params);
So add the following before your SQL query:
$pdo = db::connection()->getPdo();

// Increase interactive_timeout
$prepend_sql = "SET SESSION interactive_timeout = 28800;";
$stmt = $pdo->prepare($prepend_sql);
$stmt->execute(); // no parameters for the SET statement

// Increase wait_timeout
$prepend_sql = "SET SESSION wait_timeout = 28800;";
$stmt = $pdo->prepare($prepend_sql);
$stmt->execute();

// your code
/* $pdo = db::connection()->getPdo(); */
$stmt = $pdo->prepare($sql);
$result = $stmt->execute($params);
Another possible reason is that your client is trying to connect using SSL while the MySQL/MariaDB server is not expecting that.
One solution is to check whether the connection is still active and, if not, re-establish it. This worked perfectly here:
<?php
require_once ('config.php');

class DB {
    private static $instance;

    private function __construct() {
    }

    public static function getInstance() {
        if (!isset(self::$instance)) {
            try {
                self::$instance = new PDO('mysql:host=' . DB_HOST . ';dbname=' . DB_NAME, DB_USER, DB_PASS);
                self::$instance->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                self::$instance->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_OBJ);
            } catch (PDOException $e) {
                echo $e->getMessage();
            }
        }
        // Probe the connection with a trivial query; reconnect on failure.
        try {
            $testConn = self::$instance->prepare('SELECT 1');
            $testConn->execute();
            $testConn = $testConn->fetchAll(PDO::FETCH_ASSOC)[0];
        } catch (Exception $ex) {
            try {
                self::$instance = new PDO('mysql:host=' . DB_HOST . ';dbname=' . DB_NAME, DB_USER, DB_PASS);
                self::$instance->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                self::$instance->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_OBJ);
            } catch (PDOException $e) {
                echo $e->getMessage();
            }
        }
        return self::$instance;
    }

    public static function prepare($sql) {
        return self::getInstance()->prepare($sql);
    }

    public static function lastInsertId() {
        return self::getInstance()->lastInsertId();
    }
}

Multiple Database Migration improve performance

I have an application where each user gets their own database where each database has the same schema.
I have a script that performs migrations in this fashion:
SHOW databases
Iterate through databases
Execute sql statements
This can take a long time when there are complicated queries (3 seconds or more each).
Is there a way where I can run the sql statements for each database at the same time from one script? Is this dangerous/too resource intensive to do this?
The reason I want to do this, is to prevent downtime as much as possible.
Here is my script now:
<?php
ini_set('display_errors', 1);
set_time_limit(0);

// Note: splitting on ";" is naive and will break if any statement
// contains a semicolon inside a string literal.
$sql = file_get_contents('../update.sql');
$sql_queries = explode(";", $sql);

$exclude_dbs = array('horde', 'phppoint_forums', 'phppoint_site', 'roundcube', 'pos', 'bntennis_site', 'mysql', 'information_schema', 'performance_schema');

$conn = mysqli_connect("localhost", "root", "PASSWORD");
$show_db_query = mysqli_query($conn, 'SHOW databases');

$databases = array();
while ($row = mysqli_fetch_assoc($show_db_query))
{
    if (!in_array($row['Database'], $exclude_dbs))
    {
        $databases[] = $row['Database'];
    }
}

foreach ($databases as $database)
{
    mysqli_select_db($conn, $database);
    echo "Running queries on $database\n***********************************\n";
    foreach ($sql_queries as $query)
    {
        if (!empty($query))
        {
            echo "$query;";
            if (!mysqli_query($conn, $query))
            {
                echo "\n\nERROR: " . mysqli_error($conn) . "\n\n";
            }
        }
    }
    echo "\n\n";
}
?>
I don't know whether the database will hold up under that load, but basically I would fork the process or spawn it into the background, depending on the language.
In PHP you can fork a process for each database, effectively running the migrations in parallel.
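A minimal sketch of that idea with pcntl_fork() (CLI-only; each child must open its own MySQL connection, because a handle inherited across fork() is shared between processes and gets corrupted; runMigration() is a hypothetical stand-in for the inner query loop of the script above):

```php
<?php
// Fork one child per database; the parent waits for all of them.
$databases = array('db_user1', 'db_user2', 'db_user3'); // from SHOW databases
$children = array();

foreach ($databases as $database) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    } elseif ($pid === 0) {
        // Child: fresh connection, run the migration, then exit.
        $conn = mysqli_connect('localhost', 'root', 'PASSWORD', $database);
        runMigration($conn); // hypothetical: the query loop from the script above
        mysqli_close($conn);
        exit(0);
    }
    $children[] = $pid; // parent keeps forking
}

foreach ($children as $pid) {
    pcntl_waitpid($pid, $status); // wait for each child to finish
}
```

Running N migrations at once multiplies the load on the MySQL server N-fold, so cap the number of concurrent children if the server struggles.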
