Laravel jobs/queue unclosed SQL Server database sessions - php

I noticed a large number of sessions running on the database, and almost half of them have a query attached. In my project I use a queue worker to execute code in the background, with the database as the queue connection.
Here is the code I use:
Passing jobs to Batch:
$jobs = [];
foreach ($data as $d) {
    $jobs[] = new EstimateImportJob($d);
}
$batch = Bus::batch($jobs)->dispatch();
Job source code:
$current_date = Carbon::now();
// Using these because the tables don't have auto-increment columns
$last_po_id = \DB::connection("main")->table('PO')->latest('ID')->first()->ID;
$last_poline_id = \DB::connection("main")->table('POLINE')->latest('ID')->first()->ID;
$last_poline_poline = \DB::connection("main")->table('POLINE')->latest('POLINE')->first()->POLINE;
\DB::connection('main')->table('POLINE')->insert($d);
As far as I know, Laravel is supposed to close the DB connection after code execution is finished, but I can't find a reason why I have so many database sessions. Any ideas would be appreciated!
Normally, even with the queue worker running, I would expect to have 3-4 database sessions.

Related

How to optimize the PHP code to upload SQL dump

I'm using the code below to create and migrate tables, and while it works, it's very slow. It takes about 10 mins to complete creating about 250 tables and migrating the data. The total file size of the dump is ~1 Mb. Note that this is on localhost, and I'm afraid it'll take 5 times longer when deployed to a server with an unreliable network.
Could this code be optimized to run in something more like 30 seconds?
function uploadSQL($myDbName) {
    $host = "localhost";
    $uname = "username";
    $pass = "password";
    $database = $myDbName;
    $conn = new mysqli($host, $uname, $pass, $database);

    $filename = 'db.sql';
    $op_data = '';
    $lines = file($filename);

    foreach ($lines as $line) {
        // Skip comment lines and empty lines
        if (substr($line, 0, 2) == '--' || trim($line) == '') {
            continue;
        }
        $op_data .= $line;
        // A trailing semicolon marks the end of a statement: execute it
        if (substr(trim($line), -1, 1) == ';') {
            $conn->query($op_data);
            $op_data = '';
        }
    }
    echo "Table Created Inside " . $database . " Database.......";
}
You can use a cron job to complete this process automatically in the background, so you don't have to wait for it. Sometimes this process fails because of the PHP execution timeout.
To increase the execution timeout in PHP you need to change a setting in your php.ini:
max_execution_time = 60
; sets the maximum execution time in seconds (set it higher if you must)
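If you cannot edit php.ini (common on shared hosting), a possible alternative, assuming the host allows these settings to be changed at runtime, is to raise the limit from the import script itself:
// Sketch: lift the execution time limit for this script only.
// Whether this takes effect depends on the hosting configuration.
ini_set('max_execution_time', '0'); // 0 = no limit
set_time_limit(0);                  // equivalent runtime call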
The problem here is not really with PHP, it is with the database.
During the import, indexes are rebuilt, foreign keys are checked and so on, and this is where the import actually spends most of its time, depending on your database structure.
In addition, hardware might be at fault (i.e. if the database is on an HDD, the import will take noticeably more time than on an SSD).
I suggest looking at the mysqltuner.pl results first and starting to optimize your database from there. Maybe post a question on SO about how to improve the database (as a separate question, of course).
Disabling foreign key checks with SET FOREIGN_KEY_CHECKS=0 before the import and re-enabling them with SET FOREIGN_KEY_CHECKS=1 after the import might help a bit, but it won't cover all the optimizations you can do.
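For example, a hedged sketch of where those statements could go in the mysqli-based import above; note they must run on the same connection that executes the dump, since the setting is per session:
// Sketch: disable foreign key checks for the importing session, re-enable afterwards.
$conn = new mysqli($host, $uname, $pass, $database);
$conn->query('SET FOREIGN_KEY_CHECKS=0');

// ... run the dump statements here (the foreach loop from uploadSQL) ...

$conn->query('SET FOREIGN_KEY_CHECKS=1');
$conn->close();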
If all you want to do is SCHEDULE the task so you don't have to wait for the database import to finish, you need to implement a queue of tasks (i.e. within a database table) and handle the queue via crontab, just as Muyin suggested.
If the dump comes from mysqldump, then don't use PHP at all. Simply do
mysql -h host.name -u ... -p... < dump_file.sql

Close MySQL Connection in Zend Framework (Or "Too many connections" solution)

After hours of trying and searching, I am hoping someone can help with my issue.
I created a CSV upload which works great on small numbers (at least 100 rows work fine). When testing a file with 2,000 records or so, I get the error "Connect Error: SQLSTATE[08004] [1040] Too many connections".
From what I can tell, this error can be resolved by closing my DB connection after each insert, which I cannot figure out how to do in Zend 2; all the results I find are for Zend 1.
If there is a better way to insert large records into MySQL from CSV, then I am open to those suggestions as well.
My controller code:
while ($line = fgetcsv($fp, 0, ",")) {
    $record = array_combine($header, $line);
    $record['group_id'] = $extra1;
    $employee->exchangeArray($record);
    $this->getEmployeeTable()->saveEmployee($employee);
}
And my saveEmployee code is just a basic Insert:
$adapter = new Adapter($dbAdapterConfig);
$sql = "INSERT INTO......";
$resultSet = $adapter->query($sql, \Zend\Db\Adapter\Adapter::QUERY_MODE_EXECUTE);
I believe adding a closeconnection() or something after $resultSet would resolve my issue.
On every iteration you call your saveEmployee method, so on every iteration you create a new instance of Adapter.
Move the Adapter initialization out of the while loop (create it once before the loop) and you should be fine.
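A hedged sketch of that change, reusing the question's loop; the extra $adapter parameter on saveEmployee() is an assumption about how the method could receive the shared adapter, not the asker's actual API:
// Create the adapter once, before the loop, and reuse it for every insert.
$adapter = new \Zend\Db\Adapter\Adapter($dbAdapterConfig);

while ($line = fgetcsv($fp, 0, ",")) {
    $record = array_combine($header, $line);
    $record['group_id'] = $extra1;
    $employee->exchangeArray($record);

    // saveEmployee() now uses the shared $adapter instead of building a new one.
    $this->getEmployeeTable()->saveEmployee($employee, $adapter);
}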

Sphinxsearch periodically throws error searching an rt-index via zf2 db adapter

I'm having an intermittent problem with quite a complex search system. Every once in a while a PHP daemon I wrote, which adds new content to our database and to an RT index for Sphinx, throws a mysterious exception.
The message is simply "Statement could not be executed".
The code that causes it is (trimmed):
<?php
$itemIds = Array( 79555 );
$index = 'doc';
$adapter = $this->dbAdapter;
$qi = function ($name) use ($adapter) {
    return $adapter->platform->quoteIdentifier($name);
};
$checkSql = '
    SELECT * FROM
    ' . $qi($index) . '
    WHERE
    id = ' . (int) $itemIds[0];
$checkStatement = $this->dbAdapter->query($checkSql);
$result = $checkStatement->execute();
The exception doesn't seem to be triggered by anything in particular, but it persists from the time it's first thrown until I restart the daemon. I've logged the SQL generated by Zend\DB\Adapter and, apart from the ids being different, there seems to be no difference between the queries that succeed and the ones that fail.
There's no associated error in the Sphinx logs (that I can see), and if I load neutron/sphinxsearch-api/sphinxapi.php and run GetLastError() it returns a blank string.
My thinking is that it's a connection error, or possibly a misconfiguration in the Sphinx config making it time out, but I'm not sure.
This sounds like you are using persistent connections. At times a connection might be dropped, but your code doesn't account for this and keeps trying to use a connection that has already been closed. Try checking for that error and, if you get it, reconnecting.
In short, make the code resilient to the connection occasionally having been closed.
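A hedged sketch of that pattern around the question's query code; recreateAdapter() is a hypothetical helper (it would rebuild a \Zend\Db\Adapter\Adapter from your existing config), and a single retry is assumed to be enough:
try {
    $checkStatement = $this->dbAdapter->query($checkSql);
    $result = $checkStatement->execute();
} catch (\Exception $e) {
    // Assume the connection was dropped: rebuild the adapter once and retry.
    $this->dbAdapter = $this->recreateAdapter(); // hypothetical helper
    $checkStatement = $this->dbAdapter->query($checkSql);
    $result = $checkStatement->execute();
}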

PHP script restarting while importing a big MySQL table

This is a problem I'm having for quite some time now.
I work for a banking institute: our service provider lets us access our data via ODBC through a proprietary DB engine.
Since I need almost a hundred tables for our internal procedures and whatnot, I set up some "replication" scripts, put them in a cron and basically reload the tables I need from scratch every morning.
When the number of records is small (approx. 50,000 records and 100 columns or so) everything goes smoothly, but whenever I get a medium-big table (approx. 700,000 records), more often than not the script restarts itself (I watch my MySQL tables while the import scripts are running and I see the row count going 400k, 500k... and then back to 1).
This is an example of one of my import scripts:
<?php
ini_set('max_execution_time', '0');

// $NOME_SCRIPT and $SQL (the local MySQL connection wrapper) are defined elsewhere
$connect = odbc_connect('XXXXX', '', '') or die('0');

// Empty the local copy before reloading it
$empty_query = "TRUNCATE TABLE SADAS." . $NOME_SCRIPT;
$SQL->query($empty_query);

$select_query = " SELECT...
                  FROM ...";
$result = odbc_exec($connect, $select_query);

// Copy the remote rows one by one into the local MySQL table
while ($dati = odbc_fetch_object($result)) {
    $insert_query = " INSERT INTO ...
                      VALUES ...";
    $SQL->query($insert_query);
}

// Close ODBC
odbc_close($connect);
?>
Any ideas?

How can I get php pdo code to keep retrying to connect if there are too many open connections?

I have an issue that has only cropped up now. I am on a shared web hosting plan with a maximum of 10 concurrent database connections. The web app has dozens of queries, some PDO, some mysql_*.
Loading one page in particular peaks at 5-6 concurrent connections, meaning it only takes 2 users loading it at the same time to spit an error at one or both of them.
I know this is inefficient and I'm sure I can cut that down quite a bit, but my idea at the moment is to move the PDO code into a function, pass in a query string and an array of variables, and have it return an array (partly to tidy my code).
THE ACTUAL QUESTION:
How can I get this function to keep retrying until it manages to execute, and hold up the script that called it (and any script that might have called that one) until it manages to execute and return its data? I don't want things executing out of order; I am happy with code being delayed for a second or so during peak times.
Since someone will ask for code, here's what I do at the moment. I have this in a file on its own so I have a central place to change connection parameters. The if statement is merely to remove the need to continuously change the parameters when I switch from my test server to the live server:
$dbtype = "mysql";
$server_addr = $_SERVER['SERVER_ADDR'];
if ($server_addr == '192.168.1.10') {
$dbhost = "localhost";
} else {
$dbhost = "xxxxx.xxxxx.xxxxx.co.nz";
}
$dbname = "mydatabase";
$dbuser = "user";
$dbpass = "supersecretpassword";
I 'include' that file at the top of a function
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
then run commands like this all on the one connection
$sql = "select * from tbl_sub_cargo_cap where sub_model_sk = ?";
$capq = $pdo_conn->prepare($sql);
$capq->execute(array($sk_to_load));
while ($caprow = $capq->fetch(PDO::FETCH_ASSOC)) {
//stuff
}
You shouldn't need 5-6 concurrent connections for a single page; each page should really only ever use 1 connection. I'd try to re-architect whatever part of your application is causing multiple connections on a single page.
However, you should be able to catch a PDOException when the connection fails (see the documentation on connection management) and then retry some number of times.
A quick example,
<?php
$retries = 3;
while ($retries > 0) {
    try {
        $dbh = new PDO("mysql:host=localhost;dbname=blahblah", $user, $pass);
        // Do query, etc.
        $retries = 0;
    } catch (PDOException $e) {
        // Should probably check $e is a connection error, could be a query error!
        echo "Something went wrong, retrying...";
        $retries--;
        usleep(500000); // Wait 0.5s between retries (usleep takes microseconds).
    }
}
10 concurrent connections is A LOT. It can easily serve 10-15 online users; it takes heavy effort to exhaust them.
So there is something wrong with your code.
There are 2 main reasons for it:
slow queries take too much time, so serving one hit holds a MySQL connection for too long;
multiple connections are opened from every script.
The former has to be investigated, but the latter is simple:
Do not mix mysql_ and PDO in one script: you are opening 2 connections at a time.
When using PDO, open the connection only once and then use it throughout your code.
Reducing the number of connections per script is the only way to go.
If you have multiple instances of the PDO class in your code, you will need to add the timeout-handling code you want to every call, so heavy code rewriting is required anyway.
Replace these new instances with global $pdo; instead. It will take the same amount of time, but it will be a permanent solution, not the temporary patch you are asking for.
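A hedged sketch of that single-connection idea, using a small helper with a static variable rather than the literal global $pdo mentioned above (the function name db() is an assumption, not part of the asker's code; the connection parameters come from the question's db_connection_params.php):
function db()
{
    // Create the PDO instance once and hand the same object to every caller.
    static $pdo = null;
    if ($pdo === null) {
        include 'db_connection_params.php'; // sets $dbhost, $dbname, $dbuser, $dbpass (from the question)
        $pdo = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }
    return $pdo;
}

// Usage: every query in the request goes through the same connection.
$capq = db()->prepare("select * from tbl_sub_cargo_cap where sub_model_sk = ?");
$capq->execute(array($sk_to_load));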
Please be sensible.
PHP automatically closes all the connections at the end of the script; you don't have to care about closing them manually.
Having only one connection throughout one script is a common practice used by developers all around the world. You can use it without any doubts. Just use it.
If you use transactions and want to log something to the database, you sometimes need 2 connections in one script.
