I'm using the code below to create and migrate tables, and while it works, it's very slow. It takes about 10 minutes to create about 250 tables and migrate the data, even though the total file size of the dump is only ~1 MB. Note that this is on localhost, and I'm afraid it'll take 5 times longer when deployed to a server with an unreliable network.
Could this code be optimized to run in something more like 30 seconds?
function uploadSQL($myDbName) {
    $host = "localhost";
    $uname = "username";
    $pass = "password";
    $database = $myDbName;
    $conn = new mysqli($host, $uname, $pass, $database);

    $filename = 'db.sql';
    $op_data = '';
    $lines = file($filename);

    foreach ($lines as $line) {
        // Skip comment lines and empty lines
        if (substr($line, 0, 2) == '--' || $line == '') {
            continue;
        }
        // Collect lines until a full statement (ending in ';') is built, then run it
        $op_data .= $line;
        if (substr(trim($line), -1, 1) == ';') {
            $conn->query($op_data);
            $op_data = '';
        }
    }

    echo "Table Created Inside " . $database . " Database.......";
}
You can use a cron job to run this process automatically in the background, so you don't have to wait for it. Sometimes this process fails because of the PHP execution timeout.
To increase the execution timeout in PHP you need to change a setting in your php.ini:
max_execution_time = 60
; also, higher if you must - sets the maximum time in seconds
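If you can't edit php.ini on your host, a rough equivalent is to raise the limit at runtime from the import script itself (a sketch only, assuming the host doesn't block these overrides):
// Sketch: raise limits for this request only; some shared hosts disable this.
set_time_limit(300);                  // allow up to 5 minutes for the import
ini_set('max_execution_time', '300'); // ini-level equivalent of the above
ini_set('memory_limit', '256M');      // optional headroom for larger dumps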
The problem here isn't really with PHP - it's with the database.
During the import, indexes are rebuilt, foreign keys are checked and so on, and this is where the import actually takes a lot of time depending on your database structure.
In addition, hardware might be at fault (e.g. if the database lives on an HDD, the import will take noticeably longer than on an SSD).
I suggest looking at the mysqltuner.pl output first and starting your database optimization from there. Maybe post a question on SO about how to improve the database (as a separate question, of course).
Disabling foreign key checks with SET FOREIGN_KEY_CHECKS=0 before the import and re-enabling them with SET FOREIGN_KEY_CHECKS=1 afterwards might help a bit, but it won't cover all the optimizations you can do.
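As a rough sketch of that idea, adapted from the uploadSQL() function in the question (so treat it as an illustration, not a drop-in fix), you would wrap the statement loop like this:
// Sketch: disable checks and autocommit around the import, commit once at the end.
// Adapted from the question's uploadSQL(); the variable names are the same.
$conn->query("SET foreign_key_checks = 0");
$conn->query("SET unique_checks = 0");
$conn->query("SET autocommit = 0");

foreach ($lines as $line) {
    // ... build up $op_data and run $conn->query($op_data) exactly as before ...
}

$conn->query("COMMIT");
$conn->query("SET unique_checks = 1");
$conn->query("SET foreign_key_checks = 1");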
If all you want to do is SCHEDULE the task so you don't have to wait for the database import to finish, you need to implement a queue of tasks (e.g. within a database table) and handle the queue via crontab, just as Muyin suggested.
If the dump comes from mysqldump, then don't use PHP at all. Simply do
mysql -h host.name -u ... -p... < dump_file.sql
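If the import has to be triggered from PHP, one option is to shell out to the mysql client instead of parsing the dump in PHP. A minimal sketch, assuming exec() is enabled on the host and the mysql binary is available there:
// Sketch: let the mysql client do the heavy lifting; PHP only triggers it.
$cmd = sprintf(
    'mysql -h %s -u %s -p%s %s < %s 2>&1',
    escapeshellarg('localhost'),
    escapeshellarg('username'),
    escapeshellarg('password'),
    escapeshellarg($myDbName),
    escapeshellarg('db.sql')
);
exec($cmd, $output, $exitCode);
if ($exitCode !== 0) {
    echo "Import failed:\n" . implode("\n", $output);
}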
Related
I noticed a large number of sessions running on the database, and almost half of them have a query attached. In my project I use a queue worker to execute code in the background, with the database as the queue connection.
Here is the code I use:
Passing jobs to Batch:
$jobs = [];
foreach ($data as $d) {
    $jobs[] = new EstimateImportJob($d);
}
$batch = Bus::batch($jobs)->dispatch();
Job source code:
$current_date = Carbon::now();
// Using these because the tables don't have auto-increment traits
$last_po_id = \DB::connection("main")->table('PO')->latest('ID')->first()->ID;
$last_poline_id = \DB::connection("main")->table('POLINE')->latest('ID')->first()->ID;
$last_poline_poline = \DB::connection("main")->table('POLINE')->latest('POLINE')->first()->POLINE;

\DB::connection('main')->table('POLINE')->insert($d);
As far as I know, Laravel is supposed to close the DB connection after code execution is finished, but I can't find a reason why I have so many database sessions. Any ideas would be appreciated!
Normally, even with the queue worker running, I would expect only 3-4 database sessions.
I have a large table with 500,000 records; when a button is clicked the data gets downloaded as a CSV, but it is very slow, especially on a bad internet connection.
I was thinking of zipping the file and then saving it, but I'm sure that will take up extra memory for the whole process.
Is there a better way to optimize this CSV download?
<?php
// mysql database connection details
$host = "localhost";
$username = "admin";
$password = "root";
$dbname = "db_books";
// open connection to mysql database
$connection = mysqli_connect($host, $username, $password, $dbname) or die("Connection Error " . mysqli_error($connection));
// fetch mysql table rows
$sql = "select * from tbl_books";
$result = mysqli_query($connection, $sql) or die("Selection Error " . mysqli_error($connection));
$fp = fopen('books.csv', 'w');
while ($row = mysqli_fetch_assoc($result))
{
    fputcsv($fp, $row);
}
fclose($fp);
//close the db connection
mysqli_close($connection);
?>
I would use MySQL's SELECT ... INTO OUTFILE. It is much faster than looping through the results of your query in PHP: you add it to your SELECT statement and MySQL takes care of creating the file for you.
See the MySQL documentation on SELECT ... INTO OUTFILE for more details.
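A rough sketch of what that could look like for the table in the question. Note that INTO OUTFILE writes the file on the MySQL server itself, requires the FILE privilege, and the target directory may be restricted by secure_file_priv (the path below is just an example):
// Sketch: have MySQL write the CSV directly instead of looping in PHP.
$sql = "SELECT * FROM tbl_books
        INTO OUTFILE '/var/lib/mysql-files/books.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
        LINES TERMINATED BY '\\n'";
mysqli_query($connection, $sql) or die("Export Error " . mysqli_error($connection));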
It sounds like you're using the page-load thread to compile the CSV before sending to the user. This is why it seems so slow.
If possible, you might want to simply pre-compile the CSV downloads, before the user gets to that point. That way their browser will simply receive the file, not hang while you generate it. If you're concerned about wasting too much time generating files that users never download, perhaps have a background job that generates files when needed, but only if the user has logged on (or into a certain area of your site) within the last X hours.
Alternatively, maybe you could use jQuery/Ajax to display a pop-up dialog that tells the user to wait while their file is being generated, and then disappears once the download is ready.
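For the download step itself, serving a file that has already been generated is cheap; here is a minimal sketch (the exports/ path is just a placeholder):
// Sketch: stream a CSV that a background job has already written to disk.
$file = __DIR__ . '/exports/books.csv';   // placeholder path for illustration
if (!file_exists($file)) {
    http_response_code(404);
    exit('Export not ready yet, please try again shortly.');
}
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="books.csv"');
header('Content-Length: ' . filesize($file));
readfile($file);   // streams the file without loading it all into memory
exit;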
I have an SQLite database that I query from PHP periodically. The query is always the same and it returns a string; once that string changes in the database, the loop ends.
The following code is working, but I am pretty sure this is not the optimal way to do this...
class MyDB extends SQLite3
{
    function __construct()
    {
        $this->open('db.sqlite');
    }
}

$loop = True;
while ($loop == True) {
    sleep(10);

    $db = new MyDB();
    if (!$db) {
        echo $db->lastErrorMsg();
    } else {
        echo "Opened database successfully\n";
    }

    $sql = 'SELECT status from t_jobs WHERE name=' . $file_name;
    $ret = $db->query($sql);
    $state = $ret->fetchArray(SQLITE3_ASSOC);
    $output = (string)$state['status'];

    if (strcmp($output, 'FINISHED') == 0) {
        $loop = False;
    }

    echo $output;
    $db->close();
}
If you want output promptly and a kind of interface, I think the best solution for your problem might be HTTP long polling. That way it will not hold the connection open for hours if the job is not done:
You will need to code a JavaScript snippet (in another HTML or PHP page) that makes an AJAX call to your current PHP code.
Your web server (and so your PHP code) will keep the connection open for a while, until the job is done or a time limit is reached (say 20-30 seconds).
If the job is not done, the JavaScript makes another AJAX call and everything starts again, keeping a connection open, and so on, until you get the expected output status.
BEWARE: this solution will not work with every hosting provider.
You will need to set max_execution_time to a higher value than the default one; see the PHP documentation for this.
You can find plenty more on HTTP long polling with PHP/JavaScript on Google and Stack Overflow.
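As a rough sketch of the server-side endpoint, adapted from the loop in the question (the 25-second window is an arbitrary choice, and $file_name is assumed to be set as in the question):
// Sketch: long-polling endpoint. The browser calls this via AJAX; each call
// waits up to ~25 seconds for the status to change before responding, and the
// client simply calls again if the job is still pending.
set_time_limit(40);   // assumes the host allows raising the limit
$db = new SQLite3('db.sqlite');
$deadline = time() + 25;
$status = '';

do {
    $sql = "SELECT status FROM t_jobs WHERE name = '" . SQLite3::escapeString($file_name) . "'";
    $status = (string)$db->querySingle($sql);
    if ($status === 'FINISHED') {
        break;
    }
    sleep(2);         // re-check the database every 2 seconds
} while (time() < $deadline);

$db->close();
header('Content-Type: application/json');
echo json_encode(['status' => $status !== '' ? $status : 'PENDING']);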
My first post, because I haven't found an answer to this problem anywhere! And I looked way beyond Google... :)
DESCRIPTION:
So I have a set-up where an Arduino device is connected to a laptop via a USB serial cable, and the laptop is connected to the internet.
Like this: http://postimg.org/image/cz1g0q2ib/
arduino ---USB---> laptop (transit.py) ---WWW---> server (insert.php)-> mysql DB
There is a Python script (transit.py) on the PC running continuously, listening to the COM port, analyzing the received data and forwarding it to a file (insert.php) on a remote server (a free hosting site).
See code to learn how that works...
Then there is the insert.php script that receives this data (still almost every second), analyzes it and stores it in the MySQL database.
This, however, is not the only file that requires a MySQL connection, therefore I include connect.php at the beginning of every such file.
PROBLEM:
Warning: mysqli::mysqli() [mysqli.mysqli]: (42000/1226): User 'user' has exceeded the 'max_connections_per_hour' resource (current value: 1500) in /server/connect.php on line 8
As a result of all this data traffic and its frequency (and the cheapness of the hosting) I run into a "maximum connections per hour exceeded" error. The limit is 1500 per hour and I can't change it (it's a remote server). And no, I don't want to pay for hosting to get a bigger allowance - that's not the point - the issue is the inefficiency of my code. Can I have one persistent connection? Like a service?
Sending data from the Python script straight to the remote MySQL server is not an option, because I don't have access to that feature.
CODE:
transit.py:
import serial    # pyserial, needed for the COM port
import requests  # used to POST the readings to the server

try:
    ser = serial.Serial('COM4', 9600, timeout=4)
except:
    print('=== COULD NOT CONNECT TO BOARD ===')

value = ser.readline()
strValue = value.decode("utf-8")

if strValue:
    mylist = strValue.split(',')
    print(mylist[0] + '\t\t' + mylist[1] + '\t\t' + mylist[2])

    # Forward the reading to the remote insert.php script
    path = 'http://a-free-server.com/insert.php'
    dataLine = {"table": mylist[0], "data": mylist[1], "value": mylist[2]}
    toServer = requests.post(path, params=dataLine, timeout=2)
insert.php:
<?php
include 'connect.php';

//some irrelevant code here...

if (empty($_GET['type']) && isset($_GET['data'])) {
    $table = $_GET['table'];
    $data  = $_GET['data'];
    $value = $_GET['value'];

    if ($mysqli->connect_errno > 0) {
        die('Unable to connect to database [' . $mysqli->connect_error . ']');
    } else {
        date_default_timezone_set("Asia/Hong_Kong");
        $clock = date(DATE_W3C);

        if (isset($_GET['time'])) {
            $time = $_GET['time'];
        } else {
            $time = $clock;
        }

        echo "Received: ";
        echo $table;
        echo ",";
        echo $data;
        echo ",";
        echo $value;
        echo ",";
        echo $time;

        if ($stmt = $mysqli->prepare("INSERT INTO ".$table." (`id`, `data`, `value`, `time`) VALUES (NULL, ?, ?, ?) ON DUPLICATE KEY UPDATE time='".$time."'")) {
            $stmt->bind_param('sss', $data, $value, $time);
            $stmt->execute();
            $stmt->free_result();
            $stmt->close();
        } else {
            echo "Prepare failed: (" . $mysqli->errno . ") " . $mysqli->error;
        }
    }
} else {
    echo " | DATA NOT received!";
}
?>
connect.php:
<?php
define("HOST", "p:a-free-host.com"); // notice the p: for persistence
define("USER", "user");
define("PASSWORD", "strongpassword1"); // my password. don't look!
define("DATABASE", "databass");
$GLOBALS["mysqli"] = new mysqli(HOST, USER, PASSWORD, DATABASE, 3306);
$count = intval(file_get_contents('conns.txt'));
file_put_contents('conns.txt', ++$count); //just something i added to monitor connections
?>
P.S. Everything works fine and all data is handled in a rather desirable manner, except for exceeding the limit and perhaps some other hidden caveats.
Any suggestions on how to decrease the connection count while still receiving data every second?
If I have understood your issue correctly, your web host sucks. If you are limited to 1500 connections / hour, and each page requires a connection, that means you can never exceed 1500 page views per hour; that's not very much.
Many programming languages support connection pooling; in this model, the server opens one or more connections at start-up, and individual page requests grab one of those connections when they need it. This reduces the overhead of opening and closing connections. There are existing discussions of connection pooling and PHP here on Stack Overflow; you may be able to use one of those approaches without too much trouble.
The alternative - and probably better - solution is to batch up data in your Python script so you don't have to contact the web server so often. The classic way to do this for applications that aren't time-critical is to use a message bus or queue. I'm not a Pythonist, but a simple queue on the Python side should do the job.
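On the receiving end, the PHP script would then handle one request containing many readings instead of one request per reading. A rough sketch, assuming the batch is POSTed as a JSON array and using a fixed example table name (the question's code chooses the table per reading, so adjust accordingly):
// Sketch: accept a JSON batch of readings and insert them all over ONE connection.
// Expected payload (an assumption): [{"data": "...", "value": "...", "time": "..."}, ...]
include 'connect.php';

$rows = json_decode(file_get_contents('php://input'), true);
if (!is_array($rows)) {
    die('No batch received');
}

$stmt = $mysqli->prepare("INSERT INTO sensor_log (`data`, `value`, `time`) VALUES (?, ?, ?)");
foreach ($rows as $row) {
    $stmt->bind_param('sss', $row['data'], $row['value'], $row['time']);
    $stmt->execute();
}
$stmt->close();
echo "Inserted " . count($rows) . " rows over a single connection";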
Did you try creating a script that stays alive the whole time and holds the connection (call it S1), and then a second script (S2) for the rest?
In the script where you do the operations, first check whether the connection is still alive and, if it is not, reconnect.
Close the connection in S1 at the end of the script.
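If I read that right, the core of the idea is to reuse one connection and only re-establish it when it has dropped. A minimal sketch of the check-and-reconnect part (ensure_connection() is a made-up helper name, and the credentials are the placeholders from the question):
// Sketch: reuse one mysqli connection, reconnecting only when it has gone away.
function ensure_connection(?mysqli $mysqli): mysqli
{
    if ($mysqli !== null && @$mysqli->ping()) {
        return $mysqli;   // connection is still alive, keep using it
    }
    // (Re)connect only when needed
    return new mysqli('a-free-host.com', 'user', 'strongpassword1', 'databass');
}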
I have an issue that has only cropped up now. I am on a shared web hosting plan that has a maximum of 10 concurrent database connections. The web app has dozens of queries, some PDO, some mysql_*.
Loading one page in particular peaks at 5-6 concurrent connections, meaning it takes a minimum of 2 users loading it at the same time to spit an error at one or both of them.
I know this is inefficient and I'm sure I can cut that down quite a bit, but my idea at the moment is to move the PDO code into a function, pass in a query string and an array of variables, and have it return an array (partly to tidy my code).
THE ACTUAL QUESTION:
How can I get this function to keep retrying until it manages to execute, and hold up the script that called it (and any script that might have called that one) until it manages to execute and return its data? I don't want things executing out of order; I am happy with code being delayed for a second or so during peak times.
Since someone will ask for code, here's what I do at the moment. I have this in a file of its own so I have a central place to change connection parameters; the if statement is merely there so I don't have to keep changing the parameters when I switch between my test server and the live server.
$dbtype = "mysql";
$server_addr = $_SERVER['SERVER_ADDR'];
if ($server_addr == '192.168.1.10') {
$dbhost = "localhost";
} else {
$dbhost = "xxxxx.xxxxx.xxxxx.co.nz";
}
$dbname = "mydatabase";
$dbuser = "user";
$dbpass = "supersecretpassword";
I 'include' that file at the top of a function
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
then run commands like this all on the one connection
$sql = "select * from tbl_sub_cargo_cap where sub_model_sk = ?";
$capq = $pdo_conn->prepare($sql);
$capq->execute(array($sk_to_load));
while ($caprow = $capq->fetch(PDO::FETCH_ASSOC)) {
    //stuff
}
You shouldn't need 5-6 concurrent connections for a single page; each page should really only ever use one connection. I'd try to re-architect whatever part of your application is causing multiple connections on a single page.
However, you should be able to catch a PDOException when the connection fails (see the PDO documentation on connections and connection management), and then retry some number of times.
A quick example:
<?php
$retries = 3;
while ($retries > 0)
{
    try
    {
        $dbh = new PDO("mysql:host=localhost;dbname=blahblah", $user, $pass);
        // Do query, etc.
        $retries = 0; // success: leave the retry loop
    }
    catch (PDOException $e)
    {
        // Should probably check $e is a connection error, could be a query error!
        echo "Something went wrong, retrying...";
        $retries--;
        usleep(500000); // Wait 0.5s between retries (usleep takes microseconds).
    }
}
10 concurrent connections is A LOT. It can serve 10-15 online users easily.
It takes heavy effort to exhaust them.
So there is something wrong with your code.
There are two main reasons for it:
slow queries take too much time, so serving one hit holds a MySQL connection for too long;
multiple connections are opened from every script.
The former has to be investigated, but the latter is simple:
Do not mix mysql_* and PDO in one script: you are opening two connections at a time.
When using PDO, open the connection only once and then use it throughout your code.
Reducing the number of connections in one script is the only way to go.
If you have multiple instances of the PDO class in your code, you would need to add the timeout-handling code you want to every call, so heavy code rewriting is required anyway.
Replace these new instances with one shared connection (a global $pdo) instead. It will take the same amount of work, but it will be a permanent solution, not a temporary patch.
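For example, here is a minimal sketch of keeping a single PDO instance per request (db() is a made-up helper name; the parameters come from the question's db_connection_params.php, and a static inside a function is just one way to get the same effect as a global $pdo):
// Sketch: one PDO instance per request, created lazily and reused everywhere.
function db(): PDO
{
    static $pdo = null;
    if ($pdo === null) {
        include 'db_connection_params.php';   // defines $dbhost, $dbname, $dbuser, $dbpass
        $pdo = new PDO(
            "mysql:host=$dbhost;dbname=$dbname",
            $dbuser,
            $dbpass,
            [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
        );
    }
    return $pdo;
}

// Every query on the page then shares that single connection:
$capq = db()->prepare("select * from tbl_sub_cargo_cap where sub_model_sk = ?");
$capq->execute(array($sk_to_load));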
Please be sensible.
PHP automatically closes all the connections at the end of the script; you don't have to worry about closing them manually.
Having only one connection throughout one script is common practice, used by developers all around the world. You can use it without any doubts. Just use it.
If you have a transaction open and want to log something to the database, you sometimes need two connections in one script.