This question already has answers here:
Automated or regular backup of mysql data
(2 answers)
Closed 6 years ago.
I'm trying to transfer a large amount of data (around 200k records every few minutes) from one database to another, on two different servers. The table schema is the same in both databases.
What's the best way to transfer a huge result set into a database without hitting the memory limit?
My current solution looks like this, but it means running about 200k INSERT queries in writeToDB2(), which does not seem very efficient to me:
$stmt = $this->db_1->query("SELECT foo from bar");
while($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
writeToDB2($row);
}
Does anyone know a better way to bulk-transfer the data?
The other answer works only if the same user has access to both databases.
A general-purpose solution is to forget about PHP and PDO altogether and use the console mysql client and mysqldump, like this:
mysqldump -uuser1 -ppassword1 db1 tablename | mysql -uuser2 -ppassword2 db2
Fortunately, MySQL supports INSERT ... SELECT spanning databases (note that this requires both databases to live on the same MySQL server):
$this->db_1->exec("INSERT INTO db2.bar (foo) SELECT foo FROM db1.bar");
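If the two databases really are on different servers, as in the question, a middle ground is to keep PHP but batch the rows into multi-row INSERT statements instead of issuing 200k single-row ones. Below is a minimal sketch, not a drop-in solution; the DSNs, credentials and batch size are placeholders, and it reuses the bar/foo names from the question:
// Sketch: stream rows from server 1, insert into server 2 in batches of 1000.
$src = new PDO('mysql:host=server1;dbname=db1', 'user1', 'password1');
$dst = new PDO('mysql:host=server2;dbname=db2', 'user2', 'password2');

// Unbuffered read so the full 200k result set never sits in PHP memory.
$src->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
$stmt = $src->query("SELECT foo FROM bar");

// Builds one multi-row INSERT with placeholders and executes it.
function flushBatch(PDO $dst, array $values) {
    $placeholders = implode(',', array_fill(0, count($values), '(?)'));
    $dst->prepare("INSERT INTO bar (foo) VALUES $placeholders")->execute($values);
}

$batch = array();
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $batch[] = $row['foo'];
    if (count($batch) === 1000) {
        flushBatch($dst, $batch);
        $batch = array();
    }
}
if ($batch) {
    flushBatch($dst, $batch); // remaining partial batch
}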
This question already has an answer here:
MYSQL last_insert_id() and concurrency
(1 answer)
Closed 8 years ago.
I know that MySQL has the query SELECT LAST_INSERT_ID().
This works well when the site is small and not many users are on it. But when more users are active, couldn't this SELECT cause problems? Someone else may insert a row at the same moment I do, which could mix up the results.
I've been reading a little about mysqli transactions, but I don't know much about them. Are they the only way to get the last id that I inserted, rather than an id from someone else inserting content at the same time?
Thanks in advance.
It won't be a problem, because LAST_INSERT_ID() returns only the last id inserted in the same session (i.e. on your own connection). You don't have to worry.
And as a note: it's just SELECT LAST_INSERT_ID();, not ... FROM table; you don't want to execute this function once for every row in the table.
Read more about it here.
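For example, with mysqli the value is tracked on the connection object, so another client's insert cannot leak into your result. A minimal sketch, with hypothetical credentials and table name:
// Sketch: insert_id is per connection, equivalent to SELECT LAST_INSERT_ID()
// on that same connection. Credentials and the posts table are hypothetical.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$stmt = $mysqli->prepare("INSERT INTO posts (title) VALUES (?)");
$title = 'Hello world';
$stmt->bind_param('s', $title);
$stmt->execute();
echo $mysqli->insert_id; // safe even with concurrent inserts by other users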
This question already has an answer here:
Performance/efficiency of 2 SELECT statements vs UNION vs anything else in MySQL-PHP
(1 answer)
Closed 8 years ago.
I have to run a query that retrieves records from two databases. The query is the same for both databases, so I use UNION to combine them into a single query.
The following example describes what I actually want to do:
SELECT col1 FROM db1.table1 UNION SELECT col1 FROM db2.table1 ;
I am using PHP to execute the query above, so I need to know which performs better: running that single query once, or making two queries and merging the results in PHP:
SELECT col1 FROM db1.table1
SELECT col1 FROM db2.table1
Please note that my actual MySQL queries are complicated: they use regular expressions, and sometimes subqueries.
Thanks
The only way to be sure is to benchmark it, which I haven't. The following is my best guess about the differences, assuming that:
In both cases there is some post processing in PHP (e.g. printing a result to a browser),
You will always run both queries together (either both are in query cache or none).
If you run them separately you implement the duplicate row removal in PHP.
If there are no duplicate rows:
The UNION should be faster by a roughly constant amount of time (e.g. 30 ms), because it avoids the overhead of issuing two separate queries instead of one.
If there are duplicate rows:
The UNION will save you some traffic and PHP processing, and might be noticeably faster (if there are a lot of duplicates).
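For comparison, the two-query variant with client-side duplicate removal might look like this; a minimal sketch, assuming $db1 and $db2 are PDO connections (placeholders):
// Sketch: run both queries and emulate UNION's duplicate removal in PHP.
$rows1 = $db1->query("SELECT col1 FROM table1")->fetchAll(PDO::FETCH_COLUMN);
$rows2 = $db2->query("SELECT col1 FROM table1")->fetchAll(PDO::FETCH_COLUMN);
// This array_unique() is the extra PHP work that UNION would do for you.
$merged = array_unique(array_merge($rows1, $rows2));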
This question already has answers here:
What is the best way to achieve speedy inserts of large amounts of data in MySQL?
(6 answers)
Closed 9 years ago.
I run a PHP script that reads data rows from a file, analyses them and inserts them one by one into a local MySQL database:
$mysqli = new mysqli($db_host, $db_user, $db_password, $db_db);
if ($mysqli->connect_errno) {
    echo "Failed to connect to MySQL: (" . $mysqli->connect_errno . ") " . $mysqli->connect_error;
} else {
    /* As long as there is data in the file */
    while (...) {
        ... // analyse each row (contained in an object $data)
        /* Write it to the database table. */
        $mysqli->query($data->getInsertQuery($db_table));
    }
}
I have 40 million data rows. The first couple of million rows were inserted very fast, but in the last 6 hours only two million were inserted (I'm now at 30 million), and it seems to get slower and slower (so far, no index has been defined!).
I was wondering if there is a more efficient way of writing data to the table. If possible, I'd prefer a solution without extra (temporary) files.
It will be more efficient to first translate your file into an SQL file (simply change your script to write the statements to a file) and then load it using the mysql command line, like this:
mysql -uuser -p dbname < file.sql
Over such a large import, this will save you quite a bit of the overhead that comes from using PHP. Just remember to stream the data into the file one query at a time ;)
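A minimal sketch of that approach, assuming a hypothetical CSV input with two columns; the file names, table and escaping are illustrative only:
// Sketch: translate the input file into a file of INSERT statements,
// then load it with: mysql -uuser -p dbname < file.sql
$in  = fopen('data.csv', 'r');
$out = fopen('file.sql', 'w');
while (($fields = fgetcsv($in)) !== false) {
    $a = addslashes($fields[0]); // stand-in for proper escaping
    $b = addslashes($fields[1]);
    fwrite($out, "INSERT INTO my_table (a, b) VALUES ('$a', '$b');\n");
}
fclose($in);
fclose($out);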
It's possible to pregenerate the SQL INSERT commands, store them in a file, and then import the data into MySQL:
mysql --default-character-set=utf8 --user=your_user -p your_db < tbl.sql
You can use prepared statements to speed it up a bit:
See http://devzone.zend.com/239/ext-mysqli-part-i_overview-and-prepared-statements/ which I found googling for "stackoverflow mysqli prepared statements"
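Applied to the loop in the question, that might look like the sketch below; the two-column table is hypothetical. The statement is parsed once, only the bound values change per row, and grouping rows into transactions cuts the per-insert commit overhead:
// Sketch: prepare once, execute per row, commit in chunks of 10k.
$mysqli = new mysqli($db_host, $db_user, $db_password, $db_db);
$a = $b = null;
$stmt = $mysqli->prepare("INSERT INTO my_table (a, b) VALUES (?, ?)");
$stmt->bind_param('ss', $a, $b); // binds by reference

$mysqli->begin_transaction();
$i = 0;
foreach ($rows as list($a, $b)) { // $rows: the analysed data (hypothetical)
    $stmt->execute();
    if (++$i % 10000 === 0) {
        $mysqli->commit();
        $mysqli->begin_transaction();
    }
}
$mysqli->commit();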
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
MySQL Injection - Use SELECT query to UPDATE/DELETE
I have found a bug in my site that allows SQL injection:
http://mysite.com/script.php?id=1 union select 1,2,3 outputs all fields whose id equals one, plus one additional row containing 1,2,3. I know that I have to validate user input to close the hole.
However, my question is about something else. Is it possible to perform an UPDATE or INSERT query through it? I am able to comment out the rest of the query using --, but I cannot use multiple statements delimited by ;. So is an UPDATE injection possible in my case? I can show the PHP code and SQL query if needed.
$sql = "SELECT id, title, text from table where cId=$val";
$result = mysql_query($sql);
$array = mysql_fetch_array($result);
//echo rows in table
Judging from MySQL Injection - Use SELECT query to UPDATE/DELETE,
all that is protecting you is a limitation of mysql_query. I would not rely on this, and in particular not on it staying this way over time. You should never rely on a feature being disabled by default. Maybe the next version will already allow statements such as:
SELECT id, title, text from table where cId=1; DROP table table
Nope, it is not possible. Most probably you are using mysql_query(), which does not allow multiple queries to be run in one call. Hence, if your query starts with SELECT (as it does), no UPDATE can be injected.
Edit: even then, use mysql_real_escape_string() on your input.
By default this should not be possible. There are, however, options to run multiple statements in one string (available since MySQL 5.0), which have to be enabled with mysql_set_server_option.
Please consider changing your statement like this to use mysql_real_escape_string (note the added quotes around the value; escaping alone does not help in an unquoted numeric context):
$q = mysql_fetch_array(mysql_query("SELECT id, title, text from table where cId = '" . mysql_real_escape_string($val) . "'"));
Best of all, change your code to use PDO, since all mysql_* functions are officially deprecated.
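For the query in the question, a minimal PDO sketch with a bound parameter looks like this (DSN and credentials are placeholders):
// Sketch: the value travels separately from the SQL text, so input like
// "1 union select 1,2,3" can no longer change the shape of the query.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');
$stmt = $pdo->prepare("SELECT id, title, text FROM `table` WHERE cId = ?");
$stmt->execute(array($_GET['id']));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);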
Has anyone run into this one before?
I have a stored procedure in SQL Server with the following parameters:
exec PaySummary 'DemoTest', 'DemoTest-Mohn-00038', '5/14/12', '5/27/12', 'manager', 'DemoTest-Boyd-00005'
And the following MSSQL query in PHP running the exact same call:
private function dataTest() {
    $strSQL = "exec PaySummary 'DemoTest', 'DemoTest-Mohn-00038', '5/14/12', '5/27/12', 'manager', 'DemoTest-Boyd-00005'";
    $a = mssql_query($strSQL);
    echo $strSQL;
    while ($row = mssql_fetch_array($a)) {
        var_dump($row);
    }
}
When I run this directly in SQL Server, I get 3 results...
When I run it through PHP, I get 2 results...
Are there any runtime settings (e.g. SET NOCOUNT ON) that must be set on a SQL Server stored procedure to ensure the accuracy of its output? Or is there a known issue with passing date parameters that would affect the results of a date-driven stored procedure?
Microsoft-IIS/5.0 / PHP/5.2.5 / SQL Server 2008 R2 (Where the stored procedure is executed).
For anyone in this same situation... it is caused by the CONCAT_NULL_YIELDS_NULL option in SQL Server. This one flag can make a stored procedure behave slightly differently depending on how you use concatenation, etc. A good way to solve the problem is to wrap an ISNULL around many of your expressions, which got rid of the differing results for me.
Another option, if you do not want to fix your sprocs, is to check the path SQL traffic is taking (TCP/IP, etc.). Watching the audits, I noticed that some session settings were wildly different depending on the connection SQL was running through.
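If that flag is indeed the difference, one way to test it from PHP is to pin the session settings to match your SQL tool before calling the procedure. A sketch (assuming CONCAT_NULL_YIELDS_NULL is the setting meant above; adjust to whatever the audits show differs):
// Sketch: force the session options to known values, then run the proc,
// so PHP and the management tool execute under identical settings.
mssql_query("SET CONCAT_NULL_YIELDS_NULL ON");
mssql_query("SET NOCOUNT ON");
$a = mssql_query("exec PaySummary 'DemoTest', 'DemoTest-Mohn-00038', '5/14/12', '5/27/12', 'manager', 'DemoTest-Boyd-00005'");
while ($row = mssql_fetch_array($a)) {
    var_dump($row);
}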