I am working on a newspaper project that has no RSS feed, so I was compelled to build its feed programmatically in PHP. I have about 2300 pages to process, inserting the results of each into MySQL.
The technique I used is to process every single page and then insert its contents into MySQL. It works, but sometimes I get "MySQL server has gone away".
I tried processing 30 pages and inserting them in one request, but it stops after some time.
So I am asking: is there any way to optimize this processing to reduce the time it takes?
Thanks a lot.
Your batch insert approach is correct and likely to help. You need to find out why it stops after some time, as you say.
It is likely the PHP script timing out. Look for max_execution_time in your php.ini file and make sure it's high enough to allow the script to finish.
Also, make sure your MySQL config allows a large enough max_allowed_packet, because the batched INSERT statements you're sending can get big.
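For example, here is a minimal sketch (assuming the import runs as a standalone script; the connection details are made up) that lifts the time limit at runtime and checks the current packet limit:
<?php
// Lift PHP's execution time limit for this long-running import (0 = unlimited);
// the max_execution_time setting in php.ini does the same thing globally.
set_time_limit(0);

// Check how large a single packet/statement the server accepts; a batched
// INSERT bigger than this drops the connection ("MySQL server has gone away").
$db = new mysqli('localhost', 'user', 'password', 'newsdb');
$row = $db->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch_assoc();
echo "max_allowed_packet = {$row['Value']} bytes\n";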
Hope that helps!
There are plenty of reasons why "MySQL server has gone away". Take a look at them.
Anyway, it is strange that you load the WHOLE pages. Usually an RSS feed contains just a subject and a short text snippet. I'd generate the RSS feed as a simple XML file so it is not necessary to load data from MySQL on EVERY hit from users: you create news -> regenerate the RSS XML file, you write a new article -> regenerate the RSS XML file.
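A minimal sketch of that regenerate-on-write idea (the feed title, site URL and the shape of the $articles array are just assumptions for illustration):
<?php
// Rebuild the static RSS file from the latest articles; call this right after
// a new article is saved, so feed readers never have to touch MySQL.
function regenerate_rss(array $articles, $path)
{
    $rss = new SimpleXMLElement('<rss version="2.0"><channel/></rss>');
    $channel = $rss->channel;
    $channel->addChild('title', 'My Newspaper');        // assumed feed title
    $channel->addChild('link', 'http://example.com/');  // assumed site URL
    foreach ($articles as $a) {                          // each $a: title, link, snippet
        $item = $channel->addChild('item');
        $item->addChild('title', htmlspecialchars($a['title']));
        $item->addChild('link', $a['link']);
        $item->addChild('description', htmlspecialchars($a['snippet']));
    }
    file_put_contents($path, $rss->asXML());
}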
If you still want to prepare your data to be inserted, just create a file with ALL inserts and then load data from this file.
$ mysql -u root
mysql> \. /tmp/data_to_load.sql
Yes! all 2300 at a time ;)
$generated_sql = 'insert into Table (c1,c2,c3) values (data1,data2,data3);insert into Table (c1,c2,c3) values (data4,data5,data6);';
$sql_file = '/tmp/somefile';
$fp = fopen($sql_file, 'w');
fwrite($fp, $generated_sql, strlen($generated_sql)); // wrote sql script
fclose($fp);
`mysql -u $mysql_username --password=$mysql_password < $sql_file`;
Backticks are necessary in the last line!
$ mysql -u root test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 171
Server version: 5.1.37-1ubuntu5.5 (Ubuntu)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create table test (id int(11) unsigned not null primary key);
Query OK, 0 rows affected (0.12 sec)
mysql> exit
Bye
$ echo 'insert into test.test values (1); insert into test.test values (2);' > file
$ php -a
Interactive shell
php > `mysql -u root < /home/nemoden/file`;
php > exit
$ mysql -u root test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 180
Server version: 5.1.37-1ubuntu5.5 (Ubuntu)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select * from test
-> ;
+----+
| id |
+----+
| 1 |
| 2 |
+----+
2 rows in set (0.00 sec)
So, as you can see, it worked perfectly.
I have a MySQL database called 'sample1' on one of my Windows laptops and another MySQL database called 'sample2' on the other machine. I want to connect both machines together and link the databases 'sample1' and 'sample2' so that any query I execute on 'sample1' is reflected in 'sample2' as well (distributed query processing).
E.g., if sample1 and sample2 each contain 5 records, deleting a record in sample1 must be reflected in sample2 as well.
I use WAMP and work on PHP alongside MySQL. Kindly help...
As I understand it, you need sample1 to be identical to sample2, with the queries distributed automatically.
It looks like the best way (maybe not the easiest) is to use MySQL replication: https://dev.mysql.com/doc/refman/5.0/en/replication-howto.html
EDIT: this answer may not be what you need, because with replication one of the two servers (the master) must stay up at all times, or you will need a third server if you want to be able to shut down the other two.
The following code opens two MySQL server connections ($conn1 and $conn2) and then each connection will select one database to use.
$database1 = "students";
$database2 = "employees";
$conn1 = mysql_connect('host1', 'user', 'password');
if(!$conn1) {
die("Not connected: ". mysql_error());
}else{
mysql_select_db($database1, $conn1);
}
$conn2 = mysql_connect('host2', 'user', 'password', TRUE);
if(!$conn2) {
die("Not connected: ". mysql_error());
}else{
mysql_select_db($database2, $conn2);
}
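From there, a crude way to keep the two databases in step at the application level (not real replication, and not safe if one server is down) is to run the same statement on both connections. A rough sketch continuing the code above, with a made-up table and row:
$sql = "DELETE FROM some_table WHERE id = 5"; // hypothetical statement to mirror
if (!mysql_query($sql, $conn1)) {
    die("Query failed on host1: " . mysql_error($conn1));
}
if (!mysql_query($sql, $conn2)) {
    die("Query failed on host2: " . mysql_error($conn2));
}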
There are 2 solutions for your reference.
MySQL master-slave mode (MySQL replication): in your case we assume 'sample1' is the master and the other database ('sample2') is the slave. When you delete data from the foo table of sample1, this operation will be reflected in the foo table of 'sample2'. For more details please see https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-in-mysql
MySQL FEDERATED storage engine: this engine maps remote data to a local table, with the same effect as item 1; a minimal sketch follows the link below.
please see the link:
https://dev.mysql.com/doc/refman/5.1/en/federated-storage-engine.html
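Roughly, you create a FEDERATED table on sample1 that points at the remote table on sample2, so writes against it land on the other server. A minimal sketch (host, credentials and columns are assumptions; the local definition must match the remote table, and the FEDERATED engine is not enabled by default):
<?php
$db1 = new mysqli('localhost', 'user', 'password', 'sample1');
// Local "window" onto sample2's foo table on the other machine.
$db1->query("
    CREATE TABLE foo_remote (
        id INT NOT NULL,
        title VARCHAR(255),
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
    CONNECTION='mysql://user:password@192.168.0.20:3306/sample2/foo'
") or die($db1->error);
// This delete is executed against sample2's foo table.
$db1->query("DELETE FROM foo_remote WHERE id = 5") or die($db1->error);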
Hope this can help you!
You can follow this article in order to set up database replication, or follow the steps below.
Here is a tutorial written by Falko Timme that shows how to replicate the database exampledb from the master with the IP address 192.168.0.100 to a slave. Both systems (master and slave) are running Debian Sarge; however, the configuration should apply to almost all distributions with little or no modification.
To configure the master we first have to edit /etc/mysql/my.cnf. We have to enable networking for MySQL, and MySQL should listen on all IP addresses, therefore we comment out these lines (if they exist):
#skip-networking
#bind-address = 127.0.0.1
Furthermore we have to tell MySQL for which database it should write logs (these logs are used by the slave to see what has changed on the master), which log file it should use, and we have to specify that this MySQL server is the master. We want to replicate the database exampledb, so we put the following lines into /etc/mysql/my.cnf:
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db=exampledb
server-id=1
Then we restart MySQL:
/etc/init.d/mysql restart
Then we log into the MySQL database as root and create a user with replication privileges:
mysql -u root -p
Enter password:
Now we are on the MySQL shell.
GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '<some_password>'; (Replace <some_password> with a real password!)
FLUSH PRIVILEGES;
Next (still on the MySQL shell) do this:
USE exampledb;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
The last command will show something like this:
+---------------+----------+--------------+------------------+
| File | Position | Binlog_do_db | Binlog_ignore_db |
+---------------+----------+--------------+------------------+
| mysql-bin.006 | 183 | exampledb | |
+---------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Write down this information, we will need it later on the slave!
Then leave the MySQL shell:
quit;
There are two possibilities to get the existing tables and data from exampledb from the master to the slave. The first one is to make a database dump, the second one is to use the LOAD DATA FROM MASTER; command on the slave. The latter has the disadvantage that the database on the master will be locked during this operation, so if you have a large database on a high-traffic production system, this is not what you want; in that case I recommend following the first method. However, the latter method is very fast, so I will describe both here.
If you want to follow the first method, then do this:
mysqldump -u root -p<password> --opt exampledb > exampledb.sql (Replace <password> with the real password for the MySQL user root! Important: There is no space between -p and <password>!)
This will create an SQL dump of exampledb in the file exampledb.sql. Transfer this file to your slave server!
If you want to go the LOAD DATA FROM MASTER; way then there is nothing you must do right now.
Finally we have to unlock the tables in exampledb:
mysql -u root -p
Enter password:
UNLOCK TABLES;
quit;
I copied the contents of a large data table from one table into another that has 2 additional columns.
The original table (table1) is queried with
select * from cc2;
But the same data, with the 2 additional columns holding NULL values throughout, does not run normally: I have to add a LIMIT clause to make the query execute, like
select * from cc limit 0,68000;
The database is the same, and the table contents are the same. The question is WHY this weird behavior occurs. Also, to feed this data into a foreach() loop I am having to run a for() loop instead, and it is hurting performance.
Any suggestions will be tried and tested ASAP.
Thanks in advance, geniuses.
Instead of using PHP to import a lot of data, just try executing it directly from the command line.
First, dump your table:
mysqldump -u yourusername -pyourpassword yourdatabase tableName > text_file.sql
Then change the table name at the top of that file (and make sure the extra columns have default NULL); a quick way to do this is sketched at the end of this answer. Import it with
mysql -u yourusername -pyourpassword yourdatabase < text_file.sql
Using a text file containing the queries is always preferable for large data sets, so you don't run into problems with PHP or the web server.
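As for the table-name change mentioned above: if the dump fits in memory, a quick way to retarget it before importing is something like this (a rough sketch; the table names `cc2` and `cc` are taken from the question and may differ in your case):
<?php
// Hypothetical one-off helper: point the dump at the new table before importing it.
$dump = file_get_contents('text_file.sql');
$dump = str_replace('`cc2`', '`cc`', $dump);
file_put_contents('text_file.sql', $dump);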
I'm wondering if oci_connect() can cause a 1438 error, because I get this all the time:
Warning: oci_connect() [function.oci-connect]: ORA-00604: error
occurred at recursive SQL level 1 ORA-01438: value larger than
specified precision allowed for this column ORA-06512: at line 8 in
/xxxxxx/some.php on line 220
It doesn't depend on which table is being queried. It seems like oci_connect() is inserting some tracking stuff into some SYS table, or maybe a trigger is tied to the logon. But I don't have the permissions to investigate this problem in SYS.
Any idea what could be the cause of this error?
Update
Does Oracle do some logging somewhere automatically, out of the box, without being specifically configured to? Can I somehow get Oracle or PHP to show me which table or column is affected?
Update
I found out that when I call the PHP script from Bash directly, it works fine, but a call from the web causes the problem described in the title. Any idea?
The message error occurred at recursive SQL level 1 suggests to me that the error is arising within a trigger. My guess is that there is an AFTER LOGON ON SCHEMA or DATABASE trigger, and for some reason it causes an error when your web server process attempts to connect.
Here's an example of how to generate the error you're getting. I have a table called TINY, with a single column that can only take values up to 99:
SQL> desc tiny;
Name Null? Type
----------------------------------------- -------- ----------------------------
N NUMBER(2)
Now let's create a user account and verify that they can connect:
SQL> create user fred identified by fred account unlock;
User created.
SQL> grant connect to fred;
Grant succeeded.
SQL> connect fred/fred
Connected.
Good - let's log back in as me and create a trigger that will cause an error if FRED attempts to connect:
SQL> connect luke/password
Connected.
SQL> create or replace trigger after_logon_error_if_fred
2 after logon on database
3 begin
4 if user = 'FRED' then
5 insert into tiny (n) values (100);
6 end if;
7 end;
8 /
Trigger created.
Recall that our TINY table can only store values up to 99. So, what happens when FRED attempts to connect?
SQL> connect fred/fred
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-01438: value larger than specified precision allowed for this column
ORA-06512: at line 3
Other than the line number, and the bit PHP added, that's exactly the message you got.
If you want to see whether there are any AFTER LOGON triggers in your database, try running the query
SELECT trigger_name, owner FROM all_triggers
WHERE TRIM(triggering_event) = 'LOGON';
On my database (Oracle 11g XE beta), I get the following output:
TRIGGER_NAME OWNER
------------------------------ ------------------------------
AFTER_LOGON_ERROR_IF_FRED LUKE
I don't believe Oracle does any logging out-of-the-box, and I'd be surprised if PHP's oci_connect does either.
I can only speculate as to why the error arises only for your web server and not when you run PHP from a bash script. Perhaps the trigger is querying V$SESSION and trying to figure out what user account is trying to connect to the database?
Well, depending on the column, you're either trying to insert a number that's larger than the allowed bounds for a numeric column, or you're trying to insert a string into a varchar2(n) column that is longer than n characters. Here are more specifics on Oracle datatypes.
Without more specific information as to what's being inserted into what column in what table at line 220 of some.php, I can't be of much more direct help.
I have an application at Location A (LA-MySQL) that uses a MySQL database; And another application at Location B (LB-PSQL) that uses a PostgreSQL database. (by location I mean physically distant places and different networks if it matters)
I need to update one table at LB-PSQL so it is synchronized with LA-MySQL, but I don't know exactly what the best practices are in this area.
Also, the table I need to update at LB-PSQL does not necessarily have the same structure as LA-MySQL (but I think that isn't a problem, since the fields I need to update on LB-PSQL can accommodate the data from the LA-MySQL fields).
Given this, what are the best practices, usual methods, or references for doing this kind of thing?
Thanks in advance for any feedback!
If both servers are in different networks, the only chance I see is to export the data into a flat file from MySQL.
Then transfer the file (e.g. FTP or something similar) to the PostgreSQL server and import it there using COPY
I would recommend importing the flat file into a staging table. From there you can use SQL to move the data to the appropriate target table; that gives you the chance to do data conversion or update existing rows (see the sketch below).
If that transformation is more complicated, you might want to think about using an ETL tool (e.g. Kettle) to do the migration on the target server.
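For the simple case, here is a minimal sketch of the flat-file plus staging-table route from PHP (connection details, file layout and table/column names are all assumptions):
<?php
$pg = pg_connect('host=localhost dbname=lb_db user=lb_user password=secret')
    or die('pg_connect failed');

// 1) Load the tab-separated export from LA-MySQL into a staging table
//    that mirrors the export's layout.
$rows = file('/tmp/la_export.tsv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
pg_query($pg, 'TRUNCATE staging_articles') or die(pg_last_error($pg));
pg_copy_from($pg, 'staging_articles', $rows, "\t") or die(pg_last_error($pg));

// 2) Move/convert the data into the real target table in plain SQL,
//    doing any type conversion or de-duplication here.
pg_query($pg, 'INSERT INTO articles (title, body, published_at)
               SELECT title, body, published_at::timestamp
               FROM staging_articles') or die(pg_last_error($pg));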
Just create a script on LA that will do something like this (bash sample):
TMPFILE=`mktemp` || (echo "mktemp failed" 1>&2; exit 1)
pg_dump --column-inserts --data-only --no-password \
--host="LB_hostname" --username="username" \
--table="tablename" "databasename" \
| awk '/^INSERT/ {i=1} {if(i) print} # ignore everything up to the first INSERT' \
> "$TMPFILE" \
|| (echo "pg_dump failed" 1>&2; exit 1)
(echo "begin; truncate tablename;"; cat "$TMPFILE"; echo 'commit;' ) \
| mysql "databasename" \
|| (echo "mysql failed" 1>&2; exit 1)
rm "$TMPFILE"
Then set it to run, for example, once a day via cron. You'd need a '.pgpass' file for the PostgreSQL password and a MySQL option file for the MySQL password.
This should be fast enough for less than a million rows.
Not a turnkey solution, but this is some code to help with this task using triggers. The following assumes no deletes or updates for brevity. Needs PG>=9.1
1) Prepare 2 new tables, mytable_a and mytable_b, with the same columns as the source table to be replicated:
CREATE TABLE mytable_a AS TABLE mytable WITH NO DATA;
CREATE TABLE mytable_b AS TABLE mytable WITH NO DATA;
-- trigger function which copies data from mytable to mytable_a on each insert
CREATE OR REPLACE FUNCTION data_copy_a() RETURNS trigger AS $data_copy_a$
BEGIN
INSERT INTO mytable_a SELECT NEW.*;
RETURN NEW;
END;
$data_copy_a$ LANGUAGE plpgsql;
-- start trigger
CREATE TRIGGER data_copy_a AFTER INSERT ON mytable FOR EACH ROW EXECUTE PROCEDURE data_copy_a();
Then when you need to export:
-- move data from mytable_a -> mytable_b without stopping trigger
WITH d_rows AS (DELETE FROM mytable_a RETURNING * ) INSERT INTO mytable_b SELECT * FROM d_rows;
-- export data from mytable_b -> file
\copy mytable_b to '/tmp/data.csv' WITH DELIMITER ',' csv;
-- empty table
TRUNCATE mytable_b;
Then you can import data.csv into MySQL.
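For instance, a minimal sketch of that import (the target table, connection details and CSV options are assumptions, and LOCAL INFILE must be enabled on the server as well):
<?php
// Build the connection with LOCAL INFILE allowed, then load the CSV produced above.
$db = mysqli_init();
$db->options(MYSQLI_OPT_LOCAL_INFILE, true);
$db->real_connect('localhost', 'user', 'password', 'targetdb');
$db->query("
    LOAD DATA LOCAL INFILE '/tmp/data.csv'
    INTO TABLE mytable
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
    LINES TERMINATED BY '\\n'
") or die($db->error);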
I have one table in Database 1 and one table in Database 2, and the structure of both tables is exactly the same. Table 1 (DB1) gets new rows added daily, and I need to copy those new rows from table 1 (DB1) into table 1 (DB2) so that the two tables remain the same. A cron job will trigger a PHP script at midnight to do this task. What is the best way to do this, and how, using PHP/MySQL?
You might care to have a look at replication (see http://dev.mysql.com/doc/refman/5.4/en/replication-configuration.html). That's the 'proper' way to do it; it isn't to be trifled with, though, and for small tables the above solutions are probably better (and certainly easier).
This might help you out; it's what I do on my database for a similar kind of thing.
$dropSQL = "DROP TABLE IF EXISTS `$targetTable`";
$createSQL = "CREATE TABLE `$targetTable` SELECT * FROM `$activeTable`";
$primaryKeySQL = "ALTER TABLE `$targetTable` ADD PRIMARY KEY(`id`)";
$autoIncSQL = "ALTER TABLE `$targetTable` CHANGE `id` `id` INT( 60 ) NOT NULL AUTO_INCREMENT";
mysql_query($dropSQL);
mysql_query($createSQL);
mysql_query($primaryKeySQL);
mysql_query($autoIncSQL);
Obviously you will have to modify the target and active table variables. Dropping the table loses the primary key, but as you can see above it's easy enough to add back in.
I would recommend replication as has already been suggested. However, another option is to use mysqldump to grab the rows you need and send them to the other table.
mysqldump -uUSER -pPASSWORD -hHOST --compact -t --where="date = CURRENT_DATE" DB1 TABLE | mysql -uUSER -pPASSWORD -hHOST -D DB2
Replace USER, HOST, and PASSWORD with login info for your database. You can use different information for each part of the command if DB1 and DB2 have different access information. DB1 and DB2 are the names of your databases, and TABLE is the name of the table.
You can also modify the --where option to grab only the rows which need to be updated. Hopefully you have some query you can use. As mentioned previously, if the table has a primary key, you could grab the last key which DB2 has using a command something like
KEY=`echo "SELECT MAX(KEY_COLUMN) FROM TABLE;" | mysql -uUSER -pPASSWORD -hHOST -D DB2`
for a bash shell script (then use this value in the WHERE clause above). Depending on how your primary key is generated, this may be a bad idea since rows may be added in holes in the keyspace if they exist.
This solution will also work if rows are changed, as long as you have a query which can select those rows; just add the --replace option to the mysqldump command. In your situation, it would be best to add some kind of column, such as a last-updated date, that you can compare against.