I am trying the following, which is not working:
update table_name set text_column= load_file('C:\temp\texttoinset.txt') where primary_key=5;
Here text_column is of type TEXT.
This gives:
Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statement is unsafe because it uses a system function that may return a different value on the slave. Rows matched: 1 Changed: 0 Warnings: 1
What is the right way to insert a log file's contents into MySQL from PHP?
This is a possible duplicate of this question: https://serverfault.com/questions/316691/mysql-binlog-format-dilemma
Anyway, the key is: "Statement is unsafe because it uses a system function that may return a different value on the slave."
When the MySQL database is set up to use replication, system functions like load_file can cause issues. The file C:\temp\texttoinset.txt is likely different between the master server and the slave server (or may not even exist on one of them).
When using replication, it is best to avoid system functions (like load_file and NOW()) because the values will be different when executed on different servers. If you want to load a file into a MySQL database that uses replication, consider using PHP's file_get_contents to read the file, and then use that to insert it into the database.
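For example, a minimal sketch using PDO; the connection details are placeholders, while the table and column names come from the question.

// Read the file on the PHP side, so replication only ever sees the literal value.
$contents = file_get_contents('C:\\temp\\texttoinset.txt');
if ($contents === false) {
    die('Could not read file');
}

$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');
$stmt = $pdo->prepare('UPDATE table_name SET text_column = :contents WHERE primary_key = 5');
$stmt->execute([':contents' => $contents]);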
As a side note, I don't know why you're trying to insert a log file into MySQL, especially as a single TEXT column. There is probably a better way to do what you're trying to achieve.
I hope somebody can give me some support.
My problem is:
I have a table with 8 fields and about 510 000 records. In a web form, the user selects an Excel file, which is read with SimpleXLSX. The file has about 340 000 lines. With PHP and the SimpleXLSX library the file is loaded into memory; then, in a for loop, the script reads it line by line, takes one value from each line, and searches for that value in the table. If the value already exists in the table, it is not inserted; otherwise, the values that were read are stored in the table.
This process takes days to finish.
Can somebody suggest a way to speed up the process?
Thanks a lot.
If you have many users, who may use the web form at the same time:
You should switch from SimpleXLSX to js-xlsx, so that the browser does all the parsing work and the server only writes to the database.
If you have few users (which I think is your case):
"and search this value in the table"
This is what costs the most time: comparing each value one by one against the database and then deciding whether or not to insert it. Instead, read all the existing values from the database into memory (use a hash table for the comparisons), compare everything in memory, and add the new values there, marking them as new.
At the end, write the new rows from memory to the database.
Because your table and the XLSX file have roughly the same row counts, querying the database row by row gains you almost nothing; just forget the database during the comparison, since doing it all in memory is fastest. In memory, use a hash table for the comparisons.
Of course, you can also run the above inside the database if you use @Barmar's idea: don't insert rows one at a time, insert them in batches. A sketch of both ideas follows below.
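A minimal PHP sketch of the hash-compare-then-batch-insert idea, assuming a PDO connection, a table named my_table with a unique code column, and rows already parsed from the spreadsheet; all of those names are placeholders, not from the question.

// Build a hash set of the values already in the table: O(1) lookups.
$existing = [];
foreach ($pdo->query('SELECT code FROM my_table', PDO::FETCH_ASSOC) as $row) {
    $existing[$row['code']] = true;
}

// Compare every spreadsheet row in memory; no per-row queries.
$toInsert = [];
foreach ($xlsxRows as $row) {               // rows parsed by SimpleXLSX
    if (!isset($existing[$row['code']])) {
        $existing[$row['code']] = true;     // mark as seen
        $toInsert[] = $row;
    }
}

// Insert in batches instead of one row at a time.
foreach (array_chunk($toInsert, 1000) as $chunk) {
    $placeholders = implode(',', array_fill(0, count($chunk), '(?, ?)'));
    $stmt = $pdo->prepare("INSERT INTO my_table (code, name) VALUES $placeholders");
    $params = [];
    foreach ($chunk as $r) {
        $params[] = $r['code'];
        $params[] = $r['name'];
    }
    $stmt->execute($params);
}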
Focus on the speed of throwing the data into the database. Do not try to do all the work during the INSERT; use SQL queries afterwards to further clean up the data.
Do only the minimal XLS processing needed to get the XML into the database. Use some programming language if you need to massage the data a lot; neither XLS nor SQL is the right place for complex string manipulations.
If practical, use LOAD DATA ... XML to get the data loaded; it is very fast.
SQL is excellent for handling entire tables at once; it is terrible at handling one row at a time. (Hence, my recommendation of putting the data into a staging table, not directly into the target table.)
If you want to discuss further, we need more details about the conversions involved.
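A sketch of the staging-table flow through PHP/PDO, assuming a MySQL target table named target with a unique code column and a CSV export of the spreadsheet; every name here is a placeholder. (LOAD DATA LOCAL also needs PDO::MYSQL_ATTR_LOCAL_INFILE => true on the connection.)

// Stage the raw data first; bulk loading is far faster than row-by-row INSERTs.
$pdo->exec("CREATE TABLE staging LIKE target");
$pdo->exec("LOAD DATA LOCAL INFILE '/tmp/data.csv'
            INTO TABLE staging
            FIELDS TERMINATED BY ','
            LINES TERMINATED BY '\\n'");

// Clean up / de-duplicate in bulk, then move the rows in one statement.
$pdo->exec("INSERT INTO target (code, name)
            SELECT s.code, s.name
            FROM staging s
            LEFT JOIN target t ON t.code = s.code
            WHERE t.code IS NULL");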
I have custom software which includes a function to copy data between tables on postgresql servers. It does this 1 row at a time, which is fine when the servers are close together, but as I've started deploying servers where the latency is > 300ms this does not work well at all.
I believe the solution is to use the "COPY" statement, but I am having difficulty implementing it. I am using the ADODB PHP library.
When I attempt a copy from a file I get the error "must be superuser to COPY to or from a file". The problem is that I don't know how to copy "from STDIN" when stdin is not actually piped to the PHP script. Is there any way to provide the stdin input as part of the SQL command using ADODB, or is there an equivalent command which will allow me to do a batch insert without waiting for each individual insert?
The PostgreSQL extension dblink allows you to copy data from one server's database to another. You need to know the IP address of the server and the port the database is running on. Here are some links with more info:
http://www.leeladharan.com/postgresql-cross-database-queries-using-dblink
https://www.postgresql.org/docs/9.3/static/dblink.html
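A minimal sketch of copying rows across servers with dblink, run through the same ADODB connection as the rest of the code; the connection string, table names, and column types are placeholders. (It needs CREATE EXTENSION dblink on the local database first.)

// Pull the remote rows and insert them locally in one statement.
$sql = "INSERT INTO local_table (id, name)
        SELECT id, name
        FROM dblink('host=remote.example.com port=5432 dbname=mydb user=me password=secret',
                    'SELECT id, name FROM remote_table')
             AS t(id integer, name text)";
$q = $conn->Execute($sql);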
I solved the problem by using an INSERT command which had all the inserts in a single statement using UNION ALL, i.e.:
$sql="INSERT INTO tablename (name,payload)
select 'hello', 'world' union all
select 'this', 'is a test' union all
select 'and it', 'works'";
$q=$conn->Execute($sql);
One limitation of this solution was that strings need to be quoted while integers, for example, must not be. I thus needed to write some additional code to make sure some fields were quoted but not others.
To find out which columns needed to be quoted, I used:
$coltypes=$todb->GetAll("select column_name,data_type
from information_schema.columns
where table_name='".$totable."'");
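A hedged alternative that sidesteps the quoting problem entirely: a single multi-row INSERT ... VALUES with bound parameters, so the driver decides what needs quoting. The table and column names follow the example above; whether your ADODB driver supports ? placeholders this way is an assumption worth verifying.

$rows = [
    ['hello', 'world'],
    ['this', 'is a test'],
    ['and it', 'works'],
];

// One (?,?) group per row, all values passed separately for binding.
$placeholders = implode(',', array_fill(0, count($rows), '(?,?)'));
$params = [];
foreach ($rows as $r) {
    $params[] = $r[0];
    $params[] = $r[1];
}

$sql = "INSERT INTO tablename (name,payload) VALUES $placeholders";
$q = $conn->Execute($sql, $params);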
I have a theoretical question.
I can't see any difference between declaring a function within a PHP file and creating a stored procedure in a database that does the same thing.
Why would I want to create a stored procedure to, for example, return a list of all the Cities for a specific Country, when I can do that with a PHP function to query the database and it will have the same result?
What are the benefits of using stored procedures in this case? Or which is better? To use functions in PHP or stored procedures within the database? And what are the differences between the two?
Thank you.
Some benefits include:
Maintainability: you can change the logic in the procedure without needing to edit app1, app2 and app3 calls.
Security/Access Control: it's easier to worry about who can call a predefined procedure than it is to control who can access which tables or which table rows.
Performance: if your app is not situated on the same server as your DB, and what you're doing involves multiple queries, using a procedure reduces the network overhead by involving a single call to the database, rather than as many calls as there are queries.
Performance (2): a procedure's query plan is typically cached, allowing you to reuse it again and again without needing to re-prepare it.
(In the case of your particular example, the benefits are admittedly nil.)
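For concreteness, here is a minimal sketch of the question's example as a MySQL stored procedure called from PHP via PDO; the table and column names are assumptions, not from the question.

// One-time setup: define the procedure (a single statement, so no
// DELIMITER juggling is needed through PDO).
$pdo->exec("
    CREATE PROCEDURE get_cities(IN p_country_id INT)
    BEGIN
        SELECT id, name FROM cities WHERE country_id = p_country_id;
    END");

// Each caller now only needs the procedure's name, not its SQL.
$stmt = $pdo->prepare('CALL get_cities(?)');
$stmt->execute([42]);                          // 42 = some country id
$cities = $stmt->fetchAll(PDO::FETCH_ASSOC);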
The short answer would be: if you want your code to be portable, don't use stored procedures, because if at some point you want to change databases, for example from MySQL to PostgreSQL, you will have to update/port all the stored procedures you have written.
On the other hand, you can sometimes achieve better performance using stored procedures, because all that code is run by the database engine. You can also make the situation worse if stored procedures are used improperly.
I don't think that selecting cities by country is a very expensive operation, so I guess you don't have to use stored procedures in this case.
Most of this has already been explained by others, but I will still try to reiterate it in my own way.
Stored procedures: the logic resides in the database.
Say there is some query we need to execute. We can do that in one of two ways:
Send the query from the client to the database server, where it is parsed, compiled, and then executed.
Or station the query at the database server and create an alias for it; the client uses the alias to send a request to the server, and when the request is received, the server executes the query.
So we have:
Client ----------------------------------------------------------> Server
Conventional: the query is created at the client, propagates to the server, and on reaching the server it is parsed, compiled, and executed.
Stored procedure: an alias is created and used by the client; it propagates to the server, and on reaching the server it is parsed, compiled, and cached (the first time).
The next time the same alias comes up, the server runs the cached executable directly.
Advantages:
Reduced network traffic: if the client sends a big query, and perhaps uses the same query very frequently, every byte of the query is sent over the network each time, which increases network traffic and wastes network capacity unnecessarily.
Faster query execution: stored procedures are parsed and compiled once, and the executable is cached in the database. If the same query is repeated many times, the database runs the cached executable directly, saving the time spent on parsing and compiling. This is good if the query is used frequently.
If the query is not used frequently, caching the executable may not be worthwhile, because it takes up space and puts load on the database unnecessarily.
Modularity: if multiple applications want to use the same query, the traditional way duplicates code across the applications; the better way is to put the code close to the database, so that the duplication is removed.
Security: stored procedures are also designed with authorization in mind (who is privileged to run the query and who is not). As a DBA you can grant permission to a specific user and revoke it from others, so it is a good control point for knowing who the right people are to get access. But such things are less popular now: you can design your application and database so that only authorized people can access it.
So if security/authorization is the only reason you would use stored procedures instead of the conventional way of doing things, stored procedures might not be appropriate.
ok, this may be a little oversimplified (and possibly incomplete):
With a stored procedure:
you do not need to transmit the query to the database
the DBMS does not need to validate the query every time (validate in the sense of syntax, etc.)
the DBMS does not need to optimize the query every time (remember, SQL is declarative, therefore, the DBMS has to generate an optimized query execution plan)
My MySQL server (running with PHP via PDO on Windows Server 2008) returns error code 1406 ("data too long for column") when inserting strings longer than the column allows. The thing is, I read somewhere that MySQL usually truncates the data when it is not in strict mode. I changed sql_mode in my.ini so that it doesn't enter strict mode even at startup (it is currently ""), but it still gives me the error and rolls back, so the data is lost (truncating is the desired behaviour for the site).
I went to the command line and made an insert with a long string into a shorter VARCHAR field, and there it does truncate the data and save; it is only the site that doesn't. When I changed the mode back to strict, the command line didn't truncate either (only the error).
Also, I made the site output the current sql mode, both global and session (@@GLOBAL.sql_mode and @@SESSION.sql_mode); they both output "" but things still don't work as desired.
Does anyone know what is causing this and/or how to change it?
My suspicion is that it may have to do with PDO enabled with PDO::ATTR_ERRMODE = PDO::ERRMODE_EXCEPTION, but I have read and can't find anything helpful about that (I don't really think this is definitely the explanation, but I am just putting this out there so that you know as much as possible about the problem).
Thank you very much
You should not really do that - please don't let bad data into your database, and sanitize it in your scripts before you insert.
If you don't know the actual width of your VARCHAR columns, or don't want to hard-code it in your scripts, you can read it from the database by querying the INFORMATION_SCHEMA COLUMNS table with a query like this:
SELECT
column_name,
data_type,
character_maximum_length,
ordinal_position
FROM information_schema.columns
WHERE table_name = 'mytable'
(you may want to limit this to only data_type = 'varchar').
Having this information, and specifically character_maximum_length, you can use PHP's substr to trim your input strings to the desired length.
The advantage of this approach is that it does not alter any server configuration, and it should work for other databases too, not only MySQL.
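As a sketch, the trimming could look like this in PHP, reusing the query above; the table name, the $input array, and the connection are placeholders.

// Build a map of column name => maximum length for the VARCHAR columns.
// Aliases force predictable (lowercase) result keys.
$result = $pdo->query("SELECT COLUMN_NAME AS column_name,
                              DATA_TYPE AS data_type,
                              CHARACTER_MAXIMUM_LENGTH AS character_maximum_length
                       FROM information_schema.columns
                       WHERE table_name = 'mytable'");
$limits = [];
foreach ($result as $col) {
    if ($col['data_type'] === 'varchar') {
        $limits[$col['column_name']] = (int)$col['character_maximum_length'];
    }
}

// Trim each input value to its column's width before inserting.
foreach ($input as $column => $value) {
    if (isset($limits[$column])) {
        // mb_substr counts characters rather than bytes, matching VARCHAR(n).
        $input[$column] = mb_substr($value, 0, $limits[$column]);
    }
}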
But if you insist on doing it the unsafe way, you can try temporarily disabling strict mode by executing this:
SET sql_mode = '';
After that, MySQL should silently truncate strings. You can read more in the MySQL manual under "Server SQL Modes".
Probably the MySQL server is in "not forgiving" mode, a.k.a. traditional mode (as is usually the case). What you read is true if the MySQL server operates in forgiving mode. If you are trying to insert a string that is too long, that can be really important information, so MySQL will issue an error. You have a few alternatives:
1: Use INSERT IGNORE (which will turn the errors into warnings and proceed to truncate the data)
2: For the current session, set sql_mode to ''
With either of these your problem should go away.
PS: I have read the error you are getting, but I still think the server is operating in traditional mode (so it was no mistake that I recommended setting sql_mode to empty).
PS2: After changing my.cnf, did you restart the MySQL server?
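A quick PDO sketch of both alternatives; the table and column names are placeholders.

// Alternative 1: INSERT IGNORE demotes the "data too long" error to a
// warning and truncates the value.
$stmt = $pdo->prepare('INSERT IGNORE INTO my_table (short_col) VALUES (?)');
$stmt->execute([$longString]);

// Alternative 2: clear strict mode for the current session only.
$pdo->exec("SET SESSION sql_mode = ''");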
I have undertaken a small project which involves an existing database. The application was written in PHP and the database is MySQL.
I am rewriting the application, yet I still need to maintain the database's structure as well as its data. I have received an SQL dump file, but when I try running it in SQL Server Management Studio I receive many errors. I wanted to know what workaround there is to convert the SQL script from the phpMyAdmin dump file to T-SQL.
Any ideas?
phpMyAdmin is a front-end for MySQL databases. Dumping databases can be done in various formats, including SQL script code, but I guess your problem is that you are using SQL Server, and T-SQL is different from MySQL.
EDIT: I see the original poster was aware of that (there was no MySQL tag on the post). My suggestion would be to re-dump the database in CSV format (for example) and to import it via BULK INSERT; for example, for a single table:
CREATE TABLE MySQLData [...]

BULK INSERT MySQLData
FROM 'c:\mysqldata.txt'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO
This should work fine if the database isn't too large and has only a few tables.
You do have more problems than making a script run, by the way: mapping the data types is definitely not easy.
There is also an article about migrating from MySQL to SQL Server via the DTS Import/Export wizard, which may well be a good way to go if your database is large (and you still have access to it, i.e., you don't only have the dump).
The syntax between T-SQL and MySQL is not a million miles apart; you could probably rewrite the script through trial and error and a series of find-and-replaces.
A better option would probably be to install MySQL and the MySQL connector, and restore the database using the dump file.
You could then create a Linked Server on the SQL Server and run a series of queries like the following:
SELECT *
INTO SQLTableName
FROM OPENQUERY
(LinkedServerName, 'SELECT * FROM MySqlTableName')
MySQL's mysqldump utility can produce somewhat compatible dumps for other systems. For instance, use --compatible=mssql. This option does not guarantee compatibility with other servers, but might prevent most errors, leaving less for you to manually alter.