I have custom software which includes a function to copy data between tables on PostgreSQL servers. It does this one row at a time, which is fine when the servers are close together, but now that I've started deploying servers where the latency is > 300ms this does not work well at all.
I believe the solution is to use the COPY statement, but I am having difficulty implementing it. I am using the ADOdb PHP library.
When I attempt a COPY from a file I get the error "must be superuser to COPY to or from a file". The problem is that I don't know how to COPY "from STDIN" when stdin is not actually piped to the PHP script. Is there any way to provide the stdin input as part of the SQL command using ADOdb, or is there an equivalent command which will allow me to do a batch insert without waiting for each individual insert?
The PostgreSQL extension dblink allows you to copy data from one server's database to another. You need to know the IP address of the server and the port the database is listening on. Here are some links with more info:
http://www.leeladharan.com/postgresql-cross-database-queries-using-dblink
https://www.postgresql.org/docs/9.3/static/dblink.html
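For example, a minimal sketch of pulling rows from a remote table into a local one (the connection parameters and table/column names here are placeholders, and the dblink extension must be installed on the local server):
INSERT INTO local_table (name, payload)
SELECT name, payload
FROM dblink('host=10.0.0.1 port=5432 dbname=sourcedb user=copyuser password=secret',
            'SELECT name, payload FROM remote_table')
     AS t(name text, payload text);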
I solved the problem by using a single INSERT statement that combines all the rows with UNION ALL, i.e.:
$sql="INSERT INTO tablename (name,payload)
select 'hello', 'world' union all
select 'this', 'is a test' union all
select 'and it', 'works'";
$q=$conn->Execute($sql);
One limitation of this solution is that string values must be quoted while integers, for example, must not be. I therefore needed to write some additional code to make sure certain fields were quoted and others were not.
To find out which columns needed quoting I used:
$coltypes=$todb->GetAll("select column_name,data_type
from information_schema.columns
where table_name='".$totable."'");
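For completeness, here is a rough sketch of how that type information can drive the quoting. Variable names like $rows and $colnames are hypothetical, and ADOdb's qstr() does the actual quoting/escaping:
// Decide per column whether values need quoting (the numeric-type list
// is an assumption; extend it to match your schema).
$needsquotes = array();
foreach ($coltypes as $col) {
    $needsquotes[$col['column_name']] = !in_array($col['data_type'],
        array('integer', 'bigint', 'smallint', 'numeric', 'double precision'));
}

$selects = array();
foreach ($rows as $row) {           // $rows: rows fetched from the source table
    $vals = array();
    foreach ($colnames as $name) {  // $colnames: column names in insert order
        $vals[] = $needsquotes[$name] ? $todb->qstr($row[$name]) : $row[$name];
    }
    $selects[] = "select " . implode(",", $vals);
}
$sql = "INSERT INTO " . $totable . " (" . implode(",", $colnames) . ") "
     . implode(" union all ", $selects);
$todb->Execute($sql);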
I am currently running a script that pulls back transaction data (one line per transaction) from a database with the following SQL query, run in MySQL Workbench.
SELECT
id,
merchant_id,
affiliate_id,
date,
sale_amount,
commission,
ip
FROM transactions.transaction201505
One of the columns in the transactions table is the IP address. Is there a way to embed this PHP function (or a function to this effect) within the SQL script:
the PHP function geoip_country_name_by_addr: http://php.net/manual/en/function.geoip-country-code-by-name.php
I have seen many examples of adding MySQL to PHP, but I would like to go the other way around and essentially add the country/city of sale to my data, so that the result could look like the sample below and could be run from a tool such as MySQL Workbench. I don't have access to run PHP scripts on this database, and therefore need a solution in SQL.
Thanks in advance for any help you can offer on this.
Jamie
No, this is not directly possible. You could write a UDF (user-defined function) in an external module (e.g. a .dll, .so or .dylib), but using PHP in such a module is probably not possible either, because PHP is a scripting language and you would need to compile your UDF into binary code. MySQL does have a number of built-in functions (https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html), though none of them does what you want here.
I am trying the following, which is not working:
update table_name set text_column= load_file('C:\temp\texttoinset.txt') where primary_key=5;
Here text_column is of type TEXT.
This gives:
Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statement is unsafe because it uses a system function that may return a different value on the slave. Rows matched: 1 Changed: 0 Warnings: 1
What is the right way to insert a log file's contents into MySQL from PHP?
This is a possible duplicate of this question: https://serverfault.com/questions/316691/mysql-binlog-format-dilemma
Anyway, the key is: "Statement is unsafe because it uses a system function that may return a different value on the slave."
When the MySQL database is set up to use replication, system functions like load_file can cause issues: the file C:\temp\texttoinset.txt is likely different between the master server and the slave server (or may not even exist on one of them).
When using replication, it is best to avoid system functions (like load_file and NOW()) because their values will differ when executed on different servers. If you want to load a file into a MySQL database that uses replication, consider using PHP's file_get_contents to read the file, then insert the contents with an ordinary parameterized query.
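For example, a minimal sketch using mysqli and a prepared statement (the credentials are placeholders; table and column names are taken from the question):
$text = file_get_contents('C:\\temp\\texttoinset.txt');

$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$stmt = $mysqli->prepare('UPDATE table_name SET text_column = ? WHERE primary_key = 5');
$stmt->bind_param('s', $text);   // bound as a string, so escaping is handled for us
$stmt->execute();
Because the file is read on the PHP side, the statement that reaches the binary log contains only the literal text, which replicates identically to the slave.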
As a side note, I don't know why you're trying to insert a log file into MySQL, especially as a single TEXT column. There is probably a better way to do what you want to do.
I have undertaken a small project which involves an existing database. The application was written in PHP and the database was MySQL.
I am rewriting the application, yet I still need to maintain the database's structure as well as its data. I have received an SQL dump file, but when I try running it in SQL Server Management Studio I receive many errors. I wanted to know what workaround there is to convert the SQL script from the phpMyAdmin dump file to T-SQL.
Any ideas?
phpMyAdmin is a front-end for MySQL databases. Databases can be dumped in various formats, including SQL script code, but I guess your problem is that you are using SQL Server, and T-SQL is different from MySQL's dialect.
EDIT: I see the original poster was aware of that (there was no MySQL tag on the post). My suggestion would be to re-dump the database in CSV format (for example) and import it via BULK INSERT; for a single table, for example:
CREATE TABLE MySQLData [...]

BULK INSERT MySQLData
FROM 'c:\mysqldata.txt'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO
This should work fine if the database isn't too large and has only a few tables.
You do have more problems than making a script run, by the way: mapping the data types between the two systems is definitely not easy.
Here is an article about migrating MySQL -> SQL Server via the DTS Import/Export wizard, which may well be a good way to go if your database is large (and you still have access to the server, i.e., you don't only have the dump).
The syntax of T-SQL and MySQL is not a million miles apart; you could probably rewrite the script through trial and error and a series of find-and-replaces.
A better option would probably be to install MySQL and the MySQL Connector, and restore the database using the dump file.
You could then create a linked server on the SQL Server and run a series of queries like the following:
SELECT *
INTO SQLTableName
FROM OPENQUERY(LinkedServerName, 'SELECT * FROM MySqlTableName')
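Creating the linked server itself might look something like the following sketch. It assumes the MySQL ODBC driver is installed and a system DSN named MySqlDsn points at the restored database:
EXEC sp_addlinkedserver
    @server = 'LinkedServerName',
    @srvproduct = 'MySQL',
    @provider = 'MSDASQL',       -- Microsoft OLE DB provider for ODBC
    @datasrc = 'MySqlDsn'
GO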
MySQL's mysqldump utility can produce dumps that are somewhat compatible with other systems; for instance, use --compatible=mssql. This option does not guarantee compatibility with other servers, but it might prevent most errors, leaving less for you to alter manually.
I've got one PostgreSQL database (I'm the owner) and I'd like to drop it and re-create it from a dump.
Problem is, there are a couple of applications (two websites, Rails and Perl) that access the db regularly, so I get a "database is being accessed by other users" error.
I've read that one possibility is getting the PIDs of the processes involved and killing them individually. I'd like to do something cleaner, if possible.
phpPgAdmin seems to do what I want: I am able to drop schemas using its web interface, even while the websites are up, without getting errors. So I'm investigating how its code works. However, I'm no PHP expert.
I'm trying to understand the phpPgAdmin code in order to see how it does it. I found a line (257 in Schemas.php) where it says:
$data->dropSchema(...)
$data is a global variable and I could not find where it is defined.
Any pointers would be greatly appreciated.
First, find all current process IDs using your database:
SELECT usename, procpid FROM pg_stat_activity WHERE datname = current_database();
Second, kill the processes you don't want:
SELECT pg_terminate_backend(your_procpid);
This works as of version 8.4; in earlier versions pg_terminate_backend() is unknown and you have to kill the process at the OS level.
To quickly drop all connections to a given database, this shortcut works nicely. It must be run as superuser:
SELECT pg_terminate_backend(procpid) FROM pg_stat_activity WHERE datname='YourDB';
On more recent Postgres versions (at least 9.2+, likely earlier), the column names have changed and the query is:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname='YourDB';
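Putting it together, the whole drop-and-recreate can be scripted. A sketch for 9.2+; run it while connected to a different database such as postgres, since you cannot drop the database you are connected to:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'YourDB'
  AND pid <> pg_backend_pid();   -- don't terminate our own session

DROP DATABASE "YourDB";
CREATE DATABASE "YourDB";
-- then restore from the dump, e.g. with psql or pg_restore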
I'm not sure about PostgreSQL, but I think a possible solution would be to lock the tables so other processes will fail when they try to access them.
See:
http://www.postgresql.org/docs/current/static/sql-lock.html
I've got a site that requires manual creation of the database tables on install. At the moment they are saved (along with any initial data) in a collection of .sql files.
I've tried auto-creating them using exec() with the MySQL CLI, and while it works on a few platforms it's fairly flaky and I don't really like doing it this way; it is hard to debug errors and far from bulletproof (especially if the MySQL executable isn't in the system path).
Is there a better way of doing this? The MySQL query() command only allows one SQL statement per call (which is the sticking point).
I've heard MySQLi may solve some of these issues. I am fairly invested in the original MySQL library, but I'd be willing to switch provided it's stable, compatible, commonly supported on a standard server build, and an improvement in general.
Failing this, I'd probably be willing to do some sort of creation from a PHP array/data structure, which is arguably cleaner as it would be able to update tables to match the schema in situ. I am assuming this is a problem that has already been solved, so any links to example implementations with pros/cons would be useful!
Thanks in advance for any insight.
Apparently you can pass 65536 as the client flag when connecting to the database to allow multiple queries, i.e. several ;-separated statements in one SQL string.
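If switching to mysqli is an option, multi-statement execution is supported explicitly rather than via a magic flag. A sketch (file name and credentials are placeholders):
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
if ($mysqli->multi_query(file_get_contents('install.sql'))) {
    // Flush every result set so subsequent queries don't fail.
    do {
        if ($result = $mysqli->store_result()) {
            $result->free();
        }
    } while ($mysqli->next_result());
}
if ($mysqli->errno) {
    die('Import failed: ' . $mysqli->error);
}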
You could also just read in the contents of the SQL files, explode by ; if necessary, and run the queries inside a transaction to make sure they all execute properly.
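A rough sketch of that with the legacy mysql_* functions the question uses. Note that a naive explode breaks on string literals containing ';', and MySQL DDL statements commit implicitly, so the transaction only really protects the data inserts:
$statements = array_filter(array_map('trim',
    explode(';', file_get_contents('install.sql'))));

mysql_query('START TRANSACTION');
foreach ($statements as $statement) {
    if (!mysql_query($statement)) {
        mysql_query('ROLLBACK');
        die('Failed: ' . mysql_error());
    }
}
mysql_query('COMMIT');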
Another option would be to have a look at Phing and dbdeploy to manage databases.
If you're using this to migrate data between systems, consider using the LOAD DATA INFILE syntax (http://dev.mysql.com/doc/refman/5.1/en/load-data.html) after having used SELECT ... INTO OUTFILE (http://dev.mysql.com/doc/refman/5.1/en/select.html)
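A sketch of that pair of statements (table and file names are placeholders; each file path is interpreted on the respective server):
-- On the source server:
SELECT * INTO OUTFILE '/tmp/mytable.txt'
FROM mytable;

-- On the target server:
LOAD DATA INFILE '/tmp/mytable.txt'
INTO TABLE mytable;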
You can run the schema creation/update commands via the standard mysql_* PHP functions. And if the query() command, as you call it, allows only one statement, just call it once per statement.
I really don't see why you require everything to be in the same call.
You should check for errors after each statement and take corrective action if one fails (unless you are using InnoDB, in which case you can wrap all the statements in a transaction and roll back if anything fails).