Access two different databases on different servers in the same query - PHP

So far we have one server with two databases and a MySQL user that can access both of them. For example:
`select * from magento.maintable, erp.maintable`
Now our ERP is very slow and we want to move its database to another server, but we have hundreds (almost a thousand) of SQL queries that access both databases in the same statement, for example:
`insert into magento.table
select * from erp.maintable`
or
`select * from erp.maintable inner join magento.table...`
and so on.
How can I make everything keep working without changing these queries, but with the databases on different servers?
To access the databases I have created a class for each database, and through an object I run the queries, insertions, updates and deletions, like this:
```
// Opens the connection, runs a single query, and closes the connection again.
public function exec($query, $result_array = true)
{
    $this->data->connect();
    $result = $this->data->query($query, $result_array);
    $this->data->disconnect();
    return $result;
}
```
All help is welcome. The point is to find an optimal way to do this without having to manually change 1000 SQL queries written by another programmer.

To access more than one database server in a single query, you either have to use the FEDERATED storage engine or set up replication to copy the ERP data from the new server back to the original one.
The FEDERATED engine is likely to cause additional performance problems, and replication requires some work to set up.
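For illustration, a minimal sketch of the FEDERATED route, assuming the ERP database has moved to a host called erp-host, that FEDERATED is enabled on the original server, and that maintable has the (invented) columns shown:
```
<?php
// One-time setup, run against the original server. A FEDERATED table is a
// local proxy for a table on another MySQL server, so old cross-database
// queries such as "select * from erp.maintable" keep working unchanged.
$pdo = new PDO('mysql:host=localhost;dbname=erp', 'admin', 'secret');

// The column definitions must match the remote erp.maintable exactly
// (the columns below are placeholders, not the real layout).
$pdo->exec("
    CREATE TABLE maintable (
        id   INT NOT NULL,
        name VARCHAR(100),
        PRIMARY KEY (id)
    )
    ENGINE=FEDERATED
    CONNECTION='mysql://fed_user:fed_pass@erp-host:3306/erp/maintable'
");
```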
If the sole reason for the new server is ERP performance, you might want to find out why the ERP is slow and fix that instead (optimize queries, move both databases to the new server, etc.). When both databases are on the same server, the query optimizer can combine them and make efficient use of indexes.

Related

How can I connect two tables from different databases using PostgreSQL?

How can I connect two tables from different databases using PostgreSQL? My first database is called "payments_details" and my second one is called "insurance". I also want to display and highlight the IDs they don't have in common using PHP; is that possible?
Databases are isolated from each other, you cannot access data from different databases with one SQL statement. That is not a bug, but a design feature.
There are three ways to achieve what you want:
Don't put the data in different databases, but in different schemas in one database. It is a common mistake for people who are more experienced with MySQL to split data that belongs to one application across multiple databases and then try to join them. This happens because the term database in MySQL is roughly equivalent to what (standard) SQL calls a schema.
If you cannot do the above, e.g. because the data really belong to different applications, you can use the PostgreSQL foreign data wrapper (see the sketch after this list). This enables you to access tables from a different database (or even on a different machine) as if they were local tables. You'll have to write your statements more carefully, because complicated queries can sometimes be inefficient if large amounts of data have to be transferred between the databases.
You can use dblink, which is an older and less convenient interface than foreign data wrappers, but it can let you do things you could not do otherwise, like calling a remote function.
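A minimal sketch of the foreign data wrapper route, assuming the postgres_fdw extension is available, the insurance database lives on a host called insurance-host, and the table and column names (payments, insurance_policies, id) are invented:
```
<?php
// One-time setup, run as a sufficiently privileged user on payments_details.
$pdo = new PDO('pgsql:host=localhost;dbname=payments_details', 'user', 'pass');

$pdo->exec("CREATE EXTENSION IF NOT EXISTS postgres_fdw");
$pdo->exec("CREATE SERVER insurance_srv FOREIGN DATA WRAPPER postgres_fdw
            OPTIONS (host 'insurance-host', dbname 'insurance')");
$pdo->exec("CREATE USER MAPPING FOR CURRENT_USER SERVER insurance_srv
            OPTIONS (user 'fdw_user', password 'fdw_pass')");
// Make the remote tables visible as local foreign tables.
$pdo->exec("IMPORT FOREIGN SCHEMA public FROM SERVER insurance_srv INTO public");

// IDs present locally but missing remotely (an anti-join); PHP can then
// render and highlight this list.
$missing = $pdo->query("
    SELECT p.id
    FROM payments p
    LEFT JOIN insurance_policies i ON i.id = p.id
    WHERE i.id IS NULL
")->fetchAll(PDO::FETCH_COLUMN);
```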

Multitenancy with PHP and MySQL

I have a web application built on CodeIgniter PHP, a MySQL database, and we use PHPActiveRecords. We keep growing and expanding and now need to offer the application whitelabeled. I have most of that done; the only problem I am running into is how to handle the database. I don't want to have two database connections because a good deal of the data will be shared between the two sites. I have researched multitenancy and it sounds like a great option, but if I have to go rewrite every ActiveRecord find to have a condition where tenant_id = 'this site', and then train my employees to do the same when they write new code, it isn't scalable. Does anyone have any ideas of how to either A) integrate multitenancy into PHPActiveRecords without a lot of modifications, or B) a better solution than multitenancy.
Thank you in advance.
Depending on how many clients you have, you could create a schema per client, on one host. Prefix the table names for your common tables with your common database names, and rely on the client queries to use the default database.
Your entrypoint may do something like:
$pdo->query('USE client_12345');
And your queries may be something like:
$pdo->query('SELECT * FROM clientspecificdata WHERE ...');
And
$pdo->query('SELECT * FROM common.data WHERE ...');
Note on this: this approach carries a relatively high risk of exposing data to the wrong clients. Be sure it is appropriate for your scenario; you may be much better off with multiple connections.
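Putting the pieces together, a minimal sketch of such an entrypoint (the subdomain-based lookup and all names are invented for illustration):
```
<?php
// Hypothetical lookup: map a subdomain like "12345.example.com" to a client id.
function resolveClientId(string $host): int
{
    return (int) explode('.', $host)[0];
}

$pdo = new PDO('mysql:host=localhost', 'app_user', 'secret');

// USE cannot be parameterized, so sanitize the schema name before interpolating;
// the integer cast above guarantees a safe value here.
$schema = 'client_' . resolveClientId($_SERVER['HTTP_HOST']);
$pdo->exec("USE `$schema`");

// Unqualified tables now resolve to the client schema...
$rows = $pdo->query('SELECT * FROM clientspecificdata')->fetchAll();
// ...while shared tables stay explicitly qualified.
$common = $pdo->query('SELECT * FROM common.data')->fetchAll();
```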

How do I review an application to choose the right indexes - PHP & MySQL

We have an existing PHP/MySQL app which doesn't have indexes configured correctly (monitoring shows that we do 85% table scans, ouch!)
What is a good process to follow to identify where we should be putting our indexes?
We're using PHP (Kohana using ORM for the DB access), and MySQL.
The answer likely depends on many things. For example, your strategy might differ depending on whether you want to optimize SELECTs at all costs or whether INSERT performance matters to you as well. You might do well to read a MySQL performance tuning book or web site. There are several decent-to-great ones.
If you have a Slow Query Log, check it to see if there are particular queries that are causing problems. http://dev.mysql.com/doc/refman/5.5/en/slow-query-log.html
If you know the types of queries you'll be running or have identified problematic queries via the Slow Query Log or other mechanisms, you can then use the EXPLAIN command to get some stats on those queries. http://dev.mysql.com/doc/refman/5.5/en/explain.html
Once you have the output from EXPLAIN, you can use it to optimize. See http://dev.mysql.com/doc/refman/5.5/en/using-explain.html
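As a hedged sketch of that workflow (the orders table and customer_id column are invented for illustration):
```
<?php
// Run EXPLAIN on a query the slow query log flagged, then add an index
// if the plan reveals a full table scan.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$plan = $pdo->query("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
            ->fetchAll(PDO::FETCH_ASSOC);
print_r($plan); // type=ALL with possible_keys=NULL means a full table scan

// If customer_id is unindexed, this typically turns the scan into a lookup:
$pdo->exec("CREATE INDEX idx_orders_customer ON orders (customer_id)");
```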
Indexes are not just for primary keys or unique keys. If there are columns in your table that you search by, you should almost always index them.
I think this will help you on your database problems.
http://net.tutsplus.com/tutorials/other/top-20-mysql-best-practices/

Accessing MySQL from PHP and another process at the same time

I'm writing a program that runs (24/7) on a Linux server and adds entries to a MySQL database.
The contents of the database are presented on a web interface with PHP and the user should be able to delete entries using the web interface.
Is it possible to access the database from multiple processes at the same time?
Yes, databases are designed for this purpose quite well. You'll want to keep a few things in mind in your designs:
Concurrency and race conditions on database writes.
Performance.
Separate database permissions for separate applications (a minimal sketch follows this list).
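For the last point, per-application MySQL accounts could look like this (all user, database, and table names are invented):
```
<?php
// Run once as an administrative user: the 24/7 collector may only insert,
// while the web interface may read and delete.
$admin = new PDO('mysql:host=localhost', 'root', 'secret');
$admin->exec("CREATE USER 'collector'@'localhost' IDENTIFIED BY 'pw1'");
$admin->exec("CREATE USER 'webui'@'localhost' IDENTIFIED BY 'pw2'");
$admin->exec("GRANT INSERT ON app.entries TO 'collector'@'localhost'");
$admin->exec("GRANT SELECT, DELETE ON app.entries TO 'webui'@'localhost'");
```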
Unless you're doing something like accessing the DB through a singleton, the maximum number of simultaneous MySQL connections PHP will use is limited in your php.ini. I believe it defaults to 100.
Yes, multiple users can access the database at the same time.
You should, however, take care that the data stays consistent.
If you create or edit an entry with many small SQL statements and someone uses the web interface in the meantime, this may lead to errors.
If you have a simple DB this should not be a problem; otherwise you should consider using transactions.
http://dev.mysql.com/doc/refman/5.0/en/ansi-diff-transactions.html
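A minimal sketch of grouping related writes in a transaction (InnoDB assumed; the table and column names are invented):
```
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->beginTransaction();
    $pdo->exec("INSERT INTO entries (title) VALUES ('new entry')");
    $pdo->exec("INSERT INTO entry_log (action) VALUES ('created')");
    $pdo->commit(); // both rows become visible to the web interface at once
} catch (Exception $e) {
    $pdo->rollBack(); // a concurrent reader never sees a half-finished edit
    throw $e;
}
```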
Yes, and there will not be any problems deleting records while that automated program runs 24/7, provided you are using the InnoDB engine. InnoDB isolates concurrent transactions from each other, so the database stays consistent at all times.
This answer How to implement the ACID model for a database has many relevant points.
Read about the ACID properties of a database. A MySQL database with the InnoDB engine will take care of all these things for you, so you need not worry about them.

How quick is switching DBs with PHP + MySQL?

I'm wondering how slow it's going to be to switch between two databases on every call of every page of a site. The site has many different databases for different clients, along with a "global" database that is used for some general settings. I'm wondering whether much time would be added to the execution of each script if it has to connect to the database, select a DB, run a query or two, switch to another DB, and then complete the page generation. I could also repeat the data in each DB; I'd just need to maintain it (it will only change when upgrading).
So, in the end, how fast is mysql_select_db()?
Edit: Yes, I could connect to each DB separately, but since connecting is often the slowest part of any PHP script, I'd like to avoid doing it twice, especially since it happens on every page. (It's slow because PHP has to do some kind of address resolution, be it an IP or host name, and then MySQL has to check the login parameters both times.)
Assuming that both databases are on the same machine, you don't need mysql_select_db at all. You can just qualify the database in the queries. For example:
SELECT * FROM db1.table1;
You could also open two connections, keep the link that is returned from each connect call, select a database on each, and pass the appropriate link into all of the calls. The connection link is an optional parameter on all of the mysql_* calls; just check the docs.
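A minimal sketch of both options, using the legacy ext/mysql API the question refers to (hosts, credentials, and table names are placeholders):
```
<?php
// Option 1: one connection, qualify the schema in each query.
$link = mysql_connect('localhost', 'user', 'pass');
$result = mysql_query('SELECT * FROM db1.table1', $link);

// Option 2: two connections, pass the link explicitly to each call.
$global = mysql_connect('localhost', 'user', 'pass');
$client = mysql_connect('localhost', 'user', 'pass', true); // force a new link
mysql_select_db('global_db', $global);
mysql_select_db('client_db', $client);
$settings = mysql_query('SELECT * FROM settings', $global);
$rows     = mysql_query('SELECT * FROM clientdata', $client);
```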
You're asking two quite different questions.
Connecting to multiple database instances
Switching default database schemas.
MySQL is known to have quite fast connection setup time; making two mysql_connect() calls to different servers is barely more expensive than one.
The call mysql_select_db() is exactly the same as the USE statement and simply changes the default database schema for unqualified table references.
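That is, these two calls have the same effect (legacy ext/mysql API; the schema name is invented):
```
mysql_select_db('client_12345', $link);
mysql_query('USE client_12345', $link);
```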
Be careful with your use of the term 'database' around MySQL: it has two different meanings.
