How can I connect two tables from different databases using PostgreSQL? My first database is called "payments_details" and my second one is called "insurance". Also, I want to display and highlight the IDs they don't have in common using PHP. Is that possible?
Databases in PostgreSQL are isolated from each other; you cannot access data from different databases with one SQL statement. That is not a bug, but a design feature.
There are three ways to achieve what you want:
Don't put the data in different databases, but in different schemas in one database. It is a common mistake for people more experienced with MySQL to split data that belongs to one application across multiple databases and then try to join them. This is because the term database in MySQL is roughly equivalent to what standard SQL calls a schema.
If you cannot do the above, e.g. because the data really belong to different applications, you can use the PostgreSQL foreign data wrapper. This enables you to access tables from a different database (or even on a different machine) as if they were local tables. You'll have to write your statements more carefully, because complicated queries can sometimes be inefficient if large amounts of data have to be transferred between the databases.
You can use dblink, which is an older and less convenient interface than foreign data wrappers, but allows you to do things that you could not do otherwise, such as calling a remote function.
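A minimal sketch of the foreign-data-wrapper approach (option 2). The server options, user mapping, and the table/column names (`payments`, `policies`, `id`) are hypothetical; `postgres_fdw` ships with PostgreSQL 9.3 and later. The final query also addresses the original question: a FULL OUTER JOIN surfaces the IDs the two tables do not have in common, which the PHP layer can then highlight.

```sql
-- Run in the database that should see the remote data (e.g. payments_details).
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER insurance_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'insurance');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER insurance_srv
    OPTIONS (user 'app_user', password 'secret');

-- Make one remote table visible locally
-- (assumes a table public.policies in the insurance database):
CREATE FOREIGN TABLE policies (
    id     integer,
    holder text
) SERVER insurance_srv OPTIONS (schema_name 'public', table_name 'policies');

-- IDs the two tables do NOT have in common (anti-join in both directions):
SELECT p.id AS payment_id, pol.id AS policy_id
FROM payments p
FULL OUTER JOIN policies pol ON pol.id = p.id
WHERE p.id IS NULL OR pol.id IS NULL;
```

From PHP this is now a single ordinary query against one connection; rows where one side is NULL are the ones to highlight.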
Related
I have a database that contains a large number of tables that can be divided into multiple databases. The connection between the tables is, for example:
DB1: users (contains the field 'client_id')
DB2: customers (contains all the tables and relationships)
The two DBs are therefore connected via the 'client_id' field in the users table in DB1, and the 'id' field in the customers table in DB2.
Additionally, I also have a third DB that is connected in a similar way to the second DB.
Is this good practice? I have read that it can create performance problems, but keeping everything in a single DB doesn't seem ideal either.
Do you have any ideas or suggestions? Can this approach work?
In MySQL, databases (aka schemas) are just subdirectories under the datadir. Tables in different schemas on the same MySQL Server instance share the same resources (storage, RAM, and CPU). You can make relationships between tables in different schemas.
Assuming they are on the same MySQL Server instance, there is no performance implication with keeping tables together in one schema versus separating them into multiple schemas.
Using schemas is mostly personal preference. It can make certain tasks more convenient, such as granting privileges, backing up and restoring, or using replication filters. These have no direct effect on query performance.
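Since both schemas live on the same MySQL instance, the join the question describes is a plain qualified-name join; no special setup is needed. The column names beyond `client_id` and `id` are hypothetical:

```sql
-- Cross-schema join on the same MySQL Server instance:
SELECT u.id, u.client_id, c.name
FROM db1.users     AS u
JOIN db2.customers AS c ON c.id = u.client_id;

-- Privileges can still be granted per schema, e.g. for a reporting user:
GRANT SELECT ON db2.* TO 'reporting'@'%';
```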
DATABASE
I have a normalized Postgres 9.1 database and in it I have written some functions. One function in particular, "fn_SuperQuery"(param, param, ...), returns SETOF RECORD and should be thought of as a view (that accepts parameters). This function has lots of overhead because it actually creates several temporary tables while calculating its own results in order to gain performance with large data sets.
On a side note, I used to use WITH (cte's) exclusively for this query, but I needed the ability to add indexes on some columns for more efficient joins.
PHP
I use PHP strictly to connect to the database, run a query, and return the results as JSON. Each query starts with a connection string and then finishes with a call to pg_close.
FRONTEND
I am using jQuery's .ajax function to call the PHP file and accept the results.
My problem is this:
"fn_SuperQuery"(param,param, ...)" is actually the foundation for several other queries. There are some parts of this application that need to run several queries at once to generate all the necessary information for the end user. Many of these queries rely on the output of "fn_SuperQuery"(param,param, ...)" The overhead in running this query is pretty steep, and the fact that it would return the same data if given the same parameters makes me think that it's dumb to make the user wait for it to run twice.
What I want to do is return the results of "fn_SuperQuery"(param, param, ...) into a temporary table, then run the other queries that require its data, then discard the temporary table.
I understand that PostgreSQL ... requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. If I could get two PHP files to connect to the same database session then they should both be able to see the temporary table.
Any idea on how to do this? ... or maybe a different approach I have yet to consider?
It may be better to use normal tables; there will not be much difference. You can speed them up by using unlogged tables.
In 9.3, a materialized view would probably be the better choice.
Temporary tables are session-private. If you want to share across different sessions, use normal tables (probably unlogged).
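A sketch of the unlogged-table approach, assuming an illustrative column list for "fn_SuperQuery" (a function returning SETOF RECORD needs a column definition list at the call site):

```sql
-- One request materializes the expensive result:
CREATE UNLOGGED TABLE super_query_cache AS
SELECT * FROM "fn_SuperQuery"('param1', 'param2')
    AS t(id integer, total numeric);

-- Indexes for the follow-up joins:
CREATE INDEX ON super_query_cache (id);

-- Later requests (other sessions / other PHP scripts) simply read it:
SELECT id, total FROM super_query_cache WHERE total > 100;

-- Discard when the results are no longer needed:
DROP TABLE super_query_cache;

-- On 9.3+, a materialized view covers the same pattern with built-in refresh:
-- CREATE MATERIALIZED VIEW super_query_mv AS SELECT ...;
-- REFRESH MATERIALIZED VIEW super_query_mv;
```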
If you are worried about denormalization, the first thing I would look at doing is just storing these temporary normal tables ;-) in a separate schema. This allows you to keep the denormalized (and working-set) data separate for analysis and such, and avoids polluting the rest of your dataset with the denormalized tables.
Alternatively, you could look at other means short of denormalization. For example, if data isn't going to change after a while, you could periodically insert summary entries for the unchangeable data. This is not denormalization, since it allows you to purge old detail records down the line if you need to, while continuing to support certain forms of reporting.
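The periodic summary idea might look like this; the `orders` and `order_monthly_summary` tables and their columns are hypothetical:

```sql
-- Roll up detail rows that will no longer change into a summary table:
INSERT INTO order_monthly_summary (month, customer_id, order_total)
SELECT date_trunc('month', ordered_at), customer_id, sum(amount)
FROM orders
WHERE ordered_at < date_trunc('month', now())
GROUP BY 1, 2;
-- Old detail rows in "orders" can later be purged
-- without losing the monthly reports.
```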
In MySQL, I have two different databases -- let's call them A and B.
Database A resides on server server1, while database B resides on server server2.
Both servers are physically close to each other, but they are different machines with different connection parameters (different username, different password, etc.).
In such a case, is it possible to perform a join between a table that is in database A and a table that is in database B on the different servers?
If so, how do I go about it programmatically in PHP? (I am using PHP with MySQLDB to interact with each of the databases separately.)
The only way I can think of is opening 2 separate connections (i.e. instantiating 2 PDO objects) with the different parameters, using 2 queries to pull all the data you need into PHP, and then working with it in PHP.
You can make two separate MySQL connections in PHP, do two queries to the two tables, and then work with the results in PHP.
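A sketch of the two-connection approach. The DSNs, credentials, and table names are hypothetical (the real connection lines are shown commented out, since they need live servers); the cross-server "join" then happens in plain PHP:

```php
<?php
// Hypothetical connections, one per server:
// $a = new PDO('mysql:host=server1;dbname=A', $userA, $passA);
// $b = new PDO('mysql:host=server2;dbname=B', $userB, $passB);
// $idsA = $a->query('SELECT id FROM t1')->fetchAll(PDO::FETCH_COLUMN);
// $idsB = $b->query('SELECT id FROM t2')->fetchAll(PDO::FETCH_COLUMN);

// Combine the two result sets in PHP, e.g. find the IDs both sides share:
function commonIds(array $idsA, array $idsB): array {
    return array_values(array_intersect($idsA, $idsB));
}

// Example with inline data standing in for the two query results:
$shared = commonIds([1, 2, 3, 4], [3, 4, 5]);
```

The same pattern with `array_diff` gives the IDs only one side has.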
Another option, since the servers are physically close, is to set up one or both servers to replicate the needed database/tables to each other. You can look here for more on MySQL replication:
http://dev.mysql.com/doc/refman/5.6/en/replication.html
I have a MySQL database on my cPanel, this supports only one DB.
So I have to run two web sites on one hosting plan that supports only one DB; unfortunately, I also don't have a prefix for my DB. Can I merge my two databases into one database?
Max tables in a MySQL database
I searched this but cannot understand, I think this is not a problem I have.
I use php and MySQL.
You can have multiple ... things ... in your database, as the underlying platform doesn't really care that Site A only accesses tables W,X, and Y, whereas Site B only accesses tables Z and V.
To make such arrangement somewhat sane ("is this table part of Site A, or Site B, or what?"), it is common to prefix the tables inside the (single) database. Some platforms (e.g. the ModX CMS) have this built-in - you set the table prefix during install, and off you go - e.g. any table prefixed a_ would be a logical part of Site A.
If you already have two sites (and two databases), this can get somewhat icky, especially if the table names overlap - you'd need to go into the code for each site, and add the prefix for that site in every place the database is accessed in code. Again, some frameworks allow you to change this prefix.
What do you mean by "I also don't have prefix to my DB"?
Because if you can't have two databases, the easiest workaround would be to name the tables from each application with different prefixes, like app1name_table1, app1name_table2, app2name_table1, app2name_table2, and so on. Each application would access its own tables, and it doesn't matter that all the tables are in the same schema.
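If the two sites' tables already exist in separate databases, MySQL's RENAME TABLE can move them into the single database under a prefix (all names here are hypothetical):

```sql
-- Move site B's tables into the one allowed database, prefixed with b_:
RENAME TABLE siteb.users  TO maindb.b_users,
             siteb.orders TO maindb.b_orders;
-- Site B's code is then pointed at b_users, b_orders, etc.
```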
I wouldn't recommend doing something like that.
Just find a way to get more DBs with your hosting service...
If you can't, consider changing hosting services.
Is it preferred to create tables in mysql using a third party application (phpmyadmin, TOAD, etc...) instead of php?
The end result is the same, I was just wondering if one way is protocol.
No, there isn't a 'set-in-stone' program to manage your database and run queries against it.
However, I highly recommend MySQL Workbench.
It allows you to graphically design your database, query your database server, and do all kinds of administration tasks.
I'd say it is far easier to do so within an application created for that purpose. The database itself obviously doesn't care, as it's all just DDL to it. Using TOAD or phpMyAdmin will help you do the job quicker and allow you to catch syntax errors prior to execution, or use a wizard so you're not writing the DDL by hand in the first place.
Usually a software project provides one or more text files containing the DDL statements to create the necessary tables. What tool you use to execute those statements doesn't really matter. Some PHP projects also provide an installer wizard PHP file which can be executed directly in the browser, so you don't need any additional tools at all.
I'll try to only answer what your question is - "Is it preferred to create tables in mysql using a third party application (phpmyadmin, TOAD, etc...) instead of php?"...
Yes, it is preferred to create tables, alter them, delete them, or do any other DB-related activity that is outside the scope of the interfaces your application provides, using any of the many available MySQL clients. The reason is that these applications are designed to perform DB-related tasks and are best at doing them.
Though you may as well use PHP for creating tables, depending on the situation: for example, if the application uses dynamic tables, or needs "temporary" tables for performing complex jobs or storing intermediate results/calculations. Or perhaps if the application provides interfaces to manage or control certain aspects; suppose an application has various user roles, each with its own columns in a table, and gives the admin the right to delete roles or add new ones, which requires dropping or adding columns. It's best to run such queries from PHP.
So, putting it again: use a MySQL client for any DB work that is not related to or affected by the functionality or interfaces your PHP code provides.
Side note: though I've used phpMyAdmin, TOAD, Workbench and a few others, I think nothing is as efficient and quick as the MySQL command-line client itself, i.e. working directly at the MySQL prompt. If you've always used GUI clients, you might find working at the prompt unattractive initially, but it's real fun and keeps the syntax at your fingertips :-)
Your question might have been misunderstood by some people.
Charles Sprayberry was saying there's no best practice as far as which 3rd party MySQL client (i.e. phpmyadmin, TOAD, etc.) to use to edit your database. It comes down to personal preference.
Abhay was saying (and I really think this was the answer to your question) that typically your application does not do DDL (although exceptions exist). Rather, your application will usually perform DML commands only.
DML is Data Manipulation Language. For example:
select
insert
update
delete
DDL is Data Definition Language. For example:
create table
alter table
drop table
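To make the distinction concrete, a minimal sketch using a hypothetical users table:

```sql
-- DDL: defines or changes the structure itself
CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50));
ALTER TABLE users ADD COLUMN email VARCHAR(100);

-- DML: manipulates rows within that structure
INSERT INTO users (id, name) VALUES (1, 'alice');
UPDATE users SET name = 'bob' WHERE id = 1;
DELETE FROM users WHERE id = 1;

-- DDL again: removes the structure entirely
DROP TABLE users;
```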
Basic SQL statements: DDL and DML