How can I put two databases into one database? - php

I have a MySQL database on my cPanel hosting, which supports only one DB.
So I have to run 2 websites on one hosting account that supports only a single DB; unfortunately, I also don't have a prefix for my DB. Can I merge my two databases into one database?
Max tables in a MySQL database
I found this, but I can't make sense of it; I don't think it's the problem I have.
I use PHP and MySQL.

You can have multiple ... things ... in your database, as the underlying platform doesn't really care that Site A only accesses tables W, X, and Y, whereas Site B only accesses tables Z and V.
To make such an arrangement somewhat sane ("is this table part of Site A, or Site B, or what?"), it is common to prefix the tables inside the (single) database. Some platforms (e.g. the ModX CMS) have this built in - you set the table prefix during install, and off you go - e.g. any table prefixed a_ would be a logical part of Site A.
If you already have two sites (and two databases), this can get somewhat icky, especially if the table names overlap - you'd need to go into the code for each site, and add the prefix for that site in every place the database is accessed in code. Again, some frameworks allow you to change this prefix.
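For example, the shared database could end up looking something like this (the table and column names here are only illustrative, not from either site):
-- Site A's tables
CREATE TABLE a_users  (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE a_orders (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, user_id INT UNSIGNED NOT NULL);
-- Site B's tables live in the same database; the prefix avoids any name clash
CREATE TABLE b_users  (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE b_pages  (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, title VARCHAR(200));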

What do you mean by "I also don't have prefix to my DB"?
Because if you can't have two databases, the easiest workaround would be to name the tables from each application with different prefixes, like app1name_table1, app1name_table2, app2name_table1, app2name_table2, and so on... each application would access its own tables, and it doesn't matter that all the tables are in the same schema.

I wouldn't recommend doing something like that.
Just find a way to get more DBs with your hosting service...
if you can't: consider changing hosting service.

Related

PHP / MySQL - Compare tables from 2 different databases

I've got 2 frameworks (Laravel - web, CodeIgniter - API) and 2 different databases. I've built a function (on the API) which detects changes in one database (in 2 tables) and applies the changes to the other database.
Note: there is no way to run both the web and the API on the same database - so that's why I'm doing this.
Anyway, it is important that every little change is recognized. If it's a new record or a deleted record, it's simple and no problem at all. But if the record exists in both databases, I need to compare the values to detect changes, and this part becomes challenging.
I know how to do this in the slowest and heaviest way (pick each record and compare).
My question is - how do you suggest making this work in a smart and fast way?
Thanks a lot.
As long as the MySQL user has SELECT rights on both databases, you can qualify the database name in the query like so:
SELECT * FROM `db1`.`table1`;
SELECT * FROM `db2`.`table1`;
It doesn't matter which database was selected when you connected from PHP; the correct database will be used in the query.
The backticks are optional when the database/table name is purely alphanumeric and not an SQL keyword.
Depending on the response time of the 'slave' database, there are two options which don't increase the overhead too much:
If you can combine both databases into the same database by prefixing the tables of one or both, you can use FOREIGN KEYS to let the database do the tough work for you.
Use the TIMESTAMP-field which you can set to update itself by the DB whenever the row gets updated.
Option 1 would be my best guess, but that might mean a physical change to the running system, and if FOREIGN KEYS are new for you, you might wanna test since they can be a real PITA (IMHO).
Option 2 is easier to implement, but you still have to detect deleted rows manually - a sketch follows below.
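A minimal sketch of option 2, assuming a table called bookings and a sync job that remembers when it last ran (both are assumptions, not from the question):
-- let MySQL maintain a "last changed" timestamp on every row
ALTER TABLE bookings
  ADD COLUMN updated_at TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP
    ON UPDATE CURRENT_TIMESTAMP;
-- in the sync job, fetch only the rows touched since the previous run
SELECT *
FROM `db1`.`bookings`
WHERE updated_at > '2015-06-01 12:00:00';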

Database structure for a system with multisite - Database & PHP

The system I'm working on is structured as below. Note that I'm planning to use Joomla as the base.
a (www.a.com), b (www.b.com), c (www.c.com) are search portals which allow users to search for reservations.
x (www.x.com), y (www.y.com), z (www.z.com) are hotels where bookings are made by users.
www.a.com's users can only search for bookings which are in
www.x.com
www.b.com's users can only search for bookings which are in
www.x.com, www.y.com
www.c.com's users can search for all bookings which are in
www.x.com, www.y.com, www.z.com
All of a, b, c, x, y, z run the same system, but they should have separate domains. So according to my findings and research, the architecture should be as above, where an API integrates all database calls.
Only 6 instances are shown here (a, b, c, x, y, z); there can be up to 100, with different search combinations.
My problems:
Should I maintain a single database for the whole system? If so, how can I unplug one instance if required (e.g. removing www.a.com or www.z.com from the system)? Since I'm using MySQL, won't the number of records become cumbersome for the system?
If I maintain a separate database for each instance, how can I do the search? How can I integrate the required records into one place and search them?
Is there a different database approach to be used rather than the ones mentioned above?
The problem you describe is "multitenancy" - it's a fairly tricky problem to solve, but luckily, others have written up some useful approaches. (Though the link is to Microsoft, it applies to most SQL environments except in the details).
The trade-offs in your case are:
Does the data from your hotels fit into a single schema? Do their "vacancy" records have the same fields?
How many hotels will there be? 3 separate databases is kinda manageable; 30 is probably not; 300 is definitely not.
How large will the database grow? How many vacancy records?
How likely is it that the data structures will change over time? How likely is it that one hotel will need a change that the others don't?
By far the simplest to manage and develop against is the "single database" model, but only if the data is moderately homogenous in schema, and as long as you can query the data with reasonable performance. I'd not worry about putting a lot of records in MySQL - it scales very well.
In such a design, you'd map "portal" to "hotel" in a lookup table:
PortalHotelAccess
PortalID   HotelID
------------------
A          X
B          X
B          Y
C          X
C          Y
C          Z
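A minimal sketch of that lookup table and of a search built on it (the vacancies table and its columns are assumptions for illustration):
CREATE TABLE PortalHotelAccess (
  PortalID VARCHAR(10) NOT NULL,
  HotelID  VARCHAR(10) NOT NULL,
  PRIMARY KEY (PortalID, HotelID)
);
-- everything portal 'B' is allowed to search
SELECT v.*
FROM vacancies AS v
JOIN PortalHotelAccess AS pha ON pha.HotelID = v.HotelID
WHERE pha.PortalID = 'B';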
I can suggest 2 approaches. Which one to choose depends on some additional information about the whole system. In fact, the main question is whether your system can impersonate (substitute itself for, in the legal sense) any of the data providers (x, y, z, etc.) from the consumers' point of view (a, b, c, etc.) or not.
Centralized DB
The first one is actually based on your original scheme with a centralized API. It implies a single search engine that collects the required data from the data sources, aggregates it in its own DB, and provides it to the data consumers.
This is most likely the preferable solution if the data sources differ in their data representation, so that you need to preprocess it for uniformity. This variant also protects your clients from possible connectivity problems: if one of the source sites goes offline for a short period (I think this may even be up to several hours without greatly hurting the freshness of the booking data), you can still handle requests for the offline site and store all new documents in the central DB until the problem is solved. On the other hand, this means that you have to provide some sort of two-way synchronization between your DB and every data source site. The centralized DB should also be designed with reliability in mind from the start, so it probably should be distributed (preferably across different data centers).
As a result, this approach will probably give the best user experience, but it will require significant effort to implement robustly.
Multiple Local DBs
If every data provider runs its own DB, but all of them (including the backend APIs) are based on a single standard, you can eliminate the need to copy their data into a central DB. Of course, the central point should remain, but it will host only middle-layer logic, without a DB. That layer is actually an API which binds (x, y, z) with the appropriate (a, b, c) - that is, configuration, nothing more. Every consumer site will host a widget (it can be just JavaScript or a fully-fledged web application) loaded from your central point with the appropriate settings embedded into it.
The widget will query all the specified backends directly and aggregate their results into a single list.
This variant is much like how most of today's web applications work; it's simpler to implement, but it is more error-prone.

Should I use 1 database for posts, configuration, and authentication or split it up?

I'm designing a blog-like website system from the ground up, based on PHP and MySQL. It works according to this structure:
Everything has a unique ID, known as an entity ID or ENID.
A master table contains all ENIDs, so there are no duplicates.
There are four types of entities: posts, revisions, modules, and users
This is all so that id.php can be asked for any resource on the site and know what to do with it.
Posts are categorized to a module. For example, documents, messages, events, etc. all belong to a separate module.
Posts reference a specific row in a revisions table to be displayed to the user.
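Roughly, the core of that structure might look like this in SQL (the column names are just illustrative):
CREATE TABLE entities (
  enid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  entity_type ENUM('post', 'revision', 'module', 'user') NOT NULL
);
CREATE TABLE posts (
  enid INT UNSIGNED NOT NULL PRIMARY KEY,
  module_enid INT UNSIGNED NOT NULL,
  current_revision_enid INT UNSIGNED NOT NULL,
  FOREIGN KEY (enid) REFERENCES entities (enid),
  FOREIGN KEY (module_enid) REFERENCES entities (enid),
  FOREIGN KEY (current_revision_enid) REFERENCES entities (enid)
);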
I'm wondering, would it be best to split the four entities and the master table up across separate databases, or would it be best to keep them all in one? Security is a TOP priority.
I don't see how splitting those things into separate databases could increase security by itself. It will only complicate your application code unnecessarily.
Store them in the same database and focus your security efforts on other areas: firewalls, SQL sanitizing, etc.
Keeping authentication information in the same database is absolutely acceptable. Most people do it this way. Just make sure you don't store passwords in plain text (you should store a salted hash instead).
I personally think one DB would be enough.
Also, I don't think using more than one DB would increase security in any way, but I could be wrong.
One database should be fine. I usually set things up so that each application/service uses one database, with different tables in it for the various pieces of information that I want to store.
I think you don't need to store your information in different databases. All of it belongs to one system. Working with different databases will burden you with many extra tasks, and you will have to take care of many databases instead of one.
You'd be better off having just one database in this case and focusing on its security.
By the way, don't forget that you will need relations between key columns for various reasons. At the very least, working with different databases will force you to do more work than when you have just one database with different tables.

Creating Tables at runtime vs Creating Databases at runtime

I am building a customer sales and invoicing app for a company. The app is in PHP/MySQL, but I guess that shouldn't matter much.
The app structure is as follows:
website files: .php, .htm, images and CSS
database: containing 20+ tables
The app is currently being used by the company and 2 other sister concerns (beta testing mostly). Since the user base is small, I manually copy the website files and the database to set the app up for a new company.
I am looking for a way to make the app more 'scalable' without having to do the 'scaling' manually (meaning I don't want to manage three different filesets and DBs by hand).
Since the code is company-neutral and the databases contain the company info, I will only have to recreate the database when a user requests a new company to be set up. There are multiple ways I can create the database for a new company:
At runtime I can create a new database with the 20+ tables using CREATE DATABASE
At runtime I can create an additional 20+ tables with the company name as a prefix on the table names, using CREATE TABLE
I can add a company column to all of my tables and then continue adding info as before.
The new-database method appeals to me because backup and maintenance would be easy, and it would probably be a bit more secure, since a hacker would only be able to access the details of one company (probably...). This option won't work on shared hosting with a limit on the number of databases.
The second option would mean I can create everything in one database. But this option is a bit more 'shared'.
I wouldn't go for the third option due to table-level locking issues in MySQL (I am not using InnoDB for all my tables).
So my choices are between options 1 and 2. Developers who've managed financial apps, please advise: once the beta testing phase is over, the user base will increase, and I don't wish to manually change the same thing in 10 databases and filesets. What would be the best thing to do?
From the security point of view, customers should have separate databases, each accessed through its own restricted MySQL user.
That user should only have the permissions needed by the application (often SELECT, INSERT, DELETE and UPDATE), and not administrative permissions (DROP, CREATE, GRANT, ...). This way, you have a clear overview of databases and tables.
When you need to alter a table structure, you just execute the (thoroughly tested) SQL query on each database.
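A rough sketch of provisioning one customer that way (the database name, user name, and password are placeholders):
-- one database per customer, accessed through its own restricted user
CREATE DATABASE acme_invoicing;
CREATE USER 'acme_app'@'localhost' IDENTIFIED BY 'change-me';
GRANT SELECT, INSERT, UPDATE, DELETE ON acme_invoicing.* TO 'acme_app'@'localhost';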
CSS, images and other static content could be put in a subdomain, or an Alias (Apache).
Libraries and neutral classes should be put in one directory too, using include_path to include such files, so you have only one fileset that needs to be changed.

Prefixing MySQL Tables or Many MySQL databases?

So, first things first, I'm a student. I'm developing an application where other students can have access to a MySQL database. Basically, I want to spare the students the need to search for hosting or even install MySQL on their own computers. Another plus is that they can present their work to the class just by browsing a website. So my idea was to use the same database for everyone and add a login system for the students. That way, I can associate a prefix with every student, and they can execute any type of query without worrying about clashing with someone else's tables, because the system would automatically prefix the tables in their queries. My idea was to limit how many tables and rows each user can have, which shouldn't be hard with a parser. It doesn't necessarily need to be a parser in PHP; it could be in Perl or Python - PHP is just more convenient. .NET would be more troublesome because of Windows.
By the way, each class of "introduction to database systems" has around 50 students and there are 3 classes, so it could reach about 150 students...
For example, SELECT * FROM employees
has to become
SELECT * FROM prefix_employees
I do not know what the queries will look like; they could get fairly complex, so I'd probably need a well-written parser, which I haven't found yet for PHP.
Thanks guys, I hope I have made myself clear
Unfortunately, MySQL does not (AFAIK) have schemas the way some other databases (e.g. PostgreSQL) have them (for separating content - tables, etc. - logically within one database).
But I would definitely go for the separate-databases scenario.
Your parser (with the 'prefixing scheme') will be broken (unwillingly and also possibly willingly) unless you are willing to put an extreme amount of time into making it work.
I'd rather go with the "one database per user" approach. This solution requires some administration (you can either create the users/databases manually using a tool like phpMyAdmin, or simply create your own little administration panel in which you let the students register), but it will require far less work from you than filtering all the requests.
This way, each student has his own login/password, preferably with a database of the same name on which he has all rights (this can be done automatically with phpMyAdmin), and is able to work without interfering with other students. You can be sure that some will try to break your security, no matter how hard you try and how well-intentioned you are. Putting them in separate databases leaves them no choice but to try to gain admin access to your DB, which will be pretty hard if you maintain an up-to-date server and complex enough passwords (and don't store them in clear text in a "readable by all" .txt file on your university server).
Plus, you will be able to monitor the disk space, usage, etc. of each database individually, which is easier than having to look at tables separately.
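A sketch of what creating one such student account could look like (the student name and password are placeholders):
-- one database per student, with all rights on it and nothing else
CREATE DATABASE student_jdoe;
CREATE USER 'jdoe'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON student_jdoe.* TO 'jdoe'@'%';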
Depending on your exact requirements, you may be able to use table permissions to prevent one student from modifying (or viewing) data from another student. You would still need a process to allow students to create a new table with their assigned prefix (and create an appropriate permissions entry), but once created, the DB would control access through all queries so you would not have to (just don't allow student accounts to directly create/alter tables).
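For example (the database, table, and account names are placeholders), such a per-table grant could look like:
GRANT SELECT, INSERT, UPDATE, DELETE ON classdb.jdoe_employees TO 'jdoe'@'%';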
As for quotas, I'm not aware of MySQL directly supporting a quota system, but you could put the files that back each user's tables in a separate directory and use OS-level quotas to limit disk space usage.
