How could this be done within PHP, without reloading or adding an entirely new connection?
With MySQL, one connection can give you access to multiple databases, where you could call:
$this->db = my_database_connection;
$this->db->database_one->query();
$this->db->database_two->query();
and when those run, it knows which database it needs to use, without creating a new DB connection.
Is this even possible?
I have an application that needs to run queries against multiple databases (under a heavy traffic load), but I don't want it to use two connections per user; that seems inefficient. The databases share the same connection credentials (host, user, password), but there are many databases on that connection.
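For what it's worth, the pattern the question describes maps onto a single connection with database-qualified table names, assuming the databases live on the same MySQL server. A minimal sketch using PDO (host, credentials, and table names are placeholders):

```php
<?php
// One PDO connection to the server; no default database is selected.
$db = new PDO('mysql:host=localhost;charset=utf8mb4', 'user', 'pass');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Qualify each table with its database name; MySQL resolves both
// over the same connection, so no second connection is opened.
$one = $db->query('SELECT * FROM database_one.users')->fetchAll();
$two = $db->query('SELECT * FROM database_two.orders')->fetchAll();
```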
This issue was raised using Laravel 5.0.
My project's database setup consists of one write node and multiple read replicas (PostgreSQL). Every time a connection is initiated for any query, e.g.:
```php
<?php $user = \App\User::find(1); ?>
```
... a connection is made to the write node. This happens even when no write queries are run (not even set names 'utf8', etc.); the write connection is still set up, while all the SELECT queries run correctly on the read replicas.
How can I avoid this write connection if I don't need/use it for read-only requests?
There are two classes maintaining the DB connections: Illuminate/Database/DatabaseManager and Illuminate/Database/Connectors/ConnectionFactory.
When any Laravel class wants to use a DB connection, it calls DatabaseManager::connection(), which eventually requests an actual connection via ConnectionFactory::make().
Your issue lies there: the make() process creates both the 'read' and 'write' connections at the same time, so the 'write' connection is always established. That is the behavior of ConnectionFactory.
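For reference, a read/write split is configured in Laravel roughly like this (hosts and credentials below are placeholders); it is this single config entry that ConnectionFactory::make() resolves into both a 'read' and a 'write' connection:

```php
// config/database.php, inside the 'connections' array (sketch only)
'pgsql' => [
    'driver'   => 'pgsql',
    'read'     => ['host' => 'replica.example.com'],   // read replica
    'write'    => ['host' => 'primary.example.com'],   // write node
    'database' => 'myapp',
    'username' => 'dbuser',
    'password' => 'secret',
    'charset'  => 'utf8',
],
```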
So the best way forward is to open an issue with Laravel, asking the dev team whether they would like to establish the connection only when it is actually needed.
Edited:
I just found that you already opened an issue :-)
https://github.com/laravel/framework/issues/10337
I have pages on my site that use the database and pages that don't.
Whenever I need the database, I connect using $conn = connect(). But this means I need to put that call everywhere it's needed. If I put it in an include file and include that file in every page, it would connect even when the database is not needed. Would this be a good idea? Would creating a connection cause problems or other issues when it is not needed, or should I connect only when needed?
Connecting to a database when you don't need to introduces a small amount of overhead that is avoidable. If you need your pages to run as fast as possible, then you could optimize by avoiding the unnecessary db connection.
How much overhead this represents as a proportion of your total PHP execution time varies. For instance, if your PHP script is simple and quick, then the DB connection is proportionally a larger percentage of the total time wasted; if your PHP script does a lot of other things, then the DB connection is a smaller percentage of the total time.
Also, the speed of a DB connection itself can vary, depending on the speed of your server, whether MySQL is configured to do DNS lookups when clients connect, etc.
When I worked on the Zend Framework, we implemented "lazy" connections. You can create an instance of a Zend_Db_Adapter object anytime you want, but that class doesn't connect in its constructor. It connects to the database when you run your first query (or when you explicitly call the getConnection() method).
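The same idea is easy to reproduce by hand. A minimal sketch of a lazy wrapper around PDO (not Zend's actual implementation; the class name and DSN are made up for illustration):

```php
<?php
// Lazy-connection wrapper: no PDO object is created until the first
// query (or an explicit getConnection() call) actually needs one.
class LazyDb
{
    private $pdo = null;
    private $dsn;
    private $user;
    private $pass;

    public function __construct($dsn, $user, $pass)
    {
        // Cheap: only stores the credentials, no connection yet.
        $this->dsn  = $dsn;
        $this->user = $user;
        $this->pass = $pass;
    }

    public function getConnection()
    {
        if ($this->pdo === null) {   // connect only on first use
            $this->pdo = new PDO($this->dsn, $this->user, $this->pass);
            $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
        return $this->pdo;
    }

    public function query($sql)
    {
        return $this->getConnection()->query($sql)->fetchAll();
    }
}

// Constructing the wrapper costs nothing; the real connect happens here:
$db   = new LazyDb('mysql:host=localhost;dbname=app', 'user', 'pass');
$rows = $db->query('SELECT 1');
```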
Another consideration is how soon you disconnect from the database once you're done running queries.
Suppose you handle 1000 PHP requests per second (one per millisecond on average), and each of your PHP requests lasts 100ms. So at any given instant, you may have 100 PHP requests in progress on average. If the first thing your PHP code does is connect to the db, and the last thing it does is disconnect from the db and other resources (by automatic request cleanup), then you may also have 100 db connections active at any time.
But if you delay connecting to the db, and disconnect promptly when you are done querying the db, and avoid connecting altogether on some requests, then on average you will have a much lower number of concurrent db sessions.
This can help reduce resource use on the db server, allowing more throughput and a higher number of PHP requests to complete per second.
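As a rough illustration of the connect-late/release-early idea (PDO with a placeholder DSN and placeholder credentials):

```php
<?php
// ... do all the work that doesn't need the database first ...

// Connect as late as possible.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$rows = $pdo->query('SELECT id, name FROM users LIMIT 10')->fetchAll();

// Drop the reference so the connection is closed promptly, instead of
// being held until the end of the request.
$pdo = null;

// ... render the page, call external APIs, etc., without a DB session ...
```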
I am currently building a web application that will be using two databases at the same time. The user first creates a general account and afterwards uses it to create multiple characters on multiple 'servers'. (think of it as creating an account on Facebook and then using it to sign up on other websites).
My current database setup is the following:
main_english
main_russian
server1
server2
server3
The main_english and main_russian databases contain the session and general account data for users who've registered at the appropriate languages.
The server1 and server2 databases handle the created character data of the English users, while server3 handles the Russian ones.
Users will be updating tables every 5-10 seconds or so. My question is: how can I efficiently handle working with several databases?
So far I've come up with several options:
1. Open two DB connections, one for main_ and another for server, but I've read that opening them is fairly expensive.
2. Modify all my queries to explicitly state which DB to update/select data from (the project is still young, so it wouldn't be that painful).
3. Modify my $db class to issue select_db() statements with each query, which, in my opinion, is both messy and inefficient.
What could you suggest? Perhaps I am overreacting about opening a second connection for the server queries?
MySQL is known to have fast connection handling, so that's unlikely to be a bottleneck. That being said, you could look into persistent connections in the PDO or mysqli extensions.
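For reference, persistent connections are requested like this in both extensions (host and credentials are placeholders):

```php
<?php
// PDO: pass PDO::ATTR_PERSISTENT in the driver options array.
$pdo = new PDO('mysql:host=localhost;dbname=main_english', 'user', 'pass', [
    PDO::ATTR_PERSISTENT => true,
]);

// mysqli: prefix the host with "p:" to request a persistent connection.
$mysqli = new mysqli('p:localhost', 'user', 'pass', 'main_english');
```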
In terms of the server access and data modeling, it's difficult to say without knowing more about your application. How do DB slaves factor in? Is replication lag OK?
I am using multiple databases with CodeIgniter's Active Record class. I had to disable persistent connections to MySQL, because the Active Record class can't make persistent connections when using multiple databases.
Looking at the load on my database, it seems most of its queries are "change database" calls, and I am not sure that's a good sign.
How can I optimize this without having to call "change database" all the time?
It's not as user friendly as most of the Active Record commands, but you can call the SQL directly like this:
$query = $this->db->query("SELECT * FROM database_a.some_table");
$query2 = $this->db->query("SELECT * FROM database_b.another_table");
Are you using queries that reference both databases? If not, it's not too difficult to load a new DB instance for the second database. I.e. you'd still use $this->db for the first, but you could have $this->db2 for the second. I honestly have no idea if that would trigger the "change database" command you're talking about, but it would be MUCH more sustainable code, at least. CI could keep its connections to each database open for the duration of the script (not a persistent connection), and it seems your problem would be fixed.
I've never needed multiple MySQL databases in a single app, so this is entirely a guess based on what I've seen with, say, one MySQL DB and another being an SQLite DB.
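If you go the second-instance route, CodeIgniter can return a separate connection object for another config group. A sketch, assuming a 'database_b' group is defined in config/database.php:

```php
<?php
// $this->db stays bound to the default group; passing TRUE as the second
// argument makes load->database() return the new connection object.
$db2 = $this->load->database('database_b', TRUE);

$query  = $this->db->get('some_table');    // default database
$query2 = $db2->get('another_table');      // second database
```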
I'm wondering how slow it's going to be switching between two databases on every call of every page of a site. The site has many different databases for different clients, along with a "global" database that is used for some general settings. I'm wondering if much time would be added to the execution of each script if it has to connect to the database, select a DB, do a query or two, switch to another DB and then complete the page generation. I could also repeat the data in each DB; I'd just need to maintain it (it will only change when upgrading).
So, in the end, how fast is mysql_select_db()?
Edit: Yes, I could connect to each DB separately, but as this is often the slowest part of any PHP script, I'd like to avoid this, especially since it's on every page. (It's slow because PHP has to do some kind of address resolution (be it an IP or host name) and then MySQL has to check the login parameters both times.)
Assuming that both databases are on the same MySQL server, you don't need to call mysql_select_db() at all. You can just qualify the database name in the queries. For example:
SELECT * FROM db1.table1;
You could also open two connections, keep the link identifier returned from each connect call, and pass the appropriate link into each query. The link is an optional parameter on all of the mysql_* calls; just check the docs.
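A sketch of that with the old mysql_* API the question is using (these functions were removed in PHP 7, but they match the question); the fourth argument to mysql_connect() forces a second, separate link:

```php
<?php
// Two separate links to the same host; credentials are placeholders.
$link1 = mysql_connect('localhost', 'user', 'pass');
$link2 = mysql_connect('localhost', 'user', 'pass', true); // new_link = true

mysql_select_db('db1', $link1);
mysql_select_db('db2', $link2);

// Each query runs against whichever link you pass explicitly.
$res1 = mysql_query('SELECT * FROM table1', $link1);
$res2 = mysql_query('SELECT * FROM table2', $link2);
```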
You're asking two quite different questions.
1. Connecting to multiple database instances.
2. Switching default database schemas.
MySQL is known to have quite fast connection setup time; making two mysql_connect() calls to different servers is barely more expensive than one.
The call mysql_select_db() is exactly the same as the USE statement and simply changes the default database schema for unqualified table references.
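In other words, these two calls have the same effect on an existing link:

```php
<?php
$link = mysql_connect('localhost', 'user', 'pass');  // placeholder credentials

mysql_select_db('other_db', $link);        // the client-side helper
mysql_query('USE other_db', $link);        // the equivalent SQL statement
```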
Be careful with your use of the term 'database' around MySQL: it has two different meanings.