Connecting to MySQL from PHP is very common. The most common method looks like this:
$sqlcon = mysql_connect("localhost", "user", "pw");
mysql_select_db('database');
$sqlcomm = mysql_query("SELECT id FROM db");
while ($row = mysql_fetch_row($sqlcomm))
{
    // do something
}
mysql_close($sqlcon);
I think this is the fastest, most direct way to connect to MySQL. But in a project there will be many MySQL connections, and we would have to repeat the mysql_connect("localhost","user","pw") code in every PHP script. So you will likely build a MySQL class or function in one file to handle the connection:
function connect($query)
{
    $sqlcon = mysql_connect("localhost", "user", "pw");
    mysql_select_db('database');
    $sqlcomm = mysql_query($query);
    $data = array();
    while ($row = mysql_fetch_row($sqlcomm))
    {
        $data[] = $row; // do something with each row
    }
    mysql_close($sqlcon);
    return $data; // return the fetched rows so callers like $data = connect(...) work
}
and then include it into your project using include() wherever you need a connection:
include('connect.php');
$data = connect('SELECT id from db');
OK, written this way the code looks better. But the include() function makes PHP read and execute another PHP script file - another I/O operation on the hard disk - so it also slows down performance.
If the page gets 100 PV/s, PHP reads and executes one PHP script 100 times per second with the first method, but reads and executes PHP scripts 200 times per second with this method!
This is a simple example with only one query; imagine a busy, multi-query environment.
Does anyone have a better way to make MySQL connections easier and more efficient?
You don't really need to open that many connections. You just open 1 connection at the start of your script (before <body> gets generated, let's say), and then close it at the end of your script (after </body> is generated, let's say). That leaves you with only 1 connection. In between, you can execute as many queries as you need.
Have you looked at using PDO? It does connection pooling and whatnot, and it's not limited to MySQL...
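For instance, a minimal PDO sketch (host, credentials, and database name are placeholders); PDO::ATTR_PERSISTENT keeps the underlying connection alive between requests, which is as close to pooling as plain PHP gets:
// one PDO connection for the whole request
$pdo = new PDO('mysql:host=localhost;dbname=database', 'user', 'pw', array(
    PDO::ATTR_PERSISTENT => true,                   // reuse the connection across requests
    PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION, // throw exceptions on errors
));
// run as many queries as you need on it
foreach ($pdo->query('SELECT id FROM db') as $row) {
    // do something with $row['id']
}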
Have a look at Dibi.
You use a class that opens a MySQL connection (username / password / db is inherited from some sort of configuration file) and when you query the db - it establishes a connection.
That leads you on to using a framework that uses certain programming paradigms and so forth.
Also, you shouldn't worry about a performance decrease because you're including a file. That should be the least of your worries. The OS is doing many I/O operations, not just with the hard disk; your one file include won't be noticeable.
If you're asking whether there's a more efficient way of connecting to a MySQL db without using mysql_, mysqli_, ODBC or PDO - no, there isn't.
The performance loss would be insignificant. You should be more concerned with a correct approach to the structure of your code than with performance.
You can move your host/user/password into constants in a separate file and include it wherever you need it. Moreover, you can use some design patterns for the database object, like Singleton or Factory; they will give your system more flexibility.
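For instance, a rough Singleton-style sketch (the constants, class name, and method name are made up for illustration):
// config.php - hypothetical connection constants
define('DB_HOST', 'localhost');
define('DB_USER', 'user');
define('DB_PASS', 'pw');
define('DB_NAME', 'database');

// db.php - one shared connection for the whole request
class DB
{
    private static $link = null;

    public static function getLink()
    {
        if (self::$link === null) {
            self::$link = mysql_connect(DB_HOST, DB_USER, DB_PASS);
            mysql_select_db(DB_NAME, self::$link);
        }
        return self::$link;
    }
}

// usage: every query reuses the same connection
$result = mysql_query('SELECT id FROM db', DB::getLink());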
But in project, there are too many MySQL connections, we should type Username and Password code each time
There are lots of things wrong with this statement - even if you don't count the grammar.
If you mean that you have multiple servers with different datasets on them, then you should definitely consider consolidating them or using the federated engine to provide a single point of access.
Opening a new connection and closing it each time you run a query is very inefficient if you need to execute more than one query per script.
Really, you need to spend a lot of time thinking about why you need multiple database connections and eliminate them, but in the meantime, bearing in mind that connections are closed automatically when a script finishes.....
class connection_pool {
    var $conxns = array(
        'db1.example.com' =>
            array('host' => 'db1.example.com', 'user' => 'fred', 'password' => 'secret'),
        'db2.example.com' =>
            array('host' => 'db2.example.com', 'user' => 'admin', 'password' => 'xxx4'),
        // ...
    );

    function get_handle($db)
    {
        if (!array_key_exists($db, $this->conxns)) {
            return false;
        }
        // connect lazily and cache the handle so each database is opened only once
        if (empty($this->conxns[$db]['handle']) || !is_resource($this->conxns[$db]['handle'])) {
            $this->conxns[$db]['handle'] = mysql_connect(
                $this->conxns[$db]['host'],
                $this->conxns[$db]['user'],
                $this->conxns[$db]['password']
            );
        }
        return $this->conxns[$db]['handle'];
    }
}
(NB never use 'USE database' if you have multiple databases on a single mysql instance - always explicitly state the database name in queries)
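A hypothetical usage of the class above (the pool key matches the array above; the database and table in the query are made up):
$pool = new connection_pool();
$link = $pool->get_handle('db1.example.com');
if ($link) {
    // database name stated explicitly in the query rather than via 'USE database'
    $result = mysql_query('SELECT id FROM mydb.users', $link);
}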
Related
I want to hear what others think about this. Currently, I make a MySQL database connection inside a header-type file that is included at the top of every page of my site. I can then run as many queries as I want on that one open connection. If the page is built from 6 included files and there are 15 different MySQL queries, they all run on this one connection.
Now, sometimes I see classes that make multiple connections, like one for each query.
Is there any benefit to using one method over the other? I think one connection is better than multiple, but I could be wrong.
Creating connections can be expensive (I don't have a reference for this statement as yet. Edit: Aha! Here it is), so it seems the consensus is to use fewer connections. Using a single connection for all queries on a single page seems to be a better choice than multiple connections.
In PHP+MySQL there is usually not much sense in using multiple connections per page (it is just slower and consumes a little more RAM).
The only case where it might be useful is when you alter connection parameters that might interfere with other pages (like collation). But good PHP programs usually never do that kind of thing.
Also, it is a good idea to enable persistent connections, so that one MySQL connection is reused across multiple page executions.
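With the old mysql extension, that just means swapping mysql_connect() for mysql_pconnect(); the credentials here are the same placeholders used above:
// mysql_pconnect() reuses an existing persistent connection when one is available
$link = mysql_pconnect("localhost", "user", "pw");
mysql_select_db('database', $link);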
It really depends on the level of activity you expect the site to generate - if it's a high-traffic web site, you'll soon run out of connections (unless you raise MySQL's max connections to a stupidly high level, but that will eventually grind the server to a halt).
I'd generally recommend that the front end of a web site use a shared database object (Singleton is your friend), as it doesn't require a great deal of discipline to write with this in mind and you won't waste time making connections. If you require additional concurrent queries on the back end, it shouldn't be that big a deal, as that isn't likely to be a highly trafficked area.
It's not recommended to execute multiple small queries where the work can be done with just one query; you can use a single query to get data from multiple tables and even multiple databases. See the link below:
http://www.x-developer.com/php-scripts/sql-connecting-multiple-databases-in-a-single-query
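For example, if two databases live on the same MySQL server, a single query can join across them by qualifying the table names (the database, table, and column names below are made up for illustration):
// one connection, one query, two databases on the same server
$result = mysql_query(
    "SELECT u.id, o.total
     FROM users_db.users AS u
     JOIN shop_db.orders AS o ON o.user_id = u.id"
);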
I don't see any benefit in using multiple connections; I'd rather think it is a sign of bad structure. These are the reasons I can think of against using multiple connections:
You have to initialize the database multiple times. Setting connection properties upon connection establishment (like SET NAMES utf8) would have to be done multiple times.
It is definitely slower than a single connection.
A non-technical reason: someone working with your code will most probably not expect it and might spend hours debugging the connection properties they set in another connection.
Having a global connection object (or a class providing one) is the much better approach in PHP.
Are you sure the classes that make multiple connections aren't just returning a reference to the already open connection when one is open? I've seen a lot of stuff structured that way. It really is better performance-wise to use only one connection per page.
As far as I know, the only way to get mysqli error info is:
$con=mysqli_connect("localhost","my_user","my_password","my_db");
///RUN QUERY HERE, and if error:
$error=mysqli_error($con);
However (thinking I was smart), my whole software is programmed with a function that gets the connection info, as there are multiple databases and tables. So my queries look more like this:
$query = mysqli_query(CON("USER_DATABASE"), "INSERT INTO users (column) VALUES ('VALUE')");
run_query($query); // In this function I either run the query
                   // or record a failed query. However, I cannot record the specific error.
Is there any way at all to somehow still capture mysqli_error() without having $con available for the mysqli_error() call? Perhaps with the $query variable?
However (thinking I was smart), my whole software is programmed with a function that gets the connection info, as there are multiple databases and tables. So my queries look more like this.
Don't do that. Not only does it make it more difficult to retrieve error information, but -- more importantly! -- it means that your application will make a completely new database connection for every query. This will make your application significantly slower.
If your application really needs to connect to multiple databases, consider passing a symbolic name for the database as a parameter, and caching connections to all databases that have been used in a static variable.
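A rough sketch of that idea, assuming a hypothetical configuration array and reusing the CON() name from the question (hosts, credentials, and database names are placeholders):
function CON($name)
{
    static $cache = array();   // one cached mysqli connection per symbolic name

    if (!isset($cache[$name])) {
        // hypothetical per-database settings; replace with your own configuration
        $config = array(
            'USER_DATABASE'  => array('localhost', 'user', 'pw', 'users_db'),
            'OTHER_DATABASE' => array('localhost', 'user', 'pw', 'other_db'),
        );
        list($host, $user, $pass, $db) = $config[$name];
        $cache[$name] = mysqli_connect($host, $user, $pass, $db);
    }
    return $cache[$name];
}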
as there are multiple databases and tables.
The number of tables doesn't matter.
As for the databases, your application should have one connection variable per database (however, I would rather say that you are doing something wrong if you need multiple databases in a simple application).
Either way, there is a way to get the mysqli error without a connection variable. To make that possible, you should tell mysqli to start throwing exceptions. Just add this line:
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
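With that report mode enabled, a failed query throws a mysqli_sql_exception, so the error can be recorded without ever touching $con. A minimal sketch (reusing the CON() call from the question):
try {
    mysqli_query(CON("USER_DATABASE"), "INSERT INTO users (`column`) VALUES ('VALUE')");
} catch (mysqli_sql_exception $e) {
    // the exception carries the specific error message; no connection handle needed
    error_log($e->getMessage());
}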
So the simplest solution for you would be to make the con() function use a static cache and return the same connection instance every time it is called.
But you should reconsider the architecture and organize it better, using just one database unless it's absolutely necessary, and also use OOP to get rid of static functions.
Note: I used Google Translator to write this
I've always done the following to work with MySQL:
-> Open Connection to the database.
-> see details
-> Insert Data
-> another query
-> close Connection
I usually use the same connection to do various things before closing.
A friend who studies this at the IPN in Mexico mentioned to me that the right way (for safety) is to make a new connection for each query, for example:
-> Open Connection to the database.
-> see details
-> close Connection
-> Open Connection to the database.
-> Insert Data
-> close Connection
-> Open Connection to the database.
-> another query
-> close Connection
My question is: what is the right thing to do? My approach has been to make as few queries to the database as possible, and to open one connection and keep it until I no longer need it.
Additionally, is it possible to make a double insertion to a table? For example:
insert into table1(relacion) values([insert into tablaRelacionada(id) values("dato")]);
and that "relacion" is the inserted ID from the first query in "tablaRelacionada".
No, it's not possible to insert rows into two different tables with a single INSERT statement. (You can use a trigger to get it done, but that trigger will need to issue a separate INSERT statement... from the client side it will look like one statement, but on the server there would be two INSERT statements executed.)
If performance and scalability aren't concerns, then "churning" connections is workable. There's nothing necessarily "wrong" with creating a separate connection for each statement, but it's resource intensive. There is a lot of overhead in creating a new session. (It looks rather simple from the client side, but it requires a lot of work on the server side, in addition to the codepath on the client.)
Reusing existing connections is a common pattern. It's one of the biggest benefits of implementing a "connection pool": making it easy to reuse connections without "churning", i.e. repeatedly connecting and disconnecting from the database.
In terms of a separate connection for each SQL statement somehow increasing "safety", that's a bit of a stretch.
But I can see some benefit of having a freshly initialized session.
For example, if you reuse an existing session, you may not know what changes have been made in the session state. Any changes made previously are still "in effect". This would be things like session variable settings (e.g. timezone, characterset, autocommit, user defined variables) which could have an impact on the current statement. But within a single script, where you've gotten a fresh connection, you should know what changes have been made, so that shouldn't really be an issue. (This would be more of an issue with using connections from a pool, where the connections are shared by multiple processes. One process mucking with the timezone or characterset could cause a slew of problems for other processes that reuse the connection.)
Using a separate connection per query is at best a great way to bog down both your application and database servers with needless overhead. There are three aspects I see raised here:
Efficiency
Application Security
Network Security
1. Efficiency
Short answer: Bad idea.
Oftentimes the overhead required to initialize the connection is far more than what is required to run the actual query. Your application is probably going to run orders of magnitude slower if you take a connection-per-query approach.
2. Application Security
Short answer: Generally a bad idea, but in the context of PHP completely unnecessary.
The only 'safety' issue I can think of here would be worrying about users accessing leftover temp tables, or session settings "bleeding" over. This is unlikely to happen unless you're using persistent connections, which are not the default. As well, most temporary values in MySQL are stored per-connection, and unless you have some PHP code that is written poorly [in a particular, strange, and seldom-recommended way, i.e. sharing around DB singletons and accessing them strangely], then maybe, if the planets align just right, you might access some MySQL session-specific data in an unexpected way.
This is pretty much the same as premature optimization, and is not worth worrying about.
3. Network Security
Short answer: No. What? Just... no.
If you're worried about someone peeping in on your connections, the solution is not to make more of them, it's to make them securely. MySQL supports SSL, so use that if you're worried.
TL;DR No. Don't create separate connections per-query. Bad. Whoever told you this needs to go back to school.
Multi-Table Insert
What you've quoted is not possible, you would want to do something along the lines of the following:
$dbh->query("INSERT tablaRelacionada(id) values('dato')");
$lastid = $dbh->lastInsertId();
$dbh->query("INSERT INTO table1(relacion) values($lastid);");
Assuming that the table tablaRelacionada has an AUTO_INCREMENT column which is what you're trying to get from the first query.
See: lastInsertId()
It's a PHP application using mysqli.
Someone else suggested closing the database connection right after each query.
The current system has a singleton database connection, so over-creating new connections is not the issue here - only unused open connections are. (Say, the script has not finished executing and the connection has not been closed by itself.)
So it seems there is something to balance: the cost of keeping the connection open until the script finishes versus multiple unnecessary closings of the db connection per script. I tend to think the first is safer, but I am not very sure it's sufficient. For example, if I do:
$userA->sendMessageTo($userB);
And inside this:
$userA->send($userB);
$userA->useSomePoints();
$userA->flushPointsBalance();
....
Imagining that each method performs some database operation, this is still just one script call/request; if the db open/close happens around each query, it will certainly happen more than once, compared to not closing it right after each query in method scope.
So which way is better?
Generally, having your DB wrapper class (or ORM) create a single connection for the entire request and only close it during cleanup (either via a destructor or via PHP's shutdown) is okay. If this is a problem, it probably means that something long is happening between your opening and closing of connections, and that is what you should be addressing instead.
Causes could be:
slow queries that don't make use of indices
some other high latency blocking IO (file reading, decoding, etc)
You'll get better gains, in terms of effort, by addressing those issues rather than by looking at how you open and close connections.