Is there any way to get mysqli_error without CONNECTION variable? - php

As far as I know, the only way to get mysqli error info is:
$con=mysqli_connect("localhost","my_user","my_password","my_db");
///RUN QUERY HERE, and if error:
$error=mysqli_error($con);
However (thinking I was smart) my whole software is programmed with a function that gets con info, as there are multiple databases and tables. So my queries look more like this.
$query=mysqli_query(CON("USER_DATABASE"),"INSERT INTO users (column) VALUES ('VALUE')");
run_query($query); ///In this function I either run the query,
///or record a failed query. However, I cannot record the specific error.
Is there any way at all to somehow still capture mysqli_error() without having $con available to pass to the mysqli_error() function? Perhaps via the $query variable?

However (thinking I was smart) my whole software is programmed with a function that gets con info, as there are multiple databases and tables. So my queries look more like this.
Don't do that. Not only does it make it more difficult to retrieve error information, but -- more importantly! -- it means that your application will make a completely new database connection for every query. This will make your application significantly slower.
If your application really needs to connect to multiple databases, consider passing a symbolic name for the database as a parameter, and caching connections to all databases that have been used in a static variable.
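A minimal sketch of that idea, keeping the CON() helper from the question; the credentials and connection details are placeholders, not a definitive implementation:

```php
<?php
// Sketch: cache one mysqli connection per database name for the
// lifetime of the script, so CON() never reconnects needlessly.
// Credentials below are placeholders.
function CON($dbName)
{
    static $connections = array();

    if (!isset($connections[$dbName])) {
        $connections[$dbName] = mysqli_connect("localhost", "my_user", "my_password", $dbName);
    }

    return $connections[$dbName];
}
```

Because every call with the same name returns the same handle, mysqli_error(CON("USER_DATABASE")) now reports the error for the connection the query actually ran on.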

as there are multiple databases and tables.
The number of tables doesn't matter.
As for the databases, your application should have one connection variable per database (though I would say you are probably doing something wrong if you need multiple databases in a simple application).
Either way, there is a way to get the mysqli error without a connection variable. To make it possible, you should tell mysqli to start throwing exceptions. Just add this line:
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
So the simplest solution for you would be to make the con() function static and return the same connection instance every time it is called.
But you should reconsider the architecture and organize it better: use just one database unless multiple are absolutely necessary, and use OOP to get rid of static functions.
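A sketch of what the exception-based approach looks like end to end (the credentials and table are placeholders):

```php
<?php
// Tell mysqli to throw mysqli_sql_exception instead of returning false.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

try {
    $con = mysqli_connect("localhost", "my_user", "my_password", "my_db");
    mysqli_query($con, "INSERT INTO users (name) VALUES ('Alice')");
} catch (mysqli_sql_exception $e) {
    // The exception carries the same text mysqli_error() would have returned,
    // so no connection variable is needed at the point where you log it.
    error_log("Query failed: " . $e->getMessage());
}
```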

Related

Display all SQL queries within a single PHP script execution

There is a long-lived, large PHP project that a number of different developers have worked on at different times. There may be places where duplicate database queries were introduced during bug fixing: to solve a local problem, a developer could have written and executed a query returning data that had already been fetched. Now the project has a database performance problem, and in light of it I have a question:
Are there any tools (besides General Log) that allow you to see what database queries were made as part of a single PHP script execution?
The question is related to the fact that there are a lot of API endpoints in the project and it will take a long time to check them all just by reading the code (which is sometimes very ornate).
Most PHP frameworks use a single connection interface - some kind of wrapper - that makes it possible to log all queries through that interface.
If that does not exist another approach would be to enable logging at the MySQL-level. But for some, probably legit, reason you don't want to do that either. If you want to prevent downtime you can actually enable query logging without restarting the MySQL server.
If none of the above solutions is possible you have to get your hands dirty :-)
One way could be to add your own logging in the PHP code.
Writing new lines to a file is not a performance issue if you write to the end of the file.
I'm not sure how you query the database, but if you..
A) ..use the procedural mysqli functions
If you use the procedural mysqli functions like mysqli_query() to call the database you could create your own global custom function which writes to the log file and then call the real function.
An example:
Add the new global function:
function _mysqli_query($link, $query, $resultmode = MYSQLI_STORE_RESULT)
{
    // write new line to end of log-file
    file_put_contents('queries.log', $query . "\n", FILE_APPEND);
    // call real mysqli_query
    return mysqli_query($link, $query, $resultmode);
}
Then do a project-wide search and replace for mysqli_query( with _mysqli_query(.
Then a line like this:
$result = mysqli_query($conn, 'select * from users');
Would look like this after the search and replace:
$result = _mysqli_query($conn, 'select * from users');
B) ..use the mysqli class (object oriented style)
If you use the object-oriented style by instantiating the mysqli class and then calling its query methods, e.g. $mysqli->query("select * from users"), one approach is to create your own database class which extends mysqli and, inside the relevant methods (e.g. query()), add logging as in the example above.
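A minimal sketch of such a subclass, written against the PHP 8 signature of mysqli::query(); the class name and log path are illustrative:

```php
<?php
// Sketch: a mysqli subclass that appends every SQL statement to a
// log file before executing it.
class LoggingMysqli extends mysqli
{
    public function query(string $query, int $result_mode = MYSQLI_STORE_RESULT): mysqli_result|bool
    {
        // Log the SQL, then delegate to the real implementation.
        file_put_contents('queries.log', $query . "\n", FILE_APPEND);
        return parent::query($query, $result_mode);
    }
}

// Usage (placeholder credentials):
// $db = new LoggingMysqli("localhost", "user", "pw", "database");
// $result = $db->query("select * from users");
```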
In the highly recommended book "High Performance MySQL" from O'Reilly, they walk through code examples of how to do this and how to add extra debug information (timing etc.). Some of the pages are accessible on Google Books.
In general
I would consider using this as an opportunity to refactor some of the codebase to the better.
If you have written automated tests that is a good way to ensure that the refactoring did not break anything.
If you do not practice automated testing you have to test the whole application manually.
You said that you are afraid your performance issues come from duplicate database queries, which might very well be the case.
In most cases, though, I find the cause to be missing indexes or single queries that need rewriting.
The Performance Schema in MySQL can help you debug these things if you think the performance issues could have other causes.

Create new connection for each query?

Note: I used Google Translator to write this
I've always done the following to work with MySQL:
-> Open Connection to the database.
-> see details
-> Insert Data
-> another query
-> close Connection
I usually use the same connection to do various things before closing.
A friend who studies this in the IPN of Mexico mentioned to me that the right way (for safety) is to make a new connection for each query, for example:
-> Open Connection to the database.
-> see details
-> close Connection
-> Open Connection to the database.
-> Insert Data
-> close Connection
-> Open Connection to the database.
-> another query
-> close Connection
My question is, what is the right thing to do? My method has been to make the least amount of queries to the database, and only make a connection and keep it until it no longer serves me.
Additionally, is it possible to make a double insertion to a table? For example:
insert into table1(relacion) values([insert into tablaRelacionada(id) values("dato")]);
and that "relacion" is the inserted ID from the first query in "tablaRelacionada".
No, it's not possible to insert rows into two different tables with a single INSERT statement. (You can use a trigger to get it done, but that trigger will need to issue a separate INSERT statement... from the client side it will look like one statement, but on the server, there would be two INSERT statements executed.)
If performance and scalability aren't concerns, then "churning" connections is workable. There's nothing necessarily "wrong" with creating a separate connection for each statement, but it's resource intensive. There is a lot of overhead in creating a new session. (It looks rather simple from the client side, but it requires a lot of work on the server side, in addition to the codepath on the client.)
Reusing existing connections is a common pattern. It's one of the biggest benefits of implementing "connection pool", to make it easy to reuse connections without "churning", repeatedly connecting and disconnecting from the database.
In terms of a separate connection for each SQL statement somehow increasing "safety", that's a bit of a stretch.
But I can see some benefit of having a freshly initialized session.
For example, if you reuse an existing session, you may not know what changes have been made in the session state. Any changes made previously are still "in effect". This would be things like session variable settings (e.g. timezone, characterset, autocommit, user defined variables) which could have an impact on the current statement. But within a single script, where you've gotten a fresh connection, you should know what changes have been made, so that shouldn't really be an issue. (This would be more of an issue with using connections from a pool, where the connections are shared by multiple processes. One process mucking with the timezone or characterset could cause a slew of problems for other processes that reuse the connection.)
Using a separate connection per query is at best a great way to bog down both your application and database servers with needless overhead. There are three aspects I see raised here:
Efficiency
Application Security
Network Security
1. Efficiency
Short answer: Bad idea.
Oftentimes the overhead required to initialize the connection is far more than what is required to run the actual query. Your application is probably going to run orders of magnitude slower if you take a connection-per-query approach.
2. Application Security
Short answer: Generally a bad idea, but in the context of PHP completely unnecessary.
The only 'safety' issue I can think of here would be users accessing leftover temp tables, or session settings "bleeding" over. This is unlikely to happen unless you're using persistent connections, which are not the default. Moreover, most temporary values in MySQL are stored per-connection, and unless some of your PHP code is written poorly [in a particular, strange, and seldom-recommended way, i.e. sharing around DB singletons and accessing them strangely], then maybe, if the planets align just right, you might access some MySQL session-specific data in an unexpected way.
This is pretty much the same as premature optimization, and is not worth worrying about.
3. Network Security
Short answer: No. What? Just... no.
If you're worried about someone eavesdropping on your connections, the solution is not to make more of them; it's to make them secure. MySQL supports SSL/TLS, so use that if you're worried.
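For reference, a sketch of enabling TLS on a mysqli connection; the host, credentials, and certificate paths are placeholders:

```php
<?php
// Sketch: connect to MySQL over TLS instead of multiplying connections.
$con = mysqli_init();
mysqli_ssl_set(
    $con,
    '/path/to/client-key.pem',   // client private key
    '/path/to/client-cert.pem',  // client certificate
    '/path/to/ca-cert.pem',      // CA certificate
    null,                        // CA directory
    null                         // cipher list
);
mysqli_real_connect($con, 'db.example.com', 'user', 'password', 'my_db', 3306, null, MYSQLI_CLIENT_SSL);
```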
 
TL;DR No. Don't create separate connections per-query. Bad. Whoever told you this needs to go back to school.
Multi-Table Insert
What you've quoted is not possible, you would want to do something along the lines of the following:
$dbh->query("INSERT INTO tablaRelacionada(id) VALUES ('dato')");
$lastid = $dbh->lastInsertId();
$dbh->query("INSERT INTO table1(relacion) VALUES ($lastid)");
Assuming that the table tablaRelacionada has an AUTO_INCREMENT column which is what you're trying to get from the first query.
See: lastInsertId()

use only one $dbh for several databases?

If you have two databases on the same host, one called blog and one called forum, it seems like you can access both using only one database handle? (in PDO)
$dbh=new PDO("mysql:host=$dbHost;dbname=blog", $dbUser, $dbPassword);
This handle is for the database blog, although you can also perform operations on forum using $dbh if you write something like
SELECT forum.tableName.fieldName
My questions are:
Is the only reason for specifying dbname in $dbh that it lets you omit the blog.tableName.fieldName prefix?
Since my website has two databases, would there be any pros or cons of only using one database handle, rather than creating two handles (obviously one for blog and one for forum)? Possible performance difference?
Does creating a database handle consume any server resources?
It is usually good practice to use a database-specific user for any app you make. I would go as far as calling it a necessity. That is the reason for keeping the name of the database in the connection. (Hint for the reason behind this: what if someone somehow got your DBMS password for one database?)
I am not very good at this, but I do not think it's a good idea to keep two separate databases when one will do. In your case, you are not using master-slave replication or anything similar. So unless you have some physical limit you are trying to work around, merge them into one database (use prefixes for table names to avoid name collisions).
The reason for the previous point comes with this one. Keeping one user per database (some people even keep two, for strange but to some extent justifiable reasons) is a safety measure you should follow. With multiple users, you need to make multiple connections, which means that on every page load you will be connecting to the DBMS twice: simple math, 2x the load (yes, it eats resources; every single line of code does). To simplify, think of a man who walks to the grocery store for everything you ask for and gets only one thing at a time. If you send him to two different grocery stores, he will need twice the time and energy to do the same work.
1. Yes, you can omit it. Or switch by running USE databasename;.
2. Use one handle; it seems a waste to make double the connections.
3. Yes, hence (2).
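A sketch of point 2 in practice, using the blog and forum schema names from the question (the table and column names are made up):

```php
<?php
// Sketch: one PDO handle serving two databases on the same host.
$dbh = new PDO("mysql:host=localhost;dbname=blog", "dbUser", "dbPassword");

// Unqualified names hit the default database (blog).
$posts = $dbh->query("SELECT title FROM posts");

// Qualified names reach the other database (forum) over the same handle.
$threads = $dbh->query("SELECT subject FROM forum.threads");
```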

The best efficient method to connect MySQL from PHP?

It is very common that using PHP to connect MySQL. The most common method is like this:
$sqlcon=mysql_connect("localhost","user","pw");
mysql_select_db('database');
$sqlcomm=mysql_query("SELECT id FROM bd");
while($row=mysql_fetch_row($sqlcomm))
{
//do something
}
mysql_close($sqlcon);
I think this is the fastest, most direct way to connect to MySQL. But in a project there will be too many MySQL connections in the PHP scripts; we would have to use mysql_connect("localhost","user","pw") to connect to MySQL in every PHP script. So you will probably build a MySQL class or function in a file to connect to MySQL:
function connect( $query )
{
    $sqlcon = mysql_connect("localhost", "user", "pw");
    mysql_select_db('database');
    $sqlcomm = mysql_query($query);
    $data = array();
    while ($row = mysql_fetch_row($sqlcomm))
    {
        $data[] = $row; // collect each row so the caller can use it
    }
    mysql_close($sqlcon);
    return $data;
}
and then include it into your project using include() for the connection.
include('connect.php');
$data = connect('SELECT id from db');
OK, this way the code looks better. But using include() makes PHP read and execute another script file, another I/O operation on the hard disk, which will also slow down performance.
If the webpage serves 100 PV/s, PHP will read and execute one PHP script 100 times/s with the first method, but read and execute PHP scripts 200 times/s with this method!
I show here a simple example with only one query. Try to imagine a high-traffic, multi-query environment.
Does anyone have a better way to make MySQL connections easier and more efficient?
You don't really need to open that many connections. You just open 1 connection at the start of your script (before <body> gets generated, let's say), and then close it at the end of your script (after </body> is generated, let's say). That leaves you with only 1 connection. In between, you can execute as many queries as you need.
Have you looked at using PDO? It supports persistent connections and is not limited to MySQL...
Have a look at Dibi.
You use a class that opens a MySQL connection (the username / password / db are read from some sort of configuration file), and when you query the db it establishes a connection.
That leads you on to using a framework that follows certain programming paradigms and so forth.
Also, you shouldn't worry about a performance decrease because you're including a file. That should be the least of your worries. The OS is doing many I/O operations, not just with the hard disk; your one file include won't be noticeable.
If you're asking whether there's more efficient way of connecting to a MySQL db without using mysql_, mysqli_, odbc or PDO - no, there isn't.
The performance loss would be insignificant. You should be more concerned about the correct approach to the structure of your code than about performance.
You can move your host/user/password into constants in a separate file and include it wherever you need it. Moreover, you can use design patterns for the database object, like Singleton or Factory; they will provide more flexibility to your system.
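A minimal Singleton sketch of such a database object; it is written against mysqli rather than the old mysql_* functions, and the credentials are placeholders:

```php
<?php
// Sketch: a Singleton that hands every caller the same mysqli connection.
class Database
{
    private static $instance = null;

    // Prevent direct construction; connections come from connection() only.
    private function __construct() {}

    public static function connection()
    {
        if (self::$instance === null) {
            self::$instance = new mysqli("localhost", "user", "pw", "database");
        }
        return self::$instance;
    }
}

// Every caller reuses the single connection:
// $result = Database::connection()->query("SELECT id FROM bd");
```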
But in project, there are too many MySQL connections, we should type Username and Password code each time
There are lots of things wrong with this statement - even if you don't count the grammar.
If you mean that you have multiple servers with different datasets on them, then you should definitely consider consolidating them or using the federated engine to provide a single point of access.
Opening a new connection and closing it each time you run a query is very inefficient if you need to execute more than one query per script.
Really, you need to spend some time thinking about why you need multiple database connections and eliminate them, but in the meantime, bearing in mind that connections are closed automatically when a script finishes...
class connection_pool {
    var $conxns = array(
        'db1.example.com' =>
            array('host' => 'db1.example.com', 'user' => 'fred', 'password' => 'secret'),
        'db2.example.com' =>
            array('host' => 'db2.example.com', 'user' => 'admin', 'password' => 'xxx4'),
        // ...
    );

    function get_handle($db)
    {
        if (!array_key_exists($db, $this->conxns)) {
            return false;
        }
        if (!@is_resource($this->conxns[$db]['handle'])) {
            $this->conxns[$db]['handle'] = mysql_connect(
                $this->conxns[$db]['host'],
                $this->conxns[$db]['user'],
                $this->conxns[$db]['password']
            );
        }
        return $this->conxns[$db]['handle'];
    }
}
(NB never use 'USE database' if you have multiple databases on a single mysql instance - always explicitly state the database name in queries)

How quick is switching DBs with PHP + MySQL?

I'm wondering how slow it's going to be to switch between 2 databases on every call of every page of a site. The site has many different databases for different clients, along with a "global" database that is used for some general settings. I'm wondering if much time would be added to the execution of each script if it has to connect to the database, select a DB, do a query or two, switch to another DB, and then complete the page generation. I could also have the data repeated in each DB; I'd just need to maintain it (it will only change when upgrading).
So, in the end, how fast is mysql_select_db()?
Edit: Yes, I could connect to each DB separately, but as this is often the slowest part of any PHP script, I'd like to avoid it, especially since it happens on every page. (It's slow because PHP has to do some kind of address resolution (be it an IP or host name), and then MySQL has to check the login parameters, both times.)
Assuming that both databases are on the same machine, you don't need to do the mysql_select_db. You can just specify the database in the queries. For example;
SELECT * FROM db1.table1;
You could also open two connections and use the DB object that is returned from the connect call and use those two objects to select the databases and pass into all of the calls. The database connection is an optional parameter on all of the mysql db calls, just check the docs.
You're asking two quite different questions.
Connecting to multiple database instances
Switching default database schemas.
MySQL is known to have quite fast connection setup time; making two mysql_connect() calls to different servers is barely more expensive than one.
The call mysql_select_db() is exactly the same as the USE statement and simply changes the default database schema for unqualified table references.
Be careful with your use of the term 'database' around MySQL: it has two different meanings.
