Switch large website from MySQL to MySQLi [duplicate] - php

This question already has answers here:
How to change mysql to mysqli?
(12 answers)
Closed 1 year ago.
I want to switch from MySQL to MySQLi, but I have a very large website.
I read that https://wikis.oracle.com/display/mysql/Converting+to+MySQLi could help me and I read How could I change this mysql to mysqli?. It says that I could replace most of the functions with just adding an 'i' to the function, and that I should start bughunting.
But my website is very complex and large, and it would take a very long time to check if everything works. So: what is the best way to switch from MySQL to MySQLi for a very large website?
Thanks!

There is no easy answer to your question, as practically every simple way to do this would have required doing things differently when the application was first written.
If you have direct calls to mysql_* functions throughout your code, and no database abstraction layer where your queries go through a helper class or function, then you will need to edit every call.
You cannot get away with just adding an i to commands like mysql_query, because the procedural mysqli_query() requires the link to the database as its first parameter, whereas with mysql_query() the connection, if it was given at all, was the second parameter.
Instead of just changing mysql_query(...) to mysqli_query($link, ...), I would argue there is no better time to put a db abstraction layer in place. Use helper functions, e.g. sql_query(), that actually process your queries, so that if you ever need to change databases again you only have to update the db-specific commands in one abstraction file. If you write a function that wraps mysqli_query(), you can simply rename your mysql_query() calls to your helper function and let the helper worry about supplying the link.
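A minimal sketch of the kind of helper meant here, assuming a single shared connection (the names sql_connect()/sql_query() and the global link are illustrative, not from any library):

// Tiny abstraction layer: only this file knows about mysqli.
$GLOBALS['db_link'] = null;

function sql_connect($host, $user, $pass, $dbname)
{
    $GLOBALS['db_link'] = mysqli_connect($host, $user, $pass, $dbname);
    return $GLOBALS['db_link'];
}

function sql_query($query)
{
    // The helper supplies the link, so calling code keeps the old one-argument style.
    return mysqli_query($GLOBALS['db_link'], $query);
}

With something like that in place, a project-wide search-and-replace of mysql_query( with sql_query( gets you most of the way, and the next database change only touches this one file.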
Whilst that is the simplest way, it will not bind parameters or prepare statements, which is a major factor in preventing SQL injection vulnerabilities.
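For comparison, a rough sketch of what a parameterised query looks like with procedural mysqli (the table, columns, and the $link variable are placeholders):

// Prepared statement with a bound parameter: the value never becomes part of the SQL string.
$stmt = mysqli_prepare($link, 'SELECT id, name FROM users WHERE email = ?');
mysqli_stmt_bind_param($stmt, 's', $email);   // 's' = one string parameter
mysqli_stmt_execute($stmt);
mysqli_stmt_bind_result($stmt, $id, $name);
while (mysqli_stmt_fetch($stmt)) {
    // use $id and $name here
}
mysqli_stmt_close($stmt);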
Once you have changed all these commands you need to test.
If you have no automated tests written then this is probably a good time to start writing them. Even though you will need to check that every change has worked, if you do it by automated test then you can avoid that pain in future.

You are making the right move, because the mysql_* functions are deprecated in recent PHP versions. You could also download a converter for this purpose...
see this and download: mysql to mysqli

Related

PHP Mysqli, making PHP use the latest connection automatically instead of having to indicate the mysql connection link resource

I just converted from the mysql to the mysqli API in PHP...
I noticed that some mysqli calls require the $connection resource parameter to be indicated, which I found quite annoying lately as I tried to port my scripts to this new API... Is there a way I can configure PHP to use the latest connection resource automatically instead of having to declare it every single time I make these calls? Kinda like the behaviour of the old mysql API.
I do hope there is a switch or something for this.
Nope, there is no [sane] way.
But that's not the problem.
The point is that switching to mysqli mechanically makes no sense.
The only reason for moving from mysql to mysqli is prepared statements.
If you aren't going to use them, and if you want to stick with raw API methods which require repeating the same useless 4-6 lines of code for every query, without using any abstraction (which would reduce that amount to 1-2 lines), then there is no point in switching drivers. Keep the old mysql extension. PHP 5.5 is not out yet and 5.4 is still rarely available on shared hosting, which means you have 5-6 years ahead for your old code to run with no problem.

MySQLi Bulk Prepared statements on large site?

I have been digging around stack and all sorts of other sites looking for the best answer to my questions.
I am developing a very large and growing monster of a website, in the form of an information management system. At the core it is running off of PHP and MySQL. I have just updated code, in the more general sense, to mysqli, but without taking full advantage of all of the features. That is part of what I am working on now.
I have read a ton about prepared statements and this is something I certainly need to put to use given the number of statements that get re-used.
I am looking at making in the realm of about 50 prepared statements, being used across nearly 200 different pages. Is there a recommended way to do this? All examples I have seen deal with 1 or 2.
Due to the ever growing nature of the site, using databases and such, one of the things that I liked with the previous mysql is that it didn't require a connection specified for each query, but mysqli does. I had to tweak my functions due to this. Is there a recommended solution for this?
I built the site in a procedural form rather than object oriented, but I am always open to suggestions, regardless of the format they use.
I'll try to be as accurate as possible, but I'm not an expert.
Your first point: You're probably looking for Stored Procedures. Basically you can store certain logic of your application for repetitive usage.
Prepared Statements, however, are different. They basically mean "parse once, execute many times", but they're not stored on the server or carried over across connections.
In PHP, each "page load" is a different process with its own variables and thus its own connection to the database, so you cannot really reuse a prepared statement from one request in the next.
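Within a single request, though, preparing once and executing many times is exactly what prepared statements are for; a hedged sketch in the object-oriented mysqli style (credentials, table, and the $actions array are made up):

// Prepare once, then execute repeatedly with different values during this page load.
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
$stmt = $mysqli->prepare('INSERT INTO log (user_id, action) VALUES (?, ?)');
foreach ($actions as $userId => $action) {       // $actions: array of user_id => action
    $stmt->bind_param('is', $userId, $action);   // 'i' = integer, 's' = string
    $stmt->execute();
}
$stmt->close();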
As for your second point, mysql_query() doesn't require a connection handle to be passed to it simply because it assumes you want to use the last created connection.
For example:
mysql_connect();
mysql_query("SELECT * FROM table");
and
$link = mysql_connect();
mysql_query("SELECT * FROM table", $link);
are the same.
So using the connection implicitly was just a convenience of the old API, not something that helps scalability.
That's as far as I can write without risking giving you wrong information, so I highly recommend that you really read up on this; if you then have questions, everybody here will be happy to answer.

Is SQL used by PDO database independent?

Different databases have slight variations in their implementations of SQL. Does PDO handle this?
If I write an SQL query that I use with PDO to access a MySQL database, and later tell PDO to start using a different type of database, will the query stop working? Or will PDO 'convert' the query so that it continues to work?
If PDO does not do this, are there any PHP libraries that allow me to write SQL according to a particular syntax, and then the library will handle converting the SQL so that it will run on different databases?
From the PHP manual:
PDO provides a data-access abstraction layer, which means that, regardless of which database you're using, you use the same functions to issue queries and fetch data. PDO does not provide a database abstraction; it doesn't rewrite SQL or emulate missing features. You should use a full-blown abstraction layer if you need that facility.
So, you cannot change the database and expect that everything works as before. It depends on the queries you have used: are they "simple" SQL-92 queries, or do they use special features of a specific db?
E.g. a MySQL query with "LIMIT 10,20" must be rewritten to work with an Oracle DB or SQLite; they use "LIMIT 20 OFFSET 10".
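A minimal sketch of the distinction (the DSNs and table name are placeholders): the PDO calls stay identical, but the SQL text itself may need changing per driver.

// Same fetching code either way; only the SQL dialect differs.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
// $pdo = new PDO('sqlite:/path/to/test.db');        // swapping the driver is the easy part

$sql = 'SELECT * FROM items LIMIT 10,20';            // MySQL-specific form
// $sql = 'SELECT * FROM items LIMIT 20 OFFSET 10';  // form needed elsewhere

foreach ($pdo->query($sql) as $row) {
    // identical row handling regardless of driver
}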
PHP doesn't have libraries that will automatically convert SQL for you. If you want that kind of functionality you should look at an ORM implementation like Doctrine. There is a price to pay, of course, since there is a learning curve involved in using it in your project, plus writing SQL stops being as simple as churning out a string. You should ask yourself if you absolutely, positively need code that's database independent.

Running a list of MySQL queries without using exec()

I've got a site that requires manual creation of the database tables on install. At the moment they are saved (along with any initial data) in a collection of .sql files.
I've tried to auto-create using exec() with CLI MySQL and while it works on a few platforms it's fairly flakey and I don't really like doing it this way, plus it is hard to debug errors and is far from bulletproof (especially if the MySQL executable isn't in the system path).
Is there a better way of doing this? The MySQL query() command only allows one sql statement per query (which is the sticking point).
I've heard MySQLi may solve some of these issues. I am fairly invested in the original MySQL library, but I would be willing to switch provided it's stable, compatible, commonly supported on a standard server build, and an improvement in general.
Failing this I'd probably be willing to do some sort of creation from a PHP array/data structure - which is arguably cleaner as it would be able to update tables to match the schema in situ. I am assuming this may be a problem that has already been solved, so any links to any example implementation with pro's/con's would be useful!
Thanks in advance for any insight.
Apparently you can pass 65536 (the CLIENT_MULTI_STATEMENTS client flag) when connecting to the database to allow multiple queries, i.e. making use of ; in one SQL string.
You could also just read in the contents of the SQL files, explode by ; if necessary and run the queries inside a transaction to make sure all queries execute properly.
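A naive sketch of that approach with mysqli (credentials and file name are placeholders; it assumes no semicolons inside string literals, and note that DDL statements like CREATE TABLE implicitly commit in MySQL, so the rollback mainly protects data statements):

// Split the .sql file on ';' and run each statement, wrapped in a transaction.
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
$statements = array_filter(array_map('trim', explode(';', file_get_contents('schema.sql'))));

$mysqli->autocommit(false);
$ok = true;
foreach ($statements as $sql) {
    if (!$mysqli->query($sql)) {
        $ok = false;            // remember the failure and stop
        break;
    }
}
$ok ? $mysqli->commit() : $mysqli->rollback();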
Another option would be to have a look at Phing and dbdeploy to manage databases.
If you're using this to migrate data between systems, consider using the LOAD DATA INFILE syntax (http://dev.mysql.com/doc/refman/5.1/en/load-data.html) after having used SELECT ... INTO OUTFILE (http://dev.mysql.com/doc/refman/5.1/en/select.html)
You can run the schema creation/update commands via the standard mysql_* PHP functions. And if the query() command as you call it will allow only one statement, just call it many times.
I really don't get why you would require everything to be in the same call.
You should check for errors after each statement and take corrective actions if it fails (unless you are using InnoDB, in which case you can wrap all statements in a transaction and rollback if it fails.)
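A similar sketch with the old mysql_* API, checking for errors after each statement as suggested (credentials and the statements themselves are placeholders; the transaction only helps for InnoDB tables):

// Run the statements one by one and stop at the first error.
mysql_connect('localhost', 'user', 'pass');   // placeholder credentials
mysql_select_db('mydb');
$statements = array(
    'CREATE TABLE IF NOT EXISTS items (id INT PRIMARY KEY, name VARCHAR(100))',
    "INSERT INTO items (id, name) VALUES (1, 'first')",
);
mysql_query('START TRANSACTION');
foreach ($statements as $sql) {
    if (!mysql_query($sql)) {
        mysql_query('ROLLBACK');
        die('Statement failed: ' . mysql_error());
    }
}
mysql_query('COMMIT');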

find time to execute MySQL query via PHP

I've seen this question around the internet (here and here, for example), but I've never seen a good answer. Is it possible to find the length of time a given MySQL query (executed via mysql_query) took via PHP?
Some places recommend using php's microtime function, but this seems like it may be inaccurate. The mysql_query may be bogged down by network latency, or a sluggish system which isn't responding to your query quickly, or some other unrelated cause. None of these are directly related to the quality of your query, which is the only thing I really want to test out here. (Please mention in the comments if you disagree!)
My answer is similar, but varied. Record the time before and after the query, but do it within your database query class. Oh, you say you are using mysql_query directly? Well, now you know why you should use a class wrapper around those raw PHP database functions (pardon the snark). Actually, one is already built, called PDO:
http://us2.php.net/pdo
If you want to extend the functionality to do timing around each of your queries... extend the class! Simple enough, right?
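One way to do that, as a hedged sketch: a thin wrapper class around PDO (the class name TimingDb is made up) that records how long each query took.

class TimingDb
{
    private $pdo;
    public $lastQueryTime = 0.0;   // seconds taken by the most recent query

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function query($sql)
    {
        $start = microtime(true);
        $stmt  = $this->pdo->query($sql);
        $this->lastQueryTime = microtime(true) - $start;
        return $stmt;
    }
}

// Usage:
// $db = new TimingDb(new PDO('mysql:host=localhost;dbname=test', 'user', 'pass'));
// $rows = $db->query('SELECT * FROM items')->fetchAll();
// echo $db->lastQueryTime;

Wrapping rather than subclassing keeps the sketch free of signature details, but extending PDO directly is also possible.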
If you are only checking the quality of the query itself, then remove PHP from the equation. Use a tool like the MySQL Query Browser or SQLyog.
Or if you have shell access, just connect directly. Any of these methods will be superior in determining the actual performance of your queries.
At the php level you pretty much would need to record the time before and after the query.
If you only care about the query performance itself, you can enable the slow query log on your MySQL server: http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html That will log all queries that take longer than a specified number of seconds.
If you really need query information maybe you could make use of SHOW PROFILES:
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
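As a rough sketch, profiling can be turned on for the session and read back from PHP (placeholder connection details; SHOW PROFILES exists in MySQL 5.0.37 and later, though it was eventually deprecated, so treat it as a debugging aid only):

// Ask the server itself how long each statement in this session took.
mysql_connect('localhost', 'user', 'pass');   // placeholder credentials
mysql_select_db('test');
mysql_query('SET profiling = 1');
mysql_query("SELECT * FROM items WHERE name LIKE '%foo%'");   // the query under test
$result = mysql_query('SHOW PROFILES');
while ($row = mysql_fetch_assoc($result)) {
    echo $row['Query_ID'] . ': ' . $row['Duration'] . 's  ' . $row['Query'] . "\n";
}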
Personally, I would use a combination of microtime-ing, the slow query log, mytop, and analyzing problem queries with the MySQL client (command line).
