I am developing a CodeIgniter (2.0.2) application, which will utilise a master database for all write operations (INSERT/UPDATE/DELETE) and a read replica for all read operations (SELECT).
Now I know I can access two different database objects within the code to route individual requests to the specific database server, but I'm thinking there has to be a better, automated way. I'll be using MySQL and Active Record, and I also want to build in Memcache checking: although it won't be used immediately, I'd like the option there for the future, built in at this stage.
I'm wondering if it's possible to add a hook/library of some kind to intercept $this->db->query so that the following happens:
1) SQL Query received
2) Check if SELECT query
2a) If it is a SELECT, see if Memcache is active; if so, encode the SQL and check Memcache for a response.
2b) If there is no Memcache response, or Memcache is not active, execute the query as normal through the READ MySQL server.
3) The query was NOT a SELECT, so execute it as normal through the WRITE MySQL server.
4) Return response.
I'm sure that, looking at this, it should be quite simple to do, but no matter how I look at it I'm just not seeing a potential answer - but there's got to be one! Can anyone help/assist?
In addition, I also want the ability to log all write SQL commands for troubleshooting; presumably the best way is to introduce 3a) Write SQL command to plain text file ... into the above scheme. I don't believe MySQL actually logs the non-SELECT queries in any way ... does it?
That type of behavior is a little bit beyond the normal scope of CI. Unfortunately, your best bet is to manually extend the database drivers, specifically overriding simple_query or _execute (simple_query is a wrapper around _execute which simply ensures initialization). That is really the only place where you can guarantee that you catch all of the queries and branch the logic accordingly. (You may also want to override close, as that is the cleanup method.)
(Personally, I would have the SELECT DB load a secondary write DB into itself and just call $write_db->simple_query conditionally; that seems like it would be the least trouble.)
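A minimal sketch of that approach, assuming CodeIgniter 2.x with the mysql driver. CI will not pick up a database driver extension automatically, so the class has to be included by hand (e.g. from a pre_system hook), and the $write_db property (a second CI database object connected to the master) and the log path are assumptions. The Memcache step is only stubbed here, because simple_query returns a raw result resource, which cannot be serialised into a cache; that check is better layered around result arrays.

    <?php
    class MY_DB_mysql_driver extends CI_DB_mysql_driver {

        public $write_db; // CI database object for the master (write) server

        function simple_query($sql)
        {
            if (preg_match('/^\s*SELECT/i', $sql))
            {
                // 2) SELECT query.
                // 2a) A Memcache lookup keyed on md5($sql) would go here (see note above).
                // 2b) No cache hit (or no Memcache): run against the read replica.
                return parent::simple_query($sql);
            }

            // 3a) Log the write statement to a plain text file for troubleshooting.
            file_put_contents(APPPATH . 'logs/write-queries.log',
                date('c') . ' ' . $sql . PHP_EOL, FILE_APPEND);

            // 3) Route INSERT/UPDATE/DELETE (and any other writes) to the master.
            return $this->write_db->simple_query($sql);
        }
    }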
My problem is I'm dealing with a HUGE web application (a school system) with no documentation for the internal logic. I need to make a bulk update of a particular value, but I don't know which tables in the MySQL database contain the relevant data to update. The app itself runs on PHP. Is there an easy way to compare the database before and after I perform an operation, so I can see which tables are affected? I tried using a diff tool on SQL dumps taken before and after, but the database is so huge it's really impractical to use. I'm wondering if there is something better, or if I can just configure PHP somehow to log any MySQL operations from whatever file happens to trigger them.
You may want to run the performance reports from MySQL Workbench and look at the statement analysis. This will work if you pick a time when the system is not being used and then run some function in the web app that updates the tables with the values you need to change. Look at the statement analysis before and after you run your experiment and look for the SQL statements that show activity. It's not perfect, but this will at least help you begin to home in on the data you're looking for. The big 'gotcha' here is if the value you want to change is dynamically derived during the query process; then you'll have to understand how the derivation works and the source columns. But, again, this will give you a brute-force starting place.
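As a cruder alternative that is closer to what you asked for (logging every MySQL operation the app performs), you could briefly switch on the general query log around the operation. A rough sketch, assuming a MySQL user with the SUPER privilege and MySQL 5.1+; the connection details are placeholders, and in practice you would pause here (or split this into two scripts) while you click through the web app:

    <?php
    $db = new PDO('mysql:host=localhost', 'root', 'secret');

    // Send the general query log to the mysql.general_log table and switch it on.
    $db->exec("SET GLOBAL log_output = 'TABLE'");
    $db->exec("SET GLOBAL general_log = 'ON'");

    // ... perform the update through the school system's web interface now ...

    $db->exec("SET GLOBAL general_log = 'OFF'");

    // Every statement the app issued, including the tables it touched:
    $log = $db->query("SELECT event_time, argument FROM mysql.general_log
                       WHERE command_type = 'Query' ORDER BY event_time");
    foreach ($log as $row) {
        echo $row['event_time'] . '  ' . $row['argument'] . "\n";
    }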
I have an issue where an instance of Solr is querying my MySQL database to refresh its index immediately after an update is made to that database, but the Solr query is not seeing the change made immediately prior.
I imagine the problem has to be something like Solr is using a different database connection, and somehow the change is not being "committed" (I'm not using transactions, just a call to mysql_query) before the other connection can see it. If I throw a sufficiently long sleep() call in there, it works most of the time, but obviously this is not acceptable.
Is there a PHP or MySQL function that I can call to force a write/update/flush of the database before continuing?
You might make Solr use SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED to get a more prompt view of updated data.
You should be able to do this with the transactionIsolation property of the JDBC URL.
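For illustration, a small PDO sketch of the effect the isolation level has on the reading connection (table name and credentials are made up; Solr's JDBC connection behaves the same way):

    <?php
    $reader = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $writer = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    $reader->exec("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED");
    $reader->beginTransaction();
    $reader->query("SELECT COUNT(*) FROM documents")->fetchColumn(); // first read opens the read view

    // Another connection commits a change (autocommit, like a bare mysql_query() call).
    $writer->exec("UPDATE documents SET title = 'new title' WHERE id = 42");

    // Under READ COMMITTED this sees 'new title'; under the default REPEATABLE READ,
    // the same SELECT inside the open transaction would still return the old value.
    echo $reader->query("SELECT title FROM documents WHERE id = 42")->fetchColumn();
    $reader->commit();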
I've got a site that requires manual creation of the database tables on install. At the moment they are saved (along with any initial data) in a collection of .sql files.
I've tried to auto-create them using exec() with the MySQL CLI, and while it works on a few platforms it's fairly flaky and I don't really like doing it this way; plus it is hard to debug errors and is far from bulletproof (especially if the MySQL executable isn't in the system path).
Is there a better way of doing this? The MySQL query() command only allows one SQL statement per call (which is the sticking point).
I've heard MySQLi may solve some of these issues. I am fairly invested in the original MySQL library, but would be willing to switch provided it's stable, compatible, commonly supported on a standard server build, and an improvement in general.
Failing this, I'd probably be willing to do some sort of creation from a PHP array/data structure, which is arguably cleaner as it would be able to update tables to match the schema in situ. I am assuming this may be a problem that has already been solved, so any links to example implementations with pros/cons would be useful!
Thanks in advance for any insight.
Apparently you can pass 65536 (the CLIENT_MULTI_STATEMENTS flag) as the client flag when connecting to the database to allow multiple queries, i.e. making use of ; to separate statements in one SQL string.
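If you are open to switching, MySQLi has this built in: multi_query() accepts a whole dump with statements separated by ;. A quick sketch (file name and credentials are placeholders):

    <?php
    $mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');

    if ($mysqli->multi_query(file_get_contents('schema.sql'))) {
        // Drain every result set so errors in later statements are surfaced
        // and the connection is usable again afterwards.
        do {
            if ($result = $mysqli->store_result()) {
                $result->free();
            }
        } while ($mysqli->more_results() && $mysqli->next_result());
    }

    if ($mysqli->errno) {
        die('Import failed: ' . $mysqli->error);
    }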
You could also just read in the contents of the SQL files, explode by ; if necessary and run the queries inside a transaction to make sure all queries execute properly.
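A naive sketch of that explode-and-run approach (it assumes the dump contains no literal ; inside string values or stored routines; also note that DDL such as CREATE TABLE causes an implicit commit in MySQL, so the transaction really only protects the data statements):

    <?php
    $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $statements = array_filter(array_map('trim',
        explode(';', file_get_contents('schema.sql'))));

    $pdo->beginTransaction();
    try {
        foreach ($statements as $statement) {
            $pdo->exec($statement);
        }
        $pdo->commit();
    } catch (PDOException $e) {
        $pdo->rollBack();
        die('Import failed on "' . $statement . '": ' . $e->getMessage());
    }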
Another option would be to have a look at Phing and dbdeploy to manage databases.
If you're using this to migrate data between systems, consider using the LOAD DATA INFILE syntax (http://dev.mysql.com/doc/refman/5.1/en/load-data.html) after having used SELECT ... INTO OUTFILE (http://dev.mysql.com/doc/refman/5.1/en/select.html)
You can run the schema creation/update commands via the standard mysql_* PHP functions. And if the query() command as you call it will allow only one statement, just call it many times.
I really don't get why you require everything to be in the same call.
You should check for errors after each statement and take corrective action if it fails (unless you are using InnoDB, in which case you can wrap all statements in a transaction and roll back if one fails).
I have a MySQL trigger that logs every time a specific table is updated.
Is there a way to also log WHICH PHP SCRIPT triggered it (without modifying each PHP script, of course; that would defeat my purpose)?
Also, is there a way to log what was the SQL statement right before the UPDATE that triggered it?
Short answers: no and no. Sorry.
What are you trying to achieve? Perhaps there's another way....
No, but you can get some more specific direction.
First, if you're using persistent connections, turn them off. This will make your logs easier to use.
Second, since it sounds like you have multiple code bases accessing the same database, create a different user for each code base with exactly the same rights and make each code base log in with a different user. Now when you look at the log, you can see which application is doing what.
Third, if you have the query log on, then the UPDATE immediately preceding the trigger will be the UPDATE that caused the trigger.
Fourth, if your apps use any sort of encapsulation for the MySQL connection, it should be trivial to modify it to write the call stack to a file at the time a query is sent to the database.
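A sketch of that fourth point, assuming queries already funnel through one wrapper function (the function name and log path are made up):

    <?php
    function run_query($sql, $link)
    {
        // Collect file:line and function for every frame on the call stack.
        $trace = array();
        foreach (debug_backtrace() as $frame) {
            $trace[] = (isset($frame['file']) ? basename($frame['file']) : '?')
                     . ':' . (isset($frame['line']) ? $frame['line'] : '?')
                     . ' ' . $frame['function'] . '()';
        }

        // Append the SQL and the stack to a plain text file.
        file_put_contents('/var/log/app/sql-trace.log',
            date('c') . "\n" . $sql . "\n" . implode("\n", $trace) . "\n\n",
            FILE_APPEND);

        return mysql_query($sql, $link);
    }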
I've read through a few of the answers and the comments. I had one idea that would be useful only if your queries are passing through a single point - for example, if you have a database class that all queries are executed through.
If that is the case, you could possibly add a comment to the query itself. The comment would include the function call trace, and would be added to the query as an SQL comment.
Next, you would turn query logging on and be able to see where each query is getting called from in the log file.
If your queries do not pass through a single point, you may be out of luck.
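If they do pass through a single point, a sketch of the comment idea might look like this (the wrapper name is made up); with the general query log on, the caller then shows up right next to each statement:

    <?php
    function db_query($sql)
    {
        // debug_backtrace()[0] holds the file and line this wrapper was called from.
        $trace  = debug_backtrace();
        $caller = basename($trace[0]['file']) . ':' . $trace[0]['line'];

        // Strip any '*/' so the caller string cannot break out of the comment.
        $caller = str_replace('*/', '', $caller);

        // The comment travels with the query, so it appears in the general query
        // log and in SHOW PROCESSLIST.
        return mysql_query('/* ' . $caller . ' */ ' . $sql);
    }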
One final suggestion would be to take a look at MySQL Proxy. I have not used it much but it is designed to do intermediate processing of queries. However, I still think you would need to modify your PHP scripts to pass additional information.
Is there any way to make a PDO object throw an error if a query takes too long? I have tried PDO::ATTR_TIMEOUT to no effect.
I'd like a way to have a query throw an error if it runs for longer than a certain amount of time. This is not something that I can do in the database, i.e. no maintenance jobs running on the DB or anything.
I'm not sure what you mean by "This is not something that I can do in the database", but I would suggest that you have the person administering the database set up an Oracle profile to limit this on the database side. There are parameters such as CPU_PER_CALL and LOGICAL_READS_PER_CALL that can cap queries. The profile can be applied only to specific users if desired.
I'm not sure if you can do this in Oracle, but I'm going to say it's not possible to do this within PHP, since PHP issues the query to Oracle to be run and then waits for Oracle's response. It may be possible to modify the PDO extension to support this, but you would need to modify the extension code (the actual C code), as there probably isn't any way to do this in just PHP.