Is there any way to make a PDO object throw an error if a query takes too long? I have tried PDO::ATTR_TIMEOUT to no effect.
I'd like a way to have a query throw an error if it is running for longer than a certain amount of time. This is not something that I can do in the database, i.e., no maintenance jobs running on the DB or anything.
I'm not sure what you mean by "This is not something that I can do in the database", but I would suggest that you have the person administering the database set up an Oracle profile to limit this on the database side. There are parameters such as CPU_PER_CALL and LOGICAL_READS_PER_CALL that can cap queries. The profile can be applied only to specific users if desired.
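For example, a rough sketch of the DBA-side setup (the profile and user names here are made up, and sensible limits depend on your environment):

-- Resource limits are only enforced while RESOURCE_LIMIT is TRUE.
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

-- Hypothetical profile capping per-call CPU (hundredths of a second)
-- and logical reads (database blocks).
CREATE PROFILE capped_queries LIMIT
  CPU_PER_CALL 3000
  LOGICAL_READS_PER_CALL 100000;

-- Apply it only to the application's reporting user.
ALTER USER report_app PROFILE capped_queries;

When a capped query blows its limit, Oracle aborts the call with an error (ORA-02393 in the CPU case), which PDO should then surface like any other failed query.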
I'm not sure if you can do this in Oracle but I'm going to say it's not possible to do this within PHP since PHP is issuing the query to Oracle to be run and then is waiting for Oracle's response back. It may be possible to modify the PDO extension to support this, but you would need to modify the extension code (the actual C code) as there probably isn't any way to do this in just PHP.
I have a very strange problem, that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, all with proper indexes.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [▼
0 => "44087"
]
"time" => 1015.2
The problem is, if I run the same query in the database console or phpMyAdmin, it takes approximately 20 milliseconds.
I do not understand how that is possible, since I am using the same database, the same query, the same computer and the same connection to the database.
What can be the reason?
PHPMyAdmin will automatically add LIMIT for you.
This is because PHPMyAdmin will always by default paginate your query.
In your Laravel/Eloquent query, you are loading all 30k records in one go, which must take time.
To remedy this, try paginating or chunking your query.
The total will take long, yes, but the chunks themselves will be very quick.
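A rough sketch of both approaches with the query builder (the Orders table is from the question, but the Id ordering column and the page/chunk sizes are just assumptions):

// Paginate: only pulls one page of rows per request.
$orders = DB::table('Orders')
    ->where('ClientId', $id)
    ->paginate(50);

// Chunk: processes the full set, but keeps only 500 rows in memory at a time.
DB::table('Orders')
    ->where('ClientId', $id)
    ->orderBy('Id')
    ->chunk(500, function ($orders) {
        foreach ($orders as $order) {
            // handle each row
        }
    });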
I would try debugging the queries with the Debugbar to see how much time each one takes and which is the slowest. It's very easy to install and use: https://github.com/barryvdh/laravel-debugbar
I think you are also interested in DB administration; reading up on it can give you some ideas. Good luck.
There are several issues here. The first is how Laravel works. Laravel only loads the services and classes that are actually used during your script. This is done to conserve resources, since PHP is meant to be run as a CGI script rather than a long-running process. As a result, your timing might include the connection setup step rather than just the query execution. For a more reliable result, execute some other query before timing your simple query.
There's another side to that behavior. In a long-running process, like a job runner, you ought not to change service parameters, as that can cause undesired behavior and let your parameter changes spill over into other jobs. For example, if you provide an SMTP login feature, you ought to reset the email sender credentials after sending the email; otherwise you will run into an issue where a user who doesn't use that feature sends an email as another user who does. This mistake comes from assuming that services are reloaded every time a job is executed, which is the behavior when handling HTTP requests.
Second, as some other posters pointed out, you're not using LIMIT.
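Coming back to the first point, a quick sketch of how to separate connection setup from query time (the throwaway warm-up query is arbitrary):

// Warm up the connection so its setup cost is not counted.
DB::select('select 1');

// Now time only the query itself.
$start = microtime(true);
$orders = DB::select('select * from Orders where ClientId = ?', [$id]);
$elapsedMs = (microtime(true) - $start) * 1000;
// dump or log $elapsedMs and compare it with the reported 1015.2 ms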
I'm almost sure this is due to phpMyAdmin adding a LIMIT, which is what you are seeing in the page output.
If you look at the top of the phpMyAdmin page, you will see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should have the same performance when you add the limit to your query.
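For a fair comparison, run something like this from Laravel (25 mirrors a typical phpMyAdmin page size; yours may differ):

$orders = DB::select('select * from Orders where ClientId = ? limit 25', [$id]);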
Enable the MySQL query log (see "How to enable MySQL Query Log?"), then:
1) Run the query through phpMyAdmin.
2) See which queries actually arrive at MySQL.
3) Run the app.
4) See which queries actually arrive at MySQL.
5) Tell us what the extra ones were that slow things down.
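If you go this route, a minimal sketch of turning the general query log on at runtime (the file path is just an example; on older MySQL versions you may have to configure this in my.cnf and restart instead):

-- Send the general log to a file and switch it on.
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';

-- Confirm the settings.
SHOW VARIABLES LIKE 'general_log%';

-- Turn it off again afterwards; it records every statement.
SET GLOBAL general_log = 'OFF';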
A query should run just as fast in phpMyAdmin as from any application; if it doesn't, use an EXPLAIN statement to see more details about the query (a sketch follows at the end of this answer).
The discrepancy can have many causes other than MySQL itself, for example:
The PHP script itself may have functions that cause slow loading.
Try checking the server error.log; maybe there are errors in those functions.
phpMyAdmin may also use a different MySQL connection function than Laravel; check which extension is used for the connection, as it may not suit the PHP version you are using, and I think this is the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, in my experience mysql_connect was much faster than the PDO extension on PHP < 5.6, but the cause was always the PHP functions in the script.
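As a minimal sketch of the EXPLAIN suggestion above, run from the MySQL console (the literal client id is taken from the question's bindings):

EXPLAIN SELECT * FROM Orders WHERE ClientId = 44087;
-- Check the "key" column of the output: if the ClientId index is not listed,
-- the query is scanning the whole table instead of using the index.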
I have an issue where an instance of Solr is querying my MySQL database to refresh its index immediately after an update is made to that database, but the Solr query is not seeing the change made immediately prior.
I imagine the problem has to be something like Solr is using a different database connection, and somehow the change is not being "committed" (I'm not using transactions, just a call to mysql_query) before the other connection can see it. If I throw a sufficiently long sleep() call in there, it works most of the time, but obviously this is not acceptable.
Is there a PHP or MySQL function that I can call to force a write/update/flush of the database before continuing?
You might make Solr's connection use SET TRANSACTION ISOLATION LEVEL READ COMMITTED to get a more prompt view of updated data.
You should be able to do this with the transactionIsolation property of the JDBC URL.
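For reference, the MySQL-side statements look like this; how you attach them to Solr's connection depends on its JDBC configuration:

-- For the current connection only:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Or as the server-wide default for all new connections:
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;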
I am developing a Codeigniter (2.0.2) Application, which will utilise a Master database for all write operations (INSERT/UPDATE/DELETE) and a read replica for all read operations (SELECT).
Now I know I can access two different database objects within the code to route individual requests to the right database server, but I'm thinking there has to be a better, more automated way. I'll be using MySQL and Active Record, and I also want to build in Memcache checking - although it won't be used immediately, I'd like the option to be there for the future, built in at this stage.
I'm thinking if its possible to add a hook/library of some kind to intercept the $this->db->query so that the following happens:
1) SQL Query received
2) Check if SELECT query
2a) If SELECT, see if Memcache is active, if so encode SQL and check Memcache for response.
2b) If no memcache response, or Memcache is not active, execute query as normal through READ MySQL server.
3) Query was NOT select, so execute query as normal through the WRITE MySQL server.
4) Return response.
I'm sure that, looking at this, it should be quite simple to do, but no matter how I look at it I'm just not seeing a potential answer - but there's got to be one! Can anyone help/assist?
In addition, I also want the ability to log all write SQL commands for troubleshooting; presumably the best way is to introduce a step 3a) Write SQL command to plain text file ... into the above scheme. I don't believe MySQL actually logs the non-SELECT queries in any way ... does it?
That type of behavior is a little bit beyond the normal scope of CI. Unfortunately, your best bet is to manually extend the database drivers, specifically overriding simple_query or _execute (simple_query is a wrapper around _execute which simply ensures initialization). That is really the only place where you can guarantee catching all of the queries and branching the logic accordingly. (You may also want to override close, as that is the cleanup method.)
(Personally, I would have the SELECT DB load a secondary DB into itself and just call $write_db->simple_query conditionally; that seems like it would be the least trouble.)
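A very rough sketch of that override idea (the class wiring, the $write_db property and the log path are all assumptions; CI 2.x needs extra work to actually load a custom driver class, and the Memcache step is only hinted at in a comment):

class MY_DB_mysql_driver extends CI_DB_mysql_driver {

    public $write_db;   // a second, write-only connection, loaded elsewhere

    function simple_query($sql)
    {
        if (preg_match('/^\s*SELECT/i', $sql)) {
            // Optionally check Memcache here (e.g. keyed on md5($sql))
            // before falling through to the read replica.
            return parent::simple_query($sql);
        }

        // Non-SELECT: log it, then send it to the write master.
        file_put_contents('/tmp/write-queries.log', $sql . PHP_EOL, FILE_APPEND);
        return $this->write_db->simple_query($sql);
    }
}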
I've got a site that requires manual creation of the database tables on install. At the moment they are saved (along with any initial data) in a collection of .sql files.
I've tried to auto-create them using exec() with the MySQL CLI, and while it works on a few platforms it's fairly flaky. I don't really like doing it this way; it is hard to debug errors and far from bulletproof (especially if the MySQL executable isn't in the system path).
Is there a better way of doing this? The MySQL query() command only allows one SQL statement per call (which is the sticking point).
I've heard MySQLi may solve some of these issues. I am fairly invested in the original MySQL library, but I would be willing to switch provided it's stable, compatible, commonly supported on a standard server build, and an improvement in general.
Failing this, I'd probably be willing to do some sort of creation from a PHP array/data structure - which is arguably cleaner, as it would be able to update tables to match the schema in situ. I am assuming this may be a problem that has already been solved, so any links to example implementations with pros/cons would be useful!
Thanks in advance for any insight.
Apparently you can pass 65536 as a client flag when connecting to the database to allow multiple queries, e.g. making use of ; in one SQL string.
You could also just read in the contents of the SQL files, explode on ; if necessary, and run the queries inside a transaction to make sure they all execute properly.
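A rough sketch of that approach with the legacy mysql_* functions the question is using (credentials and file name are placeholders, and a naive explode on ; will break if your seed data itself contains semicolons):

$link = mysql_connect('localhost', 'user', 'password');
mysql_select_db('mydb', $link);

$sql = file_get_contents('schema.sql');
$statements = array_filter(array_map('trim', explode(';', $sql)));

mysql_query('START TRANSACTION', $link);
foreach ($statements as $statement) {
    if (!mysql_query($statement, $link)) {
        mysql_query('ROLLBACK', $link);
        die('Install failed: ' . mysql_error($link));
    }
}
mysql_query('COMMIT', $link);
// Note: DDL statements such as CREATE TABLE commit implicitly in MySQL,
// so the rollback really only protects the initial data inserts.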
Another option would be to have a look at Phing and dbdeploy to manage databases.
If you're using this to migrate data between systems, consider using the LOAD DATA INFILE syntax (http://dev.mysql.com/doc/refman/5.1/en/load-data.html) after having used SELECT ... INTO OUTFILE (http://dev.mysql.com/doc/refman/5.1/en/select.html)
You can run the schema creation/update commands via the standard mysql_* PHP functions. And if the query() command, as you call it, only allows one statement, just call it many times.
I really don't get why you require everything to be in the same call.
You should check for errors after each statement and take corrective actions if it fails (unless you are using InnoDB, in which case you can wrap all statements in a transaction and rollback if it fails.)
I've seen this question around the internet (here and here, for example), but I've never seen a good answer. Is it possible to find the length of time a given MySQL query (executed via mysql_query) took via PHP?
Some places recommend using PHP's microtime function, but this seems like it may be inaccurate. The mysql_query call may be bogged down by network latency, or a sluggish system which isn't responding to your query quickly, or some other unrelated cause. None of these are directly related to the quality of your query, which is the only thing I really want to test here. (Please mention in the comments if you disagree!)
My answer is similar, but varied. Record the time before and after the query, but do it within your database query class. Oh, you say you are using mysql_query directly? Well, now you know why you should use a class wrapper around those raw PHP database functions (pardon the snark). Actually, one is already built, called PDO:
http://us2.php.net/pdo
If you want to extend the functionality to do timing around each of your queries... extend the class! Simple enough, right?
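A minimal sketch of that idea using composition instead of inheritance (the class and property names are my own):

class TimedDb
{
    private $pdo;
    public $timings = array();

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function query($sql, array $params = array())
    {
        // Time the prepare/execute round trip and keep a record of it.
        $start = microtime(true);
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute($params);
        $this->timings[] = array('sql' => $sql, 'seconds' => microtime(true) - $start);
        return $stmt;
    }
}

$db = new TimedDb(new PDO('mysql:host=localhost;dbname=test', 'user', 'pass'));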
If you are only checking the quality of the query itself, then remove PHP from the equation. Use a tool like the MySQL Query Browser or SQLyog.
Or if you have shell access, just connect directly. Any of these methods will be superior in determining the actual performance of your queries.
At the PHP level, you would pretty much need to record the time before and after the query.
If you only care about the query performance itself, you can enable the slow query log on your MySQL server: http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html That will log all queries that take longer than a specified number of seconds.
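On reasonably recent MySQL versions the slow query log can be switched on at runtime (on 5.0, which the linked manual page covers, it has to be set in my.cnf and the server restarted); the path and threshold below are just examples:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second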
If you really need query information maybe you could make use of SHOW PROFILES:
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
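A quick sketch of a profiling session in the MySQL client (the table is hypothetical, and the query number comes from the SHOW PROFILES output):

SET profiling = 1;
SELECT * FROM your_table WHERE some_column = 'some_value';
SHOW PROFILES;                  -- lists recent queries with their durations
SHOW PROFILE FOR QUERY 1;       -- per-stage breakdown for query #1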
Personally, I would use a combination of microtime-ing, the slow query log, mytop, and analyzing problem queries with the MySQL client (command line).