Find time to execute MySQL query via PHP

I've seen this question around the internet (here and here, for example), but I've never seen a good answer. Is it possible to find the length of time a given MySQL query (executed via mysql_query) took via PHP?
Some places recommend using PHP's microtime function, but this seems like it may be inaccurate. The mysql_query call may be bogged down by network latency, or by a sluggish system that isn't responding to your query quickly, or by some other unrelated cause. None of these are directly related to the quality of your query, which is the only thing I really want to test here. (Please mention in the comments if you disagree!)

My answer is similar, but varied. Record the time before and after the query, but do it within your database query class. Oh, you say you are using mysql_query directly? Well, now you know why you should use a class wrapper around those raw PHP database functions (pardon the snark). Actually, one is already built, called PDO:
http://us2.php.net/pdo
If you want to extend the functionality to do timing around each of your queries... extend the class! Simple enough, right?
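To make "extend the class" concrete, here is a minimal sketch of a PDO subclass that times each call to query(). The class name TimedPDO, the $queryLog property, and the connection details are all invented for this example, and the override signature is simplified, so newer PHP versions may complain about it:

    <?php
    // Minimal sketch: a PDO subclass that records how long each query takes.
    class TimedPDO extends PDO
    {
        public $queryLog = array();

        public function query($sql)   // simplified signature for illustration
        {
            $start  = microtime(true);        // wall-clock time before the query
            $result = parent::query($sql);    // run the query as usual
            $this->queryLog[] = array(
                'sql'     => $sql,
                'seconds' => microtime(true) - $start,
            );
            return $result;
        }
    }

    // Usage (placeholder credentials):
    $db = new TimedPDO('mysql:host=localhost;dbname=test', 'user', 'password');
    $db->query('SELECT COUNT(*) FROM some_table');
    print_r($db->queryLog);

The same idea applies to prepare()/execute() if you use prepared statements. Note this still measures wall-clock time, so network latency is included.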

If you are only checking the quality of the query itself, then remove PHP from the equation. Use a tool like the MySQL Query Browser or SQLyog.
Or if you have shell access, just connect directly. Any of these methods will be superior for determining the actual performance of your queries.

At the PHP level you would pretty much need to record the time before and after the query.
If you only care about the query performance itself, you can enable the slow query log in your MySQL server: http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html That will log all queries that take longer than a specified number of seconds.
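On servers where you have the SUPER privilege (and MySQL 5.1 or later, where these variables are dynamic), the slow query log can be switched on at runtime without editing my.cnf. A rough sketch using the same mysql_* functions as the question; credentials are placeholders:

    <?php
    // Sketch: enable the slow query log at runtime (MySQL 5.1+, needs SUPER).
    $link = mysql_connect('localhost', 'admin_user', 'password');

    mysql_query("SET GLOBAL slow_query_log = 'ON'", $link);
    // Log every statement that takes longer than 1 second.
    mysql_query('SET GLOBAL long_query_time = 1', $link);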

If you really need query information maybe you could make use of SHOW PROFILES:
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
Personally, I would use a combination of microtime-ing, the slow query log, mytop, and analyzing problem queries with the MySQL client (command line).
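For completeness, a rough sketch of SHOW PROFILES driven from PHP (profiling is per-connection, so everything must run on the same link; the DSN and table name are placeholders):

    <?php
    // Sketch: server-side timing with SHOW PROFILES (MySQL 5.0.37+).
    $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');

    $pdo->exec('SET profiling = 1');
    $pdo->query('SELECT * FROM some_table WHERE some_column = 123');

    // Each row contains Query_ID, Duration (seconds) and the query text.
    foreach ($pdo->query('SHOW PROFILES') as $row) {
        printf("%s  %.6f  %s\n", $row['Query_ID'], $row['Duration'], $row['Query']);
    }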

Related

Modifying PDO script to use functions, more efficient and less sequential

I've spent a few days making a script that essentially takes some data from a DB2 system and creates new records for it in a MySQL system. Along the way, it does some checks, and I use looping to base inserts or updates on those conditions.
This script works: it returns what I expect and inserts/updates as expected. I've tested the deletion process, updates, inserts, whether records are expired or not... basically every function of this script I have tested thoroughly.
I feel like it's not as fast as it could be; it's probably also more sequential and redundant than it should be.
I'm not used to working with PDO in a script like this, but I'm wondering what I could do here to improve performance/speed and reduce redundancy. I know some of the logic may 'seem' redundant, but the logic is exactly where it needs to be; I'm just wondering if I could/should use functions to reduce calls or loops.
Any help or advice is greatly appreciated.
This is tricky, and I will get dragged over the hot coals for this: 1) you may want to not use PDO at all if you are sure that there is no risk of direct injection attack; 2) I would also look at implementing this sort of feature via the PHP CLI, i.e. via a cron job or a curl trigger.
In general PDO is a very expensive way to run a query. Running a simple query string through mysqli_query() would be roughly 2.5x more efficient.
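Without seeing the script it is hard to be specific, but one common win in this kind of loop is to prepare each INSERT/UPDATE once and re-execute it with new values, wrapped in a single transaction, instead of building a new statement on every iteration. A hedged sketch; the table, columns, and $rowsFromDb2 variable are all invented for this example:

    <?php
    // Sketch: prepare once, execute many times inside the loop.
    $pdo = new PDO('mysql:host=localhost;dbname=target', 'user', 'password');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $insert = $pdo->prepare(
        'INSERT INTO records (legacy_id, name, status) VALUES (:legacy_id, :name, :status)'
    );

    $pdo->beginTransaction();              // one transaction instead of one per row
    foreach ($rowsFromDb2 as $row) {       // $rowsFromDb2: data already fetched from DB2
        $insert->execute(array(
            ':legacy_id' => $row['ID'],
            ':name'      => $row['NAME'],
            ':status'    => $row['STATUS'],
        ));
    }
    $pdo->commit();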

PHP with MSSQL performance

I'm developing a project where I need to retrieve HUGE amounts of data from an MSSQL database and process that data. The data retrieval comes from 4 tables, 2 of them with 800-1000 rows, but the other two with 55000-65000 rows each.
The execution time wasn't tolerable, so I started to rewrite the code, but I'm quite inexperienced with PHP and MSSQL. At the moment I'm running PHP at localhost:8000, generating the server with "php -S localhost:8000".
I think this is one of my problems: a poor server for a huge amount of data. I thought about XAMPP, but I need a server where I can install the MSSQL drivers without problems in order to use those functions.
I cannot change MSSQL for MySQL or make other changes like that; the company wants it that way...
Can you give me some advice on how to improve the performance? Any server that I can use to improve the PHP execution? Thank you very much in advance.
The PHP execution should be the least of your concerns. If it is, most likely you are going about things in the wrong way. All the PHP should be doing is running the SQL query against the database. If you are not using PDO, consider it: http://php.net/manual/en/book.pdo.php
First look to the way your SQL query is structured, and how it can be optimised. If in complete doubt, you could try posting the query here. Be aware that if you can't post a single SQL query that encapsulates your problem you're probably approaching the problem from the wrong angle.
I am assuming from your post that you do not have recourse to alter the database schema, but if so that would be the second course of action.
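For reference, a minimal sketch of connecting to SQL Server through PDO using the pdo_sqlsrv driver (on Linux, pdo_dblib is the usual alternative); the DSN, credentials, and table name are placeholders:

    <?php
    // Sketch: PDO connection to SQL Server via the pdo_sqlsrv driver.
    $pdo = new PDO('sqlsrv:Server=localhost;Database=mydb', 'username', 'password');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Let SQL Server do the heavy lifting and fetch only what you need.
    $stmt = $pdo->query('SELECT TOP 100 * FROM big_table ORDER BY id');
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);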
Try to do as much data processing in SQL Server as possible. Don't do joins or other data processing in PHP that can be done in the RDBMS.
I've seen PHP code that retrieved data from multiple tables and matched rows based on several conditions. That is just one example of misuse.
Also try to handle data in sets in SQL (be it MS* or My*) and avoid, if possible, row-by-row processing. The optimizer will produce a much more performant plan.
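To make the "sets, not rows" point concrete: instead of pulling two tables into PHP and matching rows in nested loops, ask the server for the joined result directly. A hedged sketch with invented table and column names, assuming $pdo is an existing PDO connection:

    <?php
    // Sketch: let the database do the join instead of matching rows in PHP.
    $sql = 'SELECT o.id, o.total, c.name
            FROM orders AS o
            INNER JOIN customers AS c ON c.id = o.customer_id
            WHERE o.created_at >= :since';

    $stmt = $pdo->prepare($sql);
    $stmt->execute(array(':since' => '2013-01-01'));

    foreach ($stmt as $row) {
        // Each row already combines order and customer data; no PHP-side matching.
    }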
This is a small database, really. My advice:
- Use paging for the tables and fetch the data in portions (by parts)
- Use indexes on the tables
- Try to find a more powerful server. Hosting companies often put thousands of users' databases on one database server, and the speed is very slow. I suffered from this and finally bought a dedicated server.

How expensive an operation is connecting to a Mysql database?

In certain functions of the code, PHP will execute hundreds or in some cases thousands of queries on the same tables using a loop. Currently, it creates a new database connection for each query. How expensive is that operation? Would I see a significant speed increase by reusing the same connection? It could take quite a bit of refactoring to change this behavior and reuse the same connection.
The php uses mysql_connect to connect to the database.
Just based on what I've said here, are there other obvious optimizations that you would recommend (I've read about locking tables for example...)?
EDIT:
My question is more about the benefit of using a single connection, not how to avoid using more than one.
The documentation for mysql_connect states:
If a second call is made to mysql_connect() with the same arguments, no new link will be established, but instead, the link identifier of the already opened link will be returned.
So, unless you're connecting with different credentials, changing that part of your code will not affect performance.
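In code, reusing the connection just means connecting once before the loop and passing the same link resource to every query. A sketch using the same old mysql_* API as the question (long deprecated; $ids and the table name are invented for this example):

    <?php
    // Sketch: connect once, reuse the same link for every query in the loop.
    $link = mysql_connect('localhost', 'user', 'password');
    mysql_select_db('mydb', $link);

    foreach ($ids as $id) {                      // $ids: assumed list of keys
        $result = mysql_query(
            'SELECT * FROM items WHERE id = ' . (int) $id,
            $link                                // same connection every time
        );
        // ... process $result ...
    }

    mysql_close($link);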
I use Zend_Framework and my database profiling shows that the connection itself takes nearly 10x longer than most of my queries. I have two different databases that I connect to, and only connect once to each for each request.
I'd say reconnecting for every query is poor design, but the question of refactoring is more complex than that. Questions that need to be asked:
- Are there current performance problems?
- Have you done code profiling to narrow down where the performance issues are occurring?
- How much time will be required for this refactoring? Take into account the testing involved, not just coding time.
The answer to the original question should be obvious. If it's not obvious to you, then it should still be obvious how to find out for yourself how much impact it has.
are there other obvious optimizations
No, because you've not provided any details of the table structure or the queries you are running.

Running a list of MySQL queries without using exec()

I've got a site that requires manual creation of the database tables on install. At the moment they are saved (along with any initial data) in a collection of .sql files.
I've tried to auto-create them using exec() with the MySQL CLI, and while it works on a few platforms it's fairly flaky. I don't really like doing it this way; it is hard to debug errors and far from bulletproof (especially if the MySQL executable isn't in the system path).
Is there a better way of doing this? The MySQL query() command only allows one SQL statement per query (which is the sticking point).
I've heard MySQLi may solve some of these issues. I am fairly invested in the original MySQL library, but would be willing to switch provided it's stable, compatible, commonly supported on a standard server build, and an improvement in general.
Failing this, I'd probably be willing to do some sort of creation from a PHP array/data structure - which is arguably cleaner, as it would be able to update tables to match the schema in situ. I am assuming this may be a problem that has already been solved, so any links to example implementations with pros/cons would be useful!
Thanks in advance for any insight.
Apparently you can pass 65536 (the CLIENT_MULTI_STATEMENTS client flag) when connecting to the database to allow multiple queries, i.e. making use of ; in one SQL string.
You could also just read in the contents of the SQL files, explode on ; if necessary, and run the queries inside a transaction to make sure all queries execute properly (a sketch of that approach follows below).
Another option would be to have a look at Phing and dbdeploy to manage databases.
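A rough sketch of the "read the file and split on ;" approach mentioned above (naive splitting breaks if a literal semicolon appears inside a string or a stored-routine body, so treat it as a starting point; the file name and DSN are placeholders):

    <?php
    // Sketch: run the statements from a .sql file one by one inside a transaction.
    $pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'password');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $statements = array_filter(array_map('trim', explode(';', file_get_contents('schema.sql'))));

    $pdo->beginTransaction();
    try {
        foreach ($statements as $sql) {
            $pdo->exec($sql);
        }
        $pdo->commit();
    } catch (PDOException $e) {
        $pdo->rollBack();
        // Note: in MySQL, DDL such as CREATE TABLE commits implicitly, so the
        // rollback only fully protects data (INSERT) statements.
        throw $e;
    }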
If you're using this to migrate data between systems, consider using the LOAD DATA INFILE syntax (http://dev.mysql.com/doc/refman/5.1/en/load-data.html) after having used SELECT ... INTO OUTFILE (http://dev.mysql.com/doc/refman/5.1/en/select.html)
You can run the schema creation/update commands via the standard mysql_* PHP functions. And if the query() command as you call it will allow only one statement, just call it many times.
I really don't get why you require everything to be in the same call.
You should check for errors after each statement and take corrective actions if it fails (unless you are using InnoDB, in which case you can wrap all statements in a transaction and rollback if it fails.)

Can I prevent long queries in PDO?

Is there any way to make a PDO object throw an error if a query takes too long? I have tried PDO::ATTR_TIMEOUT to no effect.
I'd like a way to have a query throw an error if it runs for longer than a certain amount of time. This is not something that I can do in the database, i.e., no maintenance jobs running on the DB or anything.
I'm not sure what you mean by "This is not something that I can do in the database", but I would suggest that you have the person administering the database set up an Oracle profile to limit this on the database side. There are parameters such as CPU_PER_CALL and LOGICAL_READS_PER_CALL that can cap queries. The profile can be applied only to specific users if desired.
I'm not sure if you can do this in Oracle, but I'd say it's not possible within PHP, since PHP issues the query to Oracle to be run and then waits for Oracle's response. It may be possible to modify the PDO extension to support this, but you would need to modify the extension code (the actual C code), as there probably isn't any way to do this in just PHP.
