I must start by saying that I can't copy the query here, so this is a general question.
I've got a query with several joins that takes 0.9 seconds when run from the mysql CLI. I'm now trying to run the same query on a PHP site and it's taking 8 seconds. There are some other big joins on the site that are obviously slower, but this query is taking much too long. Is there a PHP cache for database connections that I need to increase? Or is this just to be expected?
PHP doesn't really do much with MySQL; it sends a query string to the server and processes the results. The only bottleneck here is if the query string is absolutely vast, or if you're getting a lot of results back - PHP has to buffer and parse them into an object (or array, depending on which mysql_fetch_* function you use). Without knowing what your query or results look like, I can only guess.
(From comments): If we have 30 columns and, say, around a million rows, this will take an age to parse (we later find that it's only 10k rows, so that's OK). You need to rethink how you do things:
See if you can reduce the result set. If you're paginating things, you can use LIMIT clauses.
Do more in the MySQL query; instead of getting PHP to sort the results, let MySQL do it by using ORDER BY clauses.
Reduce the number of columns you fetch by explicitly naming each column instead of SELECT * FROM ... (see the sketch after this list).
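A minimal SQL sketch of those three points, using a hypothetical posts table and column names:
-- instead of: SELECT * FROM posts
SELECT id, title, created_at          -- fetch only the columns you need
FROM posts
WHERE author_id = 42                  -- hypothetical filter
ORDER BY created_at DESC              -- let MySQL do the sorting
LIMIT 0, 25;                          -- paginate instead of returning everything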
Some wild guesses:
The PHP version uses different parameters and variables for each query, so MySQL cannot serve it from the query cache, while the version you type on the MySQL CLI uses the same parameters each time, so MySQL can fetch it from its cache. Try adding SQL_NO_CACHE to your query on the CLI (SELECT SQL_NO_CACHE ...) to see whether it then takes longer.
You are not testing on the same machine? Is the MySQL database you run the PHP query against the same machine as the one you use for the CLI? I mean: you are not testing one on your laptop and the other on some production server, are you?
You are testing over a network: when the MySQL server is not installed on the same host as your PHP app, you will see a MySQL connection that uses "someserver.tld" instead of "localhost" as the database host. In that case PHP needs to connect over the network, while your CLI already has that connection, or connects only locally.
The actual connection is taking a long time. Try to run and time the query from your PHP system a thousand times in a row. Instead of "connect to server, query database, disconnect", you should time it as "connect to server, query database a thousand times, disconnect" (see the sketch below). Some PHP applications connect and disconnect for each and every query, and if your MySQL server is not configured correctly, connecting can take up a large share of the total time.
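A rough timing sketch along those lines, assuming a hypothetical $query string and mysqli credentials; it is only meant to separate connection time from query time, not to be a proper benchmark:
$start = microtime(true);
$link  = mysqli_connect('localhost', 'user', 'pass', 'mydb');  // hypothetical credentials
$afterConnect = microtime(true);

for ($i = 0; $i < 1000; $i++) {
    $result = mysqli_query($link, $query);   // same query, same connection
    mysqli_free_result($result);
}
$afterQueries = microtime(true);
mysqli_close($link);

printf("connect: %.4fs, 1000 queries: %.4fs\n",
       $afterConnect - $start, $afterQueries - $afterConnect);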
How are you timing it?
If the 'long' version is accessed through a php page on a website, could the additional 7.1 seconds not just be the time it takes to send the request and then process and render the results?
How are you connecting? Does the account you're using have a hostname in the grant tables? If you're connecting via TCP, MySQL will have to do a reverse DNS lookup on your IP to figure out whether you're allowed in.
If it's the connection causing this, then do a simple test:
select version();
If that takes 8 seconds, then it's connection overhead. If it returns instantly, then it's PHP overhead in processing the data you've fetched.
The mysql_query function itself should take about the same time as the mysql client. But every extra mysql_fetch_* call will add up.
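A small sketch of that connection-versus-fetch test using the old mysql_* API the answer refers to (hypothetical credentials; the same idea works with mysqli_* or PDO):
$t0 = microtime(true);
$link = mysql_connect('localhost', 'user', 'pass');   // connection overhead
$t1 = microtime(true);
$result = mysql_query('SELECT VERSION()', $link);     // trivial query
$t2 = microtime(true);
while ($row = mysql_fetch_assoc($result)) { }         // fetch overhead
$t3 = microtime(true);

printf("connect: %.4fs, query: %.4fs, fetch: %.4fs\n",
       $t1 - $t0, $t2 - $t1, $t3 - $t2);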
I have a very strange problem that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, with proper indexes on it.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same thing happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [▼
0 => "44087"
]
"time" => 1015.2
The problem is, if I run THE SAME query in the database console or PHPMyAdmin, it takes approximately 20 milliseconds.
I do not understand how is that possible since I am using the same database, same query, same computer and same connection to the database.
What can be the reason?
PHPMyAdmin will automatically add LIMIT for you.
This is because PHPMyAdmin will always by default paginate your query.
In your Laravel/Eloquent query, you are loading all 30k records in one go. It must take time.
To remedy this, try paginating or chunking your query.
The total will take long, yes, but the chunks themselves will be very quick.
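For example, a chunked or paginated version of that query-builder call might look like this (the chunk size and the ordering column are only illustrative):
// Chunking: process 500 orders at a time instead of loading all 30k at once
DB::table('Orders')->where('ClientId', $id)->orderBy('Id')->chunk(500, function ($orders) {
    foreach ($orders as $order) {
        // process each order here
    }
});

// Or pagination: only fetch one page of results
$orders = DB::table('Orders')->where('ClientId', $id)->paginate(25);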
I would try debugging the queries with the Debug Bar to see how much time each one takes and which is taking longer. It's very easy to install and use: https://github.com/barryvdh/laravel-debugbar
I think you are also interested in DB administration; reading up on it can give you some more ideas. Good luck.
There are several issues here. The first one is how Laravel works. Laravel only loads the services and classes that are actually used during your script. This is done to conserve resources, since PHP is meant to run as a CGI script rather than a long-running process. As a result, your timing might include the connection setup step instead of just the query execution. For a more "reliable" result, execute some other query before timing your simple query.
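A minimal sketch of that warm-up idea (the warm-up query is arbitrary; the point is that the connection is already open when you start timing):
DB::select('select 1');                        // warm up: forces the connection to be established

$start = microtime(true);
$orders = DB::select('select * from Orders where ClientId = ?', [$id]);
$elapsed = (microtime(true) - $start) * 1000;  // milliseconds spent on the query itself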
There's another side to that behavior. In a long-running process, like a job runner, you ought not to change service parameters. Doing so can cause undesired behavior and let your parameter changes spill over into other jobs. For example, if you provide an SMTP login feature, you ought to reset the email sender credentials after sending the email; otherwise you will run into an issue where a user who doesn't use that feature sends an email as another user who does. This comes from assuming that services are reloaded every time a job is executed, which is the behavior when running the HTTP part.
Second, you're not using LIMIT, as some other posters have pointed out.
I'm almost sure this is due to PHPMyAdmin adding a LIMIT, which matches what you are seeing in the page output.
If you look at the top of the PHPMyAdmin page you will see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should get the same performance when you add that limit to your query.
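For instance, something like this should be roughly comparable to what PHPMyAdmin actually ran (25 rows per page, as the "Showing rows 0 - 24" line above suggests):
select * from Orders where ClientId = 44087 limit 0, 25;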
How to enable MySQL Query Log?
Run query through phpmyadmin.
See which queries you actually have in MySQL.
Run app.
See which queries you actually have in MySQL.
Tell us which extra queries you find that slow things down.
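If it helps, the general query log can be switched on at runtime like this (requires a privileged account; remember to turn it off again, because it logs every statement):
SET GLOBAL general_log = 'ON';
SET GLOBAL log_output  = 'TABLE';

-- run the app / the phpMyAdmin query, then inspect what actually arrived:
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 50;

SET GLOBAL general_log = 'OFF';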
The query should have the same speed in phpmyadmin as in the application. If it doesn't, try an EXPLAIN statement to see more details about the query.
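For example, using the query from the question:
EXPLAIN select * from Orders where ClientId = 44087;
The output shows whether MySQL is using an index on ClientId or scanning the whole table.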
The cause of this discrepancy may also lie in many things other than MySQL itself, for example:
The PHP script itself may have some functions that cause slow loading.
Check the server's error.log; maybe there are errors in those functions.
Basically, phpmyadmin could be using a different MySQL connection function than Laravel. Check the extension used for the connection; maybe it's not compatible with the PHP version you use, and I think this could be the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, mysql_connect was much faster than the PDO extension on PHP < 5.6 in my experience, but the cause was always in the PHP functions in the script.
I'm sure a lot of developers have run into the dreaded "MySQL has gone away" issue, especially when dealing with long-running scripts such as those reserved for background or cron jobs. This is caused by the connection between MySQL and PHP being dropped. What exactly is the best way to prevent that from happening?
I currently use a custom CDbConnection class with a setActive method straight out of here:
http://www.yiiframework.com/forum/index.php/topic/20063-general-error-2006-mysql-server-has-gone-away/page__p__254495#entry254495
This worked great and has stopped my MySQL gone away issue. Unfortunately, I've been running into a really random issue where, after inserting a new record to the database via CActiveRecord, Yii fails to set the primary key value properly. You end up with a pk value of 0.

I looked into the issue more deeply and was finally able to reproduce it on my local machine. It seems like my custom CDbConnection::setActive() method might be the culprit. When you run the CActiveRecord::save() method, Yii prepares the necessary SQL and executes it via PDO. Immediately after this, Yii uses PDO::lastInsertId() to grab the latest inserted ID and populates your model's PK attribute.

What happens, though, if for whatever reason the initial insert command takes more than a few seconds to complete? This triggers the MySQL ping action in my custom setActive() method, which only waits for a 2 second difference between the current timestamp and the last active timestamp. I noticed that when you do a PDO insert query, followed by a PDO select query, then finally PDO::lastInsertId(), you end up with a last insert id value of 0.
I can't say for certain if this is what's happening on our live servers where the issue randomly occurs but it's been the only way I have been able to reproduce it.
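A stripped-down, hypothetical reproduction of what I believe happens (assuming a test table with an AUTO_INCREMENT primary key; the intermediate SELECT stands in for the ping that setActive() issues):
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$pdo->exec("INSERT INTO items (name) VALUES ('foo')");  // hypothetical table
echo $pdo->lastInsertId();   // e.g. "42"

$pdo->query('SELECT 1');     // the keep-alive / ping query in between
echo $pdo->lastInsertId();   // now "0" - the insert id is reset by the extra statement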
There are actually many reasons for the "server has gone away" error, which are well documented in the MySQL documentation. A couple of common tricks to try are:
Increase the wait_timeout in your my.cnf file. See also innodb_lock_wait_timeout and lock_wait_timeout if your locks need to remain locked for a longer period of time.
The number of seconds the server waits for activity on a noninteractive connection before closing it.
Increase max_allowed_packet in your my.cnf file. Large data packets can trip up the connection and cause it to be closed abruptly.
You must increase this value if you are using large BLOB columns or long strings. It should be as big as the largest BLOB you want to use. The protocol limit for max_allowed_packet is 1GB. The value should be a multiple of 1024; nonmultiples are rounded down to the nearest multiple.
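As a sketch, the relevant my.cnf entries might look something like this (the values are only examples; tune them to your workload):
[mysqld]
wait_timeout       = 28800    # seconds of inactivity before a non-interactive connection is closed
max_allowed_packet = 64M      # must be at least as large as your biggest BLOB / query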
I'm updating an old script that uses the php mysql functions to use mysqli, and I've already found an interesting question (mysqli_use_result() and concurrency), but it doesn't clarify one thing:
Let's say five users connect to a webpage at the same time, and user1 is the first to select data from a huge MyISAM table (15000 forum post records with left joins to the 'users' and 'attachments' tables).
While the PHP script retrieves the result, the other users will not be able to get results. Is that right?
Also, in the same situation, when user1 has fully received its result, an 'UPDATE set view_count = INC(1)' query is sent and the table is locked, I suppose. Will this same query then fail for the other users?
About the article you've quoted: it just means that you should not do this:
mysqli_real_query($link, $query);
$result = mysqli_use_result($link);    // unbuffered result set
// lots of 'client processing'
// table is blocked for updates during this
sleep(10);
$row = mysqli_fetch_assoc($result);
In such situations you are advised to do this instead:
mysqli_real_query($link, $query);
$result = mysqli_store_result($link);  // buffered result set
// lots of 'client processing'
// table is NOT blocked for updates during this
sleep(10);
$row = mysqli_fetch_assoc($result);
The article further says that if a second query is issued after calling mysqli_use_result() and before fetching all the results of the first one, it will fail. This is meant per connection, i.e. per script, so other users' queries won't fail during this.
while the php scripts retrieves the result, the other users will not be able to get results, is that right?
No, this is not right. MySQL supports as many parallel connections as you have configured with max_connections in my.ini. Concurrent reads are handled by the MySQL server; client code does not have to worry about that unless the max connection limit is reached, in which case mysqli_connect() would fail. If your application reaches a point where this happens frequently, you'll in most cases first try to tweak your MySQL config so that MySQL allows more parallel connections. If you hit a hard limit, you'll move to something like replication or MySQL Cluster.
Also using the same situation above, when user1 fully received it's result, an 'UPDATE set view_count = INC(1)' query is sent and the table is locked I suppose, and this same query will fail for the other users?
When there are concurrent reads and writes, this is of course a performance issue. But the MySQL server handles this for you, meaning the client code does not have to worry about it as long as connecting to MySQL works. If you have really high load, you'll mostly use master-slave replication or MySQL Cluster.
and this same query will fail for the other users?
A database server is usually a bit more intelligent than a plain text file.
So, your queries won't fail. They'd wait instead.
Though I wouldn't use mysqli_use_result() at all.
Why not just fetch your results already?
Is there a MySQL statement which provides full details of any other open connection or user? Or an equally detailed status report on MyISAM tables specifically? Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.
What I'm trying to do: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check that there are no pending inserts on those specific tables from any other connection, so it can instead wait until all the data is available. If a table is currently being written to, I'd like to report back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to be able to query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.
Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() something if the only way is a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
Edit: "Transaction" was the wrong word - a language collision. I'm not actually using MySQL transactions. What I meant was currently pending tasks, queries, and requests.
You can issue SHOW FULL PROCESSLIST; to show the active connections.
As for the rest, MySQL doesn't know how many inserts are left or how long they'll take. (And if you're using MyISAM tables, they don't support transactions.) The server has no way of knowing whether your PHP scripts intend to send 10 more inserts or 10000, and if you're doing something like INSERT INTO xxx SELECT ... FROM ..., MySQL doesn't track or expose info on how much is done and how much is left.
You're better off handling this yourself via other tables where you insert/update data about when you started loading the data, track the state, record when it finished, etc.
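A sketch of what such a hand-rolled status table could look like (table and column names are made up for illustration):
CREATE TABLE import_status (
    table_name  VARCHAR(64) NOT NULL PRIMARY KEY,
    started_at  DATETIME    NOT NULL,
    rows_total  INT         NOT NULL,
    rows_done   INT         NOT NULL DEFAULT 0,
    finished_at DATETIME    NULL
);

-- connection one updates its progress as it inserts:
UPDATE import_status SET rows_done = rows_done + 1000 WHERE table_name = 'orders';

-- connection two checks before running its aggregates:
SELECT rows_total, rows_done, started_at FROM import_status WHERE table_name = 'orders';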
If the work is being done on InnoDB tables, you can get full transaction details with SHOW ENGINE INNODB STATUS (SHOW INNODB STATUS on older MySQL versions). It's a huge blob of output, but part of it is the transaction/lock status for each process/connection.
I'm using Ajax, and quite often, if not all the time, the first request times out. In fact, if I wait several minutes before making a new request, I always have this issue, but the subsequent requests are all OK. So I'm guessing that the first request uses a database connection that is already dead. I'm using MySQL.
Any good solution?
Can you clarify:
are you trying to make a persistent connection?
do basic MySQL queries work (e.g. SELECT 'hard-coded' FROM DUAL)
how long does the MySQL query take for your ajax call (e.g. if you run it from a mysql command-line or GUI client.)
how often do you write to the MySQL tables used in your AJAX query?
Answering those questions should help rule out other problems that have nothing to do with making a persistent connection: basic database connectivity, table indexing / slow-running SQL, MySQL cache invalidation, etc.
Chances are that your problem is NOT opening the connection, but actually serving the request.
Subsequent calls are fast because of the MySQL query cache.
What you need to do is look for slow MySQL queries, for example by turning on the slow query log, or by watching the server in real time using mytop or SHOW PROCESSLIST to see if there is a query that takes too long. If you find one, use EXPLAIN to make sure it's properly indexed (see the sketch below).
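For example, the slow query log can be enabled at runtime roughly like this (the 1-second threshold is arbitrary), and a suspect query can then be inspected with EXPLAIN:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;     -- log anything slower than 1 second

-- later, for a query found in the log (hypothetical example):
EXPLAIN SELECT * FROM posts WHERE user_id = 42;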