I have a very strange problem, that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, all with proper indexes on it.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [▼
0 => "44087"
]
"time" => 1015.2
The problem is, if I run THE SAME query inside the database console or PHPMyAdmin, the query takes approximately 20 milliseconds.
I do not understand how that is possible, since I am using the same database, same query, same computer and same connection to the database.
What can be the reason?
PHPMyAdmin will automatically add LIMIT for you.
This is because PHPMyAdmin will always by default paginate your query.
In your Laravel/Eloquent query, you are loading all 30k records in one go, and that has to take time.
To remedy this, try paginating or chunking your query, as in the sketch below.
The total will still take long, yes, but the chunks themselves will be very quick.
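For example, a minimal sketch of chunking with the query builder (the orderBy column is an assumption; chunk() needs a stable ordering):

DB::table('Orders')
    ->where('ClientId', $id)
    ->orderBy('Id') // chunk() requires an explicit ordering; column name assumed
    ->chunk(500, function ($orders) {
        foreach ($orders as $order) {
            // process 500 orders per round trip instead of all 30k at once
        }
    });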
I would try debugging the queries with the Debug Bar to see how much time each one takes and which part is the slowest. It's very easy to install and use: https://github.com/barryvdh/laravel-debugbar
You may also be interested in reading up on DB administration; it can give you some ideas. Good luck.
There are several issues here. The first is how Laravel works. Laravel only loads the services and classes that are executed during your script. This is done to conserve resources, since PHP is meant to be run as a CGI script rather than as a long-running process. As a result, your timing might include the connection setup step, not just the query execution. For a more "reliable" result, execute some other query before timing your simple query.
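A rough sketch of that warm-up idea (the dummy query is only there to force the connection open before measuring):

// Warm-up query: makes Laravel open the connection and boot its services
DB::select('select 1');

$start = microtime(true);
DB::select('select * from Orders where ClientId = ?', [$id]);
// Only the query itself is measured now, not the connection setup
printf('%.1f ms', (microtime(true) - $start) * 1000);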
There's another side to that behavior. In a long-running process, like a job runner, you ought not to change service parameters, because your parameter changes can spill over into other jobs and cause undesired behavior. For example, if you provide an SMTP login feature, you ought to reset the email sender credentials after sending the email; otherwise you will run into an issue where a user who doesn't use that feature sends an email as another user who does. This mistake comes from assuming that services are reloaded every time a job is executed, which is the behavior when running the HTTP part.
Second, you're not using LIMIT, as some other posters have pointed out.
I'm almost sure this is due to the LIMIT that PHPMyAdmin applies, which relates to what you are seeing in the page output.
At the top of the PHPMyAdmin page you see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should see the same performance when you add the limit to your query.
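For example, roughly matching phpMyAdmin's default page size of 25 rows:

// Same query as before, but fetching only the first page of results
DB::select('select * from Orders where ClientId = ? limit 25', [$id]);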
How to enable MySQL Query Log?
Run query through phpmyadmin.
See which queries you actually have in MySQL.
Run app.
See which queries you actually have in MySQL.
Tell us what those extra queries were that slow things down.
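A minimal sketch of enabling the general query log at runtime (the log file path is an assumption; SET GLOBAL needs the SUPER privilege, MySQL 5.1+):

// Log every statement the server receives to a file
DB::statement("SET GLOBAL general_log_file = '/tmp/mysql-general.log'"); // path assumed
DB::statement("SET GLOBAL general_log = 'ON'");
// ...run the query from phpMyAdmin and from the app, then compare the log...
DB::statement("SET GLOBAL general_log = 'OFF'");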
The query should run at the same speed in phpmyadmin or anywhere else; whatever the application is, try using an EXPLAIN statement to see more details about the query.
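For example, a minimal check reusing the query from the question:

// Fetch the query plan; look at the "key" column to confirm the index is used
$plan = DB::select('explain select * from Orders where ClientId = ?', [$id]);
dd($plan);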
The cause of this discrepancy may be due to many reasons other than MySQL, for example:
The PHP script itself may have functions that cause slow loading.
Try checking the server's error.log; maybe there are errors in those functions.
phpMyAdmin could use a different MySQL connection function than Laravel. Check the extension used for the connection; maybe it's not compatible with the PHP version you use, and I think this is the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, mysql_connect was much faster than the PDO extension on PHP < 5.6 in my experience, but the cause was always in the PHP functions in the script.
Related
I'm facing a curious problem with database queries on Laravel.
I've implemented a dynamic query with the query builder, using GET parameters.
I've noticed certain queries were extremely slow, so I started debugging the SQL generated by the query builder and executing the queries as raw SQL statements, using DB::select( $sql ).
I did some research, and I'm aware of the N+1 problem, but that isn't the case here, because for debugging I simply used a controller action with DB::select( $sql ), and it doesn't show me any results. Instead it showed me:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /controller/action.
Reason: Error reading from remote server
The same query, with parameters that don't require a lot of database processing, runs fine using this method...
When I ran the problematic queries on MySQL Workbench, they took between 5 and 10 seconds to show results.
Further on in my debugging, I manually created a PDO object in my action and executed the query. To my surprise, it took the same amount of time to run as in Workbench. I used the hydrate method on the query results and the objects were rendered successfully in the view.
Next, I tried executing the queries with Laravel's PDO object, obtained with DB::getPdo(). The query wasn't processed, leading to the Proxy Error again...
The problem is solved with a custom PDO object, but I don't like the idea of creating my own PDO objects instead of using Laravel's.
I don't understand why this is happening. Could it be related to the database config, or to Laravel's PDO object? Any help is appreciated!
Sometimes this kind of issue is not related to slow or badly built queries but to MySQL settings not being properly managed.
MySQL's default settings are generic. There is a lot there that can be tweaked and fine-tuned to improve performance significantly: the query cache, buffers, memory use, etc. A lot can be done.
Check this article: http://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/
There are also specific tools for this purpose. Some have a nice GUI, but my favourite ones are command-line based, like mytop and MySQLTuner: https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
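As a starting point, you can inspect a few commonly tuned settings before changing anything (a sketch, assuming the Laravel DB facade from the question):

// Dump the query cache and buffer pool settings currently in effect
foreach (DB::select("SHOW VARIABLES LIKE 'query_cache%'") as $row) {
    echo $row->Variable_name, ' = ', $row->Value, "\n";
}
foreach (DB::select("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'") as $row) {
    echo $row->Variable_name, ' = ', $row->Value, "\n";
}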
A very puzzling and exhausting problem of mine:
I developed an API to allow others to draw information from my database. My server collects the POSTed info, writes up a MySQL query, performs the query [$query = mysql_query($string, $connection);], and returns the results (very simple).
The problem is that sometimes (say 1 out of every 5 tries) no info is returned. My server logs say that the resource ($query) is a boolean (and therefore there are no results). My server receives the info from the remote users of the API every single time; the problem seems to be that my queries are sometimes just not being performed...
Why is this happening?
Is it a mysql performance issue? I never seem to have even a hint of a performance issue for queries made on my own page (i.e. not from the API)!
please help...
Your query might be failing. Try doing this:
mysql_query($string, $conn) or die(mysql_error());
If the query generates an error, this will stop the script and display the MySQL error message. Using that error, you can fix your query so that everything will eventually work fine.
By the way, you are using $string, but it might be a better idea to use a name like $builtQuery, because "string" might be confusing if you need to edit the script later on.
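A sketch of that pattern with logging instead of die(), so the API can still return a proper error response ($builtQuery and $connection follow the naming above):

$result = mysql_query($builtQuery, $connection);
if ($result === false) {
    // Log the MySQL error together with the query that caused it
    error_log('MySQL error: ' . mysql_error($connection) . ' in: ' . $builtQuery);
    // ...return an error response here instead of silently sending nothing...
}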
Greetings.
Now I must start by saying, I can't copy the string. This is a general question.
I've got a query with several joins that takes 0.9 seconds when run using the mysql CLI. I'm now trying to run the same query on a PHP site and it's taking 8 seconds. There are some other big joins on the site that are obviously slower, but this query is taking much too long. Is there a PHP cache for database connections that I need to increase? Or is this just to be expected?
PHP doesn't really do much with MySQL; it sends a query string to the server and processes the results. The only bottleneck here is if it's an absolutely vast query string, or if you're getting a lot of results back - PHP has to cache and parse them into an object (or an array, depending on which mysql_fetch_* function you use). Without knowing what your query or results are like, I can only guess.
(From comments): If we have 30 columns and around, say, a million rows, this will take an age to parse (we later find that it's only 10k rows, so that's OK). You need to rethink how you do things; see the sketch after this list:
See if you can reduce the result set. If you're paginating things, you can use LIMIT clauses.
Do more in the MySQL query; instead of getting PHP to sort the results, let MySQL do it by using ORDER BY clauses.
Reduce the number of columns you fetch by explicitly specifying each column name instead of SELECT * FROM ....
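A sketch combining those three tips (table and column names are made up for illustration; $connection as in the surrounding answers):

$sql = 'SELECT id, client_id, total FROM orders ' . // explicit columns, not SELECT *
       'ORDER BY created_at DESC LIMIT 25';         // MySQL sorts; PHP fetches one page
$result = mysql_query($sql, $connection);
while ($row = mysql_fetch_assoc($result)) {
    // handle one row at a time instead of buffering a huge result in PHP
}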
Some wild guesses:
The PHP version uses different parameters and variables for each query, so MySQL cannot cache it, while the version you type on the MySQL CLI uses the same parameters each time, so MySQL can fetch the result from its cache. Try adding SQL_NO_CACHE to your query on the CLI to see if the result then requires more time.
You are not testing on the same machine? Is the MySQL database you run the PHP query against the same one the CLI uses? I mean: you are not testing one on your laptop and the other on some production server, are you?
You are testing over a network: when the MySQL server is not installed on the same host as your PHP app, you will see a MySQL connection that uses "someserver.tld" instead of "localhost" as the database host. In that case PHP needs to connect over the network, while your CLI already has that connection, or connects locally.
The actual connection is taking a long time. Try to run and time the query from your PHP system a thousand times in a row. Instead of "connect to server, query database, disconnect", you should time "connect to server, query database a thousand times, disconnect". Some PHP applications connect and disconnect for each and every query, and if your MySQL server is not configured correctly, connecting can take a gigantic share of the total time.
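A sketch of that thousand-times measurement over a single connection (DSN, credentials and query are placeholders):

$pdo = new PDO('mysql:host=someserver.tld;dbname=app', 'user', 'secret');
$stmt = $pdo->prepare('SELECT SQL_NO_CACHE * FROM Orders WHERE ClientId = ?');
$runs = 1000;
$start = microtime(true);
for ($i = 0; $i < $runs; $i++) {
    $stmt->execute([44087]);
    $stmt->fetchAll();
}
// Connection setup happened once, so this is close to pure query time
printf("avg per query: %.3f ms\n", (microtime(true) - $start) / $runs * 1000);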
How are you timing it?
If the 'long' version is accessed through a php page on a website, could the additional 7.1 seconds not just be the time it takes to send the request and then process and render the results?
How are you connecting? Does the account you're using have a hostname in the grant tables? If you're connecting via TCP, MySQL will have to do a reverse DNS lookup on your IP to figure out if you're allowed in.
If it's the connection causing this, then do a simple test:
select version();
If that takes 8 seconds, then it's connection overhead. If it returns instantly, then it's PHP overhead in processing the data you've fetched.
The mysql_query call itself should take about the same time as the mysql client. But any extra mysql_fetch_* calls will add up.
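A sketch that separates the two timings (host and credentials are placeholders, using the same mysql_* API as above):

$t0 = microtime(true);
$conn = mysql_connect('someserver.tld', 'user', 'secret');
$t1 = microtime(true);
mysql_query('select version()', $conn);
$t2 = microtime(true);
// A slow first number points at connection overhead (e.g. reverse DNS);
// a slow second number points at the query or result processing.
printf("connect: %.1f ms, query: %.1f ms\n", ($t1 - $t0) * 1000, ($t2 - $t1) * 1000);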
First of all, I am using PhpMyAdmin, is this okay or not?
Because when I have the cache disabled and do two queries one after the other, the second query is always faster, so I am thinking maybe there is an internal cache in PhpMyAdmin?
Secondly, is there any way to get the time a query takes into PHP and echo it to the browser? (So I can use PHP instead of phpMyAdmin.)
Thirdly, SHOW STATUS LIKE '%qcache%' gives me this:
Qcache_free_blocks 1
Qcache_free_memory 25154096
Qcache_hits 0
Qcache_inserts 2
Qcache_lowmem_prunes 0
Qcache_not_cached 62
Qcache_queries_in_cache 2
Qcache_total_blocks 6
How come Qcache_not_cached grows by 5 or 10 with every query I make? Shouldn't there be only 1 increase per query?
Also, when I enabled the cache and did a query, Qcache_queries_in_cache increased by 2... I thought it would increase by 1 for every query. Can someone explain?
THEN, when I ran the same query as the one I cached, there was no performance gain at all; the query took as long as without the cache enabled...
Any help here please except for referring to the manual (I have read it already).
Thanks
UPDATE
Here is a typical query I make:
SELECT * FROM `langlinks` WHERE ll_title='Africa'
First of all, I am using PhpMyAdmin, is this okay or not?
I suppose it's better than nothing -- and more user-friendly than a command-line client; but one thing I don't like about phpMyAdmin is that it sends queries you didn't write.
I've already seen phpMyAdmin send queries that were "hurting" a server while the one the user had written was fine, for instance (I don't have the exact example in mind).
Generally speaking, though, I'd say it's "OK" as long as you accept that more requests will be sent: phpMyAdmin displays lots of information (like the list of databases, tables, and so on), and it has to get that information from somewhere!
Shouldn't there only be 1 increase per query?
If you really want to see the impact of your query and nothing else, you'd probably be better off using the command-line mysql client instead of phpMyAdmin: that graphical tool has to send queries to get the information it displays.
The question, actually, is: do you prefer a user-friendly tool, or do you want to monitor only what your query actually does?
In most cases, the answer is "user-friendly tool" -- and that's why phpMyAdmin has so much success ;-)
PhpMyAdmin itself queries and updates status on the MySQL server, so you won't see the counters increment by exactly one in phpMyAdmin.
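To watch the cache without phpMyAdmin's background queries getting in the way, a sketch like this over a single plain PDO connection can help (DSN and credentials are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=wiki', 'user', 'secret');
foreach ([1, 2] as $run) {
    $t = microtime(true);
    $pdo->query("SELECT * FROM `langlinks` WHERE ll_title='Africa'")->fetchAll();
    // The second run should be faster if the query cache kicks in
    printf("run %d: %.1f ms\n", $run, (microtime(true) - $t) * 1000);
}
foreach ($pdo->query("SHOW STATUS LIKE 'Qcache_hits'") as $row) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n"; // should have grown if the cache was used
}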
I've seen this question around the internet (here and here, for example), but I've never seen a good answer. Is it possible to find the length of time a given MySQL query (executed via mysql_query) took via PHP?
Some places recommend using PHP's microtime function, but this seems like it may be inaccurate. The mysql_query call may be bogged down by network latency, or by a sluggish system that isn't responding to your query quickly, or by some other unrelated cause. None of these are directly related to the quality of your query, which is the only thing I really want to test here. (Please mention in the comments if you disagree!)
My answer is similar, but varied. Record the time before and after the query, but do it within your database query class. Oh, you say you are using mysql_query directly? Well, now you know why you should use a class wrapper around those raw PHP database functions (pardon the snark). Actually, one is already built, called PDO:
http://us2.php.net/pdo
If you want to extend the functionality to do timing around each of your queries... extend the class! Simple enough, right?
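For instance, a thin wrapper along these lines (the class name and the logging are illustrative, not a standard API):

class TimingPDO extends PDO
{
    public function query($sql, ...$args)
    {
        $start = microtime(true);
        $result = parent::query($sql, ...$args);
        // Log how long the round trip took, next to the query itself
        error_log(sprintf('%.2f ms: %s', (microtime(true) - $start) * 1000, $sql));
        return $result;
    }
}

Use it exactly like PDO: $db = new TimingPDO($dsn, $user, $pass); and every $db->query(...) is now timed.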
If you are only checking the quality of the query itself, then remove PHP from the equation. Use a tool like the MySQL Query Browser or SQLyog.
Or if you have shell access, just connect directly. Any of these methods will be superior in determining the actual performance of your queries.
At the PHP level you would pretty much need to record the time before and after the query.
If you only care about the query performance itself, you can enable the slow query log in your MySQL server: http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html That will log all queries that take longer than a specified number of seconds.
If you really need query information maybe you could make use of SHOW PROFILES:
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
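A sketch of using it from PHP (profiling is per-session, available from MySQL 5.0.37, and deprecated in later versions; connection details are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'secret');
$pdo->exec('SET profiling = 1');
$pdo->query("SELECT * FROM `langlinks` WHERE ll_title='Africa'")->fetchAll();
foreach ($pdo->query('SHOW PROFILES') as $row) {
    // Duration is measured on the server, so network and PHP time are excluded
    printf("query %s took %s s: %s\n", $row['Query_ID'], $row['Duration'], $row['Query']);
}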
Personally, I would use a combination of microtime-ing, the slow query log, mytop, and analyzing problem queries with the MySQL client (command line).