First of all, I am using phpMyAdmin; is this okay or not?
Because when I have the cache disabled and run two queries one after the other, the second query is always faster, so I am wondering: is there maybe an internal cache in phpMyAdmin?
Secondly, is there any way to measure how long a query takes from PHP, and echo it to the browser? (So I can use PHP instead of phpMyAdmin.)
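(For reference, here is a minimal sketch of what I mean, using mysqli and microtime(); the connection details are placeholders, and the query is the one from my update below:)
// Time a single query from PHP and print the elapsed time.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$start = microtime(true);
$result = $mysqli->query("SELECT * FROM `langlinks` WHERE ll_title='Africa'");
$elapsed = microtime(true) - $start;
echo "Query took " . round($elapsed * 1000, 2) . " ms";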
Thirdly, SHOW STATUS LIKE '%qcache%' gives me this:
Qcache_free_blocks 1
Qcache_free_memory 25154096
Qcache_hits 0
Qcache_inserts 2
Qcache_lowmem_prunes 0
Qcache_not_cached 62
Qcache_queries_in_cache 2
Qcache_total_blocks 6
How come Qcache_not_cached grows by 5 or 10 for every query I make? Shouldn't there only be 1 increase per query?
Also, when I enabled the cache and ran a query, Qcache_queries_in_cache increased by 2... I thought it would increase by 1 per query. Can someone explain?
THEN, when I ran another query identical to the one I had cached, there was no performance gain at all; the query took as long as it did without the cache enabled...
Any help here please, except for referring to the manual (I have read it already).
Thanks
UPDATE
Here is a typical query I make:
SELECT * FROM `langlinks` WHERE ll_title='Africa'
First of all, I am using phpMyAdmin; is this okay or not?
I suppose it's better than nothing -- and more user-friendly than a command-line client; but one thing I don't like about phpMyAdmin is that it sends queries you didn't write.
I've seen phpMyAdmin send queries that were "hurting" a server while the one the user had actually written was fine (I don't have the exact example in mind).
Generally speaking, though, I'd say it's "OK" as long as you accept that extra requests will be sent: phpMyAdmin displays a lot of information (like the list of databases, tables, and so on), and it has to get that information from somewhere!
Shouldn't there only be 1 increase per query?
If you really want to see the impact of your query and nothing else, you'd be better off using the command-line mysql client instead of phpMyAdmin: that graphical tool has to send queries to gather the information it displays.
The question, really, is: do you prefer a user-friendly tool, or do you want to monitor only what your query actually does?
In most cases, the answer is "user-friendly tool" -- and that's why phpMyAdmin is so successful ;-)
phpMyAdmin itself queries and updates status information from the MySQL server, so you won't see the counters increment by exactly one in phpMyAdmin.
Related
I have a very strange problem that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, all with proper indexes on it.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [▼
0 => "44087"
]
"time" => 1015.2
The problem is, if I run THE SAME query inside the database console or phpMyAdmin, the query takes approximately 20 milliseconds.
I do not understand how that is possible, since I am using the same database, the same query, the same computer and the same connection to the database.
What can be the reason?
phpMyAdmin will automatically add a LIMIT for you.
This is because phpMyAdmin always paginates your query by default.
In your Laravel/Eloquent query, you are loading all 30k records in one go. That is bound to take time.
To remedy this, try paginating or chunking your query, as in the sketch below.
The total will still take long, yes, but the individual chunks will be very quick.
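For example, a rough sketch with the query builder (the Orders table and ClientId column come from the question; the chunk size of 500 and the Id ordering column are my assumptions):
// Process the matching orders in chunks of 500 instead of loading all rows at once.
DB::table('Orders')
    ->where('ClientId', $id)
    ->orderBy('Id')   // chunk() needs a deterministic ordering
    ->chunk(500, function ($orders) {
        foreach ($orders as $order) {
            // ... process each order here ...
        }
    });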
I would try debugging the queries with the Debug Bar, to see how much time each one takes and which is the slowest... It's very easy to install and use: https://github.com/barryvdh/laravel-debugbar
I think you are also interested in DB administration; read up on that as well, you can get some ideas from it. Good luck.
There are several issues here. The first one is how Laravel works. Laravel only loads the services and classes that are actually used during your script. This is done to conserve resources, since PHP is meant to be run as a CGI script rather than as a long-running process. As a result, your timing might include the connection setup step, not just the query execution. For a more reliable measurement, execute some other query first, before timing your simple query.
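For example (a sketch; the warm-up query itself is arbitrary):
// Warm up: force Laravel to open the database connection first,
// so connection setup is not counted in the measurement.
DB::select('select 1');
// Now time only the query under test.
$start = microtime(true);
$orders = DB::select('select * from Orders where ClientId = ?', [$id]);
$elapsed = (microtime(true) - $start) * 1000;
echo "Query took {$elapsed} ms";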
There is another side to that behavior. In a long-running process, such as a job runner, you should not change service parameters, because your changes can spill over into other jobs and cause undesired behavior. For example, if you provide an SMTP login feature, you should reset the email sender credentials after sending the email; otherwise you can run into an issue where a user who doesn't use that feature sends an email as another user who does. This mistake comes from assuming that services are reloaded every time a job is executed, which is only the case for the HTTP part.
Second, you're not using LIMIT, as some other posters have pointed out.
I'm almost sure this is due to phpMyAdmin applying a LIMIT, which is related to what you are seeing in the page output.
If you look at the top of the phpMyAdmin page you see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should see the same performance once you add that limit to your own query, as below.
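For instance (the limit of 25 matches phpMyAdmin's default page size):
// Adding the same LIMIT that phpMyAdmin applies by default.
$orders = DB::select('select * from Orders where ClientId = ? limit 25', [$id]);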
How to enable the MySQL query log?
Run the query through phpMyAdmin.
See which queries actually reach MySQL.
Run the app.
See which queries actually reach MySQL.
Tell us what those extra queries were that slow things down. (A sketch of enabling the log is below.)
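If it helps, a minimal sketch of enabling the general query log at runtime (requires the SUPER privilege; logging to a table is one of several output options):
-- Log every statement the server receives into the mysql.general_log table.
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- Inspect what phpMyAdmin / the app actually sent:
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 50;
-- Turn it back off when done (it is expensive to leave on):
SET GLOBAL general_log = 'OFF';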
The query should run at the same speed in phpMyAdmin as in the application; if it doesn't, use an EXPLAIN statement to see more details about the query.
The cause of this discrepancy may be something other than MySQL, for example:
The PHP script itself may contain functions that cause slow loading.
Check the server's error.log; there may be errors in those functions.
phpMyAdmin may also use a different MySQL connection function than Laravel; check the extension used for the connection, since it may not suit the PHP version you use, and I think this is the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, in my experience mysql_connect was much faster than the PDO extension on PHP < 5.6, but the cause was always in the PHP functions in the script.
I have a SELECT which counts the number of rows from 3 tables. I'm using the WP function $wpdb->get_var( $sql ), and there are about 10,000 rows in the tables. Sometimes this SELECT takes less than 1 second to load, sometimes more than 15. If I run the same SQL in phpMyAdmin it always returns the number of rows in less than 1 second. Where could the problem be?
There are a couple of things you can do.
First, do an analysis of your query. Putting EXPLAIN before the query will output data about how it is executed, and you may be able to spot problems with it that way.
Read more about EXPLAIN here
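For example (the table and WHERE clause are placeholders, borrowing the WordPress table used in the index example below):
-- Prefix the query with EXPLAIN to see which indexes MySQL considers
-- and how many rows it expects to examine.
EXPLAIN SELECT COUNT(*) FROM wp_posts WHERE post_status = 'publish';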
Also, WordPress may not have indexed the most commonly used columns.
Try indexing some of the columns which you most commonly use within your query and see if it helps.
For example:
ALTER TABLE wp_posts ADD INDEX (post_author,post_status)
Plugin
You could try a plugin such as Debug Queries, which prints the queries on the front-end and helps you debug where things are taking a long time. It is recommended to run this only in a dev environment, not on a live website.
I would also recommending hooking up something like New Relic and trying to profile what's happening on the application side. If New Relic is not an option, you might be able to use xhprof (http://pecl.php.net/package/xhprof) and/or IfP (https://code.google.com/p/instrumentation-for-php/). Very few queries will perform the same in production in an application as they do in direct SQL queries. You may have contention, read locks, or any other number of things that cause a query from php to effectively stall on its way over to MySQL. In which case you might literally see the query running very fast, but the time it takes to actually begin executing that query from PHP would be very slow. You'll definitely need to profile what's happening on the way from WordPress to MySQL and back, based on what you're saying. The tools I mentioned should all be very useful for helping you accomplish that.
I have one MySQL table which is controlled strictly by an admin. Data entry is very low, but the query volume on that table is high. Since the table's content will not change much, I was thinking of using the MySQL query cache with PHP, but when I googled it I got confused between that and memcached.
What is the basic difference between memcached and mysqlnd_qc?
Which is most suitable for me given the conditions below?
I also intend to extend the same approach to an autocomplete box; which will be suitable in that case?
My queries will return fewer than 30 rows, mostly of very few bytes of data, and will be the same SELECT queries. I am on a single server and no load sharing will be done. Thank you in advance.
If your query is always the same, i.e. you do SELECT title, stock FROM books WHERE stock > 5 and your condition never changes to stock > 6 etc., I would suggest using MySQL Query Cache.
Memcached is a key-value store. Basically it can cache anything if you can turn it into key => value. There are a lot of ways you can implement caching with it. You could query your 30 rows from database, then cache it row by row but I don't see a reason to do that here if you're returning the same set of rows over and over. The most basic example I can think of for memcached is:
// Run the query (using mysqli consistently; the mysql_* and mysqli_* APIs must not be mixed)
$result = mysqli_query($con, "SELECT title, stock FROM books WHERE stock > 5");
// Fetch the whole result set into one array of rows
$rows = mysqli_fetch_all($result, MYSQLI_ASSOC);
// Put the serialized result into memcache under the key 'my_books' for 30 seconds
$memcache_obj->add('my_books', serialize($rows), false, 30);
Then do a $memcache_obj->get('my_books'); and unserialize it to get the same results.
But since you're using the same query over and over, why add the complication when you can let MySQL handle all the caching for you? Remember that if you go with the memcached option, you need to set up a memcached server as well as implement the logic that checks whether the result is already in the cache and whether the records have changed in the database.
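A sketch of that check-then-query logic (the key name and 30-second TTL reuse the example above):
// Try the cache first; fall back to the database on a miss.
$cached = $memcache_obj->get('my_books');
if ($cached !== false) {
    $rows = unserialize($cached);   // cache hit
} else {
    $result = mysqli_query($con, "SELECT title, stock FROM books WHERE stock > 5");
    $rows = mysqli_fetch_all($result, MYSQLI_ASSOC);
    $memcache_obj->add('my_books', serialize($rows), false, 30);   // cache for 30s
}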
I would recommend using MySQL query cache over memcached in this case.
One thing you need to be careful about with the MySQL query cache, though, is that your query must be byte-for-byte identical: no extra blank spaces, no comments, nothing. This is because MySQL does no parsing when comparing an incoming query against the cache; any extra character anywhere in the string makes it a different query.
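For instance, each of these would occupy its own entry in the query cache, even though they return the same rows:
SELECT title, stock FROM books WHERE stock > 5
select title, stock from books where stock > 5
SELECT title, stock FROM books WHERE stock > 5 -- trailing comment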
Peter Zaitsev explained very well about MySQL Query Cache at http://www.mysqlperformanceblog.com/2006/07/27/mysql-query-cache/, worth taking a look at it. Make sure you don't need anything that MySQL Query Cache does not support as Peter Zaitsev mentioned.
If the queries run fast enough and do not really slow down your application, do not cache them at all. With a table this small, MySQL will keep it in its own caches anyway. If your application and database are on the same server, the benefit will be very small, maybe not even measurable.
So, for your third question, it also depends on how you query the underlying tables. Most of the time, it is sufficient to let MySQL cache things internally. Another approach is to generate all the possible combinations up front and store those, so MySQL does not need to compute the matching rows and can return the right one straight away.
As a general rule: build your application without caching, and only add caches for things that do not change often if (a) computing the result set is complex and time-consuming, or (b) you have multiple application instances calling the database over a network. In those cases caching gives better performance.
Also, if you run PHP in a web server like Apache, caching inside your program does not add much benefit, as the cache would only live for the current page. An external cache (like memcache) is then needed to share results across requests.
What is the basic difference between memcached and mysqlnd_qc?
There is essentially nothing in common between them.
Which is most suitable for me given the conditions below?
The MySQL query cache.
I also intend to extend the same approach to an autocomplete box; which will be suitable in that case?
Sphinx Search.
Background: I'm working on a system where the developers seem to be using a function which executes a MYSQL query like "SELECT MAX(id) AS id FROM TABLE" whenever they need to get the id of the LAST inserted row (the table having an auto_increment column).
I know this is a horrible practice (because concurrent requests will mix up the records), and I'm trying to communicate that to the non-tech/management team, whose response is...
"Oh okay, we'll only face this problem when we have
(a) a lot of users, or
(b) it'll only happen when two people try doing something
at _exactly_ the same time"
I don't disagree with either point, and I think we'll run into this problem much sooner than they expect. However, I'm trying to calculate (or find a mechanism to calculate) how many users the system can have before we start seeing messed-up links.
Any mathematical insights into that? Again, I KNOW its a horrible practice, I just want to understand the variables in this situation...
Update: Thanks for the comments folks - we're moving in the right direction and getting the code fixed!
The point is not if potential bad situations are likely. The point is if they are possible. As long as there's a non-trivial probability of the issue occurring, if it's known it should be avoided.
It's not like we're talking about changing a one line function call into a 5000 line monster to deal with a remotely possible edge case. We're talking about actually shortening the call to a more readable, and more correct usage.
I kind of agree with @Mark Baker that there is some performance consideration, but since id is the primary key, the MAX query will be very quick. Sure, LAST_INSERT_ID() will be faster (since it's just reading from a session variable), but only by a trivial amount.
And you don't need a lot of users for this to occur. All you need is a lot of concurrent requests (not even that many). If the time between the start of the insert and the start of the select is 50 milliseconds (assuming a transaction safe DB engine), then you only need 20 requests per second to start hitting an issue with this consistently. The point is that the window for error is non-trivial. If you say 20 requests per second (which in reality is not a lot), and assuming that the average person visits one page per minute, you're only talking 1200 users. And that's for it to happen regularly. It could happen once with only 2 users.
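As a back-of-the-envelope model (my own assumption, not something from the original discussion): if requests arrive independently at r per second and the insert-to-SELECT window is w seconds, the chance that another request lands inside a given window is roughly 1 - e^(-r*w). A tiny sketch with the numbers above:
// Rough collision estimate, assuming requests arrive at random (Poisson).
// $r = requests per second, $w = insert-to-select window in seconds.
function collisionProbability($r, $w) {
    return 1 - exp(-$r * $w);
}
// With the numbers above: 20 requests/second and a 50 ms window.
printf("P(collision per request) ~ %.1f%%\n", collisionProbability(20, 0.05) * 100);
// Prints roughly 63% -- wrong IDs would show up almost immediately.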
And right from the MySQL documentation on the subject:
You can generate sequences without calling LAST_INSERT_ID(), but the utility of using the function this way is that the ID value is maintained in the server as the last automatically generated value. It is multi-user safe because multiple clients can issue the UPDATE statement and get their own sequence value with the SELECT statement (or mysql_insert_id()), without affecting or being affected by other clients that generate their own sequence values.
Instead of using SELECT MAX(id) you should do as the documentation says:
Instead, use the internal MySQL SQL function LAST_INSERT_ID() in an SQL query
Even so, SELECT MAX(id) is not safe under concurrency and you could still have a race condition. The best option you have is to lock the tables before and after your requests, or better yet, use transactions.
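A minimal sketch of the correct pattern (using mysqli; the table and column names are placeholders):
// Insert a row, then read the auto_increment id for THIS connection only.
// insert_id / LAST_INSERT_ID() are per-connection, so other clients'
// concurrent inserts cannot interfere with the value you read.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$mysqli->query("INSERT INTO orders (client_id) VALUES (42)");
$id = $mysqli->insert_id;   // same value as SELECT LAST_INSERT_ID()
echo "New row id: $id";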
I don't have the math for it, but I would point out that response (a) is a little silly. Doesn't the company want a lot of users? Isn't that a goal? That response implies that they'd rather solve the problem twice, possibly at great expense the second time, instead of solve it once correctly the first time.
This will go wrong whenever someone else inserts into the table between your insert and that query running. So to answer your question, two concurrent users are enough for things to go wrong.
At least using LAST_INSERT_ID() will get the last ID for your particular connection, so it won't matter how many new entries have been added in between.
In addition to the risk of getting the wrong ID value returned, there's also the additional database query overhead of SELECT MAX(id), and it's more PHP code to actually execute than a simple mysql_insert_id(). Why deliberately code something to be slow?
Is it possible to do a simple count(*) query in a PHP script while another PHP script is doing an insert...select... query?
The situation is that I need to create a table with ~1M or more rows from another table, and while inserting, I do not want the user to feel the page is freezing, so I am trying to keep updating the count. But when using a SELECT COUNT(*) FROM table while the insert is running in the background, I get only 0 until the insert is completed.
So is there any way to ask MySQL to return partial results first? Or is there a fast way to do a series of inserts with data fetched from a previous SELECT query, while keeping roughly the same performance as insert...select...?
The environment is PHP 4.3 and MySQL 4.1.
Without reducing performance? Not likely. With a little performance loss, maybe...
But why are you regularly creating tables and inserting millions of rows? If you only do this very seldom, can't you just warn the admin (presumably the only one allowed to do such a thing) that it takes a long time? If you're doing this all the time, are you really sure you're not doing it wrong?
I agree with Stein's comment that this is a red flag if you're copying 1 million rows at a time during a PHP request.
I believe that in a majority of cases where people are trying to micro-optimize SQL, they could get much greater performance and throughput by approaching the problem in a different way. SQL shouldn't be your bottleneck.
If you're doing a single INSERT...SELECT, then no, you won't be able to get intermediate results. In fact this would be a Bad Thing, as users should never see a database in an intermediate state showing only a partial result of a statement or transaction. For more information, read up on ACID compliance.
That said, the MyISAM engine may play fast and loose with this. I'm pretty sure I've seen MyISAM commit some but not all of the rows from an INSERT...SELECT when I've aborted it part of the way through. You haven't said which engine your table is using, though.
The other users can't see the insertion until it's committed. That's normally a good thing, since it ensures they can't see half-done data. However, if you want them to see intermediate data, you could throw in an occasional call to "commit" while you're inserting.
By the way, don't let anybody tell you to turn autocommit on. That's a HUGE time waster. I have a "delete and re-insert" job on my database that takes a third as long when I turn off autocommit.
Just to be clear, MySQL 4 isn't configured by default to use transactions. It uses the MyISAM table type, which locks the entire table for each insert, if I remember correctly.
Your best bet would be to use one of the MySQL bulk insertion mechanisms, such as LOAD DATA INFILE, as these are dramatically faster at inserting large amounts of data. As for the counting, you could break the inserts into N groups of 1000 (or Y) rows, divide your progress meter into N sections, and just update it after each group, as in the sketch below.
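A rough sketch of that idea, using the old mysql_* API to match the PHP 4.3 environment (the copy_progress table is a hypothetical place to publish progress for the front-end):
// Copy rows in groups of 1000 so progress can be reported between groups.
// $total is the number of source rows; table and column names are placeholders.
$groupSize = 1000;
for ($offset = 0; $offset < $total; $offset += $groupSize) {
    mysql_query("INSERT INTO new_table (col1, col2)
                 SELECT col1, col2 FROM old_table
                 ORDER BY id LIMIT $offset, $groupSize");
    // After each group, record progress where the front-end can read it.
    $done = min($offset + $groupSize, $total);
    mysql_query("UPDATE copy_progress SET rows_done = $done WHERE job_id = 1");
}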
Edit: Another thing to consider is, if this is static data for a template, then you could use a "select into" to create a new table with the same data. Not sure what your application is, or the intended functionality, but that could work as well.
If you can get to the console, you can run various status commands that will give you the information you are looking for, for example SHOW PROCESSLIST.