I have a large database with a high volume of queries. Recently I noticed that a simple query returns inconsistent results in the production logs but cannot be replicated in my dev environment. For example, a query like
SELECT * FROM table WHERE somefield = "value"
would have a result set whose somefield does not equal "value". When I try to replicate the same using the MySQL CLI client, I am not able to reproduce the issue. The query, by the way, is handled by PHP/CodeIgniter.
Some of the leads I am pursuing right now are:
Seeing if the charset of the string passed to CodeIgniter's where() has anything to do with it.
Checking whether allocating a different user for this query will fix it (no basis whatsoever, but worth a try).
I am also looking for other leads to debug this issue.
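For the charset lead, one quick check is to compare what the PHP connection negotiated against the server-side settings. A minimal sketch, assuming a mysqli connection (hostname, credentials and database name here are placeholders):

```php
<?php
// Hypothetical connection details -- substitute your own.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// The charset this client connection is actually using.
echo $db->character_set_name(), "\n";

// The server-side charset/collation variables. A mismatch between the
// connection charset and the column charset can make string equality
// comparisons behave unexpectedly.
$res = $db->query("SHOW VARIABLES LIKE 'character_set%'");
while ($row = $res->fetch_assoc()) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";
}
```

CodeIgniter sets the connection charset from its database config (the char_set setting), so comparing that value against the output above is a cheap way to rule this lead in or out.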
I have a very strange problem that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, all with proper indexes on it.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [
0 => "44087"
]
"time" => 1015.2
The problem is, if I run THE SAME query in the database console or phpMyAdmin, it takes approximately 20 milliseconds.
I do not understand how that is possible, since I am using the same database, the same query, the same computer and the same connection to the database.
What can be the reason?
phpMyAdmin automatically adds a LIMIT for you, because it always paginates your query by default.
In your Laravel/Eloquent query you are loading all 30k records in one go, which is bound to take time.
To remedy this, try paginating or chunking your query.
The total will still take long, yes, but the individual chunks will be very quick.
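A chunked version of the query from the question might look like this (a sketch using Laravel's query builder; the Id ordering column is an assumption, since chunk() needs a deterministic order):

```php
<?php
use Illuminate\Support\Facades\DB;

// Process the 30k rows 500 at a time instead of hydrating all of them
// into memory in one go.
DB::table('Orders')
    ->where('ClientId', $id)
    ->orderBy('Id') // assumed primary key; chunk() requires an ORDER BY
    ->chunk(500, function ($orders) {
        foreach ($orders as $order) {
            // handle one row here
        }
    });
```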
I would try debugging the queries with Laravel Debugbar to see how much time each query takes and which one is the slowest. It's very easy to install and use: https://github.com/barryvdh/laravel-debugbar
It also sounds like you are interested in database administration; reading up on that may give you some ideas. Good luck.
There are several issues here. The first is how Laravel works: it only loads the services and classes that are actually executed during your script. This is done to conserve resources, since PHP is meant to run as a CGI script rather than a long-running process. As a result, your timing might include the connection-setup step, not just the query execution. For a more "reliable" result, execute any query before timing your simple query.
There is another side to that behavior: in a long-running process, such as a job runner, you ought not to change service parameters. Doing so can cause undesired behavior and let your parameter changes spill over into other jobs. For example, if you provide an SMTP login feature, you ought to reset the email sender credentials after sending the email; otherwise a user who does not use that feature may end up sending email as another user who does. This mistake comes from assuming that services are reloaded every time a job is executed, which is how the HTTP side behaves.
Second, as some other posters have pointed out, you are not using a LIMIT.
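A sketch of that more reliable timing: run a throwaway query first so the connection setup is paid for before the measurement starts.

```php
<?php
use Illuminate\Support\Facades\DB;

// Warm-up: forces Laravel to open the DB connection and bootstrap
// its database service before we start the clock.
DB::select('select 1');

$start = microtime(true);
$orders = DB::select('select * from Orders where ClientId = ?', [$id]);
$elapsedMs = (microtime(true) - $start) * 1000;

// $elapsedMs now excludes connection setup and is comparable to the
// timing you see in the database console.
```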
I'm almost sure this is due to phpMyAdmin applying a LIMIT, which relates to what you are seeing in the page output.
At the top of the phpMyAdmin page you will see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should see the same performance once you add the same LIMIT to your query.
How to enable MySQL Query Log?
Run the query through phpMyAdmin.
See which queries actually reach MySQL.
Run the app.
See which queries actually reach MySQL.
Then tell us which extra queries showed up that slow things down.
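The general query log can be toggled at runtime since MySQL 5.1. A sketch, assuming a mysqli connection with a privileged account (the log file path is illustrative):

```php
<?php
// SET GLOBAL requires the SUPER privilege.
$db = new mysqli('localhost', 'root', 'secret', 'mydb');

$db->query("SET GLOBAL general_log_file = '/tmp/mysql-general.log'");
$db->query("SET GLOBAL general_log = 'ON'");

// ...now run the query via phpMyAdmin, then via the app, and diff the
// statements that show up in /tmp/mysql-general.log...

// Turn it off again when done -- the log grows quickly under load.
$db->query("SET GLOBAL general_log = 'OFF'");
```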
A query should run at the same speed in phpMyAdmin as in any other application; try using an EXPLAIN statement to see more details about the query.
The cause of this discrepancy may be due to many reasons other than MySQL itself. For example:
The PHP script itself may have functions that cause slow loading.
Try checking the server's error.log; maybe there are errors in those functions.
phpMyAdmin could also be using a different MySQL connection function than Laravel. Check the extension used for the connection; maybe it's not compatible with the PHP version you use, and I think this is the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, mysql_connect was much faster than the PDO extension on PHP < 5.6 in my experience, but the cause was always the PHP functions in the script.
I have a SELECT which counts the number of rows from 3 tables; I'm using the WP function $wpdb->get_var( $sql ), and there are about 10,000 rows in the tables. Sometimes this SELECT takes less than 1 second to load, sometimes more than 15. If I run the same SQL in phpMyAdmin it always returns the number of rows in less than 1 second. Where could the problem be?
There are a couple of things you can do.
First, do an analysis of your query. Putting EXPLAIN before the query will output data about how it is executed, and you may be able to spot problems there.
Read more about EXPLAIN here
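As a sketch in the WordPress context (assuming the counting query from the question is in $sql, whose exact text is not shown there):

```php
<?php
// $wpdb is WordPress's global database object.
global $wpdb;

// EXPLAIN returns one row per table access. Watch the "type", "key"
// and "rows" columns: type=ALL with a large "rows" value means a full
// table scan that an index could avoid.
$plan = $wpdb->get_results( 'EXPLAIN ' . $sql );
foreach ( $plan as $row ) {
    error_log( print_r( $row, true ) );
}
```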
Also, WordPress may not have indexed the most commonly used columns.
Try indexing some of the columns you most commonly use in your query and see if it helps.
For example:
ALTER TABLE wp_posts ADD INDEX (post_author,post_status)
Plugin
You could try a plugin such as Debug Queries, which prints the queries on the front-end and helps you see where things are taking a long time. It is recommended to run this only in a dev environment, not on a live website.
I would also recommend hooking up something like New Relic and trying to profile what's happening on the application side. If New Relic is not an option, you might be able to use xhprof (http://pecl.php.net/package/xhprof) and/or IfP (https://code.google.com/p/instrumentation-for-php/).
Very few queries perform the same in production, inside an application, as they do as direct SQL queries. You may have contention, read locks, or any number of other things that cause a query issued from PHP to effectively stall on its way to MySQL. In that case you might literally see the query itself run very fast, while the time it takes to actually begin executing that query from PHP is very slow.
You'll definitely need to profile what's happening on the way from WordPress to MySQL and back, based on what you're saying, and the tools mentioned above should all be very useful for that.
I have one MySQL table which is controlled strictly by an admin. Data entry is very low, but that table is queried heavily. Since its content will not change much, I was thinking of using the MySQL query cache with PHP, but got confused (when I googled it) between that and memcached.
What is the basic difference between memcached and mysqlnd_qc?
Which is most suitable for me given the conditions below?
I also intend to extend the same for an autocomplete box; which will be suitable in that case?
My queries will return fewer than 30 rows, mostly of a few bytes of data each, and will be the same SELECT queries every time. I am on a single server and no load sharing will be done. Thank you in advance.
If your query is always the same, i.e. you do SELECT title, stock FROM books WHERE stock > 5 and your condition never changes to stock > 6 etc., I would suggest using the MySQL query cache.
Memcached is a key-value store. Basically it can cache anything you can turn into key => value. There are a lot of ways to implement caching with it. You could query your 30 rows from the database and then cache them row by row, but I don't see a reason to do that here if you're returning the same set of rows over and over. The most basic example I can think of for memcached is:
// Run the query
$result = mysqli_query($con, "SELECT title, stock FROM books WHERE stock > 5");
// Fetch the full result into one array
$rows = mysqli_fetch_all($result, MYSQLI_ASSOC);
// Put the result into memcache.
$memcache_obj->add('my_books', serialize($rows), false, 30);
Then do a $memcache_obj->get('my_books'); and unserialize it to get the same rows back.
But since you're using the same query over and over, why add the complication when you can let MySQL handle the caching for you? Remember that if you go with the memcached option, you need to set up a memcached server as well as implement logic to check whether the result is already in the cache and whether the records have changed in the database.
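Putting both halves together, a minimal read-through pattern might look like this (a sketch; it assumes a connected Memcache instance in $memcache_obj and a mysqli connection in $con):

```php
<?php
// Try the cache first; fall back to the database on a miss.
$cached = $memcache_obj->get('my_books');
if ($cached !== false) {
    $rows = unserialize($cached);
} else {
    $result = mysqli_query($con, "SELECT title, stock FROM books WHERE stock > 5");
    $rows = mysqli_fetch_all($result, MYSQLI_ASSOC);
    // Keep the copy for 30 seconds so stale stock counts age out;
    // add() only writes if the key is not already present.
    $memcache_obj->add('my_books', serialize($rows), false, 30);
}
// $rows now holds the same array either way.
```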
I would recommend using MySQL query cache over memcached in this case.
One thing you need to be careful with in the MySQL query cache, though, is that your query must be exactly the same: no extra blank spaces or comments anywhere. This is because MySQL does no parsing when comparing against cached queries; it compares the query strings byte for byte. Any extra character anywhere in the query makes it a different query.
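This byte-for-byte matching is easy to illustrate from PHP's side: queries that differ only in case or whitespace are different strings, so they occupy separate query-cache entries.

```php
<?php
$a = "SELECT title, stock FROM books WHERE stock > 5";
$b = "select title, stock from books where stock > 5";  // same query, lowercased
$c = "SELECT title, stock  FROM books WHERE stock > 5"; // one extra space

// None of these three share a query-cache entry, because the cache
// compares the raw query text, not the parsed statement.
var_dump($a === $b); // bool(false)
var_dump($a === $c); // bool(false)
```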
Peter Zaitsev explained the MySQL query cache very well at http://www.mysqlperformanceblog.com/2006/07/27/mysql-query-cache/; it is worth a look. Make sure you don't need anything that the MySQL query cache does not support, as he mentions.
If the queries run fast enough and do not really slow your application, do not cache them. With a table this small, MySQL will keep it in its own cache. If your application and database are on the same server, the benefit will be very small, maybe not even measurable.
So, for your third question, it also depends on how you query the underlying tables. Most of the time it is sufficient to let MySQL cache internally. Another approach is to generate all the possible combinations and store those, so MySQL does not need to compute the matching rows and can return the right one straight away.
As a general rule: build your application without caching, and only add caches for things that do not change often if a) the computation of the resultset is complex and time-consuming, or b) you have multiple application instances calling the database over a network. In those cases caching results in better performance.
Also, if you run PHP in a web server like Apache, caching inside your program does not add much benefit, as the cache only lives for the current page. An external cache (like memcached) is then needed to cache across multiple requests.
What is the basic difference between memcached and mysqlnd_qc?
There is practically nothing in common between them.
Which is most suitable for me as per below condition ?
The MySQL query cache.
I also intend to extend the same for an autocomplete box; which will be suitable in that case?
Sphinx Search
I'm working on a Joomla extension and I'm trying to update entries in my extension's database table using the following code in my model:
$this->_db->setQuery(
$this->_db->getQuery(true)
->update('#__my_table')
->set('position=position+1')
);
$dbres = $this->_db->result();
However, it doesn't do anything and outputs no error (development mode on and error reporting at maximum in the global configuration).
I entered the query directly in phpMyAdmin:
UPDATE cprn7_my_table SET position=position+1
and it works without any problems.
I read about quoting keys and values with $this->_db->quoteName() and the like, but I can't find any examples for queries like SET position=position+1, only SET position=$newval, so I don't know exactly what to quote and how.
//EDIT: Found the error: it has to be $this->_db->query(), not $this->_db->result().
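For reference, a version of the model code with that fix applied, plus the quoteName() style asked about (a sketch; depending on your Joomla version, execute() may be the preferred name for query()):

```php
$db = $this->_db;
$query = $db->getQuery(true)
    ->update($db->quoteName('#__my_table'))
    // quoteName() backticks the column on both sides of the expression.
    ->set($db->quoteName('position') . ' = ' . $db->quoteName('position') . ' + 1');
$db->setQuery($query);

// query() executes the statement, which was the missing piece.
$dbres = $db->query();
```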
A few possibilities come to mind. The first is that the --safe-updates setting is enabled, preventing UPDATEs (and DELETEs) whose WHERE clause does not specify a primary-key column.
phpMyAdmin may turn this setting off, while your default MySQL client settings (in my.cnf, under [client]) may enable it. You can use the SHOW VARIABLES statement to discover the setting's value. Also, see this doc for more info: http://dev.mysql.com/doc/refman/5.5/en/mysql-command-options.html#option_mysql_safe-updates
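For example, the corresponding server variable is sql_safe_updates:

```sql
SHOW VARIABLES LIKE 'sql_safe_updates';
```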
A second thought: You can quote the column name using backticks, for example:
SET `position`=`position` + 1
However, if this still seems to have no effect, and if you have an admin-level MySQL account for your server, you can turn the general log on, run your query, turn it off again, and then examine the results. See this doc for working with --general-log and related settings: http://dev.mysql.com/doc/refman/5.5/en/query-log.html
BTW, if your MySQL client library (or Joomla, or any other tier in the mix) enables --safe-updates with a SET command upon connecting, you will also see that as an executed statement in the general log.
In PHP I'm using mysqli_fetch_assoc() in a while loop to get every record of a certain query.
I'm wondering what happens if the data is changed while running the loop (by another process or server), so that the record doesn't match the query any more. Will it still be fetched?
In other words, is the array of records that are fetched fixed, when you do query()? Or is it not?
Update:
I understand that it's a feature that the resultset is not changed when the data is changed, but what if you actually WANT that? In my loop I'm not interested in records that have already been updated by another server. How do I check for that without doing a new query for each record that I fetch?
UPDATE:
Detailed explanation:
I'm working on a kind of search-engine scraper that searches for values in a database. This is done by a few servers at the same time. Items that have already been scraped shouldn't be searched again. I can't really control which server searches which item; I was hoping I could check the status of an item while fetching the recordset. Since it's a big dataset, I don't transfer the entire resultset before searching; I fetch each record when I need it...
Introduction
I'm wondering what happens if the data is changed while running the loop (by another process or server), so that the record doesn't match the query any more. Will it still be fetched?
Yes.
In other words, is the array of records that are fetched fixed, when you do query()? Or is it not?
Yes, it is fixed.
A DBMS would not be worth its salt were it vulnerable to race conditions between table updates and query resultset iteration.
Certainly, as far as the database itself is concerned, your SELECT query has completed before any data can be changed; the resultset is cached somewhere in the layers between your database and your PHP script.
In-depth
With respect to the ACID principle *:
In the context of databases, a single logical operation on the data is called a transaction.
User-instigated TRANSACTIONs can encompass several consecutive queries, but sections 4.33.4 and 4.33.5 of ISO/IEC 9075-2 describe how this takes place implicitly at the per-query level:
The following SQL-statements are transaction-initiating SQL-statements, i.e., if there is no current SQL-transaction, and an SQL-statement of this class is executed, then an SQL-transaction is initiated, usually before execution of that SQL-statement proceeds:
All SQL-schema statements
The following SQL-transaction statements:
<start transaction statement>.
<savepoint statement>.
<commit statement>.
<rollback statement>.
The following SQL-data statements:
[..]
<select statement: single row>.
<direct select statement: multiple rows>.
<dynamic single row select statement>.
[..]
[..]
In addition, 4.35.6:
Effects of SQL-statements in an SQL-transaction
The execution of an SQL-statement within an SQL-transaction has no effect on SQL-data or schemas [..]. Together with serializable execution, this implies that all read operations are repeatable within an SQL-transaction at isolation level SERIALIZABLE, except for:
1) The effects of changes to SQL-data or schemas and its contents made explicitly by the SQL-transaction itself.
2) The effects of differences in SQL parameter values supplied to externally-invoked procedures.
3) The effects of references to time-varying system variables such as CURRENT_DATE and CURRENT_USER.
Your wider requirement
I understand that it's a feature that the resultset is not changed when the data is changed, but what if you actually WANT that? In my loop I'm not interested in records that have already been updated by another server. How do I check for that without doing a new query for each record that I fetch?
You may not.
Although you can control the type of buffering performed by your connector (in this case, MySQLi), you cannot override the above-explained low-level fact of SQL: no INSERT or UPDATE or DELETE will have an effect on a SELECT in progress.
Once the SELECT has completed, the results are independent; it is the buffering of transport of this independent data that you can control, but that doesn't really help you to do what it sounds like you want to do.
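What you can control looks like this (a sketch assuming a mysqli connection in $db; the table and column names and the process() handler are illustrative): MYSQLI_USE_RESULT streams rows from the server instead of buffering them client-side, but the rows are still the ones that matched when the SELECT executed.

```php
<?php
// Unbuffered query: rows are pulled from the server one at a time as
// you fetch them, which saves client memory on a big resultset...
$result = $db->query('SELECT * FROM items WHERE scraped = 0', MYSQLI_USE_RESULT);
while ($row = $result->fetch_assoc()) {
    // ...but each $row still reflects the table as it was when the
    // SELECT began, not its current state.
    process($row); // hypothetical per-record handler
}
$result->free();
```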
This is rather fortunate, frankly, because what you want to do sounds rather bizarre!
* Strictly speaking, MySQL is only fully ACID-compliant for tables using the non-default storage engines InnoDB, BDB and Cluster, and MyISAM does not support [user-instigated] transactions at all. Still, it seems like the "I" should remain applicable here; MyISAM would be essentially useless otherwise.