Same MySQL query gives different results at different times in phpMyAdmin - php

We're running the following very simple MySQL query through phpMyAdmin:
SELECT * FROM ProcessedListAssociations
We know the correct result has 751331 rows, but successive runs of the query return different row counts, anywhere from 749978 to 752165 rows. At least that's what the row-count message at the top of the phpMyAdmin result page says:
Showing rows 0 - 24 (752165 total, Query took 0.0005 seconds.)
Running the query from a php script seems to return a result with the correct number of rows.
Running the following query from phpMyAdmin:
SELECT count(*) FROM ProcessedListAssociations
also returns the correct result (751331).
We have recreated the table from scratch and still observe the same issue.
The table is an InnoDB table. Here's the basic info as phpMyAdmin reports it:
Space usage:
Data: 68.6 MiB
Index: 136.3 MiB
Total: 204.9 MiB
Row statistics:
Format: Compact
Collation: utf8_general_ci
Next autoindex: 751,332
Could it have something to do with concurrency? The server has 4 E7-4870 processors (80 threads total), but Thread Safety is disabled in php.ini.
If that is indeed the problem, then why are we only observing it in phpmyadmin and not with our own php scripts too?

See the phpMyAdmin FAQ entry on the incorrect row count for InnoDB tables:
https://phpmyadmin.readthedocs.io/en/latest/faq.html?highlight=MaxExactCount#the-number-of-rows-for-innodb-tables-is-not-correct
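In short, for InnoDB tables phpMyAdmin takes the row count from the table statistics, which are only an estimate that drifts between runs; it only issues an exact SELECT COUNT(*) when the estimated count is below the $cfg['MaxExactCount'] threshold. You can see the two numbers side by side:
SHOW TABLE STATUS LIKE 'ProcessedListAssociations';
-- the Rows column here is the InnoDB estimate that phpMyAdmin displays
SELECT COUNT(*) FROM ProcessedListAssociations;
-- always exact (751331), which is why COUNT(*) and the PHP script agree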

Related

fastest way to do 1 billion queries on millions of rows

I'm running a PHP script that searches a table with millions of rows in a relatively large MySQL instance for terms like "diabetes mellitus" in a description column that has a full-text index on it. However, after one day I'm only through a couple hundred queries, so it seems my approach is never going to work. The entries in the description column are on average 1000 characters long.
I'm trying to figure out my next move and I have a few questions:
My MySQL table has unnecessary columns in it that aren't being queried. Will removing those affect performance?
I assume running this locally rather than on RDS would dramatically increase performance? I have a decent MacBook, but I chose RDS since cost isn't an issue, and I tried to run on an instance that was better than my MacBook.
Would using a compiled language like Go rather than PHP do more than the 5-10x boost people report in test examples? That is, given my task, is there any reason to think a compiled language would produce 100x or more speed improvement?
Should I put the data in a text or CSV file rather than MySQL? Is using MySQL just causing unnecessary overhead?
This is the query:
SELECT id
FROM text_table
WHERE match(description) against("+diabetes +mellitus" IN BOOLEAN MODE);
Here's the line of output of EXPLAIN for the query, showing the optimizer is utilizing the FULLTEXT index:
id: 1, select_type: SIMPLE, table: text_table, type: fulltext, possible_keys: idx, key: idx, key_len: 0, ref: NULL, rows: 1, Extra: Using where
The RDS instance is db.m4.10xlarge, which has 160GB of RAM. The InnoDB buffer pool is typically about 75% of RAM on an RDS instance, which makes it about 120GB.
The text_table status is:
Name: text_table
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 26000630
Avg_row_length: 2118
Data_length: 55079485440
Max_data_length: 0
Index_length: 247808
Data_free: 6291456
Auto_increment: 29328568
Create_time: 2018-01-12 00:49:44
Update_time: NULL
Check_time: NULL
Collation: utf8_general_ci
Checksum: NULL
Create_options:
Comment:
This indicates the table has about 26 million rows, and the size of data and indexes is 51.3GB, but this doesn't include the FT index.
For the size of the FT index, query:
SELECT stat_value * @@innodb_page_size
FROM mysql.innodb_index_stats
WHERE table_name = 'text_table'
AND index_name = 'FTS_DOC_ID_INDEX'
AND stat_name = 'size';
The size of the FT index is 480247808 bytes (458 MiB).
Following up on comments above about concurrent queries.
If the query is taking 30 seconds to execute, then the programming language you use for the client app won't make any difference.
I'm a bit skeptical that the query is really taking 1 to 30 seconds to execute. I've tested MySQL fulltext search, and I found a search runs in under 1 second even on my laptop. See my presentation https://www.slideshare.net/billkarwin/practical-full-text-search-with-my-sql
It's possible that it's not the query that's taking so long, but it's the code you have written that submits the queries. What else is your code doing?
How are you measuring the query performance? Are you using MySQL's query profiler? See https://dev.mysql.com/doc/refman/5.7/en/show-profile.html This will help isolate how long it takes MySQL to execute the query, so you can compare to how long it takes for the rest of your PHP code to run.
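A minimal profiling session, as a sketch (SHOW PROFILE is deprecated in MySQL 5.7 in favor of the Performance Schema, but it is still the quickest way to get per-stage timings):
SET profiling = 1;
SELECT id FROM text_table
WHERE MATCH(description) AGAINST('+diabetes +mellitus' IN BOOLEAN MODE);
SHOW PROFILES;            -- lists recent statements with their total durations
SHOW PROFILE FOR QUERY 1; -- stage-by-stage breakdown for the first profiled statement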
Using PHP is going to be single-threaded, so you are running one query at a time, serially. The RDS instance you are using has 40 CPU cores, so you should be able to run many concurrent queries at a time. But each query would need to be run by its own client.
So one idea would be to split your input search terms into at least 40 subsets, and run your PHP search code against each respective subset. MySQL should be able to run the concurrent queries fine. Perhaps there will be a slight overhead, but this will be more than compensated for by the parallel execution.
You can split your search terms manually into separate files, and then run your PHP script with each respective file as the input. That would be a straightforward way of solving this.
But to get really professional, learn to use a tool like GNU parallel to run the 40 concurrent processes and split your input over these processes automatically.

Only in the pagination page-count query: (PDO query error) -> server has gone away

Please give me a little help; I have searched Google and Stack Overflow but can't find a solution.
My website's table has 80,000 rows, and about 1,000 rows are added daily.
All my queries work properly, but sometimes the pagination query returns an error (after 30-60 seconds):
Warning: PDOStatement::execute(): MySQL server has gone away in
/home/name/public_html/example.com/classes/functions.php on line 226
Warning: PDOStatement::execute(): Error reading result set's header in
/home/name/public_html/example.com/classes/functions.php on line 226
I am using a simple count query with different pagination conditions, and the error only appears, sometimes, when the query matches more than 8,000 rows.
Here is one simple example query:
$query = "SELECT COUNT(DISTINCT(groupName)) FROM `table` WHERE `type` = 'string'";
// expected to return a count of about 24,000
Can any expert suggest a solution?
To check the server response, I tested fetching 20,000 rows with a WHERE clause from the same table the pagination query runs on, and the server returned all 20,000 rows. So what is the problem with my pagination query? Why does it return the "server has gone away" error?
Please don't suggest increasing the server timeout limit; I want my website to load within 2 seconds, not 2 minutes.
Table structure
According to the table structure, it seems you need an index on the columns you count by (mov_grp_name, mov_name).
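A sketch of the suggested index; the column names come from the answer above, but the table name is an assumption (the real structure was only posted as a screenshot):
ALTER TABLE movies ADD INDEX idx_grp_name (mov_grp_name, mov_name);
-- lets COUNT(DISTINCT ...) be resolved from the index instead of scanning the whole table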

Execution time of a MySQL query, slower in PHP than phpMyAdmin. How can I overcome this?

I was running a query in phpMyAdmin; it takes around 0.0012 sec to execute:
Showing rows 0 - 29 ( 727,934 total, Query took 0.0012 sec)
maybe because of the default limit phpMyAdmin applies. But it still shows the total number of rows.
In PHP, however, I need the total number of rows, so I ran the query without a limit, and it takes around 6-8 seconds to execute.
Is there any way to solve this issue?
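One thing worth noting first: phpMyAdmin is fast because it appends its own LIMIT and displays a stored or estimated total rather than fetching every row. If all you need in PHP is the total, ask MySQL for the count instead; a sketch, with a hypothetical table name:
SELECT COUNT(*) FROM posts;
-- one row comes back instead of 727,934, so the transfer time disappears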
I have had a similar problem, and in my case it was because I was not using the correct data types. While I wanted to query my VarChar-type indexed field with a VarChar, I mistakenly formulated the query so that I queried it with an Int parameter. This resulted in the index not being utilized and a full table scan being performed instead. For example, I did:
SELECT * FROM table WHERE Indexfield = 015523;
when I should have done the following:
SELECT * FROM table WHERE Indexfield = '015523';
There is some more insight into the issue here: https://www.percona.com/blog/2006/09/08/why-index-could-refuse-to-work/
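To confirm which case you are in, EXPLAIN both forms; a sketch using the same hypothetical table and field names as above (the exact EXPLAIN columns vary by MySQL version):
EXPLAIN SELECT * FROM table WHERE Indexfield = 015523;
-- type: ALL, key: NULL -> the implicit string-to-number cast forces a full table scan
EXPLAIN SELECT * FROM table WHERE Indexfield = '015523';
-- type: ref, key: <index on Indexfield> -> the index is used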

SQL search query handled differently on different servers

I have a very strange issue that I am unable to figure out for several days now. I have done a lot of testing so many of the possible root causes are now excluded, which leaves room for the really "exotic" possibilities and I need some help with fresh ideas because I am stuck now.
Some background: a website's source files and database (both identical) are installed on two servers, one WAMP and one LAMP.
The issue I face is with the website queries related to search results. The search queries are built from two DB tables using a LEFT JOIN on an entry ID.
This is an example of one of the search queries:
$tables = $tblprefix."vehicles AS v
    LEFT JOIN ".$tblprefix."vehicles_car AS vc ON v.id = vc.v_id
    LEFT JOIN ".$tblprefix."models AS model ON v.model = model.id";
$fields = "v.*, vc.exterior_color";
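For reference, these fragments assemble to roughly the following SQL (a sketch assuming an empty $tblprefix; not necessarily the site's exact query):
SELECT v.*, vc.exterior_color
FROM vehicles AS v
LEFT JOIN vehicles_car AS vc ON v.id = vc.v_id
LEFT JOIN models AS model ON v.model = model.id;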
The search queries themselves are correct and work perfectly on both servers, so this is just an example.
The different scenarios: from a CSV file I upload entries for the main DB table called "vehicles". When a search is performed after this upload, the results show all uploaded entries, i.e. everything works correctly. I have tried to add more than 27,000 rows and all are displayed without a glitch.
Then I go on and start uploading the entries for the second table, "vehicles_car". Until about 200-215 entries are inserted, everything works correctly.
Now the issue: when I insert more than 210-220 entries in the second table, the search queries suddenly show "No Result", but only for the website installed on the LAMP server.
The website on WAMP works no matter how many entries are loaded in the two tables. For some reason only the queries on the LAMP server fail, and only when the second table has more than about 200 entries.
Note: the number of table entries at which "No results" is shown varies. It works for 215 entries; then I insert one more and it shows "No results"; then I delete this last entry and it continues to show "No results". Delete one more: "No results". Keep deleting more entries from the second table and it suddenly shows the correct search results again. Really inconsistent behavior.
The strangest thing is that I exported the entire DB from the LAMP server while the queries were showing "No results" and imported it into the WAMP server. And it works there!!!
So, any ideas what the issue might be (I suspect it is something in the DB) that causes the queries to work on one server and not on the other, and only when more than a certain number of rows exist in the second joined table?
LAMP server - MySQL 5.5.32, InnoDB, phpMyAdmin 2.8.0.1
WAMP server - MySQL 5.6.14, InnoDB, phpMyAdmin 4.0.9
Any fresh ideas will be appreciated because I am really really stuck!!!
Thank you!
UPDATE: I just emptied all columns with special characters and replaced them with the cell values of the first row for both tables (only where possible; auto-increment ID cells, for example, were not changed).
The same behavior is observed on the LAMP server, with the difference that the SQL query now shows "No results" at a different number of rows added to the second table. On the first try I added 2037 rows: "No results". Deleted the last row: still "No results". Deleted one more and all was fine (at 2035 rows). Added the same row (2036) again: all fine. Added a new row (2037): all fine. I kept adding rows one by one with INSERT queries, am now at row 2039, and the search results work correctly. Where can this inconsistent behavior be coming from? Is there some "variable" limit on the number of LEFT JOIN rows the LAMP server can process, since this is a shared hosting environment? What else can it be?
UPDATE 2: I am now inclined to think this has something to do with the hosting provider's service rather than the queries or the DB themselves. I'll keep investigating.
OK, so after spending two weeks looking into the SQL queries and pushing the hosting support to investigate the issue on their side, it turned out there was a max_join_size limit set to 7,000,000, which in practice returned only about 2,000+ records. If a join exceeded this limit, the server returned no results at all. Nice.
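If you run into the same thing, you can check the limit and, where the host permits it, lift it per session; a sketch:
SHOW SESSION VARIABLES LIKE 'max_join_size';
SET SESSION SQL_BIG_SELECTS = 1;      -- tells MySQL to ignore max_join_size for this session
SET SESSION max_join_size = DEFAULT;  -- or reset the cap to its (effectively unlimited) default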
It turns out that what once used to be a great hosting service is now becoming a real pain in the *.

What is the maximum records I should fetch from a MySQL database?

My server is running slow as I am trying to fetch 200 records from a MySQL database (using PHP). They are posts that I need to display, and I know this is where the problem lies because fetching 1 record is fast, while fetching 200 slows it down tremendously.
Is this a known problem, where fetching too many entries causes a slowdown?
Your PHP code is probably a complicated function looping once per record, so it runs 200 times. That will slow the page response time. Fetching 200 records in MySQL is no problem at all; it runs instantly in the MySQL terminal.
There are three possibilities that might slow your server down on your side:
Your database is not optimized. Optimizing your database can give you a tremendous performance increase
Your query is doing something wrong. We need to see what query you are running to get the 200 rows.
You are running an individual query for each row in a loop.
What I would suggest, though, is to base your query on this, e.g.:
SELECT fields FROM table WHERE condition = 'required condition' LIMIT 200
Also, if that query runs slowly, then do an EXPLAIN to see what indexing it's using:
EXPLAIN SELECT fields FROM table WHERE condition = 'required condition' LIMIT 200
Fetching 200 rows should take milliseconds.
You can fetch as many records as your table can store.
For an unsigned INT the largest value is 4,294,967,295.
For an unsigned BIGINT the largest value is 18,446,744,073,709,551,615.
To access records quickly, define a LIMIT in the query.
You should fetch only the records needed for display, no more, no less. Do not fetch records for (simple) calculations, as those can be done in the query.
I would say that 50-100 records is the most a user's brain can scan while taking in all the info in the records.
I am an exception: when seeing more than 15 records, my brain tilts :)
