I'm trying to fetch all the customers and their remarks + products and treatments ... in one query.
The following query causes the MySQL database to crash on the server even though the following tables...
remarks_treatments
remarks_arrangements
remarks_products
I only have around 100,000 rows, which should be no problem for MySQL.
Here is a screenshot of the query.
I can't even print the...
$this->db->last_query()
... to paste it into phpMyAdmin for optimisation/debugging, because it causes the whole database and website to freeze.
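One hedged workaround, assuming this is CodeIgniter 3's query builder ($this->db->last_query() suggests CodeIgniter): build the SQL string without executing it, then log it, so the statement can still be inspected even though running it hangs the server. The table name below is a placeholder, not from the post.

// Sketch, assuming CodeIgniter 3: get_compiled_select() returns the SQL
// string without sending it to MySQL, so the query can be captured even
// though executing it freezes the database.
$this->db->select('*')->from('customers');   // 'customers' is a placeholder
// ... the joins on remarks_treatments / remarks_products etc. go here ...
$sql = $this->db->get_compiled_select();
log_message('error', $sql);                   // ends up in application/logs/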
Thanks in advance.
I have a very strange issue that I have been unable to figure out for several days now. I have done a lot of testing, so many of the possible root causes are now excluded, which leaves room for the really "exotic" possibilities, and I need some help with fresh ideas because I am stuck.
Some background: the website's source files and database (both identical) are installed on two servers, one WAMP and one LAMP.
The issue I face is with the website's queries related to search results. The search queries are built from two database tables using a LEFT JOIN. The join is done on an entry ID.
This is an example of one of the search queries:
$tables = $tblprefix."vehicles AS v
    LEFT JOIN ".$tblprefix."vehicles_car AS vc ON v.id = vc.v_id
    LEFT JOIN ".$tblprefix."models AS model ON v.model = model.id";
$fields = " v.*, vc.exterior_color";
The search queries themselves are correct and work perfectly on both servers, so this is just an example.
The different scenarios: from a CSV file I upload entries for the main DB table, called "vehicles". When a search is performed after this upload, the results show all uploaded entries, i.e. everything works correctly. I have tried adding more than 27,000 rows and all are displayed without a glitch.
Then I go on and start uploading the entries for the second table, "vehicles_car". With up to about 200-215 entries inserted, everything works correctly.
Now the issue: when I insert more than 210-220 entries into the second table, the search queries suddenly show "No results", but only for the website installed on the LAMP server.
The website on WAMP works no matter how many entries are loaded into the two tables. For some reason only the queries on the LAMP server fail, and only if the second table has more than 200 or so entries.
Note: the number of table entries at which "No results" is shown varies. It works with 215 entries; I insert one more and it shows "No results"; I delete that last entry and it continues to show "No results"; I delete one more, still "No results"; I keep deleting entries from the second table and it suddenly shows the correct search results again. Really inconsistent behavior.
The strangest thing is that I exported the entire DB from the LAMP server while the queries were showing "No results" and imported it into the WAMP server. And it works there!!!
So, any ideas what might be the issue (I suspect it is something in the DB) that causes the queries to work on one server and not on the other, and only when more than a certain number of rows exist in the second joined table?
LAMP server - MySQL 5.5.32, InnoDB, phpMyAdmin 2.8.0.1
WAMP server - MySQL 5.6.14, InnoDB, phpMyAdmin 4.0.9
Any fresh ideas will be appreciated because I am really really stuck!!!
Thank you!
UPDATE: I just emptied all columns containing special characters and replaced them with the cell values of the first row, for both tables (only where possible - auto-increment ID cells, for example, were not changed).
The same behavior is observed on the LAMP server, with the difference that the query now shows "No results" at a different number of rows added to the second table. On the first try I added 2,037 rows: "No results". Deleted the last row: "No results". Deleted one more: all fine (at 2,035 rows). Added the same row again (2,036): all fine. Added a new row (2,037): all fine. I kept adding rows one by one with INSERT queries, am now at 2,039 rows, and the search results work correctly. Where can this inconsistent behavior be coming from? Is there some "variable" limit on the number of LEFT JOIN queries the LAMP server can process, since this is a shared hosting environment? What else can it be?
UPDATE 2: I am now inclined to think that this has something to do with the hosting provider's service, rather than the queries or the DB themselves. I keep investigating.
OK, so after spending two weeks looking into the SQL queries and pushing the hosting support to investigate the issue on their side, it turned out that max_join_size was set to 7,000,000, which would in practice return only about 2,000+ records. If the result was going to exceed this limit, the server returned no results at all. Nice.
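For anyone hitting the same wall, a hedged sketch of how to inspect and lift the limit for your own session; these are standard MySQL variables, though whether a shared host lets you change them is another matter:

-- See the current limit (7,000,000 in my case)
SHOW VARIABLES LIKE 'max_join_size';

-- Tell MySQL to ignore max_join_size for this connection entirely...
SET SESSION SQL_BIG_SELECTS = 1;
-- ...or raise the limit explicitly instead
SET SESSION max_join_size = 100000000;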
It turns out that what once used to be a great hosting service is now becoming a real pain in the *.
Currently, I am working on a PHP project. For an extension of the project I needed to add more data to the MySQL database, all of it into one particular table. That table is now 610.1 MB in size and has 3,491,534 rows. One more thing: there are 22 distinct values in that table; one distinct value accounts for 1,700,000 rows and another for 800,000.
Since then, running a SELECT statement takes a long time (6.890 sec) to execute. Every column in that table that can usefully have an index has one, yet it still takes that long.
I tried two things to speed up retrieval:
1. A stored procedure, with indexes on the relevant table columns.
2. Partitions.
Both still took a long time to execute a SELECT query against the distinct values that have the most rows. Can anyone suggest a better alternative for my problem, or let me know if I made a mistake in what I tried?
When working with as many rows as you do, you should be careful with heavy, complex nested SELECT statements. Each level of nesting uses more resources to get to the results you want.
If you are using something like:
SELECT DISTINCT column FROM table
WHERE condition
and it still takes long to execute even though you have indexes and partitions in place, then the bottleneck might be physical resources.
Tune your structure first, then tune your code. A quick way to check the structure is EXPLAIN, sketched below.
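A hedged sketch of that check; the table and column names are placeholders, not from the question:

EXPLAIN SELECT col1, col2
FROM big_table
WHERE frequent_value_col = 'some_value';

-- If the type column says ALL (a full table scan), an index on the
-- filter column is the first thing to try:
ALTER TABLE big_table ADD INDEX idx_frequent_value (frequent_value_col);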
Hope this helps.
I have two tables; to make it easy, consider the following as an example:
contacts (has name and email)
messages (messages, but also has name and email, which need to be kept in sync with the contacts table)
Now please, for those who are itching to say "use the relational method" or "use a foreign key" etc.: I know, but this situation is different. I need to have a "copy" of the name and email on the messages table itself, and it only needs to be synced from time to time.
As per the syncing requirement, I need to sync the names in the messages table with the latest names in the contacts table.
I basically run the following UPDATE SQL in a loop over all rows in the contacts table:
UPDATE messages SET name=(
SELECT name FROM contacts WHERE email = '$cur_email')
WHERE email='$cur_email'
The above loops through all the contacts and is fired once per contact.
I have several looping ideas that avoid the inner SELECT as well, but I just thought the above would be more efficient (is it?). I was wondering if there's a pure-SQL way that's more efficient? Like:
UPDATE messages SET name=(
SELECT name FROM contacts WHERE email = '$cur_email')
WHERE messages.email=contacts.email
something that looks like a join?
I think this should be more efficient:
UPDATE messages m
JOIN contacts n ON m.email = n.email
SET m.name = n.name
Ok, I figured it out now: using a JOIN on the UPDATE,
like:
UPDATE messages JOIN contacts ON messages.email = contacts.email
SET messages.name = contacts.name
WHERE messages.name != contacts.name
it's fairly simple!
BUT... I'm not sure this is really the ANSWER TO MY POST, since my question was what the BEST way is in terms of efficiency.
Executing the above query on 2,000 records gave my system a 4-second pause, whereas executing a few SELECTs, a PHP loop, and a few UPDATE statements felt faster.
hmmmmm
------ UPDATE --------
Well, I went ahead and created two scripts to test this.
On my quad-core i7 Ivy Bridge machine, surprisingly,
a single UPDATE query via SQL JOIN is MUCH SLOWER than a multi-query, loop approach.
On one side I have the simple query above running on 1,000 records, where all records need updating.
Script execution time was 4.92 seconds, and it caused my machine to hiccup for a split second; I noticed a 100% spike on one of my cores.
Succeeding calls to the script (where no fields needed updating) took the same amount of time! Ridiculous.
The other side involves a SELECT JOIN query for all rows needing an update, and a simple UPDATE query looped over in a foreach() in PHP.
The script took 3.45 seconds to do all the updates (at around a 50% single-core spike) and 1.04 seconds on succeeding runs (where no fields needed updating).
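For reference, a minimal sketch of the loop approach I benchmarked; it assumes mysqli and that messages has an id primary key (an assumption, since the post never shows the schema):

// Sketch of the "select join, then loop" approach; assumes a mysqli
// connection $db and an assumed `id` primary key on messages.
$res = $db->query(
    "SELECT m.id, c.name
     FROM messages m
     JOIN contacts c ON c.email = m.email
     WHERE m.name != c.name"            // only rows that actually need updating
);

$stmt = $db->prepare("UPDATE messages SET name = ? WHERE id = ?");
while ($row = $res->fetch_assoc()) {
    $stmt->bind_param('si', $row['name'], $row['id']);
    $stmt->execute();                   // one small indexed UPDATE per changed row
}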
Case closed...
hope this helps the community!
ps
This is what I meant when debating some logic with programmers who are too much into "coding standards", where the argument is "do it on the SQL side if you can" because it is faster and more standard than the "crude" method of evaluating and updating in loops, which they called "dirty" code. Sheesh.
My PHP application sends a SELECT statement to MySQL with HTTPClient.
It takes about 20 seconds or more.
I thought MySQL couldn't return the result immediately, because MySQL Administrator showed the state as "sending data" or "copying to tmp table" while I was waiting for the result.
But when I send the same SELECT statement from another application like phpMyAdmin or JMeter, it takes 2 seconds or less. 10 times faster!!
Does anyone know why MySQL performs so differently?
Like #symcbean already said, PHP's mysql driver caches query results. This is also why you can run another mysql_query() while in a while ($row = mysql_fetch_array()) loop.
The reason MySQL Administrator or phpMyAdmin shows results so fast is that they append a LIMIT 10 to your query behind your back.
If you want your query results fast, I can offer some tips. They involve selecting only what you need, when you need it:
Select only the columns you need; don't throw SELECT * everywhere. This might bite you later when you want another column but forget to add it to the SELECT statement, so do this where it matters (like tables with 100 columns or a million rows).
Don't throw a 20-by-1000 table in front of your user. She can't find what she's looking for in a giant table anyway. Offer sorting and filtering. As a bonus, find out what she generally looks for and offer a way to show those records with a single click.
With very big tables, select only the primary keys of the records you need, then retrieve the additional details in the while() loop (a sketch follows after these tips). This might look illogical because you make more queries, but when you deal with queries involving around ~10 tables, hundreds of concurrent users, locks and query caches, things don't always make sense at first :)
These are some tips I learned from my boss and from my own experience. As always, YMMV.
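A minimal sketch of the keys-first pattern from the last tip, assuming mysqli and hypothetical table/column names:

// Hypothetical example: fetch only the keys first...
$ids = array();
$res = $db->query("SELECT id FROM orders WHERE status = 'open'");   // keys only
while ($row = $res->fetch_assoc()) {
    $ids[] = (int) $row['id'];
}

// ...then pull the heavy details only for the rows actually displayed
foreach (array_slice($ids, 0, 20) as $id) {
    $detail = $db->query("SELECT * FROM orders WHERE id = $id")->fetch_assoc();
    // render $detail here
}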
Does anyone know why MySQL performs so differently?
Because MySQL caches query results, and the operating system caches disk I/O (see this link for a description of the process in Linux)
Trying to cut down on the number of queries on my site... Why would a single query run as multiple queries? Is there a way to fix this?
For example, from the following line of code (line 43)...
$model = Menu::model()->findAll();
We can see in my query log that 4 separate queries were fired...
Or am I just reading this wrong?
Rows 1, 2 & 4 in your screenshot above are doing database queries.
ActiveRecord in Yii runs SHOW COLUMNS FROM <table> and SHOW CREATE TABLE <table> before the query so it knows what columns / column types the table has. In production mode you can turn on schema caching to reduce these queries:
http://www.yiiframework.com/doc/blog/1.1/en/final.deployment#enabling-schema-caching
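A minimal config sketch of what that looks like, assuming Yii 1.1 and a file-based cache (the connection string is a placeholder):

// protected/config/main.php - sketch, assuming Yii 1.1
return array(
    // ...
    'components' => array(
        // schema caching needs a cache component; CFileCache is the simplest
        'cache' => array('class' => 'CFileCache'),
        'db' => array(
            'connectionString' => 'mysql:host=localhost;dbname=mydb',  // placeholder
            // cache the SHOW COLUMNS / SHOW CREATE TABLE results for 1 hour
            'schemaCachingDuration' => 3600,
        ),
    ),
);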