SQL search query handled differently on different servers - php

I have a very strange issue that I have been unable to figure out for several days now. I have done a lot of testing, so many of the possible root causes are already excluded, which leaves only the really "exotic" possibilities. I need some fresh ideas because I am stuck.
Some background: the website's source files and database (both identical) are installed on two servers, one WAMP and one LAMP.
The issue I face is with the website queries related to search results. The search queries are built from two database tables using a LEFT JOIN, joined on an entry ID.
This is an example of one of the search queries:
$tables = $tblprefix."vehicles AS v
LEFT JOIN ".$tblprefix."vehicles_car AS vc on v.id = vc.v_id LEFT JOIN ".$tblprefix."models AS model on v.model = model.id";
}
else {
$fields = " v.*, vc.exterior_color";
The search queries themselves are correct and work perfectly on both servers, so this is just an example.
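Assembled, these fragments produce a query of roughly this shape (the prefix_ table prefix and the WHERE clause are placeholders, since that part of the application code is not shown here):

SELECT v.*, vc.exterior_color
FROM prefix_vehicles AS v
LEFT JOIN prefix_vehicles_car AS vc ON v.id = vc.v_id
LEFT JOIN prefix_models AS model ON v.model = model.id
WHERE ...  -- search filters built elsewhere in the application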
The different scenarios: from a CSV file I upload entries into the main DB table, "vehicles". When a search is performed after this upload, the results show all the uploaded entries, i.e. everything works correctly. I have tried adding more than 27,000 rows and all of them are displayed without a glitch.
Then I start uploading the entries for the second table, "vehicles_car". Up to about 200-215 inserted entries, everything works correctly.
Now the issue: when I insert more than 210-220 entries into the second table, the search queries suddenly show "No results", but only for the website installed on the LAMP server.
The website on WAMP works no matter how many entries are loaded into the two tables. For some reason only the queries on the LAMP server fail, and only when the second table has more than roughly 200 entries.
Note: the number of table entries at which "No results" appears varies - it works with 215 entries, then I insert one more and it shows "No results"; I delete that last entry and it still shows "No results"; I delete one more - still "No results"; I keep deleting entries from the second table and suddenly it shows the correct search results again. Really inconsistent behavior.
The strangest thing is that I exported the entire DB from the LAMP server while the queries were showing "No results" and imported it into the WAMP server - and it works there!
So, any ideas what might be causing the queries to work on one server and not on the other (and only when more than a certain number of rows exist in the second joined table)? I suspect it is something in the DB.
LAMP server - MySQL 5.5.32, InnoDB, phpMyAdmin 2.8.0.1
WAMP server - MySQL 5.6.14, InnoDB, phpMyAdmin 4.0.9
Any fresh ideas will be appreciated because I am really really stuck!!!
Thank you!
UPDATE: I just emptied all columns containing special characters and replaced their contents with the values from the first row, for both tables (only where possible - auto-increment ID cells, for example, were not changed).
The same behavior is observed on the LAMP server, with the difference that the SQL query now shows "No results" at a different number of rows in the second table. First try: added 2,037 rows - "No results". Deleted the last row - still "No results"; deleted one more and everything is fine (at 2,035 rows). Added the same row back (2,036) - all fine; added a new row (2,037) - all fine. I kept adding rows one by one with an INSERT query and at row 2,039 the search results still work correctly. Where can this inconsistent behavior be coming from? Is there some "variable" limit on the LEFT JOIN queries that the LAMP server can process, since this is a shared hosting environment? What else could it be?
UPDATE 2: I am now inclined to think this has something to do with the hosting provider's service rather than with the queries or the DB themselves. I keep investigating.

OK, so after spending two weeks looking into the SQL queries and pushing the hosting support to investigate the issue on their side, it turned out that the host had max_join_size set to 7,000,000, which in practice meant the join only worked up to roughly 2,000+ records. If a query would examine more rows than this limit, the server returns no results. Nice.
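For anyone who hits the same wall and can at least run session-level statements, a hedged sketch of how to check and lift the limit for one connection (strictly speaking, MySQL raises error 1104 rather than silently returning nothing when the limit is exceeded, so the application was most likely swallowing that error):

-- See what the host has configured
SHOW VARIABLES LIKE 'max_join_size';
SHOW VARIABLES LIKE 'sql_big_selects';

-- Lift the limit for the current connection (run right after connecting)
SET SESSION SQL_BIG_SELECTS = 1;
-- or raise the allowed row-combination estimate directly
SET SESSION MAX_JOIN_SIZE = DEFAULT;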
It turns out that what once used to be a great hosting service is now becoming a real pain in the *.

Related

CodeIgniter ActiveRecord query large tables causes MySQL to freeze

I'm trying to fetch all the customers and their remarks + products and treatments ... in one query.
The following query causes the MySQL database to crash on the server even though the following tables...
remarks_treatments
remarks_arrangements
remarks_products
I only have around 100,000 rows, which should be no problem for MySQL.
Here is a screenshot of the query.
I can't even print the...
$this->db->last_query()
... to paste it into phpMyAdmin for optimisation/debugging, because running the query causes the whole database and website to freeze.
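If the page never renders far enough to echo last_query(), one hedged workaround is to let MySQL itself record the statements it receives, assuming you have the privileges to change global variables on this server (on shared hosting you often do not):

-- Write incoming statements to the mysql.general_log table
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Reproduce the freeze from the web app, then read back the captured SQL
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 20;

-- Turn it off again; the general log is expensive to leave running
SET GLOBAL general_log = 'OFF';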
Thanks in advance.

How to select different records for each of ten sites from a large MySQL database

I have a large database of three million articles in a specific category. I am launching a few sites based on this database, but my budget is low, so the best option for me is a shared host. The problem is that the hardware power a shared host gives each user is weak because it is shared, and if I serve a site a post that has already been posted there, I am in trouble. I used the following method to get new content from the database, but now, with the growing number of records, displaying the information in time takes more power than the shared host has.
My previous method :
I have a table for content
And a statistics table that records which entry has already been posted to each site.
My query is included below:
SELECT * FROM postlink
WHERE `source` = '$mysource'
  AND NOT EXISTS (SELECT sign FROM `state` WHERE postlink.sign = state.sign AND `cite` = '$mycite')
ORDER BY `postlink`.`id` ASC
LIMIT 5
I use MySQL.
I've tested different queries but did not get a good result; showing even a few more posts was very time-consuming.
Now I would like your help with a solution that lets me, with this number of posts and an ordinary shared host, show new content to the requesting site in the shortest possible time.
The problem occurs when the sent-posts statistics table gets too large, and if I empty this table I run into trouble with sending duplicate content, so I have no choice but to keep the statistics table.
The statistics table now holds about 500 thousand entries for the 10 sites.
thanks all in advance
Are you seriously calling 3 million articles a large database? PostgreSQL would not even start making TOASTs at that point.
Consider migrating to a more serious database where you can use partial indexes, table partitioning, materialized views, etc.
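To make that concrete, a hedged sketch using the table and column names from the query above (the 'site1' value is a placeholder for one of the sites):

-- PostgreSQL partial index: index only one site's statistics rows, so the
-- NOT EXISTS probe for that site touches a much smaller structure
CREATE INDEX state_site1_sign_idx ON state (sign) WHERE cite = 'site1';

-- The plain-MySQL equivalent is a composite index covering the probe columns
ALTER TABLE state ADD INDEX idx_cite_sign (cite, sign);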

How to search in MySQL in a 60G table containing 330M rows?

I have a table which is 60G and has about 330M entries.
I must display this on a front-end web-app. On the web-app there is a search function which searches a string pattern in every row of the database table.
The problem is that this search takes up to 10 min and makes the MySQL process freeze. I looked for solutions but haven't found a suitable one.
In-memory database: the database is too big (it will grow to 200 GB - the 60 GB is only the current size)
Split the table into one table per month and put these on 6 SSDs (I need the data from half a year); then it would be possible to search the 6 SSDs in parallel (see the partitioning sketch after this list)
Reduce the amount of data (?)
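On the second idea, MySQL can express the per-month split directly as range partitions on a single table instead of six hand-maintained tables; a hedged sketch, where log_table, created_at and the month boundaries are assumed placeholders (note that the partition column must be part of every unique key on the table):

ALTER TABLE log_table
  PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2014_01 VALUES LESS THAN (TO_DAYS('2014-02-01')),
    PARTITION p2014_02 VALUES LESS THAN (TO_DAYS('2014-03-01')),
    PARTITION p2014_03 VALUES LESS THAN (TO_DAYS('2014-04-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
  );
-- A search bounded by created_at then only scans the matching partitions.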
Image is here: http://i.stack.imgur.com/Q2TyD.png
If you are using an implicit cursor to search through the DB, you could consider closing it after, say, every 50 rows and then reopening it at the row where you stopped.
You need to make use of database sharding here. It basically splits up your big database into several small databases.
Here's a quick link for you: http://codefutures.com/database-sharding/

What's the best way to sync row data in two tables in MySQL?

I have two tables; to make it easy, consider the following as an example.
contacts (has name and email)
messages (the messages themselves, but also name and email, which need to be kept in sync with the contacts table)
Now please, for those who are itching to say "use the relational method", foreign keys, etc.: I know, but this situation is different. I need a "copy" of the name and email on the messages table itself, and I only need to sync it from time to time.
As per the syncing requirement, I need to sync the names on the messages with the latest names on the contacts table.
I basically run the following UPDATE SQL in a loop over all rows in the contacts table:
UPDATE messages SET name=(
SELECT name FROM contacts WHERE email = '$cur_email')
WHERE email='$cur_email'
The above loops through all the contacts and is fired as many times as I have contacts.
I have several looping ideas that avoid the inner SELECT as well, but I thought the above would be more efficient (is it?). Still, I was wondering if there's an SQL way that's more efficient, like:
UPDATE messages SET name=(
SELECT name FROM contacts WHERE email = '$cur_email')
WHERE messages.email=contacts.email
something that looks like a join?
I think this should be more efficient:
UPDATE messages m JOIN contacts n on m.email=n.email SET m.name=n.name
OK, I figured it out now, using a JOIN in the UPDATE,
like:
UPDATE messages JOIN contacts ON messages.email = contacts.email
SET messages.name = contacts.name
WHERE messages.name != contacts.name
it's fairly simple!
BUT... I'm not sure this is really the answer to my post, since my question is what the best way is in terms of efficiency.
Executing the above query on 2,000 records gave my system a 4-second pause, whereas executing a few SELECTs, a PHP loop, and a few UPDATE statements felt faster.
hmmmmm
------ UPDATE --------
Well, I went ahead and created 2 scripts to test this.
On my quad-core i7 Ivy Bridge machine, surprisingly,
a single UPDATE query via SQL JOIN is MUCH SLOWER than a multi-query-and-loop approach.
On one side I have the above single query running on 1,000 records, where all records need updating:
script execution time was 4.92 seconds, and it caused my machine to hiccup for a split second - I noticed a 100% spike on one of my cores.
Succeeding calls to the script (where no fields needed updating) took the same amount of time! Ridiculous.
On the other side, a SELECT ... JOIN query for all rows needing an update plus a simple UPDATE query looped in a foreach() in PHP
took the script
3.45 seconds to do all the updates (around a 50% single-core spike)
and
1.04 seconds on succeeding runs (where no fields needed updating).
Case closed...
hope this helps the community!
ps
This is what I meant when debating some logic with programmers who are too much into "coding standards", whose argument is "do it on the SQL side if you can" because it is supposedly faster and more standard than the crude method of evaluating and updating in loops, which they called "dirty" code. Sheesh.
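Worth noting: a timing like 4.92 seconds for only 1,000 rows usually points at the join column not being indexed, which forces the single UPDATE ... JOIN to scan one table once per row of the other. A hedged sketch, assuming neither email column is indexed yet:

-- Index the join column on both sides so the UPDATE ... JOIN can look up
-- matching rows instead of scanning the whole table for each one
ALTER TABLE contacts ADD INDEX idx_contacts_email (email);
ALTER TABLE messages ADD INDEX idx_messages_email (email);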

Performance of MySQL

My PHP application sends a SELECT statement to MySQL with HTTPClient.
It takes about 20 seconds or more.
I think MySQL can't produce the result immediately, because MySQL Administrator shows the state as "Sending data" or "Copying to tmp table" while I am waiting for the result.
But when I send the same SELECT statement from another application, like phpMyAdmin or JMeter, it takes 2 seconds or less - 10 times faster!
Does anyone know why MySQL performs so differently?
As #symcbean already said, PHP's mysql driver buffers query results. This is also why you can issue another mysql_query() while inside a while($row = mysql_fetch_array()) loop.
The reason MySQL Administrator or phpMyAdmin shows results so fast is that they append a LIMIT clause to your query behind your back.
If you want to get your query results fast, I can offer some tips. They involve selecting only what you need, when you need it:
Select only the columns you need; don't throw SELECT * everywhere. This might bite you later when you want another column but forget to add it to the SELECT statement, so do it where it matters (like tables with 100 columns or a million rows).
Don't throw a 20-by-1000 table in front of your user. She can't find what she's looking for in a giant table anyway. Offer sorting and filtering. As a bonus, find out what she generally looks for and offer a way to show those records with a single click.
With very big tables, select only the primary keys of the records you need, then retrieve the additional details in the while() loop (see the sketch after these tips). This might look illogical because you make more queries, but when you deal with queries involving around ~10 tables, hundreds of concurrent users, locks and query caches, things don't always make sense at first :)
These are some tips I learned from my boss and from my own experience. As always, YMMV.
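A minimal sketch of the last tip in plain SQL; the orders/customers tables and their columns are assumed examples, not from the question:

-- Step 1: fetch only the primary keys of the rows the page actually needs
SELECT id FROM orders WHERE status = 'open' ORDER BY created_at DESC LIMIT 50;

-- Step 2 (inside the while() loop, or batched with IN): fetch the heavy
-- detail columns for just those keys
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.id IN (101, 102, 103);  -- the ids returned by step 1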
Does anyone know why MySQL performs so differently?
Because MySQL caches query results, and the operating system caches disk I/O (see this link for a description of the process in Linux)
