PHP-MySQL: merge many queries into one query for faster execution - php

I have a PHP script that adds a new record after checking for it in table1, table2 and table3: if the record does not exist, it is inserted into table3; otherwise the existing record is updated in table1 or table2 (wherever it exists).
I have a large amount of data to check. Is it possible to perform this task using a single MySQL query?
Thanks in advance.

Please keep in mind that joining two large tables may be a lot slower than using 2 or 3 separate queries to get the data out of them one by one. The main question is what you consider huge. Joining millions of rows is rarely a good idea in MySQL, as far as I know, if you have large rows.
So while doing it in one query is definitely possible, it may not be the economical thing to do.
We also need some info about row sizes, indexes, the basic query syntax and things like that.
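For what it's worth, when the insert-or-update decision only involves a single table with a UNIQUE key, MySQL can collapse the existence check and the write into one statement with INSERT ... ON DUPLICATE KEY UPDATE. The table and column names below are invented for illustration; routing records across three tables, as described in the question, would still need separate statements or a stored procedure.
-- Assumes table3 has a UNIQUE index on record_code (hypothetical schema).
-- If the key is new the row is inserted; if it already exists,
-- the matching row is updated instead, all in one statement.
INSERT INTO table3 (record_code, name, updated_at)
VALUES ('ABC123', 'Example name', NOW())
ON DUPLICATE KEY UPDATE
    name = VALUES(name),
    updated_at = NOW();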

Related

How to Improve Select Query Performance For Large Data in Mysql

Currently I am working on a PHP project. For a project extension I needed to add more data to the MySQL database, but the data only had to go into one particular table, and it has been added. That table is now 610.1 MB and has 3,491,534 rows. One more thing: one column in that table has 22 distinct values; one of those values covers about 1,700,000 rows and another about 800,000.
Since then, running a SELECT statement takes a long time (6.890 sec) to execute. Every column in that table that can usefully be indexed has an index, yet it still takes that long.
I tried two things to speed up retrieval:
1. A stored procedure, with the relevant column indexes in place.
2. Partitions.
Both still took a long time to execute a SELECT query against the distinct values that have the most rows. Can anyone suggest a better alternative for my problem, or point out a mistake in what I already tried?
When working with as many rows as you have, be careful with heavy, deeply nested SELECT statements. Each level of nesting uses more resources to get to the result you want.
If you are using something like:
SELECT DISTINCT column FROM table
WHERE condition
and it still takes long to execute even though you have indexes and partitions in place, then the bottleneck might be physical resources.
Tune your structure first, and then tune your code.
Hope this helps.
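As a quick sanity check (a sketch with made-up table and column names), EXPLAIN will show whether the index is actually used for the filter, and a composite index covering both the filter column and the selected column lets MySQL answer the DISTINCT query from the index alone:
-- Placeholder names; adapt to the real table.
EXPLAIN SELECT DISTINCT some_column
FROM big_table
WHERE category_id = 42;

-- If the plan reports a full table scan, a covering composite index can help:
CREATE INDEX idx_big_table_category_some
    ON big_table (category_id, some_column);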

What's the best way to sync row data in two tables in MySQL?

I have two tables, to make it easy, consider the following as an example.
contacts (has name and email)
messages (messages, but also has name and email, which need to be kept in sync with the contacts table)
Now please, for those who are itching to say "use a relational method" or a foreign key etc., I know, but this situation is different. I need to have a "copy" of the name and email on the messages table itself, and it only needs to be synced from time to time.
As per the syncing requirement, I need to sync the names in messages with the latest names in the contacts table.
I basically have the following UPDATE SQL in a loop over all rows in the contacts table:
UPDATE messages SET name=(
SELECT name FROM contacts WHERE email = '$cur_email')
WHERE email='$cur_email'
The above loops through all the contacts and is fired as many times as I have contacts.
I have several looping ideas to do this as well without using an inner SELECT, but I just thought the above would be more efficient (is it?). I was wondering, though, whether there is an SQL way that's more efficient? Like:
UPDATE messages SET name=(
SELECT name FROM contacts WHERE email = '$cur_email')
WHERE messages.email=contacts.email
something that looks like a join?
I think this should be more efficient:
UPDATE messages m JOIN contacts n on m.email=n.email SET m.name=n.name
OK, I figured it out now, using a JOIN in the UPDATE, like:
UPDATE messages JOIN contacts ON messages.email = contacts.email
SET messages.name = contacts.name
WHERE messages.name != contacts.name
It's fairly simple!
BUT... I'm not sure if this is really the answer to my post, since my question is what the best way is in terms of efficiency.
Executing the above query on 2000 records gave my system a 4 second pause, whereas executing a few SELECTs, a PHP loop, and a few UPDATE statements felt faster.
Hmmmm.
------ UPDATE --------
Well, I went ahead and created 2 scripts to test this...
On my quad-core i7 Ivy Bridge machine, surprisingly, a single UPDATE query via an SQL JOIN is MUCH SLOWER than the multi-query, loop approach.
On one side I have the simple query above running on 1000 records, where all records need updating:
script execution time was 4.92 seconds! It caused my machine to hiccup for a split second, and I noticed a 100% spike on one of my cores.
Succeeding calls to the script (where no fields needed updating) took the same amount of time! Ridiculous.
The other side involves a SELECT ... JOIN query to find all rows needing an update, plus a simple UPDATE query looped in a foreach() in PHP.
The script took
3.45 seconds to do all the updates (at around a 50% single-core spike)
and
1.04 seconds on succeeding runs (where no fields needed updating).
Case closed...
Hope this helps the community!
PS
This is what I meant when debating some logic with programmers who are too deep into "coding standards", whose argument is "do it on the SQL side if you can" because it is faster and more standard than the crude method of evaluating and updating in loops, which they called "dirty" code. Sheesh.
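One caveat on this benchmark (an assumption on my part, since the test scripts aren't shown): the single-query approach only avoids rewriting unchanged rows if the WHERE clause filters them out and the join column is indexed, something along these lines:
-- Skip rows whose name already matches, so repeat runs touch nothing.
-- NULL-safe comparison; assumes messages.email and contacts.email are indexed.
UPDATE messages
JOIN contacts ON messages.email = contacts.email
SET messages.name = contacts.name
WHERE NOT (messages.name <=> contacts.name);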

Which is better: to fire SQL queries or to process the result in PHP?

I have two tables, Question and Option, with a one-to-many relationship. After fetching the result from the two tables, the resulting array should look like this:
Question = array(
//result from the Question Table
Option = array(
//result from the Option Table
)
);
Suppose I want to fetch 10 Questions from the database; I can do this in two ways (I think):
1. Join the two tables, fetch the data from the database, and then process it with PHP loops and control structures (I prefer this).
2. Fetch the Questions, then fire 10 queries for the Options (this would be easy because we don't need the processing required in option 1).
I want to know which is better: processing the result in PHP, or using more SQL queries.
Generally, writing one join query is much better:
You only make one request to the database instead of 11 (one for the questions and 10 for the options). A database query is a very expensive operation: each query involves a network round trip to the database.
The database engine can optimize the query to run much faster, even disregarding the difference in network round-trip time.
It is much easier to read, understand and modify one join query than PHP code with a query in a loop.
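As a rough sketch of option 1 (the column names are assumptions, since the schema isn't shown), a single query can return the 10 questions together with all of their options, and PHP only has to group the rows by question ID:
-- One round trip: each result row pairs a question with one of its options.
-- Backticks around `Option` because OPTION is a reserved word in MySQL.
SELECT q.id AS question_id, q.title, o.id AS option_id, o.text
FROM (SELECT * FROM Question ORDER BY id LIMIT 10) AS q
LEFT JOIN `Option` o ON o.question_id = q.id
ORDER BY q.id;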
A join would be far faster and saves load on the server, so go with the join (option 1).
Of course the join is better, because this way you only fetch what you need for the UI and the load is balanced.
With the second option you fetch all the records from the DB and then process them as per your need, which is too much...
Remember the basic rule: fetch only the information that is required.

PostgreSQL Query and PHP Assistance

Directly under this small intro here you'll see the layout of the database tables that I'm working with and then you'll see the details of my question. Please provide as much guidance as possible. I am still learning PHP and SQL and I really do appreciate your help as I get the hang of this.
Table One ('bue') --
chp_cd
rgn_no
bgu_cd
work_state
Table Two ('chapterassociation') --
chp_cd
rgn_no
bgu_cd
work_state
Database Type: PostgreSQL
I'm trying to do the following with these two tables, and I think it's a JOIN that I have to do but I'm not all that familiar with it and I'm trying to learn. I've created a query thus far to select a set of data from these tables so that the query isn't run on the entire database. Now with the data selected, I'm trying to do the following...
First and foremost, 'work_state' of table one ('bue') should be checked against 'work_state' of table two ('chapterassociation'). Once a match is found, 'bgu_cd' of table one ('bue') should be matched against 'bgu_cd' of table two ('chapterassociation'). When both matches are found, it will always point to a unique row within the second table ('chapterassociation'). Using that unique row within the second table ('chapterassociation'), the values of 'rgn_no' and 'chp_cd' should be UPDATED within the first table ('bue') to match the values within the second table ('chapterassociation').
I know this is probably asking a lot, but if someone could help me to construct a query to do this, it'd be wonderful! I really do want to learn, as I don't wish to be ignorant of this forever. Though I'm not sure if I completely understand how the JOIN and comparison here would work.
If I'm correct, I'll have to put this into separate queries which will then be in PHP. So for example, it'll probably be a few IF ELSE statements that end with the final result of the final query, which updates the values from table two to table one.
A JOIN will do both levels of matching for you...
bue
INNER JOIN chapterassociation
    ON bue.work_state = chapterassociation.work_state
    AND bue.bgu_cd = chapterassociation.bgu_cd
The actual algorithm is determined by PostgreSQL. It could be a merge join, a hash join, etc., depending on indexes and other statistics about the data. But you don't need to worry about that directly; SQL abstracts it away for you.
Then you just need a mechanism to write the data from one table to the other...
UPDATE bue
SET rgn_no = chapterassociation.rgn_no,
    chp_cd = chapterassociation.chp_cd
FROM chapterassociation
WHERE bue.work_state = chapterassociation.work_state
  AND bue.bgu_cd = chapterassociation.bgu_cd
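Before running the UPDATE, it can help to preview which rows would actually change with a plain SELECT over the same join (just a sketch reusing the column names from the question):
-- Rows in bue whose rgn_no or chp_cd would be changed by the update.
SELECT bue.work_state, bue.bgu_cd,
       bue.rgn_no AS old_rgn_no, chapterassociation.rgn_no AS new_rgn_no,
       bue.chp_cd AS old_chp_cd, chapterassociation.chp_cd AS new_chp_cd
FROM bue
INNER JOIN chapterassociation
    ON bue.work_state = chapterassociation.work_state
    AND bue.bgu_cd = chapterassociation.bgu_cd
WHERE bue.rgn_no IS DISTINCT FROM chapterassociation.rgn_no
   OR bue.chp_cd IS DISTINCT FROM chapterassociation.chp_cd;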

Which of the following SQL queries would be faster? A join on two tables or successive queries?

I have two tables here:
ITEMS
ID| DETAILS| .....| OWNER
USERS:
ID| NAME|....
Where ITEMS.OWNER = USERS.ID
I'm listing the items out with their respective owners' names. For this I could use a join on both tables, or I could select all the ITEMS and loop through them, making an SQL query to retrieve the tuple of each item's owner. That's like:
1 sql with a JOIN
versus
1x20 single table sql queries
Which would be the better approach to take in terms of speed?
Thanks
Of course a JOIN will be faster.
Making 20 queries will imply:
Parsing them 20 times
Making 20 index seeks to find the start of the index range on items
Returning 20 recordsets (each with its own metadata).
Every query has overhead. If you can do something with one query, it's (almost) always better to do it with one query. And most database engines are smarter than you: even if it's better to split a query in some way, the database will figure that out itself.
An example of overhead: if you perform 100 queries, there will be a lot more traffic between your application and your database server.
In general, if you really want to know something about performance, benchmark the various approaches, measure the parameters you're interested in, and make a decision based on the results of the benchmark.
Good luck!
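To make the comparison concrete (a sketch; the column lists are assumptions based on the question), the single-query version is one indexed join instead of one lookup per item:
-- One round trip returning every item together with its owner's name.
-- Assumes USERS.ID is the primary key and ITEMS.OWNER is indexed.
SELECT i.ID, i.DETAILS, u.NAME AS OWNER_NAME
FROM ITEMS i
JOIN USERS u ON u.ID = i.OWNER;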
Executing a join will be much quicker, as well as better practice.
A join would be a lot quicker than performing another query on the child table for each record in the parent table.
You can also enable performance data in SQL to see the results for yourself..
http://wraithnath.blogspot.com/2011/01/getting-performance-data-from-sql.html
N
