Is it possible to get data from one table when the necessary data is in another one (and so on, several times over)? I mean, right now I have to run a query, fetch the array, pull all the data into PHP, make another query... and repeat this several times before I get the result I want. Could it be done in one query, or do you have any ideas on how to optimize something like this? I've been searching, but I really didn't find anything that makes sense to me.
UPDATE:
Thanks! JOIN fixed my problems, but I've got some more to work on. Let's say I've got tables like this:
Users:
ID Name
1 Adam
2 John
3 Lana
Roles (with user IDs):
ID Name Cleaner Soldier Doctor
1 Ship crew 2 1 3
How can I get a result like:
[1, Ship crew, John, Adam, Lana]
in PHP without making many queries? I mean, I'd like to load records from another table for several fields, depending on the primary key ID.
EDIT:
OK, I got it, I just needed some practice with MySQL. It's not as hard as I thought it would be. Thanks for the JOIN; now I know what I was looking for :)
Yes, it is possible: you can use a JOIN to combine the tables, or you can use the IN clause in your query.
http://www.tutorialspoint.com/mysql/mysql-in-clause.htm
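For the updated example above, a minimal sketch of the JOIN approach, assuming the Users and Roles tables exactly as shown; joining Users once per role column is one way to resolve several ID fields against the same table:

-- Hypothetical sketch: join Users once per role column to resolve each ID to a name.
SELECT r.ID,
       r.Name AS role_name,
       c.Name AS cleaner,
       s.Name AS soldier,
       d.Name AS doctor
FROM Roles r
JOIN Users c ON c.ID = r.Cleaner
JOIN Users s ON s.ID = r.Soldier
JOIN Users d ON d.ID = r.Doctor
WHERE r.ID = 1;

With the sample data, this returns one row: 1, Ship crew, John, Adam, Lana.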
I am a junior PHP developer, growing day by day, and I'm stuck on a performance problem, described here:
I am making a search engine in PHP. My database has one table with 41 columns and millions of rows, so obviously it is a very large dataset. In index.php I have a form for searching the data. When the user enters a search keyword and hits submit, the action goes to search.php, which shows the results. The query looks like this:
SELECT * FROM TABLE WHERE product_description LIKE '%mobile%' ORDER BY id ASC LIMIT 10
This is the first query. After the results are shown, I have to run 4 other queries like these:
SELECT DISTINCT(weight_u) as weight from TABLE WHERE product_description LIKE '%mobile%'
SELECT DISTINCT(country_unit) as country_unit from TABLE WHERE product_description LIKE '%mobile%'
SELECT DISTINCT(country) as country from TABLE WHERE product_description LIKE '%mobile%'
SELECT DISTINCT(hs_code) as hscode from TABLE WHERE product_description LIKE '%mobile%'
These queries are for FILTERS. The problem is that when I hit the search button, all of these queries run, and the performance cost makes it very slow.
Is there any faster method to fetch weight, country, country_unit and hs_code, or how else can I achieve this?
The same functionality is implemented here, where the filter bar appears after the table is filled with data. How can I achieve this? Please help.
The full functionality is implemented here.
I have tried to explain my problem in full; if there is any mistake, please let me know and I will improve the question. I am also new to Stack Overflow.
Firstly, are you sure this code is working as you expect? The first query retrieves 10 records matching your search term. Those records might have duplicate weight_u, country_unit, country or hs_code values, so when you then execute the next 4 queries for your filter, it's entirely possible that you will get values back which are not in the first query's results, so the filter might not make sense.
If that's true, I would create the filter values in your client code (PHP): finding the unique values in 10 records is quick and easy, and it reduces the number of database round trips.
Finally, the biggest improvement you can make is to use MySQL's fulltext searching features. The reason your app is slow is because your search terms cannot use an index - you're wild-carding the start as well as the end. It's like searching the phonebook for people whose name contains "ishra" - you have to look at every record to check for a match. Fulltext search indexes are designed for this - they also help with fuzzy matching.
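As a minimal sketch of the fulltext idea, assuming a hypothetical table name products and the product_description column from the question (FULLTEXT indexes require MyISAM, or InnoDB on MySQL 5.6+):

-- Hypothetical sketch: add a FULLTEXT index to the description column.
ALTER TABLE products ADD FULLTEXT INDEX ft_product_description (product_description);

-- The search can then use the index instead of a leading-wildcard LIKE:
SELECT *
FROM products
WHERE MATCH(product_description) AGAINST ('mobile' IN NATURAL LANGUAGE MODE)
ORDER BY id ASC
LIMIT 10;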
I'll give you some tips that will prove useful in many situations when querying a large dataset, or almost any dataset.
Listing the fields you want instead of querying for '*' is better practice. The cost of this grows as you have more columns and more rows.
Always try to use the PKs to look for the data. The more specific the filter, the less it will cost.
An index in this kind of situation would come in pretty handy, as it will make the search much faster.
LIKE queries are generally slow and resource-heavy, and even more so in your situation. So again, the more specific you are, the better it will get.
I'd also add that if you just want to retrieve data from these tables again and again, maybe a VIEW would fit nicely (see the sketch after these tips).
Those are just some tips that came to my mind to ease your problem.
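A minimal illustration of that VIEW idea, again assuming a hypothetical table name products and the columns from the question:

-- Hypothetical sketch: a view exposing only the columns the search page needs,
-- so the application never has to SELECT * from the wide 41-column table.
CREATE VIEW product_search AS
SELECT id, product_description, weight_u, country_unit, country, hs_code
FROM products;

-- The search and filter queries can then run against the narrower view:
SELECT id, product_description
FROM product_search
WHERE product_description LIKE '%mobile%'
ORDER BY id ASC
LIMIT 10;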
Hope it helps.
Here is my query, which I am sure can be optimized extensively, but I couldn't say how off the top of my head.
What I have is a post/comment-like concept for my members, where they can choose to share a post with specific people or with everyone. From this I know my query can potentially produce duplicate results (if I could get it to work in the first place). I need to get a distinct 'ciID' while matching the user's ID, along with any other IDs mentioned elsewhere. The members' IDs are stored in 3 different columns for 3 different reasons.
mID is the member themselves; sharedWith and whos_with kind of speak for themselves. I store the IDs in sharedWith and whos_with like 1111:2222:3333. In all, I have to search across all 3 columns (the member, shared, and whos_with columns) to make sure I get all the results for the IDs being passed to a function in an array. After building the query in a loop, I come up with something like:
select DISTINCT(ciID),* from user_posting
where (mID = 21 OR sharedWith LIKE '%21%' OR whosWith LIKE '%21%')
or (mID = 22 OR sharedWith LIKE '%22%' OR whosWith LIKE '%22%')
or (mID = 45 OR sharedWith LIKE '%45%' OR whosWith LIKE '%45%')
limit 1
Somewhere in there I have a syntax error I am not noticing, and I need another pair of eyes to help me out.
This is definitely wrong:
select DISTINCT(ciID),* from user_posting
It should be:
select DISTINCT(up.ciID), up.* from user_posting up
However I am not quite sure if that will return the expected results.
I think you need to redesign your table structure. Selecting with LIKE doesn't scale that well.
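A minimal sketch of what a redesigned structure could look like, using a hypothetical junction table post_share in place of the colon-delimited sharedWith / whosWith columns:

-- Hypothetical sketch: one row per (post, member) relationship instead of
-- colon-delimited ID lists, so lookups can use an index instead of LIKE '%21%'.
CREATE TABLE post_share (
    ciID      INT NOT NULL,
    member_id INT NOT NULL,
    relation  ENUM('owner','shared','with') NOT NULL,
    PRIMARY KEY (ciID, member_id, relation)
);

-- Finding every post visible to members 21, 22 and 45 then becomes:
SELECT DISTINCT ciID
FROM post_share
WHERE member_id IN (21, 22, 45);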
I have recently written a survey application that has done its job, and all the data has been gathered. Now I have to analyze the data, and I'm having some time issues.
I have to find out how many people selected which option and display it all.
I'm using this query, which does do its job:
SELECT COUNT(*)
FROM survey
WHERE users = ? AND `table` = ? AND col = ? AND `row` = ? AND selected = ?
GROUP BY users, `table`, col, `row`, selected
As is evident from the "?" placeholders, I'm using MySQLi (in PHP) to fetch the data when needed, but I fear this is part of why it is so slow.
The table consists of all the elements above (plus a unique ID), and all of them are integers.
To explain some of the fields:
Each survey was divided into 3 or 4 tables (sized from 2x3 to 5x5) with a 1 to 10 happiness grade to select from. (Questions are on the right and top of the table; you answer where the questions intersect.)
users - age groups
table, row, col - explained above
selected - well, explained above
Now, with the surveys complete and around 1 million entries in the table, the query is getting very slow. Sometimes it takes around 3 minutes; sometimes (I guess) the time limit expires and you get no data at all. I also don't have access to the full database, just my empty "testing" one, since the customer is kinda paranoid :S (and his server seems to be a bit slow).
Now (after the initial essay) my questions are: I left indexing out intentionally because, with a lot of data being written during the survey, it seemed like a bad idea. But since no new data is coming in at this point, would it make sense to index all the fields of the table? How much sense does it make to index integers that never go above 10? (As you can guess, I haven't got a clue about indexes.) Do I need the primary unique ID in this table?
I read somewhere that indexing may help GROUP BY, but only if you group by the first columns of the index (and since my ID is first and, from my point of view, useless, can I remove it and gain anything by it?)
Is there another way to write my query that would basically do the same thing but in a shorter period of time?
Thanks for all your suggestions in advance!
Add an index on the columns that you "GROUP BY" or use in the "WHERE". So that's ONE index incorporating users, table, col, row and selected in your case.
Some quick rules:
Combine the fields so that the WHERE columns come first and the GROUP BY elements come last.
If you have other queries that only use part of it (e.g. users,table,col and selected) then leave the missing value (row, in this example) last.
Don't use too many indexes, as each one slows updates to the table marginally; on a really large system you need to balance query speed against index maintenance.
Edit: do you need the GROUP BY users, col, row at all, since these are already fixed by the WHERE? If the WHERE has already filtered them, you only need to group by "selected".
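A minimal sketch of that, using the table and column names from the question (note that `table` and `row` are back-quoted, since they can clash with reserved words):

-- Hypothetical sketch: one composite index covering the WHERE and GROUP BY columns.
ALTER TABLE survey
    ADD INDEX idx_survey_answers (users, `table`, col, `row`, selected);

-- With every other column pinned by the WHERE, the count can group by selected alone:
SELECT selected, COUNT(*) AS votes
FROM survey
WHERE users = ? AND `table` = ? AND col = ? AND `row` = ?
GROUP BY selected;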
I have several models that are related to one another through a HABTM relationship.
Workouts has many Exercises |
Exercises has many Workouts |
Exercises has one Logs |
Users has many Exercises_Workouts
All of these table relations are set in one table
What I would like to do:
As you can see, user_id and workout_id are not unique but exercise_id and log_id will always be unique.
I want to find the data for one user, then all of their workouts, and have it return all the exercises and their corresponding information, as well as each exercise's log information.
Final output would look something like this.
I have tried several methods and none of them have returned positive results. I would also like to hear how someone much more experienced than myself would handle this situation. The only thing I can think of that would possibly get what I want is multiple SELECT statements.
Thank you for your help.
Cheers!
"All of these table relations are set in one table": What do you call that table? And that's not the usual way to define relationships.
"Workouts has many Exercises | Exercises has many Workouts | Exercises has one Logs | Users has many Exercises_Workouts": Workouts HABTM Exercises, (if logs is not connected to any other table, include its fields in exercises table), User hasMany Workouts.
"I have tried several methods and none of them have returned positive results. I would also like to hear how someone much more experienced than myself would handle this situation. The only thing I can think of that would possibly get what I want is multiple SELECT statements.":
How much are you familiar with SQL, PHP, and CakePHP? If you are new to all those, it's kinda hard to explain how to do what you want. Show us what approaches you have used so far.
Thank you for your help. I ended up figuring out my own answer.
The reason all of these have to be in one table is that I need to be able to access a single ID that relates to all of these branches. Also, each of the branches is not to be updated, only referenced. What I ended up doing was using the Containable behavior. I actually found this answer while looking through my own code; I had a sneaking suspicion that I had come across this problem earlier.
The Containable behavior allowed me to go deeper into my associations than I was able to before.
Originally I was only pulling data from Workout->Exercise->Log, but I also needed LogDay (individual entries). So I used the Containable behavior to get that data as well as to strip out other unnecessary data.
Thanks again!
I'm running an SQL query to get basic details from a number of tables, sorted by the last-update date field. It's terribly tricky, and I'm wondering whether there is an alternative to using the UNION clause... I'm working in PHP and MySQL.
Actually, I have a few tables containing news, articles, photos, events, etc., and I need to collect all of them in one query to show a simple "what's newly added on the website" kind of thing.
Maybe do it in PHP rather than MySQL: if you want the latest n items, fetch the latest n of each of your news items, articles, photos and events, and sort them in PHP (you'll need the latest n of each, obviously, and you'll then trim the combined set in PHP). This is probably easier than combining them with UNION, given they're likely to have lots of data items which are different.
I'm not aware of an alternative to UNION that does what you want, and hopefully those fetches won't be too expensive. It would definitely be wise to profile this though.
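For reference, if you do keep it on the SQL side, here is a minimal sketch of the UNION approach with hypothetical table and column names:

-- Hypothetical sketch: pull the latest rows from each table with a type label,
-- then sort the combined result by the shared last-update column.
(SELECT id, title, updated_at, 'news' AS item_type FROM news)
UNION ALL
(SELECT id, title, updated_at, 'article' AS item_type FROM articles)
UNION ALL
(SELECT id, title, updated_at, 'photo' AS item_type FROM photos)
UNION ALL
(SELECT id, title, updated_at, 'event' AS item_type FROM events)
ORDER BY updated_at DESC
LIMIT 20;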
If you use JOIN in your query, you can select data from different tables that are related by foreign keys.
You can look at this from another angle: do you need absolutely up-to-date information (the moment someone enters new information, it should appear)?
If not, you can have a table holding the results of the query in the format you need (serving as a cache), and update this table every 5 minutes or so. Then your query problem becomes trivial, and you can have the updates run as several smaller updates in the background.
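A minimal sketch of that caching idea, with hypothetical names; the refresh statements could run from a cron job every few minutes:

-- Hypothetical sketch: a pre-computed "latest items" table refreshed in the background.
CREATE TABLE latest_items (
    id         INT NOT NULL,
    title      VARCHAR(255),
    updated_at DATETIME,
    item_type  VARCHAR(20) NOT NULL,
    PRIMARY KEY (item_type, id)
);

-- Background refresh (one statement per source table), e.g. from cron:
REPLACE INTO latest_items (id, title, updated_at, item_type)
SELECT id, title, updated_at, 'news' FROM news
ORDER BY updated_at DESC
LIMIT 20;

-- The page itself then only needs a trivial query:
SELECT * FROM latest_items ORDER BY updated_at DESC LIMIT 20;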