So my problem is that I want to select rows where league_id is equal to something, but I store multiple values in one cell.
So this is my table, and as you can see, I have two values in one cell.
I want to select rows from the DB where league_id is equal to 2.
The best way would be to normalize this data and store the leagues in a separate table, but if you are not able or willing to do that, you can use the FIND_IN_SET function in MySQL.
SELECT *
FROM YourTable t
WHERE
FIND_IN_SET('2', t.league_id) > 0
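If you can normalize instead, a minimal sketch might look like this (the entry_league table, the id column and the join key are hypothetical; adjust them to your schema):
-- Hypothetical junction table: one row per (entry, league) pair
CREATE TABLE entry_league (
    entry_id  INT NOT NULL,   -- assumed to reference YourTable.id
    league_id INT NOT NULL,
    PRIMARY KEY (entry_id, league_id)
);
-- Selecting rows for league 2 then becomes a plain, indexable lookup
SELECT t.*
FROM YourTable t
JOIN entry_league el ON el.entry_id = t.id
WHERE el.league_id = 2;
Unlike FIND_IN_SET, this kind of query can use an index on league_id.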
I have 2 tables, suppose a and b.
a has id, name, roll. b has id, group, name.
The data in the name columns are not the same. How can I select them and uniquely identify each?
I know about
SELECT a.id,a.name,a.group FROM a,b ............
I know this. But this is just an example. I am working with a huge amount of data, with 20-30 columns in each table. So I don't want to write out the column names I need to select; rather, I want to write the names that I want to exclude.
Like
SELECT * Except b.name............
Or is there any way to uniquely identify them after the join? Like
.......... a,b WHERE a.name as name1
Please don't ask why those column names are the same. I admit it was a mistake, but it's already implemented and heavily used, so I'm looking for another way. Is there any simple way to exclude a column while merging them?
Well, you can't write the names you wish to exclude. That is not how SQL works.
However, if writing out 20-30 column names is that much of a burden, you can use information_schema.columns. I say that because 20-30 column names is not a particularly large number, and writing them out is probably less effort than writing the question.
But, back to the solution. It looks something like this:
select concat(c.column_name, ' as ', 'a_', column_name, ', ')
from information_schema.columns c
where table_name = 'a' ;
You might want to include the table schema as well.
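A variation that builds the whole select list in one row, ready to paste into your query (assuming MySQL and that table a lives in the current schema):
SELECT GROUP_CONCAT(CONCAT('a.', column_name, ' AS a_', column_name) SEPARATOR ', ') AS select_list
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name = 'a';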
As an idea: if you want to avoid the columns of one specific table and your statement involves multiple tables, you can try the following.
Suppose you have 20 columns in table a and 5 columns in table b, and you want to avoid col2, col3 and col4 of table b. The standard method is to write out all the column names of table a plus the required columns of table b. But you can avoid writing the long list of 20 columns of table a by writing a.* and then typing the required columns of table b. Please see the statement below.
Select a.*, b.col1, b.col5 from a, b
But if you need to exclude some columns from both tables, then I think there is no other way than writing out all the required column names from both tables.
There is no way to exclude a column in a SQL SELECT statement; you can only select columns. You can give alias names to columns while selecting them, like below, so that you can identify the columns using those aliases.
SELECT a.id AS column1, a.name AS column2, a.`group` AS column3 FROM a,b ............
There is no way to exclude a specific column, but you can avoid writing out all the column names and ease the job with the steps below:
Step 1: Execute the query below:
SELECT a.*, b.* FROM a, b ............ limit 1;
Step 2: Export it into CSV format with headings.
Step 3: Copy the first (heading) row from the CSV.
Step 4: Delete the columns that are not required and use the remaining columns in your query.
There's only one way I could see:
First create a temporary table with the same structure and data:
CREATE TEMPORARY TABLE IF NOT EXISTS mytable ENGINE=MyISAM
AS SELECT * FROM YourTable;
/* Drop the columns that are not needed */
ALTER TABLE mytable
DROP COLUMN ColumnToDrop;
/* Get the results and drop the temp table */
SELECT * FROM mytable;
DROP TEMPORARY TABLE mytable;
I have a page that displays information from two different tables, and for that I have two queries.
There is no related info between these two tables.
Since both queries may contain a lot of information, I need to create pagination.
BUT I don't want two separate paginations, I want only one that will contain results from query 1 and query 2 together.
How can I do that?
The only idea I have is to fetch all the info from both queries into arrays, then combine the arrays into one, then create pagination based on that array.
That, of course, would not save resources.
You could use a union - the columns you're displaying must line up, so something like this should work:
select
col1 col1_alias,
col2 col2_alias,
...
from
table1
where
...
union
select
col1,
col2,
...
from
table2
where
...
order by col1_alias, col2_alias
limit 10
Basically the union will pull all the data together, and the order by and limit will apply to the whole result set.
The names of the columns don't need to match in the second select, but use column names from the first select for your order by (or create aliases, which is probably more readable depending on your dataset).
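As a rough sketch of what page 2 with 10 rows per page could look like (the news and events tables and their columns are made up for illustration):
SELECT id, title, created_at FROM news
UNION ALL
SELECT id, title, created_at FROM events
ORDER BY created_at DESC
LIMIT 10 OFFSET 10  -- page 2: skip the first 10 combined rows
UNION ALL keeps duplicate rows instead of removing them, which is usually what you want here and saves MySQL the de-duplication work a plain UNION does.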
I am currently working on a school system where we have a parent course and a child course (meta_courses in Moodle).
So, we have a table mdl_course_meta and it has 3 fields: id, parent_course and child_course.
My problem is that a parent course can have many child courses, which means that, for example, parent_course = 50 can appear twice in the table because it has 2 child courses. I just want to be able to find all the parent courses without the same value being returned two or more times. I'm currently using this query right now, which obviously doesn't do what I want:
$q = "SELECT * FROM mdl_course_meta";
I am working with PHP as well by the way.
Thanks a lot.
SELECT DISTINCT parent_course from mdl_course_meta
That should do it if you just want the course names. One thing to keep in mind: if you want other fields, this is not going to work the way you want it to (how would it know which record to choose if there are multiple records with the same parent_course and you only want one?).
This approach can only be used if you only want to return the parent_courses without duplicates.
DISTINCT helps to eliminate duplicates. If a query returns a result that contains duplicate rows, you can remove duplicates to produce a result set in which every row is unique. To do this, include the keyword DISTINCT after SELECT and before the output column list.
$q = "SELECT DISTINCT parent_course FROM mdl_course_meta";
If you don't want duplicate values in a single column, use GROUP BY parent_course.
In this way you are free to select any column.
If you only want distinct values for a particular column, then you can use GROUP BY:
SELECT *
FROM mdl_course_meta
GROUP BY parent_course
The values in the other columns will be arbitrary. This will work in MySQL 5.x.
MySQL 4.x won't let you be arbitrary, so you can't mix aggregate and non-aggregate columns. Instead, you'd have to do something like this, which gets a bit complicated:
SELECT MAX(col1), MAX(col2), parent_course, MAX(col4), ...
FROM mdl_course_meta
GROUP BY parent_course
This way, the values aren't arbitrary. You've specified the ones you want.
I'm trying to query 2 tables where the first table will return 1 row and the second table will return multiple rows. So basically the first table will return text on a page and the second table will return a list that will go within the page. Both tables have a reference column, which is what both tables are queried on. (See below)
SELECT shop_rigs.*, shop_rigs_images.*, shop_rigs_parts.*
FROM shop_rigs
LEFT JOIN shop_rigs_images
ON shop_rigs.shoprigs_ref = shop_rigs_images.shoprigsimg_ref
LEFT JOIN shop_rigs_parts
ON shop_rigs.shoprigs_ref = shop_rigs_parts.shoprigsparts_ref
WHERE shoprigs_enabled='1' AND shoprigs_ref='$rig_select'
ORDER BY shoprigs_order ASC
Is it better to just do 2 queries?
Thanks,
dane
I would do this in two queries. The problem isn't efficiency or the size of the respective tables; the problem is that you're creating a Cartesian product between shop_rigs_images and shop_rigs_parts.
Meaning that if a given row of shop_rigs has three images and four parts, you'll get back 3x4 = 12 rows for that single shop_rig.
So here's how I'd write it:
SELECT ...
FROM shop_rigs
INNER JOIN shop_rigs_images
ON shop_rigs.shoprigs_ref = shop_rigs_images.shoprigsimg_ref
WHERE shoprigs_enabled='1' AND shoprigs_ref='$rig_select'
ORDER BY shoprigs_order ASC
SELECT ...
FROM shop_rigs
INNER JOIN shop_rigs_parts
ON shop_rigs.shoprigs_ref = shop_rigs_parts.shoprigsparts_ref
WHERE shoprigs_enabled='1' AND shoprigs_ref='$rig_select'
ORDER BY shoprigs_order ASC
I left the select-list of columns out, because I agree with @Doug Kress that you should select only the columns you need from a given query, not all columns with *.
If you're pulling a large amount of data from the first table, then it would be better to do two queries.
Also, for efficiency, it would be better to specify each column that you actually need, instead of all columns - that way, less data will be fetched and retrieved.
Joins are usually more efficient than running 2 queries, as long as you are joining on indexes, but then it depends on your data and indexes.
You may want to run an "EXPLAIN SELECT ..." for both options and compare "possible_keys" and "rows" in the results.
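For example, something like this should show the plan for the image query (same tables and columns as above):
EXPLAIN
SELECT shop_rigs.*, shop_rigs_images.*
FROM shop_rigs
INNER JOIN shop_rigs_images
    ON shop_rigs.shoprigs_ref = shop_rigs_images.shoprigsimg_ref
WHERE shoprigs_enabled='1' AND shoprigs_ref='$rig_select';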
I am writing a converter to transfer data from old systems to new systems. I am using php+mysql.
I have one table that contains millions of records with duplicate entries. I want to transfer that data into a new table and remove the duplicate entries. I am using the following query and pseudocode to perform this task:
INSERT INTO table2
SELECT * FROM table1
ON DUPLICATE KEY UPDATE customer_information = CONCAT('$firstName', ',', '$lastName')
It takes ages to process one table :(
I am wondering: is it possible to use GROUP BY and get all the grouped records automatically,
rather than going through each record and checking for duplicates, etc.?
For example
select *
from table1
group by firstName, lastName
-- then insert only one record per group into table2 and add all users'
-- first/last names into the column ALL_NAMES, comma separated
EDIT
There are different records for each customer with different information. A row counts as a duplicate if the user's first and last name are the same. In the new table, we will just add one customer and their purchased products in different columns (we have only 4 products).
I don't know what you are trying to do with customer_information, but if you just want to transfer the non-duplicated set of data from one table to another, this will work:
INSERT IGNORE INTO table2(field1, field2, ... fieldx)
SELECT DISTINCT field1, field2, ... fieldx
FROM table1;
DISTINCT will take care of rows that are exact duplicates. But if you have rows that are only partial duplicates (like the same last and first names but a different email) then IGNORE can help. If you put a unique index on table2(lastname,firstname) then IGNORE will make sure that only the first record with lastnameX, firstnameY from table1 is inserted. Of course, you might not like which record of a pair of partial duplicates is chosen.
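A minimal sketch of that unique index, using the column names from the question (assuming table2 already exists):
-- Make (lastname, firstname) unique so INSERT IGNORE skips partial duplicates
ALTER TABLE table2 ADD UNIQUE KEY uq_name (lastname, firstname);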
ETA
Now that you've updated your question, it appears that you want to put the values of multiple rows into one field. This is, generally speaking, a bad idea, because when you denormalize your data this way you make it much less accessible. Also, if you are grouping by (lastname, firstname), there will not be multiple names to put in allnames. Because of this, my example uses allemails instead. In any event, if you really need to do this, here's how:
INSERT INTO table2(lastname, firstname, allemails)
SELECT lastname, firstname, GROUP_CONCAT(email) as allemails
FROM table1
GROUP BY lastname, firstname;
If they are really duplicate rows (every field is the same) then you can use:
select DISTINCT * from table1
instead of :
select * from table1