I have 3 tables :
-User
-SuperUser
-SpecialUser
SpecialUser and SuperUser are extensions of the User table. Let's say the User table has {name, email}, SuperUser has {idUser, superPower} and SpecialUser has {idUser, hairColor}.
Of course a SpecialUser and a SuperUser have a name and email from the User table. This means that a User can be a SuperUser or a SpecialUser.
My question is: how do I perform a query that gets all the info of a user (I don't know before the query whether he is a SpecialUser or a SuperUser)?
I thought about 2 methods :
-Putting a column "userType" in the user table (0: he is a SpecialUser, 1: he is a SuperUser)
With this method, should I do a MySQL query with an IF inside (and what would that query be)? Or should I do a simple query (getting the user table alone), then do an if in PHP and query the right table (the super or special table)?
OR
-Not putting any column and doing a MySQL query with 2 joins on the id of the user (technically one of the 2 joins won't return anything)
Which one should be used ? (I care about speed performance, less about memory - Let's say that the tables have over 1 million rows)
How I would do this, in pseudo-SQL:
SELECT * FROM user
LEFT OUTER JOIN superuser ON superuser.idUser = user.id
LEFT OUTER JOIN specialuser ON specialuser.idUser = user.id
And return everything. If superuser's fields are not null, then you can operate on that in the PHP, same for specialuser. This gives you two advantages:
1) You don't need a field to say what kind of user the user is anymore, the contents of the join will tell you.
2) You can have a user be a superuser and a specialuser at once, if you wished. (And if you didn't want that to happen, you can prevent it using a constraint or similar)
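The double LEFT JOIN approach above can be sketched end to end. This uses Python's sqlite3 as a stand-in for MySQL; the table and column names (user, superuser, specialuser, idUser, superPower, hairColor) follow the question, and the sample rows are made up for illustration.

```python
import sqlite3

# In-memory stand-in for the MySQL schema from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
CREATE TABLE superuser (idUser INTEGER, superPower TEXT);
CREATE TABLE specialuser (idUser INTEGER, hairColor TEXT);
INSERT INTO user VALUES (1, 'Clark', 'clark@example.com'), (2, 'Ann', 'ann@example.com');
INSERT INTO superuser VALUES (1, 'flight');
INSERT INTO specialuser VALUES (2, 'red');
""")

# One query, two outer joins; at most one side matches per user.
rows = con.execute("""
SELECT u.id, u.name, su.superPower, sp.hairColor
FROM user u
LEFT OUTER JOIN superuser su ON su.idUser = u.id
LEFT OUTER JOIN specialuser sp ON sp.idUser = u.id
""").fetchall()

for uid, name, power, hair in rows:
    # Whichever joined side is non-NULL tells you the user's type.
    kind = "super" if power is not None else "special" if hair is not None else "plain"
    print(uid, name, kind)
```

The NULL checks in the loop are exactly the "the contents of the join will tell you" point: no userType column is needed.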
Doing this without branching might be tricky, though I have an idea, since you said memory is not a problem but performance is.
First, make a column in the User table and call it userType like you suggested, and store the table name followed by that table's columns in a parse-able string:
(SuperUser/SpecialUser), ("idUser, superPower"/"idUser, hairColor")
Example:
"SuperUser, idUser, superPower" //for a SuperUser
In pseudo code:
SELECT `userType` FROM `usersTable` WHERE name = [put var here]
Let the returned value be valRet
parse the valRet into an array... //the first value is the table name, the remaining values are column names
make a second sql query based on the array
Performance-wise I believe this is good because there is no branching. However, I'm not sure what kind of performance hit you would take from the string parsing. Try benchmarking it on a few thousand queries to see.
Also check out the answer to a similar question here:
Using an IF Statement in a MySQL SELECT query
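The parse-then-query step in this answer can be sketched briefly. The stored string format ("TableName, col1, col2") is this answer's own convention, not a standard, and the names are the question's. Note that interpolating parsed names into SQL is only safe if the userType column can never contain user-supplied text.

```python
# Value that would have been read from usersTable.userType.
val_ret = "SuperUser, idUser, superPower"

# First token is the table name, the rest are column names.
parts = [p.strip() for p in val_ret.split(",")]
table, columns = parts[0], parts[1:]

# Build the second query from the parsed pieces (id still parameterized).
second_query = "SELECT %s FROM %s WHERE idUser = ?" % (", ".join(columns), table)
print(second_query)   # SELECT idUser, superPower FROM SuperUser WHERE idUser = ?
```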
So I need to left join a table from MySQL with a couple of thousand ids.
It's like I need to temporarily build a table for the join and then delete it, but that just doesn't sound right.
Currently the task is done in code but proves pretty slow, and an SQL query might be faster.
My thought was to use ...WHERE ID IN (".$string_of_values.");
But that cuts off the ids that have no match on the table.
So, how is it possible to tell MySQL to LEFT JOIN a table with a list of ids?
As I understand your task, you need to left join your working table to your ids, i.e. the output must contain all of these ids even if there is no matching row in the working table. Am I correct?
If so, then you must convert your ids list to a rowset.
You already tried saving them to a table. This is a useful and safe practice. Some additional points:
If your dataset is used once and may be dropped immediately after the final query executes, then you may create this table as TEMPORARY. Then you need not care about this table: it will be deleted automatically when the connection is closed, but it may be reused (including editing its data) in this connection until it is closed. Of course, in that case the queries which create and fill this table and the final query must be executed over the same connection.
If your dataset is small enough (approximately, no more than a few megabytes) then you may create this table with the option ENGINE = MEMORY. In this case only the table definition file (a small text file) is actually written to disk, whereas the table body is stored in memory only, so access to it will be fast.
You may create one or more indexes on such a table and improve the final query's performance.
All these options may be combined.
Another option is to create such rowset dynamically.
In MySQL 5.x the only option is to create such a rowset in a derived-table subquery. Like
SELECT ...
FROM ( SELECT 1 AS id UNION SELECT 2 UNION SELECT 22 ... ) AS ids_rowset
LEFT JOIN {working tables}
...
In MySQL 8+ you have additional options.
You may do the same but using a CTE:
WITH ids_rowset AS ( SELECT 1 AS id UNION SELECT 2 UNION SELECT 22 ... )
SELECT ...
FROM ids_rowset
LEFT JOIN {working tables}
...
Alternatively, you may transfer your ids list in some serialized form and parse it into a rowset within the query (in a recursive CTE, or by using some table function, for example JSON_TABLE).
All these methods create a once-used rowset (of course, a CTE can be reused within the query). And this rowset cannot be indexed to improve the query (the server may index this dataset during query execution if it finds that reasonable, but you cannot influence this).
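The temporary-table option described above can be sketched with Python's sqlite3 (MySQL syntax differs slightly, e.g. ENGINE = MEMORY); the working table's name, columns, and data are made up for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# The "working table" being joined against; id 2 deliberately has no row.
con.execute("CREATE TABLE working (id INTEGER, payload TEXT)")
con.executemany("INSERT INTO working VALUES (?, ?)", [(1, "a"), (22, "b")])

# Save the ids list into a temporary table; the PK doubles as an index.
con.execute("CREATE TEMPORARY TABLE ids (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO ids VALUES (?)", [(1,), (2,), (22,)])

# LEFT JOIN from the ids rowset: unmatched ids survive with NULLs.
rows = con.execute("""
SELECT ids.id, working.payload
FROM ids
LEFT JOIN working ON working.id = ids.id
ORDER BY ids.id
""").fetchall()
print(rows)   # [(1, 'a'), (2, None), (22, 'b')]
```

The (2, None) row is the whole point of joining from the ids side: a plain WHERE ID IN (...) would have dropped it.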
I am trying to replace a column in the result of a select query as shown in
this reference, but unlike the example I have many columns in the table, so I cannot specify the name of every column in the select query.
I tried some ways to attain the same but none seems effective.
select
*, REPLACE(REPLACE(role_id, 1, "admin"), 2, "moderator") AS role_id
from user;
or
Select *
from user
where role_id = (select REPLACE(role_id, 1, "admin") as role_id from user);
Here we assume only two possible values for role_id; however, in certain instances it might have to get data from another table, i.e. a different table that holds different ids and the values corresponding to them.
So is there a way to satisfy the following conditions in a single query:
to replace the values of some fields returned from the select query (with many columns, writing the names of all the columns individually is not feasible)
to get the replacement values from different tables for different columns of a single table.
I need to implement the above conditions in one query, but there should be no changes in the database; only the result of the select query needs to be transformed.
I have already referred to the following, but they did not help.
Link 1
Link 2
Link 3
I am using phpMyAdmin as the front end and PHP as the implementation language.
If I have understood your question correctly, it's easier to use CASE/WHEN:
SELECT *,
CASE WHEN role_id = 1 THEN "admin" WHEN role_id = 2 THEN "moderator" END AS role_id
FROM user;
But it may be easier still to have an array in PHP,
$roles = array("1" => "admin", "2" => "moderator", .... );
and look it up in the array. That will keep your query short and sweet. The advantage of this approach is that you don't need to change your query every time you add a new role. If you get a large number of roles (say dozens) you might actually want a separate table for them.
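Both variants from this answer can be sketched side by side, using Python's sqlite3 in place of MySQL and a dict in place of the PHP $roles array; the user table and role ids mirror the question's assumptions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (name TEXT, role_id INTEGER)")
con.executemany("INSERT INTO user VALUES (?, ?)", [("alice", 1), ("bob", 2)])

# Variant 1: translate role_id inside SQL with CASE/WHEN.
rows = con.execute("""
SELECT name,
       CASE WHEN role_id = 1 THEN 'admin'
            WHEN role_id = 2 THEN 'moderator' END AS role
FROM user
""").fetchall()

# Variant 2: keep the query plain and map in application code
# (the PHP $roles array, as a dict here).
roles = {1: "admin", 2: "moderator"}
mapped = [(name, roles.get(rid))
          for name, rid in con.execute("SELECT name, role_id FROM user")]

assert rows == mapped   # both variants produce the same result
```

Variant 2 is the one that stays short as roles are added, which is the answer's point.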
I built a like system for a website and I'm facing a dilemma.
I have a table where all the items which can be liked are stored. Call it the "item table".
In order to preserve the speed of the server, do I have to :
add a column in the item table.
It means that I have to search (with a regex in my PHP) inside a string where the IDs of all the users who have liked the item are stored, each time a user likes the item. This is to verify whether the user in question has already liked the item; depending on that, I show a different button in my HTML.
Problem > If an item has (by chance) 3000 likes, I fear the string will become very big and heavy to regex each time there is a like on it...
add a specific new table (LikedBy) and record each like separately with the ID of the liker, the name of the item and the state of the like (liked or not).
Problem > In this case, I fear for the MySQL server, with thousands of rows to analyze each time a new user likes one popular item...
Server version: 5.5.36-cll-lve MySQL Community Server (GPL) by Atomicorp
Should I put the load on the PHP script or on the MySQL database? Which is more performant (and scalable)?
If, for some reason, my question does not make sense, could anyone tell me the right way to do the trick?
Thanks.
You have to create another table, call it likes_table, containing id_user INT and id_item INT. That's how it should be done; if you go with your proposed first solution, your database won't be normalized and you'll face too many issues in the future.
To get the like count of an item you just have to:
SELECT COUNT(*) FROM likes_table WHERE id_item='id_item_you_are_looking_for';
To get what a given user liked:
SELECT id_item FROM likes_table WHERE id_user='id_user_you_are_looking_for';
No regex needed at all, and your database is well normalized so data can be found easily. You can tell MySQL to index id_user and id_item, making the pair unique in likes_table; this way all your queries will run much faster.
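A sketch of the likes_table with the unique (id_user, id_item) pair, using Python's sqlite3 as a stand-in for MySQL; the sample ids are made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE likes_table (
    id_user INTEGER,
    id_item INTEGER,
    UNIQUE (id_user, id_item)   -- one like per (user, item) pair
)""")
con.executemany("INSERT INTO likes_table VALUES (?, ?)",
                [(1, 10), (2, 10), (1, 11)])

# Like count for item 10:
count = con.execute("SELECT COUNT(*) FROM likes_table WHERE id_item = ?",
                    (10,)).fetchone()[0]

# Items user 1 has liked:
items = [r[0] for r in con.execute(
    "SELECT id_item FROM likes_table WHERE id_user = ?", (1,))]

# A duplicate like is rejected at the database level:
try:
    con.execute("INSERT INTO likes_table VALUES (1, 10)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The unique constraint both backs the index that speeds up these lookups and replaces the "has this user already liked it?" regex check entirely.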
With MySQL you can set the user ID and the item ID as a unique pair. This should improve performance by a lot.
Your table would have these 2 columns: item id, and user id. Every row would be a like.
I have a table in MySQL that I'm accessing from PHP. For example, let's have a table named THINGS:
things.ID - int primary key
things.name - varchar
things.owner_ID - int for joining with another table
My select statement to get what I need might look like:
SELECT * FROM things WHERE owner_ID = 99;
Pretty straightforward. Now, I'd like users to be able to specify a completely arbitrary order for the items returned from this query. The list will be displayed, they can then click an "up" or "down" button next to a row and have it moved up or down the list, or possibly a drag-and-drop operation to move it to anywhere else. I'd like this order to be saved in the database (same or other table). The custom order would be unique for the set of rows for each owner_ID.
I've searched for ways to provide this ordering without luck. I've thought of a few ways to implement this, but help me fill in the final option:
Add an INT column and set its value to whatever I need to get rows
returned in my order. This presents the problem of scanning
row-by-row to find the insertion point, and possibly needing to
update the sort column of the preceding/following rows.
Having a "next" and "previous" column, implementing a linked list.
Once I find my place, I'll just have to update max 2 rows to insert
the row. But this requires scanning for the location from row #1.
Some SQL/relational DB trick I'm unaware of...
I'm looking for an answer to #3 because it may be out there, who knows. Plus, I'd like to offload as much as I can on the database.
From what I've read, you need a new table containing the ordering for each user; say it's called *user_orderings*.
This table should contain the user ID, the position of the thing, and the ID of the thing. (user_id, thing_id) should be the PK. This way you need to update this table on every reorder, but you can get the things for a user in the order he/she wants by using ORDER BY on the user_orderings table and joining it with the things table. It should work.
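The user_orderings idea can be sketched with Python's sqlite3; the things table follows the question's schema, and the position values are made up to show a custom order.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE things (ID INTEGER PRIMARY KEY, name TEXT, owner_ID INTEGER);
CREATE TABLE user_orderings (
    user_id INTEGER, thing_id INTEGER, position INTEGER,
    PRIMARY KEY (user_id, thing_id)
);
INSERT INTO things VALUES (1, 'hat', 99), (2, 'mug', 99), (3, 'pen', 99);
-- User 99's custom order: pen first, then hat, then mug.
INSERT INTO user_orderings VALUES (99, 3, 1), (99, 1, 2), (99, 2, 3);
""")

# Join things to the ordering table and sort by the stored position.
names = [r[0] for r in con.execute("""
SELECT t.name
FROM things t
JOIN user_orderings o ON o.thing_id = t.ID AND o.user_id = t.owner_ID
WHERE t.owner_ID = 99
ORDER BY o.position
""")]
print(names)   # ['pen', 'hat', 'mug']
```

Moving a row up or down is then an UPDATE of the affected position values in user_orderings only; the things table never changes.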
The simplest expression of an ordered list is: 3,1,2,4. We can store this as a string in the parent table; so if our table is photos with the foreign key profile_id, we'd place our photo order in profiles.photo_order. We can then consider this field in our order by clause by utilizing the find_in_set() function. This requires either two queries or a join. I use two queries but the join is more interesting, so here it is:
select photos.photo_id, photos.caption
from photos
join profiles on profiles.profile_id = photos.profile_id
where photos.profile_id = 1
order by find_in_set(photos.photo_id, profiles.photo_order);
Note that you would probably not want to use find_in_set() in a where clause due to performance implications, but in an order by clause, there are few enough results to make this fast.
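The find_in_set() ordering above can be reproduced with Python's sqlite3, which lacks FIND_IN_SET(), by registering an equivalent user-defined function; the photos/profiles schema follows this answer, and the sample data is made up.

```python
import sqlite3

def find_in_set(needle, haystack):
    # 1-based position of needle in a comma-separated list, 0 if absent,
    # mimicking MySQL's FIND_IN_SET().
    items = haystack.split(",")
    return items.index(str(needle)) + 1 if str(needle) in items else 0

con = sqlite3.connect(":memory:")
con.create_function("find_in_set", 2, find_in_set)
con.executescript("""
CREATE TABLE profiles (profile_id INTEGER PRIMARY KEY, photo_order TEXT);
CREATE TABLE photos (photo_id INTEGER PRIMARY KEY, profile_id INTEGER, caption TEXT);
INSERT INTO profiles VALUES (1, '3,1,2');
INSERT INTO photos VALUES (1, 1, 'beach'), (2, 1, 'dog'), (3, 1, 'sunset');
""")

captions = [r[0] for r in con.execute("""
SELECT photos.caption
FROM photos
JOIN profiles ON profiles.profile_id = photos.profile_id
WHERE photos.profile_id = 1
ORDER BY find_in_set(photos.photo_id, profiles.photo_order)
""")]
print(captions)   # ['sunset', 'beach', 'dog']
```

With photo_order '3,1,2', photo 3 sorts first, matching the stored string exactly.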
I have an array of user ids in a query from Database A, Table A (AA).
I have the main user database in Database B, Table A (BA).
For each user id returned in my result array from AA, I want to retrieve the first and last name of that user id from BA.
Different user accounts control each database. Unfortunately each login cannot have permissions to each database.
Question: How can I retrieve the firsts and lasts with the least amount of queries and / or processing time? With 20 users in the array? With 20,000 users in the array? Any order of magnitude higher, if applicable?
Using php 5 / mysql 5.
As long as the databases are on the same server, just use a cross-database join. The DB login being used to access the data will also need permissions on both databases. Something like:
SELECT AA.userID, BA.first, BA.last
FROM databaseA.tableA AA
INNER JOIN databaseB.tableA BA ON AA.userID = BA.userID
(Note that MySQL has no separate schema level; tables are qualified as database.table.)
In response to comments:
I don't believe I read the part about multiple logins correctly, sorry. You cannot use two different MySQL logins on one connection. If you need to do multiple queries you really only have three options: A) loop through the first result set and run multiple queries; B) run a query which uses a WHERE clause with userID IN (#firstResultSet), passing in the first result set; C) select everything out of the second DB and join them in code.
All three of those options are not very good, so I would ask, why can't you change user permissions on one of the two DBs? I would also ask, why would you need to select the names and IDs of 20,000 users? Unless this is some type of data dump, I would be looking for a different way to display the data which would be both easier to use and less query intensive.
All that said, whichever option you choose will be based on a variety of different circumstances. With a low number of records, under 1,000, I would use option B. With a higher number of records, I would probably use options C and try to place the two result sets into something that can be joined (such as using array_combine).
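The cross-database join from this answer can be sketched with Python's sqlite3, where ATTACH plays the role of MySQL's database.table qualification: one connection (one login) sees both databases. The database, table, and column names are the question's, and the sample row is made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS dbB")   # second database, same connection
con.executescript("""
CREATE TABLE tableA (userID INTEGER);                             -- database A
CREATE TABLE dbB.tableA (userID INTEGER, first TEXT, last TEXT);  -- database B
INSERT INTO tableA VALUES (7);
INSERT INTO dbB.tableA VALUES (7, 'Ada', 'Lovelace');
""")

# Join across the two databases in a single query.
row = con.execute("""
SELECT AA.userID, BA.first, BA.last
FROM main.tableA AA
INNER JOIN dbB.tableA BA ON AA.userID = BA.userID
""").fetchone()
print(row)   # (7, 'Ada', 'Lovelace')
```

This only works because a single connection has access to both databases, which is exactly the permissions condition the answer states.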
I think the key here is that it should be possible in two database calls.
Your first one to get the id's from database A and the second one to pass them to database B.
I don't know MySQL, but in SQL Server I'd use the XML datatype and pass all of the ids into a statement using that. Before the XML datatype existed, I'd have built up some dynamic SQL with the ids in an IN statement.
SELECT UserId FROM DatabaseA.TableA
Loop through id's and build up a comma separated string.
"SELECT FirstName, Surname FROM DataBaseB.TableA WHERE UserId IN(" + stringId + ")"
The problem with this is that with 20,000 ids you may have some performance issues with the amount of data you are sending. This is where I'd use the XML datatype, so maybe look at what alternatives MySQL has for passing lists of ids.
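The two-call, comma-separated IN list above can be sketched with Python's sqlite3, building the list from placeholders rather than string concatenation (which sidesteps quoting and injection problems); the table and column names follow this answer, and the sample rows are made up.

```python
import sqlite3

# Ids that would have come back from database A's first query.
ids = [1, 2, 22]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableA (UserId INTEGER, FirstName TEXT, Surname TEXT)")
con.executemany("INSERT INTO tableA VALUES (?, ?, ?)",
                [(1, "Ann", "Lee"), (22, "Bo", "Ng"), (5, "Cy", "Ode")])

# Second call: one placeholder per id instead of a concatenated string.
placeholders = ", ".join("?" for _ in ids)    # "?, ?, ?"
rows = con.execute(
    "SELECT FirstName, Surname FROM tableA WHERE UserId IN (%s) ORDER BY UserId"
    % placeholders,
    ids).fetchall()
print(rows)   # [('Ann', 'Lee'), ('Bo', 'Ng')]
```

For very large id lists the placeholder count can hit driver limits, which is where the answer's point about a serialized format (XML, or JSON in MySQL 8+) comes in.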