MySQL multiple updates in one statement - PHP

So the situation I have is this. I have two tables, one for users and another that stores the email(s) for those users. The email table is related to users by a foreign key user_id. I am trying to set up an archive system where the user can archive these users while keeping them in the production db. But I need to delete the original and re-insert the user to give them a new id. This way they can archive again later, the data won't be corrupted, and any new information will then be archived too.

This is just a small example; I am actually doing this with 10 tables, some of them related to each other. The amount of data involved can be quite large, and when the archive starts it can take several minutes to complete, because for every user I am also checking, archiving, deleting and re-inserting that user. The way I have it set up now, there can be literally several thousand queries to accomplish the complete archive.
So I am trying to rewrite this so that the number of query calls is limited to one, by getting all the users, looping through them and building an insert query that inserts many records with one call. Everything works fine for the users, but when I want to do the same for their emails I run into an issue. As I loop through the users I save each id (the original id) to an array, hoping to use it later to update the new email records with the new user_id that was created. I can get the first id from my big insert, and I know how many records there were, so I know all the new ids. I can set up a for loop to create a new array with the new ids, thereby matching the original id array.
So the question is: is there a way to write an UPDATE statement that performs multiple updates in one call? The indexes of the two arrays will match.
update email_table
set user_id = {new_id_array}
where user_id = {old_id_array}
I have seen several different options out there, but nothing that quite does what I'm trying to do. Any help is very much appreciated.

The simplest way to do what you need, I think, is to have a table containing the old_id <-> new_id relations.
Once you have that data somewhere in your database you just need to join:
https://www.db-fiddle.com/f/bYump2wDn5n2hCxCrZemgt/1
UPDATE email_table e
JOIN replacement r
ON e.user_id = r.old_id
SET e.user_id = r.new_id;
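For example, here is a rough PHP sketch of that approach (assuming PDO, that $oldIds[$i] corresponds to $newIds[$i], and a throwaway replacement table; all names are illustrative):
<?php
// Sketch only: load the old_id <-> new_id pairs, then remap the emails in one UPDATE.
$pdo->exec("CREATE TEMPORARY TABLE replacement (
                old_id INT NOT NULL PRIMARY KEY,
                new_id INT NOT NULL
            )");

$stmt = $pdo->prepare("INSERT INTO replacement (old_id, new_id) VALUES (?, ?)");
foreach ($oldIds as $i => $oldId) {
    $stmt->execute(array($oldId, $newIds[$i]));
}

$pdo->exec("UPDATE email_table e
            JOIN replacement r ON e.user_id = r.old_id
            SET e.user_id = r.new_id");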
But if you still want to work with plain lists, you need to generate a query that uses ELT and FIELD:
https://www.db-fiddle.com/f/noBWqJERm2t399F4jxnUJR/1
UPDATE email_table e
SET e.user_id = ELT(FIELD(e.user_id, 1, 2, 3, 5), 7, 8, 9, 10)
WHERE e.user_id IN (1,2,3,5);
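From PHP that query would have to be generated from the two arrays. A rough sketch (assuming integer ids, that the arrays line up by index, and an existing PDO connection $pdo):
<?php
// Sketch only: build the ELT/FIELD update from two parallel arrays of integers.
$old = array_map('intval', $oldIdArray);
$new = array_map('intval', $newIdArray);

$sql = sprintf(
    "UPDATE email_table e
     SET e.user_id = ELT(FIELD(e.user_id, %s), %s)
     WHERE e.user_id IN (%s)",
    implode(',', $old),
    implode(',', $new),
    implode(',', $old)
);
$pdo->exec($sql);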

Related

storing sum() results in database vs calculating during runtime

I'm new to sql & php and unsure about how to proceed in this situation:
I created a mysql database with two tables.
One is just a list of users with their data, each having a unique id.
The second one awards certain amounts of points to users, with relevant columns being the user id and the amount of awarded points. This table is supposed to get new entries regularly and there's no limit to how many times a single user can appear in it.
On my php page I now want to display a list of users sorted by their point total.
My first approach was creating a "points_total" column in the user table, intending to run some kind of query that would calculate and update the correct total for each user every time new entries are added to the other table. To retrieve the data I could then use a very simple query and even use sql's sort features.
However, while it's easy to update the total for a specific user with a SUM plus a WHERE clause, I don't see a way to do that for the whole user table in one go. After all, plain SQL doesn't offer the ability to iterate over each row of a table, or am I missing a different way?
I could probably do the update by going over the table in php, but then again, I'm not sure if that is even a good approach in the first place, because in a way storing the point data twice (the total in one table and then the point breakdown with some additional information in a different table) seems redundant.
A different option would be forgoing the extra column and instead calculating the sums every time the php page is accessed, then doing the sorting in PHP. However, I suppose this would be much slower than having the data ready in the database, which could be a problem if the tables have a lot of entries?
I'm a bit lost here so any advice would be appreciated.
To get the total points awarded, you could use a query similar to this:
SELECT
`users`.`user_name`,
`users`.`user_id`,
SUM(`points`.`points_award`) AS `points`,
COUNT(`points`.`points_award`) AS `numberOfAwards`
FROM `users`
JOIN `points`
ON `users`.`user_id` = `points`.`user_id`
GROUP BY `users`.`user_id`
ORDER BY `users`.`user_name` -- or whatever users column you want
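If you calculate at runtime, a minimal PHP sketch of using that query could look like this (assuming a PDO connection $pdo and the column names above; here the result is ordered by the point total, as the question asks):
<?php
// Sketch only: compute totals at read time and print users sorted by points.
$sql = "SELECT u.user_name,
               u.user_id,
               SUM(p.points_award)   AS points,
               COUNT(p.points_award) AS numberOfAwards
        FROM users u
        JOIN points p ON u.user_id = p.user_id
        GROUP BY u.user_id, u.user_name
        ORDER BY points DESC";

foreach ($pdo->query($sql) as $row) {
    echo htmlspecialchars($row['user_name']) . ': ' . $row['points'] . "<br>\n";
}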

MySQL data unique for each user

I am building an application using MySQL/cakePHP which involves the following requirements.
Admins create steps which users need to complete, e.g. yes/no type questions. Admins create these steps and new users should get them.
Users are required to complete all these steps when they logon.
My question is: what would be the best way to do this in the database (MySQL)?
When a user signs up, should it query the steps table and create the new rows for each step for that user, i.e. create the rows from the server-side code by looking in the steps table?
Or create a new interim table to store the relationship?
I'm fairly experienced with relational design but a little stumped on the best way to do this and future-proof myself at the same time.
**Mock Structure**
Steps
id
name
order
desc
**Users**
id
username
**Step_Users**
step_id
user_id
result1
result2
result3
Basically your tables are fine. Here is one scenario: someone signs up. You take all the steps from the Steps table and create a form with those fields. The user fills in the form and submits it. Then you write those answers to the step_user table. And somehow you keep track of the fact that the user has finished the answers (probably another field somewhere, or you just check whether there are rows for them in the step_user table). Of course there are different approaches to handle this, based on the exact needs you have. Because you did not mention more details, I could describe more scenarios here, but I'm not sure they would help.
If you want to be sure that each user has unique data for each step, then on the step_user table you should create a composite unique index over the columns step_id and user_id (http://www.w3schools.com/sql/sql_unique.asp), but of course you also need code to check this.
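For example (a sketch using the mock table names from the question; the index name is arbitrary and $pdo is assumed to be an existing PDO connection):
<?php
// Sketch only: enforce one row per (step_id, user_id) pair.
$pdo->exec("ALTER TABLE Step_Users
            ADD UNIQUE KEY uniq_step_user (step_id, user_id)");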
This is how I would design it too. Just to cater for different numbers of steps (questions) later, perhaps save the Steps as one record per question, and do the same for the Step_Users table:
**Step_Users**
step_id
user_id
result
If needed you can add a flag to the Users table which is set when the user has completed all steps.
I would break it down this way:
You have, I assume, one row in the Steps table for each step in the process. You have one row in the Users table for each user in the system. Thus, your number of rows in the Step_Users table should be (rows in Steps) * (rows in Users).
I would recommend putting a boolean value into the Users table which is steps_finished. That way, you can know in an instant, just from pulling from the users table, whether or not they've been marked as finished. Also, future changes to the steps won't affect the user unless you want it to.
Follow these steps when registering or logging in:
1. A user registers a new account (steps_finished = false), OR a user logs in to an existing account.
2. If steps_finished = true, then ignore the rest of this. If false, continue.
3. The system does a select * from Step_Users where user_id=X order by step_id
4. The system does a select * from Steps order by id
5. Count through the rows from step 4 and compare them to the rows from step 3. When you find a row in step 4 that is not among the rows from step 3, give the user that step. (If you get through the entire result from step 4 and none are missing from the rows from step 3, then set steps_finished = true.)
NEXT PAGE SUBMISSION
6. Once they give a result for that step, select * from Step_Users where user_id=X and step_id=Y
7. If that comes back with 0 rows returned, then you know they haven't submitted this step before: insert into Step_Users (step_id, user_id, result1, result2, result3) values (Y, X, 'response1', 'response2', 'response3')
8. Do steps 3-5 above again.
There are many reasons to check the rows in the tables against each other, but the biggest is that the user might stop doing the steps in the middle, and you'll need a way to jump right back in where they left off.
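As a rough PHP sketch of steps 3-5 (assuming PDO, the mock table/column names above, and an existing $userId; note that order is a reserved word in MySQL, hence the backticks):
<?php
// Sketch only: find the first step the user has not submitted yet.
$done = $pdo->prepare("SELECT step_id FROM Step_Users WHERE user_id = ? ORDER BY step_id");
$done->execute(array($userId));
$completed = $done->fetchAll(PDO::FETCH_COLUMN);

$steps = $pdo->query("SELECT id, name FROM Steps ORDER BY `order`, id")->fetchAll(PDO::FETCH_ASSOC);

$nextStep = null;
foreach ($steps as $step) {
    if (!in_array($step['id'], $completed)) {
        $nextStep = $step;   // first step with no Step_Users row for this user
        break;
    }
}

if ($nextStep === null) {
    // every step has a row -> mark the user as finished
    $pdo->prepare("UPDATE Users SET steps_finished = 1 WHERE id = ?")->execute(array($userId));
} else {
    // render the form for $nextStep
}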

Are database queries for everyone in a user list too much?

I am currently using MySQL and MyISAM.
I have a function which returns an array of user IDs, of either friends or users in general in my application, and when displaying them a foreach seemed best.
Now my issue is that I only have the IDs, so I would need to nest a database call to get each user's other info (i.e. name, avatar, other fields) based on the user ID in the loop.
I do not expect hundreds of thousands of users (this is mainly for hobby learning), but how should I do this? I like the flexibility of placing display code in a foreach, but since all I have is an array of IDs, am I out of luck for doing it with a single query?
Any general structures or tips on how I can display the list appropriately?
Is my number of queries (one per user in the list) inappropriate? (Although with pages of users, 10 at a time, I just realized it doesn't seem as bad.)
You could use MySQL's IN() clause, i.e.
SELECT username,email,etc FROM user_table WHERE userid IN (1,15,36,105)
That will return all rows where the userid matches those IDs. It gets less efficient the more IDs you add, but the 10 or so you mention should be just fine.
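If the list of IDs comes from user input, a prepared statement with one placeholder per ID is a safer variant of the same idea. A sketch (assuming PDO and the table/column names from the example above):
<?php
// Sketch only: one placeholder per id keeps the query safe and still single-shot.
$userIds = array(1, 15, 36, 105);

$placeholders = implode(',', array_fill(0, count($userIds), '?'));
$stmt = $pdo->prepare("SELECT userid, username, email
                       FROM user_table
                       WHERE userid IN ($placeholders)");
$stmt->execute($userIds);

foreach ($stmt as $user) {
    // display name, avatar, etc. for each user in the list
    echo htmlspecialchars($user['username']) . "<br>\n";
}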
Why couldn't you just use a left join to get all the data in 1 shot? It sounds like you are getting a list, but then you only need to get all of a single user's info. Is that right?
Remember databases are about result SETS and while generally you can return just a single row if you need it, you almost never have to get a single row then go back for more info.
For instance a list of friends might be held in a text column on a user's entry.
Whether you expect to have a small database or a large one, I would consider using the InnoDB engine rather than MyISAM. It has slightly higher processing overhead than MyISAM, but you get added benefits (as your hobby grows) such as transactions, foreign keys and row-level locking. Either way, a JOIN will allow you to pull in specific data from multiple tables:
SELECT u.`id`, p.`name`, p.`avatar`
FROM `Users` AS u
LEFT JOIN `Profiles` AS p USING (`id`)
This would return id from Users, and name and avatar from Profiles (where the id of both tables matches).
There are numerous resources online talking about database normalization, you might enjoy: http://www.devshed.com/c/a/MySQL/An-Introduction-to-Database-Normalization/

SQL query to collect entries from different tables - need an alternate to UNION

I'm running a SQL query to get basic details from a number of tables, sorted by the last-update date field. It's getting terribly tricky and I'm wondering if there is an alternative to using the UNION clause... I'm working in PHP/MySQL.
Actually I have a few tables containing news, articles, photos, events etc. and need to collect all of them in one query to show a simple "what's newly added on the website" kind of thing.
Maybe do it in PHP rather than MySQL: if you want the latest n items, fetch the latest n of each of your news items, articles, photos and events, and sort in PHP (you need the latest n of each, obviously, and you then trim the combined dataset in PHP). This is probably easier than combining them with UNION, given they're likely to have lots of data items which are different.
I'm not aware of an alternative to UNION that does what you want, and hopefully those fetches won't be too expensive. It would definitely be wise to profile this though.
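A rough sketch of that approach (assuming PDO, that each table has id, title and updated_at columns, and that the table names below match yours):
<?php
// Sketch only: latest $n per table, merged and sorted in PHP, then trimmed.
$n      = 10;
$items  = array();
$tables = array('news', 'articles', 'photos', 'events');

foreach ($tables as $table) {
    $stmt = $pdo->query("SELECT id, title, updated_at, '$table' AS type
                         FROM $table
                         ORDER BY updated_at DESC
                         LIMIT $n");
    $items = array_merge($items, $stmt->fetchAll(PDO::FETCH_ASSOC));
}

// Newest first across all types, then keep only the $n you actually display.
usort($items, function ($a, $b) {
    return strcmp($b['updated_at'], $a['updated_at']);
});
$latest = array_slice($items, 0, $n);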
If you use a JOIN in your query you can select data from different tables that are related by foreign keys.
You can look at this from another angle: do you absolutely need up-to-date information (i.e. the moment someone enters new information it should appear)?
If not, you can have a table holding the results of the query in the format you need (serving as a cache), and update this table every 5 minutes or so. Then your query problem becomes trivial, and the updates can run as several separate statements in the background.
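As a sketch, the background job could simply rebuild a cache table every few minutes (table and column names here are purely illustrative), and the page then reads from it with a trivial query:
<?php
// Sketch only: a cron script that rebuilds a "latest_items" cache table.
$pdo->beginTransaction();
$pdo->exec("DELETE FROM latest_items");
$pdo->exec("INSERT INTO latest_items (type, item_id, title, updated_at)
            SELECT 'news', id, title, updated_at FROM news");
$pdo->exec("INSERT INTO latest_items (type, item_id, title, updated_at)
            SELECT 'article', id, title, updated_at FROM articles");
// ...repeat for photos, events, etc.
$pdo->commit();

// The page itself then only needs:
// SELECT * FROM latest_items ORDER BY updated_at DESC LIMIT 10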

Suggestions on retrieving related data across databases with different logins?

I have an array of user ids in a query from Database A, Table A (AA).
I have the main user database in Database B, Table A (BA).
For each user id returned in my result array from AA, I want to retrieve the first and last name of that user id from BA.
Different user accounts control each database, and unfortunately one login cannot be given permissions on both databases.
Question: How can I retrieve the first and last names with the least number of queries and/or processing time? With 20 users in the array? With 20,000 users in the array? Any order of magnitude higher, if applicable?
Using PHP 5 / MySQL 5.
As long as the databases are on the same server, just use a cross-database join (in MySQL you qualify tables as database.table). The DB login being used to access the data will also need permissions on both databases. Something like:
SELECT AA.userID, BA.first, BA.last
FROM databaseA.tableA AS AA
INNER JOIN databaseB.tableA AS BA ON AA.userID = BA.userID
In response to comments:
I don't believe I read the part about multiple logins correctly, sorry. You cannot use two different MySQL logins on one connection. If you need to do multiple queries you really only have three options. A) Loop through the first result set and run multiple queries. B) Run a query which uses a WHERE clause with userID IN (#firstResultSet), passing in the first result set. C) Select everything out of the second DB and join the two result sets in code.
None of those options is great, so I would ask: why can't you change user permissions on one of the two DBs? I would also ask why you would need to select the names and IDs of 20,000 users. Unless this is some type of data dump, I would be looking for a different way to display the data, which would be both easier to use and less query intensive.
All that said, whichever option you choose will depend on a variety of circumstances. With a low number of records, under 1,000, I would use option B. With a higher number of records, I would probably use option C and try to place the two result sets into something that can be joined (such as using array_combine).
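A rough sketch of option C, joining the two result sets in code (assumes $pdoA and $pdoB are existing connections, one per set of credentials; table and column names are illustrative):
<?php
// Sketch only: pull both result sets, key the second one by user id, join in PHP.
$ids = $pdoA->query("SELECT user_id FROM table_a")->fetchAll(PDO::FETCH_COLUMN);

$names = array();
foreach ($pdoB->query("SELECT user_id, first, last FROM user_table") as $row) {
    $names[$row['user_id']] = $row['first'] . ' ' . $row['last'];
}

foreach ($ids as $id) {
    echo $id . ': ' . (isset($names[$id]) ? $names[$id] : 'unknown') . "\n";
}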
I think the key here is that it should be possible in two database calls.
Your first one to get the id's from database A and the second one to pass them to database B.
I don't know MySQL, but in SQL Server I'd use the XML datatype and pass all of the IDs into a statement using that. Before the XML datatype existed, I'd have built up some dynamic SQL with the IDs in an IN statement.
SELECT UserId FROM DatabaseA.TableA
Loop through id's and build up a comma separated string.
"SELECT FirstName, Surname FROM DataBaseB.TableA WHERE UserId IN(" + stringId + ")"
The problem with this is that with 20,000 IDs you may have performance issues with the amount of data you are sending. This is where I'd use the XML datatype, so maybe look at what alternatives MySQL has for passing lists of IDs.
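A rough PHP/MySQL sketch of that two-call approach, using two separate connections (one per login; credentials and names are illustrative, and placeholders are used instead of concatenating the comma-separated string, just to avoid quoting issues):
<?php
// Sketch only: one connection per login, IN() list built from the first result set.
$pdoA = new PDO('mysql:host=localhost;dbname=DatabaseA', 'userA', 'passA');
$pdoB = new PDO('mysql:host=localhost;dbname=DatabaseB', 'userB', 'passB');

$ids = $pdoA->query("SELECT UserId FROM TableA")->fetchAll(PDO::FETCH_COLUMN);

if (count($ids) > 0) {
    $placeholders = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $pdoB->prepare("SELECT UserId, FirstName, Surname
                            FROM TableA
                            WHERE UserId IN ($placeholders)");
    $stmt->execute($ids);
    $users = $stmt->fetchAll(PDO::FETCH_ASSOC);
}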
