I want to create a system that tracks the progress of a player in a game. Each player can be a member of multiple groups, each of which has different requirements. To track progress, the player's stats are saved once when they join a group. Every time they reload their stats, the current values should be saved in the database.
All stats of the player are stored in JSON format, which is then parsed by either PHP or JS. An entry with compare = 0 is created once, when the player joins a group. An entry with compare = 1 should be created the first time a player clicks Update Stats, and from then on it should only be updated, never created again.
Now my question is: how do I achieve that? Reading through the syntax of INSERT INTO, I came up with the following:
INSERT INTO `groups` (`grp`, `id`, `json`, `compare`) VALUES
($grp, $id, $json, 1) ON DUPLICATE KEY UPDATE `json` = $json
However, since there is no key set, and I don't know whether I can set up a key spanning two or three columns (there can be multiple groups per user, as well as the compare = 0 entry in the same group), I don't think I can do it this way.
+------+----+---------+---------+
| grp  | id | json    | compare |
+------+----+---------+---------+
|    1 |  1 | stats   |       0 |
|    1 |  1 | stats   |       1 |
|    1 |  2 | stats   |       0 |
|    1 |  2 | stats   |       1 |
|    2 |  2 | stats   |       0 |
|    2 |  3 | stats   |       0 |
|    2 |  3 | stats   |       1 |
|    2 |  4 | stats   |       0 |
|    2 |  5 | stats   |       0 |
+------+----+---------+---------+
grp is the group of the player. There is no real limit on the number of groups a player can be in.
id is the ID of the player.
json contains the stats of the player in JSON format (number of points, etc.).
compare is a boolean: 0 stands for the entry stats (the number of points a player already had when joining) and 1 stands for the current stats, which are compared against the entry stats to get the difference (the points a player has earned since joining the group).
I hope my explanation was understandable and someone can help me out.
You can use REPLACE INTO:
REPLACE INTO groups (`grp`, `id`, `json`, `compare`) VALUES (...);
But the table must have a primary (or unique) key. REPLACE INTO automatically matches on that key: if a matching record exists, the row is replaced; if not, a new row is inserted. Note that a replace is actually a delete followed by an insert, so any columns you leave out of the statement fall back to their defaults.
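For the table above, a minimal sketch, assuming a unique key over (grp, id, compare) as shown further down, and the usual PHP string interpolation for the values:
REPLACE INTO `groups` (`grp`, `id`, `json`, `compare`)
VALUES ($grp, $id, '$json', 1);
-- REPLACE deletes and re-inserts the row, so every column you still need must be listed here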
You can create a unique key with multiple columns. This will trigger the 'on duplicate' clause.
ALTER TABLE groups
ADD UNIQUE (grp, id, compare)
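With that unique key in place, the original attempt works once SET is changed to UPDATE (a sketch, assuming the PHP values are escaped or bound as parameters):
INSERT INTO `groups` (`grp`, `id`, `json`, `compare`)
VALUES ($grp, $id, '$json', 1)
ON DUPLICATE KEY UPDATE `json` = '$json';
-- the first click inserts the compare = 1 row; every later click only updates its json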
Here is my pivot table project_group:
+-----+----------+------------+----------+---------+
| ids | group_id | project_id | admin_id | user_id |
+-----+----------+------------+----------+---------+
|   4 |      115 |          1 |        1 | [3,4,5] |
|   5 |      115 |          2 |        1 | [5,2,1] |
|   6 |      115 |          3 |        1 | [1,3,6] |
+-----+----------+------------+----------+---------+
This table represents groups linked to projects. user_id lists which users can see the project/group. Is there any way to display the correct projects/groups only to the users defined in user_id?
Also, the content of the user_id field can change.
The best way to handle this is to first normalize your database. Storing comma-separated lists in a cell is allowed, but it is generally bad practice, as explained in this question.
If you can have multiple users per project, you should have a linking table with a column for project and a column for user, like this:
project_users:
| project_id | user_id |
and you can make (project_id, user_id) a composite primary key.
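A minimal sketch of that linking table (the project_users name and integer id columns are assumptions, not taken from your schema):
CREATE TABLE project_users (
  project_id INT NOT NULL,
  user_id    INT NOT NULL,
  PRIMARY KEY (project_id, user_id)  -- one row per project/user pair
);

INSERT INTO project_users (project_id, user_id) VALUES (1, 3), (1, 4), (1, 5);
-- replaces the [3,4,5] list for project 1 with one row per user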
That way, you can select the users for a project (say, project 1) like this:
SELECT user_id
FROM project_users
WHERE project_id = 1;
Once you have these, you can display the project data only to users whose id is returned in the above list.
I have built an SQL Fiddle that demonstrates this visually, if it helps.
It is also worth noting that this normalization opens up a lot of useful queries: it becomes easier to search for users by project, and you can just as easily search for project information based on a user.
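For example, the reverse lookup (a sketch using the same assumed project_users table and a projects table whose name is also an assumption):
SELECT p.*
FROM projects AS p
JOIN project_users AS pu ON pu.project_id = p.id
WHERE pu.user_id = 3;  -- every project user 3 is allowed to see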
I've been trying to figure out how to get my query to update an existing row if two values match.
I have a table with this data:
id                                   | itemid | date                | price
-------------------------------------+--------+---------------------+------
eef1879a-4506-437c-801a-b874e38e290d |    123 | 2015-04-26 08:42:32 |  3.42
67391c5e-09ab-4c2f-b80e-fb0ce69f6e5d |    123 | 2015-04-27 20:02:32 |  3.50
6b16fba4-389e-40ae-94f8-7917ab09fd39 |  13512 | 2015-04-26 08:13:32 |  1.54
5ec3dfe0-29bf-48c8-a694-89606cdbfba3 |  13512 | 2015-04-27 20:02:32 |  1.70
808dc4a3-daa0-4470-b08a-4650f7f4d8e9 |   2124 | 2015-04-26 08:42:28 |  8.74
e327aa9e-fe02-4ccb-8543-752fe5d86e2c |   2124 | 2015-04-27 20:02:32 |  9.04
de4d69ce-eca0-419f-8514-1cc0509149dd |   2124 | 2015-04-28 17:04:02 |  9.78
f7efdcf3-9dd1-41ee-880b-b18563d6f934 |  13512 | 2015-04-28 13:07:30 |  2.09
c256fed7-8a09-4afe-97f3-0e5a9ceea930 |    123 | 2015-04-28 02:08:38 |  3.52
I have an insert query that's working fine, but I don't want multiple entries per day. I've seen ON DUPLICATE KEY used with a single-column unique key, but my PK is a UUID v4 that's generated via PHP on the insert.
I'm currently checking in a SQL query whether the value exists, and inserting it if it doesn't. However, this creates an issue if the process gets kicked off more than once. I'm trying to make it impossible to end up with duplicate prices per day.
Current SQL to check if exists:
$date = date('Y-m-d');

SELECT i.id
FROM items AS i
LEFT JOIN itemprices AS ip
  ON i.id = ip.itemid
 AND DATE(ip.date) = '$date'
WHERE ip.itemid IS NULL
It checks the list of items to see which ones still need a price created for that day. The array that comes back from this is valid at this point.
Then I just do an insert per item with the appropriate value that I get from my endpoint.
The data set I'm fetching prices for each day is currently 14,000 lines, so processing things more than once puts extra stress on MySQL and requires manual cleanup.
... ON DUPLICATE KEY ... also applies to composite unique/primary keys. Simply add a composite unique index over the columns that must be unique together (or rebuild your PK as a composite key).
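A minimal sketch of that idea for the table above. The itemprices name comes from your query; the price_day column is an assumption, added because a unique key over the full datetime would still allow two rows on the same day, while a date-only column lets the index enforce one price per item per day:
ALTER TABLE itemprices
  ADD COLUMN price_day DATE NOT NULL,
  ADD UNIQUE KEY uq_item_day (itemid, price_day);

INSERT INTO itemprices (id, itemid, `date`, price_day, price)
VALUES ('$uuid', 123, NOW(), CURDATE(), 3.55)
ON DUPLICATE KEY UPDATE price = VALUES(price), `date` = VALUES(`date`);
-- $uuid is still the v4 UUID generated in PHP; when (itemid, price_day) already
-- exists, the existing row is updated instead of a second price being inserted for that day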
I want to calculate the standard deviation of page views on my site. I'd like to do this in pure MySQL - without pulling the whole table to the webserver - and return a single number to the PHP code for further use. Each page view is stored as a visitor_id - page_id - visit_count trio as per the following schema:
+============+=========+=============+
| visitor_id | page_id | visit_count |
+============+=========+=============+
| 1 | 2 | 7 |
+------------+---------+-------------+
| 2 | 2 | 4 |
+------------+---------+-------------+
| 1 | 1 | 17 |
+------------+---------+-------------+
| 3 | 2 | 12 |
+------------+---------+-------------+
| 1 | 3 | 639478 |
+------------+---------+-------------+
| 2 | 1 | 6 |
+------------+---------+-------------+
page_id refers to a PRIMARY KEY in the pages table, visitor_id refers to a PRIMARY KEY in the visitors table. The above table's primary key is the visitor_id - page_id pair, since the same page seen by the same visitor is recorded by increasing the visit_count of the corresponding row instead of creating a new one.
Before calculating standard deviation, the entries should be grouped together by page_id, their visit_count summed (visitor_id can be ignored here), so, effectively, I want to calculate the deviation of the following:
+=========+=============+
| page_id | visit_count |
+=========+=============+
| 2 | 23 |
+---------+-------------+
| 1 | 23 |
+---------+-------------+
| 3 | 639478 |
+---------+-------------+
I'm aware of the possible PHP solutions, but I'm interested in a MySQL one.
If you want the standard deviation for each page (i.e., the visitors are the population):
select page_id, sum(visit_count) as visit_count, std(visit_count) as visit_std
from table1
group by page_id;
If you want the standard deviation over the pages:
select std(visit_count) as page_std
from (select page_id, sum(visit_count) as visit_count
from table1
group by page_id
) t;
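Note that STD() returns the population standard deviation; if you want the sample standard deviation instead, the same shape of query works with STDDEV_SAMP():
select stddev_samp(visit_count) as page_std
from (select sum(visit_count) as visit_count
      from table1
      group by page_id
     ) t;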
You could create a new table that stores a timestamp plus the current view count, so you have a history of changes in views. You'd be able to check the last two timestamped entries and the difference between them, as well as a whole bunch of other stuff you haven't even thought of yet. Like graphs. Or pie charts showing activity increases per weekday. Mmmm, pie.
Right now I have a PHP script that is fetching the first three results from a MySQL database using:
SELECT * FROM table Order by DATE DESC LIMIT 3;
After that command I wanted PHP to fetch the next three results, initially I was going to use:
SELECT * FROM table Order by DATE DESC LIMIT 3,3;
However, there will be a delay between the two commands, which means it is very possible that a new row will be inserted into the table during the delay. My first thought was to store the DATE value of the last result and then include a WHERE DATE < $stored_date, but if entries 3 and 4 have the same date it will skip entry 4 and return results from 5 onward. This could be avoided by using the primary key field, which is an integer that increments automatically.
I am not sure which the best approach is, but I feel like there should be a more elegant and robust solution to this problem, however I am struggling to think of it.
Example table:
-------------------------------------------
| PrimaryKey | Data | Date                |
-------------------------------------------
| 0          | abc  | 2014-06-17 11:43:00 |
| 1          | def  | 2014-06-17 12:43:00 |
| 2          | ghi  | 2014-06-17 13:43:00 |
| 3          | jkl  | 2014-06-17 13:56:00 |
| 4          | mno  | 2014-06-17 14:23:00 |
| 5          | pqr  | 2014-06-17 14:43:00 |
| 6          | stu  | 2014-06-17 15:43:00 |
-------------------------------------------
Where Data is the column that I want.
The best approach is to use the primary key and select like this:
SELECT * FROM table WHERE pk < $stored_pk ORDER BY DATE DESC LIMIT 3;
And if you have an automatically generated PK, you should ORDER BY pk instead; it will be faster.
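A sketch of the full flow under that assumption ($stored_pk is the smallest pk value returned by the previous batch):
SELECT * FROM table ORDER BY pk DESC LIMIT 3;
-- remember the smallest pk from that result as $stored_pk, then fetch the next batch
SELECT * FROM table WHERE pk < $stored_pk ORDER BY pk DESC LIMIT 3;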
Two options I can think of depending on what your script does:
You could use transactions: performing these queries inside a transaction gives you a consistent view of the data (see the sketch below).
Alternatively you could just use:
SELECT * FROM table Order by DATE DESC;
And only fetch the results as you need them.
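For the transaction option, a minimal sketch (assuming InnoDB, where the default REPEATABLE READ isolation level makes the second SELECT read from the same snapshot as the first):
START TRANSACTION;
SELECT * FROM table ORDER BY DATE DESC LIMIT 3;
-- rows inserted by other connections in the meantime are not visible here
SELECT * FROM table ORDER BY DATE DESC LIMIT 3, 3;
COMMIT;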
I wish to update one table in my database; the data comes from a PHP POST. (It is a page where multiple edits on rows can take place at once, which are then all processed together.) For each "row" or "loop", I want it to build a single query that can update all the rows at once.
What I want to do is, in that query, select data from two other tables.
E.g.
Posted data:
- Task = "Check current Sponsors"
- User Assigned = "Dan"
- Start Meeting = "Mar 1st"
- Meetings Required = 2
And for User Assigned, I want it to basically do this query:
SELECT id FROM team WHERE fullname LIKE 'Dan'
And for the start meeting, I want it to do this query:
SELECT id FROM meetings WHERE starttime = '".strtotime("Mar 1st")."'
-- strtotime() makes a Unix timestamp from a string.
But I want it to do that for each "task" that gets submitted. (They are queued up via JavaScript, which sends them all in the same POST request.)
Anyone have any ideas on how to do this?
Thanks in advance.
Table Structures:
Tasks:
id | startmid | length | task           | uid | completed
 1 |        2 |      1 | Check Sponsors |   1 |         0

Meetings (joined by startmid):
id | maintask | starttime  | endtime
 1 | Sponsors | 1330007400 | 1330012800

Team (joined by uid):
id | fullname | position     | class | hidden
 1 | Team     | All Members  | black |      0
 2 | Dan S    | Team Manager | green |      0
You can use MySQL's multi-table UPDATE construct, which lets you set columns from values selected out of other tables:
UPDATE mytable, someothertable
SET mytable.col1 = someothertable.col1_val,
    mytable.col2 = someothertable.col2_val
WHERE cond1 = cond1;
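Applied to the tables above, a sketch might look like this (the tasks/team/meetings names come from your structures; the 'Dan%' pattern and the $start / $taskid placeholders are assumptions based on the posted data):
UPDATE tasks AS t
JOIN team AS tm    ON tm.fullname LIKE 'Dan%'
JOIN meetings AS m ON m.starttime = $start   -- e.g. the strtotime("Mar 1st") value from PHP
SET t.uid = tm.id,
    t.startmid = m.id
WHERE t.id = $taskid;
You can build one such statement per submitted task, or better, bind the values as parameters in a prepared statement and execute it once per task.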