I'm building a chat for a turn-based game (using PHP and MySQL) where I only want to save the 30 most recent messages in the MySQL database.
Currently, I'm doing 3 queries to delete the oldest message when the count exceeds 30:
Insert the new chat message as a new row:
INSERT INTO chatMessages (userId,matchId,message,timestamp) VALUES (".$params["userId"].",".$params["matchId"].",'".$params["message"]."',NOW())
Check if there are too many messages in the database, and get the oldest one's Id.
SELECT COUNT(id) AS count, id AS oldestId FROM chatMessages WHERE matchId = ".$params["matchId"]." ORDER BY id ASC LIMIT 1
Remove the oldest message
DELETE FROM chatMessages WHERE id = ".$oldestId
Is there any way to do this in 2, or even 1 single query? We have quite a lot of traffic on our servers, so performance is key.
You can reduce the 2nd and 3rd query to one. Two MySQL quirks to work around: LIMIT is not allowed directly inside a NOT IN subquery, and you can't DELETE from a table the subquery reads directly, so wrap it in a derived table. Filter on matchId as well, or you would keep only the 30 newest rows across all matches (? marks the current match id):
DELETE FROM chatMessages
WHERE matchId = ? AND id NOT IN (
    SELECT id FROM (
        SELECT id FROM chatMessages
        WHERE matchId = ? ORDER BY id DESC LIMIT 30
    ) AS newest
);
An insert trigger would be the obvious place to automate that, so the oldest records get deleted after every insert; note, though, that MySQL does not let a trigger modify the table it fires on, so the DELETE has to run from your application right after the INSERT.
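Putting it together, each turn then costs exactly two statements: the INSERT below followed by the combined DELETE above. A sketch with prepared-statement placeholders, which also close the SQL-injection hole left by concatenating the $params values into the query string:
INSERT INTO chatMessages (userId, matchId, message, timestamp)
VALUES (?, ?, ?, NOW());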
We have a table where a row is inserted each day and then updated with data.
I have the following query to get the total amount of clicks from the table:
SELECT SUM(`total_clicks`) AS clicks, `last_updated` FROM `reporting` WHERE `unique_id` = 'xH7' ORDER BY `last_updated` DESC
When pulling this info from the database, it is pulling the correct total amount of clicks but the last_updated field is from the first row (yesterday) not the new row inserted today.
How can I go about getting the most recent last_updated field?
If you want the most recent date, use MAX to select it:
SELECT SUM(total_clicks) as clicks, MAX(last_updated) AS last_updated
FROM reporting
WHERE unique_id = 'xH7'
The problem with your version is that ORDER BY is applied after aggregation, and since there is no GROUP BY the whole table aggregates down to a single row, with the non-aggregated last_updated column taken from an arbitrary row.
If you have only one row per day, then you don't need sum(). Does the following do what you want?
SELECT `total_clicks` AS clicks, `last_updated`
FROM `reporting`
WHERE `unique_id` = 'xH7'
ORDER BY `last_updated` DESC
LIMIT 1;
Your query is an aggregation query that adds up all the clicks in the table. Because it returns only one row, the order by isn't doing anything.
I have a table with 100+ entries, and I want to display a fixed number of entries from it, chosen at random.
I am using the query below.
SELECT * FROM Table1 WHERE active='1' AND id NOT IN
(SELECT ad_id FROM Table1_logs WHERE uid='$username') ORDER BY RAND() LIMIT 5
Table1 contains all the entries; Table1_logs contains the entries which have been utilized by users today.
Problem:
I need to pick 5 entries daily for each user, and that amount should not be exceeded.
When a user utilizes one entry and it is saved in the logs, the query picks 5 entries again, so it stays at 5 all the time.
What I want to achieve:
When a user utilizes one entry, its count should decrease. A user should be able to see only 5 entries daily.
You should add a new field to Table1, utilizationCount. When an entry is utilized, increase the value by one, and order by this field in the query: ORDER BY utilizationCount ASC LIMIT 5.
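A minimal sketch of that idea, assuming the new column is added as below (the column name and the id literal are illustrative):
ALTER TABLE Table1 ADD COLUMN utilizationCount INT NOT NULL DEFAULT 0;

-- when a user utilizes an entry:
UPDATE Table1 SET utilizationCount = utilizationCount + 1 WHERE id = 123;

-- prefer the least-utilized active entries:
SELECT * FROM Table1
WHERE active = '1'
ORDER BY utilizationCount ASC, RAND()
LIMIT 5;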
I'm thinking of implementing a view history for my WordPress blog, where users can view their previously viewed articles as a list on their account page.
I would like to limit this to 24 unique pages of history per user at any point in time, meaning that if the number of articles exceeds 24, the oldest article row is deleted and the new article is added to the table.
I'm using PHP and MySQL.
Here's my current thoughts on implementation:
Create a table with user_id and post_id columns
When user views an article, insert new row into the table
Select the rows with the current user_id, and if the number of rows is more than 24,
Delete the oldest row
I'm not sure if this is the best method, since it's 3 additional database queries per page view, which is pretty heavy.
Is there a better way to do this?
The idea is good. Improve it by updating the oldest row instead of deleting it and then adding a new one. ;)
Also make a single read query and a single write query.
Make a single read query like this one:
SELECT (SELECT COUNT(*) FROM recentArticles WHERE userID = 23) AS NoOfArticles,
       articleID, timestamp
FROM recentArticles
WHERE userID = 23
ORDER BY timestamp ASC
LIMIT 1;
If NoOfArticles < 24, execute an insert query; otherwise execute an update query against the returned articleID (see the sketch below).
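A sketch of that update branch, recycling the oldest row in place (the literals stand in for the newly viewed article's id and the articleID returned by the read query):
UPDATE recentArticles
SET articleID = 456, timestamp = NOW()
WHERE userID = 23 AND articleID = 17;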
You could always implement this using cookies, keying on which pages have been visited and keeping a running list on the user's PC. This would reduce traffic to the site, put the processing on the user's side, and could be easier to implement.
That being said, I agree with the other answer about updating the oldest entry if the table is full, which eliminates the need to delete and then add a row. Key off a timestamp to sort the entries when you're displaying them and to figure out which page is the oldest (and needs to be updated if there are already 24 pages).
You can merge the two queries into one and save execution time in your script: after inserting, delete the oldest row if the post count exceeds 24. MIN(id) picks the oldest row, and the inner derived table is needed because MySQL won't DELETE from a table the subquery reads directly. You can modify this query to your exact needs, but you can think along these lines.
DELETE FROM `table_name`
WHERE id = (
    SELECT last_id FROM (
        SELECT CASE WHEN COUNT(id) > 24 THEN MIN(id) END AS last_id
        FROM `table_name`
        WHERE user_id = 'XX'
    ) AS t
);
I have a voting system for articles. Articles are stored in the 'stories' table and all votes in the 'votes' table. The id in the 'stories' table equals item_name in the 'votes' table (so each vote is related to an article through item_name).
I want to make it so that when the sum of an article's votes reaches 10, the 'showing' field in the 'stories' table is updated to 1.
I was thinking about setting up a cron job that runs every hour to check all posts that have showing = 0, sum up the votes related to each such article, and set showing = 1 if the sum of votes is >= 10. I'm not sure how efficient that is; it might take up a lot of server resources.
So could anyone suggest a cron job that could do the task?
Here is my database structure (the original screenshots of the stories and votes tables are summarized by the example rows below):
Edit:
For example this row from 'stories' table:
id| 12
st_auth | author name
st_date | story date
st_title| story title
st_category| story category
st_body| story body
showing| 0 for unapproved and 1 for approved
This row is related to this one from 'votes' table
id| 83
item_name| 12 (id of article)
vote_value| 1 for upvote -1 for downvote
...
Couple of things:
Why did you name the column item_name in the votes table, when it is actually the id of the articles table? I would recommend matching the type of the articles table's id: an int(11) rather than a varchar(255). Also, you should add a foreign key constraint to the votes table, so that if an article is ever deleted you don't orphan rows in the votes table.
Why is the vote_value column an int(11)? If it can only hold two states (1 or -1), a signed tinyint(1) will do (signed, to allow the -1).
The ip column in the votes table is a bit concerning. If you are regulating 'unique' votes by IP, did you account for proxy IPs? Something like this should be handled at the account level, so that several users behind the same proxy IP can issue individual votes.
I wouldn't use a cron job to determine whether the showing column should be flagged 0 or 1. Rather, I would recalculate every time a vote is cast against the article: whenever someone up-votes or down-votes, compute the story's new total and store it (in cache) for future reads.
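A minimal sketch of that recalculation, run right after each vote is recorded (the literal 12 stands in for the article's id):
UPDATE stories
SET showing = 1
WHERE id = 12
  AND showing = 0
  AND (SELECT SUM(vote_value) FROM votes WHERE item_name = 12) >= 10;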
Using this query, you get a list of all articles plus a column containing the sum of each article's votes:
SELECT s.*, SUM(v.vote_value) AS votes_total
FROM stories AS s INNER JOIN votes AS v
    ON v.item_name = s.id
GROUP BY s.id
This way, you can create a view on which you can filter for votes_total >= 10, with no need for the cron job.
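For instance, a sketch of such a view (the view name is illustrative; recent MySQL versions accept s.* alongside GROUP BY s.id because id is the primary key):
CREATE VIEW story_vote_totals AS
SELECT s.*, SUM(v.vote_value) AS votes_total
FROM stories AS s INNER JOIN votes AS v
    ON v.item_name = s.id
GROUP BY s.id;

SELECT * FROM story_vote_totals WHERE votes_total >= 10;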
Or you can use it as a normal query, something like this:
SELECT * FROM (
    SELECT s.*, SUM(v.vote_value) AS votes_total
    FROM stories AS s INNER JOIN votes AS v
        ON v.item_name = s.id
    GROUP BY s.id
) AS totals
WHERE votes_total >= 10;
I would use a trigger (an insert trigger) and handle your logic there, in the database itself.
This would remove the polling code (the cron job) altogether.
I would also make the foreign key (in VOTES) the same type as the primary key (in STORIES).
Using a trigger instead of polling will be much cleaner in the long run.
You don't specify your database, but in T-SQL (for SQL Server) it would be close to this:
CREATE TRIGGER myTrigger
ON VOTES
FOR INSERT
AS
DECLARE @I INT -- holds the summed vote value
DECLARE @IN VARCHAR(255) -- holds the FK id for the lookup into STORIES if an update is required
SELECT @IN = ITEM_NAME FROM INSERTED
SELECT @I = SUM(VOTE_VALUE) FROM VOTES WHERE ITEM_NAME = @IN -- SUM rather than COUNT, so downvotes subtract
IF (@I >= 10)
BEGIN
    UPDATE STORIES SET SHOWING = 1 WHERE ID = @IN -- this is why your PK/FK should be refactored
END
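Since the rest of this setup is MySQL, a rough MySQL equivalent would be the sketch below (the trigger name is illustrative, the column names are from the question; a MySQL trigger may read its own table and update a different one, it just cannot modify the table it fires on):
DELIMITER //
CREATE TRIGGER votesAfterInsert AFTER INSERT ON votes
FOR EACH ROW
BEGIN
    -- flag the story once its summed votes reach 10
    UPDATE stories
    SET showing = 1
    WHERE id = NEW.item_name
      AND showing = 0
      AND (SELECT SUM(vote_value) FROM votes
           WHERE item_name = NEW.item_name) >= 10;
END //
DELIMITER ;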
I run a points system on my site, so I need to keep logs of my users' different actions in the database. The problem is that I have too many users, and keeping all the records permanently may overload the server... Is there a way to keep only 10 records per user and automatically delete older entries? Does MySQL have some function for this?
Thanks in advance
A trigger looks like the natural home for this cleanup, but MySQL does not allow a trigger to modify the table it is defined on, so the DELETE below has to run from application code (or from a stored procedure wrapping the insert) rather than from an AFTER INSERT trigger on MyTable itself. Note also the derived-table wrapper: MySQL rejects LIMIT directly inside an IN subquery.
For instance (? marks the inserting user's id),
DELETE FROM MyTable
WHERE user_id = ?
  AND id NOT IN (
      SELECT id FROM (
          SELECT id FROM MyTable
          WHERE user_id = ?
          ORDER BY action_time DESC
          LIMIT 10
      ) AS newest
  );
Just run an hourly cron job that deletes each user's 11th through nth records.
Before inserting a record, you could check how many the user already has. If they have >= 10, delete the oldest one. Then insert the new one.
If your goal is to have the database ensure that for a given table there are never more than N rows per a given subkey (user) then the correct way to solve this will be either:
Use stored procedures to manage inserts in the table.
Use a trigger to delete older rows after an insert (in MySQL the trigger cannot fire on the pruned table itself, since a trigger may not modify its own table).
If you're already using stored procedures for data access, then modifying the insert procedure would make the most sense; otherwise a trigger is only an option when it fires on a different table than the one being pruned, so a stored procedure is usually the easier route here.
Alternatively, if your goal is to periodically remove old data, then using a cron job to run a stored procedure that prunes old data would make the most sense.
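Either way, a minimal sketch of an insert-and-prune procedure, assuming an illustrative user_actions table with id, user_id, action, and action_time columns:
DELIMITER //
CREATE PROCEDURE log_action(IN p_user_id INT, IN p_action VARCHAR(255))
BEGIN
    INSERT INTO user_actions (user_id, action, action_time)
    VALUES (p_user_id, p_action, NOW());

    -- prune everything older than this user's 10 newest rows;
    -- the derived table works around MySQL's refusal to accept
    -- LIMIT directly inside an IN subquery
    DELETE FROM user_actions
    WHERE user_id = p_user_id
      AND id NOT IN (
          SELECT id FROM (
              SELECT id FROM user_actions
              WHERE user_id = p_user_id
              ORDER BY action_time DESC, id DESC
              LIMIT 10
          ) AS newest
      );
END //
DELIMITER ;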
When you are inserting a new record for a user, run a cleanup query first that keeps only the 9 newest rows (MySQL's DELETE does not accept an offset in its LIMIT clause, so the rows to keep have to be named via a subquery):
DELETE FROM tablename
WHERE userID = 'currentUserId' AND id NOT IN (
    SELECT id FROM (SELECT id FROM tablename WHERE userID = 'currentUserId'
                    ORDER BY id DESC LIMIT 9) AS newest
);
After that you can insert the new data. This keeps each user at ten records.
INSERT INTO tablename VALUES(....)
TOP is SQL Server syntax; in MySQL the same idea needs LIMIT inside a derived table (which also sidesteps MySQL's refusal to delete from a table it is selecting from):
DELETE FROM `Table` WHERE ID NOT IN (SELECT ID FROM (SELECT ID FROM `Table` WHERE USER_ID = 1 ORDER BY ID DESC LIMIT 10) AS newest) AND USER_ID = 1;
Clearer version:
DELETE FROM `Table`
WHERE ID NOT IN
(
    SELECT ID FROM
    (
        SELECT ID FROM `Table`
        WHERE USER_ID = 1
        ORDER BY ID DESC
        LIMIT 10
    ) AS newest
)
AND USER_ID = 1;