I'm developing a Facebook-style messaging system in PHP. I have a user_message table which contains sender_id, receiver_id, message, and a flag for read/unread. After inserting a row into this table I mark it as unread, and when the user clicks on the message notification, the status is updated to read.
This works fine for a single user. When sending a message to multiple users, though, I am storing receiver_id as comma-separated values in the table. My problem is: how do I set the flag for multiple receivers? If I'm sending one message to 3 users, for example, then how can I set the read flag? Any help will be appreciated.
When sending a message to multiple users, though, I am storing receiver_id as comma-separated values in the table.
Don't misuse a string column to represent multiple values. It's impossible to index a column structured that way in most DBMSes, so you won't be able to search it efficiently. This is particularly significant for a "message recipient" structure, as it means that it's impossible to efficiently search for all messages received by a specific user. This would make many common operations like checking for new messages sent to a user, or viewing a user's mailbox, extremely slow on a large site.
Instead, if you want to have a single representation of a message sent to many people, rather than a separate message for each one, model this using two tables, e.g.
CREATE TABLE message (
    id INTEGER PRIMARY KEY NOT NULL,
    sender_id INTEGER NOT NULL,
    message TEXT NOT NULL
);
CREATE TABLE message_recipient (
    message_id INTEGER NOT NULL,
    recipient_id INTEGER NOT NULL,
    PRIMARY KEY (message_id, recipient_id)
);
With such a table structure, you can place a "read" flag on each message_recipient row, so that each recipient of a given message has a separate read status.
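As an illustrative sketch (the is_read column and the secondary index are my own additions, not prescribed above), the flag and the two most common queries could look like this:
ALTER TABLE message_recipient ADD COLUMN is_read TINYINT(1) NOT NULL DEFAULT 0;
-- A secondary index so per-recipient queries don't scan the whole table.
CREATE INDEX idx_recipient ON message_recipient (recipient_id, is_read);
-- Mark message 5 as read for recipient 10:
UPDATE message_recipient SET is_read = 1 WHERE message_id = 5 AND recipient_id = 10;
-- List all unread messages for recipient 10:
SELECT m.id, m.sender_id, m.message
FROM message m
JOIN message_recipient r ON r.message_id = m.id
WHERE r.recipient_id = 10 AND r.is_read = 0;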
1) You can add a field named "users_reads" and store in it the IDs of the users who have read the message, separated by commas. To check whether a user has read the message:
if (in_array($userId, explode(',', $row['users_reads']))) { ... }
2) You can add a new table 'users_reads_messages' to your database with the fields:
- message_id
- reader_id
3) You can make a copy of the message for each receiver_id: put a single user ID in this field, and insert a copy of the message row for each of the other users.
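For example, sending one message to users 10, 11, and 12 under option 3 would mean three inserts into the existing user_message table (a sketch; the sender ID and flag values are illustrative):
INSERT INTO user_message (sender_id, receiver_id, message, flag) VALUES
(1, 10, 'Hello', 'unread'),
(1, 11, 'Hello', 'unread'),
(1, 12, 'Hello', 'unread');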
You can have a junction table between the message table and the receiver table, for example a MessageRead table. It has id (auto-increment), message_id, and receiver_id. If the user with id=10 reads the message with id=5, you need to execute this query:
INSERT INTO MessageRead(message_id, receiver_id) VALUES(5, 10)
To get the number of users who have read the message with id=5, you can execute this query:
SELECT COUNT(*) FROM MessageRead WHERE message_id = 5
And to check whether the user with id=10 has read the message with id=5, execute this query:
SELECT COUNT(*) FROM MessageRead WHERE message_id = 5 AND receiver_id = 10
If the result of this query is 1, the user has read the message; otherwise they have not.
How can I implement an undo-changes function for a MySQL database, just like Gmail's when you delete/move/tag an email?
So far I have a system log table that holds the exact sql statements executed by the user.
For example, I'm trying to transform:
INSERT INTO table (id, column1, column2) VALUES (1,'value1', 'value2')
into:
DELETE FROM table WHERE id=1 AND column1='value1' AND column2='value2'
Is there a built-in function to do this, like Cisco router commands? Something like:
(NO|UNDO|REVERT) INSERT INTO table (id, column1, column2) VALUES (1,'value1', 'value2')
Maybe my approach is incorrect. Should I save the current state of the row and the changed row, so I can get back to the original state? Something like:
original_query = INSERT INTO table (id, column1, column2) VALUES (1,'value1', 'value2')
executed_query = INSERT INTO table (id, column1, column2) VALUES (1,'change1', 'change2')
to later transform into:
INSERT INTO table (id, column1, column2) VALUES (1,'value1', 'value2') ON DUPLICATE KEY UPDATE
column1=VALUES(column1), column2=VALUES(column2)
But maybe it won't work with newly inserted rows, or could cause trouble if I modify the primary key, so I would rather leave primary keys unchanged.
This is my log table:
CREATE TABLE `log` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `date` datetime NOT NULL,
  `user` int(11) NOT NULL,
  `client` text,
  `module` int(11) unsigned NOT NULL,
  `query` text NOT NULL,
  `result` tinyint(1) NOT NULL,
  `comment` text,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
The objective is, like I said, to undo changes from a certain period of time, based on the date of statement execution. For example (can be in PHP):
function_undo(startdate, enddate)
{
RESULT = SELECT query FROM log WHERE date BETWEEN startdate AND enddate
FOR EACH RESULT AS KEY - query
REVERT query
}
Or an undo button to revert one single action (a single logged query).
Is my concept of these 'incremental backup changes' correct, or am I overcomplicating everything?
Considering the obvious fact that the size of my database will double or maybe triple if I store the full queries: should I store the log in a different database? Or simply erase the log table once I make a scheduled full backup, so as to keep only recent changes?
Any advice is welcome...
This was always problematic; SQL 2012 addresses the issue.
The temporal model is simple: add interval columns (valid_from, valid_to), but it is very complicated to implement constraints.
Model manipulation is also simple:
1. insert - new version with valid_from=now, valid_to=null
2. update - new version with valid_from=now, valid_to=null; update the previous version's valid_to=now
3. delete - update the current version's valid_to=now
4. undo delete - update the last version's valid_to=null
5. undo update/insert - delete the current version if you do not need redo, and set the previous version's valid_to=null if a previous version exists
It is more complicated with redo, but similar. Typically this model is used in a data warehouse to track changes instead of a redo function, but it should be fine for redo too. It is also known as a slowly changing dimension in data warehousing.
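A minimal sketch of the five operations above, assuming a hypothetical account table with columns (id, data, valid_from, valid_to):
-- 1. insert: new version with an open-ended interval
INSERT INTO account (id, data, valid_from, valid_to) VALUES (1, 'v1', NOW(), NULL);
-- 2. update: close the current version, then insert the replacement
UPDATE account SET valid_to = NOW() WHERE id = 1 AND valid_to IS NULL;
INSERT INTO account (id, data, valid_from, valid_to) VALUES (1, 'v2', NOW(), NULL);
-- 3. delete: only close the current version
UPDATE account SET valid_to = NOW() WHERE id = 1 AND valid_to IS NULL;
-- 4. undo delete: reopen the most recent version
UPDATE account SET valid_to = NULL WHERE id = 1 ORDER BY valid_to DESC LIMIT 1;
-- 5. undo update/insert: delete the current version and, if a previous
-- version exists, reopen it as in step 4.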
I think you need to record the reverse of each insert/update/delete query and then perform those to do the undo. Here is a solution, but note that it does not take foreign key relationships (cascade operations) into account; it is just a simple solution concept. Hopefully it will give you more ideas. Here it goes:
Assume you have a table like this that you want to undo:
create table if not exists table1
(id int auto_increment primary key, mydata varchar(15));
Here is the table that records the reverse queries:
create table if not exists undoer(id int auto_increment primary key,
undoquery text , created datetime );
Create triggers for the insert, update, and delete operations that save the reverse/rescue query:
create trigger after_insert after insert on table1 for each row
insert into undoer(undoquery,created) values
(concat('delete from table1 where id = ', cast(new.id as char)), now());
create trigger after_update after update on table1 for each row
insert into undoer(undoquery,created) values
(concat('update table1 set mydata = \'',old.mydata,
'\' where id = ', cast(new.id as char)), now());
create trigger after_delete after delete on table1 for each row
insert into undoer(undoquery,created) values
(concat('insert into table1(id,mydata)
values(',cast(old.id as char), ', \'',old.mydata,'\') '), now());
To undo, you execute the reverse queries from the undoer table between your dates, sorted by date in descending order.
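A sketch of the retrieval step (the date range is illustrative); application code would then execute each returned statement one by one:
SELECT undoquery
FROM undoer
WHERE created BETWEEN '2014-01-01 00:00:00' AND '2014-01-31 23:59:59'
ORDER BY created DESC;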
The best solution is a soft delete in the database table: usually a column named "is_deleted", plus a "datetime_deleted" column, auto-populated when the user deletes.
When the delete is completed, the response includes the ID of the record, which populates a link calling an undo method the user can click; that method simply undeletes the record by updating the database again.
You can then run a job, either executed by the user or on a scheduled task, to clean up all data marked "is_deleted = 1" after a period of time.
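A sketch of the whole cycle, assuming a hypothetical emails table (the column names follow the suggestion above):
ALTER TABLE emails
ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0,
ADD COLUMN datetime_deleted DATETIME NULL;
-- "Delete" record 5:
UPDATE emails SET is_deleted = 1, datetime_deleted = NOW() WHERE id = 5;
-- Undo:
UPDATE emails SET is_deleted = 0, datetime_deleted = NULL WHERE id = 5;
-- Scheduled cleanup of anything deleted more than 30 days ago:
DELETE FROM emails WHERE is_deleted = 1 AND datetime_deleted < NOW() - INTERVAL 30 DAY;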
I think a combination of techniques would be needed here...
You could implement a queue system which executes a job (sending emails, etc.) after a certain time.
E.g. if the user deletes an object, send it to the queue for 30 seconds or so, just in case the user clicks undo. If the user does click undo, you can simply remove the job from the queue.
This combined with soft deleting may be a good option to look into.
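If you don't want a full queue backend, the same idea can be faked with a table of pending jobs that a worker polls. This is only a sketch, and every name in it is hypothetical:
CREATE TABLE pending_actions (
id INT AUTO_INCREMENT PRIMARY KEY,
object_id INT NOT NULL,
action VARCHAR(20) NOT NULL,
execute_after DATETIME NOT NULL
);
-- Queue a delete to run 30 seconds from now:
INSERT INTO pending_actions (object_id, action, execute_after)
VALUES (5, 'delete', NOW() + INTERVAL 30 SECOND);
-- Undo: the user clicked undo before the worker ran the job.
DELETE FROM pending_actions WHERE object_id = 5 AND action = 'delete';
-- Worker: pick up jobs that are due.
SELECT * FROM pending_actions WHERE execute_after <= NOW();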
I've used Laravel's Queue class, which is really good.
I'm not really sure there will ever be a correct answer for this, as there's no single correct way of doing it. Good luck though :)
I would suggest you use something like the following table to log the changes to your database.
TABLE audit_entry_log
-- This is an audit entry log table where you can track changes and log them here.
( audit_entry_log_id INTEGER PRIMARY KEY
, audit_entry_type VARCHAR2(10) NOT NULL
-- Stores the entry type or DML event - INSERT, UPDATE or DELETE.
, table_name VARCHAR2(30)
-- Stores the name of the table which got changed
, column_name VARCHAR2(30)
-- Stores the name of the column which was changed
, primary_key INTEGER
-- Stores the PK column value of the row which was changed.
-- This is to uniquely identify the row which has been changed.
, ts TIMESTAMP
-- Timestamp when the change was made.
, old_number NUMBER(36, 2)
-- If the changed field was a number, the old value should be stored here.
-- If it's an INSERT event, this would be null.
, new_number NUMBER(36,2)
-- If the changed field was a number, the new value in it should be stored here.
-- If it's a DELETE statement, this would be null.
, old_text VARCHAR2(2000)
-- Similar to old_number but for a text/varchar field.
, new_text VARCHAR2(2000)
-- Similar to new_number but for a text/varchar field.
, old_date DATE
-- Similar to old_number but for a date field.
, new_date DATE
-- Similar to new_number but for a date field.
, ...
, ... -- Any other data types you wish to include.
, ...
);
Now, suppose you have a table like this:
TABLE user
( user_id INTEGER PRIMARY KEY
, user_name VARCHAR2(50)
, birth_date DATE
, address VARCHAR2(50)
)
On this table, I have a trigger that populates the audit_entry_log table, tracking the changes made to it.
I am giving this code example for Oracle; you can definitely tweak it a little to suit MySQL:
CREATE OR REPLACE TRIGGER user_id_trg
BEFORE INSERT OR UPDATE OR DELETE ON user
REFERENCING new AS new old AS old
FOR EACH ROW
BEGIN
IF INSERTING THEN
IF :new.user_name IS NOT NULL THEN
INSERT INTO audit_entry_log (audit_entry_type,
table_name,
column_name,
primary_key,
ts,
new_text)
VALUES ('INSERT',
'USER',
'USER_NAME',
:new.user_id,
current_timestamp,
:new.user_name);
END IF;
--
-- Similar code would go for birth_date and address columns.
--
ELSIF UPDATING THEN
IF :new.user_name != :old.user_name THEN
INSERT INTO audit_entry_log (audit_entry_type,
table_name,
column_name,
primary_key,
ts,
old_text,
new_text)
VALUES ('UPDATE',
'USER',
'USER_NAME',
:new.user_id,
current_timestamp,
:old.user_name,
:new.user_name);
END IF;
--
-- Similar code would go for birth_date and address columns
--
ELSIF DELETING THEN
IF :old.user_name IS NOT NULL THEN
INSERT INTO audit_entry_log (audit_entry_type,
table_name,
column_name,
primary_key,
ts,
old_text)
VALUES ('DELETE',
'USER',
'USER_NAME',
:old.user_id,
current_timestamp,
:old.user_name);
END IF;
--
-- Similar code would go for birth_date and address columns
--
END IF;
END;
/
Now consider, as a simple example, that you run this query at timestamp 31-JAN-2014 14:15:30:
INSERT INTO user (user_id, user_name, birth_date, address)
VALUES (100, 'Foo', '04-JUL-1995', 'Somewhere in New York');
Next, you run an UPDATE query at timestamp 31-JAN-2014 15:00:00:
UPDATE user
SET user_name = 'Bar',
address = 'Somewhere in Los Angeles'
WHERE user_id = 100;
Thus your user table would have data:
user_id user_name birth_date address
------- --------- ----------- --------------------------
100 Bar 04-JUL-1995 Somewhere in Los Angeles
This results in following data in the audit_entry_log table:
audit_entry_type table_name column_name primary_key ts                   old_text              new_text                  old_date new_date
---------------- ---------- ----------- ----------- -------------------- --------------------- ------------------------- -------- --------
INSERT           USER       USER_NAME   100         31-JAN-2014 14:15:30                       FOO
INSERT           USER       BIRTH_DATE  100         31-JAN-2014 14:15:30                                                          04-JUL-1995
INSERT           USER       ADDRESS     100         31-JAN-2014 14:15:30                       SOMEWHERE IN NEW YORK
UPDATE           USER       USER_NAME   100         31-JAN-2014 15:00:00 FOO                   BAR
UPDATE           USER       ADDRESS     100         31-JAN-2014 15:00:00 SOMEWHERE IN NEW YORK SOMEWHERE IN LOS ANGELES
Create a procedure like the following, which accepts a table name and the timestamp to which that table should be restored.
The table is restored only back to a timestamp; there is no 'from' timestamp. Restoration always runs from the current state back to a point in the past.
CREATE OR REPLACE PROCEDURE restore_db (p_table_name varchar, p_to_timestamp timestamp)
AS
CURSOR cur_log IS
    -- Newest changes must be undone first, hence the descending order.
    SELECT *
    FROM audit_entry_log
    WHERE table_name = p_table_name
    AND ts > p_to_timestamp
    ORDER BY ts DESC;
BEGIN
FOR i IN cur_log LOOP
IF i.audit_entry_type = 'INSERT' THEN
-- Delete the row that was inserted.
EXEC ('DELETE FROM '||p_table_name||' WHERE '||p_table_name||'_id = '||i.primary_key);
ELSIF i.audit_entry_type = 'UPDATE' THEN
-- Put all the old data back into the table.
IF i.old_number IS NOT NULL THEN
EXEC ('UPDATE '||p_table_name||' SET '||i.column_name||' = '||i.old_number
||' WHERE '||p_table_name||'_id = '||i.primary_key);
ELSIF i.old_text IS NOT NULL THEN
-- Similar statement as above EXEC for i.old_text
ELSE
-- Similar statement as above EXEC for i.old_date
END IF;
ELSIF i.audit_entry_type = 'DELETE' THEN
-- Write an INSERT statement for the row that has been deleted.
END IF;
END LOOP;
END;
/
Now, if you want to restore the user table to its state at 31-JAN-2014 14:30:00, after the INSERT was fired but before the UPDATE, a procedure call like this would do a good job:
restore_db ('USER', '31-JAN-2014 14:30:00');
I am reiterating this: treat all of the above code as pseudo-code and make the necessary changes when you try to run it. This is the most fail-proof design I have seen for manual query flashbacks.
Have you considered passing the old values into a separate table as XML values? Then, if you need to restore them, you can retrieve the XML values from the table.
For this kind of system, a log table is the way to go. Yes, the table will most likely be big, but it all depends on how far back you want to be able to go. You could use a time limit, as you said, and delete all logs older than 6 months. You could also create some sort of recycle bin and not allow users to have more than, let's say, 100 "items" in it - always keep the most recent 100 log entries for each user.
Regarding the issue of which queries to keep in your log table, there is no built-in function that does what you want. But since you only log updates and deletes (there is no need to log inserts, since users usually have the option to delete their stuff), you can easily build your own function.
Before any UPDATE or DELETE statement, you fetch the entire row from the database and create a REPLACE statement for it - it works both as an UPDATE and as an INSERT. The only thing to keep in mind is that you need a PRIMARY KEY or UNIQUE index on all of your tables.
Here is an idea of how the function could look:
function translateStatement($table, $primaryKey, $id)
{
$sql = "SELECT * FROM `$table` WHERE `$primaryKey` = '$id'"; //should always return one row
$result = mysql_query($sql) or die(mysql_error());
$row = mysql_fetch_assoc($result);
$columns = implode(',', array_map( function($item){ return '`'.$item.'`'; }, array_keys($row)) ); //get column names
$values = implode(',', array_map( function($item){ return '"'.mysql_real_escape_string($item).'"'; }, $row) ); //get escaped column values
return "REPLACE INTO `$table` ($columns) VALUES ($values)";
}
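For instance, for a row in a hypothetical users table, the statement generated by this function would look something like:
REPLACE INTO `users` (`id`,`name`,`email`) VALUES ("5","John","john@example.com");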
OK, so a user comes to my web application and gets points and the like for activity, sort of similar to (but not as complex as) this site. They can vote, comment, submit, favorite, vote for comments, write descriptions, and so on.
At the moment I store a user action in a table against a date, like so:
Table user_actions
action_id - PK AI int
user_id - PK int
action_type - varchar(20)
date_of_action - datetime
So for example, if a user comes along and leaves a comment and then votes on a comment, the rows would look something like this:
action_id = 4
user_id = 25
action_type = 'new_comment'
date_of_action = '2011-11-21 14:12:12';
action_id = 5
user_id = 25
action_type = 'user_comment_vote'
date_of_action = '2011-12-01 14:12:12';
All good, I hear you say. But not quite: remember that these rows reside in the user_actions table, which is a different table from the ones in which the comments and comment votes are stored.
So how do I know which comment links to which row in user_actions?
Well, I could just link the unique comment_id from the comments table to a new column, called target_primary_key, in the user_actions table?
Nope. Can't do that, because the action could equally have been a user_comment_vote, which has a composite (double) key.
So the thought I am left with is: do I just add the primary keys in a column, comma-delimit them, and let PHP parse them out?
So taking the example above, the lines below show how I would store the target primary keys:
new_comment
target_primary_keys - 12 // the unique comment_id from the comments table
user_comment_vote
target_primary_keys - 22,12 // the unique comment_id from the comments table
So basically a user performs an action, the user_actions table is updated and so is the specific table; but how do I link the two while still allowing for multiple keys?
Has anyone had experience with storing user activity before?
Any thoughts are welcome, no wrong answers here.
You do not need a user actions table.
To calculate the "score" you can run one query over multiple tables and multiply the counts of matching comments, ratings, etc. by a multiplier (25 points for a comment, 10 for a rating, ...).
To speed up your page you can store the total score in an extra table or the user table and refresh the total score with triggers if the score changes.
If you want to display the number of ratings or comments you can do the same.
Get the details from the existing tables and store the total number of comments and ratings in an extra table.
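For example, a sketch of the one-off score query; the comments and ratings tables and the point weights here are illustrative assumptions:
SELECT
(SELECT COUNT(*) FROM comments WHERE user_id = 25) * 25
+ (SELECT COUNT(*) FROM ratings WHERE user_id = 25) * 10
AS total_score;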
The simplest answer is to just use another table, which can contain multiple matches for any key and allows great indexing options:
create table users_to_actions (
user_id int(20) not null,
action_id int(20) not null,
action_type varchar(25) not null,
category_or_other_criteria ...
);
create index uta_u_a on users_to_actions(user_id, action_id);
To expand on this a bit, you would then select items by joining them with this table:
select
*
from
users_to_actions as uta join comments as c using(action_id)
where
uta.action_type = 'comment' and user_id = 25
order by
c.post_date
Or maybe a nested query depending on your needs:
select * from users where user_id in(
    select
        user_id
    from
        users_to_actions as uta
    where
        uta.action_type = 'comment'
);
Here is the problem. I have a couple of MyISAM tables in MySQL, and several queries where one depends on another. Something of this kind:
CREATE TABLE users (
    name varchar(255) NOT NULL PRIMARY KEY,
    money int(10) unsigned DEFAULT NULL
);
INSERT INTO users(name, money) VALUES('user1', 700);
INSERT INTO users(name, money) VALUES('user2', 200);
I need to transfer money from one user to another:
<?php
$query1 = "UPDATE users SET money=money-50 WHERE name = 'user1'";
$query2 = "UPDATE users SET money=money+50 WHERE name = 'user2'";
The problem is that if the connection breaks between these two queries, the money just gets lost: the first user loses it, and the other one doesn't get it. I could use InnoDB or BDB to start a transaction and roll back both queries on an error in either of them, but I still have this assignment for MyISAM.
How does this problem normally get solved?
Firstly, as several people have mentioned, this isn't a good idea, and you shouldn't do it in any real system. But I assume this is a homework assignment, and the goal is to figure out how to fake atomic updates in a system that doesn't support them.
You can do it by basically creating your own transaction log system. The idea is to create a set of idempotent operations, i.e., operations you can repeat if they get interrupted and still get the correct result. Addition and subtraction are not idempotent, because if you add or subtract multiple times, you'll end up with a different result. Assignment is. So you can do something like this:
CREATE TABLE transactions(
id int auto_increment primary key,
committed boolean default false,
user1 varchar(255),
user2 varchar(255),
balance1 int,
balance2 int,
index (id, committed)
);
Then your "transaction" looks something like this:
INSERT INTO transactions(user1, user2, balance1, balance2)
VALUES(
'user1',
'user2',
(SELECT money - 50 FROM users WHERE name='user1'),
(SELECT money + 50 FROM users WHERE name='user2')
);
You then have a separate system or function that commits transactions: find the first uncommitted transaction, update both accounts with the stored values, and mark the transaction as committed. If the process gets interrupted, you'll be able to recover, because you can play back transactions, and no harm is done if you play back a transaction more than once.
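A sketch of that commit step, assuming the oldest uncommitted transaction turned out to have id = 1 (in practice you would look the id up first):
-- Apply the stored absolute balances; idempotent, so safe to repeat if interrupted.
UPDATE users u
JOIN transactions t ON t.id = 1 AND t.committed = false
SET u.money = IF(u.name = t.user1, t.balance1, t.balance2)
WHERE u.name IN (t.user1, t.user2);
-- Mark the transaction as committed.
UPDATE transactions SET committed = true WHERE id = 1;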
MyISAM does not provide any mechanism for handling this internally. If you need atomicity, use an engine which does support transactions, such as the InnoDB engine. This is the usual and accepted solution to this kind of problem.
Another possibility would be to store transactions rather than totals.
CREATE TABLE users(name VARCHAR(255), PRIMARY KEY (name));
CREATE TABLE transactions(from_user VARCHAR(255), to_user VARCHAR(255), amount INT);
This means transactions are now a single query, but finding the current balance is more difficult.
The transaction:
INSERT INTO transactions VALUES('user1', 'user2', 50);
Finding the balance is harder:
SELECT (SELECT COALESCE(SUM(amount), 0) FROM transactions WHERE to_user='user2')
     - (SELECT COALESCE(SUM(amount), 0) FROM transactions WHERE from_user='user2')
Since the record can't be only half inserted, this resolves the issue. Note I didn't say this was a good idea. Use a transactional database.
Note: there is one more way to do this which is rather ugly, but should still be atomic with MyISAM:
UPDATE users SET money=IF(name='user1', money-50, money+50) WHERE name='user1' OR name='user2';
An equivalent form uses a self-join update:
UPDATE users u1
INNER JOIN users u2
SET u1.money=u1.money-50, u2.money=u2.money+50
WHERE u1.name = 'user1'
AND u2.name = 'user2'