Transaction problem in MyISAM - php

Here is the problem. I have a couple of MyISAM tables in MySQL, and several queries where one depends on another. Something like this:
CREATE TABLE users (
name varchar(255) NOT NULL PRIMARY KEY,
money int(10) unsigned DEFAULT NULL
);
INSERT INTO users(name, money) VALUES('user1', 700);
INSERT INTO users(name, money) VALUES('user2', 200);
I need to transfer money from one user to another:
<?php
$query1 = "UPDATE users SET money=money-50 WHERE name = 'user1'";
$query2 = "UPDATE users SET money=money+50 WHERE name = 'user2'";
The problem is that if the connection breaks between these two queries, the money just gets lost: the first user loses it and the second never receives it. I could use InnoDB or BDB to start a transaction and roll back both queries if either fails, but I have been given this assignment for MyISAM.
How is this problem normally solved?

Firstly, as several people have mentioned, this isn't a good idea, and you shouldn't do it in any real system. But I assume this is a homework assignment, and the goal is to figure out how to fake atomic updates in a system that doesn't support them.
You can do it by basically creating your own transaction log system. The idea is to create a set of idempotent operations, i.e., operations you can repeat if they get interrupted and still get the correct result. Addition and subtraction are not idempotent, because if you add or subtract multiple times you'll end up with a different result. Assignment is. So you can do something like this:
CREATE TABLE transactions(
id int auto_increment primary key,
committed boolean default false,
user1 varchar(255),
user2 varchar(255),
balance1 int,
balance2 int,
index (id, committed)
);
Then your "transaction" looks something like this:
INSERT INTO transactions(user1, user2, balance1, balance2)
VALUES(
'user1',
'user2',
(SELECT money - 50 FROM users WHERE name='user1'),
(SELECT money + 50 FROM users WHERE name='user2')
);
You then have a separate system or function that commits transactions. Find the first uncommitted transaction, update both the accounts with the stored values, and mark the transaction as committed. If the process gets interrupted, you'll be able to recover because you can play back transactions and there will be no harm done if you play back a transaction more than once.
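A minimal sketch of that replay/commit step, in the same old mysql_* style the question uses (the function name and error handling are just illustrative):
<?php
// Replay every uncommitted transaction. Safe to run repeatedly: the stored
// balances are assignments, so applying them twice gives the same result.
function commit_pending_transactions() {
    $result = mysql_query(
        "SELECT id, user1, user2, balance1, balance2
           FROM transactions
          WHERE committed = FALSE
          ORDER BY id");
    while ($tx = mysql_fetch_assoc($result)) {
        mysql_query("UPDATE users SET money = {$tx['balance1']} WHERE name = '{$tx['user1']}'");
        mysql_query("UPDATE users SET money = {$tx['balance2']} WHERE name = '{$tx['user2']}'");
        mysql_query("UPDATE transactions SET committed = TRUE WHERE id = {$tx['id']}");
    }
}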

MyISAM does not provide any mechanism for handling this internally. If you need atomicity, use an engine which does support transactions, such as the InnoDB engine. This is the usual and accepted solution to this kind of problem.
Another possibility would be to store transactions rather than totals.
CREATE TABLE users(name VARCHAR(255), PRIMARY KEY (name));
CREATE TABLE transactions(from_user VARCHAR(255), to_user VARCHAR(255), amount INT);
This means transactions are now a single query, but finding the current balance is more difficult.
The transaction:
INSERT INTO transactions VALUES('user1', 'user2', 50);
Finding the balance is harder:
SELECT (SELECT COALESCE(SUM(amount), 0) FROM transactions WHERE to_user='user2')
     - (SELECT COALESCE(SUM(amount), 0) FROM transactions WHERE from_user='user2') AS balance;
Since the record can't be only half inserted, this resolves the issue. Note I didn't say this was a good idea. Use a transactional database.
Note: There is one more way to do this which is rather ugly but should still be atomic with MyISAM.
UPDATE users SET money=IF(name='user1',money-50, money+50) WHERE name='user1' OR name='user2';
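If you go that route, you can at least check from PHP that both rows were actually changed (a sketch; it assumes the users table from the question):
<?php
mysql_query("UPDATE users
                SET money = IF(name = 'user1', money - 50, money + 50)
              WHERE name = 'user1' OR name = 'user2'");
// Exactly two rows should have changed; anything else means an account was missing.
if (mysql_affected_rows() != 2) {
    // handle the error (log it, alert someone, etc.)
}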

UPDATE users u1
INNER JOIN users u2
SET u1.money=u1.money-50, u2.money=u2.money+50
WHERE u1.name = 'user1'
AND u2.name = 'user2'


JOIN or 2 queries - 1 large table, 1 small, hardware limited

I have a page in which there is a <select> menu, which contains all of the values from a small table (229 rows), such that <option value='KEY'>VALUE</option>.
This select menu is a filter for a query which runs on a large table (3.5M rows).
In the large table is a foreign key which references KEY from small table.
However, in the results of the large table query, I also need to display the relative VALUE from the small table.
I could quite easily do an INNER JOIN to retrieve the results, OR I could do a separate 'pre'-query to my smaller table, fetch its values into an array, and then let the application get the VALUE from the small table results.
The application is written in PHP.
Hardware resources ARE an issue (I cannot upgrade to a higher instance right now; boss constrained) - I am running this on a t2.micro RDS instance on Amazon Web Services.
I have added both single and covering indexes on the columns in the WHERE & HAVING clauses, and my server is reporting that I have 46 MB of RAM available.
Given the above, I know that JOIN can be expensive especially on big tables. Does it just make sense here to do 2 queries, and let the application handle some of the work, until I can negotiate better resources?
EDIT:
No Join : 6.9 sec
SELECT nationality_id, COUNT(DISTINCT(txn_id)) as numtrans,
SUM(sales) as sales, SUM(units) as units, YrQtr
FROM 1_txns
GROUP BY nationality_id;
EXPLAIN
'1', 'SIMPLE', '1_txns', 'index', 'covering,nat', 'nat', '5', NULL, '3141206', NULL
With Join: 59.03 Sec
SELECT 4_nationality.nationality, COUNT(DISTINCT(txn_id)) as numtrans,
SUM(sales) as sales, SUM(units) as units, YrQtr
FROM 1_txns INNER JOIN 4_nationality USING (nationality_id)
GROUP BY nationality_id
HAVING YrQtr LIKE :period;
EXPLAIN
'1', 'SIMPLE', '4_nationality', 'ALL', 'PRIMARY', NULL, NULL, NULL, '229', 'Using temporary; Using filesort'
'1', 'SIMPLE', '1_txns', 'ref', 'covering,nat', 'nat', '5', 'reports.4_nationality.nationality_id', '7932', NULL
Schema is
Table 1_txns (txn_id, nationality_id, yrqtr, sales, units)
Table 4_nationality (nationality_id, nationality)
I have separate indexes on each of nationality_id, txn_id and yrqtr in my large transactions table, and just a primary key index on my small table.
Something strange also: the query WITHOUT the join is missing a row from its results!
If your lookup "menu" list table is only the 229 rows as stated, it has a unique key, and your menu table has an index on (key, value), the join should be negligible... especially if you're only querying the results based on a single key anyhow.
The bigger question to me would be on your table of 3.5 million records. At 229 "menu" items, it would be returning an average of over 15k records each time. And I am sure that not every category is evenly balanced... some could have a few hundred or thousand entries, others could have 30k+ entries. Is there some other criteria that would allow smaller subsets to be returned? Obviously not enough info to quantify.
Now, after seeing your revised post while entering this, I see you are trying to get aggregations. The table would otherwise be fixed for historical data. I would suggest a summary table be done on a per Nationality/YrQtr basis. That way, you can query it directly if the period is PRIOR to the current period in question. If it is the current period, then sum the aggregates from production. Again, since transactions won't change historically, neither would their counts, and you would have an immediate response from the pre-summary table.
Feedback
As for how / when to implement a summary table. I would create the table with the respective columns you need... Nationality, Period (Yr/Month), and respective counts for distinct transactions, etc.
I would then pre-aggregate once for all your existing data for everything UP TO but not including the current period (Yr/Month). Now you have your baseline established in summary.
Then, add a trigger to your transaction table on insert and process something like the following (NOTE: this is not actual trigger syntax, just the context of what to do):
update summaryTable
set numTrans = numTrans + 1,
TotSales = TotSales + NEWENTRY.Sales,
TotUnits = TotUnits + NEWENTRY.Units
where
Nationality = NEWENTRY.Nationality
AND YrQtr = NEWENTRY.YrQtr
if # records affected by the update = 0
Insert into SummaryTable
( Nationality,
YrQtr,
NumTrans,
TotSales,
TotUnits )
values
( NEWENTRY.Nationality,
NEWENTRY.YrQtr,
1,
NEWENTRY.Sales,
NEWENTRY.Units )
Now, your aggregates will ALWAYS be in sync in the summary table after EVERY record inserted into the transaction table. You can ALWAYS query this summary table instead of the full transaction table. If you never have activity for a given Nationality / YrQtr, no record will exist.
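As a rough illustration only (the summary table name, its columns, and the unique key are assumptions, not from the original post), that update-or-insert logic maps naturally onto MySQL's INSERT ... ON DUPLICATE KEY UPDATE inside an AFTER INSERT trigger:
-- Hypothetical summary table keyed on nationality + quarter.
CREATE TABLE nat_qtr_summary (
    nationality_id INT NOT NULL,
    yrqtr          VARCHAR(6) NOT NULL,
    numtrans       INT NOT NULL DEFAULT 0,
    totsales       DECIMAL(12,2) NOT NULL DEFAULT 0,
    totunits       INT NOT NULL DEFAULT 0,
    PRIMARY KEY (nationality_id, yrqtr)
);

DELIMITER $$
CREATE TRIGGER trg_txns_summary AFTER INSERT ON 1_txns
FOR EACH ROW
BEGIN
    -- Update the existing summary row, or create it if this is the first
    -- transaction for that nationality/quarter. Note numtrans counts rows,
    -- not DISTINCT txn_id; adjust if one txn_id can span several rows.
    INSERT INTO nat_qtr_summary (nationality_id, yrqtr, numtrans, totsales, totunits)
    VALUES (NEW.nationality_id, NEW.yrqtr, 1, NEW.sales, NEW.units)
    ON DUPLICATE KEY UPDATE
        numtrans = numtrans + 1,
        totsales = totsales + NEW.sales,
        totunits = totunits + NEW.units;
END$$
DELIMITER ;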
First, move the HAVING to WHERE so that the rest of the query has less to do. Second, delay the lookup of nationality until after the GROUP BY:
SELECT
( SELECT nationality
FROM 4_nationality
WHERE nationality_id = t.nationality_id
) AS nationality,
COUNT(DISTINCT(txn_id)) as numtrans,
SUM(sales) as sales,
SUM(units) as units,
YrQtr
FROM 1_txns AS t
WHERE YrQtr LIKE :period
GROUP BY nationality_id;
If possible, avoid wild cards and simply do YrQtr = :period. That would allow INDEX(YrQtr, nationality_id) for even more performance.
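For example (a sketch; the index name is arbitrary):
ALTER TABLE 1_txns ADD INDEX yrqtr_nat (YrQtr, nationality_id);
-- then in the query above:
WHERE YrQtr = :period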

SQL INSERT INTO SELECT and Return the SELECT data to Create Row View Counts

So I'm creating a system that will pull 50-150 records at a time from a table and display them to the user, and I'm trying to keep a view count for each record.
I figured the most efficient way would be to create a MEMORY table into which I INSERT the IDs of the displayed rows, and then have a cron job that runs regularly to aggregate the view counts, clear out the memory table, and update the original table with the latest view counts. This avoids constantly updating the table that'll likely be getting accessed the most, so I'm not locking 150 rows at a time with each query (or the whole table if I'm using MyISAM).
Basically, the method explained here.
However, I would of course like to do this at the same time as I pull the records information for viewing, and I'd like to avoid running a second, separate query just to get the same set of data for its counts.
Is there any way to SELECT a dataset, return that dataset, and simultaneously insert a single column from that dataset into another table?
It looks like PostgreSQL might have something similar to what I want with the RETURNING keyword, but I'm using MySQL.
First of all, I would not add a counter column to the Main table. I would create a separate Audit table that holds the ID of the item from the Main table plus at least a timestamp of when that ID was requested. In essence, the Audit table stores a history of requests. With this approach you can easily generate much more interesting reports. You can always calculate grand totals per item, and you can also calculate summaries by day, week, month, etc. per item or across all items. Depending on the volume of data, you can periodically delete Audit entries older than some threshold (a month, a year, etc.).
Also, you can easily store more information in the Audit table as needed, for example a user ID to calculate stats per user.
To populate the Audit table "automatically" I would create a stored procedure. The client code would call this stored procedure instead of performing the original SELECT. The stored procedure would return exactly the same result as the original SELECT does, but would also add the necessary details to the Audit table, transparently to the client code.
So, let's assume that Audit table looks like this:
CREATE TABLE AuditTable
(
ID int
IDENTITY -- SQL Server
SERIAL -- Postgres
AUTO_INCREMENT -- MySQL
NOT NULL,
ItemID int NOT NULL,
RequestDateTime datetime NOT NULL
)
and your main SELECT looks like this:
SELECT ItemID, Col1, Col2, ...
FROM MainTable
WHERE <complex criteria>
To perform both INSERT and SELECT in one statement in SQL Server I'd use the OUTPUT clause, in Postgres the RETURNING clause, and in MySQL - ??? I don't think it has anything like this. So, the MySQL procedure would have several separate statements.
MySQL
First do your SELECT and insert the results into a temporary (possibly MEMORY) table. Then copy the item IDs from the temporary table into the Audit table. Finally, SELECT from the temporary table to return the result to the client.
CREATE TEMPORARY TABLE TempTable
(
ItemID int NOT NULL,
Col1 ...,
Col2 ...,
...
)
ENGINE = MEMORY
SELECT ItemID, Col1, Col2, ...
FROM MainTable
WHERE <complex criteria>
;
INSERT INTO AuditTable (ItemID, RequestDateTime)
SELECT ItemID, NOW()
FROM TempTable;
SELECT ItemID, Col1, Col2, ...
FROM TempTable
ORDER BY ...;
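One small addition worth making: temporary tables live until the connection closes, so drop it once the result has been returned:
DROP TEMPORARY TABLE TempTable;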
SQL Server (just to tease you: this single statement does both INSERT and SELECT)
MERGE INTO AuditTable
USING
(
SELECT ItemID, Col1, Col2, ...
FROM MainTable
WHERE <complex criteria>
) AS Src
ON 1 = 0
WHEN NOT MATCHED BY TARGET THEN
INSERT
(ItemID, RequestDateTime)
VALUES
(Src.ItemID, GETDATE())
OUTPUT
Src.ItemID, Src.Col1, Src.Col2, ...
;
You can leave the Audit table as it is, or you can set up cron to summarize it periodically. It really depends on the volume of data. In our system we store individual rows for a week, plus we summarize stats per hour and keep those for 6 weeks, plus we keep a daily summary for 18 months. But, and this is the important part, all these summaries are separate Audit tables; we don't keep auditing information in the Main table, so we don't need to update it.
Joe Celko explained it very well in SQL Style Habits: Attack of the Skeuomorphs:
Now go to any SQL Forum text search the postings. You will find
thousands of postings with DDL that include columns named createdby,
createddate, modifiedby and modifieddate with that particular
meta data on the end of the row declaration. It is the old mag tape
header label written in a new language! Deja Vu!
The header records appeared only once on a tape. But these meta data
values appear over and over on every row in the table. One of the main
reasons for using databases (not just SQL) was to remove redundancy
from the data; this just adds more redundancy. But now think about
what happens to the audit trail when a row is deleted? What happens to
the audit trail when a row is updated? The trail is destroyed. The
audit data should be separated from the schema. Would you put the log
file on the same disk drive as the database? Would an accountant let
the same person approve and receive a payment?
You're kind of asking if MySQL supports a SELECT trigger. It doesn't. You'll need to do this as two queries, however you can stick those inside a stored procedure - then you can pass in the range you're fetching, have it both return the results AND do the INSERT into the other table.
Updated answer with skeleton example for stored procedure:
DELIMITER $$
CREATE PROCEDURE `FetchRows`(IN StartID INT, IN EndID INT)
BEGIN
UPDATE Blah SET ViewCount = ViewCount+1 WHERE id >= StartID AND id <= EndID;
# ^ Assumes counts are stored in the same table. If they're in a separate table, do an INSERT INTO ... ON DUPLICATE KEY UPDATE ViewCount = ViewCount+1 instead.
SELECT * FROM Blah WHERE id >= StartID AND id <= EndID;
END$$
DELIMITER ;
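Calling it from PHP would then look roughly like this (a mysqli sketch; the connection details are placeholders, and the extra result set a CALL produces has to be cleared before running further queries):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

if ($result = $db->query("CALL FetchRows(1, 150)")) {
    while ($row = $result->fetch_assoc()) {
        // display the row; its view count was already bumped inside the procedure
    }
    $result->free();
    while ($db->more_results() && $db->next_result()) {
        // flush the procedure's trailing empty result set
    }
}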

PHP/MYSQL: SELECT statement, multiple WHERE's, populated from database

I'm trying to figure out how to get a select statement to be populated by an ever-changing number of where's. This is for an order-status tracking application.
Basically, the idea is that a user (a customer of our company) logs in and can see his/her orders, check status, etc. No problem. The problem arises when that user needs to be associated with multiple companies. Say they work for or own two different companies, or they work for a company that owns multiple sub-companies, each ordering individually, but the big shot needs to see everything ordered by all of the companies. This is where I'm running into a problem. I can't seem to figure out a good way of making this happen. The only thing I have come up with is this:
client='Client Name One' OR client='Client name two' AND hidden='0' OR client='Client name three' AND hidden='0' OR client='Client name four' AND hidden='0'
(note that client in the previous code refers to the user's company, thus our client)
placed inside of a column called company in my users table of the database. This then gets called like this:
$clientnamequery = "SELECT company FROM mtc_users WHERE username='testing'";
$clientnameresult = mysql_query($clientnamequery); list($clientname)=mysql_fetch_row($clientnameresult);
$query = "SELECT -redacted lots of column names- FROM info WHERE hidden='0' AND $clientname ORDER BY $col $dir";
$result = mysql_query($query);
The thing is, while this works, I can't seem to make PHP add in the client=' and ' AND hidden='0' parts correctly. Plus, it's kind of kludgy.
Any ideas? Thanks in advance!
Expanding on Tim's answer, you can use the IN operator and subqueries:
SELECT *columns* FROM info
WHERE hidden='0' AND client IN
( SELECT company FROM co_members
WHERE username=?
)
ORDER BY ...
Or you can try a join:
SELECT info.* FROM info
JOIN co_members ON info.client = co_members.company
WHERE co_members.username=?
AND hidden='0'
ORDER BY ...
A join is the preferred approach. Among other reasons, it will probably be the most efficient (though you should test this with EXPLAIN SELECT ...). You probably shouldn't grab all table columns (the info.*) in case you later change the table definition; I only put that in because I didn't know which columns you wanted.
On an unrelated note, look into using prepared queries with either the mysqli or PDO drivers. Prepared queries are more efficient when you execute a query multiple times and also obviate the need to sanitize user input.
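A rough sketch of the join query as a PDO prepared statement (the DSN and credentials are placeholders; add whatever ORDER BY you need):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare(
    "SELECT info.* FROM info
     JOIN co_members ON info.client = co_members.company
     WHERE co_members.username = :username
       AND info.hidden = '0'");
$stmt->execute(array(':username' => $username));
$orders = $stmt->fetchAll(PDO::FETCH_ASSOC);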
The relational approach involves tables like:
CREATE TABLE mtc_users (
username VARCHAR(255) NOT NULL PRIMARY KEY,
-- ... other user info
) ENGINE=InnoDB;
CREATE TABLE companies (
id INT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(255) NOT NULL,
-- ... other company info
) ENGINE=InnoDB;
CREATE TABLE co_members (
username VARCHAR(255) NOT NULL,
company INT NOT NULL,
FOREIGN KEY (`username`) REFERENCES mtc_users (`username`)
ON DELETE CASCADE
ON UPDATE CASCADE,
FOREIGN KEY (`company`) REFERENCES companies (`id`)
ON DELETE CASCADE
ON UPDATE CASCADE,
INDEX (`username`, `company`)
) ENGINE=InnoDB;
If company names are to be unique, you could use those as a primary key rather than an id field. "co_members" is a poor name, but "employees" and "shareholders" didn't quite seem the correct terms. As you are more familiar with the system, you'll be able to come up with a more appropriate name.
You can use the IN keyword
client IN('client1','client2',...)
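If the list of companies comes from user data, it's safer to build placeholders than to concatenate the names into the SQL. A sketch with PDO (assuming a PDO connection in $pdo; the variable names are illustrative):
<?php
$companies = array('Client Name One', 'Client name two', 'Client name three');
$placeholders = implode(',', array_fill(0, count($companies), '?'));

$stmt = $pdo->prepare(
    "SELECT * FROM info WHERE hidden = '0' AND client IN ($placeholders)");
$stmt->execute($companies);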

Store database, good pattern for simultaneous access

I am kind of new to database design, so I am asking for advice or some kind of good pattern.
The situation is that there is one database, a few tables, and many users. How should I design the database, and/or which types of queries should I use, to make it work when users can interact with the database simultaneously? I mean, they have access to, and can change, the same set of data.
I was thinking about transactions, but I am not sure if that is the right / good / only solution.
UPDATE:
By many I mean hundreds, maybe thousands in total. Clients will be connecting to MySQL through a WWW page in PHP. They will use operations such as insert, update, delete and select, and sometimes joins. It's a small database for 5-20 clients and one or two admins. Clients will be updating and selecting info. I am thinking about transactions combined with storing some info in $_SESSION.
A simple approach that can be very effective is row versioning.
Add a version int field to the main table.
When inserting, set it to 0.
When updating, increment it by one; the version field should also appear in the WHERE clause.
EXAMPLE:
CREATE TABLE myTable (
id INT NOT NULL,
name VARCHAR(50) NOT NULL,
vs INT NOT NULL
)
INSERT INTO myTable VALUES (1, 'Sebastian', 0)
-- first user reads, vs = 0
SELECT * FROM myTable WHERE id = 1
-- second user reads, vs = 0
SELECT * FROM myTable WHERE id = 1
-- first user writes, vs = 1
UPDATE myTable SET name = 'Juan Sebastian', vs = vs + 1 WHERE id = 1 AND vs = 0
(1 row affected)
-- second user writes, no rows affected, because vs is different, show error to the user or do your logic
UPDATE myTable SET name = 'Julian', vs = vs + 1 WHERE id = 1 AND vs = 0
(0 rows affected)
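From PHP, the check is simply "did my UPDATE change a row?". A sketch in the same mysql_* style used elsewhere in this thread (the variables hold what the user originally read):
<?php
$id = 1;
$vs = 0;                      // version the user read earlier
$name = 'Juan Sebastian';

mysql_query(sprintf(
    "UPDATE myTable SET name = '%s', vs = vs + 1 WHERE id = %d AND vs = %d",
    mysql_real_escape_string($name), $id, $vs));

if (mysql_affected_rows() == 0) {
    // someone else updated the row first: reload it and let the user retry
}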
Use InnoDB as the engine type. Unlike MyISAM, it supports row-level locking, so you don't have to lock the entire table when someone is updating a record.

Private Messaging System With Threading/Replies

I'm currently working on creating a private messaging system (PHP/MySQL) in which users can send a message to multiple recipients at one time, and those users can then decide to reply.
Here's what I'm currently working with:
tbl_pm tbl:
id
date_sent
title
content
status ENUM ('unread', 'read') DEFAULT 'unread'
tblpm_info tbl:
id
message_id
sender_id
receiver_id
However, I need some help determining the logic on two things:
1) When a new message is created, should the "id" be auto-increment? If the 'id' column is set to auto-increment in both tables, how would I set the "message_id" column in the 'relation table'?
For example, when a new message is created, my MySQL statement is as follows:
<?php
mysql_query("INSERT INTO `tblpm` (title, content, sender_id, date_sent) VALUES ('$subject', '$message', '$sender', NOW())" );
In the same statement, how would I enter the 'auto-incremented' value of tblpm into the tblpm_info "message_id" field?
2) What should my MySQL statement look like when users reply to messages?
Perhaps I am making this more complicated than I need to. Any help is greatly appreciated!
1) Definitely yes, IDs should be auto-incremented unless you provide a different means of a unique primary key. You get the ID of the insert either with mysql_insert_id() or with LAST_INSERT_ID() from MySQL directly, so to post some connected info you can do either
mysql_query("INSERT INTO table1 ...")
$foreign_key=mysql_insert_id(); //this gives you the last auto-increment for YOUR connection
or, but only if you're absolutely sure no one else writes to the table in the mean time or have control over the transaction, after insert do:
$foreign_key=mysql_query("SELECT LAST_INSERT_ID()")
INSERT INTO table2 message_id=$foreign_key
or, without pulling the FK into PHP, all in one go (I also advise wrapping the SQL in a transaction) with something like:
"INSERT INTO table1...; INSERT INTO table2 (message_id,...) VALUES(LAST_INSERT_ID(),...)"
Depending on your language and MySQL libraries, you might not be able to issue the multi-query approach, so you're better off using the first approach.
2) This can have many approaches, depending on whether you need to reply to all the recipients too (e.g. a conference), reply in a thread/forum-like manner, and whether the client side can store the last retrieved message/id (e.g. in a cookie; this also affects whether you really need the "read" field).
The "private chat" approach is the easiest one; you are then probably better off either storing the message in one table and the from-to relationships in another (and using JOINs on them), or simply duplicating the message in one table (since storage is cheap nowadays). So, the simplistic model would be one table:
table: message_body,from,to
$recipients = array(1, 2, 3 /* ... */);
foreach ($recipients as $recipient)
    mysql_query("INSERT INTO `table` (..., message_body, `from`, `to`) VALUES (..., '$message_body', $from, $recipient)");
(duplicate the message etc.; only the recipient changes)
or
message_table: id,when,message_body
to-from-table: id,msg_id,from,to
$recipients = array(1, 2, 3 /* ... */);
mysql_query("INSERT INTO message_table (`when`, message_body) VALUES (NOW(), '$body')");
$msg_id = mysql_insert_id();
foreach ($recipients as $recipient)
    mysql_query("INSERT INTO `to-from-table` (msg_id, `from`, `to`) VALUES ($msg_id, $from, $recipient)");
(message inserted once; store the relation and FK for all recipients)
Each client then stores the last message_id he/she received (defaulting to 0), and assumes all previous messages have already been read:
"SELECT * FROM message_table WHERE (`from`=$user_id OR `to`=$user_id) AND id > $last_msg_id"
or we just take note of the last input time from the user and query any new messages from then on:
"SELECT * FROM message WHERE from=$user_id OR to=$user_id WHERE when>='".date('Y-m-d H:i:s',$last_input_time)."' "
If you need a more conference- or forum-thread-like approach, and need to keep track of who has read the message or not, you may need to keep track of all the users involved.
Assuming there won't be a hundred-something people in one "multi-user conference", I'd go with one table for messages and the "comma-separated and wrapped list" trick I use a lot for storing tags.
id autoincrement (again, no need for a separate message id)
your usual: sent_at, title (if you need one), content
sender (int)
recipients (I'd go with VARCHAR or the shorter versions of TEXT; TEXT or BLOB gives you an unlimited number of users but may have an impact on performance)
readers (same as above)
The secret of the recipients/readers fields is to populate them with a comma-separated id list and wrap it in commas again (I'll explain why below).
So you'd have to collect the ids of the recipients into an array again, e.g. $recipients=array(2,3,5), and modify your insert:
"INSERT INTO `table` (sent_at,title,content,sender,recipients) VALUES(NOW(),'$title','$content',$sender_id,',".implode(',', $recipients).",')"
you get table rows like
... sender | recipients
... 1 | ,2, //single user message
... 1 | ,3,5, //multi user message
to select all messages for a user with the id $user_id=2 you go with
SELECT * FROM `table` WHERE sender=$user_id OR INSTR(recipients, ',$user_id,')
Previously we wrapped the imploded list of recipients, e.g. '5,2,3' becomes ',5,2,3,', and INSTR here tells whether ',2,' is contained somewhere as a substring - seeking for just '2', ',2' or '2,' could give you false positives on e.g. '234,56', '1,234' or '9,452,89' respectively - that's why we had to wrap the list in the first place.
When the user reads/receives his/her message, you append their id to the readers list like:
UPDATE `table` SET readers=CONCAT(TRIM(TRAILING ',' FROM IFNULL(readers, '')),',$user_id,') WHERE id=${initial message_id here}
which results in:
... sender | recepients | readers
... 1 | ,2, | ,2,
... 1 | ,3,5, | ,3,5,2,
Or we can now modify the initial query, adding a column "is_read" to state whether the user has previously read the message or not:
SELECT *, INSTR(readers, ',$user_id,') > 0 AS is_read FROM `table` WHERE INSTR(recipients, ',$user_id,')
Then collect the message ids from the result and update the "readers" fields in one go:
"UPDATE `table` SET readers=CONCAT(TRIM(TRAILING ',' FROM IFNULL(readers, '')),',$user_id,') WHERE id IN (".implode(',', $received_msg_ids).")"
You should not rely on auto-increment producing matching IDs in both tables, because two users might post messages at nearly the same time. If the first script inserts data into the tbl_pm table, and the second script then manages to execute both its tbl_pm and tblpm_info inserts before the first script completes its tblpm_info insert, the first script's two database inserts will end up with different IDs.
Aside from that, your database structure doesn't seem well organized for the task at hand. Assuming your messages could be very long, and sent to a very large number of users, it would be ideal to have the message content stored once, and for each recipient have unread status, read time, etc. For example:
CREATE TABLE `pm_data` (
`id` smallint(5) unsigned NOT NULL auto_increment,
`date_sent` timestamp NOT NULL,
`title` varchar(255),
`sender_id` smallint(5) unsigned,
`parent_message_id` smallint(5) unsigned,
`content` text,
PRIMARY KEY (`id`)
);
CREATE TABLE `pm_info` (
`id` smallint(5) unsigned NOT NULL auto_increment,
`pm_id` smallint(5) unsigned NOT NULL,
`recipient_id` smallint(5) unsigned,
`read` tinyint(1) unsigned default 0,
`read_date` timestamp,
PRIMARY KEY (`id`)
);
Create these two tables, and notice both of them have an 'id' value set to auto-increment, but the 'info' table also has a pm_id field that would hold the ID number of the 'data' row that it refers to, such that you're sure each row has a primary key in the 'info' table that you can use to select from.
If you want a true relational database setup using MySQL, make sure your engine is set to InnoDB, which allows relationships to be set up between tables, so (for example) if you try to insert something into the 'info' table that refers to a pm_id that doesn't exist in the 'data' table, the INSERT will fail.
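For example, with both tables created with ENGINE=InnoDB, the relationship described above can be enforced like this (the constraint name is arbitrary):
ALTER TABLE pm_info
  ADD CONSTRAINT fk_pm_info_pm_id
  FOREIGN KEY (pm_id) REFERENCES pm_data (id);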
Once you've chosen a database structure, then your PHP code would look something like:
<?php
// Store these in variables such that if they change, you don't need to edit all your queries
$data_table = 'pm_data';
$info_table = 'pm_info';
mysql_query("INSERT INTO `$data_table` (title, content, sender_id, date_sent) VALUES ('$subject', '$message', '$sender', NOW())" );
$pmid = mysql_insert_id(); // Get the inserted ID
foreach ($recipient_list as $recipient) {
mysql_query("INSERT INTO `$info_table` (pm_id, recipient_id) VALUES ('$pmid', '$recipient')" );
}
Yes. You would definitely set auto_increment on both of the IDs.
To set the message_id you would programmatically insert it in there.
Your query would look like this:
mysql_query("INSERT INTO `tblpm` (title, content, sender_id, date_sent) VALUES ('$subject', '$message', '$sender', NOW())" );
Notice it's the same! If the id is set to auto_increment it will do all the magic for you.
In plain PHP/MySQL calls, mysql_insert_id() returns the auto-incremented value from the previous INSERT operation.
So, you insert the message, collect the newly generated ID, and put that value into the other table.
Personally, in your case (provided the example was not simplified and there is not more that I cannot see), I would store the data from both of those tables in a single table, as they appear to be directly related:
tbl_pm tbl:
message_id
date_sent
title
content
status ENUM ('unread', 'read') DEFAULT 'unread'
sender_id
receiver_id
So you end up with something like the above; there is not really any need for the join, as the relationship is always going to be 1 to 1. You have read/unread in the tbl_pm table, which would surely change per recipient, meaning you are having to store a copy of the message for each recipient anyway. Perhaps status is supposed to be in the tblpm_info table.
If you do want to insert into both tables, try using LAST_INSERT_ID() within a query, or mysql_insert_id() from within PHP, as explained above.
I'd probably do something similar to what gavin recommended, but if you wanted threaded messages, you'd have to add another key, like this:
private_messages
- title (text)
- date (timestamp)
- content (text)
- status (enum)
- sender_id (int)
- receiver_id (int)
- parent_message_id (int)
Then you could have nested messages without a separate table or system.
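Fetching a thread is then a matter of selecting a message together with the rows pointing back at it. A sketch, assuming the table also has an auto-increment id column and 42 is the (hypothetical) id of the original message:
SELECT *
  FROM private_messages
 WHERE id = 42
    OR parent_message_id = 42
 ORDER BY `date`;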
