I am working on a browser/mobile game and I am trying to build a system that automatically ends queued tasks after a certain time has passed. It's the basic research schema used in most games.
Research A costs $100 and will take 1 hour to complete. Do I have to check every second for tasks that are at or past their completion time and trigger an event to clear them and increment the level number? Is there a better or more optimal way? This idea works by itself, but what happens if you need to run 5 or 6 different queues in the game design? Should I abstract them enough to get them all in one table?
I apologize if I seem a little vague or erratic with my questions. I am trying to figure out where to start with this concept.
I'm not very familiar with it, but I believe you could use WebSockets or Node.js to create a callback event; you could then trigger that callback from a PHP socket server. This kind of setup lets the server push the "research finished" event to the client instead of the client checking every second.
You can base yourself off this tutorial: http://www.sanwebe.com/2013/05/chat-using-websocket-php-socket
Steps
First, identify the message type using the websocket.onmessage callback, something similar to this should work:
websocket.onmessage = function(ev)
{
    var msg = JSON.parse(ev.data); // Assuming you'll encode the message components in JSON with PHP
    if ( msg.type == "research_end" )
    {
        FinishResearch(msg.content); // Assuming that the content element of the JSON message contains the ID of the research
    }
}
Secondly, make the server send the actual message. To keep this from getting too complicated or long, I'll just pretend that sendMessage($msg, $client) is a function that sends a message to a client (a minimal version is sketched below).
However, as explained in the tutorial, each client socket is stored in an array called $clients; you'll have to add some kind of identifier to each research so it's easy to know which research belongs to which client.
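If it helps, here is a minimal sketch of what sendMessage() could look like, assuming the mask() framing helper from the linked tutorial and a raw client socket taken from $clients:
function sendMessage($msg, $client)
{
    $framed = mask($msg); // mask() (from the tutorial) wraps the payload in a WebSocket frame
    return socket_write($client, $framed, strlen($framed));
}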
Now, here's an important part. On the server there will be a variable called $research which, for each peer, stores the completion timestamp and the type of every queued research:
$research['peername'][0]['time'] = time() + 3600; // completion timestamp (now + duration, e.g. 1 hour)
$research['peername'][0]['type'] = 20;
You can add the research by sending an outgoing message to the websocket server by using this:
var msg = {message: '20', type: 'research', time: '300000'}; // Create the request object (time = research duration in milliseconds)
websocket.send(JSON.stringify(msg)); // Send it to the socket server as a JSON string; decode it with json_decode() once it arrives
Then, when it gets to the server and is identified as a research request, we call a callback called doResearch which takes two arguments
//loop through all connected sockets
foreach ($changed as $changed_socket) {
    //check for any incoming data
    while(socket_recv($changed_socket, $buf, 1024, 0) >= 1)
    {
        $received_text = unmask($buf); // Unmask data
        $msg_array = json_decode($received_text, true); // Decode the JSON string we sent to the server into an associative array
        doResearch($msg_array, $changed_socket); // Let's say this function contains all the procedures to do the research
    }
}
doResearch would be similar to this:
function doResearch($msg_array, $socket)
{
    global $research;

    // socket_getpeername() returns a bool and fills $addr with the peer's address,
    // so use $addr (not the return value) as the key
    socket_getpeername($socket, $addr);

    $count = isset($research[$addr]) ? count($research[$addr]) : 0;
    $research[$addr][$count]['time']   = time() + (int) ($msg_array['time'] / 1000); // completion timestamp: now + duration (ms from the client)
    $research[$addr][$count]['type']   = $msg_array['message']; // the research id sent in the 'message' field
    $research[$addr][$count]['socket'] = $socket; // keep the socket so we can notify this client later
}
And finally, you would have to add a check like this inside the main server loop:
foreach ( $research as $peer => $items )
{
    foreach ( $items as $key => $item )
    {
        if ( time() >= $item['time'] ) // the completion time has been reached
        {
            $sql->query("INSERT INTO researches (peer, researchid) VALUES ('".$peer."', '".$item['type']."')");
            sendMessage('Research type '.$item['type'].' has finished.', $item['socket']);
            unset($research[$peer][$key]); // remove it so it isn't reported again on the next pass
        }
    }
}
Then, that would check if a research has been finished and insert it into the database.
Hope this helps.
Related
I am handling communication between two people using Twilio Proxy. When a customer replies to my Twilio Proxy reserved number, a session is created and that session handles all communication, SMS as well as calls. How can I record the calls in this scenario?
When I add a Twilio number to the Proxy pool, the "call comes in" and "message comes in" URLs are changed to the Proxy service name.
Thanks
I found a pretty good project that gives an example of that! And the best part is that it really works very well!
Basically, you only need to set up the webhook that fires when a call is made, and in that webhook you call processRecordCall :D
The method is here:
// Assumes the project's usual setup, e.g.:
//   const axios = require('axios');
//   const client = require('twilio')(accountSid, authToken);
async function processRecordCall(req){
let callSID = req.body.inboundResourceSid;
console.log(req.body.inboundResourceSid);
console.log(req.body.inboundResourceStatus);
console.log(req.body.outboundResourceStatus);
//The only Proxy callback we care about is the one that provides us context for the initiated
//outbound dial. We will ignore all other callbacks (including those with context for SMS).
//Alternatively, and depending on the use case, we could use the Proxy "Intercept" callback,
//but this has additional requirements for handling error status.
if(req.body.outboundResourceStatus == "initiated")
{
//Get all recordings for a given call.
let recordings = await client.recordings.list({callSid: callSID});
console.log(recordings.length);
//We only want to start recording if there are no existing recordings for this callSid. *This may be overkill as I haven't hit this edge case. Consider it preventive for now*
if(recordings.length == 0)
{
let recordingSet = false;
let callStatus = "";
//Self-invoking function to facilitate iterative attempts to start a recording on the given call. See example here: https://stackoverflow.com/a/3583740
(function tryRecord (i) {
setTimeout(async function () {
if(!recordingSet && i > 0)
{
console.log(`I is ${i}`);
//Fetch the call to see if still exists (this allows us to prevent unnecessary attempts to record)
//Prod code probably needs callback or async added to ensure downstream execution is as expected
let call = await client.calls(callSID).fetch();
callStatus = call.status;
//Only attempt to create the recording if the call has not yet completed
if(callStatus != "completed")
{
//Try to create the recording
try{
let result = await axios.post(`https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Calls/${callSID}/Recordings.json`, {},
{
auth: {
username: accountSid,
password: authToken
}
});
//This code assumes an HTTP 201 *Created* status and thus successful creation of the recording. Production code may need to explicitly ensure HTTP 201 (eg. handle any other possible statuses)
console.log(`statusCode: ${result.status}`);
recordingSet = true; //This isn't entirely necessary (eg. could just set i=1 to force stop iteration), but I'm making an assumption that most use cases will prefer having this for reporting purposes
} catch(err){
//TODO(?) - There may be specific errors that should be explicitly handled in a production scenario (eg. 21220)
}
}
else //The call is completed, so there's no need to loop anymore
{
console.log("Force stop iteration");
i = 1; //Set i = 1 to force stop the iteration
}
}
if (--i || !recordingSet) tryRecord(i); // decrement i and call tryRecord again if i > 0 or if the recording has not been set yet
}, 1000) //NOTE: 1 second time delay before retry (this can be whatever works best --responsibly-- for the customer)
})(15); //NOTE: 15 attempts to record by default (unless forced to stop or recording already started. This can be whatever works best --responsibly-- for the customer)
}
else
{
console.log("recording exists already");
}
}
else
{
console.log(`We ignored callback with outgoing status '${req.body.outboundResourceStatus}'`);
}
}
I'm linking the full project below
https://github.com/Cfee2000/twilio-proxy-admin-app
Information
I've started using the Asana API to build our own task overview in our CMS. I found an API wrapper on GitHub which helps me a great deal with this.
As I've mentioned in an earlier question, I wanted to get all tasks for a certain user. I've managed to do this using the code below.
public function user($id)
{
if (isset($_SERVER['HTTP_X_REQUESTED_WITH']) &&
($_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest')) {
$this->layout = 'ajax';
}
$asana = new Asana(array(
'apiKey' => 'xxxxxxxxxxxxxxxxxxxx'
));
$results = json_decode($asana->getTasksByFilter(array(
'assignee' => $id,
'workspace' => 'xxxxxxxxxx'
)));
if ($asana->responseCode != '200' || is_null($results)) {
throw new \Exception('Error while trying to connect to Asana, response code: ' . $asana->responseCode, 1);
}
$tasks = array();
foreach ($results->data as $task) {
$result = json_decode($asana->getTaskTags($task->id));
$task->tags = $result->data;
$tasks[] = $task;
}
$user = json_decode($asana->getUserInfo($id));
if ($asana->responseCode != '200' || is_null($user)) {
throw new \Exception('Error while trying to connect to Asana, response code: ' . $asana->responseCode, 1);
}
$this->render("tasks", array(
'tasks' => $tasks,
'title' => 'Tasks for '.$user->data->name
));
}
The problem
The above works fine, except for one thing. It is slower than a booting Windows Vista machine (very slow :) ). If I include the tags, it can take up to 60 seconds before I get all results. If I do not include the tags it takes about 5 seconds which is still way too long. Now, I hope I am not the first one ever to have used the Asana API and that some of you might have experienced the same problem in the past.
The API itself could definitely be faster, and we have some long-term plans around how to improve responsiveness, but in the near-to-mid-term the API is probably going to remain the same basic speed.
The trick to not spending a lot of time accessing the API is generally to reduce the number of requests you make and only request the data you need. Sometimes, API clients don't make this easy, and I'm not familiar with the PHP client specifically, but I can give an example of how this would work in general with just the plain HTTP queries.
So right now you're doing the following in pseudocode:
GET /tasks?assignee=...&workspace=...
foreach task
GET /task/.../tags
GET /users/...
So if the user has 20 tasks (and real users typically have a lot more than 20 tasks - if you only care about incomplete and tasks completed in the last, say, week, you could use ?completed_since=<DATE_ONE_WEEK_AGO>), you've made 22 requests. And because it's synchronous, you wait a few seconds for each and every one of those requests before you start the next one.
Fortunately, the API has a parameter called ?opt_fields that allows you to specify the exact data you need. For example: let's suppose that for each task, all you really want is to know the task ID, the task name, the tags it has and their names. You could then request:
GET /tasks?assignee=...&workspace=...&opt_fields=name,tags.name
(Each resource included always brings its id field)
This would allow you to get, in a single HTTP request, all the data you're after. (Well, the user lookup is still separate, but at least that's just 1 extra request instead of N). For more information on opt_fields, check out the documentation on Input/Output Options.
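For illustration, here is a rough sketch of that single request in PHP using plain cURL rather than a particular client library; the ASANA_TOKEN environment variable and the $assigneeId/$workspaceId variables are placeholders, and you should swap in whatever authentication your own setup uses:
$query = http_build_query(array(
    'assignee'   => $assigneeId,
    'workspace'  => $workspaceId,
    'opt_fields' => 'name,tags.name', // only the fields we actually need
));
$ch = curl_init('https://app.asana.com/api/1.0/tasks?' . $query);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => array('Authorization: Bearer ' . getenv('ASANA_TOKEN')),
));
$tasks = json_decode(curl_exec($ch), true);
curl_close($ch);
// Every task in $tasks['data'] now already carries its id, name and tags,
// so no per-task follow-up requests are needed.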
Hope that helps!
I've got some Minecraft software written in C# that I want to send a heartbeat to my site. I've already written the code that sends the beat:
if (Server.Uri == null) return;
string uri = "http://GemsCraft.comli.com/Heartbeat.php";
// create a request
try
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
request.Method = "POST";
// turn request string into a byte stream
byte[] postBytes = Encoding.ASCII.GetBytes(string.Format("ServerName={0}&Url={1}&Players={2}&MaxPlayers={3}&Uptime={4}",
Uri.EscapeDataString(ConfigKey.ServerName.GetString()),
Server.Uri,
Server.Players.Length,
ConfigKey.MaxPlayers.GetInt(),
DateTime.UtcNow.Subtract(Server.StartTime).TotalMinutes));
request.ContentType = "application/x-www-form-urlencoded";
request.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore);
request.ContentLength = postBytes.Length;
request.Timeout = 5000;
Stream requestStream = request.GetRequestStream();
// send it
requestStream.Write(postBytes, 0, postBytes.Length);
requestStream.Flush();
requestStream.Close();
/* try
{
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Logger.LogToConsole(new StreamReader(response.GetResponseStream()).ReadToEnd());
Logger.LogToConsole(response.StatusCode + "\n");
}
catch (Exception ex)
{
Logger.LogToConsole("" + ex);
}*/
}
catch (Exception ex)
{
    // the try block above needs a catch (or finally) to compile
    Logger.LogToConsole("Heartbeat failed: " + ex);
}
Now, I want to be able to retrieve the heartbeat in PHP, upload it to the SQL database, and then display each user's server in a table that will be displayed on the webpage
How do I do this?
portforwardpodcast's answer isn't very well-suited for your purposes. Here's a process for you to ponder:
Server accesses the following page: heartbeat.php?port=25565&maxplayers=25&players=2&name=Cheese_Pizza_Palace
Your PHP script will then do the following (there's a rough sketch of this after the lists below):
Go through each value, making sure they're all the types you want them to be (integers/strings)
Connect to the database
Update the server in the database if it already exists, create it if it doesn't
Return some value so the server knows that it completed successfully.
And to display the servers
Fetch all 'active' servers
Loop through them and display each one.
Things you'll need to figure out:
How to determine uptime
How to determine "active" servers
How to update/create MySQL entries
How to (properly) connect to a database. I would suggest using PDO since you're using PHP. It's a bit difficult to learn, but it's much more secure than writing the queries directly.
How to loop through all the GET variables.
Good hunting!
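To make those steps concrete, here's a rough sketch of what heartbeat.php could look like. The servers table and its columns are placeholders of mine, and it assumes PDO with MySQL:
if (!isset($_GET['port'], $_GET['players'], $_GET['maxplayers'], $_GET['name'])) {
    http_response_code(400); // reject incomplete heartbeats
    exit('missing fields');
}
$port       = (int) $_GET['port'];
$players    = (int) $_GET['players'];
$maxPlayers = (int) $_GET['maxplayers'];
$name       = trim($_GET['name']);

$pdo = new PDO('mysql:host=localhost;dbname=gamedb;charset=utf8mb4', 'dbuser', 'dbpass',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

// Create the server row if it's new, otherwise refresh its stats and last_seen time
$stmt = $pdo->prepare(
    'INSERT INTO servers (name, port, players, max_players, last_seen)
     VALUES (:name, :port, :players, :max_players, NOW())
     ON DUPLICATE KEY UPDATE players = VALUES(players),
                             max_players = VALUES(max_players),
                             last_seen = NOW()'
);
$stmt->execute(array(
    ':name'        => $name,
    ':port'        => $port,
    ':players'     => $players,
    ':max_players' => $maxPlayers,
));

echo 'OK'; // the game server only needs to know the heartbeat was accepted
For the display side, treating a server as "active" when its last_seen falls within the last few minutes is one simple approach.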
I would create a simple PHP page that accepts a GET variable, something like www.site.com/beat.php?lasttime=123456&serverid=1, where the number is the unix timestamp. Then you need to re-work your C# to do a simple GET request on a website. Finally, your PHP should insert into a MySQL table with columns for id, timestamp, server_id, etc.
First you need to pull the data from the request. The $_REQUEST variable in php is nice because it works for both GET and POST:
http://php.net/manual/en/reserved.variables.request.php
Start out by var_dump() or echo on the fields you want. Once you can get the needed data into variables you are done with the first part. For the next part you need to create a database and table in MySQL. The best tool for this is phpMyAdmin. If you have a host like GoDaddy (or some others) you can get at this from the control panel. If not, you may need to upload the phpMyAdmin files yourself. It's a pretty simple tool to use:
http://www.youtube.com/watch?v=xxQSFHADUIY
Once your database has the correct columns, you need to insert the data from your php file. This page should help:
http://www.w3schools.com/php/php_mysql_insert.asp
Note: This is not the same as this question which utilises MessageComponentInterface. I am using WampServerInterface instead, so this question pertains to that part specifically. I need an answer with code examples and an explanation, as I can see this being helpful to others in the future.
Attempting looped pushes for individual users
I'm using the WAMP part of Ratchet and ZeroMQ, and I currently have a working version of the push integration tutorial.
I'm attempting to perform the following:
The zeromq server is up and running, ready to log subscribers and unsubscribers
A user connects in their browser over the websocket protocol
A loop is started which sends data to the specific user who requested it
When the user disconnects, the loop for that user's data is stopped
I have points (1) and (2) working, however the issue I have is with the third one:
Firstly: How can I send data to each specific user only? Broadcast sends it to everyone, unless maybe the 'topics' end up being individual user IDs?
Secondly: I have a big security issue. If I'm sending which user ID wants to subscribe from the client-side, which it seems like I need to, then the user could just change the variable to another user's ID and their data is returned instead.
Thirdly: I'm having to run a separate php script containing the code for zeromq to start the actual looping. I'm not sure this is the best way to do this and I would rather having this working completely within the codebase as opposed to a separate php file. This is a major area I need sorted.
The following code shows what I currently have.
The server that just runs from console
I literally type php bin/push-server.php to run this. Subscriptions and un-subscriptions are output to this terminal for debugging purposes.
$loop = React\EventLoop\Factory::create();
$pusher = new Pusher; // e.g. the MyApp\Pusher class from the push integration tutorial
$context = new React\ZMQ\Context($loop);
$pull = $context->getSocket(ZMQ::SOCKET_PULL);
$pull->bind('tcp://127.0.0.1:5555');
$pull->on('message', array($pusher, 'onMessage'));
$webSock = new React\Socket\Server($loop);
$webSock->listen(8080, '0.0.0.0'); // Binding to 0.0.0.0 means remotes can connect
$webServer = new Ratchet\Server\IoServer(
new Ratchet\WebSocket\WsServer(
new Ratchet\Wamp\WampServer(
$pusher
)
),
$webSock
);
$loop->run();
The Pusher that sends out data over websockets
I've omitted the useless stuff and concentrated on the onMessage() and onSubscribe() methods.
public function onSubscribe(ConnectionInterface $conn, $topic)
{
$subject = $topic->getId();
$ip = $conn->remoteAddress;
if (!array_key_exists($subject, $this->subscribedTopics))
{
$this->subscribedTopics[$subject] = $topic;
}
$this->clients[] = $conn->resourceId;
echo sprintf("New Connection: %s" . PHP_EOL, $conn->remoteAddress);
}
public function onMessage($entry) {
$entryData = json_decode($entry, true);
var_dump($entryData);
if (!array_key_exists($entryData['topic'], $this->subscribedTopics)) {
return;
}
$topic = $this->subscribedTopics[$entryData['topic']];
// This sends out everything to multiple users, not what I want!!
// I can't send() to individual connections from here I don't think :S
$topic->broadcast($entryData);
}
The script to start using the above Pusher code in a loop
This is my issue - this is a separate php file that hopefully may be integrated into other code in the future, but currently I'm not sure how to use this properly. Do I grab the user's ID from the session? I still need to send it from client-side...
// Thought sessions might work here but they don't work for subscription
session_start();
$userId = $_SESSION['userId'];
$loop = React\EventLoop\Factory::create();
$context = new ZMQContext();
$socket = $context->getSocket(ZMQ::SOCKET_PUSH, 'my pusher');
$socket->connect("tcp://localhost:5555");
$i = 0;
$loop->addPeriodicTimer(4, function() use ($socket, $loop, $userId, &$i) {
$entryData = array(
'topic' => 'subscriptionTopicHere',
'userId' => $userId
);
$i++;
// So it doesn't go on infinitely if run from browser
if ($i >= 3)
{
$loop->stop();
}
// Send stuff to the queue
$socket->send(json_encode($entryData));
});
Finally, the client-side js to subscribe with
$(document).ready(function() {
var conn = new ab.Session(
'ws://localhost:8080'
, function() {
conn.subscribe('topicHere', function(topic, data) {
console.log(topic);
console.log(data);
});
}
, function() {
console.warn('WebSocket connection closed');
}
, {
'skipSubprotocolCheck': true
}
);
});
Conclusion
The above is working, but I really need to figure out the following:
How can I send individual messages to individual users? When they visit the page that starts the websocket connection in JS, should I also be starting the script that shoves stuff into the queue in PHP (the zeromq)? That's what I'm currently doing manually, and it just feels wrong.
When subscribing a user from JS, it can't be safe to grab the users id from the session and send that from client-side. This could be faked. Please tell me there is an easier way, and if so, how?
Note: My answer here does not include references to ZeroMQ, as I am not using it any more. However, I'm sure you will be able to figure out how to use ZeroMQ with this answer if you need to.
Use JSON
First and foremost, the Websocket RFC and WAMP Spec state that the topic to subscribe to must be a string. I'm cheating a little here, but I'm still adhering to the spec: I'm passing JSON through instead.
{
"topic": "subject here",
"userId": "1",
"token": "dsah9273bui3f92h3r83f82h3"
}
JSON is still a string, but it allows me to pass through more data in place of the "topic", and it's simple for PHP to do a json_decode() on the other end. Of course, you should validate that you actually receive JSON, but that's up to your implementation.
So what am I passing through here, and why?
Topic
The topic is the subject the user is subscribing to. You use this to decide what data you pass back to the user.
UserId
Obviously the ID of the user. You must verify that this user exists and is allowed to subscribe, using the next part:
Token
This should be a one use randomly generated token, generated in your PHP, and passed to a JavaScript variable. When I say "one use", I mean every time you reload the page (and, by extension, on every HTTP request), your JavaScript variable should have a new token in there. This token should be stored in the database against the User's ID.
Then, once a websocket request is made, you match the token and user id to those in the database to make sure the user is indeed who they say they are, and they haven't been messing around with the JS variables.
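As a rough sketch of that round-trip (the users table, its ws_token column, and the $pdo connection are assumptions of mine, not part of the original setup; random_bytes() needs PHP 7, use openssl_random_pseudo_bytes() on older versions):
// On every normal HTTP request: generate a fresh one-use token and store it against the user
$token = bin2hex(random_bytes(32));
$stmt  = $pdo->prepare('UPDATE users SET ws_token = :token WHERE id = :id');
$stmt->execute(array(':token' => $token, ':id' => $userId));
// ...then print $token into the page so the JavaScript can include it in the JSON "topic".

// Later, inside onSubscribe(), after json_decode()ing the topic:
$stmt = $pdo->prepare('SELECT 1 FROM users WHERE id = :id AND ws_token = :token');
$stmt->execute(array(':id' => $data->userId, ':token' => $data->token));
if (!$stmt->fetchColumn()) {
    $conn->close(); // user id and token don't match what we stored: reject the subscription
    return;
}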
Note: In your event handler, you can use $conn->remoteAddress to get the IP of the connection, so if someone is trying to connect maliciously, you can block them (log them or something).
Why does this work?
It works because every time a new connection comes through, the unique token ensures that no user will have access to anyone else's subscription data.
The Server
Here's what I am using for running the loop and event handler. I am creating the loop, doing all the decorator style object creation, and passing in my EventHandler (which I'll come to soon) with the loop in there too.
$loop = Factory::create();
$webSock = new React\Socket\Server($loop);
$webSock->listen(8080, '0.0.0.0');
new IoServer(
new WsServer(
new WampServer(
new EventHandler($loop) // This is my class. Pass in the loop!
)
),
$webSock
);
$loop->run();
The Event Handler
class EventHandler implements WampServerInterface, MessageComponentInterface
{
/**
* @var \React\EventLoop\LoopInterface
*/
private $loop;
/**
* @var array List of connected clients
*/
private $clients;
/**
* Pass in the react event loop here
*/
public function __construct(LoopInterface $loop)
{
$this->loop = $loop;
}
/**
* A user connects, we store the connection by the unique resource id
*/
public function onOpen(ConnectionInterface $conn)
{
$this->clients[$conn->resourceId]['conn'] = $conn;
}
/**
* A user subscribes. The JSON is in $subscription->getId()
*/
public function onSubscribe(ConnectionInterface $conn, $subscription)
{
// This is the JSON passed in from your JavaScript
// Obviously you need to validate it's JSON and expected data etc...
$data = json_decode($subscription->getId());
// Validate the users id and token together against the db values
// Now, let's subscribe this user only
// 5 = the interval, in seconds
$timer = $this->loop->addPeriodicTimer(5, function() use ($subscription) {
$data = "whatever data you want to broadcast";
return $subscription->broadcast(json_encode($data));
});
// Store the timer against that user's connection resource Id
$this->clients[$conn->resourceId]['timer'] = $timer;
}
public function onClose(ConnectionInterface $conn)
{
// There might be a connection without a timer
// So make sure there is one before trying to cancel it!
if (isset($this->clients[$conn->resourceId]['timer']))
{
if ($this->clients[$conn->resourceId]['timer'] instanceof TimerInterface)
{
$this->loop->cancelTimer($this->clients[$conn->resourceId]['timer']);
}
}
unset($this->clients[$conn->resourceId]);
}
/** Implement all the extra methods the interfaces say that you must use **/
}
That's basically it. The main points here are:
Unique token, userid and connection id provide the unique combination required to ensure that one user can't see another user's data.
Unique token means that if the same user opens another page and requests to subscribe, they'll have their own connection id + token combo, so the same user won't have double the subscriptions on the same page (basically, each connection has its own individual data).
Extension
You should be ensuring all data is validated and not a hack attempt before you do anything with it. Log all connection attempts using something like Monolog, and set up e-mail forwarding if anything critical occurs (like the server stops working because someone is attempting to hack your server).
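As a sketch of that logging setup with Monolog (the handler choices, file path and addresses here are just examples):
require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\NativeMailerHandler;

$log = new Logger('websocket');
// Everything from INFO upwards goes to a log file...
$log->pushHandler(new StreamHandler(__DIR__ . '/websocket.log', Logger::INFO));
// ...and anything CRITICAL or above also gets e-mailed out
$log->pushHandler(new NativeMailerHandler('you@example.com', 'WebSocket critical', 'server@example.com', Logger::CRITICAL));

// e.g. inside onOpen() / onSubscribe():
// $log->info('Connection opened', array('resourceId' => $conn->resourceId, 'ip' => $conn->remoteAddress));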
Closing Points
Validate Everything. I can't stress this enough. Your unique token that changes on every request is important.
Remember, if you re-generate the token on every HTTP request, and you make a POST request before attempting to connect via websockets, you'll have to pass back the re-generated token to your JavaScript before trying to connect (otherwise your token will be invalid).
Log everything. Keep a record of everyone that connects, asks for what topic, and disconnects. Monolog is great for this.
To send to specific users, you need a ROUTER-DEALER pattern instead of PUB-SUB. This is explained in the Guide, in chapter 3. Security, if you're using ZMQ v4.0, is handled at the wire level, so you don't see it in the application. It still requires some work, unless you use the CZMQ binding, which provides an authentication framework (zauth).
Basically, to authenticate, you install a handler on inproc://zeromq.zap.01, and respond to requests over that socket. Google ZeroMQ ZAP for the RFC; there is also a test case in the core libzmq/tests/test_security_curve.cpp program.
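As a very rough php-zmq sketch of the ROUTER-DEALER idea (the identity and port are arbitrary, and in practice the DEALER would live in a separate client process):
$context = new ZMQContext();

// Server side: a ROUTER can address an individual peer by its identity frame
$router = $context->getSocket(ZMQ::SOCKET_ROUTER);
$router->bind('tcp://127.0.0.1:5556');

// Client side: a DEALER with an explicit identity
$dealer = $context->getSocket(ZMQ::SOCKET_DEALER);
$dealer->setSockOpt(ZMQ::SOCKOPT_IDENTITY, 'user-42');
$dealer->connect('tcp://127.0.0.1:5556');
$dealer->send('hello');

// The ROUTER receives [identity, payload] and can reply to just that identity
$identity = $router->recv();
$payload  = $router->recv();
$router->send($identity, ZMQ::MODE_SNDMORE);
$router->send('private reply for ' . $identity);

echo $dealer->recv(), PHP_EOL; // only this DEALER sees the reply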
I'm looking into doing some long polling with jQuery and PHP for a message system. I'm curious to know the best/most efficient way to achieve this. I'm basing it off this Simple Long Polling Example.
If a user is sitting on the inbox page, I want to pull in any new messages. One idea that I've seen is adding a last_checked column to the message table. The PHP script would look something like this:
query to check for all null `last_checked` messages
if there are any...
while(...) {
add data to array
update `last_checked` column to current time
}
send data back
I like this idea but I'm wondering what others think of it. Is this an ideal way to approach this? Any information will be helpful!
To add, there is no set number of users that could be on the site, so I'm looking for an efficient way to do it.
Yes, the way you describe it is generally how the long polling method works.
Your sample code is a little vague, so I would like to add that you should sleep for a small amount of time (e.g. usleep()) inside the while loop, and each time compare the last_checked time (which is stored on the server side) with the current time (which is what is sent from the client's side).
Something like this:
$current = isset($_GET['timestamp']) ? $_GET['timestamp'] : 0;
$last_checked = getLastCheckedTime(); //returns the last time db accessed
while( $last_checked <= $current) {
usleep(100000);
$last_checked = getLastCheckedTime();
}
$response = array();
$response['latestData'] = getLatestData(); //fetches all the data you want based on time
$response['timestamp'] = $last_checked;
echo json_encode($response);
And at your client's side JS you would have this:
function longPolling(){
    $.ajax({
        type : 'GET',
        url : 'data.php?timestamp=' + timestamp,
        async : true,
        cache : false,
        success : function(data) {
            var jsonData = JSON.parse(data); // safer than eval()'ing the response
            //do something with the data, e.g. display it
            timestamp = jsonData['timestamp'];
            setTimeout(longPolling, 1000);
        },
        error : function(XMLHttpRequest, textstatus, error) {
            alert(error);
            setTimeout(longPolling, 15000);
        }
    });
}
Instead of adding a new column called last_checked, you could add one called last_checked_time. That way you can fetch the data from last_checked_time up to the current time,
i.e. DATA BETWEEN `last_checked_time` AND `current_time`
If you only have one user, that's fine. If you don't, you'll run into complications. You'll also run one hell of a lot of SELECT queries by doing this.
I've been firmly convinced for a while that PHP and long polling just do not work natively due to PHP not having any cross-client event-driven possibilities. This means you'll need to check your database every second/2s/5s instead of relying on events.
If you still want to do this, however, I would make your messaging system write a file [nameofuser].txt in a directory whenever the user has a message, and check for message existence using this trigger. If the file exists and is not empty, fire off the request to get the message, process, feed back and then delete the text file. This will reduce your SQL overhead, while (if you're not careful) increasing your disk IO.
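A rough sketch of that file trigger (the fetchMessagesFor() helper and the triggers/ directory are placeholders of mine):
$flagFile = __DIR__ . '/triggers/' . basename($username) . '.txt';

while (true) {
    clearstatcache(true, $flagFile);
    if (is_file($flagFile) && filesize($flagFile) > 0) {
        $messages = fetchMessagesFor($username); // only hit MySQL once the flag file appears
        unlink($flagFile);                       // reset the trigger
        echo json_encode($messages);
        exit;
    }
    usleep(500000); // re-check every half second without touching the database
}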
Structure-wise, an associative table is by far the best. Make a new table dedicated to tracking read status, with three columns: user_id, message_id, read_at. The usage should be obvious. Any combination not in there is unread.
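For example, the read-status table could be created like this (sketch only; the message_reads name and the $pdo connection are assumptions of mine):
$pdo->exec(
    'CREATE TABLE IF NOT EXISTS message_reads (
        user_id    INT UNSIGNED NOT NULL,
        message_id INT UNSIGNED NOT NULL,
        read_at    DATETIME     NOT NULL,
        PRIMARY KEY (user_id, message_id)
    )'
);
// Any (user_id, message_id) combination missing from this table counts as unread.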
Instead of creating a column named last_checked, you could create a column called checked.
If you save all messages in the database, you could update the field in the database. Example:
User 1 sends User 2 a message.
PHP receives the message using the long-polling system and saves the message in a table.
User 2, when online, would send a signal to the server, notifying it that they are ready to receive messages.
The server checks the table for all messages that are not 'checked' and returns them.
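A minimal sketch of that last step, assuming a messages table with recipient_id and checked columns (names of mine): fetch everything unchecked for the user, mark it as checked, and return it.
$stmt = $pdo->prepare('SELECT id, sender_id, body FROM messages
                       WHERE recipient_id = :uid AND checked = 0');
$stmt->execute(array(':uid' => $userId));
$messages = $stmt->fetchAll(PDO::FETCH_ASSOC);

if ($messages) {
    $placeholders = implode(',', array_fill(0, count($messages), '?'));
    $pdo->prepare("UPDATE messages SET checked = 1 WHERE id IN ($placeholders)")
        ->execute(array_column($messages, 'id'));
}

echo json_encode($messages);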