Single PHP thread to respond to two asynchronous ajax requests

I am implementing a synchronization process between two devices, web and mobile. I have planned an algorithm but I am not sure whether it can be implemented. My algorithm is:
Mobile generates an ajax request to a PHP script to sync its data into the database.
The PHP script starts a transaction, inserts the data the mobile sent into the database, collects the data it has to send back to the mobile, and sends the response, but does not commit.
Mobile receives the response, starts a transaction, sends another ajax request as an acknowledgement to the PHP script, and commits on the mobile side.
The PHP script receives the acknowledgement request and commits the data it prepared in step 2 as well.
My question is: is it possible to use a single PHP thread to handle two asynchronous ajax requests?

I will leave the PHP specifics to the experts.
We had a similar scenario, where we would send the data to the server and insert it with a flag value (e.g., incomplete), then acknowledge back to the device saying that xyz rows were inserted. The acknowledgement also contains a unique identifier per device so that we can keep track of which device sent what data, and we maintain the device's unique identifier, a unique transaction id (uuid) and a timestamp in a separate table (transaction_status). The acknowledgement is then processed on the client, which basically removes those transactions from the sync queue maintained on the client.
When the next set of records from the same client reaches the server, the server queries for matching entries in the transaction_status table. If the entries are already there, it means the acknowledgement did not reach the client, and the server responds with the necessary data in that case.
If the next set of records does not contain the entries sent during the previous sync, that sync is considered successful and the incomplete flag is updated.
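A rough PHP/PDO sketch of that flow, just to make the idea concrete; the table and column names (sync_data, transaction_status, the complete flag, etc.) are invented for illustration and not our actual schema:

<?php
// Hedged sketch of the flag + transaction_status approach described above.
// All table/column names are placeholders.
function handleSync(PDO $pdo, $deviceId, $transactionId, array $rows)
{
    // If this transaction id was already recorded, the previous acknowledgement
    // never reached the client: resend the confirmation instead of re-inserting.
    $check = $pdo->prepare(
        'SELECT 1 FROM transaction_status WHERE device_id = ? AND transaction_id = ?'
    );
    $check->execute(array($deviceId, $transactionId));
    if ($check->fetchColumn()) {
        return array('status' => 'already_received', 'rows' => count($rows));
    }

    $pdo->beginTransaction();
    $insert = $pdo->prepare(
        'INSERT INTO sync_data (device_id, payload, complete) VALUES (?, ?, 0)'
    );
    foreach ($rows as $row) {
        $insert->execute(array($deviceId, json_encode($row)));
    }
    $log = $pdo->prepare(
        'INSERT INTO transaction_status (device_id, transaction_id, created_at) VALUES (?, ?, NOW())'
    );
    $log->execute(array($deviceId, $transactionId));
    $pdo->commit();

    return array('status' => 'ok', 'rows' => count($rows));
}

When the next batch from the same device arrives without that transaction id, the previously staged rows can be marked complete in the same way.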
I understand that this might be vague, but I will try to create a diagram in my free time to clearly depict this scenario.
Please leave a comment if you need more clarification.
Anyone having a better approach, please feel free to share.

I finally ended up using websockets. To implement websockets in PHP I used Ratchet, and I was able to get what I was trying to implement.
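For anyone curious what that looks like, here is a minimal Ratchet-style server sketch; the SyncServer class, its echo behaviour and the port are placeholders, not my actual sync code:

<?php
// composer require cboden/ratchet
require 'vendor/autoload.php';

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

// Placeholder component: echoes every message back as an acknowledgement,
// so both sync steps can happen over one persistent connection.
class SyncServer implements MessageComponentInterface
{
    public function onOpen(ConnectionInterface $conn) {}

    public function onMessage(ConnectionInterface $from, $msg)
    {
        $from->send('ack: ' . $msg);
    }

    public function onClose(ConnectionInterface $conn) {}

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}

IoServer::factory(new HttpServer(new WsServer(new SyncServer())), 8080)->run();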

Related

MySQL callback to the web-app

I'm trying to figure out the best way to notify my web application of database changes. The way I'm doing it now is that I've got a separate MySQL table with a counter.
Now, when some database action happens in table Foo, a trigger increments the counter.
The web app then polls the server every 5 seconds to check whether something has happened (the counter value has changed) and, if so, refreshes the data in the app.
What I would like is to be able to do a callback/notification from MySQL to the server, and from there to the web app, so that I don't need to poll the server frequently. Is this possible somehow?
How do Facebook and Gmail send real-time notifications?
You can't notify your application directly from MySQL, but there are some solutions to save bandwidth and load on your server.
One way of handling this would be to either implement the observer pattern yourself or simply use a pub/sub messaging option (ZMQ/AMQ/RabbitMQ/Redis etc.). When the initial database action takes place (ensure that the transaction has committed), publish a message to a topic on the pub/sub tool; your application can subscribe to the pub/sub tool and receive messages when there is a DB change.
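As a rough illustration with Redis and the phpredis extension (the channel name and payload are made up), the publishing side could look something like this:

<?php
// Publish a change notification after the DB transaction has committed (see above).
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->publish('table_foo_changes', json_encode(array('table' => 'Foo', 'action' => 'update')));

// A separate long-running worker process (not the web request) would consume the
// messages and forward them to connected browsers (websocket, SSE, ...):
$redis->subscribe(array('table_foo_changes'), function ($redis, $channel, $message) {
    // push $message out to the web app here
});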
Hope it helps.

notification and messaging system in javascript and php (without having to install additional software server-side)

I'm running a social network with a messaging and notification feature. Each time a user sends a message or places a notification for another user, a row is inserted into a news_updates table with the details about the message or notification, and rows for all of his friends are inserted into the news_seen table. Once the message is read, or the item related to the notification is opened, seen is set to 1. I'm doing this at the end of the callback function of my ajax request: I gather the newsitem_ids of all news items that are currently open and then do one big insert with all those newsitem_ids in it.
news_seen:
newsitem_id bigint,
user_id bigint,
seen int DEFAULT '0'
At the moment, I'm running an ajax request every 3 seconds to check news_updates JOIN news_seen for news.
This turns out to be a huge server load now that I'm getting more and more users. I've been reading a lot about XMPP and the like, and I think a push notification service would be great for my site.
The only thing is, I can't really decide which way to go, since there are so many options.
I also thought about creating my own system. I'm planning to do it like this:
create an XML file for each user on initial registration (and run a batch for the already registered users)
once a user sends out a news update (I have my own PHP function for writing them into the DB), I include a small command to manipulate the XML file for the respective friends
instead of doing my 3-second ajax request, I'd establish a long connection to the XML file using jQuery Stream and, in case changes were made since the last request, do my usual ajax request that polls the data from the DB
instead of running my check_seen inside the ajax request, I'd insert all the new items into a global array, to be used by an interval function that tests whether any item in the list is currently being viewed
Do you think this is a good idea?
To be honest, I do not think I would implement your specification.
For example, I would use a lighter data model than XML; I would use JSON instead.
I would avoid touching the disk (the database) as much as possible, because it is slow.
Doing two requests instead of one (long-polling, then polling) is something I would try to avoid.
I would also try not to waste CPU time on interval functions, and only call functions when needed. I would probably use something like Redis's pub/sub for this.
Polling / long-polling is (most of the time) a bad idea in PHP because of blocking IO. I found an interesting project named React which I believe does non-blocking IO. I have not tested its performance myself, but this could be an option.
For XMPP you will have to install additional software. I, for instance, liked Prosody for its easy installation and usage. Then you would need to have a look at BOSH. For your BOSH client I would use strophe.js or JSJaC.
I would probably use something like socket.io, Faye or maybe vertx.io instead, because it would scale a lot better.

PHP infinite loop or jQuery setInterval?

Js:
<script>
var nerr = false; // error flag from the original snippet

function cometConnect() {
    $.ajax({
        cache: false,
        type: "post",
        data: 'ts=' + 1,
        url: 'Controller/chatting',
        async: true,
        success: function (arr1) {
            $(".page-header").append(arr1);
        },
        complete: function () {
            cometConnect(); // re-open the long-poll request as soon as the previous one finishes
            nerr = false;
        },
        dataType: "text"
    });
}
cometConnect();
</script>
Php:
public function chatting()
{
    // long-poll loop; assumes the Memcache extension and a local memcached instance
    $memcache = new Memcache();
    $memcache->connect('127.0.0.1', 11211);
    $start = time();
    while (true) {
        $message = $memcache->get('new_message');
        if ($message !== false) {
            return $message;
        }
        if (time() - $start > 25) {
            return ''; // give up before the gateway times out (see the 502 note below)
        }
        usleep(500000); // sleep() only takes whole seconds, so use usleep() for 0.5 s
    }
}
Is this a better solution than using setInterval to call a PHP method (which returns a message if there is any) every 1 second (with the 1 sec increasing by 0.25 every 5 seconds, let's say)?
If I used the first solution, I could probably sleep for 0.5 seconds and it would give me messages almost instantly, because a PHP loop is cheap, isn't it?
So, which solution is better (and, more importantly, which takes fewer resources)? There are going to be hundreds of chats like this.
Also, can the first solution cause problems? Let's say I reload the page, or I stop execution every 30 seconds so I don't get a 502 Bad Gateway.
EDIT: I believe the second solution is better, so I am going to reimplement my site, but I am just curious whether this can cause problems for the user. Can something unexpected happen?
The first problem I noticed is that you can't go to another page until there is at least one new message.
A chat is a one-to-many communication, where each one of the many can send messages and will receive messages from everybody else.
These two actions (sending, receiving) happen continuously. So this looks like an endless loop which the user can enter (join the chat) and exit (leave the chat).
enter
send message
receive message
exit
So the loop looks like this (pseudo-code) on the client side:
while (userInChat)
{
    if (userEnteredMessages)
    {
        userSendMessages(userEnteredMessages)
    }
    if (chatNewMessages)
    {
        displayMessages(chatNewMessages)
    }
}
As you already note in your question, the problem lies in implementing such a chat for a website.
To implement such a "loop" for a website, you are first of all facing the situation that you don't want an actual loop here. As long as the user is in the chat, it would run and run and run. So you want to distribute the execution of the loop over time.
To do this, you can convert it into a collection of event functions:
ChatClient
{
    function onEnter()
    {
    }

    function onUserInput(messages)
    {
        sendMessages = send(messages)
        display(sendMessages)
    }

    function onReceive(messages)
    {
        display(messages)
    }

    function onExit()
    {
    }
}
It's now possible to trigger events instead of having a loop. The only thing left is the implementation that triggers these events over time, but for the moment this is not really interesting, because it depends on how the chat data exchange is actually implemented.
There is always a remote point the chat client is (somehow) connected to, to send its own messages to and to receive new messages from.
This is some sort of stream of chat messages. Again this looks like a loop, but in fact it's a stream. Like in the chat client's loop, at some point in time it hooks onto the stream and will send input (write) and receive output (read) from that stream.
This is already visible in the ChatClient pseudo-code above: there is an event when the user inputs one or multiple messages, which are then sent (written). And read messages will be available in the onReceive event function.
As the stream is data in order, there needs to be order. As this is all event based and multiple clients are involved, this needs some dedicated handling. As order is relative, it only works in its context. The context could be the time (one message came before another message), but if the chat client has a different clock than the server or another client, we can't use the existing clock as the time source for the order of messages, because clocks normally differ between computers in a WAN.
Instead, you create your own "time" to line up all messages. With a shared time across all clients and servers, an ordered stream can be implemented. This can be done easily by just numbering the messages in a central place. Luckily your chat has a central place: the server.
The message stream starts with the first message and ends with the last one. So what you simply do is give the first message the number 1, and then each new message gets the next higher number. Let's call it the message ID.
So, still regardless of which server technology you'll be using, the chat knows two types of messages: messages with an ID and messages without an ID. This also represents the status of a message: either not part of the stream, or part of it.
Messages not yet associated with the stream are those that the user has already entered but which have not been sent to the server yet. When the server receives these "free" messages, it can put them into the stream by assigning the ID:
function onUserInput(messages)
{
    sendMessages = send(messages)
    display(sendMessages)
}
As this pseudo-code example shows, this is what happens here. The onUserInput event gets messages that are not part of the stream yet. The send routine returns their streamed representation (sendMessages), which is then displayed.
The display routine is then able to display messages in their stream order.
So, still regardless of how the client/server communication is implemented, with such a structure you can roughly handle a message-based chat system and decouple it from the underlying technologies.
The only thing the server needs to do is take the messages, give each message an ID and return those IDs. The assignment of the ID is normally done when the server stores the messages in its database. A good database takes care of numbering the messages properly, so there is not much to do.
The other interaction is to read new messages from the server. To do this efficiently over the network, the client tells the server from which message on it would like to read. The server will then pass the messages since that point (ID) to the client.
As this shows, the "endless" loop from the beginning has now turned into an event-based system with remote calls. As remote calls are expensive, it is better to let them transfer a lot of data within one connection. Part of that is already in the pseudo-code, as it's possible to send one or multiple messages to the server and to receive zero or more messages from the server at once.
The ideal implementation would be one connection to the server that allows reading and writing messages in full duplex. However, no such technology exists yet in javascript. These things are under development with Websockets, Webstream APIs and the like, but for the moment let's keep things simple and look at what we have: stateless HTTP requests, some PHP on the server and a MySQL database.
The message stream can be represented in a database table that has an auto-incrementing unique key for the ID and other fields to store the message.
The write transaction script just connects to the database, inserts the message(s) and returns the IDs. That's a very common operation and it should be fast (MySQL has a sort of memcache bridge which should make the store operation even faster and more convenient).
The read transaction script is equally simple: it just reads all messages with an ID higher than the one passed to it and returns them to the client.
Keep these scripts as simple as possible and optimize the read/write time to the store, so they execute fast, and you're done, even with chatting over plain HTTP. A rough sketch of both scripts follows.
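Here is a sketch of what the table and the two scripts could look like with PDO; table and column names such as chat_messages are placeholders, not a prescribed schema:

<?php
// The message stream as a table with an auto_increment ID:
//
// CREATE TABLE chat_messages (
//     id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
//     user_id BIGINT UNSIGNED NOT NULL,
//     body    TEXT NOT NULL,
//     created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
// );

// Write script: store the posted messages and return the IDs the stream assigned to them.
function writeMessages(PDO $pdo, $userId, array $messages)
{
    $ids  = array();
    $stmt = $pdo->prepare('INSERT INTO chat_messages (user_id, body) VALUES (?, ?)');
    foreach ($messages as $body) {
        $stmt->execute(array($userId, $body));
        $ids[] = (int) $pdo->lastInsertId(); // the auto_increment value is the message ID
    }
    return $ids;
}

// Read script: return every message with an ID higher than the one the client already has.
function readMessagesSince(PDO $pdo, $lastId)
{
    $stmt = $pdo->prepare('SELECT id, user_id, body FROM chat_messages WHERE id > ? ORDER BY id');
    $stmt->execute(array((int) $lastId));
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}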
Still, your webserver and the overall internet connection might not be fast enough (although there is keep-alive).
However, HTTP should be good enough for the moment to test whether your chat system actually works without any loops, neither client- nor server-side.
It's also good to keep the servers dead simple, because each client relies on them, so they should just do their work and that's it.
You can at any time change the server (or offer different types of servers) that your chat client interacts with, by giving the chat client different implementations of the send and receive functions. E.g. I see in your question that you're using comet; this should work as well, and it's probably easy to implement the server directly for comet.
If in the future websockets become more accessible (which might never be the case because of security considerations), you can offer another type of server for websockets as well. As long as the data structure of the stream is intact, this will work with different types of servers next to each other. The database will take care of the consistency.
Hope this is helpful.
Just as an additional note: HTML5 offers something called Stream Updates with Server-Sent Events, with an online demo and PHP/JS sources. The HTML5 feature already provides an event object in javascript which could be used to build an exemplary chat client transport implementation.
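On the server side, a Server-Sent Events endpoint can be plain PHP; this is only a minimal sketch (the placeholder data stands in for a real query such as the read script above):

<?php
// Minimal Server-Sent Events endpoint sketch.
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

// The browser resends the last event id it saw, so we only push newer messages.
$lastId = isset($_SERVER['HTTP_LAST_EVENT_ID']) ? (int) $_SERVER['HTTP_LAST_EVENT_ID'] : 0;

// Placeholder data; a real endpoint would fetch new messages with an ID above $lastId.
$messages = array(array('id' => $lastId + 1, 'body' => 'hello'));

foreach ($messages as $message) {
    echo 'id: ' . $message['id'] . "\n";
    echo 'data: ' . json_encode($message) . "\n\n";
}
flush();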
I wrote a blog post about how I had to handle a similar problem (using node.js, but the principles apply).
http://j-query.blogspot.com/2011/11/strategies-for-scaling-real-time-web.html
My suggestion is, if it's going to be big, either a) you need to cache like crazy on your web server layer, which probably means your AJAX call needs to have a timestamp on it, or b) use something like socket.io, which is built for scaling real-time web apps and has built-in support for channels.
Infinite loops in PHP can and will use 100% of your CPU; sleep functions will fix that problem. However, you probably don't want to have a separate HTTP process running all the time for every client connected to your server, because you'll run out of connections. You could instead have one PHP process that looks at all inbound messages and routes them to the right person as they come in. This process could be launched from a cron job once a minute. I've written this type of thing many times and it works like a charm. Note: make sure you don't run the process if it's already running, or you will run into multiprocessing problems (like getting double messages). In other words, you need to make sure only one instance of the process runs at a time.
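One simple way to enforce that single-instance rule is an exclusive lock file at the top of the cron script; a minimal sketch (the path and the routing body are placeholders):

<?php
// Guard against overlapping cron runs with an exclusive, non-blocking lock.
$lock = fopen('/tmp/chat_router.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit; // a previous run is still busy, let it finish
}

// ... read inbound messages and route them to the right recipients here ...

flock($lock, LOCK_UN);
fclose($lock);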
If you want to get real time chatting, then you might want to take a look at StreamHub which opens a full duplex connection to the client's browser.
It's not a PHP or jQuery task now. Node.js!
There is socket.io, which means WebSockets.
I'll explain why node.js is better. I had a task to refresh on-page markers every 10 seconds, for example. I did it with the first method. When the persistent user count reached 200, the HTTP server and PHP were in trouble; there were a lot of unnecessary requests.
What Node.js gives you:
Creating separate rooms for chats (here)
Sending data only to those who have updates (for example, if I do not have any new messages, my refresh is held back until there is something to select from the database)
You run 1 query to the DB per 0.5 seconds, no matter how many users there are
Just look into Node.js and Socket.io. This solution gave me a great boost.
First off, ask yourself if it's necessary to update the chat frequently. What type of chats will be happening? Is it real-time? Simple Q&A? Tech support? Etc. In all but the real-time chat cases, you will be better off using a long polling JS-based design, because instantaneous responses are not that important. If this is for real-time chats, then you should consider a Gmail-like design whereby you keep an XHR open and push messages back to the client as they are received. If connection resources are a concern, you can get by using long polling with a very brief interval (ex. 5-10 seconds).

Transaction Processing Using PHP and MySQL

I'm trying to implement two-phase commit using PHP and MySQL, and coming up short.
The main block I've found is that I'm unable to store the MySQL connection resource in a place where I can find it again for the second phase. Is it possible to serialize a database handle?
Here's the case I'm trying to code for:
User sends data
Server starts a MySQL transaction and executes some queries based on the data it has received.
Server sends a file back to the user
When the user has successfully received the file, the server commits its transaction. Otherwise it rolls it back.
This seems to require two HTTP Request/Response cycles, so I need to be able to re-connect to the same database handle in the second request in order to commit the transaction. I've been failing at this part.
Any advice is welcome, even if it's "this is not possible in PHP"
Take a look at the LIXA Transaction Manager (http://lixa.sourceforge.net/); it integrates PHP and MySQL starting with release 0.9.0.
It provides Distributed Transaction Processing and the two-phase commit feature as well.
Regards
Ch. F.
Since PHP is request/response based, a persistent DB connection across requests is not possible, AFAIK.
You could try to work around this limitation using a sort of ticketing mechanism. Your steps would be:
User sends data
Server starts a MySQL transaction and executes some queries based on the data it has received, assigning a 'unique' ticket to that transaction.
Server sends the file and the ticket back to the user
When the user has successfully received the file and sent another request containing that ticket, the server commits its transaction. Otherwise it rolls it back.
Referring to Cassy's comment: after a certain period of time, all uncommitted transactions should be rolled back in order to prevent your DB from being 'flooded' with old transactions.
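Since the MySQL transaction itself cannot survive between two PHP requests (as noted above), one way to read this ticket idea is to stage the rows with a pending status and only finalize them when the acknowledgement arrives, with the cleanup job discarding stale tickets. A rough sketch under that assumption; all table and column names are invented:

<?php
// Request 1: store the data as pending and hand the ticket back together with the file.
function stageUpload(PDO $pdo, array $rows)
{
    $ticket = bin2hex(openssl_random_pseudo_bytes(16));
    $stmt = $pdo->prepare(
        "INSERT INTO staged_rows (ticket, payload, status, created_at) VALUES (?, ?, 'pending', NOW())"
    );
    foreach ($rows as $row) {
        $stmt->execute(array($ticket, json_encode($row)));
    }
    return $ticket;
}

// Request 2: the client acknowledges receipt of the file, so the staged rows become final.
function acknowledge(PDO $pdo, $ticket)
{
    $stmt = $pdo->prepare(
        "UPDATE staged_rows SET status = 'committed' WHERE ticket = ? AND status = 'pending'"
    );
    $stmt->execute(array($ticket));
}

// Cleanup job (cron): "roll back" tickets that were never acknowledged.
function expireStaleTickets(PDO $pdo)
{
    $pdo->exec(
        "DELETE FROM staged_rows WHERE status = 'pending' AND created_at < NOW() - INTERVAL 1 HOUR"
    );
}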
HTH
To answer KB22 and rojoca: the reason I need to do it this way is that the 'file' I'm referring to is actually a sqlite database that ends up as a data store on a mobile device.
The first request posts the updated sqlite database to the server, which attempts to merge in data from the sqlite tables. Problems arise when the mobile device doesn't successfully receive a new sqlite database (one which reflects the mobile device's changes and any other new stuff from the web application), because it will then attempt to send the same (old) sqlite database to the web a second time, resulting in duplicate entries in the web tables for anything that was created on the mobile device.
So the web needs to be sure that the device has the new database before committing the merge changes. Given the vagaries of networks, this only seems feasible if the device can send an explicit ACK after receiving the new sqlite database. And this is only possible if we make two requests (1: the sqlite database to merge; 2: the ACK of receipt of the new sqlite database on the device).
A thorny problem indeed, and it's useful to find out that PHP can't manipulate database handles at the necessary level.
[I also don't think I can use a transaction table, because I need to return data to the device based on the 'real' web database tables. I think I'd run into issues with auto_increment fields if I didn't use the real tables.]
Thanks for all your comments.

How to implement server push for more values at a time?

What is the best way to implement server push for more than one thing?
Let's say I just want to update a user's status: I can periodically poll the server for the status, say every 1000 ms, and update the page.
The other way I found is that the server waits for something like 30 seconds while it checks whether there was any change, and if one is found, the server pushes the response back to the client, which updates the page and then makes another poll.
But how would I implement this to check something like 10 things on a website? For example, if I want Stack Overflow to refresh question votes when someone votes, the only way I can think of doing this is:
ask the server for the votes of each question -> the server replies with the votes of every question on the page
But how can I find out which question's votes changed? I could send all the current votes and then let the server compare the values and reply only with those that changed, but I think it would be very inefficient to do this while checking something like 30 values.
One example of all of this would be Facebook, where almost everything is refreshed by server push, but how can the server find out what changed and what didn't?
Everything that I found (including my book "Ajax Patterns") explains only how to poll for one value; I didn't find anything about how to poll for many values at a time (like more than 10).
If you are prepared to use sessions, the easiest way would be to have a serial request ID and let the server keep track of the latest information sent to the client in the session. Then the client asks whether there is newer data than at the last time it refreshed.
E.g.: let's say the client sends ajax request #5001 after nothing has changed; the server might reply 'false'. Then someone posts a message, or there is a change of some kind, so in request #5002 the server sends a list of changed elements (whatever these are). Then in request #5003 it would reply false again, because nothing has changed since request #5002.
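A minimal PHP sketch of that session-based check, assuming a chat_messages table with an auto_increment id (the names and connection details are made up):

<?php
// Reply 'false' when nothing changed since the client's last request,
// otherwise send the changes and remember how far we got in the session.
session_start();
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$lastSentId = isset($_SESSION['last_sent_id']) ? (int) $_SESSION['last_sent_id'] : 0;

$stmt = $pdo->prepare('SELECT id, body FROM chat_messages WHERE id > ? ORDER BY id');
$stmt->execute(array($lastSentId));
$changes = $stmt->fetchAll(PDO::FETCH_ASSOC);

if ($changes) {
    $last = end($changes);
    $_SESSION['last_sent_id'] = (int) $last['id'];
    echo json_encode($changes);
} else {
    echo json_encode(false);
}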
A JSON client/server architecture would be perfect for this. It allows easy serialisation of object hierarchies/maps. I prefer jQuery for the client in javascript, and the server side is trivial to spit out.
I'd suggest tagging your data with revision numbers on the server side, so the client knows which revision of the data it has. Create a composite query whereby the client can send a set of revisions, and the server responds to that list with any updated revisions it may have for them. That way, the client is only making one server query to see if there are any updates; you're just batching together all the data the client is interested in into one query. This method also has the strength of allowing the client to change the set of data it's interested in; if your server-side implementation is flexible enough, you can use the same implementation for all your dynamic data needs.
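A sketch of what that composite revision check could look like on the server; the items table, its revision column and the JSON request shape are all illustrative assumptions:

<?php
// The client posts a map of item id => revision it already has, e.g. {"42": 7, "43": 12}.
// The server replies only with the items whose stored revision is newer.
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$clientRevisions = json_decode(file_get_contents('php://input'), true);

$updates = array();
$stmt = $pdo->prepare('SELECT id, revision, payload FROM items WHERE id = ?');
foreach ((array) $clientRevisions as $id => $revision) {
    $stmt->execute(array((int) $id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    if ($row && (int) $row['revision'] > (int) $revision) {
        $updates[$id] = $row; // only send items the client is behind on
    }
}
echo json_encode($updates);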
