I know there are Comet server technologies that do this but I want to write something simple and home-grown.
When a record is inserted into a MySQL table, I want it to somehow communicate this data to a series of long-polled Apache connections using PHP (or whatever). So multiple people are "listening" through their browser and the second the MySQL INSERT happens, it is sent to their browser and executed.
The easy way is to have the PHP script poll the MySQL database, but this isn't really pushing from the server and introduces an unacceptable number of unnecessary database queries. I want to get that data from MySQL to the long-polling connections essentially without the listeners querying at all.
Any ideas on how to implement this?
I have been trying all kinds of ideas for a solution to this as well, and the only way to take the polling of SQL queries out is to poll a file instead. If the file contains 0, keep looping; if it contains 1, run the SQL queries and send the results to the users. It does add another level of complexity, but I would think it means less work for MySQL, though the same for Apache or whatever the looping daemon is. You could also send the command to a daemon "comet" style, but from what I have seen of how sockets work it is going to fork and loop on each request as well, so hopefully someone will find a solution to this.
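A minimal sketch of that flag-file idea in PHP; the file path and the fetchNewRows() / sendToListeners() helpers are hypothetical placeholders:

```php
<?php
// Minimal sketch, assuming whatever performs the INSERT also writes "1"
// to this flag file. fetchNewRows() and sendToListeners() are hypothetical.
$flag = '/tmp/new_data.flag';

while (true) {
    clearstatcache(true, $flag);
    if (trim((string) @file_get_contents($flag)) === '1') {
        // Only now touch MySQL: fetch the new rows and push them out.
        $rows = fetchNewRows();     // hypothetical query wrapper
        sendToListeners($rows);     // hypothetical push to long-poll clients
        file_put_contents($flag, '0');
    }
    usleep(250000); // poll the cheap file, not the database
}
```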
This is something I have been looking for as well, for many years. I have not found any functionality where the SQL server pushes out a message on INSERT, DELETE, or UPDATE.
TRIGGERS can run SQL on these events, but that is of no use here.
I guess you have to construct your own system. You can easily broadcast a UDP message from PHP (example in the first comment; a minimal sketch also follows below), but the problem is that PHP is running on the server side and the clients are static.
My guess would be that you could run a Java applet on the client that listens for the UDP message and then triggers an update of the page.
These were just some thoughts at the moment of writing...
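For what it's worth, a rough sketch of the server-side UDP broadcast in PHP; the subnet broadcast address and port are assumptions:

```php
<?php
// Rough sketch: broadcast a UDP datagram right after the INSERT succeeds.
// 192.168.1.255 and port 4096 are assumptions for a typical LAN.
$sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_set_option($sock, SOL_SOCKET, SO_BROADCAST, 1);

$msg = json_encode(['event' => 'insert', 'table' => 'messages']);
socket_sendto($sock, $msg, strlen($msg), 0, '192.168.1.255', 4096);
socket_close($sock);
```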
MySQL probably isn't the right tool for this problem. Regilero suggested switching your DB, but an easier solution might be to use something like redis which has a pub/sub feature.
http://redis.io/topics/pubsub
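For example, a minimal sketch using the phpredis extension; the channel name and payload are illustrative, and the two halves belong in separate scripts:

```php
<?php
// Publisher side: run this right after the MySQL INSERT succeeds.
$pub = new Redis();
$pub->connect('127.0.0.1', 6379);
$pub->publish('new_records', json_encode(['id' => 42])); // illustrative payload

// Subscriber side (a separate long-poll worker): block here instead of
// polling MySQL; the callback fires the moment something is published.
$sub = new Redis();
$sub->connect('127.0.0.1', 6379);
$sub->setOption(Redis::OPT_READ_TIMEOUT, -1); // don't time out while waiting
$sub->subscribe(['new_records'], function ($redis, $channel, $message) {
    echo $message; // push it out to the waiting browser, then exit
});
```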
In PHP, my script is only trying to test if a server is online - nothing more. How would I go about creating multiple socket streams that all run at the same time? Doing it one-after-another would take forever if you're testing a bunch of servers.
Usually you would start a pool of threads, and the threads would read the sites that need to be tested from a queue. This allows each thread to open a connection to a site, giving you concurrency.
Or maybe pthreads? I dunno, I've never written threaded code in PHP.
Use select. You give it a list of the sockets you want to read from or write to, and it tells you when they are ready. When you read/write to those, you know you'll be able to get/send some data. You then do whatever processing you need on those sockets, and go back and select again, waiting for more data.
If you need to do other things as well, set the timeout on select and it will return in that amount of time, even if nothing is ready on any sockets.
edit: also, once you figure out how to use select (not that hard), it's a TON simpler to debug and deal with than synchronization between threads.
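A rough sketch of the same idea with PHP's stream_select(); the host list is made up, and real code would also distinguish a successful connect from a refused one:

```php
<?php
// Rough sketch: open all connections asynchronously, then let
// stream_select() tell us which ones became ready.
$hosts   = ['example.com:80', 'example.org:80', 'example.net:80']; // assumed list
$pending = [];

foreach ($hosts as $host) {
    $s = @stream_socket_client("tcp://$host", $errno, $errstr, 5,
        STREAM_CLIENT_CONNECT | STREAM_CLIENT_ASYNC_CONNECT);
    if ($s !== false) {
        $pending[$host] = $s;
    }
}

$deadline = time() + 5;
while ($pending && time() < $deadline) {
    $read   = null;
    $except = null;
    $write  = $pending; // a socket turns writable once its connect finishes
    if (stream_select($read, $write, $except, 1) > 0) {
        foreach ($write as $host => $s) {
            echo "$host is online\n";
            fclose($s);
            unset($pending[$host]);
        }
    }
}

foreach ($pending as $host => $s) {
    echo "$host did not respond\n";
    fclose($s);
}
```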
I'm programming a C++ service that, every second, makes a SELECT query with LIMIT 1 on a MySQL server, computes something, then makes an INSERT, and does this in a loop forever and ever.
I'd like to detect server overload so I can make SELECTs with a bigger LIMIT, for example LIMIT 10, at longer intervals, like every 5 seconds or so. I'm not sure whether this would actually lighten the load.
My problem is how to detect these overloads, and I'm not even sure what I mean by "overload" :) It could be anything, but my application is a PHP web application (a chat), so the overload could be detected on the Apache2 side, on the MySQL side, or by measuring how many users post how many inputs (chat messages) in a time interval. I don't know :/
Thank you!
EDIT: Okay, I made a socket server out of my C++ application and it's really fast that way. Now I'm struggling with memory leaks, but that's another story.
So thank you #brianbeuning for the helpful thoughts about my problem.
Better to fix that forever-and-ever loop; it's not a good idea.
If that loop really is a must, then use some caching technique.
For detecting "overload" (I would call it high MySQL CPU usage), try calling external commands supported by the operating system.
For example, if you run this on Linux, play with the ps command.
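For example, a small sketch (in PHP, like the rest of the stack) that reads mysqld's CPU usage with ps; the 80% threshold and the fallback values are arbitrary assumptions:

```php
<?php
// Sketch: ask the OS for mysqld's CPU usage (GNU ps on Linux assumed).
$cpu = (float) shell_exec('ps -C mysqld -o %cpu= | head -n1');

if ($cpu > 80.0) {
    // Treat this as "overloaded": widen the batch and slow down the loop.
    $limit           = 10; // SELECT ... LIMIT 10
    $intervalSeconds = 5;
} else {
    $limit           = 1;  // SELECT ... LIMIT 1
    $intervalSeconds = 1;
}
```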
EDIT:
I realize now that you are programming a chat server.
Using MySQL as a middleman is NOT a good idea.
Try solving this without MySQL, and then if you need to save the chat log, save it to MySQL occasionally (e.g. every 10 seconds or so).
I bet it is a CPU hog right now with just 20 intensive users.
Try to set up direct client-to-client communication, without going through the server (use the server only to establish the communication between two clients).
Another approach would be to buffer the data in your app and use a connection pool of sorts to manage the load. Keep a rolling buffer of data that needs to be inserted and manage the 'limit' based on the size of the buffer.
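Something like this minimal sketch; the table, columns, and the flush threshold of 100 are all made up for illustration:

```php
<?php
// Minimal sketch: buffer rows in memory and flush them as one
// multi-row INSERT once the buffer reaches a threshold.
$pdo    = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // assumed DSN
$buffer = [];
$limit  = 100; // flush threshold; tune to your load

function bufferRow(PDO $pdo, array &$buffer, int $limit, array $row): void
{
    $buffer[] = $row;
    if (count($buffer) < $limit) {
        return;
    }
    // One INSERT with many value tuples instead of many single INSERTs.
    $tuples = implode(',', array_fill(0, count($buffer), '(?, ?)'));
    $stmt   = $pdo->prepare("INSERT INTO results (input, output) VALUES $tuples");
    $params = [];
    foreach ($buffer as $r) {
        $params[] = $r['input'];
        $params[] = $r['output'];
    }
    $stmt->execute($params);
    $buffer = [];
}
```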
I have built a small PHP server/client script. When I say client/server I mean it acts both as a client and as a server, alternating between the two modes every 5 seconds.
Now the code runs on two servers and is triggered by a cron.
On rare occasions they manage to get in perfect sync with each other: either they establish a connection at the very last microsecond, by which point the PHP code has already switched to client mode, or they never manage to establish a connection at all.
Before this whole dance starts, each runs some database queries to select some information, which can be big or small and is never identical between the two. Adding some randomness to the timings has only made these incidents rarer; it has not eliminated them completely.
Anyone ever manage to do something like this successfully? How?
You have designed a race condition here. No matter how you try to synchronize these, you'll get in trouble eventually.
The way to solve this is to have each process act as a server all the time, and perform the client functionality on demand.
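A rough sketch of that pattern in PHP; the port, the peer address, and the handleRequest() / nextOutgoingPayload() helpers are hypothetical:

```php
<?php
// Rough sketch: always listening (server role), connecting out only
// when there is something to send (client role on demand).
$server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr); // assumed port
stream_set_blocking($server, false);

while (true) {
    // Server role: accept any incoming connection without blocking.
    $conn = @stream_socket_accept($server, 0);
    if ($conn !== false) {
        $request = fread($conn, 8192);
        fwrite($conn, handleRequest($request)); // hypothetical handler
        fclose($conn);
    }

    // Client role, on demand: only dial out when we have data to push.
    if ($payload = nextOutgoingPayload()) {     // hypothetical queue check
        $peer = @stream_socket_client('tcp://other-host:9000', $en, $es, 5);
        if ($peer !== false) {
            fwrite($peer, $payload);
            fclose($peer);
        }
    }

    usleep(100000); // don't spin the CPU
}
```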
I'm currently working on an event-logging system that will form part of a real-time analytics system. Individual events are sent via RPC from the main application to another server, where a separate PHP script running under Apache handles the event data.
Currently the receiving server PHP script hands off the event data to an AMQP exchange/queue from where a Java application pops events from the queue, batches them up and performs a batch db insert.
This should provide great scalability; however, I'm thinking the cost is complexity.
I'm now looking to simplify things a little so my questions are:
Would it be possible to remove the AMQP queue and perform the batching and inserting of events directly to the db from within the PHP script(s) on the receiving server?
And if so, would some kind of intermediary database be required to batch up the events, or could the batching be done from within PHP?
Thanks in advance
Edit:
Thanks for taking the time to respond. To be more specific: is it possible for a PHP script running under Apache to be configured to handle multiple HTTP requests?
So, as Apache spawns child processes, could each of these processes be configured to accept, say, 1000 HTTP requests, deal with them, and then shut down?
I see three potential answers to your question:
Yes
No
Probably
If you share metrics of alternative implementations (because everything you ask about is technically possible, so please try it first and then gather hard results), we can give better suggestions. But as long as you don't provide some meat, put it on the grill, and show us the results, there is not much more to tell.
I've finally made a simple chat page that I had wanted to make for a while now, but I'm running into problems with my servers.
I'm not sure if long polling is the correct term, but from what I understand, I think it is. I have an ajax call to a php page that checks a mysql database for messages with times newer than the time sent in the ajax request. If there isn't a newer message, it keeps looping and checking until there is. Else, it just returns the new messages and the client script sends another ajax request as soon as it gets the messages.
Everything is working fine, except for the part where the server on 000webhost stops responding after a few chat messages, and the server on x10 hosting gives me a message about hitting a resource limit.
Maybe this is a dumb way to do a chat system, but it's all I know how to do. If there is a better way please let me know.
edit: Holy hell, it's just occurred to me that I didn't put any sleep time in the while loop on the server.
You can find a lot of reading on this, but I doubt that free web hosting is going to allow you to do what you are thinking of doing. PHP was also not really designed for building chat systems.
I would recommend using WebSockets with, for example, Node.js and Socket.IO, or Tornado with Python. There are a lot of solutions out there, but most of them require you to run your own server, since you need a single long-running program that interacts with many connections at once, rather than simple scripts that start and finish with a single connection.
What about using the same strategy whether or not there are newer messages on the server? The server would always return a list of newer messages; this list could simply be empty when there are none. The empty list could also be encoded as a special data token.
The client then proceeds in both cases the same way: it processes the received data and requests new messages after a time period.
Make sure you sleep(1) on each loop iteration; otherwise the code is going to run the loop many times per second, stressing your database/server.
But still, Node.js or WebSockets are better technologies for dealing with real-time chat.
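Putting both suggestions together (always return a list, even an empty one, and sleep between queries), the server side might look like this sketch; the table and column names are assumptions:

```php
<?php
// Sketch of the long-poll endpoint: check for new messages once per
// second, answer early if any arrive, otherwise return an empty list.
$pdo      = new PDO('mysql:host=localhost;dbname=chat', 'user', 'pass'); // assumed DSN
$since    = (int) ($_GET['since'] ?? 0);
$deadline = time() + 25; // finish well before Apache/PHP timeouts

header('Content-Type: application/json');

while (time() < $deadline) {
    $stmt = $pdo->prepare('SELECT id, body, created FROM messages WHERE created > ?');
    $stmt->execute([$since]);
    $messages = $stmt->fetchAll(PDO::FETCH_ASSOC);
    if ($messages) {
        echo json_encode($messages);
        exit;
    }
    sleep(1); // one query per second instead of hundreds
}

echo json_encode([]); // empty list: the client just asks again
```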