All,
HTML5 Rocks has a nice beginner tutorial on Server-sent Events (SSE):
http://www.html5rocks.com/en/tutorials/eventsource/basics/
But, I don't understand an important concept - what triggers the event on the server that causes a message to be sent?
In other words - in the HTML5 example - the server simply sends a timestamp once:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache'); // recommended to prevent caching of event data

function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
If I were building a practical example - e.g., a Facebook-style "wall" or a stock-ticker, in which the server would "push" a new message to the client every time some piece of data changes, how does that work?
In other words... Does the PHP script have a loop that runs continuously, checking for a change in the data, then sending a message every time it finds one? If so - how do you know when to end that process?
Or - does the PHP script simply send the message, then end (as appears to be the case in the HTML5Rocks example)? If so - how do you get continuous updates? Is the browser simply polling the PHP page at regular intervals? If so - how is that a "server-sent event"? How is this different from writing a setInterval function in JavaScript that uses AJAX to call a PHP page at a regular interval?
Sorry - this is probably an incredibly naive question. But none of the examples I've been able to find make this clear.
[UPDATE]
I think my question was poorly worded, so here's some clarification.
Let's say I have a web page that should display the most recent price of Apple's stock.
When the user first opens the page, the page creates an EventSource with the URL of my "stream."
var source = new EventSource('stream.php');
My question is this - how should "stream.php" work?
Like this? (pseudo-code):
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache'); // recommended to prevent caching of event data

function sendMsg($msg) {
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    flush();
}

while (some condition) {
    // check whether Apple's stock price has changed,
    // e.g. by querying a database or calling a web service;
    // if it HAS changed, sendMsg with the new price to the client;
    // otherwise, do nothing until the next loop
    sleep($n); // wait $n seconds before checking again
}
?>
In other words - does "stream.php" stay open as long as the client is "connected" to it?
If so - does that mean that you have as many threads running stream.php as you have concurrent users? If so - is that remotely feasible, or an appropriate way to build an application? And how do you know when you can END an instance of stream.php?
My naive impression is that, if this is the case, PHP isn't a suitable technology for this kind of server. But all of the demos I've seen so far imply that PHP is just fine for this, which is why I'm so confused...
"...does "stream.php" stay open as long as the client is "connected"
to it?"
Yes, and your pseudo-code is a reasonable approach.
"And how do you know when you can END an instance of stream.php?"
In the most typical case, this happens when the user leaves your site. (Apache recognizes the closed socket, and kills the PHP instance.) The main time you might close the socket from the server-side is if you know there is going to be no data for a while; the last message you send the client is to tell them to come back at a certain time. E.g. in your stock-streaming case, you could close the connection at 8pm, and tell clients to come back in 8 hours (assuming NASDAQ is open for quotes from 4am to 8pm). Friday evening you tell them to come back Monday morning. (I have an upcoming book on SSE, and dedicate a couple of sections on this subject.)
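To make that concrete, here is a minimal sketch of such an endpoint under the assumptions above; `market_is_open()` and `get_latest_quote()` are hypothetical helpers standing in for your real data source, and the retry value is an arbitrary choice:

```php
<?php
// Sketch of a long-lived SSE endpoint. connection_aborted() becomes true
// once the client has gone away, which is how the instance normally ends.
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

// Build one SSE message. "retry:" tells the browser how many milliseconds
// to wait before reconnecting after the server closes the connection.
function sse_message($data, $id = null, $retryMs = null) {
    $out = '';
    if ($retryMs !== null) $out .= "retry: $retryMs\n";
    if ($id !== null)      $out .= "id: $id\n";
    return $out . "data: $data\n\n";
}

function stream_quotes() {
    while (!connection_aborted()) {
        if (!market_is_open()) {                   // hypothetical helper
            // Last message: ask the client to come back in 8 hours.
            echo sse_message('market closed', null, 8 * 3600 * 1000);
            flush();
            return;                                // end this instance
        }
        echo sse_message(get_latest_quote());      // hypothetical helper
        @ob_flush();
        flush();
        sleep(2);
    }
}
// stream_quotes();   // uncomment in the real endpoint
```

The browser honours the `retry:` field automatically, so "come back later" costs nothing on the client side.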
"...if this is the case, PHP isn't a suitable technology for this kind
of server. But all of the demos I've seen so far imply that PHP is
just fine for this, which is why I'm so confused..."
Well, people argue that PHP isn't a suitable technology for normal web sites, and they are right: you could do it with far less memory and far fewer CPU cycles if you replaced your whole LAMP stack with C++. However, despite this, PHP powers most of the sites out there just fine. It is a very productive language for web work, due to a combination of a familiar C-like syntax and so many libraries, and a comforting one for managers, as there are plenty of PHP programmers to hire, plenty of books and other resources, and some large use-cases (e.g. Facebook and Wikipedia). Those are basically the same reasons you might choose PHP as your streaming technology.
The typical setup is not going to be one connection to NASDAQ per PHP-instance. Instead you are going to have another process with a single connection to the NASDAQ, or perhaps a single connection from each machine in your cluster to the NASDAQ. That then pushes the prices into either a SQL/NoSQL server, or into shared memory. Then PHP just polls that shared memory (or database), and pushes the data out. Or, have a data-gathering server, and each PHP instance opens a socket connection to that server. The data-gathering server pushes out updates to each of its PHP clients, as it receives them, and they in turn push out that data to their client.
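A sketch of that division of labour, using a plain file as a stand-in for the shared memory, database, or Redis key that the single gatherer process writes to (the file path and polling interval here are arbitrary choices):

```php
<?php
// One gatherer process calls publish_price(); many SSE instances call
// stream_prices() and only ever READ the shared location.
define('PRICE_FILE', sys_get_temp_dir() . '/latest_price_demo.txt');

function publish_price($price) {                  // run by the gatherer
    file_put_contents(PRICE_FILE, $price, LOCK_EX);
}

function read_price() {                           // run by each SSE script
    return is_file(PRICE_FILE) ? trim(file_get_contents(PRICE_FILE)) : null;
}

function stream_prices() {
    $last = null;
    while (!connection_aborted()) {
        $price = read_price();
        if ($price !== null && $price !== $last) { // push only on change
            echo "data: $price\n\n";
            @ob_flush();
            flush();
            $last = $price;
        }
        usleep(500000);                            // re-check twice a second
    }
}
```

The point of the pattern is that no matter how many PHP instances are streaming, there is still only one upstream connection to the data source.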
The main scalability issue with using Apache+PHP for streaming is the memory for each Apache process. When you reach the memory limit of the hardware, make the business decision to add another machine to the cluster, or cut Apache out of the loop, and write a dedicated HTTP server. The latter can be done in PHP so all your existing knowledge and code can be re-used, or you can rewrite the whole application in another language. The pure developer in me would write a dedicated, streamlined HTTP server in C++. The manager in me would add another box.
Server-sent events are for real-time updates from the server side to the client side. In the first example, the server doesn't keep the connection open; the client reconnects every 3 seconds, which makes server-sent events no different from AJAX polling.
So, to make the connection persist, you need to wrap your code in a loop and check for updates constantly.
PHP is process-based, and more connected users will make the server run out of resources. This can be solved by limiting the script's execution time and ending the script when it exceeds a certain duration (e.g. 10 minutes). The EventSource API will automatically reconnect, so the delay stays within an acceptable range.
Also, check out my PHP library for server-sent events; it will help you understand how to do server-sent events in PHP and make it easier to code.
I have noticed that the SSE technique sends data to the client at a fixed interval (something like the reverse of the client-side polling technique, e.g. AJAX polling). To overcome this problem I made this sseServer.php page:
<?php
session_start();
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache'); // recommended to prevent caching of event data
require 'sse.php';
if ($_POST['message'] != "") {
    $_SESSION['message'] = $_POST['message'];
    $_SESSION['serverTime'] = time();
}

sendMsg($_SESSION['serverTime'], $_SESSION['message']);
?>
and the sse.php is :
<?php
function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}
?>
Notice that in sseServer.php I start a session and use a session variable to overcome the problem!
I also call sseServer.php via AJAX (POSTing a value for the message variable) every time I want to "update" the message.
Now in jQuery (JavaScript) I do something like this:
1st) I declare a global variable: var timeStamp = 0;
2nd) I use the following algorithm:
if (typeof(EventSource) !== "undefined") {
    var source = new EventSource("sseServer.php");
    source.onmessage = function(event) {
        if ((timeStamp != event.lastEventId) && (timeStamp != 0)) {
            timeStamp = event.lastEventId;
            $.notify("Please refresh " + event.data, "info");
        } else if (timeStamp == 0) {
            /* this is the initialization */
            timeStamp = event.lastEventId;
        }
    };
} else {
    document.getElementById("result").innerHTML = "Sorry, your browser does not support server-sent events...";
}
The line $.notify("Please refresh " + event.data, "info"); is where you can handle the message.
In my case I used it to send a jQuery notification.
You may use POSIX pipes or a DB table instead to pass the "message", rather than POST, since sseServer.php does something like an "infinite loop".
My problem at the moment is that the above code DOES NOT send the "message" to all clients, but only to the pair (each client that calls sseServer.php works individually as one pair). So I will change the technique: the page that wants to trigger the "message" will do a DB update, and sseServer.php will get the message from the DB table instead of via POST.
I hope I have helped!
This is really a structural question about your application. Real-time events are something that you want to think about from the beginning, so you can design your application around it. If you have written an application that just runs a bunch of random mysql(i)_query methods using string queries and doesn't pass them through any sort of intermediary, then many times you won't have a choice but to either rewrite much of your application, or do constant server-side polling.
If, however, you manage your entities as objects and pass them through some sort of intermediary class, you can hook into that process. Look at this example:
<?php
class MyQueryManager {

    public function find($myObject, $objectId) {
        // Issue a select query against the database to get this object
    }

    public function save($myObject) {
        // Issue a query that saves the object to the database
        // Fire a new "save" event for the type of object passed to this method
    }

    public function delete($myObject) {
        // Fire a "delete" event for the type of object
    }
}
In your application, when you're ready to save:
<?php
$someObject = $queryManager->find("MyObjectName", 1);
$someObject->setDateTimeUpdated(time());
$queryManager->save($someObject);
This is not the most graceful example but it should serve as a decent building block. You can hook into your actual persistence layer to handle triggering these events. Then you get them immediately (as real-time as it can get) without hammering your server (since you have no need to constantly query your database and see if things changed).
You obviously won't catch manual changes to the database this way - but if you're doing anything manually to your database with any frequency, you should either:
Fix the problem that requires you to have to make a manual change
Build a tool to expedite the process, and fire these events
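One way to sketch the "fire these events" part is a tiny listener registry that the query manager calls from save() and delete(); the event names below are invented for illustration:

```php
<?php
// Minimal event bus: listeners register a callback per event name, and
// the persistence layer fires the event whenever an object changes.
class EventBus {
    private $listeners = [];

    public function on($event, callable $fn) {
        $this->listeners[$event][] = $fn;
    }

    public function fire($event, $payload = null) {
        foreach ($this->listeners[$event] ?? [] as $fn) {
            $fn($payload);
        }
    }
}

// e.g. inside MyQueryManager::save() you might call:
//     $this->bus->fire('MyObjectName.save', $myObject);
```

A listener could, for instance, write a row to the table your streaming endpoint watches, so updates propagate without constant database diffing.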
Basically, PHP is not a suitable technology for this sort of thing.
Yes, you can make it work, but it will be a disaster under high load. We run stock servers that send stock-change signals via WebSockets to tens of thousands of users, and if we used PHP for that... well, we could, but those homemade loops are just a nightmare. Every single connection gets a separate process on the server, or you have to handle connections through some sort of database.
Simply use Node.js and socket.io. It will let you easily get a running server in a couple of days. Node.js has its own limitations too, but for WebSocket (and SSE) connections it is currently the most powerful technology.
Also, SSE is not as good as it seems. Its only advantage over WebSockets is that packets are gzipped natively (WebSocket traffic is not), but the downside is that SSE is a one-way connection. If your user wants to add another stock symbol to his subscription, he has to make an AJAX request (including all the trouble with origin control, and the request will be slow). With WebSockets, client and server communicate both ways over one single open connection, so if the user sends a trading signal or subscribes to a quote, he just sends a string over the already-open connection. And it's fast.
Related
I'd very much like second thoughts on this approach I'm implementing to handle very long processes in a web application.
The problem
I have a web application, all written in javascript, which communicates with the server via an API. This application has got some "bulk actions" that take a lot of time to execute. I want to execute them in a safe way, making sure the server won't time out, and with a rich feedback to the user, so he/she knows what is going on.
The usual approach
As far as I can see in my research, the recommended method of doing this is firing a background process on the server and making it write somewhere how it's going, so you can make requests to check on it and give feedback to the user. Since I'm using PHP in the back-end, the approach would be more or less what is described here: http://humblecontributions.blogspot.com.br/2012/12/how-to-run-php-process-in-background.html
Adding a few requirements
Since I'm developing an open source project (a WordPress plugin) I want it to work in a variety of situations and environments. I did not want to add server-side requirements and, as far as I know, the background process approach may not work in several shared hosting solutions.
I want it to work out of the box, in (almost) any server with typical WordPress support, even if it ends up being a bit slower solution.
My approach
The idea is to break this process in a way it will run incrementally through many small requests.
So the first time the browser sends a request to run the process, it will run only a small step of it, and return useful information to give the user some feedback. Then the browser does another request, and repeats it until the server informs that the process is done.
In order to do this, I would store this object in a session, so the first request will give me an ID, and the following requests will send this ID to the server so it manipulates the same object.
Here is a conceptual example:
class LongProcess {

    function __construct() {
        $this->id = uniqid();
        $_SESSION[$this->id] = $this;
        $this->step = 1;
        $this->total = 100;
    }

    function run() {
        // do stuff based on the step you are in
        $this->step = $this->step + 10;
        if ($this->step >= $this->total) {
            return -1;
        }
        return $this->step;
    }
}
function ajax_callback() {
    session_start();
    if (!isset($_POST['id']) || empty($_POST['id'])) {
        $object = new LongProcess();
    } else {
        $object = $_SESSION[$_POST['id']];
    }
    $step = $object->run();
    echo json_encode([
        'id' => $object->id,
        'step' => $step,
        'total' => $object->total
    ]);
}
With this I can have my client send requests recursively and update the feedback to the user as the responses are received.
function recursively_ajax(session_id) {
    $.ajax({
        type: "POST",
        async: false, // set async false to wait for the previous response
        url: "xxx-ajax.php",
        dataType: "json",
        data: {
            action: 'bulk_edit',
            id: session_id
        },
        success: function(data) {
            updateFeedback(data);
            if (data.step != -1) {
                recursively_ajax(data.id);
            } else {
                updateFeedback('finish');
            }
        }
    });
}

$('#button').click(function() {
    recursively_ajax();
});
Of course this is just a proof of concept; I'm not even using jQuery in the actual code. This is just to express the idea.
Note that this object which is stored in the session should be a very lightweight object. Any actual data being processed should be stored in the database or the filesystem and only referenced in the object, so it knows where to look for stuff.
One typical case would be processing a large CSV file. The file would be stored in the filesystem, and the object would store a pointer to the last processed line, so it knows where to start on the next request.
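That CSV case can be sketched as follows (the chunk size and callback are arbitrary): the lightweight object only needs to remember a byte offset, and each request resumes reading from it.

```php
<?php
// Process up to $linesPerStep rows starting at byte $offset. Returns the
// new offset to store for the next request, or -1 when the file is done.
function process_csv_chunk($path, $offset, $linesPerStep, callable $handleRow) {
    $fh = fopen($path, 'r');
    fseek($fh, $offset);
    for ($i = 0; $i < $linesPerStep; $i++) {
        $row = fgetcsv($fh);
        if ($row === false) {          // end of file: nothing left to do
            fclose($fh);
            return -1;
        }
        $handleRow($row);
    }
    $next = ftell($fh);                // pointer for the next request
    fclose($fh);
    return $next;
}
```

The session object would hold the path and the returned offset, and the AJAX callback would call this once per request until it gets -1.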
The object may also return a more verbose log, describing everything that was done and reporting errors, so the user has complete knowledge of what has been done.
The interface I think would be great is a progress bar with a "see details" button that would open a textarea with this detailed log.
Does it make sense?
So now I ask: how does it look? Is it a viable approach?
Is there a better way to do this and assure it will work in very limited servers?
Your approach has several disadvantages:
Your heavy requests may block other requests. Usually you have a limit of concurrent PHP processes for handling web request. If the limit is 10, and all slots are taken by processing your heavy requests, your website will not work until some of these requests will complete releasing slot for another lightweight request.
You (probably) will not be able to estimate how much time it will take to finish one step. Depending on server load it could take 5 or 50 seconds, and 50 seconds will probably exceed the execution time limit on most shared hostings.
This task will be controlled by client - any interruption from client side (network problems, closing browser tab) will interrupt the task.
Depending on the session backend, using the session for storing the current state may result in race condition bugs: a concurrent request from the same client may overwrite changes made to the session by the background task. By default PHP uses locking for sessions, so this should not be the case, but if someone uses an alternative backend for sessions (DB, Redis) without locking, this will result in serious and hard-to-debug bugs.
There is an obvious trade-off here. For small websites where simplifying installation and configuration is a priority, your approach is OK. In any other case I would stick to simple cron-based queue for running tasks in background and use AJAX request only to retrieve current status of task. So far I have not seen hosting without cron support and adding task to cron should not be that hard for the end user (with proper documentation).
In both cases I would not use session as a storage. Save task and its status in database and use some locking system to ensure, that only one process can modify data of one task. This will be really much more robust and flexible than using session.
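The locking idea can be sketched with an atomic conditional UPDATE acting as the lock (the table layout below is invented for illustration; any relational database behaves the same way):

```php
<?php
// Only one process can move a task from 'pending' to 'running': the
// conditional UPDATE is atomic, so exactly one caller sees rowCount() == 1.
function claim_task(PDO $db, $taskId) {
    $stmt = $db->prepare(
        "UPDATE tasks SET status = 'running'
         WHERE id = ? AND status = 'pending'"
    );
    $stmt->execute([$taskId]);
    return $stmt->rowCount() === 1;    // true only for the winner
}
```

A cron worker and an AJAX status endpoint can then both look at the same `status` column without stepping on each other.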
Thanks for all the input. I just want to document here some very good answers I got.
Some WordPress plugins, notably WooCommerce, have incorporated code from the "WP Background Processing" library, which is no longer maintained but implements the cron approach with some important improvements. See this blog post:
https://deliciousbrains.com/background-processing-wordpress/
The actual library lives here: https://github.com/A5hleyRich/wp-background-processing
Although this is a WordPress specific library, I think the approach is valid for any situation.
There is also, for WordPress, a library called Action Scheduler, which not only runs processes in the background but also allows you to schedule them. It's worth a look:
https://github.com/Prospress/action-scheduler
I have a PHP script (in a normal LAMP environment) that runs a couple of housekeeping tasks at the end of the script.
I use flush() to push all the data to the client, which works fine (the page is fully loaded), but the browser still waits for data (indicated by the "loading" animation), which is confusing for the user, but of course understandable, because Apache cannot know whether PHP will generate more output after flush(). In my case, however, it never does.
Is there a way to tell the client that the output is finished and the http-connection should be closed immediately even though the script keeps running?
It sounds like you have a long-running script performing various tasks. In particular, the script appears to go on doing things after it has sent the reply to the client. This is a design that opens up a whole lot of potential problems. You should rethink your architecture.
Keep housekeeping tasks and client communication strictly separate. For example, you could have a client request processed and trigger internal sub-requests (which you can detach from), or delegate tasks to a cron-like system. Then offer a second view to the client which visualizes the progress and result of those tasks. This approach is much safer, more flexible, and easier to extend when required. And your problem at hand is solved, too :-)
You can use the special function fastcgi_finish_request() to finish the request and flush all data while continuing to do something time-consuming (video converting, stats processing, etc.): http://php.net/manual/en/install.fpm.php. Note that you need to install PHP-FPM for it to work, like so:
<?php
echo "You can see this from the browser immediately.<br>";
fastcgi_finish_request();
sleep(10);
echo "You can't see this from the browser.";
?>
I am making a notification system for my website. I want logged-in users to be notified immediately when a notification is made. As many people say, there are only a few ways of doing so.
One is writing some JavaScript code that asks the server "Are there any new notifications?" at a given time interval. It's called "polling" (I should be right).
Another is "long polling" or "Comet". As Wikipedia says, long polling is similar to polling, but instead of asking every time for new notifications, the server sends them directly to the client when they become available.
So how can I use long polling with PHP? (I don't need full source code, just a way of doing it.)
What is its architecture/design, really?
The basic idea of long polling is that you send a request which is then NOT responded to or terminated by the server until some desired condition is met. I.e., the server side doesn't "finish" serving the request by sending the response. You can achieve this by keeping the execution in a loop on the server side.
Imagine that in each loop you do a database query or whatever is necessary for you to find out if the condition you need is now true. Only when it IS you break the loop and send the response to the client. When the client receives the response, it immediately re-sends the "long-polling" request so it wouldn't miss a next "notification".
A simplified example of the server-side PHP code for this could be:
// Set the loop to run 28 times, sleeping 2 seconds between each loop.
for ($i = 1; $i < 29; $i++) {
    // Find out if the condition is satisfied.
    // If YES, break the loop and send the response.
    sleep(2);
}
// If nothing happened (the condition didn't satisfy) during the 28 loops,
// respond with a special response indicating no results. This helps avoiding
// problems of 'max_execution_time' reached. Still, the client should re-send the
// long-polling request even in this case.
You can use (or study) some existing implementations, like Ratchet. There are a few others.
Essentially, you need to avoid having Apache or the web server handle the request. Just like you would with a Node.js server, you can start PHP from the command line and use the server socket functions to create a server, using socket_select to handle communications.
It could technically work through the web server by keeping a loop active. However, the memory overhead of keeping a PHP process active per HTTP connection is typically too high. Creating your own server allows you to share memory between connections.
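A rough sketch of that shape (the port is arbitrary): one CLI process, one listening socket, and stream_select() multiplexing every client, so all connections share the same process and memory.

```php
<?php
// Write $msg to every connected client.
function broadcast(array $clients, $msg) {
    foreach ($clients as $c) {
        fwrite($c, $msg);
    }
}

// One process handles all connections via stream_select().
function serve($address = 'tcp://127.0.0.1:8090') {   // arbitrary port
    $server  = stream_socket_server($address, $errno, $errstr);
    $clients = [];
    while (true) {
        $read   = $clients;
        $read[] = $server;
        $write  = $except = null;
        if (stream_select($read, $write, $except, null) === false) {
            break;
        }
        foreach ($read as $sock) {
            if ($sock === $server) {                  // new connection
                $clients[] = stream_socket_accept($server);
            } elseif (($msg = fgets($sock)) !== false) {
                broadcast($clients, $msg);            // relay to everyone
            } else {                                  // client disconnected
                $clients = array_values(array_filter(
                    $clients,
                    function ($c) use ($sock) { return $c !== $sock; }
                ));
                fclose($sock);
            }
        }
    }
}
// serve();   // run from the command line: php server.php
```

This is a sketch, not a production server (no partial-line buffering, no write back-pressure), but it shows why one process can serve many long-lived connections.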
I used long polling for a chat application recently. After doing some research and playing with it for a while, here are some things I would recommend.
1) Don't long poll for more than about 20 seconds. Some browsers will timeout. I normally set my long poll to run about 20 seconds and send back an empty response at that point. Then you can use javascript to restart the long poll.
2) Every once in a while a browser will hang up. To help add a second level of error checking, I have a javascript timer run for 30 seconds and if no response has come in 30 seconds I abandon the ajax call and start it up again.
3) If you are using php make sure you use session_write_close()
4) If you are using ajax with Jquery you may need to use abort()
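Points 1 and 3 above can be sketched as a small helper: release the session lock first, then poll your data source for at most ~20 seconds and return an empty result on timeout. `fetch_messages_for()` below is a hypothetical stand-in for your own check:

```php
<?php
// Poll $fetch until it returns something truthy or the time cap is hit.
// An empty array on timeout lets the client simply restart the poll.
function long_poll(callable $fetch, $timeoutSeconds = 20) {
    $deadline = microtime(true) + $timeoutSeconds;
    while (microtime(true) < $deadline) {
        $messages = $fetch();
        if ($messages) {
            return $messages;          // something new: respond immediately
        }
        usleep(500000);                // nothing yet: wait and re-check
    }
    return [];                         // timed out: empty response
}

// In the endpoint itself (sketch):
//     session_start();
//     $uid = $_SESSION['user_id'];
//     session_write_close();          // point 3: free the session lock
//     echo json_encode(long_poll(function () use ($uid) {
//         return fetch_messages_for($uid);   // hypothetical helper
//     }));
```

The client-side 30-second abort timer from point 2 then sits one layer above this, as a safety net for hung connections.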
You can find your answer here, with more detail here. And you should remember to use $.ajaxSetup({ cache: false }); when working with jQuery.
This will be a newbie question, but I'm learning PHP for one sole purpose (atm): to implement a solution. Everything I've learned about PHP was learned in the last 18 hours.
The goal is adding indirection to my javascript get requests to allow for cross-domain accesses of another website. I also don't wish to throttle said website and want to put safeguards in place. I can't rely on them being in javascript because that can't account for other peers sending their requests.
So right now I have the following makeshift code, without any throttling measures:
<?php
$expires = 15;
if (!isset($_GET["target"]))
    exit();
$file = "cache/" . md5($_GET["target"]);

if (!isset($_GET["cache"])) {
    // serve the cached copy only if it exists and has not expired
    if (file_exists($file) && time() - filemtime($file) <= $expires)
        echo file_get_contents($file);
}
else if (isset($_GET["data"])) {
    file_put_contents($file, $_GET["data"]);
}
?>
It works perfectly, as far as I can tell (it doesn't account for the improbable checksum clash). Now what I want to know, and what my search queries in Google refuse to procure for me, is how PHP actually launches and when it ends.
Obviously if I was running my own web server I'd have a bit more insight into this: I'm not, I have no shell access either.
Basically I'm trying to figure out whether I can control in the code when the script ends, and whether every GET request to the PHP file launches a new instance of the script or whether it can 'wake up' the same script. The reason being: I wish to track whether, say, it already sent a request to 'target' within the last n milliseconds, and it seems a bit wasteful to dump the value to a save file and then recover it, over and over, for something that doesn't need to be kept in memory for very long.
Every HTTP request starts a new instance of the interpreter; it's basically an implementation detail whether this is a whole new process, or a reuse of an existing one.
This generally pushes you towards good, simple, and scalable designs: you can run multiple server processes and threads, and you won't get varying behaviour depending on whether the request goes back to the same instance or not.
Loading a recently-touched file will be very fast on Linux, since it will come right from the cache. Don't worry about it.
Do worry about the fact that by directly appending request parameters to the path you have a serious security hole: people can get data=../../../etc/passwd and so on. Read http://www.php.net/manual/en/security.variables.php and so on. (In this particular example you're hashing the inputs before putting them in the path so it's not a practical problem but it is something to watch for.)
More generally, if you want to hold a cache across multiple requests the typical thing these days is to use memcached.
PHP works on a per-connection basis, i.e. each request for a PHP file is seen as a new instance. Each instance is ended, generally, when the connection is closed. You can, however, use sessions to save data between connections for a specific user.
For basic use of sessions look into:
session_start()
$_SESSION
session_destroy()
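A minimal usage sketch: data placed in $_SESSION survives between requests from the same visitor, but is not shared between different visitors.

```php
<?php
// Each request from the same browser resumes the same session, so this
// counter increases across requests; a different visitor starts at 1.
session_start();
$_SESSION['visits'] = ($_SESSION['visits'] ?? 0) + 1;
echo "You have loaded this page {$_SESSION['visits']} time(s).";
```

session_destroy() would wipe this data, forcing the counter back to 1 on the next request.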
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
Several visitors connect to http://site.com/chat.php
They each can write and send a text message to chat.php and it displays instantly on everyone's browser (http://site.com/chat.php)
Do I have to use a database? I mean, are AJAX or PHP's buffering capabilities enough for such a chat room based on sessions?
How can sessions of different users share data from each other?
Any idea or insights will be appreciated, thanks!
Edit: Thanks for the links. But what I want is a way to push data to a client browser. Is constantly refreshing the client browser (AJAX or not) the only way? Also, the challenge here is how different users (for example two users chatting one-on-one) share chat texts. How do you store them, and how do you synchronize the texts between the two clients? Preferably without using a database.
Edit 2: Actually, YShout, mentioned by Peter D, does this job pretty well. It doesn't seem to keep refreshing the browser, but I don't understand how it pushes new messages to an existing user's window.
There are (roughly) 3 options for creating a chat application:
Sockets
Use Flash/Java and sockets for the frontend and a socket-capable programming language for the backend. For the backend, I'd recommend Java or Python, because they are multithreaded and NIO-capable. It's possible to do it with PHP (but PHP can't really do efficient multithreading and is generally not well suited for this). This is an option if you need high performance, and probably not what you're looking for.
Use AJAX and pull
In this case all clients constantly poll (for example every 2 seconds) to see whether something new has happened. It feels strange, because you only get responses at those intervals. Additionally, it puts quite a strain on your server and bandwidth. You can tell an application uses this technique because the browser constantly refreshes. This is a suboptimal solution.
Use AJAX and push
This works with multipart responses and long-running (PHP) scripts in the backend. Not the best solution, but most of the time it's better than pulling; it works and is used in several well-known chat apps. This technique is sometimes called COMET.
My advice: if you need a chat app for production use, install an existing one. Programming chat applications is not that easy.
If you just want to learn, start with a simple AJAX/pull app, then try to program one using AJAX and push.
And yes, you'll most probably need a database, though I successfully implemented a very simple AJAX/pull solution that works with text files for fun (but I certainly wouldn't use it in production!).
It is (to my knowledge, but I'm pretty sure) not possible to create a chat app without a server-side backend (with just frontend JavaScript alone)!
UPDATE
If you want to know how the data pushing is done, look at the source here: http://wehrlos.strain.at/httpreq/client.html. Async multipart is what you want :)
function asSendSyncMulti() {
    var httpReq = new XMLHttpRequest();
    showMessage('Sending Sync Multipart ' + (++this.reqCount));

    // Sync - wait until data arrives
    httpReq.multipart = true;
    httpReq.open('GET', 'server.php?multipart=true&c=' + (this.reqCount), false);
    httpReq.onload = showReq;
    httpReq.send(null);
}
function showReq(event) {
    if (event.target.readyState == 4) {
        showMessage('Data arrives: ' + event.target.responseText);
    } else {
        alert('An error occurred: ' + event.target.readyState);
    }
}
showReq is called every time data arrives, not just once as with regular AJAX requests (I'm not using jQuery or Prototype here, so the code's a bit bloated; this is really old :)).
Here's the server-side part:
<?php
$c = $_GET[ 'c' ];
header('Content-type: multipart/x-mixed-replace;boundary="rn9012"');
sleep( 1 );
print "--rn9012\n";
print "Content-type: application/xml\n\n";
print "\n";
print "Multipart: First Part of Request " . $c . "\n";
print "--rn9012\n";
flush();
sleep( 3 );
print "Content-type: application/xml\n\n";
print "\n";
print "Multipart: Second Part of Request " . $c . "\n";
print "--rn9012--\n";
?>
Update 2
Regarding the database: if you've got a nothing-shared architecture like mod_php/CGI in the backend, you definitely need some kind of external storage like databases or text files. But you could rely on memory by writing your own HTTP server (possible with PHP, but I'd not recommend it for serious work). That's not really complicated, but probably a bit out of the scope of your question ^^
Update 3
I made a mistake! I got everything mixed up, because it's been a long time since I actually did something like that. Here are the corrections:
Multipart responses only work in Mozilla browsers and are therefore of limited use. COMET doesn't mean multipart response.
COMET means: a traditional single-part response, but held (with an infinite loop and sleep) until there is data available. So the browser has 1 request/response for every action (in the worst case), not one request every x seconds even if nothing response-worthy happens.
You mention wanting this to work without a DB, and without the client(s) polling the server for updates.
In theory you can do this by storing the "log" of chats in a text file on the server, and changing your page so that the user does a GET request on the chat.php page, but the PHP page never actually finishes sending back to the user. (e.g. the Response never completes)
You would need to send back some "no op" data to keep the connection going when there are no messages but in theory this would work.
The problem is that accomplishing the above is still a lot of work. You would need to do AJAX posts back to the server to submit new comments... the users' browser would be spinning the whole time (unless you nest the chat log in an iframe, which is more work)... and this kind of setup would just be very hard to manage.
I'd suggest grabbing a free chat script from elsewhere (e.g. http://tinychat.com/) or if you want to roll your own (for fun/experience) then go ahead, but start with a DB and build a page that will push and pull messages from the server.
Finally if you are worried about "hammering" the server with AJAX requests... don't. Just build the chat, then if you find there are performance issues, return to StackOverflow with a question on how to optimize it so that hundreds of requests are not flooding the chat when there is no activity.
While HTTP is not made for easy pushing, you can emulate a push connection by having the PHP script never terminate and the JavaScript result be watched carefully.
Essentially you're simulating a stream reader.
If you would like new users to load a history of the chat that occurred before they entered the room, a DB or other storage is required. Unless you are trying to create a chat for learning, there are too many free ones out there to bother writing your own.
http://tinychat.com is another simple chat site.
AJAX works fine. I have created a simple page for one of my sites. But I find that chat doesn't get used as often as you would think.
Sharing data gets a little more complicated, and would be easier to accomplish by hosting an IRC server and letting users use IRC clients, which have data-exchange capability. Although nothing is stopping you from having one user upload to the site and then others download. Person-to-person would be difficult with a web interface, because the users are not connected in any way with each other.
You can do this entirely with HTML and Javascript using a service like PubNub. You wouldn't need a database as you could use something like the history api to populate the last x chat messages.
Here is a quick tutorial on building a chat app with PubNub.
Real-time Chat Apps in 10 Lines of Code
Enter Chat and press enter
<div><input id=input placeholder=you-chat-here /></div>
Chat Output
<div id=box></div>
<script src=http://cdn.pubnub.com/pubnub.min.js></script>
<script>(function(){
var box = PUBNUB.$('box'), input = PUBNUB.$('input'), channel = 'chat';
PUBNUB.subscribe({
channel : channel,
callback : function(text) { box.innerHTML = (''+text).replace( /[<>]/g, '' ) + '<br>' + box.innerHTML }
});
PUBNUB.bind( 'keyup', input, function(e) {
(e.keyCode || e.charCode) === 13 && PUBNUB.publish({
channel : channel, message : input.value, x : (input.value='')
})
} )
})()</script>