First of all, sorry for my English; I wrote this with some help from Google Translate.
I'm trying to build an application like Google Wave with PHP and Ajax. I have a textarea; when the user types something, JavaScript on the page detects it via oninput and sends the contents of the textarea to the server, which stores them in the database.
What I'm doing is that every time I send the content by XHR, I call XHR.abort() to interrupt the previous XHR request. The data in the database is usually fine; however, sometimes a previous version gets stored.
I know this happens because PHP does not stop executing even though the client has aborted, and sometimes the previous request takes more time than the last one and completes after it. So I read the manual pages for ignore_user_abort and connection_aborted, but the problem persists.
I created this script to simulate the situation. I hoped that when I aborted the connection (pressing 'stop', or closing the tab/window), no new data would appear in the database, but after 5 seconds there is still new data. So I need help rolling back the transaction when the user aborts the connection.
Here is the script to simulate (PDO_DSN, PDO_USER, PDO_PASS are defined):
<?php
ignore_user_abort(true);
ob_start('ob_gzhandler');
$PDO = new PDO(PDO_DSN, PDO_USER, PDO_PASS, array(PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8'));
$PDO->beginTransaction();
$query = $PDO->query('INSERT INTO `table` (`content`) VALUES (' . $PDO->quote('test') . ')');
sleep(5);
// Send a byte so PHP can notice whether the client is still connected
echo ' ';
ob_flush();
flush();
if (connection_aborted()) {
$PDO->rollBack();
exit;
}
$PDO->commit();
ob_end_flush();
If you are finding XHR.abort() and connection_aborted() unreliable, consider other ways to send an out-of-band signal to inform the running PHP request that it should not commit the transaction.
Are you running APC (or could you be)?
Instead of invoking XHR.abort(), you could send another XHR request signaling the abort. The purpose of this request would be to record a special key in the APC user cache. This key's presence would indicate to the running PHP request that it should roll back.
To make this work, each XHR request would need to carry a (relatively) unique transaction identifier, e.g. as a form variable. This identifier would be generated randomly, or based on the current time, and would be sent in the initial XHR as well as the "abort" XHR and would allow the abort request to be correlated to the running request. In the below example, the transaction identifier is in form variable t.
Example "abort" XHR handler:
<?php
$uniqueTransactionId = $_REQUEST['t'];
$abortApcKey = 'abortTrans_' . $uniqueTransactionId;
apc_store($abortApcKey, 1, 15); // flag the abort for 15 seconds
Example revised database write XHR handler:
<?php
$PDO = new PDO(PDO_DSN, PDO_USER, PDO_PASS,
array(PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8'));
$PDO->beginTransaction();
$query = $PDO->query('INSERT INTO `table` (`content`) VALUES (' . $PDO->quote('test') . ')');
$uniqueTransactionId = $_REQUEST['t'];
$abortApcKey = 'abortTrans_' . $uniqueTransactionId;
if (apc_exists($abortApcKey)) {
$PDO->rollBack();
exit;
}
$PDO->commit();
You may still have timing issues. The abort may still arrive too late to stop the commit. To deal with this gracefully, you could modify the database write handler to record an APC key indicating that the transaction had committed. The abort handler could then check for this key's existence, and send back a meaningful XHR abort result to advise the client, "sorry, I was too late."
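A sketch of that abort handler (it assumes the write handler stores a commitTrans_-prefixed key via apc_store right after $PDO->commit(); the key prefixes and response strings are illustrative assumptions):
<?php
$uniqueTransactionId = $_REQUEST['t'];
$abortApcKey  = 'abortTrans_'  . $uniqueTransactionId;
$commitApcKey = 'commitTrans_' . $uniqueTransactionId;

if (apc_exists($commitApcKey)) {
    // The write handler committed before the abort arrived
    echo 'sorry, I was too late';
    exit;
}
apc_store($abortApcKey, 1, 15); // flag the abort for 15 seconds
echo 'abort recorded';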
Keep in mind, if your application is hosted on multiple live servers, you will want to use a shared cache such as memcached or redis, since APC's cache is only shared across processes on a single machine.
How about having the browser send a timestamp or a running number that you also store in the database? Your update can then check it and only write if the incoming value is newer.
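A minimal sketch of that idea, assuming a revision column on an illustrative drafts table (all names here are assumptions):
<?php
$stmt = $PDO->prepare(
    'UPDATE `drafts`
        SET `content` = :content, `revision` = :rev
      WHERE `id` = :id AND `revision` < :oldrev'
);
$stmt->execute(array(
    ':content' => $_POST['content'],
    ':rev'     => (int) $_POST['revision'], // counter the browser bumps per save
    ':id'      => (int) $_POST['id'],
    ':oldrev'  => (int) $_POST['revision'],
));
// rowCount() === 0 means a newer revision is already stored,
// so the stale request changes nothing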
I have seen this issue many times with JavaScript and Ajax. If you are not very careful in implementing the UI, the user can click twice and/or the browser can trigger the Ajax call twice, resulting in the same record hitting your database twice. These might be two completely different HTTP requests from your UI, so you need to make sure that your server-side code can filter out the duplicates before inserting them into the database.
My solution is usually to query the records recently entered by the same user and check whether this is really a new entry. If the new record is not in the database yet, insert it; otherwise ignore it.
In Oracle you can use a MERGE statement (WHEN MATCHED / WHEN NOT MATCHED THEN INSERT) to handle this in one query, but in MySQL you are better off using two queries: one to fetch this user's recent records, and another to insert if there is no match, as sketched below.
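A rough PHP sketch of that two-step check ($PDO, the `entries` table, the one-minute window, and the variables are all illustrative assumptions):
<?php
// Step 1: look for an identical record by this user in the last minute
$check = $PDO->prepare(
    'SELECT COUNT(*) FROM `entries`
      WHERE `user_id` = :uid AND `content` = :content
        AND `created_at` > NOW() - INTERVAL 1 MINUTE'
);
$check->execute(array(':uid' => $userId, ':content' => $content));

// Step 2: insert only when no match was found
if ((int) $check->fetchColumn() === 0) {
    $insert = $PDO->prepare(
        'INSERT INTO `entries` (`user_id`, `content`, `created_at`)
         VALUES (:uid, :content, NOW())'
    );
    $insert->execute(array(':uid' => $userId, ':content' => $content));
}
// otherwise: a duplicate, so ignore it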
As Marc B pointed out, PHP can't detect that the browser has disconnected without sending some output to it. The problem is that you have enabled output buffering with the line ob_start('ob_gzhandler');, and that prevents PHP from sending output to the browser.
You either have to remove that line or add an ob_end_* call (for example ob_end_flush()) along with the echo/flush calls.
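A minimal sketch of the fixed flow, without the gzip buffer in the way (the probe byte then actually reaches the client, so connection_aborted() reports the true state):
<?php
ignore_user_abort(true);

$PDO = new PDO(PDO_DSN, PDO_USER, PDO_PASS,
    array(PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8'));
$PDO->beginTransaction();
$PDO->query('INSERT INTO `table` (`content`) VALUES (' . $PDO->quote('test') . ')');
sleep(5);

echo ' ';  // PHP only notices the abort when it tries to send output
flush();

if (connection_aborted()) {
    $PDO->rollBack();
    exit;
}
$PDO->commit();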
I have a PHP script that connects to my other server using file_get_contents, then retrieves and displays the data.
//authorize connection to the ext. server
$xml_data=file_get_contents("http://server.com/connectioncounts");
$doc = new DOMDocument();
$doc->loadXML($xml_data);
//variables to check for name / connection count
$wmsast = $doc->getElementsByTagName('Name');
$wmsasct = $wmsast->length;
//start the loop that fetches and displays each name
for ($sidx = 0; $sidx < $wmsasct; $sidx++) {
$strname = $wmsast->item($sidx)->getElementsByTagName("WhoIs")->item(0)->nodeValue;
$strctot = $wmsast->item($sidx)->getElementsByTagName("Sessions")->item(0)->nodeValue;
/**************************************
Display only one instance of their name.
strpos will check to see if the string contains a _ character
**************************************/
if (strpos($strname, '_') === FALSE) {
//Leftovers. This section contains the names that are only the BASE (no _jibberish, etc)
echo $sidx . " <b>Name: </b>" . $strname . " Sessions: " . $strctot . "<br />";
}//end display base check; names containing '_' are duplicates and ignored
}//end name loop
From the client side, I'm calling this script using jQuery's load() and executing it on mousemove():
$(document).mousemove(function(event){
$('.xmlData').load('./connectioncounts.php').fadeIn(1000);
});
I've also experimented with setInterval, which works just as well:
var auto_refresh = setInterval(function () {
    $('.xmlData').load('./connectioncounts.php').fadeIn("slow");
}, 1000); // refresh every 1000 ms = 1 second
It all works and the contents appear in "real time", but I can already notice an effect on performance and it's just me using it.
I'm trying to come up with a better solution but falling short. The problem with what I have now is that each client would be forcing the script to initiate a new connection to the other server, so I need a solution that will consistently keep the information updated without involving the clients making a new connection directly.
One idea I had was to use a cron job that executes the script, and modify the PHP to log the contents. Then I could simply get the contents of that cache from the client side. This would mean that there is only one connection being made instead of forcing a new connection every time a client wants the data.
The only problem is that the cron would have to be run frequently, like every few seconds. I've read about people running cron this much before, but every instance I've come across isn't making an external connection each time as well.
Is there any option for me other than cron to achieve this or in your experience is that good enough?
How about this:
When the first client reads your data, you retrieve them from the remote server and cache them together with a timestamp.
When the next clients read the same data, you check how old the cached contents are, and only if they are older than 2 seconds (or whatever) do you access the remote server again.
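A sketch of that cache using a plain file (the cache path and the 2-second window are illustrative):
<?php
// connectioncounts.php, fronted by a simple file cache
$cacheFile = sys_get_temp_dir() . '/connectioncounts.cache';
$maxAge    = 2; // seconds

if (!is_file($cacheFile) || time() - filemtime($cacheFile) > $maxAge) {
    // Cache is missing or stale: hit the remote server once and refresh it
    $xml_data = file_get_contents("http://server.com/connectioncounts");
    file_put_contents($cacheFile, $xml_data, LOCK_EX);
} else {
    // Fresh enough: serve the cached copy, no remote connection made
    $xml_data = file_get_contents($cacheFile);
}
// ...parse $xml_data with DOMDocument as before...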
Make yourself familiar with APC as a global storage. Once you have fetched the file, store it in the APC cache and set a timeout. You only need to connect to the remote server when a page is not in the cache or is outdated.
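The same idea with APC as the store (a sketch; the key name and 2-second TTL are illustrative):
<?php
$xml_data = apc_fetch('connectioncounts');
if ($xml_data === false) {
    // Not cached or expired: fetch once and cache for 2 seconds
    $xml_data = file_get_contents("http://server.com/connectioncounts");
    apc_store('connectioncounts', $xml_data, 2);
}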
Mousemove: are you sure? That generates gazillions of parallel requests unless you set a client-side semaphore to stop issuing further AJAX queries.
When trying to write to the client, the message is getting buffered, and in some cases, it's not being written at all.
CURRENT STATUS:
When I telnet into the server, the Server Ready: message is readily printed as expected.
When I send random data (other than "close"), the server's terminal nicely shows progress every second, but the client's output waits until after all the sleeping and then prints it all at once.
Most importantly, when sending "close", it just waits the obligatory second and then closes without ANY writeout on the client.
GOAL:
My main goal is for a quick message to be written to the client prior to closing the connection.
CODE:
// server.php
require __DIR__ . '/vendor/autoload.php'; // Composer autoloader for react/react

$loop = React\EventLoop\Factory::create();
$socket = new React\Socket\Server($loop);
$socket->on('connection', function ($conn)
{
$conn->write("Server ready:\n");
$conn->on('data', function ($data) use ($conn)
{
$data = trim($data);
if( $data == 'close')
{
$conn->write("Bye\n");
sleep(1);
$conn->close();
}
for ($i = 1; $i<5; $i++) {
$conn->write(". ");
echo '. ';
sleep(1);
}
$conn->write(".\n");
echo ".\n";
$conn->write("You said \"".$data."\"\n");
});
});
$socket->listen(1337, '127.0.0.1');
$loop->run();
SUMMARY:
Why can't I get anything written to the client before closing?
The problem you are encountering is that you are forgetting about the event loop that drives ReactPHP. I ran into this issue recently when building a server, and after following the code around I found out two things that should help you solve your problem.
If you close the connection right after writing to it, it simply closes before it has a chance to write. The correct call for writing something to the client and THEN closing the connection is $conn->end('msg'). If you follow that chain of code the behaviour becomes clear: first it writes to the connection just as if you had run $conn->write('msg'), but then it registers a new handler for the full-drain event, and that handler simply calls $conn->close(). The full-drain event is only dispatched once the write buffer is completely emptied, so end() waits for the write to finish before it closes the connection.
The drain and full-drain events are only dispatched after writing to a stream. full-drain occurs when the write buffer is completely empty; drain is dispatched once the write buffer has emptied past its softLimit, which by default is 2048 bytes.
The reason your writes are not making it through is that $conn->write('msg') only adds the string to the write buffer; it does not actually write. The event loop needs to run before the stream gets time to write. Your use of sleep() is incorrect with React because it blocks execution at that line, and in React you don't want to block any code from executing. When a task is done, let your function return, and execution goes back to the React main event loop. If you want things to run at certain intervals, or simply on the next iteration of the loop, schedule callbacks on the main event loop with $loop->addTimer($seconds, $callback), $loop->addPeriodicTimer($seconds, $callback), $loop->nextTick($callback) or $loop->futureTick($callback).
Ultimately it is because you are programming without acknowledging that we are still in a blocking thread. Anything in your code that blocks, blocks the entire React event loop, which in turn blocks everything React does for you. Give processing time back to the loop so it can perform the reads and writes you have queued up. Only one iteration of the loop needs to occur for the write buffer to begin emptying (depending on the size of the buffer it may or may not write it all out).
If your goal here is just to fix the connection-closing bit, switch your call to $conn->end('msg') instead of the write -> close. However, I believe the other loop you have printing the dots also does not behave the way you expect or desire it to. As its purpose is less clear, if you can tell me what your goal was for it I can possibly help you restructure that step as well; in the meantime, here is a rough non-blocking restructure.
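A sketch under those assumptions, using the same (older) React API as the question: $conn->end() replaces the write/sleep/close sequence, and a periodic timer replaces the blocking dot loop.
$loop = React\EventLoop\Factory::create();
$socket = new React\Socket\Server($loop);
$socket->on('connection', function ($conn) use ($loop) {
    $conn->write("Server ready:\n");
    $conn->on('data', function ($data) use ($conn, $loop) {
        $data = trim($data);
        if ($data == 'close') {
            $conn->end("Bye\n"); // write, wait for the buffer to drain, then close
            return;
        }
        // Print a dot every second without blocking the event loop
        $i = 0;
        $loop->addPeriodicTimer(1, function ($timer) use (&$i, $conn, $loop, $data) {
            $conn->write('. ');
            if (++$i >= 4) {
                $loop->cancelTimer($timer);
                $conn->write(".\nYou said \"" . $data . "\"\n");
            }
        });
    });
});
$socket->listen(1337, '127.0.0.1');
$loop->run();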
I've recently discovered EventSource. YUI3 has a Gallery module that normalises the behaviour and provides fallbacks, which is what I've chosen to go with in my example as I use that framework already.
So I've searched about quite a bit, read many blogs, posts and examples, all of which show pretty much the same thing: How to set up basic SSE events. I now have 6 examples of open/message/error/close events firing.
What I don't have (what I'd hoped this link was going to give me) is an example of how to fire SSE events that are more useful to my application; I'm trying to fire one called 'update'.
Here's is my basic test page: http://codefinger.co.nz/public/yui/eventsource/test.php (it might as well be an html file, there's no php code in here yet)
And here's the 'message.php' in the EventSource constructor:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache'); // recommended to prevent caching of event data.
/**
 * Constructs the SSE data format and flushes that data to the client.
 *
 * @param string $id  Timestamp/id of this connection.
 * @param string $msg Line of text that should be transmitted.
 */
function sendMsg($id, $msg) {
echo "id: $id" . PHP_EOL;
echo "data: $msg" . PHP_EOL;
echo PHP_EOL;
ob_flush();
flush();
}
while(true) {
$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
sleep(10);
}
// I was hoping calling this file with a param might allow me to fire an event,
// which it does dutifully, but no browsers register the 'data : update' - though
// I do see the response in Firebug.
if( $_REQUEST['cmd'] ){
sendMsg($serverTime, $_REQUEST['cmd'] );
}
?>
From the live example above, you can see that I've tried to use YUI's io module to send a request, with a param, to fire my 'update' event when I click the 'update' button. It seems to work, as you can see in Firebug's Net panel, but my event isn't handled. (I realise the script above will run that loop again; I just want to get my event handled in connected browsers, then I'll remove/clean up.)
Am I doing this part wrong? Or is there something more fundamental I'm doing wrong? I'm trying to push events in response to my UI's state changing.
This SO question seemed to come close, #tomfumb commented that his next question was going to be "how to send new events to the client after the initial connection is made - now I see that the PHP just has to never stop executing." But surely I'd only send events as they happen... and not continuously...
There are several issues in your approach:
The server-side code that reads the cmd parameter is unreachable because of the infinite loop that sends event data to the client.
You are trying to send an event from the client to the server. It is right there in the specification's name, Server-Sent Events: the server is the sender and the client is the receiver of events. You have options here:
Use the appropriate specification for the job called Web Sockets which is a two-way communication API
Write the logic that makes the desired type of communication possible
If you choose to stay with the SSE API I see two possible scenarios
Reuse the same EventSource connection and keep a pool of connections on the server. When the user sends a subsequent XMLHttpRequest with the update command, take the EventSource connection from the pool that was made by this visitor and send a response through it that specifies your custom event type (the default type is message). It is important to avoid entering the infinite loop that would make another EventSource connection to the client; the client would not handle it, because the request was made with XMLHttpRequest and not with EventSource.
Make all requests with EventSource. Before making a new EventSource request, close the previous one; you can do this from the client or from the server. On the server, check the parameters and then send the data to the client.
Also, you could use XMLHttpRequest with (long) polling and thus avoid the need for EventSource. Given the simplicity of your example, I can't see a reason to mix the two types of requests.
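For reference, a rough sketch of the server side of the first scenario, with the command relayed through APC rather than a real connection pool (the pendingCmd key is an illustrative assumption). Note that on the client, a custom event type is only delivered to a listener registered for that type, e.g. source.addEventListener('update', ...); the default onmessage handler will not see it.
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

while (true) {
    $cmd = apc_fetch('pendingCmd'); // left in APC by a separate XHR handler
    if ($cmd !== false) {
        apc_delete('pendingCmd');
        echo "event: update" . PHP_EOL;           // custom event type
        echo "data: " . $cmd . PHP_EOL . PHP_EOL;
    } else {
        echo "data: server time: " . date("h:i:s") . PHP_EOL . PHP_EOL;
    }
    ob_flush();
    flush();
    sleep(2);
}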
I have a simple file upload service, written out in PHP, which also includes a script that controls download speeds by sending limited-sized packets when a user requests a download from this site.
I want to implement a system to limit parallel/simultaneous downloads to 1 per user if they are not premium members. In the download script above, I can use a MySQL database to store a record that has: (1) the user ID; (2) the file ID; (3) when the download was initiated; and (4) when the last packet was sent, which is updated each time this is done (if DL speed is limited to 150 kB/sec, then after every 150 kB, this record is updated, etc.).
However, thus far, the database record will only be deleted once the download has successfully completed — at the end of the script, after the download has been fully served, the record is deleted from the table:
insert DB record;
while (download is being served) {
serve packet of data;
update DB record with current date/time;
}
// Download is now complete
delete DB record;
How will I be able to detect that a download has been cancelled? Would I just have to have a cron job (or something similar) detect whether an existing download record is more than X minutes/hours old? Or is there something else I can do that I'm missing?
I hope I've explained this well enough. I don't think posting specific code is required; I'm interested more in the logistics of how/whether this can be done. If specifics are needed, I will gladly provide them.
NOTE: I know how to detect if a file was successfully downloaded; I need to know how to detect if it was cancelled, aborted, or otherwise stopped (and not just paused). This will be useful in stopping parallel downloads, as well as preventing a situation where the user cancels Download #1 and tries to initiate Download #2, only to find that the site claims he is still downloading file #1.
EDIT: You can find my download script here: http://codetidy.com/1319/ — it already supports multi-part downloads and download resuming.
<?php
class DownloadObserver
{
protected $file;
public function __construct($file) {
$this->file = $file;
}
public function send() {
// -> note in DB you've started
readfile($this->file);
}
public function __destruct() {
// download is done, either completed or aborted
$aborted = connection_aborted();
// -> note in DB
}
}
$dl = new DownloadObserver("/tmp/whatever");
$dl->send();
should work just fine. No need for a shutdown_function or any funky self-built connection observation.
You will want to check out the following functions: connection_status(), connection_aborted() and ignore_user_abort() (see the connection handling section of the PHP manual for more info).
Although I can't guarantee the reliability (it's been a while since I've played around with it), with the right combination you should be able to accomplish what you want. There are a few caveats when working with these though, the big one being that if something goes wrong you could end up with stranded PHP scripts running on the server, requiring you to kill Apache to stop them.
The following should give you a good idea of how to do it (adapted from the PHP code examples and a couple of the comments):
<?php
//Set PHP not to cancel execution if the connection is aborted
//and drop the time limit to allow for big file downloads
ignore_user_abort(true);
set_time_limit(0);
while(true){
//See the ignore_user_abort() docs re having to send data
echo chr(0);
//Make sure the data gets flushed properly or the connection check won't work
ob_flush();
flush();
//Check the connection status and exit the loop if aborted
if(connection_status() != CONNECTION_NORMAL || connection_aborted()) break;
//Just to provide some spacing in this example
sleep(1);
}
file_put_contents("abort.txt", "aborted\n", FILE_APPEND);
//Never hurts to ensure that the script halts execution
die();
Obviously, for your use case, the data being sent would simply be the next chunk of download data (just make sure you flush the buffer properly so the data is actually sent). As far as I'm aware, there is no way to distinguish between pausing and aborting/stopping. Pause/resume functionality (and multi-part downloading, i.e. how download managers accelerate downloads) relies on the "Range" header, which basically requests byte x to byte y of the file. So if you want to allow resumable downloads, you'll have to deal with that too.
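A minimal sketch of handling a single-range request of the form "bytes=x-y" (a real download script needs more validation, e.g. rejecting ranges past the end of the file):
<?php
$file = '/path/to/file';
$size = filesize($file);
$start = 0;
$end   = $size - 1;

if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    $start = (int) $m[1];
    if ($m[2] !== '') {
        $end = (int) $m[2];
    }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}
header('Content-Length: ' . ($end - $start + 1));

$fp = fopen($file, 'rb');
fseek($fp, $start);
// ...serve packets from $start to $end in the throttled loop...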
There is no HTTP "cancel" signal that is sent by default. So it looks like you will need to decide on a timeout: the length of time a connection can sit without sending/receiving another packet. If you are sending rather small packets (as I presume you are), keep the timeout short for best effect.
In your while condition you will need to check the age of the last timestamp update; if it's too old, stop sending the file. The same check also lets a new request decide whether an old download record is dead, as sketched below.
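A sketch of that check (the table, column names, and 30-second timeout are illustrative assumptions): before serving a new download, an existing record whose last-packet timestamp is stale is treated as cancelled and cleaned up.
<?php
$timeout = 30; // seconds without a served packet => download considered dead

$stmt = $PDO->prepare(
    'SELECT `last_packet_at` FROM `active_downloads` WHERE `user_id` = :uid'
);
$stmt->execute(array(':uid' => $userId));
$lastPacketAt = $stmt->fetchColumn();

if ($lastPacketAt !== false && time() - strtotime($lastPacketAt) <= $timeout) {
    exit('You already have an active download.');
}
// Stale or missing record: remove it and register this download instead
$PDO->prepare('DELETE FROM `active_downloads` WHERE `user_id` = :uid')
    ->execute(array(':uid' => $userId));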
Scenario is as follows:
A call to a specified URL, including the Id of a known SearchDefinition, should create a new Search record in a DB and return the new Search.Id.
Before returning the Id, I need to spawn a new process / start async execution of a PHP file which takes in the new Search.Id and does the searching.
The UI then polls a 3rd PHP script to get status of the search (2nd script keeps updating search record in the Db).
This gives me a problem around spawning the 2nd PHP script in an async manner.
I'm going to be running this on a 3rd party server so have little control over permissions. As such, I'd prefer to avoid a cron job/similar polling for new Search records (and I don't really like polling if I can avoid it). I'm not a great fan of having to use a web server for work which is not web-related but to avoid permissions issues it may be required.
This seems to leave me 2 options:
Calling the 1st script returns the Id and closes the connection, but the script continues executing and actually does the search (i.e. stick script 2 at the end of script 1, but close the response at the point where it is appended)
Launch a second PHP script in an asynchronous manner.
I'm not sure how either of the above could be accomplished. The first still feels nasty.
If it's necessary to use cURL or something similar to fake a web call, I'll do it, but I was hoping for some kind of convenient multi-threading approach where I simply spawn a new thread and point it at the appropriate function, with permissions inherited from the caller (i.e. the web server user).
I'd rather use option 1. This would also keep related functionality closer to each other.
Here is a hint for how to send something to the user and then close the connection while continuing to execute:
(by tom ********* at gmail dot com, source: http://www.php.net/manual/en/features.connection-handling.php#93441)
<?php
ob_end_clean();
header("Connection: close\r\n");
header("Content-Encoding: none\r\n");
ignore_user_abort(true); // optional
ob_start();
echo ('Text user will see');
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // Strange behaviour: the output will not be sent
flush();        // unless both of these calls are made!
ob_end_clean();
//do processing here
sleep(5);
echo('Text user will never see');
//do some processing
?>
Swoole: an asynchronous & concurrent extension for PHP.
https://github.com/matyhtf/swoole
Features:
- event-driven
- fully asynchronous, non-blocking
- multi-threaded reactor
- multi-process workers
- millisecond timers
- async MySQL
- async tasks
- async file system read/write
- async DNS lookup