Does anybody know how to update a MySQL database when a user exits the browser or navigates away from a page? I have a value set to 1 and want it set to 0 in such an event. I've been banging my head against the wall with this for weeks and any help would be massively appreciated, thanks. I'm using PHP. I have tried it on body unload but it does not do what I want.
Don't do that. You would be relying on the browser to send you a notification (through the beforeunload event) or some other mechanism. What happens if the browser crashes, or the user puts the computer to sleep/hibernate?
You may consider other options, such as a PING! method that the browser sends through Ajax calls every minute or so to say it's still alive. All you'd need is a MySQL procedure that scans for any "alive" entry whose last_seen TIMESTAMP is more than 1:30 min old and sets its "alive" flag to 0. This procedure could be called on random requests (much like the PHP session cleanup mechanism).
This ensures the user is still there: you receive PING! requests periodically, and when they stop you can safely say the user won't PING! anymore (with a slight, but acceptable, delay). This also keeps the mechanism secure on the server side.
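A minimal sketch of the server side of that idea, assuming a `users` table with `alive` and `last_seen` columns (the table, column, and session key names are all placeholders, not from the original post):

```php
<?php
// ping.php - called by the browser via Ajax every minute or so.
// Assumes a logged-in user id in $_SESSION['uid'] and PDO credentials below.
session_start();

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Record the heartbeat for the current user.
$stmt = $pdo->prepare(
    'UPDATE users SET alive = 1, last_seen = NOW() WHERE id = ?'
);
$stmt->execute([$_SESSION['uid']]);

// Opportunistic cleanup: mark anyone silent for more than 90 seconds as gone.
// Run it on a random fraction of requests, like PHP's session GC does.
if (mt_rand(1, 100) === 1) {
    $pdo->exec(
        'UPDATE users SET alive = 0
         WHERE alive = 1 AND last_seen < NOW() - INTERVAL 90 SECOND'
    );
}
```

The random-cleanup trick means you don't need a cron job, at the cost of a slightly unpredictable delay before a dead entry is cleared.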
PHP can only change things when a page is ACCESSED. From PHP's perspective, as soon as the script ends, the transaction is over and it has nothing more to do with that page.
The only way to do this is keeping the connection open, the script running, and use something like this:
<?php
function shutdown()
{
    // here you change the value in your db...
}
register_shutdown_function('shutdown');

// send the data to the client
echo "anything";
flush();

// sleep for a long time... 99999 seconds...
sleep(99999);
?>
Related
I've got a name lookup box that operates by your typical ajax requests. Here's the general flow of the Javascript that fires every time a letter is pressed:
If ajax request already open, then abort it.
If timeout already created, destroy it.
Set new timeout to run the following in half a second:
Send string to 'nameLookup.php' via ajax
Wait for response
Display results
The issue is that nameLookup.php is very resource heavy. In some cases up to 10,000 names are being pulled from an SQL database, decrypted, and compared against the string. Under normal circumstances requests can take anywhere from 5 to 60 seconds to return.
I completely understand that when you abort a request on the client side, the server keeps working and still sends back the result; it's just the client that knows to ignore the response. But the server is getting hung up working on all of these requests.
So if you do:
Request 1
Abort Request 1
Request 2
Abort Request 2
Request 3
Wait for response from Request 3
The server is either not even working on Request 3 until it's finished with 1 and 2, or it's so hung up on Requests 1 and 2 that Request 3 takes an extra long time.
I need to know how to tell the server to stop working on Request 1 and 2 so I can free up resources for it to work on Request 3.
I'm using Javascript & jQuery on the client side. PHP/Apache and SQL on the server side.
Store a boolean value in the DB in a table, or in the session.
Have your resource intensive script check periodically that value to see if it should continue or not. If the DB says to stop, then your script cancels itself (by calling return; in the current function for example).
When you want to cancel, instead of calling abort();, make an AJAX request to set that value to false.
Next time the resource checks that value it will see that it has to stop.
Potential limitations:
1. Your script needs a way to check the DB periodically.
2. Depending on how often the script checks the DB, it may take a few seconds to effectively kill the script.
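A rough sketch of that pattern, assuming a `requests` table with a `cancelled` flag keyed by a request id the client generates, and a cancel endpoint that sets the flag (the table, parameter, and variable names are invented for illustration):

```php
<?php
// nameLookup.php - long-running search that polls a cancellation flag.
// The client passes ?reqId=...; a separate cancel request sets cancelled=1.
$pdo   = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$reqId = $_GET['reqId'];

$check = $pdo->prepare('SELECT cancelled FROM requests WHERE id = ?');

foreach ($allNames as $i => $name) {   // $allNames: the 10,000 pulled rows
    // Poll the flag every 100 rows to keep the overhead low.
    if ($i % 100 === 0) {
        $check->execute([$reqId]);
        if ($check->fetchColumn()) {
            return; // the client gave up; stop wasting CPU
        }
    }
    // ... decrypt $name and compare it against the search string ...
}
```

Checking every N rows rather than every row is the usual trade-off between cancellation latency and added query load.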
I think there is something missing from this question. What are the triggers for doing the requests? You might be trying to solve the wrong problem.
Let me elaborate. If your lookup box is actually doing autocompletion of some kind and starts a new search every time the user presses a key, then you are going to have the issue you describe.
The solution in that case is not killing the processes; it lies in not starting them. So you might make some decisions, like not searching unless there are at least a few characters to search with - let's say three. You might then wait until you can be reasonably sure the user has finished typing before sending off the request - let's say 1 second.
Now, someone looking for all the Pauls in your list of names will send off one search when they type 'pau' and pause for 1 second, instead of three searches for 'p', then 'pa', then 'pau'... so there is no need to kill anything.
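The wait-before-sending idea is the usual debounce pattern. A minimal sketch in plain JavaScript (the `search` wiring and the endpoint name are made up for illustration):

```javascript
// Wrap a function so it only fires after `delay` ms with no further calls.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Usage sketch: only search after 1s of no typing and at least 3 chars.
const search = debounce(function (term) {
  if (term.length >= 3) {
    // $.get('nameLookup.php', { q: term }, displayResults); // jQuery version
  }
}, 1000);
```

Bind `search` to the keyup handler instead of firing the Ajax request directly, and the 'p', 'pa', 'pau' sequence collapses into a single request.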
I've come up with an awesome solution that I've tested and it's working beautifully. It's just a few lines of PHP code to put in whatever files are being resource intensive.
This solution utilizes the Process Identifier (PID) on the server. We can use two PHP functions: posix_getpid() to get the current PID and posix_kill() to signal another PID. This also assumes that you have already called session_start() somewhere else.
Here's the code:
//if any existing PIDs to kill, go through each
if (!empty($_SESSION['pid'])) foreach ($_SESSION['pid'] as $i => $pid) {
    //signal 0 only tests for existence; send SIGTERM (15) to actually stop it.
    //if the signal was delivered, unset this PID so we don't signal it again
    if (posix_kill($pid, 15)) unset($_SESSION['pid'][$i]);
}
//now that all others are killed, we can store the current PID in the session
$_SESSION['pid'][]=posix_getpid();
//close the session now, otherwise the PID we just added won't actually be saved to the session until the process ends.
session_write_close();
Couple things to note:
posix_kill() takes two values. The first is the PID, and the second is one of the signal constants from this list. Note that signal 0 doesn't actually kill anything; it only checks whether the process exists (which is why it returns true). To really terminate the process, send SIGTERM (15), or SIGKILL (9) if it won't shut down gracefully.
calling session_write_close() before the resource-intensive work starts is crucial. Otherwise the new PID that has been saved to the session won't ACTUALLY be written until all of the page's processing is done, which means the next process won't know to cancel the one that's still going on and taking forever.
Part of the PHP web app I'm developing needs to do the following:
Make an AJAX request to a PHP script, which could potentially take from one second to one hour, and display the output on the page when finished.
Periodically update a loading bar on the web page, defined by a status variable in the long running PHP script.
Allow the long running PHP script to detect if the AJAX request is cancelled, so it can shut down properly and in a timely fashion.
My current solution:
client.php: Creates an AJAX request to request.php, followed by one request per second to status.php until the initial request is complete. Generates and passes along a unique identifier (uid) in case multiple instances of the app are running.
request.php: Each time progress is made, saves the current progress percentage to $_SESSION["progressBar"][uid]. (It must run session_start() and session_write_close() each time.) When finished, returns the data that client.php needs.
status.php: Runs session_start(), returns $_SESSION["progressBar"][uid], and runs session_write_close().
Where it falls short:
My solution fulfills my first two requirements. For the third, I would like to use connection_aborted() in request.php to know if the request is cancelled. BUT, the docs say:
PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client. Simply using an echo statement does not guarantee that information is sent, see flush().
I could simply give meaningless output, but PHP must send a cookie every time I call session_start(). I want to use the same session, BUT the docs say:
When using session cookies, specifying an id for session_id() will always send a new cookie when session_start() is called, regardless of if the current session id is identical to the one being set.
My ideas for solutions, none of which I'm happy with:
A status database, or writing to temp files, or a task management system. This just seems more complicated than what I need!
A custom session handler. This is basically the same as the above solution.
Stream both progress data and result data in one request. This solves everything, but I would essentially be re-implementing AJAX. That can't be right.
Please tell me I'm missing something! Why doesn't PHP know immediately when a connection terminates? Why must PHP resend the cookie, even when it is exactly the same? An answer to any of these questions will be a big help!
My sincere thanks.
Why not set a second session variable, consisting of the unique request identifier and an access timestamp, from status.php?
If the client is closed, it stops polling status.php, the session variable stops being updated, and request.php can trigger a clean shutdown when the variable hasn't been updated for a certain amount of time.
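A sketch of how request.php could use that variable, assuming status.php records $_SESSION["lastPoll"][$uid] = time() on every poll (that key name, the 10-second threshold, and $workItems are all placeholders):

```php
<?php
// request.php - long-running job that shuts down if the client stops polling.
session_start();
$uid = $_GET['uid'];
$_SESSION['lastPoll'][$uid] = time();
session_write_close();

$total = count($workItems);
foreach ($workItems as $n => $item) {
    // ... do one chunk of work on $item ...

    // Reopen the session briefly: record progress, read the poll timestamp.
    session_start();
    $_SESSION['progressBar'][$uid] = (int) (100 * $n / $total);
    $lastPoll = isset($_SESSION['lastPoll'][$uid])
        ? $_SESSION['lastPoll'][$uid] : 0;
    session_write_close();

    // If the client hasn't polled for 10 seconds, assume it's gone.
    if (time() - $lastPoll > 10) {
        exit; // clean, timely shutdown
    }
}
```

Opening and closing the session around each chunk matters: holding the session open for the whole run would lock status.php out entirely.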
I am trying create a small web application that allows a user to "login" and "logout."
What I am currently having a problem with is allowing the client to send constant "heartbeats" or messages to the server to notify that it is still active.
This is more of a logical question. What I want to do is have a while(1) loop in PHP that checks if n heartbeats have been skipped. I still want the client and server to be able to interact while this loop is going on (essentially I want the server to behave as if it has a separate "check_for_heartbeats" thread).
How does one accomplish this using php? I am running XAMPP at the moment. Any help would be much appreciated.
Edit: To clarify, what I want to do is be able to catch a browser close event even on instances where the window.unload event won't fire (e.g. a client gets disconnected from the internet). In this case, having a thread to monitor heartbeats seems to be the most intuitive solution, though I'm not sure how to make it happen in php.
Edit 2: isLoggedIn() is just a helper function that checks to see if a session boolean variable ($_SESSION['is_logged_in']) is set.
Edit Final: Okay, so I now understand exactly what the comments and responses were saying. So to paraphrase, here is the potential solution:
Have Javascript code to send "heartbeats" to the server. The server will add a timestamp associated with these beats.
Modify the database to hold these time stamps
Query the entire "timestamps" table (more likely a 'users' table with a 'timestamp' attribute), and see if the difference between NOW and last timestamp is greater than some threshold.
"Log off" any users passed this threshold.
The only issue is if there is just one user logged in or if all users lose connection at the same time - but in these cases, no one else will be there to see that a user has lost connection.
This is a combination of multiple responses, but I think chris's response takes care of the majority of the issue. Thank you to both chris and Alex Lunix for the helpful contributions. :D
Here is a code snippet for a better explanation
Server Side:
function checkBeats()
{
    while (isLoggedIn())
    {
        // it's been > 30 secs since last heartbeat
        if ((time() - $_SESSION['heartbeat']) > 30)
        {
            logout();
            break;
        }
        sleep(5); // don't busy-wait; re-check every few seconds
    }
}
What I usually do is call a PHP file using JavaScript (jQuery) and update a database or whatever you like. This question might answer yours: What's the easiest way to determine if a user is online? (PHP/MySQL)
You could use ajax to heartbeat a script that changes the heartbeats session variable, and just at the top of every script do this check (put it in a function and call that of course):
// it's been > 30 secs since last heartbeat
if((time() - $_SESSION['heartbeat']) > 30)
{
logout();
}
Edit:
If you want the database to reflect that status instantly instead of when they next visit the page, you'll need to use MySQL. Without using another program (such as a java program) to check the database the only thing I can think of is to add this at the top of every page (in a function that gets called of course):
mysql_query("UPDATE `users` SET `loggedin`=0 WHERE heartbeat < " . (time() - 30));
Which would update every user, which means the accuracy of the loggedin value would be set by the frequency of page views.
I have a PHP script something like:
$i = 0;
for (; $i < 500; ++$i) {
    //Do some operation with files numbered 0 to 500;
}
The thing is, the script works and displays the end results, but the operation takes a while and watching a blank screen can be frustrating. I was thinking if there is some way I can continuously update the page at the client's end, detailing which file is currently being worked upon. That is, can I display and continuously update what is the current value of $i?
The Solution
Thanks everyone! The output buffering is working as suggested. However, David has offered valuable insight and am considering that approach as well.
You can buffer and control the output from the PHP script.
However, you may want to consider the scalability of this design. In general, heavy processes shouldn't be done online. Your particular case may be an edge in that the wait is acceptable, but consider something like this as an alternative for an improved user experience:
The user kicks off a process. This can be as simple as setting a flag on a record in the database or inserting some "to be processed" records into the database.
The user is immediately directed to a page indicating that the process has been queued.
An offline process (either kicked off by the PHP script on the server or scheduled to run regularly) checks the data and does the heavy processing.
In the meantime, the user can refresh the page (manually, by navigating elsewhere and coming back to check, or even use an AJAX polling mechanism to update the page) to check the status of the processing. In this case, it sounds like you'd have several hundred records in a database table queued for processing. As each one finishes, it can be flagged as done. The page can just check how many are left, which one is current, etc. from the data.
When the processing is completed, the page shows the result.
In general this is a better user experience because it doesn't force the user to wait. The user can navigate around the site and check back on progress as desired. Additionally, this approach scales better. If your heavy processing is done directly on the page, what happens when you have many users or the data processing load increases? Will the page start to time out? Will users have to wait longer? By making the process happen outside of the scope of the website you can offload it to better hardware if needed, ensure that records are processed in serial/parallel as business rules demand (avoid race conditions), save processing for off-peak hours, etc.
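The kickoff side of that flow might look like this (the `jobs` table, its columns, and the page names are invented for illustration):

```php
<?php
// enqueue.php - queue the work, then send the user to a status page.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Queue 500 file-processing jobs; an offline worker picks them up later.
$stmt = $pdo->prepare(
    "INSERT INTO jobs (file_no, status) VALUES (?, 'queued')"
);
for ($i = 0; $i < 500; $i++) {
    $stmt->execute([$i]);
}

// status.php just counts rows still marked 'queued' to show progress.
header('Location: status.php');
```

The page request finishes in milliseconds regardless of how long the processing takes, which is the scalability win described above.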
Check out PHP's Output Buffering.
Try to use:
flush();
http://php.net/manual/ru/function.flush.php
Try the flush() function. Calling this function forces PHP to send whatever output it has so far to the client, instead of waiting for the script to end.
However, some web servers will only send the output once the entire page is done being built, so calling flush() would have no effect in this case.
Also, browsers themselves buffer input, so you may run into problems there. For example, certain versions of IE won't start displaying the page until 256 bytes have been received.
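Putting output buffering and flush() together, a minimal progressive-progress sketch (whether the bytes reach the screen immediately still depends on the server and browser buffering just described; the 256-byte padding works around the IE behavior):

```php
<?php
// Stream progress to the browser while the loop runs.
ob_implicit_flush(true);
while (ob_get_level() > 0) {
    ob_end_flush(); // drop any buffering layers PHP or the server added
}

for ($i = 0; $i < 500; $i++) {
    // ... do some operation with file number $i ...
    echo str_pad("Processing file $i...<br>", 256), "\n";
    flush();
}
echo "Done.";
```

Each iteration pushes its line to the client instead of waiting for the script to end, so the user sees the current value of $i as the loop runs.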
I currently have a class Status, and I call it throughout my code as I perform various tasks; for example, when I upload an image I call $statusHandler = new Status(121), and when I resize it I call $statusHandler = new Status(122).
Every statusID corresponds to a certain text stored in the database.
Class Status retrieves the text and stores it in $_SESSION.
I have another file, getstatus.php, that returns the current status.
My idea was to call getstatus.php every 500 milliseconds with ajax (via jQuery), and append the text to the webpage.
This way the user gets (almost) real-time data about what calculations are going on in the background.
The problem is that I only seem to be getting the last status.
I thought that it was just a result of things happening too quickly, so I ran sleep after calling new Status. This delayed the entire output of the page, meaning PHP didn't output any text until it had finished running through the code.
Does PHP echo data only after it finishes running through all the code, or does it do it real-time, line-by-line?
If so, can anyone offer any workarounds so that I can achieve what I want?
Thanks.
The default session implementation (flat file) sets a file lock on the session-file when session_start() is called. This lock is not released until the session mechanism shuts down, i.e. when the script has ended or session_write_close() is executed. Another request/thread that wants to access the same session has to wait until this lock has been released.
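That lock is why getstatus.php only ever sees the last status: the worker script holds the session for its whole run, so the poller can't read anything until it finishes. A sketch of writing each status without holding the lock (hedged; this assumes the Status class can be changed to reopen and close the session per update, and note the cookie-resend caveat quoted earlier if output has already been flushed):

```php
<?php
// In the worker: write a status, then release the session lock immediately
// so getstatus.php can read it while the heavy work continues.
function setStatus($text)
{
    session_start();           // re-acquire the session (and its file lock)
    $_SESSION['status'] = $text;
    session_write_close();     // release the lock right away
}

setStatus('Uploading image...');
// ... upload work ...
setStatus('Resizing image...');
// ... resize work ...
```

getstatus.php should do the same: session_start(), read the value, session_write_close(), so neither side blocks the other for longer than a single read or write.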