I'm working on a project that involves multiple users and the ability to alter data.
Basically, when a user lands on a page, he can enter information for a particular entry in a database. While he's on this page, no other user can access that particular entry. When he finishes, the entry becomes open again. Now, restricting access to the entry is easy. I set it up so when the particular entry is selected, a value in the db states it is "Inactive" and no one else can get to that page. On the page itself, there's a "leave" and a "submit" button. Either of these will set the entry back to active.
The trouble I have is if the user decides to click on a different link, close the tab or somehow navigate away. How can I structure it to restore the entry back to an active state? I was looking into the "onunload" event and potentially using it to make an AJAX call. Is this the most logical route to take or is there something similar I'm missing due to my limited knowledge? Thanks for all your help.
I wouldn't go the onunload way (at least not exclusively), as it's not reliable in case of a crash, power loss, etc. Chances are that entries could be locked "forever" in such a case.
I'd embed an Ajax method in the page, which periodically requests a PHP script to "confirm" that the page is still alive.
You can use such "confirmation" requests to build and update a table that tracks current locks, with something like a lock_id which uniquely identifies the "entry" being locked, a session_id to uniquely identify the browser session holding that lock, and an expire_timestamp marking the time at which the "entry" should be unlocked again if no more "confirmation" requests from that session_id come in to raise its expire_timestamp.
In combination with a cron job that periodically deletes records which have exceeded their expire_timestamp, that should be a more reliable way to do what you are trying to achieve.
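A minimal sketch of such a "confirmation" endpoint, assuming a locks table with lock_id, session_id and expire_timestamp columns and a PDO connection (all table, column and file names here are placeholders, not anything from the question):

    <?php
    // confirm_lock.php - called every ~30s by the page via Ajax to keep the lock alive.
    session_start();
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    $lockId = (int) $_POST['lock_id'];

    // Push the expiry forward only if this browser session still owns the lock.
    $stmt = $pdo->prepare(
        'UPDATE locks
            SET expire_timestamp = DATE_ADD(NOW(), INTERVAL 2 MINUTE)
          WHERE lock_id = ? AND session_id = ?'
    );
    $stmt->execute(array($lockId, session_id()));

    echo $stmt->rowCount() ? 'ok' : 'lost';   // 'lost' tells the page its lock expired

    // The cron side is then a single query run every minute or so:
    // DELETE FROM locks WHERE expire_timestamp < NOW()

The page-side Ajax just POSTs the lock_id on a timer; if it ever gets 'lost' back, it can warn the user that the lock was released.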
I worked on a similar problem and "onunload" is the right way to go. The downside of this is that if the browser crashes or is killed (from the task manager), the unload handler does not get invoked. So it is better to also have a job that sets entries back to the active state when the sessionId corresponding to that entry is no longer present. (You can store a combination of sessionId and an isactive lock to detect browser idleness as well.)
I am working on updating an existing visitor tracking script on a high-traffic website. I noticed that there is a problem, not with the script itself, but with what happens when there are multiple requests. Let's say a user double-clicks on certain links to my site and two requests end up being made at effectively the same time. Request 1 gets processed and a session is created. The script then proceeds to add a visitor record to the database. At the same time, request 2 is getting processed. It checks whether a session is set; there isn't one, so it does the same thing request 1 did. Now we have 2 different sessions and 2 records in the visitors table in the database, when there should really be one. The session id for the current session ends up being from whichever request finished last.
So, what I'm looking to do is to prevent this from happening. Even if there are 100 multiple concurrent requests from the same visitor, I want there to be only one session id created and above all, only one record (not 100 records) inserted into the visitors table in the database. This involves determining in a matter of a few milliseconds that one request was already made. Any ideas?
You can force a specific session at the beginning of your script: http://www.php.net/manual/en/function.session-start.php
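One way to read that suggestion, as a sketch: force both requests onto the same session id before session_start(), so PHP's default session locking serializes them and only the first one inserts the visitor row. The visitor_key parameter here is hypothetical - it stands for whatever stable value your tracking snippet can send with every request.

    <?php
    // Attach all requests from the same visitor to one session. With the default
    // file-based handler, session_start() locks the session file, so a second
    // concurrent request blocks until the first one finishes (or calls
    // session_write_close()).
    $visitorKey = isset($_GET['visitor_key']) ? $_GET['visitor_key'] : '';
    if ($visitorKey !== '') {
        // session ids may only contain a-z, A-Z, 0-9, comma and minus
        session_id(preg_replace('/[^a-zA-Z0-9,-]/', '', $visitorKey));
    }
    session_start();

    if (empty($_SESSION['visitor_recorded'])) {
        // ...insert the single visitor row into the database here...
        $_SESSION['visitor_recorded'] = true;   // later/concurrent requests skip the insert
    }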
I have the following workaround for my Java application. When first accessing the PHP server I send a blocking ping call that establishes the unique session id. Afterwards I start my concurrent requests to the PHP server. This should also be possible from a web page, for example in an init block.
I have some kind of chat/forum application that checks for new messages using periodic polling (every 15 seconds) with jQuery Ajax. I was wondering if I can get around the issue of users who try to be 'funny' by loading several instances of the same browser, with lots of tabs, all pointing to the same application. Each tab sends an Ajax request, which could potentially overwhelm the server if several users start to do the same thing.
I do store sessions in a table, along with the last access time and IP address, which works fine as long as users don't use the same browser. I could store a unique identifier that is sent with the Ajax POST or GET request, but that would cause problems if a regular (non-abusing) user refreshes his page, which would then create a new identifier.
This is not a real problem yet, but better catch it before someone thinks of abusing the system like this :) Any idea how to do this?
One option could be to fetch data like so:
1) Your script is preparing to poll data. Before executing the request, write (with localStorage) a value saying that you're going to fetch data: localStorage.setItem("last-request-timestamp", new Date().getTime());
2) Poll for data. You get a result. Write that result to localStorage: localStorage.setItem("latest-messages", ajax_result);
3) Check if a page is preparing to poll data by checking if localStorage.getItem("last-request-timestamp") is longer than 15 seconds ago. If so, go to step 1. If not, wait 15 seconds and check again.
4) Regardless of whether the current page polled for data or not, check the latest-messages value and update the page.
Other pages will of course share the localStorage data. They won't get data if another page is fetching at the moment. If page #1 is closed, one of the other pages will continue to fetch data.
I haven't used LocalStorage before, but browser support seems decent enough. You should also be able to just use it as a key-value array: localStorage["last-request-timestamp"].
You can only store strings in localStorage, but you can of course serialize it into JSON.
Not sure if it is doable in JavaScript, but you could check whether the tab is active and only do the Ajax polling on the active tab.
I have a similar problem. Now I force all users to log in (which means I have their e-mails). I also set up a connection limit per account and a request limit per connection; after 5 overflows I ask the user to enter a captcha, then I block the account for 30 minutes and send an e-mail with a password recovery link. It's not a clean solution, but for now it works for me.
UPD:
The simplest way to do this is to use cookie or session storage. I use cookies. The algorithm is simple:
1) The user logs in on the web.
2) Check whether there is already an open session for this user. If there is, delete the other session, trigger an exception, or switch to that session - you have to decide the desired behavior yourself.
3) Create a session id for the user and store it in the database.
4) Increase a sessions counter field for that specific user to detect open sessions, so it no longer matters whether one browser is in use or many.
5) Update the last access mark (I use microtime(true) + $delay and a MySQL decimal(14,4) column).
6) Send the id to the client.

On each request:

1) Search for the session by the id passed in $_COOKIE.
2) Check the last access mark. If microtime(true) is still less than the mark, the client is sending requests too frequently, so decide yourself what to do: increase the mark (for example to microtime(true) + $delay + $penalty), drop the whole session, or trigger an error. The behavior depends on your application.
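A minimal sketch of that per-request check, assuming a sessions table with id and last_access_mark (decimal(14,4)) columns - the names and the penalty policy are just illustrative:

    <?php
    $delay   = 15;   // minimum seconds between polls
    $penalty = 30;   // extra wait imposed when a client polls too often

    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $id  = isset($_COOKIE['chat_session']) ? $_COOKIE['chat_session'] : '';

    $stmt = $pdo->prepare('SELECT last_access_mark FROM sessions WHERE id = ?');
    $stmt->execute(array($id));
    $mark = $stmt->fetchColumn();

    if ($mark === false) {
        header('HTTP/1.0 403 Forbidden');      // unknown session: refuse
        exit;
    }

    if (microtime(true) < (float) $mark) {
        // Too frequent: push the next allowed time further out and refuse.
        $pdo->prepare('UPDATE sessions SET last_access_mark = ? WHERE id = ?')
            ->execute(array(microtime(true) + $delay + $penalty, $id));
        header('HTTP/1.0 429 Too Many Requests');
        exit;
    }

    // Allowed: schedule the next permitted poll and serve the messages.
    $pdo->prepare('UPDATE sessions SET last_access_mark = ? WHERE id = ?')
        ->execute(array(microtime(true) + $delay, $id));
    // ...fetch and return new messages here...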
Why not throw something like Memcached/Redis at the problem? Cache a response with a 10-15s lifetime and avoid as much processing as possible.
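For example, with the Memcached extension (server address, key name and query are placeholders), a sketch could look like:

    <?php
    // Cache the poll response for 15 seconds: no matter how many tabs poll,
    // the database is queried at most once per interval.
    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);

    $messages = $cache->get('latest-messages');
    if ($messages === false) {                       // cache miss: hit the database once
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $messages = $pdo->query('SELECT id, author, body FROM messages ORDER BY id DESC LIMIT 50')
                        ->fetchAll(PDO::FETCH_ASSOC);
        $cache->set('latest-messages', $messages, 15);   // expires after 15 seconds
    }

    header('Content-Type: application/json');
    echo json_encode($messages);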
I'm creating a PHP Web application, which would involve:
1) Users opening a record
2) Users making changes to the record
3) Saving changes to the record
Since this is a multi-user application, I want to prevent situations where two users have the same record open at the same time and one user's changes overwrite the other's, preferably by enforcing some sort of locking method when a record is opened that automatically unlocks when the user navigates away from the page.
By record, do you mean SQL records? If so, you could add another column, isOpen. Set it to 1 as long as someone has the record open, and in that case, do not serve it to anyone else.
In situations like this, it works best to also implement a timeout mechanism, where a record can be open only for 'x' minutes before being forcibly closed.
(Edit: This answer assumes you want to keep a record locked for the entire duration a user is viewing the info fetched from the table. If you want to lock a record only for the instant that a read/write operation is occurring on that record, MySQL engines have built-in mechanisms for that)
In response to your comment
To make a record accessible to others when the active user navigates away, off the top of my head, I can think of two ways to achieve it:
1) Allow the timeout mechanism to take care of it. Depending on your scenario, a short enough time window could work fine.
2) In addition to the timeout, also implement a heartbeat mechanism - an Ajax script on the page polls the server, letting it know the page is still open. If the user navigates away, the server recognizes the skipped heartbeat and unsets the record. In this case, the timeout would still take precedence: if the user leaves the window open and walks away, the server would still receive the heartbeat, but when the time window closes, the server unsets the record anyway (despite still receiving heartbeats).
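A sketch of how the lock plus timeout could be acquired atomically; the locked_by and locked_until columns are assumptions, not from the question. A single UPDATE either grabs the lock or affects no rows, so two users can never lock the same record at once:

    <?php
    session_start();
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    // Take the lock only if it is free, expired, or already ours.
    $stmt = $pdo->prepare(
        'UPDATE records
            SET locked_by = ?, locked_until = DATE_ADD(NOW(), INTERVAL 10 MINUTE)
          WHERE id = ?
            AND (locked_by IS NULL OR locked_until < NOW() OR locked_by = ?)'
    );
    $stmt->execute(array(session_id(), (int) $_GET['id'], session_id()));

    if ($stmt->rowCount() === 0) {
        die('This record is currently being edited by someone else.');
    }
    // Serve the edit form. The heartbeat Ajax can re-run this same query
    // periodically to keep extending locked_until while the page stays open.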
I use an update_date field. When the user reads the record, I write a cookie with this date. When the user updates the record and submits the new data, I add WHERE update_date = '$my_escaped_date' AND id = '$the_edited_id', and if mysql_affected_rows is zero I show an error message saying the edited data is old. It's not perfect - if you edit stale data you have to re-enter it - but it does the job.
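That optimistic check might look roughly like this with PDO (the table, column and cookie names are just placeholders for the ones described above):

    <?php
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    // Only update the row if it still carries the update_date the user originally read.
    $stmt = $pdo->prepare(
        'UPDATE records SET body = ?, update_date = NOW()
          WHERE id = ? AND update_date = ?'
    );
    $stmt->execute(array($_POST['body'], (int) $_POST['id'], $_COOKIE['update_date']));

    if ($stmt->rowCount() === 0) {
        // Someone else saved the record in the meantime (or the id was wrong).
        echo 'The record was changed by someone else - please reload and re-apply your edit.';
    }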
A locking method is exactly what is available in mysql:
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
It's not automatic but it allows you to lock a table, do stuff and then unlock it again.
Be careful, though, that the system does not become locked up if you forget to unlock a table, or if a user takes a long time to change something and you only unlock the table when that user submits the form.
A better way might be to read data from the table and, upon submission of the form, check whether the data has been altered. If it has, you can notify the user of the changes; otherwise you can lock the table, perform the changes, and unlock it again.
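A rough mysqli sketch of that check-then-lock approach (table, column and form-field names are assumptions); keep the locked section as short as possible, since LOCK TABLES blocks every other connection until UNLOCK TABLES runs:

    <?php
    $db = new mysqli('localhost', 'user', 'pass', 'app');

    $db->query('LOCK TABLES records WRITE');

    // Re-check that nobody changed the row since the form was loaded.
    $row = $db->query('SELECT update_date FROM records WHERE id = 1')->fetch_assoc();
    if ($row && $row['update_date'] === $_POST['original_update_date']) {   // hypothetical hidden form field
        $body = $db->real_escape_string($_POST['body']);
        $db->query("UPDATE records SET body = '$body', update_date = NOW() WHERE id = 1");
        $saved = true;
    } else {
        $saved = false;   // notify the user that the data changed under them
    }

    $db->query('UNLOCK TABLES');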
You can add an in_use field to the records table. When a user opens a record, update its value to 1, and when he saves it, update it back to 0. If the value is 1, the record is locked and won't be opened for other users.
Okay so I'm running into a small problem.
Basically, my whole website runs through AJAX: content is loaded into the middle of the page, and there's a left and a right menu which don't refresh.
Currently I'm looking for a PHP->Ajax approach that refreshes the whole website in case a certain record changes in the MySQL table.
Every user has a field called "State" which indicates the state of their account. This can be changed by anyone; for example, the account gets shot and killed by someone. How do I make it check what state you have and, if it changes from the "standard" state, perform a full page refresh?
I tried to find an answer for this everywhere but haven't been able to figure something out.
-----Edit-----
Okay, I should also note: I know how to perform a full page refresh, and I know how to retrieve data from the MySQL database; this isn't the problem.
I have a table with all the user accounts in it. One of the fields for every user is called "State": everybody's state will be 1, which means alive; when it's 0, it means it's a dead account.
Part of my website has an auto refresh which fetches data from the database every 5 seconds, to check whether you're online, whether you have money, etc. It also checks what state you have.
The only thing I want is that when it sees your state is 0, it performs a full page refresh. Since state 0 means death, you should be seeing a death screen; I want a full page refresh because the menus have to disappear, and it has to redirect you to the death page.
You need long polling / comet - basically you keep an open connection between the client and the server, and when the state changes, the server sends the response to the client.
Basically, you'll open a long polling connection, sending the userid.
The server script receives the userid and starts monitoring for changes for that user. If such a change is detected, it sends the response.
If performance is a concern, you can use the Tornado web server. What's nice about it is that you can post from another application to the web server, and it can detect which client is affected by the change and send a response to that client.
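A plain-PHP long-polling sketch (no Tornado), assuming a users table with a state column - the script holds the request open for up to about 30 seconds and answers as soon as the state drops to 0, so the client can trigger its full page refresh immediately:

    <?php
    session_start();
    session_write_close();   // release the session lock so other Ajax calls aren't blocked
    set_time_limit(40);

    $pdo    = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $userId = (int) $_GET['user_id'];

    header('Content-Type: application/json');

    for ($i = 0; $i < 30; $i++) {
        $stmt = $pdo->prepare('SELECT state FROM users WHERE id = ?');
        $stmt->execute(array($userId));
        $state = (int) $stmt->fetchColumn();

        if ($state === 0) {                              // dead account: tell the page to reload
            echo json_encode(array('refresh' => true, 'state' => 0));
            exit;
        }
        sleep(1);                                        // check again in a second
    }
    echo json_encode(array('refresh' => false, 'state' => $state));

On the client, the Ajax callback only has to reload the page (or redirect to the death page) whenever refresh comes back true, then immediately open the next long-poll request.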
I'm creating two tables (in MySQL) named
table_temp_guest
table_temp_order
Now, if a guest enters, we save his personal information in the first table, and if he purchases something from any stall, it is saved as a temporary order in table_temp_order.
Now my question is: I'm using a session id, so when the user logs out (without checking out), I delete his information (personal and order) from both tables using the session id. BUT if he closes the browser, or doesn't go to checkout (for any reason), how do I delete his information from both tables?
Please suggest how to do this.
Additional question: is there another way to handle this whole process?
You can't detect when a user closes the browser or types in a new address. You basically need to have a "timeout" facility, like most other websites have.
There is a window.onunload event that you can detect with javascript, but it's not universally supported, and it detects window closes, not browser closes.
Your best resolution is probably going to be tracking the session_id and last accessed date. Re-update the table's last_accessed_date on every page load, and delete everything that's older than a few hours.
A timeout would be the best method.
Record the last active time in the guest table. Have a cron job running periodically on the web server cleaning up sessions that exceed the maximum time that you wish to allow.
Be careful about the amount of time that you allow. You have to allow for slow users and dropped connections.
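A sketch of such a cron cleanup, assuming you add a last_active timestamp column to table_temp_guest that every page load refreshes, and that both tables carry the session id (as described in the question); the two-hour window is arbitrary:

    <?php
    // cleanup_guests.php - run from cron, e.g. */10 * * * * php /path/to/cleanup_guests.php
    $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');

    // Drop guests that have been inactive for more than two hours...
    $pdo->exec('DELETE FROM table_temp_guest WHERE last_active < NOW() - INTERVAL 2 HOUR');

    // ...then remove any temporary orders whose guest no longer exists.
    $pdo->exec('DELETE FROM table_temp_order
                 WHERE session_id NOT IN (SELECT session_id FROM table_temp_guest)');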
If you're using session_id() anyway (I guess this is what you mean by session id), just use php sessions. PHP automatically invalidates them for you and you don't need those two tables (you can store everything you need in $_SESSION).
There is no way you could rely on to check whether the browser was closed.
If you don't want to change the way your project works now, just add a created field to the tables and set it to the current time() whenever you're "seeing" the specific user. Then set up a cron job which deletes all records from these tables that are older than a specific timeout.
You can also try to have a script that runs on the client side and pings the server, so that if the script has not pinged for a while, you know the user closed the browser. That being said, I agree with the previous posters: a timeout/cleanup procedure would be best.
For that, you would add a ModifiedDate field to your tables; you can define it with "ON UPDATE CURRENT_TIMESTAMP" for ease of use, then just delete all records whose ModifiedDate is older than several hours.