I have an Excel-like online spreadsheet, and it's almost done and working perfectly, but there is an issue I am facing and want to rectify.
You can see the spreadsheet at http://partydesigners.site50.net/Excel%20Like%20App/Index.html
The issue is that more than one user can be using this spreadsheet at the same time, and if one person modifies a cell, the others don't see the update in their spreadsheet. So I planned to use a setTimeout() function to call a function that updates every cell in the sheet from the database.
Now the problem is that there are 40 rows, each with 10 cells from the database, so 400 cells need to be updated every "n" seconds, which hangs the browser and hurts the user experience. I thought I could create a timer that updates one cell, then moves to the next after a few seconds, then updates another after a few seconds, in a chain.
You can imagine it as: I update the first cell, and when it finishes updating it calls a function for the cell next to it, and so on in a continuous chain.
So what pseudo-jQuery code would you write for it?
As Jonathun Kuhn mentioned in the comments, it would be better to keep track of only the cells that need to be changed and update them accordingly
How you keep track of what has been changed will depend on how your database is set up. But my first thought is to have a table that keeps track of one change per row along with a timestamp of when the change happened. Then you can run a function from the browser every 'n' seconds that uses some ajax to request all changes since its last request (you can keep track of the unique id of the last update, sort by timestamp and grab everything new).
This should help speed things up as it will only spend time updating cells that actually need it.
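As a rough sketch of the server side of that polling, assuming a hypothetical cell_changes table and a PDO/MySQL connection (the table, column names and credentials below are placeholders, not your actual schema):

// Assumed table (placeholder):
//   CREATE TABLE cell_changes (id INT AUTO_INCREMENT PRIMARY KEY,
//       row_num INT, col_num INT, value TEXT,
//       changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
// The client polls with the id of the last change it has already applied.
$pdo = new PDO('mysql:host=localhost;dbname=spreadsheet', 'user', 'pass');

$lastId = isset($_GET['last_id']) ? (int) $_GET['last_id'] : 0;

$stmt = $pdo->prepare(
    'SELECT id, row_num, col_num, value FROM cell_changes WHERE id > ? ORDER BY id'
);
$stmt->execute([$lastId]);

// Return only the cells that changed since the client's last poll.
header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));

The browser-side timer then only has to loop over the handful of rows returned and update those cells, instead of redrawing all 400.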
As a disclaimer, however, it is still very possible that a second user updates a cell before the first edit has shown on their screen. (Think two users editing a cell at virtually the same time, maybe the second initiates a 'save' milliseconds after the first.) The best way I can think of handling this is to show a warning to the second user, if it is noticed that they are editing something very quickly after a previous edit, that they may be overwriting data.
Related
We're setting up a system which allows a department to make edits to a record here. Their division of labor isn't clear, and they've had problems in the past where more than one individual loads data into a web form, makes edits, and then sends those edits to the record. Inevitably, the slower editor overwrites the faster editor's freshly edited data with the old data that had been loaded when the page loaded.
Currently, we have a whiteboard solution that would use changes to the data's last-modified time to reject the second request to write data and deliver an error message when that data is POSTed.
The members of the department, however, would prefer a file-lock-style system, one where they are notified that another user is in the dataset before being allowed to access the data. Timeouts will inevitably be too short or too long depending on the day, and these particular users cannot be relied upon to "log out" somehow.
I'm just wondering if anyone else has implemented a solution to this, and, if so, how?
We're running PHP 5.6 on Apache 2.2 and building in Zend Framework 2.4. Not that any of that has any bearing on the question, but someone will inevitably ask.
Add two columns to your table: locked_by_user_id and locked_time.
Before you allow a user to enter the "edit" view, check if those values are set and if locked_time is within the past 10 minutes. The reason you should record the locked time instead of a binary flag is because, as you say, some people forget to log out or might not log out cleanly (for example, they could just close their browser).
When someone is able to acquire a lock, set up a setInterval that runs every 5 minutes and reacquires the lock via an ajax call. Someone might still forget to leave the "edit" screen, but in that case you can allow someone else to override the lock; if that happens, the ajax call can exit the "edit" screen when it fails to reacquire the lock.
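A minimal sketch of acquiring the lock, assuming PDO/MySQL and a records table with the two columns above (table name and connection details are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Acquire (or re-acquire) the edit lock; this succeeds only if the record is
// unlocked, already locked by this user, or the existing lock is stale (>10 min).
function acquire_lock(PDO $pdo, $recordId, $userId)
{
    $stmt = $pdo->prepare(
        'UPDATE records
            SET locked_by_user_id = ?, locked_time = NOW()
          WHERE id = ?
            AND (locked_by_user_id IS NULL
                 OR locked_by_user_id = ?
                 OR locked_time < NOW() - INTERVAL 10 MINUTE)'
    );
    $stmt->execute([$userId, $recordId, $userId]);
    return $stmt->rowCount() === 1;   // true when this user now holds the lock
}

The endpoint hit by the setInterval would simply call acquire_lock() again; when it returns false, the lock was overridden and the page can drop the user out of the "edit" view.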
Why not set a flag on the table row for edit_in_progress? If a user clicks to edit and that flag is already set, fail out with a message saying someone else is editing it (perhaps consider also setting a field for WHO is editing it, so they can go back in before they've saved and continue their edits). Once saved, unset both fields, and allow the next user to lock the row.
Assuming that you're using a database, but not knowing the database schema you're using, this is going to be a generic answer.
You need to give each record an identifier which is set as non-unique. By this, I mean each record could be identified as record_1, record_2 ... record_n but this identifier can appear multiple times in the table.
When a record is edited, don't update the record directly but create a new record which:
is timestamped
has the original record_n identifier
Also, add a field to the record to give its current state (e.g. stable, editing?) and a field which gives the edit start date/time if it is being edited
With this, when someone wants to edit a record (e.g., record_2), you retrieve the most recent data for this record and check its state (if marked as editing then report this and prevent concurrent editing).
When they submit changes, create a new timestamped record in the database and mark both the old and new records as stable.
Using this, you also create a paper trail for auditing changes.
With regard to people who wander off/retire/die and leave records in an "editing" state, create a scheduled job which resets states to "stable" after a preset number of minutes (60?).
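A rough sketch of that flow, assuming a hypothetical record_versions table and PDO/MySQL (all names and columns here are illustrative, not a prescribed schema):

// Assumed table (placeholder):
//   CREATE TABLE record_versions (id INT AUTO_INCREMENT PRIMARY KEY,
//       record_key VARCHAR(32), data TEXT,
//       state ENUM('stable','editing') DEFAULT 'stable',
//       edit_started_at DATETIME NULL, created_at DATETIME);
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Fetch the most recent version of record_2 and refuse concurrent editing.
$stmt = $pdo->prepare(
    'SELECT * FROM record_versions WHERE record_key = ?
      ORDER BY created_at DESC LIMIT 1'
);
$stmt->execute(['record_2']);
$latest = $stmt->fetch(PDO::FETCH_ASSOC);

if ($latest && $latest['state'] === 'editing') {
    exit('This record is currently being edited by someone else.');
}

// On submit: insert the new timestamped version, then mark everything stable.
$newData = $_POST['data'];   // the edited content submitted by the user
$pdo->prepare(
    "INSERT INTO record_versions (record_key, data, state, created_at)
     VALUES (?, ?, 'stable', NOW())"
)->execute(['record_2', $newData]);
$pdo->prepare(
    "UPDATE record_versions SET state = 'stable' WHERE record_key = ?"
)->execute(['record_2']);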
What would be the best way to achieve an undo function in a PHP CRUD application? The only solution I've been able to come up with is using a sort of buffer table in my database that is periodically wiped.
If the user clicks the "Undo" button after deleting a record for example, the id of the last change will be passed to a handler which will pull the buffer record and reinstate it into the main table for that data type. If the "Undo" is not done in say, 4 or 5 minutes, a reaper script will drop the entry.
Does this sound feasible? Is there a better way of accomplishing this?
You could use a flag field in your database to mark a row for deletion.
And you can set up a task (crontab on Linux) to delete all rows with the delete flag set to true and a time difference greater than 5 minutes.
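A rough sketch, with the assumption of a deleted_at timestamp column (which acts as both the flag and the time reference) on a hypothetical items table:

// Mark for deletion (the "Delete" button):
//   UPDATE items SET deleted_at = NOW() WHERE id = ?
// Undo (the "Undo" button, within the grace period):
//   UPDATE items SET deleted_at = NULL WHERE id = ?

// Cleanup script run from cron every few minutes:
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->exec(
    'DELETE FROM items
      WHERE deleted_at IS NOT NULL
        AND deleted_at < NOW() - INTERVAL 5 MINUTE'
);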
I've learned not to delete anything, but simply to do as Ignacio Ocampo stated and use a flag column in your DB, such as status. By default, set the status column to open. If your client clicks your delete button, just update that record's status column to void or deleted.
In doing this, you'll need to update your data request to pull only those records with the status column set to open. This way the data is not lost, but also not seen.
Any undo(s) or redo(s), if applicable, can then reset the open status to the previous or next record, sorted by a timestamp column.
If DB space is at a premium and you need to remove old data, then crontab does work, but I prefer the simplicity of a phpMyAdmin cron job that loops a file to wipe all void or deleted records older than time() minus the last cron run.
Depending on what and how you're building, you might also want to consider using one of the following solutions.
1) A pure PHP CRUD solution would be something along the lines you've mentioned, possibly also storing cookies on the client side to track which actions have been performed. With every action a new cookie entry is created; your application then only has to sort the entries by date and time. You could also set the cookies to expire automatically after x amount of time (although I would expire them after x amount of steps instead of time). A minimal sketch of this cookie approach follows after this list.
2) If you are able to use HTML5 local storage (http://www.w3schools.com/html/html5_webstorage.asp), it would be perfect for this along with some JavaScript, since you wouldn't have to wait for the server to respond every time 'undo' is clicked; all the processing would be handled locally.
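For option 1, here is a minimal sketch of tracking actions in a cookie; the cookie name, structure, 10-step limit and 5-minute expiry are all assumptions for illustration:

// Push a completed action onto the client-side undo stack.
// $action is whatever your app records, e.g. array('type' => 'delete', 'id' => 42).
function push_action(array $action)
{
    $stack = isset($_COOKIE['undo_stack'])
        ? json_decode($_COOKIE['undo_stack'], true)
        : array();
    $action['at'] = time();                 // timestamp so entries can be sorted/expired
    $stack[] = $action;
    $stack = array_slice($stack, -10);      // keep only the last 10 steps
    setcookie('undo_stack', json_encode($stack), time() + 300, '/');
}

// Pop the most recent action when the user clicks "Undo".
function pop_action()
{
    $stack = isset($_COOKIE['undo_stack'])
        ? json_decode($_COOKIE['undo_stack'], true)
        : array();
    $last = array_pop($stack);
    setcookie('undo_stack', json_encode($stack), time() + 300, '/');
    return $last;                           // null when there is nothing to undo
}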
For a two-player game, I need to send updated data to each player every 30 seconds.
I have a table (ideally 4 tables) from which I need to select data and send it to the user once he/she logs in. Since it is a multi-player interaction game, the data needs to be synced every 30-60 seconds.
My problem is, I have a very heavy query to run every 30-60 seconds. So ideally, I should send only updated and new rows to the player during sync (it's also a front-end requirement for the iPhone/Android game; the app doesn't want the whole data set during every sync operation).
I went through MySQL: difference of two result sets and hoped I'd get only updated/new records through SQL, but the problem is how to save the result of the last query.
Even if I save the first result in the session (probably not recommended), that record will be useless as soon as a new row is inserted or updated. Updating the session record again will definitely put a lot of pressure on the server.
Can someone please suggest the best way to achieve this requirement? Not a detailed solution; just a hint/link will be sufficient.
Basically, this isn't that hard. Let me provide you with a step plan.
Add a datetime field to each table you want to do this on
In each of your updating queries, set this field to NOW()
Make sure that the application adds the time of its last update to all its requests
Have the server add the time of the update to the result it sends back to the app, along with the updated rows (a rough sketch follows below)
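Put together, the sync endpoint could look roughly like this, assuming PDO/MySQL and a hypothetical player_state table with the last_updated column from the steps above:

$pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');

// The app sends the time of its last successful sync; default to "everything".
$since = isset($_GET['since']) ? $_GET['since'] : '1970-01-01 00:00:00';

$stmt = $pdo->prepare(
    'SELECT * FROM player_state WHERE last_updated > ? ORDER BY last_updated'
);
$stmt->execute([$since]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

// Send the server-side time of this sync back so the app can use it as
// "since" on its next request (avoids clock skew between app and server).
header('Content-Type: application/json');
echo json_encode(array(
    'synced_at' => date('Y-m-d H:i:s'),
    'rows'      => $rows,
));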
Can't you just timestamp everything?
Give every row in the tables a timestamp called something like "last_updated"
In the query, filter out all entries with a last_updated that is before the last time the query was executed (or possibly the latest last_updated that the client got the last time it called the server)
I am currently working on a web application where I have encountered a little problem. In this system, multiple users can log onto the same page and update the data (a series of checkboxes, dropdowns, and text fields).
The issue is that data might get overwritten if one user has a page open with old data that has since been updated by someone else, and then submits their changes, which update everything.
Any suggestions on how to solve this problem? I am currently just working with plain-text files.
I am currently just working with plain-text files.
Suggestion 1. Use a database.
Suggestion 2. Use a lock file. Use OS-level API calls to open a file with an exclusive lock. The first user to acquire this file has exclusive access to the data. When that user finishes their transaction, close the file and release the OS-level lock.
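In PHP that could look roughly like this, using flock(); the lock-file name is just a placeholder:

$lock = fopen('data.lock', 'c');      // open (or create) the lock file

if (flock($lock, LOCK_EX)) {          // blocks until the exclusive lock is acquired
    // ... read, modify and write the shared data here ...
    flock($lock, LOCK_UN);            // release the lock when the transaction is done
}
fclose($lock);

Pass LOCK_EX | LOCK_NB instead if you would rather fail immediately and tell the user the data is busy.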
Suggestion 3. Don't "update" the file. Log the history of changes. You can then read usernames and timestamps from the log to find the latest version.
If you do this, you need to make each request do something like this.
When getting the current state, read the last line from the file. Also, get the file size and last modification time. Keep the size and last modified time in the session. Display the current state in the form.
When the user's change is being processed, check the file size and last modification time. If the file is different from what was recorded in the session, this user is attempting an update to data which was changed by someone else. Read the last line from the file. Also, get the file size and last modification time. Keep the size and last modified time in the session. Display the current state in the form.
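A rough sketch of that check, assuming the data lives in a single hypothetical data.txt file with one record per line and the size/mtime kept in the session as described:

session_start();
$file = 'data.txt';                                  // placeholder data file

clearstatcache();
$size  = filesize($file);
$mtime = filemtime($file);

if (!isset($_SESSION['size'], $_SESSION['mtime'])
    || $size !== $_SESSION['size'] || $mtime !== $_SESSION['mtime']) {
    // Someone else changed the data since this user loaded the form:
    // re-read the latest state instead of applying this user's update.
    $lines   = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $current = end($lines);                          // last line = current state
    $_SESSION['size']  = $size;
    $_SESSION['mtime'] = $mtime;
    // ... redisplay the form with $current and a warning ...
} else {
    // Safe to append this user's change to the file.
}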
In addition, you might want to have two files. One with "current" data, the other with the history of changes. This can make it faster to find the current data, since it's the only record in the current state file.
Another choice is to have a "header" in your file that is a fixed-size block of text. Each time you append, you also seek(0,0) and refresh the header with the offset to the last record as well as the timestamp of the last change.
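A sketch of that header idea, assuming a 64-byte fixed header holding "<offset> <timestamp>" (the size and format are arbitrary choices for illustration):

define('HEADER_LEN', 64);              // fixed-size header block at the top of the file

function append_record($path, $line)
{
    $fh = fopen($path, 'c+');          // read/write, create if missing, no truncation
    flock($fh, LOCK_EX);               // serialize concurrent writers

    fseek($fh, 0, SEEK_END);
    $offset = max(ftell($fh), HEADER_LEN);   // new files start right after the header
    fseek($fh, $offset, SEEK_SET);
    fwrite($fh, $line . "\n");

    // Rewrite the header in place: offset of the record just written + timestamp.
    fseek($fh, 0, SEEK_SET);
    fwrite($fh, str_pad($offset . ' ' . time(), HEADER_LEN - 1) . "\n");

    flock($fh, LOCK_UN);
    fclose($fh);
}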
When saving new data, you could compare the date the data was last modified with the time the user started editing.
If there have been modifications while the user was making changes, you could then show a message to the user and ask them which version to keep, or allow them to merge the two versions.
This problem has been addressed by revision systems, like svn, git, etc. in the very same fashion.
You can make an additional table and store all the information there along with the userID, so you will be able to access all the data that users inserted by using joins.
If HTTP requests come to a web server from many clients, the requests will be handled in order.
For all the HTTP requests I want to use a token bucket system.
So when the first request arrives, I write a number to a file and increment the number for the next request, and so on.
I don't want to do it in the DB since the DB size increases.
Is this the right way to do this? Please suggest.
Edit: So if a user posts a comment, the comment should be stored in a file instead of the DB. To keep track of it, there is a variable that is incremented for every request; this number is used in the file name and referred to for future reference. So if there are many requests, is this the right way to do it?
Thanks..
Why not lock (http://php.net/manual/en/function.flock.php) files in a folder?
First call locks 01,
Second call locks 02,
Third call locks 03,
01 gets unlocked,
4th call locks 01
Basically each php script tries to lock the first file it can and when it's done it unlocks/erases the file.
I use this in a system with 250+ child processes spawned by a "process manager". Tried to use a database but it slowed down everything.
If you want to keep incrementing the file number for some content, I would suggest using mktime() or time():
$dir = 'some/target/dir/';   // wherever these files live
$now = time();
$suffix = 0;
// Find the first unused suffix for this timestamp.
while (is_file($dir . $now . '_' . $suffix)) {
    $suffix++;
}
But again, depending on how you want to read the data or use it, there are many options. Could you provide more details?
-----EDIT 1-----
Each request has a "lock-file", and the lock id (number) is stored in $lock.
Three visitors post at the same time with the lock ids 01, 02, 03 (the last step in the situation described above):
$now = time();
$suffix = 0;
$post_id = 30;
$dir = 'posts/' . $post_id . '/';
if (!is_dir($dir)) { mkdir($dir, 0777, true); }
// $lock holds this request's lock id (e.g. '01', '02' or '03').
while (is_file($dir . $now . '_' . $lock . '_' . $suffix . '.txt')) {
    $suffix++;
}
// Write the comment (the posted text in $comment) to the first free file name.
file_put_contents($dir . $now . '_' . $lock . '_' . $suffix . '.txt', $comment);
The while loop should not be needed, but I usually keep it anyway, just in case :).
That should create a txt file 30/69848968695_01_0.txt and ..02_0.txt and ..03_0.txt.
When you want to show the comments you just sort them by filename....
The database size need not increase. All you need is a single row. In concept the logic goes:
Read row, taking lock, getting the current count
Write row with count incremented, releasing lock
Note that you're using the database locks to deal with the possibility that multiple requests are being processed at the same time.
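As a rough illustration with PDO/MySQL, assuming a single-row counter table (table and column names are placeholders):

// Assumed table (placeholder):
//   CREATE TABLE request_counter (id INT PRIMARY KEY, request_count INT NOT NULL);
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$pdo->beginTransaction();
// SELECT ... FOR UPDATE locks the row so concurrent requests queue up here.
$count = (int) $pdo->query(
    'SELECT request_count FROM request_counter WHERE id = 1 FOR UPDATE'
)->fetchColumn();
$next = $count + 1;
$pdo->prepare('UPDATE request_counter SET request_count = ? WHERE id = 1')
    ->execute([$next]);
$pdo->commit();                        // releases the lock

// $next can now be used as the number in the comment file's name.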
So I'm suggesting to use the database as the place to manage your count. You can still write your other data to files if you wish. However you'll still need housekeeping for the files. Is that much harder with a database?
I agree with some of the other commenters that, regardless of whichever problem you are trying to solve, you may be making it more difficult than it needs to be.
Your example is mentioning putting comments in a file and keeping them outside the database.
What is the purpose of your count, exactly? Do you want to count the number of comments a user has made, or the total number of comment requests?
If you don't have to update the count anywhere in real time, you could write a simple script that reads your server access logs and adds up the total.
Also, as Matthew points out above, if you want requests to be handled in a particular order, you will rapidly be heading for strange concurrency bugs and performance issues.
If you update your post to include details more explicitly, we should be able to help you further.
Hope this helps.