"A TEMPORARY table is visible only within the current session, and is dropped automatically when the session is closed."
This fact is causing me a lot of heartache. I am using CodeIgniter 3 and MySQL on RDS. Creating TEMPORARY tables didn't work because of the behaviour quoted above: my multi-user app creates about 6 regular tables for each user, which get deleted (dropped in SQL) when they press logoff. But a large number of users never press logoff in the app; they just press the X to close the tab. That leaves 6 extra tables per user on my RDS server, and the resulting pile of orphan tables (within 24 hours) is slowing the database server to a crawl.
Is there any way to "catch" when someone closes the app without pressing logout? I was thinking that if I could keep PHP from closing sessions constantly, that might work, but that seems pretty far-fetched. I also thought (outside the box) that an external service like Redis could hold the temporary data, but being on AWS I am already at the upper limit of what extra services I can afford.
I have tried taking the TEMPORARY tables and making them regular old mysql tables and deleting them when a user logs off. But the issue is that many users will exit via the X on the tab.
After trying a few different approaches, I ended up creating a log where I record every temporary table I create. Then, whenever any user logs in, I check the log for tables created more than two hours ago and remove them with DROP TABLE IF EXISTS. This should work without creating a cron job. Is 2 hours enough? I sure hope so.
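The log-and-sweep idea can be sketched roughly like this, assuming a log table named temp_table_log with table_name and created_at columns (all names here are hypothetical):

```sql
-- Hypothetical log of per-user working tables
CREATE TABLE IF NOT EXISTS temp_table_log (
  table_name VARCHAR(64) PRIMARY KEY,
  created_at DATETIME NOT NULL
);

-- On each login: find entries older than two hours...
SELECT table_name FROM temp_table_log
WHERE created_at < NOW() - INTERVAL 2 HOUR;

-- ...then, for each returned name (done from PHP, since DROP TABLE
-- cannot take a table name as a bound parameter):
-- DROP TABLE IF EXISTS `the_table_name`;
-- DELETE FROM temp_table_log WHERE table_name = 'the_table_name';
```

If the hosting setup allows it, MySQL's event scheduler (CREATE EVENT) could run the same cleanup periodically without needing a cron job at all.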
I have a raspberry pi3 running this project from instructables.com
Simple and intuitive web interface for your Raspberry Pi
I need to make the buttons function so that only one user at a time can push a button while many users can view the page. This is to control a pan/tilt web camera, where Button Zero pans the camera left, Button One pans the camera right, etc. The Raspberry Pi drives relays that drive the motors in the Pelco camera pan/tilt mount. I can't have one user trying to pan left while another user on a different HTTP connection tries to pan right. There is no log-in to access this Raspberry Pi.
Is there an Apache2 setting to accomplish this? I don't think this can be solved by adding code to the GPIO.php file, or can it? Can I use a semaphore or a global flag to limit button actuation with multiple concurrent viewers?
You need a mutex lock. Here are two quick-and-dirty ways I can suggest.
Lock a local file on the system
Open the file, and lock it for writing.
During this time, if another user attempts to control the camera, their request would first try to open and lock the same file for writing, which would fail with an error.
At that point you know you can't give the second user control.
PHP lock a file for writing
Note: You could use multiple different files for different locks. For example if you had multiple cameras then you could use multiple files to lock, one for each camera.
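The file-lock approach can be sketched like this (the lock-file path and function names are hypothetical):

```php
<?php
// Minimal file-based mutex for camera control. flock() with LOCK_NB
// fails immediately instead of blocking, so a second user gets an
// instant "camera busy" answer.
function acquire_camera_lock(string $path)
{
    $fp = fopen($path, 'c'); // create if missing, don't truncate
    if ($fp === false) {
        return false;
    }
    if (!flock($fp, LOCK_EX | LOCK_NB)) {
        fclose($fp);
        return false; // someone else holds the camera
    }
    return $fp; // keep this handle open while the user has control
}

function release_camera_lock($fp): void
{
    flock($fp, LOCK_UN);
    fclose($fp);
}
```

The lock is also released automatically if the PHP process dies, which is exactly the crash-safety you want from quick-and-dirty locking.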
Use a database like MySQL
Using a database like MySQL you can lock a specific row in a table, effectively doing the same thing as we did with a file in the last example.
If a second user comes along we again attempt to lock that same row and we will fail, at this point we can reject the second user's request.
Lock a single row in MySQL
Note: You can use multiple rows, where each row represents a different camera, as mentioned above.
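A minimal sketch of the row-locking idea, assuming an InnoDB table with one row per camera (the table and column names are hypothetical):

```sql
-- One row per camera.
CREATE TABLE camera_locks (id INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO camera_locks VALUES (0);  -- camera zero

-- To take control of camera 0:
START TRANSACTION;
SELECT id FROM camera_locks WHERE id = 0 FOR UPDATE;
-- A second session running the same SELECT ... FOR UPDATE now
-- blocks until COMMIT (or fails immediately if you add NOWAIT on
-- MySQL 8.0+), so you know the camera is busy.
-- ... drive the camera ...
COMMIT;  -- releases the row lock
```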
Other things to consider
I highly recommend giving your users the ability to see whether they are the current user or not, and implementing a way to fairly switch between users so that a single user can't hog all the fun. Perhaps something as simple as a 15- or 30-second timer that switches control between the currently active users.
I have an online SQL database with a few tables for users, matches, and bets. When I update the result of any game, I need the status of all bets containing that game in the bets table to be updated. So, for example, if I update game 8 with the result "home win", I need all bets that include game 8 to be updated as either lost, won, or still open.
The way I do this currently is that when the user turns on my android app, I retrieve all the information about the games and all the information about the user's bets using asynctasks. I then do some string comparisons in my app and then I update the data in my database using another asynctask. The issue is that this wastes a lot of computation time and makes my app UI laggy.
As someone with minimal experience with PHP and online databases, I'd like to ask: is there a way to carry out these updates in the database itself, either periodically (every 3 hours, for example) or whenever the data in the games table changes, using a PHP file that runs automatically?
I tried looking for some kind of onDataChanged function but couldn't find anything. I'm also not sure how to make a php file run and update data without getting the app involved.
Another idea I had was to create a very simple app which I wouldn't distribute to anyone but just keep on my phone with an update button which I could press and trigger a php file to carry out these operations for all users in my database.
I would appreciate some advice on this from someone who has experience.
Thanks :).
You can easily execute a PHP script periodically if your hosting provider supports a scheduler like cron.
As for updating bet statuses when a game result changes: first check your table engine. If you are using InnoDB, you can create relationships (foreign keys) between those tables, but a foreign key alone won't recompute bet statuses; a trigger on the games table is the usual way to have an update to one row automatically update all the rows connected to it.
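A rough sketch of such a trigger, assuming hypothetical games and bets tables where bets.game_id references games.id, and where the result, predicted, and status column names are all assumptions about your schema:

```sql
DELIMITER $$
CREATE TRIGGER after_game_result
AFTER UPDATE ON games
FOR EACH ROW
BEGIN
  -- Recompute every bet on this game whenever its result changes.
  IF NEW.result <> OLD.result THEN
    UPDATE bets
    SET status = IF(predicted = NEW.result, 'won', 'lost')
    WHERE game_id = NEW.id;
  END IF;
END$$
DELIMITER ;
```

This runs entirely inside the database, so the Android app never has to download, compare, and re-upload the data.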
I am working on a very simple and small visitor counter, and I am wondering whether it is too heavy on server resources to open a MySQL connection every time a visitor loads a page of the website.
Some people store the visits in a plain-text file. Maybe I could store the number in a session (in an array with a key for each page) and, when the session is closed, copy it to the database in one go?
What is the lightest way to do this?
In most robust web applications, the database is queried on every page load anyway for some reason or another, so unless you have serious resource limits you're not going to break the bank with your counter query, or save much load time by avoiding it.
One consideration might be to increase the value of the database update so that one write can serve multiple uses. In your case, you could have a view log, like:
INSERT INTO view_log (user_name, ip_address, visit_timestamp, page_name)
VALUES (?, ?, NOW(), ?)
Which could be used for reporting on popularity of specific pages, tracking user activity, debugging, etc. And the hit count would simply be:
SELECT COUNT(1) FROM view_log
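Put together in PHP, the log-then-count idea might look like the sketch below. It uses an in-memory SQLite database purely so the example is self-contained; on a real site you would swap the DSN for your MySQL connection, and the function names are hypothetical.

```php
<?php
// Self-contained sketch: log each page view, derive the hit count
// from the same log. SQLite in-memory stands in for MySQL here.
function make_db(): PDO
{
    $db = new PDO('sqlite::memory:');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $db->exec('CREATE TABLE view_log (
        user_name TEXT, ip_address TEXT,
        visit_timestamp TEXT, page_name TEXT)');
    return $db;
}

function log_view(PDO $db, string $user, string $ip, string $page): void
{
    $stmt = $db->prepare(
        "INSERT INTO view_log (user_name, ip_address, visit_timestamp, page_name)
         VALUES (?, ?, datetime('now'), ?)");
    $stmt->execute([$user, $ip, $page]);
}

function hit_count(PDO $db): int
{
    return (int) $db->query('SELECT COUNT(1) FROM view_log')->fetchColumn();
}
```

One INSERT per page view then gives you hit counts, per-page popularity, and an activity trail for free.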
If your site has a database already, use it!
Connections are most likely reused between opens (persistent connections or pooling), so opening one takes very little effort.
If you write to a file, the site requires write access to it, and you risk concurrency problems when multiple users connect at once. Only persisting when the session closes is also a risk if the server shuts down abruptly.
Most sites open a MySQL connection anyway, so if you do, you won't have to open it twice. Writing to disk also takes resources (although probably fewer), and additionally you might wear out a small part of your disk very fast if you have a file-based counter on a busy website. And those file writes will have to wait for each other, while MySQL handles multiple requests more flexibly.
Anyway, I would write a simple counter interface that abstracts this, so you can switch to file, MySQL or any other storage easily without having to modify your website too much.
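Such a counter interface might be sketched like this (the interface and class names are hypothetical; the file-backed implementation is just the simplest one to show, and a MySQL version would implement the same interface with an INSERT and a COUNT):

```php
<?php
// Storage-agnostic hit counter: pages depend only on the interface,
// so the backing store can be swapped without touching them.
interface HitCounter
{
    public function increment(string $page): void;
    public function count(string $page): int;
}

class FileHitCounter implements HitCounter
{
    public function __construct(private string $path) {}

    public function increment(string $page): void
    {
        $fp = fopen($this->path, 'c+');
        flock($fp, LOCK_EX);            // guard against concurrent writes
        $counts = json_decode(stream_get_contents($fp), true) ?: [];
        $counts[$page] = ($counts[$page] ?? 0) + 1;
        ftruncate($fp, 0);
        rewind($fp);
        fwrite($fp, json_encode($counts));
        flock($fp, LOCK_UN);
        fclose($fp);
    }

    public function count(string $page): int
    {
        if (!is_file($this->path)) {
            return 0;
        }
        $counts = json_decode(file_get_contents($this->path), true) ?: [];
        return $counts[$page] ?? 0;
    }
}
```

The flock() call addresses the concurrent-write concern mentioned above, while keeping the door open to a MySQL-backed implementation later.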
How does PHP handle multiple requests from users? Does it process them all at once or one at a time waiting for the first request to complete and then moving to the next.
Actually, I'm adding a bit of wiki functionality to a static site where users will be able to edit addresses of businesses if they find them inaccurate or think they can be improved. Only registered users may do so. When a user edits a business name, that name, along with its other occurrences, is changed in different rows of the table. I'm a little worried about what would happen if 10 users were doing this simultaneously; it'd be a real mishmash of things. So does PHP do things one at a time, in the order received, per script (update.php), or all at once?
Requests are handled in parallel by the web server (which runs the PHP script).
Updating data in the database is pretty fast, so any update will appear instantaneous, even if you need to update multiple tables.
Regarding the mishmash: for the DB, handling 10 requests within 1 second is the same as handling 10 requests within 10 seconds. It won't confuse them; it will just execute them one after the other.
If you need to update 2 tables and absolutely need these 2 updates to run back to back without being interleaved with another update query, then you can use transactions.
EDIT:
If you don't want 2 users editing the same form at the same time, you have several options to prevent them. Here are a few ideas:
You can "lock" that record for editing whenever a user opens the page to edit it, and not let other users open it for editing. You might run into a few problems if a user doesn't "unlock" the record after they are done.
You can notify in real time (with AJAX) a user that the entry they are editing was modified, just like on stack overflow when a new answer or comment was posted as you are typing.
When a user submits an edit, you can check if the record was edited between when they started editing and when they tried to submit it, and show them the new version beside their version, so that they manually "merge" the 2 updates.
There probably are more solutions but these should get you started.
It depends on which version of Apache you are using and how it is configured, but a common default configuration uses multiple workers with multiple threads to handle simultaneous requests. See http://httpd.apache.org/docs/2.2/mod/worker.html for a rundown of how this works. The end result is that your PHP scripts may together have dozens of open database connections, possibly sending several queries at the exact same time.
However, your DBMS is designed to handle this. If you are only doing simple INSERT queries, then your code doesn't need to do anything special. Your DBMS will take care of the necessary locks on its own. Row-level locking will be fastest for multiple INSERTs, so if you use MySQL, you should consider the InnoDB storage engine.
Of course, your query can always fail, whether it's due to too many database connections, a conflict on a unique index, etc. Wrap your queries in try/catch blocks to handle this case.
If you have other application-layer concerns about concurrency, such as one user overwriting another user's changes, then you will need to handle these in the PHP script. One way to handle this is to use revision numbers stored along with your data, and refusing to execute the query if the revision number has changed, but how you handle it all depends on your application.
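The revision-number idea can be sketched like this (the table and column names are hypothetical):

```sql
-- Each business row carries a revision counter. The UPDATE only
-- succeeds if the row is still at the revision the user originally
-- loaded; otherwise zero rows are affected and the application
-- knows someone else saved an edit first.
UPDATE businesses
SET name = ?, revision = revision + 1
WHERE id = ? AND revision = ?;
```

In PHP you would then check the affected-row count (for example, PDOStatement::rowCount()) and, if it is 0, reload the record and ask the user to re-apply their change.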
My requirement is, users can click and avail the deal.
One deal can be availed by only one member, so I'm locking the table when a user tries to avail a deal, and unlocking it afterwards. That way, if two users click and try to avail the same deal, their requests form a queue and two users are prevented from availing the same deal.
The code is like
LOCK TABLES deal WRITE;
-- MySQL queries and my PHP code go here.
UNLOCK TABLES;
The problem now is: what if some problem happens in my PHP code between the lock and the unlock? Will the table stay locked permanently? Is there any way I can set a maximum time for the table to stay locked?
If I were in your place, I would create a database table named locked_deals with a column named deal_id. Whenever a user chooses a deal, its deal_id gets inserted into locked_deals. When the next user clicks the same deal, first check whether that deal_id is already in the lock table; if it is, don't allow the user to choose the deal. When everything goes fine, delete the deal_id from the lock table at the end of the process. For lock ids that get stuck in the table because of an exception in the PHP code, you can create a background service that runs every n minutes and cleans out any ids that have been stuck for the last n minutes.
Hope it helps.
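The locked_deals idea can be sketched like this (names are hypothetical). Making deal_id the primary key lets the database itself reject a second lock attempt, so the check and the insert can't race:

```sql
CREATE TABLE locked_deals (
  deal_id INT PRIMARY KEY,
  locked_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Taking the lock: succeeds for the first user, fails with a
-- duplicate-key error for everyone else.
INSERT INTO locked_deals (deal_id) VALUES (42);

-- On success: release the lock when the deal is processed.
DELETE FROM locked_deals WHERE deal_id = 42;

-- Cleanup job: clear locks stuck for more than, say, 10 minutes.
DELETE FROM locked_deals
WHERE locked_at < NOW() - INTERVAL 10 MINUTE;
```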
If the connection for a client session terminates, whether normally or abnormally, the server implicitly releases all table locks held by the session.
Source
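As an alternative to LOCK TABLES, MySQL's named advisory locks give you the timeout behaviour asked about here, and they too are released automatically when the session ends:

```sql
-- Wait at most 5 seconds for the lock; returns 1 on success,
-- 0 on timeout, NULL on error.
SELECT GET_LOCK('deal_42', 5);

-- ... run the availing queries here ...

SELECT RELEASE_LOCK('deal_42');
```

This locks only the one deal by name rather than the whole table, so users availing different deals don't queue behind each other.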