I have a Raspberry Pi 3 running this project from instructables.com:
Simple and intuitive web interface for your Raspberry Pi
I need to make the buttons function so that only one user at a time can push a button, while many users can view the page. This is to control a pan/tilt web camera where Button Zero pans the camera left, Button One pans the camera right, etc. The Raspberry Pi drives relays that drive the motors in the Pelco camera pan/tilt mount. I can't have one user trying to pan left while another user on a different HTTP connection tries to pan right. There is no log-in to access this Raspberry Pi.
Is there an Apache2 setting to accomplish this? I don't think this can be solved by adding code to the GPIO.php file, or can it? Can I use a semaphore or a global flag to limit button actuation with multiple concurrent viewers?
You need a mutex lock. Here are two quick-and-dirty ways I can suggest.
Lock a local file on the system
Open the file, and lock it for writing.
During this time, if another user attempts to control the camera, your script again tries to open and lock that same file for writing, and the attempt fails with an error.
At that point you know you can't give the second user control.
PHP lock a file for writing
Note: You could use multiple different files for different locks. For example, if you had multiple cameras, you could use multiple lock files, one for each camera.
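A minimal sketch of that idea, assuming it lives in your GPIO.php; the lock-file path and the moveCamera() helper are hypothetical, just to show the flow:

// try to take an exclusive, non-blocking lock before moving the camera
$lock = fopen('/tmp/camera.lock', 'c');            // hypothetical lock-file path
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    // someone else already holds the lock: reject this button press
    http_response_code(409);
    echo 'Camera is busy, another user is in control.';
    exit;
}
// we hold the lock: drive the relays (hypothetical helper)
moveCamera($_GET['direction']);
flock($lock, LOCK_UN);
fclose($lock);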
Use a database like MySQL
Using a database like MySQL you can lock a specific row in a table, effectively doing the same thing as we did with a file in the last example.
If a second user comes along, we again attempt to lock that same row and we will fail; at that point we can reject the second user's request.
Lock a single row in MySQL
Note: You can use multiple rows, where each row represents a different camera, as mentioned above.
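A rough sketch of the row-locking variant with PDO; the camera_locks table and credentials are made up, and FOR UPDATE NOWAIT needs MySQL 8.0 or later (on older versions the second user would wait instead of failing immediately):

$pdo = new PDO('mysql:host=localhost;dbname=cams', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->beginTransaction();
try {
    // FOR UPDATE NOWAIT fails at once if another session holds the row lock
    $pdo->query("SELECT id FROM camera_locks WHERE camera = 'pelco1' FOR UPDATE NOWAIT");
    // ... drive the relays while we hold the lock ...
    $pdo->commit();                 // releases the row lock
} catch (PDOException $e) {
    $pdo->rollBack();
    echo 'Camera is busy, another user is in control.';
}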
Other things to consider
I highly recommend giving your users the ability to see whether they are the current user or not, and implementing a way to fairly switch between users so that a single user can't hog all the fun. Perhaps something as simple as a 15- or 30-second timer that switches control between the currently active users.
Related
"A TEMPORARY table is visible only within the current session, and is dropped automatically when the session is closed."
This fact is causing lots of heartache. I am using CodeIgniter 3 and MySQL on RDS. Creating TEMPORARY tables didn't work because of the above quote, since my multi-user app creates about 6 regular tables for each user that get deleted (dropped in SQL) when they press logoff. But a large number of users will not press logoff in the app, instead pressing the X to close the tab. This leaves 6 extra tables on my RDS server. It is producing a large number of orphan tables (within 24 hours), which is slowing the database server to a crawl.
Is there any way to "catch" when someone closes the app without pressing logout? I am thinking that if I could keep PHP from closing sessions constantly, that might work, but that seems pretty far-fetched. I was then thinking (outside the box) that perhaps an external service like Redis could hold the temporary tables, but being on AWS I am already at my upper limit of what extra services I can afford.
I have tried taking the TEMPORARY tables and making them regular MySQL tables and deleting them when a user logs off. But the issue is that many users will exit via the X on the tab.
After trying a few different ways to solve this, I ended up creating a log where I record these small temporary tables as they are created. Then, when any user logs in, I check the log to see if any tables were created more than two hours ago and use DROP TABLE IF EXISTS to delete them. This should work without creating a cron job. Is 2 hours enough? I sure hope so.
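For what it's worth, the cleanup step can be as simple as the sketch below; $pdo is assumed to be your existing PDO connection, and the table_log name and columns are placeholders, not anything from the app:

// drop any per-user tables that were logged more than two hours ago
$stmt = $pdo->prepare(
    "SELECT table_name FROM table_log WHERE created_at < NOW() - INTERVAL 2 HOUR"
);
$stmt->execute();
foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $table) {
    // table names can't be bound as parameters, so sanitize them before interpolating
    $pdo->exec("DROP TABLE IF EXISTS `" . str_replace('`', '', $table) . "`");
    $pdo->prepare("DELETE FROM table_log WHERE table_name = ?")->execute([$table]);
}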
The hosting package I'm using won't allow SQL, and I'd have to pay more monthly if I want SQL. It's on a shared server, so I can't have a client-server setup for my app.
I'm making a leaderboard for my first ever app.
So I've decided to do everything using PHP. The app will launch a link in the browser which will look something like this:
....../myAppLeaderBoard.php?opt=submit&?id=5465&name?='myName'&dat=DKGHKDHGKHDKGHSAJDHKJAHGJKHDFGHKJDFHGLKHDFGJHSDJLFGHJKSDHFGKJDSHFKGJHSLKDFHGLSJDHFGLJSHDFGJHSLDFJHGLSDJHFGLSDHJFGLSHDFGHG
All those letters are there because I plan on using my own encryption technique to prevent a user from cheating and giving themselves a high score.
When it's submitted, the script will read everything from a text file into an array.
It will check if the user exists; if they do, it will change that record, and if they don't, it will add a new record. Then it will write everything from the array back to the text file.
Then it will display a success message and show the user on the leaderboard. Now, what will happen if, say, 100,000 users each simultaneously submit their scores?
If it reads and writes one record at a time there won't be a problem, but if it does this simultaneously then some records might be lost to a simultaneous write.
So is it done simultaneously or one at a time?
Feel free to give suggestions for a better way to do this.
File access is simultaneous, but you can use flock to block other handles from accessing the file while you perform your read/write operations. That said, the best solution to your problem would probably be to use PDO and SQLite (if available). This way you have the database handle all locking for you, but you do not need a dedicated database server: SQLite is entirely file based.
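A minimal sketch of the SQLite route; the file name and column layout are just for illustration, and $playerId, $playerName and $playerScore are assumed to come from your decoded request:

// SQLite handles the locking for you; concurrent writers simply wait their turn
$db = new PDO('sqlite:' . __DIR__ . '/leaderboard.sqlite');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE IF NOT EXISTS scores (id TEXT PRIMARY KEY, name TEXT, score INTEGER)');

// insert the score, or replace the row if the player already has one
$stmt = $db->prepare('INSERT OR REPLACE INTO scores (id, name, score) VALUES (?, ?, ?)');
$stmt->execute([$playerId, $playerName, $playerScore]);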
If SQLite is not available, you'll want to make use of flock's LOCK_EX operation; this will only allow one write stream to access the file at a time, e.g.:
// create the file handle (open for reading and appending, create it if needed)
$hnd = fopen($destpath, 'ab+');
if ($hnd === false) {
    die('Unable to open the data file');
}
if ($isReadOperation) {
    // shared lock - simultaneous reads can happen at one time
    flock($hnd, LOCK_SH);
    // perform your read operation
} else {
    // exclusive lock - only one write can occur at a time, after all shared locks are released
    flock($hnd, LOCK_EX);
    // perform your read/write operation
}
// release the lock
flock($hnd, LOCK_UN);
// release the file handle
fclose($hnd);
File access (and this applies to SQLite databases as well, as they're file-based) is, unfortunately, not designed to handle many simultaneous read and write operations. Therefore, you will run into problems with that.
I'm afraid your only sensible option is buying a hosting plan that offers a real database, e.g. MySQL.
How does PHP handle multiple requests from users? Does it process them all at once, or one at a time, waiting for the first request to complete and then moving to the next?
Actually, I'm adding a bit of wiki functionality to a static site where users will be able to edit addresses of businesses if they find them inaccurate or if they can be improved. Only registered users may do so. When a user edits a business name, that name along with its other occurrences is changed in different rows in the table. I'm a little worried about what would happen if 10 users were doing this simultaneously. It'd be a real mishmash of things. So does PHP do things one at a time, in the order received, per script (update.php), or all at once?
Requests are handled in parallel by the web server (which runs the PHP script).
Updating data in the database is pretty fast, so any update will appear instantaneous, even if you need to update multiple tables.
Regarding the mishmash: for the DB, handling 10 requests within 1 second is the same as 10 requests within 10 seconds; it won't confuse them and will just execute them one after the other.
If you need to update 2 tables and absolutely need these 2 updates to run together without being interrupted by another update query, then you can use transactions.
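For example, with PDO the two updates can be wrapped like this; $pdo is assumed to be your existing connection, and the table and column names are invented for the sake of the sketch:

$pdo->beginTransaction();
try {
    // both updates succeed or fail together
    $pdo->prepare('UPDATE businesses SET name = ? WHERE id = ?')->execute([$newName, $id]);
    $pdo->prepare('UPDATE listings SET business_name = ? WHERE business_id = ?')->execute([$newName, $id]);
    $pdo->commit();
} catch (PDOException $e) {
    $pdo->rollBack();   // undo the first update if the second one failed
    throw $e;
}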
EDIT:
If you don't want 2 users editing the same form at the same time, you have several options to prevent them. Here are a few ideas:
You can "lock" that record for edition whenever a user opens the page to edit it, and not let other users open it for edition. You might run into a few problems if a user doesn't "unlock" the record after they are done.
You can notify in real time (with AJAX) a user that the entry they are editing was modified, just like on stack overflow when a new answer or comment was posted as you are typing.
When a user submits an edit, you can check if the record was edited between when they started editing and when they tried to submit it, and show them the new version beside their version, so that they manually "merge" the 2 updates.
There probably are more solutions but these should get you started.
It depends on which version of Apache you are using and how it is configured, but a common default configuration uses multiple workers with multiple threads to handle simultaneous requests. See http://httpd.apache.org/docs/2.2/mod/worker.html for a rundown of how this works. The end result is that your PHP scripts may together have dozens of open database connections, possibly sending several queries at the exact same time.
However, your DBMS is designed to handle this. If you are only doing simple INSERT queries, then your code doesn't need to do anything special. Your DBMS will take care of the necessary locks on its own. Row-level locking will be fastest for multiple INSERTs, so if you use MySQL, you should consider the InnoDB storage engine.
Of course, your query can always fail, whether it's due to too many database connections, a conflict on a unique index, etc. Wrap your queries in try/catch blocks to handle this case.
If you have other application-layer concerns about concurrency, such as one user overwriting another user's changes, then you will need to handle these in the PHP script. One way to handle this is to use revision numbers stored along with your data, and refusing to execute the query if the revision number has changed, but how you handle it all depends on your application.
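A rough sketch of that revision-number idea, also wrapped in the try/catch suggested above; the schema and variable names are assumptions, not anything from your app:

try {
    // only apply the edit if nobody has bumped the revision since this user loaded the form
    $stmt = $pdo->prepare(
        'UPDATE businesses
            SET name = :name, revision = revision + 1
          WHERE id = :id AND revision = :revision'
    );
    $stmt->execute([':name' => $newName, ':id' => $id, ':revision' => $revisionFromForm]);
    if ($stmt->rowCount() === 0) {
        // someone else edited the record first: show the newer version instead of overwriting it
    }
} catch (PDOException $e) {
    // connection failures, unique-index conflicts, etc. end up here
}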
For example, if we have a certain PHP file on the server, getProducts.php, does it get interrupted when multiple users request it at the same time?
For example, if one user asks for details about product A, another user about product B, another about product C, and so on, will PHP be interrupted? Or is there some self-managed threading system that handles and responds to each request?
Thank you!
This has, perhaps unexpectedly, little or nothing to do with PHP. It's not PHP that answers the user's request but the web server: for example Apache, Nginx, IIS, and so on.
The web server then routes the call to a PHP instance that is usually independent of any other request being satisfied in that exact moment. The number of concurrent requests depends on the server configuration, architecture, and platform capabilities. So-called "C10K" servers are designed to handle up to ten thousand connections simultaneously.
But PHP is not the only factor in the process that goes from "GET /index.php" to a bunch of HTML; any active page (PHP or ASP or Python etc.) may request further resources from, say, a database. In that case a concurrency problem arises, and whenever two users need to acquire the same resource (a row in a data table, the whole table, a log file...), some sort of semaphore system makes it so that only one of them at a time can acquire a "lock" on that specific resource, and all others must wait for their turn, even if the overlying web server is capable of handling hundreds or thousands of concurrent connections.
Update on performance issues: the same happens within PHP for things such as sessions. Imagine you have a single user requesting a single page, and that page has code that generates ten more calls (to images, pop-ups, ads, AJAX...). The first request opens a session, which is a bunch of data that must remain coherent. So when the other ten calls come by, all bound to the same session, PHP has no way of knowing whether any one of these calls wants to modify session data; it has no recourse but to prevent the second call from proceeding until the first call has released the session lock, and once it does, the second call will block the third, and so on. Take-away point: avoiding session_start() if it is not needed (e.g. replacing it with cryptographically strong GET tokens, or doing without altogether), or calling session_commit() as soon as you are finished modifying $_SESSION's values, will greatly improve performance. (So will using a faster session manager, or one that doesn't take a coarse lock: e.g. Redis.)
For example, in image generation:
session_start();
// Committing right away is what does the magic: it releases the session lock.
session_commit();
// We can still read $_SESSION; we just can't write to it anymore.
// Reading is all we needed the session for.
if (!isset($_SESSION['authorized'])) {
    header('HTTP/1.1 403 Forbidden');
    die();
}
// Here goes the code that generates the image *and sends* it. Had we not
// committed, the session lock would *not* be released until the request
// had been fully processed by the *client*, network slowness included.
// (Things go much better if you use the CGI interface instead of the module.)
In your example and seeing the "WAMP" tags, you have a Windows Apache serving data retrieved from MySQL by PHP, and serving requests on products.
The Apache server will receive hundreds of connections, spin up hundreds of instances of the PHP module (they'll share most of their code, so memory occupation doesn't go up disastrously fast), and then all these instances will ask MySQL, "What about product XYZ?". In MySQL parlance, they will try to obtain a READ LOCK. A read lock means something like, "I'm reading this thing, so please none of you dare write to it until I'm finished". But all of them are just reading, so they will all succeed -- concurrently.
So no, there will be no stops -- at least not at this point.
But suppose you also want to update a counter of product views. Then every PHP instance also needs a WRITE LOCK, which means, "I want to write on this thing, so none of you read until I'm finished or you'll risk reading half-baked data, and of course none of you write here while I'm going at it".
At this point, the table type counts. MyISAM tables have table locking: if the instance updating product A's statistics is writing on product_views, no other instance will be able to do anything with that whole table. They will all queue and wait. If the table is InnoDB, the lock is at row level - all instances updating product A will queue one after the other, parallel to those updating product B, C, D and so on. So if all instances are writing to different records, they'll run in parallel.
That's why you really want to use InnoDB tables in these cases.
Of course, if you have a record such as "page visits" and all instances are updating the row for "product-page.php", you have a bottleneck right there, and in the case of a high-traffic site you'd do well to design some other way of writing that information. One of many workarounds is to store it in a shared memory location; every now and then, one of the many instances accessing it gets the task of saving the information to the database. The instances still compete for the lock on the memory, but that's orders of magnitude faster than competing for a database transaction.
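One of the many possible shapes of that workaround, sketched here with APCu as the shared memory; it assumes the APCu extension is available, and the key, table and column names are made up:

// count the view in shared memory instead of hitting the database on every request
apcu_add('views:product-page', 0);      // create the counter if it doesn't exist yet
apcu_inc('views:product-page');

// roughly one request in a thousand gets the job of flushing the counter to the database
if (mt_rand(1, 1000) === 1) {
    $count = apcu_fetch('views:product-page');
    // compare-and-swap so concurrent requests don't flush the same views twice
    if ($count > 0 && apcu_cas('views:product-page', $count, 0)) {
        $pdo->prepare('UPDATE page_views SET hits = hits + ? WHERE page = ?')
            ->execute([$count, 'product-page.php']);
    }
}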
If you are using Apache, it's a concurrent system. That means each request will be handled in parallel, so your PHP script will not be interrupted.
A very flowery title indeed.
I have a PHP web application that is in the form of a web based wizard. A user can run through the wizard and select options, run process (DB queries) etc. They can go backwards and forwards and run process again and again.
I am trying to work out how to best save the state of what users do/did, what process they ran etc. So basically a glorified log that I can pull up later.
How do I save these states or sessions? One option which is being considered by my colleague is using an XML file for each session and to save everything there. My idea is to use a database table to do this.
There are pros and cons for each, and I was hoping I could get answers on which option to go for. Suggestions of other feasible options would be great! Or what kind of questions should I ask myself to choose the right implementation?
Technologies Currently Used
Backend: PHP and MS SQL Server, running on Windows Server 2005
FrontEnd: HTML, CSS, JavaScript (JQuery)
Any help will be greatly appreciated.
EDIT
There will be only one/two/three users per site where this system will be launched. Each site will not be connected in any way. The system can have about 10 to 100 sessions per month.
Using a database is probably the way to go. Just create a simple table that tracks actions by session id. Don't index anything, as you want inserting rows to be a low-cost operation (you can create a temp table, add indexes, and run reports on it later).
XML files could also work -- you'd want to write a separate file for each session id -- but doing analysis will probably be much more straightforward if you can leverage your database's feature set.
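If it helps, the table really can be that simple. A sketch against your SQL Server backend via PDO; the wizard_log name, its columns, and $selectedOptions are placeholders, and the session is assumed to already be started:

// once, to create the log table:
// CREATE TABLE wizard_log (session_id VARCHAR(64), action VARCHAR(255),
//                          detail VARCHAR(MAX), logged_at DATETIME DEFAULT GETDATE());

// on every step the user takes:
$pdo->prepare('INSERT INTO wizard_log (session_id, action, detail) VALUES (?, ?, ?)')
    ->execute([session_id(), 'ran_query', json_encode($selectedOptions)]);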
If you're talking about a large number of users performing their operations simultaneously, and you want to trace their steps, I think it's better to go for a database-oriented approach. The database server can optimize data flow and disk writes, leading to better concurrent performance than constantly writing files to disk. You really should stress-test the system, whichever option you choose, to make sure performance does not suffer under a big load.