I'm using functions for logging in a user. When a login fails, whether because no CAPTCHA was sent, the CAPTCHA failed, or the credentials were wrong, the user's IP gets a "try". When they reach 5 tries, they get blocked from the login page for approximately 1 hour. I have a function that updates the MySQL column to increment their try count and record the last try date. But PHP's documentation states:
Note: The increment/decrement operators only affect numbers and
strings. Arrays, objects and resources are not affected.
My function gets the try count from the database and then tries updating it. The SQL result for fetching the try count is by default an array, because of how PDO works. So how can I efficiently increment an array?
I was thinking of doing a foreach loop and using the .= operator to save the value to a string, then incrementing it from there. But is that really the most efficient way?
Thank you.
P.S.: I'm not showing any example code etc. because this question is simple enough. I have searched around on here and couldn't find a proper answer.
To understand why your question is wrong, you have to understand what an array is.
An array is just a "bag" that holds other variables. So your question sounds like "How can I pay for two beers with my pocket?" The thing is, you can't pay with a pocket; you have to take the cash out of the pocket and then use that cash.
Exactly the same goes for arrays: you have to extract the returned data from the array, and then you are free to perform any operation on its contents. On the contents, remember, not on the bag.
But for the efficient solution, go for the other answer, which solves your actual problem the right way, without the need to select any arrays at all.
And just a side note
MySQL result for fetching the Try count is by default an Array because of how PDO works.
As a matter of fact, PDO can work in many different ways. For example, it can return scalar values just fine.
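For instance, fetchColumn() fetches a single column as a plain scalar. A minimal sketch, assuming a PDO connection in $pdo and a login_tries table with ip and tries columns (hypothetical names):

<?php
// fetchColumn() returns a single scalar value, not an array
$stmt = $pdo->prepare('SELECT tries FROM login_tries WHERE ip = ?');
$stmt->execute(array($_SERVER['REMOTE_ADDR']));
$tries = (int) $stmt->fetchColumn();
$tries++; // a plain integer now, so the increment operator works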
You can increment it in an UPDATE query directly. When you want to add a try, simply:
UPDATE `tries` SET `tries` = `tries` + 1 WHERE `ip` = '127.0.0.1';
Just replace the IP with the actual IP.
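In PHP that could look like the following sketch, assuming a PDO connection in $pdo; the tries table and the last_try column are placeholders matching the query above, not necessarily your schema:

<?php
// Increment the try count and record the try date in one statement,
// without ever selecting the current value into PHP.
$stmt = $pdo->prepare(
    'UPDATE tries SET tries = tries + 1, last_try = NOW() WHERE ip = ?'
);
$stmt->execute(array($_SERVER['REMOTE_ADDR']));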
Just to add:
IMO you should be using a separate table for incorrect login attempts. There are many reasons for this, but one of the most important is that an attack is likely to rotate usernames, not only passwords.
Having a separate table that records all incorrect logins makes it much easier to query for the number of incorrect logins in a given time window. Tying incorrect logins to a user limits your ability to detect DoS and brute-force attacks from scripted sources, because you can only look at the attempted username if it actually existed in the first place.
However, you can relate a field in the table to the user's ID so that you can track users independently; then, on successful login, the records relating to that user can be deleted.
To give you a working example, I have built the following functionality into the commercial Symfony project that I work on daily.
Table example:
userID --- foreign key (not mandatory)
IP --- mandatory
timestamp --- mandatory
We query the data like this:
Overall failed attempts for a particular subdomain (we have lots of them in use on the same system):
the system is used in schools, so we have to cater for naughty students!
Overall failed attempts in the last minute:
the system sleeps for a random time based on a base value multiplied by the attempt count (a slightly hacky way to trip up scripted attacks).
Overall attempts for a particular user:
similar to your example; the count is compared against preconfigured limits, and users are warned/disabled accordingly. If it blocks, an email is sent to the helpdesk team.
This is by no means a definitive list, or an example of what should be done; it's merely what we decided given our application's circumstances.
The point is, without a separate table much of this wouldn't be possible.
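To sketch the idea (MySQL syntax; table and column names are illustrative, not exactly what we use):

-- the attempts table described above
CREATE TABLE failed_logins (
    id        INT AUTO_INCREMENT PRIMARY KEY,
    userID    INT NULL,                 -- foreign key, not mandatory
    ip        VARCHAR(45) NOT NULL,
    attempted TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- overall failed attempts in the last minute
SELECT COUNT(*) FROM failed_logins
WHERE attempted > NOW() - INTERVAL 1 MINUTE;

-- overall attempts for a particular user
SELECT COUNT(*) FROM failed_logins WHERE userID = 123;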
I believe that in PHP, whenever a user sends a request to a backend PHP page, a one-to-one communication is started; that is, a new instance of that page is created and executed for that user's request.
My question: since a new instance is created each time, I want to create a PHP script that is shared among all instances.
For example, I want to store a few hundred random numbers in that script (let's name it pool.php, a static pool), and each time a request to the backend page (let's say BE.php) is made, BE.php asks pool.php to return a unique variable. Once all the variables are used up, I will put logic in pool.php to create a new set of variables.
If my question is not clear, please let me know.
Memcached is a good candidate for this.
It is a key/value store that persists while PHP processes come and go. You can write values from one PHP script and read them from another. This isn't exactly what you are looking for, but it can be used for the same purpose, and it is much easier to deal with than sockets connecting to other PHP scripts.
http://memcached.org/
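A minimal sketch of the idea in PHP, assuming the memcached daemon is running on localhost:11211 and the php-memcached extension is installed:

<?php
$m = new Memcached();
$m->addServer('localhost', 11211);

// one request writes the pool...
$m->set('random_pool', array(4, 8, 15, 16, 23, 42));

// ...and a later request (a different PHP instance) reads it back
$pool = $m->get('random_pool');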
You could solve this with MySQL and locking. Keep the pool of variables in a separate table, then use a MySQL named (advisory) lock to hold off other requests until the current request is finished, by using:
SELECT GET_LOCK( 'my_lock', 10 ) AS success
Make sure to check that the query returns 1, which means you now hold the lock. If it doesn't, your query timed out waiting for the lock.
Then perform your ordinary queries, like checking whether an unoccupied variable exists. If one does, occupy it by updating it, or whatever suits your scheme.
Then you release the lock, using:
DO RELEASE_LOCK( 'my_lock' )
The number 10 is the timeout in seconds that each request will wait for the lock before failing.
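Wrapped in PHP, the pattern could look like this sketch (assuming a PDO connection in $pdo; the pool-handling queries themselves are omitted):

<?php
// block for up to 10 seconds waiting for the named lock
$got = $pdo->query("SELECT GET_LOCK('my_lock', 10)")->fetchColumn();
if ($got != 1) {
    die('Timed out waiting for the lock');
}

// ... safely pick an unoccupied variable and mark it occupied ...

$pdo->query("DO RELEASE_LOCK('my_lock')");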
Tarun, you do know that databases have AUTO_INCREMENT fields that can be used as primary keys for your user comments? Every time you add a new comment, that field gets incremented by the DB server, and you get a unique ID on every new entry without breaking a sweat.
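For reference, a minimal MySQL sketch (table and column names are made up):

CREATE TABLE comments (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    body TEXT NOT NULL
);

INSERT INTO comments (body) VALUES ('First!');   -- gets id 1
INSERT INTO comments (body) VALUES ('Second');   -- gets id 2, atomically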
The only viable way for your need is using a database and some kind of mutex, or MySQL's internal mutex as John Severinson said, if the AUTO_INCREMENT field does not suffice.
PS: Performance overhead, when talking about PHP scripting, is kind of a non-issue. If you need raw performance, write your sites in C/C++. You are talking about milliseconds (0.001 seconds) or less. If that impacts your performance, you need to revisit your project's logic.
I have a JavaScript client that sometimes sends two ajax requests within milliseconds of each other to the (PHP) server. (It's a JavaScript bug that I have no control over from the client side; I only have control over the server side.)
The first request checks if a voucher already exists in the database (given a couple of parameters, e.g. customer id). If the voucher already exists, it just re-uses it and updates its value; if it doesn't, it creates a new one from scratch.
The problem is that before the first request has finished checking whether the voucher exists, the second request comes in and checks as well; at that point the first hasn't created the voucher yet.
So, long story short, we end up with two duplicate vouchers (and the database doesn't restrict the voucher name to be unique; I have no control over the database either).
So how do I prevent the second ajax request from doing anything until the first has done its thing?
Keep in mind that the two requests are two different threads, so any $isVoucherCreationInProgress variable would be useless; the second call would be completely oblivious to it.
Ideas?
If the two ajax requests come from the same client, make a lock-like system on the server side: when a request starts checking for a voucher's existence, set a session variable until it finishes what it has to do. When the second request comes in, it first checks that session variable, and if someone else is already working on the voucher, it finishes immediately with a "check later" message. On the client side, when a request comes back denied, simply resend it after a one-second delay, to be sure nobody is still "working".
Hope this helps.
Or you could look at this question on how to build a mutex in PHP: PHP mutual exclusion (mutex)
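A rough sketch of the session-based lock described above (the voucher lookup/creation itself is omitted, and 'voucher_in_progress' is just an illustrative key name):

<?php
session_start();

if (!empty($_SESSION['voucher_in_progress'])) {
    // a second request arrived while the first is still working
    echo 'busy'; // the client retries after a short delay
    exit;
}

$_SESSION['voucher_in_progress'] = true;
session_write_close(); // release the session file lock while we work

// ... check for / create the voucher here ...

session_start();
unset($_SESSION['voucher_in_progress']);
session_write_close();

Note that PHP's default file-based session handler already serializes concurrent requests sharing a session while the session is open, which is what makes setting the flag before session_write_close() safe.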
Personally, I would think of a very simple method. On your server you must have a procedure that creates the voucher. Keep a global array, and just before creating the voucher, add an entry keyed by the voucher's id (key => value, where the key is the id of the voucher and the value is a status such as "creating"). After creating the voucher, remove the entry, using the id of the voucher as the key.
Now, every time, just before creating a voucher, simply check whether the key already exists in the global array. If it does, with value "creating", then the voucher is in fact already being created, so you exit.
Hope it helps, :-)
Use transactions. If you really can't touch the database (not even run your own statements), you can use STM or the like. It wouldn't be too hard with locks either, but either way requires that your application runs continuously. You can run a server with software like phpdaemon and forward a specific path to that server to get that continuity.
I understand that you create a new row in one table of your database.
You should add a uniqueness constraint so that you can't add it twice. Is it possible that you have to create several vouchers? Could you give more info on this?
Regarding the update, you should add a "version" field to your row. The client side needs to send the correct version number to update the row, which avoids the problem of unwanted concurrent updates. This is a best practice with ORMs; you can read more by searching for "optimistic locking" (or "optimistic update").
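A sketch of such an optimistic update, assuming a version column on the row and a PDO connection in $pdo (all names are placeholders):

<?php
$stmt = $pdo->prepare(
    'UPDATE vouchers SET value = ?, version = version + 1
      WHERE id = ? AND version = ?'
);
$stmt->execute(array($newValue, $voucherId, $versionReadEarlier));

if ($stmt->rowCount() === 0) {
    // someone updated the row first: reload it and retry, or warn the user
}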
Since you have no control over the db, create a cache of requests (i.e. a static object) on your server, and only create/update a row if there is nothing for this customer (plus other parameters if needed) in your cache (like this one, for example: http://www.php.net/manual/en/book.memcache.php). Your cache should clean itself after a while (I guess there are cache solutions in PHP).
Another idea (ugly, but it seems you are very limited in your options): just make it slower. Wait long enough to make sure no one else is working, using a loop that checks and undoes the change if needed, with some randomness so the requests converge.
You can either set a flag (in JavaScript) when your ajax request starts, checking it and returning if it is already set, or you can change your AJAX request to be synchronous.
I have a LAPP (Linux, Apache, PostgreSQL and PHP) environment, but the question is pretty much the same for both Postgres and MySQL.
I have a CMS app I developed that handles clients, documents (estimates, invoices, etc.) and other data, structured in one Postgres DB with many schemas (one for each customer using the app); let's assume around 200 schemas, each of them used concurrently by 15 people on average.
EDIT: I have a timestamp field named last_update on every table, and a trigger that updates that timestamp every time the row is updated.
The situation is:
Users Foo and Bar are editing document 0001, using a form with all the document details.
Foo changes the shipment details, for example.
Bar changes the phone numbers and some items in the document.
Foo presses the "Save" button; the app updates the db.
Bar presses the "Save" button after Foo, resending the form with the old shipment details.
In the database, Foo's changes have been lost.
The situation I want to have:
Users Foo, Bar, John, Mary and Paoul are editing document 0001, using a form with all the document details.
Foo changes the shipment details, for example.
Bar and the others change something else.
Foo presses the "Save" button; the app updates the db.
Bar and the others get an alert: "Warning! This document has been changed by someone else. Click here to load the actual data."
I've considered using ajax for this: simply use a hidden field with the id of the document and the last-updated timestamp, check every 5 seconds whether the last-updated time is still the same, and do nothing if so; otherwise, show the alert dialog box.
So, the page check-last-update.php should look something like:
<?php
// [connect to db, postgres or mysql]
$documentId     = isset($_POST['document-id'])      ? $_POST['document-id']      : 0;
$lastUpdateTime = isset($_POST['last-update-time']) ? $_POST['last-update-time'] : 0;

// in real life I sanitize the data and use prepared statements
$qr = pg_query("
    SELECT last_update_time
    FROM documents
    WHERE id = '$documentId'
");
$row = pg_fetch_assoc($qr);

if ($row['last_update_time'] > $lastUpdateTime) {
    // someone else updated the document since I opened it!
    echo 'reload';
} else {
    echo 'ok';
}
?>
But I don't like stressing the db every 5 seconds for every user who has one (or more) documents open.
So, what could be another efficient solution that doesn't nuke the db?
I thought about using files: create, for example, an empty txt file for each document, and every time the document is updated, "touch" the file, updating its "last modified time" as well. But I guess this would be slower than the db and cause problems when many users are editing the same document.
If someone has a better idea or any suggestion, please describe it in detail!
* - - - - - UPDATE - - - - - *
I have definitely chosen NOT to hit the db to check the "last update timestamp". Never mind that the query would be pretty fast; the (main) database server has other tasks to fulfill, and I don't like the idea of increasing its load for this.
So, I'm taking this route:
Every time a document is updated by someone, I must record the new timestamp outside the db environment, i.e. without asking the db. My ideas are:
File system: for each document I create an empty txt file named after the id of the document, and every time the document is updated, I "touch" the file. I'm expecting to end up with thousands of those empty files.
APC, the PHP cache: this would probably be more flexible than the first option, but I'm wondering whether keeping thousands and thousands of entries permanently in APC would slow down PHP execution itself or consume the server's memory. I'm a little afraid to choose this route.
Another db, SQLite or MySQL (which are faster and lighter with simple db structures), used to store just the document IDs and timestamps.
Whichever way I choose (files, APC, sub-db), I'm seriously considering another web server (lighttpd?) on a subdomain to handle all those long-polling requests.
YET ANOTHER EDIT:
The file approach wouldn't work.
APC can be the solution.
Hitting the DB could be the solution too, creating a table just to handle the timestamps (with only two columns, document_id and last_update_timestamp) that needs to be as fast and light as possible.
Long polling: that's the way I'll go, using lighttpd alongside apache to serve static files (images, css, js, etc.) and just this type of long-polling request; this will lighten the apache2 load, especially for the polling.
Apache will proxy all those requests to lighttpd.
Now I only have to decide between the db solution and the APC solution...
P.S.: thanks to everyone who has already answered me, you have been really useful!
I agree that I probably wouldn't hit the database for this. I suppose I would use the APC cache (or some other in-memory cache) to maintain this information. What you are describing is clearly optimistic locking at the detailed record level. The higher the level in the database structure, the less you need to deal with; it sounds like you want to check multiple tables within a structure.
I would maintain a cache (in APC) of the IDs and the last-updated timestamps, keyed by table name. So, for example, I might have an array per table name where each entry is keyed by ID and the value is the last-updated timestamp. There are probably many ways to set this up with arrays or other structures, but you get the idea. I would probably add a timeout to the cache so that entries are removed after a certain period of time; i.e., I wouldn't want the cache to grow on the assumption that day-old entries are still useful.
With this architecture you would need to do the following (in addition to setting up APC):
on any update to any (applicable) table, update the APC cache entry with the new timestamp;
within ajax, only go as far "back" as PHP (to check the entry in the APC cache) rather than all the way "back" to the database.
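A sketch of that cache layer with APC (the key scheme and function names are made up; the one-day TTL implements the expiry mentioned above):

<?php
// called from the save handler on every update
function mark_updated($table, $id) {
    apc_store("lastmod:$table:$id", time(), 86400);
}

// called from the ajax endpoint instead of querying the database
function last_updated($table, $id) {
    return apc_fetch("lastmod:$table:$id"); // false if not cached
}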
I think you can use a condition in the UPDATE statement, like WHERE ID=? AND LAST_UPDATE=?.
The idea is that you will only succeed in updating when you are the last one to have read that row. If someone else has committed something in the meantime, you will fail, and once you know you've failed, you can query for the changes.
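A sketch of that pattern with PDO (connection assumed in $pdo; column names are illustrative):

<?php
$stmt = $pdo->prepare(
    'UPDATE documents SET shipment = ?, last_update = NOW()
      WHERE id = ? AND last_update = ?'
);
$stmt->execute(array($shipment, $docId, $stampWhenLoaded));

if ($stmt->rowCount() === 0) {
    // the row changed since it was read: fetch the fresh copy and warn the user
}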
Hibernate uses a version field for this. Give every table such a field and use a trigger to increment it on every update. When storing an update, compare the current version with the version from when the data was read earlier. If they don't match, throw an exception. Use transactions to make the check-and-update atomic.
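In PostgreSQL (the asker's database) such a trigger could look like this sketch (names are illustrative):

ALTER TABLE documents ADD COLUMN version integer NOT NULL DEFAULT 0;

CREATE FUNCTION bump_version() RETURNS trigger AS $$
BEGIN
    NEW.version := OLD.version + 1;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER documents_bump_version
BEFORE UPDATE ON documents
FOR EACH ROW EXECUTE PROCEDURE bump_version();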
You will need some kind of version-stamp field on each record. What it is doesn't matter, as long as you can guarantee that making any change to a record results in that version stamp changing. Best practice is then to check that the loaded record's version stamp matches the one in the DB when the user clicks save, and to handle the conflict if they differ.
How you handle it is up to you. At the very least you'd want to offer a reload from the DB so the user can verify that they still want to save. One step up from that would be to attempt to merge their changes into the new DB record and then ask them to verify that the merge worked correctly.
If you want to poll periodically, any DB capable of handling your system should be able to take the polling load. 10 users polling once every 5 seconds is 2 transactions per second. This is a trivial load and should be no problem at all. To keep the average load close to the actual load, just jitter the polling interval slightly (instead of polling exactly every 5 seconds, poll every 4-6 seconds, for example).
Donnie's answer (polling) is probably your best option: simple, and it works. It'll cover almost every case (it's unlikely a simple PK lookup would hurt performance, even on a very popular site).
For completeness, if you want to avoid polling, you can use a push model. There are various approaches described in the Wikipedia article. If you can maintain a write-through cache (every time you update the record, you update the cache), you can almost completely eliminate the database load.
Don't use a "last_updated" timestamp column, though. Edits within the same second aren't unheard of. You could get away with it if you added extra information (the server that did the update, remote address, port, etc.) to ensure that if two requests came in during the same second, to the same server, you could still detect the difference. If you need that precision, though, you might as well use a unique revision field (it doesn't necessarily have to be an incrementing integer, just unique within that record's lifespan).
Someone mentioned persistent connections; these would reduce the setup cost of the polling queries (every connection naturally consumes resources on the database and the host machine). You would keep a single connection (or as few as possible) open all the time (or as long as possible) and reuse it, in combination with caching and memoization if desired.
Finally, there are SQL statements that let you add a condition to an UPDATE or INSERT. My SQL is really rusty, but I think it's something like UPDATE ... WHERE .... To match this level of protection otherwise, you would have to do your own row locking before sending the update (with all the error handling and cleanup that might entail). It's unlikely you'd need this; I'm just mentioning it for completeness.
Edit:
Your solution sounds fine (cache the timestamps, proxy the polling requests to another server). The only change I'd make is to update the cached timestamps on every save, which keeps the cache fresher. I'd also check the timestamp directly against the db when saving, to prevent a save from sneaking in due to stale cache data.
If you use APC for caching, then a second HTTP server doesn't make sense: you'd have to run it on the same machine (APC uses shared memory), so the same physical machine would be doing the work, but with the additional overhead of a second HTTP server. If you want to offload the polling requests to a second server (lighttpd, in your case), it would be better to set up lighttpd in front of Apache on a second physical machine and use a shared caching server (memcached) so that the lighttpd server can read the cached timestamps and Apache can update them. The rationale for putting lighttpd in front of Apache is, if most requests are polling requests, to avoid the heavier-weight Apache process usage.
You probably don't need a second server at all, really. Apache should be able to handle the additional requests. If it can't, then I'd revisit your configuration (specifically the directives that control how many worker processes you run and how many requests they are allowed to handle before being killed).
Your approach of querying the database is the best one. If you do it every 5 seconds and you have 15 concurrent users, you're looking at roughly 3 queries per second. It should be a very small query too, returning only one row of data. If your database can't handle 3 transactions per second, you might have to look at a better database, because 3 queries per second is nothing.
Timestamp the records in the table so you can quickly see whether anything has changed without having to diff each field.
This is slightly off topic, but you can use the PECL package xdiff to give the user good guidance when you do get a collision.
First, only write the fields that have actually changed when updating the database; this will decrease the database load.
Second, query the timestamp of the last update; if the client holds an older timestamp than the current version in the database, throw the warning to the client.
Third, push this information to the client through some kind of persistent connection to the server, enabling a concurrent two-way connection.
Polling is rarely a nice solution.
You could do the timestamp check only when the user (with the document open) is actively doing something with it, like scrolling, moving the mouse over it, or starting to edit. The user then gets an alert if the document has been changed.
I know it's not what you asked for, but... why not an edit singleton?
The singleton could be a userID column in the document table.
If a user wants to edit the document, the document is locked against edits by other users.
Or have edit singletons on individual fields/groups of information.
Only one user can edit the document at a time. If another user has the document open and wants to edit it, a single timestamp check reveals that the document has been altered, and it is reloaded.
With a singleton there is no polling, and there is only one timestamp check, when the user "touches" and/or wants to edit the document.
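Claiming the lock can be done atomically in one statement; a sketch (locked_by being the hypothetical userID column mentioned above):

UPDATE documents
   SET locked_by = 42               -- the editing user's id
 WHERE id = 1 AND locked_by IS NULL;
-- 1 row affected: you hold the edit lock; 0 rows: someone else is editing.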
But perhaps a singleton mechanism doesn't fit your system.
Regards
Sigersted
Ahh, I thought it was easier.
So, let's get to the point: I have a generic database (pgsql or mysql, it doesn't matter) that contains many generic objects.
I have $x exact copies of this database (currently $x = 200, but it's growing; hopefully it will reach 1000 soon), and for each of them up to 20 users (10 on average) for 9 hours a day.
If one of those users is viewing a record, any record, I must warn him if someone edits the same record.
Let's say Foo is viewing document 0001 and steps away for a coffee; Bar opens and edits the same document; when Foo comes back he must see "Warning, someone else edited this document! Click here to refresh the page."
That's all I need at the moment. I'll probably extend this later, adding a way to see the changes and roll back, but that's not the point now.
Some of you suggested checking the "last update" timestamp only when Foo tries to save the document. That could be a solution too, but I need something close to real time (within about a 10-second delay).
Long polling may be a bad way, but it seems to be the only one.
So, here's what I've done:
Installed lighttpd on my machine (with php5 as FastCGI);
Loaded apache2's proxy module (all of it, or a 403 error will hit you);
Changed the lighttpd port from 80 (which is used by apache2) to 81;
Configured apache2 to proxy requests from mydomain.com/polling/* to polling.mydomain.com (served by lighttpd).
Now I have another sub HTTP service that I'll use both for polling and for serving static content (images, etc.), in order to reduce apache2's load.
Because I don't want to nuke the database with the timestamp checks, I've tried some cache systems (callable from PHP):
APC: quite simple to install and manage, very lightweight and fast; this would be my first choice, if only the cache were shareable between two CGI processes (I need to store a value in the cache from apache2's PHP process and read it from lighttpd's PHP process).
Memcached: around 4-5 times slower than APC, but it runs as a single process that can be reached from anywhere in my environment. I'll go with this one for now (even if it's slower, my use of it is relatively simple).
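So the polling endpoint served by lighttpd could be reduced to a sketch like this (assuming the php-memcached extension; the save handler under Apache would do $m->set("doc:$id", time()) on every update, and the key scheme is made up):

<?php
$m = new Memcached();
$m->addServer('localhost', 11211);

$id    = (int) $_POST['document-id'];
$stamp = (int) $_POST['last-update-time'];

// no database involved: just compare against the cached timestamp
echo ((int) $m->get("doc:$id") > $stamp) ? 'reload' : 'ok';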
Now I just have to load some test data into the system, see how it behaves under pressure, and optimize it.
I suppose this setup will work for other long-polling situations too (chat?).
Thanks to everyone who lent me an ear!
I suggest: when you first query the record that might change, hang onto a local copy. When updating, compare the copy in the locked table/row against your copy, and if it has changed, kick it back to the user.