I need to use a batchId in one of my projects; one or more rows can share a single batchId. So when I insert a batch of 1000 rows from a single user, I give those 1000 rows a single batchId, which is the next auto-increment batchId.
Currently I maintain a separate database table, unique_ids, and store the last batchId there.
Whenever I need to insert a batch of rows into the table, I increment the batchId in the unique_ids table by 1 and use it for the batch insertion.
update unique_ids set nextId = nextId + 1 where `key` = 'batchId';
select nextId from unique_ids where `key` = 'batchId';
I call a function which fires the above two queries and returns the nextId for the batch (batchId).
Here is my PHP class and the function call for the same. I am using ADODB; you can ignore the ADODB-related code.
class UniqueId
{
static public $db;
public function __construct()
{
}
static public function getNextId()
{
self::$db = getDBInstance();
$updUniqueIds = "Update unique_ids set nextId = nextId + 1 where `key` = 'batchId'";
self::$db->EXECUTE($updUniqueIds);
$selUniqueId = "Select nextId from unique_ids where `key` = 'batchId'";
$resUniqueId = self::$db->EXECUTE($selUniqueId);
return $resUniqueId->fields['nextId'];
}
}
Now whenever I require the next batchId, I just call the line of code below.
`$batchId = UniqueId::getNextId();`
But the real problem is that when there are hundreds of simultaneous requests in a second, it gives the same batchId to two different batches. This is a serious issue for me and I need to solve it.
Please suggest what I should do. Can I restrict this class to a single instance so that simultaneous requests cannot call this function at the same time, and it never gives the same batchId to two different batches?
Have a look at atomic operations or transactions. They will lock the database and only allow one write query at any given instant in time.
This might affect your performance, since other users now have to wait for the database to be unlocked!
I am not sure what sort of support ADODB provides for atomicity though!
Basic concept is:
Acquire Lock
Read from DB
Write to DB with new ID
Release Lock
If a lock is already acquired, the script will be blocked (busy waiting) until it is released again. But this way you are guaranteed no data hazards occur.
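One concrete way to realize this pattern in MySQL is an application-level named lock via GET_LOCK()/RELEASE_LOCK(). A minimal sketch against the unique_ids table from the question (the lock name and timeout are arbitrary choices, not part of the original code):

SELECT GET_LOCK('batch_id_lock', 10);                               -- acquire: blocks up to 10 seconds, returns 1 on success
SELECT nextId FROM unique_ids WHERE `key` = 'batchId';              -- read the current value
UPDATE unique_ids SET nextId = nextId + 1 WHERE `key` = 'batchId';  -- write the new ID
SELECT RELEASE_LOCK('batch_id_lock');                               -- release so the next request can proceed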
Begin tran
Update
Select
Commit
That way the update lock prevents two concurrent runs from pulling the same value.
If you select first, the shared lock will not isolate the two.
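Applied to the unique_ids table from the question, that sequence could look roughly like this (a sketch, assuming the table uses InnoDB, since MyISAM ignores transactions):

START TRANSACTION;
UPDATE unique_ids SET nextId = nextId + 1 WHERE `key` = 'batchId';  -- takes an exclusive row lock
SELECT nextId FROM unique_ids WHERE `key` = 'batchId';              -- reads the value just written
COMMIT;                                                             -- releases the lock for the next request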
Related
I have been given the task of creating a "Mass Crawler" which relies entirely on proxies stored in a database. Here's a simple overview of what I'm attempting to achieve:
1 x CronJob Bootstrap file - This is the file which sends 50 parallel curl requests to the individual crawler file
1 x Individual Crawler file - This is supposed to grab a UNIQUE row (proxy) from the database which another process hasn't selected.
I've had a look into TRANSACTIONS with MySQL, but I still believe this wouldn't help, as the query would be executed at the exact same time by each individual crawler process.
Here's roughly the idea I had in my head for the individual crawler file:
$db = new MysqliDb("localhost", "username", "password", "database");
$db->connect();
$db->startTransaction();
$db->where("last_used", array("<" => "DATE_SUB(NOW(),INTERVAL 30 SECOND)"));
$proxies = $db->get("proxies", 1);
if (count($proxies) == 1) {
    // complete any scraping that needs to be done
    // update the database to say the proxy has just been used
    $db->where("id", $proxies[0]['id']);
    $db->update("proxies", array("last_used" => date("Y-m-d H:i:s")));
    // commit the complete transaction
    $db->commit();
}
$db->disconnect();
Would the above example be the correct way to use the MySQL TRANSACTION feature and ensure that ALL parallel queries select different rows?
You need a column in the table that indicates that the row is in use by one of the crawler processes. Your first SELECT should look for WHERE in_use = 0; it needs to use the FOR UPDATE clause to lock the rows that are processed, though.
SELECT *
FROM proxies
WHERE in_use = 0
LIMIT 1
FOR UPDATE;
I don't know how to write that query with the DB API you're using; you may need to use its function for performing raw queries.
Then update that row to SET in_use = 1. By doing both operations in one transaction, you ensure that no other process will get that row.
When it's done processing the row, it can SET in_use = 0.
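Put together, the claim/release cycle could look roughly like this (a sketch using a MySQL user variable to carry the id between statements; the in_use column is the one proposed above, and the table is assumed to be InnoDB):

START TRANSACTION;
SELECT @id := id FROM proxies WHERE in_use = 0 LIMIT 1 FOR UPDATE;  -- locks the chosen row
UPDATE proxies SET in_use = 1 WHERE id = @id;                       -- claim it
COMMIT;                                                             -- releases the row lock but keeps the claim
-- ... do the crawling with that proxy outside the transaction ...
UPDATE proxies SET in_use = 0, last_used = NOW() WHERE id = @id;    -- hand it back when finished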
I have a process that selects the next item to process from a MySQL InnoDB table based on some criteria. When a row has been selected as the next to process, its processing field is set to 1 while the processing happens outside the database. I do this so that many processors can be run at once, and they won't process the same row.
If I use transactions to execute the following queries, are they guaranteed to be executed together (i.e. without any other MySQL connection executing queries in between)? If they are not, then multiple processors could get the same id from the SELECT query and the processing would be redundant.
Pseudo Code Example
Prepare Transaction...
$id = SELECT id
FROM companies
WHERE processing = 0
ORDER BY last_crawled ASC
LIMIT 1;
UPDATE companies
SET processing = 1
WHERE id = $id;
Execute Transaction
I've been struggling to accomplish this fast enough using a single UPDATE query (see this question). Assume that is not an option for the purposes of this question.
You still have a possibility of a race condition, even though you execute the SELECT followed by the UPDATE in a single transaction. SELECT by itself does not lock anything, so you could have two concurrent sessions both SELECT and get the same id. Then both would attempt to UPDATE, but only one would "win" - the other would have to wait.
To get around this, use the SELECT...FOR UPDATE clause, which creates a lock on the rows it returns.
Prepare Transaction...
$id = SELECT id
FROM companies
WHERE processing = 0
ORDER BY last_crawled ASC
LIMIT 1
FOR UPDATE;
This means that the lock is created as the row is selected. This is atomic, which means no other session can sneak in and get a lock on the same row. If they try, their transaction will block on the SELECT.
UPDATE companies
SET processing = 1
WHERE id = $id;
Commit Transaction
I changed your "execute transaction" pseudocode to "commit transaction." Statements within a transaction execute immediately, which means they create locks and so on. Then when you COMMIT, the locks are released and any changes are committed. Committed means they can't be rolled back, and they are visible to other transactions.
Here's a quick example of using mysqli to accomplish this:
$mysqli = new mysqli(...);
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); /* throw exceptions on error */
$mysqli->begin_transaction();
$sql = "SELECT id
FROM companies
WHERE processing = 0
ORDER BY last_crawled ASC
LIMIT 1
FOR UPDATE";
$result = $mysqli->query($sql);
while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
$id = $row["id"];
}
$sql = "UPDATE companies
SET processing = 1
WHERE id = ?";
$stmt = $mysqli->prepare($sql);
$stmt->bind_param("i", $id);
$stmt->execute();
$mysqli->commit();
Re your comment:
I tried an experiment: I created a table companies, filled it with 512 rows, then started a transaction and issued the SELECT...FOR UPDATE statement above. I did this in the mysql client; there was no need to write PHP code.
Then, before committing my transaction, I examined the locks reported:
mysql> show engine innodb status\G
=====================================
2013-12-04 16:01:28 7f6a00117700 INNODB MONITOR OUTPUT
=====================================
...
---TRANSACTION 30012, ACTIVE 2 sec
2 lock struct(s), heap size 376, 513 row lock(s)
...
Despite using LIMIT 1, this report shows that the transaction appears to lock every row in the table (plus 1, for some reason).
So you're right, if you have hundreds of requests per second, it's likely that the transactions are queuing up. You should be able to verify this by watching SHOW PROCESSLIST and seeing many processes stuck in a state of Locked (i.e. waiting for access to rows that another thread has locked).
If you have hundreds of requests per second, you may have outgrown the ability for an RDBMS to function as a fake message queue. This isn't what an RDBMS is good at.
There are a variety of scalable message queue frameworks with good integration with PHP, like RabbitMQ, STOMP, AMQP, Gearman, Beanstalk.
Check out http://www.slideshare.net/mwillbanks/message-queues-a-primer-international-php-conference-fall-2012
That depends. There are (in general) different isolation levels in SQL. In MySQL you can change which one to use with SET TRANSACTION ISOLATION LEVEL.
While "SERIALIZABLE" (which is the strictest one) still doesn't imply that no other actions are executed in between the ones from your transaction, it DOES make sure that there is no difference whether simultaneous transactions are executed one after another or not - if it would make a difference, one transaction is rolled back and executed later.
Note however that the stricter the isolation is, the more locking and rollbacks have to be done. So make sure you really need that before using it.
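For example, to run one transaction at the strictest level (a minimal sketch; the SESSION scope only affects the current connection):

SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- ... your reads and writes ...
COMMIT;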
I have a php script that executes mysql pdo queries. There are a few reads and writes to the same table in this script.
For the sake of example, let's say that there are 4 queries (a read, a write, another read, another write), that each read takes 10 seconds to execute, and that each write takes .1 seconds to execute.
If I execute this script from the CLI (nohup php execute_queries.php &) twice within 1/100th of a second, what would the execution order of the queries be?
Would all the queries from the first instance of the script need to finish before the queries from the 2nd instance begin to run, or would the first read from both instances start and finish before the table is locked by the write?
NOTE: assume that I'm using MyISAM and that the write is an update to a record (i.e., the entire table gets locked during the write).
Since you are not using transactions, then no: the second instance won't wait for all the queries in the first script to finish, and so the queries may get interleaved.
There is an entire field of study called concurrent programming that teaches this.
In databases it's about transactions, isolation levels and data locks.
Typical (simple) race condition:
$visits = $pdo->query('SELECT visits FROM articles WHERE id = 44')->fetch()['visits'];
/*
* do some time-consuming thing here
*
*/
$visits++;
$pdo->exec('UPDATE articles SET visits = '.$visits.' WHERE id = 44');
The above race condition can easily turn sour if 2 PHP processes read visits from the database one millisecond after the other. Assuming the initial value of visits was 6, both would increment it to 7 and both would write 7 back into the database, even though the desired effect was that 2 visits increment the value by 2 (the final value of visits should have been 8).
The solution to this is using atomic operations (because the operation is simple and can be reduced to one single atomic operation).
UPDATE articles SET visits = visits+1 WHERE id = 44;
Atomic operations are guaranteed by the database engines to take place uninterrupted by other processes/threads. Usually the database has to queue incoming updates so that they don't affect each other. Queuing obviously slows things down because each process has to wait for all processes before it until it gets the chance to be executed.
In a less simple operation we need more than one statement:
SELECT @visits := visits FROM articles WHERE ID = 44;
SET @visits = @visits + 1;
UPDATE articles SET visits = @visits WHERE ID = 44;
But again, even at the database level, 3 separate atomic statements are not guaranteed to yield an atomic result. They can overlap with other operations, just like in the PHP example.
To solve this you have to do the following:
START TRANSACTION;
SELECT @visits := visits FROM articles WHERE ID = 44 FOR UPDATE;
SET @visits = @visits + 1;
UPDATE articles SET visits = @visits WHERE ID = 44;
COMMIT;
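A rough PDO equivalent of the same locking pattern, for completeness (a sketch only, assuming the articles table from the earlier example, a $pdo connection, and an InnoDB table):

$pdo->beginTransaction();
// FOR UPDATE locks the selected row until the transaction is committed
$visits = $pdo->query('SELECT visits FROM articles WHERE id = 44 FOR UPDATE')->fetchColumn();
$visits++;
$stmt = $pdo->prepare('UPDATE articles SET visits = ? WHERE id = 44');
$stmt->execute(array($visits));
$pdo->commit();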
$database->count = "SELECT * FROM table WHERE item_id = 1"
if($database->count == 1)
{
$database->update = "UPDATE users SET money = money - 1000";
$database->delete = "DELETE table WHERE item_id = 1";
}
Let's say I have this code (I've just created it) in the index.php page. Can the "SELECT * FROM table WHERE item_id = 1" query happen at the same time for two people, so that both would get a count of 1 and both would have 1000 money deducted? If yes, how can I avoid that?
Thank you.
If you're worried about two queries running at the same time leaving your DB in an unbalanced state, you should be using transactions: http://dev.mysql.com/doc/refman/5.0/en/ansi-diff-transactions.html
Transactions are helpful in keeping the state of your data correct.
You can LOCK TABLES `table` WRITE before and UNLOCK TABLES after the queries.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
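For the example above, that could look roughly like this (a sketch; table and column names are taken from the question, and note that LOCK TABLES requires you to lock every table the queries touch):

LOCK TABLES `table` WRITE, users WRITE;
SELECT * FROM `table` WHERE item_id = 1;   -- check the item is still there
UPDATE users SET money = money - 1000;     -- charge the buyer
DELETE FROM `table` WHERE item_id = 1;     -- remove the item
UNLOCK TABLES;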
You need transactions.
If you are using InnoDB, you can play with the Transaction Isolation Level so that dirty reads are not allowed. Make sure you use repeatable reads as your Transaction Isolation Level.
BTW The DELETE line should say DELETE FROM table WHERE item_id = 1;
Not all databases have transaction support, so if you are using MySQL (as you are working with PHP) you may need the table-locking technique.
Your table stays locked until all the work is done and you unlock it; you can specify row locking as well.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
I have a website that has user ranking as a central part, but the user count has grown to over 50,000 and it is putting a strain on the server to loop through all of those to update the rank every 5 minutes. Is there a better method that can be used to easily update the ranks at least every 5 minutes? It doesn't have to be with php, it could be something that is run like a perl script or something if something like that would be able to do the job better (though I'm not sure why that would be, just leaving my options open here).
This is what I currently do to update ranks:
$get_users = mysql_query("SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC");
$i=0;
while ($a = mysql_fetch_array($get_users)) {
$i++;
mysql_query("UPDATE users SET month_rank = '$i' WHERE id = '$a[id]'");
}
UPDATE (solution):
Here is the solution code, which takes less than 1/2 of a second to execute and update all 50,000 rows (make rank the primary key as suggested by Tom Haigh).
mysql_query("TRUNCATE TABLE userRanks");
mysql_query("INSERT INTO userRanks (userid) SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC");
mysql_query("UPDATE users, userRanks SET users.month_rank = userRanks.rank WHERE users.id = userRanks.userid");
Make userRanks.rank an autoincrementing primary key. If you then insert userids into userRanks in descending rank order it will increment the rank column on every row. This should be extremely fast.
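For reference, the helper table could be defined along these lines (a sketch; the column types are assumptions, the important part is the auto-incrementing rank primary key):

CREATE TABLE userRanks (
    `rank` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- assigned in insert order, i.e. by descending month_score
    userid INT UNSIGNED NOT NULL
);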
TRUNCATE TABLE userRanks;
INSERT INTO userRanks (userid) SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC;
UPDATE users, userRanks SET users.month_rank = userRanks.rank WHERE users.id = userRanks.userid;
My first question would be: why are you doing this polling-type operation every five minutes?
Surely rank changes will be in response to some event and you can localize the changes to a few rows in the database at the time when that event occurs. I'm pretty certain the entire user base of 50,000 doesn't change rankings every five minutes.
I'm assuming the "status = '1'" indicates that a user's rank has changed so, rather than setting this when the user triggers a rank change, why don't you calculate the rank at that time?
That would seem to be a better solution as the cost of re-ranking would be amortized over all the operations.
Now I may have misunderstood what you meant by ranking in which case feel free to set me straight.
A simple alternative for bulk update might be something like:
set @rnk = 0;
update users
set month_rank = (@rnk := @rnk + 1)
order by month_score DESC;
This code uses a MySQL user variable (@rnk) that is incremented on each row update. Because the update is applied over the ordered list of rows, the month_rank column is set to the incremented value for each row.
Updating the users table row by row will be a time consuming task. It would be better if you could re-organise your query so that row by row updates are not required.
I'm not 100% sure of the syntax (as I've never used MySQL before) but here's a sample of the syntax used in MS SQL Server 2000
DECLARE @tmp TABLE
(
    [MonthRank] [INT] IDENTITY(1,1) NOT NULL,
    [UserId] [INT] NOT NULL
)
INSERT INTO @tmp ([UserId])
SELECT [id]
FROM [users]
WHERE [status] = '1'
ORDER BY [month_score] DESC
UPDATE users
SET month_rank = [tmp].[MonthRank]
FROM @tmp AS [tmp], [users]
WHERE [users].[Id] = [tmp].[UserId]
In MS SQL Server 2005/2008 you would probably use a CTE.
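A rough sketch of what that could look like (assuming SQL Server 2005+, using an updatable CTE with ROW_NUMBER to do the ranking):

WITH ranked AS
(
    SELECT month_rank,
           ROW_NUMBER() OVER (ORDER BY month_score DESC) AS new_rank
    FROM users
    WHERE status = '1'
)
UPDATE ranked
SET month_rank = new_rank;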
Any time you have a loop of any significant size that executes queries inside, you've got a very likely antipattern. We could look at the schema and processing requirement with more info, and see if we can do the whole job without a loop.
How much time does it spend calculating the scores, compared with assigning the rankings?
Your problem can be handled in a number of ways. Honestly, more details from your server may point you in a totally different direction. But doing it that way, you are causing 50,000 little locks on a heavily read table. You might get better performance with a staging table and then some sort of transition step. Inserts into a table no one is reading from are probably going to be better.
Consider
mysql_query("delete from month_rank_staging;");
while(bla){
mysql_query("insert into month_rank_staging values ('$id', '$i');");
}
mysql_query("update month_rank_staging src, users set users.month_rank=src.month_rank where src.id=users.id;");
That'll cause one (bigger) lock on the table, but might improve your situation. But again, that may be way off base depending on the true source of your performance problem. You should probably look deeper at your logs, mysql config, database connections, etc.
Possibly you could use shards by time or other category. But read this carefully before...
You can split up the rank processing and the updating execution. So, run through all the data and process the query. Add each update statement to a cache. When the processing is complete, run the updates. You should have the WHERE portion of the UPDATE reference a primary key set to auto_increment, as mentioned in other posts. This will prevent the updates from interfering with the performance of the processing. It will also prevent users later in the processing queue from wrongfully taking advantage of the values from the users who were processed before them (if one user's rank affects that of another). It also prevents the database from clearing out its table caches from the SELECTS your processing code does.