What I want to do is execute the same script every few minutes with cron.
The script needs to process some data read from the database, so obviously I need each instance to work on a different row each time.
My idea was to use row locking to make sure each instance works on a different row, but it doesn't seem to work that way.
Is it even possible to use row locks this way? Any other solutions?
Example:
while ($c < $limit) {
    $sql = mysql_query("SELECT * FROM table WHERE ... LIMIT 1 FOR UPDATE");
    $data = mysql_fetch_assoc($sql);
    // (process data)
    mysql_query("UPDATE table SET value = something, timestamp = NOW() WHERE id = " . $data['id']);
    $c++;
}
Basically what I need is: SCRIPT1 reads R1 from the table; SCRIPT2 reads R2 (the next non-locked row matching the criteria).
EDIT:
Let's say for example that:
1) the table stores a list of URLs
2) the script checks whether each URL responds, and updates its status (and a timestamp) in the database
This should essentially be treated as two separate problems:
Finding a job for each worker to process. Ideally this should be very efficient and pre-emptively avoid failures in step 2, which comes next.
Ensuring that each job gets processed at most once or exactly once. No matter what happens the same job should not be concurrently processed by multiple workers. You may want to ensure that no jobs are lost due to buggy/crashing workers.
Both problems have multiple workable solutions. I'll give some suggestions about my preference:
Finding a job to process
For low-velocity systems it should be sufficient just to look for the oldest un-processed job. You do not want to take the job yet, just identify it as a candidate. This could be:
SELECT id FROM jobs ORDER BY created_at ASC LIMIT 1
(Note that this will process the oldest job first—FIFO order—and we assume that rows are deleted after processing.)
Claiming a job
In this simple example, this would be as simple as (note I am avoiding some potential optimizations that will make things less clear):
BEGIN;
SELECT * FROM jobs WHERE id = <id> FOR UPDATE;
DELETE FROM jobs WHERE id = <id>;
COMMIT;
If the SELECT returns our job when queried by id, we've now locked it. If another worker has already taken this job, an empty set will be returned, and we should look for a different job. If two workers are competing for the same job, they will block each other from the SELECT ... FOR UPDATE onwards, such that the previous statements are universally true. This will allow you to ensure that each job is processed at most once. However...
Processing a job exactly once
A risk in the previous design is that a worker takes a job, fails to process it, and crashes. The job is now lost. Most job processing systems therefore do not delete the job when they claim it; instead they mark it as claimed by some worker and implement a job-reclaim system.
This can be achieved by keeping track of the claim itself using either additional columns in the job table, or a separate claim table. Normally some information is written about the worker, e.g. hostname, PID, etc., (claim_description) and some expiration date (claim_expires_at) is provided for the claim e.g. 1 hour in the future. An additional process then goes through those claims and transactionally releases claims which are past their expiration (claim_expires_at < NOW()). Claiming a job then also requires that the job row is checked for claims (claim_expires_at IS NULL) both at selection time and when claiming with SELECT ... FOR UPDATE.
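As a sketch, the claim columns and queries described above might look like this (table and column values are illustrative; `claim_description` and `claim_expires_at` are the columns named above):

```sql
-- Claim a candidate job, re-checking that it is still unclaimed.
BEGIN;
SELECT * FROM jobs
 WHERE id = <id>
   AND (claim_expires_at IS NULL OR claim_expires_at < NOW())
 FOR UPDATE;
UPDATE jobs
   SET claim_description = 'worker-hostname:1234',
       claim_expires_at  = NOW() + INTERVAL 1 HOUR
 WHERE id = <id>;
COMMIT;

-- Reclaim process: transactionally release expired claims.
UPDATE jobs
   SET claim_description = NULL, claim_expires_at = NULL
 WHERE claim_expires_at < NOW();
```

If the SELECT comes back empty, some other worker holds a live claim and we should look for a different candidate.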
Note that this solution still has problems: If a job is processed successfully, but the worker crashes before successfully marking the job as completed, we may eventually release the claim and re-process the job. Fixing that requires a more advanced system which is left as an exercise for the reader. ;)
If you are going to read each row once, and only once, then I would create an is_processed column and simply update that column on the rows that you've processed. Then you can simply query for the first row that has is_processed = 0.
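A sketch of that pattern (the id and is_processed names are assumptions); checking the affected-row count tells a worker whether it actually won the row:

```sql
UPDATE table SET is_processed = 1
 WHERE id = <candidate_id> AND is_processed = 0;
-- if ROW_COUNT() = 1 we own the row; if 0, another worker beat us to it
```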
Related
I have a daemon that runs background jobs requested by our webservice. We have 4 workers running simultaneously.
Sometimes a job is executed twice at the same time, because two workers decided to run that job. To avoid this situation we tried several things:
Since our jobs come from our database, we added a flag called executed that prevents other workers from picking up a job that has already started executing. This does not solve the problem: sometimes the delay with our database is enough to allow simultaneous executions.
We added memcached to the system (all workers run on the same machine), but somehow we still had simultaneous jobs running today; memcached would not solve this for multiple servers either.
Here is the following logic we are currently using:
// We create our memcached server
$memcached = new Memcached();
$memcached->addServer("127.0.0.1", 11211);

// Check every 5 seconds for operations
while (true) {
    // Gather all pending operations.
    // In this query, we do not accept operations that are
    // already marked as executed.
    $result = findDaemonOperationsPendingQuery();

    // We have some results!
    if (mysqli_num_rows($result) > 0) {
        $op = mysqli_fetch_assoc($result);
        echo "Found an operation todo #" . $op['id'] . "\n";

        // Set operation as executed
        setDaemonOperationAsDone($op['id'], 'executed');

        // Verify whether the operation is already tracked in memcached
        if (get_memcached_operation($memcached, $op['id'])) {
            echo "\tOperation id already executing...\n";
            continue;
        } else {
            // Set operation on memcached
            set_memcached_operation($memcached, $op['id']);
        }

        // ... do our stuff
    }
}
How is this kind of problem usually solved?
I looked around on the internet and found a library called Gearman, but I'm not convinced that it will solve my problems when we have multiple servers.
Another thing I thought of was to predefine a daemon to run each operation at insertion time, and to create a failsafe exclusive daemon that runs operations set by daemons that are out of service.
Any ideas?
Thanks.
An alternative solution to using locks and transactions, assuming each worker has an id.
In your loop run:
UPDATE operations SET worker_id = :wid WHERE worker_id IS NULL LIMIT 1;
SELECT * FROM operations where executed = 0 and worker_id = :wid;
The update is a single statement, which is atomic, and you only set worker_id if it is not yet set, so there are no worries about race conditions. Setting the worker_id makes it clear who owns the operation, and the LIMIT 1 ensures the update assigns only one operation.
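If you also want to know which row was claimed without scanning afterwards, a common MySQL variant of this trick smuggles the id out through LAST_INSERT_ID() (a sketch, using the same column names as above):

```sql
UPDATE operations
   SET worker_id = :wid,
       id = LAST_INSERT_ID(id)   -- does not change id; just records it for this session
 WHERE worker_id IS NULL
 LIMIT 1;
SELECT LAST_INSERT_ID();          -- id of the claimed row, valid if ROW_COUNT() > 0
```

LAST_INSERT_ID(expr) sets the session's insert-id value to expr, so the follow-up SELECT returns the claimed row's id without any race.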
You have a typical concurrency problem.
Worker 1 reads the table and selects a job
Worker 1 updates the table to mark the job as 'assigned' or whatever
Oh but wait: between 1 and 2, worker 2 read the table as well, and since the job wasn't yet marked as 'assigned', worker 2 selected the same job
The way to solve this is to use transactions and locks, in particular SELECT.. FOR UPDATE. It'll go like this:
Worker 1 starts a transaction (START TRANSACTION) and tries to acquire an exclusive lock SELECT * FROM jobs [...] FOR UPDATE
Worker 2 does the same, except it has to wait because Worker 1 already holds the lock.
Worker 1 updates the table to say it's now working on the job and commits the transaction immediately. This releases the lock so other workers can select jobs. Worker 1 can now safely start working on the job.
Worker 2 can now read the table and acquire a lock. Since the table has been updated, worker 2 will select a different job.
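That sequence, sketched in SQL (the assigned column is an assumption; any equivalent status flag works):

```sql
START TRANSACTION;
SELECT id FROM jobs WHERE assigned = 0 ORDER BY id LIMIT 1 FOR UPDATE;
UPDATE jobs SET assigned = 1 WHERE id = <id_from_select>;
COMMIT;  -- releases the lock; now process the job outside the transaction
```

Keeping the transaction this short is the point: the lock is held only long enough to claim the job, not while doing the actual work.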
EDIT: Specific comment about your PHP code:
Your comment says you are fetching all the jobs that need to be done at once in each worker. You should only select one, do it, select the next one, do it, and so on.
You are setting the flag 'executed' when in fact the job is not (yet) executed. You need an 'assigned' flag and a separate 'executed' flag.
I have a cron task running every x seconds on n servers. It will "SELECT FROM table WHERE time_scheduled<CURRENT_TIME" and then perform a lengthy task on this result set.
My problem is now: How do I avoid having two separate servers perform the same task at the same time?
The idea is to update *time_scheduled* with a set interval after selecting it. But if two servers happen to run the query at the same time, that will be too late, no?
All ideas are welcome. It doesn't have to be a strictly MySQL solution.
Thanks!
I am guessing you have a single MySQL instance, and connections from your n servers to run this processing job. You're implementing a job queue here.
The table you mention needs to use the InnoDB access method (or one of the other transaction-friendly access methods offered by Percona or MariaDB).
Do these items in your table need to be processed in batches? That is, are they somehow inter-related? Or is it possible for your server processes to handle them one-by-one? This is an important question, because you'll get better load balancing between your server processes if you can handle them individually or in small batches. Let's assume the small batches.
The idea is to prevent any server process from grabbing onto a row in your table if some other server process has that row. I've had to do this kind of thing a lot, and here is my suggestion; I know this works.
First, add an integer column to your table. Call it "working" or some such thing. Give it a default value of zero.
Second, assign a permanent id number to each server. The last part of the server's IP address (for example, if the server's IP address is 10.1.0.123, the id number is 123) is a good choice, because it's probably unique in your environment.
Then, when a server's grabbing work to do, use these two SQL queries.
UPDATE table
SET working = :this_server_id
WHERE working = 0
AND time_scheduled < CURRENT_TIME
ORDER BY time_scheduled
LIMIT 1
SELECT table_id, whatever, whatever
FROM table
WHERE working = :this_server_id
The first query will consistently grab a batch of rows to work on. If another server process comes in at the same time, it won't ever grab the same rows, because no process can grab rows unless working = 0. Notice that the LIMIT 1 will limit your batch size. You don't have to do this, but you can. I also threw in ORDER BY to process the rows first that have been waiting the longest. That's probably a useful way to do things.
The second query retrieves the information you need to do the work. Don't forget to retrieve the primary key values (I called them table_id) for the rows you're working on.
Then, your server process does whatever it needs to do.
When it's done, it needs to throw the row back into the queue for a later time. To do that, the server process needs to set the time_scheduled to whatever it needs to be, then to set working = 0. So, for example, you could run this query for each row you're processing.
UPDATE table
SET time_scheduled = CURRENT_TIME + INTERVAL 5 MINUTE,
working = 0
WHERE table_id = ?table_id_from_previous_query
That's it.
Except for one thing. In the real world these queuing systems get fouled up sometimes. Server processes crash. Etc. Etc. See Murphy's Law. You need a monitoring query. That's easy in this system.
This query will give a list of all jobs that are more than five minutes overdue, along with the server that's supposed to be working on them.
SELECT working, COUNT(*) stale_jobs
FROM table
WHERE time_scheduled < CURRENT_TIME - INTERVAL 5 MINUTE
GROUP BY working
If this query comes up empty, all is well. If it comes up with lots of jobs with working set to zero, your servers aren't keeping up. If it comes up with jobs with working set to some server's id number, that server is taking a lunch break.
You can reset all the jobs assigned to the server that's gone to lunch with this query, if need be.
UPDATE table
SET working=0
WHERE working=?server_id_at_lunch
By the way, a compound index on (working, time_scheduled) will probably help this perform well.
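For reference, that index could be created like so (using the placeholder table name from the examples above):

```sql
ALTER TABLE `table` ADD INDEX idx_working_scheduled (working, time_scheduled);
```

The first query then resolves `working = 0 AND time_scheduled < CURRENT_TIME` and the ORDER BY straight from the index.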
I have a php script that executes mysql pdo queries. There are a few reads and writes to the same table in this script.
For sake of example let's say that there are 4 queries, a read, write, another read, another write, each read takes 10 second to execute, and each write takes .1 seconds to execute.
If I execute this script from the cli nohup php execute_queries.php & twice in 1/100th of a second, what would the execution order of the queries be?
Would all the queries from the first instance of the script need to finish before the queries from the 2nd instance begin to run, or would the first read from both instances start and finish before the table is locked by the write?
NOTE: assume that I'm using myisam and that the write is an update to a record (IE, entire table gets locked during the write.)
Since you are not using transactions, then no, they won't wait for all the queries in one script to finish, and so the queries from the two instances may interleave. With MyISAM, each individual statement takes a table-level lock for its duration, but between statements the other script's queries are free to run.
There is an entire field of study called concurrent programming that teaches this.
In databases it's about transactions, isolation levels and data locks.
Typical (simple) race condition:
$visits = $pdo->query('SELECT visits FROM articles WHERE id = 44')->fetch()['visits'];
/*
 * do some time-consuming thing here
 */
$visits++;
$pdo->exec('UPDATE articles SET visits = '.$visits.' WHERE id = 44');
The above race condition can easily turn sour if two PHP processes read visits from the database one millisecond apart. Assuming the initial value of visits was 6, both would increment it to 7 and both would write 7 back into the database, even though the desired effect of two visits was to increment the value by 2 (the final value should have been 8).
The solution to this is using atomic operations (because the operation is simple and can be reduced to one single atomic operation).
UPDATE articles SET visits = visits+1 WHERE id = 44;
Atomic operations are guaranteed by the database engines to take place uninterrupted by other processes/threads. Usually the database has to queue incoming updates so that they don't affect each other. Queuing obviously slows things down because each process has to wait for all processes before it until it gets the chance to be executed.
In a less simple operation we need more than one statement:
SELECT @visits := visits FROM articles WHERE ID = 44;
SET @visits = @visits + 1;
UPDATE articles SET visits = @visits WHERE ID = 44;
But again, even at the database level, three separate atomic statements are not guaranteed to yield an atomic result. They can overlap with other operations, just like in the PHP example.
To solve this you have to do the following:
START TRANSACTION;
SELECT @visits := visits FROM articles WHERE ID = 44 FOR UPDATE;
SET @visits = @visits + 1;
UPDATE articles SET visits = @visits WHERE ID = 44;
COMMIT;
I sometimes gets mysql deadlock errors saying:
'Deadlock found when trying to get lock; try restarting transaction'
I have a queues table where multiple PHP processes are running simultaneously, selecting rows from the table. For each process, I want it to grab a unique batch of rows on each fetch so that no overlapping rows are selected.
So I run this query (which is the query I get the deadlock error on):
$this->db->query("START TRANSACTION;");
$sql = " SELECT mailer_queue_id
FROM mailer_queues
WHERE process_id IS NULL
LIMIT 250
FOR UPDATE;";
...
$sql = "UPDATE mailer_queues
SET process_id = 33044,
status = 'COMPLETED'
WHERE mailer_queue_id
IN (1,2,3...);";
...
if($this->db->affected_rows() > 0) {
$this->db->query("COMMIT;");
} else{
$this->db->query("ROLLBACK;");
}
I'm also:
inserting rows to the table (with no transactions/locks) at the same time
updating rows in the table (with no transactions/locks) at the same time
deleting the rows from the table (with no transactions/locks) at the same time
As well, my updates and deletes only touch rows that have a process_id assigned to them, while the transactions that SELECT rows ... FOR UPDATE only look at rows where process_id is null. In theory they should never overlap.
I'm wondering if there is a proper way to avoid these deadlocks?
Can a deadlock occur because one transaction is locking the table for too long while it is selecting/updating, and another process trying to perform the same transaction just times out?
any help is much appreciated
Deadlocks occur when two or more processes requests locks in such a way that the resources being locked overlap, but occur in different orders, so that each process is waiting for a resource that's locked by another process, and that other process is waiting for a lock that the original process has open.
In real world terms, consider a construction site: You've got one screwdriver, and one screw. Two workers need to drive in a screw. Worker #1 grabs the screwdriver, and worker #2 grabs the screw. Worker #1 goes to grab the screw as well, but can't, because it's being held by worker #2. Worker #2 needs the screwdriver, but can't get it because worker #1 is holding it. So now they're deadlocked, unable to proceed, because they've got 1 of the 2 resources they need, and neither of them will be polite and "step back".
Given that you've got out-of-transaction changes occurring, it's possible that one (or more) of your updates/deletes are overlapping the locked areas you're reserving inside the transactions.
You might want to try LOCK TABLES before starting the transaction, thereby assuring you have explicit control over the tables. The lock will wait until all activity on the particular tables has completed.
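Note that per the MySQL manual, LOCK TABLES interacts with transactions: issuing it implicitly commits any active transaction, so the documented pattern combines it with autocommit handling. A sketch against the table from the question:

```sql
SET autocommit = 0;
LOCK TABLES mailer_queues WRITE;
-- ... SELECT the batch, UPDATE process_id/status ...
COMMIT;
UNLOCK TABLES;
```

This serializes all access to the table, which eliminates the deadlocks at the cost of concurrency.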
I think everyone on the net has explained deadlocks themselves very well.
MySQL provides a very good log for checking the last deadlock that happened and which queries were stuck at that time.
Check the MySQL documentation page and search for LATEST DETECTED DEADLOCK.
It's a great log; it has helped find many subtle deadlocks.
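To view it, dump the InnoDB monitor output and look for that section:

```sql
SHOW ENGINE INNODB STATUS\G
-- search the output for the section headed "LATEST DETECTED DEADLOCK";
-- it shows the two transactions involved and the exact statements that collided
```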
I've done some searching for this but haven't come up with anything, maybe someone could point me in the right direction.
I have a website with lots of content in a MySQL database and a PHP script that loads the most popular content by hits. It does this by logging each content hit in a table along with the access time. Then a select query is run to find the most popular content in the past 24 hours, 7 days or a maximum of 30 days. A cronjob deletes anything older than 30 days from the log table.
The problem I'm facing now is that, as the website grows, the log table has 1m+ hit records and it is really slowing down my select query (10-20s). At first I thought the problem was a join I had in the query to get the content title, url, etc. But now I'm not sure; in testing, removing the join does not speed up the query as much as I thought it would.
So my question is: what is the best practice for this kind of popularity storing/selecting? Are there any good open source scripts for this? Or what would you suggest?
Table scheme
"popularity" hit log table
nid | insert_time | tid
nid: Node ID of the content
insert_time: timestamp (2011-06-02 04:08:45)
tid: Term/category ID
"node" content table
nid | title | status | (there are more but these are the important ones)
nid: Node ID
title: content title
status: is the content published (0=false, 1=true)
SQL
SELECT node.nid, node.title, COUNT(popularity.nid) AS count
FROM `node` INNER JOIN `popularity` USING (nid)
WHERE node.status = 1
AND popularity.insert_time >= DATE_SUB(CURDATE(),INTERVAL 7 DAY)
GROUP BY popularity.nid
ORDER BY count DESC
LIMIT 10;
We've just come across a similar situation and this is how we got around it. We decided we didn't really care about what exact 'time' something happened, only the day it happened on. We then did this:
Every record has a 'total hits' record which is incremented every time something happens
A logs table records these 'total hits' per record, per day (in a cron job)
By selecting the difference between two given dates in this log table, we can deduce the 'hits' between two dates, very quickly.
The advantage of this is the size of your log table is only as big as NumRecords * NumDays which in our case is very small. Also any queries on this logs table are very quick.
The disadvantage is you lose the ability to deduce hits by time of day but if you don't need this then it might be worth considering.
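A minimal sketch of that log table and the two-date query (all names are illustrative):

```sql
CREATE TABLE daily_hit_totals (
  record_id  INT  NOT NULL,
  log_date   DATE NOT NULL,
  total_hits INT  NOT NULL,   -- snapshot of the running 'total hits' counter
  PRIMARY KEY (record_id, log_date)
);

-- hits for a record between two dates = difference of the two snapshots
SELECT b.total_hits - a.total_hits AS hits
  FROM daily_hit_totals a
  JOIN daily_hit_totals b USING (record_id)
 WHERE a.record_id = 42
   AND a.log_date = '2011-05-26'
   AND b.log_date = '2011-06-02';
```

The primary key makes both the nightly cron insert and the difference query index lookups.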
You actually have two problems to solve further down the road.
One, which you've yet to run into but which you might hit earlier than you'd like, is going to be insert throughput within your stats table.
The other, which you've outlined in your question, is actually using the stats.
Let's start with insert throughput.
Firstly, if you're not doing so already, don't track statistics inside pages that could otherwise be cached. Use a PHP script that advertises itself as an empty javascript file, or as a one-pixel image, and include the latter on the pages you're tracking. Doing so allows you to readily cache the remaining content of your site.
In the telco business, rather than doing actual billing-related inserts for each phone call, things are placed in memory and periodically synced to disk. Doing so allows them to manage gigantic throughput while keeping the hard drives happy.
To proceed similarly on your end, you'll need an atomic operation and some in-memory storage. Here's some memcache-based pseudo-code for doing the first part...
For each page, you need a Memcache variable. In Memcache, increment() is atomic, but add(), set(), and so forth aren't. So you need to be wary of not miscounting hits when concurrent processes add the same page at the same time:
$ns = $memcache->get('stats-namespace');
while (!$memcache->increment("stats-$ns-$page_id")) {
$memcache->add("stats-$ns-$page_id", 0, 1800); // garbage collect in 30 minutes
$db->upsert('needs_stats_refresh', array($ns, $page_id)); // engine = memory
}
Periodically, say every 5 minutes (configure the timeout accordingly), you'll want to sync all of this to the database, without any possibility of concurrent processes affecting each other or existing hit counts. For this, you increment the namespace before doing anything (this gives you a lock on existing data for all intents and purposes), and sleep a bit so that existing processes that reference the prior namespace finish up if needed:
$ns = $memcache->get('stats-namespace');
$memcache->increment('stats-namespace');
sleep(60); // allow concurrent page loads to finish
Once that is done, you can safely loop through your page ids, update stats accordingly, and clean up the needs_stats_refresh table. The latter only needs two fields: (page_id int pkey, ns_id int). There's a bit more to it than simple select, insert, update and delete statements run from your scripts, however, so continuing...
As another replier suggested, it's quite appropriate to maintain intermediate stats for your purpose: store batches of hits rather than individual hits. At the very most, I'm assuming you want hourly stats or quarter-hourly stats, so it's fine to deal with subtotals that are batch-loaded every 15 minutes.
Even more importantly for your sake, since you're ordering posts using these totals, you want to store the aggregated totals and have an index on the latter. (We'll get to where further down.)
One way to maintain the totals is to add a trigger which, on insert or update to the stats table, will adjust the stats total as needed.
When doing so, be especially wary about dead-locks. While no two $ns runs will be mixing their respective stats, there is still a (however slim) possibility that two or more processes fire up the "increment $ns" step described above concurrently, and subsequently issue statements that seek to update the counts concurrently. Obtaining an advisory lock is the simplest, safest, and fastest way to avoid problems related to this.
Assuming you use an advisory lock, it's perfectly OK to use total = total + subtotal in the update statement.
While on the topic of locks, note that updating the totals will require an exclusive lock on each affected row. Since you're ordering by them, you don't want them processed all in one go because it might mean keeping an exclusive lock for an extended duration. The simplest here is to process the inserts into stats in smaller batches (say, 1000), each followed by a commit.
For intermediary stats (monthly, weekly), add a few boolean fields (bit or tinyint in MySQL) to your stats table. Have each of these store whether the row is to be counted toward the monthly, weekly, daily stats, etc. Place a trigger on them as well, such that they increase or decrease the applicable totals in your stat_totals table.
As a closing note, give some thoughts on where you want the actual count to be stored. It needs to be an indexed field, and the latter is going to be heavily updated. Typically, you'll want it stored in its own table, rather than in the pages table, in order to avoid cluttering your pages table with (much larger) dead rows.
Assuming you did all the above your final query becomes:
select p.*
from pages p join stat_totals s using (page_id)
order by s.weekly_total desc limit 10
It should be plenty fast with the index on weekly_total.
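That index (assuming the stat_totals table sketched above) would be:

```sql
ALTER TABLE stat_totals ADD INDEX idx_weekly_total (weekly_total);
```

MySQL can then read the top 10 rows straight off the index in descending order instead of sorting the whole table.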
Lastly, let's not forget the most obvious of all: if you're running these same total/monthly/weekly/etc queries over and over, their result should be placed in memcache too.
You can add indexes and try tweaking your SQL, but the real solution here is to cache the results.
You should really only need to calculate the last 7/30 days of traffic once daily, and you could do the past 24 hours hourly.
Even if you did it once every 5 minutes, that's still a huge saving over running the (expensive) query for every hit of every user.
RRDtool
Many tools/systems do not build their own logging and log aggregation but use RRDtool (round-robin database tool) to efficiently handle time-series data. RRDtool also comes with a powerful graphing subsystem, and (according to Wikipedia) there are bindings for PHP and other languages.
From your questions I assume you don't need any special and fancy analysis and RRDtool would efficiently do what you need without you having to implement and tune your own system.
You can do some 'aggregation' in the background, for example with a cron job. Some suggestions (in no particular order) that might help:
1. Create a table with hourly results. This means you can still create the statistics you want, but you reduce the amount of data to (24*7*4 = about 672 records per page per month).
your table can be somewhere along the lines of this:
hourly_results (
nid integer,
start_time datetime,
amount integer
)
after you parse them into your aggregate table you can more or less delete them.
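The aggregation step itself can be a periodic INSERT ... SELECT that rolls the raw hits up per hour (a sketch against the popularity table from the question):

```sql
INSERT INTO hourly_results (nid, start_time, amount)
SELECT nid,
       DATE_FORMAT(insert_time, '%Y-%m-%d %H:00:00') AS start_time,
       COUNT(*) AS amount
  FROM popularity
 WHERE insert_time < DATE_FORMAT(NOW(), '%Y-%m-%d %H:00:00')  -- only complete hours
 GROUP BY nid, DATE_FORMAT(insert_time, '%Y-%m-%d %H:00:00');
```

After this runs, the matching raw rows can be deleted, keeping the log table small.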
2. Use result caching (memcache, APC)
You can easily store the results (which should not change every minute, but rather every hour?), either in memcache (which again you can update from a cronjob), via the APC user cache (which you can't update from a cronjob), or via file caching by serializing objects/results if you're short on memory.
3. Optimize your database
10 seconds is a long time. Try to find out what is happening with your database. Is it running out of memory? Do you need more indexes?
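As a concrete starting point, EXPLAIN the query from the question and check whether the WHERE and JOIN columns are indexed; plausible candidates (index names are suggestions) would be:

```sql
EXPLAIN SELECT node.nid, node.title, COUNT(popularity.nid) AS count
FROM `node` INNER JOIN `popularity` USING (nid)
WHERE node.status = 1
  AND popularity.insert_time >= DATE_SUB(CURDATE(), INTERVAL 7 DAY)
GROUP BY popularity.nid;

-- lets the range scan on insert_time also serve the join column
ALTER TABLE popularity ADD INDEX idx_time_nid (insert_time, nid);
ALTER TABLE node ADD INDEX idx_status (status);
```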