Introduction
I am wondering what the best way is to implement a unique-views system... I think I know how, so let me explain how I would do it and you can point out any mistakes or improvements.
Obviously I will have to store a log table containing a video ID and something that (relatively) uniquely identifies the user. At first I considered a combination of the request headers and the IP, but decided to keep it simple and use just the IP. That way a user also cannot increase the views of their own video by switching to a different browser.
This is how I would think to do it:

When a user visits, I do a SELECT similar to this:

SELECT 1 FROM tbl_log WHERE IP = $usersip AND video_id = $video_id

If there is no result, then I must insert a record:

INSERT INTO tbl_log (IP, video_id) VALUES ($usersip, $video_id)

and increase the views by 1:

SELECT views FROM tbl_video WHERE video_id = $video_id

UPDATE tbl_video SET views = $result['views'] + 1 WHERE video_id = $video_id
Questions
I guess I do not want to have millions of log records slowing down my site, so should I run a cron job to empty the log table once a day?
Should I make the views transactional? (I guess a slightly outdated view count is less important than a slow site because of row locks.)
Is there a way to reduce the load on the MySQL server? I fear that if every view requires an incremented view count and an IP log, it will be pretty expensive. I have seen that YouTube and the like do not update the views instantly... do they cache the updates somehow and then run them all at once? If so, how?
How efficient is my system? Can you think of any improvements?
Here are some ideas for improvements you can make.
Set a primary key on tbl_log to be IP + video_id. Then you can simply do a
REPLACE INTO tbl_log (IP,video_id) VALUES ($usersip, $video_id)
(Be sure to escape those PHP variables to avoid SQL injection.)
Now you're updating your log table with only one query. Next, you can update the views field in tbl_video periodically with something like:
UPDATE tbl_video SET views = (select count(*) from tbl_log where video_id = $video_id) where video_id = $video_id
You can do that with a cron job, or you can add a 'last_count_update' field and update the video when it is accessed if the last count time is older than 2 hours or whatever. This will be a little less work if you have a bunch of videos that aren't visited often.
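For illustration, the 'last_count_update' idea could be wired up roughly like this. This is a minimal sketch, not a drop-in implementation: the PDO connection $db, the two-hour threshold, and the last_count_update DATETIME column on tbl_video are all assumptions of mine.

// Sketch: refresh tbl_video.views from tbl_log at most once every 2 hours.
function refreshViewCount(PDO $db, $videoId)
{
    $stale = $db->prepare(
        "SELECT 1 FROM tbl_video
          WHERE video_id = ?
            AND (last_count_update IS NULL
                 OR last_count_update < NOW() - INTERVAL 2 HOUR)"
    );
    $stale->execute(array($videoId));

    if ($stale->fetchColumn()) {
        // Recount from the log table and remember when we last did it.
        $update = $db->prepare(
            "UPDATE tbl_video
                SET views = (SELECT COUNT(*) FROM tbl_log WHERE video_id = ?),
                    last_count_update = NOW()
              WHERE video_id = ?"
        );
        $update->execute(array($videoId, $videoId));
    }
}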
I guess I do not want to have millions of log records slowing down my site, so should I run a cron job to empty the log table once a day?
Consider using MySQL's ON DUPLICATE KEY UPDATE syntax to avoid a SELECT with an expensive WHERE clause. If your log table also had a timestamp column, you could refresh that value.
INSERT INTO tbl_log (IP, video_id) VALUES ($usersip, $video_id) ON DUPLICATE KEY UPDATE time_recorded = NOW();
This would require you to have a UNIQUE constraint on the IP and video_id columns.
Should I make the views transactional? (I guess a slightly outdated view count is less important than a slow site because of row locks.)
No, because you can achieve this with a single UPDATE query.
UPDATE tbl_video SET views = views + 1 WHERE video_id = $video_id
Is there a way to reduce the load on the MySQL server? I fear that if every view requires an incremented view count and an IP log, it will be pretty expensive. I have seen that YouTube and the like do not update the views instantly... do they cache the updates somehow and then run them all at once? If so, how?
It's not too bad - there's really no other way to reliably capture per-record view data. In the case of YouTube, it's more likely delayed writes or replication that cause the delay you notice, since they have hundreds of servers (although it's possible they are caching the value as well).
How efficient is my system? Can you think of any improvements?
Other than what I mentioned here already, not off the top of my head.
Related
I wrote a small script which uses the concept of long polling.
It works as follows:
jQuery sends the request with some parameters (say lastId) to PHP.
PHP gets the latest ID from the database and compares it with lastId.
If lastId is smaller than the newly fetched ID, the script exits and echoes the new records.
From jQuery, I display this output.
I have taken care of all the security checks. The problem is that when a record is deleted or updated, there is no way to know.
The closest solution I can come up with is to count the number of rows and match it against a saved row-count variable. But then, if I have 1000 records, I have to echo out all 1000 records, which can be a big performance issue.
The CRUD functionality of this application is completely separated and runs on a different server, so I don't get to know which record was deleted.
I don't need any help coding-wise, but I am looking for suggestions to make this work for updates and deletes.
Please note, WebSockets (my favourite) and Node.js are not an option for me.
Instead of using a certain ID from your table, you could also check when the table itself was last modified.
SQL:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'yourdb'
AND TABLE_NAME = 'yourtable';
If successful, the statement should return something like
UPDATE_TIME
2014-04-02 11:12:15
Then use the resulting timestamp instead of lastId. I am using a very similar technique to display and auto-refresh logs; it works like a charm.
You have to adjust the statement to your needs, and replace yourdb and yourtable with the values needed for your application. It also requires you to have access to information_schema.tables, so check if this is available, too.
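To make that concrete, the PHP side of the poll could look something like this. This is just a sketch: the PDO connection $db, the lastModified request parameter, and the 30-second polling window are my own assumptions, and in your real script you would echo the new records rather than only the timestamp.

// Long-poll until information_schema reports that the table was modified.
$lastModified = isset($_GET['lastModified']) ? $_GET['lastModified'] : '1970-01-01 00:00:00';

$stmt = $db->prepare(
    "SELECT UPDATE_TIME FROM information_schema.tables
      WHERE TABLE_SCHEMA = 'yourdb' AND TABLE_NAME = 'yourtable'"
);

for ($i = 0; $i < 30; $i++) {              // poll for ~30 seconds, then let jQuery reconnect
    $stmt->execute();
    $updateTime = $stmt->fetchColumn();

    if ($updateTime !== false && $updateTime > $lastModified) {
        echo json_encode(array('updateTime' => $updateTime));
        exit;                              // jQuery re-sends the request with the new timestamp
    }
    sleep(1);
}

echo json_encode(array('updateTime' => $lastModified)); // nothing changed within the window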
Two alternative solutions:
If the solution described above is too imprecise for your purpose (it might lead to issues when the table is changed multiple times per second), you could combine that timestamp with your current lastId mechanism to cover new inserts.
Another way would be to implement a table in which the current state is logged. This is where your AJAX requests check the current state. Then create triggers on your data tables which update this state table.
You can get the highest ID by
SELECT id FROM table ORDER BY id DESC LIMIT 1
but this is not reliable in my opinion, because you can have IDs of 1, 2, 3, 7 and then insert a new row with the ID 5.
Keep in mind: the highest ID is not necessarily the most recent row.
The current auto increment value can be obtained by
SELECT AUTO_INCREMENT FROM information_schema.tables
WHERE TABLE_SCHEMA = 'yourdb'
AND TABLE_NAME = 'yourtable';
Maybe a timestamp + microtime is an option for you?
I have a table article with many articles. I want to track the number of viewers for each article. Here's my idea of how I plan to do it.
I have a table viewers with the columns: id, ip, date, article_id.
article_id is a FOREIGN KEY referring to the id of the article. So, when a user opens an article, his/her IP address is stored in the table.
Is this a decent approach? It is for a personal site and not a very big website.
EDIT: I want to print the number of views on each article page.
It depends on how frequently you need to display the number of viewers. Your general query will be:
select count(*) from viewers
where article_id='10'
Over time, your viewers table will grow. Say it has a million records after a year or two. Now if you are showing the number of viewers on each article page, or displaying the articles with the most viewers, it will start to impact performance even though the foreign key is indexed. However, that will only happen once you have added hundreds of articles, each with thousands of viewers.
A better-optimized solution may be to keep the number of viewers in the article table and use that to display results. The existing viewers table is still necessary to ensure there are no duplicate entries (the same user reading an article ten times must be counted as a single entry, not ten).
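A hedged sketch of that combination follows; the article.views column, the UNIQUE KEY on viewers (article_id, ip), and the PDO connection $db are all assumptions added for illustration.

// Count a view only once per IP per article, and keep a denormalized
// article.views counter so the article page never has to COUNT(*) the viewers table.
function recordView(PDO $db, $articleId, $ip)
{
    $insert = $db->prepare(
        "INSERT IGNORE INTO viewers (article_id, ip, `date`) VALUES (?, ?, NOW())"
    );
    $insert->execute(array($articleId, ip2long($ip)));

    if ($insert->rowCount() === 1) {   // 0 means this IP has already viewed the article
        $db->prepare("UPDATE article SET views = views + 1 WHERE id = ?")
           ->execute(array($articleId));
    }
}

The article page then just reads article.views instead of counting rows in viewers.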
Use a tool like Google Analytics. It will do the job in a much more elaborate way and you're up and running in minutes; there is more to unique visitors than IP addresses!
If you want an on-premise solution, look at Piwik, which is a PHP framework for exactly this purpose.
There is one problem in this design: if the same user opens the article again and again, you either have to check before inserting the entry, or you will insert the same IP address multiple times with different timestamps.
Most of the popular sites count one IP address as one view, even if that client or user opens the article several times.
I can think of two solutions:
Your approach, with a single check: if the same client has opened it again, don't insert it.
Or group by IP when you retrieve the counter.
It depends on what you want to store in your database. If you want to know exactly how many unique users visited this particular article (including date and IP), this is a reasonable way to do it. But if you only need a number to show, you can alter the article table to include a new visit_counter column, and store a cookie to prevent incrementing the counter for the same user.
Try something like this (note that REPLACE INTO needs a unique key on (article_id, ip) to work as intended):

// insert - $link is your mysqli connection, $article_id the current article
$query = mysqli_query($link, "REPLACE INTO viewers (article_id, ip) VALUES (" . (int) $article_id . ", '" . ip2long($_SERVER['REMOTE_ADDR']) . "')");

// retrieve
list($pageviews) = mysqli_fetch_row(mysqli_query($link, "SELECT COUNT(ip) FROM viewers WHERE article_id = " . (int) $article_id));
echo $pageviews;
Read : REPLACE INTO
Yes, this is a good approach if you create some kind of cache for displaying how many views each article had. It's not optimal to count the views every time a user opens the page.
You can also do it on the SQL server side. There is something like a materialized view for MySQL ( https://code.google.com/p/flexviews/ ).
select article_id, count(*) as views from viewers group by article_id
Or you can cache it in files and refresh every X time.
To store the users who viewed an article, I suggest using AJAX. When a user opens the page, another 'thread' calls the site to add him as a viewer. Even if your DB is slow, it will not slow down the page load, and web spiders will not be counted.
I've done some searching for this but haven't come up with anything, maybe someone could point me in the right direction.
I have a website with lots of content in a MySQL database and a PHP script that loads the most popular content by hits. It does this by logging each content hit in a table along with the access time. Then a select query is run to find the most popular content in the past 24 hours, 7 days, or at most 30 days. A cron job deletes anything older than 30 days from the log table.
The problem I'm facing now is that as the website grows, the log table has 1m+ hit records and it is really slowing down my select query (10-20s). At first I thought the problem was a join I had in the query to get the content title, URL, etc., but now I'm not sure, as in testing, removing the join does not speed up the query as much as I thought it would.
So my question is: what is the best practice for doing this kind of popularity storing/selecting? Are there any good open-source scripts for this? Or what would you suggest?
Table scheme
"popularity" hit log table
nid | insert_time | tid
nid: Node ID of the content
insert_time: timestamp (2011-06-02 04:08:45)
tid: Term/category ID
"node" content table
nid | title | status | (there are more but these are the important ones)
nid: Node ID
title: content title
status: is the content published (0=false, 1=true)
SQL
SELECT node.nid, node.title, COUNT(popularity.nid) AS count
FROM `node` INNER JOIN `popularity` USING (nid)
WHERE node.status = 1
AND popularity.insert_time >= DATE_SUB(CURDATE(),INTERVAL 7 DAY)
GROUP BY popularity.nid
ORDER BY count DESC
LIMIT 10;
We've just come across a similar situation and this is how we got around it. We decided we didn't really care about what exact 'time' something happened, only the day it happened on. We then did this:
Every record has a 'total hits' record which is incremented every time something happens
A logs table records these 'total hits' per record, per day (in a cron job)
By selecting the difference between two given dates in this log table, we can deduce the 'hits' between two dates, very quickly.
The advantage of this is the size of your log table is only as big as NumRecords * NumDays which in our case is very small. Also any queries on this logs table are very quick.
The disadvantage is you lose the ability to deduce hits by time of day but if you don't need this then it might be worth considering.
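In SQL terms the idea might look roughly like this (a sketch only; the daily_hits table and column names are mine, not the poster's):

-- One row per record per day holding that record's running total_hits:
-- daily_hits(record_id INT, day DATE, total_hits INT, PRIMARY KEY (record_id, day))

-- Cron job, once a day: snapshot the running counters.
INSERT INTO daily_hits (record_id, day, total_hits)
SELECT id, CURDATE(), total_hits FROM records;

-- Hits between two dates = the difference between the two snapshots.
SELECT a.record_id, b.total_hits - a.total_hits AS hits_in_period
FROM daily_hits a
JOIN daily_hits b ON b.record_id = a.record_id
WHERE a.day = '2011-05-26' AND b.day = '2011-06-02';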
You actually have two problems to solve further down the road.
One, which you've yet to run into but which you might earlier than you want, is going to be insert throughput within your stats table.
The other, which you've outlined in your question, is actually using the stats.
Let's start with input throughput.
Firstly, in case you're doing so, don't track statistics on pages that could use caching. Use a PHP script that advertises itself as an empty JavaScript file, or as a one-pixel image, and include it on the pages you're tracking. Doing so allows you to readily cache the remaining content of your site.
In the telco business, rather than doing actual billing-related inserts for each phone call, things are placed in memory and periodically synced to disk. Doing so allows them to manage gigantic throughput while keeping the hard drives happy.
To proceed similarly on your end, you'll need an atomic operation and some in-memory storage. Here's some memcache-based pseudo-code for doing the first part...
For each page, you need a Memcache variable. In Memcache, increment() is atomic, but add(), set(), and so forth aren't. So you need to be wary of not mis-counting hits when concurrent processes add the same page at the same time:
$ns = $memcache->get('stats-namespace');
while (!$memcache->increment("stats-$ns-$page_id")) {
$memcache->add("stats-$ns-$page_id", 0, 1800); // garbage collect in 30 minutes
$db->upsert('needs_stats_refresh', array($ns, $page_id)); // engine = memory
}
Periodically, say every 5 minutes (configure the timeout accordingly), you'll want to sync all of this to the database, without any possibility of concurrent processes affecting each other or existing hit counts. For this, you increment the namespace before doing anything (this gives you a lock on existing data for all intents and purposes), and sleep a bit so that existing processes that reference the prior namespace finish up if needed:
$ns = $memcache->get('stats-namespace');
$memcache->increment('stats-namespace');
sleep(60); // allow concurrent page loads to finish
Once that is done, you can safely loop through your page ids, update stats accordingly, and clean up the needs_stats_refresh table. The latter only needs two fields (page_id int pkey, ns_id int). There's a bit more to it than simple select, insert, update and delete statements run from your scripts, however, so continuing...
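The sync step itself might look roughly like this. It's a sketch under the assumptions above: $oldNs is the namespace value read before the increment, $memcache and the PDO connection $db are assumed, and stats(page_id, ns_id, hits) is my own layout.

// Flush the memcache counters for the previous namespace into MySQL.
$pageIds = $db->query(
    "SELECT page_id FROM needs_stats_refresh WHERE ns_id = " . (int) $oldNs
)->fetchAll(PDO::FETCH_COLUMN);

$insert = $db->prepare("INSERT INTO stats (page_id, ns_id, hits) VALUES (?, ?, ?)");
foreach ($pageIds as $pageId) {
    $hits = (int) $memcache->get("stats-$oldNs-$pageId");
    if ($hits > 0) {
        $insert->execute(array($pageId, $oldNs, $hits)); // the trigger below rolls this into the totals
    }
    $memcache->delete("stats-$oldNs-$pageId");           // free the counter
}

$db->exec("DELETE FROM needs_stats_refresh WHERE ns_id = " . (int) $oldNs);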
As another replier suggested, it's quite appropriate to maintain intermediate stats for your purpose: store batches of hits rather than individual hits. At the very most, I'm assuming you want hourly stats or quarter-hourly stats, so it's fine to deal with subtotals that are batch-loaded every 15 minutes.
Even more importantly for your sake, since you're ordering posts using these totals, you want to store the aggregated totals and have an index on them. (We'll get to where to store them further down.)
One way to maintain the totals is to add a trigger which, on insert or update to the stats table, will adjust the stats total as needed.
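As a rough illustration, such a trigger could look like this (the stats and stat_totals layouts are my own assumptions, matching the sync sketch above):

-- Keep stat_totals.weekly_total in step with inserts into stats.
DELIMITER //
CREATE TRIGGER stats_after_insert
AFTER INSERT ON stats
FOR EACH ROW
BEGIN
    INSERT INTO stat_totals (page_id, weekly_total)
    VALUES (NEW.page_id, NEW.hits)
    ON DUPLICATE KEY UPDATE weekly_total = weekly_total + NEW.hits;
END//
DELIMITER ;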
When doing so, be especially wary about dead-locks. While no two $ns runs will be mixing their respective stats, there is still a (however slim) possibility that two or more processes fire up the "increment $ns" step described above concurrently, and subsequently issue statements that seek to update the counts concurrently. Obtaining an advisory lock is the simplest, safest, and fastest way to avoid problems related to this.
Assuming you use an advisory lock, it's perfectly OK to use total = total + subtotal in the update statement.
While on the topic of locks, note that updating the totals will require an exclusive lock on each affected row. Since you're ordering by them, you don't want them processed all in one go because it might mean keeping an exclusive lock for an extended duration. The simplest here is to process the inserts into stats in smaller batches (say, 1000), each followed by a commit.
For intermediary stats (monthly, weekly), add a few boolean fields (bit or tinyint in MySQL) to your stats table. Have each of these store whether the row is still to be counted toward the monthly, weekly, daily, etc. totals. Place a trigger on them as well, in such a way that they increase or decrease the applicable totals in your stat_totals table.
As a closing note, give some thoughts on where you want the actual count to be stored. It needs to be an indexed field, and the latter is going to be heavily updated. Typically, you'll want it stored in its own table, rather than in the pages table, in order to avoid cluttering your pages table with (much larger) dead rows.
Assuming you did all the above your final query becomes:
select p.*
from pages p join stat_totals s using (page_id)
order by s.weekly_total desc limit 10
It should be plenty fast with the index on weekly_total.
Lastly, let's not forget the most obvious of all: if you're running these same total/monthly/weekly/etc queries over and over, their result should be placed in memcache too.
You can add indexes and try tweaking your SQL, but the real solution here is to cache the results.
You should really only need to calculate the last 7/30 days of traffic once daily, and you could do the past 24 hours hourly. Even if you did it once every 5 minutes, that's still a huge saving over running the (expensive) query for every hit from every user.
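One straightforward way to do that is to let a cron job materialise the result into a small summary table and have the page read from that instead. This is only a sketch: the popular_cache table is my own invention, and the inner query is the one from the question.

-- Cron job, run hourly or daily; the page then does a plain indexed SELECT on popular_cache.
DELETE FROM popular_cache WHERE period = '7day';

INSERT INTO popular_cache (period, nid, title, hits)
SELECT '7day', node.nid, node.title, COUNT(popularity.nid)
FROM node
INNER JOIN popularity USING (nid)
WHERE node.status = 1
  AND popularity.insert_time >= DATE_SUB(CURDATE(), INTERVAL 7 DAY)
GROUP BY node.nid, node.title
ORDER BY COUNT(popularity.nid) DESC
LIMIT 10;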
RRDtool
Many tools/systems do not build their own logging and log aggregation, but use RRDtool (round-robin database tool) to efficiently handle time-series data. RRDtool also comes with a powerful graphing subsystem, and (according to Wikipedia) there are bindings for PHP and other languages.
From your questions I assume you don't need any special and fancy analysis and RRDtool would efficiently do what you need without you having to implement and tune your own system.
You can do some aggregation in the background, for example with a cron job. Some suggestions (in no particular order) that might help:
1. Create a table with hourly results. This means you can still create the statistics you want, but you reduce the amount of data to about 24*7*4 = 672 records per page per month.
Your table could be something along the lines of this:
hourly_results (
nid integer,
start_time datetime,
amount integer
)
After you parse the raw hits into your aggregate table, you can more or less delete them, as in the sketch below.
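The aggregation step could be as simple as something like this (a sketch; I'm assuming the cron job runs once an hour and that hourly_results has been created as above; in a real job you would compute the hour cutoff once and reuse it):

-- Roll everything before the current hour into hourly_results, then discard the raw rows.
INSERT INTO hourly_results (nid, start_time, amount)
SELECT nid,
       DATE_FORMAT(insert_time, '%Y-%m-%d %H:00:00'),
       COUNT(*)
FROM popularity
WHERE insert_time < DATE_FORMAT(NOW(), '%Y-%m-%d %H:00:00')
GROUP BY nid, DATE_FORMAT(insert_time, '%Y-%m-%d %H:00:00');

DELETE FROM popularity
WHERE insert_time < DATE_FORMAT(NOW(), '%Y-%m-%d %H:00:00');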
2. Use result caching (memcache, APC).
You can easily store the results (which should not change every minute, but rather every hour?), either in a memcache database (which you can again update from a cron job), in the APC user cache (which you can't update from a cron job), or with file caching by serializing objects/results if you're short on memory.
3. Optimize your database
10 seconds is a long time. Try to find out what is happening with your database. Is it running out of memory? Do you need more indexes?
I have four tables: one for articles, one for comments, one for likes, and one for visits, as in this example schema:
**news**
news_id
**comments**
comment_id
news_id
**likes**
like_id
news_id
**hits**
hit_id
news_id
What I want to do is to list all the articles in a sortable index, in a box/div for each article, with each article's count of hits, comments, and likes. I know how to do all this, so it's not the how I am seeking, it's the best way. I am thinking about these two solutions:
Do it the normal way: a complex SQL query, then cache the result for, let's say, an hour or two.
Write a script that is executed every two or three hours to calculate the data and store it in the same news table, in numeric news_hits, news_likes, news_comments fields.
And of course the third way is to do the query each time the page is loaded, without any caching.
I feel that it's method number one that I should go for, but I wanted a professional or experienced opinion. I am not expecting a huge number of visitors, around 500-1000 a day maximum, but I still want to be prepared for high traffic.
Thank you,
Rami
It would be best to accept some redundancy in this case, to improve speed. Add these fields to the news table:
comments_count int not null default 0,
likes_count int not null default 0,
hits_count int not null default 0
When a comment/like/hit is added or deleted, if the database supports triggers, trigger an increment/decrement of the referenced counter; if not, do it manually on each insert/delete (a stored procedure, maybe?).
This type of data is more often read than written, so to optimize read speed, slowing down write speed and storage space isn't a big deal.
From time to time, it would be OK to run a query that updates these counters in case they have somehow become erroneous.
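That occasional correction query could look roughly like this (a sketch based on the schema in the question and the counter columns suggested above):

-- Re-derive the three counters from the detail tables in one pass.
UPDATE news n
SET n.comments_count = (SELECT COUNT(*) FROM comments c WHERE c.news_id = n.news_id),
    n.likes_count    = (SELECT COUNT(*) FROM likes l WHERE l.news_id = n.news_id),
    n.hits_count     = (SELECT COUNT(*) FROM hits h WHERE h.news_id = n.news_id);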
Break the complex SQL into several smaller (less complex) queries and cache the individual results, so that any time you want to warm up the cache, it won't take too many database resources.
With such a simple model, query and low number of visitors I would go for the straight query. It will execute just fine (milliseconds) with proper indexing.
If I understand the scenario correctly, the query should sort news articles by their popularity, which is determined in some way by the number of likes/hits/comments.
If you are set on fixing a performance problem you may not actually run into, the simplest "solution" would be to use a query cache that expires every 10 seconds. With your current load, each visitor would basically always render the view from the database since the cache expires between page visits. If, one day you suddenly become overrun with say 200,000 visitors, you would only perform the query once every 10 seconds.
I have a website that has user ranking as a central part, but the user count has grown to over 50,000 and it is putting a strain on the server to loop through all of them to update the rank every 5 minutes. Is there a better method that can be used to easily update the ranks at least every 5 minutes? It doesn't have to be PHP; it could be something run as a Perl script or similar, if that would do the job better (though I'm not sure why it would, I'm just leaving my options open here).
This is what I currently do to update ranks:
$get_users = mysql_query("SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC");
$i=0;
while ($a = mysql_fetch_array($get_users)) {
$i++;
mysql_query("UPDATE users SET month_rank = '$i' WHERE id = '$a[id]'");
}
UPDATE (solution):
Here is the solution code, which takes less than 1/2 of a second to execute and update all 50,000 rows (make rank the primary key as suggested by Tom Haigh).
mysql_query("TRUNCATE TABLE userRanks");
mysql_query("INSERT INTO userRanks (userid) SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC");
mysql_query("UPDATE users, userRanks SET users.month_rank = userRanks.rank WHERE users.id = userRanks.id");
Make userRanks.rank an autoincrementing primary key. If you then insert userids into userRanks in descending rank order it will increment the rank column on every row. This should be extremely fast.
TRUNCATE TABLE userRanks;
INSERT INTO userRanks (userid) SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC;
UPDATE users, userRanks SET users.month_rank = userRanks.rank WHERE users.id = userRanks.userid;
My first question would be: why are you doing this polling-type operation every five minutes?
Surely rank changes will be in response to some event and you can localize the changes to a few rows in the database at the time when that event occurs. I'm pretty certain the entire user base of 50,000 doesn't change rankings every five minutes.
I'm assuming the "status = '1'" indicates that a user's rank has changed so, rather than setting this when the user triggers a rank change, why don't you calculate the rank at that time?
That would seem to be a better solution as the cost of re-ranking would be amortized over all the operations.
Now I may have misunderstood what you meant by ranking in which case feel free to set me straight.
A simple alternative for bulk update might be something like:
SET @rnk = 0;
UPDATE users
SET month_rank = (@rnk := @rnk + 1)
ORDER BY month_score DESC;
This code uses a user variable (@rnk) that is incremented on each update. Because the update is applied over the ordered list of rows, the month_rank column will be set to the incremented value for each row.
Updating the users table row by row will be a time-consuming task. It would be better if you could re-organise your query so that row-by-row updates are not required.
I'm not 100% sure of the syntax (as I've never used MySQL before) but here's a sample of the syntax used in MS SQL Server 2000
DECLARE @tmp TABLE
(
    [MonthRank] [INT] IDENTITY(1,1) NOT NULL,
    [UserId] [INT] NOT NULL
)

INSERT INTO @tmp ([UserId])
SELECT [id]
FROM [users]
WHERE [status] = '1'
ORDER BY [month_score] DESC

UPDATE users
SET month_rank = [tmp].[MonthRank]
FROM @tmp AS [tmp], [users]
WHERE [users].[Id] = [tmp].[UserId]
In MS SQL Server 2005/2008 you would probably use a CTE.
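For reference, the 2005/2008 version might look something like this (an untested sketch, using the same users table as above):

-- Rank users with ROW_NUMBER() and update straight through the CTE.
WITH ranked AS (
    SELECT month_rank,
           ROW_NUMBER() OVER (ORDER BY month_score DESC) AS new_rank
    FROM users
    WHERE status = '1'
)
UPDATE ranked SET month_rank = new_rank;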
Any time you have a loop of any significant size that executes queries inside, you've got a very likely antipattern. We could look at the schema and processing requirement with more info, and see if we can do the whole job without a loop.
How much time does it spend calculating the scores, compared with assigning the rankings?
Your problem can be handled in a number of ways. Honestly more details from your server may point you in a totally different direction. But doing it that way you are causing 50,000 little locks on a heavily read table. You might get better performance with a staging table and then some sort of transition. Inserts into a table no one is reading from are probably going to be better.
Consider
mysql_query("delete from month_rank_staging;");
while(bla){
mysql_query("insert into month_rank_staging values ('$id', '$i');");
}
mysql_query("update month_rank_staging src, users set users.month_rank=src.month_rank where src.id=users.id;");
That'll cause one (bigger) lock on the table, but might improve your situation. But again, that may be way off base depending on the true source of your performance problem. You should probably look deeper at your logs, mysql config, database connections, etc.
Possibly you could use shards by time or other category. But read this carefully before...
You can split up the rank processing and the updating execution. So, run through all the data and process the query. Add each update statement to a cache. When the processing is complete, run the updates. You should have the WHERE portion of the UPDATE reference a primary key set to auto_increment, as mentioned in other posts. This will prevent the updates from interfering with the performance of the processing. It will also prevent users later in the processing queue from wrongfully taking advantage of the values from the users who were processed before them (if one user's rank affects that of another). It also prevents the database from clearing out its table caches from the SELECTS your processing code does.
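A minimal sketch of that split, assuming a mysqli connection in $link and the users table from the question (everything here is illustrative, not the poster's code):

// Pass 1: compute the ranks without touching the table.
$updates = array();
$result  = mysqli_query($link, "SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC");

$rank = 0;
while ($row = mysqli_fetch_assoc($result)) {
    $rank++;
    $updates[] = "UPDATE users SET month_rank = $rank WHERE id = " . (int) $row['id'];
}

// Pass 2: run the queued updates only once the processing is complete.
foreach ($updates as $sql) {
    mysqli_query($link, $sql);
}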