Daily/Weekly/Monthly Highscores - PHP

I have an online highscores page built with PHP + MySQL, but it currently shows only the all-time highscores. I want to add daily/weekly/monthly boards to that, and I was wondering what would be the best way to do it?
My current thought is to add 3 new tables, have the data inserted into each of them, and then have a cron job run at the appropriate times to clear each table.
Is there any better way I could do this?
Another thing: I want the page to work as highscores.php?t=all, t=daily, etc. How would I make the page change its query depending on that value?
Thanks.

Use one table and add a column with the date of the highscore. Then use a different query for each timespan, e.g.
SELECT ... FROM highscores WHERE date > '2011-12-05';
(MySQL DATE literals are written YYYY-MM-DD.)
If you want to have a generic version without the need to have a fixed date, use this one:
SELECT ...
FROM highscores
WHERE date >= curdate() - INTERVAL DAYOFWEEK(curdate())+6 DAY;
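For the highscores.php?t=... part of the question, a common pattern is to map the t parameter onto a date condition through a whitelist, so the user's input never reaches the SQL string directly. A minimal sketch (the table and column names are taken from the example above; adjust to your schema):

```php
<?php
// Map the ?t= value to a WHERE clause through a whitelist.
// Unknown values fall back to the all-time board.
function whereClauseFor(string $t): string
{
    $clauses = [
        'daily'   => "WHERE date >= CURDATE()",
        'weekly'  => "WHERE date >= CURDATE() - INTERVAL 7 DAY",
        'monthly' => "WHERE date >= CURDATE() - INTERVAL 1 MONTH",
        'all'     => "",
    ];
    return $clauses[$t] ?? "";
}

$t   = $_GET['t'] ?? 'all';
$sql = "SELECT name, score FROM highscores "
     . whereClauseFor($t)
     . " ORDER BY score DESC LIMIT 10";
```

Because the clause is picked from a fixed array rather than built from $_GET, there is nothing to escape.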

Related

Moving Pivot Table template from csv(Excel) to MySQL

I usually prepare reports and charts in Excel manually: I add several columns to the raw data by hand, then build a pivot table on those fields to populate the report.
I would like to see if this can be automated by:
a) Loading the data into a mysql database
b) Using several queries to add the additional columns and prepare the data, ready to be used by
c) chart APIs / jQuery.
Since I know CSV-to-MySQL import is easier, I now have the raw data file in CSV format.
The raw data basically contains different field types, mainly times, datetimes, and strings.
Using a PHP script, I was able to load this data with the LOAD DATA LOCAL INFILE command.
Based on the dates, I need to prepare a month column y, which has to be populated with the month name ('Jan', etc.) derived from the datetime field (yyyy-mm-dd hh:mm:ss) in a certain column x of the same table.
Or maybe I can just use a query like this and reference it in the graphs (not sure how complex that would be):
mysql> select count(*) as Count, monthname(date) from alerts;
+-------+---------------------------------+
| Count | monthname(date) |
+-------+---------------------------------+
| 24124 | March |
+-------+---------------------------------+
1 row in set (0.19 sec)
Similarly, I need a column a that says "Duration < 5 minutes" and a column b that says "Duration >= 5 min and < 10 min", where I would put a numeric value 1 if the row falls within that range.
I looked into the self-join examples, but I could not make it work in my case despite several attempts.
I need some help to get me going, because my belief is that a table with all the relevant columns precomputed is better than running those queries at report time.
Also, is it better to format the data first and then load it into MySQL, or load the data and then format it?
Please let me know.
Thanks
Update1
Okay, I got this working with a self-join as below:
UPDATE t1 p1 INNER JOIN (SELECT monthname(dt_received) AS EXTMONTHNAME FROM t1) p2 SET p1.MONTH = p2.EXTMONTHNAME;
But why does it update every row with the same month name, even though dt_received contains other months?
Can someone help?
Update2
Again, still struggling. I was made aware of the 1093 error/constraint, but the workarounds are simply not helping.
Unlike Excel, where manual formatting was required, I found querying the database much easier.
This resolved the issue
UPDATE tablename p1
INNER JOIN (SELECT monthname(dt_received) AS EXTMONTHNAME FROM tablename) p2
SET p1.MONTH = p2.EXTMONTHNAME
WHERE monthname(p1.dt_received) = p2.EXTMONTHNAME;
But would someone know why it takes close to 14 minutes to change 36,879 rows?
How do I optimize it?
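One hedged observation on the 14 minutes: the derived table p2 contains one row per row of tablename, so the join compares every row against every other row, and matching on monthname() alone also ignores the year. If the goal is simply "fill MONTH from dt_received in the same row", no self-join is needed at all; a plain per-row UPDATE should be far faster (a sketch, using the column names from the question):

```sql
UPDATE tablename
SET MONTH = MONTHNAME(dt_received);
```

MONTHNAME() is evaluated against each row's own dt_received, which also avoids the 1093 restriction entirely.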

Long polling with PHP and jQuery - issue with update and delete

I wrote a small script which uses the concept of long polling.
It works as follows:
jQuery sends a request with some parameters (say lastId) to PHP.
PHP gets the latest id from the database and compares it with lastId.
If lastId is smaller than the newly fetched id, then it ends the
script and echoes the new records.
From jQuery, I display this output.
I have taken care of all security checks. The problem is when a record is deleted or updated, there is no way to know this.
The nearest solution I can get is to count the number of rows and match it against some saved row-count variable. But then, if I have 1000 records, I have to echo out all 1000 records, which can be a big performance issue.
The CRUD functionality of this application is completely separate and runs on a different server, so I don't get to know which record was deleted.
I don't need any help coding wise, but i am looking for some suggestion to make this work while updating and deleting.
Please note, WebSockets (my favourite) and Node.js are not an option for me.
Instead of using a certain ID from your table, you could also check when the table itself was last modified.
SQL:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'yourdb'
AND TABLE_NAME = 'yourtable';
If successful, the statement should return something like
UPDATE_TIME
2014-04-02 11:12:15
Then use the resulting timestamp instead of the lastId. I am using a very similar technique to display and auto-refresh logs; it works like a charm.
You have to adjust the statement to your needs, and replace yourdb and yourtable with the values needed for your application. It also requires you to have access to information_schema.tables, so check if this is available, too.
Two alternative solutions:
If the solution described above is too imprecise for your purpose (it might lead to issues when the table is changed multiple times per second), you might combine that timestamp with your current lastId mechanism to cover new inserts.
Another way would be to implement a table in which the current state is logged; this is where your AJAX requests check the current state. Then create triggers on your data tables which update this table.
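A sketch of that state-table idea in MySQL (all table and trigger names here are made up for illustration): keep a single row holding the last modification time and let triggers on the data table bump it, so the long-poll script only ever reads one tiny row.

```sql
CREATE TABLE poll_state (
  id            TINYINT  NOT NULL PRIMARY KEY,  -- always 1: single-row table
  last_modified DATETIME NOT NULL
);
INSERT INTO poll_state VALUES (1, NOW());

-- One trigger per operation on the data table (here called records):
CREATE TRIGGER records_ai AFTER INSERT ON records
  FOR EACH ROW UPDATE poll_state SET last_modified = NOW() WHERE id = 1;
CREATE TRIGGER records_au AFTER UPDATE ON records
  FOR EACH ROW UPDATE poll_state SET last_modified = NOW() WHERE id = 1;
CREATE TRIGGER records_ad AFTER DELETE ON records
  FOR EACH ROW UPDATE poll_state SET last_modified = NOW() WHERE id = 1;
```

Unlike information_schema.UPDATE_TIME, this works regardless of storage engine, and deletes and updates are caught as well as inserts.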
You can get the highest ID with
SELECT id FROM table ORDER BY id DESC LIMIT 1
but this is not reliable in my opinion, because you can have IDs of 1, 2, 3, 7 and then insert a new row having the ID 5.
Keep in mind: the highest ID is not necessarily the most recent row.
The current auto increment value can be obtained by
SELECT AUTO_INCREMENT FROM information_schema.tables
WHERE TABLE_SCHEMA = 'yourdb'
AND TABLE_NAME = 'yourtable';
Maybe a timestamp + microtime is an option for you?

Best way to update view count in PHP/MySQL

My client has a table that tracks total views for each of his articles. The problem is they want me to break the view count into days. I can easily enough query the db and grab the view counts, but I'm unsure of how to grab each days view count (for each article of course).
In case I'm not being clear (which is usually the case, I've been told): I have a field in a table that collects all views on each article with no regard to date or time. If the article was viewed, the row is plus-one'd. Look at the record a year from now and the view count shows 2,000. That's it.
What I want to do is capture each days view count for each article and plunk that into its own table but I CANNOT impact said view count field/record. This way, the client can view each days view count on each article. Any idea on the best approach?
I hope that all made sense!!
If I were you, I would make a new table for views and insert a new record on each view, recording when it was viewed. Then I would select all the views dated today and count them; that would give me the number of times the article was viewed today, while still keeping the total count.
Something like:
INSERT INTO `daily_views` SET
  `views` = (
    SELECT COUNT(*) FROM `views_table`
    WHERE `date` BETWEEN '2013-06-10' AND '2013-06-11'
      AND `post_id` = 1
  ),
  `post_id` = 1;
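If the client has more than a handful of articles, the same idea extends to a single cron-run statement that snapshots every article at once (a sketch; it assumes daily_views has post_id, day and views columns):

```sql
INSERT INTO daily_views (post_id, day, views)
SELECT post_id, CURDATE() - INTERVAL 1 DAY, COUNT(*)
FROM views_table
WHERE `date` >= CURDATE() - INTERVAL 1 DAY
  AND `date` <  CURDATE()
GROUP BY post_id;
```

The half-open range (>= yesterday, < today) avoids counting the boundary moment twice, which BETWEEN can do with datetime values.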
Start at New Year's Eve. Each day, take the count and store it in a separate table. On the next day, subtract the stored count from the current count: the difference is yesterday's count. Store that difference in another table together with the date. So, clear or not?

Selecting random rows from a table automatically

I'm working on a project that requires a back-end service. I am using MySQL and PHP scripts to communicate with the server side. I would like to add a new feature to the back end: the ability to automatically generate a table with 3 'lucky' members from a table_members table every day. In other words, I would like MySQL to pick 3 random rows from one table and add those rows to another table (if that is possible). I understand that I can achieve this by manually calling the RAND() function on that table, but... that would be painful!
Is there any way to achieve the above?
UPDATE:
Here is my solution on this after comments/suggestions from other users
CREATE EVENT `draw`
ON SCHEDULE EVERY 1 DAY STARTS '2013-02-13 10:00:00'
ON COMPLETION NOT PRESERVE ENABLE
DO
  INSERT INTO tbl_lucky (`field_1`)
  SELECT u_name
  FROM tbl_members
  ORDER BY RAND()
  LIMIT 3;
I hope this is helpful to others as well.
You can use INSERT ... SELECT, selecting 3 rows with ORDER BY RAND() LIMIT 3.
For more information, see the MySQL documentation on the INSERT ... SELECT statement.
It's also possible to automate this daily job with MySQL Events (available since 5.1.6).
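One caveat worth checking if the event never seems to fire: the event scheduler must actually be running, which it is not by default on many installations.

```sql
-- Requires the SUPER privilege; can also be set in my.cnf (event_scheduler=ON)
SET GLOBAL event_scheduler = ON;

-- When it is running, the scheduler shows up as user "event_scheduler" here:
SHOW PROCESSLIST;
```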

Is there a better way to get old data?

Say you've got a database like this:
books
-----
id
name
And you wanted to get the total number of books in the database, easiest possible sql:
"select count(id) from books"
But now you want to get the total number of books last month...
Edit: but some of the books have been deleted from the table since last month.
Well, obviously you can't get a total for a month that's already past: the "books" table is always current, and some of the records have already been deleted.
My approach was to run a cron job (or scheduled task) at the end of the month and store the total in another table, called report_data, but this seems clunky. Any better ideas?
Add a column called "DateAdded" with a default value of GETDATE(). Then you can query between any two dates to find out how many books there were during that period, or specify just one date to find out how many books there were before a certain date (all the way back into history).
Per comment: you should not delete; you should soft delete.
I agree with JP, do a soft delete/logical delete. For the one extra AND statement per query it makes everything a lot easier. Plus, you never lose data.
Granted, if extreme size becomes an issue, then yeah, you'll potentially have to start physically moving/removing rows.
My approach was to run a cron job (or scheduled task) at the end of the month and store the total in another table, called report_data, but this seems clunky.
I have used this method to collect and store historical data. It was simpler than a soft-delete solution because:
The "report_data" table is very easy to generate reports/graphs from
You don't have to implement special soft-delete code for anything that needs to delete a book
You don't have to add "and active = 1" to the end of every query that selects from the books table
Because the code that does the historical reporting is isolated from everything else that uses books, this was actually the less clunky solution.
If you needed data from the previous month then you should not have deleted the old data. Instead you can have a "logical delete."
I would add a status field and some dates to the table.
books
_____
id
bookname
date_added
date_deleted
status (active/deleted)
From there you would be able to query:
SELECT count(id) FROM books WHERE date_added <= '2009-06-30' AND status = 'active'
NOTE: It may not be the best schema, but you get the idea... ;)
If changing the schema of the tables is too much work I would add triggers that would track the changes. With this approach you can track all kinds of things like date added, date deleted etc.
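As a sketch of that trigger idea (MySQL syntax; the books_history audit table and trigger names are invented for illustration):

```sql
CREATE TABLE books_history (
  book_id   INT         NOT NULL,
  action    VARCHAR(10) NOT NULL,   -- 'insert' or 'delete'
  action_at DATETIME    NOT NULL
);

CREATE TRIGGER books_ai AFTER INSERT ON books
  FOR EACH ROW INSERT INTO books_history VALUES (NEW.id, 'insert', NOW());
CREATE TRIGGER books_ad AFTER DELETE ON books
  FOR EACH ROW INSERT INTO books_history VALUES (OLD.id, 'delete', NOW());
```

The book count as of any past date is then the inserts minus the deletes logged up to that date, and the books table itself stays untouched.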
Looking at your problem and the reluctance in changing the schema and the code, I would suggest you to go with your idea of counting the books at the end of each month and storing the count for the month in another table. You can use database scheduler to invoke a SP to do this.
You have just taken a baby step down the road of history databases or data warehousing.
A data warehouse typically stores data about the way things were in a format such that later data will be added to current data instead of superceding current data. There is a lot to learn about data warehousing. If you are headed down that road in a serious way, I suggest a book by Ralph Kimball or Bill Inmon. I prefer Kimball.
Here's the websites: http://www.ralphkimball.com/
http://www.inmoncif.com/home/
If, on the other hand, your first step into this territory is the only step you plan to take, your proposed solution is good enough.
The only way to do what you want is to add a "date_added" column to the books table. Then you could run a query like
select count(id) from books where date_added <= '2009-06-30';
