User bandwidth quota management with MySQL and PHP [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have a large user table in MySQL with 50k+ entries.
These users have a limited download quota for the files they store on my website.
Currently the quota resets every 24 hours by resetting the traffic count in a separate 'traffics' table (where each row has a userid and a trafficused field).
That makes two tables with 50k entries each (representing 50k users).
This system has worked fine so far, but I have no way of keeping a user's bandwidth usage history, since to reset the quota each day I have to clear the traffics table.
I use PHP to update the transferred bandwidth on each download completion.
I need to be able to limit quota per day and/or per month as efficiently as possible, without making a mess of the MySQL tables.
There is another complication: I have separate user quotas for different file types. For example, .iso files have no quota, .mp4 files have a 5 GB per-day limit, and .rar files have a 10 GB per-day limit.
The maximum user quota per day is 20 GB.
I know it all sounds very confusing. I can post the table structures here if needed.
Please help if you can.
Thanks

50k rows is a medium-size table, not a large one. Don't fear that size; just index it correctly.
Try adding a DATE column to your traffics table. Then, once a day, run a MySQL event to do this query:
DELETE FROM traffics WHERE trafficdate < CURDATE() - INTERVAL 30 DAY
This will purge old traffic records.
When you need to know today's usage, do:
SELECT trafficused FROM traffics WHERE userid = whatever
AND trafficdate = CURDATE()
Similarly, when you need 30 days' worth of traffic, do:
SELECT SUM(trafficused) FROM traffics WHERE userid = whatever
AND trafficdate >= CURDATE() - INTERVAL 30 DAY
To store a traffic transaction, do this
INSERT INTO traffics (userid, trafficdate, trafficused)
VALUES (whatever, CURDATE(), filesize)
ON DUPLICATE KEY UPDATE trafficused=trafficused+filesize
Make sure your traffics table has the composite primary key (userid, trafficdate).
Also create the covering index (userid, trafficdate, trafficused) to make your queries faster.
This approach, with ON DUPLICATE KEY, means you don't have to have a row for every user for every day.
Handling separate bandwidth limits per file type is a matter of adding a filetype column and including it in the keys and queries.
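As a rough sketch of how the pieces fit together, here is the upsert-plus-check flow in Python with SQLite, so the example is self-contained. The table and column names are the ones from the question, and the 20 GB cap is the one the asker stated; in MySQL the upsert would be the ON DUPLICATE KEY UPDATE statement shown above (SQLite spells it ON CONFLICT ... DO UPDATE):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE traffics (
        userid INTEGER NOT NULL,
        trafficdate TEXT NOT NULL,
        trafficused INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (userid, trafficdate)  -- one row per user per day
    )
""")

DAILY_QUOTA = 20 * 1024**3  # 20 GB in bytes, per the question

def record_download(userid, nbytes):
    # SQLite upsert; the MySQL equivalent is INSERT ... ON DUPLICATE KEY UPDATE.
    conn.execute(
        """INSERT INTO traffics (userid, trafficdate, trafficused)
           VALUES (?, ?, ?)
           ON CONFLICT(userid, trafficdate)
           DO UPDATE SET trafficused = trafficused + excluded.trafficused""",
        (userid, date.today().isoformat(), nbytes),
    )

def usage_today(userid):
    row = conn.execute(
        "SELECT trafficused FROM traffics WHERE userid = ? AND trafficdate = ?",
        (userid, date.today().isoformat()),
    ).fetchone()
    return row[0] if row else 0

def may_download(userid, nbytes):
    # Check before serving the file; PHP would do the same query.
    return usage_today(userid) + nbytes <= DAILY_QUOTA

record_download(1, 5 * 1024**3)
record_download(1, 5 * 1024**3)
print(usage_today(1))                  # 10737418240 (10 GB)
print(may_download(1, 11 * 1024**3))   # False: would exceed the 20 GB cap
```

Because missing rows simply count as zero usage, no nightly reset of 50k rows is needed; history accumulates one row per active user per day until the 30-day purge removes it.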

Related

performance concerns when storing log records in MySQL

I have set up a 15-minute cron job to monitor 10+ websites' performance (all hosted on different servers). Every 15 minutes I check whether the server is up or down, the response time in ms, etc.
I would like to save this information in MySQL instead of a log.txt file, so that it can be easily retrieved, queried and analysed (i.e. the server performance on day x, in month x, or between days x and y).
Here is what my table is going to look like:
id website_ip recorded_timestamp response_time_in_ms website_status
If I insert a new entry for each check, every day I'll have 96 records per website (4 x 24), so for 10 websites that's 960 records every day!
So I'm thinking of creating only 1 entry per hour per website. That way, instead of 960 records every day, I'll only have 24 x 10 = 240 records every day for 10 websites.
But still, it's not perfect. What if I want to keep the records for the whole year? Then I'll have 87,600 records over 365 days for 10 websites.
Is 87,600 records a lot? My other big concern is the difference between the server's local time and the client's local time. How can I improve the design without sacrificing accuracy or timezone correctness?
This is a bit long for a comment. The simple answer to your question is "no".
No, 87,600 records is not a lot of records. In fact, your full 15-minute data, about 350,000 records per year, is not that much either. It could be a lot of data if you had really wide records, but your record is at most a few tens of bytes, so the resulting table would still be well under a gigabyte per year. Not very much at all, really.
Databases are designed to handle big tables, and you have several ways to speed up queries on the log data. The most obvious is to create indexes on the columns that are often used for filtering. Another is to use partitioning to break the table's storage into separate "files" (technically tablespaces), so queries require less I/O.
You may want to periodically summarize more recent records and store the results in summary tables for your most common queries. However, that level of detail is beyond the scope of this answer.
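To make the indexing point concrete, here is a sketch in Python with SQLite (for a self-contained example; the table and column names are the ones proposed in the question). It also addresses the timezone worry: store every timestamp in UTC and convert to the viewer's local time only at display time:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE website_log (
        id INTEGER PRIMARY KEY,
        website_ip TEXT NOT NULL,
        recorded_timestamp TEXT NOT NULL,  -- always stored in UTC
        response_time_in_ms INTEGER,
        website_status TEXT
    )
""")
# Index matching the common query pattern: one site over a time range.
conn.execute(
    "CREATE INDEX idx_site_time ON website_log (website_ip, recorded_timestamp)"
)

# One day of 15-minute samples for a single (example) site.
base = datetime(2024, 1, 1, tzinfo=timezone.utc)
for i in range(96):
    ts = (base + timedelta(minutes=15 * i)).isoformat()
    conn.execute(
        "INSERT INTO website_log "
        "(website_ip, recorded_timestamp, response_time_in_ms, website_status) "
        "VALUES (?, ?, ?, ?)",
        ("203.0.113.7", ts, 100 + i, "up"),
    )

# Average response time for that site on that day; the index covers this.
avg_ms = conn.execute(
    "SELECT AVG(response_time_in_ms) FROM website_log "
    "WHERE website_ip = ? AND recorded_timestamp >= ? AND recorded_timestamp < ?",
    ("203.0.113.7", base.isoformat(), (base + timedelta(days=1)).isoformat()),
).fetchone()[0]
print(avg_ms)  # 147.5
```

In MySQL you would likely use a TIMESTAMP column (which is stored in UTC internally) and the same composite index; the query shape stays identical.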

How to properly handle a car race in MySQL? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
Most likely a clueless question, but I would like to start off on the right foot.
Despite trying my best, I have never really learned to program and I'm kind of "learning as I go", so please excuse me if this seems very obvious to you.
It's more of a suggestions-and-feedback kind of question than a pure programming one.
My situation is the following:
I'm building a racing game that receives various inputs from a number of users (through a PHP website). I store that information in a MySQL database, and once a week I would like to process it all to generate "lap times", which will then create a race (my "output").
Not taking into account the various methods of calculating that output, I need to do two important things which I'm not sure how to begin at all:
1) Storing the race information for every user (lap time per lap, fastest lap, race position per lap, race position at the end of the race, points awarded depending on position).
Where and how should I optimally store that information?
I have created a race table with a unique auto-incrementing identifier. I'm thinking I will generate one set of data for each race, so should I store all the information pertaining to that race in there?
Would I then create a row (with type TIME?) for the lap time information (1 row for lap 1, 1 row for fastest, etc.)? But how would I know which user (I have a unique userID for each) did which lap, i.e. how would I assign the userID to the lap time?
2) At the end of the race I need to award points depending on the finishing position. Should I just compare total lap times (an additional row?) and sort by lowest first? Should the points data be stored in the user table?
I appreciate any input you might have on the modeling of this project!
Store every lap_round, lap_time and position as its own row in the DB, and add a user_id and a race_id to each.
Afterwards, query the laps. That way you can tell which lap is fastest overall, which is fastest per user, the time per lap, and much more.
To get the finishing position, query the DB for the last lap; it holds the position.
Points are user-based, so put them in the user table and just add to them. But if you want to tell how many points were awarded per race, then make a separate table points (user_id, race_id, points).
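A minimal sketch of that laps table and the two queries it enables (SQLite in Python for a self-contained illustration; the column names follow this answer, the lap times in milliseconds and the user IDs are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE laps (
        race_id INTEGER NOT NULL,
        user_id INTEGER NOT NULL,
        lap_round INTEGER NOT NULL,
        lap_time INTEGER NOT NULL,  -- milliseconds
        position INTEGER NOT NULL,
        PRIMARY KEY (race_id, user_id, lap_round)
    )
""")

# Two drivers, two laps each, in race 1 (illustrative data).
laps = [
    (1, 101, 1, 92300, 1), (1, 101, 2, 90100, 1),
    (1, 102, 1, 93000, 2), (1, 102, 2, 91500, 2),
]
conn.executemany("INSERT INTO laps VALUES (?, ?, ?, ?, ?)", laps)

# Fastest single lap in race 1 (SQLite returns the matching row's user_id).
fastest = conn.execute(
    "SELECT user_id, MIN(lap_time) FROM laps WHERE race_id = 1"
).fetchone()

# Final standings: lowest total time first, which also decides the points.
standings = conn.execute(
    "SELECT user_id, SUM(lap_time) AS total FROM laps "
    "WHERE race_id = 1 GROUP BY user_id ORDER BY total"
).fetchall()
print(fastest)    # (101, 90100)
print(standings)  # [(101, 182400), (102, 184500)]
```

Because each row carries both user_id and race_id, every question in the original post (fastest lap, per-lap times, finishing order) becomes a single query over one table.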

Strategy for finding active/inactive users in a period of time [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I need to implement a mechanism with the following abilities:
Add new users
Delete a user
Enable a user
Disable a user
And also let the administrator select a period of time and then show which users were:
Available in the system during that period
Enabled
Disabled
The result should be for that exact period of time.
Note: a user might be enabled or disabled several times, and I need to keep track of every single change. So if a user was disabled from March 1st to April 2nd, they should not appear in the results when the administrator queries a period between March 1st and April 2nd, but they should be included for any other period.
The tricky part is also including the users who were added, deleted, enabled or disabled before the period the administrator is querying.
I don't have any setup for now, so I'm open to any idea. I'm thinking of a mechanism like a log which can be queried later, but it needs to be really fast because I will use it in many places.
I would prefer to do everything in a single MySQL query, but a PHP combination/interaction is also okay.
Per commentary, look into Slowly Changing Dimensions:
http://en.wikipedia.org/wiki/Slowly_changing_dimension
An additional tip, having implemented this a few times myself: personally, I've found it better to have two sets of tables rather than a single one.
Think of the main one as a normal table with an extra rev (for revision_id) field:
id, rev, field1, field2
rev is a foreign key to the revisions table:
id, rev, field1, field2, start_date, end_date
And if you ever use Postgres to implement it, I'd advise looking into the tsrange type instead of two separate start_date and end_date columns.
Keeping the main table separate from the history tables makes "normal" queries perform better, and makes both much easier to index.
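A sketch of the two-table layout and the period query (SQLite in Python for a self-contained illustration; the field and table names follow this answer, and the dates and user data are made up). Each revision row carries a validity window, so "who was enabled during period X" becomes an interval-overlap query against the history table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Current state: one row per user; rev points at the live revision.
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        rev INTEGER NOT NULL,
        name TEXT NOT NULL,
        enabled INTEGER NOT NULL
    );
    -- Full history: one row per revision, with its validity window.
    CREATE TABLE user_revisions (
        id INTEGER NOT NULL,        -- user id
        rev INTEGER NOT NULL,
        name TEXT NOT NULL,
        enabled INTEGER NOT NULL,
        start_date TEXT NOT NULL,
        end_date TEXT,              -- NULL means still current
        PRIMARY KEY (id, rev)
    );
""")

# User 1: enabled Jan 1, disabled Mar 1, re-enabled Apr 2.
conn.executemany(
    "INSERT INTO user_revisions VALUES (?, ?, ?, ?, ?, ?)",
    [
        (1, 1, "alice", 1, "2024-01-01", "2024-03-01"),
        (1, 2, "alice", 0, "2024-03-01", "2024-04-02"),
        (1, 3, "alice", 1, "2024-04-02", None),
    ],
)
conn.execute("INSERT INTO users VALUES (1, 3, 'alice', 1)")

def enabled_during(period_start, period_end):
    """Users with an enabled revision overlapping [period_start, period_end)."""
    return [r[0] for r in conn.execute(
        """SELECT DISTINCT id FROM user_revisions
           WHERE enabled = 1
             AND start_date < ?
             AND (end_date IS NULL OR end_date > ?)""",
        (period_end, period_start),
    )]

print(enabled_during("2024-03-05", "2024-03-20"))  # [] (disabled all through that window)
print(enabled_during("2024-02-01", "2024-02-15"))  # [1]
```

The interval-overlap test (start < query_end AND end > query_start) is exactly what Postgres's tsrange overlap operator would express in one condition.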

Possible ways to create a turn based system using PHP/MySQL [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
Right, I'm trying to create a system whereby a user can make a move, but then must wait until all the other users in the MySQL table have made their moves, i.e.:
User 1 makes a move; users 2 and 3 must wait.
User 2 makes a move; users 1 and 3 must wait.
User 3 makes a move; users 1 and 2 must wait.
User 1 makes a move...
One way I thought of was to give each of the users a number (ranging from 1 to the total number of players, say 6). When a player makes a move, set their number to the maximum (6) and decrease everyone else's number by one, so the player with the minimum number is the only one who can play.
That's my only idea. Is there an easier or alternative way?
My suggestion would be to just store the last move date as a DATETIME. When you need to check whether a user can move, simply select from the table all of the other players whose last move date is less than or equal to the current player's last move date. If the number of rows is not 0, then the player cannot move yet.
The benefit of this approach is its simplicity: every time you allow a player to make a move, just update the column with the current date and time.
Your proposed solution seems a little circuitous:
You're updating and reading every player on every move, when the minimum information you need to maintain is whose move it is.
You're losing information about player order as you encode the next-turn information.
A high-level solution:
Create a games table, one row per game, with a column like INT currentTurn
Create a gameUsers table on a per-game basis, linked to its game in games
Do assign each of the n users in gameUsers an INT playerOrder ranging [1-n]
Only accept a move from playerN if playerN == "SELECT playerID FROM gameUsers WHERE playerOrder = currentTurn"
After a successful move: "UPDATE games SET currentTurn = currentTurn + 1 WHERE game = thisGame"
I believe the above table structure is a good object-oriented representation of an actual game model. You can stash other per-game things into games, like winner, length, date, etc. Pardon the pseudo-SQL.
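The steps above can be sketched as follows (SQLite in Python for a self-contained example; the games/gameUsers/currentTurn/playerOrder names are the ones from this answer, and the player IDs are made up). One extra detail not spelled out above is wrapping the turn counter back to 1 after the last player moves:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE games (
        id INTEGER PRIMARY KEY,
        currentTurn INTEGER NOT NULL DEFAULT 1
    );
    CREATE TABLE gameUsers (
        game_id INTEGER NOT NULL,
        player_id INTEGER NOT NULL,
        playerOrder INTEGER NOT NULL,  -- 1..n, fixed seating order
        PRIMARY KEY (game_id, player_id)
    );
""")
conn.execute("INSERT INTO games (id) VALUES (1)")
conn.executemany("INSERT INTO gameUsers VALUES (1, ?, ?)",
                 [(101, 1), (102, 2), (103, 3)])

def try_move(game_id, player_id):
    """Accept the move only if it is this player's turn, then advance the turn."""
    row = conn.execute(
        """SELECT player_id FROM gameUsers g
           JOIN games ON games.id = g.game_id
           WHERE g.game_id = ? AND g.playerOrder = games.currentTurn""",
        (game_id,),
    ).fetchone()
    if row is None or row[0] != player_id:
        return False  # not this player's turn; reject the move
    n = conn.execute("SELECT COUNT(*) FROM gameUsers WHERE game_id = ?",
                     (game_id,)).fetchone()[0]
    # Advance the turn, wrapping back to player 1 after the last player.
    conn.execute("UPDATE games SET currentTurn = currentTurn % ? + 1 WHERE id = ?",
                 (n, game_id))
    return True

print(try_move(1, 102))  # False: it is player 101's turn
print(try_move(1, 101))  # True
print(try_move(1, 102))  # True, and so on round the table
```

Only one small row per game is updated per move, which is the minimal-state property this answer argues for.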
You could have a table with a column hasMoved TINYINT(1) NOT NULL DEFAULT 0, and query for rows where hasMoved = 0; if the query returns no rows, then all players have moved.
(Note: this is based on "must wait for all other users", NOT on a strict move order - i.e. 'A' must move before 'B', who must move before 'C', etc.)
Additionally, queries using this method are somewhat slow and (to me) seem somewhat unnecessarily resource-intensive - perhaps think about using Ajax polling instead?
Have a game sequence number that starts at zero. Have a "last moved" number for each player. When a player moves, set their "last moved" number equal to the game sequence number. Once every player has moved, increment the game sequence number.
You may want to use a timeout though, otherwise a player who doesn't move can delay the other players indefinitely.
I would first determine $sequence by calculating speed, then compare speeds to determine order, then use the order to send out notices for each player's move. Use a timestamp to ensure the user doesn't take over a day (or however long); you will need a cron job just for this.
Have a variable or array hold the first-to-last sequence so you can easily move the last-moved player to the back without mixing up the order.
Have the page check the player's order sequence and not allow action unless it's at 1 or 0. Be sure to sanitize inputs so no manipulation is possible. Then add your form, graphics and game equations.
You can save the date-time of the last move of each user. Then, when you sort this table by that date-time column ascending (oldest move first), you only have to fetch the first row of the result, which will contain the ID of the player allowed to make a move.

MySQL - Summary Tables

Which method do you suggest and why?
Creating a summary table and . . .
1) Updating the table as the action occurs in real time.
2) Running group by queries every 15 minutes to update the summary table.
3) Something else?
The data must be near real time; it can't wait an hour, a day, etc.
I think there is a 3rd option, which might let you manage your CPU resources a little better. How about writing a separate process that periodically updates the summary tables? Rather than recreating the summary with a GROUP BY, which is guaranteed to run slower over time (there will be more rows every time you do it), maybe you can just update the existing values. Depending on the nature of the data this may be impossible, but if it is so important that it can't wait and has to be near real time, then I think you can afford the time to tweak the schema and let the process update the summary without reading every row in the source tables.
For example, say your data is just login_data (cols username, login_timestamp, logout_timestamp). Your summary could be login_summary (cols username, count). Once every 15 minutes you could truncate the login_summary table and then repopulate it with a SELECT username, COUNT(*) query; but then you'd have to rescan the entire table each time. To speed things up, you could add a last_update column to the summary table. Then every 15 minutes you'd only process the records newer than the last_update value for each user. More complicated, of course, but it has two benefits: 1) you only update the rows that changed, and 2) you only read the new rows.
And if 15 minutes turned out to be too old for your users, you could adjust it to run every 10 mins. That would have some impact on CPU of course, but not as much as redoing the entire summary every 15 mins.
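A sketch of that incremental refresh (SQLite in Python for a self-contained example; login_data and login_summary are the names from this answer, though the counter column is renamed login_count here to avoid clashing with the COUNT keyword, and an auto-increment id is used as the high-water mark instead of a last_update timestamp - both are assumptions of this illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE login_data (
        id INTEGER PRIMARY KEY,
        username TEXT NOT NULL,
        login_timestamp TEXT NOT NULL
    );
    CREATE TABLE login_summary (
        username TEXT PRIMARY KEY,
        login_count INTEGER NOT NULL DEFAULT 0
    );
    -- High-water mark: the last login_data.id already folded into the summary.
    CREATE TABLE summary_state (last_id INTEGER NOT NULL);
    INSERT INTO summary_state VALUES (0);
""")

def refresh_summary():
    """Fold only the rows added since the last run into the summary."""
    last_id = conn.execute("SELECT last_id FROM summary_state").fetchone()[0]
    conn.execute(
        """INSERT INTO login_summary (username, login_count)
           SELECT username, COUNT(*) FROM login_data WHERE id > ?
           GROUP BY username
           ON CONFLICT(username)
           DO UPDATE SET login_count = login_count + excluded.login_count""",
        (last_id,),
    )
    conn.execute(
        "UPDATE summary_state SET last_id = "
        "(SELECT COALESCE(MAX(id), 0) FROM login_data)"
    )

conn.executemany("INSERT INTO login_data (username, login_timestamp) VALUES (?, ?)",
                 [("alice", "t1"), ("bob", "t2"), ("alice", "t3")])
refresh_summary()
conn.executemany("INSERT INTO login_data (username, login_timestamp) VALUES (?, ?)",
                 [("alice", "t4")])
refresh_summary()  # reads only the one new row
rows = conn.execute(
    "SELECT username, login_count FROM login_summary ORDER BY username"
).fetchall()
print(rows)  # [('alice', 3), ('bob', 1)]
```

Each refresh scans only the new rows and touches only the users who changed, so the cost per run stays roughly constant as the source table grows.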
