I'm developing an application in CakePHP 2.4.7
I have a MySQL database, and I've run into the need to trigger an update when the system's date and time match a due date stored in a table.
The table I'm using is the following:
CREATE TABLE applied_surveys (id CHAR(36) NOT NULL PRIMARY KEY,
display_name VARCHAR(200),
area_id CHAR(36) NOT NULL,
survey_id CHAR(36) NOT NULL,
system_user_id CHAR(36) NOT NULL,
code VARCHAR(50),
init_date DATE,
due_date DATE,
init_hour TIME,
due_hour TIME,
completed INT,
state TINYINT DEFAULT 1,
max_responders INT,
created DATE, modified DATE,
FOREIGN KEY (area_id) REFERENCES areas(id),
FOREIGN KEY (survey_id) REFERENCES surveys(id),
FOREIGN KEY (system_user_id) REFERENCES system_users(id));
As you can see, I'm using an init date/hour and a due date/hour. My intention is that I add a survey and set a due date; when the due date and hour are reached, the system must change my state value to 0, meaning that the survey has been closed.
I'm integrating this database to a CakePHP application, but I'm not really sure where I should program the logic for this situation.
You don't necessarily need to update the column; you can simply return the comparison of the current system date and time with due_date and due_hour when the table is queried.
The separation of due_date and due_hour into two columns seems a bit odd. If we assume that neither will be NULL, we can combine them into a DATETIME value and then compare it to NOW(),
e.g.
SELECT NOW() <= CONCAT(s.due_date,' ',s.due_hour) AS `state`
FROM applied_surveys s
A MySQL row trigger gets fired when a row is modified. MySQL triggers don't get fired when the system clock advances.
You could run an UPDATE statement that identifies rows to be updated whenever you wanted, e.g.
UPDATE applied_surveys s
SET s.state = 0
WHERE NOW() >= CONCAT(s.due_date,' ',s.due_hour)
AND NOT (s.state <=> 0)
You will need to run a cron job every second, minute, or hour (whatever you prefer) that checks each record and sees which ones are past the system date and time. You can't expect it to run at exactly the moment the dates match, especially once you account for seconds.
You can read about CRON jobs here : http://code.tutsplus.com/tutorials/managing-cron-jobs-with-php--net-19428
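For example, here is a minimal sketch of a crontab entry that runs the UPDATE from the answer above once a minute (the database name, user, and password are placeholders, not from the question):

* * * * * mysql -u app_user -pPASSWORD app_db -e "UPDATE applied_surveys SET state = 0 WHERE NOW() >= CONCAT(due_date,' ',due_hour) AND NOT (state <=> 0)"

In a CakePHP application the same query could instead live in a console shell that cron invokes, which keeps the logic inside the app.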
You can use events rather than triggers.
A stored procedure is only executed when it is invoked directly; a trigger is executed when an event associated with a table (such as an insert, update, or delete) occurs; an event, by contrast, can be executed once or at regular intervals.
To set up the event scheduler and the event:
SET GLOBAL event_scheduler = ON;
CREATE EVENT IF NOT EXISTS update_state
ON SCHEDULE EVERY 1 DAY
DO
  UPDATE applied_surveys
  SET state = 0
  WHERE CONCAT(due_date, ' ', due_hour) <= NOW(); -- folds the hour into the date comparison
To see all events in the schema
SHOW EVENTS;
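Note that with ON SCHEDULE EVERY 1 DAY a survey may stay open for up to a day past its due hour. If that matters, the event can be rescheduled to run more often, e.g.

ALTER EVENT update_state ON SCHEDULE EVERY 1 MINUTE;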
Related
I want to collect statistics for my website by day, month, and year.
I'm thinking of using the date() function.
First, I create a database table that has 3 columns:
Id
date
statistics
For example, today is 27-05-2016, so the table has
1 | 27-05-2016 | 5
and when a new day arrives, a new row is inserted, for example
2 | 28-05-2016 | 20
and I show the results with a while loop, etc.
But I have a problem: how do I insert a new row when the day is finished? How can I automatically insert a new date once the day ends?
Get the latest date from the database: SELECT * FROM stat_table ORDER BY date DESC LIMIT 1;
Keep the latest date in a variable: $latest_date = $data['date'];
if ($current_date == $latest_date) {
    // update today's existing row
} else {
    // insert a new row
}
I am not too good with PHP, but I'm pretty sure the best you can do is wait for an interaction (logging in, etc.) and then check the time and compare it with the last logged time. For example: jDoe787 logs in at 6:56 AM on 1/5/16 and then logs right back out (here the server updates the last login time). When jDoe logs in later at 12:01 AM, a script runs because it is a different day than the one he last logged in on, so it adds the new day to the database and does whatever else it needs to.
Try using the REPLACE INTO statement. But first you have to put a UNIQUE index on your DATE field.
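A minimal sketch of that approach, assuming the question's three-column table is named stat (note that REPLACE INTO deletes the conflicting row and inserts a fresh one, so it overwrites the statistics value rather than incrementing it):

ALTER TABLE stat ADD UNIQUE INDEX (`date`);
REPLACE INTO stat (`date`, statistics) VALUES (CURDATE(), 5);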
Always run the UPDATE query with the current date first; if the update affects zero rows, insert a new entry.
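A sketch of that pattern, again assuming a table named stat; the affected-row count is available to PHP via e.g. mysqli_affected_rows() or PDOStatement::rowCount():

UPDATE stat SET statistics = statistics + 1 WHERE `date` = CURDATE();
-- if the affected-row count was 0, no row exists for today yet:
INSERT INTO stat (`date`, statistics) VALUES (CURDATE(), 1);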
I guess you want a table with one row for each distinct date, and you want to update your statistics column with new data.
You can do this cleanly in MySQL, using some MySQL-specific extensions.
First, you create your table and put a UNIQUE INDEX on your date column. This kind of table definition does the trick. (This is standard SQL.)
CREATE TABLE `stat` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`date` DATE NOT NULL,
`statistics` INT(11) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `date_index` (`date`)
);
Notice that, with this definition, an attempt to INSERT a duplicate date value will fail. But that's good, because MySQL has the INSERT ... ON DUPLICATE KEY UPDATE... syntax. (This is MySQL-specific.)
To insert a new row for the current date if necessary, and otherwise to add one to the current date's statistics column, you can use this statement:
INSERT INTO stat (date,statistics) VALUES (CURDATE(), 1)
ON DUPLICATE KEY UPDATE statistics = statistics+1;
The first line of this statement creates a row in the stat table and sets the statistics column to 1. The second line increments statistics if the row already exists.
Notice one other thing: the id column is not strictly necessary. You can use the date column as the primary key. This table definition will work well for you.
CREATE TABLE `stat` (
`date` DATE NOT NULL,
`statistics` INT(11) NOT NULL,
PRIMARY KEY (`date`)
);
Let's say we have a web forum application with a MySQL 5.6 database that is accessed 24/7 by many, many users. Now there is a table like this for metadata of notifications sent to users.
CREATE TABLE `notifications` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_id` bigint(20) unsigned NOT NULL,
`message_store_id` bigint(20) unsigned NOT NULL,
`status` varchar(10) COLLATE ascii_bin NOT NULL,
`sent_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `user_id` (`user_id`,`sent_date`)
) ENGINE=InnoDB AUTO_INCREMENT=736601 DEFAULT CHARSET=ascii COLLATE=ascii_bin
This table has 1 million rows. A certain message_store_id has suddenly become ineffective for some reason, and I'm planning to remove all records with that message_store_id with a single DELETE statement like
DELETE FROM notifications WHERE message_store_id = 12345;
This single statement affects 10% of the table, since this message was sent to so many users. Meanwhile, the notifications table is accessed all the time by thousands of users, so the index must stay in place. Apparently index recreation is very costly when deleting records, so I'm afraid to run this and cause downtime by maxing out the server resources. On the other hand, if I drop the index, delete the records, and then add the index back, I would have to shut down the database for some time, which unfortunately is not possible for our service.
I'd like to think MySQL 5.6 is not so fragile that this single statement can kill the database, but I guess it's very likely. My question is: is the index recreation really fatal for a case like this? If so, is there a good strategy for this operation that doesn't require me to halt the database for maintenance?
There can be a lot of tricks/strategies you could employ depending on details of your application.
If you plan to do these operations on a regular basis (i.e. it's not a one-time thing), AND you have few distinct values in message_store_id, you can use partitions: partition by the value of message_store_id and create X partitions beforehand (where X is some reasonable cap on the number of distinct values for the id). Then you can delete all the records in a partition in an instant by truncating that partition; a matter of milliseconds. Downside: message_store_id will have to be part of the primary key. Note: you'll have to create the partitions beforehand, because the last time I worked with them, ALTER TABLE ... ADD PARTITION re-created the entire table, which is a disaster on large tables.
Even if the alter table truncate partition does not work for you, you can still benefit from partitioning. If you issue a DELETE on the partition, by supplying corresponding where condition, the rest of the table will not be affected/locked by this DELETE op.
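A rough sketch of that layout (the partition names and listed values are hypothetical; the partitioning column has to be part of every unique key, hence the composite primary key):

CREATE TABLE notifications (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_id BIGINT UNSIGNED NOT NULL,
  message_store_id BIGINT UNSIGNED NOT NULL,
  status VARCHAR(10) NOT NULL,
  sent_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id, message_store_id),
  KEY user_id (user_id, sent_date)
) ENGINE=InnoDB
PARTITION BY LIST (message_store_id) (
  PARTITION p12345 VALUES IN (12345),
  PARTITION p12346 VALUES IN (12346)
);

-- removes every row for that message almost instantly:
ALTER TABLE notifications TRUNCATE PARTITION p12345;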
Alternative way of deleting records without locking the DB for too long:
while (true) {
// assuming autocommit mode
delete from table where {your condition} limit 10000;
// at this moment locks are released and other transactions have a chance
// to do some stuff.
if (affected rows == 0) {
break;
}
// This is a good place to insert sleep(5) to give other transactions
// more time to do their stuff before the next chunk gets deleted.
}
One option is to perform the delete as several smaller operations, rather than one huge operation.
MySQL provides a LIMIT clause, which will limit the number of rows matched by the query.
For example, you could delete just 1000 rows:
DELETE FROM notifications WHERE message_store_id = 12345 LIMIT 1000;
You could repeat that, leaving a suitable window of time for other operations (competing for locks on the same table) to complete. To handle this in pure SQL, we can use the MySQL SLEEP() function to pause for 2 seconds, for example:
SELECT SLEEP(2);
And obviously, this can be incorporated into a loop, in a MySQL procedure, for example, continuing to loop until the DELETE statement affects zero rows.
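A sketch of such a procedure (the name and chunk size are arbitrary; ROW_COUNT() has to be captured immediately after the DELETE, before any other statement runs):

DELIMITER //
CREATE PROCEDURE purge_notifications(IN p_message_store_id BIGINT UNSIGNED)
BEGIN
  DECLARE rows_deleted INT DEFAULT 1;
  WHILE rows_deleted > 0 DO
    DELETE FROM notifications
    WHERE message_store_id = p_message_store_id
    LIMIT 1000;
    SET rows_deleted = ROW_COUNT(); -- capture before SLEEP overwrites it
    DO SLEEP(2);                    -- give competing transactions a chance at the locks
  END WHILE;
END //
DELIMITER ;

CALL purge_notifications(12345);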
I have come up with a total of three different, equally viable methods for saving data for a graph.
The graph in question is "player's score in various categories over time". Categories include "buildings", "items", "quest completion", "achievements" and so on.
Method 1:
CREATE TABLE `graphdata` (
`userid` INT UNSIGNED NOT NULL,
`date` DATE NOT NULL,
`category` ENUM('buildings','items',...) NOT NULL,
`score` FLOAT UNSIGNED NOT NULL,
PRIMARY KEY (`userid`, `date`, `category`),
INDEX `userid` (`userid`),
INDEX `date` (`date`)
) ENGINE=InnoDB
This table contains one row for each user/date/category combination. To show a user's data, select by userid. Old entries are cleared out by:
DELETE FROM `graphdata` WHERE `date` < DATE_ADD(NOW(),INTERVAL -1 WEEK)
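For reference, fetching one user's graph data under Method 1 is a simple primary-key range scan (the user id 42 is hypothetical):

SELECT `date`, `category`, `score`
FROM `graphdata`
WHERE `userid` = 42
ORDER BY `date`, `category`;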
Method 2:
CREATE TABLE `graphdata` (
`userid` INT UNSIGNED NOT NULL,
`buildings-1day` FLOAT UNSIGNED NOT NULL,
`buildings-2day` FLOAT UNSIGNED NOT NULL,
... (and so on for each category, up to `-7day`)
PRIMARY KEY (`userid`)
)
Selecting by user id is faster due to being a primary key. Every day scores are shifted down the fields, as in:
... SET `buildings-3day`=`buildings-2day`, `buildings-2day`=`buildings-1day`...
Entries are not deleted (unless a user deletes their account). Rows can be added/updated with an INSERT...ON DUPLICATE KEY UPDATE query.
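A sketch of that upsert for a single category column (values hypothetical; in strict SQL mode every NOT NULL column without a default would also need to appear in the column list):

INSERT INTO `graphdata` (`userid`, `buildings-1day`)
VALUES (42, 17.5)
ON DUPLICATE KEY UPDATE `buildings-1day` = VALUES(`buildings-1day`);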
Method 3:
Use one file for each user, containing a JSON-encoded array of their score data. Since the data is being fetched by an AJAX JSON call anyway, this means the file can be fetched statically (and even cached until the following midnight) without any stress on the server. Every day the server runs through each file, shift()s the oldest score off each array and push()es the new one on the end.
Personally I think Method 3 is by far the best, however I've heard bad things about using files instead of databases - for instance if I wanted to be able to rank users by their scores in different categories, this solution would be very bad.
Out of the two database solutions, I've implemented Method 2 on one of my older projects, and that seems to work quite well. Method 1 seems "better" in that it makes better use of relational databases and all that stuff, but I'm a little concerned in that it will contain (number of users) * (number of categories) * 7 rows, which could turn out to be a big number.
Is there anything I'm missing that could help me make a final decision on which method to use? 1, 2, 3 or none of the above?
If you're going to use a relational db, method 1 is much better than method 2. It's normalized, so it's easy to maintain and search. I'd change the date field to a timestamp and call it added_on (or something that's not a reserved word like 'date' is). And I'd add an auto_increment primary key score_id so that user_id/date/category doesn't have to be unique. That way, if a user managed to increment his building score twice in the same second, both would still be recorded.
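Under those suggestions the table might look like this (a sketch; the name scores matches the queries later in this answer, and the ENUM list is abbreviated):

CREATE TABLE scores (
  score_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_id INT UNSIGNED NOT NULL,
  added_on TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  category ENUM('buildings','items') NOT NULL,
  score FLOAT UNSIGNED NOT NULL,
  PRIMARY KEY (score_id),
  INDEX user_added (user_id, added_on)
) ENGINE=InnoDB;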
The second method requires you to update all the records every day. The first method only does inserts, no updates, so each record is only written to once.
... SET buildings-3day=buildings-2day, buildings-2day=buildings-1day...
You really want to update every single record in the table every day until the end of time?!
Selecting by user id is faster due to being a primary key
Since user_id is the first field in your Method 1 primary key, it will be similarly fast for lookups. As first field in a regular index (which is what I've suggested above), it will still be very fast.
The idea with a relational db is that each row represents a single instance/action/occurrence. So when a user does something to affect his score, do an INSERT that records what he did. You can always create a summary from data like this. But you can't get this kind of data from a summary.
Secondly, you seem unduly concerned about getting rid of old data. Why? Your SELECT queries would have a date range on them that excludes old data automatically. And if you're concerned about performance, you can partition your tables based on row age or set up a cron job to delete old records periodically.
ETA: Regarding JSON stored in files
This seems to me to combine the drawbacks of Method 2 (difficult to search, every file must be updated every day) with the additional drawbacks of file access. File accesses are expensive. File writes are even more so. If you really want to store summary data, I'd run a query only when the data is requested and I'd store the results in a summary table by user_id. The table could hold a JSON string:
CREATE TABLE score_summaries(
user_id INT unsigned NOT NULL PRIMARY KEY,
gen_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
json_data TEXT NOT NULL -- note: MySQL does not allow a literal DEFAULT on TEXT columns
);
For example:
Bob (user_id=7) logs into the game for the first time. He's on his profile page which displays his weekly stats. These queries ran:
SELECT json_data FROM score_summaries
WHERE user_id = 7
  AND gen_date > DATE_SUB(CURDATE(), INTERVAL 1 DAY);
-- returns nothing, so generate a summary record

SELECT DATE(added_on), category, SUM(score)
FROM scores
WHERE user_id = 7
  AND added_on < CURDATE()
  AND added_on >= DATE_SUB(CURDATE(), INTERVAL 1 WEEK)
GROUP BY DATE(added_on), category;
-- never include today's data; encode the result as JSON with PHP

INSERT INTO score_summaries (user_id, json_data)
VALUES (7, '$json') -- from PHP; in this case $json == NULL
ON DUPLICATE KEY UPDATE json_data = VALUES(json_data);
-- use $json for presentation too
Today's scores are generated as needed and not stored in the summary. If Bob views his scores again today, the historical ones can come from the summary table or could be stored in a session after the first request. If Bob doesn't visit for a week, no summary needs to be generated.
Method 1 seems like a clear winner to me. If you are concerned about the single table (graphdata) growing too big, you could reduce it by creating
CREATE TABLE `graphdata` (
`graphDataId` INT UNSIGNED NOT NULL,
`categoryId` INT NOT NULL,
`score` FLOAT UNSIGNED NOT NULL,
PRIMARY KEY (`graphDataId`)
) ENGINE=InnoDB
and then creating two more tables, because you obviously need info connecting graphDataId with userId:
CREATE TABLE `graphDataUser` (
`graphDataId` INT UNSIGNED NOT NULL,
`userId` INT NOT NULL
) ENGINE=InnoDB
and connecting graphDataId with the date:
CREATE TABLE `graphDataDate` (
`graphDataId` INT UNSIGNED NOT NULL,
`graphDataDate` DATE NOT NULL
) ENGINE=InnoDB
I think you don't really need to worry about the number of rows a table contains, because most DBMSs do a good job handling large row counts. Your job is only to get the data formatted in a way that is easily retrieved, no matter what task the data is retrieved for. Following that advice should pay off in the long run.
I have the following schema with the following attributes:
USER(TABLE_NAME)
USER_ID|USERNAME|PASSWORD|TOPIC_NAME|FLAG1|FLAG2
I have 2 questions basically:
1. How can I make the USER_ID attribute a primary key that automatically increments its value each time I insert a row into the database? It shouldn't be under my control.
2. How can I retrieve a record from the database based on the latest time at which it was updated? (For example, if I updated a record at 2pm and the same record at 3pm, and I retrieve it now at 4pm, I should get the record as updated at 3pm, i.e. the latest update.)
Please help.
I'm assuming that question one is in the context of MySQL. You can use the ALTER TABLE statement to mark a field as the PRIMARY KEY and to mark it AUTO_INCREMENT:
ALTER TABLE User
ADD PRIMARY KEY (USER_ID);
ALTER TABLE User
MODIFY COLUMN USER_ID INT(4) AUTO_INCREMENT; -- of course, set the type appropriately
For the second question I'm not sure I understand correctly so I'm just going to go ahead and give you some basic information before giving an answer that may confuse you.
When you update the same record multiple times, only the most recent update is persisted. Basically, once you update a record, its previous values are not kept. So if you update a record at 2pm and then update the same record at 3pm, when you query for the record you will automatically receive the most recent values.
Now, if by updating you mean you would insert new values for the same USER_ID multiple times and want to retrieve the most recent, then you would need to use a field in the table to store a timestamp of when each record is created/updated. Then you can query for the most recent value based on the timestamp.
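A minimal sketch of that approach (updated_at is a name I've chosen; MySQL maintains it automatically on every write):

ALTER TABLE `USER`
  ADD COLUMN updated_at TIMESTAMP NOT NULL
  DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

SELECT * FROM `USER` ORDER BY updated_at DESC LIMIT 1;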
I assume you're talking about Oracle since you tagged it as Oracle. You also tagged the question as MySQL where the approach will be different.
You can make the USER_ID column a primary key
ALTER TABLE <<table_name>>
ADD CONSTRAINT pk_user_id PRIMARY KEY( user_id );
If you want the value to increment automatically, you'd need to create a sequence
CREATE SEQUENCE user_id_seq
START WITH 1
INCREMENT BY 1
CACHE 20;
and then create a trigger on the table that uses the sequence
CREATE OR REPLACE TRIGGER trg_assign_user_id
BEFORE INSERT ON <<table name>>
FOR EACH ROW
BEGIN
:new.user_id := user_id_seq.nextval;
END;
As for your second question, I'm not sure that I understand. If you update a row and then commit that change, all subsequent queries are going to read the updated data (barring exceptionally unlikely cases where you've set a serializable transaction isolation level and you've got transactions that run for multiple hours and you're running the query in that transaction). You don't need to do anything to see the current data.
(Answer based on MySQL; conceptually similar answer if using Oracle, but the SQL will probably be different.)
If USER_ID was not defined as a primary key or automatically incrementing at the time of table creation, then you can use:
ALTER TABLE tablename MODIFY USER_ID INT NOT NULL PRIMARY KEY AUTO_INCREMENT;
To issue queries based on record dates, you have to have a field with a date-related datatype (DATE, DATETIME, or TIMESTAMP). The date and time of record modifications would be something you manage (e.g. add/change) based on the way you are accessing the records (some PHP-related way? It's unclear what scripts you have in play, based on your question). Once you have dates in your records, you can ORDER BY the date field in your SELECT query.
Check this out
For your AUTO_INCREMENT, it's a question already asked here.
For your PRIMARY KEY use this
ALTER TABLE USER ADD PRIMARY KEY (USER_ID)
Can you provide more information? If the value gets updated, you definitely do NOT still have the old value that you entered at 2pm present in the DB, so querying for the latest value will be fine.
You can use something like this:
CREATE TABLE IF NOT EXISTS user (
USER_ID INT(8) UNSIGNED NOT NULL AUTO_INCREMENT,
username varchar(25) NOT NULL,
password varchar(25) NOT NULL,
topic_name varchar(100) NOT NULL,
flag1 smallint(1) NOT NULL DEFAULT 0,
flag2 smallint(1) NOT NULL DEFAULT 0,
update_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (USER_ID)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
For selecting, use this query:
SELECT * from user ORDER BY update_time DESC
I am creating a ticketing system that will keep track of tickets that a customer creates. The ticket's basic information will be stored in a table 'tickets' who's structure is as follows:
Primary Key (int 255)
Ticket_Key (varchar)
Ticket Number (varchar 500)
Label
Date Created
Delete
and so on..
The issue is that there will eventually be a large number of tickets, and we need a more uniform way of identifying them. I would like PHP to create a Ticket Number that contains mixed values: the date (in the format 20111107) followed by an auto-incremented value (1001, 1002, 1003, ...). So the Ticket Number would be, for example, 201111071001.
The question is: how do I program this in PHP to insert into the MySQL database? Also, how do I prevent the possibility of duplicate values in the unique ID in PHP? There will be a very large number of customers using the table to insert records.
What about using an auto-increment and combining it with the date field to generate a sequence number for that date, and hence a ticket ID?
So your insert process would be something like this:
INSERT INTO table (...ticket info...)
You would then retrieve the auto-increment value for this row and run a query like this (the subquery has to go through a derived table, because MySQL will not let you update a table while selecting from it directly in a subquery):
UPDATE table SET sequence = (SELECT $id - last_id FROM (SELECT MAX(auto_increment) AS last_id FROM table WHERE date_created = DATE_SUB(CURDATE(), INTERVAL 1 DAY)) prev) WHERE auto_increment = $id
You could then easily create a ticket ID in the format YYYYMMDDXXXX. Assuming you never retro-add tickets in the past, this would only ever require these two queries, even under heavy usage.
[EDIT] Actually, after looking into this, there is a much better way to do it natively in MySQL. If you define two columns (date and sequence) and make them a composite primary key, with the sequence field as an auto-increment, then MySQL will maintain the sequence column as a per-date auto-increment (i.e. it will restart at 1 for each date). Note that this per-group auto-increment behavior applies to MyISAM tables, which is why the table below uses that engine.
[EDIT] A table structure along these lines would do the job for you:
CREATE TABLE IF NOT EXISTS `table` (
`created_date` date NOT NULL,
`ticket_sequence` int(11) NOT NULL auto_increment,
`label` varchar(100) NOT NULL,
[other fields as required]
PRIMARY KEY (`created_date`,`ticket_sequence`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
When retrieving the data you could then do something like
SELECT CONCAT( DATE_FORMAT(created_date,'%Y%m%d'),LPAD(ticket_sequence,4,'0')) AS ticket_number, other fields.... FROM table
As I understand it, you want to make one result out of two different fields, like datefield and ticketnumfield.
In MySQL you do this with the CONCAT() function:
SELECT CONCAT(datefield, ticketnumfield) FROM `tbl_name`
This query returns a result like 201111071001.
I did something like this before where I wanted to refresh the counter for each new day. Unfortunately I do not speak PHP so you will have to settle for explanation and maybe some pseudo code.
Firstly, create a couple of fields in a config file to keep track of your counter. This should be a date field and a number field...
LastCount (Number)
LastCountDate (Date)
Then you make sure that your ticket number field in your database table is set to only unique values, so it throws an error if you try to insert a duplicate.
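For example (assuming the ticket number column is named ticket_number):

ALTER TABLE tickets ADD UNIQUE INDEX uniq_ticket_number (ticket_number);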
Then in your code, you load your counter values (LastCount and LastCountDate) and you process them like so...
newCount = LastCount;
if LastCountDate == Today
    increment newCount (newCount++)
else
    reset newCount (newCount = 1)
you can then use newCount to create your ticket number.
Next, when you try to insert a row, if it is successful, then great. If it fails, then you need to increment newCount again, then try the insert again. Repeat this until the insert is successful (put it in a loop)
Once you have successfully inserted the row, you need to update the database with the Count Values you just used to generate the ticket number - so they are ready for use the next time.
Hope that helps in some way.