Schedule Searching in PHP/MySQL with templates and overrides

I'm looking for some advice/help on quite a complex search algorithm. Any articles to relevant techniques etc. would be much appreciated.
Background
I'm building an application, which, in a nutshell, allows users to set their "availability" for any given day. The User first sets a general availability template which allows them to say:
Monday - AM
Tuesday - PM
Wednesday - All Day
Thursday - None
Friday - All Day
So this User is generally available Monday AM, Tuesday PM etc.
Schema:
id
user_id
day_of_week (1-7)(Monday to Sunday)
availability
They can then override specific dates manually, for example:
2013-03-03 - am
2013-03-04 - pm
2013-03-05 - all_day
Schema:
id
user_id
date
availability
This all works well - I have a Calendar being generated which combines the template and overrides and allows Users to modify their availability etc.
The Problem
I now need to allow Admin Users to search for Users who have specific availability. So the Admin User would use a calendar to select the required dates and availabilities and hit search.
For example, find me Users who are available:
2013-03-03 - pm
2013-03-04 - pm
2013-03-05 - pm
The search process would have to search for available Users using the Templated Availability and Overrides, then return the best results. Ideally, it would return Users who are available all of the time but in the case that no single user can match the dates, I need to provide a combination of Users who can.
I know this is quite a complex problem and I'm not looking for a complete answer, perhaps just some guidance or links to potentially relevant techniques etc.
What I've tried
At the moment, I have a halfway solution. I'm grabbing all the available Users, looping through each of them, and within that loop, looping through all of the required dates and breaking as soon as a User doesn't meet a required date. This is obviously not scalable, and it only returns "perfect matches".
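For what it's worth, here is a rough sketch of how that loop could score partial matches instead of discarding them. Everything in it is a placeholder for whatever my existing code uses: $pdo-free here, $users, $requiredSlots and isAvailable() are assumed helpers that apply the override first and fall back to the template.
<?php
// $users: all candidate users
// $requiredSlots: array of ['date' => 'Y-m-d', 'availability' => 'am'|'pm'|'all_day']
// isAvailable($user, $date, $availability): assumed helper (override first, then template)
$scores = [];
foreach ($users as $user) {
    $matched = 0;
    foreach ($requiredSlots as $slot) {
        if (isAvailable($user, $slot['date'], $slot['availability'])) {
            $matched++;
        }
    }
    if ($matched > 0) {
        $scores[$user->id] = $matched;
    }
}
// Best matches first: a score equal to count($requiredSlots) is a perfect match,
// anything lower is a partial match that can be combined with other users.
arsort($scores);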
Possible Solutions
Full Text Searching with Aggregate Table
I thought about creating a separate table which had the following schema:
user_id
body
The body field would be populated with the Users template days and overrides so an example record might look like:
user_id: 2
body: monday_am tuesday_pm wednesday_pm thursday_am friday_allday 2013-03-03_allday 2013-03-04_pm
I would then convert a User's search query into a similar format. So if a User was looking for someone who was available on the 19th March 2013 - All Day and 20th March 2013 - PM, I'd convert that into a string.
Firstly, as the 19th of March is a Tuesday, I'd convert that into tuesday_allday, and the same for the 20th (a Wednesday). I'd therefore end up with:
tuesday_allday wednesday_pm 2013-03-19_allday 2013-03-20_pm
I'd then do a full text search against our aggregate table and return a "weighted" result set which I can then loop through and further interrogate.
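As a very rough sketch of how that might look (assuming an aggregate table called availability_index with a MySQL FULLTEXT index on body, and $pdo being an existing PDO connection; note that hyphens act as word separators for the full-text tokenizer, so date tokens like 2013-03-19_allday may need reformatting, e.g. 20130319_allday):
<?php
// Search string built from the requested dates, as described above.
$search = 'tuesday_allday wednesday_pm 2013-03-19_allday 2013-03-20_pm';

// MATCH ... AGAINST returns a relevance score we can filter and sort by.
$sql = "SELECT user_id, MATCH(body) AGAINST(:q) AS score
        FROM availability_index
        HAVING score > 0
        ORDER BY score DESC";
$stmt = $pdo->prepare($sql);
$stmt->execute([':q' => $search]);
$candidates = $stmt->fetchAll(PDO::FETCH_ASSOC);
// $candidates holds users ordered by how many requested tokens they matched,
// so partial matches come back too and can be interrogated further in PHP.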
I'm not sure how this would work in practice, so that's why I'm asking if anyone has any links to techniques or relevant articles I could use.

I am confident this problem can be solved with a more well-defined DB schema.
By utilizing a more detailed DB schema you will be able to find any available user for any given time frame (not just AM & PM) should you so choose.
It will also allow you to keep template data, while not polluting your availability data with template information (instead you would select from the template table to programmatically fill in the availability for a given date, which then can be modified by the user).
I spent some time diagramming this problem and came up with a schema structure that I believe solves the problem you specified and allows you to grow your application with a minimum of schema changes.
(To make this easier to read I've added the SQL at the end of this proposed answer)
I have also included an example select statement that would allow you to pull availability data with any number of arguments.
For clarity, that SELECT appears above the SQL for the schema, at the end of my explanatory text.
Please don't be intimidated by the SELECT; it may look complicated at first glance, but it is really a map to the entire schema (save the Templates table).
(btw, I'm not saying that because I have any doubt that you can understand it, I'm sure you can, but I've known many programmers who ignore more complex DB structures to their own detriment because the structure LOOKS overly complex when, analyzed, it is actually less complex than the acrobatics they have to do in their program to get similar results... relational DBs are based on a branch of mathematics that is good at accurately, consistently, & (relatively) succinctly associating data).
General Use:
(for more details read the comments in the SQL CREATE TABLE statements)
-Populate the DaysOfWeek table.
-Populate the TimeFrames table with some time frames you want to track (an AM timeframe might have a StartTime of 00:00:00 & an end time of 11:59:59 while PM might have StartTime of 12:00:00 & EndTime of 23:59:59)
-Add Users
-Add Dates to be tracked (see notes in SQL for thoughts on avoiding bloat & also the virtues of this table)
-Populate the Templates table for each user
-Generate the list of default Availabilities (with their associated AvailableTimes data) for each user (a sketch of this step follows this list)
-Expose the default Availabilities to the users so they can override the defaults
NOTE: you can also add an optional table for Engagements to be the opposite of Availabilities (or maybe there is a better abstraction that would include both concepts...)
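To illustrate the "generate the default Availabilities" step, here is a rough sketch only: it assumes PDO ($pdo), ignores the AllWeek/Weekdays/Weekends shorthand rows, and uses tomorrow's date as the example.
<?php
// Create an Availabilities row for every user whose template covers tomorrow's day of week...
$pdo->exec("
    INSERT INTO Availabilities (User_ID, Date_ID)
    SELECT DISTINCT T.User_ID, Da.Date_ID
    FROM Templates T
    JOIN Dates Da ON Da.DayOWeek_ID = T.DayOWeek_ID
    WHERE Da.Date = CURDATE() + INTERVAL 1 DAY
");

// ...then attach the templated time frames to those availabilities.
$pdo->exec("
    INSERT INTO AvailableTimes (Av_ID, Time_ID)
    SELECT Av.Av_ID, T.Time_ID
    FROM Availabilities Av
    JOIN Dates Da ON Da.Date_ID = Av.Date_ID
    JOIN Templates T ON T.User_ID = Av.User_ID AND T.DayOWeek_ID = Da.DayOWeek_ID
    WHERE Da.Date = CURDATE() + INTERVAL 1 DAY
");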
Disclaimer: I did not take the additional time to fully populate my local DB & verify everything so there may be some weaknesses/errors I did not see in my diagrams... (sorry I spent far longer than intended on this & must get work done on an overdue project).
While I have worked fairly extensively with DB structures (and with DBs others have created) for 12+ years, I'm sure I am not without fault; I hope others on StackOverflow will round out any mistakes I may have included.
I apologize for not including more example data.
If I have time in the near future I will provide some, (think adding George, Fred, & Harry to the users table, adding some dates to the Dates table then detailing how busy George & Fred are compared to Harry during their school week using the Availabilities, AvailableTimes & TimeFrames tables).
The SELECT statement (NOTE: I would highly recommend making this into a view... in that way you can select whatever columns you want & add whatever arguments/conditions you want in a WHERE clause without having to write the joins out every time... so the view would NOT include the WHERE clause... just to make that clear):
SELECT *
FROM Users Us
JOIN Availabilities Av
ON Us.User_ID=Av.User_ID
JOIN Dates Da
ON Av.Date_ID=Da.Date_ID
JOIN AvailableTimes Avt
ON Av.Av_ID=Avt.Av_ID
WHERE Da.Date='2014-01-03' -- whatever date
-- alternately: WHERE Da.DayOWeek_ID=3 -- which would be Wednesday
-- WHERE Da.Date BETWEEN() -- whatever date range...
-- etc...
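Along the lines of the view suggested above, a sketch of creating it once and then querying it from PHP (the view name user_availability is just an example, and $pdo is an existing PDO connection):
<?php
// One-time setup: the view contains the joins but no WHERE clause.
// An explicit column list avoids duplicate column names, which a view will not accept.
$pdo->exec("
    CREATE VIEW user_availability AS
    SELECT Us.User_ID, Us.UserName, Da.Date, Da.DayOWeek_ID, Avt.Time_ID
    FROM Users Us
    JOIN Availabilities Av ON Us.User_ID = Av.User_ID
    JOIN Dates Da ON Av.Date_ID = Da.Date_ID
    JOIN AvailableTimes Avt ON Av.Av_ID = Avt.Av_ID
");

// Then each search just adds its own conditions.
$stmt = $pdo->prepare("SELECT * FROM user_availability WHERE Date = :d AND Time_ID = :t");
$stmt->execute([':d' => '2014-01-03', ':t' => 2]);
$available = $stmt->fetchAll(PDO::FETCH_ASSOC);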
Recommended data in DaysOfWeek (which is effectively a lookup table):
INSERT INTO DaysOfWeek(DayOWeek_ID,Name,Description)
VALUES (1,'Sunday', 'First Day of the Week'),(2,'Monday', 'Second Day of the Week')...(7,'Saturday', 'Last Day of the Week'),(8,'AllWeek','The entire week'),(9,'Weekdays', 'Monday through Friday'),(10,'Weekends','Saturday & Sunday')
Example Templates data:
INSERT INTO Templates(Time_ID,User_ID,DayOWeek_ID)
VALUES (1,1,9)-- this would show the first user is available for the first time frame every weekday as their default...
,(1,2,2) -- this would show the second user available for the first time frame on Mondays
The following is the recommended schema structure:
CREATE TABLE `test`.`Users` (
User_ID INT NOT NULL AUTO_INCREMENT ,
UserName VARCHAR(45) NULL ,
PRIMARY KEY (User_ID) );
CREATE TABLE `test`.`Templates` (
`Template_ID` INT NOT NULL AUTO_INCREMENT ,
`Time_ID` INT NULL ,
`User_ID` INT NULL ,
`DayOWeek_ID` INT NULL ,
PRIMARY KEY (`Template_ID`) )
COMMENT = 'This table holds the template data for general expected availability of a user/agent/person (so the person would use this to set their general availability)';
CREATE TABLE `test`.`Availabilities` (
`Av_ID` INT NOT NULL AUTO_INCREMENT ,
`User_ID` INT NULL ,
`Date_ID` INT NULL ,
PRIMARY KEY (`Av_ID`) )
COMMENT = 'This table holds a user\'s actual availability for a particular date.\nIf the user is not available for a date then this table has no entry for that user for that date.\n(btw, this suggests the possibility of an alternate table that could utilize all other structures except the templates, called Engagements, which would record when a user is actually busy... in order to use this table & the other table together you would need to always join to AvailableTimes, as a date would actually be in both tables but associated with different time frames).';
CREATE TABLE `test`.`Dates` (
`Date_ID` INT NOT NULL AUTO_INCREMENT ,
`DayOWeek_ID` INT NULL ,
`Date` DATE NULL ,
PRIMARY KEY (`Date_ID`) )
COMMENT = 'This table is utilized to hold actual dates with which users/agents can be associated.\nThe important thing to note here is: this may end up holding every day of every year... this suggests a need to archive this data (and everything associated with it) for performance reasons as this database is utilized.\nOne more important detail... this is more efficient than associating actual dates directly with each user/agent with an availability on that date... this way the date is only recorded once; the other approach records this date with the user for each availability.';
CREATE TABLE `test`.`AvailableTimes` (
`AvTime_ID` INT NOT NULL AUTO_INCREMENT ,
`Av_ID` INT NULL ,
`Time_ID` INT NULL ,
PRIMARY KEY (`AvTime_ID`) )
COMMENT = 'This table records the time frames that a user is available on a particular date.\nThis allows the time frames to be flexible without affecting the structure of the DB.\n(e.g. if you only keep track of AM & PM at the beginning of the use of the DB but later decide to keep track on an hourly basis you simply add the hourly time frames & start populating them, no changes to the DB schema need to be made)';
CREATE TABLE `test`.`TimeFrames` (
`Time_ID` INT NOT NULL AUTO_INCREMENT ,
`StartTime` TIME NOT NULL ,
`EndTime` TIME NOT NULL ,
`Name` VARCHAR(45) NOT NULL ,
`Desc` VARCHAR(128) NULL ,
PRIMARY KEY (`Time_ID`) ,
UNIQUE INDEX `Name_UNIQUE` (`Name` ASC) )
COMMENT = 'Utilize this table to record the times that are being tracked.\nThis allows the flexibility of having multiple time frames on the same day.\nIt also provides the flexibility to change the time frames being tracked without changing the DB structure.';
CREATE TABLE `test`.`DaysOfWeek` (
`DayOWeek_ID` INT NOT NULL AUTO_INCREMENT ,
`Name` VARCHAR(45) NOT NULL ,
`Description` VARCHAR(128) NULL ,
PRIMARY KEY (`DayOWeek_ID`) ,
UNIQUE INDEX `Name_UNIQUE` (`Name` ASC) )
COMMENT = 'This table is a lookup table to hold the days of the week.\nI personally would recommend adding a row for:\nWeekends, All Week, & WeekDays \nThis will often be used in conjunction with the templates and will allow less entries in that table to be made with those 3 entries in this table.';

Ok, this is what I would do:
In the users table, create fields for Sunday, Monday ... Saturday.
Use pm, am or both as values in those fields.
You should also index each field for faster querying.
Then make a separate table with user/date/meridian fields (meridian meaning am or pm). Again, the meridian field values would be pm, am or both.
You will need to do a little research with php's date function to pull out the day of the week number and use a switch statement against it perhaps.
Use the requested dates and pull out the day of the week and query the user table for their day of the week availability.
Then use the requested date/meridian itself and query the new user/date/meridian table for the users' individual availability dates/meridians.
I don't think there is going to be much of an algorithm here except when extracting the days of the week from the date requests. If you are doing a date range then you could benefit from an algorithm, but if it is just a bunch of cherry-picked dates then you are just going to have to do them one by one. Let me know and maybe I'll throw an algo together for you.
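For the day-of-week extraction, a small sketch (the table and column names here, users with per-day columns and user_availability_overrides, are assumptions; adjust them to your schema, and $pdo is an existing PDO connection):
<?php
$requested = ['2013-03-19' => 'pm', '2013-03-20' => 'pm'];

foreach ($requested as $date => $meridian) {
    // date('N') gives 1 (Monday) through 7 (Sunday); date('l') gives the column name directly.
    $dayColumn = strtolower(date('l', strtotime($date)));   // e.g. "tuesday"

    // 1) users whose weekly template covers this day/meridian
    //    ($dayColumn is safe to interpolate because it comes from date(), not user input)
    $stmt = $pdo->prepare("SELECT id FROM users WHERE `$dayColumn` IN (:m, 'both')");
    $stmt->execute([':m' => $meridian]);

    // 2) overrides for that specific date take precedence
    $stmt2 = $pdo->prepare(
        "SELECT user_id, meridian FROM user_availability_overrides WHERE date = :d"
    );
    $stmt2->execute([':d' => $date]);
    // merge the two result sets in PHP, letting the override win where both exist
}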

Related

How to filter out certain rows in MySQL dynamically to query against them?

I have a PHP-MySQL setup. I have a table devicevalue; its structure is like this:
devId | vals | date | time
xysz | 23 | 2020.02.17 | 22.06
abcs | 44 | 2020.02.31 | 22.07
The vals column holds temperature values.
Any user logging in on my webapp has access to only certain devices.
Here are the steps:
On my website, a user selects from and to dates for which he wants to see data and submits them.
Then these dates are passed to a page "getrecords.php", where there are a lot of SELECT queries (many of them in loops) to fetch the filtered data in the required format.
The problem is that this table holds almost 2-3 million records, and in every WHERE clause I have to add the to and from conditions. This causes a search of the entire table.
My question: is there any way that I can get a temporary table at step 1 which will have only certain rows based on the given two dates, so that all my queries on the other page run against that temporary table?
Edit: If your date column is a text string, you must convert it to a column of type DATE or TIMESTAMP, or you will never get good performance from this table. A vast amount of optimization code is in the MySQL server to make handling of time/date data types efficient. If you store dates or times as strings, you defeat all that optimization code.
Then, put an index on your date column like this.
CREATE INDEX date_from_to ON devicevalue (`date`, devId, vals, `time` );
It's called a covering index because the entire query can be satisfied using it only.
Then, in your queries use
WHERE date >= <<<fromdate>>>
AND date < <<<todate>>> + INTERVAL 1 DAY
Doing this indexing correctly gets rid of the need to create temp tables.
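A minimal sketch of that range query from PHP, assuming PDO, that the date column has been converted to a real DATE type as recommended, and that the from/to values arrive as YYYY-MM-DD strings:
<?php
$sql = "SELECT devId, vals, `date`, `time`
        FROM devicevalue
        WHERE `date` >= :from
          AND `date` < :to + INTERVAL 1 DAY";
$stmt = $pdo->prepare($sql);
$stmt->execute([':from' => $_POST['from'], ':to' => $_POST['to']]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
// With the covering index above, MySQL can satisfy this from the index
// instead of scanning the 2-3 million row table.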
If your query has something like `WHERE devId = <<<devid>>>` in it, you need this index instead (or in addition).
CREATE INDEX date_id_from_to ON devicevalue (devId, `date`, vals, `time` );
If you get a chance to change this table's layout, combine the date and time columns into a single column with TIMESTAMP data type. The WHERE clauses I showed you above will still work correctly if you do that. And everything will be just as fast.
SQL is made to solve your kind of problem simply and fast. With good data type choices and proper indexing, a few million records is a modestly-sized table.
Short answer: No. Don't design temp tables that need to live between sessions.
Longer answer:
Build into your app that the date range will be passed from one page to the next, then use those as initial values in the <form> <input type=text...>
Then make sure you have a good composite index for the likely queries. But, to do that, you must get a feel for what might be requested. You will probably need a small number of multi-column indexes.
You can probably build a SELECT from the form entries. I rarely need to use more than one query, but it is mostly "constructed" on the fly based on the form.
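For example, a sketch of constructing the query on the fly from the form (the optional device filter and form field names are hypothetical, $pdo is an existing PDO connection):
<?php
$where  = ["`date` >= :from", "`date` < :to + INTERVAL 1 DAY"];
$params = [':from' => $_POST['from'], ':to' => $_POST['to']];

// Optional device filter, only added when the form supplies one.
if (!empty($_POST['devId'])) {
    $where[]        = "devId = :dev";
    $params[':dev'] = $_POST['devId'];
}

$sql  = "SELECT devId, vals, `date`, `time` FROM devicevalue WHERE " . implode(' AND ', $where);
$stmt = $pdo->prepare($sql);
$stmt->execute($params);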
It is rarely a good idea to have separate columns for date and time. It makes it very difficult, for example, to say noon one day to noon the next day. Combine into a DATETIME or TIMESTAMP.
O.Jones has said a lot of things that I would normally add here.

Obtain an unique sequence order number concurrently from PostgreSQL

We are designing an order management system where the order id is a bigint in PostgreSQL, and its digit layout is as follows:
Take 2015072201000010001 as an example order id: the first eight digits are the date (20150722 here), the next seven digits are the region code (0100001 here), and the last four digits are the sequence number under the aforementioned region and date.
So every time a new order is created, the PHP application layer queries PostgreSQL with an SQL statement like the following:
select id from orders where id between 2015072201000010000 and 2015072201000019999 order by id desc limit 1 offset 0
then increases the id for the new order, and after this inserts the order into the PostgreSQL database.
This is OK if there is only one order generation process at a time. But with hundreds of concurrent order generation requests, there is a good chance that order ids will collide, given PostgreSQL's read/write locking behaviour.
Let's say there are two order requests A and B. A reads the latest order id from the database, then B reads the latest order id too, then A writes to the database, and finally B's write fails because the order id primary key collides.
Any thoughts on how to make this order generation action concurrently feasible?
In the case of many concurrent operations your only option is to work with sequences. In this scenario you would need to create a sequence for every date and region. That sounds like a lot of work, but most of it can be automated.
Creating the sequences
You can name your sequences after the date and the region. So do something like:
CREATE SEQUENCE seq_201507220100001;
You should create a sequence for every combination of day and region. Do this in a function to avoid repetition. Run this function once for every day. You can do this ahead of time or - even better - do this in a scheduled job on a daily basis to create tomorrow's sequences. Assuming you do not need to back-date orders to previous days, you can drop yesterday's sequences in the same function.
CREATE FUNCTION make_and_drop_sequences() RETURNS void AS $$
DECLARE
    reg text;
    tomorrow text;
    yesterday text;
BEGIN
    tomorrow := to_char((CURRENT_DATE + 1)::date, 'YYYYMMDD');
    yesterday := to_char((CURRENT_DATE - 1)::date, 'YYYYMMDD');
    FOR reg IN
        SELECT DISTINCT region FROM table_with_regions
    LOOP
        EXECUTE format('CREATE SEQUENCE %I', 'seq_' || tomorrow || reg);
        EXECUTE format('DROP SEQUENCE IF EXISTS %I', 'seq_' || yesterday || reg);
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
Using the sequences
In your PHP code you obviously know the date and the region you need to enter a new order id for. Make another function that generates a new value from the right sequence on the basis of the date and the region:
CREATE FUNCTION new_date_region_id (region text) RETURNS bigint AS $$
DECLARE
    dt_reg text;
    new_id bigint;
BEGIN
    dt_reg := to_char(CURRENT_DATE, 'YYYYMMDD') || region;
    SELECT dt_reg::bigint * 10000 + nextval('seq_' || dt_reg) INTO new_id;
    RETURN new_id;
END;
$$ LANGUAGE plpgsql STRICT;
In PHP you then call:
SELECT new_date_region_id('0100001');
which will give the next available id for the specified region for today.
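A sketch of how the PHP side might use it, assuming PDO with the pgsql driver and an orders table with id, region_code and created_at columns (those column names are assumptions):
<?php
// Ask the database for the next id for this region; the sequence guarantees
// that concurrent requests never receive the same value.
$stmt = $pdo->prepare("SELECT new_date_region_id(:region) AS id");
$stmt->execute([':region' => '0100001']);
$orderId = $stmt->fetchColumn();

$insert = $pdo->prepare("INSERT INTO orders (id, region_code, created_at) VALUES (:id, :region, now())");
$insert->execute([':id' => $orderId, ':region' => '0100001']);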
The usual way to avoid locking ids in Postgres is through the sequences.
You could use Postgresql sequences for each region. Something like
create sequence seq_0100001;
then you can get a number from that using:
select nextval('seq_'||regioncode) % 10000 as order_seq
That does mean the order numbers will not reset to 0001 each day, but you do have the same 0000 -> 9999 range for order numbers. It will wrap around.
So you may end up with:
2015072201000010001 -> 2015072201000017500
2015072301000017501 -> 2015072301000019983
2015072401000019984 -> 2015072401000010293
Alternatively, you could just generate a sequence for each day/region combination, but you'd need to be on top of dropping the previous day's sequences at the start of the next day.
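A sketch of composing the full id with that per-region sequence approach (PDO with the pgsql driver assumed; the region value is the example from the question):
<?php
$region = '0100001';

// Per-region sequence: wraps within 0000-9999 as described above.
$stmt = $pdo->prepare("SELECT nextval('seq_' || :region) % 10000 AS order_seq");
$stmt->execute([':region' => $region]);
$seq = (int) $stmt->fetchColumn();

// 8-digit date + 7-digit region + 4-digit zero-padded sequence = 19-digit bigint id.
$orderId = date('Ymd') . $region . str_pad((string) $seq, 4, '0', STR_PAD_LEFT);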
Try the UUIDv1 type, which is a combination of a timestamp and a MAC address. You can have it auto-generated on the server side if the order of inserts is important to you. Otherwise, the IDs can be generated by any of your clients before inserting (you might need their clocks synchronized). Just be aware that with UUIDv1 you can disclose the MAC address of the host where the UUID was generated. In that case, you may want to spoof the MAC address.
For your case, you can do something like
CREATE TABLE orders (
id uuid PRIMARY KEY DEFAULT uuid_generate_v1(),
created_at timestamp NOT NULL DEFAULT now(),
region_code text NOT NULL REFERENCES...
...
);
Read more at http://www.postgresql.org/docs/9.4/static/uuid-ossp.html
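A short sketch of inserting with the server-generated UUID and reading it back (this assumes the uuid-ossp extension has been installed, per the linked docs, and $pdo is a PDO pgsql connection):
<?php
// RETURNING hands the generated UUIDv1 back to PHP in the same round trip.
$stmt = $pdo->prepare(
    "INSERT INTO orders (region_code) VALUES (:region) RETURNING id, created_at"
);
$stmt->execute([':region' => '0100001']);
$order = $stmt->fetch(PDO::FETCH_ASSOC);
// $order['id'] is the uuid primary key generated by uuid_generate_v1().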

Database schema advice for real time API calls

I have a project for some local high school sport leagues which want some real time updates with statistics. There will be people at events (american football, basketball, volleyball, golf, wrestling, etc) who will be using my CMS system to update the stats.
I can't seem to wrap my head around how to store those stats so when the REST API calls happen, the latest events will be sent back (ex: gathering all basketball games happening at that time on the server and saving them).
The data coming to the server is in JSON format and I would like to be able to store it as such, with each sport being the main key, then the stats on a game-by-game basis. It seems to me that using an RDBMS or another db type would be pointless because adding the stats in real time would mean a ton of rows where the data barely differs, and collecting the most recent games would be a pain if I were to break up each person's POST and save it as its own row.
On the other hand, I could just store everything in a file, gather the stats as they come in and update the file. But if there will be many writes happening, the responses to the API calls might get slow.
Any suggestions? Which of my thoughts is wrong here?
Storing data as JSON generally limits your ability to query the data, so I would suggest against that. JSON is a perfectly acceptable format to accept on the server, but you should immediately deserialize it into an object and store it in a way that will meet your use cases. In my opinion your use cases demand a relational database. E.g. a schema like this would give you good performance finding all games that are happening:
Sport:
pk int sportId
varchar description
Game:
pk int gameId
fk int sportId
datetime start
datetime end
Player:
pk int playerId
varchar name
StatType:
pk int statTypeId
varchar description
Stat:
pk bigint statId
fk int gameId
fk int playerId
fk int statTypeId
datetime time
real value
To get the current game:
SELECT * FROM Game WHERE NOW() > start AND end IS NULL
To get all-time stats for a player:
SELECT MAX(st.description), SUM(s.value) FROM Stat s LEFT JOIN StatType st ON s.statTypeId = st.statTypeId LEFT JOIN Player p ON s.playerId = p.playerId WHERE p.name = 'John Smith' GROUP BY st.statTypeId
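To illustrate the "deserialize and store" part, a rough PHP sketch of accepting one stat update and writing it into the Stat table (the field names in the incoming JSON and the $pdo connection are assumptions):
<?php
// e.g. {"gameId": 12, "playerId": 7, "statTypeId": 3, "value": 2}
$payload = json_decode(file_get_contents('php://input'), true);

$stmt = $pdo->prepare(
    "INSERT INTO Stat (gameId, playerId, statTypeId, time, value)
     VALUES (:game, :player, :type, NOW(), :value)"
);
$stmt->execute([
    ':game'   => $payload['gameId'],
    ':player' => $payload['playerId'],
    ':type'   => $payload['statTypeId'],
    ':value'  => $payload['value'],
]);
// Each POST becomes one narrow row, so "latest events" queries stay simple and indexed.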

how to make history of sql data? (report data changes)

Every day, I am saving bug information into the database (with a crontab PHP script). Every row is like:
(Bugidentification, Date, Title, Who, etc....)
(e.g:
Bugidentification, Date, Title, Who, etc....
issue1, 2015-04-01, blabla, bill, etc...
issue2, 2015-04-01, nnnnnnn, john, etc...
issue3, 2015-04-01, vvvvvvv, greg, etc...
issue1, 2015-04-02, blabla, bill, etc...
issue2, 2015-04-02, nnnnnnn, john, etc...
issue3, 2015-04-02, vvvvvvv, mario, etc... (here it is now mario)
issue2, 2015-04-03, nnnnnnn, john, etc... (issue1 disappeared)
issue3, 2015-04-03, vvvvvvv, tod, etc... (tod is new info)
issue4, 2015-04-03, rrrrrrrr, john, etc... (issue4 is new)
.............................................
)
Basically, if I take the example I posted above, the results of a comparison between April 2nd and April 3rd should be something like:
New row is : issue4
Closed row is : Issue1
Updated row is : Issue3 (with tod instead of mario)
No change row is : Issue2
In my case there are hundreds of rows, and I believe I know how to do it in PHP, but my code would be long: creating foreach loops and checking one by one whether anything changed. I am not sure I am getting a straightforward solution.
So my question is: is there any simple way to report those changes with "simple" code (like a special SQL request, an existing project, or simple PHP functions)?
There are way too many assumptions built into this design. And those assumptions require you to compare rows between different days to make the assumption in the first place -- not to mention you have to duplicate unchanged rows from one day to the next in order to maintain the unbroken daily entry needed to feed the assumptions. Whew.
Rule 1: don't build assumptions into the design. If something is new, it should be marked, "HEY! I'm new here!" When a change has been made to the data, "OK, something changed. Here it is." and when the issue has finally been closed, "OK, that's it for me. I'm done for."
create table Bug_Static( -- Contains one entry for each bug
ID int identity,
Opened date not null default sysdate,
Closed date [null | not null default date '9999-12-31'],
Title varchar(...),
Who id references Who_Table,
<other non-changing data>,
constraint PK_Bug_Static primary key( ID )
);
create table Bug_Versions( -- Contains changing data, like status
ID int not null,
Effective date not null,
Status varchar not null, -- new,assigned,in work,closed,whatever
<other data that may change from day to day>,
constraint PK_Bug_Versions primary key( ID, Effective ),
constraint FK_Bug_Versions_Static foreign key( ID )
references Bug_Static( ID )
);
Now you can select the bugs and the current data (the last change made) on any given day.
select s.ID, s.Opened, s.Title, v.Effective, v.Status
from Bug_Static s
join Bug_Versions v
on v.ID = s.ID
and v.Effective =(
select Max( Effective )
from Bug_Versions
where ID = v.ID
and Effective <= sysdate )
where s.Closed >= trunc(sysdate);
The where s.Closed >= trunc(sysdate) is optional. What that gives you is the bugs that are still open plus the ones closed on the date the query is executed, but not the ones closed before then. That keeps old closed bugs from reappearing over and over again -- unless that's what you want.
Change the sysdate values to a particular date/time and you will get the data as it appeared as of that date and time.
Normally, when a bug is created, a row is entered into both tables. Then only new versions are entered as the status or any other data changes. If nothing changed on a day, nothing is entered. Then when the bug is finally closed, the Closed field of the static table is updated and a closed version is inserted into the version table. I've shown the Closed field with two options, null or with the defined "maximum date" of Dec 31, 9999. You can use either one but I like the max date method. It simplifies the queries.
I would also front both tables with a couple of views which join the tables: one which shows only the last version of each bug (Bug_Current) and one which shows every version of every bug (Bug_History). With triggers on Bug_Current, it can be the one used by the app to change the bugs. It would change, for instance, an update of any versioned field into an insert of a new version.
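A sketch of that Bug_Current view, reusing the correlated max pattern from the SELECT above (syntax may need tweaking for your particular DB; $pdo is assumed to be an existing PDO connection):
<?php
$pdo->exec("
    CREATE VIEW Bug_Current AS
    SELECT s.ID, s.Opened, s.Closed, s.Title, v.Effective, v.Status
    FROM Bug_Static s
    JOIN Bug_Versions v
      ON v.ID = s.ID
     AND v.Effective = ( SELECT MAX(Effective)
                         FROM Bug_Versions
                         WHERE ID = v.ID )
");
// The app reads/writes Bug_Current; triggers translate updates into new Bug_Versions rows.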
The point is, this is a very flexible design which you can easily show just the data you want, how you want it, as of any time you want.

One to many query with two date constraints

I have to write a reporting query for a kind of versioning system where I need to retrieve date-based reporting variations of the latest version. Simplified table structures are:
items_register: ir_id (primary, auto inc), ir_name (varchar)
items: i_id (primary, auto inc), i_register_id (int), i_version_name (varchar), i_datetime (datetime), i_date_expiry (datetime)
Each entry in items_register has multiple associated versions, stored as entries in the items table - with the highest value of i_datetime being the most recent version.
I want to retrieve entries from the items_register where the most recent version (item) has i_date_expiry after a requested date ($f_date).
I think I somehow need to join the tables, order the items by i_datetime, limit them to 1 so I get the most recent version, then check if i_date_expiry is after $f_date & retrieve the fields if so.
The fields I want to retrieve are items_register.ir_id, items_register.ir_name, items.i_version_name, items.i_datetime.
TIA for any help.
It looks like you're searching for the "groupwise max" pattern.
Making a few assumptions about things that still aren't clear in your question, I think this may be the query you're looking for:
SELECT items_register.ir_id, items_register.ir_name,
items.i_version_name, items.i_datetime
FROM items_register
JOIN
(
SELECT items.i_register_id,
MAX(items.i_datetime) AS most_recent_item_datetime
FROM items
WHERE items.i_date_expiry > '$f_date'
GROUP BY items.i_register_id
) AS item_date ON item_date.i_register_id = items_register.ir_id
JOIN items ON items.i_register_id = items_register.ir_id
AND items.i_datetime = item_date.most_recent_item_datetime
Bear in mind that this assumes that $f_date is a string that conforms to the standards for datetime and timestamp literals (not date literals!) laid out in this documentation page.
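If the query is built in PHP, it may be safer to bind $f_date rather than interpolating it into the string; a sketch with PDO (assuming $pdo is an existing connection):
<?php
$sql = "SELECT items_register.ir_id, items_register.ir_name,
               items.i_version_name, items.i_datetime
        FROM items_register
        JOIN (
              SELECT i_register_id, MAX(i_datetime) AS most_recent_item_datetime
              FROM items
              WHERE i_date_expiry > :f_date
              GROUP BY i_register_id
             ) AS item_date ON item_date.i_register_id = items_register.ir_id
        JOIN items ON items.i_register_id = items_register.ir_id
                  AND items.i_datetime = item_date.most_recent_item_datetime";
$stmt = $pdo->prepare($sql);
$stmt->execute([':f_date' => $f_date]);   // e.g. '2015-04-01 00:00:00'
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);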
Maybe this can be useful?
http://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html
