I am currently working on a simple booking system and I need to select some date ranges and save them to a MySQL database.
The problem I am facing is deciding if it's better to save a range, or to save each day separately.
There will be around 500 properties, and each will have from 2 to 5 months booked.
So the client will insert his property and will choose some dates that will be unavailable. The same will happen when someone books a property.
I was thinking of having a separate table for unavailable dates only, so if a property is booked from 10 May to 20 May, instead of having one record (2016-05-10 => 2016-05-20) I will have 11 records, one for each booked day.
I think this is easier to work with when searching between dates, but I am not sure.
Will the performance be noticeably worse?
Should I save ranges or single days?
Thank you
I would advise that all "events" go into one table and they all have a start and end datetime. Use of indexes on these fields is of course recommended.
The reasons: when you are looking for bookings and available events, you are not selecting from two different tables (or joining them). And storing the full range is much better for the code, as you can easily perform the checks within a SQL query, and all the PHP code that handles events works the same way for both. If you store one event type differently from another, you'll find loads of "if"s in your code and find the SQL harder to write.
I run many booking systems at present and have made mistakes in this area before so I know this is good advice - and also a good question.
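To make the "checks within a SQL query" concrete: with start/end stored per event, availability is a range-overlap test (two ranges overlap when each starts before the other ends). The table and column names below are invented for illustration; sketched with Python's built-in sqlite3 so it runs anywhere, though the SQL is the same idea in MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    property_id INTEGER,
    start_date  TEXT,   -- inclusive, ISO yyyy-mm-dd
    end_date    TEXT    -- exclusive: checkout day is free for the next guest
)""")
conn.execute("INSERT INTO events VALUES (1, '2016-05-10', '2016-05-20')")

def is_available(property_id, start, end):
    # Two ranges overlap when each one starts before the other ends.
    row = conn.execute(
        """SELECT COUNT(*) FROM events
           WHERE property_id = ? AND start_date < ? AND end_date > ?""",
        (property_id, end, start),
    ).fetchone()
    return row[0] == 0

print(is_available(1, '2016-05-18', '2016-05-25'))  # overlaps -> False
print(is_available(1, '2016-05-20', '2016-05-25'))  # starts on checkout day -> True
```

One row per booking is queried here, rather than one row per booked day, which is exactly the trade-off the question asks about.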
This is too much for a comment, so I will leave it as an answer.
So the table's primary key would be the property_id plus the date.
I don't recommend it, because if you think of a scenario where you apply this logic to a 5- or 10-year system, the performance will be worse: you will get approximately 30 * 12 = 360 rows per property per year. Instead, implement logic to calculate the duration of a booking and store that against the user.
Related
I have two queries; ultimately I think they will be in the same context as each other. I have a user database that I want to pull tracking records from, based on hour - for example, registrations per hour. I want the query to dump results in hourly increments (or weeks, or months), i.e. 1,000 registrations in November, 1,014 in December and so on, or similarly for weeks or hours.
I also have a similar query where I want to generate a list of states with the counts next to them of how many users I have per state.
My issue is that I'm thinking too one-dimensionally at the moment: the best idea I can come up with, in the case of the states, is 50 queries, but I know that's insane and there has to be an easier, less intensive way. That's what I'm hoping someone here can help me with, by giving me a general idea, because I don't know the best course of action for this: DISTINCT, GROUP BY, or something else.
Experiment a bit and see if that doesn't help you focus on the question a bit more.
Try selecting from your registrations per hour table and appending the time buckets you are interested in to the select list.
like this:
select userid, regid, date_time, week(date_time), year(date_time), day(date_time)
from registrations;
you can then roll up and count things in that table by using group by and an aggregate function like this:
select count(distinct userid), year(date_time)
from registrations
group by year(date_time)
Read about the date and time functions:
MySQL Date Time Functions
Read about aggregate functions:
MySQL Group By
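The same GROUP BY pattern answers the second half of the question (users per state) in a single query instead of 50. A sketch using sqlite3, with an assumed `registrations` table holding a `state` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE registrations (userid INTEGER, state TEXT)")
conn.executemany(
    "INSERT INTO registrations VALUES (?, ?)",
    [(1, 'NY'), (2, 'NY'), (3, 'CA'), (4, 'TX'), (5, 'CA'), (6, 'NY')],
)

# One query returns every state with its user count - no per-state queries.
rows = conn.execute(
    """SELECT state, COUNT(DISTINCT userid) AS users
       FROM registrations
       GROUP BY state
       ORDER BY users DESC"""
).fetchall()
print(rows)  # [('NY', 3), ('CA', 2), ('TX', 1)]
```

The database does the bucketing; the application just iterates over the result rows.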
I'm constructing a website for a small collection of parents at a private daycare centre. One of the desired functions of the site is a calendar where you can pick which days you can be responsible for cleaning the premises. Now, I have made a working calendar. I found a simple script online that I modified a bit to fit our purpose. Technically it works well, but I'm starting to wonder if I really should alter the way it extracts information from the database.
The calendar is presented monthly, and drawn as a table using a for-loop. That means that said for-loop is run 28-31 times each time the page is loaded depending on the month. To present who is responsible for cleaning each day, I have added a call to a MySQL database where each member's cleaning day is stored. The pseudo code looks like this, simplified:
Draw table month
for day=start_of_month to day=end_of_month
type day
select member from cleaning_schedule where picked_day=day
type member
This means that each page load does at least 28 SELECT calls to the database, which seems both inefficient and susceptible to a DDoS attack. Is there a more efficient way of getting the same result? There are much more complex booking calendars out there; how do they handle it?
SELECT picked_day, member FROM cleaning_schedule WHERE picked_day BETWEEN '2012-05-01' AND '2012-05-31' ORDER BY picked_day ASC
You can loop through the results of that query, each row will have a date and a person from the range you picked, in order of ascending dates.
The MySQL query cache will save your bacon.
Short version: if you repeat the same SQL query often, it will end up being served without table access as long as the underlying tables have not changed. So the first load of a month will run ca. 35 SQL queries, which is a lot but not too many. The second load of the same page will return the results blazingly fast from the cache.
My experience says, that this tends to be much faster than creating fancy join queries, even if that would be possible.
Not that 28 calls is a big deal, but I would use a join and pull in the entire month's data in one hit. You can then iterate through the MySQL query result as if it were an array.
You can use greater-than and less-than comparisons in SQL. So instead of doing one select per day, you can write one select for the entire month:
SELECT day, member FROM cleaning_schedule
WHERE day >= :first_day_of_month AND day <= :last_day_of_month
ORDER BY day;
Then you need to pay attention in your program to handle multiple members per day. Although the program logic will be a bit more complex, the program will be faster: interprocess (or even network-based) communication is a lot slower than the additional logic.
Depending on the data structure, the following statement might be possible and more convenient:
SELECT day, group_concat(member) FROM cleaning_schedule
WHERE day >= :first_day_of_month AND day <= :last_day_of_month
GROUP BY day
ORDER BY day;
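The "handle multiple members per day" part in application code amounts to grouping the one-per-month query's rows into a per-day mapping, so the calendar loop does a dictionary lookup instead of a SELECT per day. A sketch in Python with illustrative data standing in for the query result:

```python
from collections import defaultdict

# Rows as returned by the single monthly query: (day, member), ordered by day.
rows = [
    ('2012-05-01', 'Alice'),
    ('2012-05-01', 'Bob'),
    ('2012-05-03', 'Carol'),
]

# Group the flat result set into {day: [members...]}.
schedule = defaultdict(list)
for day, member in rows:
    schedule[day].append(member)

# The calendar for-loop now only touches memory, never the database.
for day in ('2012-05-01', '2012-05-02', '2012-05-03'):
    print(day, ', '.join(schedule.get(day, [])))
```

Days with nobody assigned simply come back as an empty list.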
28 queries isn't a massive issue and is pretty common for most commercial websites, but I'd recommend just grabbing each month's data in one hit and then looping through the records day by day.
I have a database (MySQL) with a schedule for a bus. I want to be able to display the schedule based on some user inputs: route, day, and time. The bus makes at least 13 runs around the city per day. The structure is set up as:
-Select Route (2 different routes)
-Select Day (2 sets of days, Sun-Wed & Thu-Sat)
-Select Time (at least 13 runs per day) => show schedule
My table structure is:
p_id  route   day  run#  stop   time
1     routeA  m-w  1     stop1  12:00PM
1     routeA  m-w  1     stop2  12:10PM
..and so on
I do have a functioning demo, however, it is very inefficient. I query the db for every possible run. I would like to avoid doing this.
Could anyone give me some tips to make this more efficient? OR show me some examples?
If you google for "bus timetable schema design" you will find lots of similar questions and many different solutions depending on the specific use case. Here is one similar question asked on here - bus timetable using SQL.
The first thing would be to normalise your data structure. There are many different approaches to this but a starting point would be something like -
routes(route_id, bus_no, route_name)
stops(stop_id, stop_name, lat/long, etc)
schedule(schedule_id, route_id, stop_id, arrive, depart)
You should do some searching and look to see the different use cases supported and how they relate to your specific scenario. The above example is only a crude example. It can be broken down further depending on the data being used. You may only want to store the time between stops in one table and then a start time for the route in another.
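As a rough illustration of that normalised layout (the table and column names are just the example's above, not a recommendation), the whole timetable for one route comes back from a single join instead of one query per run. Sketched with sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE routes   (route_id INTEGER PRIMARY KEY, route_name TEXT);
CREATE TABLE stops    (stop_id  INTEGER PRIMARY KEY, stop_name  TEXT);
CREATE TABLE schedule (schedule_id INTEGER PRIMARY KEY,
                       route_id INTEGER, stop_id INTEGER,
                       arrive TEXT, depart TEXT);
INSERT INTO routes   VALUES (1, 'routeA');
INSERT INTO stops    VALUES (1, 'stop1'), (2, 'stop2');
INSERT INTO schedule VALUES (1, 1, 1, '12:00', '12:01'),
                            (2, 1, 2, '12:10', '12:11');
""")

# One query fetches the whole run for a route, in stop order.
rows = conn.execute(
    """SELECT st.stop_name, sc.arrive
       FROM schedule sc
       JOIN stops st ON st.stop_id = sc.stop_id
       WHERE sc.route_id = ?
       ORDER BY sc.arrive""",
    (1,),
).fetchall()
print(rows)  # [('stop1', '12:00'), ('stop2', '12:10')]
```

Filtering by day or time then becomes extra WHERE conditions on the same query rather than additional queries.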
I've been stuck on this for two days and have gotten nowhere. I tend to think ahead to the problems that will come up in the future. My server's time is set to UTC, my Linux box is fully updated with the timezones, and the timezone data is loaded into my database.
I'll explain my system for the best answer.
This site sells "items", but only during open hours. The stores can have split hours, e.g. open from 8am-12pm and 1pm-8pm, etc.
So my hours table looks like:
id (1) | store_id (1) | opens (08:00) | closes (21:00)
Above has the sample data next to the column name. Basically store id#1 may be in Los Angeles (US/Pacific) or it may be in New York City (US/Eastern).
What is the best way to ensure I don't miss an hour of downtime, so I can disallow users from ordering from these stores during their off hours? If my times are off by one hour, that's one hour in which no one can order when the store is really open, and an hour in which users can order when it is really closed, and vice versa depending on time changes.
Has anyone dealt with this? And if so, how did you do it?
What is the best way to go about solving this issue? It's been eating my brain for the past 48 hours.
Please help! :)
It's actually super easy to achieve in Postgres and MySQL. All you have to do is store the timezone on the store record, keep the server TZ set to UTC, then convert between the two.
EX:
SELECT
  CASE WHEN (
    CAST((CURRENT_TIMESTAMP at time zone s.timezone) as time) BETWEEN h.opens AND h.closes
    AND h.day = extract(dow from CURRENT_TIMESTAMP at time zone s.timezone)
  ) THEN 0 ELSE 1 END as closed
FROM store s
LEFT JOIN store_hours h ON s.id = h.store_id
WHERE h.day = extract(dow from CURRENT_TIMESTAMP at time zone s.timezone)
It's something like that. I had to do the typecasting that way because I was limited to Doctrine 1.2.
Works like a charm even with DST changes.
One thing to bear in mind is that some places (think Arizona) don't do DST. You might want to make sure that your database has enough information so you can distinguish between LA and Phoenix should that prove necessary.
Assuming you follow ITroubs' advice, and put offsets in the database (and possibly information about whether a store is in a DST-respecting locale), you could do the following:
Build your code so it checks whether DST is in effect and builds your queries appropriately. If all your stores are in NY and LA, you can just add 1 to the offset when needed. If not, you'll need a query that uses different rules for DST and non-DST stores. Something like:
SELECT store_id
FROM hours
WHERE
(supportsDST = true AND opens < dstAdjustedNow AND closes > dstAdjustedNow)
OR (supportsDST = false AND opens < UTCNow AND closes > UTCNow)
If you go this route, I recommend trying to centralize and isolate the code that deals with this as much as possible.
Also, you don't mention this, but I assume that a store with split time would have two rows in the hours table, one for each block that it's open.
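An alternative to storing offsets and a DST flag (my suggestion, not something the answers above require) is to store an IANA zone name per store and let a timezone library apply the offset and DST rules, which also covers non-DST zones like Arizona automatically. A sketch using Python's standard-library zoneinfo:

```python
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo  # Python 3.9+, uses the system tz database

def store_is_open(opens: time, closes: time, tz_name: str,
                  now_utc: datetime) -> bool:
    """Compare the store's local wall-clock time against its hours row."""
    local = now_utc.astimezone(ZoneInfo(tz_name))
    return opens <= local.time() < closes

now = datetime(2012, 1, 15, 13, 0, tzinfo=timezone.utc)
# Same UTC instant, different answers: zoneinfo applies each zone's offset
# and DST rules (America/Phoenix has none) without any flags in my schema.
print(store_is_open(time(8), time(21), "America/New_York", now))  # 08:00 EST -> True
print(store_is_open(time(8), time(21), "America/Phoenix",  now))  # 06:00 MST -> False
```

Split hours just mean calling the check once per hours row for the store, as with the two-rows-per-day layout mentioned above.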
I have a separate table for every day's data which is basically webstats type : keywords, visits, duration, IP, sale, etc (maybe 100 bytes total per record)
Each table will have around a couple of million records.
What I need to do is have a web admin so that the user/admin can view reports for different date periods AND sorted by certain calculated values. For example, the user may want the results for the 15th of last month to the 12th of this month , sorted by SALE/VISIT , descending order.
The admin/user only needs to view (say) the top 200 records at a time and will probably not view more than a few hundred total in any one session
Because of the arbitrary date period involved, I need to sum up the relevant columns for each record and only then can the selection be done.
My question is whether it will be possible to produce the reports in real time, or whether they would be too slow (the tables are rarely - if ever - updated after the day's data has been inserted).
Is such a scenario better suited to indexes or table scans?
Also, would one massive table for all dates be better than separate tables for each date? (There are almost no joins.)
thanks in advance!
With a separate table for each day's data, summarizing across a month is going to involve doing the same analysis on each of 30-odd tables. Over a year, you will have to do the analysis on 365 or so tables. That's going to be a nightmare.
It would almost certainly be better to have a soundly indexed single table than the huge number of tables. Some DBMS support partitioned tables - if MySQL does, partition the single big table by the date. I would be inclined to partition by month, especially if the normal queries are for one month or less and do not cross month boundaries. (Even if a query involves two months, with decent partition pruning the query engine won't have to read most of the data, just the two partitions for the two months. It might even be able to do those scans in parallel, depending on the DBMS.)
Sometimes it is quicker to do a sequential scan of a table than indexed lookups - don't simply assume that because the query plan involves a table scan it will automatically perform badly.
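To make the single-table suggestion concrete (schema, column names, and figures below are invented for illustration), an index on the date column lets an arbitrary-period report filter, aggregate, and sort in one query. Sketched with sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE stats (
    day TEXT, keyword TEXT, visits INTEGER, sales INTEGER)""")
conn.execute("CREATE INDEX idx_stats_day ON stats (day)")
conn.executemany("INSERT INTO stats VALUES (?,?,?,?)", [
    ('2012-04-15', 'shoes', 100, 10),
    ('2012-04-20', 'shoes',  50, 10),
    ('2012-05-10', 'hats',   80,  4),
    ('2012-03-01', 'shoes', 999,  1),   # outside the period, ignored
])

# One query covers the arbitrary period (15th of last month to the 12th of
# this month), sorts by the computed sale/visit ratio, and keeps the top 200.
rows = conn.execute(
    """SELECT keyword,
              SUM(sales) * 1.0 / SUM(visits) AS sale_per_visit
       FROM stats
       WHERE day BETWEEN '2012-04-15' AND '2012-05-12'
       GROUP BY keyword
       ORDER BY sale_per_visit DESC
       LIMIT 200"""
).fetchall()
print(rows)  # shoes first (higher sale/visit ratio), then hats
```

With one table per day, the same report would need this repeated and merged across every table in the range.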
You may want to try a different approach. I think Splunk will work for you. It was designed for this; they even run ads on this site. They have a free version you can try.