I have two tables in my database. The first table has a column named date, where I insert a date range in one-day intervals. The second table holds the calendar weeks of the same date range and an auto-increment column weekid.
For example, calendar week 25 has the weekid 145 (stored in the second table). The corresponding date range, 21.06-27.06, is stored in the first table.
Now I want to insert the weekid into the first table for every day (date) that falls within the matching calendar week.
Here are my tables:
CREATE TABLE `day` (
  `dayid` int(255) NOT NULL AUTO_INCREMENT,
  `userid` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `date` date NOT NULL,
  PRIMARY KEY (`dayid`)
);
CREATE TABLE `week` (
  `weekid` int(255) NOT NULL AUTO_INCREMENT,
  `userid` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `calendar week` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `year` year(4) NOT NULL,
  PRIMARY KEY (`weekid`)
);
Example output for table "day":
weekid: 145
userid: 589
date: 2021-05-06
Example output for table "week":
weekid: 145
userid: 589
calendar week: 25
year: 2021
Does anyone have an idea how to do the date comparison?
If the second table doesn't already store the dates of each week, you can just calculate the week number on the first table. In MySQL,
SELECT DAYOFYEAR(`date`) / 7 FROM `day`
gives an approximation of the calendar week of a given date; WEEKOFYEAR(`date`) returns the ISO week number directly.
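To then fill weekid into the first table, here is a minimal sketch of an UPDATE join (assuming `calendar week` holds the ISO week number, that a `weekid` column has already been added to `day`, and that rows should be matched per user; untested):
-- Sketch: copy each week's weekid onto the matching day rows
UPDATE `day` d
JOIN `week` w ON w.`userid` = d.`userid`
    AND w.`calendar week` = WEEKOFYEAR(d.`date`)  -- ISO week of the day's date
    AND w.`year` = YEAR(d.`date`)
SET d.`weekid` = w.`weekid`;
Note that WEEKOFYEAR() follows ISO-8601, so dates in late December or early January can belong to a week of the adjacent year; the YEAR() match above would need adjusting for those edge cases.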
I have an issue with a LEFT JOIN. I don't want to use Eloquent relations because I want to keep my models folder clean. I have an appointments application in which I use "labels" and "statuses", and I want to be able to filter my view by label and status. The issue with the LEFT JOIN is that when I click my edit link, it uses the "id" field from the "appointments_statuses" table instead of the one from the "appointments" table. Below is the relevant code:
My controller:
$appointments = $query->orderBy('appointment', 'asc')
->leftJoin('appointments_labels','appointments_labels.id','=','appointments.label_id')
->leftJoin('appointments_statuses','appointments_statuses.id','=','appointments.status_id')
->get();
My view:
@foreach($appointments as $appointment)
    {{ $appointment->id }} // Problem here: this returns the "id" from "appointments_statuses", not from "appointments"
@endforeach
My database tables:
CREATE TABLE IF NOT EXISTS `appointments` (
`id` int(11) NOT NULL,
`appointment` varchar(255) NOT NULL,
`location` varchar(255) NOT NULL,
`description` text NOT NULL,
`start` datetime NOT NULL,
`end` datetime NOT NULL,
`label_id` int(11) NOT NULL,
`status_id` int(11) NOT NULL,
`contact` int(11) NOT NULL,
`created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00'
)
CREATE TABLE IF NOT EXISTS `appointments_labels` (
`id` int(11) NOT NULL,
`label` varchar(255) NOT NULL,
`created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00'
)
CREATE TABLE IF NOT EXISTS `appointments_statuses` (
`id` int(11) NOT NULL,
`status` varchar(255) NOT NULL,
`flag` varchar(255) NOT NULL,
`created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00'
)
Well, that's because your query collects ALL the fields of the three tables, so columns with the same name get overwritten.
Simply use select() to specify which fields you want (which is good practice anyway):
$appointments = $query->orderBy('appointment', 'asc')
->leftJoin('appointments_labels','appointments_labels.id','=','appointments.label_id')
->leftJoin('appointments_statuses','appointments_statuses.id','=','appointments.status_id')
->select('appointments.id', 'appointments.appointment', '........', 'appointments_statuses.status', 'appointments_labels.label')
->get();
NB: I'm guessing the fields you want from the main and the joined tables, but you get the idea :)
NB2: You can also pass an array of values to the select() method:
->select(['appointments.id', 'appointments.appointment', ....])
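For reference, the underlying SQL makes the collision easy to see. Here is a sketch of the query the builder would generate with an explicit column list (the column choices are illustrative):
-- Sketch: only appointments.id is selected, so the joined ids can't shadow it
SELECT `appointments`.`id`,
       `appointments`.`appointment`,
       `appointments_labels`.`label`,
       `appointments_statuses`.`status`
FROM `appointments`
LEFT JOIN `appointments_labels` ON `appointments_labels`.`id` = `appointments`.`label_id`
LEFT JOIN `appointments_statuses` ON `appointments_statuses`.`id` = `appointments`.`status_id`
ORDER BY `appointment` ASC;
With only appointments.id selected, the joined tables' id columns can no longer overwrite it in the result.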
I am wondering about the best way to generate MySQL reports on a large database, as I have a database for a POS application with more than 40 stores working on it.
The database has more than 1.5M rows in each of four tables: two for headers and two for details.
I generate reports by joining headers with details and some other tables to get the full information for the view.
I tried archiving the data into a single table holding everything required for reporting, but it was a huge load, and the MySQL events feeding that table did not always run, which could lead to data loss.
I also tried indexing the tables, but it didn't help much: the queries are too big and take too long, which puts a heavy load on the server and can leave the application not responding at all.
I searched Google and found ideas about partitioning tables, archiving, changing the storage engine, or even upgrading the server hardware.
The relation between the two tables (invoice_header and invoice_detail) is one-to-many: invoice_header holds the header of an invoice, with only its totals, and is linked to invoice_detail by location ID (loc_id) and invoice number (invo_no), since each location has its own serial numbering. invoice_detail contains the lines of each invoice.
Sample query:
The query takes too long (15-20 seconds) to fetch.
- Total rows: 1495873
- Total fetched rows: 9-12
SELECT SUM(invoice_detail.qty) AS qty, Month(invoice_header.date) AS month
FROM invoice_detail
JOIN invoice_header ON invoice_detail.invo_no = invoice_header.invo_no
AND invoice_detail.loc_id = invoice_header.loc_id
WHERE invoice_detail.item_id = {$itemId}
GROUP BY Month(invoice_header.date)
ORDER BY Month(invoice_header.date)
EXPLAIN: (the output was attached as a screenshot; not reproduced here)
invoice_header table structure:
CREATE TABLE `invoice_header` (
`invo_type` varchar(1) NOT NULL,
`invo_no` int(20) NOT NULL AUTO_INCREMENT,
`invo_code` varchar(50) NOT NULL,
`date` date NOT NULL,
`time` time NOT NULL,
`cust_id` int(11) NOT NULL,
`loc_id` int(3) NOT NULL,
`cash_man_id` int(11) NOT NULL,
`sales_man_id` int(11) NOT NULL,
`ref_invo_no` int(20) NOT NULL,
`total_amount` decimal(19,2) NOT NULL,
`tax` decimal(19,2) NOT NULL,
`discount_amount` decimal(19,2) NOT NULL,
`net_value` decimal(19,2) NOT NULL,
`split` decimal(19,2) NOT NULL,
`qty` int(11) NOT NULL,
`payment_type_id` varchar(20) NOT NULL,
`comments` varchar(255) NOT NULL,
PRIMARY KEY (`invo_no`,`loc_id`)
) ENGINE=InnoDB AUTO_INCREMENT=20286 DEFAULT CHARSET=utf8
invoice_detail table structure:
CREATE TABLE `invoice_detail` (
`invo_no` int(11) NOT NULL,
`loc_id` int(3) NOT NULL,
`serial` int(11) NOT NULL,
`item_id` varchar(11) NOT NULL,
`size_id` int(5) NOT NULL,
`qty` int(11) NOT NULL,
`rtp` decimal(19,2) NOT NULL,
`type` tinyint(1) NOT NULL,
PRIMARY KEY (`invo_no`,`loc_id`,`serial`),
KEY `item_id` (`item_id`),
KEY `size_id` (`size_id`),
KEY `invo_no` (`invo_no`),
KEY `serial` (`serial`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
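Given these definitions, one option worth testing (a sketch only; the index name is made up) is a composite index on invoice_detail that matches the sample query's WHERE and join columns and also covers qty, so that side of the join can be resolved entirely from the index:
-- Sketch: covering index for WHERE item_id = ? plus the join on (invo_no, loc_id)
ALTER TABLE invoice_detail
    ADD INDEX idx_item_invo_loc_qty (item_id, invo_no, loc_id, qty);
With item_id as the leading column, the existing single-column `item_id` key becomes redundant and could be dropped.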
After adding EXTRACT:
EXPLAIN SELECT SUM(invoice_detail.qty) AS qty, MONTH(invoice_header.date) AS month
FROM invoice_detail
JOIN invoice_header ON invoice_detail.invo_no = invoice_header.invo_no
    AND invoice_detail.loc_id = invoice_header.loc_id
WHERE invoice_detail.item_id = 11321
GROUP BY EXTRACT(YEAR_MONTH FROM invoice_header.date)
I am using a quite good dedicated server with:
Intel Xeon Quad Core 3.3GHz (8 threads)
1 Gbps Uplink
16 GB RAM
1,000 GB RAID-1 Drives
25 TB Bandwidth
Any suggestions?
I have a Laravel app where I compare the comment creation datetime in the thread_comment table against the thread's last visit date in the thread table.
I want to select any thread comments whose creation datetime is greater than the thread's last visit date.
The purpose is to notify the user of how many new comments have been posted since their last visit to their thread.
Below is my code.
$new_thread_comment_count = DB::table('ap_thread')
->join('ap_thread_comment', 'ap_thread_comment.ThreadID', '=', 'ap_thread.ThreadID')
->where('ap_thread.CreatedBy', Auth::user()->UserID)
->where('ap_thread_comment.CreatedDateTime','>','ap_thread.last_visit_date')
->count();
My problem is that this doesn't work. If I change the operator to <, it displays practically all records, which is incorrect.
Do I need to do any datetime conversion in the where clause when comparing datetimes between the two tables?
CREATE TABLE `ap_thread`
(`ThreadID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`ThreadTitle` text NOT NULL,
`Remark` text NOT NULL,
`CountryID` smallint(6) NOT NULL,
`last_visit_date` datetime NOT NULL,
`Thread_StatusID` tinyint(4) NOT NULL,
`StatusID` tinyint(4) NOT NULL,
`CreatedBy` bigint(20) NOT NULL,
`CreatedDateTime` datetime NOT NULL,
`EditedBy` bigint(20) NOT NULL,
`EditedDateTime` datetime NOT NULL,
`IsSelling` tinyint(1) NOT NULL,
PRIMARY KEY (`ThreadID`))
CREATE TABLE `ap_thread_comment` (
`Thread_CommentID` bigint(20) NOT NULL AUTO_INCREMENT,
`ThreadID` bigint(20) NOT NULL,
`StatusID` tinyint(4) NOT NULL,
`reply_to` bigint(20) unsigned NOT NULL,
`CreatedBy` bigint(20) NOT NULL,
`CreatedDateTime` datetime NOT NULL,
`Comment` text NOT NULL,
PRIMARY KEY (`Thread_CommentID`))
Sorry for the confusion. The thread table stores all the threads created by users, whereas thread_comment stores the comments posted on them by other users.
What I'm trying to do is select the total number of new comments, for every thread created by the logged-in user, by checking whether ap_thread_comment.CreatedDateTime > ap_thread.last_visit_date.
The last_visit_date is only updated when the owner of the thread visits the thread.
Solved it using whereRaw(). With where(), the third argument is bound as a value, so the query was comparing CreatedDateTime against the literal string 'ap_thread.last_visit_date' rather than the column; whereRaw() keeps it as a column reference:
$new_thread_comment_count = DB::table('ap_thread')
    ->join('ap_thread_comment', 'ap_thread_comment.ThreadID', '=', 'ap_thread.ThreadID')
    ->whereRaw('ap_thread_comment.CreatedDateTime > ap_thread.last_visit_date')
    ->where('ap_thread.CreatedBy', '=', Auth::user()->UserID)
    ->count();
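For reference, the SQL this builder call should produce is roughly the following (a hand-written sketch; the user ID is bound as a parameter). It shows the datetime comparison is column against column, so no conversion is needed:
-- Sketch of the generated query
SELECT COUNT(*) FROM `ap_thread`
INNER JOIN `ap_thread_comment` ON `ap_thread_comment`.`ThreadID` = `ap_thread`.`ThreadID`
WHERE ap_thread_comment.CreatedDateTime > ap_thread.last_visit_date
    AND `ap_thread`.`CreatedBy` = ?;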
I have two tables:
security_stat => about 4 million records
security_trade => about 10 million records
I have this query running successfully, but how can I OPTIMIZE it so it can query at least 100,000 records within 10 seconds (is that possible?). Currently it is very, very slow.
SELECT `sec_stat_sec_name`, `sec_stat_date`, `sec_stat_market`, `sec_trade_close`, `sec_stat_date`
FROM `security_stat`
LEFT JOIN `security_trade`
    ON `security_trade`.`sec_trade_sec_name` = `security_stat`.`sec_stat_sec_name`
    AND `security_trade`.`sec_trade_date` = `security_stat`.`sec_stat_date`
LIMIT 100000
I have indexes on sec_trade_sec_name, sec_stat_sec_name, sec_trade_date, and sec_stat_date.
I tried limiting the result with WHERE sec_stat_date >= '2005-01-01', but that doesn't help much (my records range from 1975 to 2014).
EDIT
security_stat schema
CREATE TABLE `security_stat` (
`sec_stat_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`sec_stat_date` date NOT NULL,
`sec_stat_sec_name` varchar(255) NOT NULL,
`sec_stat_sec_id` int(11) NOT NULL,
`sec_stat_market` varchar(255) NOT NULL,
`sec_stat_industry` int(11) NOT NULL,
`sec_stat_sector` int(11) NOT NULL,
`sec_stat_subsector` int(11) NOT NULL,
`sec_stat_sec_type` varchar(1) NOT NULL,
`sec_stat_status` varchar(2) NOT NULL,
`sec_stat_benefit` varchar(2) NOT NULL,
`sec_stat_listed_share` bigint(20) NOT NULL,
`sec_stat_earn_p_share` decimal(12,5) NOT NULL,
`sec_stat_value` decimal(9,2) NOT NULL,
`sec_stat_p_of_earn` int(11) NOT NULL,
`sec_stat_as_date` date NOT NULL,
`sec_stat_div_p_share` decimal(16,12) NOT NULL,
`sec_stat_p_of_div` int(11) NOT NULL,
`sec_stat_end_date_div` date NOT NULL,
`sec_stat_pe` decimal(8,2) NOT NULL,
`sec_stat_pbv` decimal(8,2) NOT NULL,
`sec_stat_div_yield` decimal(8,2) NOT NULL,
`sec_stat_par_value` decimal(16,5) NOT NULL,
`sec_stat_market_cap` decimal(20,2) NOT NULL,
`sec_stat_turn_ratio` decimal(8,2) NOT NULL,
`sec_stat_npg_flag` varchar(1) NOT NULL,
`sec_stat_acc_div` decimal(16,12) NOT NULL,
`sec_stat_acc_no_of_pay` int(11) NOT NULL,
`sec_stat_div_pay_ratio` decimal(6,2) NOT NULL,
`sec_stat_earn_date` date NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`sec_stat_ev` decimal(20,2) DEFAULT NULL,
`sec_stat_ev_revenue` decimal(20,2) DEFAULT NULL,
`sec_stat_ev_ebit` decimal(20,2) DEFAULT NULL,
`sec_stat_ev_ebitda` decimal(20,2) DEFAULT NULL,
`sec_stat_earning_yield` decimal(10,5) DEFAULT NULL,
`sec_stat_ps_ratio` decimal(10,5) DEFAULT NULL,
PRIMARY KEY (`sec_stat_id`),
UNIQUE KEY `sec_stat_date_name_id_cap` (`sec_stat_date`,`sec_stat_market`,`sec_stat_sec_id`,`sec_stat_sector`),
KEY `sec_stat_date` (`sec_stat_date`),
KEY `sec_stat_sec_name` (`sec_stat_sec_name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3598612 ;
security_trade schema
CREATE TABLE `security_trade` (
`sec_trade_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`sec_trade_date` date NOT NULL,
`sec_trade_sec_name` varchar(20) NOT NULL,
`sec_trade_sec_id` int(11) NOT NULL,
`sec_trade_market` varchar(1) NOT NULL,
`sec_trade_trading_method` varchar(1) NOT NULL,
`sec_trade_trade_report` varchar(1) NOT NULL,
`sec_trade_prior_date` date NOT NULL,
`sec_trade_prior` decimal(8,2) NOT NULL,
`sec_trade_open` decimal(8,2) NOT NULL,
`sec_trade_high` decimal(8,2) NOT NULL,
`sec_trade_low` decimal(8,2) NOT NULL,
`sec_trade_close` decimal(8,2) NOT NULL,
`sec_trade_last_bid` decimal(8,2) NOT NULL,
`sec_trade_last_offer` decimal(8,2) NOT NULL,
`sec_trade_transaction` int(11) NOT NULL,
`sec_trade_volume` bigint(20) NOT NULL,
`sec_trade_value` decimal(20,2) NOT NULL,
`sec_trade_avg_price` decimal(8,2) NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`sec_trade_id`),
UNIQUE KEY `sec_trade_close` (`sec_trade_date`,`sec_trade_sec_name`,`sec_trade_market`,`sec_trade_trade_report`,`sec_trade_trading_method`),
KEY `security_trade_sec_trade_sec_name_index` (`sec_trade_sec_name`),
KEY `security_trade_sec_trade_date_index` (`sec_trade_date`),
KEY `security_trade_sec_trade_prior_date_index` (`sec_trade_prior_date`),
KEY `security_trade_sec_trade_close_index` (`sec_trade_close`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=10019817 ;
My final query will actually also include
WHERE sec_stat_earning_yield IS NULL
ORDER BY updated_at ASC
but when I add these two clauses, even with a LIMIT of 1,000 records, the query gets even slower (maybe because I don't have indexes on these two columns?).
Thanks in advance.
Taking the following as your actual query:
SELECT `sec_stat_sec_name`, `sec_stat_date`, `sec_stat_market`, `sec_trade_close`, `sec_stat_date`
FROM `security_stat` LEFT JOIN `security_trade`
ON `security_trade`.`sec_trade_sec_name` = `security_stat`.`sec_stat_sec_name`
and `security_trade`.`sec_trade_date` = `security_stat`.`sec_stat_date`
WHERE sec_stat_earning_yield IS NULL
ORDER BY updated_at ASC
LIMIT 100000
You filter the security_stat table in two ways:
1. Only where sec_stat_earning_yield IS NULL
2. First 100k records when ordered by updated_at
Note: I've assumed you mean security_stat.updated_at, but you don't make that clear.
To make that as cheap as possible, add an index that covers both of those fields: (sec_stat_earning_yield, updated_at).
Note: indexes on columns that change a lot, especially when updates move records around within the index, can make INSERTs slower. You will need to balance INSERT performance against SELECT performance.
Then you join the trades table, and you want that lookup to be as fast as possible, which can be achieved with an index on that table covering (sec_trade_sec_name, sec_trade_date, sec_trade_close).
- The first two fields in the index make the lookup simpler
- The last field in the index means the DBMS can avoid having to look in the table
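As concrete DDL, those two recommendations would look roughly like this (the index names are made up):
-- Filter and sort on security_stat from one index
ALTER TABLE security_stat
    ADD INDEX idx_yield_updated (sec_stat_earning_yield, updated_at);
-- Covering index for the join lookup plus the selected close price
ALTER TABLE security_trade
    ADD INDEX idx_name_date_close (sec_trade_sec_name, sec_trade_date, sec_trade_close);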
Once that's done, you may also be well served by looking at the EXPLAIN plan; although relatively complicated, it will give you key information for understanding the best places to target your optimisation.
First, try creating indexes to match the join, so:
security_trade (sec_trade_sec_name, sec_trade_date)
security_stat (sec_stat_sec_name, sec_stat_date)
or possibly
security_stat (sec_stat_earning_yield, sec_stat_sec_name, sec_stat_date)
And as pointed out in the comments above, without an ORDER BY your LIMIT clause may leave the result set not clearly defined.