I need to find a way to modify a morphMany relationship so it matches against two different column pairs. Here is the CREATE TABLE syntax for the table being morphed:
CREATE TABLE `user_friendships` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`sender_type` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`sender_id` bigint(20) unsigned NOT NULL,
`recipient_type` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`recipient_id` bigint(20) unsigned NOT NULL,
`status` tinyint(4) NOT NULL DEFAULT '0',
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `user_friendships_sender_type_sender_id_index` (`sender_type`,`sender_id`),
KEY `user_friendships_recipient_type_recipient_id_index` (`recipient_type`,`recipient_id`)
) ENGINE=InnoDB AUTO_INCREMENT=12332 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
Both the sender and the recipient can be a user, so I want the relationship to match on whichever of the two column pairs contains the user's ID. The following only checks whether the sender is the user, not the recipient:
return $this->morphMany(Friendship::class, 'sender');
The query that relationship produces is:
select * from `user_friendships` where `user_friendships`.`sender_id` = 5971 and `user_friendships`.`sender_id` is not null and `user_friendships`.`sender_type` = "App\\\User"
What we actually want is:
select * from `user_friendships` where (`user_friendships`.`sender_id` = 5971 and `user_friendships`.`sender_id` is not null and `user_friendships`.`sender_type` = "App\\\User") OR (`user_friendships`.`recipient_id` = 5971 and `user_friendships`.`recipient_id` is not null and `user_friendships`.`recipient_type` = "App\\\User")
How do I accomplish this?
https://github.com/staudenmeir/laravel-merged-relations
This Composer package was able to solve it.
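For reference, here is a rough sketch of how the package can be wired up for this case, based on its README. The relation names, view name, and model names below (sentFriendships, receivedFriendships, user_friendships_merged) are my own placeholders, and the helper names (createMergeView, HasMergedRelationships, mergedRelationWithModel) should be verified against the package docs:
use Illuminate\Database\Eloquent\Model;
use Staudenmeir\MergedRelations\Facades\Schema;

// In a migration: create a view that merges the two morph relations
// (helper name taken from the package README; verify against current docs).
Schema::createMergeView(
    'user_friendships_merged',
    [(new User)->sentFriendships(), (new User)->receivedFriendships()]
);

class User extends Model
{
    use \Staudenmeir\MergedRelations\Eloquent\HasMergedRelationships;

    // The two one-sided morph relations from the question
    public function sentFriendships()
    {
        return $this->morphMany(Friendship::class, 'sender');
    }

    public function receivedFriendships()
    {
        return $this->morphMany(Friendship::class, 'recipient');
    }

    // Single relation that returns friendships from either side
    public function friendships()
    {
        return $this->mergedRelationWithModel(Friendship::class, 'user_friendships_merged');
    }
}
$user->friendships then queries the merged view, which effectively gives you the OR of the two conditions shown above.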
So I have a table that has multiple datetime columns, and I am trying to select certain records based on a certain date using:
SELECT * FROM `posdata` WHERE `CommissionDate` >= '2019-01-01 00:00:00'
The table structure:
CREATE TABLE `posdata` (
`ID` int(11) NOT NULL,
`DISTYNAME` varchar(30) DEFAULT NULL,
`ENDCUST` varchar(75) DEFAULT NULL,
`MFGCUST` varchar(50) DEFAULT NULL,
`EXTPRICE` double DEFAULT NULL,
`POSPERIOD` datetime DEFAULT NULL,
`PAYMENTDATE` date DEFAULT NULL,
`QTY` double DEFAULT NULL,
`UNITCOST` double DEFAULT NULL,
`UNITPRICE` double DEFAULT NULL,
`COMMISSION` double DEFAULT NULL,
`SALESORDER` varchar(40) DEFAULT NULL,
`PO` varchar(40) DEFAULT NULL,
`POLineItem` varchar(20) DEFAULT NULL,
`ENTRYDATE` datetime DEFAULT NULL,
`AdjustedCommission` int(11) DEFAULT NULL,
`CustomerPart-NO` varchar(50) DEFAULT NULL,
`CommissionDate` datetime DEFAULT NULL,
`EXTCOST` double DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
ALTER TABLE `posdata`
ADD PRIMARY KEY (`ID`),
ADD KEY `CommissionDate` (`CommissionDate`);
ALTER TABLE `posdata` ADD FULLTEXT KEY `posdata_endcustomer_index` (`ENDCUST`);
A very weird thing happens: it returns all the fields as required, but the CommissionDate column shows only '2019-01-01 00:00:00' as the date, while the actual CommissionDate column in the database contains only '2016-01-01 00:00:00'.
I am using phpMyAdmin to run this query and have also used its search filter, and it always gives me the same result whether I run it through a PHP script or phpMyAdmin. What am I doing wrong?
In your query, SELECT * FROM `posdata` means "get all the fields from the posdata table", and then the WHERE clause is applied. If you only want the data from the CommissionDate column, change your query to:
SELECT `CommissionDate` FROM `posdata`
That way you get the desired data.
The query was correct, can someone delete this question!
I have two tables:
security_stat => with 4 million records
security_trade => with 10 million records
I have this query running successfully, but how can I OPTIMIZE it so it can fetch at least 100,000 records within 10 seconds (is that possible?). Currently it is very, very slow.
SELECT `sec_stat_sec_name`, `sec_stat_date`, `sec_stat_market`, `sec_trade_close`, `sec_stat_date`
FROM `security_stat` LEFT JOIN `security_trade`
ON `security_trade`.`sec_trade_sec_name` = `security_stat`.`sec_stat_sec_name`
and `security_trade`.`sec_trade_date` = `security_stat`.`sec_stat_date`
LIMIT 100000
I have indexes on sec_trade_sec_name, sec_stat_sec_name, sec_trade_date, and sec_stat_date.
I tried limiting the result with WHERE sec_stat_date >= '2005-01-01', but that doesn't help much (my records range from 1975 to 2014).
EDIT
security_stat schema
CREATE TABLE `security_stat` (
`sec_stat_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`sec_stat_date` date NOT NULL,
`sec_stat_sec_name` varchar(255) NOT NULL,
`sec_stat_sec_id` int(11) NOT NULL,
`sec_stat_market` varchar(255) NOT NULL,
`sec_stat_industry` int(11) NOT NULL,
`sec_stat_sector` int(11) NOT NULL,
`sec_stat_subsector` int(11) NOT NULL,
`sec_stat_sec_type` varchar(1) NOT NULL,
`sec_stat_status` varchar(2) NOT NULL,
`sec_stat_benefit` varchar(2) NOT NULL,
`sec_stat_listed_share` bigint(20) NOT NULL,
`sec_stat_earn_p_share` decimal(12,5) NOT NULL,
`sec_stat_value` decimal(9,2) NOT NULL,
`sec_stat_p_of_earn` int(11) NOT NULL,
`sec_stat_as_date` date NOT NULL,
`sec_stat_div_p_share` decimal(16,12) NOT NULL,
`sec_stat_p_of_div` int(11) NOT NULL,
`sec_stat_end_date_div` date NOT NULL,
`sec_stat_pe` decimal(8,2) NOT NULL,
`sec_stat_pbv` decimal(8,2) NOT NULL,
`sec_stat_div_yield` decimal(8,2) NOT NULL,
`sec_stat_par_value` decimal(16,5) NOT NULL,
`sec_stat_market_cap` decimal(20,2) NOT NULL,
`sec_stat_turn_ratio` decimal(8,2) NOT NULL,
`sec_stat_npg_flag` varchar(1) NOT NULL,
`sec_stat_acc_div` decimal(16,12) NOT NULL,
`sec_stat_acc_no_of_pay` int(11) NOT NULL,
`sec_stat_div_pay_ratio` decimal(6,2) NOT NULL,
`sec_stat_earn_date` date NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`sec_stat_ev` decimal(20,2) DEFAULT NULL,
`sec_stat_ev_revenue` decimal(20,2) DEFAULT NULL,
`sec_stat_ev_ebit` decimal(20,2) DEFAULT NULL,
`sec_stat_ev_ebitda` decimal(20,2) DEFAULT NULL,
`sec_stat_earning_yield` decimal(10,5) DEFAULT NULL,
`sec_stat_ps_ratio` decimal(10,5) DEFAULT NULL,
PRIMARY KEY (`sec_stat_id`),
UNIQUE KEY `sec_stat_date_name_id_cap` (`sec_stat_date`,`sec_stat_market`,`sec_stat_sec_id`,`sec_stat_sector`),
KEY `sec_stat_date` (`sec_stat_date`),
KEY `sec_stat_sec_name` (`sec_stat_sec_name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3598612 ;
security_trade schema
CREATE TABLE `security_trade` (
`sec_trade_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`sec_trade_date` date NOT NULL,
`sec_trade_sec_name` varchar(20) NOT NULL,
`sec_trade_sec_id` int(11) NOT NULL,
`sec_trade_market` varchar(1) NOT NULL,
`sec_trade_trading_method` varchar(1) NOT NULL,
`sec_trade_trade_report` varchar(1) NOT NULL,
`sec_trade_prior_date` date NOT NULL,
`sec_trade_prior` decimal(8,2) NOT NULL,
`sec_trade_open` decimal(8,2) NOT NULL,
`sec_trade_high` decimal(8,2) NOT NULL,
`sec_trade_low` decimal(8,2) NOT NULL,
`sec_trade_close` decimal(8,2) NOT NULL,
`sec_trade_last_bid` decimal(8,2) NOT NULL,
`sec_trade_last_offer` decimal(8,2) NOT NULL,
`sec_trade_transaction` int(11) NOT NULL,
`sec_trade_volume` bigint(20) NOT NULL,
`sec_trade_value` decimal(20,2) NOT NULL,
`sec_trade_avg_price` decimal(8,2) NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`sec_trade_id`),
UNIQUE KEY `sec_trade_close` (`sec_trade_date`,`sec_trade_sec_name`,`sec_trade_market`,`sec_trade_trade_report`,`sec_trade_trading_method`),
KEY `security_trade_sec_trade_sec_name_index` (`sec_trade_sec_name`),
KEY `security_trade_sec_trade_date_index` (`sec_trade_date`),
KEY `security_trade_sec_trade_prior_date_index` (`sec_trade_prior_date`),
KEY `security_trade_sec_trade_close_index` (`sec_trade_close`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=10019817 ;
My final query will actually also have:
WHERE sec_stat_earning_yield IS NULL
ORDER BY updated_at ASC
but when I add these two clauses to the query with a LIMIT of 1,000 records, it makes the query even slower (maybe because I don't have an index on these two columns?).
Thanks in advance
Taking the following as your actual query:
SELECT `sec_stat_sec_name`, `sec_stat_date`, `sec_stat_market`, `sec_trade_close`, `sec_stat_date`
FROM `security_stat` LEFT JOIN `security_trade`
ON `security_trade`.`sec_trade_sec_name` = `security_stat`.`sec_stat_sec_name`
and `security_trade`.`sec_trade_date` = `security_stat`.`sec_stat_date`
WHERE sec_stat_earning_yield IS NULL
ORDER BY updated_at ASC
LIMIT 100000
You filter the security_stat table in two ways:
1. Only where sec_stat_earning_yield IS NULL
2. First 100k records when ordered by updated_at
Note: I've assumed you mean security_stat.updated_at, but you don't make that clear.
In order to make that as cheap as possible, add an index that covers both of those fields (sec_stat_earning_yield, updated_at).
Note: Adding indexes on fields that change a lot, especially when the order of the records within the index changes, can make INSERTs slower. You will need to balance INSERT performance against SELECT performance.
Then you join the trades table on, and so you want that lookup to be as fast as possible, which can be achieved with an index on that table covering (sec_trade_sec_name, sec_trade_date, sec_trade_close).
- The first two fields in the index make the lookup simpler
- The last field in the index means the DBMS can avoid having to look in the table
Once that is done, you may also be well served by looking at the EXPLAIN plan; although it is relatively complicated, it will give you the key information you need to target your optimisation in the right places.
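As a rough sketch of what that could look like (the index names here are made up; pick whatever fits your conventions):
-- Covering index for the filter + sort on security_stat
ALTER TABLE `security_stat`
  ADD INDEX `idx_stat_yield_updated` (`sec_stat_earning_yield`, `updated_at`);

-- Covering index for the join lookup on security_trade
ALTER TABLE `security_trade`
  ADD INDEX `idx_trade_name_date_close` (`sec_trade_sec_name`, `sec_trade_date`, `sec_trade_close`);

-- Then inspect how the optimizer actually uses them
EXPLAIN SELECT `sec_stat_sec_name`, `sec_stat_date`, `sec_stat_market`, `sec_trade_close`
FROM `security_stat`
LEFT JOIN `security_trade`
  ON `security_trade`.`sec_trade_sec_name` = `security_stat`.`sec_stat_sec_name`
  AND `security_trade`.`sec_trade_date` = `security_stat`.`sec_stat_date`
WHERE `sec_stat_earning_yield` IS NULL
ORDER BY `security_stat`.`updated_at` ASC
LIMIT 100000;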
First, try creating indexes to match the join, like so:
security_trade (sec_trade_sec_name, sec_trade_date)
security_stat (sec_stat_sec_name, sec_stat_date)
or possibly
security_stat (sec_stat_earning_yield, sec_stat_sec_name, sec_stat_date)
And as pointed out in the comments above, your LIMIT clause may leave the result set not clearly defined (LIMIT without a fully deterministic ORDER BY does not guarantee which rows are returned).
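In DDL form those suggestions would look something like this (again, the index names are just examples):
ALTER TABLE `security_trade`
  ADD INDEX `idx_trade_name_date` (`sec_trade_sec_name`, `sec_trade_date`);

ALTER TABLE `security_stat`
  ADD INDEX `idx_stat_name_date` (`sec_stat_sec_name`, `sec_stat_date`);

-- or, if you keep the sec_stat_earning_yield filter:
ALTER TABLE `security_stat`
  ADD INDEX `idx_stat_yield_name_date` (`sec_stat_earning_yield`, `sec_stat_sec_name`, `sec_stat_date`);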
I have this table:
CREATE TABLE IF NOT EXISTS `superTable` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`lotID` bigint(20) NOT NULL,
`characterID` bigint(20) NOT NULL,
`confirmChoice` varchar(255) DEFAULT NULL,
`confirmStatus` varchar(255) DEFAULT NULL,
`dateCreate` datetime DEFAULT NULL,
`dateStart` datetime DEFAULT NULL,
`dateEnd` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=0 ;
I need this order:
First, entries where confirmChoice is null.
Next, entries where confirmChoice and dateStart are not null.
All the rest next, ordered by dateEnd.
How can I do it in one query?
If I understood your requirement correctly, this could be one way of achieving it:
SELECT
*
,CASE
WHEN (`confirmChoice` IS NULL) THEN '1'
WHEN (`confirmChoice` IS NOT NULL AND `dateStart` IS NOT NULL ) THEN '2'
ELSE '3'
END AS sort_order
FROM
`superTable`
WHERE
1
ORDER BY
sort_order
,`dateEnd`
You would probably need to tweak it to suit your requirements.
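If you don't want the extra sort_order column in the result set, the same CASE expression can also be placed directly in the ORDER BY clause, for example:
SELECT *
FROM `superTable`
ORDER BY
    CASE
        WHEN `confirmChoice` IS NULL THEN 1
        WHEN `confirmChoice` IS NOT NULL AND `dateStart` IS NOT NULL THEN 2
        ELSE 3
    END,
    `dateEnd`;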
I'm working on a CodeIgniter project and I'm trying to troubleshoot a SQL issue. I have a query that updates a date field in my table, and it's not updating it correctly.
I have this table:
CREATE TABLE `Customer` (
`customer_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`first_name` varchar(55) NOT NULL DEFAULT '',
`last_name` varchar(55) NOT NULL DEFAULT '',
`city` varchar(255) DEFAULT '',
`state` char(2) DEFAULT '',
`zip` int(5) DEFAULT NULL,
`title` varchar(255) NOT NULL DEFAULT '',
`image` varchar(255) DEFAULT '',
`blurb` blob,
`end_date` date DEFAULT NULL,
`goal` int(11) DEFAULT NULL,
`paypal_acct_num` int(11) DEFAULT NULL,
`progress_bar` enum('full','half','none') DEFAULT NULL,
`page_views` int(11) NOT NULL DEFAULT '0',
`total_pages` int(11) NOT NULL DEFAULT '0',
`total_conversions` int(11) NOT NULL DEFAULT '0',
`total_given` int(11) NOT NULL DEFAULT '0',
`conversion_percentage` int(11) NOT NULL DEFAULT '0',
`avg_contribution` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`customer_id`)
) ENGINE=MyISAM AUTO_INCREMENT=15 DEFAULT CHARSET=utf8;
and when I run this query to insert data, it runs fine and sets the date to 2012-11-01
INSERT INTO `Customer` (`first_name`, `last_name`, `end_date`) VALUES ('John', 'Smith2', '2012-11-01');
Then I get the customer_id and try to run this query
UPDATE `Customer` SET `end_date` = '2012-14-01' WHERE `customer_id` = '18';
and it sets the end_date field to 0000-00-00.
Why is it changing the end date to 0000-00-00 rather than 2012-14-01?
2012-14-01 is the first day of the fourteenth month :)
(so it is an invalid date, which gets cast to 0000-00-00, and a "Data truncated for column 'end_date' at row 1" warning was returned by MySQL, which you can see by running SHOW WARNINGS immediately after the badly behaving query)
2012-01-14 is the 14th of January.
Use this instead:
UPDATE `Customer` SET `end_date` = '2012-01-14' WHERE `customer_id` = '18';
MySQL expects the date in year-month-day order; if you build the value in PHP, use the date('Y-m-d') function to format it before updating this field.
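As a side note, you can reproduce the silent truncation and inspect the warning yourself (a sketch using the values from the question; the exact warning text may vary by MySQL version):
-- The invalid literal is accepted but truncated, and a warning is recorded
UPDATE `Customer` SET `end_date` = '2012-14-01' WHERE `customer_id` = '18';
-- Shows something like: Warning | Data truncated for column 'end_date' at row 1
SHOW WARNINGS;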