Getting sales values by day in CakePHP and MySQL - php

I have a sales model with a related saleitems model; the sales model has some modifiers, e.g. a discount.
To get sales totals, I have done this:
var $virtualFields = array(
    'total' => '@vad:=(SELECT COALESCE(SUM(price*quantity), 0) FROM saleitems WHERE saleitems.sale_id = Sale.id)',
    'paid' => '@pad:=(SELECT COALESCE(SUM(amount), 0) FROM payments WHERE payments.sale_id = Sale.id)',
    'discountamount' => '@dis:=(SELECT COALESCE(SUM(price*quantity), 0) FROM saleitems WHERE saleitems.sale_id = Sale.id)*(0.01 * Sale.discount)',
    'saleamount' => '@vad - @dis',
);
Which all seems to be working well. However, when I come to do some reporting, and try to get total sales amount per day, I have run up against the limit of brain power. Should I just tot them up in PHP, or run a query? Or is there a way to do this with Cake's ORM?
I tried the query method:
SELECT
    created,
    (@vad:=(SELECT COALESCE(SUM(price*quantity), 0) FROM saleitems WHERE `saleitems`.`sale_id` = `Sale`.`id`)) AS `Sale__total`,
    (@pad:=(SELECT COALESCE(SUM(amount), 0) FROM payments WHERE `payments`.`sale_id` = `Sale`.`id`)) AS `Sale__paid`,
    (@dis:=(SELECT COALESCE(SUM(price*quantity), 0) FROM saleitems WHERE `saleitems`.`sale_id` = `Sale`.`id`)*(0.01 * `Sale`.`discount`)) AS `Sale__discountamount`,
    SUM(@vad - @dis) AS `Sale__saleamount`
FROM `sales` AS `Sale`
WHERE `Sale`.`account_id` = 37
GROUP BY DAY(`Sale`.`created`)
ORDER BY created
But this is giving me completely incorrect answers.

You can run this query:
SELECT SUM((si.price * si.quantity) * (1 - (0.01 * s.discount))) AS SalesByDay
FROM sales s JOIN saleitems si ON s.id = si.sale_id
WHERE s.account_id = 37
GROUP BY DATE(s.created)
Notes:
The DAY() function returns the day of the month, not the date, which is why the query above groups by DATE() instead.
I did not join the payments table since I do not see where you use the @pad variable.
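To get this per-day total back through CakePHP rather than totting it up in PHP, a minimal sketch would be to run the raw query through the model (assuming CakePHP's Model::query(); in real code the account id should be escaped or bound rather than hard-coded):
// a sketch, not the ORM way: run the raw SQL through the Sale model's connection
$rows = $this->Sale->query(
    "SELECT DATE(s.created) AS day,
            SUM((si.price * si.quantity) * (1 - (0.01 * s.discount))) AS salesByDay
     FROM sales s
     JOIN saleitems si ON s.id = si.sale_id
     WHERE s.account_id = 37
     GROUP BY DATE(s.created)
     ORDER BY day"
);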

Related

Unable to get the value from a query with WHERE Clause

I am trying to get the total sum of values from a table. The query works without a WHERE clause, but I need to get the total sum per user, e.g. user ABC has 100 USD and user BDC has 200 USD. Here is the code:
$PWithdrawls = mysqli_query($con, "SELECT * FROM withdraw WHERE status='Pending'");
$S_NO = 0;
while ($row = mysqli_fetch_assoc($PWithdrawls)) {
    $S_NO++;
    $posted_by = mysqli_query($con, "SELECT * FROM users WHERE userId=".$row['seller_id']);
    $user_ad = mysqli_fetch_assoc($posted_by);
    $TotalOrders_Amount = mysqli_query($con, "SELECT SUM(amount) as total FROM orders WHERE userId=".$row['seller_id']);
    $sum_amount = mysqli_fetch_assoc($TotalOrders_Amount);
    $sum = $sum_amount['total'];
And here is my call
<td>$<?php echo $sum; ?></td>
Here is DB
I think you have an error in your SQL query:
"SELECT SUM(amount) as total FROM orders WHERE userId=".$row['seller_id']." GROUP BY userId LIMIT 1"
You need to use GROUP BY to get the actual SUM. You can also get all users with their totals, so there is no need for a second query:
SELECT u.*, SUM(o.amount) AS total
FROM users u
LEFT JOIN orders o ON (o.userId = u.id)
GROUP BY u.id
This should get you the entire user row plus the total of their orders.
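For example, a minimal sketch of using that single query with mysqli (assuming the same $con connection and the column names from the query above):
$result = mysqli_query($con,
    "SELECT u.*, COALESCE(SUM(o.amount), 0) AS total
     FROM users u
     LEFT JOIN orders o ON (o.userId = u.id)
     GROUP BY u.id");
while ($row = mysqli_fetch_assoc($result)) {
    // one row per user, with the summed order amount already attached
    echo '<td>$' . $row['total'] . '</td>';
}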
I found the issue: I was using the wrong column. userId was not in my table, it was seller_id. So the correct query was:
$TotalOrders_Amount = mysqli_query($con, "SELECT SUM(amount) as total FROM orders WHERE seller_id=".$row['seller_id']);
Thanks to everyone. I really appreciate it.

How to optimise handling of big data in Laravel?

My task is:
"Take the transactions table, group the rows by transaction date, and calculate counts per status. These manipulations produce statistics which will be rendered on the page."
This is my method for generating these statistics:
public static function getStatistics(Website $website = null)
{
    if ($website == null) return [];

    $query = \DB::table('transactions')->where("website_id", $website->id)->orderBy("dt", "desc")->get();
    $transitions = collect(static::convertDate($query))->groupBy("dt");
    $statistics = collect();
    // dd($transitions); // leftover debug call; with dd() here the loop below never runs

    foreach ($transitions as $date => $trans) {
        $subscriptions = $trans->where("status", 'subscribe')->count();
        $unsubscriptions = $trans->where("status", 'unsubscribe')->count();
        $prolongations = $trans->where("status", 'rebilling')->count();
        $redirections = $trans->where("status", 'redirect_to_lp')->count();
        $conversion = $redirections == 0 ? 0 : ((float) ($subscriptions / $redirections));
        $earnings = $trans->sum("pay");

        $statistics->push((object)[
            "date" => $date,
            "subscriptions" => $subscriptions,
            'unsubscriptions' => $unsubscriptions,
            'prolongations' => $prolongations,
            'redirections' => $redirections,
            'conversion' => round($conversion, 2),
            'earnings' => $earnings,
        ]);
    }

    return $statistics;
}
If the count of transaction rows is below 100,000, it all works fine. But if the count is above 150-200k, nginx throws a 502 Bad Gateway. What can you advise? I don't have any experience in handling big data. Maybe my implementation has a fundamental error?
Big data is never easy, but I would suggest using the Laravel chunk instead of get.
https://laravel.com/docs/5.1/eloquent (ctrl+f "::chunk")
What ::chunk does is select n rows at a time, and allow you to process them bit by bit. This is convenient in that it allows you to stream updates to the browser, but at the ~150k result range, I would suggest looking up how to push this work into a background process instead of handling it on request.
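A rough sketch of that, applied to the query from the question (chunk() works on the same builder; the per-date statistics then have to be accumulated across chunks rather than built from one big collection):
$statistics = collect();

\DB::table('transactions')
    ->where('website_id', $website->id)
    ->orderBy('dt', 'desc')
    ->chunk(1000, function ($transactions) use ($statistics) {
        // processes 1,000 rows at a time instead of loading everything with get()
        foreach ($transactions as $transaction) {
            // accumulate the per-date counts and sums here
        }
    });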
After several days of researching this question, I found the right answer:
do NOT use PHP for handling raw data; it's better to use SQL!
In my case, we are using PostgreSQL.
Below is the SQL query which worked for me; maybe it will help someone else.
WITH
cte_range(dt) AS
(
SELECT
generate_series('2016-04-01 00:00:00'::timestamp with time zone, '{$date} 00:00:00'::timestamp with time zone, INTERVAL '1 day')
),
cte_data AS
(
SELECT
date_trunc('day', dt) AS dt,
COUNT(*) FILTER (WHERE status = 'subscribe') AS count_subscribes,
COUNT(*) FILTER (WHERE status = 'unsubscribe') AS count_unsubscribes,
COUNT(*) FILTER (WHERE status = 'rebilling') AS count_rebillings,
COUNT(*) FILTER (WHERE status = 'redirect_to_lp') AS count_redirects_to_lp,
SUM(pay) AS earnings,
CASE
WHEN COUNT(*) FILTER (WHERE status = 'redirect_to_lp') > 0 THEN 100.0 * COUNT(*) FILTER (WHERE status = 'subscribe')::float / COUNT(*) FILTER (WHERE status = 'redirect_to_lp')::float
ELSE 0
END
AS conversion_percent
FROM
transactions
WHERE
website_id = {$website->id}
GROUP BY
date_trunc('day', dt)
)
SELECT
to_char(cte_range.dt, 'YYYY-MM-DD') AS day,
COALESCE(cte_data.count_subscribes, 0) AS count_subscribe,
COALESCE(cte_data.count_unsubscribes, 0) AS count_unsubscribes,
COALESCE(cte_data.count_rebillings, 0) AS count_rebillings,
COALESCE(cte_data.count_redirects_to_lp, 0) AS count_redirects_to_lp,
COALESCE(cte_data.conversion_percent, 0) AS conversion_percent,
COALESCE(cte_data.earnings, 0) AS earnings
FROM
cte_range
LEFT JOIN
cte_data
ON cte_data.dt = cte_range.dt
ORDER BY
cte_range.dt DESC
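If you run a query like this from Laravel, it is also worth passing the values as bindings instead of interpolating {$website->id} and {$date} into the SQL string. A shortened sketch of just the aggregation part (the full CTE above would be bound the same way):
$rows = \DB::select(
    "SELECT date_trunc('day', dt) AS day,
            COUNT(*) FILTER (WHERE status = 'subscribe') AS count_subscribes,
            SUM(pay) AS earnings
     FROM transactions
     WHERE website_id = ? AND dt >= ?
     GROUP BY date_trunc('day', dt)
     ORDER BY day DESC",
    [$website->id, '2016-04-01 00:00:00']
);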

Mysql return results based on SUM of a column

I have a query where I need to get all customers who have spent less than a certain amount in a given month, returning only those that have not met the quota.
The query as it is now is as follows.
SELECT cus.id, cus.email_address, COALESCE(SUM(credit_total),0) AS totalSpend
FROM customers AS cus
LEFT JOIN tasks_custs AS tsk ON tsk.user_id = cus.id
WHERE (
YEAR(date_ordered) = '2013'
AND MONTH(date_ordered) = '09'
AND paid = '1'
AND totalSpend < '300'
)
The error that is being returned is Unknown column 'totalSpend' in 'where clause'.
What I am wondering is: can I accomplish this with a single SQL query, or am I going to have to select all customers and check the spend using PHP?
I was hoping to have MySQL return only the results that I need.
When filtering on aggregate functions you need to use the HAVING clause instead of WHERE.
SELECT cus.id, cus.email_address, COALESCE(SUM(credit_total),0) AS totalSpend
FROM customers AS cus
LEFT JOIN tasks_custs AS tsk ON tsk.user_id = cus.id
WHERE (
YEAR(date_ordered) = '2013'
AND MONTH(date_ordered) = '09'
AND paid = '1')
GROUP BY cus.id
HAVING SUM(credit_total) < 300
If you are interested, there is a good explanation of the difference between WHERE and HAVING here. But if you want a quick summary, in my words, I would say it is this:
WHERE conditions are applied before any grouping on the specified criteria, and cannot be applied to aggregate functions
whereas HAVING is applied after grouping and can use aggregate functions to filter the result set.
How does this work for you? I have grouped customer information and summed the total, just like you have, except that I have added a HAVING clause after grouping the data.
SELECT
cus.id,
cus.email_address,
COALESCE(SUM(credit_total),0) AS totalSpend
FROM customers AS cus
LEFT JOIN tasks_custs AS tsk
ON tsk.user_id = cus.id
WHERE
YEAR(date_ordered) = '2013'
AND MONTH(date_ordered) = '09'
AND paid = '1'
GROUP BY
cus.id,
cus.email_address
HAVING COALESCE(SUM(credit_total),0) < '300'

Best way to sum and separate by date in MySQL with/without PHP

Hi, I have a table like this:
What I want to do, with PHP (a while loop) or just in MySQL, is to SUM(time_used) over the rows with status 44 until a row with status 55 is reached. After that it should start a new sum from scratch.
The first result should be 37, the second 76 (keep in mind it should be universal, for unlimited occurrences of status-55 rows).
I thought of a way with time/date filtering and have this:
select sum(time_used) as sumed
from timelog
where start_time > (select end_time from timelog where (status='55')
ORDER BY id DESC LIMIT 1) ORDER BY id DESC
But this works only for the last combination of 44 and 55.
I know I will need two-way filtering (< end_time and > end_time) so it will work for all cases, but I can't think of a way to do it in PHP.
Can anyone help me?
EDIT:
SQL Fiddle for whoever wants it:
http://sqlfiddle.com/#!2/33820/2/0
There are two ways to do it: plain SQL or PHP. If you are processing thousands of rows, it may be worth choosing between the two by testing performance.
Plain SQL
select project_id, task_id, user_id, sum(time_used) as time_used,
       min(start_time) as start_time, max(end_time) as end_time, max(comment) as comment
from
    (select t.id, t.project_id, t.task_id, t.user_id, t.time_used,
            count(t2.id) as count55, t.start_time, t.end_time, t.comment
     from timelog t
     left join timelog t2 on t.id > t2.id and t2.status = 55 and t.task_id = t2.task_id
     group by t.id) as t
group by count55;
I assume here that a task can belong to one user only
SQL and PHP
$link = mysqli_connect( ... );
$query = "select id, project_id, task_id, user_id, time_used, start_time, end_time, status
          from timelog order by id";
$result = mysqli_query($link, $query);
$table = array();
$time_used = 0;
$start_sum = true;
$i = 0;
while ($row = mysqli_fetch_assoc($result)) {
    if ($start_sum) {
        // first row of a new block: keep it as the base row
        $table[$i] = $row;
        $start_sum = false;
    } else {
        // accumulate the time and extend the block's end time
        $table[$i]['time_used'] += $row['time_used'];
        $table[$i]['end_time'] = $row['end_time'];
    }
    if ($row['status'] == 55) {
        // a status-55 row closes the block; the next row starts a new one
        $i++;
        $start_sum = true;
    }
}
If two tasks can run simultaneously, solution 1 will work, but solution 2 will need to be adapted in order to take this into account.
Here is my interpretation:
http://sqlfiddle.com/#!2/33820/45
set @n=0;
select project_id, task_id, user_id, sum(time_used) from (
    SELECT time_used, project_id, task_id, user_id,
        @n:=if(status=55, @n+1, @n),
        if(status=55, -1, @n) as grouper
    FROM timelog
) as t
where grouper > -1
group by grouper;
I'm neither a PHP nor a MySQL programmer, but I can explain the logic you want to follow. You can then code it.
First, query your DB and return the results to PHP.
Next, set two sum variables to 0.
Start looping through your query results. Increment the first sum variable until you reach the first row that has status 55. Once you do, start incrementing the second variable.
The tricky part will be to sort your query by the row number of the table. Here is a link that will help you with that part.
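A minimal PHP sketch of that loop, generalized to an array of sums so it works for any number of status-55 rows (assuming a mysqli connection in $link and the columns from the fiddle):
$result = mysqli_query($link, "SELECT time_used, status FROM timelog ORDER BY id");
$sums = array();
$current = 0;
while ($row = mysqli_fetch_assoc($result)) {
    if ($row['status'] == 55) {
        // a status-55 row closes the current block: store its total and start over
        $sums[] = $current;
        $current = 0;
    } elseif ($row['status'] == 44) {
        $current += $row['time_used'];
    }
}
// for the sample data $sums would be array(37, 76)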

MySql Limit results per grouping in query with multiple joins

I have this query
select distinct
loc.mID,
loc.city,
loc.state,
loc.zip,
loc.country,
loc.latitude,
loc.longitude,
baseInfo.firstname,
baseInfo.lastname,
baseInfo.profileimg,
baseInfo.facebookID,
(((acos(sin(('37.8068406'*pi()/180)) * sin((`latitude`*pi()/180))+cos(('37.8068406'*pi()/180)) * cos((`latitude`*pi()/180)) * cos((('-121.3062367' - `longitude`)*pi()/180))))*180/pi())*60*1.1515) AS `distance`,
teams.teamName,
teams.leagueType,
teams.teamType,
teams.subcat
FROM memb_geo_locations loc
left join memb_friends friends on (friends.mID = loc.mID or friends.friendID = loc.mID) and (friends.mID = '100019' or friends.friendID = '100019')
join memb_baseInfo baseInfo on baseInfo.mID = loc.mID
join memb_teams teams on teams.mID = loc.mID
where
loc.primaryAddress = '1' and ((friends.mID is null or friends.friendID is null) or (friends.isactive = 2))
and
(teams.teamName like '%Buffalo Bills%' or teams.teamName like '%New England Patriots%' or teams.teamName like '%Dallas Cowboys%')
and
loc.mID != 100019
having
`distance` < 50
order by baseInfo.firstname asc limit 30
This works perfectly for my core needs. However, I am trying to figure out how I can refine the query so that the part that is
(teams.teamName like '%Buffalo Bills%' or teams.teamName like '%New England Patriots%' or teams.teamName like '%Dallas Cowboys%')
will each yield a defined maximum number of results per team name (fewer or none per team is fine too, I'm just seeking a max per team), while still having a maximum output of the limit specified at the end of the query. Is there any way I can refine this query to do what I am hoping? Someone told me in another recent, similar post of mine to check out UNION, but I am not sure how that would apply to this query, assuming that is the right direction to go.
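For what it's worth, the UNION idea would mean running the query once per team, each branch with its own LIMIT, and then limiting the combined result. Stripped down to the join that matters for the per-team cap (the distance calculation, the friends join and the remaining columns would have to be repeated in each branch), the shape is roughly:
(SELECT loc.mID, teams.teamName
 FROM memb_geo_locations loc
 JOIN memb_teams teams ON teams.mID = loc.mID
 WHERE loc.primaryAddress = '1' AND teams.teamName LIKE '%Buffalo Bills%'
 LIMIT 10)
UNION
(SELECT loc.mID, teams.teamName
 FROM memb_geo_locations loc
 JOIN memb_teams teams ON teams.mID = loc.mID
 WHERE loc.primaryAddress = '1' AND teams.teamName LIKE '%New England Patriots%'
 LIMIT 10)
UNION
(SELECT loc.mID, teams.teamName
 FROM memb_geo_locations loc
 JOIN memb_teams teams ON teams.mID = loc.mID
 WHERE loc.primaryAddress = '1' AND teams.teamName LIKE '%Dallas Cowboys%'
 LIMIT 10)
LIMIT 30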
