update remaining balance to 0 if payment is greater than remaining balance - php

Using this query I can easily deduct the payment from the remaining balance.
Let's say my rem_balance is 3000 and my payment is 5000, so the change should be 2000.
UPDATE tbl_users SET rem_balance = (rem_balance - '5000') WHERE user_id = '2017001002'
But with this query the rem_balance updates to -2000. What I want to achieve is that if the payment is > rem_balance, rem_balance becomes 0 and the change is kept separately.

Something like this should work
UPDATE tbl_users
SET rem_balance = (CASE WHEN rem_balance < 5000 THEN 0 ELSE rem_balance - 5000 END)
WHERE user_id = '2017001002'
Please be aware that you are using implicit conversion in your SQL ('5000' instead of 5000). It is a bad habit and it can harm the performance of your queries in certain cases. For example, if the column type and the literal type don't match, MySQL may not be able to use an index to find the row with that specific user_id and may fall back to a full table scan, which is very slow.
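If you also need to keep track of the change itself (the 2000 in the example), one option is to read the balance first, compute the change in PHP, and clamp the new balance with GREATEST(), which has the same effect as the CASE above. A rough sketch with mysqli; the $conn connection and the variable names are just placeholders:
// Hypothetical sketch: read the balance, work out the change, then clamp the balance at 0.
$userId  = '2017001002';
$payment = 5000; // assumed to come from your payment form

$stmt = $conn->prepare("SELECT rem_balance FROM tbl_users WHERE user_id = ?");
$stmt->bind_param("s", $userId);
$stmt->execute();
$stmt->bind_result($remBalance);
$stmt->fetch();
$stmt->close();

// The change is whatever exceeds the remaining balance (0 if the payment is smaller).
$change = max(0, $payment - $remBalance);

// GREATEST() stops the balance from going below 0, same effect as the CASE expression.
$stmt = $conn->prepare("UPDATE tbl_users SET rem_balance = GREATEST(rem_balance - ?, 0) WHERE user_id = ?");
$stmt->bind_param("is", $payment, $userId);
$stmt->execute();
$stmt->close();
If other payments can hit the same row at the same time, wrap the SELECT and the UPDATE in one transaction (or use SELECT ... FOR UPDATE) so the change you compute stays consistent.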

Related

One big update query, or several single update queries, to update a value in MySQL multi-tenant database

I'm working on a 'point of sale' application (PHP + MySQL), with products, sales, product quantities that have to be updated (+ or -) depending on the operation (sale/buy), etc.
On every sale/buy, I save the operation data (date, customer, totals...) in an 'invoices' table, and the details (all products in the invoice) in an 'invoicecontents' table.
Ok, you got it.
Now (and it works fine), I do the following to save the invoice and update products quantity:
Save the invoice data
Iterate the products in the invoice (from a JSON) and create a single INSERT query to save the invoice contents in the second table.
AFTER THAT, once the invoice and details are saved, then I update products quantities WITH A SINGLE QUERY.
Something like this (for a sale operation):
UPDATE products a
SET QUANTITY = QUANTITY -
(SELECT sum(QUANTITY)
FROM details b
WHERE IDPRODUCT = a.ID
and IDCUSTOMER = 4
and IDMOV = 615
and IDPRODUCT <> -1)
where ID in
(SELECT IDPRODUCT
FROM details b
WHERE b.IDPRODUCT = a.ID
and IDCUSTOMER = 4
and IDMOV = 615
and IDARTICULO <> -1)
and IDCUSTOMER = 4
and a.ACTSTOCK = 1;
This works fine. Of course all is in a "begin_transaction...commit".
HOWEVER, I would like to know if this second method is better, faster or even more secure:
Save the invoice data
Iterate the products in the invoice (from a JSON) and...
Inside the iteration, for each product (each json item):
3.1. save that line into the invoice contents table
3.2. update that product quantity in the products table for THAT product.
Something like this, for each product/item:
INSERT INTO detailtable
(IDCUSTOMER,IDPRODUCT,QUANTITY......)
VALUES
(4,615,5);
UPDATE products
SET QUANTITY = QUANTITY - 5
WHERE
IDPRODUCT = 615
and IDCUSTOMER = 4
and ACTSTOCK = 1;
Maybe this second approach is simpler and more understandable, but I really don't know if it will cause more (or less) CPU and memory consumption, taking into account that it is a multi-tenant database/application.
The problem I have with the first method is that, in the future, I will need to update more fields and tables with each product sold, so the big update query will be even bigger and even not possible.
Thanks!
Of course, single queries are generally more efficient than multiple queries, all other things being equal. But either approach should work. Clarity and readability of code are vital to long-lived SQL-using applications.
If you do choose multiple queries, always process your products in the same order: always sort the list of items in your JSON objects in ID order before working through them one-by-one. This will help avoid deadlocks in your transactions.
In either case, but especially the multi-query case, take care to create efficient indexes for the WHERE clauses in all the queries within your transactions. The less time a transaction takes, the better.
Edit: One more thing to do for best performance and fewest deadlocks.
Right after you begin your transaction, lock the rows in products you want to update in the transaction. Run this SELECT query -- it is very similar to the UPDATE query you showed us. You don't need its result set, just the FOR UPDATE clause.
SELECT COUNT(ID) FROM (
  SELECT a.ID
  FROM products a
  WHERE a.ID in
        (SELECT IDPRODUCT
         FROM details b
         WHERE b.IDPRODUCT = a.ID
           and IDCUSTOMER = 4
           and IDMOV = 615
           and IDARTICULO <> -1)
    and a.IDCUSTOMER = 4
    and a.ACTSTOCK = 1
  ORDER BY a.ID
  FOR UPDATE
) subq;
The FOR UPDATE locks are released when you COMMIT (or ROLLBACK) the transaction.
The ORDER BY ID clause avoids deadlocks.
Neither. Use a JOIN in the UPDATE.
Also
products: INDEX(IDCUSTOMER, IDPRODUCT, ACTSTOCK)
details: INDEX(IDCUSTOMER, IDPRODUCT, IDMOV)
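For reference, a rough sketch of what that JOIN form could look like, reusing the table and column names from the question (treat it as an outline to adapt, not a tested drop-in):
UPDATE products a
JOIN ( SELECT IDPRODUCT, SUM(QUANTITY) AS qty
       FROM details
       WHERE IDCUSTOMER = 4
         AND IDMOV = 615
         AND IDPRODUCT <> -1
       GROUP BY IDPRODUCT ) d ON d.IDPRODUCT = a.ID
SET a.QUANTITY = a.QUANTITY - d.qty
WHERE a.IDCUSTOMER = 4
  AND a.ACTSTOCK = 1;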

MySQL Prevent sum twice

I have the following query in my PHP script:
SELECT SUM(score) + 1200 AS rating
FROM users_ratings
WHERE user_fk = ?
The problem is that if the user has no rows in the table, the rating is returned as NULL. So I added an if case:
SELECT IF(SUM(score) IS NULL, 0, SUM(score)) + 1200 AS rating
FROM users_ratings
WHERE user_fk = ?
But now I'm wondering how to do the query without repeating the SUM(score). I'm guessing that with a lot of rows the sum would be computed twice, which would affect the performance of the application.
Here is a simpler method:
SELECT ( COALESCE(SUM(score), 0) + 1200 ) AS rating
FROM users_ratings
WHERE user_fk = ?;
COALESCE() is an ANSI standard function that returns the first non-NULL value from a list of expressions.
However, the additional overhead of calculating SUM(score) twice -- even if it happens -- should be very minimal compared to the rest of the query (finding the data, reading it in, summarizing it to one row).
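If it helps, this is roughly how the bound parameter would be passed from PHP with PDO ($pdo and $userId are assumed to exist already):
// Hypothetical sketch: run the COALESCE query with a bound parameter.
$stmt = $pdo->prepare(
    "SELECT (COALESCE(SUM(score), 0) + 1200) AS rating
     FROM users_ratings
     WHERE user_fk = ?"
);
$stmt->execute([$userId]);
$rating = $stmt->fetchColumn(); // always exactly one row, thanks to the aggregate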

Updating thousands of mysql rows with php

I have a script which needs to update a value over night, each night.
My MySQL db has 119k rows, which are grouped into 35k rows.
For each of these groups I need to calculate the highest and the lowest value and then update the row with the new percent difference between them.
Right now I can't even execute the update with a limit of 50+.
My code:
$query_updates = mysqli_query($con, "SELECT partner FROM trolls GROUP BY partner LIMIT 0, 50")
    or die(mysqli_error($con));
while ($item = mysqli_fetch_assoc($query_updates)) {
    $query_updates_prices = mysqli_query($con, "SELECT
        MIN(partner1) AS p1,
        MAX(partner2) AS p2,
        COUNT(partner3) AS p3
        FROM trolls WHERE partner='". $item["partner"] ."'")
        or die(mysqli_error($con));
    $partner = mysqli_fetch_assoc($query_updates_prices);
    $partner1 = $partner["p1"];
    $partner2 = $partner["p2"];
    $difference = $partner1 - $partner2;
    $savings = round($difference / $partner1 * 100);
    $partner3 = $partner["p3"];
    $update_tyre = mysqli_query($con, "UPDATE trolls SET
        partner1='". $partner1 ."',
        partner2='". $partner2 ."',
        partner3='". $partner3 ."',
        partner4='". $savings ."'
        WHERE partner='". $item["partner"] ."'")
        or die(mysqli_error($con));
    echo '<strong>Updated: '. $item["partner"] .'</strong><br>';
}
How can i make this more simple / able to execute?
+1 for the cron; running it from the command line would also help, as it won't time out. However, you might have problems with the GROUP BY locking tables.
To be honest (you won't like this), if you are doing a GROUP BY on a field with a large number of distinct values, then I would say you have done something wrong.
So I would look at redoing the tables: having a separate table for 'partner', referenced from trolls, would help.
But to give you a solution that speeds this up a touch, moves you towards a better database/table setup and removes the locking problem, I would do this:
Step 1.
Create a table called
Partners
Field1: partner_id
Field2: partner
Field3: p1
Field4: p2
Field5: p3
Step 2:
Run the query
SELECT partner FROM trolls (this could be changed in the future to SELECT * FROM partners)
Step 3:
Check whether they are already in Partners; if not, insert them.
Step 4:
Run your
SELECT
MIN(partner1) AS p1,
MAX(partner2) AS p2,
COUNT(partner3) AS p3
FROM trolls WHERE partner='". $item["partner"] ."'
Step 5:
Update the values from this into the Partners table and (for the time being) into the trolls table as well.
Done.
Oh, and in case it's not already there, add an index on the partner field.
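If it helps, here is a rough SQL sketch of steps 1 to 5 in one pass (the column types are guesses, so adjust them to your data):
-- Hypothetical sketch of the Partners summary table.
CREATE TABLE IF NOT EXISTS Partners (
    partner_id INT AUTO_INCREMENT PRIMARY KEY,
    partner    VARCHAR(100) NOT NULL UNIQUE,
    p1         DECIMAL(10,2),
    p2         DECIMAL(10,2),
    p3         INT
);

-- Insert partners that are missing and refresh p1/p2/p3 for the ones already there.
INSERT INTO Partners (partner, p1, p2, p3)
SELECT partner, MIN(partner1), MAX(partner2), COUNT(partner3)
FROM trolls
GROUP BY partner
ON DUPLICATE KEY UPDATE p1 = VALUES(p1), p2 = VALUES(p2), p3 = VALUES(p3);
You would still copy the values back into trolls for the time being, as described in step 5 above.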
You can do those 2 SELECTs in one:
SELECT partner, MIN(partner1) AS p1, MAX(partner2) AS p2, COUNT(partner3) AS p3
FROM trolls GROUP BY partner LIMIT 0, 50
Create a BTREE index on trolls (partner); with MySQL 5.6+ InnoDB this is an online, in-place operation, so it won't block normal reads and writes for long:
CREATE INDEX IX_TROLLS_PARTNER ON trolls (partner) USING BTREE;
If you choose to still do those 2 SELECTs separately, use PDO->prepare instead of PDO->query; from the PDO->prepare docs on php.net:
Calling PDO::prepare() and PDOStatement::execute() for statements that will be issued multiple times with different parameter values optimizes the performance of your application by allowing the driver to negotiate client and/or server side caching of the query plan and meta information
Maybe change max_execution_time in php.ini to a higher value if it's too low (I keep it at 300, i.e. 5 minutes, but every case is different :P).
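Putting that together, a minimal sketch of the nightly job using the single grouped SELECT plus a prepared UPDATE (mysqli procedural style, column names taken from your code; treat it as a starting point):
// Hypothetical sketch: one grouped SELECT, then one prepared UPDATE reused per partner.
$result = mysqli_query($con,
    "SELECT partner, MIN(partner1) AS p1, MAX(partner2) AS p2, COUNT(partner3) AS p3
     FROM trolls GROUP BY partner")
    or die(mysqli_error($con));

$update = mysqli_prepare($con,
    "UPDATE trolls SET partner1 = ?, partner2 = ?, partner3 = ?, partner4 = ? WHERE partner = ?");

while ($row = mysqli_fetch_assoc($result)) {
    $p1 = $row["p1"];
    $p2 = $row["p2"];
    $p3 = $row["p3"];
    $partner = $row["partner"];
    // Guard against division by zero when computing the percent difference.
    $savings = ($p1 != 0) ? round(($p1 - $p2) / $p1 * 100) : 0;
    mysqli_stmt_bind_param($update, "dddds", $p1, $p2, $p3, $savings, $partner);
    mysqli_stmt_execute($update);
}
mysqli_stmt_close($update);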

php pagination slow on large or last pages

I have over 6000 results, which come to more than 235 pages using pagination. When I click the first page it loads really fast, ~300 ms, up until around the 40th page. After that it really goes downhill, with about 30-40+ seconds of page load time. I am using an indexed database. I tried the MySQL query cache, but did not like it. Can someone help me out?
php:
$sql = mysql_query("SELECT * FROM data WHERE (car = '$cars') AND (color = '$color' AND price BETWEEN '".$min."' AND '".$max."')
ORDER BY price LIMIT {$startpoint} , {$limit}");
Index:
data 0 PRIMARY 1 id A 106199 NULL NULL BTREE
data 1 car_index 1 car A 1799 NULL NULL BTREE
data 1 car_index 2 color A 2870 NULL NULL BTREE
data 1 car_index 3 price A 6247 NULL NULL BTREE
data 1 car_index 4 location A 106199 NULL NULL BTREE
This is a common issue with MySQL (and other database systems). Using LIMIT + OFFSET (which is what you are using implicitly with LIMIT x, y) works great at first but gets slower and slower as the offset grows, because MySQL still has to read and throw away every skipped row.
Adding an index is definitely a good first step, as you should always query data based on an index, to avoid full table scans.
Only having an index on price won't be enough as you have other WHERE attributes. Basically, this is what MySQL is doing:
Assuming that $limit = 25 and $startPoint = 0, MySQL will start reading the table from the beginning, stop after it finds 25 matching rows and return them. Let's assume it read 500 rows for this first iteration. On the next iteration, because it does not have an index on car + color + price, it does not know how to jump directly to the 26th matching row (the 501st row in the table), so it starts reading from the beginning again, skips the first 25 matching rows and returns the next 25 matching rows. Let's assume this iteration also required 500 extra rows to be read.
Now you see what's going wrong. For every iteration, MySQL has to read all the rows from the beginning again, so each page takes longer to return than the one before it.
In my example, to fetch 100 (25 * 4 iterations) rows, MySQL will have to read 500 + 1000 + 1500 + 2000 = 5000 rows while you could expect it to only read 500 * 4 = 2,000 rows. To fetch 1000 (25 * 40 iterations) rows, MySQL will have to read 500 + 1000 + 1500 + ... 20000 = 410,000 rows!! That's way more than the 500 * 40 = 20,000 rows you could expect.
To optimize your query, first only select the data you need (no SELECT *). Then the trick is to remember the last fetched id.
$lastFetchedId = 0;
do {
    // Seek on id instead of using an offset; note the ORDER BY id so that
    // "id > $lastFetchedId" really does continue where the last batch stopped.
    $sql = mysql_query("SELECT * FROM data WHERE id > $lastFetchedId AND (car = '$cars' AND color = '$color' AND price BETWEEN '".$min."' AND '".$max."')
        ORDER BY id LIMIT {$limit}");
    $hasFoundRows = false;
    while ($row = mysql_fetch_assoc($sql)) {
        $hasFoundRows = true;
        $lastFetchedId = $row['id'];
        // do something with the row
    }
} while ($hasFoundRows);
Having MySQL taking care of the ordering works well only if you have an index on all the columns you are using in the WHERE clause. Think about it this way: if the data is not sorted, how would MySQL know which rows will match and where the matching rows are. To be able to sort the results and only return a subset, MySQL needs to build a sorted list of ALL the rows that actually match. This means going through the entire table to first get all the matching rows, then sort them and finally return only a few of them.
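Concretely, for the query above that means a composite index whose leading columns are the equality filters and whose last column is the range/sort column, something like the sketch below (the name is just an example; your existing car_index already starts with these columns):
ALTER TABLE data ADD INDEX idx_car_color_price (car, color, price);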
Hope that helps you understand better what you can do better here :)
It would be a good idea to post the table structure here so we can see what indexes you have.
Please add an index on the price column; it should improve the query performance.
Cheers

mysql order by rand() performance issue and solution

I was using ORDER BY RAND() to generate random rows from the database without any issue, but I realised that as the database size increases, RAND() causes heavy load on the server. So I was looking for an alternative, and I tried generating one random number using PHP's rand() function and putting that as the id in the MySQL query. It was very fast, since MySQL already knows the row id.
But the issue is that in my table not all ids are available; for example they run 1, 2, 5, 9, 12 and so on.
If PHP's rand() generates a number like 3 or 4, the query returns nothing, as there is no row with id 3 or 4.
What is the best way to generate random numbers, preferably in PHP, so that only ids that actually exist in the table are produced? It must check that table. Please advise.
$id23=rand(1,100000000);
SELECT items FROM tablea where status='0' and id='$id23' LIMIT 1
the above query is fast but sometimes generates an id which is not available in the database.
SELECT items FROM tablea where status=0 order by rand() LIMIT 1
the above query is too slow and causes heavy load on server
First of all, generate a random value from 1 to MAX(id), not 100000000.
Then there are at least a couple of good solutions:
Use > not =
SELECT items FROM tablea where status='0' and id>'$id23' LIMIT 1
Create an index on (status,id,items) to make this an index-only query.
Use =, but just try again with a different random value if you don't find a hit. Sometimes it will take several tries, but often it will take only one. The = should be faster since it can use the primary key, and if it gets a hit in one try 90% of the time, that can make up for the other 10% of the time when it takes more than one try. It depends on how many gaps you have in your id values.
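A rough PHP sketch of that retry loop, in case it is useful (the names are made up; it assumes the gaps are not huge, otherwise the loop may take many tries):
// Hypothetical sketch: pick a random id and retry until it hits an existing row.
function randomItem(mysqli $con) {
    $maxRow = mysqli_fetch_row(mysqli_query($con, "SELECT MAX(id) FROM tablea"));
    $max = (int) $maxRow[0];
    do {
        $id = rand(1, $max);
        $res = mysqli_query($con,
            "SELECT items FROM tablea WHERE status = '0' AND id = $id LIMIT 1");
        $row = mysqli_fetch_assoc($res);
    } while ($row === null); // id fell into a gap (or status != 0), try another one
    return $row["items"];
}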
Use your DB to find the max value from the table, generate a random number less than or equal to that value, grab the first row in which the id is greater than or equal to your random number. No PHP necessary.
SELECT items
FROM tablea
WHERE status = '0' and
id >= FLOOR(1 + RAND() * (SELECT MAX(id) FROM tablea))
LIMIT 1
You are correct, ORDER BY RAND() is not a good solution if you are dealing with large datasets. Depending on how often it needs to be randomized, what you can do is generate a column with a random number and then update that number at some predefined interval.
You would take that column and use it as your sort index. This works well for a heavy read environment and produces a predictable random order for a certain period of time.
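A sketch of that approach, with made-up column and index names (refresh the random column from a cron job at whatever interval suits you):
-- Hypothetical sketch: pre-compute a random sort value and refresh it periodically.
ALTER TABLE tablea
    ADD COLUMN rand_sort DOUBLE NOT NULL DEFAULT 0,
    ADD INDEX idx_status_rand (status, rand_sort);

-- Run this from the cron job; RAND() is evaluated once per row.
UPDATE tablea SET rand_sort = RAND();

-- At read time the "random" pick is just an index lookup.
SELECT items FROM tablea WHERE status = '0' ORDER BY rand_sort LIMIT 1;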
A possible solution is to use limit:
$id23 = rand(0, $numberOfRows - 1);
SELECT items FROM tablea where status='0' LIMIT $id23, 1
This won't produce any missed rows but (as hek2mgl mentioned) it requires knowing the number of rows in the select.
