PHP vs MySQL NULL check

I have a database with a very large number of records. More specifically, I have a query with multiple sub-queries that fetches records to display on the front end.
I want to show a default value if a field contains NULL.
I would like to find out which of the approaches below gives better performance. I am using the MySQL approach for now, but it takes a lot of time.
MySQL Approach
SELECT A.columnA,
       IFNULL((SELECT B.columnB FROM tableB B), 'default-value') AS columnB,
       IFNULL((SELECT C.columnC FROM tableC C), 'default-value') AS columnC,
       IFNULL((SELECT D.columnD FROM tableD D), 'default-value') AS columnD
FROM tableA A
WHERE 1=1
ORDER BY A.columnA ASC
LIMIT 20
PHP Approach
foreach ($recordsFromDB as $record) {
    if (is_null($record->columnB)) {
        echo 'default-value';
    } else {
        echo $record->columnB;
    }
}

There isn't one perfect way to do anything in programming, which is why a lot of solutions exist. So, starting from that point: in my opinion, checking for NULL values should be done in PHP, and the results you get can then be passed on for further MySQL processing.
You will obviously have a tough time processing that many sub-queries using MySQL alone.
My advice: check for NULL values and replace them in PHP, and only hit MySQL when you actually have to fetch or insert values in the database.
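For what it's worth, here is a minimal sketch of that approach, assuming a PDO connection in $pdo and with the sub-queries collapsed into plain columns purely for brevity:

<?php
// Sketch only: fetch the rows without IFNULL() and substitute the
// placeholder in PHP. $pdo and the table/column names follow the question.
$stmt = $pdo->query(
    "SELECT A.columnA, A.columnB, A.columnC, A.columnD
     FROM tableA A
     ORDER BY A.columnA ASC
     LIMIT 20"
);

foreach ($stmt->fetchAll(PDO::FETCH_OBJ) as $record) {
    // The null coalescing operator (PHP 7+) swaps in the default for NULL.
    echo $record->columnA, ' ',
         $record->columnB ?? 'default-value', ' ',
         $record->columnC ?? 'default-value', ' ',
         $record->columnD ?? 'default-value', "\n";
}

In practice the sub-queries themselves are likely to dominate the cost, so it is worth profiling them separately from the NULL handling.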

Related

SQL need help to group my results

Hey guys, I'm looking to display some data from my Oracle DB. I'm looking to group it by a common column but can't figure it out.
$stid = oci_parse($conn, "SELECT REQ.REQSTN_NO, REQ.WO_NO, REQ.COST_CENTRE_CD, REQ.ACCT_PRIME_CD, REQ.ACCT_SUBSDRY_CD, REQ.STOCK_CD
FROM TE.REQSTN REQ
WHERE REQ.DEPT_CD = 'ISN'");
oci_execute($stid);
while (($row = oci_fetch_array($stid, OCI_BOTH + OCI_RETURN_NULLS)) !== false) {
    echo $row['COST_CENTRE_CD'] . "-" . $row['ACCT_PRIME_CD'] . "-" . $row['ACCT_SUBSDRY_CD'] . " " . $row['WO_NO'] . " " . $row['REQSTN_NO'] . " " . $row['STOCK_CD'] . "<br />";
}
I'm looking to create output like this:
I've tried GROUP BY and SUM/COUNT but I don't know how to structure the code properly. Any help would be appreciated.
This is not a real database "grouping" -- it is a display issue: you want to group rows with common column values together and print each shared column value only once.
Such display issues are best left to the presentation layer of your application and best left out of the SQL/data model layer.
Nevertheless, here is a technique you can use to group common column values together and to print each value only once, using SQL.
(Since you didn't provide your data in text form, this example uses DBA_OBJECTS to illustrate the technique).
SELECT
-- Order the row_number () partitions the same way the overall query is ordered...
case when row_number() over (partition by object_type order by object_type, owner, object_name) = 1 THEN object_type ELSE NULL END object_type,
case when row_number() over (partition by object_type, owner order by object_type, owner, object_name) = 1 THEN owner ELSE NULL END owner,
object_name,
created, last_ddl_time
FROM dba_objects o
ORDER BY
-- Important to qualify columns in ORDER BY...
o.object_type, o.owner, o.object_name;
The idea is that the CASE expressions check whether this is the first row of a new shared value and, only if so, print the column value; otherwise they print NULL.
You would need to use an object-relational database to achieve such a result.
Edited answer:
In MySQL you can use the GROUP_CONCAT function.
See reference: https://dev.mysql.com/doc/refman/5.5/en/group-by-functions.html#function_group-concat
I believe there is a similar solution in Oracle. You would need to refer to the following question:
Is there any function in oracle similar to group_concat in mysql?
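As a rough illustration of the GROUP_CONCAT idea (MySQL syntax, with an invented table and columns rather than the OP's Oracle schema; $pdo is assumed):

<?php
// Sketch only: GROUP_CONCAT collapses each group's detail values into one
// comma-separated string, so the shared key is printed once per output row.
$sql = "SELECT cost_centre_cd,
               GROUP_CONCAT(reqstn_no ORDER BY reqstn_no SEPARATOR ', ') AS reqstn_list
        FROM reqstn
        GROUP BY cost_centre_cd";

foreach ($pdo->query($sql) as $row) {
    echo $row['cost_centre_cd'], ': ', $row['reqstn_list'], "<br />";
}

(The Oracle counterpart discussed in that linked question is LISTAGG.)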

How to order the ORDER BY using the IN() mysql? [duplicate]

I am wondering if there is a way (possibly a better way) to order by the order of the values in an IN() clause.
The problem is that I have 2 queries, one that gets all of the IDs and the second that retrieves all the information. The first creates the order of the IDs which I want the second to order by. The IDs are put in an IN() clause in the correct order.
So it'd be something like (extremely simplified):
SELECT id FROM table1 WHERE ... ORDER BY display_order, name
SELECT name, description, ... WHERE id IN ([id's from first])
The issue is that the second query does not return the results in the same order that the IDs are put into the IN() clause.
One solution I have found is to put all of the IDs into a temp table with an auto incrementing field which is then joined into the second query.
Is there a better option?
Note: As the first query is run "by the user" and the second is run in a background process, there is no way to combine the 2 into 1 query using sub queries.
I am using MySQL, but I'm thinking it might be useful to have it noted what options there are for other DBs as well.
Use MySQL's FIELD() function:
SELECT name, description, ...
FROM ...
WHERE id IN([ids, any order])
ORDER BY FIELD(id, [ids in order])
FIELD() returns the position of its first parameter within the list formed by the remaining parameters.
FIELD('a', 'a', 'b', 'c')
will return 1
FIELD('a', 'c', 'b', 'a')
will return 3
This will do exactly what you want if you paste the ids into the IN() clause and the FIELD() function in the same order.
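In the OP's two-query setup, the two pieces might be wired together roughly like this (a sketch using PDO; $pdo, table1/table2 and the column names are placeholders taken from the simplified queries above):

<?php
// Sketch: reuse the ordered ids from the first query in both the IN() list
// and the FIELD() list of the second query.
$ids = $pdo->query("SELECT id FROM table1 ORDER BY display_order, name")
           ->fetchAll(PDO::FETCH_COLUMN);

if ($ids) {
    $in = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $pdo->prepare(
        "SELECT name, description
         FROM table2
         WHERE id IN ($in)
         ORDER BY FIELD(id, $in)"
    );
    // The same ids are bound twice, in the same order.
    $stmt->execute(array_merge($ids, $ids));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
}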
See the following example of how to get sorted data.
SELECT ...
FROM ...
WHERE zip IN (91709,92886,92807,...,91356)
AND user.status=1
ORDER
BY provider.package_id DESC
, FIELD(zip,91709,92886,92807,...,91356)
LIMIT 10
Two solutions that spring to mind:
order by case id when 123 then 1 when 456 then 2 else null end asc
order by instr(',123,456,', ','||id||',') asc
(instr() is from Oracle; maybe you have locate() or charindex() or something like that)
If you want to do arbitrary sorting on a query using values supplied to the query in MS SQL Server 2008+, it can be done by creating a table on the fly and doing a join, like so (using the nomenclature from the OP).
SELECT table1.name, table1.description ...
FROM (VALUES (id1,1), (id2,2), (id3,3) ...) AS orderTbl(orderKey, orderIdx)
LEFT JOIN table1 ON orderTbl.orderKey=table1.id
ORDER BY orderTbl.orderIdx
If you replace the VALUES statement with something else that does the same thing, but in ANSI SQL, then this should work on any SQL database.
Note:
The second column in the created table (orderTbl.orderIdx) is necessary when querying record sets larger than 100 or so. I originally didn't have an orderIdx column, but found that with result sets larger than 100 I had to explicitly sort by that column; in SQL Server Express 2014 anyways.
SELECT ORDER_NO, DELIVERY_ADDRESS
from IFSAPP.PURCHASE_ORDER_TAB
where ORDER_NO in ('52000077','52000079','52000167','52000297','52000204','52000409','52000126')
ORDER BY instr('52000077,52000079,52000167,52000297,52000204,52000409,52000126',ORDER_NO)
This worked really well.
Another way to get sorted data:
SELECT ...
FROM ...
ORDER BY FIELD(user_id,5,3,2,...,50) LIMIT 10
The IN clause describes a set of values, and sets do not have order.
Your solution with a join and then ordering on the display_order column is the most nearly correct solution; anything else is probably a DBMS-specific hack (or is doing some stuff with the OLAP functions in standard SQL). Certainly, the join is the most nearly portable solution (though generating the data with the display_order values may be problematic). Note that you may need to select the ordering columns; that used to be a requirement in standard SQL, though I believe it was relaxed as a rule a while ago (maybe as long ago as SQL-92).
Use MySQL's FIND_IN_SET function:
SELECT *
FROM table_name
WHERE id IN (.., .., .., ..)
ORDER BY FIND_IN_SET(column_name, '..,..,..,..');
For Oracle, John's solution using the instr() function works. Here's a slightly different solution that worked:
SELECT id
FROM table1
WHERE id IN (1, 20, 45, 60)
ORDER BY instr('1, 20, 45, 60', id)
I just tried to do this in MS SQL Server, where we do not have FIELD():
SELECT table1.id
...
INNER JOIN
(VALUES (10,1),(3,2),(4,3),(5,4),(7,5),(8,6),(9,7),(2,8),(6,9),(5,10)
) AS X(id,sortorder)
ON X.id = table1.id
ORDER BY X.sortorder
Note that I am allowing duplication too.
Give this a shot:
SELECT name, description, ...
WHERE id IN
(SELECT id FROM table1 WHERE...)
ORDER BY
(SELECT display_order FROM table1 WHERE...),
(SELECT name FROM table1 WHERE...)
The WHEREs will probably take a little tweaking to get the correlated subqueries working properly, but the basic principle should be sound.
My first thought was to write a single query, but you said that was not possible because one is run by the user and the other is run in the background. How are you storing the list of ids to pass from the user to the background process? Why not put them in a temporary table with a column to signify the order.
So how about this (a rough sketch follows the steps below):
1. The user-interface part runs and inserts values into a new table you create. It would insert the id, the position, and some sort of job-number identifier.
2. The job number is passed to the background process (instead of all the ids).
3. The background process selects from the table created in step 1, and you join in whatever other information you require. It uses the job number in the WHERE clause and orders by the position column.
4. The background process, when finished, deletes from the table based on the job identifier.
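A minimal sketch of that hand-off; the id_queue table, $jobNo, $orderedIds and the column names are all invented for illustration:

<?php
// UI side: remember the ids in the order the user-facing query produced them.
$ins = $pdo->prepare("INSERT INTO id_queue (job_no, position, id) VALUES (?, ?, ?)");
foreach ($orderedIds as $position => $id) {
    $ins->execute([$jobNo, $position, $id]);
}

// Background side: join back to the data and order by the stored position.
$stmt = $pdo->prepare(
    "SELECT t.name, t.description
     FROM id_queue q
     JOIN table1 t ON t.id = q.id
     WHERE q.job_no = ?
     ORDER BY q.position"
);
$stmt->execute([$jobNo]);

// When the job is done, clear the hand-off rows.
$pdo->prepare("DELETE FROM id_queue WHERE job_no = ?")->execute([$jobNo]);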
I think you should store your data in a way that lets you simply do a join, so there are no hacks or complicated things going on.
For instance, I have a "Recently played" list of track ids; on SQLite I simply do:
SELECT * FROM recently NATURAL JOIN tracks;

PHP mssql count rows in a statement

I would like to count the number of rows in a statement returned by a query. The only solutions I found were:
sqlsrv_num_rows() – this seems a bit too complicated for such a simple task, and I have read that using it slows down execution quite a bit.
Executing a query with SELECT COUNT – this seems unnecessary, it also slows down execution, and if you already have a statement, why bother with another query?
Counting the rows while generating a table – since I have to generate an HTML table from the statement anyway, I could put a variable in the table-generating loop and increment it by one, but this only works when you already have to loop through the entire statement.
Am I missing some fundamental function and/or piece of knowledge, or is there no simpler way?
Any help or guidance is appreciated.
EDIT: The statement returned is only a small portion of the original table so it wouldn't be practical to execute another query for this purpose.
In SQL Server, table row counts are stored in the catalog views and dynamic management views; you can use them to find the count.
This method only works for physical tables, so you could store the records in a temp table and drop it later:
SELECT Sum(p.rows)
FROM sys.partitions AS p
INNER JOIN sys.tables AS t
ON p.[object_id] = t.[object_id]
INNER JOIN sys.schemas AS s
ON t.[schema_id] = s.[schema_id]
WHERE p.index_id IN ( 0, 1 ) -- heap or clustered index
AND t.NAME = N'tablename'
AND s.NAME = N'dbo';
For more info check this article
If you don't want to execute another query, then use SELECT @@ROWCOUNT after the query. It will get the count of rows returned by the previous SELECT query:
select * from query_you_want_to_find_count
select @@rowcount
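From PHP's sqlsrv driver, one way to wire that up is to send both statements as a single batch so @@ROWCOUNT still refers to the preceding SELECT. A rough sketch; $conn and the query are placeholders:

<?php
// Sketch: both statements go in one batch; the second result set carries
// the row count of the first SELECT.
$sql = "SELECT * FROM your_table WHERE some_condition = 1;
        SELECT @@ROWCOUNT AS row_count;";
$stmt = sqlsrv_query($conn, $sql);

// First result set: the actual rows (build the HTML table here).
while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
    // ... output the row ...
}

// Second result set: the count.
sqlsrv_next_result($stmt);
$count = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)['row_count'];

Alternatively, sqlsrv_num_rows() does work without an extra query, but only when the statement is opened with a scrollable cursor, e.g. sqlsrv_query($conn, $sql, [], ['Scrollable' => SQLSRV_CURSOR_KEYSET]).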

PHP array_diff VS mysql NOT IN

I tried to compare two zipcode columns between two tables to see if values were missing in the second one.
I first wanted to do it with MySQL; my query was something like:
'SELECT code FROM t1 WHERE code NOT IN (SELECT code FROM t2)'
But it was really slow, so I tried another way:
I made two SELECTs and then compared the results with array_diff().
With MySQL: a few minutes, and sometimes a crash.
With PHP: less than 1 second.
Can someone explain these differences ?
Is my SQL query wrong ?
If your main table has 50k rows, using a sub-select in your query results in 1 + 50k select executions: one for the main table and one more for each of its rows. The server compares each row against the sub-select, which is re-evaluated while iterating over the main table. That is why your SQL takes so long, and it can be a huge memory problem as well.
See serjoscha's information about joins to fix this in SQL; it should be even faster than your PHP solution.
Checking which values are missing in one table (compared to another) can easily be done with a LEFT or RIGHT JOIN; they are made for exactly this kind of task. Alternatively, take a look at this: How to Find Missing Value Between Two Mysql Tables – serjoscha
One solution to:
SELECT code FROM t1
WHERE code NOT IN ( SELECT code FROM t2 )
will be:
SELECT t1.code
FROM t1
LEFT JOIN t2
ON t1.code = t2.code
WHERE t2.code is null
Have a try. Also have a look at indexing, as Cyclone suggests:
If you don't have an index you should definitely add one, since this will speed up your query. You could add an index like this: ALTER TABLE ADD INDEX code_idx (code); this should be done for both tables. If you then execute EXPLAIN for the query, you should see something like Using where; Using index; Using join buffer, which is good – Cyclone
Indexing speeds up your query. If the table only provides one column, an index with the same content as the source table is redundant. Otherwise I strongly recommend indexing the code column of t2, which gives a big increase in performance and lower memory consumption.
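Concretely, the index suggestion from the comment might look something like this (a sketch; $pdo and the t1/t2 names follow the question, and code_idx is just a label):

<?php
// Sketch: add an index on the compared column of both tables.
$pdo->exec("ALTER TABLE t1 ADD INDEX code_idx (code)");
$pdo->exec("ALTER TABLE t2 ADD INDEX code_idx (code)");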

mssql/php - Looping through resultset and performing new query efficiently - In-Memory Tables?

I am fairly new to MSSQL, have never used in-memory tables, and am not 100% sure if this is what I need.
I have a result set from a query (which cannot be amended) and I loop through it and display each row of data. For one field of the result set I need to perform a query to get the relevant display data for that field. The issue is that I may have to call this query thousands of times within the loop, depending on how many rows there are in the result set.
Can anyone advise on ways to do this efficiently? I have heard of in-memory tables and am wondering if that is what I need. If so, where do I start? Or do I simply store the lookup data in an array or something?
Any advice much appreciated.
Declare @Test_Memory_Table Table
(
    /* Fields needed for the lookup; use the same datatypes as the schema */
    IndexOrPkFieldName Numeric(19,0),
    Field1 varchar(255),
    Field2 date
)

Insert into @Test_Memory_Table (IndexOrPkFieldName, Field1, Field2)
Select t1.pkId, t2.Field1, t3.Field3
From Table1 t1
INNER JOIN Table2 t2 ON t1.pkId = t2.pkId AND ISNULL(t2.IS_ACTIVE, 0) = 1 AND ISNULL(t2.TYPE, 0) > 0
INNER JOIN Table3 t3 ON t2.pkId = t3.pkId

Select * From @Test_Memory_Table
Just test the query in SSMS and look at the plan to see how long the table-variable query takes compared with querying the tables directly. Remember that SSMS can be faster than production because of session settings that SSMS defaults on (e.g. ARITHABORT) but that may not be set the same way when querying through the .NET client. If your tables are small, I would expect the difference to be marginal.
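As for the question's other idea, pre-loading the lookup data into a plain PHP array and reading from it inside the loop avoids the per-row queries entirely. A rough sketch using the sqlsrv driver; $conn, $resultSet, the lookup table and the column names are placeholders:

<?php
// Sketch: run the lookup query once, index the results by key, then use the
// array inside the display loop instead of issuing a query per row.
$lookup = [];
$stmt = sqlsrv_query($conn, "SELECT lookup_key, display_value FROM lookup_table");
while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
    $lookup[$row['lookup_key']] = $row['display_value'];
}

foreach ($resultSet as $row) {
    // One array read replaces one query per row.
    echo $lookup[$row['lookup_key']] ?? '';
}

Whether this or the table-variable join works out better mostly depends on how large the lookup set is.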
