Laravel Query Optimization (SQL) - PHP

My user_level database structure is
| user_id | level |
| 3 | F |
| 4 | 13 |
| 21 | 2 |
| 24 | 2 |
| 33 | 3 |
| 34 | 12+ |
I have another table users
| id | school_id |
| 3 | 3 |
| 4 | 4 |
| 21 | 2 |
| 24 | 2 |
| 33 | 3 |
| 34 | 1 |
What I have to achieve: I need to update the level of each user based on a predefined condition. However, my users table is huge, with thousands of records.
At any one time, I only update the user_level records for a particular school. Say for school_id = 3: I fetch all the users and their associated levels, then increase the value of level by 1 for those users (F becomes 1, 12+ is deleted, and all other numbers are increased by 1).
When I use a loop to go through the users, match their user_id, and then update each record, that is thousands of queries. It slows down the entire application and even causes it to crash.
One obvious candidate would be Laravel transactions, but I have doubts that they optimise the time. I tested it on a simple query with around 6000 records and it worked fine, but for some reason it doesn't work that well with the records that I have.
I'm just looking for recommendations on other query-optimization techniques.
UPDATE
I implemented a solution, where I would group all the records based on the level (using laravel collections), and then I would only have to issue 13 update queries as compared to hundreds/thousands now.
$students = Users::where('school_id', 21)->get();
$groupedStudents = $students->groupBy('level');
foreach ($groupedStudents as $level => $group) :
    $studentIDs = $group->pluck('id');
    // condition to check and get the new value to update;
    // I used switch cases to identify what the next level should be ($nextLevel)
    UserLevel::whereIn('userId', $studentIDs)->update(["level" => $nextLevel]);
endforeach;
I am still looking for other possible options.

First, define a relationship in your model.
In the UserLevel model:
public function user() {
    return $this->belongsTo(\App\User::class);
}
Then you can update every level below 12+ in a single query, and delete all the 12+ rows in another single query:
UserLevel::where('level', '<=', 12)->whereHas('user', function($user) {
    $user->where('school_id', 3);
})->update(['level' => DB::raw("IF(level = 'F', 1, level + 1)")]);

UserLevel::whereHas('user', function($user) {
    $user->where('school_id', 3);
})->where('level', '>', 12)->delete();
If your data set is very large, you can also use chunking to split the work and reduce memory consumption, like this:

UserLevel::where('level', '<=', 12)->whereHas('user', function($user) {
    $user->where('school_id', 3);
})->chunkById(5000, function($userLevels) {
    // The callback receives a Collection, which has no update() method,
    // so re-query by primary key; chunkById keeps the paging stable while rows change.
    UserLevel::whereIn('id', $userLevels->pluck('id'))
        ->update(['level' => DB::raw("IF(level = 'F', 1, level + 1)")]);
});

UserLevel::whereHas('user', function($user) {
    $user->where('school_id', 3);
})->where('level', '>', 12)->delete();

Related

How to sum a column from a related model efficiently in Laravel

Ok I got this table
affiliates_referral_clicks
id | affiliate_id | clicks | date
1 | 1 | 10 | 2021-07-14
2 | 1 | 2 | 2021-07-11
3 | 2 | 1 | 2021-07-11
4 | 2 | 14 | 2021-07-10
...
Of course my Model Affiliate has a relationship with referralClicks
Affiliate.php
public function referralClicks(){
return $this->hasMany(AffiliateReferralClick::class,'affiliate_id');
}
Now I want to fetch all Affiliates with the SUM of all their clicks whose date lies between two given dates. I implemented it like this:
$affiliates = Affiliate::with(['referralClicks' => function($query) use ($params) {
    $query->whereDate('date', '>=', $params['dateFrom'])
          ->whereDate('date', '<=', $params['dateTo'])
          ->select('affiliate_id', 'clicks'); // the foreign key is needed so Eloquent can match the rows
}])->get();
foreach ($affiliates as $affiliate) {
    $affiliate->totalClicks = $affiliate->referralClicks->sum('clicks');
}
This works fine, but since the affiliates_referral_clicks table is way too big, the request ends up being too slow. I think that if you write the query without Eloquent's helpers you can get a much faster one.
So my question is: how can I do the same thing with raw queries (or whatever the most efficient way is)? I'm using a MySQL DB. I hope you guys can help me!
Haven't tried it yet, but this is how I'd solve it (assuming you only need the sum and nothing else from the relationship):
$affiliates = Affiliate::withSum(['referralClicks as totalClicks' => function($query) use ($params) {
    $query->whereDate('date', '>=', $params['dateFrom'])
          ->whereDate('date', '<=', $params['dateTo']);
}], 'clicks')->get();
This runs the sum as a subquery inside a single SQL statement instead of loading the related rows into memory.
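The raw-SQL shape the asker is after is a single grouped aggregate. A sketch of that query, demonstrated with SQLite from Python (table name and sample rows taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE affiliates_referral_clicks (
    id INTEGER PRIMARY KEY, affiliate_id INTEGER, clicks INTEGER, date TEXT);
INSERT INTO affiliates_referral_clicks VALUES
    (1, 1, 10, '2021-07-14'),
    (2, 1,  2, '2021-07-11'),
    (3, 2,  1, '2021-07-11'),
    (4, 2, 14, '2021-07-10');
""")

# One grouped query instead of summing per affiliate in PHP.
rows = conn.execute("""
    SELECT affiliate_id, SUM(clicks) AS totalClicks
    FROM affiliates_referral_clicks
    WHERE date BETWEEN ? AND ?
    GROUP BY affiliate_id
    ORDER BY affiliate_id
""", ("2021-07-11", "2021-07-14")).fetchall()
print(rows)
```

Joining this aggregate back to the affiliates table (or letting withSum generate the equivalent subquery) avoids pulling every click row into PHP.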

Join arrays into one associative array

I am saving many waypointIDs, bearings, and distances in three columns of a single row of my database. Each column stores its respective items separated by commas.
WaypointIds: 510,511,512
Bearings: 65,50,32
Distances: 74,19,14
I think I might have coded myself into a corner! I now need to pull them from the database and run through adding them to a table to output on screen.
I'm using the following to put the corresponding columns back into an array but am a little stuck with where to go after that, or if that is even the way to go about it.
$waypointArrays = explode(",", $waypointsString);
$bearingArrays = explode(",", $bearingsString);
($waypointsString and $bearingsString are the variables set by my database call.)
I think I need to add them all together so I can iterate through each one in turn.
For example, waypointId 510 will have bearing 065 and distance 0.74.
I think I need an associative array but not sure how to add them all together so I can run through each waypoint ID and pull out the rest of the entry.
e.g. for each waypointId give the corresponding bearing and waypoint.
I have checks in place to ensure that as we add waypoints/bearings/distances that they don't fall out of step with each other so there will always be the same number of entries in each column.
Don't continue with this design: your database is not normalised, so you are not getting any of the benefits a database can offer.
Working around the problem by extracting the information in PHP with explode, etc. is not the right way, so let me clarify how your database should be normalised.
Currently you have a table like this (possibly with many more columns):
Main table: route
+----+---------+-------------+----------+-----------+
| id | Name | WaypointIds | Bearings | Distances |
+----+---------+-------------+----------+-----------+
| 1 | myroute | 510,511,512 | 65,50,32 | 74,19,14 |
| .. | .... | .... | .... | .... |
+----+---------+-------------+----------+-----------+
The comma-separated lists violate the first normal form:
A relation is in first normal form if and only if the domain of each attribute contains only atomic (indivisible) values, and the value of each attribute contains only a single value from that domain.
You should resolve this by moving these three columns into a separate table, with one record for each atomic triplet:
Main table: route
+----+---------+
| id | Name |
+----+---------+
| 1 | myroute |
| .. | .... |
+----+---------+
new table route_waypoint
+----------+-------------+------------+----------+
| route_id | waypoint_id | bearing_id | distance |
+----------+-------------+------------+----------+
| 1 | 510 | 65 | 74 |
| 1 | 511 | 50 | 19 |
| 1 | 512 | 32 | 14 |
| 2 | ... | .. | .. |
| .. | ... | .. | .. |
+----------+-------------+------------+----------+
The first column is a foreign key referencing the id of the main table.
To select the data you need, you could have an SQL like this:
select route.*, rw.waypoint_id, rw.bearing_id, rw.distance
from route
inner join route_waypoint rw on rw.route_id = route.id
order by route.id, rw.waypoint_id
Now PHP will receive the triplets (waypoint, bearing, distance) that belong together in the same record. You might need a nested loop while the route.id remains the same, but this is how it is done.
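To see the normalised design in action, here is a runnable sketch (SQLite from Python; the table names follow the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE route (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE route_waypoint (
    route_id INTEGER, waypoint_id INTEGER, bearing_id INTEGER, distance INTEGER);
INSERT INTO route VALUES (1, 'myroute');
INSERT INTO route_waypoint VALUES (1, 510, 65, 74), (1, 511, 50, 19), (1, 512, 32, 14);
""")

# Each fetched row is a complete (waypoint, bearing, distance) triplet; no explode() needed.
rows = conn.execute("""
    SELECT route.name, rw.waypoint_id, rw.bearing_id, rw.distance
    FROM route
    INNER JOIN route_waypoint rw ON rw.route_id = route.id
    ORDER BY route.id, rw.waypoint_id
""").fetchall()
print(rows)
```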
To answer your question: the code below will work as long as the waypointIds are unique. That being said, as others mentioned, fix your database layout; what you have here would really benefit from a separate table.
<?php
$waypointIds = [510, 511, 512];
$bearings = [65, 50, 32];
$distances = [74, 19, 14];
$output = [];
for ($i = 0; $i < count($waypointIds); $i++) {
    $output[$waypointIds[$i]] = [
        'bearing' => $bearings[$i],
        'distance' => $distances[$i]
    ];
}
print_r($output);
$waypoints = array(510, 511, 512);
$bearings = array(65, 50, 32);
$distances = array(74, 19, 14);
$res = array();
for ($i = 0; $i < count($waypoints); $i++) {
    $res[] = array(
        'waypoint' => $waypoints[$i],
        'bearing' => sprintf("%03d", $bearings[$i]), // zero-pad, e.g. 65 -> 065
        'distance' => $distances[$i] / 100 // e.g. 74 -> 0.74
    );
}
print_r($res);
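The same pairing technique, zipping three parallel lists into one keyed structure, sketched in Python for comparison (including the zero-padding and division shown above):

```python
waypoint_ids = [510, 511, 512]
bearings = [65, 50, 32]
distances = [74, 19, 14]

# zip() walks the three parallel lists in lockstep; the waypoint id becomes the key.
combined = {
    w: {"bearing": f"{b:03d}", "distance": d / 100}
    for w, b, d in zip(waypoint_ids, bearings, distances)
}
print(combined[510])
```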

get rows by date where "index" stops on current ID

1- I am sorry for the title; I couldn't describe my complex situation better.
2- I have a table for a double-entry accounting system in which I am trying to calculate the balance at a specific date and up to a specific transaction, and due to specific situations in the front-end I need to get the result in a single query.
Table example is like that:
| id | date | amount |
| --- | ---------- | ------ |
| 93 | 2018-03-02 | -200 |
| 94 | 2018-01-23 | 250 |
| 108 | 2018-03-05 | 400 |
| 120 | 2018-01-23 | 720 |
| 155 | 2018-03-02 | -500 |
| 170 | 2018-03-02 | 100 |
And here is the simple query that I am using inside a loop over every transaction, because I want to show the new BALANCE after every transaction is made:
... for ...
Transactions::where('date', '<=', $item->date)->get()
... end ...
That query returns the balance at the END of the day, i.e. up to and including the last transaction made that day, and that is not the result I want.
Desired result is achieved by something like:
... for ...
Transactions::where('date', '<=', $item->date)
-> and transaction is < index of current $item
->get()
... end ...
Of course I can't simply use the ID, because the IDs are not related to this ordering; the whole ordering and all calculation operations are date-based.
So basically what I want is a query to get all the transactions from the stone age up to a specific date, BUT excluding all the transactions made after the CURRENT one (in the loop).
For example, in the above table situation the query for:
Transaction ID # 93 should return: 93
Transaction ID # 94 should return: 94
Transaction ID # 108 should return: 94,120,93,155,170,108
Transaction ID # 120 should return: 94,120
Transaction ID # 155 should return: 94,120,155
..
...
....
The last transaction to get should be the current transaction.
I hope I have made it clear. I spent 3 days searching for a solution, and I came up with this slow method:
$ids = Transaction::where('date', '<=', $item->date)
    ->orderBy('date')->orderBy('id') // make the slicing order explicit
    ->pluck('id')
    ->all();
$upToCurrent = array_slice($ids, 0, array_search($item->id, $ids) + 1);
$currentBalance = Transaction::whereIn('id', $upToCurrent)->sum('amount');
return $currentBalance;
It is slow and dirty, I appreciate saving me with a simple solution in 1 query if this is possible.
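One way to get each running balance in a single query is to impose a deterministic order on the transactions with the pair (date, id) and sum everything at or before the current transaction's position. This assumes transactions on the same date are ordered by id, which matches most (though not all) of the expected lists above. A sketch with SQLite from Python; in Laravel the same WHERE clause can be built with where()/orWhere() plus sum('amount'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (id INTEGER PRIMARY KEY, date TEXT, amount INTEGER);
INSERT INTO transactions VALUES
    (93,  '2018-03-02', -200),
    (94,  '2018-01-23',  250),
    (108, '2018-03-05',  400),
    (120, '2018-01-23',  720),
    (155, '2018-03-02', -500),
    (170, '2018-03-02',  100);
""")

def balance_until(conn, tx_id):
    """Sum every transaction at or before tx_id in (date, id) order."""
    return conn.execute("""
        SELECT COALESCE(SUM(t.amount), 0)
        FROM transactions AS t
        JOIN transactions AS cur ON cur.id = ?
        WHERE t.date < cur.date
           OR (t.date = cur.date AND t.id <= cur.id)
    """, (tx_id,)).fetchone()[0]

print(balance_until(conn, 155))  # 250 + 720 - 200 - 500 = 270
```

The self-join replaces both the PHP array slicing and the second whereIn query with one statement per balance.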

Select data based on two constraints for each row, using JSON data as input

I have a database table that keeps a column for versioning of the entries. Client applications should be able to query the server with their current versions and IDs, and the server should respond with the rows that have a higher version for each of the entries. I am not an expert on MySQL, so I cannot see how to achieve this. I have tried various things, but I am currently far from producing anything that works efficiently.
Example:
Server data:
mysql> SELECT id, version FROM my_data;
+-----+---------+
| id | version |
+-----+---------+
| 1 | 0 |
| 2 | 1 |
| 3 | 2 | <-- The JSON data below has lower version, so this row should be selected.
| 4 | 0 |
| 5 | 1 |
| 6 | 0 |
| 7 | 1 | <-- The JSON data below has lower version, so this row should be selected.
| 8 | 1 |
| 9 | 4 | <-- The JSON data below has lower version, so this row should be selected.
| 10 | 1 |
+-----+---------+
10 rows in set (0.00 sec)
Data sent from the client:
The client then queries the server with the following JSON data (or whatever, but I have JSON in my case). The server side is PHP, and I need to parse this JSON data and include it in the query somehow. This is the data the client currently contains.
{
"my_data": [
{
"id": 1,
"version": 0
},
{
"id": 2,
"version": 1
},
{
"id": 3,
"version": 0
},
{
"id": 4,
"version": 0
},
{
"id": 5,
"version": 1
},
{
"id": 6,
"version": 0
},
{
"id": 7,
"version": 0
},
{
"id": 8,
"version": 1
},
{
"id": 9,
"version": 2
},
{
"id": 10,
"version": 1
}
]
}
In this example I want the MySQL query to return 3 rows; namely the 3 rows with id 3, 7 and 9 because the client version is lower than the server version, thus it needs to fetch some data for updating. How can I achieve this in a single, simple query? I do not want to run one query for each row, even if this is possible.
Desired result from the sample data:
The resulting data should be the rows in which the version in the database on the server side is greater than the data with the corresponding id in the JSON data set.
mysql> <INSERT PROPER QUERY HERE>;
+-----+---------+
| id | version |
+-----+---------+
| 3 | 2 |
| 7 | 1 |
| 9 | 4 |
+-----+---------+
3 rows in set (0.00 sec)
NOTE: This doesn't use PDO, just query-string generation, but it can be switched over easily.
To check each version you can build an OR condition for each id; just make sure first that the JSON is not empty:
$jsonData = json_decode($inputJson, true);
$jsonData = $jsonData['my_data'];
$conditions = array();
foreach ($jsonData as $data) {
    $conditions[] = '(id=' . (int) $data['id'] . ' and version>' . (int) $data['version'] . ')';
}
$string = 'select * from my_data where ' . implode(' or ', $conditions);
Result (for the first two entries of the sample JSON):
select * from my_data where (id=1 and version>0) or (id=2 and version>1)
Something like this, no?
try {
    $bdd = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'password');
} catch (Exception $e) {
    die('Error : ' . $e->getMessage());
}
$output = array();
$req = $bdd->prepare('SELECT `id`, `version` FROM my_data WHERE `id` = :id AND `version` > :version');
foreach ($yourJson as $object) {
    $req->execute(array('id' => $object['id'], 'version' => $object['version']));
    $data = $req->fetch();
    if (!empty($data)) {
        $output[] = $data;
    }
}
print_r($output);
You can do this with json_array_elements (note: this is PostgreSQL, not MySQL):
SELECT id, version
FROM my_data AS md, json_array_elements(md.json_col->'my_data') AS elem
WHERE md.id = (elem->>'id')::int
  AND md.version > (elem->>'version')::int;
json_col being the name of the JSON column.
Here are a few links that might be helpful.
More details in this related answer:
How do I query using fields inside the new PostgreSQL JSON datatype?
More about the implicit CROSS JOIN LATERAL in the last paragraph of this related answer:
PostgreSQL unnest() with element number
Advanced example:
Query combinations with nested array of records in JSON datatype
Hope this helps you find a solution.
SELECT id, version FROM my_data WHERE `id` = $APP_ID AND `version` > $APP_VERSION;
Replace $APP_ID with an actual item ID and $APP_VERSION with the version coming from the incoming JSON.
Main result of the benchmarks and discussion below: the multiple-OR query (as suggested by #KA_lin) is faster for small data sets (n < 1000 or so). That approach scales badly for larger data sets, however, so I will probably stick with the TEMPORARY TABLE approach below in case my data set grows in the future. The overhead for this is not that high.
CREATE TEMPORARY TABLE my_data_virtual(id INTEGER NOT NULL, version TINYINT(3) NOT NULL);
INSERT INTO my_data_virtual VALUES
(1,0), (2,1), (3,0), (4,0), (5,1),
(6,0), (7,0), (8,1), (9,2), (10,1);
SELECT md.id, md.version
FROM my_data AS md
INNER JOIN my_data_virtual AS mdv
    ON md.id = mdv.id AND md.version > mdv.version;
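The temporary-table approach can be verified end-to-end. Here is a sketch using SQLite from Python with the sample data from the question (a TEMP table plays the role of MySQL's TEMPORARY TABLE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_data (id INTEGER PRIMARY KEY, version INTEGER NOT NULL);
INSERT INTO my_data VALUES
    (1,0), (2,1), (3,2), (4,0), (5,1),
    (6,0), (7,1), (8,1), (9,4), (10,1);
-- The client's (id, version) pairs go into a session-local table.
CREATE TEMP TABLE my_data_virtual (id INTEGER NOT NULL, version INTEGER NOT NULL);
INSERT INTO my_data_virtual VALUES
    (1,0), (2,1), (3,0), (4,0), (5,1),
    (6,0), (7,0), (8,1), (9,2), (10,1);
""")

# One join returns exactly the rows where the server version is ahead.
rows = conn.execute("""
    SELECT md.id, md.version
    FROM my_data AS md
    INNER JOIN my_data_virtual AS mdv
        ON md.id = mdv.id AND md.version > mdv.version
    ORDER BY md.id
""").fetchall()
print(rows)
```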
I ran a series of tests using the MySQLdb and timeit modules in Python. I created 5 databases: test_100, test_500, test_1000, test_5000 and test_10000. Each database was given a single table, data, which contained the following columns.
+-------------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| version | int(11) | NO | | 0 | |
| description | text | YES | | NULL | |
+-------------+---------+------+-----+---------+----------------+
The data tables were then filled with random versions from 0 to 5 and a semi-random amount of lorem ipsum text. The test_100.data table got 100 rows, the test_500.data table got 500 rows, and so forth. I then ran tests for both the query using nested OR statements and the one using a temporary table filled with all ids and random versions between 0 and 5.
Results
Results for nested OR query. Number of repeats for each n was 1000.
+----------+-------------+-------------+-------------+-------------+-------------+
| | n = 100 | n = 500 | n = 1000 | n = 5000 | n = 10000 |
+----------+-------------+-------------+-------------+-------------+-------------+
| max | 0.00719 | 0.02213 | 0.04325 | 1.75707 | 8.91687 |
| min | 0.00077 | 0.00781 | 0.02696 | 0.63565 | 5.29613 |
| median | 0.00100 | 0.00917 | 0.02996 | 0.82732 | 5.92217 |
| average | 0.00111 | 0.01001 | 0.03057 | 0.82540 | 5.89577 |
+----------+-------------+-------------+-------------+-------------+-------------+
Results for temporary table query. Number of repeats for each n was 1000.
+----------+-------------+-------------+-------------+-------------+-------------+
| | n = 100 | n = 500 | n = 1000 | n = 5000 | n = 10000 |
+----------+-------------+-------------+-------------+-------------+-------------+
| max | 0.06352 | 0.07192 | 0.08798 | 0.28648 | 0.26939 |
| min | 0.02119 | 0.02027 | 0.03126 | 0.07677 | 0.12269 |
| median | 0.03075 | 0.03210 | 0.043833 | 0.10068 | 0.15839 |
| average | 0.03121 | 0.03258 | 0.044968 | 0.10342 | 0.16153 |
+----------+-------------+-------------+-------------+-------------+-------------+
It seems that using nested OR queries is faster up to about n = 1000. From there on, the nested OR scales badly and the temporary-table approach wins solidly. In my case I am likely to have a maximum of around 1000 rows, so it seems that I can choose between these two approaches relatively freely.
I will probably go for the temporary table approach in case my data set should become larger than expected. The payload is small in any case.
Notes
Since the timeit module in Python is a bit finicky, the database is opened and closed for each run/repeat. This might add some overhead to the timings.
The queries for the temporary table approach were done in 3 steps: 1 for creating the temporary, 1 for inserting the data and 1 for joining the tables.
The creation of the queries are not part of the timing; they are created outside of the Python timeit call.
Since both the versions in the inserted data and the random "client" data are randomly chosen between 0 and 5, it is likely that between 33 % and 50 % of the rows are selected. I have not verified this. This is not really the case I have, as the client data will at any point have almost the same data as the server.
I tried adding WHERE id IN (1,2,3...,10) to both the temporary-table approach and the nested-OR approach, but it neither sped things up nor slowed them down in any of the tests, except for the larger data sets with the multiple-OR approach, where the times were slightly lower than without this WHERE clause.

MySQL best way to query the database to display a check list

I have 4 tables in a database that let me manage a kind of 'check list'. In a few words: for each pathology, I have a big step (process) split into multiple tasks. All of this is linked to a specific operation (progress.case_id) in a summary table.
database.pathology
+--------------+------------+
| id_pathology | name |
+--------------+------------+
| 1 | Pathology1 |
| 2 | Pathology2 |
| 3 | Pathology3 |
+--------------+------------+
database.process
+------------+----------+--------------+----------------+
| id_process | name | pathology_id | days_allocated |
+------------+----------+--------------+----------------+
| 1 | BigTask1 | 2 | 5 |
| 2 | BigTask2 | 2 | 3 |
| 3 | BigTask3 | 2 | 6 |
| ... | ... | ... | ... |
+------------+----------+--------------+----------------+
database.task
+---------+-------+------------+
| id_task | name | process_id |
+---------+-------+------------+
| 1 | Task1 | 1 |
| 2 | Task2 | 1 |
| 3 | Task3 | 1 |
| 4 | Task4 | 2 |
| ... | ... | ... |
+---------+-------+------------+
database.progress
+-------------+---------+---------+---------+------------+---------+
| id_progress | task_id | case_id | user_id | date | current |
+-------------+---------+---------+---------+------------+---------+
| 1 | 1 | 120 | 2 | 2015-11-02 | 1 |
| 2 | 2 | 120 | 2 | 2015-11-02 | 0 |
| 3 | 1 | 121 | 3 | 2015-11-02 | 1 |
+-------------+---------+---------+---------+------------+---------+
I have to display something like this (mockup image omitted).
My question is: what is the most efficient way to proceed?
Is it faster to query only one table (progress) to display most of the information, and only then query the others to get the names of the different processes and the days?
Perhaps a join is more efficient?
Or do you think my database structure is not appropriate?
For each case we can have approximately 50 tasks, with the current field translated into a checkbox. A background script is also running: it analyzes the days provided against the remaining days to determine whether this specific case could be delayed.
For each case, the progress table is already filled with all the tasks related to the pathology of the case, and the current field is always '0' at the beginning.
I already tried multiple things, like:
// distinct statement/row variables so the nested loops don't clobber each other
$processStmt = $db->prepare("SELECT DISTINCT process_id, process.name FROM task, progress, process WHERE progress.task_id = task.id_task AND task.process_id = process.id_process AND progress.case_id = ?");
$processStmt->execute(array($id));
foreach ($processStmt as $process) {
    echo "<b>" . $process[1] . "</b><br>";
    $taskStmt = $db->prepare("SELECT name, id_task FROM task WHERE process_id = ?");
    $taskStmt->execute(array($process[0]));
    foreach ($taskStmt as $task) {
        echo $task[0];
        $progressStmt = $db->prepare("SELECT user_id, date, current FROM progress WHERE progress.task_id = ? AND case_id = ?");
        $progressStmt->execute(array($task[1], $id));
        foreach ($progressStmt as $progress) {
            if ($progress[2] == 0) {
                echo "<input type='checkbox' />";
            } else {
                echo "<input type='checkbox' checked/>";
                echo "user : " . $progress[0] . " date : " . $progress[1] . "<br>";
            }
        }
    }
}
But I am pretty sure that I am not doing it right. Should I change my database structure? Should I use a specific MySQL trick? Or maybe just use more efficient PHP processing?
In terms of efficiency, a database query is one of the slowest operations that you can perform. Anything that you can do to reduce the number of queries that you make will go a long way towards making your application faster.
But more important than that, your application needs to work as designed, which means that the developers need to understand what's going on, data shouldn't be hanging around just waiting to be overwritten, and the junior developer who will be tasked to maintain this in 3 years won't want to tear their hair out.
Fast is better than slow.
Slow is better than broken.
To your specific problem, if possible, never have a query inside of a loop. Especially when that loop is controlled by data that you pull from that same database. This is a code smell that calls for proper use of JOINs.
A Google Image search for SQL Join Diagrams shows plenty of examples of Venn Diagrams that show the different types of data returned with each JOIN. When in doubt, you usually want LEFT JOINs.
So, let's identify your relationships:
Pathology
Unused in your results.
Find a way to incorporate it into your query, since "Pathology2" appears in your mockup.
Process
References Pathology in a one-to-many relationship. Each Process can have one Pathology, but each Pathology can have 0 or more Processes.
Task
References Process in a one-to-many relationship. Tasks are the children of Process.
Progress
References Task, as well as the not-shown Case and User. Progress appears to hold the details of a Task when referencing a specific Case and User.
I am assuming that there is a business constraint where (task_id, case_id, user_id) must be unique... That is, user 1 can have only 1 Progress entry for task 1 and case 100.
Besides holding the details for a Task, Progress also acts as a bridge between Task, Case, and User, giving many-to-many relationships between those three tables. Since Task is a direct child of Process, and Process is a direct child of Pathology, it also gives a many-to-many relationship to Pathology.
Case
Inferred existence of this table.
Referenced by Progress.
User
Inferred existence of this table.
Referenced by Progress.
Based on this table structure, our main groupings will be Case, Pathology, and User.
That is, if you're a logged in user and you want to look at your progress by Case, you would want to see the following:
Case 110:
Pathology1:
BigTask1:
Task1: X
Task2: []
BigTask2:
Task3: X
Pathology2:
BigTask3:
Task4: []
Case 120:
Pathology1:
BigTask1:
Task1: []
We would want User ID == 1;
Our first sorting would be based on Case
Our second sorting would be based on Pathology
Our third sorting would be based on Process
And our last sorting would be on Task...
Thus, the data to get our results above would be:
+------+------------+----------+-------+----------+
| case | pathology | process | task | progress |
+------+------------+----------+-------+----------+
| 110 | Pathology1 | BigTask1 | Task1 | 1 |
| 110 | Pathology1 | BigTask1 | Task2 | 0 |
| 110 | Pathology1 | BigTask2 | Task3 | 1 |
| 110 | Pathology2 | BigTask3 | Task4 | 0 |
| 120 | Pathology1 | BigTask1 | Task1 | 0 |
+------+------------+----------+-------+----------+
Our ORDER BY clause lists the most significant key first: ORDER BY `case`, pathology, process, task. We could sort it in PHP, but the database is better at it than we are. If your indexes are set up properly, the database might not even have to sort; it will just fetch the rows in order.
The query to get the above data for a specific user is (note the backticks: case is a reserved word in MySQL):
SELECT
    prog.case_id AS `case`,
    path.name AS pathology,
    proc.name AS process,
    task.name AS task,
    prog.current AS progress
FROM
    pathology path
    LEFT JOIN process proc ON path.id_pathology = proc.pathology_id
    LEFT JOIN task ON task.process_id = proc.id_process
    LEFT JOIN progress prog ON task.id_task = prog.task_id
WHERE prog.user_id = :userid
ORDER BY `case`, pathology, process, task
Your PHP might then be along the lines of
<?php
$sql = <<<EOSQL
SELECT
    prog.case_id AS `case`,
    path.name AS pathology,
    proc.name AS process,
    task.name AS task,
    prog.current AS progress
FROM
    pathology path
    LEFT JOIN process proc ON path.id_pathology = proc.pathology_id
    LEFT JOIN task ON task.process_id = proc.id_process
    LEFT JOIN progress prog ON task.id_task = prog.task_id
WHERE prog.user_id = :userid
ORDER BY `case`, pathology, process, task
EOSQL;
$result = $db->prepare($sql);
$result->execute(array(':userid' => $id));
$rows = $result->fetchAll(PDO::FETCH_ASSOC);
foreach ($rows as $row) {
var_dump($row);
// array(5) {
// ["case"]=>
// int(110)
// ["pathology"]=>
// string(10) "Pathology1"
// ["process"]=>
// string(8) "BigTask1"
// ["task"]=>
// string(5) "Task1"
// ["progress"]=>
// int(1)
// }
}
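Putting it together, here is a runnable sketch of the join on a toy data set (SQLite from Python; the seed data is invented to reproduce the sample rows above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pathology (id_pathology INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE process (id_process INTEGER PRIMARY KEY, name TEXT, pathology_id INTEGER);
CREATE TABLE task (id_task INTEGER PRIMARY KEY, name TEXT, process_id INTEGER);
CREATE TABLE progress (id_progress INTEGER PRIMARY KEY, task_id INTEGER,
                       case_id INTEGER, user_id INTEGER, "current" INTEGER);
INSERT INTO pathology VALUES (1, 'Pathology1'), (2, 'Pathology2');
INSERT INTO process VALUES (1, 'BigTask1', 1), (2, 'BigTask2', 1), (3, 'BigTask3', 2);
INSERT INTO task VALUES (1, 'Task1', 1), (2, 'Task2', 1), (3, 'Task3', 2), (4, 'Task4', 3);
INSERT INTO progress VALUES (1, 1, 110, 1, 1), (2, 2, 110, 1, 0),
                            (3, 3, 110, 1, 1), (4, 4, 110, 1, 0), (5, 1, 120, 1, 0);
""")

# One query, sorted case -> pathology -> process -> task, ready for nested display loops.
rows = conn.execute("""
    SELECT prog.case_id, path.name, proc.name, task.name, prog."current"
    FROM pathology AS path
    JOIN process AS proc ON path.id_pathology = proc.pathology_id
    JOIN task ON task.process_id = proc.id_process
    JOIN progress AS prog ON task.id_task = prog.task_id
    WHERE prog.user_id = ?
    ORDER BY prog.case_id, path.name, proc.name, task.name
""", (1,)).fetchall()
for row in rows:
    print(row)
```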
