I am saving many waypoint IDs, bearings, and distances in three columns of a single database row. Each column stores its respective values separated by commas.
WaypointIds: 510,511,512
Bearings: 65,50,32
Distances: 74,19,14
I think I might have coded myself into a corner! I now need to pull them from the database and loop through them, adding each to a table for output on screen.
I'm using the following to put the corresponding columns back into an array but am a little stuck with where to go after that, or if that is even the way to go about it.
$waypointArrays = explode (",", $waypointsString);
$bearingArrays = explode (",", $bearingsString);
$waypointsString and $bearingsString are the variables set by my database call.
I think I need to add them all together so I can iterate through each one in turn.
For example, waypointId 510, will have bearing 065, and distance 0.74.
I think I need an associative array but not sure how to add them all together so I can run through each waypoint ID and pull out the rest of the entry.
e.g. for each waypointId give the corresponding bearing and waypoint.
I have checks in place to ensure that as we add waypoints/bearings/distances that they don't fall out of step with each other so there will always be the same number of entries in each column.
Don't continue with this design: your database is not normalised and therefore you are not getting any benefit from the power that a database can offer.
I don't think working around this problem by extracting the information in PHP using explode() etc. is the right way, so I will clarify how your database should be normalised:
Currently you have a table like this (possibly with many more columns):
Main table: route
+----+---------+-------------+----------+-----------+
| id | Name | WaypointIds | Bearings | Distances |
+----+---------+-------------+----------+-----------+
| 1 | myroute | 510,511,512 | 65,50,32 | 74,19,14 |
| .. | .... | .... | .... | .... |
+----+---------+-------------+----------+-----------+
The comma-separated lists violate the first normal form:
A relation is in first normal form if and only if the domain of each attribute contains only atomic (indivisible) values, and the value of each attribute contains only a single value from that domain.
You should resolve this by moving these three columns into a separate table, which will have one record for each atomic (waypoint, bearing, distance) triplet:
Main table: route
+----+---------+
| id | Name |
+----+---------+
| 1 | myroute |
| .. | .... |
+----+---------+
New table: route_waypoint
+----------+-------------+------------+----------+
| route_id | waypoint_id | bearing_id | distance |
+----------+-------------+------------+----------+
| 1 | 510 | 65 | 74 |
| 1 | 511 | 50 | 19 |
| 1 | 512 | 32 | 14 |
| 2 | ... | .. | .. |
| .. | ... | .. | .. |
+----------+-------------+------------+----------+
The first column is a foreign key referencing the id of the main table.
To select the data you need, you could use SQL like this:
select route.*, rw.waypoint_id, rw.bearing_id, rw.distance
from route
inner join route_waypoint rw on rw.route_id = route.id
order by route.id, rw.waypoint_id
Now PHP will receive the triplets (waypoint, bearing, distance) that belong together in the same record. You might need a nested loop while the route.id remains the same, but this is how it is done.
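That nested loop could be sketched in PHP like this. The $rows sample below is a hypothetical stand-in for the rows returned by the query above; in your code they would come from your database fetch loop:

```php
<?php
// Sample data standing in for the rows the SELECT above would return,
// one row per (waypoint, bearing, distance) triplet, ordered by route id.
$rows = [
    ['id' => 1, 'name' => 'myroute', 'waypoint_id' => 510, 'bearing_id' => 65, 'distance' => 74],
    ['id' => 1, 'name' => 'myroute', 'waypoint_id' => 511, 'bearing_id' => 50, 'distance' => 19],
    ['id' => 1, 'name' => 'myroute', 'waypoint_id' => 512, 'bearing_id' => 32, 'distance' => 14],
];

// Group the triplets under their route id.
$routes = [];
foreach ($rows as $row) {
    $routes[$row['id']]['name'] = $row['name'];
    $routes[$row['id']]['waypoints'][] = [
        'waypoint' => $row['waypoint_id'],
        'bearing'  => $row['bearing_id'],
        'distance' => $row['distance'],
    ];
}

// Output one block per route, one line per waypoint.
foreach ($routes as $routeId => $route) {
    echo $route['name'] . PHP_EOL;
    foreach ($route['waypoints'] as $wp) {
        echo $wp['waypoint'] . ' ' . $wp['bearing'] . ' ' . $wp['distance'] . PHP_EOL;
    }
}
```

Because the rows arrive ordered by route.id, the grouping step is a single pass over the result set.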
To answer your question directly, the code below will work as long as the waypoint IDs are unique. That being said, as others mentioned, fix your database layout. What you have here would really benefit from a separate table.
<?php
$waypointIds = [510, 511, 512];
$bearings = [65, 50, 32];
$distances = [74, 19, 14];
$output = [];
for ($i = 0; $i < count($waypointIds); $i++) {
    $output[$waypointIds[$i]] = [
        'bearing'  => $bearings[$i],
        'distance' => $distances[$i]
    ];
}
print_r($output);
$waypoints = array(510, 511, 512);
$bearings = array(65, 50, 32);
$distances = array(74, 19, 14);
$res = array();
for ($i = 0; $i < count($waypoints); $i++) {
    $res[] = array(
        'waypoint' => $waypoints[$i],
        'bearing'  => sprintf("%03d", $bearings[$i]),
        'distance' => $distances[$i] / 100
    );
}
print_r($res);
My user_level database structure is
| user_id | level |
| 3 | F |
| 4 | 13 |
| 21 | 2 |
| 24 | 2 |
| 33 | 3 |
| 34 | 12+ |
I have another table users
| id | school_id |
| 3 | 3 |
| 4 | 4 |
| 21 | 2 |
| 24 | 2 |
| 33 | 3 |
| 34 | 1 |
What I have to achieve is that, I will have to update the level of each user based on a certain predefined condition. However, my users table is really huge with thousands of records.
At one instance, I only update the user_level records for a particular school. Say for school_id = 3, I fetch all the users and their associated levels, and then increase the value of level by 1 for those users (F becomes 1, 12+ is deleted, and all other numbers are increased by 1).
When I use a loop to loop through the users, match their user_id and then update the record, it will be thousands of queries. That is slowing down the entire application as well as causing it to crash.
One ideal thing would be Laravel transactions, but I have doubts about whether they optimise the time. I tested with a simple query on around 6000 records and it worked fine, but for some reason it doesn't work that well with the records that I have.
I am just looking for recommendations on any other query optimisation techniques.
UPDATE
I implemented a solution where I group all the records by level (using Laravel collections), and then I only have to issue 13 update queries instead of hundreds or thousands.
$students = Users::where('school_id', 21)->get();
$groupedStudents = $students->groupBy('level');
foreach ($groupedStudents as $key => $value) :
    $studentIDs = $value->pluck('id');
    // condition to check and get the new value to update;
    // I have used switch cases to identify what the next level should be ($nextLevel)
    UserLevel::whereIn('userId', $studentIDs)->update(["level" => $nextLevel]);
endforeach;
I am still looking for other possible options.
First, define a relationship in your model, like this:
In UserLevel model:
public function user() {
    return $this->belongsTo(\App\User::class);
}
Then you can update all levels below 12+ with a single query, and delete all the 12+ levels with another:
UserLevel::where('level', '<=', 12)->whereHas('user', function($user) {
    $user->where('school_id', 3);
})->update(['level' => DB::raw("IF(level = 'F', 1, level+1)")]);

UserLevel::whereHas('user', function($user) {
    $user->where('school_id', 3);
})->where('level', '>', 12)->delete();
If your data set is very large, you can also use chunk() to split the work and reduce memory consumption, like this:
UserLevel::where('level', '<=', 12)->whereHas('user', function($user) {
    $user->where('school_id', 3);
})->chunk(5000, function($userLevels) {
    // chunk() passes a Collection, which has no update(); update via the ids instead.
    UserLevel::whereIn('id', $userLevels->pluck('id'))
        ->update(['level' => DB::raw("IF(level = 'F', 1, level+1)")]);
});
UserLevel::whereHas('user', function($user) {
$user->where('school_id', 3);
})->where('level', '>', 12)->delete();
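For reference, here is a sketch of the raw SQL those two Eloquent calls roughly correspond to. The join column names (user_level.user_id, users.id) are guesses from the question's table layouts, not a definitive schema:

```sql
-- Bump every level below 12+ for school 3 (F becomes 1, numbers increase by 1).
UPDATE user_level ul
JOIN users u ON u.id = ul.user_id
SET ul.level = IF(ul.level = 'F', 1, ul.level + 1)
WHERE u.school_id = 3 AND ul.level <= 12;

-- Remove the 12+ rows for school 3.
DELETE ul FROM user_level ul
JOIN users u ON u.id = ul.user_id
WHERE u.school_id = 3 AND ul.level > 12;
```

Note that level is a string column here ('F', '12+'), so comparisons like level <= 12 rely on MySQL's implicit string-to-number casting ('12+' casts to 12), which is fragile; storing the level numerically would make both queries safer.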
I use long blocks of code like this repeatedly in my scripting because I cannot find a shorter way to compare the MySQL columns:
if ($them['srel1']=="Y" AND $me['Religion']=='Adventist'){$seek11pts=5;}
if ($them['srel2']=="Y" AND $me['Religion']=='Agnostic'){$seek11pts=5;}
if ($them['srel3']=="Y" AND $me['Religion']=='Atheist'){$seek11pts=5;}
if ($them['srel4']=="Y" AND $me['Religion']=='Baptist'){$seek11pts=5;}
if ($them['srel5']=="Y" AND $me['Religion']=='Buddhist'){$seek11pts=5;}
if ($them['srel6']=="Y" AND $me['Religion']=='Caodaism'){$seek11pts=5;}
if ($them['srel7']=="Y" AND $me['Religion']=='Catholic'){$seek11pts=5;}
if ($them['srel8']=="Y" AND $me['Religion']=='Christian'){$seek11pts=5;}
if ($them['srel9']=="Y" AND $me['Religion']=='Hindu'){$seek11pts=5;}
if ($them['srel10']=="Y" AND $me['Religion']=='Iskcon'){$seek11pts=5;}
if ($them['srel11']=="Y" AND $me['Religion']=='Jainism'){$seek11pts=5;}
if ($them['srel12']=="Y" AND $me['Religion']=='Jewish'){$seek11pts=5;}
if ($them['srel13']=="Y" AND $me['Religion']=='Methodist'){$seek11pts=5;}
if ($them['srel14']=="Y" AND $me['Religion']=='Mormon'){$seek11pts=5;}
if ($them['srel15']=="Y" AND $me['Religion']=='Moslem'){$seek11pts=5;}
if ($them['srel16']=="Y" AND $me['Religion']=='Orthodox'){$seek11pts=5;}
if ($them['srel17']=="Y" AND $me['Religion']=='Pentecostal'){$seek11pts=5;}
if ($them['srel18']=="Y" AND $me['Religion']=='Protestant'){$seek11pts=5;}
if ($them['srel19']=="Y" AND $me['Religion']=='Quaker'){$seek11pts=5;}
if ($them['srel20']=="Y" AND $me['Religion']=='Scientology'){$seek11pts=5;}
if ($them['srel21']=="Y" AND $me['Religion']=='Shinto'){$seek11pts=5;}
if ($them['srel22']=="Y" AND $me['Religion']=='Sikhism'){$seek11pts=5;}
if ($them['srel23']=="Y" AND $me['Religion']=='Spiritual'){$seek11pts=5;}
if ($them['srel24']=="Y" AND $me['Religion']=='Taoism'){$seek11pts=5;}
if ($them['srel25']=="Y" AND $me['Religion']=='Wiccan'){$seek11pts=5;}
if ($them['srel26']=="Y" AND $me['Religion']=='Other'){$seek11pts=5;}
E.g. take if ($them['srel1']=="Y" AND $me['Religion']=='Adventist'){$seek11pts=5;}
I check to see if the MySQL column srel1 has a value of Y. If it does, then I check to see if the column Religion equals Adventist. If both are true then $seek11pts=5; if they are not both true then nothing happens.
There are 26 srel-type columns with either a Y value or null. There are also 26 different values for Religion, as you can see. This is but one section of my code. I have multiple HUGE code groupings like this and I'd love to be able to reduce each to a few lines. I was thinking some kind of array for the religions and another for the numerical endings of the srel columns, but I can't get it to work.
For your current code you can use this:
<?php
$religions = array(1 => 'Adventist','Agnostic','Atheist','Baptist','Buddhist','Caodaism','Catholic','Christian','Hindu','Iskcon','Jainism','Jewish','Methodist','Mormon','Moslem','Orthodox','Pentecostal','Protestant','Quaker','Scientology','Shinto','Sikhism','Spiritual','Taoism','Wiccan','Other');
$count = count($religions) + 1;
for ($i = 1; $i < $count; $i++) {
    if ($them["srel$i"] == "Y" && $me['Religion'] == $religions[$i]) {
        $seek11pts = 5;
        break;
    }
}
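Alternatively, since $me['Religion'] can match at most one entry, you can skip the loop entirely with array_search. This is a sketch; $me and $them below are example rows standing in for your MySQL results:

```php
<?php
// Same 1-indexed array as above, so index N lines up with column srelN.
$religions = array(1 => 'Adventist','Agnostic','Atheist','Baptist','Buddhist','Caodaism','Catholic','Christian','Hindu','Iskcon','Jainism','Jewish','Methodist','Mormon','Moslem','Orthodox','Pentecostal','Protestant','Quaker','Scientology','Shinto','Sikhism','Spiritual','Taoism','Wiccan','Other');

// Example row data standing in for your MySQL results.
$me   = array('Religion' => 'Atheist');   // Atheist is entry 3
$them = array('srel3' => 'Y');

// Find the index of my religion, then check only the matching srelN column.
$i = array_search($me['Religion'], $religions, true);
if ($i !== false && isset($them["srel$i"]) && $them["srel$i"] === 'Y') {
    $seek11pts = 5;
}
```

This replaces the 26-way scan with a single lookup, and the isset() guard keeps it safe when the srelN column is null or missing.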
While there are ways to accomplish what you ask, you should instead seriously consider restructuring your data.
Better data structure
If your data had a structure more similar to the following:
db.person
+----+------+
| id | name |
+----+------+
| 1 | Nick |
| 2 | Bob |
| 3 | Tony |
+----+------+
PrimaryKey: id
db.religion
+----+---------+
| id | name |
+----+---------+
| 1 | Atheist |
| 2 | Jainism |
| 3 | FSM |
+----+---------+
PrimaryKey: id
db.person_religion
+--------+----------+
| person | religion |
+--------+----------+
| 1 | 2 |
| 2 | 2 |
| 2 | 3 |
| 3 | 1 |
| 3 | 2 |
| 3 | 3 |
+--------+----------+
UniqueIndex: (person,religion)
...everything you're trying to do could be done with simple queries.
SELECT me.id, me.name, meR.name as religion, count(them.id) as matches
FROM person me
INNER JOIN person_religion meRlookup
    ON me.id = meRlookup.person
INNER JOIN religion meR
    ON meRlookup.religion = meR.id
INNER JOIN person_religion themRlookup
    ON meRlookup.religion = themRlookup.religion
INNER JOIN person them
    ON themRlookup.person = them.id
GROUP BY meR.id
I would recommend using Laravel or Lumen, since these include a query builder that lets you write queries with very little code and no raw SQL.
I have a database table that keeps a column for versioning of the entries. Client applications should be able to query the server with their current versions and IDs, and the server respond with the rows that have a higher version for each of the entries. I am not an expert on MySQL, so I cannot see how to achieve this. I have tried various things, but I am currently far from producing anything that works efficiently.
Example:
Server data:
mysql> SELECT id, version FROM my_data;
+-----+---------+
| id | version |
+-----+---------+
| 1 | 0 |
| 2 | 1 |
| 3 | 2 | <-- The JSON data below has lower version, so this row should be selected.
| 4 | 0 |
| 5 | 1 |
| 6 | 0 |
| 7 | 1 | <-- The JSON data below has lower version, so this row should be selected.
| 8 | 1 |
| 9 | 4 | <-- The JSON data below has lower version, so this row should be selected.
| 10 | 1 |
+-----+---------+
10 rows in set (0.00 sec)
Data sent from the client:
The client then queries the server with the following data in JSON (or whatever, but I have JSON in my case). The server side is PHP, and I need to parse this JSON data and include it in the query somehow. This is the data the client currently contains.
{
"my_data": [
{
"id": 1,
"version": 0
},
{
"id": 2,
"version": 1
},
{
"id": 3,
"version": 0
},
{
"id": 4,
"version": 0
},
{
"id": 5,
"version": 1
},
{
"id": 6,
"version": 0
},
{
"id": 7,
"version": 0
},
{
"id": 8,
"version": 1
},
{
"id": 9,
"version": 2
},
{
"id": 10,
"version": 1
}
]
}
In this example I want the MySQL query to return 3 rows; namely the 3 rows with id 3, 7 and 9 because the client version is lower than the server version, thus it needs to fetch some data for updating. How can I achieve this in a single, simple query? I do not want to run one query for each row, even if this is possible.
Desired result from the sample data:
The resulting data should be the rows in which the version in the database on the server side is greater than the data with the corresponding id in the JSON data set.
mysql> <INSERT PROPER QUERY HERE>;
+-----+---------+
| id | version |
+-----+---------+
| 3 | 2 |
| 7 | 1 |
| 9 | 4 |
+-----+---------+
3 rows in set (0.00 sec)
NOTE: This does not use PDO, just query-string generation; it can be switched easily.
To check each version you can add an OR condition for each id, but first make sure the JSON is not empty:
$jsonData = json_decode($inputJson, true);
$jsonData = $jsonData['my_data'];
$conditions = array();
foreach ($jsonData as $data) {
    $conditions[] = '(id=' . $data['id'] . ' and version>' . $data['version'] . ')';
}
$string = 'select * from my_data where ' . implode(' or ', $conditions);
result:
select * from my_data where (id=1 and version>0) or (id=2 and version>1) or ...
Something like this, perhaps?
try {
    $bdd = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'password');
} catch (Exception $e) {
    die('Error : ' . $e->getMessage());
}
$output = array();
$req = $bdd->prepare('SELECT `id`, `version` FROM my_data WHERE `id` = :id AND `version` > :version');
foreach ($yourJson as $object) {
    $req->execute(array('id' => $object['id'], 'version' => $object['version']));
    $data = $req->fetch();
    if (!empty($data))
        $output[] = $data;
}
print_r($output);
You can do that with json_array_elements (note: this is a PostgreSQL function, not MySQL):
SELECT id, version
FROM my_data AS md, json_array_elements(md.json_col->'version') AS jsonVersion
WHERE version > jsonVersion->>'version';
json_col being the name of the JSON column.
Here are a few links that might be helpful.
More details in this related answer:
How do I query using fields inside the new PostgreSQL JSON datatype?
More about the implicit CROSS JOIN LATERAL in the last paragraph of this related answer:
PostgreSQL unnest() with element number
Advanced example
Query combinations with nested array of records in JSON datatype
I hope this helps you find a solution.
SELECT id,version FROM my_data WHERE `id` = $APP_ID AND `version` > $APP_VERSION;
Replace $APP_ID with an actual item ID and $APP_VERSION with the version coming from the incoming JSON.
Main result from the tests and discussion below: the multiple-OR query (as suggested by #KA_lin) is faster for small data sets (n < 1000 or so). That approach scales badly for larger data sets, however, so I will probably stick with the TEMPORARY TABLE approach below in case my data set should grow large in the future. The overhead for this is not that high.
CREATE TEMPORARY TABLE my_data_virtual(id INTEGER NOT NULL, version TINYINT(3) NOT NULL);
INSERT INTO my_data_virtual VALUES
(1,0), (2,1), (3,0), (4,0), (5,1),
(6,0), (7,0), (8,1), (9,2), (10,1);
SELECT md.id, md.version
FROM my_data AS md
INNER JOIN my_data_virtual AS mdv
    ON md.id = mdv.id AND md.version > mdv.version;
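The temporary table can be filled straight from the client JSON. Below is a sketch of the placeholder generation in PHP; the commented-out lines assume $pdo is an open PDO connection, so they are illustrative only:

```php
<?php
// Client JSON as in the question (shortened to three entries for the example).
$json = '{"my_data":[{"id":1,"version":0},{"id":3,"version":0},{"id":9,"version":2}]}';
$rows = json_decode($json, true)['my_data'];

// Build one multi-row INSERT with a (?, ?) placeholder pair per entry.
$placeholders = implode(', ', array_fill(0, count($rows), '(?, ?)'));
$sql = "INSERT INTO my_data_virtual VALUES $placeholders";

// Flatten the (id, version) pairs into the bound-parameter list.
$params = [];
foreach ($rows as $row) {
    $params[] = $row['id'];
    $params[] = $row['version'];
}

// Illustrative only -- $pdo is assumed to be an open PDO connection:
// $pdo->exec('CREATE TEMPORARY TABLE my_data_virtual (id INTEGER NOT NULL, version TINYINT(3) NOT NULL)');
// $pdo->prepare($sql)->execute($params);
echo $sql . PHP_EOL;
```

Using placeholders keeps the client-supplied ids and versions out of the SQL string, unlike the string-concatenation approach above.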
I ran a series of tests using the MySQLdb and timeit modules in Python. I created 5 databases: test_100, test_500, test_1000, test_5000 and test_10000. Each database was given a single table, data, with the following columns.
+-------------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| version | int(11) | NO | | 0 | |
| description | text | YES | | NULL | |
+-------------+---------+------+-----+---------+----------------+
The data tables were then filled with random versions from 0 to 5 and a semi-random amount of lorem ipsum text. The test_100.data table got 100 rows, the test_500.data table got 500 rows, and so forth. I then ran tests for both the query using nested OR statements and the query using a temporary table holding all ids with random versions between 0 and 5.
Results
Results for nested OR query. Number of repeats for each n was 1000.
+----------+-------------+-------------+-------------+-------------+-------------+
| | n = 100 | n = 500 | n = 1000 | n = 5000 | n = 10000 |
+----------+-------------+-------------+-------------+-------------+-------------+
| max | 0.00719 | 0.02213 | 0.04325 | 1.75707 | 8.91687 |
| min | 0.00077 | 0.00781 | 0.02696 | 0.63565 | 5.29613 |
| median | 0.00100 | 0.00917 | 0.02996 | 0.82732 | 5.92217 |
| average | 0.00111 | 0.01001 | 0.03057 | 0.82540 | 5.89577 |
+----------+-------------+-------------+-------------+-------------+-------------+
Results for temporary table query. Number of repeats for each n was 1000.
+----------+-------------+-------------+-------------+-------------+-------------+
| | n = 100 | n = 500 | n = 1000 | n = 5000 | n = 10000 |
+----------+-------------+-------------+-------------+-------------+-------------+
| max | 0.06352 | 0.07192 | 0.08798 | 0.28648 | 0.26939 |
| min | 0.02119 | 0.02027 | 0.03126 | 0.07677 | 0.12269 |
| median | 0.03075 | 0.03210 | 0.043833 | 0.10068 | 0.15839 |
| average | 0.03121 | 0.03258 | 0.044968 | 0.10342 | 0.16153 |
+----------+-------------+-------------+-------------+-------------+-------------+
It seems that using nested OR queries is faster up to about n = 1000. From there on, the nested OR scales badly and the temporary table approach wins solidly. In my case I am likely to have a maximum of around 1000 rows, so it seems that I can choose between these two approaches relatively freely.
I will probably go for the temporary table approach in case my data set should become larger than expected. The payload is small in any case.
Notes
Since the timeit module in Python is a bit ticklish, the database is opened and closed for each run/repeat. This might produce some overhead to the timings.
The queries for the temporary table approach were done in 3 steps: one for creating the temporary table, one for inserting the data and one for joining the tables.
The creation of the queries is not part of the timing; they are created outside of the Python timeit call.
Since both the versions in the inserted data and the random "client" data are randomly chosen between 0 and 5, it is likely that between 33 % and 50 % of the rows are selected. I have not verified this. This is not really the case I have, as the client data will at any point have almost the same data as the server.
I tried adding WHERE id IN (1,2,3...,10) on both the temporary table approach and the nested OR approach, but it neither sped things up nor slowed them down in any of the tests, except for the larger data sets and the multiple OR approach. Here, the times were slightly lower than without this WHERE statement.
I have stored the physical locations of specific files in my database with download counters to provide downloads via shorter urls like /Download/a4s. Each file has a categoryId assigned via foreign keys which just describes to which course/lecture it belongs for an easier overview. The table fileCategories basically looks like this
categoryId | categoryName
---------------------------
1 | Lecture 1
2 | Lecture 2
3 | Personal Stuff
Assume that I have a files table which looks like this with some other columns I did omit
fileId | categoryId | filePath | ...
----------------------------------------
1 | 1 | /Foo/Bar.pdf | ...
2 | 1 | /Foo/Baz.pdf | ...
3 | 2 | /Bar/Foo.pdf | ...
4 | 2 | /Baz/Foo.pdf | ...
5 | 3 | /Bar/Baz.pdf | ...
I have created a page which should display some data about those files and group them by their categories which produces a very simple html table which looks like this:
Id | Path | ...
-----------------------
Lecture 1
-----------------------
1 | /Foo/Bar.pdf | ...
2 | /Foo/Baz.pdf | ...
-----------------------
Lecture 2
-----------------------
3 | /Bar/Foo.pdf | ...
4 | /Baz/Foo.pdf | ...
-----------------------
Personal Stuff
-----------------------
5 | /Bar/Baz.pdf | ...
So far I am using multiple SQL queries to fetch and store all categories in PHP arrays and append file entries to those arrays when iterating over the files table. It is highly unlikely this is the best method even though the number of files is pretty small. I was wondering whether there is a query which will automatically sort those entries into temporary tables (just a spontaneous guess to use those) which I can output to drastically improve my current way to obtain the data.
You cannot do this with just MySQL, but you can with a combination of a JOIN and some PHP.
SELECT * FROM files f LEFT JOIN fileCategories c USING (categoryId) ORDER BY c.categoryName ASC
Be sure to order by the category first (name or ID) and optionally order by other params after that to allow the following code example to work as expected.
In PHP, iterate over the result, remember the category id from each row, and when it changes output the category delimiter. Assume the query result is stored in $dbRes.
Example Code:
$lastCategoryId = null;
while ($row = $dbRes->fetchRow()) {
    if ($lastCategoryId !== $row['categoryId']) {
        echo "--------------------" . PHP_EOL;
        echo $row['categoryName'] . PHP_EOL;
        echo "--------------------" . PHP_EOL;
    }
    echo $row['filePath'] . PHP_EOL;
    $lastCategoryId = $row['categoryId'];
}
My question is about tabulating data in MySql. I was wondering, how to best represent this javascript array in MySql? What index should I use? I'm going to use the data to populate a javascript array via PHP.
A[i] represents a card. B[i] represents a matching card.
A = new Array();
A[0] = new Array();
A[0][0]='eat';
A[0][1] = 1;
A[0][2] = 0;
A[1] = new Array();
A[1][0]='drink';
A[1][1] = 2;
A[1][2] = 0;
B = new Array();
B[0] = new Array();
B[0][0]='tacos';
B[0][1] = 1;
B[0][2] = 0;
B[1] = new Array();
B[1][0]='tequila';
B[1][1] = 2;
B[1][2] = 0;
I need to be able to uniquely identify components within the array later, so that I can use parts of the data to populate new arrays (So I can use and combine different cards into a new array). For example, I might want to populate a new array in javascript using A[0][0], A[0][1], A[0][2],B[0][0], B[0][1] and info from another array stored in the MySql (Lets say Y[2][0], Y[2][1],Y[2][2],Z[2][0], Z[2][1]).
This is what I've come up with so far.
-----------------------------------------
| card pair | card |card info|Tag|Tag2|
-----------------------------------------
| 1 | A | eat | 1 | 0 |
| 1 | B | tacos | 1 | - |
| 2 | A | drink | 2 | 0 |
| 2 | B | tequila | 2 | - |
-----------------------------------------
Maybe I need to add a primary index to the above one?
-------------------------------
|card pair |card info|Tag|Tag2|
-------------------------------
| 1A | eat | 1 | 0 |
| 1B | tacos | 1 | - |
| 2A | drink | 2 | 0 |
| 2B | tequila | 2 | - |
-------------------------------
I thought the card pair could be the index. Not sure if this is possible or a good idea. Also not sure what type of index I would use if I did.
If you have a better way to tabulate the data or can recommend what type of index to use I'd much appreciate it.
EDIT: I think I can do away with the last 2 columns (Tag and Tag2), so I think I might just use the table as below.
----------------------
|card pair |card info|
----------------------
| 1A | eat |
| 1B | tacos |
| 2A | drink |
| 2B | tequila |
----------------------
Should I add an incrementing index to the table? Is the card pair sufficient as the index?If yes, what is the best index type to use?
Thanks!
Well, from a database perspective, you will want to 'normalize' this information.
I think it would look more like this:
card
------------
card_id
info
card_pair
------------
card_1_id
card_2_id
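As a concrete sketch, that layout could be created like this. The table and column names follow the outline above; the types and constraints are guesses, not a definitive schema:

```sql
CREATE TABLE card (
    card_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    info    VARCHAR(255) NOT NULL
);

CREATE TABLE card_pair (
    card_1_id INT NOT NULL,
    card_2_id INT NOT NULL,
    PRIMARY KEY (card_1_id, card_2_id),
    FOREIGN KEY (card_1_id) REFERENCES card (card_id),
    FOREIGN KEY (card_2_id) REFERENCES card (card_id)
);
```

With this layout each card ('eat', 'tacos', ...) is stored once, and the pairing ('eat' matches 'tacos') is a single row in card_pair, so you can recombine cards into new sets with plain SELECTs instead of positional array indices.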