I want to populate a table from a MySQL database.
I have defined the tables and the mysql_query() calls so that this should be a lot cleaner.
Separately I am able to get the results, but I need to combine them, since I am making a four-column table that looks like this:
-------------------------------------------------------------
| | | | |
| Menu 01 info | Menu 01 info | Menu 06 info | Menu 06 info |
| | | | |
-------------------------------------------------------------
while ($menu01 = mysql_fetch_array($order01)) AND while ($menu06 = mysql_fetch_array($order06))
{
//TableStuff
It is the while line that I need to be free of errors. Any help would be great. PHP is not my strongest point, sorry. :-)
Use a boolean AND, and mind the grouping:
while (($menu01 = mysql_fetch_array($order01)) && ($menu06 = mysql_fetch_array($order06)))
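To make the combined loop concrete, here is a minimal sketch that swaps the mysql_fetch_array() calls for array_shift() on plain arrays (stand-ins for the $order01/$order06 result sets, since mysql_* needs a live connection). The && behaves the same way: the loop ends as soon as either source runs out of rows.

```php
<?php
// Stand-ins for the two query results; each entry mimics one fetched row.
$order01 = array(array('info' => 'Menu 01 row A'), array('info' => 'Menu 01 row B'));
$order06 = array(array('info' => 'Menu 06 row A'), array('info' => 'Menu 06 row B'));

$rows = array();
// Same shape as: while (($menu01 = mysql_fetch_array($order01)) && ($menu06 = mysql_fetch_array($order06)))
// array_shift() returns null (falsy) on an empty array, just as
// mysql_fetch_array() returns false when no rows remain.
while (($menu01 = array_shift($order01)) && ($menu06 = array_shift($order06))) {
    // Build one 4-column table row: two cells from each result set.
    $rows[] = '<tr>'
            . '<td>' . $menu01['info'] . '</td><td>' . $menu01['info'] . '</td>'
            . '<td>' . $menu06['info'] . '</td><td>' . $menu06['info'] . '</td>'
            . '</tr>';
}
echo implode("\n", $rows), "\n";
```

One caveat of the && loop: if one result set has more rows than the other, the extra rows are never fetched or displayed.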
I have a MySQL server that keeps events in a DB; the table looks like this:
id | epoch_time | type | event_text | ....
---|------------|------|-------------|-----
01 | 1487671205 | 0 | user-login | ....
02 | 1487671284 | 0 | user-logout | ....
03 | 1487671356 | 1 | sys_error | ....
04 | 1487671379 | 0 | user-logout | ....
05 | 1487671389 | 2 | user_error | ....
06 | 1487671397 | 1 | sys_error | ....
On the web UI there is a summary of the last 24 hours' events by type. Since the DB keeps a one-year backlog of data, there are over 1M records at the moment, which makes the site load very slowly (for obvious reasons).
The SQL query is simple:
SELECT COUNT(id) as total FROM `eventLog` WHERE `epoch_time` >= (UNIX_TIMESTAMP() - 86400)
My question is: is there a way to "tell" MySQL that the epoch_time column is sorted, so that once it hits a row where
epoch_time < (UNIX_TIMESTAMP() - 86400)
the query will stop scanning?
Thanks
[UPDATE]
Thank you all for your help. I tried adding the index, but the performance is still bad (~7-12 seconds to load the page).
Does it make sense to just keep pre-computed statistical information for this?
You can add an index on epoch_time using:
ALTER TABLE `eventLog` ADD INDEX epoch_time (`epoch_time`)
That will make your query run much faster.
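If the index alone does not help enough, the statistical-information idea from the [UPDATE] can be a small summary table refreshed by a cron job; a sketch, with illustrative table and column names:

```sql
-- Hypothetical summary table: one row per event type for the trailing 24 hours.
CREATE TABLE eventLog_summary (
  type  INT NOT NULL PRIMARY KEY,
  total INT NOT NULL
);

-- Refresh from a cron job every few minutes:
DELETE FROM eventLog_summary;
INSERT INTO eventLog_summary (type, total)
SELECT type, COUNT(id)
FROM eventLog
WHERE epoch_time >= (UNIX_TIMESTAMP() - 86400)
GROUP BY type;
```

The page then reads from eventLog_summary, which stays tiny regardless of how large eventLog grows.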
I have a MySQL database that holds log data from a vehicle's OBD-II reader. The application that interfaces with the reader collects a large amount (usually well over 2,000) of data point sets. Each of these data point sets gets its own row in the table, resulting in something like this:
+---------------+----------------------------------+---------------+----+----+-----+
| session | id | time | kd | kf | ... |
+---------------+----------------------------------+---------------+----+----+-----+
| 1420236074526 | 5cff4a3cc80b22cecb7de85266b25355 | 1420236074534 | 14 | 8 | ... |
| 1420236074526 | 5cff4a3cc80b22cecb7de85266b25355 | 1420236075476 | 17 | 8 | ... |
| 1420236074526 | 5cff4a3cc80b22cecb7de85266b25355 | 1420236075476 | 19 | 8 | ... |
| 1420236074526 | 5cff4a3cc80b22cecb7de85266b25355 | 1420236075476 | 23 | 8 | ... |
| 1420236074526 | 5cff4a3cc80b22cecb7de85266b25355 | 1420236077477 | 25 | 8 | ... |
+---------------+----------------------------------+---------------+----+----+-----+
kd and kf are the vehicle data types (vehicle speed and ambient air temperature, respectively).
There are two indexes in this table (which is called raw_logs):
session maps to columns session and id
id maps to column id
What I'd like to do is grab all of the rows that have the same session timestamp (1420236074526, for example) and lump them together into a "session". The goal here is to create a select list where I can view data by session. Ideally, I'd have something like this as the output:
<select>
<option>January 1, 2015 - 7:43AM</option>
<option>January 1, 2015 - 5:15PM</option>
<option>January 2, 2015, - 7:38AM</option>
...
</select>
This is what I have so far (I'm using Medoo to try and simplify the queries):
$session = $database->query("SELECT * FROM raw_logs USE INDEX (session)");
$sessionArray = array();
foreach ($session as $session) {
    $s_time = $session["session"];
    $sessionArray[$s_time] = array(
        "vehicle_speed" => $session["kd"],
        "ambient_air_temp" => $session["kf"]
    );
}
print_r($sessionArray);
This works... sort of. I get the session time as the array key, and kd and kf under that with the correct key/value pairs, but it doesn't seem to iterate through the whole thing. There are around 25,000 rows in the table at the moment, but it's only returning a few, and there doesn't seem to be any logical order to the listing... it'll return two results from January 8th, one from the 9th, four from the 10th, etc.
What would be the best way to select all sessions with the same time stamp, group them, and create a selectable object that will only display the data for the given session?
If you are not ordering the query, there likely won't be any logical order to the listing. Also, you are overwriting the session data where duplicate values exist in the session column. You likely want to do something like $sessionArray[$s_time][] = array(...) to append each row under the session id. Also, if you are having trouble, it is best to limit the results from your query down to 20-100 rows and keep massaging until you get the result you want.
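A sketch of that appending approach, using a hard-coded row array in place of the Medoo query result (the column names session/kd/kf are from the question; the second session value is made up for illustration):

```php
<?php
// Stand-in for the query result, as if ordered by session
// (e.g. SELECT * FROM raw_logs ORDER BY session, time).
$rows = array(
    array('session' => '1420236074526', 'kd' => 14, 'kf' => 8),
    array('session' => '1420236074526', 'kd' => 17, 'kf' => 8),
    array('session' => '1420236081111', 'kd' => 25, 'kf' => 9),
);

$sessionArray = array();
foreach ($rows as $row) {
    // Append ([]) instead of assigning, so rows sharing a session
    // value no longer overwrite each other.
    $sessionArray[$row['session']][] = array(
        'vehicle_speed'    => $row['kd'],
        'ambient_air_temp' => $row['kf'],
    );
}
print_r($sessionArray);
```

Each top-level key is now one session, holding every data point set recorded under it, which is exactly the grouping needed to build the select list.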
OK, last post on this subject (I hope). I've been trying to look into normalisation for tables on a website that I've been building, and I have to be honest that I've struggled with it. However, after my last post it seems that I may have finally grasped it and set up my tables properly.
However, one question remains. If I create a table that is seemingly in 3rd normal form, is it acceptable to have areas of white space or empty cells if the data is relevant to that specific table? Let me give you an example:
On a news website I have an Authors_Table
+----+-----------+----------+-----------------+-------------------+---------+----------+---------+
| ID | FIRSTNAME | SURNAME | EMAIL | BIO ( REQUIRED ) | TWITTER | FACEBOOK | WEBSITE |
+----+-----------+----------+-----------------+-------------------+---------+----------+---------+
| 01 | Brian     | Griffin  | brian@gmail.com | About me...       | URL     |          | URL     |
| 02 | Meg       | Griffin  | meg@gmail.com   | About me...       | URL     |          |         |
| 03 | Peter     | Griffin  | peter@gmail.com | About me...       |         | URL      | URL     |
| 04 | Glen      | Quagmire | glen@gmail.com  | About me...       | URL     | URL      |         |
+----+-----------+----------+-----------------+-------------------+---------+----------+---------+
This would be used on the article page to give a few details about who has written it, which is very common in newspapers and on modern blogs. Now, the last three columns (Twitter, Facebook, Website) are obviously relevant to the author and therefore to the PK (ID). As you know, though, not everyone has Twitter, Facebook, or a website, so the content of these cells is rather flexible, and empty cells will obviously occur in some cases.
It was suggested to do it another way so I produced:
Links
+----+-------------------+
| ID | TYPE |
+----+-------------------+
| 01 | Facebook |
| 02 | Twitter |
| 03 | Website |
+----+-------------------+
Author_Links
+----------+--------+------+
| AUTHOR | TYPE | LINK |
+----------+--------+------+
| 01 | 01 | URL |
| 01 | 02 | URL |
| 01 | 03 | URL |
| 02 | 02 | URL |
| 02 | 03 | URL |
| 03 | 01 | URL |
+----------+--------+------+
Now, I understand the concept of this, but isn't it just as "correct" to have and use the original table? Updates can be made using a form and PHP, e.g.:
$update_link_sql = "UPDATE authors SET facebook = ' NEW VALUE ' WHERE id = '$author_id'";
$update_link_res = mysqli_query($con, $update_link_sql);
As for me, the Authors_Table is correct.
| ID | FIRSTNAME | SURNAME | EMAIL | BIO ( REQUIRED ) | TWITTER | FACEBOOK | WEBSITE |
The only reason to have three tables:
Authors
| ID | FIRSTNAME | SURNAME | EMAIL | BIO ( REQUIRED ) |
Link_types
| ID | TYPE |
Author_links
| AUTHOR_ID | LINK_TYPE_ID | URL |
...is that your authors could have more than one link of a specific type (for example, two Twitter accounts; by the way, is that even allowed?).
If we suppose that any author can have no more than one account of each type, your version with a single table is correct.
Either way is acceptable, depending on functional requirements.
If you need to dynamically add more URL types/fields to a profile, then use the latter.
If there are only ever going to be three, then the former is better.
No need to over-engineer.
Yes, it's "correct" to store "optional" attributes as columns in the entity table. It's just when we have repeated values, e.g. multiple Facebook pages for an author, that we'd want to implement the child table. (We don't want to store "repeating" attributes in the entity table.)
As long as there's a restriction in the model, that an attribute will be limited to a single value (a single facebook page, a single twitter, etc.) those attributes can be stored in the entity table. We'd just use a NULL value to indicate that a value is not present.
One benefit of the separate table approach (outlined in your post) is that it would be "easier" to add a new "type" of URL. For example, if in the future we want to store a blogspot URL, or an instagram URL, instead of having to modify the entity table to add new columns, we can simply add rows to the "link_type" table and "author_link" table. That's the big benefit there.
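With the separate-table approach, fetching an author's links for the article page is a straightforward join (table and column names as in the Links/Author_Links example above; the author id is illustrative):

```sql
-- Fetch all link types and URLs for author 1.
SELECT l.TYPE, al.LINK
FROM Author_Links AS al
JOIN Links AS l ON l.ID = al.TYPE
WHERE al.AUTHOR = 1;
```

Adding a new link type is then just an INSERT into Links, with no schema change on the authors table.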
I would like to ask you for advice related to my own analytics system.
So far my system collects all the clicks and saves them in a SQL database.
First part of analytics.
The logs table in the SQL database looks like this:
+----+----------------------+-------------+---------------------------------------------+----------------+--------------+----------+
| id | time | address | address_to | ip | resolution | id_guest |
|----+----------------------+-------------+---------------------------------------------+----------------+--------------+----------|
| 1 | 2013-12-03#14:31:35 | index.php | https://www.youtube.com/watch?v=6VJBBUqr1wM | 89.XX.XXX.6 | 1366x768 | 6 |
| 2 | 2013-12-03#14:48:21 | file.php | https://www.youtube.com/watch?v=0EWbonj7f18 | 89.XX.XXX.6 | 1366x768 | 6 |
| 3 | 2013-12-03#16:16:55 | contact.php | https://www.youtube.com/watch?v=_o-XIryB2gg | 178.XX.XXX.140 | 1920x1080 | 11 |
| 4 | 2013-12-03#16:21:32 | index.php | https://www.youtube.com/watch?v=z0M96LyTyX4 | 178.XX.XXX.140 | 1920x1080 | 11 |
| 5 | 2013-12-03#16:44:32 | movies.php | https://www.youtube.com/watch?v=cUhPA5qIxDQ | 178.XX.XXX.140 | 1920x1080 | 11 |
+----+----------------------+-------------+---------------------------------------------+----------------+--------------+----------+
Each click is added to the database as a new record.
All the movies on my website are in a second table in the SQL database (movies):
+----+----------------------+-------------+---------------------+
| id | name | address | tags |
|----+----------------------+-------------+---------------------|
| 1 | 2013-12-03#14:31:35 | 6VJBBUqr1wM | bass,electro,trance |
| 2 | 2013-12-03#14:48:21 | 0EWbonj7f18 | electro,house,new |
| 3 | 2013-12-03#16:16:55 | _o-XIryB2gg | electro,party,set |
| 4 | 2013-12-03#16:21:32 | z0M96LyTyX4 | trance,house,new |
| 5 | 2013-12-03#16:44:32 | cUhPA5qIxDQ | techno,new,set |
+----+----------------------+-------------+---------------------+
Everything works flawlessly. In the database I have all the movies viewed by each user, whom I want to identify precisely, so I record the IP + resolution.
First question:
Is this a good method for identifying a user?
--
Second part of analytics.
Now I want to use the collected logs to display an interface with movies based on the material the user has browsed.
I select all the logs from the database for the user who enters the website.
From the logs I take each film's identifier and look it up in the movies table to fetch its tags, which I put into an array. For example, a user with ID = 6 will have the array:
array(
    [0] => bass,
    [1] => electro,
    [2] => trance,
    [3] => electro,
    [4] => house,
    [5] => new
);
Now I sort the contents of the array by how often each tag occurs (tag => count, most frequent first):
array(
    [electro] => 2,
    [bass] => 1,
    [trance] => 1,
    [house] => 1,
    [new] => 1
);
On the basis of the array's contents, I can show the user videos that might interest him.
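The collect-count-sort steps above can be done directly with array_count_values() and arsort(); a sketch using the example tags from the question:

```php
<?php
// Tags collected from the user's viewed movies (as in the example above).
$tags = array('bass', 'electro', 'trance', 'electro', 'house', 'new');

// Count how many times each tag occurs, then sort by count,
// highest first, keeping the tag names as keys.
$counts = array_count_values($tags);
arsort($counts);

print_r($counts); // electro => 2, the rest => 1
```

The keys of $counts, read in order, are then the user's tags from most to least frequent, ready to drive the recommendations.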
Everything worked perfectly, but I only discovered the problem now...
The logs table now holds more than 4.5 million records. As you can imagine, searching such a large number of records takes a lot of time, and entering the site sometimes takes up to 10 seconds...
I hope my poor English is fairly clear.
Please, any advice on how to solve this page-loading problem?
Use indexes where needed (hard to tell exactly where, since you didn't show any queries). Basically, you want indexes on the columns used in the WHERE part of your queries and also in JOINs. You don't have to index a column that stays the same most of the time (isloggedin, isadmin, language, and so on).
Make search tables for data you need to search. For example, if you need to know the preferred resolution or how many times a user has visited the site, you can set up a cron job to parse this data for all users and store it in a search table. This can also be used to produce statistics if you need them. For those tags you could have a table with user_id, tag, count.
If you only need the last visited page, last resolution, etc., just make a table for that, where you store and update one row per user.
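For the tag suggestion in particular, the search table could look like this (a sketch; the table name and column sizes are my own, not from the post):

```sql
-- Per-user tag counts, refreshed by the cron job instead of
-- being recomputed from 4.5M log rows on every page load.
CREATE TABLE user_tag_counts (
  user_id INT NOT NULL,
  tag     VARCHAR(64) NOT NULL,
  `count` INT NOT NULL,
  PRIMARY KEY (user_id, tag)
);
```

The page then only reads the handful of rows for the current user, ordered by `count` DESC.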
I'm displaying a record set using DataTables, pulling records from two tables.
Table A
sno | item_id | start_date | end_date | created_on |
===========================================================
10523563 | 2 | 2013-10-24 | 2013-10-27 | 2013-01-22 |
10535677 | 25 | 2013-11-18 | 2013-11-29 | 2013-01-22 |
10587723 | 11 | 2013-05-04 | 2013-05-24 | 2013-01-22 |
10598734 | 5 | 2013-06-14 | 2013-06-22 | 2013-01-22 |
Table B
id | item_name |
=====================================
2 | Timesheet testing |
25 | Vigour |
11 | Fabwash |
5 | Cruise |
Now, since the number of records returned is going to become large in the near future, I want the processing to be done server-side. I've successfully managed to achieve that, but it came at a cost: I'm running into a problem while dealing with filters.
From the figure above, (1) is the column whose underlying value is an int (item_id), but using some small modifications inside the while loop over the MySQL result, I'm displaying the corresponding string from Table B.
Now, if I use the filter (2), it works fine, since those values come from Table A.
The Problem
When I try to filter on field (3), if I enter a string value such as fab, it says no record found. But if I enter an int such as 11, I get a single row which contains Fabwash as the item name.
So while filtering, I'm required to use the raw value stored in Table A and not its corresponding string value from Table B. I hope the point I'm putting across is understandable, because it is hard to explain in words.
I'm clueless on how to solve the issue.
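One common way out (my assumption, not from the post) is to move the lookup into the server-side query itself, joining Table B up front so the filter is applied to the item name rather than the id; sketched with placeholder table names table_a/table_b:

```sql
-- table_a/table_b are placeholders for Table A and Table B above.
-- Filtering on b.item_name lets a search such as 'fab' match 'Fabwash'.
SELECT a.sno, b.item_name, a.start_date, a.end_date, a.created_on
FROM table_a AS a
JOIN table_b AS b ON b.id = a.item_id
WHERE b.item_name LIKE '%fab%';
```

With the join in place, the DataTables filter input can be mapped to the item_name column instead of item_id.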