I am hoping someone can help me, because I am attempting to do something that is beyond my limits. I don't even know if a function exists for this within PHP or MySQL, so my search on Google hasn't been very productive.
I am using PHPWord in my PHP/MySQL project; the intention is to create a Word document based on a template.
I have used this guide, which is also on Stack Exchange.
However, this approach requires that the number of rows and the values be hard-coded; i.e. in his example he has used cloneRow('first_name', 3), which clones the table row so that there are 3 rows, and then goes on to manually define the tags, i.e.
$doc->setValue('first_name#1', 'Jeroen');
$doc->setValue('last_name#1', 'Moors');
$doc->setValue('first_name#2', 'John');
I am trying to make this dynamic. In my instance I am building a timetable, and one of the child tables is exactly that, so the query I have looks up how many entries there are and returns a count of them; this $Count is then used to dynamically create the correct number of rows. This is the count I am using:
$rs10 = CustomQuery("select count(*) as count FROM auditplanevents where AuditModuleFk='" . $result["AuditModulePk"]."'");
$data10 = db_fetch_array($rs10);
$Count = $data10["count"];
I then use $document->cloneRow('date', $Count); to execute the cloneRow function, which works great, and my document now looks something like this.
So, so far so good.
What I now want is a way to append each row value of the query into the document. So rather than manually setting the tag value, i.e. $doc->setValue('first_name#1', 'Jeroen');, I could use something like $doc->setValue('first_name#1', $name); where $name comes from row 1 of the query. I suspect this will involve a foreach loop, but I'm not too sure.
I hope the above makes sense, but please feel free to ask me for anything else and become my personal hero. Thanks
Update: Just for sake of clarity, what I would like is for the output to look something like this:
In my example there are 5 results and therefore 5 rows created; I want to set the values in the following way:
${date#1} = date column from the query's 1st row
${date#2} = date column from the query's 2nd row
${date#3} = date column from the query's 3rd row
${date#4} = date column from the query's 4th row
${date#5} = date column from the query's 5th row
I was able to sort this out by inserting the records from the query into a temp table with an auto-increment ID, then using:
//update timetable with events from temp table
$rs14 = CustomQuery("select * FROM tempauditplan where AuditModuleFk='" . $result["AuditModulePk"]."'");
while ($data14 = db_fetch_array($rs14)) {
    $document->setValue('date#' . $data14["rowid"], date('d/m/y', strtotime($data14["date"])));
    $document->setValue('time#' . $data14["rowid"], date('H:i', strtotime($data14["time"])));
    $document->setValue('auditor#' . $data14["rowid"], $data14["auditor"]);
    $document->setValue('area#' . $data14["rowid"], $data14["area"]);
    $document->setValue('notes#' . $data14["rowid"], $data14["notes"]);
    $document->setValue('contact#' . $data14["rowid"], $data14["contact"]);
}
The trick is to also have a function that truncates the temp table after use, so it can be used over again.
Might not be the most efficient way, but it works!
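For what it's worth, the temp table isn't strictly required: a plain counter incremented while looping over the original query gives the same tag numbering. A minimal sketch, assuming the same CustomQuery/db_fetch_array helpers used above:

$rs = CustomQuery("select * FROM auditplanevents where AuditModuleFk='" . $result["AuditModulePk"] . "' order by date");
$i = 1; // counter takes the place of the temp table's auto-increment rowid
while ($data = db_fetch_array($rs)) {
    $document->setValue('date#' . $i, date('d/m/y', strtotime($data["date"])));
    $document->setValue('time#' . $i, date('H:i', strtotime($data["time"])));
    $i++;
}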
I am creating an application that inserts (or updates) values in MySQL daily. A simplified recordset with headers is:
ItemName,ItemNumber,ItemQty,Date
test1,1,5,2016/01/01
test1,1,3,2016/01/02
test2,2,7,2016/01/01
test2,2,5,2016/01/02
Using a simple INSERT statement for the above recordset, with 16 columns and 216,000 records, takes about 4 minutes (PHP/MySQL) - this covers a week of values. Of course, if I import the same recordset again I get duplicates. I am trying to find a way to effectively disallow duplicate entries.
The aim: in the scenario where I import a recordset every day that contains dates for the current week, I end up with only the new dates being added.
The only thing that might change in consecutive imports is the ItemQty.
In PHP I wrote logic that queries the DB for ItemName, ItemNumber, Date with the values I am trying to insert. If the SELECT statement returns a result, I skip the row; if it doesn't, I proceed with inserting a new row.
The problem is that with the addition of this logic it no longer takes 4 minutes, but a couple of hours. (It works, though.)
Any ideas?
I was thinking that perhaps when I insert, I could add something like a checksum column, for example md5(ItemName, ItemNumber, ItemQty, Date), and then check that checksum rather than the SELECT * FROM $table WHERE ItemName = value AND ItemNumber = value AND ItemQty = value AND Date = value that I currently have.
My problem is that the records I insert have nothing unique about them individually. Uniqueness comes from a group of fields, and only when compared against the dataset being imported. If I manage somehow to get uniqueness, I'll solve my other problem too, which is deleting or updating a row when the ItemQty changes.
What you are looking for is a unique constraint. You can add all the relevant columns to the constraint, and if a row with the same values already exists, the insert will not proceed.
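A minimal sketch of that approach, assuming a table named items with the columns from the recordset above (all names are illustrative):

ALTER TABLE items ADD UNIQUE KEY uq_item (ItemName, ItemNumber, `Date`);

-- With the constraint in place, INSERT IGNORE silently skips rows that already exist:
INSERT IGNORE INTO items (ItemName, ItemNumber, ItemQty, `Date`)
VALUES ('test1', 1, 5, '2016-01-01');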
A few options:
1) In PHP, iterate over the records, mapping the duplicate ones and keeping the newest:
$itemsArray = []; // The array where you have stored your data
$uniqueItems = [];
foreach ($itemsArray as $item) {
    if (isset($uniqueItems[$item['ItemName']])) {
        $oldRecord = $uniqueItems[$item['ItemName']];
        $newTimeStamp = strtotime($item['Date']); // may need adjusting for your date format
        $currentTimeStamp = strtotime($oldRecord['Date']);
        if ($newTimeStamp > $currentTimeStamp) {
            $uniqueItems[$item['ItemName']] = $item;
        }
    } else {
        $uniqueItems[$item['ItemName']] = $item;
    }
}
// $uniqueItems now holds only 1 record per ItemName (the newest one)
2) Sort the data in PHP by date in ascending order (before inserting into the database). Then, in your INSERT clause, use ON DUPLICATE KEY UPDATE. This makes MySQL update records that hit a duplicate key. Because the older records are inserted first and the latest records last, the newest data overwrites the old.
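A rough sketch of option 2, assuming a unique key on (ItemName, ItemNumber) and a mysqli connection in $mysqli (table and column names are illustrative):

// Sort ascending by date so the newest row is inserted last and wins
usort($itemsArray, function ($a, $b) {
    return strtotime($a['Date']) - strtotime($b['Date']);
});

$stmt = $mysqli->prepare(
    "INSERT INTO items (ItemName, ItemNumber, ItemQty, `Date`)
     VALUES (?, ?, ?, ?)
     ON DUPLICATE KEY UPDATE ItemQty = VALUES(ItemQty), `Date` = VALUES(`Date`)"
);
foreach ($itemsArray as $item) {
    $stmt->bind_param('siis', $item['ItemName'], $item['ItemNumber'], $item['ItemQty'], $item['Date']);
    $stmt->execute();
}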
I have 18 rows in one of my tables with several columns. I would like to extract the data from several different non-sequential rows and echo them individually on different parts of my page.
For example, let's say I wanted to pick records 5, 9 and 13 from column_1 and echo them to the page in different places. How would I accomplish this? Would I need to perform one query each to retrieve these unique fields? Here is my code so far:
// $database connect code, blah blah...
$sql = "SELECT page_id FROM pages WHERE (not sure if something should go here)";
$query = mysqli_query($connect, $query);
while ($row = mysqli_fetch_array($query)){
maybe some code here, not sure...
};
Doing so with ALL of the records in a specific column is easy using a while loop, but that's not what I'm after. I thought there might be a way to cherry-pick a specific row with an associative array like $fetchRow['row']['row_number'], but that doesn't appear to work. Performing one query for each unique instance seems awfully inefficient.
I'm familiar with how to retrieve things from the database and display them on the page. I'm intermediate level, but this has gotten me stumped.
Any ideas?
Thanks.
You'll need an IN clause.
$sql = "SELECT page_id FROM pages WHERE column_1 IN (5, 9, 13)";
Or if you'd rather do it with a PHP array, something like this:
$cols = array(5,9,13);
$in_cols = implode(',', $cols);
$sql = "SELECT page_id FROM pages WHERE column_1 IN ({$in_cols}})";
If you're going to use #2, make sure you properly sanitize/prepare the statement before executing it.
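One hedged way to do that sanitization with a mysqli prepared statement, assuming $connect is the mysqli connection from the question (argument unpacking needs PHP 5.6+, and get_result needs the mysqlnd driver):

$cols = array(5, 9, 13);
$placeholders = implode(',', array_fill(0, count($cols), '?'));
$stmt = $connect->prepare("SELECT page_id FROM pages WHERE column_1 IN ($placeholders)");
$stmt->bind_param(str_repeat('i', count($cols)), ...$cols);
$stmt->execute();
$result = $stmt->get_result();
while ($row = $result->fetch_assoc()) {
    // use $row['page_id'] here
}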
I think what you're looking for is mysqli_data_seek.
However, since you only have 18 rows, I would store the whole result set in an array and use that array to echo the row elements I want on the page. That way you avoid the overhead of writing extra code (the best code is no code) to retrieve the specific rows you want. Furthermore, should you ever want to retrieve other records, there would be no need to modify your query.
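A minimal sketch of that store-everything approach (table and column names taken from the question):

$query = mysqli_query($connect, "SELECT page_id, column_1 FROM pages");
$rows = array();
while ($row = mysqli_fetch_assoc($query)) {
    $rows[] = $row;
}
// Echo whichever records you need, wherever you need them (0-based offsets):
echo $rows[4]['column_1'];  // 5th record
echo $rows[8]['column_1'];  // 9th record
echo $rows[12]['column_1']; // 13th record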
I have the following call to my database to retrieve the last row ID from an AUTO_INCREMENT column, which I use to find the next row ID:
$result = $mysqli->query("SELECT articleid FROM article WHERE articleid=(SELECT MAX(articleid) FROM article)");
$row = $result->fetch_assoc();
$last_article_id = $row["articleid"];
$last_article_id = $last_article_id + 1;
$result->close();
I then use $last_article_id as part of a filename system.
This was working perfectly... until I deleted a row, meaning the call retrieves an ID further down the order than the one I want.
An example would be:
ID
0
1
2
3
4-(deleted row)
5-(deleted row)
6-(next ID to be used for INSERT call)
I'd like the filename to be something like 6-0.jpg, however the filename ends up being 4-0.jpg as it targets ID 3 + 1 etc...etc...
Any thoughts on how I can get the next MySQL row ID when any number of previous rows have been deleted?
You are making a significant error by trying to predict the next auto-increment value. You do not have a choice if you want your system to scale: you have to either insert the row first, or rename the file later.
This is a classic oversight I see developers make -- you are coding this as if there would only ever be a single user on your site. It is extremely likely that at some point two articles will be created at almost the same time. Both queries will "predict" the same id, both will use the same filename, and one of the files will disappear, one of the table entries may point to the wrong file, and the other entry will reference a file that does not exist. And you'll be scratching your head asking "how did this happen?!"
Predicting auto-increment values is bad practice. Don't do it. Plan for concurrency.
Also, the information_schema tables are not really tables... they are server internals exposed to the SQL interface. Calls to the "tables" table, and show table status are expensive calls that you do not want to make in production... so don't be tempted to use something you find there.
You can use mysqli's insert_id after you insert the new row to retrieve the newly assigned key:
$mysqli->query($yourQueryHere);
$newId = $mysqli->insert_id; // a property in the object-oriented style, not a method
That requires the id field to be an AUTO_INCREMENT column, though (I believe).
As for the filename, you could store it in a variable, then do the query, then change the name and then write the file.
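Putting that together, a minimal sketch (the title column is hypothetical; the filename pattern comes from the question):

$mysqli->query("INSERT INTO article (title) VALUES ('My new article')");
$newId = $mysqli->insert_id;       // the id MySQL actually assigned
$filename = $newId . "-0.jpg";     // e.g. 6-0.jpg
// ...now write the image out under $filename...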
I have a database design here that looks this in simplified version:
Table building:
id
attribute1
attribute2
Data in there is like:
(1, 1, 1)
(2, 1, 2)
(3, 5, 4)
And the tables, attribute1_values and attribute2_values, structured as:
id
value
Which contains information like:
(1, "Textual description of option 1")
(2, "Textual description of option 2")
...
(6, "Textual description of option 6")
I am unsure whether this is the best setup, but it was done this way per the requirements of my project manager. There is definitely some sense in it, as you can now modify the text easily without messing up the IDs.
However, now I have come to a page where I need to list the attributes, so how do I go about it? I see two major options:
1) Make one big query which gathers all values from building and at the same time picks the correct textual representation from the attribute{x}_values table.
2) Make a small query that gathers all values from the building table. Then after that get the textual representation of each attribute one at a time.
Which is the best option to pick? Is option 1 even faster than option 2 at all? If so, is it worth the extra trouble in terms of maintenance?
Another suggestion would be to create a view on the server with only the data you need and query from that. That keeps the work on the server end, and you can pull just what you need each time.
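A hedged sketch of such a view, using the table names from the question:

CREATE VIEW building_attributes AS
SELECT b.id,
       a1.value AS attribute1_text,
       a2.value AS attribute2_text
FROM building b
JOIN attribute1_values a1 ON a1.id = b.attribute1
JOIN attribute2_values a2 ON a2.id = b.attribute2;

-- The page then only ever needs:
SELECT * FROM building_attributes;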
If you have a small number of rows in the attribute tables, then I suggest fetching them first - all of them! Store them in arrays using id as the array key.
Then, as you process the building data, you just use the respective array to look up each attribute value.
I would recommend something in between: parse the result from the first table in PHP, and figure out how many attributes you need to select from each attribute[x]_values table.
You can then select attributes in bulk using one query per table, rather than one query per attribute, or one query per building.
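For example, a sketch assuming a $buildings array already fetched from the building table and a mysqli connection in $connection:

// Collect the distinct attribute1 ids actually used, then fetch them in one query
$ids = array_unique(array_column($buildings, 'attribute1'));
$in  = implode(',', array_map('intval', $ids));
$res = mysqli_query($connection, "SELECT id, value FROM attribute1_values WHERE id IN ($in)");
$attr1 = array();
while ($row = mysqli_fetch_assoc($res)) {
    $attr1[$row['id']] = $row['value'];
}
// Repeat for attribute2_values, then look up $attr1[$building['attribute1']] as needed.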
Here is a PHP solution:
$query = "SELECT * FROM building";
$result = mysqli_query(connection,$query);
$query = "SELECT * FROM attribute1_values";
$result2 = mysqli_query(connection,$query);
$query = "SELECT * FROM attribute2_values";
$result3 = mysqli_query(connection,$query);
$n = mysqli_num_rows($result);
for($i = 1; $n <= $i; $i++) {
$row = mysqli_fetch_array($result);
mysqli_data_seek($result2,$row['attribute1']-1);
$row2 = mysqli_fetch_array($result2);
$row2['value'] //Use this as the value for attribute one of this object.
mysqli_data_seek($result3,$row['attribute2']-1);
$row3 = mysqli_fetch_array($result3);
$row3['value'] //Use this as the value for attribute one of this object.
}
Keep in mind that this solution requires that the ids in attribute1_values and attribute2_values start at 1 and increase by exactly 1 per row.
Oracle / Postgres / MySql DBA here:
Running a query many times has quite a bit of overhead. There are multiple round trips to the DB, and if it's on a remote server, this can add up. MySQL will also likely have to parse the same query multiple times, which is terribly inefficient if there are tons of rows. Now, one thing your PHP method (multiple queries) has as an advantage is that it uses less memory, since it can release each result as soon as it's no longer needed (if you run the queries as a nested loop, that is; if you fetch all the results up front, you'll have a lot of memory overhead, depending on the table sizes).
The optimal result would be to run it as 1 query, and fetch the results one at a time, displaying each one as needed and discarding it. That can wreak havoc with MVC frameworks, though, unless you're either comfortable running model code in your view or rendering small view fragments.
Your question is very generic, and I think that to get an answer you should give more hints about how this page will look and how big the dataset is.
Will you fetch all the buildings with their attributes, or just one at a time?
Your data structure looks very simple, and anything more powerful than a Raspberry Pi can handle it very well.
If you need one record at a time you don't need any special technique - just JOIN the tables.
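For instance, a sketch using the table names from the question:

SELECT b.id,
       a1.value AS attribute1_text,
       a2.value AS attribute2_text
FROM building b
JOIN attribute1_values a1 ON a1.id = b.attribute1
JOIN attribute2_values a2 ON a2.id = b.attribute2
WHERE b.id = 1;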
If you need to list all buildings and you want to save DB time, you have to measure your data.
If you have more attributes than buildings you have to choose one approach; if you have 8 attributes and 2000 buildings, you can consider caching the attributes in arrays, with one SELECT per table, and then just printing them from the arrays. I don't think you will see any speed drop or improvement with such simple tables on a modern computer.
$att1[1] = 'description1';
$att1[2] = 'description2';
....
Never do one-at-a-time queries; try to combine them into a single one.
MySQL will cache your query and it will run much faster. PHP loops are faster than making many requests to the database.
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again.
http://dev.mysql.com/doc/refman/5.1/en/query-cache.html
I am trying to make my DB more optimized, and I am at the beginning of indexing it, but I'm not sure how to do it right.
I have this query:
$year = date("Y");
$thisYear = $year;
//$nextYear = $thisYear + 1;
$sql = mysql_query("SELECT SUM(points) as userpoints
FROM ".$prefix."_publicpoints
WHERE date BETWEEN '$thisYear" . "-01-01' AND '$thisYear" . "-12-31' AND fk_player_id = $playerid");
$row = mysql_fetch_assoc($sql);
$userPoints = $row['userpoints'];
$sql = mysql_query("SELECT
fk_player_id
FROM ".$prefix."_publicpoints
WHERE date BETWEEN '$thisYear" . "-01-01' AND '$thisYear" . "-12-31'
GROUP BY fk_player_id
HAVING SUM(points) > $userPoints");
$row = mysql_fetch_assoc($sql);
$userWrank = mysql_num_rows($sql)+1;
I am not sure how to index this. I have tried indexing fk_player_id, but it still looks through all the rows (287,937).
I have indexed the date field which gives me this back in EXPLAIN:
id: 1
select_type: SIMPLE
table: nf_publicpoints
type: range
possible_keys: IDXdate
key: IDXdate
key_len: 3
ref: NULL
rows: 143969
Extra: Using where with pushed condition; Using temporary...
I also have 2 calls to the same table... Could that be done in one?
How do I index this and/or could it be done smarter?
You should definitely spend some time reading up on indexing, there's a lot written about it, and it's important to understand what's going on.
Broadly speaking, an index imposes an ordering on the rows of a table.
For simplicity's sake, imagine a table is just a big CSV file. Whenever a row is inserted, it's inserted at the end. So the "natural" ordering of the table is just the order in which rows were inserted.
Imagine you've got that CSV file loaded up in a very rudimentary spreadsheet application. All this spreadsheet does is display the data, and numbers the rows in sequential order.
Now imagine that you need to find all the rows that have some value "M" in the third column. Given what you have available, you have only one option: you scan the table, checking the value of the third column for each row. If you've got a lot of rows, this method (a "table scan") can take a long time!
Now imagine that in addition to this table, you've got an index. This particular index is the index of values in the third column. The index lists all of the values from the third column, in some meaningful order (say, alphabetically) and for each of them, provides a list of row numbers where that value appears.
Now you have a good strategy for finding all the rows where the value of the third column is M! For instance, you can perform a binary search! Whereas the table scan requires you to look at N rows (where N is the number of rows in the table), the binary search only requires that you look at log(N) index entries, in the very worst case. Wow, that sure is a lot easier!
Of course, if you have this index, and you're adding rows to the table (at the end, since that's how our conceptual table works), you need to update the index each and every time. So you do a little more work while you're writing new rows, but you save a ton of time when you're searching for something.
So, in general, indexing creates a tradeoff between read efficiency and write efficiency. With no indexes, inserts can be very fast -- the database engine just adds a row to the table. As you add indexes, the engine must update each index while performing the insert.
On the other hand, reads become a lot faster.
Hopefully that covers your first two questions (as others have answered -- you need to find the right balance).
Your third scenario is a little more complicated. If you're using LIKE, indexing engines will typically help with your read speed up to the first "%". In other words, if you're SELECTing WHERE column LIKE 'foo%bar%', the database will use the index to find all the rows where column starts with "foo", and then need to scan that intermediate rowset to find the subset that contains "bar". SELECT ... WHERE column LIKE '%bar%' can't use the index. I hope you can see why.
Finally, you need to start thinking about indexes on more than one column. The concept is the same, and behaves similarly to the LIKE stuff - essentially, if you have an index on (a,b,c), the engine will continue using the index from left to right as best it can. So a search on column a might use the (a,b,c) index, as would one on (a,b). However, the engine would need to do a full table scan if you were searching WHERE b=5 AND c=1.
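To make the left-to-right rule concrete, a sketch (table and column names are illustrative):

-- Given this composite index:
CREATE INDEX idx_abc ON t (a, b, c);

-- These searches can use the index (they form a left prefix):
SELECT * FROM t WHERE a = 1;
SELECT * FROM t WHERE a = 1 AND b = 5;
SELECT * FROM t WHERE a = 1 AND b = 5 AND c = 1;

-- This one cannot, because it skips the leading column:
SELECT * FROM t WHERE b = 5 AND c = 1;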
Hopefully this helps shed a little light, but I must reiterate that you're best off spending a few hours digging around for good articles that explain these things in depth. It's also a good idea to read your particular database server's documentation. The way indices are implemented and used by query planners can vary pretty widely.
More information and example visit here : http://blog.sqlauthority.com/category/sql-index/
Try creating an index on the date column; indexing fk_player_id alone will not help with this query. If that does not work, paste your EXPLAIN output...
For more information about indexes in Mysql look here: http://hackmysql.com/case1
Why not index the date column, seeing how that's the main criterion that will be evaluated in the lookup?
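Concretely, a hedged suggestion for the queries in this question: a composite index covering both the player filter and the date range (the table name is taken from the EXPLAIN output above; the player id is illustrative):

ALTER TABLE nf_publicpoints ADD INDEX idx_player_date (fk_player_id, `date`);

-- The first query can then seek straight to one player's rows for the year:
SELECT SUM(points) AS userpoints
FROM nf_publicpoints
WHERE fk_player_id = 42
  AND `date` BETWEEN '2013-01-01' AND '2013-12-31';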