I have a slow query problem, though I may be wrong about the cause. Here is what I want to do:
I have to display more than 40 drop-down lists on a single page, all with the same fields fetched from the database, but I feel that the query takes too long to execute and also uses too many resources.
Here is an example...
$sql_query = "SELECT * FROM tbl_name";
$rows = mysql_query($sql_query);
Now I use a while loop to print all the records from that query in a drop-down list,
but I have to reprint the same records in the next drop-down list, up to 40 lists, so I use
mysql_data_seek() to move back to the first record, then reprint the next list, and so on until all 40 lists are done.
This seemed slow to me, so I tried a second method: running the same query again for each of the 40 lists, like this:
$sql_query2 = "SELECT * FROM tbl_name";
$rows2 = mysql_query($sql_query2);
Do you think I am wrong about the speed of the query, or can you suggest another way that is faster than these methods?
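For reference, the rewind approach described above looks roughly like this (a sketch; mysql_data_seek() moves the result pointer back to a given row number):
// print the same result set once per drop-down list
for ($i = 0; $i < 40; $i++) {
    mysql_data_seek($rows, 0); // rewind to the first record
    while ($row = mysql_fetch_assoc($rows)) {
        // print one <option> for this list
    }
}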
Try putting the rows into an array like so:
<?php
$rows = array();
$fetch_rows = mysql_query("SELECT * FROM table");
while ($row = mysql_fetch_assoc($fetch_rows)) {
    $rows[] = $row;
}
Then just use the $rows array in a foreach ($rows as $row) loop.
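For example, rendering one drop-down from the cached array (a sketch, assuming the table has id and name columns):
foreach ($rows as $row) {
    echo '<option value="' . $row['id'] . '">' . $row['name'] . '</option>';
}
Because $rows now lives in PHP memory, this foreach can be repeated for all 40 lists without touching the database again.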
There is considerable processing overhead associated with fetching rows from a MySQL result resource. Typically it would be quite a bit faster to store the results as an array in PHP rather than to query and fetch the same rowset again from the RDBMS.
$rowset = array();
$result = mysql_query(...);
if ($result) {
    while ($row = mysql_fetch_assoc($result)) {
        // Append each fetched row onto $rowset
        $rowset[] = $row;
    }
}
If your query returns lots of rows (thousands or tens of thousands or millions) and you need to use all of them, you may reach memory limitations by storing all rows into an array in PHP. In that case it may be more memory-conservative to fetch rows individually from MySQL, but it will still probably be more CPU intensive.
Instead of printing the records, going back, and printing them again, put the records in one big string variable, then echo it for each dropdown.
$str = "";
while ($row = mysql_fetch_assoc($rows)) {
    // instead of echo...
    $str .= [...];
}
// now for each dropdown
echo $str;
// will print all the rows.
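Concretely, that might look like this (a sketch, assuming id and name columns and <option> markup for the list items):
$options = "";
$result = mysql_query("SELECT id, name FROM tbl_name");
while ($row = mysql_fetch_assoc($result)) {
    $options .= '<option value="' . $row['id'] . '">' . $row['name'] . '</option>';
}
// the rows are formatted only once; the string is reused for all 40 lists
for ($i = 0; $i < 40; $i++) {
    echo '<select name="list' . $i . '">' . $options . '</select>';
}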
I have a large database that contains results of an experiment for 1500 individuals. Each individual has 96 data points. I wrote the following script to summarize and then format the data so it can be used by the analysis software. At first all was good until I had more than 500 individuals. Now I am running out of memory.
I was wondering if anyone has a suggestion on how to overcome the memory limit problem without sacrificing speed.
This is how the table looks in the database:
fishId assayId allele1 allele2
14_1_1 1 A T
14_1_1 2 A A
$mysql = new PDO('mysql:host=localhost; dbname=aquatech_DB', $db_user, $db_pass);
$query = $mysql->prepare("SELECT genotyped.fishid, genotyped.assayid, genotyped.allele1, genotyped.allele2, fishId.sex, " .
"fishId.role FROM `fishId` INNER JOIN genotyped ON genotyped.fishid=fishId.catId WHERE fishId.projectid=:project");
$query->bindParam(':project', $project, PDO::PARAM_INT);
$query->execute();
So this is the call to the database. It is joining information from two tables to build the file I need.
if (!$query) {
    // prepare() returned false, so ask the connection for the error
    $error = $mysql->errorInfo();
    print_r($error);
} else {
    $data = array();
    $rows = array();
    $role = array();
    $body = "";
    if ($results = $query->fetchAll()) {
        foreach ($results as $row) {
            $rows[] = $row[0];
            $role[$row[0]] = $row[5];
            $data[$row[0]][$row[1]]['alelleY'] = $row[2];
            $data[$row[0]][$row[1]]['alelleX'] = $row[3];
        }
        $rows = array_unique($rows);
        foreach ($rows as $ids) {
            $col2 = $role[$ids];
            $alelleX = $alelleY = $content = "";
            foreach ($snp as $loci) {
                $alelleY = convertAllele($data[$ids][$loci]['alelleY']);
                $alelleX = convertAllele($data[$ids][$loci]['alelleX']);
                $content .= "$alelleY\t$alelleX\t";
            }
            $body .= "$ids\t$col2\t" . substr($content, 0, -1) . "\n";
        }
    }
}
This parses the data. In the output file I need one row per individual rather than 96 rows per individual, which is why the data has to be reformatted. At the end of the script I just write $body to a file.
I need the output file to be
FishId Assay 1 Assay 2
14_1_1 A T A A
$location = "results/" . "$filename" . "_result.txt";
$fh = fopen("$location", 'w') or die ("Could not create destination file");
if(fwrite($fh, $body))
Instead of reading the whole result from your database query into a variable with fetchAll(), fetch it row by row:
while($row = $query->fetch()) { ... }
fetchAll() fetches the entire result in one go, which has its uses but is greedy with memory. Why not just use fetch(), which handles one row at a time?
You seem to be indexing the rows by the first column, creating another large array, and then removing duplicate items. Why not use SELECT DISTINCT in the query to remove the duplicates before they reach PHP?
I'm not sure what the impact would be on speed - fetch() may be slower than fetchAll() - but you don't have to remove duplicates from the array which saves some processing.
I'm also not sure what your second foreach is doing but you should be able to do it all in a single pass. I.e. a foreach loop within a fetch loop.
Other observations on your code above:
the $role array seems to do the same indexing job as $rows - using $row[0] as the key effectively removes the duplicates in a single pass. Removing the duplicates by SELECT DISTINCT is probably better but, if not, do you need the $rows array and the array_unique function at all?
if the same value of $row[0] can have different values of $row[5] then your indexing method will be discarding data - but you know what's in your data so I guess you've already thought of that (the same could be true of the $data array)
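To make the single-pass idea concrete, here is a rough sketch, not the original script: it assumes the query gains an ORDER BY genotyped.fishid so that each individual's 96 rows arrive together, that $snp, convertAllele(), $location, and $project exist as in the question, and that every row of a given fish carries the same role value.
$query = $mysql->prepare(
    "SELECT genotyped.fishid, genotyped.assayid, genotyped.allele1, genotyped.allele2, " .
    "fishId.sex, fishId.role FROM `fishId` INNER JOIN genotyped " .
    "ON genotyped.fishid = fishId.catId WHERE fishId.projectid = :project " .
    "ORDER BY genotyped.fishid");
$query->bindParam(':project', $project, PDO::PARAM_INT);
$query->execute();

$fh = fopen($location, 'w') or die("Could not create destination file");
$currentFish = null;
$currentRole = null;
$fishData = array(); // only ever holds one individual's rows

// write one finished individual and free its buffer
$flush = function ($fishId, $roleVal, $fishData) use ($fh, $snp) {
    $content = "";
    foreach ($snp as $loci) {
        $content .= convertAllele($fishData[$loci]['alelleY']) . "\t"
                  . convertAllele($fishData[$loci]['alelleX']) . "\t";
    }
    fwrite($fh, "$fishId\t$roleVal\t" . substr($content, 0, -1) . "\n");
};

while ($row = $query->fetch()) { // one row at a time, no fetchAll()
    if ($currentFish !== null && $row[0] !== $currentFish) {
        $flush($currentFish, $currentRole, $fishData); // previous fish is complete
        $fishData = array();
    }
    $currentFish = $row[0];
    $currentRole = $row[5];
    $fishData[$row[1]] = array('alelleY' => $row[2], 'alelleX' => $row[3]);
}
if ($currentFish !== null) {
    $flush($currentFish, $currentRole, $fishData); // last individual
}
fclose($fh);
This way memory usage stays proportional to one individual's data rather than the whole result set.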
There are different ways to fetch and print MySQL data with PHP.
For example, you can fetch data row by row with PHP loop:
$result = $mysqli->query("SELECT * FROM `table`");
while($data = $result->fetch_assoc())
{
echo '<div>'. $data["field"] .'</div>';
}
Also, you can store all the selected data in an array, and then go through it:
$result = $mysqli->query("SELECT * FROM `table`");
$data = $result->fetch_all(MYSQLI_ASSOC);
foreach($data as $i => $array)
{
echo '<div>'. $array["field"] .'</div>';
}
Is there any serious reason why I should use one method instead of the other? And what about performance in the case of enormous databases?
In the first example, while($data = $result->fetch_assoc()), you call the fetch_assoc function on every pass through the loop. The second one calls fetch_all just once, stores the data in an array, and then uses it. So in theory the second approach should be faster, but you'd better run a simple benchmark to make sure.
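A rough microbenchmark sketch along those lines (the table and field names are the placeholders from above; real numbers depend on your data, driver, and memory):
$start = microtime(true);
$result = $mysqli->query("SELECT * FROM `table`");
while ($data = $result->fetch_assoc()) {
    $dummy = $data["field"]; // touch the row so the work isn't skipped
}
printf("fetch_assoc loop: %.4f s\n", microtime(true) - $start);

$start = microtime(true);
$result = $mysqli->query("SELECT * FROM `table`");
$data = $result->fetch_all(MYSQLI_ASSOC);
foreach ($data as $array) {
    $dummy = $array["field"];
}
printf("fetch_all + foreach: %.4f s\n", microtime(true) - $start);
Note that fetch_all() requires the mysqlnd driver, and that it trades memory for speed: the whole result set is held in PHP at once.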
I'm probably missing something easy, but I seem to be blocked here. I have a MySQL database with two tables, and each table has several rows. The goal is to query the database and display the results in a table, so I start like this:
$query = "SELECT name, email, phone FROM users";
Then I have this PHP code:
$result = mysql_query($query);
Then I use this to get an array:
$row = mysql_fetch_array($result);
At this point, I thought I could simply loop through the $row array and display results in a table. I already have a function to do the looping and displaying of the table, but unfortunately the array seems to be incomplete before it even gets to the function.
To troubleshoot this I use this:
for ($i = 0; $i < count($row); $i++) {
echo $row[$i] . " ";
}
At this point, I only get the first row in the database, and there are 3 others that aren't displaying. Any assistance is much appreciated.
You need to use the following, because if you call mysql_fetch_array outside of a loop, you only get back an array of the columns in the first row. By setting $row to a new row returned by mysql_fetch_array each time the loop runs, you iterate through every row instead of just the first one.
while($row = mysql_fetch_array($result))
{
// This will loop through each row, now use your loop here
}
A better approach is to fetch each row as an associative array, since you're selecting only three named columns:
while ($row = mysql_fetch_assoc($result))
{
    echo $row['name'] . " ";
    echo $row['email'] . " ";
    echo $row['phone'] . " ";
}
One common way to loop through results is something like this:
$result = mysql_query($query);
while ($row = mysql_fetch_assoc($result)) {
print_r($row);
// do stuff with $row
}
Check out the examples and comments on PHP.net. You can find everything you need to know there.
I'm trying to figure out the best way to do something like this. Basically what I want to do (in a simple example) is query the database and return 3 rows; however, if there aren't 3 rows, say there are 0, 1, or 2, then I want to put other data in place of the missing rows.
Say my query returns 2 rows. Then there should be 1 list element returned with other data.
Something like this:
http://i42.tinypic.com/30xhi1f.png
$query = mysql_query("SELECT * FROM posts LIMIT 3");
while($row = mysql_fetch_array($query))
{
print "<li>".$row['post']."</li>";
}
//(this is just to give an idea of what i would likkeee to be able to do
else
{
print "<li>Add something here</li>";
}
You can get the number of items in the resultset with mysql_num_rows. Just build the difference to find out how many items are "missing".
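For example (a sketch along those lines):
$result = mysql_query("SELECT * FROM posts LIMIT 3");
$missing = 3 - mysql_num_rows($result); // how many placeholder items are needed
while ($row = mysql_fetch_array($result)) {
    print "<li>".$row['post']."</li>";
}
for ($i = 0; $i < $missing; $i++) {
    print "<li>Add something here</li>";
}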
There are three ways I can think of: get the row count with mysql_num_rows, prime an array with three values and replace them as you loop over the result set, or count down from three as you work and finish the count with a second loop, like this:
$result = db_query($query);
$addRows = 3;
while ($row = mysql_fetch_assoc($result)) {
    $addRows--;
    // do your stuff
}
while ($addRows-- > 0) {
    // do your replacement stuff
}
If you don't find a row, add the placeholder information accordingly:
$query = mysql_query("SELECT * FROM posts");
for ($i = 0; $i < 3; $i++) {
    $row = mysql_fetch_array($query);
    if ($row) {
        print "<li>".$row['post']."</li>";
    }
    else {
        // no row left, so print the placeholder instead
        print "<li>Add something here</li>";
    }
}
Assuming you store the rows in an array or somesuch, you can simply do some padding with a while loop (depending on how you generate the other data):
while (count($resultList) < 3) {
// add another row
}
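Putting it together, a sketch where $resultList is filled from the query first and then padded:
$resultList = array();
$result = mysql_query("SELECT * FROM posts LIMIT 3");
while ($row = mysql_fetch_array($result)) {
    $resultList[] = "<li>".$row['post']."</li>";
}
while (count($resultList) < 3) {
    $resultList[] = "<li>Add something here</li>"; // pad the missing rows
}
print implode("", $resultList);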