I have code in which a foreach is nested inside another foreach.
$order_id = '1,1,8';
$order_no_first = 'F,SH,C';
$order_id1 = explode(',', $order_id);
$order_no_first1 = explode(',', $order_no_first);
foreach ($order_id1 as $ord_id) {
    foreach ($order_no_first1 as $ord_no_first) {
        if ($ord_id != '') {
            $this->receipt->chageBagStatus($ord_id, $ord_no_first);
            $add = $this->receipt->addJobOrderNew($ord_id, $ord_no_first, $bag_no);
        }
    }
}
Now the above code iterates 3 × 3 times, resulting in 9 rows in MySQL.
//Current Output
order_id orderr_no_first
-------- ---------------
8 C
8 SH
8 F
1 C
1 SH
1 F
1 C
1 SH
1 F
The above output is wrong. I want the output as below,
//Required Output
order_id orderr_no_first
-------- ---------------
8 C
1 SH
1 F
I know it's because I am using nested foreach loops, but I don't know how to solve this issue. Is there any solution? Thank you.
Just use one foreach, like this:
foreach ($order_id1 as $key => $ord_id) {
    if ($ord_id != '') {
        $this->receipt->chageBagStatus($ord_id, $order_no_first1[$key]);
        $add = $this->receipt->addJobOrderNew($ord_id, $order_no_first1[$key], $bag_no);
    }
}
I hope this will work for you:
$order_id = '1,1,8';
$order_no_first = 'F,SH,C';
$order_id1 = explode(',', $order_id);
$order_no_first1 = explode(',', $order_no_first);
// Reverse both arrays so the pairs come out in the required order: 8 C, 1 SH, 1 F
$order_id1 = array_reverse($order_id1);
$order_no_first1 = array_reverse($order_no_first1);
$i = 0;
foreach ($order_id1 as $ord_id) {
    echo $ord_id . ' ' . $order_no_first1[$i] . '<br/>';
    $i++;
}
I have a database table with six columns: A, B, C, D, E, X. For each combination of A, B, C, D, E I have a different value of X.
I need a way to search through it that will return all values of X for a given partial combination (for example, all X where A=1, or all X where A=1 and B=2, etc.).
My thought was to translate it into a 5-D array which looks like this:
Array[A][B][C][D][E]=X;
But now I'm trying to extract sub-arrays when I don't know how many of the dimensions will be constant. So I need to be able to extract all values of X for Array[1][5][][][] or Array[2][4][5][][], etc.
And I'm totally stuck.
I'm trying to do six nested loops, but I don't know how to handle the dimensions that are constant.
Any ideas would be very helpful.
Edit
Database:
A B C D E X
1 1 1 1 1 53
1 1 2 3 2 34
2 1 1 4 2 64
Turned it into an array:
Array[1][1][1][1][1]=53
Array[1][1][2][3][2]=34
For
Input: A=1
Output 53,34
Input A=1,B=1,C=1
Output: 53
etc
Try this, then:
<?php
// Note: the mysql_* functions are deprecated; mysqli or PDO is preferred today.
$arr = array();
$result = mysql_query("SELECT A,B,C,D,E,X FROM table_name ORDER BY A ASC, B ASC, C ASC, D ASC, E ASC");
if (mysql_num_rows($result) > 0) {
    while ($row = mysql_fetch_assoc($result)) {
        array_push($arr, $row);
    }
}

// Returns a comma-separated list of X for every row matching the constant
// dimensions given in $values, e.g. array('A' => 1, 'B' => 2)
function search($arr, $values) {
    $return = array();
    foreach ($arr AS $key => $value) {
        $ok = true;
        foreach (array('A', 'B', 'C', 'D', 'E') AS $letter) {
            if (array_key_exists($letter, $values)) {
                if ($value[$letter] != $values[$letter]) {
                    $ok = false;
                    break;
                }
            }
        }
        if ($ok) array_push($return, $value['X']);
    }
    return (($return) ? implode(',', $return) : false);
}

echo '<pre>';
print_r(search($arr, array('A' => 1)));
echo '</pre>';
?>
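If you'd rather keep the 5-D array from the question, a recursive walk can collect every X under a partial key path, treating the unspecified dimensions as wildcards. This is only a sketch (`collect_x` is a made-up helper name; the array shape is assumed to be Array[A][B][C][D][E] = X, as in the question):

```php
<?php
// Collect all X values under a partial key path such as array(1) (A=1)
// or array(1, 1, 1) (A=1, B=1, C=1). Dimensions not listed are wildcards.
function collect_x($array, $path) {
    if (count($path) > 0) {
        // Constant dimension: descend only into the matching key
        $key = array_shift($path);
        return isset($array[$key]) ? collect_x($array[$key], $path) : array();
    }
    if (!is_array($array)) {
        return array($array); // reached a leaf X value
    }
    // Wildcard dimension: descend into every key and merge the results
    $result = array();
    foreach ($array as $sub) {
        $result = array_merge($result, collect_x($sub, $path));
    }
    return $result;
}

// The sample database from the question
$data = array();
$data[1][1][1][1][1] = 53;
$data[1][1][2][3][2] = 34;
$data[2][1][1][4][2] = 64;

echo implode(',', collect_x($data, array(1))) . PHP_EOL;       // 53,34
echo implode(',', collect_x($data, array(1, 1, 1))) . PHP_EOL; // 53
```

This avoids materializing six nested loops: each constant dimension is consumed from the path, and everything after it fans out automatically.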
My question might be a bit vague, because I cannot quite figure it out.
I have a piece of PHP that tries to convert a mysql query result into an array "tree".
I.e. arrays of arrays depending on the defined groups.
The code assumes that a column name would start with a double underscore __ to indicate grouping and the results will already be ordered by the grouping.
The code works, but in certain cases it slows down to unusable speeds, and they are cases in which I would expect it to be fast: only one grouping with only a few unique values and many items in each branch sometimes takes up to 30 seconds.
Other cases with many layers of branches and many different values take only a second. (The result set is usually around 20,000 rows.)
So, my question I guess is simply: what is wrong with my code? Where am I messing up so badly that it impacts performance this significantly?
P.S. I'm a relative PHP novice, so be gentle :)
Sorry, no code comments O_o
$encodable = array();
$rownum = 0;
$branch = null;
$row = null;
$first = true;
$NULL = null;
$result = mysql_query($value, $mysql);
error_log(date("F j, Y, g:i a")."\r\n", 3, "debug.log");
if (gettype($result) == "resource")
{
    while ($obj = mysql_fetch_object($result))
    {
        $newrow = true;
        $branch = &$encodable;
        $row = &$NULL;
        if (count($branch) > 0)
        {
            $row = &$branch[count($branch)-1];
        }
        foreach ($obj as $column => $value)
        {
            if ($column[0] == '_' && $column[1] == '_')
            {
                $gname = substr($column, 2);
                if (isset($row[$gname]) && $row[$gname] == $value)
                {
                    $branch = &$row["b"];
                    $row = &$NULL;
                    if (count($branch) > 0)
                    {
                        $row = &$branch[count($branch)-1];
                    }
                }
                else
                {
                    $branch[] = array();
                    $row = &$branch[count($branch)-1];
                    $row[$gname] = $value;
                    $row["b"] = array();
                    $branch = &$row["b"];
                    $row = &$NULL;
                    if (count($branch) > 0)
                    {
                        $row = &$branch[count($branch)-1];
                    }
                }
            }
            else
            {
                if ($newrow)
                {
                    $branch[] = array();
                    $row = &$branch[count($branch)-1];
                    $newrow = false;
                }
                $row[$column] = $value;
            }
        }
        $rownum++;
    }
}
$encoded = json_encode($encodable);
EDIT:
A sample output: the resulting array converted to JSON.
This small set is grouped by "av"; "b" is created by the code for each branch and then contains the list of [hid, utd] records per av.
[{"av":"eset nod","b":[{"hid":"3","utd":"1"}]},{"av":"None","b":[{"hid":"2","utd":"0"},{"hid":"4","utd":"0"},{"hid":"5","utd":"0"},{"hid":"1","utd":"0"}]}]
The actual sql result that produced this result is:
+----------+-----+-----+
| __av | hid | utd |
+----------+-----+-----+
| eset nod | 3 | 1 |
| None | 2 | 0 |
| None | 4 | 0 |
| None | 5 | 0 |
| None | 1 | 0 |
+----------+-----+-----+
Turns out it's all the calls to count($branch).
Apparently, passing a variable that is bound to a reference to a function that doesn't expect a reference parameter (like count()) causes PHP to make a copy of the variable to operate on.
In my case, arrays with thousands of elements. That also explains why the results with few (but large) branches are the ones that suffer most.
See this thread:
Why is calling a function (such as strlen, count etc) on a referenced value so slow?
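A minimal sketch of the workaround (not the full tree builder above): take the index returned by array_push() when appending, instead of calling count() on the referenced array afterwards.

```php
<?php
// Calling count() on a large array that is bound to a reference forces
// older PHP versions to copy the array before counting it. array_push()
// returns the new element count, so the index of the element just
// appended is available without ever calling count().
$branch = array();

// Instead of:  $branch[] = array(...); $row = &$branch[count($branch) - 1];
$last = array_push($branch, array('hid' => 3, 'utd' => 1)) - 1;
$row  = &$branch[$last];

$row['utd'] = 0;            // writes through the reference into $branch
echo $branch[$last]['utd']; // 0
```

The same idea applies anywhere in the loop: keep the last index in a plain integer variable and update it on each append, so the hot path never counts the (possibly huge) branch array.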
Given input which shows tag assignments to images, as follows (read from php://stdin line by line, as the input can get rather large):
image_a tag_lorem
image_a tag_ipsum
image_a tag_amit
image_b tag_sit
image_b tag_dolor
image_b tag_ipsum
... (there are more lines, may get up to a million)
The expected output is shown below. Basically it is the same format, with an extra field showing whether the image-tag combination exists in the input. Note that for every image it lists all the available tags and shows whether the tag is assigned to the image, using 1/0 at the end of each line.
image_a tag_sit 0
image_a tag_lorem 1
image_a tag_dolor 0
image_a tag_ipsum 1
image_a tag_amit 1
image_b tag_sit 1
image_b tag_lorem 0
image_b tag_dolor 1
image_b tag_ipsum 1
image_b tag_amit 0
... (more)
I have posted my not-so-efficient solution down there. To give a better picture of the input and output: I fed 745 rows (which describe the tag assignments of 10 images) into the script via stdin, and I received 555025 lines after the execution of the script, using about 0.4 MB of memory. However, it may wear the hard disk out faster because of the heavy disk I/O (while writing/reading the temporary column cache file).
Is there any other way of doing this? I have another script that can turn the stdin into something like this (not sure if this is useful):
image_foo tag_lorem tag_ipsum tag_amit
image_bar tag_sit tag_dolor tag_ipsum
p/s: the order of tag_* is not important, but it has to be the same for all rows, i.e. this is not what I want (notice the order of tag_* is inconsistent between image_foo and image_bar):
image_foo tag_lorem 1
image_foo tag_ipsum 1
image_foo tag_dolor 0
image_foo tag_sit 0
image_foo tag_amit 1
image_bar tag_sit 1
image_bar tag_lorem 0
image_bar tag_dolor 1
image_bar tag_ipsum 1
image_bar tag_amit 0
p/s2: I don't know the range of tag_* until I finish reading stdin
p/s3: I don't understand why I got down-voted. If clarification is needed I am more than happy to provide it; I am not trying to make fun of something or post nonsense here. I have re-written the question to make it sound more like a real problem. However, the script really doesn't have to care about what the input is or whether a database is used (well, the data is retrieved from an RDF data store if you MUST know), because I want the script to be usable for other types of data as long as the input is in the right format (hence the original version of this question was very general).
p/s4: I am trying to avoid using an in-memory array because I want to avoid out-of-memory errors as much as possible (if 745 lines explaining just 10 images are expanded into 550k lines, just imagine I had 100, 1000, or even 10000+ images).
p/s5: if you have an answer in another language, feel free to post it here. I have thought of solving this with Clojure but still couldn't find a way to do it properly.
Sorry, maybe I misunderstood you; this looks too easy:
$stdin = fopen('php://stdin', 'r');
$columns_arr = array();
$rows_arr = array();

function set_empty_vals(&$value, $key, $columns_arr) {
    $value = array_merge($columns_arr, $value);
    ksort($value);
    foreach ($value AS $val_name => $flag) {
        echo $key.' '.$val_name.' '.$flag.PHP_EOL;
    }
    $value = NULL;
}

while ($line = fgets($stdin)) {
    $line = trim($line);
    list($row, $column) = explode(' ', $line);
    $row = trim($row);
    $column = trim($column);
    if (!isset($rows_arr[$row]))
        $rows_arr[$row] = array();
    $rows_arr[$row][$column] = 1;
    $columns_arr[$column] = 0;
}
array_walk($rows_arr, 'set_empty_vals', $columns_arr);
UPD:
1 million lines is easy for PHP:
$columns_arr = array();
$rows_arr = array();

function set_null_arr(&$value, $key, $columns_arr) {
    $value = array_merge($columns_arr, $value);
    ksort($value);
    foreach ($value AS $val_name => $flag) {
        //echo $key.' '.$val_name.' '.$flag.PHP_EOL;
    }
    $value = NULL;
}

for ($i = 0; $i < 100000; $i++) {
    for ($j = 0; $j < 10; $j++) {
        $row = 'row_foo'.$i;
        $column = 'column_ipsum'.$j;
        if (!isset($rows_arr[$row]))
            $rows_arr[$row] = array();
        $rows_arr[$row][$column] = 1;
        $columns_arr[$column] = 0;
    }
}
array_walk($rows_arr, 'set_null_arr', $columns_arr);
echo memory_get_peak_usage();
147 MB for me.
Last UPD: this is how I see a low-memory-usage (but rather fast) script:
//Approximate stdin buffer size, 1 MB should be good
define('MY_STDIN_READ_BUFF_LEN', 1048576);
//Approximate tmpfile buffer size, 1 MB should be good
define('MY_TMPFILE_READ_BUFF_LEN', 1048576);
//Custom stdin line delimiter (\r\n, \n, \r etc.)
define('MY_STDIN_LINE_DELIM', PHP_EOL);
//Custom tmpfile line delimiter - choose the smallest possible
define('MY_TMPFILE_LINE_DELIM', "\n");
//Custom output line delimiter - choose the smallest possible
define('MY_OUTPUT_LINE_DELIM', "\n");

function my_output_arr($field_name, $columns_data) {
    ksort($columns_data);
    foreach ($columns_data AS $column_name => $column_flag) {
        echo $field_name.' '.$column_name.' '.$column_flag.MY_OUTPUT_LINE_DELIM;
    }
}

$tmpfile = tmpfile() OR die('Can\'t create/open temporary file!');
$buffer_len = 0;
$buffer = '';
//I don't think there is a point in saving the columns array in a file -
//it should be small enough to hold in memory.
$columns_array = array();
//Open stdin for reading
$stdin = fopen('php://stdin', 'r') OR die('Failed to open stdin!');
//Main stdin reading and tmp file writing loop.
//Using fread + explode + a big buffer showed a great performance boost
//in comparison with fgets()
while ($read_buffer = fread($stdin, MY_STDIN_READ_BUFF_LEN)) {
    $lines_arr = explode(MY_STDIN_LINE_DELIM, $buffer.$read_buffer);
    $read_buffer = '';
    $lines_arr_size = count($lines_arr) - 1;
    $buffer = $lines_arr[$lines_arr_size];
    for ($i = 0; $i < $lines_arr_size; $i++) {
        $line = trim($lines_arr[$i]);
        //There must be a space in each line - we break on it
        if (!strpos($line, ' '))
            continue;
        list($row, $column) = explode(' ', $line, 2);
        $columns_array[$column] = 0;
        //Save line in temporary file
        fwrite($tmpfile, $row.' '.$column.MY_TMPFILE_LINE_DELIM);
    }
}
fseek($tmpfile, 0);
$buffer = ''; //Reset the carry-over buffer before re-reading the tmp file
$cur_row = NULL;
$row_data = array();
while ($read_buffer = fread($tmpfile, MY_TMPFILE_READ_BUFF_LEN)) {
    $lines_arr = explode(MY_TMPFILE_LINE_DELIM, $buffer.$read_buffer);
    $read_buffer = '';
    $lines_arr_size = count($lines_arr) - 1;
    $buffer = $lines_arr[$lines_arr_size];
    for ($i = 0; $i < $lines_arr_size; $i++) {
        list($row, $column) = explode(' ', $lines_arr[$i], 2);
        if ($row !== $cur_row) {
            //Output the previous row before starting a new one
            if ($cur_row !== NULL)
                my_output_arr($cur_row, array_merge($columns_array, $row_data));
            $cur_row = $row;
            $row_data = array();
        }
        $row_data[$column] = 1;
    }
}
if (count($row_data) && $cur_row !== NULL) {
    my_output_arr($cur_row, array_merge($columns_array, $row_data));
}
Here's a MySQL example that works with your supplied test data:
CREATE TABLE `url` (
`url1` varchar(255) DEFAULT NULL,
`url2` varchar(255) DEFAULT NULL,
KEY `url1` (`url1`),
KEY `url2` (`url2`)
);
INSERT INTO url (url1, url2) VALUES
('image_a', 'tag_lorem'),
('image_a', 'tag_ipsum'),
('image_a', 'tag_amit'),
('image_b', 'tag_sit'),
('image_b', 'tag_dolor'),
('image_b', 'tag_ipsum');
SELECT url1, url2, assigned FROM (
    SELECT t1.url1, t1.url2, 1 AS assigned
    FROM url t1
    UNION
    SELECT t1.url1, t2.url2, 0 AS assigned
    FROM url t1
    JOIN url t2
      ON t1.url1 != t2.url1
    JOIN url t3
      ON t1.url1 != t3.url1
     AND t1.url2 = t3.url2
     AND t2.url2 != t3.url2
) tmp
ORDER BY url1, url2;
Result:
+---------+-----------+----------+
| url1 | url2 | assigned |
+---------+-----------+----------+
| image_a | tag_amit | 1 |
| image_a | tag_dolor | 0 |
| image_a | tag_ipsum | 1 |
| image_a | tag_lorem | 1 |
| image_a | tag_sit | 0 |
| image_b | tag_amit | 0 |
| image_b | tag_dolor | 1 |
| image_b | tag_ipsum | 1 |
| image_b | tag_lorem | 0 |
| image_b | tag_sit | 1 |
+---------+-----------+----------+
This should be simple enough to convert to SQLite, so if required you could use PHP to read the data into a temporary SQLite database and then extract the results.
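A hedged sketch of that SQLite route using PDO (an in-memory database here; the table and column names simply mirror the MySQL example above, and the pdo_sqlite extension is assumed to be available). Rather than porting the self-join verbatim, this variant cross joins distinct images with distinct tags and uses EXISTS to flag real assignments:

```php
<?php
// Load the pairs into a temporary in-memory SQLite database via PDO,
// then cross join distinct images with distinct tags and check membership.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE url (url1 TEXT, url2 TEXT)');

$insert = $db->prepare('INSERT INTO url (url1, url2) VALUES (?, ?)');
$pairs = array(
    array('image_a', 'tag_lorem'), array('image_a', 'tag_ipsum'),
    array('image_a', 'tag_amit'),  array('image_b', 'tag_sit'),
    array('image_b', 'tag_dolor'), array('image_b', 'tag_ipsum'),
);
foreach ($pairs as $pair) {
    $insert->execute($pair);
}

// Every image paired with every tag; EXISTS marks the real assignments
$sql = 'SELECT i.url1, t.url2,
               EXISTS (SELECT 1 FROM url u
                       WHERE u.url1 = i.url1 AND u.url2 = t.url2) AS assigned
        FROM (SELECT DISTINCT url1 FROM url) i
        CROSS JOIN (SELECT DISTINCT url2 FROM url) t
        ORDER BY i.url1, t.url2';
foreach ($db->query($sql) as $row) {
    echo $row['url1'] . ' ' . $row['url2'] . ' ' . $row['assigned'] . PHP_EOL;
}
```

With the sample data this prints the same ten rows as the MySQL result table above; the database file (or :memory: store) is discarded when the script ends.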
Put your input data into an array and then sort it using usort; define a comparison function which compares array elements by row value, then by column value when the row values are equal.
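For instance, a minimal sketch of that idea, assuming each input line has already been split into a row/column pair:

```php
<?php
// Sort (row, column) pairs by row first, then by column on ties,
// using usort with a comparison callback.
$pairs = array(
    array('image_b', 'tag_sit'),
    array('image_a', 'tag_lorem'),
    array('image_b', 'tag_dolor'),
    array('image_a', 'tag_ipsum'),
);

usort($pairs, function ($a, $b) {
    // Compare by row value first; fall back to column value when equal
    $cmp = strcmp($a[0], $b[0]);
    return $cmp !== 0 ? $cmp : strcmp($a[1], $b[1]);
});

foreach ($pairs as $pair) {
    echo $pair[0] . ' ' . $pair[1] . PHP_EOL;
}
// image_a tag_ipsum
// image_a tag_lorem
// image_b tag_dolor
// image_b tag_sit
```

Once sorted, all lines for one image are adjacent, so the 1/0 expansion can be done in a single pass.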
This is my current implementation. I don't like it, but it does the job for now.
#!/usr/bin/env php
<?php
define('CACHE_MATCH', 0);
define('CACHE_COLUMN', 1);
define('INPUT_ROW', 0);
define('INPUT_COLUMN', 1);
define('INPUT_COUNT', 2);

output_expanded_entries(
    cache_input(array(tmpfile(), tmpfile()), STDIN, fgets(STDIN))
);
echo memory_get_peak_usage();

function cache_input(Array $cache_files, $input_pointer, $input) {
    if (count($cache_files) != 2) {
        throw new Exception('$cache_files requires 2 file pointers');
    }
    if (feof($input_pointer) == FALSE) {
        cache_match($cache_files[CACHE_MATCH], trim($input));
        cache_column($cache_files[CACHE_COLUMN], process_line($input));
        cache_input(
            $cache_files,
            $input_pointer,
            fgets($input_pointer)
        );
    }
    return $cache_files;
}

function cache_column($cache_column, $input) {
    if (empty($input) === FALSE) {
        rewind($cache_column);
        $column = get_field($input, INPUT_COLUMN);
        if (column_cached_in_memory($column) === FALSE && column_cached_in_file($cache_column, fgets($cache_column), $column) === FALSE) {
            fputs($cache_column, $column . PHP_EOL);
        }
    }
}

function cache_match($cache_match, $input) {
    if (empty($input) === FALSE) {
        fputs($cache_match, $input . PHP_EOL);
    }
}

function column_cached_in_file($cache_column, $current, $column, $result = FALSE) {
    return $result === FALSE && feof($cache_column) === FALSE ?
        column_cached_in_file($cache_column, fgets($cache_column), $column, $column == $current)
        : $result;
}

function column_cached_in_memory($column) {
    static $local_cache = array(), $index = 0, $count = 500;
    $result = TRUE;
    if (in_array($column, $local_cache) === FALSE) {
        $result = FALSE;
        $local_cache[$index++ % $count] = $column;
    }
    return $result;
}

function output_expanded_entries(Array $cache_files) {
    array_map('rewind', $cache_files);
    for ($current_row = NULL, $cache = array(); feof($cache_files[CACHE_MATCH]) === FALSE;) {
        $input = process_line(fgets($cache_files[CACHE_MATCH]));
        if (empty($input) === FALSE) {
            if ($current_row !== get_field($input, INPUT_ROW)) {
                output_cache($current_row, $cache);
                $cache = read_columns($cache_files[CACHE_COLUMN]);
                $current_row = get_field($input, INPUT_ROW);
            }
            $cache = array_merge(
                $cache,
                array(get_field($input, INPUT_COLUMN) => get_field($input, INPUT_COUNT))
            );
        }
    }
    output_cache($current_row, $cache);
}

function output_cache($row, $column_count_list) {
    if (count($column_count_list) != 0) {
        printf(
            '%s %s %s%s',
            $row,
            key(array_slice($column_count_list, 0, 1)),
            current(array_slice($column_count_list, 0, 1)),
            PHP_EOL
        );
        output_cache($row, array_slice($column_count_list, 1));
    }
}

function get_field(Array $input, $field) {
    $result = NULL;
    if (in_array($field, array_keys($input))) {
        $result = $input[$field];
    } elseif ($field == INPUT_COUNT) {
        $result = 1;
    }
    return $result;
}

function process_line($input) {
    $result = trim($input);
    return empty($result) === FALSE && strpos($result, ' ') !== FALSE ?
        explode(' ', $result)
        : NULL;
}

function push_column($input, Array $result) {
    return empty($input) === FALSE && is_array($input) ?
        array_merge(
            $result,
            array(get_field($input, INPUT_COLUMN))
        )
        : $result;
}

function read_columns($cache_columns) {
    rewind($cache_columns);
    $result = array();
    while (feof($cache_columns) === FALSE) {
        $column = trim(fgets($cache_columns));
        if (empty($column) === FALSE) {
            $result[$column] = 0;
        }
    }
    return $result;
}
EDIT: yesterday's version was bugged :/