Check performance of variable initializing - php

I have a loop like the following, and it runs for more than 6,000 records:
foreach ($csv as $value) {
    $research = ResearchData::create(array('company_id' => Input::get('company'), 'date' => Input::get('date')));
}
Here I use two values, company_id and date.
I want to know which of the following two approaches is the better way to do this.
Option 1:
$company_id = Input::get('company_id');
$date = Input::get('date');
foreach ($csv as $value) {
    $research = ResearchData::create(array('company_id' => $company_id, 'date' => $date));
}
Option 2:
foreach ($csv as $value) {
    $research = ResearchData::create(array('company_id' => Input::get('company'), 'date' => Input::get('date')));
}

From a performance point of view, option 1 will be faster, but only because Input::get takes a tiny bit longer: it does some checks, an array concatenation, and eventually grabs something from an array. This takes a completely negligible amount of time, but option 1 does it once, whereas option 2 does it on every iteration of the loop.
From any other point of view (code clarity, documentation, etc.) it's completely opinion based.
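If you want to verify the difference yourself, a rough micro-benchmark is easy to write. This is only a sketch: it runs inside a Laravel request (so Input::get is available), uses a made-up iteration count, and deliberately leaves out the ResearchData::create call so that only the cost of fetching the input is measured.

$iterations = 6000;

// Option 1: read the input once, outside the loop
$start = microtime(true);
$companyId = Input::get('company');
$date = Input::get('date');
for ($i = 0; $i < $iterations; $i++) {
    $row = array('company_id' => $companyId, 'date' => $date);
}
$option1 = microtime(true) - $start;

// Option 2: read the input on every iteration
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $row = array('company_id' => Input::get('company'), 'date' => Input::get('date'));
}
$option2 = microtime(true) - $start;

printf("Option 1: %.4fs, Option 2: %.4fs\n", $option1, $option2);

In practice the database calls inside the real loop will dwarf both numbers.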

You can do a bulk insert instead. I didn't run a performance check, but I expect better performance. See below:
$company_id = Input::get('company_id');
$date = Input::get('date');
$data = array_fill(0, count($csv), ['company_id' => $company_id, 'date' => $date]); // build all rows without the large foreach
ResearchData::insert(array_values($data)); // insert everything in one query
Documentation:
http://php.net/array_fill
http://laravel.com/docs/4.2/queries#inserts
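One caveat worth adding: at 6,000 rows with two columns a single insert() is probably fine, but if the CSV grows much larger, one huge insert can run into the database's placeholder or packet limits. A small variation of the sketch above batches the insert; the 500-row chunk size is an arbitrary choice, not something from the original answer.

$company_id = Input::get('company_id');
$date = Input::get('date');

$data = array_fill(0, count($csv), ['company_id' => $company_id, 'date' => $date]);

// Insert in batches to stay well under placeholder/packet limits
foreach (array_chunk($data, 500) as $chunk) {
    ResearchData::insert($chunk);
}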

Related

Predicting future IDs used before saving to the DB

I am saving a complex dataset in Laravel 4.2 and I am looking for ways to improve this.
A $bit has several $bobs. A single $bob can be one of several different classes. I am trying to duplicate a single $bit and all of its associated $bobs, and save all of this to the DB with as few calls as possible.
$newBit = $this->replicate();
$newBit->save();
$bobsPivotData = [];
foreach ($this->bobs as $index => $bob) {
    $newBob = $bob->replicate();
    $newBobs[] = $newBob->toArray();
    $bobsPivotData[] = [
        'bit_id' => $newBit->id,
        'bob_type' => get_class($newBob),
        'bob_id' => $newBob->id,
        'order' => $index
    ];
}
// I now want to save all the $bobs kept in $newBobs[]
DB::table('bobs')->insert($newBobs);
// Saving all the pivot data in one go
DB::table('bobs_pivot')->insert($bobsPivotData);
My problem is that I can't access $newBob->id inside the loop, because $newBob isn't inserted until after the loop.
I am looking for the best way to reduce saves to the DB. My best guess is that if I can predict the ids that are going to be used, I can do all of this in one loop. Is there a way I can predict these ids?
Or is there a better approach?
You could insert the bobs first and then use the generated ids to build the pivot data. This isn't a great solution in a multi-user environment, as new bobs could be inserted in the interim and mess things up, but it may suffice for your application.
$newBit = $this->replicate();
$newBit->save();
$bobsPivotData = [];
foreach ($this->bobs as $bob) {
    $newBob = $bob->replicate();
    $newBobs[] = $newBob->toArray();
}
$insertId = DB::table('bobs')->insertGetId($newBobs);
$insertedBobs = DB::table('bobs')->where('id', '>=', $insertId)->get();
foreach ($insertedBobs as $index => $newBob) {
    $bobsPivotData[] = [
        'bit_id' => $newBit->id,
        'bob_type' => get_class($newBob),
        'bob_id' => $newBob->id,
        'order' => $index
    ];
}
// Saving all the pivot data in one go
DB::table('bobs_pivot')->insert($bobsPivotData);
I have not tested this, so expect some pseudo-code.
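If the race condition is a deal-breaker, another option is to save each replicated bob individually inside a transaction, so its real id is available immediately. This is only a sketch: it trades the single bulk insert on bobs for one insert per bob, and it assumes this code lives in a method of the bit model (as the use of $this suggests), so the closure can see $this.

DB::transaction(function () {
    $newBit = $this->replicate();
    $newBit->save();

    $bobsPivotData = [];

    foreach ($this->bobs as $index => $bob) {
        $newBob = $bob->replicate();
        $newBob->save(); // one insert per bob, but its id is reliable right away

        $bobsPivotData[] = [
            'bit_id'   => $newBit->id,
            'bob_type' => get_class($newBob),
            'bob_id'   => $newBob->id,
            'order'    => $index,
        ];
    }

    // The pivot rows can still go in as one bulk insert
    DB::table('bobs_pivot')->insert($bobsPivotData);
});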

Nested Foreach Loops - Does it matter if I start with bigger or smaller array size?

Probably a stupid question here, but I thought I should ask the experts. I have a foreach loop over an array with approximately 41 thousand values, and another foreach loop over an array with about 40 values. It doesn't really matter to me which loop I start with, but I need to loop through both of these arrays, and I'm wondering whether there is a difference in PHP memory use or speed depending on whether I start with the bigger loop or the smaller one. Also, do the items in the arrays affect memory consumption, for example objects in the array vs. plain string values? Example:
$products = array(
    0 => array('id' => 1, 'product' => 'My Product 1'),
    1 => array('id' => 2, 'product' => 'My Product 2'),
    2 => array('id' => 3, 'product' => 'My Product 3'),
    // and so on up to index of 39...
);
$users = // this is an array of objects and is over 41,000 entries long. The objects contain only 2 properties.
So, my question is: does it matter which array I start with here for speed, reliability, performance, and memory consumption within PHP?
foreach ($products as $product)
{
    foreach ($users as $user)
    {
        // Do something with the values of $product/$user
    }
}
VS.
foreach ($users as $user)
{
    foreach ($products as $product)
    {
        // Do something with the values of $user/$product
    }
}
I'm also open to other ways to do this (possibly with other types of loops) for speed, reliability, performance, and memory reasons. One important thing, though, is that I need the values of each $product and $user.
The answer is: not really, unless you're into micro-optimization. 40 * 41000 is the same as 41000 * 40.
There might be a very tiny difference in the memory footprint depending on which you pick first, since the outer array's elements could be unset after an iteration, but I suspect the difference is very tiny indeed.
Does it matter which array you start with? No; there might be a difference, but I don't think it's significant.
If it is applicable, the better solution might be to loop over only the small array and query the database for a filtered array of users for each product separately. But it depends on your data and on what you do within the loop.
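As a rough sketch of that idea, using Laravel's query builder as in the other snippets on this page; the users table and the product_id column are assumptions about your schema, not something from the question.

foreach ($products as $product) {
    // Fetch only the users relevant to this product instead of scanning all 41,000
    $matchingUsers = DB::table('users')
        ->where('product_id', $product['id'])
        ->get();

    foreach ($matchingUsers as $user) {
        // Do something with the values of $product/$user
    }
}

This trades memory for 40 small queries, which is usually a good deal if the users can be filtered by an indexed column.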

How to pull data from multiple arrays to construct one nice array?

I have 4 arrays:
qb_array = { sku, stock } size: 20803
valid_array = { sku, name, price, status } size: 199803
By intersecting qb_array and valid_array on sku, I then have:
intersect_sku_array { sku } size: 18795
Now, there is some data that I want to grab out of my first 2 arrays.
From qb_array, I want stock.
From valid_array, I want name, price, and status.
Then I decided to create my 4th array and called it:
4 - inserted_array (because I will use this array to insert into my database)
I tried to construct it, and now I am stuck.
Here is my approach :
First, I did
foreach ($intersect_sku_array as $key) {
    $inserted_array[] = $key;
}
So far so good - everything is working; when I dd($inserted_array); I see all the stuff in it.
Then, moving on to add the other 3 fields from my valid_array:
foreach ($valid_array as $key => $value) {
    if (in_array($key, array_map('strtolower', $intersect_sku_array))) {
        $inserted_array[] = $value['name'];
        $inserted_array[] = $value['price'];
        $inserted_array[] = $value['status'];
    }
}
Then I did dd($inserted_array); at the end, and it hangs on me.
After about 5 minutes I got this error:
Maximum execution time of 300 seconds exceeded
Is it because I have too much data, or is my code stuck in an infinite loop somehow?
Can someone please explain all of this in detail?
Maybe this would help:
foreach ($intersect_sku_array as $sku)
{
    $qbRow = $qb_array[array_search($sku, $qb_array)];
    $validRow = $valid_array[array_search($sku, $valid_array)];
    $inserted_array[] = array($sku, $qbRow[1], $validRow[1], $validRow[2], $validRow[3]);
}
Although I think it would be easier for you to use named arrays like the following:
$qb_array = ['sku' => ['stock' => 'actual stock']];
$valid_array = ['sku' => ['name' => 'actual name', 'price' => 'actual price', 'status' => 'actual status']];
Like so.
For one thing, you are running the lowercasing inside the loop, which is certainly not the best way to save time.
You are also constructing a "flat" array that will contain (sku, name, price, status) quadruplets (and no stock) in sequence.
I would rather have the database do a join on both tables, since SQL is a much better tool than PHP for that kind of job.
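For example, if both datasets lived in database tables, one query would produce the combined rows directly. The table names below are hypothetical; adjust them to your schema.

// Hypothetical table names; the join replaces the PHP-side intersection entirely
$rows = DB::select('
    SELECT q.sku, q.stock, v.name, v.price, v.status
    FROM qb_items AS q
    JOIN valid_items AS v ON LOWER(v.sku) = LOWER(q.sku)
');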
Now, if for some reason you can't do that, it's better to use sku as a key for your two arrays.
foreach ($qb_array as $val) $k_qb[strtolower($val['sku'])] = $val['stock'];
foreach ($valid_array as $val) $k_va[strtolower($val['sku'])] = array($val['name'], $val['price'], $val['status']);
(If you must lowercase something, better to do it only once here; frankly, this should also be done in the database, unless you're forced to work with garbage data.)
Then you can do the join manually without any intermediate intersection array, like so:
foreach ($k_qb as $sku => $stock) // use smallest array
{
    if (!isset($k_va[$sku])) continue; // skip non-matching records
    $record = $k_va[$sku];
    $inserted_array[] = array(
        'sku' => $sku,
        'stock' => $stock,
        'name' => $record[0],
        'price' => $record[1],
        'status' => $record[2]);
}
The complexity will be about N log M (or better, since PHP hash lookups are close to constant time) instead of M*N, where M and N are the sizes of your two arrays.

Fetching data from mysql database in the VIEW structure of Zend 2.2?

I am a newbie in php+Zend programming, so need your valuable advice.
1. I have a table in mysql (phpmyadmin); the attributes in the table are: user_id, expense_id, date, month, year, expense.
2. I have a .phtml file (index.phtml) in the View folder (Zend 2.2). It is accessed by indexAction() in the Controller. Code:
return new ViewModel(array(
    'years' => $this->getExpenseTable()->fetchAll($user_id),
));
[Sorry if it's not in the proper format.] This function is meant to return all the values from the db, which I then put into a table with a foreach. The code in index.phtml is below:
<?php echo $this->escapeHtml($expense->expense); ?> ... and so on ...
Now my problem is:
a) I cannot use the variable 'years' in another table with another foreach loop in the same index.phtml file. It says, "this is a forward-only result-set." I tried unset() and rewind(); neither worked.
b) I want to take the unique values of the attribute 'year' from the table (as table headers, you might say) and put the summation of expenses under each year.
You have multiple questions which should perhaps be split into separate questions, but anyhow, here it goes:
Iterate multiple times over a single resultset:
Some drivers allow buffering. The value of $years is a Zend ResultSet object. You can call $years->buffer() before you loop to enable the internal buffer so you can iterate twice:
// $set is ResultSet
$set->buffer();
foreach ($set as $result) {} // first time
foreach ($set as $result) {} // second time
Extract the year:
You can use simple view logic for that:
// example resultset
$result = array(
    array('year' => '2012', 'value' => 'foo'),
    array('year' => '2012', 'value' => 'bar'),
    array('year' => '2013', 'value' => 'baz'),
);

// store year for check
$year = null;
foreach ($result as $item) {
    if ($item['year'] !== $year) {
        $year = $item['year'];
        echo $year;
    }
    echo $item['value'];
}
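For part b) of your question (the summation per year), a similar sketch works in the same view logic. It assumes each row also carries an 'expense' value, which may differ from your actual column name.

// sum expenses per year; 'expense' is an assumed key
$totals = array();
foreach ($result as $item) {
    $year = $item['year'];
    if (!isset($totals[$year])) {
        $totals[$year] = 0;
    }
    $totals[$year] += $item['expense'];
}

foreach ($totals as $year => $sum) {
    echo $year . ': ' . $sum;
}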

How to merge multiple arrays that are depending on a function in PHP?

I am seeking someone's knowledge out here.
Right now, I need to merge several arrays into a bigger one, but all of those arrays depend on a function.
This function returns a numeric array containing different quantities of numbers between 1 and 7:
function Possible($i, $j, $grid)
$possible = Possible($i, $j, $grid)
I'm working with a grid, and the function returns a different array for every cell of the grid. What I would like to do is merge those 7 arrays into another one. Some numbers may be present more than once in this big array, but I want it that way.
I tried using for loops, while loops and some other techniques, but nothing worked. In this case it is not possible to define every array manually, since they change depending on what is contained in the grid and there are too many of them. It has to be done automatically, and this is where I get stuck.
for ($jj = 0; $jj < 7; $jj++) {
    $possRow = array_merge( ###what do I add here or what do I change in this code to make everything work###
Thank you if someone can help me out!
Etpi
Hope this helps:
$biggerOneArray = array();
for ($k = 0; $k < 7; $k++) {
    $biggerOneArray[] = Possible($i, $j, $grid);
}
Then you can check your bigger array; it should contain all the arrays from the iterations of the loop (7 arrays collected).
var_dump($biggerOneArray);
The output should be this:
array(
    (int) 0 => array(
        'key' => 'value',
        'key2' => 'value2'
    ),
    (int) 1 => array(
        'key3' => 'value3',
        'key4' => 'value4'
    )
)
etc...
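If what you actually want is one flat array of numbers (keeping duplicates) rather than an array of 7 arrays, you can merge the collected results in a single call afterwards; this is just one way to do it, building on the $biggerOneArray above.

// Flatten the collected arrays into a single array, keeping duplicate values
$possRow = call_user_func_array('array_merge', $biggerOneArray);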
I'm sorry, but your description isn't very clear. Just to get you started, though, you might look at this solution.
function Possible($i, $j, $grid) {
    // some code ... e.g. $array[] = "some data";
    return $array;
}
By creating a small array for each grid cell and returning it with return $array, you get a few little arrays which you can in turn feed into a for loop to merge into one larger array. However, I believe the variable $jj must have some meaning in the function itself as well.
for ($jj = 0; $jj < 7; $jj++) {
    $merged_array[$jj] = Possible($i, $j, $grid);
}
Maybe if you describe your problem a little more and post an example of the arrays you're working with, I can give you a better answer.
