I have this situation where I could pre-define the array in this way:
$packages = array(
    '0' => array(
        'name'   => 'Hotel1', // pcg name
        'curr'   => '$',
        'amount' => '125',
        'period' => 'NIGHT',  // pcg duration
        'client_data' => array(
            'Name'    => 'Adrien',
            'Addr'    => 'Sample Street',
            'Payment' => 'Credit Card',
            'Nights'  => '6',
        )
    ),
);
Or
$packages = array();
$packages[] = array(
    'name'   => 'PREMIUM', // pcg name
    'curr'   => '$',
    'amount' => '3.95',
    'period' => 'MONTH',   // pcg duration
    'features' => array(
        'Clients'  => '100',
        'Invoices' => '300 <small>MONTH</small>',
        'Products' => '30',
        'Staff'    => '1',
    )
);
The data will always be static, so I won't be fetching it from a SQL query or a dynamic search. Would it make any difference in terms of performance (even the slightest difference could be helpful) to use the first or the second "method", or are they actually 100% identical performance-wise?
Theoretically, the "dynamic" array creation might be slower because it needs to check the size of the array, the last array index, and maybe other things like that.
Thank you.
A simple task like that takes practically no resources on current hardware. Even my first PC, a 386DX at 20MHz, would not show the difference ;)
Anyway, I executed both options 1,000 times:
FIRST OPTION average:
0.000114s
SECOND OPTION average:
0.000108s
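For reference, a minimal harness along these lines reproduces the measurement (a sketch with simplified package data, not my exact script):

<?php
// Sketch: time 1,000 constructions of each style and print the average.
$iterations = 1000;

$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $packages = array(
        '0' => array('name' => 'Hotel1', 'curr' => '$', 'amount' => '125', 'period' => 'NIGHT'),
    );
}
printf("FIRST OPTION average: %.6fs\n", (microtime(true) - $start) / $iterations);

$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $packages = array();
    $packages[] = array('name' => 'PREMIUM', 'curr' => '$', 'amount' => '3.95', 'period' => 'MONTH');
}
printf("SECOND OPTION average: %.6fs\n", (microtime(true) - $start) / $iterations);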
Be happy!
All,
I am attempting to migrate roughly 6GB of Mongo data, made up of hundreds of collections, to DynamoDB. I have written some scripts using the AWS PHP SDK and am able to port over very small collections, but when I try ones with more than 20k documents (still a very small collection, all things considered) it either takes an outrageous amount of time or quietly fails.
Does anyone have some tips/tricks for migrating data from Mongo (or any other NoSQL DB) to Dynamo (or any other NoSQL DB)? I feel like this should be relatively easy because the documents are extremely flat/simple.
Any thoughts/suggestions would be much appreciated!
Thanks!
header.php
<?php
require './aws-autoloader.php';
require './MongoGet.php';

set_time_limit(0);

use \Aws\DynamoDb\DynamoDbClient;

$client = DynamoDbClient::factory(array(
    'key'      => 'MY_KEY',
    'secret'   => 'MY_SECRET',
    'region'   => 'MY_REGION',
    'base_url' => 'http://localhost:8000' // local DynamoDB endpoint
));

$collection = "AccumulatorGasPressure4093_raw";

// small helper: echo with an HTML line break
function nEcho($str) {
    echo "{$str}<br>\n";
}

echo "<pre>";
test-store.php
<?php
include('test-header.php');

nEcho("Creating table(s)...");

// create test table
$client->createTable(array(
    'TableName' => $collection,
    'AttributeDefinitions' => array(
        array(
            'AttributeName' => 'id',
            'AttributeType' => 'N'
        ),
        array(
            'AttributeName' => 'count',
            'AttributeType' => 'N'
        )
    ),
    'KeySchema' => array(
        array(
            'AttributeName' => 'id',
            'KeyType' => 'HASH'
        ),
        array(
            'AttributeName' => 'count',
            'KeyType' => 'RANGE' // was 'RANGED', which is not a valid KeyType
        )
    ),
    'ProvisionedThroughput' => array(
        'ReadCapacityUnits' => 10,
        'WriteCapacityUnits' => 20
    )
));

// block until the table is ACTIVE; writing to a table that is
// still being created fails
$client->waitUntil('TableExists', array('TableName' => $collection));
$result = $client->describeTable(array(
    'TableName' => $collection
));

nEcho("Done creating table...");
nEcho("Getting data from Mongo...");

// instantiate class and get data
$mGet = new MongoGet();
$results = $mGet->getData($collection);

nEcho("Done retrieving Mongo data...");
nEcho("Inserting data...");

$i = 0;
foreach ($results as $result) {
    $insertResult = $client->putItem(array(
        'TableName' => $collection,
        'Item' => $client->formatAttributes(array(
            'id' => $i,
            'date' => $result['date'],
            'value' => $result['value'],
            'count' => $i
        )),
        'ReturnConsumedCapacity' => 'TOTAL'
    ));
    $i++;
}

nEcho("Done Inserting, script ending...");
I suspect that you are being throttled by DynamoDB, especially if your tables' throughput is low. The SDK retries each request up to 11 times, but eventually the requests fail, which should throw an exception.
You should take a look at the WriteRequestBatch object. It is basically a queue of items that get sent in batches, and any items that fail to transfer are automatically re-queued. It should provide a more robust solution for what you are doing.
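A sketch of what that could look like with the SDK v2 batch classes (reusing $client, $collection and the Mongo $results from your scripts; error handling omitted):

<?php
// Sketch only: swap the per-item putItem() loop for a WriteRequestBatch,
// which sends puts in batches and automatically re-queues throttled items.
use Aws\DynamoDb\Model\Item;
use Aws\DynamoDb\Model\BatchRequest\PutRequest;
use Aws\DynamoDb\Model\BatchRequest\WriteRequestBatch;

$batch = WriteRequestBatch::factory($client);

$i = 0;
foreach ($results as $result) {
    $item = Item::fromArray(array(
        'id'    => $i,
        'date'  => $result['date'],
        'value' => $result['value'],
        'count' => $i,
    ));
    $batch->add(new PutRequest($item, $collection));
    $i++;
}

$batch->flush(); // send any items still queued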
In ES, are filters applied before the query?
Say, for example, I am doing a really slow fuzzy search, but only over a small date range. For an example, see below (PHP):
$res = $client->search(array('index' => 'main', 'body' => array(
    'query' => array(
        'bool' => array(
            'should' => array(
                array('wildcard' => array('title' => '*123*')),
            )
        )
    ),
    'filter' => array(
        'and' => array(
            array('range' => array('created' => array(
                'gte' => date('c', time() - 3600),
                'lte' => date('c', time() + 3600)
            )))
        )
    ),
    'sort' => array()
)));
Will the filter be applied before trying that slower search?
Logic would dictate that the filter runs first and then the query, but I would like to be sure.
If you use the filtered query, then filters are applied before documents are scored.
This will generally speed things up quite a lot. However, the fuzzy query will still use its input to build a larger query, regardless of the filters.
When you use filter directly on the search object, the query first runs without respecting the filter, and documents are then filtered out of the hits, whereas facets remain unfiltered.
Therefore, you should almost always use the filtered query, at least when you are not using facets.
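For example, the query above rewritten as a filtered query could look something like this (same index and fields; a sketch, not a drop-in from your codebase):

$res = $client->search(array('index' => 'main', 'body' => array(
    'query' => array(
        'filtered' => array(
            // scoring only runs on documents that pass the filter below
            'query' => array(
                'bool' => array(
                    'should' => array(
                        array('wildcard' => array('title' => '*123*')),
                    )
                )
            ),
            // cheap, cacheable range filter, applied before scoring
            'filter' => array(
                'range' => array('created' => array(
                    'gte' => date('c', time() - 3600),
                    'lte' => date('c', time() + 3600)
                ))
            )
        )
    ),
    'sort' => array()
)));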
I have a CakePHP model, let's call it Thing which has an associated model called ItemView. ItemView represents one page view of the Thing item. I want to display how many times Thing has been viewed, so I do the following in my view:
<?php echo count($thing['ItemView']); ?>
This works; however, as time goes on the result set of this query is going to get huge, as it's currently being returned like so:
array(
    'Thing' => array(
        'id' => '1',
        'thing' => 'something'
    ),
    'ItemView' => array(
        (int) 0 => array(
            'id' => '1',
            'thing_id' => 1,
            'created' => '2013-09-21 19:25:39',
            'ip_address' => '127.0.0.1'
        ),
        (int) 1 => array(
            'id' => '1',
            'thing_id' => 1,
            'created' => '2013-09-21 19:25:41',
            'ip_address' => '127.0.0.1'
        ),
        // etc...
    )
)
How can I adapt the model find() to retrieve something like so:
array(
    'Thing' => array(
        'id' => '1',
        'thing' => 'something',
        'views' => 2
    )
)
without loading the entire ItemView relation into memory?
Thanks!
So it's pretty straightforward: we can make use of counterCache - Cake does the counting for you whenever a record is added to or deleted from ItemView:
Nothing to change in your Thing.php model
Add a new INT column views in your things table.
In your ItemView.php model, add counterCache like this:
public $belongsTo = array(
    'Thing' => array(
        'counterCache' => 'views'
    )
);
Then the next time you add or delete a record via ItemView, Cake will automatically recalculate the count and cache it into views for you. When you then query, you also need to make sure you specify recursive = -1, as #Paco Car suggested in his answer:
$this->Thing->recursive = -1;
$this->Thing->find(...); // this returns an array of Thing + the field "views"

// --- OR ---

$this->Thing->find('all', array(
    'conditions' => array(
        //... your usual conditions here
    ),
    //... fields, order... etc

    // this makes sure the recursive applies to this call only
    'recursive' => -1
));
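One caveat worth noting: counterCache only updates views on future add/delete operations, so existing ItemView rows won't be counted until you seed the column once. A hypothetical one-off backfill (assumes the conventional things/item_views table names):

// Run once, e.g. from a shell task, to seed the counter for existing data.
$this->Thing->query(
    'UPDATE things t
     SET t.views = (SELECT COUNT(*) FROM item_views iv WHERE iv.thing_id = t.id)'
);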
I'm trying to build an array, and while doing so I was wondering whether it makes any difference how you create such an array. Things I have in mind are performance, maintainability, readability,...
Declaration 1:
$data = array(
    'my_array' => array(
        'table' => array(
            'group' => t('My group'),
            'join' => array(
                'commerce_product' => array(
                    'left_field' => 'sku',
                    'field' => 'artc',
                ),
            ),
        ),
    ),
);
Declaration 2:
$data['my_array']['table']['group'] = 'My group';
$data['my_array']['table']['join']['commerce_product'] = array(
    'left_field' => 'sku',
    'field' => 'artc',
);
Because for some reason my Drupal site accepts the first declaration but not the second. Since it's about creating arrays in PHP, I don't think it has anything to do with Drupal, but rather with the way the array is created...
The feature of automatically creating an array with the second syntax (assigning a value to a key of a non-existent variable) is more recent than the array() syntax.
Check the PHP version installed on your two servers.
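For what it's worth, on a current PHP both styles produce identical structures - a quick sketch to verify (it assumes t() simply returns its argument here):

<?php
// Declaration 1 (literal) vs Declaration 2 (incremental assignment).
$a = array(
    'my_array' => array(
        'table' => array(
            'group' => 'My group',
            'join'  => array(
                'commerce_product' => array(
                    'left_field' => 'sku',
                    'field'      => 'artc',
                ),
            ),
        ),
    ),
);

$b = array();
$b['my_array']['table']['group'] = 'My group';
$b['my_array']['table']['join']['commerce_product'] = array(
    'left_field' => 'sku',
    'field'      => 'artc',
);

var_dump($a === $b); // bool(true) on any modern PHP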
How can you specify a max result set for Magento SOAP queries?
I am querying Magento via SOAP API for a list of orders matching a given status. We have some remote hosts who are taking too long to return the list so I'd like to limit the result set however I don't see a parameter for this.
$orderListRaw = $proxy->call($sessionId, 'sales_order.list', array(array('status' => array('in' => $orderstatusarray))));
I was able to see that we do get data back (6 minutes later) and have been able to deal with timeouts, etc. but would prefer to just force a max result set.
It doesn't seem like it can be done using limit. (Plus, you would have to do some complex pagination logic to get all records, because you would need to know the total number of records, and the API does not have a method for that.) See the API call list at http://www.magentocommerce.com/api/soap/sales/salesOrder/sales_order.list.html
But what you could do as a workaround is use complex filters to limit the result set based on creation date (adjust to every hour, day, or week based on order volume).
Also, since you are filtering by status (assuming you are excluding more than just canceled orders), you may want to think about fetching all orders and keeping track of the order_id/status locally (only processing the ones with the above statuses); the remainder that weren't processed would be a list of order ids that may need your attention later on.
Pseudo Code Example
$params = array(array(
    'filter' => array(
        array(
            'key' => 'status',
            'value' => array(
                'key' => 'in',
                'value' => $orderstatusarray,
            ),
        ),
    ),
    'complex_filter' => array(
        array(
            'key' => 'created_at',
            'value' => array(
                'key' => 'gteq',
                'value' => '2012-11-25 12:00:00'
            ),
        ),
        array(
            'key' => 'created_at',
            'value' => array(
                'key' => 'lteq',
                'value' => '2012-11-26 11:59:59'
            ),
        ),
    )
));
$orderListRaw = $proxy->call($sessionId, 'sales_order.list', $params);
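Building on that, a hypothetical sketch of the windowing idea - walking backwards one day at a time so each call stays small (reuses $proxy and $sessionId from above; status filter omitted for brevity):

// Walk backwards one day at a time so each sales_order.list call stays small.
$end = time();
$allOrders = array();
for ($day = 0; $day < 7; $day++) {
    $to   = date('Y-m-d H:i:s', $end - $day * 86400);
    $from = date('Y-m-d H:i:s', $end - ($day + 1) * 86400);
    $params = array(array(
        'complex_filter' => array(
            array('key' => 'created_at', 'value' => array('key' => 'gteq', 'value' => $from)),
            array('key' => 'created_at', 'value' => array('key' => 'lteq', 'value' => $to)),
        ),
    ));
    $chunk = $proxy->call($sessionId, 'sales_order.list', $params);
    $allOrders = array_merge($allOrders, $chunk);
}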
Read more about filtering at http://www.magentocommerce.com/knowledge-base/entry/magento-for-dev-part-8-varien-data-collections