Part of my mapping is:
"current_price" => ["type" => "float"],
"price_history" => [
"type" => "nested",
"properties" => [
"date" => ["type" => "date"],
"value" => ["type" => "float"]
]
As you can see, I store the current price of the goods as well as all previous values. The first thing to note is that when I create goods for the very first time, there is of course no history yet. That's why I don't set price_history at all when creating goods, even though it exists in my mapping.
$params = [
    'index' => config('storesettings.esIndex'),
    'type' => config('storesettings.esType'),
    'id' => $id,
    'body' => [
        ...
        "current_price" => $request->get('current_price'),
        ...
    ]
];
When I edit goods, the price changes. In that case I need to archive the current price by moving it into the price_history field, and then replace the current value. My question is about the price_history field. I read the previous value ($goods['_source']['price_history']) and then append the current price to that array. Everything is fine when there is already some history. But if there isn't, I get the error 'Undefined index: price_history'. In that case I have to add a check: if(isset($goods['_source']['price_history'])). Is that normal? In a relational database I would get an empty array, but in Elasticsearch I don't, so I have to do this array-level (so to speak) checking. How should such cases be handled? Maybe I should add an empty array to price_history when I create the goods?
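A minimal sketch of that check, written with PHP's null coalescing operator so a missing price_history simply falls back to an empty array (the variable names follow the question, but the update body itself is only illustrative):

$history = $goods['_source']['price_history'] ?? []; // [] when no history exists yet

// Archive the price that is being replaced.
$history[] = [
    'date'  => date('Y-m-d'),
    'value' => $goods['_source']['current_price'],
];

$params = [
    'index' => config('storesettings.esIndex'),
    'type'  => config('storesettings.esType'),
    'id'    => $id,
    'body'  => [
        'doc' => [
            'current_price' => $request->get('current_price'),
            'price_history' => $history,
        ],
    ],
];
// $client->update($params); with a partial 'doc' body, or a full re-index, both work here.

Alternatively, indexing the document with "price_history" => [] from the start is also reasonable; it just moves the empty-array handling from read time to write time.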
I have a table that has a column called "items", but not all rows have it, so I want to scan all rows that have "items".
Something like:
$resposta = $clientDB->scan(array(
    'TableName' => 'tableName',
    'Key' => [
        'items' => ['S' => 'exists']
    ]
));
But I can't figure out how to do it...
The table has 10000 rows, but only 10 of them have "items", and I want to get only these 10 rows.
Edit:
As Seth Geoghegan pointed out below, it was necessary to create a global secondary index on the attribute I wanted to filter on.
I ended up here:
$params = [
    'TableName' => 'tableName',
    'FilterExpression' => "attribute_exists(items)"
];
OR
$params = [
    'TableName' => 'tableName',
    'FilterExpression' => 'items != :null',
    'ExpressionAttributeValues' => [
        ':null' => null,
    ],
];
But neither worked... The first one seems to require ExpressionAttributeValues to be set up, and with the second one PHP stops working with no error logs.
This is a perfect use case for global secondary indexes (GSI)!
You can create a GSI on the items attribute. Items with the items attribute defined will get projected into the GSI. Importantly, items that do not contain this attribute will not be in the index. You could then query the GSI and retrieve the items you're after.
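For reference, a rough sketch of reading from such a sparse index with the AWS SDK for PHP; the index name items-index is an assumption, since the question does not name the GSI:

// Only rows that actually have the "items" attribute were projected into the
// sparse GSI, so a plain scan of the index returns exactly those rows.
$resposta = $clientDB->scan([
    'TableName' => 'tableName',
    'IndexName' => 'items-index', // hypothetical GSI name
]);

foreach ($resposta['Items'] as $item) {
    // each $item holds the projected attributes in DynamoDB's typed format
}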
Well, after some effort, I found a way:
$resposta = $clientDB->scan(array(
    'TableName' => 'tableName',
    'FilterExpression' => "attribute_exists(items)"
));
After I created another global secondary index (GSI) for "items" (as pointed out by Seth Geoghegan), I just needed to add the FilterExpression "attribute_exists(items)" to the scan call and it worked.
Currently I'm getting results based on scoring, but what I want is results based on scoring plus a status field with a true/false value.
If the value is true, those results should be given priority, but it's possible that the status field does not exist in all indices.
"query" => [
'bool' => [
'filter' => $filter,
'must' => [
"multi_match" => [
'query' => "$string",
"type" => "cross_fields",
'fields' => ['field1','field2','field3'],
"minimum_should_match" => "80%"
]
]
]
],
"sort" => [
"_score",
[ "status" => ["order" => "desc","unmapped_type" => "boolean"] ]
],
But I'm getting the error below:
[type] => illegal_argument_exception
[reason] => Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [status] in order to load field data by uninverting the inverted index. Note that this can use significant memory.
Can anyone help me ignore the indices where that field is not available, or suggest another solution to this problem?
As discussed in the chat, the issue happened because #jilesh forgot to delete the old index mapping and only updated the data; that is why this was occurring.
The answer below is relevant when you get the following error with a proper setup:
Text fields are not optimised for operations that require
per-document field data like aggregations and sorting, so these
operations are disabled by default. Please use a keyword field
instead. Alternatively, set fielddata=true on [status] in order to
load field data by uninverting the inverted index. Note that this can
use significant memory.
In that case, enable fielddata on the field if you want to get rid of the error, but beware that it can cause performance issues.
Read more about fielddata on the official site.
You can enable it on your order field in your mapping as shown:
{
  "properties": {
    "order": {
      "type": "text",
      "fielddata": true
    }
  }
}
This works for me:
curl -X PUT -H "Content-Type: application/json" \
-d '{"properties": {"format": { "type":"text","fielddata": true}}}' \
<your_host>:9200/<your_index>/_mapping
In my case, I had to use the aggregatable {fieldname}.keyword field. Here's an example using Nest .NET.
.Sort(s => s
    .Ascending(f => f.Field1.Suffix("keyword"))
    .Ascending(f => f.Field2.Suffix("keyword"))
    .Ascending(f => f.Field3.Suffix("keyword")))
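The same idea applied to the PHP query from the earlier question would look roughly like this; it assumes status also has a .keyword sub-field, which the original mapping does not confirm:

"sort" => [
    "_score",
    // Sort on the keyword sub-field (backed by doc values) instead of the analyzed
    // text field; unmapped_type keeps indices without the field from erroring.
    [ "status.keyword" => ["order" => "desc", "unmapped_type" => "keyword"] ]
],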
I have a multidimensional array containing data from a form, and I need this array in another method of the same controller to continue working with it, but I don't know how to do that.
The array can look like this example:
array [
    "absender" => "Maxim Ivan",
    "email" => "maximivan@example.com",
    "telefon" => "1234567890",
    "fax" => null,
    "grund" => "Gehaltserhöhung",
    "termin" => [
        0 => [
            "person" => "Some Name",
            "meeting" => "10.05"
        ],
        1 => [
            "person" => "Another Name",
            "meeting" => "18.05"
        ],
        2 => [
            "person" => "Again another name",
            "next-possible-meeting" => "1"
        ],
        3 => [
            "person" => "And again",
            "next-possible-meeting" => "1"
        ],
    ],
    "bemerkung" => "some notes by Maxim"
]
This array is created (and the input data validated) in the store method of the 'TerminController'.
This method returns a view where all of this data is displayed again, so the user can check the info and then add a document.
When the document is added and the data is submitted with an input button, the upload method in the same controller gets called.
And that's where I need the array with the form data so I can keep working with it.
But how do I pass the array through to the next method, which is only called via an input button?
My first approach was to save the array in the session, which did work even though it was awkward because of the multidimensional structure; but it's a really ugly solution.
Should I save the input data to a database in the store method and fetch it again in the upload method?
Or is it somehow possible to pass the array between the methods / make it accessible in the upload method even though it gets created in another one?
I have also heard something about using serialize() and unserialize(), but I'm not exactly sure how this could help me.
Or maybe there's another, even better solution I'm just not thinking of?
I'd appreciate all the help I can get.
The array varies: there can be 17 arrays nested in 'termin', but it can also be just one.
You can store it in the cache:
Cache::put('multiArray', $multiArray); // put the array in the cache
$array = Cache::get('multiArray'); // retrieve it from the cache
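A minimal sketch of how that could look in the controller described in the question; the cache key 'termin.form' and the method bodies are assumptions for illustration, not code from the question:

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Cache;

class TerminController extends Controller
{
    public function store(Request $request)
    {
        $data = $request->validate([/* ... validation rules ... */]);

        // Keep the validated form data around so the upload step can read it.
        Cache::put('termin.form', $data, now()->addMinutes(30));

        return view('termin.confirm', ['data' => $data]);
    }

    public function upload(Request $request)
    {
        // Retrieve the array that store() cached earlier.
        $data = Cache::get('termin.form', []);

        // ... store the uploaded document together with $data ...
    }
}

In a real application a per-user cache key (for example one derived from the session ID) avoids collisions between users; the session itself, which the question already tried, is an equally valid place to stash the array.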
I'm trying to send a campaign to a dynamic list segment based on a custom numeric merge field (GMT_OFFSET, in this case) but the code below yields the following error from the MailChimp API:
"errors" => [
0 => [
"field" => "recipients.segment_opts.conditions.item:0"
"message" => "Data did not match any of the schemas described in anyOf."
]
]
My code, using drewm/mailchimp-api 2.4:
$campaign = $mc->post('campaigns', [
    'recipients' => [
        'list_id' => config('services.mailchimp.list_id'),
        'segment_opts' => [
            'conditions' => [
                [
                    'condition_type' => 'TextMerge',
                    'field' => 'GMT_OFFSET',
                    'op' => 'is',
                    'value' => 2,
                ],
            ],
            'match' => 'all',
        ],
    ],
    // Cut for brevity
]);
If I am to take the field description literally (see below), the TextMerge condition type only works on merge0 or EMAIL fields, which is ridiculous considering the Segment Type title says it is a "Text or Number Merge Field Segment". However, other people have reported the condition does work when applied exclusively to the EMAIL field. (API Reference)
I found this issue posted but unresolved on both DrewM's git repo (here) and SO (here) from January 2017. Hoping somebody has figured this out by now, or found a way around it.
Solved it! I passed an integer value which seemed to make sense given that my GMT_OFFSET merge field was of a Number type. MailChimp support said this probably caused the error and suggested I send a string instead. Works like a charm now.
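So the only change needed in the condition shown above is the type of the value, roughly:

[
    'condition_type' => 'TextMerge',
    'field' => 'GMT_OFFSET',
    'op' => 'is',
    'value' => '2', // string instead of integer
],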
I have an Elasticsearch index which I update every 10 minutes via a cronjob. In this index I have a completion field which works as expected.
But I have one little problem. Let's say I have an "article" field where I change a value from "a" to "b". After 10 minutes the index is updated and the document which held article "a" now holds article "b". Everything as expected.
But my completion field now holds both values, "a" and "b", both with the same id.
How can this happen?
Mapping:
'suggest' => array(
    'type' => 'completion',
    'payloads' => true,
    'preserve_separators' => false,
    'search_analyzer' => 'standard',
    'index_analyzer' => 'standard'
),
How I set the field:
'suggest' => array(
    'input' => array(
        $result["Name"],
        $result["Name"],
        $result["Name2"],
        $result["Name3"],
        $result["Name4"],
        $result["Name5"]
    ),
    'output' => $result["Name"].' ('.$result["Name1"].', '.$result["Name2"].')',
    'payload' => array(
        'id' => $result["ID"]
    )
)
Found the answer in the docs.
The suggest data structure might not reflect deletes on documents immediately. You may need to do an Optimize for that. You can call optimize with the only_expunge_deletes=true to only cater for deletes or alternatively call a Merge operation.
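For example, on the older Elasticsearch versions that still expose the optimize API (the index name is a placeholder; on 2.1+ the equivalent endpoint is _forcemerge):

# Expunge deleted documents so the completion suggester stops returning stale entries.
curl -X POST "localhost:9200/my_index/_optimize?only_expunge_deletes=true"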