So I've set up an index with the following mapping:
PUT test_index
{
  "mappings": {
    "doc": {
      "properties": {
        "title": {
          "type": "text"
        },
        "author": {
          "type": "text"
        },
        "reader_stats": {
          "type": "join",
          "relations": {
            "book": "reader"
          }
        }
      }
    }
  }
}
Each parent document represents a book, and its children represent readers of that book. However, if I run:
GET test_index/_search
{
"query":{"match_all":{}}
}
The results contain both books and readers, like so:
"hits" : [
{
"_index" : "test_index",
"_type" : "doc",
"_id" : "2",
"_score" : 1.0,
"_source" : {
"title" : "my second book",
"author" : "mr author",
"reader_stats" : {
"name" : "book"
}
}
},
{
"_index" : "test_index",
"_type" : "doc",
"_id" : "7",
"_score" : 1.0,
"_routing" : "2",
"_source" : {
"name" : "michael bookworm",
"clicks" : 1,
"reader_stats" : {
"name" : "reader",
"parent" : 2
}
}
}
]
Is there some way I can exclude reader documents and only show books? I already use match_all in my app to grab books, so it would be good if I could avoid changing that query, but I guess that's not possible.
Also, I'm a bit confused about how mappings work with join fields, since there is no definition of which fields are required for child documents. For example, in my mapping there's nowhere to specify that 'reader' documents must have 'name' and 'clicks' fields. Is this correct?
You need to use the has_child query (to get back only parent docs) and the has_parent query (to get back only child docs) in your query.
Is there some way I can exclude reader documents and only show books?
YES
Your query will be:
GET test_index/_search
{
"query": {
"has_child": {
"type": "reader",
"query": {
"match_all": {}
}
}
}
}
For more detailed info you can take a look here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-has-child-query.html
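Two extra notes. First, has_child only returns books that have at least one matching reader. If you also want books with no readers yet, as far as I know you can filter on the join field itself, since the relation name is indexed as the field's value (a sketch, using your reader_stats field):
GET test_index/_search
{
  "query": {
    "term": {
      "reader_stats": "book"
    }
  }
}
Second, regarding your mapping question: with a join field, parents and children live in the same index and share one mapping, so the 'name' and 'clicks' fields of reader documents are just ordinary properties (in your case picked up by dynamic mapping since you didn't declare them). There is nowhere to declare fields as required for children only, and that is expected.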
For example, I have this document:
{
  "id": "kek",
  "children": [
    {
      "id": "child1",
      "moreInfo": ...
    },
    {
      "id": "child2",
      "moreInfo": ...
    }
  ]
}
Is it OK to do:
{
  "id": "kek",
  "children": {
    "child1": {
      "id": "child1",
      "moreInfo": ...
    },
    "child2": {
      "id": "child2",
      "moreInfo": ...
    }
  }
}
?
With this structure, it is easier to get the child we want in PHP:
$kek['children']['child1']
Also, a child can have children too, so it could look like this:
$kek['children']['child1']['children']['child3'] ...
That way we don't have to do a recursive search.
But is it a good document structure from MongoDB's point of view?
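For reference, as far as I understand it, these are roughly the filters each shape needs in MongoDB (please correct me if this is wrong). With the array structure I can find a document containing a given child without knowing its position:
{ "children": { "$elemMatch": { "id": "child1" } } }
With the keyed structure I can only address the child through its key directly:
{ "children.child1": { "$exists": true } }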
TY
I need to filter my data by year only, using Elasticsearch. I am using PHP to fetch and show the results. Here is my data in JSON format:
{
  "loc_cityname": "New York",
  "location_countryname": "US",
  "location_primary": "North America",
  "admitted_date": "1994-12-10"
},
{
  "loc_cityname": "New York",
  "location_countryname": "US",
  "location_primary": "North America",
  "admitted_date": "1995-12-10"
}
I am using the code below to filter the values by year:
$options='{
"query": {
"range" : {
"admitted_date" : {
"gte" : 1994,
"lte" : 2000
}
}
},
"aggs" : {
"citycount" : {
"cardinality" : {
"field" : "loc_cityname",
"precision_threshold": 100
}
}
}
}';
How can I filter the results by year only? Please, can somebody help me fix this?
Thanks in advance,
You simply need to add a "format" parameter ("format": "yyyy") to your range query, like this:
$options='{
"query": {
"range" : {
"admitted_date" : {
"gte" : 1994,
"lte" : 2000,
"format": "yyyy" <--- add this line
}
}
},
"aggs" : {
"citycount" : {
"cardinality" : {
"field" : "loc_cityname",
"precision_threshold": 100
}
}
}
}';
UPDATE
Note that the above solution only works for ES 1.5 and above. With previous versions of ES, you could use a script filter instead:
$options='{
"query": {
"filtered": {
"filter": {
"script": {
"script": "(min..max).contains(doc.admitted_date.date.year)",
"params": {
"min": 1994,
"max": 2000
}
}
}
}
},
"aggs": {
"citycount": {
"cardinality": {
"field": "loc_cityname",
"precision_threshold": 100
}
}
}
}';
In order to be able to run this script filter, you need to make sure that you have enabled scripting in elasticsearch.yml:
script.disable_dynamic: false
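Alternatively, since admitted_date is a plain date field (your sample values are dates like "1994-12-10"), you can sidestep both the format parameter and scripting by spelling out full dates in the range. This sketch should work on any ES version, assuming you want everything admitted from 1994 through 2000:
$options='{
  "query": {
    "range" : {
      "admitted_date" : {
        "gte" : "1994-01-01",
        "lte" : "2000-12-31"
      }
    }
  },
  "aggs" : {
    "citycount" : {
      "cardinality" : {
        "field" : "loc_cityname",
        "precision_threshold": 100
      }
    }
  }
}';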
I'm working on a membership administration program, for which we want to use Elasticsearch as the search engine. At this point we're having problems indexing certain fields, because they generate an 'immense term' error on the _all field.
Our settings:
curl -XGET 'http://localhost:9200/my_index?pretty=true'
{
"my_index" : {
"aliases" : { },
"mappings" : {
"Memberships" : {
"_all" : {
"analyzer" : "keylower"
},
"properties" : {
"Amount" : {
"type" : "float"
},
"Members" : {
"type" : "nested",
"properties" : {
"Startdate membership" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"Enddate membership" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"Members" : {
"type" : "string",
"analyzer" : "keylower"
}
}
},
"Membership name" : {
"type" : "string",
"analyzer" : "keylower"
},
"Description" : {
"type" : "string",
"analyzer" : "keylower"
},
"elementId" : {
"type" : "integer"
}
}
}
},
"settings" : {
"index" : {
"creation_date" : "1441310632366",
"number_of_shards" : "1",
"analysis" : {
"filter" : {
"my_char_filter" : {
"type" : "asciifolding",
"preserve_original" : "true"
}
},
"analyzer" : {
"keylower" : {
"filter" : [ "lowercase", "my_char_filter" ],
"tokenizer" : "keyword"
}
}
},
"number_of_replicas" : "1",
"version" : {
"created" : "1040599"
},
"uuid" : "nn16-9cTQ7Gn9NMBlFxHsw"
}
},
"warmers" : { }
}
}
We use the keylower analyzer because we don't want the full name to be split on whitespace: we want to be able to search for 'john johnson' in the _all field as well as in the 'Members' field.
The 'Members' field can contain multiple members, which is where the problems start. When the field only contains a couple of members (as in the example below), there is no problem. However, the field may contain hundreds or thousands of members, which is when we get the immense term error.
curl 'http://localhost:9200/my_index/_search?pretty=true&q=*:*'
{
"took":1,
"timed_out":false,
"_shards":{
"total":1,
"successful":1,
"failed":0
},
"hits":{
"total":1,
"max_score":1.0,
"hits":[
{
"_index":"my_index",
"_type":"Memberships",
"_id":"15",
"_score":1.0,
"_source":{
"elementId":[
"15"
],
"Membership name":[
"My membership"
],
"Amount":[
"100"
],
"Description":[
"This is the description."
],
"Members":[
{
"Members":"John Johnson",
"Startdate membership":"2015-01-09",
"Enddate membership":"2015-09-03"
},
{
"Members":"Pete Peterson",
"Startdate membership":"2015-09-09"
},
{
"Members":"Santa Claus",
"Startdate membership":"2015-09-16"
}
]
}
}
]
}
}
NOTE: The above example works! It's only when the field 'Members' contains (a lot) more members that we get the error. The error we get is:
"error":"IllegalArgumentException[Document contains at least one
immense term in field=\"_all\" (whose UTF8 encoding is longer than the
max length 32766), all of which were skipped. Please correct the
analyzer to not produce such terms. The prefix of the first immense
term is: '[...]...', original message: bytes can be at most 32766 in
length; got 106807]; nested: MaxBytesLengthExceededException[bytes can
be at most 32766 in length; got 106807]; " "status":500
We only get this error on the _all field, not on the original Members field. With ignore_above, it's no longer possible to search the _all field on the full name. With the standard analyzer, I would find this document if I searched for 'Santa Johnson', because the _all field would contain the tokens 'Santa' and 'Johnson'. That's why I use keylower for these fields.
What I would like is an analyzer that tokenizes per field, but doesn't break up the values within the fields themselves. What happens now is that the entire 'Members' field is fed in as one token, including the child fields. (So the token in the example above would be:
John Johnson 2015-01-09 2015-09-03 Pete Peterson 2015-09-09 Santa Claus 2015-09-16
Is it possible to tokenize these fields in such a way that every field value is fed to _all as a separate token, without breaking up the values within the fields themselves? So that the tokens would be:
John Johnson
2015-01-09
2015-09-03
Pete Peterson
2015-09-09
Santa Claus
2015-09-16
Note: We use the Elasticsearch PHP library.
There is a much better way of doing this. Whether or not a phrase search can span multiple field values is determined by position_offset_gap (in 2.0 it will be renamed to position_increment_gap). This parameter basically specifies how many words/positions should be "inserted" between the last token of one field and the first token of the following field. By default, in Elasticsearch prior to 2.0, position_offset_gap has a value of 0. That is what's causing the issues you describe.
By combining the copy_to feature with an explicit position_offset_gap, you can create an alternative my_all field that does not have this issue. By setting this new field in the index.query.default_field setting, you can tell Elasticsearch to use it by default instead of the _all field when no fields are specified.
curl -XDELETE "localhost:9200/test-idx?pretty"
curl -XPUT "localhost:9200/test-idx?pretty" -d '{
"settings" :{
"index": {
"number_of_shards": 1,
"number_of_replicas": 0,
"query.default_field": "my_all"
}
},
"mappings": {
"doc": {
"_all" : {
"enabled" : false
},
"properties": {
"Members" : {
"type" : "nested",
"properties" : {
"Startdate membership" : {
"type" : "date",
"format" : "dateOptionalTime",
"copy_to": "my_all"
},
"Enddate membership" : {
"type" : "date",
"format" : "dateOptionalTime",
"copy_to": "my_all"
},
"Members" : {
"type" : "string",
"analyzer" : "standard",
"copy_to": "my_all"
}
}
},
"my_all" : {
"type": "string",
"position_offset_gap": 256
}
}
}
}
}'
curl -XPUT "localhost:9200/test-idx/doc/1?pretty" -d '{
"Members": [{
"Members": "John Johnson",
"Startdate membership": "2015-01-09",
"Enddate membership": "2015-09-03"
}, {
"Members": "Pete Peterson",
"Startdate membership": "2015-09-09"
}, {
"Members": "Santa Claus",
"Startdate membership": "2015-09-16"
}]
}'
curl -XPOST "localhost:9200/test-idx/_refresh?pretty"
echo
echo "Should return one hit"
curl "localhost:9200/test-idx/doc/_search?pretty=true" -d '{
"query": {
"match_phrase" : {
"my_all" : "John Johnson"
}
}
}'
echo
echo "Should return one hit"
curl "localhost:9200/test-idx/doc/_search?pretty=true" -d '{
"query": {
"query_string" : {
"query" : "\"John Johnson\""
}
}
}'
echo
echo "Should return no hits"
curl "localhost:9200/test-idx/doc/_search?pretty=true" -d '{
"query": {
"match_phrase" : {
"my_all" : "Johnson 2015-01-09"
}
}
}'
echo
echo "Should return no hits"
curl "localhost:9200/test-idx/doc/_search?pretty=true" -d '{
"query": {
"query_string" : {
"query" : "\"Johnson 2015-01-09\""
}
}
}'
echo
echo "Should return no hits"
curl "localhost:9200/test-idx/doc/_search?pretty=true" -d '{
"query": {
"match_phrase" : {
"my_all" : "Johnson Pete"
}
}
}'
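Side note: if you try this on Elasticsearch 2.x, the mapping parameter has been renamed (as mentioned above), so the my_all field would presumably be declared with position_increment_gap instead:
"my_all" : {
  "type": "string",
  "position_increment_gap": 256
}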
I'm new to Elasticsearch.
Can you help me with creating a query? I need to search by name.
GET /site/file/_search
"hits": [
{
"_index": "site",
"_type": "file",
"_id": "135",
"_score": 1,
"_source": {
"userId": 0,
"name": "P1030021j.jpg",
"extension": "jpg",
"size": 1256
}
}
]
Thanks,
I found a solution to my problem:
{
"fuzzy_like_this" : {
"fields" : ["name"],
"like_text" : "Search string",
"max_query_terms" : 12
}
}
Search by URL:
GET /site/file/_search?q=name:P1030021j.jpg
Search by REST API:
GET /site/file/_search
{
"query" : {
"query_string" : {
"query" : "name:P1030021j.jpg"
}
}
}
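Note that fuzzy_like_this has since been deprecated and eventually removed in newer Elasticsearch versions. If you are on one of those, a match query with fuzziness should give similar fuzzy matching on the name field (a rough sketch):
GET /site/file/_search
{
  "query": {
    "match": {
      "name": {
        "query": "Search string",
        "fuzziness": "AUTO"
      }
    }
  }
}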