elasticsearch returning all found aggregations - php

I'm using the example application from github.com/searchly/searchly-php-sample with the Searchly service.
I've come across a simple case where I want the search to return all the aggregations (referred to as 'aggs' below) from the search results, not only the ones I specified.
Currently the code for the aggs is:
$searchParams['body']['aggs']['resolution']['terms']['field'] = 'resolution';
This returns the resolution agg, but I cannot find a way to make it return all of the possible aggs for the search results.
Is this possible, or does it require me to save the aggs somewhere and then list them when I do the actual search request?
Thank you!

As far as I know, there is no way to do this directly; you have to specify each field you are interested in.
However, if you can build up a list of all the fields in the index, then you can generate the required aggregations fairly easily.
So, how do you build up that list? I can think of three ways that might work:
A) build it up by doing some pre-processing before you index each document into ElasticSearch
B) Use the GET Mapping API to see what fields have been created by dynamic mapping (see the sketch below this list) (http://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-mapping.html)
C) Use a Scripted Metric Aggregation and write scripts that build up a de-duped list of fields in the documents (http://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-scripted-metric-aggregation.html)
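For option B, here is a rough sketch using the elasticsearch-php client from the sample app; the index name 'my_index' is a placeholder, and on older Elasticsearch versions the 'properties' array sits under a mapping type name rather than directly under 'mappings':
$mapping = $client->indices()->getMapping(['index' => 'my_index']);
$searchParams = ['index' => 'my_index', 'body' => ['query' => ['match_all' => new \stdClass()]]];

foreach ($mapping['my_index']['mappings']['properties'] as $field => $definition) {
    // Terms aggregations need keyword/numeric fields, so skip analysed text fields.
    if (($definition['type'] ?? null) === 'text') {
        continue;
    }
    $searchParams['body']['aggs'][$field]['terms']['field'] = $field;
}

$results = $client->search($searchParams);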

Related

Efficient way to combine MySQL and ElasticSearch queries on PHP? [duplicate]

Let's say I have the following "expenses" MySQL table:
id | amount | vendor | tag
1  | 100    | google | foo
2  | 450    | GitHub | bar
3  | 22     | GitLab | fizz
4  | 75     | AWS    | buzz
I'm building an API that should return expenses based on partial "vendor" or "tag" filters, so vendor="Git" should return records 2 & 3, and tag="zz" should return records 3 & 4.
I was thinking of utilizing Elasticsearch's capabilities, but I'm not sure of the correct way.
Most articles I read suggest replicating the table records (using a Logstash pipeline or other methods) to an Elastic index.
So my API doesn't even query the DB and returns an array of documents directly from ES?
Is this considered good practice, replicating the whole table to Elastic?
What about table relations? What if I want to filter by a nested table relation?
So my API doesn't even query the DB and returns an array of documents directly from ES?
Yes. As you are querying Elasticsearch, you will get results only from Elasticsearch. Another way is to get just the id from Elasticsearch and use the id to retrieve the documents from MySQL, but this might impact response time.
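A minimal sketch of that second option (ids from Elasticsearch, rows from MySQL), assuming the elasticsearch-php client, PDO, and a hypothetical 'expenses' index and table:
// Ask Elasticsearch for matching ids only, then load the full rows from MySQL.
$response = $client->search([
    'index' => 'expenses',
    'body'  => [
        '_source' => false,
        'query'   => ['match' => ['vendor' => 'GitHub']],
    ],
]);

$ids = array_column($response['hits']['hits'], '_id');

if ($ids) {
    $placeholders = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $pdo->prepare("SELECT * FROM expenses WHERE id IN ($placeholders)");
    $stmt->execute($ids);
    $expenses = $stmt->fetchAll(PDO::FETCH_ASSOC);
}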
Is this considered good practice, replicating the whole table to Elastic? What about table relations? What if I want to filter by a nested table relation?
It is not about good practice or bad practice; it is about what type of functionality and use case you want to implement, and based on that, the technology stack is chosen and data may be duplicated. There are lots of companies using Elasticsearch as a secondary data source with duplicated data, simply because their use case is a better fit for Elasticsearch or another NoSQL DB.
Elasticsearch is a NoSQL DB and does not maintain any relationships between data. Hence, you need to denormalize your data before indexing it into Elasticsearch. You can read this article for more about denormalization and why it is required.
Elasticsearch provides the nested and join data types for parent-child relationships, but both have limitations and a performance impact.
Below is what they have mentioned for the join field type:
The join field shouldn't be used like joins in a relation database. In Elasticsearch the key to good performance is to de-normalize your data into documents. Each join field, has_child or has_parent query adds a significant tax to your query performance. It can also trigger global ordinals to be built.
Below is what they have mentioned for the nested field type:
When ingesting key-value pairs with a large, arbitrary set of keys, you might consider modeling each key-value pair as its own nested document with key and value fields. Instead, consider using the flattened data type, which maps an entire object as a single field and allows for simple searches over its contents. Nested documents and queries are typically expensive, so using the flattened data type for this use case is a better option.
Most articles I read suggest replicating the table records (using a Logstash pipeline or other methods) to an Elastic index.
Yes, you can use Logstash or any language client (Java, Python, etc.) to sync data from the DB to Elasticsearch. You can check this SO answer for more information on this.
Your Search Requirements
If you go ahead with Elasticsearch, then you can use the N-Gram tokenizer or a regexp query to achieve your search requirements.
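For instance, a hedged sketch of the partial match with elasticsearch-php, assuming a hypothetical 'expenses' index where vendor is a keyword field (an N-Gram analyzer would avoid the cost of leading wildcards on larger indices):
// vendor="Git" should match GitHub and GitLab (records 2 and 3).
$response = $client->search([
    'index' => 'expenses',
    'body'  => [
        'query' => [
            'wildcard' => [
                'vendor' => [
                    'value'            => '*Git*',
                    'case_insensitive' => true, // parameter available from Elasticsearch 7.10
                ],
            ],
        ],
    ],
]);

foreach ($response['hits']['hits'] as $hit) {
    echo $hit['_source']['vendor'], ': ', $hit['_source']['amount'], PHP_EOL;
}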
Maybe you can try TiDB: https://medium.com/@shenli3514/simplify-relational-database-elasticsearch-architecture-with-tidb-c19c330b7f30
If you want to scale your MySQL and have fast filtering and aggregating, TiDB could simplify the architecture and reduce development work.

Solr query field Mandatory for query search

I am using Bitnami Apache Solr 7.4.0 (latest).
I indexed documents.
Now, in the admin panel, I need to write queries in field:value format.
But I just want to search with only the value.
Example:
q=field:value (it works)
q=value (it gives 0 results)
So what should I configure in the schema.xml file so that I can search by only the value of the field?
On the Solr Admin --> Query page, you can add the field name you want to route your queries to in df. df means default search field. To use it you don't need the dismax or edismax parsers; df works with the standard query parser itself. So, I hope this is what you are looking for. Thanks.
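For example, to search the value against a single hypothetical field named description with the standard parser:
?q=value&df=description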
You don't need to modify the schema. You can create your own request handler, which performs query operations based on your requirements, by adding a new requestHandler entry in the solrconfig.xml file. For more details on how to do this, see here.
That being said, I would suggest you first go through the basics of querying in Solr and understand how the different parameters like q, qf, defType etc. work and which query parsers (standard, dismax etc.) are available for use. See this.
There is nothing special to configure, but you have to use the edismax or dismax query parser. These query parsers are made to support free-form user input, and you can use them with just q=value. You tell Solr to use the edismax query parser by providing defType=edismax in the query URL.
Since the field to search is no longer part of the actual query, you tell the edismax handler which field to search by giving the qf parameter. You can give multiple fields in qf, and you can give each field a different weight by using the syntax field^<weight>.
So to get the same result as in your first example:
?q=value&defType=edismax&qf=field
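And, for example, to search two hypothetical fields while weighting title twice as heavily as description (the space between the fields in qf is URL-encoded as +):
?q=value&defType=edismax&qf=title^2+description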

Elastic Search: Get A List Of Available Facets / Aggregations For This Result Set

In Elasticsearch I need to obtain a list of available aggregations (formerly facets?) for the current result set.
For example, if I do a search for "car" in a set of cars which have defined MAKE and MODEL fields, I would like it to not only give me a result set of cars, but also a list of makes and models I can filter by.
From what I can read, you have to request the aggregations you want. That can't be right, because if I were eBay and I had a catalogue with hundreds of possible attributes, all of which are searchable, then telling the search engine which facets I would like to search by would be unscalable.
I'm Using:
Elastic Search
PHP Elastica Client
I would expect to simply be able to call Elastica/ResultSet.php::getAggregations() on Line 194 here:
https://github.com/ruflin/Elastica/blob/master/lib/Elastica/ResultSet.php#L194
Point of reference:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations.html
Could someone please clarify what I need to do to achieve this?
In Elasticsearch, you have to explicitly specify which fields you want to aggregate on. Why not implement client-side logic (on the Elasticsearch client) that collects all the field names and builds a search request with an aggregation for each of those fields?
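A minimal Elastica sketch of that approach, assuming a hypothetical index named 'cars' and a field list (here just make and model) that you would normally pull from the index mapping:
$client = new \Elastica\Client();
$index  = $client->getIndex('cars');

$query = new \Elastica\Query(new \Elastica\Query\QueryString('car'));

// Add one terms aggregation per field you want to facet on.
foreach (['make', 'model'] as $field) {
    $agg = new \Elastica\Aggregation\Terms($field);
    $agg->setField($field);
    $query->addAggregation($agg);
}

$resultSet = $index->search($query);

// getAggregations() now returns one bucket list per field added above.
foreach ($resultSet->getAggregations() as $field => $aggregation) {
    foreach ($aggregation['buckets'] as $bucket) {
        echo $field, ': ', $bucket['key'], ' (', $bucket['doc_count'], ')', PHP_EOL;
    }
}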

Using Views module (Drupal 7) to display php array

I have a PHP array (let's call it $people) that I created from the LinkedIn API. The fact is, I don't want to store it in my own database due to the huge amount of data, and I don't want to use a cron job to update the results.
So, the problem is that I want to display the results from my PHP array in a filterable/sortable table. Note: I can parameterize this array to fetch results from a START parameter with a COUNT parameter; this is mandatory for the pagination.
How can I use the Views module of Drupal 7 to do that? Any ideas?
Thanks in advance.
I didn't get the exact question, but assuming you want to display some data in a paginated table, I can give you some hints. You can try theme_table() to render a paginated table. Here is rough sample code: https://drupal.org/node/156863
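A minimal Drupal 7 sketch of the theme_table() plus pager route, assuming a hypothetical my_module_fetch_people($start, $count) helper that wraps the LinkedIn API call with its START/COUNT parameters, and that the API reports a total result count:
function my_module_people_page() {
  $per_page = 20;
  $total = 200; // total result count reported by the API (assumed available)

  // Initialise the pager and work out which slice to request.
  $page = pager_default_initialize($total, $per_page);
  $people = my_module_fetch_people($page * $per_page, $per_page);

  $header = array(t('Name'), t('Headline'));
  $rows = array();
  foreach ($people as $person) {
    $rows[] = array(check_plain($person['name']), check_plain($person['headline']));
  }

  return theme('table', array('header' => $header, 'rows' => $rows)) . theme('pager');
}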
Integration with Views is a relatively complex task. The easiest way is to go with https://drupal.org/project/data. But if you want to integrate more deeply, then start with hook_views_data().

Magento - Retrieve Advanced Search results as a plain array

I am currently working on a Magento application and have a requirement to sort advanced search results based on the precedence of the categories in the store.
Basically, I have the algorithm prepared: I would loop through the advanced search results, run a query to retrieve the position of each product's category, and then sort the final result set before returning it to the calling function.
But the issue I'm having is that I am unable to retrieve the search results as a plain array to work with. Could any one of the experts suggest a way to retrieve this array, please?
Regards,
Maximumus 69
Assuming that you're working in list.phtml, this should work:
$_productCollection = $this->getLoadedProductCollection();
$productData = $_productCollection->toArray($requiredFields);
where $requiredFields is null (if you want all fields) or an array containing the fields that you're interested in.
Note that your choice to convert to an array and then sort is particularly inefficient. You should be using Magento's built-in collection sorting mechanisms. Read the documentation and API, then give setOrder('position') a try.
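A hedged sketch of that collection-sorting route inside list.phtml; note that getLoadedProductCollection() returns an already-loaded collection, so it is cleared here on the assumption that the new order should take effect when it reloads:
$_productCollection = $this->getLoadedProductCollection();
$_productCollection->clear();                     // force a reload with the new sort order
$_productCollection->setOrder('position', 'asc'); // the suggestion from above

foreach ($_productCollection as $_product) {
    echo $this->escapeHtml($_product->getName()), PHP_EOL;
}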
Good luck,
JD
