Pull mysql database into elasticsearch - php

I am using Elasticsearch in my project, and my requirement is to pull a large amount of MySQL data into Elasticsearch using the Elasticsearch JDBC River plugin. I need to sync a MySQL table to Elasticsearch, so I am creating a mapping for the JDBC river index.
curl -XPOST http://localhost:9200/city -d '
{
  "mappings" : {
    "city_type" : {
      "properties" : {
        "domain" : {
          "type" : "multi_field",
          "fields" : {
            "domain" : {
              "type" : "string",
              "index" : "analyzed"
            },
            "exact" : {
              "type" : "string",
              "index" : "not_analyzed"
            }
          }
        },
        "sent_date" : {
          "type" : "date",
          "format" : "dateOptionalTime"
        }
      }
    }
  }
}'
After creating the mapping in Elasticsearch, I want to load the MySQL table data into it, so I am using the following command.
curl -XPUT 'localhost:9200/river/city/_meta?pretty' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "root",
    "password" : "root",
    "sql" : "select id as _id,id as domain from city;",
    "strategy" : "oneshot"
  },
  "index" : {
    "index" : "city",
    "type" : "city_type",
    "bulk_size" : 500
  }
}'
These queries run successfully, but when I then run the following command to look for the data, Elasticsearch returns nothing.
http://localhost:9200/river/_search?pretty&q=*
Please check the response of the above query here. Why is the data not showing up in the Elasticsearch query? Please help.

By the way, rivers have been deprecated: https://github.com/elastic/elasticsearch/issues/10345
I would highly recommend jprante's JDBC importer, a standalone Java tool that can do the operations you need: https://github.com/jprante/elasticsearch-jdbc. It is not exactly a river as you have defined it.
Concerning your question, could you please try http://localhost:9200/_search?pretty&q=* ? With your syntax, you are actually looking for data in the river index. You should search across all indices with the query I wrote, or in the city index: http://localhost:9200/city/city_type/_search?pretty&q=*

If I were in your shoes, I would use Logstash to push the data from MySQL to Elasticsearch. Rivers have been deprecated for a long time, as @Artholl already mentioned.
See https://www.elastic.co/blog/logstash-jdbc-input-plugin
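As a rough sketch of that approach (the JDBC driver path, column list, and index name below are assumptions based on the river config above, so adjust them for your setup), a Logstash pipeline pulling the city table could look like this:
input {
  jdbc {
    # Path to the MySQL JDBC driver jar is an assumption; point it at your copy.
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    jdbc_user => "root"
    jdbc_password => "root"
    statement => "SELECT id, domain FROM city"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "city"
    document_type => "city_type"
    # Reuse the MySQL id as the document id so re-runs update rather than duplicate.
    document_id => "%{id}"
  }
}
Running this once replaces the river's oneshot strategy; the jdbc input also has a schedule option if you need periodic syncs.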

Related

Limit returned fields with PHP MongoClient find() query

I am writing a PHP MongoClient model that accesses a MongoDB database storing deploy logs with GitLab information, server hosts, and Zend restart instructions. I have a Mongo collection called deployAppConfigs. Its document structure looks like this:
{
  "_id" : ObjectId("54de193790ded22d1cd24c36"),
  "app_name" : "ai2_api",
  "name" : "AI2 Admin API",
  "app_directory" : "path_to_app",
  "app_owner" : "www-data:deployers",
  "directories" : [],
  "vcs" : {
    "type" : "git",
    "name" : "input/ai2-api"
  },
  "environments" : {
    "development" : {
      ...
    },
    "qa" : {
      ...
    },
    "staging" : {
      ...
    },
    "production" : {
      ...
    },
    "actions" : {
      "post_checkout" : [
        "composer_install"
      ]
    }
  }
}
Because there are many documents in this collection, I would like to query the entire collection for only the "vcs" subdocument and the "app_name". I am able to execute this command in Robomongo's mongo shell with the following find() query:
db.deployAppConfigs.find({}, {"vcs": 1, "app_name": 1})
This returns exactly what I want for each document in the collection:
{
  "_id" : ObjectId("54de193790ded22d1cd24c36"),
  "app_name" : "ai2_api",
  "vcs" : {
    "type" : "git",
    "name" : "input/ai2-api"
  }
}
I am having a problem writing a PHP MongoClient equivalent to that mongo shell command. I basically want to make a PHP MongoClient version of the Mongo docs example on Limit Fields to Return from a Query.
I have tried using an empty array to replace the "{}" in the mongo shell command like this, but it hasn't worked:
$query = array(
    array(),
    array("vcs" => 1, "app_name" => 1)
);
All the documents share vcs.type = "git", so I tried writing a query that selects every document based on that shared value. It looks like this:
$query = array(
    "vcs.type" => "git"
);
But this returns the entire document, which is what I want to avoid.
The alternative could be to do a limit projection find() for the first document in the collection and then use the MongoCursor to iterate through the whole collection, but I'd rather not have to do the extra loop if possible.
Essentially, I am asking how to limit the return fields of a find() query to only one subdocument of each document in the entire collection.
Looks like I was able to find the solution. I will answer the question myself and leave it up in case it ends up being useful to anyone else.
What I ended up having to do was alter my MongoClient custom class find() function, which calls the $collection->find() query, to include a $fields parameter.
Now, the MongoClient->find() query looks like this:
$collection->find(
    array("vcs.type" => "git"),
    array("vcs" => 1, "app_name" => 1)
);
Found the answer in the MongoCollection::find() documentation: here
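For completeness, here is a minimal standalone sketch of the same projection with the legacy MongoClient driver (the connection string and database name are assumptions, not taken from the original question):
<?php
// Sketch only: connection string and database name are assumptions.
$client     = new MongoClient("mongodb://localhost:27017");
$collection = $client->selectDB("deploys")->selectCollection("deployAppConfigs");

// The second argument to find() is the projection: return only vcs and app_name
// (_id is included by default).
$cursor = $collection->find(
    array(),
    array("vcs" => 1, "app_name" => 1)
);

foreach ($cursor as $doc) {
    echo $doc["app_name"] . " => " . $doc["vcs"]["name"] . PHP_EOL;
}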

Geo Searching with Elasticsearch

This is just a quick question. I am planning to change my search system to Elasticsearch. If I send Elasticsearch three pieces of information in my request:
Long
Lat
Distance
And my elasticsearch results are tagged with the following information:
Long
Lat
Is it possible for me to return only results within that distance, without needing any additional Elasticsearch plugins? Is this functionality built into Elasticsearch?
Yes, in three pretty simple steps. You must map the data (like setting up a database schema), insert the data, and then search for it (filter).
You must appropriately map the field so that it is recognized as a geo_point. Note: If you wanted more flexibility to store other types of geolocations (e.g., polygons or linestrings), then you would want to map it as a geo_shape.
{
  "mappings" : {
    "TYPE_NAME" : {
      "properties" : {
        "fieldName" : { "type" : "geo_point" }
      }
    }
  }
}
Once mapped, you can insert the location details using the point type.
{
  "fieldName" : {
    "type" : "point",
    "coordinates" : [ longitude, latitude ]
  }
}
You can type it in differently to get the same effect, but perhaps more intuitively (rather than [ X, Y ], which is not the most common geospatial order).
{
  "fieldName" : {
    "type" : "point",
    "coordinates" : {
      "lat" : latitude,
      "lon" : longitude
    }
  }
}
Either way, it will represent the same thing (same goes for below in the filter).
Once you have mapped the field and started to insert points, then you can run a geo_distance filter to find points within the desired distance:
{
  "filtered" : {
    "query" : { "match_all" : {} },
    "filter" : {
      "geo_distance" : {
        "distance" : "10mi",
        "fieldName" : [ longitude, latitude ]
      }
    }
  }
}
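As a usage sketch, that filtered query goes in the body of a search request. The index name, type name, and coordinates below are placeholders, and this assumes a pre-2.0 Elasticsearch where filtered queries exist:
curl -XPOST 'http://localhost:9200/INDEX_NAME/TYPE_NAME/_search?pretty' -d '{
  "query" : {
    "filtered" : {
      "query" : { "match_all" : {} },
      "filter" : {
        "geo_distance" : {
          "distance" : "10mi",
          "fieldName" : { "lat" : 40.0, "lon" : -70.0 }
        }
      }
    }
  }
}'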

Mongo and Yii -> update with $set a field in all the arrays of a subdocument

I'm having problems updating a specific field in all the arrays of a subdocument. I have the following structure in MongoDB:
{
  "_id" : ObjectId("539c9e97cac5852a1b880397"),
  "DocumentoDesgloseER" : [
    {
      "elemento" : "COSTO VENTA",
      "id_rubroer" : "11",
      "id_documento" : "45087",
      "abreviatura" : "CV",
      "orden" : "1",
      "formula" : "Cuenta Contable",
      "tipo_fila" : "1",
      "color" : "#FFD2E9",
      "sucursal" : "D",
      "documentoID" : "0",
      "TOTAL" : "55426.62"
    },
    { ... MORE OF THE SAME ... }
  ],
  "id_division" : "2",
  "id_empresa" : "9",
  "id_sucursal" : "37",
  "ejercicio" : "2008",
  "lastMonthNumber" : NumberLong(6)
}
I need to update the field "documentoID" to a specific value, like "20" for example, in every element of the "DocumentoDesgloseER" array. How can I do this?
I tried the following (with the $ positional operator) and it is not working:
$querySearch = array('id_division'=>'2', 'id_empresa'=>'9', 'id_sucursal'=>'37', 'ejercicio'=>'2008');
$queryUpdate = array('$set'=>array('DocumentoDesgloseER.$.documentoID'=>'20'));
Yii::app()->edmsMongoCollection('DocumentosDesgloseER')->update($querySearch,$queryUpdate);
By the way, I'm using Yii Framework to make the connection with Mongo. Any help or advice is welcome.
Thanks ;D!
Unfortunately, you can't currently use the positional operator to update all items in an array. There is a ticket open in the MongoDB JIRA about this issue.
There are two "solutions":
Change your schema so that your embedded documents are in a separate collection (this is probably not what you want).
The best you can do, if you don't want to change your schema, is to update each subdocument in PHP and then save the whole document.
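A minimal sketch of that second option, assuming Yii's edmsMongoCollection() wrapper returns a standard MongoCollection (the field, collection, and query names are taken from the question):
<?php
// Sketch: fetch the matching documents, update each array element in PHP, save back.
$collection = Yii::app()->edmsMongoCollection('DocumentosDesgloseER');

$querySearch = array(
    'id_division' => '2',
    'id_empresa'  => '9',
    'id_sucursal' => '37',
    'ejercicio'   => '2008',
);

$cursor = $collection->find($querySearch);
foreach ($cursor as $doc) {
    foreach ($doc['DocumentoDesgloseER'] as $i => $item) {
        $doc['DocumentoDesgloseER'][$i]['documentoID'] = '20';
    }
    $collection->save($doc); // rewrites the whole document
}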

MongoDB Map Reduce newbie (PHP)

I'm new to the map reduce concept and even though I'm making some slow progress, I'm finding some issues that I need some help with.
I have a simple collection consisting of an id, a city, and a destination, something like this:
{ "_id" : "5230e7e00000000000000000", "city" : "Boston", "to" : "Chicago" },
{ "_id" : "523fe7e00000000000000000", "city" : "New York", "to" : "Miami" },
{ "_id" : "5240e1e00000000000000000", "city" : "Boston", "to" : "Miami" },
{ "_id" : "536fe4e00000000000000000", "city" : "Washington D.C.", "to" : "Boston" },
{ "_id" : "53ffe7e00000000000000000", "city" : "New York", "to" : "Boston" },
{ "_id" : "5740e1e00000000000000000", "city" : "Boston", "to" : "Miami" },
...
(Please do note that this data is just made up for example purposes)
I'd like to group the destinations by city, including a count:
{ "city" : "Boston", values : [{"Chicago",1}, {"Miami",2}] }
{ "city" : "New York", values : [{"Miami",1}, {"Boston",1}] }
{ "city" : "Washington D.C.", values : [{"Boston", 1}] }
For this, I'm starting to play with this map function:
function() {
    emit(this.city, this.to);
}
which performs the expected grouping. My reduce function is this:
function(key, values) {
    var reduced = { "to" : [] };
    for (var i in values) {
        var item = values[i];
        reduced.to.push(item);
    }
    return reduced;
}
which gives more or less the expected output:
{ "_id" : ObjectId("522f8a9181f01e671a853adb"), "value" : { "to" : [ "Boston", "Miami" ] } }
{ "_id" : ObjectId("522f933a81f01e671a853ade"), "value" : { "to" : [ "Chicago", "Miami", "Miami" ] } }
{ "_id" : ObjectId("5231f0ed81f01e671a853ae0"), "value" : "Boston" }
As you can see, I still haven't counted the repeated cities, but for some reason the last result in the output doesn't look right. I'd have expected it to be:
{ "_id" : ObjectId("5231f0ed81f01e671a853ae0"), "value" : { "to" : ["Boston"] } }
Does this have anything to do with the fact that there is a single item? Is there any way to obtain the output I want?
Thank you.
I see you are asking about a PHP issue, but you are using JavaScript to ask, so I'm assuming a JavaScript answer will help you move things along. As such, here is the JavaScript needed in the shell to run your aggregation. I strongly suggest getting your aggregation working in the shell (or some other JavaScript editor) first and then translating it into the language of your choice. It is a lot easier to see what is going on, and faster, using this method. You can then run:
use admin
db.runCommand( { setParameter: 1, logLevel: 2 } )
to check the BSON output of your selected language vs. what the shell produces. This will appear in the terminal if mongod is running in the foreground; otherwise you'll have to look in the logs.
Summing the routes in the aggregation framework [AF] with Mongo is fairly straightforward. The AF is faster and easier to use than map-reduce [MR]. Though in this case they both have a similar issue: simply pushing to an array won't yield a count in and of itself (in MR you either need more logic in your reduce function or a finalize function).
With the AF using the example data provided this pipeline is useful:
db.agg1.aggregate([
    { $group: {
        _id: { city: "$city", to: "$to" },
        count: { $sum: 1 }
    }},
    { $group: {
        _id: "$_id.city",
        to: { $push: { to: "$_id.to", count: "$count" } }
    }}
]);
The aggregation framework can only operate on known fields, but it can have many pipeline stages, so a problem needs to be broken down with that as a consideration.
Above, the 1st stage calculates the numbers needed, for which there are 3 fixed fields: the source, the destination, and the count.
The second stage has 2 fixed fields, one of which is an array, which is only being pushed to (all the data for the final form is there).
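Since the question mentions PHP, a rough translation of that pipeline to the legacy PHP driver might look like the following (the database name and connection defaults are assumptions; agg1 is just the example collection name used above):
<?php
// Sketch: the same two-stage pipeline, run from PHP with the legacy MongoClient driver.
$client     = new MongoClient();                   // default localhost connection assumed
$collection = $client->selectDB("test")->agg1;     // db/collection names are assumptions

$result = $collection->aggregate(array(
    array('$group' => array(
        '_id'   => array('city' => '$city', 'to' => '$to'),
        'count' => array('$sum' => 1),
    )),
    array('$group' => array(
        '_id' => '$_id.city',
        'to'  => array('$push' => array('to' => '$_id.to', 'count' => '$count')),
    )),
));

print_r($result['result']);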
For MR you can do this:
var map = function() {
    var key = { source: this.city, dest: this.to };
    emit(key, 1);
};
var reduce = function(key, values) {
    return Array.sum(values);
};
A separate function will be needed to pretty it up, however.
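For completeness, a minimal shell invocation of those two functions might be (inline output is just for illustration; a large result set needs an output collection instead):
db.agg1.mapReduce(map, reduce, { out: { inline: 1 } });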
If you have any additional questions please don’t hesitate to ask.
Best,
Charlie

Filtering in Elastic Search?

I'm using the date_histogram API to get the actual count per interval (hour/day/week or month). I also have a feature which I'm having trouble implementing: a user can filter the results by entering a startDate and endDate (textbox), which will be queried against a timestamp field. So how can I filter the results by querying only that one field (TIMESTAMP) while using the date_histogram API, or any API, so I can achieve my desired result?
In SQL I would just use a BETWEEN operator to get the result, but from what I've read so far there is no BETWEEN operator in Elasticsearch (not sure).
I have this script so far:
curl 'http://anotherdomain.com:9200/myindex/_search?pretty=true' -d '{
  "query" : {
    "filtered" : {
      "filter" : {
        "exists" : {
          "field" : "adid"
        }
      },
      "query" : {
        "query_string" : {
          "fields" : [ "adid", "imp" ],
          "query" : "525826 AND true"
        }
      }
    }
  },
  "facets" : {
    "histo1" : {
      "date_histogram" : {
        "field" : "timestamp",
        "interval" : "day"
      }
    }
  }
}'
In Elasticsearch you can use a range query or filter to achieve that.
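For instance, a range filter on the timestamp field could be combined with your existing exists filter via an and filter, roughly like this (the date values are placeholders for the user-supplied startDate and endDate):
"filter" : {
  "and" : [
    { "exists" : { "field" : "adid" } },
    { "range" : {
        "timestamp" : {
          "from" : "2013-01-01",
          "to" : "2013-01-31"
        }
    }}
  ]
}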
