In one of my projects, I am planning to use Elasticsearch with MySQL.
I have successfully installed Elasticsearch and I am able to manage indices in ES on their own, but I don't know how to tie this in with MySQL.
I have read a couple of documents, but I am a bit confused and don't have a clear picture yet.
As of ES 5.x, this feature is available out of the box via the Logstash JDBC input plugin.
It will periodically import data from the database and push it to the ES server.
One has to create a simple import file like the one below (which is also described here) and use Logstash to run it. Logstash supports running this script on a schedule.
# file: contacts-index-logstash.conf
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
        jdbc_user => "user"
        jdbc_password => "pswd"
        schedule => "* * * * *"
        jdbc_validate_connection => true
        jdbc_driver_library => "/path/to/latest/mysql-connector-java-jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        statement => "SELECT * from contacts where updatedAt > :sql_last_value"
    }
}
output {
    elasticsearch {
        protocol => http
        index => "contacts"
        document_type => "contact"
        document_id => "%{id}"
        host => "ES_NODE_HOST"
    }
}
# "* * * * *" -> run every minute
# sql_last_value is a built-in parameter whose value is set to Thursday, 1 January 1970,
# or 0 if use_column_value is true and tracking_column is set
You can download the MySQL JDBC driver jar from Maven here.
If the index does not exist in ES when this script is executed, it will be created automatically, just like a normal POST call to Elasticsearch would create it.
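For comparison, here is a minimal sketch of that "normal POST call" done from PHP with curl; the index name mirrors the config above, and the field names are only illustrative assumptions:
<?php
// Indexing a single document auto-creates the "contacts" index if it is missing.
// Field names (id, name, email) are only illustrative.
$doc = json_encode(array('id' => 1, 'name' => 'Jane', 'email' => 'jane@example.com'));
$ch  = curl_init('http://localhost:9200/contacts/contact/1');
curl_setopt_array($ch, array(
    CURLOPT_CUSTOMREQUEST  => 'PUT',
    CURLOPT_POSTFIELDS     => $doc,
    CURLOPT_HTTPHEADER     => array('Content-Type: application/json'),
    CURLOPT_RETURNTRANSFER => true,
));
echo curl_exec($ch);
curl_close($ch);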
Finally I was able to find the answer; sharing my findings here.
To use Elasticsearch with MySQL you will require the Java Database Connectivity (JDBC) importer. With JDBC drivers you can sync your MySQL data into Elasticsearch.
I am using Ubuntu 14.04 LTS, and you will need to install Java 8 to run Elasticsearch, as it is written in Java.
Following are the steps to install Elasticsearch 2.2.0 and elasticsearch-jdbc 2.2.0; please note that both versions have to be the same.
After installing Java 8, install Elasticsearch 2.2.0 as follows:
# cd /opt
# wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/deb/elasticsearch/2.2.0/elasticsearch-2.2.0.deb
# sudo dpkg -i elasticsearch-2.2.0.deb
This installation procedure will install Elasticsearch in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch.
Now let's do some basic configuration in the config file; here /etc/elasticsearch/elasticsearch.yml is our config file.
You can open the file for editing with
nano /etc/elasticsearch/elasticsearch.yml
and change the cluster name and node name.
For example:
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: servercluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: vps.server.com
#
# Add custom attributes to the node:
#
# node.rack: r1
Now save the file and start Elasticsearch:
/etc/init.d/elasticsearch start
To test whether ES is installed, run the following:
curl -XGET 'http://localhost:9200/?pretty'
If you get something like the following, then your Elasticsearch is installed :)
{
  "name" : "vps.server.com",
  "cluster_name" : "servercluster",
  "version" : {
    "number" : "2.2.0",
    "build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",
    "build_timestamp" : "2016-01-27T13:32:39Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
Now let's install elasticsearch-jdbc.
Download it from http://xbib.org/repository/org/xbib/elasticsearch/importer/elasticsearch-jdbc/2.3.3.1/elasticsearch-jdbc-2.3.3.1-dist.zip, extract it into /etc/elasticsearch/, and also create a "logs" folder there (the path of the logs should be /etc/elasticsearch/logs).
I have one database created in MySQL named "ElasticSearchDatabase", and inside it a table named "test" with the fields id, name and email.
cd /etc/elasticsearch
and run the following:
echo '{
    "type":"jdbc",
    "jdbc":{
        "url":"jdbc:mysql://localhost:3306/ElasticSearchDatabase",
        "user":"root",
        "password":"",
        "sql":"SELECT id as _id, id, name, email FROM test",
        "index":"users",
        "type":"users",
        "autocommit":"true",
        "metrics": {
            "enabled" : true
        },
        "elasticsearch" : {
            "cluster" : "servercluster",
            "host" : "localhost",
            "port" : 9300
        }
    }
}' | java -cp "/etc/elasticsearch/elasticsearch-jdbc-2.2.0.0/lib/*" -"Dlog4j.configurationFile=file:////etc/elasticsearch/elasticsearch-jdbc-2.2.0.0/bin/log4j2.xml" "org.xbib.tools.Runner" "org.xbib.tools.JDBCImporter"
Now check whether the MySQL data was imported into ES:
curl -XGET http://localhost:9200/users/_search/?pretty
If all goes well, you will be able to see all your MySQL data in JSON format,
and if there are any errors you will be able to see them in the /etc/elasticsearch/logs/jdbc.log file.
Caution:
In older versions of ES, the plugin elasticsearch-river-jdbc was used; it is completely deprecated in the latest versions, so do not use it.
I hope I could save you some time :)
Any further thoughts are appreciated.
Reference URL: https://github.com/jprante/elasticsearch-jdbc
The logstash JDBC plugin will do the job:
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
        jdbc_user => "root"
        jdbc_password => "factweavers"
        # The path to our downloaded jdbc driver
        jdbc_driver_library => "/home/comp/Downloads/mysql-connector-java-5.1.38.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        # our query (cron-style schedule needs five fields)
        schedule => "* * * * *"
        statement => "SELECT * FROM testtable where Date > :sql_last_value order by Date"
        use_column_value => true
        tracking_column => "Date"
    }
}
output {
    stdout { codec => json_lines }
    elasticsearch {
        "hosts" => "localhost:9200"
        "index" => "test-migrate"
        "document_type" => "data"
        "document_id" => "%{personid}"
    }
}
To make it simpler, I have created a PHP class to set up MySQL with Elasticsearch. Using my class you can sync your MySQL data into Elasticsearch and also perform full-text search. You just need to set your SQL query and the class will do the rest for you.
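For readers who would rather see the mechanics in plain PHP, here is a rough sketch (not the class mentioned above) of pulling rows out of MySQL with PDO and pushing them into Elasticsearch through the bulk REST API; the credentials, table, index name and field list are all placeholders:
<?php
// Illustrative only: sync a MySQL table into an Elasticsearch index using the bulk API.
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pswd');
$rows = $pdo->query('SELECT id, name, email FROM contacts')->fetchAll(PDO::FETCH_ASSOC);

$bulk = '';
foreach ($rows as $row) {
    // Older ES versions (2.x/5.x) also expect a "_type" in this action line.
    $bulk .= json_encode(array('index' => array('_index' => 'contacts', '_id' => $row['id']))) . "\n";
    $bulk .= json_encode($row) . "\n";
}

$ch = curl_init('http://localhost:9200/_bulk');
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $bulk,
    CURLOPT_HTTPHEADER     => array('Content-Type: application/x-ndjson'),
    CURLOPT_RETURNTRANSFER => true,
));
$response = curl_exec($ch);
curl_close($ch);
// Afterwards a search against http://localhost:9200/contacts/_search should show the rows.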
Related
I have a PHP app deployed with a Dockerfile, using the EFK stack (Elasticsearch, Fluentd, and Kibana), and I'm using the "Logger\FluentLogger" plugin to send the logs to Elasticsearch.
$logger = new FluentLogger(FLUENTD_ENDPOINT, FLUENTD_PORT);
$logger->post("c", array("message" => "executed query: ..."));
When I build the Docker image I export two variables:
RUN export SERVICE_VERSION=$(head -n 1 .version) && export BUILD_TIMESTAMP=$(head -2 .version | tail -1)
I want to write these two variables with every logging operation. I also want to display the container name, like this:
output example
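In case it helps, a rough sketch of one way to do this (untested against the asker's setup): declare the two values with ENV in the Dockerfile so they survive into the running container, read them with getenv(), and merge them into every post; gethostname() is used here as a stand-in for the container name.
// Sketch only: assumes SERVICE_VERSION and BUILD_TIMESTAMP are set via ENV in the Dockerfile,
// since a plain `RUN export ...` does not persist into the running container.
$logger  = new FluentLogger(FLUENTD_ENDPOINT, FLUENTD_PORT);
$context = array(
    "service_version" => getenv("SERVICE_VERSION"),
    "build_timestamp" => getenv("BUILD_TIMESTAMP"),
    "container_name"  => gethostname(), // inside a container this is the container ID/hostname
);
$logger->post("c", array_merge($context, array("message" => "executed query: ...")));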
Over the past few days I've been trying to learn more about LDAP, and I would now like to be able to create a new account on my phpLDAPadmin server from a web form. I have the values being passed back through PHP correctly, but I keep getting an objectclass violation error. I've scoured many different resources (including this one), and basically all I can find is that the objectclass needs to match exactly how the schema is set up. I ran an export for some of the manually created users I already have working in there successfully, and this is an example of the output:
# LDIF Export for cn=api user,cn=students,ou=users,dc=myhost,dc=com
# Server: LDAP (ip)
# Search Scope: sub
# Search Filter: (objectClass=*)
# Total Entries: 1
#
# Generated by phpLDAPadmin (http://phpldapadmin.sourceforge.net) on June 4, 2016 3:15 pm
# Version: 1.2.2
version: 1
# Entry 1: cn=api user,cn=students,ou=users,dc=myhost,dc=co...
dn: cn=test user,cn=students,ou=users,dc=myhost,dc=com
cn: test
gidnumber: 502
givenname: test
homedirectory: /home/users/testuser
loginshell: /bin/sh
objectclass: inetOrgPerson
objectclass: posixAccount
objectclass: top
sn: tuser
uid: testuser
uidnumber: 1003
userpassword: {MD5}pass==
and I have tried mimicking it as closely as possible in my script (below), but I am still getting the violation error. No problems connecting or with any of the other fields, only the objectclass problem.
$ds = ldap_connect($AD_server);
if ($ds) {
    ldap_set_option($ds, LDAP_OPT_PROTOCOL_VERSION, 3);
    $r = ldap_bind($ds, $AD_Auth_User, $AD_Auth_PWD);
    $info["cn"] = $user_full_name;
    $info["sn"] = $user_username;
    $info['objectclass'][0] = "top";
    $info['objectclass'][1] = "posixAccount";
    $info['objectclass'][2] = "inetOrgPerson";
    $info['uid'] = $user_username;
    $info['userpassword'] = $newPassw;
    $info['loginshell'] = '/bin/sh';
    $info['homedirectory'] = "/home/users/$user_username";
    // add data to directory
    $r = ldap_add($ds, $dn, $info);
    ldap_close($ds);
} else {
    echo "Unable to connect to LDAP server";
}
I've played around with the objectclasses and tried switching their positions or using only inetOrgPerson, and still no luck. Any thoughts?
When creating entries within LDAP you need to know which Attributes are "MUST" (required) for the ObjectClasses used when creating the entry.
In your example:
person MUST ( sn $ cn )
posixAccount MUST ( cn $ uid $ uidNumber $ gidNumber $ homeDirectory )
So to create the entry in LDAP you MUST have values for all of these:
sn
cn
uid
uidNumber
gidNumber
homeDirectory
You can tell which attributes are required by doing an LDAP query for the schema and reading each ObjectClass to determine which attributes are "MUST" (required).
It looks like you need to make sure to pass every value back through. I was missing the uidnumber, givenname and gidnumber fields, but now it works! :)
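For anyone hitting the same violation, a sketch of what the complete attribute set might look like once every MUST attribute for top/posixAccount/inetOrgPerson is supplied (the numeric IDs and the $user_first_name variable are placeholders):
$info = array(
    'cn'            => $user_full_name,
    'sn'            => $user_username,
    'givenname'     => $user_first_name,             // hypothetical variable
    'uid'           => $user_username,
    'uidnumber'     => 1004,                         // must be unique per user
    'gidnumber'     => 502,
    'homedirectory' => "/home/users/$user_username",
    'loginshell'    => '/bin/sh',
    'userpassword'  => $newPassw,
    'objectclass'   => array('top', 'posixAccount', 'inetOrgPerson'),
);
$r = ldap_add($ds, $dn, $info);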
I want to generate client keys with PHP. When a client key is generated, it should give me the expiry date of the key.
root@zohaib-VirtualBox:/etc/openvpn/easy-rsa# ./build-key client1
Generating a 2048 bit RSA private key .............................................................+++ ............................+++
writing new private key to 'client1.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [GB]:
State or Province Name (full name) [London]:
Locality Name (eg, city) [London]:
Organization Name (eg, company) [Org]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) [client1]:
Name [OrgServer]:
Email Address [admin@org.com]:
Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from /etc/openvpn/easy-rsa/openssl-1.0.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'GB'
stateOrProvinceName :PRINTABLE:'London'
localityName :PRINTABLE:'London'
organizationName :PRINTABLE:'Org'
commonName :PRINTABLE:'client1'
name :PRINTABLE:'OrgServer'
emailAddress :IA5STRING:'admin@gamban.com'
Certificate is to be certified until Apr 21 15:43:47 2026 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
root@zohaib-VirtualBox:/etc/openvpn/easy-rsa#
You can use shell_exec and then work with the result, for example with a regex to match the expiry date of the key, i.e.:
$ovpnKey = shell_exec("your command here");
The result of the command will be held in the variable $ovpnKey.
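For example, a rough sketch of pulling the expiry date out of the output shown in the question (the command itself is left as a placeholder, exactly as above):
$ovpnKey = shell_exec("your command here 2>&1");
// The signing step prints a line like:
// "Certificate is to be certified until Apr 21 15:43:47 2026 GMT (3650 days)"
if (preg_match('/certified until (.+?) \(\d+ days\)/', $ovpnKey, $matches)) {
    $expiryDate = $matches[1]; // e.g. "Apr 21 15:43:47 2026 GMT"
}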
Update:
To automate the creation of new OpenVPN client certificates, use the following script. Make sure you edit, at least, the following variables: OPENVPN_RSA_DIR, OPENVPN_KEYS, KEY_DOWNLOAD_PATH.
#! /bin/bash
# Script to automate creating new OpenVPN clients
# The client cert and key, along with the CA cert is
# zipped up and placed somewhere to download securely
#
# H Cooper - 05/02/11
#
# Usage: new-openvpn-client.sh <common-name>
# Set where we're working from
OPENVPN_RSA_DIR=/etc/openvpn/easy-rsa/2.0
OPENVPN_KEYS=$OPENVPN_RSA_DIR/keys
KEY_DOWNLOAD_PATH=/var/www/secure
# Either read the CN from $1 or prompt for it
if [ -z "$1" ]
then echo -n "Enter new client common name (CN): "
read -e CN
else
CN=$1
fi
# Ensure CN isn't blank
if [ -z "$CN" ]
then echo "You must provide a CN."
exit
fi
# Check the CN doesn't already exist
if [ -f $OPENVPN_KEYS/$CN.crt ]
then echo "Error: certificate with the CN $CN alread exists!"
echo " $OPENVPN_KEYS/$CN.crt"
exit
fi
# Enter the easy-rsa directory and establish the default variables
cd $OPENVPN_RSA_DIR
source ./vars > /dev/null
# Copied from build-key script (to ensure it works!)
export EASY_RSA="${EASY_RSA:-.}"
"$EASY_RSA/pkitool" --batch $CN
# Take the new cert and place it somewhere it can be downloaded securely
zip -q $KEY_DOWNLOAD_PATH/$CN-`date +%d%m%y`.zip keys/$CN.crt keys/$CN.key keys/ca.crt
# Celebrate!
echo ""
echo "#############################################################"
echo "COMPLETE! Download the new certificate here:"
echo "https://domain.com/secure/$CN-`date +%d%m%y`.zip"
echo "#############################################################"
Save the above bash script as new-openvpn-client.sh and give it execute permissions.
Then use PHP's shell_exec to generate the keys:
$ovpnKey = shell_exec("sh /full/path/to/new-openvpn-client.sh <common-name>");
Sources:
https://gist.github.com/hcooper/814247
I have started using MongoDB for one of my PHP projects. In that database, I am trying to use MongoDB's sharding concept. I found the link below and tried it:
MongoDB Sharding Example
It is working well, but the problem is that in the above example everything is done at the command prompt, whereas I am trying to do everything in PHP, and I am not able to find any example in PHP.
So far, I have started with these pieces of code:
$connection = new Mongo();
$db = $connection->selectDB('TestDB');
$db = $connection->TestDB;
$connection->selectDB('admin')->command(array('addshard'=>'host:port'));
$connection->selectDB('admin')->command(array('enablesharding'=>'TestDB'));
$connection->selectDB('admin')->command(array('shardcollection'=>'TestDB.large', 'key' => '_id'));
It is not working. Also, I don't know how to set up shard servers and the config database in PHP.
Is there any other way to do MongoDB sharding in PHP?
MongoDB's database commands are case-sensitive. You were passing in lowercase command names, while the real commands are camelCased (see: sharding commands). Additionally, the key parameter of shardCollection needs to be an object.
A corrected version of your above code would look like:
$connection->selectDB('admin')->command(array('addShard'=>'host:port'));
$connection->selectDB('admin')->command(array('enableSharding'=>'TestDB'));
$connection->selectDB('admin')->command(array('shardCollection'=>'TestDB.large', 'key' => array('_id' => 1)));
Additionally, you should have been able to examine the command result to determine why this wasn't working. The ok field would be zero, and an errmsg field would explain the error (e.g. "no such cmd: addshard").
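For instance, a small sketch of checking a command result with the (legacy) Mongo driver used above:
$result = $connection->selectDB('admin')->command(array('addShard' => 'host:port'));
if (empty($result['ok'])) {
    // e.g. "no such cmd: addshard" for the original lowercase spelling
    echo 'addShard failed: ' . $result['errmsg'];
}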
Lastly, if you're attempting to mimic the shell's functionality to configure sharding in PHP, you may realize that some of the shell methods are more than just database commands. For any of those methods, you can get the JS source by omitting the parentheses. For example:
> sh.addTagRange
function ( ns, min, max, tag ) {
    var config = db.getSisterDB( "config" );
    config.tags.update( {_id: { ns : ns , min : min } } ,
        {_id: { ns : ns , min : min }, ns : ns , min : min , max : max , tag : tag } ,
        true );
    sh._checkLastError( config );
}
That would be a useful hint for what an addTagRange() function in PHP would need to do.
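As an illustration, a hypothetical PHP port of that helper, mirroring the JS source above (again using the legacy Mongo driver from the question):
function addTagRange(Mongo $connection, $ns, array $min, array $max, $tag)
{
    $config = $connection->selectDB('config');
    // Upsert into config.tags, just like the shell helper does.
    $config->tags->update(
        array('_id' => array('ns' => $ns, 'min' => $min)),
        array(
            '_id' => array('ns' => $ns, 'min' => $min),
            'ns'  => $ns, 'min' => $min, 'max' => $max, 'tag' => $tag,
        ),
        array('upsert' => true)
    );
}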
In Subversion 1.6, there was an .svn directory in every working copy directory. I could use the following code to quickly retrieve the current revision number without the need for shell access/execution.
public function getSubversionRevision() {
    if(file_exists("../.svn/entries")) {
        $svn = File("../.svn/entries");
        return (int)$svn[3];
    }
    return false;
}
Subversion 1.7 breaks this code. There is now only one .svn directory per local repository. There is an entries file in this directory but it no longer has anything useful for me. It looks like everything I need is now in a SQLite database. Specifically wc.db. I suppose I could use PHP's SQLite functions to get the info I need, but this sounds a little too expensive to run on every (or close to every) page load.
Any ideas? Breaking out the exec function and hoping that Subversion binaries are installed (and in the $PATH!) is a last resort. To summarize, I need to find a way to locate the .svn directory at the root of the repository (which could be different depending on your checkout location) and then somehow parse a file in there (probably wc.db) in a cost-effective way.
Try using an SVN library, such as the SVN extension for PHP, as it will give you access to repository information and more functionality on top of an SVN repository.
Take a look at the function svn_status; you will receive an array of SVN repository information:
Array (
    [0] => Array (
        [path] => /home/bob/wc/sandwich.txt
        [text_status] => 8       // item was modified
        [repos_text_status] => 1 // no information available, use update
        [prop_status] => 3       // no changes
        [repos_prop_status] => 1 // no information available, use update
        [name] => sandwich.txt
        [url] => http://www.example.com/svnroot/deli/trunk/sandwich.txt
        [repos] => http://www.example.com/svnroot/
        [revision] => 123        // <-- Current Revision
        //..
    )
)
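Based on the array layout above, usage could look roughly like this (a sketch; the PECL svn extension is assumed to be installed and the path is a placeholder):
// SVN_ALL makes unmodified items show up in the status list as well.
$status   = svn_status('/path/to/working-copy', SVN_ALL);
$revision = !empty($status) ? (int) $status[0]['revision'] : false;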
You can write a script to run svn update for you every time, and scrape the revision number from the update's output into a file. The following bash script should more or less do it:
#!/bin/bash
svn --non-interactive update ..
svn --non-interactive update .. | perl -p -e "s/.* revision ([\d]*)\./\$1/" > ../version.phtml;
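The PHP side then only needs to read that file back, for example:
// Assumes the script above has written the revision to ../version.phtml
$file     = '../version.phtml';
$revision = is_readable($file) ? (int) trim(file_get_contents($file)) : false;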
I put together a clumsy (but relatively thorough) implementation of Subversion 1.7 revision retrieval in TheHostingTool. Check the commit message for an explanation. The function is contained in class_main.php, which is one directory below /trunk (/trunk/includes/class_main.php). You'll need to adjust relative paths for your specific needs. This is a slightly modified version of the function found in class_main.php for use elsewhere.
public function getSubversionRevision() {
    // This will work for Subversion 1.6 clients and below
    if(file_exists("../.svn/entries")) {
        $svn = File("../.svn/entries");
        return (int)$svn[3];
    }
    // Check the previous directories recursively looking for wc.db (Subversion 1.7)
    $searchDepth = 3; // Max search depth
    // Do we have PDO? And do we have the SQLite PDO driver?
    if(!extension_loaded('PDO') || !extension_loaded('pdo_sqlite')) {
        $searchDepth = 0; // Don't even bother...
    }
    $dotdot = '';
    for($i = 1; $i <= $searchDepth; $i++) {
        $dotdot .= '../';
        if(!file_exists("$dotdot.svn/wc.db")) {
            continue;
        }
        $wcdb = new PDO("sqlite:$dotdot.svn/wc.db");
        $result = $wcdb->query('SELECT "revision" FROM "NODES" WHERE "repos_path" = "'.basename(realpath('..')).'"');
        return (int)$result->fetchColumn();
    }
    if($this->canRun('exec')) {
        exec('svnversion ' . realpath('..'), $out, $return);
        // For this to work, svnversion must be in your PHP's PATH environment variable
        if($return === 0 && $out[0] != "Unversioned directory") {
            return (int)$out[0];
        }
    }
    return false;
}
Instead of .= '../' you may want to use str_repeat if you don't start at 1. Or, you could simply define $dotdot ahead of time to wherever you would like to start.
Or just query:
SELECT "changed_revision" FROM "NODES" ORDER BY changed_revision DESC LIMIT 1;
if you want the last revision of the whole repo! :)
Excuse me, but if you want a portable and bullet-proof solution, why not call the Subversion CLI from PHP?
You can use svn info inside any directory of the WC (capture and parse its output to get a set of data), or svnversion if you only want the global revision.
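If shelling out is acceptable, a sketch of both options (the paths and the availability of the svn/svnversion binaries in PHP's PATH are assumptions):
$path = realpath('..');

// Option 1: svnversion prints just the revision (possibly with an M/S suffix).
$revision = (int) trim(shell_exec('svnversion ' . escapeshellarg($path)));

// Option 2: parse the "Revision:" line out of `svn info`.
$info = shell_exec('svn info ' . escapeshellarg($path));
if (preg_match('/^Revision: (\d+)/m', $info, $m)) {
    $revision = (int) $m[1];
}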