I have a PHP app deployed with a Dockerfile, using the EFK stack (Elasticsearch, Fluentd, and Kibana), and I'm using the "Logger\FluentLogger"
plugin to send logs to Elasticsearch.
$logger = new FluentLogger(FLUENTD_ENDPOINT, FLUENTD_PORT);
$logger->post("c", array("message"=>"executed query: ..."));
When I build the Docker image, I export two variables:
RUN export SERVICE_VERSION=$(head -n 1 .version) && export BUILD_TIMESTAMP=$(head -2 .version | tail -1)
I want to write these two variables with every logging operation. I also want to display the container name, like this:
(output example omitted)
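For context, here is a minimal sketch of one way these fields could be attached to every record. It assumes the two values are baked into the image with ENV (a plain RUN export does not persist past its build step) and that the container's hostname stands in for the container name; the helper name postWithMeta is purely illustrative.

// Sketch only: assumes the Dockerfile exposes the values with ENV, e.g.
//   ENV SERVICE_VERSION=... BUILD_TIMESTAMP=...
// (a RUN export is lost once that layer finishes building).
$logger = new FluentLogger(FLUENTD_ENDPOINT, FLUENTD_PORT);

// Wrap post() so every record carries the shared metadata.
function postWithMeta(FluentLogger $logger, $tag, array $record)
{
    $record += array(
        'service_version' => getenv('SERVICE_VERSION'),
        'build_timestamp' => getenv('BUILD_TIMESTAMP'),
        'container'       => gethostname(), // the container ID by default
    );
    $logger->post($tag, $record);
}

postWithMeta($logger, 'c', array('message' => 'executed query: ...'));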
How do I show the Run Report for a given xhprof run using xhprof-html?
I installed tideways-xhprof.
apt-get install php-dev
git clone "https://github.com/tideways/php-xhprof-extension.git"
cd php-xhprof-extension
phpize
./configure
make
make install
I enabled it in my php.ini
extension=/usr/lib/php/20190902/tideways_xhprof.so
I configured WordPress to write out xhprof runs in my wp-config.php file:
# PHP profiling with tideways-xhprof
# * https://pressjitsu.com/blog/profiling-wordpress-performance/
if ( isset( $_GET['profile'] ) && $_GET['profile'] === 'secret-string' ) {
    tideways_xhprof_enable( TIDEWAYS_FLAGS_MEMORY + TIDEWAYS_FLAGS_CPU );
    register_shutdown_function( function() {
        $results = tideways_xhprof_disable();
        file_put_contents( '/path/to/my/xhprof/dir/' . date('Y-m-d\TH:i:s\Z', time() - date('Z')) . '.xhprof', serialize( $results ) );
    });
}
/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');
And I was successfully able to dump out .xhprof files by visiting https://example.com/index.php?profile=secret-string
root@host:/path/to/my/xhprof/dir/# du -sh *
1,6M    2022-05-25T18:15:45Z.xhprof
1,6M    2022-05-25T18:18:38Z.xhprof
root@host:/path/to/my/xhprof/dir/#
I also created a website with xhprof-html, and I configured it to find the above .xhprof files:
root@host:/var/www/xhprof/xhprof-html# diff index.orig.php index.php
83c83
< $xhprof_runs_impl = new XHProfRuns_Default();
---
> $xhprof_runs_impl = new XHProfRuns_Default( '/path/to/my/xhprof/dir' );
root@host:/var/www/xhprof/xhprof-html#
Now I can access the xhprof-html/index.php file, and it successfully displays my two .xhprof files at https://xhprof.example.com/xhprof-html/index.php
No XHProf runs specified in the URL.
Existing runs:
2022-05-25T18:18:38Z.xhprof 2022-05-25 18:18:38
2022-05-25T18:15:45Z.xhprof 2022-05-25 18:15:45
And if I click on either one, I'm redirected (as expected) to either of these pages:
https://xhprof.example.com/xhprof-html/index.php?run=2022-05-25T18:18:38Z&source=xhprof
https://xhprof.example.com/xhprof-html/index.php?run=2022-05-25T18:15:45Z&source=xhprof
However, I would expect the above pages to render the actual Run Report. But they do not. Instead, I just see
No XHProf runs specified in the URL.
Existing runs:
2022-05-25T18:18:38Z.xhprof 2022-05-25 18:18:38
2022-05-25T18:15:45Z.xhprof 2022-05-25 18:15:45
Hello? The XHProf run is clearly specified in the URL.
How do I get xhprof-html to display the actual Run Report?
The Run Report is not shown because xhprof-html can't read the file.
Problem
You are generating the file with the tideways xhprof extension and writing its serialized output to a file:
https://github.com/tideways/php-xhprof-extension
$results = tideways_xhprof_disable();
file_put_contents( '/path/to/my/xhprof/dir/' .date('Y-m-d\TH:i:s\Z', time() - date('Z')). '.xhprof' , serialize( $results ) );
But then you switch to hosting an older (no longer maintained) fork of xhprof to serve xhprof_html in your web server's document root:
https://github.com/phacility/xhprof
Note: Of course you have to do this; even the tideways docs link you to the phacility repo. The tideways repo itself curiously does not include the xhprof_html directory, possibly because tideways is a business that sells xhprof visualization SaaS, so they stripped the "poor man's" xhprof_html from the fork of xhprof that they maintain.
If you want to read the file using XHProfRuns_Default(), then you should write the file using XHProfRuns_Default().
Solution
Change the block in your wp-config.php to the following:
# PHP profiling with tideways-xhprof
# * https://pressjitsu.com/blog/profiling-wordpress-performance/
if ( isset( $_GET['profile'] ) && $_GET['profile'] === 'secret-string' ) {
    tideways_xhprof_enable( TIDEWAYS_FLAGS_MEMORY + TIDEWAYS_FLAGS_CPU );
    register_shutdown_function( function() {
        $results = tideways_xhprof_disable();
        include_once( '/var/www/xhprof/xhprof_lib/utils/xhprof_runs.php' );
        $XHProfRuns = new XHProfRuns_Default( '/path/to/my/xhprof/dir' );
        $XHProfRuns->save_run( $results, date('Y-m-d\TH:i:s\Z', time() - date('Z')) );
    });
}
Then generate a new profile:
https://example.com/index.php?profile=secret-string
You'll now see a new item named something like 62906fb2c49b4.2022-05-27T06:29:06Z.xhprof
No XHProf runs specified in the URL.
Existing runs:
62906fb2c49b4.2022-05-27T06:29:06Z.xhprof 2022-05-27 06:29:06
2022-05-25T18:18:38Z.xhprof 2022-05-25 18:18:38
2022-05-25T18:15:45Z.xhprof 2022-05-25 18:15:45
Click this new item, and you'll actually see the Run Report now that xhprof-html can read the xhprof file format.
https://xhprof.example.com/xhprof-html/index.php?run=62906fb2c49b4&source=2022-05-27T06:29:06Z
For more info on this issue, see:
https://tech.michaelaltfield.net/2022/06/07/wordpress-xhprof/
In one of my projects, I am planning to use Elasticsearch with MySQL.
I have successfully installed Elasticsearch. I am able to manage indexes in ES separately, but I don't know how to implement the same with MySQL.
I have read a couple of documents, but I am a bit confused and do not have a clear idea.
As of ES 5.x, this feature is available out of the box with the Logstash JDBC input plugin.
It will periodically import data from the database and push it to the ES server.
One has to create a simple import file like the one below (which is also described here) and use Logstash to run the script. Logstash supports running this script on a schedule.
# file: contacts-index-logstash.conf
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
        jdbc_user => "user"
        jdbc_password => "pswd"
        schedule => "* * * * *"
        jdbc_validate_connection => true
        jdbc_driver_library => "/path/to/latest/mysql-connector-java-jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        statement => "SELECT * from contacts where updatedAt > :sql_last_value"
    }
}
output {
    elasticsearch {
        protocol => "http"
        index => "contacts"
        document_type => "contact"
        document_id => "%{id}"
        host => "ES_NODE_HOST"
    }
}
# "* * * * *" -> run every minute
# sql_last_value is a built-in parameter whose value is set to Thursday, 1 January 1970,
# or 0 if use_column_value is true and tracking_column is set
You can download the MySQL JDBC driver jar from Maven here.
If the indexes do not exist in ES when this script is executed, they will be created automatically, just like a normal POST call to Elasticsearch.
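As a hedged illustration of that behavior (the endpoint and field names below are assumptions chosen to match the config above), a plain HTTP POST from PHP will create the index on first write:

// Illustration only: POSTing a document to a non-existent index makes
// Elasticsearch create that index automatically (ES 5.x URL scheme).
$doc = json_encode(array('name' => 'Jane', 'email' => 'jane@example.com'));
$ctx = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => "Content-Type: application/json\r\n",
    'content' => $doc,
)));
// `contacts` and `contact` match the index and document_type above.
echo file_get_contents('http://localhost:9200/contacts/contact', false, $ctx);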
Finally I was able to find the answer; sharing my findings.
To use Elasticsearch with MySQL you will require the Java Database Connectivity (JDBC) importer. With JDBC drivers you can sync your MySQL data into Elasticsearch.
I am using Ubuntu 14.04 LTS, and you will need to install Java 8 to run Elasticsearch, as it is written in Java.
The following are the steps to install Elasticsearch 2.2.0 and elasticsearch-jdbc 2.2.0.0; please note that the two versions have to match.
After installing Java 8, install Elasticsearch 2.2.0 as follows:
# cd /opt
# wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/deb/elasticsearch/2.2.0/elasticsearch-2.2.0.deb
# sudo dpkg -i elasticsearch-2.2.0.deb
This installation procedure will install Elasticsearch in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch.
Now let's do some basic configuration in the config file; here /etc/elasticsearch/elasticsearch.yml is our config file.
You can open the file for editing with
nano /etc/elasticsearch/elasticsearch.yml
and change the cluster name and node name.
For example:
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: servercluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: vps.server.com
#
# Add custom attributes to the node:
#
# node.rack: r1
Now save the file and start Elasticsearch:
/etc/init.d/elasticsearch start
To test whether ES is installed, run the following:
curl -XGET 'http://localhost:9200/?pretty'
If you get the following, your Elasticsearch is installed now :)
{
  "name" : "vps.server.com",
  "cluster_name" : "servercluster",
  "version" : {
    "number" : "2.2.0",
    "build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",
    "build_timestamp" : "2016-01-27T13:32:39Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
Now let's install elasticsearch-jdbc.
Download it from http://xbib.org/repository/org/xbib/elasticsearch/importer/elasticsearch-jdbc/2.2.0.0/elasticsearch-jdbc-2.2.0.0-dist.zip, extract it into /etc/elasticsearch/, and also create a "logs" folder there (the path should be /etc/elasticsearch/logs).
I have one database in MySQL named "ElasticSearchDatabase", and inside it a table named "test" with the fields id, name, and email.
cd /etc/elasticsearch
and run the following:
echo '{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:mysql://localhost:3306/ElasticSearchDatabase",
        "user" : "root",
        "password" : "",
        "sql" : "SELECT id as _id, id, name, email FROM test",
        "index" : "users",
        "type" : "users",
        "autocommit" : "true",
        "metrics" : {
            "enabled" : true
        },
        "elasticsearch" : {
            "cluster" : "servercluster",
            "host" : "localhost",
            "port" : 9300
        }
    }
}' | java -cp "/etc/elasticsearch/elasticsearch-jdbc-2.2.0.0/lib/*" -Dlog4j.configurationFile=file:///etc/elasticsearch/elasticsearch-jdbc-2.2.0.0/bin/log4j2.xml org.xbib.tools.Runner org.xbib.tools.JDBCImporter
Now check whether the MySQL data was imported into ES:
curl -XGET http://localhost:9200/users/_search/?pretty
If all goes well, you will be able to see all your MySQL data in JSON format,
and if there are any errors, you will be able to see them in the /etc/elasticsearch/logs/jdbc.log file.
Caution:
In older versions of ES, the plugin elasticsearch-river-jdbc was used. It is completely deprecated in the latest version, so do not use it.
I hope I could save you some time :)
Any further thoughts are appreciated.
Reference URL: https://github.com/jprante/elasticsearch-jdbc
The logstash JDBC plugin will do the job:
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
        jdbc_user => "root"
        jdbc_password => "factweavers"
        # The path to our downloaded jdbc driver
        jdbc_driver_library => "/home/comp/Downloads/mysql-connector-java-5.1.38.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        schedule => "* * * * *"
        # our query
        statement => "SELECT * FROM testtable where Date > :sql_last_value order by Date"
        use_column_value => true
        tracking_column => "Date"
    }
}
output {
    stdout { codec => json_lines }
    elasticsearch {
        hosts => "localhost:9200"
        index => "test-migrate"
        document_type => "data"
        document_id => "%{personid}"
    }
}
To make it simpler, I have created a PHP class to set up MySQL with Elasticsearch. Using my class you can sync your MySQL data into Elasticsearch and also perform full-text search. You just need to set your SQL query and the class will do the rest for you.
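The class itself isn't shown here, but as a rough sketch of what such a class might look like (the class name, DSN, and credentials are illustrative assumptions, not the author's actual code), one could pair PDO with the official elasticsearch/elasticsearch client:

require 'vendor/autoload.php';

use Elasticsearch\ClientBuilder;

class MysqlEsSync
{
    private $pdo;
    private $es;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
        $this->es  = ClientBuilder::create()->setHosts(array('localhost:9200'))->build();
    }

    // Run the given SQL and bulk-index every row into $index,
    // using each row's `id` column as the document id.
    public function sync($sql, $index)
    {
        $params = array('body' => array());
        foreach ($this->pdo->query($sql, PDO::FETCH_ASSOC) as $row) {
            $params['body'][] = array('index' => array('_index' => $index, '_id' => $row['id']));
            $params['body'][] = $row;
        }
        if ($params['body']) {
            $this->es->bulk($params);
        }
    }
}

// Usage: sync the `contacts` table into the `contacts` index.
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pswd');
$sync = new MysqlEsSync($pdo);
$sync->sync('SELECT * FROM contacts', 'contacts');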
I'm looking to parse the output that I get back from PHP's shell_exec, but some of the returned output is in tabular format and the data wraps within the columns, as shown here...
Name Installed Proposed Message
Version version
Views Bulk Operations 7.x-3.3 7.x-3.4 Update available
(views_bulk_operations)
Chaos tools (ctools) 7.x-1.12 7.x-1.13 Update available
CAS (cas) 7.x-1.5 7.x-1.7 Update available
Custom Search 7.x-1.18 7.x-1.20 Update available
(custom_search)
Date iCal (date_ical) 7.x-3.5 7.x-3.9 Update available
Entity API (entity) 7.x-1.8 7.x-1.9 SECURITY UPDATE available
Field Group (field_group) 7.x-1.5 7.x-1.6 Update available
Media (media) 7.x-1.6 7.x-2.16 SECURITY UPDATE available
Insert (insert) 7.x-1.3 7.x-1.4 Update available
Views (views) 7.x-3.16 7.x-3.18 SECURITY UPDATE available
Views Data Export 7.x-3.0-beta8 7.x-3.2 SECURITY UPDATE available
(views_data_export)
Views PHP (views_php) 7.x-1.0-alpha 7.x-1.0- Update available
1 alpha3
Athena (athena) Unknown Unknown Project was not packaged
by drupal.org but
obtained from git. You
need to enable git_deploy
module
To make it easier to parse, I was hoping to change the width of the output so the data doesn't wrap.
I assumed that specifying the width of the console with stty would work, but it doesn't. I tried this...
$output = shell_exec('stty cols 180; cd /; cd ' . $drupal_sites_folder_path . $list_of_drupal_sites[22] . '; drush pm-updatestatus');
...but the output width is still set to 80 cols.
Any suggestions on how to change the width of the output from shell_exec?
I am using the PHP API for adding/editing/viewing a FileMaker database. I am using FileMaker Pro 14 and FMS 14 in a Windows environment.
I am having an issue with adding/editing container fields. I tried the solution given in the following link: https://community.filemaker.com/thread/66165
It was successful. The FM script is:
Goto Layout[ The layout that shows your container field ]
New Record/Request
Set Variable[$url ; Value:Get(ScriptParameter)]
Insert from URL [Select, No Dialog ; database_name::ContainerField ; $url]
Exit Script
I don't want to add a new record. I have several container fields in the layout, so it's not a solution to add a record for each one, and I need to be able to modify older records' container fields.
I tried modifying the script as follows:
Go to Layout ["products" (products)]
Go to Record/Request/Page [Last]
Open Record/Request
Set Variable [$url; Value: Get(ScriptParameter)]
Insert from URL [Select, No Dialog; products::brochure; $url]
Exit Script []
Note: the [Last] parameter is just experimental.
The PHP script is as follows:
$runscript = $fm->newPerformScriptCommand('products', 'addContainerData', 'http://link_to_uploded_file');
$result = $runscript->execute();
$result returns success, but the file wasn't inserted in the container field.
Somebody pointed out to me that to use "Insert from URL" I have to specify a record ID. So I did the following:
I modified the PHP script to:
$editCommand = $fm->newEditCommand('products', $recordID, $editedData);
$editCommand->setPreCommandScript('addContainerData', 'http://url_to_my_uploaded_file');
$result = $editCommand->execute();
and the FM script (addContainerData) to:
Set Variable [$url; Value: Get(ScriptParameter)]
Insert from URL [Select, No Dialog; products::brochure; $url]
Exit Script []
The result was also a success, BUT without inserting the file into the container field.
What am I missing? What should I do to be able to add container data to new/old records?
A possible workaround is to use PHP functions to encode the file to Base64 and set that value on a text field in FileMaker. Once there, you can have an auto-enter calculation or a script take the Base64 value and decode it into a container field. This works well, especially for files with smaller sizes.
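A rough sketch of the PHP side of this workaround (the database, host, credentials, and the text field brochure_base64 are hypothetical; a FileMaker auto-enter calculation or script would decode that field into the container with Base64Decode):

require_once 'FileMaker.php';

$fm = new FileMaker('MyDatabase', 'fms.example.com', 'webuser', 'webpass');

// Read the file and Base64-encode it into a plain text field; FileMaker
// then decodes it into the container field on its side.
$data = base64_encode(file_get_contents('/tmp/brochure.pdf'));

$edit = $fm->newEditCommand('products', $recordID, array(
    'brochure_base64' => $data, // hypothetical text field
));
$result = $edit->execute();
if (FileMaker::isError($result)) {
    echo $result->getMessage();
}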
I have started using MongoDB for one of my PHP projects. In that database, I am trying to use MongoDB's sharding concept. I found the link below and tried it:
MongoDB Sharding Example
It is working well. But the problem is that in the above example everything is done at the command prompt, while I am trying to do everything in PHP, and I have not been able to find any example in PHP.
So far, I have started with this piece of code:
$connection = new Mongo();
$db = $connection->selectDB('TestDB');
$db = $connection->TestDB;
$connection->selectDB('admin')->command(array('addshard'=>'host:port'));
$connection->selectDB('admin')->command(array('enablesharding'=>'TestDB'));
$connection->selectDB('admin')->command(array('shardcollection'=>'TestDB.large', 'key' => '_id'));
It is not working. Also, I don't know how to set up shard servers and the config database in PHP.
Is there any other way to do MongoDB sharding in PHP?
MongoDB's database commands are case-sensitive. You were passing in lowercase command names, while the real commands are camelCased (see: sharding commands). Additionally, the key parameter of shardCollection needs to be an object.
A corrected version of your above code would look like:
$connection->selectDB('admin')->command(array('addShard'=>'host:port'));
$connection->selectDB('admin')->command(array('enableSharding'=>'TestDB'));
$connection->selectDB('admin')->command(array('shardCollection'=>'TestDB.large', 'key' => array('_id' => 1)));
Additionally, you could have examined the command result to determine why this wasn't working: the ok field would be zero, and an errmsg field would explain the error (e.g. "no such cmd: addshard").
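For example, a quick way to surface that error from PHP:

// The lowercase name is left in deliberately to trigger the error message.
$result = $connection->selectDB('admin')->command(array('addshard' => 'host:port'));
if (empty($result['ok'])) {
    echo 'Command failed: ' . $result['errmsg'] . "\n"; // "no such cmd: addshard"
}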
Lastly, if you're attempting to mimic the shell's functionality to configure sharding in PHP, you may realize that some of the shell methods are more than just database commands. For any of those methods, you can get the JS source by omitting the parentheses. For example:
> sh.addTagRange
function ( ns, min, max, tag ) {
    var config = db.getSisterDB( "config" );
    config.tags.update( { _id: { ns: ns, min: min } },
                        { _id: { ns: ns, min: min }, ns: ns, min: min, max: max, tag: tag },
                        true );
    sh._checkLastError( config );
}
That would be a useful hint for what an addTagRange() function in PHP would need to do.
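As an illustrative sketch (untested, using the same legacy Mongo driver as the code above), a PHP port of that function might look like:

// Hypothetical PHP port of sh.addTagRange(), based on the JS source above.
function addTagRange(Mongo $connection, $ns, array $min, array $max, $tag)
{
    $config = $connection->selectDB('config');
    $config->tags->update(
        array('_id' => array('ns' => $ns, 'min' => $min)),
        array(
            '_id' => array('ns' => $ns, 'min' => $min),
            'ns'  => $ns, 'min' => $min, 'max' => $max, 'tag' => $tag,
        ),
        array('upsert' => true) // the trailing `true` in the JS is the upsert flag
    );
}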