Cannot access ES on localhost:9200 - php

So I'm trying to set up Elasticsearch with XAMPP. I installed Elasticsearch from here, which comes bundled with a JDK, and I have set SSL to false in my elasticsearch.yml config file.
My config looks like this:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 15-04-2022 16:08:51
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false <----- I MODIFIED THIS
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["MENT-PLAN-L560"]
# Allow HTTP API connections from localhost and local networks
# Connections are encrypted and require user authentication
http.host: [_local_, _site_]
# Allow other nodes to join the cluster from localhost and local networks
# Connections are encrypted and mutually authenticated
#transport.host: [_local_, _site_]
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
I did this because otherwise I couldn't connect to 127.0.0.1:9200. Now the problem is that it prompts me with a username and password form. I know the default username is elastic, but I don't know the password because it is stored in the cluster.
So I tried resetting the password by running the command: elasticsearch-reset-password.bat -u elastic -i
But I get this:
I also tried forcing it with elasticsearch-reset-password.bat -u elastic -i -f, but since my cluster is unhealthy for whatever reason, it fails after I enter the password. What am I missing?
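For reference, once the password reset succeeds, the setup above (security enabled, HTTP SSL disabled) can be checked from PHP over plain HTTP with basic auth. This is a minimal sketch, assuming the curl extension is enabled; 'YOUR_PASSWORD' is a placeholder for the reset elastic password.
<?php
// Hit the root endpoint of the local node over plain HTTP with basic auth.
$ch = curl_init('http://127.0.0.1:9200/');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_USERPWD        => 'elastic:YOUR_PASSWORD', // placeholder credentials
]);
$response = curl_exec($ch);
if ($response === false) {
    echo 'Connection failed: ' . curl_error($ch) . PHP_EOL;
} else {
    echo $response . PHP_EOL; // should print the cluster/node info JSON
}
curl_close($ch);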

Related

Not able to connect to DB in php with mariadb gssapi, authentication method unknown to client

I am trying to authenticate users with GSSAPI using the MariaDB GSSAPI plugin in PHP, on a local installation with XAMPP. I have set up XAMPP and a local installation, which works. Now I want to connect to the DB using the Windows LDAP user and GSSAPI authentication.
The problem was already discussed here, but without any result:
GSSAPI-Auth with PHP to MariaDB (Windows)
The GSSAPI authentication for MariaDB itself seems to work: I created a user in phpMyAdmin with authentication method = gssapi, and in the CLI I am able to connect, see picture below:
Successful mysql connect with domain user
Now, when trying to connect with
if (($dbcon = mysqli_connect("localhost", "$mysql_userid", "$password")) === FALSE) {
    echo "Debug error message: " . mysqli_connect_error() . PHP_EOL;
    exit("4:Login process failed while connecting to database");
} else {
    $auth_result = TRUE;
}
I am getting the following error:
Warning: mysqli_connect(): The server requested authentication method unknown to the client [auth_gssapi_client] in C:\xampp\htdocs\oa5-maria\trunk\login.php on line 82
Warning: mysqli_connect(): (HY000/2054): The server requested authentication method unknown to the client in C:\xampp\htdocs\oa5-maria\trunk\login.php on line 82
4:Login process failed while connecting to database
I have set default-authentication-plugin=gssapi in the my.ini file, but I have no idea if this is the correct approach.
Do you have any suggestions for solving this problem?
This is my my.ini file:
# Example MySQL config file for small systems.
#
# This is for a system with little memory (<= 64M) where MySQL is only used
# from time to time and it's important that the mysqld daemon
# doesn't use much resources.
#
# You can copy this file to
# C:/xampp/mysql/bin/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is C:/xampp/mysql/data) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.
# The following options will be passed to all MySQL clients
[client]
# password = your_password
port=3306
socket="C:/xampp/mysql/mysql.sock"
# Here follows entries for some specific programs
# The MySQL server
default-character-set=utf8mb4
[mysqld]
port=3306
socket="C:/xampp/mysql/mysql.sock"
basedir="C:/xampp/mysql"
tmpdir="C:/xampp/tmp"
datadir="C:/xampp/mysql/data"
pid_file="mysql.pid"
# enable-named-pipe
key_buffer=16M
max_allowed_packet=200M
sort_buffer_size=512K
net_buffer_length=8K
read_buffer_size=256K
read_rnd_buffer_size=512K
myisam_sort_buffer_size=8M
log_error="mysql_error.log"
# new: for authentication
default-authentication-plugin=gssapi
# Change here for bind listening
# bind-address="127.0.0.1"
# bind-address = ::1 # for ipv6
# Where do all the plugins live
plugin_dir="C:/xampp/mysql/lib/plugin/"
# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
# commented in by lampp security
#skip-networking
#skip-federated
# Replication Master Server (default)
# binary logging is required for replication
# log-bin deactivated by default since XAMPP 1.4.11
#log-bin=mysql-bin
# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id =1
# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
# the syntax is:
#
# CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
# MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#
# where you replace <host>, <user>, <password> by quoted strings and
# <port> by the master's port number (3306 by default).
#
# Example:
#
# CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
# MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
# start replication for the first time (even unsuccessfully, for example
# if you mistyped the password in master-password and the slave fails to
# connect), the slave will create a master.info file, and any later
# change in this file to the variables' values below will be ignored and
# overridden by the content of the master.info file, unless you shutdown
# the slave server, delete master.info and restart the slaver server.
# For that reason, you may want to leave the lines below untouched
# (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id = 2
#
# The replication master for this slave - required
#master-host = <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user = <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password = <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port = <port>
#
# binary logging - not required for slaves, but recommended
#log-bin=mysql-bin
# Point the following paths to different dedicated disks
#tmpdir = "C:/xampp/tmp"
#log-update = /path-to-dedicated-directory/hostname
# Uncomment the following if you are using BDB tables
#bdb_cache_size = 4M
#bdb_max_lock = 10000
# Comment the following if you are using InnoDB tables
#skip-innodb
innodb_data_home_dir="C:/xampp/mysql/data"
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir="C:/xampp/mysql/data"
#innodb_log_arch_dir = "C:/xampp/mysql/data"
## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high
innodb_buffer_pool_size=16M
## Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size=5M
innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1
innodb_lock_wait_timeout=50
## UTF 8 Settings
#init-connect=\'SET NAMES utf8\'
#collation_server=utf8_unicode_ci
#character_set_server=utf8
#skip-character-set-client-handshake
#character_sets-dir="C:/xampp/mysql/share/charsets"
sql_mode=NO_ZERO_IN_DATE,NO_ZERO_DATE,NO_ENGINE_SUBSTITUTION
log_bin_trust_function_creators=1
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
[mysqldump]
max_allowed_packet=16M
[mysql]
# Remove the next comment character if you are not familiar with SQL
#safe-updates
[isamchk]
key_buffer=20M
sort_buffer_size=20M
read_buffer=2M
write_buffer=2M
[myisamchk]
key_buffer=20M
sort_buffer_size=20M
read_buffer=2M
write_buffer=2M
[mysqlhotcopy]
lower_case_table_names=0
The difference between your client and PHP is that the command-line client is linked against libmariadb (and is therefore able to load the auth_gssapi_client plugin), while mysqli is linked against either libmysql or PHP's internal mysqlnd driver.
Besides Kerberos/GSSAPI, MariaDB also provides ed25519 and PAM authentication (via the dialog plugin), which are not supported by libmysql and mysqlnd.
Building ext/mysqli against MariaDB Connector/C unfortunately doesn't work, and recent pull requests that fixed that problem were rejected.
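As a quick check (a sketch; mysqli_get_client_info() is a standard PHP function), you can print which client library your mysqli build uses. If it reports mysqlnd or libmysql, the auth_gssapi_client plugin from the error above cannot be loaded:
<?php
// Prints the client library mysqli was compiled against, e.g. "mysqlnd ...".
echo mysqli_get_client_info(), PHP_EOL;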

Elasticsearch gives the error "No alive nodes found in your cluster"

I have started working with Elasticsearch. I successfully installed Elasticsearch on my server (different from the application server), but when I try to call Elasticsearch from my application server it gives this error:
Fatal error: Uncaught exception 'Elasticsearch\Common\Exceptions\NoNodesAvailableException' with message 'No alive nodes found in your cluster'
When I check the Elasticsearch status, it shows Active.
How can I call the Elasticsearch server from my application server?
<?php
require 'vendor/autoload.php';

$hosts = [
    'ip_address:9200' // IP + Port
];
$client = Elasticsearch\ClientBuilder::create()->setHosts($hosts)->build();

$params = [
    'index' => 'my_index',
    'type'  => 'my_type',
    'id'    => 'my_id',
    'body'  => ['testField' => 'abc']
];
$response = $client->index($params);
?>
My elasticsearch.yml Settings
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 0.0.0.0
#network.bind_host: 0
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
elasticsearch.yml settings that are not working:
network.host: 127.0.0.1
network.host: 0
network.host: 0.0.0.0
network.host: IP_Address
network.bind_host: 0
network.bind_host: IP_Address
When I set any of the above settings, Elasticsearch shows a failed status.
NOTE: Elasticsearch is installed on a different server from my application server.
I found the error. It was caused by a space before node.name and cluster.name. Remove the space and it works fine.
Updated elasticsearch.yml file
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application-shakedeal
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: shakedeal-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
network.bind_host: IP_ADDRESS
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
PHP Code
<?php
require 'vendor/autoload.php';

$indexParams = [
    'index' => 'my_index',
    'body'  => [
        'settings' => [
            'number_of_shards'   => 5,
            'number_of_replicas' => 1
        ]
    ]
];

$client = Elasticsearch\ClientBuilder::create()
    ->setSSLVerification(false)
    ->setHosts(["IP_ADDRESS:9200"])->build();

$response = '';
try {
    /* Create the index */
    $response = $client->indices()->create($indexParams);
    print_r($response);
} catch (Exception $e) {
    echo "Exception : " . $e->getMessage();
}

die('End : Elastic Search');
?>
Success Response :
Array
(
[acknowledged] => 1
)
I found this error and solved it with the following steps:
Comment out the plugin settings in the app/etc/env.php file:
'system' => [
    'default' => [
        /* 'smile_elasticsuite_core_base_settings' => [
            'es_client' => [
                'servers' => 'node-1:9200,node-2:9200',
                'enable_https_mode' => '0',
                'enable_http_auth' => '0',
                'http_auth_user' => '',
                'http_auth_pwd' => ''
            ]
        ] */
    ]
]
Start the server.
Upgrade Magento:
php bin/magento setup:upgrade
I was also facing the same problem. I had specified the host as 'myhostaddress:9200'. Change it to the IP of your Elasticsearch host server, like '127.0.0.1:9200'. That resolved my problem.
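A sketch of that fix (192.0.2.10 is a placeholder for your Elasticsearch server's reachable IP):
<?php
require 'vendor/autoload.php';

// Use the node's IP address instead of a hostname the app server cannot resolve.
$client = Elasticsearch\ClientBuilder::create()
    ->setHosts(['192.0.2.10:9200'])
    ->build();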
As you know, Stack Overflow is also used as a keyword-based reference by the community years later, so I'm jumping in to add an important discovery I made about resolving this No alive nodes found in your cluster message:
If you are trying to connect to Elastic Cloud using PHP, are getting this exact message, and have been following the Elasticsearch PHP Client > Connecting documentation section, in particular trying to connect via the CloudId:
$client = ClientBuilder::create()
    ->setElasticCloudId('<cloud-id>')
    ->setBasicAuthentication('<username>', '<password>')
    ->build();
then you should try adding CA bundle verification:
$caBundle = \Composer\CaBundle\CaBundle::getBundledCaBundlePath();
$client = ClientBuilder::create()
    ->setElasticCloudId($ES_CloudID)
    ->setSSLVerification($caBundle)
    ->setBasicAuthentication($ES_username, $ES_password)
    ->build();
I confirmed that commenting out ->setSSLVerification($caBundle) makes the client construction fail.
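As a quick sanity check (a sketch, reusing the $client built above), a successful connection returns cluster metadata instead of throwing NoNodesAvailableException:
// Should print the cluster name and version info if the CA bundle fix worked.
print_r($client->info());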
Your answer can be found at the link I am providing: https://forums.aws.amazon.com/thread.jspa?threadID=230626
I had the same problem, and that link resolved it; the answer that suggests appending the suffix :443 to the endpoint works.
Remove the trailing slash from the Elasticsearch URL; sometimes it causes errors.
At least in my case, removing the trailing slash from the Amazon ES URL made it work.
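A small sketch of that cleanup (the endpoint URL is a placeholder), stripping the trailing slash before handing the host to the client builder:
<?php
require 'vendor/autoload.php';

// Strip any trailing slash from the endpoint before passing it to setHosts().
$endpoint = rtrim('https://search-mydomain.us-east-1.es.amazonaws.com:443/', '/');

$client = Elasticsearch\ClientBuilder::create()
    ->setHosts([$endpoint])
    ->build();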
I was also facing the same issue in Magento 2. Then I installed Elasticsearch:
sudo apt-get install elasticsearch

Fresh install of Laravel homestead, edited Vagrantfile renders "Unexpected end" error

I am working on my first website using the Laravel framework and am attempting to install Homestead on OS X El Capitan. After failing to sync my shared folders between the host and guest machines, instead of continuing to fiddle with the Homestead.yaml file, I decided to add the shared folders to the Vagrantfile manually:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = "laravel/homestead"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
# Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
# such as FTP and Heroku are also available. See the documentation at
# https://docs.vagrantup.com/v2/push/atlas.html for more information.
# config.push.define "atlas" do |push|
# push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
# end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
# config.vm.provision "shell", inline: <<-SHELL
# sudo apt-get update
# sudo apt-get install -y apache2
# SHELL
end
After making the changes, when I run vagrant up I get the following error message:
There is a syntax error in the following Vagrantfile. The syntax error message is reproduced below for convenience:
/Users/tommorison/vendor/laravel/homestead/Vagrantfile:71: syntax
error, unexpected end-of-input, expecting keyword_end
I don't understand why I am getting this error; I only changed the synced folders.
I found a solution to this error: just edit the Vagrantfile in the Homestead directory and the folders will mount in Vagrant. Make sure there are no whitespaces in your Homestead.yaml file.

How to set up Vagrant using VirtualBox?

I am new to Vagrant and I am trying to set up a project using Vagrant/VirtualBox. What I have done so far:
1- I have installed Vagrant 1.0.1
2- I have installed VirtualBox; checking its version shows 4.1.44_Ubuntu
3- Now I have added the box to the project directory using the commands below (I need a LAMP stack for my project on Ubuntu; my system is Ubuntu 12.04 installed using the Wubi installer on Windows 7):
1- vagrant box add hashicorp/precise32 http://files.vagrantup.com/precise32.box
2- vagrant init
3- vagrant up hashicorp/precise32
Then it starts downloading the file from the specified URL (the first time this seems fine, but it repeats the same download every time).
Am I missing some steps for configuring it with Vagrant?
Any help would be appreciated.
Here is the Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant::Config.run do |config|
# All Vagrant configuration is done here. The most common configuration
# options are documented and commented below. For a complete reference,
# please see the online documentation at vagrantup.com.
# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "hasicorp/precise32"
# The url from where the 'config.vm.box' box will be fetched if it
# doesn't already exist on the user's system.
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
# Boot with a GUI so you can see the screen. (Default is headless)
# config.vm.boot_mode = :gui
# Assign this VM to a host-only network IP, allowing you to access it
# via the IP. Host-only networks can talk to the host machine as well as
# any other machines on the same network, but cannot be accessed (through this
# network interface) by any external networks.
# config.vm.network :hostonly, "192.168.33.10"
# Assign this VM to a bridged network, allowing you to connect directly to a
# network using the host's network device. This makes the VM appear as another
# physical device on your network.
# config.vm.network :bridged
# Forward a port from the guest to the host, which allows for outside
# computers to access the VM, whereas host only networking does not.
# config.vm.forward_port 80, 8080
# Share an additional folder to the guest VM. The first argument is
# an identifier, the second is the path on the guest to mount the
# folder, and the third is the path on the host to the actual folder.
# config.vm.share_folder "v-data", "/vagrant_data", "../data"
# Enable provisioning with Puppet stand alone. Puppet manifests
# are contained in a directory path relative to this Vagrantfile.
# You will need to create the manifests directory and a manifest in
# the file hasicorp/precise32.pp in the manifests_path directory.
#
# An example Puppet manifest to provision the message of the day:
#
# # group { "puppet":
# # ensure => "present",
# # }
# #
# # File { owner => 0, group => 0, mode => 0644 }
# #
# # file { '/etc/motd':
# # content => "Welcome to your Vagrant-built virtual machine!
# # Managed by Puppet.\n"
# # }
#
# config.vm.provision :puppet do |puppet|
# puppet.manifests_path = "manifests"
# puppet.manifest_file = "hasicorp/precise32.pp"
# end
# Enable provisioning with chef solo, specifying a cookbooks path (relative
# to this Vagrantfile), and adding some recipes and/or roles.
#
# config.vm.provision :chef_solo do |chef|
# chef.cookbooks_path = "cookbooks"
# chef.add_recipe "mysql"
# chef.add_role "web"
#
# # You may also specify custom JSON attributes:
# chef.json = { :mysql_password => "foo" }
# end
# Enable provisioning with chef server, specifying the chef server URL,
# and the path to the validation key (relative to this Vagrantfile).
#
# The Opscode Platform uses HTTPS. Substitute your organization for
# ORGNAME in the URL and validation key.
#
# If you have your own Chef Server, use the appropriate URL, which may be
# HTTP instead of HTTPS depending on your configuration. Also change the
# validation key to validation.pem.
#
# config.vm.provision :chef_client do |chef|
# chef.chef_server_url = "https://api.opscode.com/organizations/ORGNAME"
# chef.validation_key_path = "ORGNAME-validator.pem"
# end
#
# If you're using the Opscode platform, your validator client is
# ORGNAME-validator, replacing ORGNAME with your organization name.
#
# IF you have your own Chef Server, the default validation client name is
# chef-validator, unless you changed the configuration.
#
# chef.validation_client_name = "ORGNAME-validator"
end
If you're using one of the Vagrant Cloud boxes, then you don't need to specify a box URL. And 'hashicorp' is misspelled in your Vagrantfile. It should be...
config.vm.box = "hashicorp/precise32"
If you seem to be in a non-working state for some reason, just vagrant destroy your current box and vagrant up again. I also highly recommend downloading and installing the latest version of Vagrant (1.8.1 currently). Hope this helps.

Can't load index.php page with Apache using Vagrant and VirtualBox

This is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
# All Vagrant configuration is done here. The most common configuration
# options are documented and commented below. For a complete reference,
# please see the online documentation at vagrantup.com.
# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "ubuntu32"
config.vm.provision :shell, :path => "vagrant/inicio.sh"
# The url from where the 'config.vm.box' box will be fetched if it
# doesn't already exist on the user's system.
config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-i386-vagrant-disk1.box"
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
config.vm.network :forwarded_port, guest: 80, host: 8085, auto_correct: true
config.vm.provider "virtualbox" do |v|
v.name = "funcook"
end
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network :private_network, ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network :public_network
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider :virtualbox do |vb|
# # Don't boot with headless mode
# vb.gui = true
#
# # Use VBoxManage to customize the VM. For example to change memory:
# vb.customize ["modifyvm", :id, "--memory", "1024"]
# end
#
# View the documentation for the provider you're using for more
# information on available options.
# Enable provisioning with Puppet stand alone. Puppet manifests
# are contained in a directory path relative to this Vagrantfile.
# You will need to create the manifests directory and a manifest in
# the file base.pp in the manifests_path directory.
#
# An example Puppet manifest to provision the message of the day:
#
# # group { "puppet":
# # ensure => "present",
# # }
# #
# # File { owner => 0, group => 0, mode => 0644 }
# #
# # file { '/etc/motd':
# # content => "Welcome to your Vagrant-built virtual machine!
# # Managed by Puppet.\n"
# # }
#
# config.vm.provision :puppet do |puppet|
# puppet.manifests_path = "manifests"
# puppet.manifest_file = "init.pp"
# end
# Enable provisioning with chef solo, specifying a cookbooks path, roles
# path, and data_bags path (all relative to this Vagrantfile), and adding
# some recipes and/or roles.
#
# config.vm.provision :chef_solo do |chef|
# chef.cookbooks_path = "../my-recipes/cookbooks"
# chef.roles_path = "../my-recipes/roles"
# chef.data_bags_path = "../my-recipes/data_bags"
# chef.add_recipe "mysql"
# chef.add_role "web"
#
# # You may also specify custom JSON attributes:
# chef.json = { :mysql_password => "foo" }
# end
# Enable provisioning with chef server, specifying the chef server URL,
# and the path to the validation key (relative to this Vagrantfile).
#
# The Opscode Platform uses HTTPS. Substitute your organization for
# ORGNAME in the URL and validation key.
#
# If you have your own Chef Server, use the appropriate URL, which may be
# HTTP instead of HTTPS depending on your configuration. Also change the
# validation key to validation.pem.
#
# config.vm.provision :chef_client do |chef|
# chef.chef_server_url = "https://api.opscode.com/organizations/ORGNAME"
# chef.validation_key_path = "ORGNAME-validator.pem"
# end
#
# If you're using the Opscode platform, your validator client is
# ORGNAME-validator, replacing ORGNAME with your organization name.
#
# If you have your own Chef Server, the default validation client name is
# chef-validator, unless you changed the configuration.
#
# chef.validation_client_name = "ORGNAME-validator"
end
And this is the content of inicio.sh:
#!/usr/bin/env bash
if [ ! -f ~/initial_provosioning_done ];
then
    export DEBIAN_FRONTEND=noninteractive
    apt-get update
    apt-get install -y -q lamp-server^ php5-gd
    sed -i 's,www-data,vagrant,g' /etc/apache2/envvars
    sed -i 's,/var/www,/vagrant/www,g' /etc/apache2/sites-available/default
    sed -i 's,AllowOverride None,AllowOverride All,g' /etc/apache2/sites-available/default
    mysqladmin -u root password root
    mysql -uroot -proot < /vagrant/bd/script.sql
    rm -r /var/lock/apache2
    a2enmod rewrite
    service apache2 restart
    touch ~/initial_provosioning_done
fi
This is how I am starting the VM:
minirafa:beta.funcook.com TONIWEB$ vagrant reload
[default] Attempting graceful shutdown of VM...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] -- 80 => 8085 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
[default] Machine booted and ready!
[default] Mounting shared folders...
[default] -- /vagrant
minirafa:beta.funcook.com TONIWEB$
The thing is that:
Chrome will log:
ERR_EMPTY_RESPONSE
and:
minirafa:~ TONIWEB$ curl 'http://localhost:80'
curl: (7) couldn't connect to host
Or:
minirafa:~ TONIWEB$ curl 'http://localhost:8085'
curl: (52) Empty reply from server
minirafa:~ TONIWEB$
Usually these settings work for me with other projects.
Any idea what I could try next?
-EDIT-
Also:
minirafa:beta.funcook.com TONIWEB$ curl -v http://localhost:8085
* About to connect() to localhost port 8085 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8085 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8y zlib/1.2.3
> Host: localhost:8085
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
* Closing connection #0
minirafa:beta.funcook.com TONIWEB$
When you're trying to reach localhost on port 80 with curl, you're actually trying to reach the host machine (not the guest running in virtualbox/vagrant). So if the host doesn't run a webserver (on port 80), it's normal you get a couldn't connect to host message.
You should be trying to reach localhost on port 8085, because that's the port you're forwarding to port 80 on the guest machine. And that apparently tells you Empty reply from server...
I cannot say much more about this without some additional info about the webserver running on the guest:
Is it running properly? (check the error-logs)
Does it respond to requests from inside the guest machine?
Does the request from the host reach the webserver? (check the access-logs)
If so, does the webserver encounter errors? (check the error-logs)
If not, is there a firewall running on the guest, dropping requests to port 80?
PS: Just a tip: You could create a private network on the guest by enabling this line:
config.vm.network :private_network, ip: "192.168.33.10"
This way you can reach the webserver at 192.168.33.10:80, and won't need to forward any ports to it.
