Hi everyone!
The situation is like this:
I have a pod in Kubernetes, and I forward a port to it like this:
kubectl port-forward deploy/reporter-dev 8585:8081 --address 0.0.0.0
I connect from Postman to my local address 192.168.31.205:8585, and the server in Kubernetes responds correctly.
But as soon as I try to connect from a PHP application running in Docker, nothing works.
This is the connection code in PHP (the PHP classes were generated from the proto files beforehand):
class GroupService
{
    private ReportGroupsClient|BaseStub|null $reportGroupsClient;

    public function __construct()
    {
        $this->reportGroupsClient = Yii::$container->get(
            ReportGroupsClient::class,
            ['192.168.31.205:8585', [
                'credentials' => ChannelCredentials::createInsecure(),
            ]]
        );
    }

    public function getReportGroupsList()
    {
        $res = $this->reportGroupsClient->GetList(new ReportGroupsGetListRequest([
            'limit' => 10,
            'offset' => 0,
        ]))->wait();
        //exec('/var/www/grpcurl -insecure 192.168.31.205:8585 retail.reporter.ReportGroups/GetList', $res);
        return $res;
    }
}

public function actionHealth(): void
{
    $this->log("Start DEBUG gRPC");
    $rr = new GroupService();
    $res = $rr->getReportGroupsList();
    var_dump($res);
    $this->log("End of debug");
}
Running the health action returns this error:
/var/www # GRPC_VERBOSITY=debug GPRC_TRACE=all php yii reports/report/health
I0210 07:22:57.540640544 357 ev_epoll1_linux.cc:121] grpc epoll fd: 4
D0210 07:22:57.540835960 357 ev_posix.cc:171] Using polling engine: epoll1
D0210 07:22:57.541246752 357 lb_policy_registry.cc:42] registering LB policy factory for "grpclb"
D0210 07:22:57.541344669 357 lb_policy_registry.cc:42] registering LB policy factory for "rls_experimental"
D0210 07:22:57.541406252 357 lb_policy_registry.cc:42] registering LB policy factory for "priority_experimental"
D0210 07:22:57.541479794 357 lb_policy_registry.cc:42] registering LB policy factory for "weighted_target_experimental"
D0210 07:22:57.541527044 357 lb_policy_registry.cc:42] registering LB policy factory for "pick_first"
D0210 07:22:57.541548752 357 lb_policy_registry.cc:42] registering LB policy factory for "round_robin"
D0210 07:22:57.541621877 357 lb_policy_registry.cc:42] registering LB policy factory for "ring_hash_experimental"
D0210 07:22:57.541887044 357 certificate_provider_registry.cc:33] registering certificate provider factory for "file_watcher"
D0210 07:22:57.541919794 357 lb_policy_registry.cc:42] registering LB policy factory for "cds_experimental"
D0210 07:22:57.541977919 357 lb_policy_registry.cc:42] registering LB policy factory for "xds_cluster_impl_experimental"
D0210 07:22:57.542039169 357 lb_policy_registry.cc:42] registering LB policy factory for "xds_cluster_resolver_experimental"
D0210 07:22:57.542087877 357 lb_policy_registry.cc:42] registering LB policy factory for "xds_cluster_manager_experimental"
2023-02-10 07:22:57 Start DEBUG gRPC
D0210 07:22:57.649119960 357 dns_resolver.cc:162] Using native dns resolver
I0210 07:22:57.652611710 359 socket_utils_common_posix.cc:429] Disabling AF_INET6 sockets because ::1 is not available.
I0210 07:22:58.759204128 357 subchannel.cc:948] subchannel 0xffff81952db0 {address=ipv4:192.168.31.205:8585, args=grpc.client_channel_factory=0xffff8195d050, grpc.default_authority=192.168.31.205:8585, grpc.internal.channel_credentials=0xffff81916b90, grpc.internal.security_connector=0xffff8191be30, grpc.internal.subchannel_pool=0xffff8191a740, grpc.resource_quota=0xffff8191c420, grpc.server_uri=dns:///192.168.31.205:8585}: connect failed: {"created":"#1676013778.758779961","description":"Endpoint read failed","file":"/tmp/pear/temp/grpc/src/core/ext/transport/chttp2/transport/chttp2_transport.cc","file_line":2572,"occurred_during_write":0,"referenced_errors":[{"created":"#1676013778.758739461","description":"Socket closed","fd":6,"file":"/tmp/pear/temp/grpc/src/core/lib/iomgr/tcp_posix.cc","file_line":802,"grpc_status":14,"target_address":"ipv4:192.168.31.205:8585"}]}
I0210 07:22:58.759450044 357 subchannel.cc:888] subchannel 0xffff81952db0 {address=ipv4:192.168.31.205:8585, args=grpc.client_channel_factory=0xffff8195d050, grpc.default_authority=192.168.31.205:8585, grpc.internal.channel_credentials=0xffff81916b90, grpc.internal.security_connector=0xffff8191be30, grpc.internal.subchannel_pool=0xffff8191a740, grpc.resource_quota=0xffff8191c420, grpc.server_uri=dns:///192.168.31.205:8585}: Retry immediately
I0210 07:22:58.759495461 357 subchannel.cc:914] subchannel 0xffff81952db0 {address=ipv4:192.168.31.205:8585, args=grpc.client_channel_factory=0xffff8195d050, grpc.default_authority=192.168.31.205:8585, grpc.internal.channel_credentials=0xffff81916b90, grpc.internal.security_connector=0xffff8191be30, grpc.internal.subchannel_pool=0xffff8191a740, grpc.resource_quota=0xffff8191c420, grpc.server_uri=dns:///192.168.31.205:8585}: failed to connect to channel, retrying
array(2) {
[0]=>
NULL
[1]=>
object(stdClass)#241 (3) {
["metadata"]=>
array(0) {
}
["code"]=>
int(14)
["details"]=>
string(34) "failed to connect to all addresses"
}
}
2023-02-10 07:22:58 End of debug
If I uncomment the exec() call, grpcurl answers correctly!
From this I conclude that requests from the Docker container reach the server fine, but the gRPC PHP client specifically does not work for some reason. What can it be?
The log line Disabling AF_INET6 sockets because ::1 is not available only says that gRPC tried to use IPv6, found it unavailable, and fell back to IPv4; it is informational and has been silenced in the most recent release of the gRPC package. Check your gRPC extension version and update it if it is outdated. Refer to the similar SO question for more information.
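For example, from inside the container (the pecl command assumes the extension was installed via PECL, which the /tmp/pear paths in the log suggest):
php -r 'echo phpversion("grpc"), PHP_EOL;'
pecl upgrade grpc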
DNS resolution failure with gRPC: it looks like you are running with the native DNS resolver (the GRPC_DNS_RESOLVER=native workaround; the log shows Using native dns resolver), yet you still get the error below:
`connect failed: {"created":"#1676013778.758779961","description":"Endpoint read failed","file":"/tmp/pear/temp/grpc/src/core/ext/transport/chttp2/transport/chttp2_transport.cc","file_line":2572,"occurred_during_write":0,"referenced_errors":[{"created":"#1676013778.758739461","description":"Socket closed","fd":6,"file":"/tmp/pear/temp/grpc/src/core/lib/iomgr/tcp_posix.cc","file_line":802,"grpc_status":14,"target_address":"ipv4:192.168.31.205:8585"}]}`
See the GitHub issue "Endpoint read failed in client request #144", which may help resolve your issue.
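Since grpcurl from the same container works, the TCP path itself is probably fine, but a plain socket probe from PHP can still rule out container-level networking before blaming the gRPC client. A minimal sketch (fsockopen is only a raw TCP check, not a gRPC health check; host and port are the ones from the question):
<?php
// Raw TCP probe from inside the Docker container; it proves the socket
// opens, but says nothing about HTTP/2 or gRPC framing.
$fp = @fsockopen('192.168.31.205', 8585, $errno, $errstr, 5);
if ($fp === false) {
    echo "TCP connect failed: [$errno] $errstr\n";
} else {
    echo "TCP connect OK\n";
    fclose($fp);
}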
I'm using this repo to connect to SAP software:
https://github.com/gkralik/php7-sapnwrfc
but I don't know why my script connects and returns data from my Z-functions on the SAP server successfully only under the command line interface (CLI):
php /path/to/my/script.php
but under a web browser it always returns errors:
Fatal error: Uncaught SAPNWRFC\ConnectionException: Failed to set trace directory in /var/www/html/sap/test.php
or
Exception: Could not open connection
Exception INFO:
Array
(
[code] => 1
[key] => RFC_COMMUNICATION_FAILURE
[message] =>
LOCATION CPIC (TCP/IP) on local host with Unicode
ERROR partner 123.4.5.6:3300 not reached
TIME Sat Feb 4 23:42:27 2023
RELEASE 753
COMPONENT NI (network interface)
VERSION 40
RC -10
MODULE /bas/753_REL/src/base/ni/nixxi.cpp
LINE 3067
DETAIL NiPConnect: 123.4.5.6:3300
SYSTEM CALL connect
ERRNO 13
ERRNO TEXT Permission denied
COUNTER 6
)
This happens with any user, and I have checked the file permissions too.
Thank you very much for your help
By default, SELinux forbids Apache from making outgoing network connections. If Apache needs to reach an outside network service, run the following command to allow it:
setsebool -P httpd_can_network_connect on
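You can confirm the boolean's current value first (the -P flag above makes the change persistent across reboots):
getsebool httpd_can_network_connect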
I am trying to install the Selenium PHP bindings on my Windows 10 computer. I downloaded Selenium 3.13.0 and version 0.9.1 of the bindings from https://code.google.com/archive/p/php-webdriver-bindings/downloads. I also downloaded geckodriver-v0.21.0-win64.zip and ran it as administrator.
Since geckodriver is running on port 4444, I start the Selenium server on port 4445:
java -jar selenium-server-standalone-3.13.0.jar -port 4445
The example code I use is:
require_once "phpwebdriver/WebDriver.php";
$webdriver = new WebDriver("localhost", "4445");
$webdriver->connect("firefox");
$webdriver->get("http://google.com");
$element = $webdriver->findElementBy(LocatorStrategy::name, "q");
if ($element) {
$element->sendKeys(array("php webdriver" ) );
$element->submit();
}
But I get the following error. I am using PHP 5.6.30, Firefox 61.0.1, and Java 1.8.0_171.
Could someone advise me how to fix the problem? Thanks.
Notice: Undefined property: stdClass::$sessionId in C:\AppServ\www\php-webdriver-bindings\phpwebdriver\WebDriver.php on line 60
stdClass Object ( [sessionId] => [value] => stdClass Object ( [error] => invalid session id [message] => No active session with ID [stacktrace] => ) [status] => 6 )
Fatal error: Uncaught exception 'WebDriverException' with message '6' in C:\AppServ\www\php-webdriver-bindings\phpwebdriver\WebDriverBase.php:130 Stack trace: #0 C:\AppServ\www\php-webdriver-bindings\phpwebdriver\WebDriverBase.php(170): WebDriverBase->handleResponse(Object(stdClass)) #1 C:\AppServ\www\php-webdriver-bindings\example2.php(25): WebDriverBase->findElementBy('name', 'q') #2 {main} thrown in C:\AppServ\www\php-webdriver-bindings\phpwebdriver\WebDriverBase.php on line 130
The Selenium server output is:
D:\Selenium-server>java -jar selenium-server-standalone-3.13.0.jar -port 4445
19:12:35.888 INFO [GridLauncherV3.launch] - Selenium build info: version: '3.13.0', revision: '2f0d292'
19:12:35.888 INFO [GridLauncherV3$1.launch] - Launching a standalone Selenium Server on port 4445
2018-07-10 19:12:36.128:INFO::main: Logging initialized #911ms to org.seleniumhq.jetty9.util.log.StdErrLog
19:12:36.923 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4445
19:12:51.768 INFO [ActiveSessionFactory.apply] - Capabilities are: {
"browserName": "firefox",
"javascriptEnabled": true,
"nativeEvents": false,
"version": ""
}
19:12:51.774 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.remote.server.ServicedSession$Factory (provider: org.openqa.selenium.firefox.GeckoDriverService)
19:13:02.494 INFO [ActiveSessionFactory.apply] - Capabilities are: {
"browserName": "firefox",
"javascriptEnabled": true,
"nativeEvents": false,
"version": ""
}
19:13:02.494 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.remote.server.ServicedSession$Factory (provider: org.openqa.selenium.firefox.GeckoDriverService)
Download ChromeDriver and try the commands below.
To register a Selenium Grid Hub you need to use the following command:
>java -jar /Users/admin/selenium-server-standalone-3.14.0.jar -role hub
To register a Selenium Grid Node for ChromeDriver and Chrome, pass the absolute path of the ChromeDriver binary along with the hub's registration URL, as follows:
>java -Dwebdriver.chrome.driver=/path/to/chromedriver.exe -jar /Users/admin/selenium-server-standalone-3.14.0.jar -role node -hub http://<IP_GRID_HUB>:4444/grid/register
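Once the hub and node are registered, the PHP binding from the question can be pointed at the hub port (4444 by default). This is only a sketch reusing the question's classes, assuming Chrome is available on the node:
require_once "phpwebdriver/WebDriver.php";

// Connect to the Grid hub rather than the standalone server on 4445.
$webdriver = new WebDriver("localhost", "4444");
$webdriver->connect("chrome");
$webdriver->get("http://google.com");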
I have been running the Kubernetes e2e conformance tests using a local cluster. When I run the guestbook test, I can see that all the pods are running:
```
[root@localhost kubernetes]# cluster/kubectl.sh get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
e2e-tests-kubectl-6z70j frontend-1478469098-3hc82 1/1 Running 0 22s
e2e-tests-kubectl-6z70j frontend-1478469098-673gg 1/1 Running 0 22s
e2e-tests-kubectl-6z70j frontend-1478469098-hblqz 1/1 Running 0 22s
e2e-tests-kubectl-6z70j redis-master-1101892589-vd4gm 1/1 Running 0 21s
e2e-tests-kubectl-6z70j redis-slave-2881398718-58fq2 1/1 Running 0 21s
e2e-tests-kubectl-6z70j redis-slave-2881398718-vqdvf 1/1 Running 0 21s
kube-system kube-dns-3884805971-4sh48 3/3 Running 0 2h
```
But the test fails with the error:
```
Apr 13 04:08:29.608: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>: Uncaught exception 'Predis\Response\ServerException' with message 'DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authen in <b>/usr/local/lib/php/Predis/Client.php</b> on line <b>370</b><br />
```
Now, I manually logged in to the redis-slave container and did a wget on guestbook.php:
```
root@redis-slave-2881398718-vqdvf:/data# wget http://10.0.0.175:80/guestbook.php
converted 'http://10.0.0.175:80/guestbook.php' (ANSI_X3.4-1968) -> 'http://10.0.0.175:80/guestbook.php' (UTF-8)
--2017-04-13 09:08:41-- http://10.0.0.175/guestbook.php
Connecting to 10.0.0.175:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'guestbook.php'
guestbook.php [ <=> ] 82.54K --.-KB/s in 0s
2017-04-13 09:08:41 (224 MB/s) - 'guestbook.php' saved [84518]
```
Here are the env variables on redis-slave:
```
root@redis-slave-2881398718-vqdvf:/data# env
FRONTEND_PORT_80_TCP_ADDR=10.0.0.175
REDIS_SLAVE_PORT_6379_TCP=tcp://10.0.0.80:6379
REDIS_SLAVE_SERVICE_HOST=10.0.0.80
HOSTNAME=redis-slave-2881398718-vqdvf
REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-3.2.0.tar.gz
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
TERM=xterm
FRONTEND_PORT_80_TCP_PORT=80
REDIS_SLAVE_PORT=tcp://10.0.0.80:6379
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=10.0.0.1
GET_HOSTS_FROM=dns
FRONTEND_PORT_80_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.153
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.153:6379
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
REDIS_SLAVE_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_SERVICE_PORT=6379
PWD=/data
REDIS_SLAVE_SERVICE_PORT=6379
FRONTEND_PORT=tcp://10.0.0.175:80
FRONTEND_SERVICE_PORT=80
REDIS_MASTER_SERVICE_HOST=10.0.0.153
SHLVL=1
HOME=/root
FRONTEND_SERVICE_HOST=10.0.0.175
REDIS_SLAVE_PORT_6379_TCP_ADDR=10.0.0.80
KUBERNETES_PORT_443_TCP_PROTO=tcp
REDIS_DOWNLOAD_SHA1=0c1820931094369c8cc19fc1be62f598bc5961ca
REDIS_VERSION=3.2.0
KUBERNETES_SERVICE_PORT_HTTPS=443
REDIS_MASTER_PORT_6379_TCP_PORT=6379
FRONTEND_PORT_80_TCP=tcp://10.0.0.175:80
REDIS_SLAVE_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
GOSU_VERSION=1.7
REDIS_MASTER_PORT=tcp://10.0.0.153:6379
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
_=/usr/bin/env
```
Can you please help me debug this issue? TIA.
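As a side note, since the exception comes from Predis inside the frontend pod, you can ask Redis directly whether protected mode is on. A sketch only, assuming Predis 1.x (where executeRaw() exists) is loadable as it is for guestbook.php, and using the REDIS_SLAVE_SERVICE_HOST value from the env dump above:
```
<?php
// Query the redis-slave service directly; 10.0.0.80 is
// REDIS_SLAVE_SERVICE_HOST from the env dump above.
$client = new Predis\Client('tcp://10.0.0.80:6379');

// executeRaw() sends an arbitrary command (Predis 1.x).
var_dump($client->executeRaw(array('CONFIG', 'GET', 'protected-mode')));
var_dump($client->executeRaw(array('PING')));
```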
I'm currently testing MaxScale with a Galera cluster of 3 nodes in read/write split mode. By default, MaxScale designates one node as the master and the others as slaves (my configuration sets max_slave_connections=100%).
My intent is to check how MaxScale handles node shutdowns.
The problem is that with benchmarks (sysbench, mysqlslap) and also custom scripts (PHP), the connection to the backend (MariaDB) gets lost when I shut down a node of the cluster.
Error log:
MariaDB Corporation MaxScale /var/log/maxscale/error1.log Thu Oct 29 13:00:11 2015
-----------------------------------------------------------------------
--- Logging is enabled.
2015-10-29 13:00:11 Error: Failed to obtain address for host ::1, Address family for hostname not supported
2015-10-29 13:00:11 Warning: Failed to add user root@::1 for service [RW Split Router]. This user will be unavailable via MaxScale.
2015-10-29 13:00:11 Warning: Duplicate MySQL user found for service [RW Split Router]: cmon@127.0.0.1 for database: (null)
2015-10-29 13:00:11 Warning: Duplicate MySQL user found for service [RW Split Router]: root@127.0.0.1 for database: (null)
2015-10-29 13:00:11 Warning: Duplicate MySQL user found for service [RW Split Router]: root@10.58.224.113 for database: (null)
2015-10-29 13:00:35 Error : Unable to write to backend due to authentication failure.
2015-10-29 13:00:40 Error : Monitor was unable to connect to server 10.58.224.113:3306 : "Can't connect to MySQL server on '10.58.224.113' (111)"
Trace log:
2015-10-29 13:00:33 [4] Route query to slave 10.58.224.113:3306 <
2015-10-29 13:00:33 [4] Servers and router connection counts:
2015-10-29 13:00:33 [4] current operations : 0 in 10.58.224.113:3306 RUNNING SLAVE
2015-10-29 13:00:33 [4] current operations : 0 in 10.26.116.84:3306 RUNNING SLAVE
2015-10-29 13:00:33 [4] current operations : 0 in 10.26.84.103:3306 RUNNING MASTER
2015-10-29 13:00:33 [4] Selected RUNNING SLAVE in 10.58.224.113:3306
2015-10-29 13:00:33 [4] Selected RUNNING SLAVE in 10.26.116.84:3306
2015-10-29 13:00:33 [4] Selected RUNNING MASTER in 10.26.84.103:3306
2015-10-29 13:00:34 [4] > Autocommit: [enabled], trx is [not open], cmd: COM_QUERY, type: QUERY_TYPE_READ, stmt: SELECT COUNT(*) FROM sbtest1
2015-10-29 13:00:34 [4] Route query to slave 10.58.224.113:3306 <
2015-10-29 13:00:36 [4] Stopped RW Split Router client session [4]
2015-10-29 13:00:42 Server changed state: server1[10.58.224.113:3306]: slave_down
PHP test script
<?php
# Test MaxScale
$db = new PDO('mysql:host=127.0.0.1;dbname=sbtest;charset=utf8;port=4446;', 'root', '***', array(PDO::ATTR_TIMEOUT => "10", PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
for ($i = 0; $i < 5000; $i++)
{
    try {
        $q = $db->query('SELECT COUNT(*) FROM sbtest1', PDO::FETCH_NUM);
        if ($q) {
            $res = $q->fetchAll();
            #var_dump($res);
            echo time() . " Result: {$res[0][0]}\n";
            sleep(1);
        }
    }
    catch (PDOException $Exception) {
        echo "PDOException: " . $Exception->getMessage() . "\n";
        die('forced script to stop');
    }
}
Mysqlslap benchmark:
mysqlslap -h127.0.0.1 -uroot -p*** -P4446 --create="CREATE TABLE a (b int);INSERT INTO a VALUES (23)" --query="SELECT * FROM a" --concurrency=50 --iterations=200 --delimiter=";"
Sysbench benchmark:
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --oltp-table-size=2500 --mysql-user=root --mysql-password=*** --mysql-host=127.0.0.1 --db-ps-mode=disable --mysql-port=4446 prepare
sysbench --num-threads=16 --max-requests=5000 --test=/usr/share/doc/sysbench/tests/db/oltp.lua --oltp-skip-trx=on --oltp-read-only=on --oltp-table-size=250000 --mysql-host=127.0.0.1 --mysql-user=root --mysql-password=*** --mysql-port=4446 run
Encountered errors:
PDOException: SQLSTATE[HY000]: General error: 2003 Authentication with backend failed. Session will be closed.
PDOException: SQLSTATE[HY000]: General error: 2006 MySQL server has gone away
PDOException: SQLSTATE[HY000]: General error: 2013 Lost connection to MySQL server during query
Maxscale configuration:
[maxscale]
threads=4
auth_connect_timeout=20
auth_read_timeout=20
auth_write_timeout=20
log_trace=1
[Galera Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=maxmon
passwd=***
monitor_interval=30000
backend_connect_timeout=10
backend_read_timeout=10
backend_write_timeout=10
[RW Split Router]
type=service
router=readwritesplit
servers=server2,server3,server1
user=root
passwd=***
max_slave_connections=100%
enable_root_user=1
router_options=slave_selection_criteria=LEAST_CURRENT_OPERATIONS
[Debug Interface]
type=service
router=debugcli
[CLI]
type=service
router=cli
[RW Split Listener]
type=listener
service=RW Split Router
protocol=MySQLClient
port=4446
[Debug Listener]
type=listener
service=Debug Interface
protocol=telnetd
address=127.0.0.1
port=4442
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
port=6603
[server1]
type=server
address=10.58.224.113
port=3306
protocol=MySQLBackend
[server2]
type=server
address=10.26.84.103
port=3306
protocol=MySQLBackend
[server3]
type=server
address=10.26.116.84
port=3306
protocol=MySQLBackend
Session monitoring shows that the sessions become invalid, as in the following example:
# maxadmin -pmariadb show sessions
Session 9 (0x7f60a4000b50)
State: Invalid State
Service: RW Split Router (0x342f460)
Client DCB: 0x7f60a40009a0
Client Address: root@127.0.0.1
Connected: Thu Oct 29 13:28:57 2015
I also played around with different timeout variables and monitor_interval in MaxScale, and with the PDO timeout in my PHP test script, but the problem seems to be in how MaxScale handles MySQL sessions.
I also read about MaxScale's optimistic behaviour of forwarding the quickest response it gets from one of the nodes, but I'm not sure whether this is the cause.
Is there a way to make node shutdowns harmless for SQL requests that MaxScale routes to the slave nodes of the cluster?
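For what it's worth, one client-side mitigation (independent of the MaxScale configuration, and not necessarily an answer to the routing question itself) is to reconnect and retry on the transient errors listed above instead of dying on the first exception. A minimal sketch based on the test script:
<?php
// Reconnect-and-retry variant of the test script: on a PDOException
// (e.g. 2003/2006/2013 from the question) the broken session is dropped
// and a new connection is opened through MaxScale.
function connectMaxScale()
{
    return new PDO(
        'mysql:host=127.0.0.1;dbname=sbtest;charset=utf8;port=4446',
        'root',
        '***',
        array(PDO::ATTR_TIMEOUT => 10, PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
    );
}

$db = connectMaxScale();
for ($i = 0; $i < 5000; $i++) {
    try {
        $res = $db->query('SELECT COUNT(*) FROM sbtest1', PDO::FETCH_NUM)->fetchAll();
        echo time() . " Result: {$res[0][0]}\n";
        sleep(1);
    } catch (PDOException $e) {
        echo "PDOException: " . $e->getMessage() . " - reconnecting\n";
        sleep(1);
        $db = connectMaxScale();
    }
}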
I have the following network configuration (see the link: http://s58.radikal.ru/i160/1110/4c/1c2c5d74edd0.jpg)
Where:
Notebook - contains Apache + PHP + MongoDB + the PHP driver for MongoDB + a web project on Zend (Windows)
Router - a virtual machine (NAT on the 192.168.5.23 interface + ipfw)
natd.conf:
interface le0
same_ports
use_sockets
redirect_port tcp 192.168.5.23:27017 27017
ipfw:
allow from any to any
Virtual station 2 - contains ONLY MongoDB (no PHP, Apache, or MongoDB PHP driver)
1 - ping from the notebook to the MongoDB host and back works.
2 - from a shell on the virtual MongoDB host, mongo 192.168.5.20:27017 connects to the notebook's MongoDB successfully.
3 - an attempt to connect from the notebook to the virtual host causes this error:
C:\mongodb1.8.2\bin>mongo 192.168.9.21:27017
MongoDB shell version: 1.8.2
connecting to: 192.168.9.21:27017/test
Sun Oct 02 22:31:14 Error: couldn't connect to server 192.168.9.21:27017 shell/mongo.js:81
exception: connect failed
4 - an attempt to use the remote DB host in the PHP project (www.vm.lcl) gives:
an exception occured while bootstrapping
connecting to vm-db1.lcl failed: Unknown error
Stack Trace:
#0 C:\www\vm-db1.lcl\library\Zirrk\Database\MongoConnection.php(16): Mongo->__construct('vm-db1.lcl')
Please give me advice - in which direction should I look for my mistake?
Thanks a lot!
I've solved this problem by changing the rule in natd.conf from
redirect_port tcp 192.168.5.23:27017 27017
to
redirect_port tcp 192.168.5.23:27017 192.168.9.21:27017
Before understanding how to fix it, I had created a web server (192.168.9.11) in the virtual network (192.168.9.0/24) with Apache + PHP + the Mongo PHP driver (MongoDB itself was not installed) and tried to connect to 192.168.9.21:
$m = new Mongo("mongodb://192.168.9.21:27017");
This made no difference. I spent a whole day brainstorming and googling, but found nothing (the error was a timeout while connecting to the server). Then I rested for a few hours and realized that in my case all traffic goes through the FreeBSD gateway host, so I added to natd.conf:
redirect_port tcp 192.168.9.11:27017 192.168.9.21:27017
rebooted the gateway server, and it started working!
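As a side note, a short connect timeout makes this kind of NAT problem fail fast instead of hanging. A sketch only, assuming the legacy PHP Mongo driver 1.4+, where the MongoClient class and the connectTimeoutMS option are available:
<?php
// Fail quickly if the natd redirect rule is wrong instead of waiting for
// the default connection timeout.
try {
    $m = new MongoClient(
        'mongodb://192.168.9.21:27017',
        array('connectTimeoutMS' => 3000)
    );
    var_dump($m->listDBs());
} catch (MongoConnectionException $e) {
    echo 'Connection failed: ' . $e->getMessage() . "\n";
}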