We have two AWS EC2 instances. From the first server I make a PHP request to the second like this: file_get_contents('https://somedomain.com/api/feed/obj/1234'). It tries to connect but just hangs.
Using wget in the terminal I get:
$ wget -dv https://somedomain.com/api/feed/obj/1234
Setting --verbose (verbose) to 1
DEBUG output created by Wget 1.17.1 on linux-gnu.
Reading HSTS entries from /home/abc/.wget-hsts
URI encoding = 'ANSI_X3.4-1968'
converted 'https://somedomain.com/api/feed/obj/1234' (ANSI_X3.4-1968) -> 'https://somedomain.com/api/feed/obj/1234' (UTF-8)
--2021-06-01 11:08:01-- https://somedomain.com/api/feed/obj/1234
Resolving somedomain.com (somedomain.com)... vv.xx.yy.zz
Caching somedomain.com => vv.xx.yy.zz
Connecting to somedomain.com (somedomain.com)|vv.xx.yy.zz|:443...
I guess it has something to do with internal/external IPs.
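For what it's worth, a quick way to make the hang visible is to put a socket timeout on the request so it fails fast instead of blocking (a sketch; default_socket_timeout governs the connect timeout for PHP's stream wrappers):

<?php
// Fail fast instead of hanging indefinitely on connect.
ini_set('default_socket_timeout', 5); // seconds
$body = @file_get_contents('https://somedomain.com/api/feed/obj/1234');
if ($body === false) {
    echo "Request failed or timed out\n";
}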
My Code fails to reach the Docker Socket
require 'vendor/autoload.php';

$client = new GuzzleHttp\Client();
// The URL host is ignored when CURLOPT_UNIX_SOCKET_PATH is set;
// the request goes to the Docker daemon's unix socket.
$test = $client->request('GET', 'http://v1.40/containers/json', [
    'curl' => [CURLOPT_UNIX_SOCKET_PATH => '/var/run/docker.sock'],
]);
I only get a generic cURL error 7 from that, and I've checked that the socket is available and working inside the container with the curl command from the shell. It's only when I try to connect via PHP that it fails, and frankly I'm out of ideas.
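For reference, the in-container shell check looked something like this (a sketch; the API version in the path is an assumption):

curl --unix-socket /var/run/docker.sock http://localhost/v1.40/containers/json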
Just in case someone stumbles upon this in the future with the same or a similar problem:
Guzzle was not the problem in this case, but php-fpm. I did not realize that the php-fpm workers in the official PHP Docker image run as the www-data user by default. The fix I used was to change the user in www.conf (by default /usr/local/etc/php-fpm.d/www.conf in the Docker image):
user = root
group = root
You will also have to append the -R flag to the php-fpm command to allow running the workers as root.
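A minimal sketch of wiring both pieces together in the image build, assuming the official php:fpm base image (the sed expressions and PHP version are illustrative):

FROM php:7.4-fpm
# Switch the FPM pool user so workers can reach /var/run/docker.sock
RUN sed -i 's/^user = www-data/user = root/;s/^group = www-data/group = root/' \
    /usr/local/etc/php-fpm.d/www.conf
# -R (--allow-to-run-as-root) permits running the workers as root
CMD ["php-fpm", "-R"]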
Outline:
We are trying to connect Varnish 4.1.11 to Magento 1 in Kubernetes using the Nexcess Turpentine addon, but the same error is returned each time:
Error determining Varnish version: Varnish admin socket timeout
Failed to load configurator
Application stack:
We have a kubernetes cluster running a magento 1 stack with the following containers:
php-fpm:7.2/nginx:latest
mysql:5.7
redis:latest
nfs-provisioner:latest
nginx:latest (acts as a proxy for varnish to point to)
varnish:4.1.11
Kubernetes info:
Networking: cilium:v16.3
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Varnish config:
NFILES=131072
MEMLOCK=82000
NPROCS="unlimited"
RELOAD_VCL=1
VARNISH_VCL_CONF=/var/www/html/site/var/default.vcl
VARNISH_LISTEN_PORT=6081
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=5
VARNISH_MAX_THREADS=50
VARNISH_THREAD_TIMEOUT=120
VARNISH_STORAGE="malloc,512M"
VARNISH_TTL=120
DAEMON_OPTS="-F -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -S ${VARNISH_SECRET_FILE} \
             -s ${VARNISH_STORAGE} \
             -p esi_syntax=0x2 \
             -p cli_buffer=16384"
What we've tried so far:
Downgrading to varnish-3.0.7
Pointing magento to varnish's IP directly
Running a generic Varnish connection script in PHP (see the sketch below)
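For reference, a minimal sketch of what such a connection script might look like, probing the admin port configured above (the varnish hostname is an assumption for the in-cluster service name):

<?php
// Probe the Varnish admin socket; on connect Varnish sends a banner
// or an authentication challenge.
$fp = @fsockopen('varnish', 6082, $errno, $errstr, 5);
if ($fp === false) {
    die("Connect failed: $errstr ($errno)\n");
}
stream_set_timeout($fp, 5);
echo fread($fp, 512);
fclose($fp);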
Notes:
Pinging the varnish pod from the nginx/fpm pod works fine
Curling to the varnish ports from the nginx/fpm pod also works fine
The generic connection script noted above works successfully when run from inside the varnish container itself, which very likely indicates a networking issue.
Running the stack locally in docker-compose works fine, which also indicates a networking issue.
I appreciate that this is a very very niche issue, but hopefully someone else has some insight into what could be going wrong.
In case anyone else encounters this or a similar issue, it was due to the linkerd service mesh we have in place not properly passing traffic.
Whilst not an ideal solution, disabling linkerd for the relevant pods resolved the issue.
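For anyone needing the mechanics: with Linkerd 2.x, one way to disable the mesh for specific pods is the injection annotation on the pod template (a sketch; names are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
spec:
  template:
    metadata:
      annotations:
        # Skip linkerd sidecar injection for this pod
        linkerd.io/inject: disabled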
I am new to Docker and Consul. I have created 4 instances in AWS, used as follows:
First Instance - Server 1
Second Instance - Server 2
Third Instance - Server 3
Fourth Instance - Server 4
These instances run Ubuntu 18.04. I am trying to implement auto-discovery using Consul.
I have done the following steps.
I installed Docker on all four instances following https://docs.docker.com/install/linux/docker-ce/ubuntu/
And pulled the Consul image from https://hub.docker.com/_/consul?tab=description
I followed 'Running Consul for Development'; it works fine on all instances.
Server 1:
I am trying to run a Consul agent in client mode, but it does not behave as expected:
sudo docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' consul agent -bind=<external ip> -retry-join=<root agent ip>
For external ip I used Server 1's private IP; for root agent ip I used the bootstrap server's private IP.
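For context, a typical server/client pairing with the official image looks something like this (a sketch; flags are from the image docs and the IPs are placeholders):

# On the bootstrap server (server mode):
sudo docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
  consul agent -server -bootstrap-expect=1 -bind=<bootstrap private ip> -client=0.0.0.0

# On each client:
sudo docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' \
  consul agent -bind=<instance private ip> -retry-join=<bootstrap private ip>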
Output:
I got a 64-character key (the detached container's ID), e.g.:
b93b160ef52b9203d67bb6db27793963dc419276145f4c247c9ba4e2bd6deb03
But the reference site shows a different response.
dig @bootstrap_server_private_ip -p 8600 consul.service.consul
It shows a 'connection timed out' error.
Apache/PHP/Symfony is being used in my middle layer for authentication and routing. A request comes in over HTTP; if the request authenticates and the person making it is authorized, a call is made to a backend web service over HTTPS. The backend web service uses a self-signed certificate. I can hit the backend service directly and see my certificate information via the Chrome inspector. When I make the request directly to the backend via the URL, everything works. When I try to go through the middle layer, I get back a 504 response:
{"code":504,"message":"A network communication error has occurred","error":{"code":77,"message":"SSL: can\u0027t load CA certificate file \/etc\/pki\/tls\/certs\/ca-bundle.crt"}}
I generated the certificate to:
/usr/local/jboss-eap-6.4/standalone/configuration/.keystore
Using the command:
keytool -genkey -keyalg RSA -alias jboss -keystore .keystore -storepass changeit -validity 9999 -keysize 2048
I also updated my standalone.xml to reference the file via:
<ssl name="ssl" key-alias="jboss" password="changeit" certificate-key-file="/usr/local/jboss-eap-6.4/standalone/configuration/.keystore" protocol="TLSv1" verify-client="false"/>
My dev machine is OSX.
It seems that Apache or Symfony is looking for the cert in /etc/pki/tls/certs/ca-bundle.crt, a file that doesn't exist on my system. Searching for "pki" in /etc/apache2/httpd.conf yields no results.
How do I set up Apache/Symfony2 to trust this cert, or is there a different way to trust it more globally?
Create the CA Cert Bundle File
The system is looking for /etc/pki/tls/certs/ca-bundle.crt, which is a standard path on Linux but not on OS X. We get around this by generating the file.
I generated the .keystore file using keytool and used jboss for my alias. To build the CA bundle file we need the certificate in PEM format, so we add the -rfc flag to the export command. Below are the commands:
cd /usr/local/jboss-eap-6.4/standalone/configuration
keytool -export -alias jboss -file local-sbx.dev.yourcompany.com.crt -keystore .keystore -rfc
After you have the file, you can cat it and verify that it contains the BEGIN CERTIFICATE and END CERTIFICATE markers. If so, it's in the right format.
Lastly, create the directory structure, copy the cert to act as the bundle (which is just a bunch of certs appended to each other), and then restart Apache:
mkdir -p /etc/pki/tls/certs/
sudo cp local-sbx.dev.yourcompany.com.crt /etc/pki/tls/certs/ca-bundle.crt
sudo apachectl restart
Note: This was a sub-problem of "SSL: can't load CA certificate file /etc/pki/tls/certs/ca-bundle.crt", so if you are still having issues, you might need to update your PHP setup too; directions are in the link provided.
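If PHP itself is making the HTTPS call, a sketch of pointing a single request at the same bundle via a stream context (the URL is a placeholder):

<?php
$context = stream_context_create([
    'ssl' => [
        'cafile'      => '/etc/pki/tls/certs/ca-bundle.crt',
        'verify_peer' => true,
    ],
]);
$body = file_get_contents('https://local-sbx.dev.yourcompany.com/', false, $context);

Globally, the openssl.cafile and curl.cainfo php.ini directives point at a bundle the same way.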
My dev server is Debian Squeeze and I'm running Gearman 1.1.5, which I compiled from source along with the PHP PECL extension v1.1.1.
If I run the reverse_client.php script I get the GEARMAN_COULD_NOT_CONNECT error.
PHP Warning: GearmanClient::do(): send_packet(GEARMAN_COULD_NOT_CONNECT) Failed to send server-options packet -> libgearman/connection.cc:430 in /home/bealers/build/gearman-1.1.1/examples/reverse_client.php on line 26
There are a few similar posts on here about this and they all point to GM not running.
It is definitely running.
I'm starting it with these params:
PARAMS="--queue-type=MySQL --mysql-db=test_db --mysql-user=gearman --mysql-password=gearman"
If I drop the gearman_queue table in test_db and restart the daemon, the table is recreated, so its MySQL connection is fine and it's clearly starting.
I can also telnet to 4730 on localhost, so there's no firewall issue.
Initially Gearman had problems starting because it was starting before MySQL, so I edited the init script:
### BEGIN INIT INFO
# Provides: gearman-job-server
# Required-Start: $network $remote_fs $syslog mysql
and an update-rc.d gearman-job-server defaults sets it to start after MySQL; it starts fine on boot now.
The only other thing I can think of is that I initially installed via apt, but the version was too old, so I removed it and compiled from source. /usr/sbin/gearmand no longer exists; the only version is /usr/local/sbin/gearmand.
ps ax | grep gearman shows only one process running.
Netstat shows only one process listening:
tcp 0 0 *:4730 *:* LISTEN 2325/gearmand
The PECL lib seems fine:
php -i | grep gearman
/etc/php5/cli/conf.d/gearman.ini,
gearman
gearman support => enabled
libgearman version => 1.1.5
I'm out of ideas
I had the same problem and recently solved it after a couple of days of frustration (hard to troubleshoot since there are three processes to worry about :-)
It appears (at least in my case) that the PHP documentation for GearmanClient::addServer() and GearmanWorker::addServer() is incorrect. Specifically, the docs seem to imply that hostname and port number are optional and that localhost and port 4730 will be used as defaults if you do not specify them. This never worked: it occurred to me today to try explicitly specifying them for both client and worker processes, and everything started working.
Try specifying all values for hostnames and ports and see if this works for you.
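For example, a minimal sketch with everything spelled out (127.0.0.1 and 4730 are the defaults the docs describe):

<?php
// Client: explicit host and port instead of relying on defaults.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);

// Worker: the same explicit values.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);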
In case you have used something like this:
$client->addServers('127.0.0.1', 4730);
or
$client->addServers();
use something like this:
$client->addServers('127.0.0.1:4730');
PS: I have used the localhost IP; this can be replaced with the actual host IP.
In my case it's a little different. I got the same error when I had the addServer call inside the loop:
$client = new GearmanClient();
for ($i = 0; $i < 100000; $i++) {
    $client->addServer("127.0.0.1", 4730); // bug: the server is re-added on every iteration
    $data = json_encode(array('job_id' => $i, 'task_name' => 'send-email'));
    $client->addTaskBackground('sendEmail', $data);
}
$client->runTasks();
And this fixed it for me:
$client = new GearmanClient();
$client->addServer("127.0.0.1", 4730); // add the server once, before the loop
for ($i = 0; $i < 100000; $i++) {
    $data = json_encode(array('job_id' => $i, 'task_name' => 'send-email'));
    $client->addTaskBackground('sendEmail', $data);
}
$client->runTasks();
Maybe this could help someone.
If you want to use a single server, you can use:
$client->addServer($host, $port)
e.g. $client->addServer('127.0.0.1', 4730)
http://php.net/manual/en/gearmanclient.addserver.php
If you want to use multiple servers, you can use (addServers takes a single comma-separated string):
$client->addServers("$host1:$port1,$host2:$port2")
e.g. $client->addServers('127.0.0.1:4730,127.0.0.2:8080')
http://php.net/manual/en/gearmanclient.addservers.php