Varnish admin socket timeout with magento 1 in kubernetes - php

Outline:
We are trying to connect Varnish 4.1.11 to Magento 1 in Kubernetes using the Nexcess Turpentine extension, but the same error is returned each time:
Error determining Varnish version: Varnish admin socket timeout
Failed to load configurator
Application stack:
We have a kubernetes cluster running a magento 1 stack with the following containers:
php-fpm:7.2/nginx:latest
mysql:5.7
redis:latest
nfs-provisioner:latest
nginx:latest (acts as a proxy for varnish to point to)
varnish:4.1.11
kubernetes info:
Networking: cilium:v1.6.3
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Varnish config:
NFILES=131072
MEMLOCK=82000
NPROCS="unlimited"
RELOAD_VCL=1
VARNISH_VCL_CONF=/var/www/html/site/var/default.vcl
VARNISH_LISTEN_PORT=6081
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=5
VARNISH_MAX_THREADS=50
VARNISH_THREAD_TIMEOUT=120
VARNISH_STORAGE="malloc,512M"
VARNISH_TTL=120
DAEMON_OPTS="-F -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
-f ${VARNISH_VCL_CONF} \
-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
-t ${VARNISH_TTL} \
-S ${VARNISH_SECRET_FILE} \
-s ${VARNISH_STORAGE} \
-p esi_syntax=0x2 \
-p cli_buffer=16384"
What we've tried so far:
Downgrading to varnish-3.0.7
Pointing magento to varnish's IP directly
Running a generic varnish connection script in PHP
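For illustration, a minimal PHP probe of the admin port looks like the sketch below (a generic example; the varnish hostname is a placeholder for the service name, and 6082 is the admin port from the config above). On connect, varnishd's CLI replies with a status line ("<code> <length>") followed by a banner, or a 107 authentication challenge when -S is in use, so reading anything at all proves the admin socket is reachable:
<?php
// Minimal connectivity probe for the Varnish admin (CLI) port.
$host = 'varnish';   // placeholder: service/pod name of the varnish container
$port = 6082;        // VARNISH_ADMIN_LISTEN_PORT
$timeout = 5;        // seconds

$fp = @fsockopen($host, $port, $errno, $errstr, $timeout);
if ($fp === false) {
    die("Connect failed: [$errno] $errstr\n");
}
stream_set_timeout($fp, $timeout);

// First line of the CLI response: "<status> <length>" (107 = auth challenge).
$status = fgets($fp);
if ($status === false) {
    die("Connected, but no CLI banner received (read timed out)\n");
}
echo "CLI status line: " . trim($status) . "\n";
echo fread($fp, 512), "\n";
fclose($fp);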
Notes:
Pinging the varnish pod from the nginx/fpm pod works fine
Curling to the varnish ports from the nginx/fpm pod also works fine
The generic connection script noted above works successfully when run from inside the varnish container itself, which very likely indicates a networking issue.
Running the stack locally in docker-compose works fine, which also indicates a networking issue.
I appreciate that this is a very very niche issue, but hopefully someone else has some insight into what could be going wrong.

In case anyone else encounters this or a similar issue, it was due to the linkerd service mesh we have in place not properly passing traffic.
Whilst not an ideal solution, disabling linkerd for the relevant pods resolved the issue.

Related

Http GET request between Amazon EC2 servers

We have two AWS EC2 instances and from the first server using PHP I am making a request to the second server like this: file_get_contents('https://somedomain.com/api/feed/obj/1234'). It tries to connect but just hangs.
Using wget in the terminal I get:
$ wget -dv https://somedomain.com/api/feed/obj/1234
Setting --verbose (verbose) to 1
DEBUG output created by Wget 1.17.1 on linux-gnu.
Reading HSTS entries from /home/abc/.wget-hsts
URI encoding = 'ANSI_X3.4-1968'
converted 'https://somedomain.com/api/feed/obj/1234' (ANSI_X3.4-1968) -> 'https://somedomain.com/api/feed/obj/1234' (UTF-8)
--2021-06-01 11:08:01-- https://somedomain.com/api/feed/obj/1234
Resolving somedomain.com (somedomain.com)... vv.xx.yy.zz
Caching somedomain.com => vv.xx.yy.zz
Connecting to somedomain.com (somedomain.com)|vv.xx.yy.zz|:443...
I guess it has something to do with internal/external IPs.
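One way to confirm from the PHP side that this is a connect-level hang rather than an application issue is to give file_get_contents an explicit timeout and inspect the reported error; a minimal sketch, reusing the URL from the question:
<?php
// Hypothetical sketch: add an explicit timeout so the request fails fast
// instead of hanging on the default socket timeout, then surface the error.
$context = stream_context_create([
    'http' => [
        'method'  => 'GET',
        'timeout' => 10,   // seconds to wait before giving up
    ],
]);

$body = @file_get_contents('https://somedomain.com/api/feed/obj/1234', false, $context);
if ($body === false) {
    $err = error_get_last();
    echo "Request failed: " . ($err['message'] ?? 'unknown error') . "\n";
} else {
    echo strlen($body) . " bytes received\n";
}
If this times out while the same URL works from the second server itself, the problem is on the network side (security groups, internal vs. external IP) rather than in the PHP code.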

Trying to communicate with the external ip into my docker consul

I am new to Docker and Consul. I have created 4 instances in AWS and use them as follows:
First Instance - Server 1
Second Instance - Server 2
Third Instance - Server 3
Fourth Instance - Server 4
The instances run Ubuntu 18.04. I am trying to implement auto-discovery using Consul.
I have done the following steps.
I have installed Docker on all four instances following https://docs.docker.com/install/linux/docker-ce/ubuntu/
and pulled the Consul image from
https://hub.docker.com/_/consul?tab=description
I have tried the 'Running Consul for Development' section and it works fine on all the instances.
Server 1:
I am trying to run the Consul agent in client mode with the command below, but I am not getting the expected result:
sudo docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' consul agent -bind=<external ip> -retry-join=<root agent ip>
external ip - I used Server 1's private IP.
root agent ip - I used the bootstrap server's private IP.
Output:
I got a 64-character ID back, e.g.:
b93b160ef52b9203d67bb6db27793963dc419276145f4c247c9ba4e2bd6deb03
But the reference site shows a different response. When I run:
dig @bootstrap_server_private_ip -p 8600 consul.service.consul
it shows a 'connection timed out' error.
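An alternative way to check the same thing without dig is to ask the agent's HTTP API which nodes provide the built-in consul service; a minimal PHP sketch, assuming the API is reachable on its default port 8500 on that host (with --net=host it binds on the host itself, by default on 127.0.0.1 only):
<?php
// Hypothetical check via Consul's HTTP API instead of its DNS interface:
// list the nodes registered for the built-in "consul" service.
$host = '127.0.0.1';   // assumption: run on the agent's own host
$url  = "http://$host:8500/v1/catalog/service/consul";

$json = @file_get_contents($url);
if ($json === false) {
    die("Could not reach the Consul HTTP API at $url\n");
}
$nodes = json_decode($json, true);
if (!is_array($nodes)) {
    die("Unexpected response: $json\n");
}
foreach ($nodes as $node) {
    echo $node['Node'] . ' -> ' . $node['Address'] . "\n";
}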

Nginx + php-fpm: Bad gateway only when xdebug server is running

Problem
When xdebug server is running from IntelliJ IDEA, I get 502 Bad Gateway from nginx when I try loading my site to trigger breakpoints.
If I stop the xdebug server, the site works as intended.
So, I'm not able to run the debugger, but it did work previously (!). Not able to pinpoint why it suddenly stopped working.
Setup
A short explanation of the setup (let me know if I need to expand on this).
My php app is running in a docker container, and it is linked to nginx running in a different container using volumes_from in the docker compose config.
After starting the app, I can verify using phpinfo() that the xdebug module is loaded.
My xdebug.ini has the following content:
zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_host=10.0.2.2
xdebug.remote_connect_back=0
xdebug.remote_port=5555
xdebug.idekey=complex
xdebug.remote_handler=dbgp
xdebug.remote_log=/var/log/xdebug.log
xdebug.remote_autostart=1
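Besides phpinfo(), a quick way to confirm that these values were actually picked up by the PHP that serves the requests is to dump them from a throwaway script; a minimal sketch using standard ini_get() calls (nothing here is specific to this setup):
<?php
// Dump the effective xdebug settings as seen by the PHP handling web requests.
var_dump(extension_loaded('xdebug'));
var_dump(ini_get('xdebug.remote_enable'));
var_dump(ini_get('xdebug.remote_host'));
var_dump(ini_get('xdebug.remote_port'));
var_dump(ini_get('xdebug.remote_autostart'));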
I got the ip address for remote_host (where the xdebug server is running) by these steps:
docker-machine ssh default
route -n | awk '/UG[ \t]/{print $2}' <-- Returns 10.0.2.2
To verify I could reach the debugging server from within my php container, I did the following steps
docker exec -it randomhash bash
nc -z -v 10.0.2.2 5555
Giving the following output depending on xdebug server running or not:
Running: Connection to 10.0.2.2 5555 port [tcp/*] succeeded!
Not running: nc: connect to 10.0.2.2 port 5555 (tcp) failed: Connection refused
So IntelliJ IDEA is surely set up to receive connections on 5555. I also did the appropriate path mapping between my source file paths and the remote path (when setting up the PHP Remote Debugging server from within IDEA).
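The same reachability test can also be run from PHP itself; a minimal sketch using the host and port from the xdebug.ini above:
<?php
// PHP-side equivalent of `nc -z -v 10.0.2.2 5555`: can this container open a
// TCP connection to the machine where the IDE is listening for xdebug?
$host = '10.0.2.2';   // xdebug.remote_host
$port = 5555;         // xdebug.remote_port

$fp = @fsockopen($host, $port, $errno, $errstr, 5);
if ($fp === false) {
    echo "Cannot reach $host:$port -> [$errno] $errstr\n";
} else {
    echo "TCP connection to $host:$port succeeded\n";
    fclose($fp);
}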
Any ideas? Kind of lost on this one as I don't have much experience with any of these technologies :D
This sometimes happens; the cause is an error in php-fpm itself, triggered when xdebug is active.
When I refactored a colleague's code, one page of the project returned 502 Bad Gateway.
Here's what I found:
php-fpm.log
WARNING: [pool www] child 158 said into stderr: "*** Error in `php-fpm: pool www': free(): invalid size: 0x00007f1351b7d2a0 ***"
........
........
WARNING: [pool www] child 158 exited on signal 6 (SIGABRT - core dumped) after 38.407847 seconds from start
I found a piece of code that caused the error:
ob_start();                    // buffer whatever the evaluated template prints
// Evaluate $string as a PHP/HTML template; the string concatenation keeps the
// literal "<?php" from breaking the surrounding source file.
$result = eval("?>".$string."<"."?p"."hp return 1;");
$new_string = ob_get_clean();  // captured output of the eval'd template
But that is not all. The error occurred only for a certain value of $string which, at first glance, did not differ from the others. In my case the fix was simple: I removed the code that caused the error, which did not affect the functionality of the web page, and continued debugging from there.
I had the same problem with the Vagrant Homestead Parallels box on an Apple Silicon chip. Switching from PHP 7.3 to 7.4 fixed the issue for me.

PHPUnit Selenium Xvfb Centos

I am trying to setup functional tests on my Centos Server using Selenium Web Server and Phpunit.
When I run the tests, I get an error in the command line :
PHPUnit_Extensions_Selenium2TestCase_WebDriverException:
Unable to connect to host vmdev-pando-56 on port 7055 after 45000 ms.
Firefox console output: Error: no display specified
I've been doing research for more than three days and couldn't find a solution. I read many posts, including on Stack Overflow. As far as I can tell everything is properly set up, and yet I am experiencing the same problem as many other people, and the solutions that work for them do not work for me.
This is my setup:
OS: Centos 6.5 x86 in command line (no GUI)
PHP: 5.6
Phpunit: 3.7, although I also tried with 5.3
Selenium standalone web server: 2.53, downloaded from here, although I also tried with 2.9
Xvfb system: xorg-x11-server-Xvfb
Firefox: 38.0.1, although I also tried with 38.7
I also set DISPLAY to :99 in my bash profile.
This is what I do to set up the environment:
First, I launch the Xvfb system: /usr/bin/Xvfb :99 -ac -screen 0 1280x1024x24 &
Then I launch the Selenium server: /usr/bin/java -jar /usr/lib/selenium/selenium-server-standalone-2.53.0.jar &
I launch Firefox: firefox & (although I know this is not necessary, but just in case)
All of the three processes are running in background.
At this point, I know that Firefox is operative, as well as the X buffer. I can run the command firefox http://www.stackoverflow.com & and then take a snapshot of the buffer by executing import -window root /tmp/buffer_snapshot.png (screenshot omitted).
I did of course receive a warning on the terminal: Xlib: extension "RANDR" missing on display ":99", but I have read countless times that this is not a problem.
Anyway, the problem begins just now.
I've written a rather simple functional test (note that the non-functional tests I've written work just fine, so in that respect the environment seems to be properly configured):
<?php
namespace My\APP\BUNDLE\Tests\Functional\MyTest;

use PHPUnit_Extensions_Selenium2TestCase;

class HelloWorldTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        $this->setBrowser('firefox');
        $this->setHost('localhost');
        $this->setPort(4444);
        $this->setBrowserUrl('http://www.stackoverflow.com');
    }

    public function testTitle()
    {
        $this->url('/');
        $this->assertEquals("1", "1");
    }
}
And when I run the test by issuing phpunit HelloWorldTest.php, I get the following error:
PHPUnit_Extensions_Selenium2TestCase_WebDriverException:
Unable to connect to host vmdev-pando-56 on port 7055
after 45000 ms. Firefox console output:
Error: no display specified
Checking the log file generated by selenium, I found the following (interesting) lines:
21:55:46.135 INFO - Creating a new session for Capabilities [{browserName=firefox}]
[...]
java.util.concurrent.ExecutionException:
org.openqa.selenium.WebDriverException:
java.lang.reflect.InvocationTargetException
Build info: version: '2.53.0',
revision: '35ae25b',
time: '2016-03-15 17:00:58'
System info: host: 'vmdev-pando-56',
ip: '127.0.0.1',
os.name: 'Linux',
os.arch: 'i386',
os.version: '2.6.32-431.el6.i686',
java.version: '1.7.0_99'
Driver info: driver.version: unknown
[...]
(The file contains the complete stack trace dump, and the original message of no display specified)
No errors in the Xvfb log file.
At this point I have no clue of what I am doing wrong.
Can anyone help?
Thanks a lot
A reason for the Unable to connect error is that the version of the Selenium server does not know how to work with the version of Firefox you have installed. Selenium standalone server 2.53 is the latest and greatest, and selenium-firefox-driver is also at 2.53, but Firefox 38 is old. I am running Firefox 45.0.1 with Selenium 2.53.

php symfony ERR_CONTENT_DECODING_FAILED on openshift

I have a php 5.4 gear on openshift on which I can run my wordpress installation.
However, when I try to deploy my Symfony app, I get a Content Encoding Error; it seems like the HTML output is broken (not complete). I've tried modifying the .htaccess file, as well as enabling/disabling the output buffering and output compression settings in Apache's httpd.conf. Here is the output from curl:
$ curl --compress --raw -i http://webfront-interiorpediadev.rhcloud.com
HTTP/1.1 200 OK
Date: Mon, 08 Dec 2014 01:52:30 GMT
Server: Apache/2.2.15 (Red Hat)
X-Pingback: http://webfront-interiorpediadev.rhcloud.com/xmlrpc.php
Cache-Control: no-cache
x-pingback: http://webfront-interiorpediadev.rhcloud.com/xmlrpc.php
vary: Accept-Encoding
content-encoding: gzip
accept-ranges: none
Content-Length: 20
X-Debug-Token: a9e740
X-Debug-Token-Link: /_profiler/a9e740
Content-Type: text/html; charset=UTF-8
Set-Cookie: PHPSESSID=gtarr5ls6tdcejdelrjrffr9p6; path=/
set-cookie: PHPSESSID=jtebjs0ihrdqhvo6f2gicq0rp0; path=/
<!DOCTYPE html>
<!--
I really have no idea why I do not receive the entire HTML output. The Content-Length header looks suspiciously small, but Google searches and the PHP/Apache documentation say that the Content-Length header is not reliable for gzipped output.
I've been debugging this for quite some time now and have really run out of ideas. The app code is fine, as I have deployed the exact same copy on my localhost during development. Any idea would be highly appreciated.
Here is the phpinfo() output for my apache configuration on openshift:
(screenshot: phpinfo on openshift)
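Since the symptom points at compression going wrong (double compression or a truncated gzip stream), one quick check from inside the gear is whether PHP-level output compression or a custom output handler is active on top of Apache's; a minimal sketch reading the standard php.ini settings (also visible in the phpinfo() dump above):
<?php
// Show whether PHP itself is compressing or buffering output, which can
// clash with Apache-level compression and yield truncated/garbled responses.
var_dump(ini_get('zlib.output_compression'));
var_dump(ini_get('zlib.output_handler'));
var_dump(ini_get('output_buffering'));
var_dump(ini_get('output_handler'));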
The best way to deploy a Symfony app to Openshift is:
Be sure you have a Symfony2 app working well on localhost (dev and prod)
Your project has to be using git.
Your .gitignore file should ignore vendors, cache, bootstrap, logs, composer files, etc.
You have committed every pending change.
You need an OpenShift gear using PHP 5.4 and a MySQL 5.5 cartridge
You need rhc to be installed and configured
Configure your gear to deploy from a branch called release: rhc app-configure --deployment-branch release -a <app-name>
Create a new php file that will give MySQL access to your app:
<?php
# app/config/params.php
if (getEnv("OPENSHIFT_APP_NAME") != '') {
    $container->setParameter('database_host', getEnv("OPENSHIFT_MYSQL_DB_HOST"));
    $container->setParameter('database_port', getEnv("OPENSHIFT_MYSQL_DB_PORT"));
    $container->setParameter('database_name', getEnv("OPENSHIFT_APP_NAME"));
    $container->setParameter('database_user', getEnv("OPENSHIFT_MYSQL_DB_USERNAME"));
    $container->setParameter('database_password', getEnv("OPENSHIFT_MYSQL_DB_PASSWORD"));
}
This tells the app that, when it is running in the OpenShift environment, it needs to load a different database user and database name.
Import this file (params.php) to your app/config/config.yml file:
imports:
- { resource: parameters.yml }
- { resource: security.yml }
- { resource: params.php }
...
Commit your changes.
Create a new branch that will push your changes to Openshift: git checkout -b release
Add your remote repository from openshift: git remote add openshift -f <youropenshiftrepository.git>
Merge the differences between both repositories: git merge openshift/master -s recursive -X ours
Create a 'deploy' file (the one executed on OpenShift after you push your app) in the new folder .openshift/action_hooks (created when you added your OpenShift repository):
#!/bin/bash
# Symfony deploy
export COMPOSER_HOME="$OPENSHIFT_DATA_DIR/.composer"
if [ ! -f "$OPENSHIFT_DATA_DIR/composer.phar" ]; then
    curl -s https://getcomposer.org/installer | php -- --install-dir=$OPENSHIFT_DATA_DIR
else
    php $OPENSHIFT_DATA_DIR/composer.phar self-update
fi
unset GIT_DIR
cd $OPENSHIFT_REPO_DIR/
php $OPENSHIFT_DATA_DIR/composer.phar install
php $OPENSHIFT_REPO_DIR/app/console cache:clear --env=dev
chmod -R 0777 $OPENSHIFT_REPO_DIR/app/cache
chmod -R 0777 $OPENSHIFT_REPO_DIR/app/logs
rm -r $OPENSHIFT_REPO_DIR/php
ln -s $OPENSHIFT_REPO_DIR/web $OPENSHIFT_REPO_DIR/php
php $OPENSHIFT_REPO_DIR/app/console doctrine:schema:update --force
Give this file permission to be executed. On Windows: git update-index --chmod=+x .openshift/action_hooks/deploy; on Linux and Mac: chmod +x .openshift/action_hooks/deploy
Add your new file to the git project and make the commit.
Push to openshift: git push openshift HEAD
Your console will show you every step it is working on.
Come back to your master branch. git checkout master
Then you can keep working normally on your project; commit your changes and switch to the release branch to deploy them: git checkout release, git merge master, git push openshift HEAD, git checkout master
And that's how I work with Symfony and OpenShift. (These instructions are a mix of several approaches I read about, improved with some changes of my own; this works very well for every app I've made.)
