Docker Environment Variables missing for PhpStorm Unit Testing - php

I am currently trying to set up a development environment with Docker. The goal is to have a single Docker Compose file that starts all necessary containers, so that I can develop and test code within the containers.
I have now run into a problem: my environment variables are not available when running tests using PhpStorm's built-in Docker support. All my tests succeed when running docker compose exec api-internal-php vendor/bin/phpunit, but not when running them in PhpStorm. To clarify: in both cases I am executing my tests in the running container. The tests fail because the environment variables I defined in my Docker Compose file, which are necessary for connecting to my database, are not available when PHPUnit is run from PhpStorm. My Docker image is based on php:8.2-fpm-alpine.
My system configuration (up-to-date at the time of writing):
macOS 13.1
Docker Desktop 4.15.0
PhpStorm 2022.3.1
Of course I ensured that I am actually running the tests using Docker. I am using a Docker network and connect to the database using the service's name. I can confirm that the scope of my environment variables is the issue, because when I replace the environment variables in my code with their actual values, the tests succeed even in PhpStorm. That also means that the test is able to connect to the database using the service's name and the Docker network.
I installed Xdebug in my image and debugging is working fine. By adding a breakpoint at the beginning of my test I can confirm that my variables are missing in the $_ENV array.
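For reference, this is roughly the check I used at the breakpoint (the variable name is one of those from the compose file below; comparing with getenv() is worth doing too, since PHP only fills $_ENV when variables_order contains "E"):
// illustrative check inside a test case
var_dump(getenv('DB_TEST_NAME'));        // false if the variable never reached the PHP process
var_dump($_ENV['DB_TEST_NAME'] ?? null); // additionally requires variables_order to include "E"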
I tried different things and searched for a solution for hours but have not succeeded yet. I tried adding an FPM config file with my environment variables, as this post suggested, which did not help. Otherwise, I did not find any question with the exact same problem.
Relevant excerpt of my Docker Compose file:
api-internal-php:
  build:
    context: .
    dockerfile: dockerfiles/php.dockerfile
    args:
      APP_ENV: dev
      SYMFONY_DIRECTORY: backend-api-internal
  restart: unless-stopped
  volumes:
    - ./backend-api-internal:/var/www/html:ro
    - ./backend-api-internal/var:/var/www/html/var
    - ./backend-api-internal/vendor:/var/www/html/vendor
    - ./backend-api-internal/.phpunit.result.cache:/var/www/html/.phpunit.result.cache
  environment:
    DB_ANSWERS_URL: mysqli://root:local@mariadb/
    DB_DEV_NAME: dev_env_db
    DB_TEST_NAME: test_env_db
  depends_on:
    - mariadb
    - model
One interesting side note: My application/working directory in the container is (obviously) /var/www/html. Somehow, PhpStorm uses /opt/project as the path mapping. As I said, when replacing the environment variables in the code by their actual value, the tests run just fine, so I do not think that this is the problem.

So as it turned out, the Docker integration in PhpStorm works quite well if you know how to use it. The mistake I made was to first set up the CLI Interpreter as "Docker" instead of "Docker Compose" and then, when switching to "Docker Compose", not picking the correct "Lifecycle" option.
So first of all, to execute tests in your Docker container, go to the "PHP > CLI Interpreter" setting, click on the three dots and create a new one. Select "From Docker, ..." and then make sure to select "Docker Compose" if your setup is based on a Docker Compose configuration YAML file. There you can select your configuration file and the service. You do not need to set anything under "Environment Variables".
Hit "OK" and then, lastly, to execute tests in a running container, change the "Lifecycle" setting to "Connect to existing container". Under the hood, this will use the docker compose exec command instead of docker compose run.
Thanks to @Foobar for the solution!

Related

Use docker-compose correctly so that two containers can share files in the same volume

I've got a laravel application (based on laradock) that has been amended to productionize it. The code is on github.
I'm trying to build a solution where:
files are copied from the host into the php app and then into a volume. As you can see in the docker file here, the files are copied into the image, and composer update runs as expected.
the php app is built in docker (the composer update command needs to run on the volume)
when docker-compose up is called, multiple containers start, and nginx and php-fpm share the same content; the Nginx container can therefore serve the php application.
When I run the code in the repo above, I am seeing a 404 in the browser. The reason is (I think) that in the docker-compose file, on line 97, the statement:
- app-data:/var/www/
(which mounts the volume) has erased the files that had been added, as it mounts over them. (These files are correctly added to the image as part of the php-fpm docker build.)
So, the question is: how can I mount a volume at run time without erasing the files that were added as part of the image build? The volume needs to be mounted at that path so that both containers can see the files (AFAIK).
A Docker volume persists its data; only the first time it is created is it initialized from the container's data at the mount path. I guess that you created the volume earlier by mistake, while it was still empty. So you just need to manually delete the app-data volume and redeploy your docker-compose file:
Stop your docker compose
docker volume rm <project_name_or_stack_name>_app-data
Start your docker compose again
Besides, you will need to make sure that your php-fpm container, which contains your source code, is started before any other services sharing the app-data volume.
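As a sketch (service names and build paths are assumptions based on the question, not the actual laradock files), the relevant part of the compose file could look like this:
services:
  php-fpm:
    build: ./php-fpm              # image that contains the application code under /var/www/
    volumes:
      - app-data:/var/www/        # an empty named volume is initialized from the image's files on first use
  nginx:
    build: ./nginx
    depends_on:
      - php-fpm                   # start php-fpm (which seeds the volume) before nginx
    ports:
      - "80:80"
    volumes:
      - app-data:/var/www/
volumes:
  app-data: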

Docker image on CentOS with php, mysql and apache

Right now we have a website running on php 5.6 on Azure, on a CentOS 7 based operating system.
Every time we want to deploy new code, we have to FTP to our server and manually transfer files and folders. This is very error-prone and takes us hours of deploying and debugging afterwards, every single time.
We develop on our local Windows machines using PHP with WAMP, so there is already a discrepancy between our local environments and the production environment.
I started reading a lot about Docker lately and how it integrates with Bitbucket Pipelines, so I want to make our deployment flow more streamlined and automatic with Bitbucket Pipelines.
Before I get to the technical stuff I have already tried, I want to make sure that my general picture of the steps that need to be done is correct.
What I want to achieve is a way for me and my colleague to write our code and push it to our Bitbucket repository; from there the pipeline picks it up, creates a Docker container and deploys it automatically to our website (is this a good idea, and what about active users during a new container deploy?).
These are the steps I think need to be done, please correct me where I am wrong:
1) I create a CentOS virtual machine using VirtualBox
2) On this VM I install Docker
3) I create a Dockerfile where I use the php7.3-apache base image and install MySQL on top of it as well
4) ?? Do I need to do extra stuff here, like copying folders with code, or is that done by Bitbucket ??
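For context on the Bitbucket side, a minimal bitbucket-pipelines.yml that builds the image on every push might look roughly like this (it reuses the Dockerfile path and tag from the commands shown later in this question; the registry name is a placeholder):
# hypothetical bitbucket-pipelines.yml sketch
image: atlassian/default-image:3
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # the COPY instructions in the Dockerfile pull the repository files into the image here
          - docker build --file .docker/Dockerfile -t my-registry/docker-test:latest .
          # a docker login with registry credentials would be needed before this push
          - docker push my-registry/docker-test:latest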
Now the problem I encounter is in creating this "docker container" for my situation. I realize this is probably a very common use case for Docker, but I've read through thousands of tutorials and watched tons of videos, yet I cannot find answers to my most basic questions and I end up being stuck and frustrated for days/weeks.
I've got a fully working website created in CodeIgniter, but for the sake of the question I just want to have a working version of the docker container containing PHP MySQL and Apache.
I've logged into the CentOS VM and performed the following commands:
mkdir dockertest
touch index.html (and I placed some text in here)
touch index.php (and I placed a basic echo "hello world" in here)
touch docker-compose.yml
mkdir .docker
cd .docker
touch Dockerfile
touch vhost.conf
Dockerfile looks like this:
FROM php:7.3.0-apache-stretch
MAINTAINER Dennis
COPY . /srv/app
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf
RUN chown -R www-data:www-data /srv/app && a2enmod rewrite
Then I'm able to build the image using
docker build --file .docker/Dockerfile -t docker-test .
Right now I can run the container with the following command:
docker run --rm -p 8080:80 docker-test
At this point I go to my CentOS VM and I try to do a curl localhost:8080 and I get the following HTML:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access on this server. <br />
</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at localhost Port 8080</address>
</body></html>
So I guess this means that the Apache server is running, but it does not see my index files anywhere.
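One way to check that theory (the container ID placeholder comes from docker ps): the stock php:7.3-apache image serves /var/www/html, while the Dockerfile above copies the code to /srv/app, so it is worth comparing those paths and the vhost Apache actually loaded:
docker ps                                                    # note the running container's ID
docker exec -it <container-id> ls /var/www/html /srv/app     # where the files actually ended up
docker exec -it <container-id> cat /etc/apache2/sites-enabled/000-default.conf   # the vhost in effect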
I am massively overwhelmed by the amount of documentation and tutorials available for Docker, but it all seems too high-level for me, and none of it targets CentOS 7, PHP, MySQL and Apache combined.
A question that's also bugging me: the advantage of Docker is that it can be deployed anywhere and the environment is exactly the same, so there are no problems like "it works on my localhost". But how exactly does this work? Do my colleague and I need to develop our code INSIDE the docker container? How does this even work?
The process should be:
develop: you and your colleagues develop code and push it to a version control system (git on Bitbucket/GitHub) -> the code is in one trusted repository
build: you take this code and create one (or multiple) Docker image(s) with it: on the Apache server you need the HTML and JavaScript code. Build a Docker image starting from the Apache image, with a step that PULLs the code from the git repository into the container. That's your front-end server.
For the DB part, you probably want another container, or even a managed service that handles the migrations/updates for you, so you only need to worry about the data in the database. If you want to have your own container, make sure the data is in a VOLUME that is mounted in the container but is otherwise stored on a local or network drive (i.e. NOT inside the container, which would get destroyed on any update).
deploy: pull the images from registry of choice, and make sure the containers are connected as needed (i.e. either on the same host and linked, or on different nodes that have access to each other through a private network)
Notes:
Use Docker for Windows rather than a virtual machine with Docker installed inside it.
The host doesn't matter; whether you deploy on an Ubuntu, CentOS or CoreOS host, it's the Docker base image in the container that matters for installing dependencies and making your code run.
In the build phase, you probably don't want to pull from git inside the image if your project is a private repository, because you would need credentials inside the image to do that. Instead, either pull the code from git outside the image and ADD it to the image, or use another (private) container that has the git credentials to pull the code, do the build, and produce a build artifact that you can then ADD to a shippable container.
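To make the above a bit more concrete, a minimal compose file along these lines could look roughly like this (image names, credentials and ports are placeholders, not a recommendation):
version: "3"
services:
  web:
    image: my-registry/my-php-apache-site:latest   # code baked into the image at build time
    ports:
      - "80:80"
    depends_on:
      - db
    environment:
      DB_HOST: db                                  # reach the database by service name
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: app
    volumes:
      - db-data:/var/lib/mysql                     # data lives in a volume, not inside the container
volumes:
  db-data: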

Where does the data live in a Multi-Container Dockerized application?

I'm trying to build a couple of Docker images for different PHP-based web apps, and for now I've been deploying them to Elastic Beanstalk using the Sample Application provided by AWS as a template. This application contains two Docker container definitions: one for PHP itself, and the other for Nginx (to act as a reverse-proxy).
However, it seems a little odd to me that the source code for my PHP application is effectively deployed outside of the Docker image. As you can see from the Github sample project linked above, there's a folder called php-app which contains all the PHP source files, but these aren't part of the container definitions. The two containers are just the stock images from Dockerhub. So in order to deploy this, it's not sufficient to merely upload the Dockerrun.aws.json file by itself; you need to ZIP this file together with the PHP source files in order for things to run. As I see it, it can (roughly) be represented by this visual tree:
*
|
|\
| - PHP Docker Container
|\
| - Linked Nginx Container
\
- Volume that Beanstalk auto-magically creates alongside these containers
Since the model here involves using two Docker images, plus a volume/file system independent of those Docker images, I'm not sure how that works. In my head I keep thinking that it would be better/easier to roll my PHP source files and PHP into one common Docker container, instead of doing whatever magic that Beanstalk is doing to tie everything together.
And I know that Elastic Beanstalk is really only acting as a facade for ECS in this case, where Task definitions and the like are being created. I have very limited knowledge of ECS but I'd like to keep my options open, in case I wanted to manually create an ECS task (using Fargate, for instance), instead of relying on Beanstalk to do it for me. And I'm worried that Beanstalk is doing something magical with that volume that would make things difficult to manually write a Task definition, if I wanted to go down that route.
What is the best practice for packaging PHP applications in a Docker environment, when the reverse proxy (be it Nginx or Apache or whatever) is in a separate container? Can anyone provide a better explanation (or correct any misunderstandings) of how this works? And how would I do the equivalent of what Beanstalk is doing here, in ECS, for a PHP application?
You have several options to build this.
The easiest is to have one ECS service per web app, with two containers each (one container for the PHP app and one container for nginx).
The only exposed port in each service is nginx's port 80, mapped to an ECS dynamic port.
Each exposed service should have an ALB that redirects traffic to the nginx port.
In this setup nginx is not used as a load balancer, only as the front web server.
Edit:
Your Dockerfile for the php app should look like this:
...
# Add Project files.
COPY . /home/usr/src
...
And for dev mode your docker-compose:
version: '3.0'
services:
php:
build:.
depends_on:
...
environment:
...
ports:
...
tty: true
working_dir: /home/usr/src
volumes:
- .:/home/usr/src
Then locally, use docker-compose and live-edit your files in the container.
In production mode, the files are copied into the container at build time.
Is it clearer?
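For completeness, the nginx side of such a dev compose file could look roughly like this (paths, port and the config file are assumptions, not part of the original answer):
  nginx:
    image: nginx:alpine
    depends_on:
      - php
    ports:
      - "8080:80"
    volumes:
      - .:/home/usr/src:ro                               # same source tree, read-only, for static files
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # assumed vhost that passes *.php to the php service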

Run PHPUnit in a Docker container and start containers it depends on

Background
A PHP application is running in a Docker container. With docker-compose, this container is run in a configuration together with a postgres database container and a lot of other containers.
To run my PHPUnit tests in PhpStorm, I have created a Docker Remote Interpreter test configuration which runs the PHP application container.
Problem
The container complains that it cannot connect to the database, which of course is not started, because it is configured in the docker-compose.yml and not started up along with the single container used by PhpStorm.
Attempts to solve
A PHP Remote Debug configuration can use a deployment, so I tried to create a Docker Deployment configuration that uses the docker-compose.yml (therefore starting all containers) and is launched before the PHPUnit run, but I cannot select this deployment.
I also tried starting the Docker Compose containers except the one for the PHP app, and having the PhpStorm-started container connect to them. This proved difficult because they are on different networks, so the PHP app container still complains about not finding the database; I cannot configure which network the container uses in PhpStorm.
tl;dr
My PhpStorm project is a PHP application. This application can run in a Docker container which serves through nginx. I can run my PHPUnit tests in the container with a run configuration, but it also needs other containers which are not started up automatically.
Question
How can I use PHPStorm to run PHPUnit tests in a PHP application container together with the containers it depends on (already described in a docker-compose.yml)?
The full answer is too long to write out here; I hope this 5-minute video by JetBrains TV helps you.
https://www.youtube.com/watch?v=I7aGWO6K3Ho
In short you need:
Configure Docker instance in PHPStorm
Configure new PHP Interpreter from Docker container
Configure PHPUnit to use your new interpreter
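Roughly speaking, once the interpreter is based on your docker-compose.yml, a test run boils down to something like the following, which is also why the depends_on services are brought up (the service name here is an assumption):
docker-compose -f docker-compose.yml run --rm php-app vendor/bin/phpunit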

gcloud, where's the code? How can I update the deployed code manually?

I deployed a local project using the gcloud command; locally, everything was OK. I'm getting a 500 error in the browser, and I still have hundreds of questions. Where is the code? What is gcloud doing behind the scenes when I do a deploy? Why do I see 3 instances when I just deployed a single project?
I SSHed into each of the three compute instances I see and I couldn't find the code. I want to do something very silly and easy: just go to the index.php file and add echo '1';die; to check that this is the code I can play with to make my project work on Google Cloud Platform.
Because I'm a noob at this, I won't be able to tweak my project to work perfectly on Google Cloud at first, so this is probably silly, but it's a must!
My current and only config file:
runtime: php
vm: true
runtime_config:
  document_root: public
You are using the AppEngine Flexible Environment (what used to be called Managed VMs). This environment uses Docker to build an image out of your application code and run it in a container.
See the Additional Debugging part of the Managed VMs PHP tutorial for more info on how to debug on the machine. After SSHing into an instance, you are on the host machine, but you still need to run additional commands to access the container, which is where your application code is running. The following command will get you into the container:
sudo docker exec -t -i gaeapp /bin/bash
Once there, you can edit your running application by running the following commands
apt-get update
apt-get install vim # or your editor of choice
vi /app/public/index.php # I am assuming this is where your file is
Yes, you have to install vim on the container because it will not be installed by default, as this is your production image.
Also be sure to check the Logging page in Developer Console, as that is where the 500 error message will be logged, and it's a lot easier than going through these steps!
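If you prefer the command line, current versions of the gcloud CLI can also stream those request logs (assuming the default service):
gcloud app logs tail -s default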
