php-fpm docker container not reachable in Google App Engine - php

I am trying to move my Symfony application to Google Cloud. For this I need two containers: one nginx and one php-fpm. Currently I develop locally with docker-compose and run the app on a Compute Engine instance, also via docker-compose. Now I want to switch to a more scalable (and CI-friendly) approach.
My problem is that the Docker image I use does not work in App Engine (it does not work in Cloud Run either). The readiness and health checks are failing, and even with the checks disabled I cannot connect to the service (e.g. from a locally running nginx with its upstream set to the App Engine host). Locally everything works fine.
My workflow is simply to build this image and deploy it to App Engine via
gcloud app deploy --image-url eu.gcr.io/digital-index-ws1920/php-fpm-prod:0.15
Edit: the app.yaml is
runtime: custom
env: flex
When I connect to the App Engine service via the browser I see a 502; with the local nginx it times out. The Dockerfile is:
FROM php:7.4-fpm-alpine3.11
RUN sed -i 's/9000/${PORT}/' /usr/local/etc/php-fpm.d/zz-docker.conf
COPY .docker/php.ini /usr/local/etc/php/conf.d/php.ini
RUN apk --update --no-cache add git
RUN docker-php-ext-install pdo_mysql
COPY --from=composer /usr/bin/composer /usr/bin/composer
WORKDIR /var/www
COPY app /var/www
RUN mkdir http-cache && chown www-data:www-data http-cache/
RUN composer update; composer; composer dump-autoload --no-interaction
CMD php-fpm
EXPOSE ${PORT}
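For reference, the local nginx check mentioned above was essentially an upstream pointing at the App Engine host. A rough sketch of such a test config (the hostname, port, and script path are placeholders, not the real values):
# local nginx test config (sketch): forward FastCGI traffic to the remote php-fpm
server {
    listen 8080;
    location / {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/public/index.php;
        fastcgi_pass my-app.appspot.com:9000;  # placeholder upstream host/port
    }
}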

Related

How to set up and run Laravel from git?

Either I'm missing something, or the whole chain lacks something.
Here's my assumption:
The whole point of containerization in development is to reduce the cost of environment setup and to create a prepared image with all the required pieces.
So when I read that Laravel Sail installs Laravel via containerization, I get excited. I install it following their instructions, and everything works.
Then the problem begins. Because:
After a successful installation, I create a git repo with GitHub's default Laravel .gitignore
Then I push the newly installed Laravel app into my git repo.
Then I ask a developer to start developing it. Please note that:
He does not have PHP installed
He does not have Composer installed
He clones the repo and, as per the installation guide, runs ./vendor/bin/sail up
But the ./vendor folder is (correctly) excluded in .gitignore
Thus his command results in:
bash: ./vendor/bin/sail: No such file or directory
He Googles it, of course, and finds out that people suggest running composer update
He goes to install Composer, then before that PHP, then all the PHP extensions, then ...
Am I missing something here? The whole point of containerization was not having to install the required environment locally.
What is the proper way of running a Laravel app that is not installed from https://laravel.build but cloned from a git repo, WITHOUT having PHP or Composer installed locally?
Update
I found the Bitnami Laravel Docker image, and it's exactly what containers should be.
You are right: the other developer doesn't need to have PHP or Composer installed.
All he/she needs is Docker installed on the local machine.
If you scaffolded the project with what is mentioned in the official Laravel docs under the Getting started section, then you will have a docker-compose.yml file in your project root directory.
For Windows
For Linux
For Mac OS
All the developer has to do after git cloning the repository is to run
docker-compose up --build -d
That's it.
For those struggling with this issue... I've found a command that works perfectly fine.
First of all, you don't need to have any PHP or Composer installed locally; maybe there is a misunderstanding about that. All you need is Docker.
Docker installs everything you need inside containers (essentially a sandbox per project), not on your local machine.
And for projects downloaded from git, for example, that do not have a vendor folder and therefore cannot execute sail up, you can simply execute:
docker run --rm --interactive --tty -v $(pwd):/app composer install
That command will download the Composer image for Docker, if you do not have it yet, and then run composer install inside it. After that you are free to execute ./vendor/bin/sail up (or just sail up if you have configured an alias).
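If you want that shorter sail command, one common form of the alias (added to your ~/.bashrc or ~/.zshrc; this is a shell convention, not something the project creates for you) is:
alias sail='[ -f sail ] && bash sail || bash vendor/bin/sail'  # run the project-local Sail script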
That's all.
The official documentation lists the following command.
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
If you were to clone a Laravel project and run this command in the project root, it would create a very small container with PHP and Composer installed and run composer in the project root to install all PHP dependencies. In effect, this installs the Laravel core code into the cloned project. Once the project is set up this way, the user should create a local .env file to match their development environment.
cp .env.example .env # creates a .env file to be populated for the local environment
With the environment set up, they can now create the application containers in Docker and run the application. Laravel provides the Sail helper for this.
./vendor/bin/sail up -d # runs the docker containers in detached mode
Now it's a matter of setting up and running the Laravel app. (I'm assuming the app uses one of the Laravel starter kits that rely on Node.js. If you have a Blade-only application, you can skip the "npm" commands.)
sail artisan key:generate # (Best Practice) Generate a new application key on each machine
sail artisan migrate # Scaffold the database structure
sail artisan db:seed # (Optional) Seed the database with data
sail npm install # (Optional) Install front-end dependencies (Inertia, Vue, React, others...)
sail npm run dev # (Optional) Run the front-end framework in development mode
With this, the new developer should be running an exact copy of both the project and the development environment as the original developer.
Your project README may include additional steps to set up some other dependencies, but this is the basic workflow for contributing to a Laravel project.
The only prerequisite for this workflow is to have Docker installed and an Internet connection. This is most easily accomplished on Windows, Mac, and Linux by installing Docker Desktop.
Alternate for Older Projects
If you are working on an older project that doesn't use Laravel Sail, but does have a docker-compose.yml file, you should be able to build and run the necessary containers with the following command.
docker-compose up --build -d
Once you have the containers running, you would need to install the project dependencies directly into the container.
docker ps # find the container ID of your project's container
docker exec -it CONTAINER_ID php artisan key:generate
docker exec -it CONTAINER_ID php artisan migrate
docker exec -it CONTAINER_ID php artisan db:seed
docker exec -it CONTAINER_ID npm install
docker exec -it CONTAINER_ID npm run dev
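One step the list above leaves implicit is installing the PHP dependencies inside the container; assuming the image has Composer available (not every image does), that would look like:
docker exec -it CONTAINER_ID composer install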
Of course, Docker Desktop simplifies this process. With a button click you can have a terminal shell open directly in your container, eliminating the need for the docker exec command.

Running Laravel migrations to Cloud SQL as part of continuous deployment

I'm working on a Laravel project with fully continuous deployment to Cloud Run, using Cloud SQL as the storage service. Right now I need to run php artisan migrate manually, using the cloud_sql_proxy from my local environment.
Does anyone know whether it is possible to perform this step automatically, possibly as part of the Dockerfile?
This is my current Dockerfile:
FROM php:7
ENV PORT=8080
ENV HOST=0.0.0.0
RUN apt-get update -y \
&& apt-get install --no-install-recommends -y openssl zip unzip git libonig-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN ["/bin/bash", "-c", "set -o pipefail && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer"]
RUN docker-php-ext-install pdo mbstring
WORKDIR /app
COPY . /app
RUN composer validate && composer install
EXPOSE 8080
CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8080"]
Thanks for any help!
It's not recommended to put the migration script in the Dockerfile's startup command, as it would then be triggered on every container start, not just once. You need to run it only ONCE, and that execution should be triggered by a build script or by a developer.
For migrations that introduce breaking changes, either on commit or on rollback, it's mandatory to have a full stop and, of course, a rollback planned accordingly.
Also note that a commit/push should not immediately trigger the new migrations. Often these are not part of the regular CI/CD pipeline that goes to production.
Make sure migrations are deployed manually, not under CI/CD.
After you deploy a service, you can create a new revision and assign a tag that allows you to access the revision at a specific URL without serving traffic.
A common use case for this is to run and control the first visit to this container. You can then use that tag to gradually migrate traffic to the tagged revision, and to roll back a tagged revision.
To deploy a new revision of an existing service to production:
gcloud beta run deploy myservice --image IMAGE_URL --no-traffic --tag TAG_NAME
The tag allows you to directly test the new revision (or, with its very first request, run the migration) at a specific URL, without serving traffic. The URL starts with the tag name you provided: for example, if you used the tag name green on the service myservice, you would test the tagged revision at https://green---myservice-abcdef.a.run.app
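Once the tagged revision has been verified and the migration has run, traffic can be shifted to it. A rough sketch (exact flags may differ between gcloud versions), reusing the green tag from the example above:
gcloud run services update-traffic myservice --to-tags green=100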
I got the migrations running with every deployment via ENTRYPOINT.
Details are in the reply here: https://stackoverflow.com/a/69088911/867451
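For illustration only (a sketch, not the exact script from the linked answer; the script name and flags are assumptions), such an ENTRYPOINT can run pending migrations and then hand off to the main command:
#!/bin/sh
# docker-entrypoint.sh (hypothetical name): run migrations, then exec the container's CMD
set -e
php artisan migrate --force   # --force skips the interactive confirmation outside local environments
exec "$@"
In the Dockerfile you would COPY this script in, make it executable, and set it as the ENTRYPOINT while keeping the existing CMD.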

How to deploy a dockerized php application

I have a Laravel application and a Docker setup for local development using docker-compose. The source code for the application is kept in Bitbucket, and now I would like to deploy the application to a Linode instance and serve it from the Docker setup. How can this be done? As of now I have a LAMPP image running on Linode, and I push my source code to the corresponding path when a deployment is triggered. Now I would like to use the same Docker image on the server instead of the LAMPP server I am using. How can this be done? Or is this even the correct way of doing it?
It would be helpful if someone could point me to a tutorial or guide for doing this.
If you are locked into staying with Linode, I would try one of these options:
Linode docker-machine driver - note that this is an unofficial docker-machine driver.
Linode containers guide - using Kubernetes, which I usually try to avoid for my small-scale apps.
If you are NOT locked into staying with Linode and you wish to avoid the complexities of Kubernetes, I can tell you I have had success running a docker-machine on Digital Ocean - this solution (like most other docker-machine solutions) makes deployment as easy as running it locally (see the sketch after the links below).
List of docker machine drivers
Digital Ocean docker machine guide
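As a rough illustration of that workflow (the token variable and machine name are placeholders), provisioning a Docker host on Digital Ocean and deploying to it looks something like this:
docker-machine create --driver digitalocean --digitalocean-access-token $DO_TOKEN my-app-host
eval $(docker-machine env my-app-host)   # point the local docker/docker-compose CLI at the remote host
docker-compose up --build -d             # builds and runs the containers on the remote droplet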
As for how to get your PHP code into the container, here is an example Dockerfile I have been using for one of my dockerized PHP apps:
FROM php:7-apache
# Packages
RUN apt-get -y update && apt-get -y install git zip
RUN a2enmod rewrite && docker-php-ext-install sockets
# App
COPY . .
# Composer
COPY private/composer.phar /usr/local/bin/composer
RUN chmod +x /usr/local/bin/composer
# Run composer install only when a composer.json is present (POSIX test, since RUN uses /bin/sh)
RUN [ ! -f composer.json ] || composer install --ansi --no-interaction
You can adjust it to your needs.
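To try it on the server, the image can then be built and run directly (the image name here is just an example):
docker build -t my-php-app .          # build the image from the Dockerfile above
docker run -d -p 80:80 my-php-app     # php:7-apache serves on port 80 inside the container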

Docker mysql environment

I have a Dockerfile based on php:alpine, and I'm trying to add MySQL to the build.
FROM php:alpine
COPY test-data/ /var/www/
RUN apk add --update --no-cache \
mysql
# Composer
RUN curl -s https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer
ENV COMPOSER_ALLOW_SUPERUSER=1
WORKDIR /var/www
My problem is that after a successful build, I run the container with the MySQL environment overrides, but I can't log in to MySQL inside the container.
$ docker run -e MYSQL_DATABASE=homestead -e MYSQL_USER=homestead -e MYSQL_PASSWORD=secret -e MYSQL_ROOT_PASSWORD=secret -ti --rm idecardo /bin/sh
Testing the MySQL login fails:
$ mysql -uroot -p # with password "secret"
Since you are a newbie: always try to learn by copying known-working code and breaking down what is being done in that code.
For Docker:
You can see the docker repository for mysql - https://hub.docker.com/_/mysql/
Under the description section you will see links to the Dockerfiles for the different MySQL versions.
Take one of them as a source of inspiration, for example the 8.0/Dockerfile: https://github.com/docker-library/mysql/blob/fc3e856313423dc2d6a8d74cfd6b678582090fc7/8.0/Dockerfile
Notice that after the MySQL installation instructions in that Dockerfile there are ENTRYPOINT and CMD instructions.
In general:
Since you want PHP and MySQL to work from Docker, my advice is to look into docker-compose. Docker containers can be run in a variety of ways, and docker-compose lets you launch several containers and share folders between them. In that scenario you would launch a separate MySQL container and a separate PHP container, share host data folders between them, and run your code (see the sketch below).
Also, watch some video tutorials online - they explain in detail the basics of what Docker is all about and how it works.
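A minimal sketch of that two-container setup, reusing the environment overrides from the question (image tag and paths are illustrative; note that the MYSQL_* variables are only acted on by the official mysql image's entrypoint, which is why setting them on a php container has no effect):
# docker-compose.yml (sketch)
db:
  image: mysql:8.0
  environment:
    MYSQL_DATABASE: homestead
    MYSQL_USER: homestead
    MYSQL_PASSWORD: secret
    MYSQL_ROOT_PASSWORD: secret
php:
  build: .                  # your php:alpine based Dockerfile, without the apk mysql install
  volumes:
    - ./test-data:/var/www
  links:
    - db                    # reach MySQL from PHP at hostname "db"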

Project layout with vagrant, docker and git

So I recently discovered Docker and Vagrant, and I'm starting a new PHP project in which I want to use both:
Vagrant in order to have an interchangeable environment that all the developers can use.
Docker for production, but also inside the vagrant machine so the development environment resembles the production one as closely as possible.
The first approach is to have all the definition files together with the source code in the same repository with this layout:
/docker
/machine1-web_server
/Dockerfile
/machine2-db_server
/Dockerfile
/machineX
/Dockerfile
/src
/app
/public
/vendors
/vagrant
/Vagrantfile
So the Vagrant machine, on provision, runs all the Docker "machines" and sets up the databases and source code properly.
Is this a good approach? I'm still trying to figure out how this will work in terms of deployment to production.
Is this a good approach?
Yes; at least it has been working for me for a few months now.
The difference is that I also have a docker-compose.yml file.
In my Vagrantfile there is a first provisioning section that installs Docker, pip, and docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
if ! type docker >/dev/null; then
echo -e "\n\n========= installing docker..."
curl -sL https://get.docker.io/ | sh
echo -e "\n\n========= installing docker bash completion..."
curl -sL https://raw.githubusercontent.com/dotcloud/docker/master/contrib/completion/bash/docker > /etc/bash_completion.d/docker
adduser vagrant docker
fi
if ! type pip >/dev/null; then
echo -e "\n\n========= installing pip..."
curl -sk https://bootstrap.pypa.io/get-pip.py | python
fi
if ! type docker-compose >/dev/null; then
echo -e "\n\n========= installing docker-compose..."
pip install -U docker-compose
echo -e "\n\n========= installing docker-compose command completion..."
curl -sL https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
fi
SCRIPT
and finally a provisioning section that fires docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
cd /vagrant
docker-compose up -d
SCRIPT
There are other ways to build and start Docker containers from Vagrant, but using docker-compose allows me to keep all the Docker specifics out of my Vagrantfile. As a result, this Vagrantfile can be reused for other projects without changes; you would just have to provide a different docker-compose.yml file.
Another thing I do differently is to put the Vagrantfile at the root of the project (and not in a vagrant directory), as that is where humans and tools (some IDEs) expect to find it. PyCharm does, and PhpStorm probably does too.
I also put my docker-compose.yml file at the root of my projects.
In the end, for development I just go to my project directory and fire up Vagrant, which tells docker-compose to (build if necessary and then) run the Docker containers.
I'm still trying to figure out how this will work in terms of deployment to production.
For deploying to production, a common practice is to provide your Docker images to the ops team by publishing them to a private Docker registry. You can either host such a registry on your own infrastructure or use an online service that provides one, such as Docker Hub.
Also provide the ops team with a docker-compose.yml file that defines how to run and link the containers. Note that this file should not use the build: instruction but should instead rely on the image: instruction. Who wants to build/compile stuff while deploying to production?
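For example, a production-oriented compose file under those constraints (the registry host and image names are placeholders) would reference prebuilt images only:
# docker-compose.yml handed to ops (sketch): image:, never build:
web:
  image: registry.example.com/myproject/web:1.0.0
  ports:
    - "80:80"
db:
  image: registry.example.com/myproject/db:1.0.0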
This Docker blog article can help figuring out how to use docker-compose and docker-swarm to deploy on a cluster.
I recommend using Docker for development too, in order to get full replication of dependencies. Docker Compose is the key tool.
You can use a strategy like this:
docker-compose.yml
db:
image: my_database_image
ports: ...
machinex:
image: my_machine_x_image
web:
build: .
volumes:
- '/path/to/my/php/code:/var/www'
In your Dockerfile you can specify the dependencies needed to run your PHP code (see the sketch below).
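A minimal sketch of such a Dockerfile for the web service (the base image and extension are assumptions about your stack, not requirements):
# Dockerfile for the "web" service (sketch)
FROM php:7-apache
# PHP extensions the application needs, e.g. for talking to the db container
RUN docker-php-ext-install pdo_mysql
# No COPY needed here: the compose file above mounts the code into /var/www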
Also, I recommend keeping my_database_image and my_machine_x_image as separate projects with their own Dockerfiles, because they can perfectly well be reused in other projects.
If you are using a Mac, you are already using a VM called boot2docker.
I hope this helps.
