I'm running a TeamCity agent that spawns a Docker container and runs several tasks inside that (PHP) container, such as phpunit, phplint, and composer. If all tests pass, I zip the content inside the container, creating a phpproject.zip.
After it's done I would like to push that phpproject.zip as an artifact back to the TeamCity server from inside the Docker container.
My Docker container is running with the --rm parameter to remove the container after the script is done.
Is this possible?
Tim
You can map a directory on the Docker host into the container with the -v parameter and publish the artifacts there:
...
# Your build path and build command here
VOLUME /foo/build
ENTRYPOINT make
In TeamCity, configure a Docker build step to build the Dockerfile, and name:tag the resulting image. Add a second Docker step in which you set the Docker command to run, with the arguments:
-v /tmp/build:/foo/build --rm <name of image>
The result is then available in /tmp/build on the agent, and you can configure that as an artifact path in the project's settings, or alternatively echo "##teamcity[publishArtifacts '/tmp/build']" somewhere in the build.
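By hand, the equivalent of those two steps would look something like this (the image tag php-build is a placeholder):
# build the image from the Dockerfile above (the tag name is an example)
docker build -t php-build .
# run it; /tmp/build on the agent is mounted over the container's /foo/build,
# so the files make writes there survive the --rm cleanup
docker run --rm -v /tmp/build:/foo/build php-build
# from inside a build step, this service message tells TeamCity to publish the results
echo "##teamcity[publishArtifacts '/tmp/build']"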
Hello dear community,
I am trying to accomplish something very simple: I want to start a php-fpm service in a Docker container using a Dockerfile. My Dockerfile content is posted below:
FROM debian
RUN apt-get update && apt-get install php -y && apt-get install php7.3-fpm -y && service php7.3-fpm start
When I build this image from the dockerfile and run it as a container, the php-fpm service is not active.
I even tried it with using docker's "interactive mode" (-i arg) to ensure that the container was not exiting in the case that the service was running as a daemon.
I am confused because the command RUN service php7.3-fpm start in my Dockerfile should have started the service.
To successfully start the service inside my container I actually have to manually log into it using the command docker exec -it #containerID bash and run the command service php7.3-fpm start myself, and then the service works and becomes active.
I don't understand why the php-fpm service is not starting automatically from my Dockerfile, any help would be very much appreciated. Thanks in advance!
To a first approximation, commands like service don't work in Docker at all.
A Docker container runs only a single foreground process. That's not usually an init system, or if it is, it's just enough to handle some chores like zombie process cleanup. Conversely, a Docker image only contains a filesystem image and some metadata on how to start that process, but it does not persist any running processes. So for example if you
RUN service php7.3-fpm start
it might record in some file that the service was supposed to have been started, but once the RUN command completes, the running process doesn't exist at all any more.
The easiest way to get a running PHP-FPM setup is to use the Docker Hub php image:
FROM php:7.3-fpm
This should do all of the required setup, including arranging for the FPM server to run as the main container command; you just need to COPY your application code in.
If you really want to run it yourself, you need to make it the main command of your custom image:
CMD ["php-fpm"]
as is done in php:7.3-fpm's Dockerfile.
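For instance, a minimal Dockerfile along these lines (the src/ source path is an assumption about your project layout) is often all you need:
FROM php:7.3-fpm
# copy your application code into the image; /var/www/html is the base image's default working directory
COPY src/ /var/www/html/
# no CMD needed: the base image already declares php-fpm as the main container process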
I have created a command-line application using Symfony 3.4 which doesn't need to display any web page.
I generally run the commands like the following:
php bin/console MY_COMMAND_NAME
I want to dockerize the application and share it with others, so inside the root directory of my project I created a docker-compose.yml file, which looks like the following:
version: "3.3"
services:
web:
image: php:7.3-cli
Then I ran docker-compose up; after that I checked the PHP version with the following command and it showed me the correct version:
docker run php:7.3-cli php -v
However, when I ran docker ps, it didn't show any container running.
My question is how to run the commands inside my project root directory. FYI, I am using Docker Toolbox on Windows 10 Home Edition, and my project location is:
C:\Users\{my_user_name}\Desktop\folder_1\folder_2
The Docker container needs to have a long-running process defined in CMD to stay running, and php-cli is not that. If you run docker-compose up, you'll see something like this:
$ docker-compose up
Creating network "tempphpdocker_default" with the default driver
Pulling web (php:7.3-cli)...
7.3-cli: Pulling from library/php
b8f262c62ec6: Pull complete
a98660e7def6: Pull complete
4d75689ceb37: Pull complete
639eb0368afa: Pull complete
2cdbfdb779b1: Pull complete
e0b637fa9606: Pull complete
da7333b0ef25: Pull complete
01d65ff46009: Pull complete
673e50bed3b9: Pull complete
bf6c6e34305d: Pull complete
Digest: sha256:1453f5ef0d4d1d424ed8114dd90a775bdec06cc6fb3bbae9521dcb4ca0c8ca90
Status: Downloaded newer image for php:7.3-cli
Creating tempphpdocker_web_1 ...
Creating tempphpdocker_web_1 ... done
Attaching to tempphpdocker_web_1
web_1 | Interactive shell
web_1 |
tempphpdocker_web_1 exited with code 0
The exit code is 0. This means your command in the docker image php:7.3-cli has successfully run and finished.
To properly dockerize your application, you should override this by writing your own Dockerfile with proper COPY calls that bundle your CLI program into it. Your Dockerfile should probably look something like this:
FROM php:7.3-cli
RUN mkdir -p /opt/workdir/bin
RUN mkdir -p /opt/workdir/vendor
COPY bin/ /opt/workdir/bin
COPY vendor/ /opt/workdir/vendor
WORKDIR /opt/workdir
CMD php ./bin/console COMMAND
You can simply build and run this Dockerfile, or, if you prefer docker-compose, you can define a docker-compose.yml in the same folder as the Dockerfile:
version: "3.3"
services:
web:
image: php-custom
build: ./
Please note that a dockerized application can only access files and folders inside the Docker image. You should bind-mount volumes from your local file system into the container before it can actually work on your filesystem.
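For example (the ./data host path is a placeholder), extending the compose file above:
version: "3.3"
services:
  web:
    image: php-custom
    build: ./
    volumes:
      - ./data:/opt/workdir/data   # bind-mount a host folder so the CLI can read and write real files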
A quick and dirty fix to keep your container running is to just override the container command in docker-compose:
version: "3.3"
services:
web:
image: php:7.3-cli
command: tail -f /dev/null
When you run docker-compose up, it will keep the container running, but the container will do nothing; it just gives you a way to run commands inside it:
docker exec -it php-cli_web_1 bash
My question is how to run the commands inside my project root directory.
As mentioned by #David, you need to mount your host project to the container in docker-compose.
For instance, if your project is placed on the host at /home/myproject, mount it within docker-compose and it will be available inside the container. Then you can update the command in your docker-compose file to run the script.
Keep in mind that the life of the container is the life of the docker-compose command: when execution completes, the container dies. So your container will run only until /app/your_script.php finishes.
version: "3.3"
services:
web:
image: php:7.3-cli
command: php:7.3-cli /app/your_script.php
volumes:
- /home/myporject:/app
I'm looking for a way to execute Symfony's console after a Docker container is started, for example to run a database migration. I'm using an Alpine php-fpm image, which has the following command at the end of its Dockerfile:
CMD ["php-fpm"]
When I try to override this in my docker-compose file, either php-fpm won't start or the Symfony console command won't run. Ideally the execution should wait a few seconds, to ensure that the database is started.
What would be a good solution for this scenario?
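One common pattern (a sketch of one approach, not from the question itself; the sleep and the doctrine:migrations:migrate command are assumptions about the setup) is a wrapper entrypoint that runs the console command and then hands off to php-fpm:
#!/bin/sh
# entrypoint.sh - wait briefly for the database, run migrations, then start php-fpm
set -e
sleep 5   # crude wait; a more robust setup would poll the database until it accepts connections
php bin/console doctrine:migrations:migrate --no-interaction
exec php-fpm   # exec replaces the shell, so php-fpm becomes the main (PID 1) process
You would COPY this script into the image, mark it executable, and point ENTRYPOINT at it.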
So I recently discovered Docker and Vagrant, and I'm starting a new PHP project in which I want to use both:
Vagrant in order to have an interchangeable environment that all the developers can use.
Docker for production, but also inside the vagrant machine so the development environment resembles the production one as closely as possible.
The first approach is to have all the definition files together with the source code in the same repository with this layout:
/docker
/machine1-web_server
/Dockerfile
/machine2-db_server
/Dockerfile
/machineX
/Dockerfile
/src
/app
/public
/vendors
/vagrant
/Vagrantfile
So the Vagrant machine, on provision, runs all Docker "machines" and sets up databases and source code properly.
Is this a good approach? I'm still trying to figure out how this will work in terms of deployment to production.
Is this a good approach?
Yes; at least it has worked for me for a few months now.
The difference is that I also have a docker-compose.yml file.
In my Vagrantfile there is a first provisioning section that installs docker, pip, and docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
  if ! type docker >/dev/null; then
    echo -e "\n\n========= installing docker..."
    curl -sL https://get.docker.io/ | sh
    echo -e "\n\n========= installing docker bash completion..."
    curl -sL https://raw.githubusercontent.com/dotcloud/docker/master/contrib/completion/bash/docker > /etc/bash_completion.d/docker
    adduser vagrant docker
  fi
  if ! type pip >/dev/null; then
    echo -e "\n\n========= installing pip..."
    curl -sk https://bootstrap.pypa.io/get-pip.py | python
  fi
  if ! type docker-compose >/dev/null; then
    echo -e "\n\n========= installing docker-compose..."
    pip install -U docker-compose
    echo -e "\n\n========= installing docker-compose command completion..."
    curl -sL https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
  fi
SCRIPT
and finally a provisioning section that fires docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
  cd /vagrant
  docker-compose up -d
SCRIPT
There are other ways to build and start Docker containers from Vagrant, but using docker-compose allows me to externalize any Docker specifics out of my Vagrantfile. As a result this Vagrantfile can be reused for other projects without changes; you would just have to provide a different docker-compose.yml file.
Another thing I do differently is to put the Vagrantfile at the root of the project (and not in a vagrant directory), as that is the place humans and tools (some IDEs) expect to find it. PyCharm does; PhpStorm probably does too.
I also put my docker-compose.yml file at the root of my projects.
In the end, for developing I just go to my project directory and fire up Vagrant, which tells docker-compose to (build if necessary, then) run the Docker containers.
I'm still trying to figure out how this will work in terms of deployment to production.
For deploying to production, a common practice is to provide your Docker images to the ops team by publishing them on a private Docker registry. You can either host such a registry on your own infrastructure or use online services that provide them, such as Docker Hub.
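For example (the registry host and image name are placeholders):
# tag the locally built image with the registry's address, then push it
docker tag myproject_web registry.example.com/myproject/web:1.0.0
docker push registry.example.com/myproject/web:1.0.0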
Also provide the ops team with a docker-compose.yml file that defines how to run and link the containers. Note that this file should not use the build: instruction but rely instead on the image: instruction. Who wants to build/compile stuff while deploying to production?
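As an illustration (service names, registry, and tags are placeholders), such a file might look like:
version: "3.3"
services:
  web:
    image: registry.example.com/myproject/web:1.0.0   # pre-built image; no build: instruction
    ports:
      - "80:80"
  db:
    image: registry.example.com/myproject/db:1.0.0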
This Docker blog article can help figuring out how to use docker-compose and docker-swarm to deploy on a cluster.
I recommend using Docker for development too, in order to get full replication of dependencies. Docker Compose is the key tool.
You can use a strategy like this:
docker-compose.yml
db:
  image: my_database_image
  ports: ...
machinex:
  image: my_machine_x_image
web:
  build: .
  volumes:
    - '/path/to/my/php/code:/var/www'
In your Dockerfile you can specify the dependencies to run your PHP code.
Also, I recommend keeping my_database_image and my_machine_x_image as separate projects with their own Dockerfiles, because they can perfectly well be reused with other projects.
If you are using a Mac, you are already using a VM called boot2docker.
I hope this helps.
I have a Docker base image that runs CentOS 6.5. This image is saved on my computer. I could not find anything that talks about how to add more packages to this base image. So, for example, I have this base image of CentOS 6.5 and I need to add postgresql 9.3 and php to it. Is there a way, once you already have a base image made, to add more packages to that base image?
That's the whole purpose of the Dockerfile: build something on top of an image.
Create your Dockerfile
Build the new image and tag it with docker build -t <tag> <path/to/build/context>
Then if you want to share it, push it to your private registry or to the docker hub to make it world accessible (docker push <tag>).
The build context of step 2 is the directory containing your Dockerfile. For instance, if you run the command in the directory where your Dockerfile is, it would be docker build -t <tag> .
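Putting steps 2 and 3 together (the repository name myuser/centos-pg-php is an example; pushing to Docker Hub requires your username as the namespace):
docker build -t myuser/centos-pg-php .
docker push myuser/centos-pg-php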
You can use Dockerfiles and docker build to do this, e.g.:
FROM yourCentos
MAINTAINER your name
RUN yum install ...
CMD ...
And then docker build -t myimage . in the directory where you created the Dockerfile (image tags must be lowercase).
Or you can modify your images via the CLI (not the preferred way!) and commit them.
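For completeness, that commit workflow (container and image names are examples; mycentos-base stands in for your saved CentOS 6.5 image) looks roughly like this:
# start a container from the base image and install packages by hand
docker run -it --name temp mycentos-base bash
#   ...inside the container: yum install -y postgresql php, then exit
# snapshot the modified container's filesystem as a new image
docker commit temp mycentos:with-pg-php
docker rm temp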