I have a local Docker environment and Ansible scripts to start/stop it, and all devs use it for development. Now I need to add a private repository as a dependency of one of the projects, so I need a way to pass the developer's private SSH key into the Docker instance so that Composer can use it to install that project (otherwise Composer prompts for a user/password, which is not very good in Ansible). To copy the SSH key I made a task like this:
- name: Copy SSH private key to container
  shell: docker cp {{ pathToSshPrivateKey }} container:/home/www-data/.ssh/id_rsa
but how can I tell Composer to use that key? I only found that to force Composer to use the key instead of prompting for a user/password I need to run it with -n, but how do I provide the path to that key?
I use something like the following to allow me to execute composer commands with SSH access from within a docker container:
docker run --rm \
--user $(id -u):$(id -g) \
-v $HOME/.ssh:/var/www/.ssh:ro \
-v $HOME/.composer:/.composer \
-v $(pwd):/var/www \
custom-image-name:tag composer install -n
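A related sketch, if you would rather not mount the key file itself: forward the host's ssh-agent socket instead (this assumes a Linux host with an agent running and the key already added via ssh-add):

docker run --rm \
    --user $(id -u):$(id -g) \
    -v $SSH_AUTH_SOCK:/ssh-agent \
    -e SSH_AUTH_SOCK=/ssh-agent \
    -v $(pwd):/var/www \
    custom-image-name:tag composer install -n

The private key never enters the container this way; you may still need a known_hosts entry for the Git host inside the image.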
After deploying the Laravel application, I need to run these commands:
docker exec -it php bash
composer update --ignore-platform-reqs
exit
cd back/src
sudo chmod o+w ./storage/ -R
But when other developers deploy it, this is inconvenient. How can I include these commands in a Dockerfile or docker-compose.yml? Ideally, everything would be ready right after the build, as soon as docker-compose up -d finishes.
Composer does not run as the system (root) user, so I have to run it from another container.
In the docker-compose.yml file you can set the command that updates the packages. Also set the volume, which will grant the correct rights.
php:
  command: ["composer", "update", "--ignore-platform-reqs"]
  volumes:
    - ./storage/:/app/storage:rw
But all of this does depend on the image you're using. Which docker image do you use?
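If you also need the chmod step from the question, another option is a small wrapper entrypoint that prepares the app once at container start and then hands off to the image's normal process. A sketch, assuming the app lives in /app and your image lets you override the entrypoint:

#!/bin/sh
# entrypoint.sh - prepare the app at startup, then exec the real command
set -e
composer update --ignore-platform-reqs
chmod -R o+w /app/storage
exec "$@"

Wire it in with entrypoint: ["/entrypoint.sh"] in docker-compose.yml; the final exec "$@" keeps the container's main process in the foreground.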
I'm working on a Laravel project with fully continuous deployments to Cloud Run and using Cloud SQL as storage service. Right now, I need to perform php artisan migrate manually using the cloud_sql_proxy within the local environment.
Does anyone know whether it is possible to perform this step automatically, possibly as part of the Dockerfile?
This is my current Dockerfile:
FROM php:7
ENV PORT=8080
ENV HOST=0.0.0.0
RUN apt-get update -y \
&& apt-get install --no-install-recommends -y openssl zip unzip git libonig-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN ["/bin/bash", "-c", "set -o pipefail && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer"]
RUN docker-php-ext-install pdo mbstring
WORKDIR /app
COPY . /app
RUN composer validate && composer install
EXPOSE 8080
CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8080"]
Thanks for any help!
It's not recommended to put the migration script in the Dockerfile, as it would then be triggered on every future deployment. You need to run it only ONCE, and that execution should be triggered by a build script or by a developer.
For migrations that introduce breaking changes, either on commit or on rollback, it's mandatory to have a full stop, and of course a rollback planned accordingly.
Also pay attention that a commit/push should not immediately trigger the new migrations. Often these are not part of the regular CI/CD pipeline that goes to production.
Make sure migrations are deployed manually and not under CI/CD.
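For reference, the manual step the question already describes looks roughly like this (a sketch; the instance connection name and the port are placeholders to adapt to your .env):

# run the Cloud SQL proxy locally, then apply migrations once against it
./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306 &
php artisan migrate --force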
After you deploy a service, you can create a new revision and assign a tag that allows you to access the revision at a specific URL without serving traffic.
A common use case for this, is to run and control the first visit to this container. You can then use that tag to gradually migrate traffic to the tagged revision, and to rollback a tagged revision.
To deploy a new revision of an existing service to production:
gcloud beta run deploy myservice --image IMAGE_URL --no-traffic --tag TAG_NAME
The tag allows you to test the new revision directly at a specific URL, without serving traffic; that very first request is also where you can run the migration. The URL starts with the tag name you provided: for example, if you used the tag name green on the service myservice, you would test the tagged revision at the URL https://green---myservice-abcdef.a.run.app
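That first request can be made directly against the tagged URL from the example above (a sketch; the Authorization header is only needed if the service does not allow unauthenticated calls):

curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
    https://green---myservice-abcdef.a.run.app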
I got the migrations running with every deployment via ENTRYPOINT.
Details are in the reply here: https://stackoverflow.com/a/69088911/867451
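The gist of that approach is a small entrypoint script that runs the migration once per container start and then hands off to the CMD. A sketch based on the Dockerfile from the question (the script name docker-entrypoint.sh is an assumption):

#!/bin/sh
# docker-entrypoint.sh - apply pending migrations, then run the CMD
set -e
php artisan migrate --force
exec "$@"

And in the Dockerfile:

COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8080"]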
I am trying to install the skeleton application of Zend Framework 3 with Docker.
The installation works fine, but I'm not able to run some Composer scripts. In the composer.json there are some custom Composer scripts, which should generally be launched with
composer cs-fix
I would like to launch these commands with the Composer Docker image, using
docker run --rm -ti --volume $PWD:/app composer cs-fix
When I try to do this, I obtain the following error
/docker-entrypoint.sh: line 60: exec: cs-fix: not found
Is my command wrong?
Found it! Instead of trying to run the custom composer script, I need to use the special run-script command, as in
docker run --rm -it --volume $PWD:/app composer run-script "cs-fix"
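For context, such a custom script is just an entry under scripts in composer.json, along these lines (php-cs-fixer is only a hypothetical example of what cs-fix might invoke):

{
    "scripts": {
        "cs-fix": "php-cs-fixer fix src"
    }
}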
I've built an image for the purpose of PHP development, and it became clear to me that I hadn't really thought about how to access the tools I need for everyday development. For example Composer, the package manager for PHP: I need to run it whenever composer.json updates. I thought it was worth installing those tools inside the same image, but then I don't have a way to access them. So, I can:
Create a separate image for Composer and run it in a different container
Install Composer on my host machine.
I'd like to avoid option 2), but then, does it make sense to have a setup like 1)? How did you solve this issue?
Unless you have some quite specific requirements, there is a third option:
Connect to the container using docker exec command:
docker exec -it CONTAINER-NAME/ID COMMAND [ARG...]
Here is the example:
1: Create your application:
echo "<?php phpinfo();" > index.php
2: Start container:
docker run -it --rm --name my-apache-php-app -p 80:80 -v "$PWD":/var/www/html php:5.6-apache
3: Open another terminal window and exec required commands inside running container:
docker exec -it my-apache-php-app bash -c "curl -sS https://getcomposer.org/installer | php"
docker exec -it my-apache-php-app ls
If you need shell inside running container - run:
docker exec -it my-apache-php-app bash
That's it!
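If you want the composer command itself available inside the running container, the same installer can place it on the PATH (a sketch; --install-dir and --filename are standard installer options):

docker exec -it my-apache-php-app bash -c \
    "curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer"
docker exec -it my-apache-php-app composer --version

Note the bash -c: without it, the pipe to php would run on the host instead of inside the container.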
I'm a beginner with Docker, and I'm trying to build my own image: Ubuntu + Nginx + PHP.
So, I have a directory called test. Inside it there are two other directories, app and sites-enabled. There's also a Dockerfile with this content:
FROM ubuntu:trusty
RUN apt-get update && \
apt-get install -y nginx php5-fpm php5-mysql php-apc php5-imagick php5-imap php5-mcrypt php5-gd libssh2-php && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
ADD sites-enabled/ /etc/nginx/sites-enabled/
ADD app/ /app/
EXPOSE 80
CMD ["php5-fpm", "-c", "/etc/php5/fpm"]
CMD ["/usr/sbin/nginx"]
I build this image successfully. Then I create a container with docker run -d image_name. I get the ID, and then I run docker inspect -f "{{.NetworkSettings.IPAddress}}" ID in order to get the IP address of the container.
I need this IP address because I also run HAProxy in another container, so I can configure it to point to the right location.
So, both HAProxy and container with PHP app are running OK. HAProxy is pointing at the right application. PHP application files are uploaded at the right location inside the container.
But Nginx doesn't execute PHP. Instead, when I try to access the application, my index.php source code just gets downloaded as a file.
What could be the problem? Please help.
My first guess was that I'm doing something wrong in the Dockerfile when I run php5-fpm. I've tried a few different ways, but none of them seem to work.
One CMD only. Docker only uses the last CMD in a Dockerfile, so your php5-fpm command is overridden by the nginx one.
If you need "services", look into supervisord or runit.
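A minimal supervisord sketch for the two processes in the question (paths and the -F foreground flag assume the php5/trusty image; verify them for your install):

[supervisord]
nodaemon=true

[program:php5-fpm]
command=php5-fpm -F -c /etc/php5/fpm

[program:nginx]
; nginx.conf already contains "daemon off;" from the RUN echo step
command=/usr/sbin/nginx

And in the Dockerfile, install supervisor and make it the single CMD:

RUN apt-get update && apt-get install -y supervisor && rm -rf /var/lib/apt/lists/*
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-c", "/etc/supervisord.conf"]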