After deploying the Laravel application, I need to run these commands:
docker exec -it php bash
composer update --ignore-platform-reqs
exit
cd back/src
sudo chmod o+w ./storage/ -R
But this is inconvenient when other developers deploy the project. How can I include these commands in a Dockerfile or docker-compose.yml? Ideally, running docker-compose up -d right after the build would bring everything up ready to use.
Composer does not run on the host system (as root), so I have to run it from inside the container.
In the docker-compose.yml file you can set the command to update the packages, and also set the volume, which will give the correct rights:
php:
  command: composer update --ignore-platform-reqs
  volumes:
    - ./storage/:/app/storage:rw
But all of this does depend on the image you're using. Which docker image do you use?
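If it's the official php image, for example, here is a rough sketch that bakes both steps into the build itself, so docker-compose up -d starts a ready container (the tag, the back/src path, and the /app layout are assumptions based on your commands above):
FROM php:7.2-fpm
# grab the composer binary from the official composer image
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
WORKDIR /app
COPY back/src /app
# the same steps you currently run by hand, executed at build time
RUN composer update --ignore-platform-reqs \
 && chmod -R o+w /app/storage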
Related
I have a local Docker environment and Ansible scripts to start/stop the environment, and all devs use it for development. Now I need to add a private repository as a dependency of one of the projects, so I need a way to pass a developer's private SSH key to the Docker instance so Composer can use it to install that project (otherwise it prompts for user/password, which does not work well in Ansible). To copy the SSH key I made a task like this:
- name: Copy SSH private key to container
  shell: docker cp {{pathToSshPrivateKey}} container:/home/www-data/.ssh/id_rsa
But how can I tell Composer to use that key? I only found that to force using the key instead of user/password I need to run Composer with -n, but how do I provide the path to that key?
I use something like the following to allow me to execute composer commands with SSH access from within a docker container:
docker run --rm \
--user $(id -u):$(id -g) \
-v $HOME/.ssh:/var/www/.ssh:ro \
-v $HOME/.composer:/.composer \
-v $(pwd):/var/www \
custom-image-name:tag composer install -n
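Note that Composer itself never needs to be pointed at the key: it shells out to git, which in turn uses ssh, and ssh picks up whatever keys it finds in the mounted ~/.ssh directory (here /var/www/.ssh, assuming that is the container user's home directory in your image).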
As far as I understand, the official Composer image is meant to be used as a PHP dependency-management tool and not like other images you use within a docker-compose file. So basically I can use the Docker container if I don't have, or don't want to install, Composer locally/natively.
So, I have created a root directory for my app which is empty at the moment, but if I run, for example, docker run --rm -it -volume $PWD:/app composer create-project laravel/laravel . I can't see the Laravel app being installed within my directory. Have I misunderstood something, or any ideas what I'm doing wrong?
You should use either the flag -v or --volume.
This command worked for me, just make sure you are in an empty directory:
docker run --rm -it -v $PWD:/app composer create-project laravel/laravel .
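One thing to watch, assuming you are on Linux: files created this way may end up owned by root. You can run the container as your own user to avoid that:
docker run --rm -it -v $PWD:/app --user $(id -u):$(id -g) composer create-project laravel/laravel .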
Hello, I am creating a Dockerfile for my Laravel project. This is it so far:
FROM php:7.2-cli
FROM nginx
FROM node:8
MAINTAINER zachary tyhacz
# does not install mysql
# mysql is outside container
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /var/www/public
COPY . /var/www/public
COPY nginx.conf /etc/nginx/sites-available/domain
RUN ln -s /etc/nginx/sites-available /etc/nginx/sites-enabled
RUN npm install
RUN composer install
# sets up the database
CMD php artisan migrate:fresh --seed
# resets configuration files
CMD php artisan config:cache
# refreshes routes
CMD php artisan route:cache
# enables serve
CMD php artisan serve --host=0.0.0.0 --port=436
EXPOSE 8080/udp
EXPOSE 8080/tcp
EXPOSE 80/udp
EXPOSE 80/tcp
EXPOSE 436/tcp
EXPOSE 436/udp
When I run docker build to create and tag the image, it gets to this instruction:
Step 6/22 : RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
and it throws this error and stops.
/bin/sh: 1: php: not found
curl: (23) Failed writing body (0 != 16133)
I am not sure what is going wrong. I think it could be a permissions issue or a directory issue.
Thanks for any suggestions to help me out.
Also, the reference I used in creating this Dockerfile is this:
https://buddy.works/guides/laravel-in-docker
You can only have one base image using FROM in a Dockerfile. Basically, that tells docker what to start with. In your case, you have several FROMs, so it appears that Docker simply takes the last one you give it, in this case node:8. So PHP is never being installed.
To fix this issue, you'll need to pick a single base image (for example php), and install your other dependencies on top of that, so you could manually install nginx and node on top of the php image using RUN. You may also want to consider building a separate nginx image. This is considered good practice to separate your services into different images when possible.
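A rough sketch of that single-base-image approach, assuming you keep php:7.2-cli and pull Node 8 in from NodeSource (one common way to get Node onto the Debian base; nginx would then live in its own image):
FROM php:7.2-cli
# single base image; everything else is installed on top of it
RUN apt-get update -y && apt-get install -y openssl zip unzip git curl \
 && curl -sL https://deb.nodesource.com/setup_8.x | bash - \
 && apt-get install -y nodejs
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /var/www/public
COPY . /var/www/public
RUN npm install && composer install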
Also, only the last CMD in a Dockerfile takes effect, so your earlier artisan commands would never run; instead of multiple CMD entries, use a small startup shell script. For example:
#!/usr/bin/env bash
set -e
php artisan migrate:fresh --seed
php artisan config:cache
php artisan route:cache
exec php artisan serve --host=0.0.0.0 --port=436
Put that in a script called start.sh or something like that, then in your Dockerfile, use
CMD ["./start.sh"]
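If start.sh is not already executable in your build context, make sure the image copies it and marks it executable before the CMD line, e.g.:
COPY start.sh .
RUN chmod +x ./start.sh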
Then, you'll probably also want to start a second container for your nginx service. You could do this manually using docker run, but I suggest checking out docker-compose. It helps you build and run multiple containers at once.
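A minimal docker-compose.yml sketch of that two-container setup (the service names, ports, and nginx.conf path here are assumptions to adapt to your project):
version: "3"
services:
  app:
    build: .
    ports:
      - "436:436"
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app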
I am trying to install the skeleton application of Zend Framework 3 with Docker.
The installation works fine, but I'm not able to run some Composer scripts. In the composer.json there are some custom Composer scripts, which would generally be launched with
composer cs-fix
I would like to launch these commands with the Composer Docker image, using
docker run --rm -ti --volume $PWD:/app composer cs-fix
When I try to do this, I obtain the following error
/docker-entrypoint.sh: line 60: exec: cs-fix: not found
Is my command wrong?
Found it! Instead of trying to run the custom composer script, I need to use the special run-script command, as in
docker run --rm -it --volume $PWD:/app composer run-script "cs-fix"
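For reference, run-script executes entries defined under the scripts key of composer.json, for example (a hypothetical entry):
{
    "scripts": {
        "cs-fix": "phpcbf"
    }
}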
I've built an image for PHP development, and it became clear to me that I hadn't really thought about how to access the tools I need for everyday development. For example Composer, the package manager for PHP: I need to run it whenever composer.json updates. I thought it was worth installing those tools inside the same image, but then I don't have a way to access them. So, I can:
Create separate image for composer and run it in different container
Install composer on my host machine.
I'd like to avoid option 2), but then, does it make sense to have a setup like 1)? How did you guys solve this issue?
Unless you have some quite specific requirements, there is a third option:
Connect to the container using the docker exec command:
docker exec -it CONTAINER-NAME/ID COMMAND [ARG...]
Here is the example:
1: Create your application:
echo "<?php phpinfo();" > index.php
2: Start container:
docker run -it --rm --name my-apache-php-app -p 80:80 -v "$PWD":/var/www/html php:5.6-apache
3: Open another terminal window and exec the required commands inside the running container (note the installer pipeline is wrapped in bash -c so that php runs inside the container rather than on the host):
docker exec -it my-apache-php-app bash -c "curl -sS https://getcomposer.org/installer | php"
docker exec -it my-apache-php-app ls
If you need shell inside running container - run:
docker exec -it my-apache-php-app bash
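Once the installer has run, it leaves a composer.phar in the container's working directory, which you can invoke the same way:
docker exec -it my-apache-php-app php composer.phar install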
That's it!