How to know when composer install finishes in docker container - php

I am using Docker to deploy Java and PHP components.
From Jenkins I run something like docker run --name my_php_component -d -t my_php_image.
Inside the container, a deploy.sh script is executed; this script runs composer install.
Jenkins needs to know when this has finished successfully, so that it can then run end-to-end tests.
What is the best way for it to check that composer install has successfully installed all packages inside the Docker container?
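One way to let Jenkins detect this is docker wait, which blocks until the container stops and prints the exit code of its main process. A sketch, assuming deploy.sh is the container's main process and propagates composer's exit status (i.e. exits non-zero on failure):

docker run --name my_php_component -d -t my_php_image
# Block until the container's main process exits; docker wait prints its exit code.
STATUS=$(docker wait my_php_component)
if [ "$STATUS" -ne 0 ]; then
    docker logs my_php_component   # surface the composer output for debugging
    exit 1
fi
# composer install succeeded; Jenkins can run the end-to-end tests now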

How to set up and run Laravel from git?

Either I'm missing something, or the whole chain lacks something.
Here's my assumption:
The whole point of containerization in development, is to reduce the cost of environment setup, and create a prepared image with all the required pieces.
So, when I read that Laravel Sail installs Laravel via containerization, I get excited. Thus I install it following their instructions, and everything works.
Then the problem begins. Because:
After a successful installation, I create a git repo, with GitHub's default laravel .gitignore
Then I push the newly installed laravel app into my git repo.
Then I ask a developer to start developing it. Please note that:
He does not have PHP installed
He does not have Composer installed
He clones the repo, and as per the installation guide, runs ./vendor/bin/sail up
But the ./vendor folder is correctly excluded in .gitignore
Thus his command results in:
bash: ./vendor/bin/sail: No such file or directory
He Googles it of course, and finds out that people suggest running composer update
He goes to install Composer, then before that PHP, then all the extensions of PHP, then ...
Am I missing something here? The whole point of containerization was to not have to install the required environment locally.
What is the proper way of running a laravel app, that is not installed from https://laravel.build, but is cloned from a git repo, WITHOUT having PHP or Composer installed locally?
Update
I found the Bitnami Laravel Docker image and it's exactly what containers should be.
You are right: the other developer doesn't need to have PHP or Composer installed.
All he/she needs is Docker installed on the local machine.
If you scaffolded the project with what is mentioned in the official Laravel docs under the Getting Started section (which has separate instructions for Windows, Linux, and macOS), then you will have a docker-compose.yml file in your project root directory.
All the developer has to do after git cloning the repository is to run
docker-compose up --build -d
That's it.
For those struggling with this issue... I've found a command that works perfectly well.
First of all, you don't need to have any PHP or Composer installed locally; maybe there is a misunderstanding about this. All you need is Docker.
Docker will install everything you need in an isolated, sandbox-like environment for each project, not on your local machine.
And for projects downloaded from Git, for example, which do not have a vendor folder and obviously cannot execute sail up, you can simply execute:
docker run --rm --interactive --tty -v $(pwd):/app composer install
That command will download a Composer image for Docker, if you do not have one yet. Then it will run composer install, and you are free to execute ./vendor/bin/sail up (or just sail up if you have configured an alias).
That's all.
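The alias mentioned above is just an ordinary shell alias; a minimal version (an assumption about your shell setup, not something Sail creates for you) would be:

alias sail='bash vendor/bin/sail'

Add it to ~/.bashrc or ~/.zshrc to make it persistent.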
The official documentation lists the following command.
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
If you were to clone a Laravel project and run this command in the project root, it would create a very small container with PHP and Composer installed and run Composer in the project root to install all PHP dependencies. In effect, this installs the Laravel core code into the cloned project. Once the project is set up this way, the user should create a local .env file to match their development environment.
cp .env.example .env # creates a .env file to be populated for the local environment
With the environment set up, they can now create the application containers in Docker and run the application. Laravel provides the Sail helper for this.
./vendor/bin/sail up -d # runs the docker containers in detached mode
Now it's a matter of setting up and running the Laravel app. (I'm assuming the app uses one of the Laravel starter kits that rely on Node.js. If you are using a Blade-only application, you can skip the "npm" commands.)
sail artisan key:generate # (Best Practice) Generate a new application key on each machine
sail artisan migrate # Scaffold the database structure
sail artisan db:seed # (Optional) Seed the database with data
sail npm install # (Optional) Install front-end dependencies (Inertia, Vue, React, others...)
sail npm run dev # (Optional) Run the front-end framework in development mode
With this, the new developer should be running an exact copy of both the project and the development environment used by the original developer.
Your project README may include additional steps to set up some other dependencies, but this is the basic workflow for contributing to a Laravel project.
The only prerequisites for this workflow are Docker and an Internet connection. This is most easily accomplished on Windows, Mac, and Linux by installing Docker Desktop.
Alternate for Older Projects
If you are working on an older project that doesn't use Laravel Sail, but does have a docker-compose.yml file, you should be able to build and run the necessary containers with the following command.
docker-compose up --build -d
Once you have the containers running, you would need to install the project dependencies directly into the container.
docker ps # find the container ID of your project's container
docker exec -it CONTAINER_ID php artisan key:generate
docker exec -it CONTAINER_ID php artisan migrate
docker exec -it CONTAINER_ID php artisan db:seed
docker exec -it CONTAINER_ID npm install
docker exec -it CONTAINER_ID npm run dev
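Depending on how that image was built, the PHP dependencies may also need to be installed inside the container first. Assuming Composer is present in the image, that would look like:

docker exec -it CONTAINER_ID composer install   # assumes Composer exists in the image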
Of course, Docker Desktop simplifies this process. With a button click, you can have a terminal shell open directly in your container, eliminating the need for the docker exec command.

Why despite "drush" being installed via `composer global install` during image build, I cannot find the tool from within a running PHP script?

I am developing a PHP web application inside of a Docker container. Using volumes: inside of my docker-compose.yml file, I have specified a local directory so that any files generated are dumped and persist after the container is destroyed.
volumes:
- ./docroot:/var/www/html
Inside my Dockerfile, I RUN a command that installs a command line management tool:
RUN curl -sS https://getcomposer.org/installer | php && \
mv composer.phar /usr/local/bin/composer && \
ln -s /root/.composer/vendor/bin/drush /usr/local/bin/drush
RUN composer global require drush/drush:8.3.3 && \
composer global update
When the container comes up, I can use docker exec -it <container> bash to get inside the container, and everything works fine. drush is in my PATH, and I can use it globally throughout the container to manage the app.
Now here is the strange part. Part of my application is that I have to run that command from a PHP script inside the container to help automatically manage some of the build process.
Using PHP, I run exec('drush dbupdate', $output, $retval); $retval returns an exit status of 127 (command not found), and $output is empty. If I switch the exec to use the full path, I get exit status 126.
If I go back into the container, I can run that command just fine. Note that all other CLI commands work as expected with exec (ls, whoami, etc.), but which drush returns exit status 1.
What am I missing? Why can I use it with no problems manually, but PHP exec() can't find it? passthru(), shell_exec(), and others show the same behavior.
composer global require will not install the command "globally" for all users, but "globally" as in "for all projects".
Generally, these packages are installed in the home directory of the user executing the command (e.g. ~/.composer), and if they are available in your path, it is because ~/.composer/vendor/bin is added to the session path.
When you run composer global require (while building the image) or when you "log in" to the running container (using exec [...] bash), the user involved is root. But when your PHP script runs, it is being executed by another user (presumably www-data), and for that user ~/.composer does not contain anything.
Maybe do not install drush using Composer, but rather download the PHAR file directly while you are building the image, and put it in /usr/local/bin.
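As a sketch of that approach in the Dockerfile (the exact release URL is an assumption; verify the asset name on the Drush releases page before relying on it):

# Install a pinned Drush 8 phar where every user can find it.
# NOTE: illustrative URL; check https://github.com/drush-ops/drush/releases
RUN curl -sSL https://github.com/drush-ops/drush/releases/download/8.3.3/drush.phar \
        -o /usr/local/bin/drush && \
    chmod +x /usr/local/bin/drush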
If you are using Drupal >= 8, the recommended way of installing Drush is not as a "global" dependency, but as a "project" dependency, so that the appropriate drush version is installed. This comes straight from the docs:
It is recommended that Drupal 8 sites be built using Composer, with Drush listed as a dependency. That project already includes Drush in its composer.json. If your Composer project doesn't yet depend on Drush, run composer require drush/drush to add it. After this step, you may call Drush via vendor/bin/drush

PHP-FPM won't start from a Dockerfile

Hello dear community,
I am trying to accomplish something very simple: I want to start a php-fpm service in a Docker container using a Dockerfile. My Dockerfile content is posted here below:
FROM debian
RUN apt-get update && apt-get install php -y && apt-get install php7.3-fpm -y && service php7.3-fpm start
When I build this image from the Dockerfile and run it as a container, the php-fpm service is not active.
I even tried using Docker's "interactive mode" (-i flag) to ensure that the container was not exiting in case the service was running as a daemon.
I am confused because the command RUN service php7.3-fpm start in my Dockerfile should have started the service.
To successfully start the service inside my container, I actually have to log into it manually using docker exec -it #containerID bash and run service php7.3-fpm start myself; then the service works and becomes active.
I don't understand why the php-fpm service is not starting automatically from my Dockerfile, any help would be very much appreciated. Thanks in advance!
To a first approximation, commands like service don't work in Docker at all.
A Docker container runs only a single foreground process. That's not usually an init system, or if it is, it's just enough to handle some chores like zombie process cleanup. Conversely, a Docker image only contains a filesystem image and some metadata on how to start that process, but it does not persist any running processes. So for example if you
RUN service php7.3-fpm start
it might record in some file that the service was supposed to have been started, but once the RUN command completes, the running process doesn't exist at all any more.
The easiest way to get a running PHP-FPM setup is to use the Docker Hub php image:
FROM php:7.3-fpm
This should do all of the required setup, including arranging for the FPM server to run as the main container command; you just need to COPY your application code in.
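A minimal sketch of such an image (the php:7.3-fpm base already runs php-fpm in the foreground as its default command, and /var/www/html is its working directory):

FROM php:7.3-fpm
# The base image's CMD already starts php-fpm in the foreground;
# only the application code needs to be added.
COPY . /var/www/html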
If you really want to run it yourself, you need to make it the main command of your custom image:
CMD ["php-fpm"]
as is done in php:7.3-fpm's Dockerfile.
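Putting that custom route together, a minimal sketch (assuming Debian buster's php7.3-fpm package, whose binary is php-fpm7.3):

FROM debian:buster
RUN apt-get update && apt-get install -y php7.3-fpm
# Keep the FPM master process in the foreground as the container's main process.
CMD ["php-fpm7.3", "--nodaemonize"]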

Run a database migration command when deploying a Docker container to AWS

Please bear with me. Pretty new to Docker.
I'm deploying Docker containers (detached) to AWS EC2 using CodeDeploy. On deploy, the following command is run after setting some environment variables, etc.:
exec docker run -d ${PORTS} -v cache-${CACHE_VOLUME} --env-file $(dirname $0)/docker.env --tty "${IMAGE}:${TAG}"
The container runs an image located and tagged in EC2 Container Service. No problems so far.
Since this is a PHP application (specifically a Symfony2 application) I would normally need to issue the following command to execute database migrations on deployment:
php app/console doctrine:migrations:migrate --no-interaction
Now, is there any way to run this command during "docker run..." while keeping the container running, or do I need to run another container specifically for this command?
Many thanks!
You need to create an entrypoint script. This script runs at container startup.
entrypoint.sh file:
#!/bin/bash
set -e
# wait until the database is accepting connections, then create or update the schema
./waitforit.sh <DB_HOST>:<DB_PORT> -t 30
php app/console doctrine:migrations:migrate --no-interaction
# start apache in the foreground
apache2-foreground
wait-for-it is a script that waits until the database has started up.
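To wire the script into the image, it has to be copied in, made executable, and registered as the entrypoint; a sketch of the relevant Dockerfile lines (paths are illustrative):

COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]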
Just leaving this here for the next person who searches for this... ;-)
When using a recent version of Doctrine, there is a pretty handy parameter for this:
php bin/console doctrine:migrations:migrate --no-interaction --allow-no-migration
The "allow-no-migration" parameter instructs doctrine not to throw an exception, when there is nothing to do...
I do it as follows:
docker-compose exec [service] php app/console doctrine:migrations:migrate --no-interaction

custom script on docker run

I am trying to install the skeleton application of Zend Framework 3 with Docker.
The installation works fine, but I'm not able to run some composer scripts. In the composer.json there are some custom composer scripts, which are generally launched with
composer cs-fix
I would like to launch these commands with the Composer Docker image, using
docker run --rm -ti --volume $PWD:/app composer cs-fix
When I try to do this, I obtain the following error
/docker-entrypoint.sh: line 60: exec: cs-fix: not found
Is my command wrong?
Found it! Instead of trying to run the custom composer script, I need to use the special run-script command, as in
docker run --rm -it --volume $PWD:/app composer run-script "cs-fix"
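If you are unsure which custom scripts a project defines, the same image can list them with Composer's built-in --list flag:

docker run --rm -it --volume $PWD:/app composer run-script --list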
