I'm trying to create a Docker container (using docker-compose) for an application with Doctrine. The problem is: if I just run the application, it works, but if I try to use the application before running the command ./vendor/bin/doctrine orm:generate-proxies, I get the error:
PHP Warning: require(/tmp/__CG__DomainEntitiesAnyEntity.php): failed to open stream: No such file or directory in /var/www/html/vendor/doctrine/common/lib/Doctrine/Common/Proxy/AbstractProxyFactory.php on line 204
PHP Fatal error: require(): Failed opening required '/tmp/__CG__DomainEntitiesAnyEntity.php' (include_path='.:/usr/local/lib/php') in /var/www/html/vendor/doctrine/common/lib/Doctrine/Common/Proxy/AbstractProxyFactory.php on line 204
OK, so I tried just running the command via docker-compose.yml:
version: '3'
services:
  apache_server:
    build: .
    working_dir: /var/www/html
    ports:
      - "80:80"
    volumes:
      - ./:/var/www/html
      - ../uploads:/var/www/uploads
      - ./.docker/apache2.conf:/etc/apache2/apache2.conf
      - ./.docker/000-default.conf:/etc/apache2/sites-available/000-default.conf
      - ./.docker/php.ini:/etc/php/7.4/apache2/php.ini
    depends_on:
      - postgres_database
    command: sh -c "./vendor/bin/doctrine orm:generate-proxies"
    networks:
      - some-network
Yes, it works as expected and generates the proxies into the /tmp folder, but after the command runs and the proxies are generated, I get the message exited with code 0. This happens because Docker finishes the container execution after the command returns status code 0. So I tried two more things:
Add tail to something:
command: sh -c "./vendor/bin/doctrine orm:generate-proxies && tail -f /var/www/html/log.txt"
but when I do this, the server doesn't respond to requests (http://localhost/) anymore.
Add tty before running the command:
tty: true
# restart: unless-stopped <--- also tried this
and that doesn't work either. Is there another way to solve this without having to manually run the command inside the container every time?
PS: my Dockerfile is this one:
FROM php:7.4-apache
WORKDIR /var/www/html
RUN a2enmod rewrite
RUN a2enmod headers
RUN mkdir /var/www/uploads
RUN mkdir /var/www/uploads/foo-upload-folder
RUN mkdir /var/www/uploads/bar-upload-folder
RUN chmod 777 -R /var/www/uploads
RUN apt-get update \
    && apt-get install -y \
        libpq-dev \
        zlib1g-dev \
        libzip-dev \
        unzip \
    && docker-php-ext-install \
        pgsql \
        pdo \
        pdo_pgsql \
        zip
RUN service apache2 restart
Cause of your issue
Your Docker Compose configuration of command
command: sh -c "./vendor/bin/doctrine orm:generate-proxies"
in docker-compose.yml overrides the CMD of the Docker image php:7.4-apache, which would normally start the Apache server; see
docker inspect php:7.4-apache
or, more specifically,
docker inspect --format="{{ .Config.Cmd }}" php:7.4-apache
which gives you
[apache2-foreground]
Solution in general
If you'd like to run a command before the original command of a Docker image, use entrypoint and make sure you call the original entrypoint; see
$ docker inspect --format="{{ .Config.Entrypoint }}" php:7.4-apache
[docker-php-entrypoint]
For example, instead of command, define
entrypoint: sh -c "./vendor/bin/doctrine orm:generate-proxies && docker-php-entrypoint apache2-foreground"
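In docker-compose.yml this could look like the sketch below (service name taken from the compose file above). Note that overriding entrypoint in Compose also clears the image's default command, which is why apache2-foreground is spelled out explicitly as the hand-off target:

```yaml
services:
  apache_server:
    build: .
    working_dir: /var/www/html
    # generate the proxies first, then hand over to the image's own
    # entrypoint, which execs its arguments and so starts Apache
    # in the foreground
    entrypoint: sh -c "./vendor/bin/doctrine orm:generate-proxies && docker-php-entrypoint apache2-foreground"
```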
Solution in your case
However, in your case, I would configure Doctrine like this (see Advanced Doctrine Configuration)
$config = new Doctrine\ORM\Configuration;
// ...
if ($applicationMode == "development") {
    $config->setAutoGenerateProxyClasses(true);
} else {
    $config->setAutoGenerateProxyClasses(false);
}
In development your code changes (mounted as a volume) and proxies may have to be updated/generated. In production your code does not change anymore (copy the code into the Docker image). Hence, you should generate the proxies in your Dockerfile (after you have copied the source code), e.g.
FROM php:7.4-apache
WORKDIR /var/www/html
# ...
COPY . /var/www/html
RUN ./vendor/bin/doctrine orm:generate-proxies
Related
I am trying to upgrade my Docker image from php:7.4-fpm-alpine3.13 to php:7.4-fpm-alpine3.14, and this issue happened during the upgrade.
Error: EACCES: permission denied, open '/var/www/app/public/mix-manifest.json'
The dev team currently uses Laravel Mix to generate static files.
Logs:
/var/www/app # npm run development
> development
> mix
glob error [Error: EACCES: permission denied, scandir '/root/.npm/_logs'] {
errno: -13,
code: 'EACCES',
syscall: 'scandir',
path: '/root/.npm/_logs'
}
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Why you should do it regularly:
https://github.com/browserslist/browserslist#browsers-data-updating
● Mix █████████████████████████ sealing (92%) asset processing SourceMapDevToolPlugin
attached SourceMap
internal/fs/utils.js:332
throw err;
^
Error: EACCES: permission denied, open '/var/www/app/public/mix-manifest.json'
at Object.openSync (fs.js:497:3)
at Object.writeFileSync (fs.js:1528:35)
at File.write (/var/www/app/node_modules/laravel-mix/src/File.js:211:12)
at Manifest.refresh (/var/www/app/node_modules/laravel-mix/src/Manifest.js:75:50)
at /var/www/app/node_modules/laravel-mix/src/webpackPlugins/ManifestPlugin.js:21:48
at Hook.eval [as callAsync] (eval at create (/var/www/app/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:12:1)
at Hook.CALL_ASYNC_DELEGATE [as _callAsync] (/var/www/app/node_modules/tapable/lib/Hook.js:18:14)
at Compiler.emitAssets (/var/www/app/node_modules/webpack/lib/Compiler.js:850:19)
at /var/www/app/node_modules/webpack/lib/Compiler.js:438:10
at processTicksAndRejections (internal/process/task_queues.js:77:11) {
errno: -13,
syscall: 'open',
code: 'EACCES',
path: '/var/www/app/public/mix-manifest.json'
}
My Dockerfile:
FROM php:7.4-fpm-alpine3.14
ARG COMPONENT
ARG APP_ENV
ARG SRC_DIR
# Update & add nginx
RUN apk update && \
    apk add nginx && mkdir -p /var/cache/nginx/ && \
    chmod 777 -R /var/lib/nginx/tmp
COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
# Give permission to nginx folder
RUN chown -R www-data:www-data /var/lib/nginx
RUN chmod 755 /var/lib/nginx/tmp/
# Add php.ini
COPY ./docker/${COMPONENT}/php.ini /etc/php7/php.ini
# Add entrypoint
COPY ./docker/${COMPONENT}/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Install nodejs, npm
RUN apk add --no-cache nodejs npm
# Create source code directory within container
RUN mkdir -p /var/www/app
RUN chown -R www-data:www-data /var/www/app
# Add source code from local to container
WORKDIR /var/www/app
COPY ${SRC_DIR} .
# Grant permission for folders & install packages
RUN chmod 777 -R bootstrap storage && \
    cp ./env/.env.${APP_ENV} .env && \
    composer install
RUN rm -rf .env
RUN npm install && npm run ${APP_ENV} && rm -rf node_modules
# Expose webserver ports
EXPOSE 80 443
# Command-line to run supervisord
CMD [ "/bin/bash", "/usr/local/bin/entrypoint.sh" ]
What I have tried:
rm -rf ./node_modules and install again
npm config set unsafe-perm true before running npm
RUN npm config set user 0 && npm config set unsafe-perm true before npm install
Any help is appreciated!
Almost a year later, I am facing my nemesis once again, and this time I told myself that I would resolve this issue once and for all.
For anyone facing this issue in the future, this is what you need to run Laravel Mix with Node.js on an Alpine image.
There are 2 options:
#1
If you are stubborn, run it with an unofficial build of Node.js 14 compiled against musl, instead of the official package from the Alpine repository.
Then extract it and add the executables (node 14.4.0 and npm 6.14.5) to PATH:
FROM php:8-fpm-alpine3.15
ARG SRC_DIR
...
# setting up packages bla bla
...
# Install nodejs 14 from the unofficial builds repo instead.
# This will not work: RUN apk add --no-cache nodejs npm
RUN wget https://unofficial-builds.nodejs.org/download/release/v14.4.0/node-v14.4.0-linux-x64-musl.tar.xz -P /opt/
RUN tar -xf /opt/node-v14.4.0-linux-x64-musl.tar.xz -C /opt/
ENV PATH="$PATH:/opt/node-v14.4.0-linux-x64-musl/bin"
...
WORKDIR /var/www/app
COPY ${SRC_DIR} .
...
RUN npm install
# Generating static
RUN npm run dev
...
#2
Use a multi-stage build to build the static files with a fixed version of Node, instead of installing Node on the PHP Alpine image (this was hinted at by my supervisor; I don't know why I never thought of it before, silly me):
FROM node:14-alpine AS node_builder
ARG SRC_DIR
RUN mkdir -p /var/www/mix
WORKDIR /var/www/mix
COPY ${SRC_DIR} ./
# Installs all node packages
RUN npm install
# Generating static into /var/www/mix
RUN npm run dev
FROM php:8-fpm-alpine3.15 as php_final
...
# setting up packages bla bla
...
WORKDIR /var/www/app
COPY ${SRC_DIR} .
COPY --from=node_builder /var/www/mix/public ./public/
...
For anyone who also meets this issue: I re-installed Node.js in my CentOS 7 environment and that solved the issue. The Node.js version is the same as the previous one (v14.18.1).
My folder's owner was root; a change of ownership helped:
chown admin:admin public -R
I have a Dockerfile in a PHP project where I need to pass a user and a password to download a library during the build.
The user and password must be hidden in production or in the local .env files. At the moment I'm just trying the local option, and the user and password come out empty.
I have used "${USER}" and ${USER}, but not only does the login fail, when I print the variables they come out empty. I've also tried hardcoding the variables and it works fine, so the problem is that the variables are not retrieved from the .env file.
The docker-compose file starts as follows:
version: '3'
services:
  server:
    build:
      context: .
      dockerfile: docker/Server/Dockerfile
    container_name: "server"
    ports:
      - 80:80
      - 8888:8888
    networks:
      - network
    env_file:
      - .env
    command: sh /start-workers.sh
And the Dockerfile:
FROM php:7.3-cli-alpine3.10
RUN apk add --update
#
# Dependencies
#
RUN apk add --no-cache --no-progress \
    libzip-dev zip php7-openssl pkgconfig \
    php-pear php7-dev openssl-dev bash \
    build-base composer
#
# Enable PHP extensions
#
RUN docker-php-ext-install bcmath sockets pcntl
#
# Server Dependencies
#
RUN echo '' | pecl install swoole \
    && echo "extension=swoole.so" >> /usr/local/etc/php/conf.d/swoole.ini
#
# installation
#
WORKDIR /var/www/service
COPY . .
RUN echo "START"
RUN echo "${USER}"
RUN echo "${PASSWORD}"
RUN echo "END"
RUN composer config http.libraries.com "${USER}" "${PASSWORD}" --global \
    && composer install -n -o --prefer-dist --no-dev --no-progress --no-suggest \
    && composer clear-cache \
    && mv docker/Server/start-workers.sh /
EXPOSE 80
The .env starts and ends as follows:
APP_ENV=dev
APP_SECRET=666666666666666
.
.
.
USER=user
PASSWORD=password
At the moment, if I execute docker-compose up --build, the output is as follows:
Step 10/15 : RUN echo "START"
---> Running in 329b1707c2ab
START
Removing intermediate container 329b1707c2ab
---> 362b915ef616
Step 11/15 : RUN echo "${USER}"
---> Running in e052e7ee686a
Removing intermediate container e052e7ee686a
---> 3c9cfd43a4df
Step 12/15 : RUN echo "${PASSWORD}"
---> Running in b568e7b8d9b4
Removing intermediate container b568e7b8d9b4
---> 26a727ba6842
Step 13/15 : RUN echo "END"
---> Running in 726898b3eb42
END
I'd like the user and the password to be printed, so I know I'm receiving the .env data and I can use it.
You could use build args to meet your requirement.
One thing to note here: you should not use USER as a key in .env, because it will be overridden by the shell's default USER environment variable, and your Dockerfile will not get the correct value.
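You can watch this shadowing happen outside Docker too; a quick shell sketch (the values are illustrative):

```shell
# USER is normally pre-set by the login shell; simulate that here
export USER=alice

# a .env line such as USER=user loses to the exported value, because
# docker-compose lets the host environment win over the .env file
echo "USER resolves to: ${USER}"    # → USER resolves to: alice

# an unclaimed name such as USR is not shadowed
USR=user
echo "USR resolves to: ${USR}"      # → USR resolves to: user
```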
A full, minimal working example follows, FYI:
docker/Server/Dockerfile:
FROM php:7.3-cli-alpine3.10
ARG USER
ARG PASSWORD
RUN echo ${USER}
RUN echo ${PASSWORD}
.env (NOTE: you have to use USR, not USER, here):
USR=user
PASSWORD=password
docker-compose.yaml:
version: '3'
services:
  server:
    build:
      context: .
      dockerfile: docker/Server/Dockerfile
      args:
        - USER=${USR}
        - PASSWORD=${PASSWORD}
Execute:
$ docker-compose build --no-cache
Building server
Step 1/5 : FROM php:7.3-cli-alpine3.10
---> 84d7ac5a44d4
Step 2/5 : ARG USER
---> Running in 86b35f6903e2
Removing intermediate container 86b35f6903e2
---> ee6a0e84c76a
Step 3/5 : ARG PASSWORD
---> Running in 92480327a820
Removing intermediate container 92480327a820
---> 1f886e8f6fbb
Step 4/5 : RUN echo ${USER}
---> Running in 8c207c7e6080
user
Removing intermediate container 8c207c7e6080
---> cf97b2cc0317
Step 5/5 : RUN echo ${PASSWORD}
---> Running in 7cbdd909826d
password
Removing intermediate container 7cbdd909826d
---> 6ab7987e080a
Successfully built 6ab7987e080a
Successfully tagged 987_server:latest
The problem here is that your env vars (from env_file) are only available in the run phase, not at build time.
I suggest you use build args, for example:
build:
  context: .
  args:
    buildno: 1
    gitcommithash: cdc3b19
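For those args to actually reach the build, the Dockerfile has to declare them with ARG before use (a sketch; the names mirror the compose snippet above):

```dockerfile
FROM php:7.3-cli-alpine3.10

# a build arg is empty unless it is declared with ARG first
ARG buildno
ARG gitcommithash

RUN echo "build number: ${buildno}, commit: ${gitcommithash}"
```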
We use Docker, docker-compose and Webpack to build a local environment for a PHP site. Recently I was tasked with adding a blog to the local setup using WordPress. I have been able to get everything up and running almost as I intended; however, there have been some issues with live reloading of the site. I cannot for the life of me get the setup to work so that both the site root files and the blog sub-directory files live-reload when saved. I can get either one to work, but not both. We use the BrowserSync plugin in Webpack to reload on any change it sees to the dist folder.
I believe the issue comes from the volume mount in the docker-compose file. If I mount only the wordpress wp-content files:
volumes:
  - ./dist/blog/wp-content/uploads:/var/www/html/blog/wp-content/uploads
  - ./dist/blog/wp-content/plugins:/var/www/html/blog/wp-content/plugins
  - ./dist/blog/wp-content/themes:/var/www/html/blog/wp-content/themes
The WordPress blog gets updated upon save, but any files not under blog/ do not. If I mount the root folder in volumes instead, all files but the WordPress files reload:
volumes:
  - ./dist:/var/www/html
And when I exec into the blog folder, the mount has erased or overwritten the entire WordPress installation, so the WP site can no longer be used. If I include all four lines, same result. I am not sure if anyone can help me, but I hope someone has run into this issue before, and I appreciate any help you can give. I have tried to include my relevant file info; let me know if I need to add more.
dist folder structure
dist/
  blog/
    wp-content/
      themes/
        custom-themes/
          ... theme-files
  index.php
  contactus.php
  about.php
  ... etc
Dockerfile
FROM php:7.0-apache
# Run Linux apt (Advanced Package Tool) to update and install any packages
RUN apt-get update && \
    apt-get install -y --no-install-recommends
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
# Enable mod_rewrite in apache modules
RUN a2enmod rewrite
# Manually set up the apache environment variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
# Expose apache.
EXPOSE 80
ADD ves-apache-config.conf /etc/apache2/sites-enabled/000-default.conf
WORKDIR /var/www/html/
COPY ./dist /var/www/html/
WORKDIR /var/www/html/blog/
# Set our wordpress environment variables
ENV WORDPRESS_VERSION 5.2.2
ENV WORDPRESS_SHA1 3605bcbe9ea48d714efa59b0eb2d251657e7d5b0
# Download and unpack wordpress
RUN set -ex; \
    curl -o wordpress.tar.gz -fSL "https://wordpress.org/wordpress-${WORDPRESS_VERSION}.tar.gz"; \
    echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c -; \
    # upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
    tar -xzf wordpress.tar.gz -C /var/www/html/blog; \
    rm wordpress.tar.gz; \
    chown -R www-data:www-data /var/www/html/blog
RUN cp -r /var/www/html/blog/wordpress/. /var/www/html/blog/
RUN rm -rf /var/www/html/blog/wordpress.tar.gz
RUN rm -rf /var/www/html/blog/wordpress
CMD ["apache2-foreground"]
docker-compose.yml
version: "3"
services:
  server:
    # Name our container
    container_name: corporate-site
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    depends_on:
      - database
    build:
      context: ./
    volumes:
      # - ./dist:/var/www/html
      - ./dist/blog/wp-content/uploads:/var/www/html/blog/wp-content/uploads
      - ./dist/blog/wp-content/plugins:/var/www/html/blog/wp-content/plugins
      - ./dist/blog/wp-content/themes:/var/www/html/blog/wp-content/themes
    restart: always
    ports:
      - "8080:80"
    # Logging Control
    logging:
      driver: none

  ### MYSQL DATABASE ###
  database:
    container_name: blog-database
    build:
      context: ./config/docker/database
    volumes:
      - datab:/var/lib/mysql
    restart: always
    ports:
      - "3306:3306"

volumes:
  datab:
Webpack file
module.exports = merge(base, {
  mode: 'development',
  devtool: 'inline-source-map',
  watch: true,
  plugins: [
    new BrowserSyncPlugin({
      host: 'localhost',
      proxy: 'http://localhost:8080',
      port: 3200,
      open: true,
      files: [
        './dist/*.php',
        './dist/blog/wp-content/themes/blog/*.php',
        './dist/blog/wp-content/themes/blog/*.css'
      ]
    }),
  ]
})
I have three Docker containers running on macOS Sierra, namely web, mysql and mongo, and have linked both mongo and mysql into web, which is essentially an Ubuntu Xenial base with Apache and PHP added.
I am currently mounting my local Symfony project into the web container, and that seems to be working fine, but when I try to interact with the DB in any way, I get:
An exception occured in driver: SQLSTATE[HY000] [2002] Connection
refused
I've tried almost every combination of parameter values, but keep getting the same result.
I suspect it might have something to do with the way that I am linking the containers?
I'm in the process of learning Docker, so please excuse my limited knowledge.
Thanks!
Web Dockerfile:
FROM ubuntu:xenial
MAINTAINER Some Guy <someguy@domain.com>
RUN apt-get update && apt-get install -y \
    apache2 \
    vim \
    php \
    php-common \
    php-cli \
    php-curl \
    php-mysql \
    php-mongodb \
    libapache2-mod-php \
    php-gd
RUN mkdir -p /var/www/symfony.local/public_html
RUN chown -R $USER:$USER /var/www/symfony.local/public_html
RUN chmod -R 755 /var/www
COPY config/php/php.ini /usr/local/etc/php/
COPY config/apache/sites-available/*.conf /etc/apache2/sites-available/
RUN a2enmod rewrite
RUN a2dissite 000-default.conf
RUN a2ensite symfony.local.conf
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
MySQL Dockerfile:
FROM mysql:5.7
MAINTAINER Some Guy <someguy@domain.com>
# Set the root users password
ENV MYSQL_ROOT_PASSWORD password
# Copy over the DB dump to be run upon creation
COPY sql/ /docker-entrypoint-initdb.d
# Copy over the custom mysql config file
COPY config/ /etc/mysql/conf.d
EXPOSE 3306
Run commands:
docker run --name mongo -d mongo # I'm making use of the official Mongo image
docker run --name mysql -v /usr/local/var/mysql:/var/lib/mysql -d someguy/local:mysql
docker run --name web -d -p 80:80 --link mysql:mysql --link mongo:mongo -v ~/Sites/symfony.local/:/var/www/symfony.local/public_html/ someguy/local:web
Symfony parameters.yml file:
parameters:
    database_host: mysql
    database_port: 3306
    database_name: gorilla
    database_user: root
    database_password: password
UPDATE:
So I've moved over to using docker-compose, but am still receiving the same error.
docker-compose.yml file
version: "2"
services:
  web:
    build: ./web
    ports:
      - "80:80"
    volumes:
      - ~/Sites/symfony.local/:/var/www/symfony.local/public_html/
    depends_on:
      - db
      - mongo
  mongo:
    image: mongo:latest
  mysql:
    image: mysql:latest
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
An exception occured in driver: SQLSTATE[HY000] [2002] Connection refused
This means it has nothing to do with your network per se; the links are just fine.
What you are lacking is how the user has been created, or whether it has been created at all, see https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh#L122 .. so actually the user is created without a host limitation per se.
The question in your case is: what is inside your "sql/" folder? Those scripts are executed during the entrypoint.
Be sure to never use exit X in those scripts; it will interrupt the main script, see https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh#L151
Check your Docker logs for mysql to ensure the script did not print any warnings; use https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh as a reference.
And last but not least, please use docker-compose. If you have issues with the timings (mysql starting too slowly and your web container freaking out), use a "wait for mysql" entrypoint in web:
#!/bin/bash
# this script only exists to wait for the database before we fire up the web process
RET=1
echo "Waiting for database"
while [[ RET -ne 0 ]]; do
    sleep 1
    if [ -z "${db_password}" ]; then
        mysql -h $db_host -u $db_user -e "select 1" > /dev/null 2>&1; RET=$?
    else
        mysql -h $db_host -u $db_user -p$db_password -e "select 1" > /dev/null 2>&1; RET=$?
    fi
done
Set db_host, db_user and db_password accordingly, using ENV or whatever suits you.
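Wired into the Compose file, that could look like the following sketch (the script path and entrypoint/command split are assumptions; the wait script must end by starting, or exec-ing, the real web process, otherwise the container exits after the loop):

```yaml
services:
  web:
    build: ./web
    depends_on:
      - mysql
    environment:
      db_host: mysql
      db_user: root
      db_password: password
    # run the wait loop first; the script then hands over to the command below
    entrypoint: ["/bin/bash", "/usr/local/bin/wait-for-mysql.sh"]
    command: ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
```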
I need to install cURL compiled with OpenSSL and zlib via a Dockerfile for a Debian image with Apache and PHP 5.6. I tried many approaches, but due to the fact that I don't have a strong understanding of Linux, I failed. I use docker-compose to bring up my container. docker-compose.yaml looks like:
version: '2'
services:
  web:
    build: .
    command: php -S 0.0.0.0:80 -t /var/www/html/
    ports:
      - "80:80"
    depends_on:
      - db
    volumes:
      - $PWD/www/project:/var/www/html
    container_name: "project-web-server"
  db:
    image: mysql:latest
    ports:
      - "192.168.99.100:3306:3306"
    container_name: "project-db"
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpass
      MYSQL_ROOT_PASSWORD: dbpass
As a build script I use this Dockerfile:
FROM php:5-fpm
RUN apt-get update && apt-get install -y \
    apt-utils \
    curl libcurl3 libcurl3-dev php5-curl php5-mcrypt
RUN docker-php-ext-install -j$(nproc) curl
'docker-php-ext-install' is a helper script from the base image https://hub.docker.com/_/php/
The problem is that after $ docker build --rm . which succeeds, I don't get an image with cURL+SSL+zlib. After $ docker-compose up I have a working container with Apache+MySQL and can run my project, but the libraries I need are not there.
Could you explain how to add these extensions to my Apache in the container properly? I even tried to create my own Dockerfile and build Apache+PHP+the needed libs there, but had no result.
Your Dockerfile is not complete. You have not done a COPY (or similar) to transfer the source code you want to execute from the host into the container. The point of a Dockerfile is to set up an environment together with your source code, finishing by launching a process (typically a server).
COPY code-from-some-location into-location-in-container
CMD path-to-your-server
... as per the URL you reference a more complete Dockerfile would appear like this
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
Notice the COPY, which recursively copies all files/dirs (typically the location of your source code plus related files such as data and/or config files) from the $PWD where you execute the command into the specified location inside the container. In Unix a period, as in ., indicates the current directory, so the command
COPY . /usr/src/myapp
will copy all files and directories from the current directory on the host computer (the one you are using when typing the docker build command) into the container directory called /usr/src/myapp.
The WORKDIR changes into the supplied directory inside the container.
Finally, the CMD launches the server, which hums along once you launch the container.
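The relative-path rule is plain Unix, nothing Docker-specific; a minimal cp demo of what "." refers to (scratch paths are illustrative):

```shell
# build a scratch source tree
mkdir -p /tmp/copydemo/src /tmp/copydemo/dst
cd /tmp/copydemo/src
echo "hello" > app.php

# "." is the current directory, so this mirrors COPY . /usr/src/myapp
cp -r . /tmp/copydemo/dst
ls /tmp/copydemo/dst    # → app.php
```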