I am trying to upgrade my Docker image from php:7.4-fpm-alpine3.13 to php:7.4-fpm-alpine3.14, and this issue happened during the upgrade.
Error: EACCES: permission denied, open '/var/www/app/public/mix-manifest.json'
The dev team currently uses Laravel Mix to generate static files.
Logs:
/var/www/app # npm run development
> development
> mix
glob error [Error: EACCES: permission denied, scandir '/root/.npm/_logs'] {
errno: -13,
code: 'EACCES',
syscall: 'scandir',
path: '/root/.npm/_logs'
}
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Why you should do it regularly:
https://github.com/browserslist/browserslist#browsers-data-updating
● Mix █████████████████████████ sealing (92%) asset processing SourceMapDevToolPlugin
attached SourceMap
internal/fs/utils.js:332
throw err;
^
Error: EACCES: permission denied, open '/var/www/app/public/mix-manifest.json'
at Object.openSync (fs.js:497:3)
at Object.writeFileSync (fs.js:1528:35)
at File.write (/var/www/app/node_modules/laravel-mix/src/File.js:211:12)
at Manifest.refresh (/var/www/app/node_modules/laravel-mix/src/Manifest.js:75:50)
at /var/www/app/node_modules/laravel-mix/src/webpackPlugins/ManifestPlugin.js:21:48
at Hook.eval [as callAsync] (eval at create (/var/www/app/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:12:1)
at Hook.CALL_ASYNC_DELEGATE [as _callAsync] (/var/www/app/node_modules/tapable/lib/Hook.js:18:14)
at Compiler.emitAssets (/var/www/app/node_modules/webpack/lib/Compiler.js:850:19)
at /var/www/app/node_modules/webpack/lib/Compiler.js:438:10
at processTicksAndRejections (internal/process/task_queues.js:77:11) {
errno: -13,
syscall: 'open',
code: 'EACCES',
path: '/var/www/app/public/mix-manifest.json'
}
My dockerfile:
FROM php:7.4-fpm-alpine3.14
ARG COMPONENT
ARG APP_ENV
ARG SRC_DIR
# Update & add nginx
RUN apk update && \
apk add nginx && mkdir -p /var/cache/nginx/ && \
chmod 777 -R /var/lib/nginx/tmp
COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
# Give permission to nginx folder
RUN chown -R www-data:www-data /var/lib/nginx
RUN chmod 755 /var/lib/nginx/tmp/
# Add php.ini
COPY ./docker/${COMPONENT}/php.ini /etc/php7/php.ini
# Add entrypoint
COPY ./docker/${COMPONENT}/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Install nodejs, npm
RUN apk add --no-cache nodejs npm
# Create source code directory within container
RUN mkdir -p /var/www/app
RUN chown -R www-data:www-data /var/www/app
# Add source code from local to container
WORKDIR /var/www/app
COPY ${SRC_DIR} .
# Grant permission for folders & install packages
RUN chmod 777 -R bootstrap storage && \
cp ./env/.env.${APP_ENV} .env && \
composer install
RUN rm -rf .env
RUN npm install && npm run ${APP_ENV} && rm -rf node_modules
# Expose webserver ports
EXPOSE 80 443
# Command-line to run supervisord
CMD [ "/bin/bash", "/usr/local/bin/entrypoint.sh" ]
What I have tried:
rm -rf ./node_modules and install again
npm config set unsafe-perm true before running npm
RUN npm config set user 0 && npm config set unsafe-perm true before npm install
Any help is appreciated!
Almost a year later, I am facing my nemesis once again, and this time I told myself that I would resolve this issue once and for all.
For anyone facing this issue in the future, this is what you need in order to run Laravel Mix with Node.js on an Alpine image.
There are 2 options:
#1
If you are stubborn, run it with an unofficial build of Node.js 14 compiled against musl instead of the package provided by the Alpine repository.
Then extract it and add the executables (node 14.4.0 and npm 6.14.5) to PATH:
FROM php:8-fpm-alpine3.15
ARG SRC_DIR
...
# setting up packages bla bla
...
# Install Node.js 14 from the unofficial musl builds repo.
# The Alpine package (RUN apk add --no-cache nodejs npm) will not work here.
RUN wget https://unofficial-builds.nodejs.org/download/release/v14.4.0/node-v14.4.0-linux-x64-musl.tar.xz -P /opt/
RUN tar -xf /opt/node-v14.4.0-linux-x64-musl.tar.xz -C /opt/
ENV PATH="$PATH:/opt/node-v14.4.0-linux-x64-musl/bin"
...
WORKDIR /var/www/app
COPY ${SRC_DIR} .
...
RUN npm install
# Generating static
RUN npm run dev
...
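Optionally, a quick sanity check (an extra step, not part of the original setup) confirms that the unofficial build is the one on PATH:
# Verify the musl build of Node.js and its npm are being picked up
RUN node --version && npm --version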
#2
Use a multi-stage build to generate the static assets with a fixed version of Node instead of installing Node on the PHP Alpine image (this was hinted at by my supervisor; I don't know why I never thought of it before, silly me).
FROM node:14-alpine AS node_builder
ARG SRC_DIR
RUN mkdir -p /var/www/mix
WORKDIR /var/www/mix
COPY ${SRC_DIR} ./
# Installs all node packages
RUN npm install
# Generating static into /var/www/mix
RUN npm run dev
FROM php:8-fpm-alpine3.15 as php_final
...
# setting up packages bla bla
...
WORKDIR /var/www/app
COPY ${SRC_DIR} .
COPY --from=node_builder /var/www/mix/public ./public/
...
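A build invocation for this multi-stage setup could look like the following (a sketch; the ./src path and the image tag are placeholders for your own values):
docker build --build-arg SRC_DIR=./src -t my-app .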
For anyone else who runs into this issue: I re-installed Node.js in my CentOS 7 environment and that solved it. The Node.js version is the same as before (v14.18.1).
My folder's owner was root; changing the ownership helped:
chown admin:admin public -R
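To check who owns the folder before changing it:
# prints the owner and group of the public directory
ls -ld public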
I have set up a containerized WordPress project as an Azure App Service based on the official WordPress Docker image, where I have made no modifications to the image itself other than adding an SSH server based on the instructions given by Azure. This is what the Dockerfile looks like:
FROM wordpress:6.0.3-php7.4
########################################## Add SSH support for Azure ##########################################
# Install OpenSSH and set the password for root to "Docker!".
RUN apt update \
&& apt install -y openssh-server \
&& rm -rf /var/lib/apt/lists/*
RUN echo "root:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY docker/ssh/sshd_config /etc/ssh/
# Copy and configure the ssh_setup file
RUN mkdir -p /tmp
COPY docker/ssh/ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
# Open port 2222 for SSH access
EXPOSE 80 2222
###############################################################################################################
COPY docker/script.sh script.sh
RUN chmod +x script.sh
CMD []
ENTRYPOINT ["./script.sh"]
And docker/script.sh
#!/bin/bash
exec service ssh start &
exec /usr/local/bin/docker-entrypoint.sh apache2-foreground
On the App Service I have added the WEBSITES_ENABLE_APP_SERVICE_STORAGE=true application setting to enable persistent storage, as well as set the WORDPRESS_DB_HOST, WORDPRESS_DB_NAME, WORDPRESS_DB_PASSWORD and WORDPRESS_DB_USER settings to connect to my database running on another host.
When accessing the App Service page in the browser and going through the WordPress setup, I can easily upload new files, which are placed in the file system at /var/www/html/wp-content/uploads/<year>/<month>/<filename> and which I can then access in my browser at https://my-app-service.azurewebsites.net/wp-content/uploads/<year>/<month>/<filename>.
With Azure only persisting data written to /home, I instead tried to move the /var/www/html/wp-content/uploads directory to /home/uploads and then create a symbolic link to it from the expected path, like so (the symbolic link creation could then also be added to the Dockerfile or startup script to automate this during deployment, as sketched after the commands):
$ cd /var/www/html/wp-content
$ mv uploads /home/uploads
$ ln -s /home/uploads uploads
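A minimal sketch of automating those steps in docker/script.sh, so they run at container start once Azure has mounted /home (the /home/uploads path mirrors the commands above; this is an assumption-laden sketch, not a verified fix):
#!/bin/bash
# Ensure the persisted uploads directory exists on the /home share
mkdir -p /home/uploads
# Point the path WordPress expects at the persisted directory
mkdir -p /var/www/html/wp-content
rm -rf /var/www/html/wp-content/uploads
ln -sfn /home/uploads /var/www/html/wp-content/uploads

exec service ssh start &
exec /usr/local/bin/docker-entrypoint.sh apache2-foreground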
Now, however, when I access https://my-app-service.azurewebsites.net/wp-content/uploads/<year>/<month>/<filename> I just get an empty 400 response.
In order to see whether this was some sort of limitation of Azure, I decided to try something similar with the simplest possible Python page instead. Dockerfile:
FROM python:3.10.0
RUN mkdir -p /var/www/html
WORKDIR /var/www/html
########################################## Add SSH support for Azure ##########################################
# Install OpenSSH and set the password for root to "Docker!".
RUN apt update \
&& apt install -y openssh-server \
&& rm -rf /var/lib/apt/lists/*
RUN echo "root:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY docker/ssh/sshd_config /etc/ssh/
# Copy and configure the ssh_setup file
RUN mkdir -p /tmp
COPY docker/ssh/ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
# Open port 2222 for SSH access
EXPOSE 80 2222
###############################################################################################################
COPY docker/script.sh script.sh
RUN chmod +x script.sh
CMD []
ENTRYPOINT ["./script.sh"]
And docker/script.sh
#!/bin/bash
exec service ssh start &
exec python -m http.server 80
Doing the same thing here works, so it doesn't seem to be a limitation of Azure. What I don't understand, however, is that the WordPress Docker image with the symbolic link works as expected when running on my local machine.
What am I doing wrong? Why does the Python project work but not the WordPress one?
Solved it by adding an Alias instead of a symbolic link and disabling MMAP:
docker/extra.conf:
Alias /wp-content/uploads/ "/home/uploads/"
<Directory "/home/uploads/">
Options Indexes MultiViews
AllowOverride None
Require all granted
</Directory>
<Directory "/home/uploads">
EnableMMAP Off
</Directory>
docker/script.sh
#!/bin/bash
exec service ssh start &
exec /usr/local/bin/docker-entrypoint.sh apache2-foreground
Dockerfile
FROM wordpress:6.0.3-php7.4
########################################## Add SSH support for Azure ##########################################
# Install OpenSSH and set the password for root to "Docker!".
RUN apt update \
&& apt install -y openssh-server \
&& rm -rf /var/lib/apt/lists/*
RUN echo "root:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY docker/ssh/sshd_config /etc/ssh/
# Copy and configure the ssh_setup file
RUN mkdir -p /tmp
COPY docker/ssh/ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
# Open port 2222 for SSH access
EXPOSE 80 2222
###############################################################################################################
COPY docker/apache/extra.conf /etc/apache2/extra.conf
RUN cat /etc/apache2/extra.conf >> /etc/apache2/apache2.conf
COPY docker/script.sh /usr/local/bin/script.sh
RUN chmod +x /usr/local/bin/script.sh
CMD []
ENTRYPOINT ["/usr/local/bin/script.sh"]
You have to change the Apache web server configuration so that it can follow symbolic links:
<VirtualHost *:80>
    DocumentRoot /var/www
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
</VirtualHost>
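In the official wordpress image, one way to get a fragment like this into Apache's configuration is the same append-at-build-time approach used in the accepted answer above (a sketch; the symlinks.conf file name is illustrative):
COPY docker/apache/symlinks.conf /etc/apache2/symlinks.conf
RUN cat /etc/apache2/symlinks.conf >> /etc/apache2/apache2.conf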
After deploying the Laravel application, I need to run these commands:
docker exec -it php bash
composer update --ignore-platform-reqs
exit
cd back/src
sudo chmod o+w ./storage/ -R
But this is inconvenient when other developers deploy the project. How can I include these commands in a Dockerfile or docker-compose.yml? Ideally, right after the build, docker-compose up -d would bring everything up ready to use.
Composer does not start from the system (root), so I have to run it from another container.
In the docker-compose.yml file you can set the command that updates the packages. Also set the volume, which will give the correct rights.
php:
  command: composer update --ignore-platform-reqs
  volumes:
    - ./storage/:/app/storage:rw
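A fuller sketch along the same lines (the image name and the /var/www/html path are assumptions, not taken from the question), so that docker-compose up -d runs both post-deploy steps and then keeps PHP-FPM in the foreground:
php:
  image: your-php-image          # assumed to already contain composer and php-fpm
  volumes:
    - ./back/src:/var/www/html
  # run the one-off steps from the question, then start php-fpm
  command: sh -c "composer update --ignore-platform-reqs && chmod -R o+w /var/www/html/storage && php-fpm"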
But all of this does depend on the image you're using. Which docker image do you use?
The problem
I'm trying to set up my development environment with Docker for Windows for use with WordPress. I'm using Docker Compose with a custom Dockerfile. This works perfectly on macOS.
Using the exact same Docker setup on Windows, though, gets me permission errors within WordPress, both when trying to upload media and when trying to update WordPress (screenshots omitted).
Clearly, WordPress doesn't have the correct file permissions.
What I tried
1. Checking Docker for Windows settings and upgrading to WSL 2
I'm using the WSL 2 based engine now, which should give full root permissions to all the files on the system. I was initially on the Hyper-V-based backend (with, of course, the correct file-sharing permissions set up) and upgraded to WSL 2 to try to fix the problem. No luck.
2. Experimenting with chmod and chown
First, I added chmod -R 777 /var/www/html/ to the Dockerfile. As far as I know, this should give all file permissions to root. It didn't have any effect. So maybe I'm using a different user? The command whoami did give me root back though.
Maybe I did something wrong and the user is something else. So I added chown -R www-data:www-data /var, as I saw www-data should be the default Docker user and group. No luck.
Just for the fun of it, I also tried chmod -R 777 /var/www/html/wp-content/uploads/ just to be more specific in the path. Interestingly, this gave me the error chmod: cannot access '/var/www/html/wp-content/uploads/': No such file or directory. I did link the folders though and this works (I can see in the folder structure in IntelliJ the files indeed are in /var/www/html). The -R option should make this recursive anyway, so it shouldn't matter.
3. Doing all this while the container is running
So maybe because the files were not yet present, I could not assign permissions. So I tried all this also when the container was actually running. Again, no luck.
4. Running as user root
First, I added user: root to the service in my docker-compose.yml. No luck.
Then I added USER root to the Dockerfile, just below FROM php:7.4-apache. No luck.
5. Using the official Wordpress image
As you can see below, I'm using the apache image as a basis for my Dockerfile. I also tried using the wordpress:latest image directly from my docker-compose.yml (omitting the entire Dockerfile) and I tried using FROM wordpress:latest on top of the Dockerfile. Neither changed anything.
My files
By now I have tried every solution I could find on the internet and nothing works. The crazy thing is that all of this works fine under macOS. Here are my Docker files; I hope you can help me out here.
docker-compose.yml
services:
  web:
    build:
      context: ./
      dockerfile: .docker/Dockerfile
    container_name: wolfpackvision.com
    ports:
      - "8080:80"
    volumes:
      - .:/var/www/html
Dockerfile
FROM php:7.4-apache
#USER root
RUN apt-get update
RUN docker-php-ext-install mysqli
## Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
## Install PHP-GD
RUN apt-get install -y libpng-dev libjpeg-dev libfreetype6-dev \
&& docker-php-ext-configure gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/ \
&& docker-php-ext-install gd
## Install xdebug
RUN apt-get install --assume-yes --fix-missing git libzip-dev libmcrypt-dev openssh-client \
libxml2-dev libpng-dev g++ make autoconf \
&& docker-php-source extract \
&& pecl install xdebug redis \
&& docker-php-ext-enable xdebug redis \
&& docker-php-source delete \
&& docker-php-ext-install pdo_mysql soap intl zip
## Configure xdebug
RUN echo "xdebug.remote_enable=on" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.remote_autostart=off" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.remote_port=9000" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.remote_handler=dbgp" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.remote_connect_back=0" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.idekey=wolfpackvision.com" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.remote_host=host.docker.internal" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
## Enable mod_rewrite http://httpd.apache.org/docs/current/mod/mod_rewrite.html & mod_headers http://httpd.apache.org/docs/current/mod/mod_headers.html
RUN a2enmod rewrite \
&& a2enmod headers
## Give Full folder permissions to server
#RUN chown -R www-data:www-data /var/www/html
#RUN chmod -R 777 /var/www/html/
#RUN chmod -R 777 /var/www/html/wp-content/uploads/
#RUN chmod -R 777 /var/www/html/
#RUN chmod -R 766 /var/www/html/
## Copy php.ini over
COPY ./.docker/php/php.ini /usr/local/etc/php
## Cleanup
RUN rm -rf /tmp/*
Please don't warn me about 777; I know about that. This is all strictly local and I will never use it in production. Plus, once I get the permissions working, I might tighten them. First I want it to work at all.
Edit
In response to @user969068:
`docker exec -it ps aux` gives me:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.1 0.0 90652 28568 ? Ss 11:04 0:00 apache2 -DFOREG
www-data 16 0.0 0.0 90684 8176 ? S 11:04 0:00 apache2 -DFOREG
www-data 17 0.0 0.0 90684 8176 ? S 11:04 0:00 apache2 -DFOREG
www-data 18 0.0 0.0 90684 8176 ? S 11:04 0:00 apache2 -DFOREG
www-data 19 0.0 0.0 90684 8176 ? S 11:04 0:00 apache2 -DFOREG
www-data 20 0.0 0.0 90684 8176 ? S 11:04 0:00 apache2 -DFOREG
root 21 0.0 0.0 7640 2708 pts/0 Rs+ 11:06 0:00 ps aux
I already tried to do what you recommended with PID 21, 1 and 16. All three had the same result; no file permissions. What am I missing here?
I think the user being used is different from what you assume: php:7.4-apache runs as www-data. To confirm the user, run the following (replace php:7.4-apache with your image name):
docker run -d php:7.4-apache
and then run
docker exec -it YOUR_IMAGE_HASH ps aux
It should show www-data under the USER column for the running processes.
Once you identify the correct user, you can add something like this to your Dockerfile:
FROM php:7.4-apache
.....
ARG user_id=1000
RUN usermod -u $user_id www-data
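If you go this route, the build argument can be passed from docker-compose.yml (a sketch based on the compose file shown in the question; 1000 is a typical UID on the Linux/WSL side):
web:
  build:
    context: ./
    dockerfile: .docker/Dockerfile
    args:
      user_id: 1000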
I'm not an expert in WSL, but I guess your issue is not inside the container; it seems like a host permissions issue. The container process can't write to the . directory (i.e., the current directory when you start docker-compose).
I'd check which user the Docker process runs as and try to write something to that directory as that user.
I also recommend using named volumes. I believe a newly created volume has read/write permission by default on a Windows host.
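A sketch of the named-volume variant, adapted from the compose file in the question (the volume name wp_html is illustrative):
services:
  web:
    build:
      context: ./
      dockerfile: .docker/Dockerfile
    ports:
      - "8080:80"
    volumes:
      - wp_html:/var/www/html   # named volume instead of bind-mounting the project
volumes:
  wp_html:
Note that with a named volume the project files are no longer bind-mounted from the host, so this trades live editing for predictable permissions.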
Your issue is related to a known Docker bug, part of which is described here.
Basically, the issue is that Windows volumes don't respect the original permissions of the files and assign everything to root on any change after the container has been initialized. I was having a similar issue with another CMS and finally decided to let Apache run as root for development, so it can read the files, and normally for production; this gave me some ideas on how to achieve that.
Turns out the issue had nothing to do with Docker. WordPress was configured to look for the uploads directory within my host folder structure; setting it to /wp-content/uploads fixed everything.
Thanks for your help anyway!
I had the same problem on my Mac and just did this. It's not the answer to this question, but it may help someone using macOS:
sudo chown -R username /Users/username/.docker/contexts
My WordPress theme is complaining about max_execution_time (30), max_input_vars (1000) and WP Memory Limit (40). I really need to increase PHP resources and memory on the docker container that my website lives in.
I tried manually altering the php.ini and .htaccess files, with no success. As far as I understand, these settings need to be made in the Dockerfile, as instructions.
Here is my Dockerfile:
# ========== STAGE FOR BUILDING THE JS/CSS ASSETS
FROM node:9.11.1-slim AS builder
WORKDIR /var/www/
# Install the required packages for building the assets
COPY src/package*.json src/gulpfile.js ./
RUN npm install
# Build the assets
COPY src/wp-content/themes/tsc ./wp-content/themes/tsc
RUN npm run build-prod
# ========== PRODUCTION IMAGE
FROM wordpress:5.0.3-php7.2-fpm
# ---------- Configure PHP
# RUN docker-php-ext-install sockets
# RUN sed -i "s|;pid =.*|pid = /var/run/php-fpm.pid|" /usr/local/etc/php-fpm.conf
# RUN sed -i "s|listen =.*|listen = /var/run/php/php-fpm.sock|" /usr/local/etc/php-fpm.d/www.conf
# RUN sed -i "s|;listen.mode =.*|listen.mode = 0666|" /usr/local/etc/php-fpm.d/www.conf
COPY ./conf/php/* ./conf/php-fpm.conf /usr/local/etc/
COPY ./conf/php-fpm.d/* /usr/local/etc/php-fpm.d/
# ---------- Configure blog files and directories
WORKDIR /var/www/html/
COPY src ./tsc/
COPY --from=builder /var/www/wp-content/themes/tsc/build/ ./tsc/wp-content/themes/tsc/build/
COPY bin /usr/local/bin/
RUN chmod +x /usr/local/bin/start-server.sh
# ---------- Install and configure Nginx
RUN apt-get update && apt-get install -y wget gnupg
RUN wget -O- http://nginx.org/keys/nginx_signing.key > nginx.key && apt-key add nginx.key && rm nginx.key
RUN echo deb http://nginx.org/packages/debian/ stretch nginx > /etc/apt/sources.list.d/nginx.list && \
echo deb-src http://nginx.org/packages/debian/ stretch nginx >> /etc/apt/sources.list.d/nginx.list
RUN apt-get update
RUN apt-get install -y --allow-downgrades --allow-remove-essential --allow-change-held-packages nginx
# Copy nginx and default site conf
COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY conf/nginx-site.conf /etc/nginx/conf.d/default.conf
# ---------- Configure environment
COPY conf/setup-env-vars.sh /tmp/setup-env-vars.sh
RUN chmod +x /tmp/setup-env-vars.sh
# ---------- Run
EXPOSE 80
CMD ["start-server.sh"]
# ---------- Configure debug
RUN sed -i "s|;error_log =.*|error_log = /var/log/fpm-php.www.log|" /usr/local/etc/php-fpm.conf
RUN echo "\ncatch_workers_output = yes" >> /usr/local/etc/php-fpm.d/www.conf
RUN echo "\nphp_flag[display_errors] = on" >> /usr/local/etc/php-fpm.d/www.conf
# RUN echo "\nphp_admin_value[error_log] = /var/log/fpm-php.www.log" >> /usr/local/etc/php-fpm.d/www.conf
RUN echo "\nphp_admin_flag[log_errors] = on" >> /usr/local/etc/php-fpm.d/www.conf
# RUN touch /var/log/fpm-php.www.log && chmod 777 /var/log/fpm-php.www.log
# Forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log && \
ln -sf /dev/stderr /var/log/nginx/error.log
So, I need help with some instructions to achieve the following values on my docker container:
max_input_vars = 5000
max_execution_time = 300
post_max_size = 50M
upload_max_filesize = 50M
So, you can explore the docker image in this way:
docker pull wordpress:5.0.3-php7.2-fpm
docker run -t -i wordpress:5.0.3-php7.2-fpm /bin/bash
Once you've done that, you can explore how the Docker image is laid out and determine whether you're updating the correct files.
The php.ini files for this image are located at /usr/local/etc in php-fpm.conf and ./php-fpm.d/www.conf. I would recommend putting your 4 variables into a file that gets copied into .../php-fpm.d/<mystuff>.conf.
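For example, a sketch of such a file (the zz-custom.conf name is arbitrary; the zz- prefix just makes it load after www.conf, and the existing COPY ./conf/php-fpm.d/* line in the Dockerfile would pick it up):
; conf/php-fpm.d/zz-custom.conf
[www]
php_admin_value[max_input_vars] = 5000
php_admin_value[max_execution_time] = 300
php_admin_value[post_max_size] = 50M
php_admin_value[upload_max_filesize] = 50M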
The start-server.sh file that you use to launch PHP is not listed here, so I can't tell what command line arguments you're sending to PHP in this case. It's probably not a good idea to override the default launch**. By default this image will run the php-fpm engine. If you just want an all-in-one image, you might consider the wordpress:5.0.3-apache which is preconfigured to handle HTTP traffic.
** Note: in fact, you're copying nginx into the Docker image as well. That would be an antipattern for Docker too. Each Docker container should have exactly one service: this one would run php-fpm to execute the WordPress engine, and there should be another that runs your front end (e.g. nginx or whatever) and connects by name to the php-fpm container to run the PHP. But I digress.
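For reference, the split that note describes could look roughly like this in docker-compose.yml (service names, ports and paths are illustrative, not taken from the question):
services:
  php:
    image: wordpress:5.0.3-php7.2-fpm
    volumes:
      - ./src:/var/www/html
  web:
    image: nginx:1.15
    ports:
      - "80:80"
    volumes:
      - ./conf/nginx-site.conf:/etc/nginx/conf.d/default.conf:ro
      - ./src:/var/www/html:ro
    depends_on:
      - php
The nginx site config would then reach PHP over the network with fastcgi_pass php:9000;.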
This question already has answers here: Why can't I use Docker CMD multiple times to run multiple services? (5 answers). Closed 4 years ago.
I have a Dockerfile that sets up NGINX and PHP and adds a WordPress repository. At boot time I want to start PHP and NGINX, but I am failing to do so. I tried adding the two commands to the CMD array, and I also tried putting them in a shell file and starting that file. Nothing worked. Below is my Dockerfile:
FROM ubuntu:16.04
WORKDIR /opt/
#Install nginx
RUN apt-get update
RUN apt-get install -y nginx=1.10.* php7.0 php7.0-fpm php7.0-mysql
#Add the customized NGINX configuration
RUN rm -f /etc/nginx/nginx.conf
RUN rm -f /etc/nginx/sites-enabled/*
COPY nginx/nginx.conf /etc/nginx/
COPY nginx/site.conf /etc/nginx/sites-enabled
#Copy the certificates
RUN mkdir -p /etc/pki/nginx
COPY nginx/certs/* /etc/pki/nginx/
RUN rm -f /etc/pki/nginx/placeholder
#Copy the build to its destination on the server
RUN mkdir -p /mnt/wordpress-blog/
COPY . /mnt/wordpress-blog/
#COPY wp-config.php
COPY nginx/wp-config.php /mnt/wordpress-blog/
#The command to run the container
CMD ["/bin/bash", "-c", "service php7.0-fpm start", "service nginx start"]
I tried putting the commands from CMD into a shell file and running the shell file from CMD. It still didn't work. What am I missing?
start.sh
#!/bin/bash
/usr/sbin/service php7.0-fpm start
/usr/sbin/service nginx start
tail -f /dev/null
Dockerfile
COPY ["start.sh", "/root/start.sh"]
WORKDIR /root
CMD ["./start.sh"]
With this, you can put more complex logic in start.sh.
You can replace the CMD line with something like:
CMD ["/bin/bash", "-c", "/usr/sbin/service php7.0-fpm start && nginx -g 'daemon off;'"]
TL;DR: You don't have an entry point.
The main idea in Docker is to have one responsibility per container. So, in order to keep a Docker container running, you have to start a program in the foreground when the container boots.
However, in your Dockerfile there is no entrypoint that starts a program in the foreground, so your container exits right after it boots.
To prevent your container from exiting, just start a program in the foreground.
Nginx, for instance.
Example scenario:
entrypoint.sh content:
#!/bin/bash
service php7.0-fpm start
nginx -g 'daemon off;'
somewhere in Dockerfile:
COPY [ "./entrypoint.sh", "/root/entrypoint.sh" ]
at the end of the Dockerfile:
ENTRYPOINT /root/entrypoint.sh
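One detail worth adding, as the other Dockerfiles in this thread do for their scripts, is making sure the entrypoint script is executable:
RUN chmod +x /root/entrypoint.sh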