Can I use docker-compose to create a one-step setup for a completely installed PHP application? Since that's a pretty vague question, I will use WordPress as an example.
If I look at the official WordPress Docker repository, I see there's already a super-useful YAML file for docker-compose:
version: '3.1'

services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
This is great and gets me a running frontend app server with WordPress already there. It also gets a database server. It sets up everything to talk to one another. That's all great.
What this doesn't get me is a fully installed WordPress. Like a lot of PHP applications, in order to be fully installed it needs a few additional configuration fields to be set, and a few additional database fields as well.
This means I can't fully install the application until both the wordpress and db containers are fully up. I've thought about hacking up a workaround where I have the CMD or ENTRYPOINT wait around for a DB connection to be established, but the base WordPress Dockerfile(s) already uses ENTRYPOINT and CMD, so that's not an option. (or is it?)
Is there an elegant, docker-ish way to do what I want to do? Or am I stuck telling my users to docker-compose up, and then run a second command to finish the installation?
If there are extra steps that you currently do manually or through scripts after running docker-compose up for the docker-compose file posted in your question, you can replace the original wordpress image with your own image in order to make it work as expected for your business needs. All you have to do is write a new Dockerfile that produces a new, customized wordpress image. For example:
FROM wordpress:5.1.0-php7.1-apache
COPY custom_docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["custom_docker-entrypoint.sh"]
CMD ["apache2-foreground"]
custom_docker-entrypoint.sh contains the extra steps that need to be done. You may also introduce new environment variables inside the entrypoint script, to make the process as dynamic as possible for different clients without the need to build a custom image for each client.
The generated image can be used in your docker-compose file instead of the official one.
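As a sketch of what custom_docker-entrypoint.sh could look like, the retry helper below is an illustration, not the official image's code; the nc-based check, the db hostname, and the hand-off back to docker-entrypoint.sh are assumptions to adapt:

```shell
#!/bin/sh
# custom_docker-entrypoint.sh -- illustrative sketch, not the official script.

# wait_until CMD TRIES DELAY: run CMD until it succeeds, retrying up to
# TRIES times with DELAY seconds between attempts; return 1 on timeout.
wait_until() {
  cmd="$1"; tries="$2"; delay="$3"
  n=0
  until $cmd; do
    n=$((n + 1))
    if [ "$n" -ge "$tries" ]; then
      return 1
    fi
    sleep "$delay"
  done
}

# In a real entrypoint you might then do something like:
#   wait_until "nc -z db 3306" 30 2 || exit 1   # wait for the db service
#   ...extra install/configuration steps here...
#   exec docker-entrypoint.sh "$@"              # hand off to the original entrypoint
```

The exec at the end matters: it keeps the official image's normal startup behavior after your extra steps have run.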
What you will probably have to do is create your own Dockerfile based on the WordPress image, and then give them your own stuff to run.
I believe something like this should work:
docker-compose.yml
version: '3.1'

services:
  wordpress:
    build:
      context: ./docker/wordpress
    restart: always
    ports:
      - 8080:80
    depends_on:
      - "db"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
./docker/wordpress/Dockerfile
FROM wordpress
RUN your-own-commands
I only put this together briefly, so it is not really tested, but it should give you a general idea and direction.
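As a hypothetical example of your-own-commands, the Dockerfile could install wp-cli so the finishing steps can be scripted later; the download URL below is the one documented by the wp-cli project, but verify it before relying on it:

```dockerfile
FROM wordpress
# Install wp-cli so install/configuration steps can be scripted
RUN curl -o /usr/local/bin/wp https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
    && chmod +x /usr/local/bin/wp
```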
If there are some commands that you want to run after the containers are up, then there is a way. I don't know if it's the standard way to do it in Docker, but it will work.
As you already know, the wordpress image has its own ENTRYPOINT, which is the script docker-entrypoint.sh.
So what you can do is put your custom command inside this script, and it will be executed when the wordpress container starts.
You can do it as follows:
1. Start your container and copy the contents of the existing docker-entrypoint.sh.
2. Create a new docker-entrypoint.sh outside the container and edit that script to add your custom command at the appropriate location.
3. In your Dockerfile, add a line to copy this new entrypoint script to the location /usr/local/bin/docker-entrypoint.sh.
NOTE:
Do not put your custom command at the end of the script docker-entrypoint.sh. You can put it anywhere before the line exec "$@".
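Step 3 might look like this in the Dockerfile (the source path is an assumption; adjust it to wherever you keep the edited script):

```dockerfile
FROM wordpress
# Overwrite the stock entrypoint with the edited copy
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
```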
As far as I know, the MySQL image has its own startup script which may still be running after the container is up. This startup script can be used to import data into the database; while it runs, MySQL connections will not yet be accepted, but Docker will think that your db is ready.
What this means for you is that if for any reason the db init script runs long, it will stop the PHP install commands from working.
You might need to implement a polling loop which waits for the database to actually start, and only then run the install scripts on the PHP side.
Here’s an article I found that details a similar problem.
https://cweiske.de/tagebuch/docker-mysql-available.htm
There might be some other tips that might be appropriate for your use case, but I would need to know a bit more context.
EDIT:
Check Docker healthchecks. This might fit your use case.
https://docs.docker.com/compose/compose-file/#healthcheck
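A minimal sketch of such a healthcheck, adapted to the compose file from the question; note that depends_on with a condition is supported in the 2.1 file format and in the newer Compose Specification, but not in the version-3 schema used by the classic docker-compose tool, so treat the exact syntax as an assumption to check against your Compose version:

```yaml
version: "2.1"
services:
  db:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  wordpress:
    image: wordpress
    depends_on:
      db:
        condition: service_healthy
```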
Related
I've been looking into this for at least a few hours and I was unable to find a solution, so I would like to ask for your advice.
I have a docker-compose.yaml file:
version: "3.5"
services:
  php:
    build:
      context: dev/php
    volumes:
      - ./source:/application
  nginx:
    build:
      context: dev/nginx
    depends_on:
      - php
    volumes:
      - ./source:/application
      - ./dev/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
  mysql:
    image: mysql:8.0.22
    environment:
      - MYSQL_ROOT_PASSWORD=${RDS_PASSWORD}
      - MYSQL_DATABASE=${RDS_DB_NAME}
      - MYSQL_USER=npuser
      - MYSQL_PASSWORD=${RDS_PASSWORD}
    ports:
      - "3306:3306"
and now I would like to deploy it the proper way, with CLI commands, so it can be done by CI/CD.
To do so, right now I'm doing the following:
1. cut the mysql and other database parts from docker-compose.yaml, as I would like to use an RDS database
2. run eb init
3. run eb deploy --staged
What I don't like about this approach is point 1, where I need to modify the original docker-compose.yaml file, and point 3, where I need to add --staged because docker-compose.yaml changed in point 1.
Of course, I don't want to remove mysql entirely from the docker-compose.yaml file, as I would like it to be easy to run in a local dev env, but I see no option in eb to deploy only selected containers.
Also, I was wondering, maybe I should use Dockerrun.aws.json instead of docker-compose.yaml for eb deploy? I hope you can point me in the right direction, as I have no idea what a proper deploy should look like in this scenario.
I'm learning PDO now and I found it better to learn it in a LEMP Docker stack (Nginx, php-fpm, MariaDB, phpMyAdmin) on my Ubuntu 18.04 LTS.
This is my php file:
<?php
try {
    $mydb = new PDO('mysql:host=database;dbname=mysql;charset=utf8', 'root', 'admin');
} catch (Exception $e) {
    die('Error : ' . $e->getMessage());
}
?>
As you can see, I try to use PDO in my PHP code to retrieve some data from my db.
But every time I get this message in my browser (Firefox 69.0.2):
Error : could not find driver
I saw that post here: "Docker can't connect to mariadb with PHP". The problem was quite similar to mine but it didn't work for me.
Note: php-fpm and Nginx work perfectly together. Same for MariaDB and phpMyAdmin.
Here is my docker-compose.yml file:
version: "3"
services:
  nginx:
    image: tutum/nginx
    ports:
      - "7050:80"
    links:
      - phpfpm
    volumes:
      - ./nginx/default:/etc/nginx/sites-available/default
      - ./nginx/default:/etc/nginx/sites-enabled/default
      - ./logs/nginx-error.log:/var/log/nginx/error.log
      - ./logs/nginx-access.log:/var/log/nginx/access.log
  phpfpm:
    image: php:fpm
    links:
      - database:mysql
    ports:
      - "7051:9000"
    volumes:
      - ./public:/usr/share/nginx/html
  database:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: admin
    ports:
      - "7052:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    links:
      - database:mysql
    ports:
      - "7053:80"
    environment:
      PMA_HOST: mysql
      PMA_USER: root
      PMA_PASSWORD: admin
      PMA_ARBITRARY: 1
If it is possible to solve this without building my own Dockerfiles, it would be great.
But if I must, I will. This isn't a problem.
docker-compose is an "API" of sorts for a Dockerfile. You need to add those libraries (via apt-get, etc.) in the Dockerfile.
Dockerfile is your friend!
Is your PHP file inside a docker container or is it running outside docker, in the host machine?
If it is running inside a Docker container, which service is it in? Please note that the nginx service does not have a "links" entry for the database, meaning it can only access the database through the "database" hostname. Check the port as well (at the end of this post).
If your PHP file is running outside, then you have to use localhost instead of mysql in your connection string, like so: 'mysql:host=localhost;dbname=mysql;charset=utf8'. This is because docker's internal DNS is just that: internal. You can't access this hostname (database or mysql) outside docker.
Equally important, your connection string does not specify a port. You're mapping host port 7052 to the container's 3306; 3306 is MySQL's default port, and the driver assumes it if you do not specify one, which works from inside Docker, but from outside Docker you would need 7052. It's always a good idea to be explicit about hosts and ports. Check the documentation on PHP database connection strings (as I know nothing about PHP); it's probably ...;port=7052 or something.
Also, read up on docker-compose links, which you are using. It's deprecated now, so I advise not using it in future projects; I'd even advise spending some time removing it. That should take between 30 seconds and 5 minutes if everything goes well, and it won't haunt you anymore.
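Removing links can be as simple as deleting the keys, since services in the same compose file share a default network and resolve each other by service name; a sketch for one service, untested against the full setup:

```yaml
phpfpm:
  image: php:fpm
  ports:
    - "7051:9000"
  volumes:
    - ./public:/usr/share/nginx/html
  # no "links:" needed -- reach the db at hostname "database" (the service name)
```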
I found the solution.
First of all, the host must be mysql and not the name of my container (which is database):
$mydb = new PDO('mysql:host=mysql;dbname=mysql;charset=utf8', 'root', 'admin');
Inside the phpfpm container (accessible via the command docker-compose run --rm <container-name> bash), I had to enable the extension=pdo_mysql line in my config file php.ini by removing the semicolon at the beginning of its line.
To avoid doing this manually every time after a docker-compose up, I replaced the image of the phpfpm service in my docker-compose.yml file with a build of the following Dockerfile:
FROM php:fpm
RUN docker-php-ext-install pdo pdo_mysql
Finally, just build the image with the command docker-compose build, run from the directory containing the docker-compose.yml file (or pointed at it with -f).
It works perfectly for me.
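For reference, the phpfpm service would then point at that Dockerfile with a build: key instead of image:; a sketch, assuming the Dockerfile sits in a phpfpm/ directory next to docker-compose.yml (the path is an assumption):

```yaml
phpfpm:
  build: ./phpfpm      # directory containing the Dockerfile above
  links:
    - database:mysql
  ports:
    - "7051:9000"
  volumes:
    - ./public:/usr/share/nginx/html
```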
I am developing a Laravel application. I am trying to use Laravel Websockets in my application, https://docs.beyondco.de/laravel-websockets. I am using Docker / docker-compose.yml. Since Laravel Websockets runs locally on port 6001, I am having problems integrating it with docker-compose. Searching for a solution I found this link, https://github.com/laradock/laradock/issues/2002. I tried it but it's not working. Here is what I did.
I created a folder called workspace under the project root directory. Inside that folder, I created a file called, Dockerfile.
This is the content of Dockerfile
EXPOSE 6001
In the docker-compose.yml file, I added this content.
workspace:
  port:
    - 6001:6001
My docker-compose.yml file looks something like this
version: "3"
services:
  workspace:
    port:
      - 6001:6001
  apache:
    container_name: web_one_apache
    image: webdevops/apache:ubuntu-16.04
    environment:
      WEB_DOCUMENT_ROOT: /var/www/public
      WEB_ALIAS_DOMAIN: web-one.localhost
      WEB_PHP_SOCKET: php-fpm:9000
    volumes: # Only shared dirs to apache (to be served)
      - ./public:/var/www/public:cached
      - ./storage:/var/www/storage:cached
    networks:
      - web-one-network
    ports:
      - "80:80"
      - "443:443"
  php-fpm:
    container_name: web-one-php
    image: php-fpm-laravel:7.2-minimal
    volumes:
      - ./:/var/www/
      - ./ci:/var/www/ci:cached
      - ./vendor:/var/www/vendor:delegated
      - ./storage:/var/www/storage:delegated
      - ./node_modules:/var/www/node_modules:cached
      - ~/.ssh:/root/.ssh:cached
      - ~/.composer/cache:/root/.composer/cache:delegated
    networks:
      - web-one-network
When I run "docker-compose up --build -d", it is giving me the following error.
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.workspace: 'port' (did you mean 'ports'?)
What is wrong and how can I fix it? How can I use Laravel Web Socket with docker-compose?
I tried changing from 'port' to 'ports', then I got the following error message instead.
ERROR: The Compose file is invalid because:
Service workspace has neither an image nor a build context specified. At least one must be provided.
Your Dockerfile is wrong. A Dockerfile must start with a FROM <image> directive as explained in the documentation. In your case it might be sufficient to run an existing php:<version>-cli image though, avoiding the whole Dockerfile stuff:
workspace:
  image: php:7.3-cli
  command: ["php", "artisan", "websockets:serve"]
Of course you will also need to add a volume with the application code and a suitable network. If you add a reverse proxy like Nginx in front of your services, you don't need to expose the ports on your workspace either. Services may access other services as long as they are on the same network.
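Putting those pieces together, the workspace service might look like this; a sketch, assuming the Laravel code lives in the project root and reusing the web-one-network from your file:

```yaml
workspace:
  image: php:7.3-cli
  working_dir: /var/www
  command: ["php", "artisan", "websockets:serve"]
  volumes:
    - ./:/var/www
  networks:
    - web-one-network
```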
I have just started to learn PHP and MSSQL and set up both environments in Docker using docker-compose. For the most part it's going great: I have pages created in PHP and displayed on the web, and I'm able to create a database in Azure Data Studio.
The issue is that I am trying to link my php:apache container with my MSSQL container, so I can use PHP to display the database on the web or manipulate it with PHP code on the web. I have done lots of searching and I can't seem to figure out a way to do this. As a last-ditch effort I even tried to set up PHP, the PHP driver, and Apache on the MSSQL server Ubuntu container, but I could only get them connected via the command line, not the web.
So I was wondering: what code do I need to write in PHP to connect the two, and what do I need to install to make the php:apache and MSSQL containers work together? Below is my docker-compose.yaml for reference. My file setup has the docker-compose.yaml at the root, with the build: directories containing all their files. Hopefully that's some helpful info; sorry for being a beginner and not knowing exactly what info to provide. Thank you in advance to all who try to help.
version: "3.7"
services:
  homepage:
    build: ./homepage
    volumes:
      - ./homepage/public-html:/usr/local/apache2/htdocs/
    ports:
      - 5001:80
  php:
    image: php:apache
    volumes:
      - ./php:/var/www/html
    ports:
      - 5000:80
  db:
    image: "mcr.microsoft.com/mssql/server:latest"
    volumes:
      - ./db:/Documents
    environment:
      SA_PASSWORD: "hidden password so my password is not leaked"
      ACCEPT_EULA: "Y"
    ports:
      - 1433:1433
I am new to Docker.
I have read that it is better to keep an app per container.
I need to run web app (LAMP stack). So I have found the container for running PHP + Apache.
Now I need to set up a mysql container. But I am not sure what is the correct way to do that.
I read that it is possible to connect multiple containers together.
The idea is to make it transparent to the container running PHP + Apache when it tries to connect to a mysql database locally, but to redirect all local connections to another container.
Another idea I have is to provide an environment variable with the host where all connections should go. In this case I would need a publicly accessible mysql server, but I want to keep it private and accessible only locally.
Could you please suggest the better option to choose in my case?
Thank you
Use docker-compose:
For example, start from this docker-compose.yml. Put it in the same dir as your php Dockerfile:
version: "3"
services:
  web:
    build: .
    ports:
      - 8000:80
    depends_on:
      - db
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=something
    volumes:
      - ./mysql-data:/var/lib/mysql
Then:
docker-compose up
So thanks to the Docker network, you can point at the database from your PHP like this: db:3306.