Docker: referencing a different container to call mysqldump from a PHP command

I've got a database backup bundle (https://github.com/dizda/CloudBackupBundle) installed on a Symfony3 project using Docker, but I can't get it to work because it either can't find PHP or can't find MySQL.
When I run php app/console --env=prod dizda:backup:start via exec, run, or cron, I get a "mysqldump: command not found" error from the PHP image, or a "php: not found" error from the MySQL/db image.
How do I go about running a PHP command that then runs a mysqldump command?
My docker-compose file is as follows:
version: '2'
services:
  web:
    # image: nginx:latest
    build: .
    restart: always
    ports:
      - "80:80"
    volumes:
      - .:/usr/share/nginx/html
      - ./logs/nginx/:/var/log/nginx
    links:
      - php
      - db
      - node
    volumes_from:
      - php
  php:
    # image: php:fpm
    restart: always
    build: ./docker_setup/php
    links:
      - redis
    expose:
      - 9000
    volumes:
      - .:/usr/share/nginx/html
  db:
    image: mysql:5.7
    volumes:
      - "/var/lib/mysql"
    restart: always
    ports:
      - 8001:3306
    environment:
      MYSQL_ROOT_PASSWORD: gfxhae671
      MYSQL_DATABASE: boxstat_db_live
      MYSQL_USER: boxstat_live
      MYSQL_PASSWORD: GfXhAe^7!
  node:
    # image: //digitallyseamless/nodejs-bower-grunt:5
    build: ./docker_setup/node
    volumes_from:
      - php
  redis:
    image: redis:latest
I'm pretty new to Docker, so if there are any easy improvements you can see, feel free to flag them... I'm in the trial and error stage!

The image that contains your code should also have all the dependencies your code needs to run.
In this case, your code needs mysqldump installed locally, so I would consider it a dependency of your code.
It might make sense to add a RUN line to the Dockerfile for your php image that installs the mysqldump command so that your code can use it.
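For example, if the image built from ./docker_setup/php is based on the official Debian-based php images, a minimal sketch might look like this (the exact package name is an assumption that varies by Debian release):

# docker_setup/php/Dockerfile (sketch)
FROM php:fpm
# Install the MySQL client tools so that `mysqldump` is available inside the php container.
# On older Debian releases the package is called mysql-client; on newer ones it is default-mysql-client.
RUN apt-get update \
 && apt-get install -y --no-install-recommends default-mysql-client \
 && rm -rf /var/lib/apt/lists/*

The backup command can then point mysqldump at the db service over the Docker network (host db, port 3306) instead of expecting a local MySQL server.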
Another approach altogether would be to externalize the database backup process instead of leaving that up to your application. You could have some container that runs on a cron and does the mysqldump process that way.
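A rough sketch of that second idea, added to the compose file above (the service name, backup path, and the daily sleep loop standing in for cron are all assumptions):

  # added under services: alongside web, php, db, node, redis
  backup:
    image: mysql:5.7            # reused only for its mysqldump binary
    links:
      - db
    volumes:
      - ./backups:/backups
    environment:
      MYSQL_PWD: "GfXhAe^7!"    # read automatically by the mysql client tools
    command: >
      sh -c 'while true; do
      mysqldump -h db -u boxstat_live boxstat_db_live > /backups/boxstat_db_live_$$(date +%F).sql;
      sleep 86400; done'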
I would consider both approaches to be clean.

Related

Legacy LAMP Stack on Docker

I want to preface this by saying that this question is more about system design and is somewhat open-ended. There isn't anything in particular I need help with, but I would appreciate some guidance. I will provide a copy of my docker-compose.yml so it's easier to visualize what I'm working with.
I'm looking to dockerize an older LAMP stack application. The app is currently deployed in a CentOS 6.10 VM, running PHP 5.4, MySQL 5.7, and Apache 2.2.15.
I wonder how I might go about dockerizing it while minimizing the number of modifications I have to make to the underlying codebase.
I've been playing with aliasing deprecated functions and redefining them on top of the updated API, but it's been quite a hassle. Here's an example:
if (!function_exists('mysql_num_rows')) {
    function mysql_num_rows($result)
    {
        return mysqli_num_rows($result);
    }
}
Here's my docker-compose.yml:
version: "3.8"
x-common-variables: &common-variables
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: ...
MYSQL_PASSWORD: ...
volumes:
mysql:
driver: local
services:
mysql:
platform: linux/x86_64
image: mysql:5.7
container_name: mysql_container
environment:
<<: *common-variables
ports:
- 3306:3306
restart: unless-stopped
volumes:
- mysql:/var/lib/mysql
- ./docker/init.sql:/docker-entrypoint-initdb.d/init.sql
phpmyadmin:
depends_on:
- mysql
image: phpmyadmin:latest
container_name: phpadmin_container
environment:
<<: *common-variables
PMA_HOST: mysql
links:
- mysql:mysql
ports:
- 8080:81
restart: always
apache:
container_name: apache_container
depends_on:
- mysql
build: ./bootstrap
environment:
<<: *common-variables
extra_hosts:
- "app1.localhost.com:127.0.0.1" # This is configured in local hosts file
- "app2.localhost.com:127.0.0.1"
ports:
- 443:443 # App requires SSL - using a self-signed cert locally
- 80:80
volumes:
- ./bootstrap/httpd.conf:/etc/apache2/sites-enabled/000-default.conf
- ./bootstrap/php.ini:/usr/local/etc/php/php.ini
- ./:/var/www
links:
- mysql:mysql
I'm using the php:7.4-apache image for the apache service (not shown here, it's in the Dockerfile).
As I was writing this question, I realized I could probably use a centos image and install the older versions of the software the project requires. However, I'm still going to post this, because any insight would be helpful.
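The kind of Dockerfile I have in mind would look something like this untested sketch (the vault repo workaround and package names are assumptions; stock CentOS 6 ships PHP 5.3, so PHP 5.4 would actually need the SCL or Remi repos):

FROM centos:6
# CentOS 6 is EOL, so point yum at the vault archive before installing anything
RUN sed -i -e 's/^mirrorlist/#mirrorlist/' \
        -e 's|^#baseurl=http://mirror.centos.org/centos/$releasever|baseurl=http://vault.centos.org/6.10|' \
        /etc/yum.repos.d/CentOS-Base.repo \
 && yum install -y httpd php php-mysql \
 && yum clean all
COPY ./ /var/www/html/
EXPOSE 80 443
# keep Apache in the foreground (httpd 2.2 honors NO_DETACH; 2.4 honors FOREGROUND)
CMD ["httpd", "-DNO_DETACH", "-DFOREGROUND"]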
Let me know if there's any more info I can provide!

Could not open input file: /path/artisan using supervisor

Hi,
I am trying to use supervisor to run a Laravel job in the background, but it seems I won't get far without your assistance. When supervisor starts, my worker.log shows that it could not open the artisan file. I have tried all the online solutions for issues like mine, but nothing works!
My app is running well, and if I run php artisan queue:work inside my container it works like a charm, so I don't know what the issue is!
My docker-compose file:
version: '3.8'
services:
  pms:
    image: pms
    build:
      context: .
    ports:
      - "8009:8180"
    depends_on:
      - pms-db
    networks:
      - pms-network
    restart: always
  pms-db:
    image: mysql
    container_name: mysql_db
    environment:
      # MYSQL_ROOT_PASSWORD: $DB_PASSWORD
      MYSQL_ALLOW_EMPTY_PASSWORD: 'true'
      MYSQL_DATABASE: $DB_DATABASE
      # MYSQL_USER: $DB_USERNAME
      # MYSQL_PASSWORD: $DB_PASSWORD
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - pms-network
  supervisor:
    build:
      context: .
      dockerfile: ./supervisor.Dockerfile
    container_name: supervisor
    volumes:
      - .:/app
    networks:
      - pms-network
networks:
  pms-network:
volumes:
  dbdata:
    external: true
Any help would be appreciated. If you think there is another way to do this, please let me know, and if you see anything in my code that could be updated for better performance, please let me know as well!
Happy coding!
Thanks to techno, the issue was solved. I also found the Docker image redditsaved/laravel-supervisord, which is easy to configure; take a look at it if you are facing supervisord issues in a dockerized Laravel application!
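For anyone hitting the same error: "Could not open input file" usually means the path in the supervisor program definition does not match where the code is mounted inside the supervisor container (here it is mounted at /app). A minimal worker config sketch along the lines of the Laravel docs (program name, log path, and options are assumptions):

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
; the artisan path must match the mount point inside the supervisor container (/app above)
command=php /app/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/app/storage/logs/worker.log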

php container keeps restarting using docker-compose

I have the following docker-compose file. When I try to bring it up, the mariadb container starts, but the php one keeps restarting. When I look at the logs, all I get is "Interactive shell" over and over. Any idea why this is happening?
---
version: "3"
services:
  web:
    image: php:alpine3.12
    restart: unless-stopped
    volumes:
      - web_Data:/var/www/html
    ports:
      - 80:80
      - 443:443
  mariadb:
    image: mariadb
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: Password1
    volumes:
      - mariadb_Data:/var/lib/mysql
    ports:
      - 3306:3306
volumes:
  web_Data:
  mariadb_Data:
    driver: local
The reason you are getting the "Interactive shell" message is that it is the normal output of the php:alpine3.12 image, and since your container is constantly restarting, it keeps logging that message.
I don't really know PHP, but it looks like the command the image runs is docker-php-entrypoint php -a, and that starts an interactive shell, am I right?
If that is the case, then you need to run it in interactive mode. To do that, just add the last two lines below to your docker-compose.yml file:
web:
  image: php:alpine3.12
  restart: unless-stopped
  volumes:
    - web_Data:/var/www/html
  ports:
    - 80:80
    - 443:443
  stdin_open: true
  tty: true
Then your container will keep running and you will be able to interact with it.
The reason is that you are using an inappropriate PHP image. If you want to run PHP with a web server then you should use one of:
php:<version>-fpm
php:<version>-apache
See Image Variants in the Docker documentation.
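For example, a sketch of the web service using the Apache variant instead (the version tag is an assumption; the document root stays /var/www/html):

  web:
    image: php:7.4-apache     # Apache + mod_php, serves /var/www/html on port 80
    restart: unless-stopped
    volumes:
      - web_Data:/var/www/html
    ports:
      - 80:80
      - 443:443               # 443 only becomes useful once SSL is configured in the image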

How to run linux daemon in container linked to another container with docker-compose?

I have the following docker-compose.yml file which runs nginx with PHP support:
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
The PHP application inside /var/www/my-app needs to communicate with a linux daemon (let's call it myappd).
The way I see it, I need to either:
Copy the myappd into the nginx container to /usr/local/bin, make it executable with chmod +x and run it in the background.
Create a different container, copy myappd to /usr/local/bin, make it executable with chmod +x and run it in the foreground.
Now, I'm new to Docker and still researching and learning about it, but my best guess, given that I'm using Docker Compose, is that option 2 is probably the recommended one? With my limited knowledge of Docker, I'd guess that such a container would require some sort of Linux-based image (like Ubuntu) to run the binary. So maybe option 1 is preferred? Or maybe option 2 is possible with a minimal Ubuntu image, or even without one?
Either way, I have no idea how I would implement that in the compose file. Especially for option 2: how would the PHP application communicate with the daemon in a different container? Would just "sharing" a volume (where the binary is located), like I did for the nginx/php services, suffice? Or is something else required?
The simple answer is to add a command entry to the php service in docker-compose.yml.
Given that myappd is at ./my-app/ on the host machine and at /var/www/my-app/ inside the container, the updated docker-compose.yml is something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    command: ["/bin/sh", "-c", "/var/www/my-app/myappd && exec php-fpm"]
A better answer is to create a third container that runs the Linux daemon.
The new Dockerfile is something like the following.
FROM debian:jessie
COPY ./myappd /usr/src/app/
EXPOSE 44444
ENTRYPOINT ["/bin/sh"]
CMD ["/usr/src/app/myappd"]
Build the image and name it myapp/myappd.
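For example, from the directory that contains the Dockerfile and myappd:

docker build -t myapp/myappd .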
The updated docker-compose.yml is something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    networks:
      - network1
    depends_on:
      - daemon
  daemon:
    container_name: my-app-daemon
    image: myapp/myappd
    restart: always
    networks:
      - network1
networks:
  network1:
You can send requests to the hostname daemon from inside the php container. Docker containers can resolve the hostnames of other containers on the same network.
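As a sketch of what that looks like from the PHP side (the port comes from the EXPOSE line above; the "ping" payload and the protocol are purely hypothetical):

<?php
// Reach the daemon container by its service name on the shared network.
$socket = fsockopen('daemon', 44444, $errno, $errstr, 5);
if ($socket === false) {
    die("Could not reach daemon: $errstr ($errno)");
}
fwrite($socket, "ping\n");  // whatever protocol myappd actually speaks
echo fgets($socket);
fclose($socket);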

Copying bin files to docker container leads to error

I'm trying to set up a docker-compose system where I'd like to copy dev tools to /usr/local/bin/ on startup.
That's my docker-compose.yml:
version: '3'
services:
  web:
    build: docker/container/nginx
    ports:
      - 4000:80
    volumes: &m2volume
      - ./src:/var/www/html/
      - ./docker/data/bin/:/usr/local/bin/
      - ~/.composer:/var/www/.composer
    networks: &m2network
      - www
    links:
      - "php"
      - "mariadb:mysql"
  mariadb:
    image: mariadb
    networks: *m2network
    ports:
      - 8001:3306
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: magento2
      MYSQL_DATABASE: db
      MYSQL_USER: magento2
      MYSQL_PASSWORD: magento2
    volumes:
      - ./docker/container/db/docker-entrypoint-initdb.d/:/docker-entrypoint-initdb.d/
      - ./docker/container/db/conf.d:/etc/mysql/conf.d
      - ./docker/data/mariadb:/var/lib/mysql
  php:
    build: docker/container/fpm
    volumes: *m2volume
    networks: *m2network
networks:
  www:
If I leave - ./docker/data/bin/:/usr/local/bin/ in it, I get an error:
ERROR: for m2_php_1 Cannot start service php: oci runtime error: container_linux.go:262: starting container process caused "exec: \"docker-php-entrypoint\": executable file not found in $PATH"
Starting m2_mariadb_1 ... done
ERROR: for php Cannot start service php: oci runtime error: container_linux.go:262: starting container process caused "exec: \"docker-php-entrypoint\": executable file not found in $PATH"
If I comment it out, everything works fine.
What am I doing wrong here?
If I understand this correctly, and mapping the volume ./docker/data/bin/:/usr/local/bin/ is what causes the error, then that's probably because of the entrypoint defined in the php image (the &m2volume anchor is reused by the php service, so the mount applies there too).
More to the point, you're overwriting the container's /usr/local/bin folder, which contains the docker-php-entrypoint executable used as the image's entrypoint. When that disappears, you get the "executable file not found in $PATH" error.
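One possible workaround sketch (not the original poster's solution; the paths match the compose file above, but the separate mount point and the PATH override are assumptions): mount the tools somewhere that does not shadow the image's own binaries, and put that directory on the PATH. This does mean the php service can no longer reuse the shared *m2volume alias:

  php:
    build: docker/container/fpm
    volumes:
      - ./src:/var/www/html/
      - ./docker/data/bin/:/opt/devtools/       # keeps /usr/local/bin (and docker-php-entrypoint) intact
      - ~/.composer:/var/www/.composer
    networks: *m2network
    environment:
      PATH: "/opt/devtools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"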
