Hi,
I am trying to use Supervisor to run Laravel queue jobs in the background, but it seems I won't get far without your assistance. When Supervisor starts, my worker.log shows that it couldn't open the artisan file. I have tried every online solution for issues like mine, but nothing works!
The app itself runs fine, and when I run php artisan queue:work inside my container it works like a charm, so I don't know what the issue is!
My docker-compose file:
version: '3.8'
services:
  pms:
    image: pms
    build:
      context: .
    ports:
      - "8009:8180"
    depends_on:
      - pms-db
    networks:
      - pms-network
    restart: always
  pms-db:
    image: mysql
    container_name: mysql_db
    environment:
      # MYSQL_ROOT_PASSWORD: $DB_PASSWORD
      MYSQL_ALLOW_EMPTY_PASSWORD: 'true'
      MYSQL_DATABASE: $DB_DATABASE
      # MYSQL_USER: $DB_USERNAME
      # MYSQL_PASSWORD: $DB_PASSWORD
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - pms-network
  supervisor:
    build:
      context: .
      dockerfile: ./supervisor.Dockerfile
    container_name: supervisor
    volumes:
      - .:/app
    networks:
      - pms-network
networks:
  pms-network:
volumes:
  dbdata:
    external: true
Any help would be appreciated! Also, if you think there is another way to do this, please let me know. And if you find that my code could be updated for better performance, please tell me as well!
Happy coding!
Thanks to techno, the issue was solved successfully. I also found this Docker image, redditsaved/laravel-supervisord, which is easy to configure; take a look if you are facing supervisord issues when using Supervisor in a dockerized Laravel application!
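For anyone hitting the same "couldn't open artisan" error: it is usually a path problem, since the program's command and working directory must point at where the code is mounted inside the supervisor container (/app in the compose file above). A minimal sketch of a supervisord program section, with assumed paths:

```ini
[program:laravel-worker]
; Run the queue worker from where the app is mounted in the container
directory=/app
command=php /app/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=1
redirect_stderr=true
stdout_logfile=/app/storage/logs/worker.log
```

The log file path and worker flags here are assumptions; the key point is that both directory and command use the in-container mount path, not the host path.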
I want to preface this by saying this question is more about system design and is somewhat open-ended. There isn't anything in particular I need help with, but I would appreciate some guidance. I will provide a copy of my docker-compose.yml so it's easier to visualize what I'm working with.
I'm looking to dockerize an older LAMP stack application. The app is currently deployed in a CentOS 6.10 VM, running PHP 5.4, MySQL 5.7, and Apache 2.2.15.
I wonder how I might go about dockerizing it while minimizing the number of modifications I have to make to the underlying codebase.
I was playing with aliasing deprecated functions and redefining them with an updated API, but it's been quite the hassle. Here's an example:
// Shim the removed mysql_* function onto its mysqli equivalent so old call sites keep working
if (!function_exists('mysql_num_rows')) {
    function mysql_num_rows($result)
    {
        return mysqli_num_rows($result);
    }
}
Here's my docker-compose.yml:
version: "3.8"
x-common-variables: &common-variables
  MYSQL_ROOT_PASSWORD: root
  MYSQL_USER: ...
  MYSQL_PASSWORD: ...
volumes:
  mysql:
    driver: local
services:
  mysql:
    platform: linux/x86_64
    image: mysql:5.7
    container_name: mysql_container
    environment:
      <<: *common-variables
    ports:
      - 3306:3306
    restart: unless-stopped
    volumes:
      - mysql:/var/lib/mysql
      - ./docker/init.sql:/docker-entrypoint-initdb.d/init.sql
  phpmyadmin:
    depends_on:
      - mysql
    image: phpmyadmin:latest
    container_name: phpadmin_container
    environment:
      <<: *common-variables
      PMA_HOST: mysql
    links:
      - mysql:mysql
    ports:
      - 8080:81
    restart: always
  apache:
    container_name: apache_container
    depends_on:
      - mysql
    build: ./bootstrap
    environment:
      <<: *common-variables
    extra_hosts:
      - "app1.localhost.com:127.0.0.1" # This is configured in local hosts file
      - "app2.localhost.com:127.0.0.1"
    ports:
      - 443:443 # App requires SSL - using a self-signed cert locally
      - 80:80
    volumes:
      - ./bootstrap/httpd.conf:/etc/apache2/sites-enabled/000-default.conf
      - ./bootstrap/php.ini:/usr/local/etc/php/php.ini
      - ./:/var/www
    links:
      - mysql:mysql
I'm using the php:7.4-apache image for the apache service (not shown here, it's in the Dockerfile).
As I was writing this question, I realized I could probably use a centos image and install the older versions of software required for the project. However, I'm still going to post because any insight would be helpful.
Let me know if there's any more info I can provide!
I was running my Symfony project successfully via Docker containers. Suddenly, when I access http://localhost/ I get a "File not found." error.
I know that this means the system cannot locate my files, but I am not sure what happened.
I can see that my containers are built and running okay.
I also get the same message when I try to test app endpoints through Postman.
I am on Mac Monterey 12.4.
Everything was working fine a couple of hours ago. I just switched branches to change something, then switched back. The problem occurs on both branches.
Can someone help? I do not know what to do.
Docker config:
services:
  db:
    image: postgres:${POSTGRES_VERSION:-12}-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-name}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-pass}
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
    volumes:
      - $PWD/postgres-data:/var/lib/postgresql/data:rw
    profiles:
      - db-in-docker
    ports:
      - "5432:5432"
    networks:
      - symfony
  redis:
    image: "redis:alpine"
    command: redis-server /usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"
    volumes:
      - $PWD/redis-data:/var/lib/redis
      - $PWD/redis/redis.conf:/usr/local/etc/redis/redis.conf
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - symfony
  php:
    container_name: "backend_php"
    build:
      context: ..
      dockerfile: docker/php/Dockerfile
      target: dev
      args:
        TIMEZONE: ${TIMEZONE}
    volumes:
      - symfony_docker_app_sync:/var/www/symfony/
    depends_on:
      - redis
    networks:
      - symfony
  nginx:
    build:
      context: ./nginx
    volumes:
      - ../:/var/www/symfony/
    ports:
      - 80:80
    depends_on:
      - php
    networks:
      - symfony
    env_file:
      - .env.nginx.local
First of all: why don't you use the built-in Symfony server for local development? Anyway, what does the Docker container configuration for your web server look like?
Currently, I am working a lot with WordPress, and for every project my company wants different environments. First of all, we have our development environment, which runs on our own server. Then there is the staging environment on the client's server, and also the live (production) environment.
There is also the local environment, in which every participating developer works on whatever features.
Since this is always a time-consuming hassle, I would love to use Docker to make things more straightforward.
So I set up my local environment with a docker-compose file and create MySQL, WordPress, and phpMyAdmin containers.
With a shell script, I add a WordPress theme and the plugins from our private GitLab repository as submodules, followed by docker-compose up -d.
After all the containers are up and running, I wait for a connection to the MySQL database and then feed it my backup.sql file.
That's all working fine so far...
version: '3.8'
volumes:
  wp-content: {}
  mysql-backup: {}
networks:
  wp-back:
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile-mysql
    container_name: mysql-cont
    volumes: ['mysql-backup:/root']
    environment:
      MYSQL_ROOT_PASSWORD: rootPassword
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp-user
      MYSQL_PASSWORD: wp-pass
    ports:
      - 8889:3306
    networks:
      - wp-back
    restart: always
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    container_name: pma-cont
    environment:
      PMA_HOST: db
      MYSQL_USER: wp-user
      MYSQL_PASSWORD: wp-pass
      MYSQL_ROOT_PASSWORD: rootPassword
    ports:
      - 3001:80
    restart: always
    networks:
      - wp-back
  wordpress:
    depends_on:
      - db
    container_name: wp-cont
    build:
      context: .
      dockerfile: Dockerfile-wordpress
    ports:
      - 8000:80
      - 443:443
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wp-user
      WORDPRESS_DB_PASSWORD: wp-pass
    volumes: ['./wp-content:/var/www/html/wp-content']
    networks:
      - wp-back
    restart: always
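The "wait for MySQL, then feed it the backup" step described above can be sketched roughly like this (the container name mysql-cont and the credentials come from the compose file; the wait_for helper is my own, not part of any tool):

```shell
#!/bin/sh
# wait_for CMD...: retry CMD every 2 seconds until it succeeds,
# giving up after 30 attempts (~60s).
wait_for() {
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1
    sleep 2
  done
  return 0
}

# Block until the MySQL server in the db container answers, then import the dump:
#   wait_for docker exec mysql-cont mysqladmin ping -uroot -prootPassword
#   docker exec -i mysql-cont mysql -uroot -prootPassword wordpress < backup.sql
```

The same helper works for any readiness check, which is handy because depends_on only waits for the container to start, not for the server inside it to accept connections.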
But regarding my needs for the three different environments, I am struggling with a setup where I can easily take everything I created locally and put it on the other two servers (dev & staging/live).
Because I always have the theme and the backup.sql in my Docker volume, I somehow need to share them with the other developers and the servers. For the WordPress theme, I could just pull it from the project's private GitLab repository.
But what to do with the .sql file?
Where is the best place to put it?
Should I put the Dockerfile, wp-content, and backup.sql into one single repository for the project? It gets pretty heavy then.
Furthermore, let's say after a year I want to be able to easily set up the project in my local environment with the MySQL database and WordPress uploads from the live environment, but I can't find a solution.
I appreciate any kind of brainstorming, ideas, help or links.
Cheers
Losing my mind on this one.
I've got a Lumen and MySQL setup in Docker containers. Almost everything is good to go. I can run the containers and access Lumen through a browser. I can access MySQL through Sequel Pro, no problem. And I can run php artisan migrate and it works fine.
But if I try to do anything through Lumen in the browser, it won't connect to the database, and it gives me a Connection refused error.
I'm using Lumen 5.7.7, and my .env file looks like this:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=test
DB_USERNAME=root
DB_PASSWORD=root
docker-compose.yml looks like this:
version: '3'
services:
  api:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    image: laravel-docker
    ports:
      - 8080:80
    depends_on:
      - mysqldb
    volumes:
      - .:/srv/app
    # container_name:
  mysqldb:
    image: mysql:5.7
    container_name: mysqldb
    command: mysqld --user=root --verbose
    volumes:
      - ./schemadump.sql:/docker-entrypoint-initdb.d/schemadump.sql
    ports:
      - 3306:3306
    environment:
      MYSQL_DATABASE: test
      MYSQL_USER: test
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
EDIT: I ran phpinfo() on the site and on the command line and realized that they aren't even the same PHP version, let alone the same configuration. Could that be the problem here? Looking into it more...
Well, it is working now, sort of. I've got it working from the browser but not from the command line, which I can work with. From a comment above, it looks like a missing links section connecting the api service to the database was the issue: inside a container, 127.0.0.1 refers to that container itself, so DB_HOST has to be the MySQL service's name instead. The .env file is unchanged, but docker-compose.yml now looks like this:
version: '3'
services:
  api:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    image: laravel-docker
    ports:
      - 8080:80
    links:
      - mysql
    volumes:
      - .:/srv/app
    environment:
      DB_HOST: mysql
      DB_DATABASE: test
      DB_USERNAME: test
      DB_PASSWORD: test
  mysql:
    image: mysql:5.7
    ports:
      - 13306:3306
    environment:
      MYSQL_DATABASE: test
      MYSQL_USER: test
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: root
Adding the environment section to the api service helped. I also changed the port mapping so the external port differs from the internal one. I'm not 100% sure which part made it work, but it is working OK for now and I'm not about to change it further.
I've got a database backup bundle (https://github.com/dizda/CloudBackupBundle) installed on a Symfony3 project using Docker, but I can't get it to work because it either can't find PHP or can't find MySQL.
When I run php app/console --env=prod dizda:backup:start via exec, run, or cron, I get a mysqldump "command not found" error from the PHP image, or a PHP "not found" error from the MySQL/db image.
How do I go about running a PHP command that then runs a mysqldump command?
My docker-compose file is as follows:
version: '2'
services:
  web:
    # image: nginx:latest
    build: .
    restart: always
    ports:
      - "80:80"
    links:
      - php
      - db
      - node
    volumes_from:
      - php
    volumes:
      - .:/usr/share/nginx/html
      - ./logs/nginx/:/var/log/nginx
  php:
    # image: php:fpm
    restart: always
    build: ./docker_setup/php
    links:
      - redis
    expose:
      - 9000
    volumes:
      - .:/usr/share/nginx/html
  db:
    image: mysql:5.7
    volumes:
      - "/var/lib/mysql"
    restart: always
    ports:
      - 8001:3306
    environment:
      MYSQL_ROOT_PASSWORD: gfxhae671
      MYSQL_DATABASE: boxstat_db_live
      MYSQL_USER: boxstat_live
      MYSQL_PASSWORD: GfXhAe^7!
  node:
    # image: //digitallyseamless/nodejs-bower-grunt:5
    build: ./docker_setup/node
    volumes_from:
      - php
  redis:
    image: redis:latest
I'm pretty new to Docker, so if there are any easy improvements you can see, feel free to flag them... I'm in the trial and error stage!
Your image that has your code should have all the dependencies needed for your code to run.
In this case, your code needs mysqldump installed locally for it to run. I would consider this to be a dependency of your code.
It might make sense to add a RUN line to your Dockerfile that will install the mysqldump command so that your code can use it.
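For example, assuming the php image is Debian-based (the package name varies by release: mysql-client on older Debian, default-mysql-client on newer ones), the Dockerfile addition might look like:

```dockerfile
# Install the MySQL client tools so the mysqldump binary is available to the app
RUN apt-get update \
    && apt-get install -y --no-install-recommends default-mysql-client \
    && rm -rf /var/lib/apt/lists/*
```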
Another approach altogether would be to externalize the database backup process instead of leaving that up to your application. You could have some container that runs on a cron and does the mysqldump process that way.
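That externalized approach could be sketched as an extra compose service; this is a hypothetical sidecar, and the schedule loop, host path, and dump target are all assumptions (the mysql:5.7 image already ships mysqldump):

```yaml
  backup:
    image: mysql:5.7
    depends_on:
      - db
    volumes:
      - ./backups:/backups
    # Dump the app database once a day into a host-mounted directory
    entrypoint: >
      sh -c 'while true; do
               mysqldump -h db -u root -pgfxhae671 boxstat_db_live > /backups/dump.sql;
               sleep 86400;
             done'
```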
I would consider both approaches to be clean.