I've had a problem for a week: my tests cannot connect to the database. I moved my environment to Docker, and the database connects perfectly from the app, but not from the tests. Do you know why?
My error:
[critical] Uncaught PHP Exception Doctrine\DBAL\Exception\ConnectionException: "An exception occurred in the driver: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo for database failed: Name or service not known" at /opt/project/vendor/doctrine/dbal/src/Driver/API/MySQL/ExceptionConverter.php line 103
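As context for the error above: a getaddrinfo failure means the hostname database could not be resolved at all. A minimal sketch in plain PHP (no framework) can tell DNS failures apart from credential problems; the host name matches the compose service in the files below:

```php
<?php
// Minimal sketch: check whether a hostname resolves at all, to tell DNS
// failures apart from credential problems. gethostbyname() returns its
// argument unchanged when the lookup fails, which is how failure is detected.
function hostResolves(string $host): bool
{
    return gethostbyname($host) !== $host;
}

// 'database' is the compose service name from this question; it only
// resolves when run from a container attached to the same compose network.
if (!hostResolves('database')) {
    echo "Cannot resolve 'database' - are the tests running on the compose network?\n";
}
```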
Here are my files:
docker-compose.yml
version: '3.7'

services:
  database:
    image: 'mysql:5.7'
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: app
    ports:
      - '3306:3306'
    volumes:
      - db-data:/var/lib/mysql/data:rw

  app:
    image: peedro07/app:latest
    ports:
      - "8080:80"
    environment:
      DATABASE_URL: mysql://root:root@database:3306/app

volumes:
  db-data:
My Dockerfile, which is built with docker build . -f path/Dockerfile -t peedro07/app
FROM php:8.1-apache
ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/
RUN chmod +x /usr/local/bin/install-php-extensions && \
install-php-extensions pdo_mysql intl
RUN curl -sSk https://getcomposer.org/installer | php -- --disable-tls && \
mv composer.phar /usr/local/bin/composer
COPY . /var/www/
COPY ./docker/php/apache.conf /etc/apache2/sites-available/000-default.conf
RUN cd /var/www && \
composer install
WORKDIR /var/www/
#ENTRYPOINT ["bash", "./docker/docker.sh"]
EXPOSE 80
My phpunit.xml.dist
<?xml version="1.0" encoding="UTF-8"?>
<!-- https://phpunit.readthedocs.io/en/latest/configuration.html -->
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="vendor/phpunit/phpunit/phpunit.xsd"
         backupGlobals="false"
         colors="true"
         bootstrap="tests/bootstrap.php"
         convertDeprecationsToExceptions="false"
>
    <php>
        <env name="SYMFONY_DEPRECATIONS_HELPER" value="weak"/>
        <ini name="display_errors" value="1"/>
        <ini name="error_reporting" value="-1"/>
        <server name="APP_ENV" value="test" force="true"/>
        <server name="SHELL_VERBOSITY" value="-1"/>
        <server name="SYMFONY_PHPUNIT_REMOVE" value=""/>
        <server name="SYMFONY_PHPUNIT_VERSION" value="9.5"/>
    </php>

    <testsuites>
        <testsuite name="Project Test Suite">
            <directory>tests</directory>
        </testsuite>
    </testsuites>

    <coverage processUncoveredFiles="true">
        <include>
            <directory suffix=".php">src</directory>
        </include>
    </coverage>

    <listeners>
        <listener class="Symfony\Bridge\PhpUnit\SymfonyTestsListener"/>
    </listeners>

    <!-- Run `composer require symfony/panther` before enabling this extension -->
    <!--
    <extensions>
        <extension class="Symfony\Component\Panther\ServerExtension" />
    </extensions>
    -->

    <extensions>
        <extension class="DAMA\DoctrineTestBundle\PHPUnit\PHPUnitExtension"/>
    </extensions>
</phpunit>
In my .env:
DATABASE_URL="mysql://root:root@database:3306/app"
In my .env.test:
KERNEL_CLASS='App\Kernel'
APP_SECRET='$ecretf0rt3st'
SYMFONY_DEPRECATIONS_HELPER=999999
PANTHER_APP_ENV=panther
PANTHER_ERROR_SCREENSHOT_DIR=./var/error-screenshots
DATABASE_URL="mysql://root:root@database:3306/app"
Symfony usually adds a suffix to the database name in the test environment.
You can verify that with the php bin/console debug:config doctrine --env=test command and look for the doctrine.dbal.connections.dbname_suffix config.
Or you can look in one of these files depending on which version of Symfony you are using.
Older:
config/packages/test/doctrine.yaml
doctrine:
    dbal:
        # "TEST_TOKEN" is typically set by ParaTest
        dbname: 'main_test%env(default::TEST_TOKEN)%'
Newer:
config/packages/doctrine.yaml
when@test:
    doctrine:
        dbal:
            # "TEST_TOKEN" is typically set by ParaTest
            dbname_suffix: '_test%env(default::TEST_TOKEN)%'
This config is useful to make sure you are not testing on "real data", i.e. corrupting the prod database. The token is useful if you run tests in parallel.
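As a rough illustration of how the suffixed name is composed (assuming TEST_TOKEN is only set by tools like ParaTest, so the env() default makes it an empty string otherwise):

```php
<?php
// Sketch: how the suffixed test database name is composed. TEST_TOKEN is only
// set by tools like ParaTest; the env() default makes it an empty string
// otherwise, so locally the name is just "<dbname>_test".
function testDatabaseName(string $base, string $testToken = ''): string
{
    return $base . '_test' . $testToken;
}

echo testDatabaseName('app'), "\n";       // app_test
echo testDatabaseName('app', '1'), "\n";  // app_test1 (parallel worker 1)
```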
What I would usually do is:
Create the tests database with php bin/console doctrine:database:create --env=test
Make sure my tests load fixtures to have a predictable dataset between each test
Run my tests
Thank you for your response. It was simply a Docker problem: I had to add a volume to my app service, then kill my containers and restart Docker.
After reading several docs, I also noticed that the mysql:5.7 image had some configuration problems, so I switched to the mariadb image, which works perfectly as a MySQL drop-in (even if I find this behavior a bit weird...).
Thanks to all :)
My new docker-compose.yml :
database:
  image: 'mariadb'
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: app
  command: ["mysqld", "--ignore-db-dir=lost+found", "--explicit_defaults_for_timestamp"]
  restart: always
  ports:
    - '3306:3306'
  volumes:
    - db-data:/var/lib/mysql/data:rw

app:
  image: peedro07/app:latest
  restart: always
  ports:
    - "8080:80"
  environment:
    DATABASE_URL: mysql://root:root@database:3306/app
  volumes:
    - app-data:/var/www
Related
I was able to access the MySQL database from phpMyAdmin using user admin, password root, and the URL 127.0.0.1:3310, and I was able to load the website on 127.0.0.1:8008. But when I try to log in or interact with the database, I get the error below:
SQLSTATE[HY000] [2002] Connection refused
select * from users where email = boyiajas@gmail.com limit 1
I also tried to run a migration from within the app Docker container, but that failed as well:
root@58a709f18668:/var/www/html# php artisan migrate
Illuminate\Database\QueryException
SQLSTATE[HY000] [2002] Connection refused (SQL: select * from information_schema.tables where table_schema = laravelvueblog_db and table_name = migrations and table_type = 'BASE TABLE')
Below is my .env file:
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3309
DB_DATABASE=laravelvueblog_db
DB_USERNAME=admin
DB_PASSWORD=root
Below is my vhost.conf file:
<VirtualHost *:8008>
    DocumentRoot /var/www/html/public
    <Directory "/var/www/html/public">
        AllowOverride all
        Require all granted
    </Directory>
    #ErrorLog ${APACHE_LOG_DIR}/error.log
    #CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Below is my Dockerfile:
FROM php:8-apache
USER root
RUN apt-get update -y && apt-get install -y openssl curl zip unzip git nano
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install mysqli pdo pdo_mysql opcache
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /app
COPY . /app
COPY vhost.conf /etc/apache2/sites-available/000-default.conf
RUN chown -R www-data:www-data /app && a2enmod rewrite
RUN rm -rf /var/www/html && ln -s /app /var/www/html
RUN composer install
RUN php artisan optimize:clear
CMD php artisan serve --host=0.0.0.0 --port=8000
EXPOSE 8000
Below is my docker-compose.yaml file:
version: '3.8'
services:
  db:
    image: mariadb:latest
    container_name: db
    ports:
      - 3309:3306
    environment:
      MYSQL_DATABASE: laravelvueblog_db
      MYSQL_ROOT_PASSWORD: root
      MYSQL_PASSWORD: root
      MYSQL_USER: admin
    volumes:
      - mysql_file:/docker-entrypoint-initdb.d
    networks:
      - appnetwork

  main:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    ports:
      - 8008:8000
    environment:
      # MYSQL_DATABASE: laravelvueblog_db
      # MYSQL_ROOT_PASSWORD: root
      # MYSQL_PASSWORD: root
      # MYSQL_USER: admin
      DB_HOST: db
      DB_USER: admin
      DB_PASSWORD: root
      DB_NAME: laravelvueblog_db
      WAIT_HOSTS: db:3306
    depends_on:
      - db
    links:
      - db
    networks:
      - appnetwork

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - 3310:80
    links:
      - mysql
    environment:
      PMA_HOSTS: db
      PMA_PORT: 3306
    depends_on:
      - db
    networks:
      - appnetwork

volumes:
  mysql_file:
    driver: local

networks:
  appnetwork:
    driver: bridge
After several attempts I found the problem: since all the services are on the same network (appnetwork), I had to use the internal port in my Laravel .env file, like below:
DB_PORT=3306
And now it works fine.
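The rule of thumb behind this fix: from another container on the same compose network you address a service by its name and the container-internal port, while the published port only applies when connecting from the host. A hedged sketch (service name, ports, and database name taken from the compose file in this question):

```php
<?php
// Sketch: same MariaDB instance, two different addresses depending on where
// the connection originates. 'db', 3306, 3309 and the database name are
// taken from the compose file in this question.
function mariadbDsn(bool $insideComposeNetwork): string
{
    return $insideComposeNetwork
        ? 'mysql:host=db;port=3306;dbname=laravelvueblog_db'         // service name + container port
        : 'mysql:host=127.0.0.1;port=3309;dbname=laravelvueblog_db'; // host + published port
}
```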
I want to run my unit tests through my YAML file, but they fail because they must be run on MySQL. I think something is missing. Here is the YAML file:
steps:
  - uses: shivammathur/setup-php@15c43e89cdef867065b0213be354c2841860869e
    with:
      php-version: '7.4|8.0'
  - uses: actions/checkout@v2
  - name: Copy .env
    run: php -r "file_exists('.env') || copy('.env.example', '.env');"
  - name: Install Dependencies
    run: composer install --no-progress --prefer-dist --optimize-autoloader
  - name: Generate key
    run: php artisan key:generate
  - name: Directory Permissions
    run: chmod -R 777 storage bootstrap/cache
  - name: Optimize Project
    run: php artisan optimize:clear
  - name: Execute tests (Unit and Feature tests) via PHPUnit
    env:
      DB_CONNECTION: mysql
      DB_HOST: 127.0.0.1
      DB_PORT: 3306
      DB_DATABASE: database_name
      DB_USERNAME: root
      DB_PASSWORD:
    run: |
      php artisan migrate
      php artisan test
and here is phpunit.xml file:
<server name="APP_ENV" value="testing"/>
<server name="BCRYPT_ROUNDS" value="4"/>
<server name="CACHE_DRIVER" value="array"/>
<!-- <server name="DB_CONNECTION" value="sqlite"/> -->
<!-- <server name="DB_DATABASE" value=":memory:"/> -->
<server name="MAIL_MAILER" value="array"/>
<server name="QUEUE_CONNECTION" value="sync"/>
<server name="SESSION_DRIVER" value="array"/>
<server name="TELESCOPE_ENABLED" value="false"/>
This example shows how to add MySQL to your GitHub Action so the database is available. Your GitHub Actions YAML needs a services section:
services:
  # Label used to access the service container
  mysql:
    # Docker Hub image (with version)
    image: mysql:5.7
    env:
      # You can store these values in GitHub secrets
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test_db
    # Map the "external" 3306 port to the "internal" 3306
    ports:
      - 3306:3306
    # Health checks wait until the MySQL database has started (it takes a few seconds)
    options: >-
      --health-cmd="mysqladmin ping"
      --health-interval=10s
      --health-timeout=5s
      --health-retries=3
The env for the GitHub Actions step that needs database access can include:
env:
  DB_CONNECTION: mysql
  DB_DATABASE: db_test
  DB_USER: root
  DB_PASSWORD: root
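The health check above covers most cases; if you also want the same guarantee from inside the test bootstrap, a small retry loop achieves it. A hedged sketch in plain PHP (the demonstration probe is a stand-in; a real one would attempt a PDO connection with the step's credentials):

```php
<?php
// Sketch: poll a probe until it succeeds or attempts run out. Useful before
// `php artisan migrate` when the database container is still starting and
// no health check is available.
function waitFor(callable $probe, int $attempts = 30, int $delaySeconds = 1): bool
{
    for ($i = 0; $i < $attempts; $i++) {
        if ($probe()) {
            return true;
        }
        sleep($delaySeconds);
    }
    return false;
}

// Demonstration with a probe that succeeds on its third call.
$calls = 0;
$ready = waitFor(function () use (&$calls): bool {
    return ++$calls >= 3;
}, 5, 0);
// $ready is true, $calls is 3
```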
You may want to check out Chipper CI, which does a lot of this work for you for testing Laravel apps!
I have the following php service in docker-compose.yml:
version: '3'

networks:
  laravel:
    driver: bridge

services:
  nginx:
    image: nginx:stable-alpine
    restart: unless-stopped
    ports:
      - "${WEB_PORT}:80"
    volumes:
      - "${PROJECT_DIR}:/var/www/html"
      - "${NGINX_CONFIG}:/etc/nginx/conf.d/default.conf"
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - php
      - mysql
    networks:
      - laravel

  mysql:
    image: mysql:5.7.29
    restart: unless-stopped
    user: "${HOST_UID}:${HOST_GID}"
    tty: true
    ports:
      - "${SQL_PORT}:3306"
    environment:
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - ./docker/mysql:/var/lib/mysql
    networks:
      - laravel

  php:
    build:
      context: ./docker
      dockerfile: Dockerfile-php
    user: "${HOST_UID}:${HOST_GID}"
    volumes:
      - "${PROJECT_DIR}:/var/www/html"
      - ./docker/php/php.ini:/usr/local/etc/php/php.ini
      #- "${COMPOSER_CACHE_DIR}:/.composer/cache"
      #- "${COMPOSER_CONFIG}:/.composer/config"
    working_dir: /var/www/html
    networks:
      - laravel

  npm:
    image: node:13.7
    user: "${HOST_UID}:${HOST_GID}"
    volumes:
      - "${PROJECT_DIR}:/var/www/html"
    working_dir: /var/www/html
    entrypoint: ['npm']
When I run whoami in the container, it returns:
whoami: cannot find name for user ID 1000
I think this is a problem because there is no home directory, docker-compose exec php ls ~ returns:
ls: cannot access '/home/clarg': No such file or directory
This then leads to docker-compose exec php php artisan tinker returning:
ErrorException
Writing to directory /.config/psysh is not allowed.
at vendor/psy/psysh/src/ConfigPaths.php:362
358▕ @\mkdir($dir, 0700, true);
359▕ }
360▕
361▕ if (!\is_dir($dir) || !\is_writable($dir)) {
➜ 362▕ \trigger_error(\sprintf('Writing to directory %s is not allowed.', $dir), \E_USER_NOTICE);
363▕
364▕ return false;
365▕ }
366▕
+20 vendor frames
21 artisan:37
Illuminate\Foundation\Console\Kernel::handle(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
From googling, I see this relates to the home directory, which does not exist in the container.
How can I solve this?
EDIT:
Dockerfile-php:
FROM php:8.0-fpm
ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/
RUN chmod +x /usr/local/bin/install-php-extensions && \
install-php-extensions gd zip pdo_mysql
# check https://github.com/mlocati/docker-php-extension-installer#supported-php-extensions for more extensions
COPY --from=composer /usr/bin/composer /usr/bin/composer
php.ini
https://pastebin.com/T2iYTZz2
Your docker containers don't have any knowledge of the users that may or may not exist on the host machine, so unless you've built those in with their accompanying config and directory structure the only thing you're getting out of feeding docker your local UID and GID is "running the container as something other than root", which is good.
But generally you don't want to tie a docker container/image to the particular environment that it is launched from, eg: requiring a user with the same name as your local user exist within the container, plus all of its associated directories and such.
In this specific case it looks like artisan just wants to cache some config, and you can control where that lands with the environment variable:
XDG_CONFIG_HOME=/some/writeable/directory
Which you could set in the Dockerfile, docker-compose, or .env file of your project. I would suggest setting it to somewhere in your project directory, but outside of the docroot.
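The same idea can be expressed from PHP if you would rather compute the fallback at runtime; a hedged sketch (the var/xdg-config path is an arbitrary choice, not anything psysh mandates):

```php
<?php
// Sketch: resolve a writable XDG config directory, falling back to a
// project-local path when the container user has no home directory.
// "var/xdg-config" is an arbitrary choice, not anything psysh mandates.
function resolveConfigHome(string $projectDir): string
{
    $dir = getenv('XDG_CONFIG_HOME') ?: $projectDir . '/var/xdg-config';
    if (!is_dir($dir)) {
        mkdir($dir, 0700, true);   // recursive, owner-only
    }
    return $dir;
}

// Point psysh (and anything else XDG-aware) at the resolved directory.
putenv('XDG_CONFIG_HOME=' . resolveConfigHome(sys_get_temp_dir() . '/demo-project'));
```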
Ref: https://stackoverflow.com/a/62041096/1064767
This is well enough for local dev where you want to mount in your local work dir for testing, but will likely need a bit more consideration if you're going to build/deploy a final docker image.
My project is defined in a docker-compose file, but I'm not too familiar with docker-compose definitions. When I try to docker-compose up -d on a fresh setup, the following error occurs during the build of a Docker image. It happens after composer install, under post-autoload-dump, when Laravel tries to auto-discover packages (php artisan package:discover).
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
RedisException : php_network_getaddresses: getaddrinfo failed: Name or service not known
at [internal]:0
1|
Exception trace:
1 ErrorException::("Redis::connect(): php_network_getaddresses: getaddrinfo failed: Name or service not known")
/var/www/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:126
2 Redis::connect("my_redis", "6379")
/var/www/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:126
Please use the argument -v to see more details.
Script @php artisan package:discover --ansi handling the post-autoload-dump event returned with error code 1
ERROR: Service 'my_app' failed to build: The command '/bin/sh -c composer global require hirak/prestissimo && composer install' returned a non-zero code: 1
The reason it cannot connect to my_redis:6379 is that my_redis is another service in the same docker-compose.yml file. I assume the hostname is not resolvable yet, since docker-compose builds the images before starting any containers.
EDIT: I just found this GitHub issue describing my problem: https://github.com/laravel/telescope/issues/620. It seems the problem is related to Telescope trying to use the cache driver. The difference is that I'm using Docker not just for CI/CD but for my local development.
How can I resolve this problem? Is there a way to force Redis container to up first before building my_app? Or is there a Laravel way to prevent any domain discovery? Or is there a way to specify the building of an image depends on another service to be available?
If you want to see my docker-compose.yml:
version: '3.6'
services:
# Redis Service
my_redis:
image: redis:5.0-alpine
container_name: my_redis
restart: unless-stopped
tty: true
ports:
- "6379:6379"
volumes:
- ./redis/redis.conf:/usr/local/etc/redis/redis.conf
- redisdata:/data
networks:
- app-network
# Postgres Service
my_db:
image: postgres:12-alpine
container_name: my_db
restart: unless-stopped
tty: true
ports:
- "5432:5432"
environment:
POSTGRES_DB: my
POSTGRES_PASSWORD: admin
SERVICE_TAGS: dev
SERVICE_NAME: postgres
volumes:
- dbdata:/var/lib/postgresql
- ./postgres/init:/docker-entrypoint-initdb.d
networks:
- app-network
# PHP Service
my_app:
build:
context: .
dockerfile: Dockerfile
image: my/php
container_name: my_app
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: my_app
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./:/var/www
- /tmp:/tmp #For CS Fixer
- ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
- fsdata:/my
networks:
- app-network
# Nginx Service
my_webserver:
image: nginx:alpine
container_name: my_webserver
restart: unless-stopped
tty: true
ports:
- "8080:80"
volumes:
- ./:/var/www
- ./nginx/conf.d/:/etc/nginx/conf.d/
networks:
- app-network
# Docker Networks
networks:
app-network:
driver: bridge
# Volumes
volumes:
dbdata:
driver: local
redisdata:
driver: local
fsdata:
driver: local
There is a way to make one service wait for another in Docker Compose: depends_on. But it only waits until the container is up, not until the service inside it is ready. To fix that, you have to customize the Redis image, using command to run a script that checks both the Redis container and the Redis daemon for availability; see startup-order for how to set it up.
I currently mitigate this by adding --no-scripts to the composer install in my Dockerfile and adding a start.sh, since it is Laravel's package discovery script, bound to post-autoload-dump, that wants to access Redis.
Dockerfile excerpt
#...
# Change current user to www
USER www
# Install packages
RUN composer global require hirak/prestissimo && composer install --no-scripts
RUN chmod +x /var/www/scripts/start.sh
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["/var/www/scripts/start.sh"]
start.sh
#!/usr/bin/env sh
composer dumpautoload
php-fpm
I'm sure you've resolved this yourself by now, but for anyone else coming across this question later, there are two solutions I have found:
1. Ensure Redis is up and running before your App
In your redis service in docker-compose.yml add this...
healthcheck:
  test: ["CMD", "redis-cli", "ping"]
...then in your my_app service in docker-compose.yml add...
depends_on:
  redis:
    condition: service_healthy
2. Use separate docker compose setups for local development and CI/CD pipelines
Even better, in my opinion, is to create a new docker-compose.test.yml. In it you can omit the redis service entirely and just use CACHE_DRIVER=array. You could set this either directly in the environment property of your my_app service or in a .env.testing file (make sure to set APP_ENV=testing too).
I like this approach because as your application grows there may be more and more packages you want to enable/disable or configure differently in your testing environment, and using .env.testing in conjunction with a docker-compose.testing.yml is a great way to manage that.
We have a strange issue with PHP. We have a Docker container that runs our code for local dev purposes, and we deploy this container to a Rancher environment for acceptance and production.
We have the following habit for checking whether a key in an associative array exists:
try {
$value = $someArray['does']['this']['exist'];
} catch (\Exception $e) {
// No it didn't, do something else
$value = 'fallback';
}
This is just a shortcut that saves us from checking whether each of those keys exists.
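As an aside, the usual way to express this lookup without relying on exceptions at all is the null coalescing operator, which yields the fallback whenever any key in the chain is missing:

```php
<?php
// Sketch: ?? short-circuits on any missing key (or null value) in the chain,
// so no notice/warning is raised and no error handler is involved.
$someArray = ['does' => ['this' => []]];          // 'exist' is absent

$value = $someArray['does']['this']['exist'] ?? 'fallback';
// $value === 'fallback'

$someArray['does']['this']['exist'] = 42;
$value = $someArray['does']['this']['exist'] ?? 'fallback';
// $value === 42
```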
This works perfectly in our local dev environment, but not when it's deployed to our acceptance or production environment: the catch is never reached, and $value ends up null.
How come the same docker container behaves differently when it's deployed?
EDIT (added docker-compose info)
# docker/docker-compose.yml
version: "2"
services:
  api-tools:
    container_name: api-tools
    image: bizz-registry.githost.io/neon/api-tools:acceptance
    env_file: [".env"]
    ports:
      - 8005:80
    environment:
      SYMFONY_DEBUG: "true"
      SYMFONY_ENV: "dev"
    restart: on-failure:10
    volumes:
      - .:/opt/webapp:cached
      - ./web/app_dev.php:/opt/webapp/web/app.php
      - ./docker/vhost.conf:/etc/apache2/sites-available/000-default.conf
      - ./docker/development/development.override:/etc/apache2/sites-available/development.override
      - ./docker/development/development.opcache.override:/etc/php/7.1/fpm/conf.d/tweak-opcache.ini
    working_dir: "/opt/webapp"
    external_links: [ mysql ]
    command: "/usr/bin/env php -d open_basedir= vendor/phing/phing/bin/phing setup run"
This is what setup calls:
<target name="setup">
    <exec command="php ${project.basedir}/bin/console cache:warmup --env=prod" passthru="true" checkreturn="true" />
</target>
This is what run calls:
<target name="run">
    <exec command="/opt/start.sh" passthru="true" checkreturn="true" />
    <exec command="/usr/sbin/apache2ctl -D FOREGROUND" passthru="true" checkreturn="true" />
</target>