Unit tests work locally but not on Jenkins - PHP

I've been banging my head against this for hours. I have just gotten started with Docker and build systems. I have a PHP Codeception (testing framework) codebase as an example repo. Locally I can run the tests successfully, but when I check in my code and it goes through CI (Jenkins), the Codeception client can be invoked but reports that no tests were found.
*Note: I know my build definitions could use some work; my goal is just to get a working build.
Dockerfile
FROM codeception/codeception
Makefile
APP_NAME=codeception
ROOT_DIR=${PWD}
WORK_DIR=app

docker_build:
	docker build -t $(APP_NAME) .

run_test:
	docker run --rm --name ception -w /$(WORK_DIR) -v $(ROOT_DIR):/$(WORK_DIR) $(APP_NAME) run acceptance
Just for argument's sake, this is how it prints in the build system:
docker run --rm --name ception -w /app -v /var/jenkins_home/workspace/codeception:/app codeception run
Build Triggers
make docker_build
make run_test
Local Output
OK (1 test, 0 assertions)
Jenkins Output
*Note: the build did not fail, but...
no tests executed!
Jenkins Installation
The host is a Docker Machine droplet on DigitalOcean, and I run Jenkins in a container with the host's Docker socket mounted as a volume so that I can invoke Docker from build triggers.
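For reference, a typical way to set that up looks roughly like the following (a sketch only; the image tag, ports and volume name are illustrative, not necessarily the exact command used here):

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

The docker CLI still has to exist inside the Jenkins container (baked into a derived image or mounted from the host) so that make docker_build and make run_test can work from a build step.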
ANOTHER QUESTION
Given the above preconditions with the command:
*Note the pwd at the end of the command:
docker run --rm --name ception -w /app -v /var/jenkins_home/workspace/codeception:/app codeception run; pwd
Jenkins output:
/var/jenkins_home/workspace/codeception
I was expecting it to output /app, because my understanding was that pwd is invoked from inside the Docker container, so it should have printed the /app directory. I am confused now.
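For what it's worth, the ; splits the line at the shell that runs the docker client, so pwd executes on the Jenkins host after docker run has returned; it never runs inside the container. A minimal sketch of the difference (using --entrypoint to replace the image's default codecept entrypoint):

# pwd runs on the host, after the container has exited
docker run --rm -w /app -v "$PWD":/app codeception run acceptance; pwd

# to run pwd inside the container instead, override the image entrypoint
docker run --rm -w /app -v "$PWD":/app --entrypoint pwd codeception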
Logs:
Started by GitHub push by edsk3o
Building in workspace /var/jenkins_home/workspace/codeception
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/edsk3o/codeception.git # timeout=10
Fetching upstream changes from https://github.com/edsk3o/codeception.git
> git --version # timeout=10
using GIT_ASKPASS to set credentials
> git fetch --tags --progress https://github.com/edsk3o/codeception.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 1a7dd08ef1ed9e8e7b3f236c50690b65c65f37e8 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 1a7dd08ef1ed9e8e7b3f236c50690b65c65f37e8
Commit message: "laksjd"
> git rev-list --no-walk 528640c7be393aaa06f94edc80f12234a759efd6 # timeout=10
[codeception] $ /bin/sh -xe /tmp/jenkins9213645655529951332.sh
+ make docker_build
docker build -t codeception .
Sending build context to Docker daemon 3.408MB
Step 1/1 : FROM codeception/codeception
---> 1681da57f253
Successfully built 1681da57f253
Successfully tagged codeception:latest
+ make run_test
docker run --rm --name ception -w /app -v /var/jenkins_home/workspace/codeception:/app codeception run acceptance; pwd
Codeception PHP Testing Framework v2.3.8
Powered by PHPUnit 6.5.6 by Sebastian Bergmann and contributors.
Acceptance Tests (0) ---------------------------------------
------------------------------------------------------------
Time: 89 ms, Memory: 10.00MB
No tests executed!
/var/jenkins_home/workspace/codeception
Finished: SUCCESS

Related

Building a Docker container for selenium-chrome tests

I am trying to build a Docker image to use in GitLab CI.
Here is what I have written so far:
FROM ubuntu:latest as ubuntu
FROM php:8.1-cli as php
FROM node:14.15.0-stretch AS node
FROM selenium/standalone-chrome:latest
COPY --from=ubuntu /bin /bin
COPY --from=php /app /app
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=node /usr/local/bin/node /usr/local/bin/node
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
# EXPOSE 4444
# EXPOSE 7900
LABEL miekrif uzvar-selenium
ENTRYPOINT ["/bin/bash"]
What I wanted to achieve: I have a stage in .gitlab-ci.yml with the following steps:
test-dev:
  stage: test
  image: miekrif/uzavr-selenium:latest
  script:
    - npm i
    - npm run prod
    - nohup /opt/bin/start-selenium-standalone.sh &
    - npx mocha tests/js/screenshots-* --timeout 50000
    - npx playwright test tests/js/pw_*
    - php artisan test
What I want to achieve: the current job is meant to test our project. I couldn't think of another way to start selenium/standalone-chrome so that npx can run the tests, because they refer to 127.0.0.1 (a sketch of an alternative follows the error output below).
Currently I get the following failure:
/usr/bin/sh: /usr/bin/sh: cannot execute binary file
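One alternative pattern (sketched below; this is not the author's setup, and the network name, image tag, test image name and WEBDRIVER_URL variable are illustrative) is to run selenium/standalone-chrome as its own container on a shared Docker network and point the tests at that hostname instead of 127.0.0.1:

# run Chrome/WebDriver as a separate container on a user-defined network
docker network create ci-net
docker run -d --name selenium --network ci-net --shm-size=2g selenium/standalone-chrome:latest

# the test container reaches WebDriver at http://selenium:4444 instead of 127.0.0.1
docker run --rm --network ci-net -e WEBDRIVER_URL=http://selenium:4444 \
  my-test-image npx mocha tests/js/screenshots-* --timeout 50000

In GitLab CI the same idea is usually expressed with a services: entry on the job, with the tests configured to talk to the service's hostname.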

Different behavior of two GitLab runners with the same docker-compose config

For a PHP/Symfony project, I am currently setting up GitLab CI (self-hosted CE) to build the Docker images and run the tests and code-style checks.
To run this in parallel, I have one build job which builds the Docker image, and two further jobs: the first runs the phpunit tests, the other runs the code-style checks (phpstan and codesniffer).
The project has some composer dependencies which are installed with the command docker-compose run --entrypoint="composer install -n" php. The project folder is a volume configured in the docker-compose.yml file:
php:
  image: 'git.cd.de:5050/sf/sf-software:dev_latest'
  depends_on:
    - database
  environment:
    TIMEZONE: Europe/Berlin
    XDEBUG_MODE: 'off'
    XDEBUG_CONFIG: >-
      client_host=host.docker.internal
      client_port=9003
      idekey=PHPSTORM
    PHP_IDE_CONFIG: serverName=sf
  volumes:
    - './:/var/www/html'
    - './docker/php/php.ini:/usr/local/etc/php/php.ini:ro'
This works on my local machine and it also works in CI, but only when the job runs on "the one" GitLab runner, which is installed on a second virtual machine. The "second runner" is installed on the same machine that GitLab itself runs on, and it fails with the following message:
$ docker-compose run --entrypoint="composer install -n" php
Creating cd-software_php_run ...
Creating cd-software_php_run ... done
Composer could not find a composer.json file in /var/www/html
To initialize a project, please create a composer.json file. See https://getcomposer.org/basic-usage
The composer.json file does not exist in the php Docker image.
It makes no difference whether the "second runner" performs the test job or the check job; it always fails with that error.
My .gitlab-ci.yml file:
variables:
  DOCKER_DRIVER: overlay

before_script:
  - apk add --no-cache docker-compose
  - docker info
  - docker-compose --version

build_dev:
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:dev_latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:dev_latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:dev_latest .
    - docker push $CI_REGISTRY_IMAGE:dev_latest

tests:
  services:
    - docker:dind
  needs:
    - build_dev
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker-compose pull
    - docker-compose run --entrypoint="composer install -n" php
    - docker-compose run --entrypoint="bin/console doctrine:migrations:migrate -n" php
    - docker-compose run --entrypoint="bin/console doctrine:schema:validate" php
    - docker-compose run --entrypoint="bin/console doctrine:fixtures:load -n" php
    - docker-compose run --entrypoint="vendor/bin/simple-phpunit -c phpunit.xml.dist" php

checks:
  services:
    - docker:dind
  needs:
    - build_dev
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker-compose pull
    - ls -la
    - docker-compose run --entrypoint="ls -la" php
    - docker-compose run --entrypoint="composer install -n" php
    - docker-compose run --entrypoint="composer run check-style" php
    - docker-compose run --entrypoint="composer run phpstan" php

How to stop a GitHub Actions step when functional tests fail (using Codeception)

I'm new to GitHub Actions and I am trying to set up continuous integration with functional tests.
I use Codeception and my workflow runs perfectly, but when some tests fail the step is still marked as a success. GitHub doesn't stop the action and continues to run the next steps.
Here is my workflow YAML file:
name: Run codeception tests
on:
  push:
    branches: [ feature/functional-tests/codeception ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # —— Setup Github Actions 🐙 —————————————————————————————————————————————
      - name: Checkout
        uses: actions/checkout@v2
      # —— Setup PHP Version 7.3 🐘 —————————————————————————————————————————————
      - name: Setup PHP environment
        uses: shivammathur/setup-php@master
        with:
          php-version: '7.3'
      # —— Setup Docker Environment 🐋 —————————————————————————————————————————————
      - name: Build containers
        run: docker-compose build
      - name: Start all containers
        run: docker-compose up -d
      - name: Execute www container
        run: docker exec -t my_container developer
      - name: Create parameter file
        run: cp app/config/parameters.yml.dist app/config/parameters.yml
      # —— Composer 🧙‍️ —————————————————————————————————————————————————————————
      - name: Install dependencies
        run: composer install
      # —— Check Requirements 👌 —————————————————————————————————————————————
      - name: Check PHP version
        run: php --version
      # —— Setup Database 💾 —————————————————————————————————————————————
      - name: Create database
        run: docker exec -t mysql_container mysql -P 3306 --protocol=tcp -u root --password=**** -e "CREATE DATABASE functional_tests"
      - name: Copy database
        run: cat tests/_data/test.sql | docker exec -i mysql_container mysql -u root --password=**** functional_tests
      - name: Switch database
        run: docker exec -t php /var/www/bin/console app:dev:marketplace:switch functional_tests
      - name: Execute migrations
        run: docker exec -t php /var/www/bin/console --no-interaction doctrine:migrations:migrate
      - name: Populate database
        run: docker exec -t my_container php /var/www/bin/console fos:elastica:populate
      # —— Generate Assets 🔥 ———————————————————————————————————————————————————————————
      - name: Install assets
        run: |
          docker exec -t my_container php /var/www/bin/console assets:install
          docker exec -t my_container php /var/www/bin/console assetic:dump
      # —— Tests ✅ ———————————————————————————————————————————————————————————
      - name: Run functional tests
        run: docker exec -t my_container php codeception:functional
      - name: Run Unit Tests
        run: php vendor/phpunit/phpunit/phpunit -c app/
And here are the logs of the action step:
GitHub Actions log
Does anyone know why the step doesn't fail, and how to make it report an error?
Probably codeception:functional sets an exit code of 0 even though an error occurred. docker exec passes the exit code of the process through, and GitHub Actions only fails the step if a command returns a non-zero exit code.
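A quick way to see that behaviour (a sketch; my_container is the placeholder name from the workflow above, and vendor/bin/codecept is the standard Codeception binary path rather than the custom codeception:functional command) is to check the status that docker exec hands back:

# in a terminal: docker exec passes through the exit code of the command it ran
docker exec -t my_container sh -c 'exit 3'
echo $?   # prints 3

# in a workflow step, any command that exits non-zero fails the step, so running
# the Codeception runner directly surfaces test failures
docker exec -t my_container php vendor/bin/codecept run functional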

Call a Docker container from outside Docker

I have a project running under Apache, without Docker. However, this project has a PDF export, and I want that export to run in a Docker container. How do you do that?
Here is my docker-compose.yaml:
version: "3.3"
services:
  php:
    image: registry.gitlab.com/nodevo/keneo/php:latest
    volumes:
      - .:/srv:rw,cached
      - ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro
      - ~/.composer:/tmp
  browsershot:
    image: ouranoshong/browsershot
    links:
      - chrome
  chrome:
    build: ./docker/chrome
    cap_add:
      - SYS_ADMIN
    ports:
      - '9223:9222'
The browsershot service downloads the PDF and has node and npm. For that, it simulates a browser, so I have a chrome service, but I don't know if this is the right method.
These two commands work in a terminal:
docker-compose run --rm browsershot node -v
docker-compose run --rm browsershot npm -v
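As a side check (purely a sketch: it assumes the chrome service listens on port 9222 inside the compose network, as the 9223:9222 mapping suggests, and that the links entry makes it resolvable as chrome), the browsershot container can test connectivity with a node one-liner:

# verify that the browsershot container can reach the chrome service over TCP
docker-compose run --rm browsershot node -e "require('net').connect(9222, 'chrome').on('connect', () => { console.log('chrome:9222 reachable'); process.exit(0); }).on('error', (e) => { console.error(e.message); process.exit(1); })"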
However, Browsershot needs the paths of the node and npm executables, which in this case are the two commands above, like this:
/**
 * @param string $nodePath
 * @param string $npmPath
 */
public function __construct(string $nodePath, string $npmPath)
{
    $this->browserShot = new Browsershot();
    $this->browserShot
        ->setNodeBinary($nodePath)
        ->setNpmBinary($npmPath)
        ->showBackground()
        ->setOption('fullPage', true)
        ->setOption('args', ['--disable-web-security'])
        ->emulateMedia('screen')
    ;
}
The variable $nodePath is docker-compose run --rm browsershot node and the variable $npmPath is docker-compose run --rm browsershot npm.
This code works with executables on my local machine like /usr/local/bin/node and /usr/local/bin/npm.
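One way to hand Browsershot something executable (a sketch, not the project's actual setup; the wrapper paths and the project path are made up) is to hide the docker-compose calls behind small host-side wrapper scripts and pass those paths to setNodeBinary()/setNpmBinary():

#!/bin/sh
# /usr/local/bin/docker-node (hypothetical wrapper, chmod +x it)
cd /path/to/project && exec docker-compose run --rm browsershot node "$@"

#!/bin/sh
# /usr/local/bin/docker-npm (same idea for npm)
cd /path/to/project && exec docker-compose run --rm browsershot npm "$@"

The constructor would then receive '/usr/local/bin/docker-node' and '/usr/local/bin/docker-npm' as $nodePath and $npmPath; the process invoking them still needs access to the Docker daemon, which is exactly what the "Couldn't connect to Docker daemon" error below is about.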
But, when I launch the export with docker commands, I get this error:
HTTP 500 Internal Server Error
The command "PATH=$PATH:/usr/local/bin NODE_PATH=`docker-compose run --rm browsershot node docker-compose run --rm browsershot npm root -g` docker-compose run --rm browsershot node '/media/vdufour/DATA/Sites/keneo/vendor/spatie/browsershot/src/../bin/browser.js' '{"url":"file:\/\/\/tmp\/639306024-0718333001579852859\/index.html","action":"pdf","options":{"args":["--disable-web-security"],"viewport":{"width":800,"height":600},"fullPage":true,"emulateMedia":"screen","delay":2000,"displayHeaderFooter":false,"printBackground":true}}'" failed.
Exit Code: 1(General error)
Working directory: /media/vdufour/DATA/Sites/keneo/public
Output:
================
Error Output:
================
Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
When using docker-compose, I just "ssh" into the container's shell like this:
docker-compose exec php /bin/bash
(where php is the container in my docker-compose.yml)
Remember though that for every terminal window you open, you also need to run the following to load the environment variables:
eval $(docker-machine env)
Without doing that, you'll get the Couldn't connect to Docker daemon message.
Try running the commands with bash:
docker-compose run --rm browsershot bash -c "npm && node && some_command..."

Running a package manager inside Docker

I've built an image for PHP development, and it became clear to me that I hadn't really thought about how to access the tools I need for everyday development. For example Composer, the package manager for PHP: I need to run it whenever composer.json is updated. I thought it would be worth installing those tools inside the same image, but then I don't have a way to access them. So, I can:
1. Create a separate image for Composer and run it in a different container.
2. Install Composer on my host machine.
I'd like to avoid option 2), but then, does a setup like 1) make sense? How did you solve this issue?
Unless you have some quite specific requirements there is a third option:
Connect to the container using the docker exec command:
docker exec -it CONTAINER-NAME/ID COMMAND [ARG...]
Here is the example:
1: Create your application:
echo "<?php phpinfo();" > index.php
2: Start container:
docker run -it --rm --name my-apache-php-app -p 80:80 -v "$PWD":/var/www/html php:5.6-apache
3: Open another terminal window and exec the required commands inside the running container:
docker exec -it my-apache-php-app bash -c "curl -sS https://getcomposer.org/installer | php"
docker exec -it my-apache-php-app ls
If you need a shell inside the running container, run:
docker exec -it my-apache-php-app bash
That's it!
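Following on from step 3: the installer drops a composer.phar into the container's working directory (/var/www/html for this image, i.e. the bind-mounted project folder), so whenever composer.json changes the dependencies can be reinstalled the same way, for example:

docker exec -it my-apache-php-app php composer.phar install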
