Bitbucket pipeline fails when using custom PHP image

I have created a custom Docker image, pushed it to Docker Hub, and added it to the Bitbucket pipeline. It gets pulled, but I cannot execute PHP. Not even php -v works; no output is visible. What am I doing wrong?
I built the image on an Intel processor. Building the image on Bitbucket and pushing it to Docker Hub directly from there did not change anything either.
This is the output I get...
The image is included in the pipeline as follows:
definitions:
  services:
    mariadb:
      image: mariadb:10.6.4
      environment:
        MYSQL_DATABASE: 'redacted'
        MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
        MYSQL_USER: 'redacted'
        MYSQL_PASSWORD: 'redacted'
  steps:
    - step: &TestPHP
        name: Test PHP
        image:
          name: "sensetence/hcwcrm-php80fpm:latest"
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_TOKEN
        deployment: test
        services:
          - mariadb
        script:
          - cd api
          - ls -lh
          - ls -lh /usr/local/bin
          - php -v
And this is how I am building the image (left out some software installation parts):
FROM php:8.0.25-fpm-alpine
ARG UID
ARG GID
ENV UID=${UID}
ENV GID=${GID}
RUN mkdir -p /var/www/html
WORKDIR /var/www/html
# MacOS staff group's gid is 20, so is the dialout group in alpine linux. We're not using it, let's just remove it.
RUN delgroup dialout
RUN addgroup -g ${GID} --system symfony
RUN adduser -G symfony --system -D -s /bin/sh -u ${UID} symfony
RUN sed -i "s/user = www-data/user = symfony/g" /usr/local/etc/php-fpm.d/www.conf
RUN sed -i "s/group = www-data/group = symfony/g" /usr/local/etc/php-fpm.d/www.conf
RUN echo "php_admin_flag[log_errors] = on" >> /usr/local/etc/php-fpm.d/www.conf
[... here installation, copying files, setting up supercronic]

Solution found. The Dockerfile also loads a php.ini; in this file are the following two lines to warm up opcache:
opcache.preload=/var/www/html/config/preload.php
opcache.preload_user=symfony
It looks like this path does not exist when the image is executed in Bitbucket Pipelines, at least at the point when the image is started and FPM wants to start running? I don't know, to be honest. I removed these two lines and everything worked.
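If preloading should stay enabled for production images, an alternative (a sketch, not part of the original solution; the override file name is hypothetical) is to blank the setting out in the pipeline step before PHP runs. An empty opcache.preload disables preloading, and ini files in PHP's conf.d scan directory are applied after php.ini, with later values overriding earlier ones:
script:
  # blank out opcache.preload for CI only; the zz- prefix makes this
  # file load last so it overrides the value baked into the image
  - echo 'opcache.preload=' > /usr/local/etc/php/conf.d/zz-ci-no-preload.ini
  - cd api
  - php -v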

Related

Building a Docker container for selenium-chrome tests

I am trying to build a Docker image to use in GitLab CI.
What I have written so far:
FROM ubuntu:latest as ubuntu
FROM php:8.1-cli as php
FROM node:14.15.0-stretch AS node
FROM selenium/standalone-chrome:latest
COPY --from=ubuntu /bin /bin
COPY --from=php /app /app
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=node /usr/local/bin/node /usr/local/bin/node
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
# EXPOSE 4444
# EXPOSE 7900
LABEL miekrif uzvar-selenium
ENTRYPOINT ["/bin/bash"]
What I wanted to achieve: I have a stage in .gitlab-ci.yml with the following points:
test-dev:
  stage: test
  image: miekrif/uzavr-selenium:latest
  script:
    - npm i
    - npm run prod
    - nohup /opt/bin/start-selenium-standalone.sh &
    - npx mocha tests/js/screenshots-* --timeout 50000
    - npx playwright test tests/js/pw_*
    - php artisan test
What I want to achieve: this job runs the tests for our project. I couldn't think of another way to start selenium/standalone-chrome so that npx can run the tests, because they connect to 127.0.0.1.
Currently I get the following failure:
/usr/bin/sh: /usr/bin/sh: cannot execute binary file
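No answer was posted in this thread, but one plausible culprit (an assumption, not a confirmed diagnosis) is the COPY --from=ubuntu /bin /bin line: selenium/standalone-chrome is itself Ubuntu-based, and overwriting its /bin with binaries from a different Ubuntu release can leave /bin/sh linked against incompatible libraries, which would match the "cannot execute binary file" error. A minimal sketch without that copy, installing PHP from packages instead of copying it between images (package names and user switching may need adjusting):
FROM node:14.15.0-stretch AS node
FROM selenium/standalone-chrome:latest
# keep the base image's own /bin intact; copy in only what is needed
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=node /usr/local/bin/node /usr/local/bin/node
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
# the selenium image runs as a non-root user; switch to root to install PHP
USER root
RUN apt-get update && apt-get install -y php-cli && rm -rf /var/lib/apt/lists/*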

Docker with Doctrine generate-proxies

I'm trying to create a Docker container (using docker-compose) for an application with Doctrine. The problem is: if I just run the application, it works, but if I try to use the application before running the command ./vendor/bin/doctrine orm:generate-proxies, I get the error:
PHP Warning: require(/tmp/__CG__DomainEntitiesAnyEntity.php): failed to open stream: No such file or directory in /var/www/html/vendor/doctrine/common/lib/Doctrine/Common/Proxy/AbstractProxyFactory.php on line 204
PHP Fatal error: require(): Failed opening required '/tmp/__CG__DomainEntitiesAnyEntity.php' (include_path='.:/usr/local/lib/php') in /var/www/html/vendor/doctrine/common/lib/Doctrine/Common/Proxy/AbstractProxyFactory.php on line 204
OK, so just run the command in docker-compose.yml:
version: '3'
services:
  apache_server:
    build: .
    working_dir: /var/www/html
    ports:
      - "80:80"
    volumes:
      - ./:/var/www/html
      - ../uploads:/var/www/uploads
      - ./.docker/apache2.conf:/etc/apache2/apache2.conf
      - ./.docker/000-default.conf:/etc/apache2/sites-avaliable/000-default.conf
      - ./.docker/php.ini:/etc/php/7.4/apache2/php.ini
    depends_on:
      - postgres_database
    command: sh -c "./vendor/bin/doctrine orm:generate-proxies"
    networks:
      - some-network
Yes, it works as expected and generates the proxies to the /tmp folder, but after the command has run and the proxies are generated, I get the message exited with code 0. This happens because Docker finishes the container's execution after getting status code 0. So I tried two more things:
Adding tail to something:
command: sh -c "./vendor/bin/doctrine orm:generate-proxies && tail -f /var/www/html/log.txt"
but when I do this, the server doesn't respond to requests (http://localhost/) anymore.
Adding tty before running the command:
tty: true
# restart: unless-stopped <--- also tried this
which doesn't work either. Is there another way to solve this without having to manually run the command inside the container every time?
PS: my dockerfile is this one:
FROM php:7.4-apache
WORKDIR /var/www/html
RUN a2enmod rewrite
RUN a2enmod headers
RUN mkdir /var/www/uploads
RUN mkdir /var/www/uploads/foo-upload-folder
RUN mkdir /var/www/uploads/bar-upload-folder
RUN chmod 777 -R /var/www/uploads
RUN apt-get update \
    && apt-get install -y \
        libpq-dev \
        zlib1g-dev \
        libzip-dev \
        unzip \
    && docker-php-ext-install \
        pgsql \
        pdo \
        pdo_pgsql \
        zip
RUN service apache2 restart
Cause of your issue
Your Docker Compose command configuration
command: sh -c "./vendor/bin/doctrine orm:generate-proxies"
in docker-compose.yml overrides the Cmd of the Docker image php:7.4-apache, which would normally start the Apache server; see
docker inspect php:7.4-apache
or, more specifically,
docker inspect --format="{{ .Config.Cmd }}" php:7.4-apache
which gives you
[apache2-foreground]
Solution in general
If you would like to run a command before the original command of a Docker image, use entrypoint and make sure you call the original entrypoint; see
$ docker inspect --format="{{ .Config.Entrypoint }}" php:7.4-apache
[docker-php-entrypoint]
For example, instead of command, define an entrypoint. Note that overriding the entrypoint in Compose also clears the image's default command, so apache2-foreground has to be passed along explicitly:
entrypoint: sh -c "./vendor/bin/doctrine orm:generate-proxies && exec docker-php-entrypoint apache2-foreground"
Solution in your case
However, in your case, I would configure Doctrine like this (see Advanced Doctrine Configuration)
$config = new Doctrine\ORM\Configuration;
// ...
if ($applicationMode == "development") {
    $config->setAutoGenerateProxyClasses(true);
} else {
    $config->setAutoGenerateProxyClasses(false);
}
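The answer leaves open how $applicationMode is set; one option (a sketch, using a hypothetical APP_ENV variable that you would set under environment: in docker-compose.yml) is to derive it from the environment:
// APP_ENV is a hypothetical variable, e.g. set to "development"
// in docker-compose.yml for the local container only
$applicationMode = getenv('APP_ENV') ?: 'production';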
In development your code changes (it is mounted as a volume) and proxies may have to be updated/generated. In production your code does not change anymore (it is copied into the Docker image). Hence, you should generate the proxies in your Dockerfile (after you have copied the source code), e.g.
FROM php:7.4-apache
WORKDIR /var/www/html
# ...
COPY . /var/www/html
RUN ./vendor/bin/doctrine orm:generate-proxies

How to stop a GitHub Actions step when functional tests fail (using Codeception)

I'm new to GitHub Actions and I'm trying to set up continuous integration with functional tests.
I use Codeception and my workflow runs perfectly, but when some tests fail the step is still reported as successful. GitHub doesn't stop the action and continues to run the next steps.
Here is my workflow YAML file:
name: Run codeception tests
on:
  push:
    branches: [ feature/functional-tests/codeception ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # —— Setup Github Actions 🐙 —————————————————————————————————————————————
      - name: Checkout
        uses: actions/checkout@v2
      # —— Setup PHP Version 7.3 🐘 —————————————————————————————————————————————
      - name: Setup PHP environment
        uses: shivammathur/setup-php@master
        with:
          php-version: '7.3'
      # —— Setup Docker Environment 🐋 —————————————————————————————————————————————
      - name: Build containers
        run: docker-compose build
      - name: Start all containers
        run: docker-compose up -d
      - name: Execute www container
        run: docker exec -t my_container developer
      - name: Create parameter file
        run: cp app/config/parameters.yml.dist app/config/parameters.yml
      # —— Composer 🧙‍️ —————————————————————————————————————————————————————————
      - name: Install dependencies
        run: composer install
      # —— Check Requirements 👌 —————————————————————————————————————————————
      - name: Check PHP version
        run: php --version
      # —— Setup Database 💾 —————————————————————————————————————————————
      - name: Create database
        run: docker exec -t mysql_container mysql -P 3306 --protocol=tcp -u root --password=**** -e "CREATE DATABASE functional_tests"
      - name: Copy database
        run: cat tests/_data/test.sql | docker exec -i mysql_container mysql -u root --password=**** functional_tests
      - name: Switch database
        run: docker exec -t php /var/www/bin/console app:dev:marketplace:switch functional_tests
      - name: Execute migrations
        run: docker exec -t php /var/www/bin/console --no-interaction doctrine:migrations:migrate
      - name: Populate database
        run: docker exec -t my_container php /var/www/bin/console fos:elastica:populate
      # —— Generate Assets 🔥 ———————————————————————————————————————————————————————————
      - name: Install assets
        run: |
          docker exec -t my_container php /var/www/bin/console assets:install
          docker exec -t my_container php /var/www/bin/console assetic:dump
      # —— Tests ✅ ———————————————————————————————————————————————————————————
      - name: Run functional tests
        run: docker exec -t my_container php codeception:functional
      - name: Run Unit Tests
        run: php vendor/phpunit/phpunit/phpunit -c app/
And here are the logs of the action step:
Github Action log
Does anyone know why the step doesn't fail and how to make it throw an error?
Probably codeception:functional sets an exit code of 0 even though an error occurred. docker exec passes the exit code of the process through, and GitHub Actions fails the step only if a command returns an exit code != 0.
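One way to rule this out (a sketch, assuming the functional tests are ultimately run by Codeception's own codecept binary, which exits non-zero when tests fail) is to invoke the runner directly, so the propagated exit code fails the step:
- name: Run functional tests
  # codecept returns a non-zero exit code on test failure;
  # docker exec passes it through, so GitHub Actions fails the step
  run: docker exec -t my_container php vendor/bin/codecept run functional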

Issues while building a local PHP site with WordPress in a sub-directory

We use Docker, docker-compose and Webpack to build a local environment for a PHP site. Recently I was tasked with adding a blog to the local setup using WordPress. I have been able to get everything up and running almost as I intended, however there have been some issues with live reloading of the site. I cannot for the life of me get the setup to work so that both the site-root files and the blog sub-directory files live-reload when saved. I can get either one to work, but not both. We use the BrowserSync plugin in Webpack to reload on any change it sees in the dist folder.
I believe the issue comes from the volume mounts in the docker-compose file. If I mount only the WordPress wp-content files:
volumes:
  - ./dist/blog/wp-content/uploads:/var/www/html/blog/wp-content/uploads
  - ./dist/blog/wp-content/plugins:/var/www/html/blog/wp-content/plugins
  - ./dist/blog/wp-content/themes:/var/www/html/blog/wp-content/themes
The WordPress blog gets updated upon save, but any files not under blog/ do not. If I instead mount the root folder in volumes, all files except the WordPress files reload:
volumes:
  - ./dist:/var/www/html
And when I exec into the blog folder, the entire WordPress installation has been erased or overwritten, so the WP site can no longer be used. If I have all four lines in, same result. I am not sure if anyone can help me, but I hope someone has run into this issue before; I appreciate any help you can give. I have tried to include the relevant file info; let me know if I need to add more. A possible workaround is sketched after the files below.
dist folder structure
dist/
  blog/
    wp-content/
      themes/
        custom-themes/
          ... theme-files
  index.php
  contactus.php
  about.php
  ... etc
dockerfile
FROM php:7.0-apache
# Run Linux apt (Advanced Package Tool) to update and install any packages
RUN apt-get update && \
    apt-get install -y --no-install-recommends
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
# Enable mod_rewrite in apache modules
RUN a2enmod rewrite
# Manually set up the apache environment variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
# Expose apache.
EXPOSE 80
ADD ves-apache-config.conf /etc/apache2/sites-enabled/000-default.conf
WORKDIR /var/www/html/
COPY ./dist /var/www/html/
WORKDIR /var/www/html/blog/
# Set our wordpress environment variables
ENV WORDPRESS_VERSION 5.2.2
ENV WORDPRESS_SHA1 3605bcbe9ea48d714efa59b0eb2d251657e7d5b0
# Download and unpack wordpress
RUN set -ex; \
    curl -o wordpress.tar.gz -fSL "https://wordpress.org/wordpress-${WORDPRESS_VERSION}.tar.gz"; \
    echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c -; \
    # upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
    tar -xzf wordpress.tar.gz -C /var/www/html/blog; \
    rm wordpress.tar.gz; \
    chown -R www-data:www-data /var/www/html/blog
RUN cp -r /var/www/html/blog/wordpress/. /var/www/html/blog/
RUN rm -rf /var/www/html/blog/wordpress.tar.gz
RUN rm -rf /var/www/html/blog/wordpress
CMD ["apache2-foreground"]
docker-compose.yml
version: "3"
services:
server:
# Name our container
container_name: corporate-site
environment:
- COMPOSE_CONVERT_WINDOWS_PATHS=1
depends_on:
- database
build:
context: ./
volumes:
# - ./dist:/var/www/html
- ./dist/blog/wp-content/uploads:/var/www/html/blog/wp-content/uploads
- ./dist/blog/wp-content/plugins:/var/www/html/blog/wp-content/plugins
- ./dist/blog/wp-content/themes:/var/www/html/blog/wp-content/themes
restart: always
ports:
- "8080:80"
# Logging Control
logging:
driver: none
### MYSQL DATABASE ###
database:
container_name: blog-database
build:
context: ./config/docker/database
volumes:
- datab:/var/lib/mysql
restart: always
ports:
- "3306:3306"
volumes:
datab:
Webpack file
module.exports = merge(base, {
  mode: 'development',
  devtool: 'inline-source-map',
  watch: true,
  plugins: [
    new BrowserSyncPlugin({
      host: 'localhost',
      proxy: 'http://localhost:8080',
      port: 3200,
      open: true,
      files: [
        './dist/*.php',
        './dist/blog/wp-content/themes/blog/*.php',
        './dist/blog/wp-content/themes/blog/*.css'
      ]
    }),
  ]
})
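No answer was posted here, but the symptoms are consistent with how bind mounts work: mounting ./dist over /var/www/html hides everything the image placed below that path, including the WordPress files the Dockerfile downloads into blog/. A common workaround (a sketch, untested against this setup) is to keep the root mount and shield blog/ with an anonymous volume, which Docker seeds from the image's content on first start, then bind-mount only the wp-content pieces back in:
volumes:
  - ./dist:/var/www/html
  # anonymous volume: prevents the ./dist bind mount above from
  # shadowing the WordPress files baked into the image under blog/
  - /var/www/html/blog
  - ./dist/blog/wp-content/uploads:/var/www/html/blog/wp-content/uploads
  - ./dist/blog/wp-content/plugins:/var/www/html/blog/wp-content/plugins
  - ./dist/blog/wp-content/themes:/var/www/html/blog/wp-content/themes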

Install cURL compiled with OpenSSL and zlib in Dockerfile

I need to install cURL compiled with OpenSSL and zlib via a Dockerfile for a Debian image with Apache and PHP 5.6. I have tried many approaches, but failed because I don't have a strong understanding of Linux. I use docker-compose to bring up my container. My docker-compose.yaml looks like:
version: '2'
services:
  web:
    build: .
    command: php -S 0.0.0.0:80 -t /var/www/html/
    ports:
      - "80:80"
    depends_on:
      - db
    volumes:
      - $PWD/www/project:/var/www/html
    container_name: "project-web-server"
  db:
    image: mysql:latest
    ports:
      - "192.168.99.100:3306:3306"
    container_name: "project-db"
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpass
      MYSQL_ROOT_PASSWORD: dbpass
As a build script I use this Dockerfile:
FROM php:5-fpm
RUN apt-get update && apt-get install -y \
    apt-utils \
    curl libcurl3 libcurl3-dev php5-curl php5-mcrypt
RUN docker-php-ext-install -j$(nproc) curl
'docker-php-ext-install' is a helper script from the base image https://hub.docker.com/_/php/
The problem is that after $ docker build --rm . (which is successful) I don't get an image with cURL+SSL+zlib. After $ docker-compose up I have a working container with Apache+MySQL and can run my project, but the libraries I need are not there.
Could you explain how to add these extensions to my Apache in the container properly? I even tried to create my own Dockerfile and build Apache+PHP+the needed libs there, but got no result.
Your Dockerfile is not complete. You have not done a COPY (or similar) to transfer your source code from the host into the container. The point of a Dockerfile is to set up an environment together with your source code, finishing by launching a process (typically a server):
COPY code-from-some-location into-location-in-container
CMD path-to-your-server
... as per the URL you reference, a more complete Dockerfile would look like this:
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
Notice the COPY, which recursively copies all files/dirs (typically your source code plus data and/or config files) from the $PWD where you execute the docker build command into the specified location inside the container. In Unix a period (.) indicates the current directory, so the command
COPY . /usr/src/myapp
will copy all files and directories in the current directory on the host computer (the one you are using when typing the docker build command) into the container directory called /usr/src/myapp.
The WORKDIR changes the working directory inside the container to the supplied path.
Finally, the CMD launches the server, which hums along once you launch the container.
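To check whether the curl extension (with its SSL and zlib support) actually ended up in the built image, a quick probe can help (a sketch; the image tag is hypothetical):
docker build --rm -t project-web .
# curl_version() reports ssl_version and libz_version fields
# if PHP's curl extension is present in the image
docker run --rm project-web php -r 'print_r(curl_version());'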