How to access a Docker image built on Azure DevOps? - php

I want to create a Docker image of a Laravel application on Azure DevOps and deploy it to GCR. I committed a project created with Laravel 5.5 that works with Docker locally. I thought that if I created a build pipeline on Azure DevOps it would pick up docker-compose.yml, but that failed, so I tried azure-pipelines.yml and was able to create an image (I still don't know where it is placed). I would like to know how I can access the Docker image from the command line the way I do locally. Is that possible?
This is the YAML file:
trigger:
- my_build
resources:
- repo: self
variables:
  tag: '$(Build.BuildId)'
  GOOGLE_PROYECT_ID: "deft-clarity-myGCPry"
  GOOGLE_PROYECT_NAME: "Dashboard LW"
  GOOGLE_APPLICATION_CREDENTIALS: '$(admin_gcr_myGCPry.secureFilePath)'
  GOOGLE_CLOUD_KEYFILE_JSON: '$(admin_gcr_myGCPry.secureFilePath)'
stages:
- stage: Build
  displayName: Build mypr img
  jobs:
  - job: Build_My_Pry_Image
    displayName: Job Build Dashboard Image
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        dockerfile: '$(Build.SourcesDirectory)/.docker/Dockerfile'
        tags: |
          $(tag)
    - task: DownloadSecureFile@1
      name: admin_gcr_361712
      displayName: Download Service Account
      inputs:
        secureFile: 'admin-gcr-mygcpry.json'
    - bash: 'echo $GOOGLE_PROYECT_ID'
    - bash: 'gcloud --version'
      name: "version"
    - bash: 'gcloud auth activate-service-account admin-gcr@deft-clarity-myGCPry.iam.gserviceaccount.com --key-file=$GOOGLE_APPLICATION_CREDENTIALS'
      name: "Activate_Service"
    - bash: 'gcloud config set project deft-clarity-myGCPry'
    - bash: 'gcloud config list'
    - bash: 'gcloud compute networks list'
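The image built by Docker@2 with command: build only exists on the hosted agent, which is discarded when the job ends, so there is nothing to pull locally unless the pipeline pushes the image to a registry first. A minimal sketch of such a step, assuming you create a Docker registry service connection for gcr.io (hypothetically named gcr-connection here) and pick a repository name (dashboard-lw is also just a placeholder):
    - task: Docker@2
      displayName: Build and push image to GCR
      inputs:
        command: buildAndPush
        containerRegistry: 'gcr-connection'                # assumption: your gcr.io service connection
        repository: 'deft-clarity-myGCPry/dashboard-lw'    # assumption: <project-id>/<image-name>
        dockerfile: '$(Build.SourcesDirectory)/.docker/Dockerfile'
        tags: |
          $(tag)
After the pipeline runs, docker pull gcr.io/deft-clarity-myGCPry/dashboard-lw:<buildId> should then work from your local command line once you are authenticated against the registry.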

Related

GitHub Action deploying over SSH to IONOS

I have a problem deploying a Symfony project with GitHub Actions. I can connect over SSH and execute a git pull or a php bin/console doctrine:migrations:migrate, but I can't get the composer command to work.
I followed the various explanations from IONOS (https://www.ionos.com/digitalguide/websites/web-development/using-php-composer-in-ionos-webhosting-packages/), but GitHub Actions tells me "Could not open input file: composer.phar".
Here is my script, if anyone has an idea:
name: CD
on:
  push:
    branches: [ develop ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: SSH and Deploy
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.APP_HOST }}
          username: ${{ secrets.APP_USER }}
          password: ${{ secrets.APP_PASS }}
          port: 22
          script: |
            cd /homepages/14/d800745077/htdocs/clickandbuilds/dashJob
            git pull
            /usr/bin/php8.0-cli composer.phar i
            /usr/bin/php8.0-cli bin/console d:m:m -n
It depends on your current working directory.
If the repository you just pulled contains composer.phar, then the command will work.
If not, run find . -name "composer.phar" to check where that file actually is in the repository.
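For example, the deploy script could fall back to downloading composer.phar when it is not tracked in the repository; a rough sketch, assuming the PHP 8.0 CLI path used above is correct:
script: |
  cd /homepages/14/d800745077/htdocs/clickandbuilds/dashJob
  git pull
  # Download composer.phar into the project directory if it is missing
  if [ ! -f composer.phar ]; then
    curl -sS https://getcomposer.org/installer | /usr/bin/php8.0-cli
  fi
  /usr/bin/php8.0-cli composer.phar install --no-interaction
  /usr/bin/php8.0-cli bin/console doctrine:migrations:migrate -n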
IONOS also offers a tool named Deploy Now that should ease your setup. For PHP you can find the docs here.

Run php artisan commands from Cloud Build

I'm using Cloud Build to deploy my app to Cloud Run. I'd like to add php artisan commands to my cloudbuild.yaml to run migrations, initialise the Passport library, and so on, but I get this error on my Laravel Init step:
Starting Step #3 - "Laravel Init"
Step #3 - "Laravel Init": Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #3 - "Laravel Init": bash: php: command not found
Step #3 - "Laravel Init": bash: line 1: php: command not found
Step #3 - "Laravel Init": bash: line 2: php: command not found
Step #3 - "Laravel Init": bash: line 3: php: command not found
Step #3 - "Laravel Init": bash: line 4: php: command not found
Finished Step #3 - "Laravel Init"
ERROR
ERROR: build step 3 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 127
And here is my cloudbuild.yaml:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args:
      ...
    id: Build
  # Push the container image to Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    ...
    id: Push
  # Deploy the container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    id: Deploy
    entrypoint: gcloud
    ...
  # Laravel Init
  - name: 'gcr.io/cloud-builders/gcloud'
    id: Laravel Init
    entrypoint: "bash"
    args:
      - "-c"
      - |
        php artisan migrate --force
        php artisan db:seed --force
        php artisan db:seed --class=Database\\Seeders\\UsersTableSeeder --force
        php artisan passport:install
images:
  - 'europe-west3-docker.pkg.dev/$PROJECT_ID/.....'
tags:
  - latest
How can I execute my php artisan commands?
I found a solution: the exec-wrapper helper. With it I can run my Laravel container with its own environment and connect to Cloud SQL through the embedded Cloud SQL proxy. I just pass the image built in the first step of cloudbuild.yaml, set the database socket connection, and point it at a migration.sh file that runs all my php artisan commands.
I'm using MySQL, so adjust the port and connection name if you use another database.
cloudbuild.yaml:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args:
      ...
    id: Build
  # Push the container image to Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    ...
    id: Push
  # Deploy the container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    id: Deploy
    entrypoint: gcloud
    ...
  # Laravel Init
  - name: 'gcr.io/google-appengine/exec-wrapper'
    id: Laravel Init
    args: [
      '-i', '<YOUR_IMAGE_URL>',
      '-e', 'DB_CONNECTION=mysql',
      '-e', 'DB_SOCKET=/cloudsql/<YOUR_CLOUD_SQL_INSTANCE>',
      '-e', 'DB_PORT=3306',
      '-e', 'DB_DATABASE=<YOUR_DATABASE_NAME>',
      '-e', 'DB_USERNAME=<YOUR_DB_USER>',
      '-e', 'DB_PASSWORD=<YOUR_DB_PASS>',
      '-s', '<YOUR_CLOUD_SQL_INSTANCE>',
      '--', '/app/scripts/migration.sh'
    ]
images:
  - 'europe-west3-docker.pkg.dev/$PROJECT_ID/.....'
Note the /app prefix in /app/scripts/migration.sh: /app is the WORKDIR set in my Dockerfile.
migration.sh looks like this:
#!/bin/bash
php artisan migrate --force
php artisan db:seed --force
#... add more commands
Don't forget to grant the Cloud SQL Client role to the Cloud Build service account in the IAM section, otherwise Cloud Build cannot connect to your Cloud SQL instance.
Also check whether your image has an entrypoint script. If it does, it must end with exec "$@" so that the command passed after -- by the exec wrapper actually runs; without it, the commands are ignored.
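For illustration only (not part of the original answer), such an entrypoint script could end like this:
#!/bin/bash
set -e
# ... usual container start-up work (config caching, permissions, etc.) ...
# Hand control to the command passed to the container, e.g.
# /app/scripts/migration.sh supplied by the exec-wrapper step above.
exec "$@"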
Based on the output php: command not found, it looks like PHP is not installed in the image used by that build step.

Using MySQL in a GitHub workflow always leads to an SQLSTATE[HY000] [1045] Access denied for user error

I'm currently configuring a CI pipeline for my Laravel project via GitHub Actions.
This is my build.yml file:
# GitHub Action for Laravel with MySQL and Redis
name: API
on: [push, pull_request]
jobs:
  laravel:
    name: Laravel (PHP ${{ matrix.php-versions }})
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: 'secret'
          MYSQL_DATABASE: 'content_information_test'
          MYSQL_USER: 'homestead'
          MYSQL_PASSWORD: 'secret'
        ports:
          - 33306:3306/tcp
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3
      redis:
        image: redis
        ports:
          - 6379/tcp
        options: --health-cmd="redis-cli ping" --health-interval=10s --health-timeout=5s --health-retries=3
    strategy:
      fail-fast: false
      matrix:
        php-versions: ['7.4']
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup PHP, with composer and extensions
        uses: shivammathur/setup-php@v2 # https://github.com/shivammathur/setup-php
        with:
          php-version: ${{ matrix.php-versions }}
          extensions: mbstring, dom, fileinfo, mysql
          coverage: xdebug # optional
      - name: Start mysql service
        run: sudo /etc/init.d/mysql start
      - name: Get composer cache directory
        id: composercache
        run: echo "::set-output name=dir::$(composer config cache-files-dir)"
      - name: Cache composer dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.composercache.outputs.dir }}
          # Use composer.json for key, if composer.lock is not committed.
          # key: ${{ runner.os }}-composer-${{ hashFiles('**/composer.json') }}
          key: ${{ runner.os }}-composer-${{ hashFiles('**/composer.lock') }}
          restore-keys: ${{ runner.os }}-composer-
      - name: Install Composer dependencies
        run: composer install --no-progress --prefer-dist --optimize-autoloader
      - name: Copy Env File
        run: cp .env.testing .env
      - name: Setup database user
        run: mysql -u runner -e 'CREATE USER 'worker'@'localhost' IDENTIFIED BY 'secret';'
      - name: Flush privileges
        run: mysql -u worker --password=secret -e 'FLUSH PRIVILEGES;'
      - name: Create testing database
        run: mysql -u worker --password=secret -e 'CREATE DATABASE IF NOT EXISTS content_information_test;'
      - name: Migrate Test Database
        run: php artisan migrate --env=testing --seed --force
        env:
          DB_PORT: 33306:3306/tcp
          REDIS_PORT: ${{ job.services.redis.ports['6379'] }}
      - name: Change Directory Permissions
        run: chmod -R 777 storage bootstrap/cache
      - name: Static Analysis via PHPStan
        run: ./vendor/bin/phpstan analyse app/BusinessDomain app/DataDomain app/Infrastructure tests/Unit -c phpstan.neon
      - name: Execute tests (Unit and Feature tests) via PHPUnit
        run: vendor/bin/phpunit
        env:
          DB_PORT: 33306:3306/tcp
          REDIS_PORT: ${{ job.services.redis.ports['6379'] }}
      - name: Run code style fixer on app/
        run: php tools/php-cs-fixer/vendor/bin/php-cs-fixer fix app/
      - name: Run code style fixer on tests/
        run: php tools/php-cs-fixer/vendor/bin/php-cs-fixer fix tests/
      - name: Run code style fixer on database/
        run: php tools/php-cs-fixer/vendor/bin/php-cs-fixer fix database/
      - name: Run code style fixer on routes/
        run: php tools/php-cs-fixer/vendor/bin/php-cs-fixer fix routes/
The problem is that the action always fails at the "Migrate Test Database" step with the following error:
Run php artisan migrate --env=testing --seed --force
  php artisan migrate --env=testing --seed --force
  shell: /usr/bin/bash -e {0}
  env:
    DB_PORT: 33306:3306/tcp
    REDIS_PORT: 49153
In Connection.php line 678:
  SQLSTATE[HY000] [1045] Access denied for user 'homestead'@'localhost' (using password: NO) (SQL: select * from information_schema.tables where table_schema = content_information_test and table_name = migrations and table_type = 'BASE TABLE')
In Connector.php line 70:
  SQLSTATE[HY000] [1045] Access denied for user 'homestead'@'localhost' (using password: NO)
Error: Process completed with exit code 1.
Unfortunately this looks like correct behaviour to me, since I never created a user named homestead. Yet I still don't know how to create a MySQL user I can actually use, because I always get the SQLSTATE[HY000] [1045] Access denied for user error when trying to use MySQL from the workflow.
Does anyone know how to make this work?
I also ran into this issue while setting up a CI/CD pipeline with GitHub Actions and Laravel Dusk, using the default GitHub Actions workflow from the Dusk documentation, which, as I've come to expect for anything related to Dusk, is lacking.
I fixed it by adding the database environment variables from my .env.dusk.testing file to the GitHub Actions workflow, so that the runner has access to them. I also moved them from the "Run Dusk" step up to the job level, making them available to every other step in the job:
name: CI
on: [push]
jobs:
  dusk-php:
    env:
      APP_URL: "http://127.0.0.1:8000"
      DB_DATABASE: CICD_test_db
      DB_USERNAME: root
      DB_PASSWORD: root
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Prepare The Environment
        run: cp .env.example .env
      - name: Create Database
        run: |
          sudo systemctl start mysql
          mysql --user="root" --password="root" -e "CREATE DATABASE CICD_test_db character set UTF8mb4 collate utf8mb4_bin;"
      - name: Install Composer Dependencies
        run: composer install --no-progress --prefer-dist --optimize-autoloader
      - name: Generate Application Key
        run: php artisan key:generate
      - name: Upgrade Chrome Driver
        run: php artisan dusk:chrome-driver `/opt/google/chrome/chrome --version | cut -d " " -f3 | cut -d "." -f1`
      - name: Start Chrome Driver
        run: ./vendor/laravel/dusk/bin/chromedriver-linux &
      - name: Run Laravel Server
        run: php artisan serve --no-reload &
      - name: Run Dusk Tests
        run: php artisan dusk --env=testing -vvv
      - name: Upload Screenshots
        if: failure()
        uses: actions/upload-artifact@v2
        with:
          name: screenshots
          path: tests/Browser/screenshots
      - name: Upload Console Logs
        if: failure()
        uses: actions/upload-artifact@v2
        with:
          name: console
          path: tests/Browser/console
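Applied to the question's workflow above, the equivalent fix would be job-level variables that match the mysql service it defines; a sketch based on those service values (the host and the flattened port are my assumptions about how the service container is exposed to the runner):
jobs:
  laravel:
    env:
      DB_CONNECTION: mysql
      DB_HOST: 127.0.0.1   # assumption: the service port is published on the runner host
      DB_PORT: 33306       # the host side of the 33306:3306 mapping, not "33306:3306/tcp"
      DB_DATABASE: content_information_test
      DB_USERNAME: homestead
      DB_PASSWORD: secret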

How to install the GD library? Laravel on AWS Lambda with Bref

When using Intervention\Image in Laravel on Lambda, the following error occurred. By the way, it works in the local environment, so I have to add gd:
[2021-08-17 10:37:18] DEV.ERROR: GD Library extension not available with this PHP installation.
{"exception":"[object] (Intervention\Image\Exception\NotSupportedException(code: 0):
GD Library extension not available with this PHP installation.
at /var/task/vendor/intervention/image/src/Intervention/Image/Gd/Driver.php:19)
What I looked up
https://bref.sh/docs/environment/php.html#extensions
https://github.com/brefphp/extra-php-extensions
Deployment method: we deploy to Lambda using the sls command:
sls deploy --stage dev
Based on that investigation, I ran
composer require bref/extra-php-extensions
and added the lines marked #add below to serverless.yml:
plugins:
  - ./vendor/bref/bref
  - ./vendor/bref/extra-php-extensions #add
functions:
  # This function runs the Laravel website/API
  web:
    image:
      name: laravel
    events:
      - httpApi: '*'
  # This function lets us run artisan commands in Lambda
  artisan:
    handler: artisan
    timeout: 120 # in seconds
    layers:
      - ${bref:layer.php-80}
      - ${bref:layer.console}
      - ${bref-extra:gd-php-80} #add
Even with the above settings added and deployed, nothing is updated. Why?
Environment:
Laravel Framework 8.33.1
PHP 7.4.3
bref
serverless
I'm sorry if my English is strange.
Put the layers into the web function:
plugins:
  - ./vendor/bref/bref
  - ./vendor/bref/extra-php-extensions #add
functions:
  # This function runs the Laravel website/API
  web:
    image:
      name: laravel
    layers:
      - ${bref-extra:gd-php-80} #add
    events:
      - httpApi: '*'
  # This function lets us run artisan commands in Lambda
  artisan:
    handler: artisan
    timeout: 120 # in seconds
    layers:
      - ${bref:layer.php-80}
      - ${bref:layer.console}
Then add a php/conf.d folder and put a file with a .ini extension inside it, for example php.ini, containing just:
extension=gd
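Assuming the php/conf.d folder lives in the project root next to serverless.yml (that is where Bref looks for extra .ini files), the layout would be roughly:
project-root/
├── serverless.yml
├── composer.json
└── php/
    └── conf.d/
        └── php.ini   # contains: extension=gd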

How to use GitHub actions/checkout@v2 inside my own Docker container

I have my own PHP image, which I would like to use to run my project's tests on:
container: rela589n/doctrine-event-sourcing-php:latest
services:
  test_db:
    image: postgres:13-alpine
    env:
      POSTGRES_DB: des
      POSTGRES_USER: des_user
      POSTGRES_PASSWORD: p@$$w0rd
steps:
  - uses: actions/checkout@v2
  - whatever_needed_to_run_tests_inside_container
This fails on the checkout action with this error:
EACCES: permission denied, open '/__w/doctrine-event-sourcing/doctrine-event-sourcing/6977c4d4-3881-44e9-804e-ae086752556e.tar.gz'
And this is logical, since a fresh Docker container has no such folder structure. What I thought of doing is running the checkout action inside the virtual machine provided by runs-on: ubuntu-20.04 and configuring a volume for Docker so that the container has access to the code. However, I have no idea whether that is good practice or how to implement it, and I guess even if it is possible it won't work for other actions.
I had the same issue when trying to use my own Docker image. In my case, installing everything I need on the fly was not an option, so I had to fix this.
It appears that GitHub runs the Docker image with user id 1001 (runner) and group id 121 (docker). After adding the group, adding the user, and adding the user to sudoers, the problem was solved.
Notice that the checkout path starts with /__w, which is strange. If I perform actions/checkout@v2 without my container, the path is /home/runner. Not sure how to solve that yet.
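A sketch of the Dockerfile additions that this describes, using the uid/gid values observed above (a Debian/Ubuntu base image with apt available is my assumption):
# Recreate the user/group that the GitHub Actions runner uses for the workspace
RUN groupadd --gid 121 docker \
 && useradd --uid 1001 --gid 121 --create-home runner \
 && apt-get update && apt-get install -y sudo \
 && echo 'runner ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
USER runner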
Thanks, this really helped me find the issue when trying to deploy a CDK project from within a Docker container on GitHub Actions.
I was getting a permission denied error after checking out the code and trying to deploy it:
Error: EACCES: permission denied, mkdir '/__w/arm-test/arm-test/cdk.out/asset.7d21b14f781f8b0e4ebb3b30c66614a80f71a2c1637298e5557a97662fce0abe'
This issue has a workaround: run the container with the same user and group as the GitHub Actions runner, so that it matches the permissions of the source code directory: https://github.com/actions/runner/issues/691
jobs:
  configure:
    runs-on: ubuntu-latest
    outputs:
      uid_gid: ${{ steps.get-user.outputs.uid_gid }}
    steps:
      - id: get-user
        run: echo "::set-output name=uid_gid::$(id -u):$(id -g)"
  clone-and-install:
    needs: configure
    runs-on: ubuntu-latest
    container:
      image: mcr.microsoft.com/vscode/devcontainers/base:ubuntu
      options: --user ${{ needs.configure.outputs.uid_gid }}
    steps:
      - uses: actions/checkout@v2
