I have a problem deploying a Symfony project with GitHub Actions. I can connect over SSH and run a git pull or a php bin/console doctrine:migrations:migrate, but it's impossible to use the composer command.
I followed the various explanations from IONOS (https://www.ionos.com/digitalguide/websites/web-development/using-php-composer-in-ionos-webhosting-packages/) but GitHub Actions tells me "Could not open input file: composer.phar".
Here is my script, if anyone has an idea:
name: CD
on:
  push:
    branches: [ develop ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: SSH and Deploy
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.APP_HOST }}
          username: ${{ secrets.APP_USER }}
          password: ${{ secrets.APP_PASS }}
          port: 22
          script: |
            cd /homepages/14/d800745077/htdocs/clickandbuilds/dashJob
            git pull
            /usr/bin/php8.0-cli composer.phar i
            /usr/bin/php8.0-cli bin/console d:m:m -n
It depends on your current working directory.
If the repository you just pulled contains composer.phar, then the command will work.
If not, run a find . -name "composer.phar" to check where that file is in the repository.
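As a concrete sketch of that check (the echo messages are illustrative, not part of any tool's output):

```shell
# Locate the first composer.phar under the current directory (empty if none)
COMPOSER_PHAR=$(find . -name "composer.phar" -print -quit)

if [ -n "$COMPOSER_PHAR" ]; then
    echo "found: $COMPOSER_PHAR"
else
    echo "composer.phar not found in the repository"
fi
```

The path it prints is what the deploy script should pass to the PHP binary instead of the bare `composer.phar`.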
IONOS offers a tool named Deploy Now that should ease your setup. For PHP you can find docs here.
I can update my code on cPanel using Git version control. At the moment I then have to go to the terminal and run the artisan migrate command. Can these be combined into one step?
This is my workflow:
# This is a basic workflow to help you get started with Actions
name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ main ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  job_one:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: executing remote ssh commands using ssh keys
        uses: appleboy/ssh-action@master
        with:
          host: hostgoeshere
          username: uknet
          key: ${{ secrets.CPANEL }}
          #password: passwordgoeshere
          port: 7822
          script: |
            cd /home/uknet/public_html/master
            pwd
            git add .
            git stash
            git pull origin main
            git status
Is it possible to add a terminal line at the bottom with php artisan migrate?
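For reference, the `script` block accepts any number of lines, so a migrate command could simply be appended at the end; a minimal sketch (the `--force` flag is an assumption, added because the command runs in a non-interactive shell):

```yaml
script: |
  cd /home/uknet/public_html/master
  git stash
  git pull origin main
  # --force skips the interactive confirmation prompt in production
  php artisan migrate --force
```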
I'm currently configuring a CI pipeline for my Laravel project via GitHub Actions.
This is my build.yml file:
# GitHub Action for Laravel with MySQL and Redis
name: API
on: [push, pull_request]
jobs:
  laravel:
    name: Laravel (PHP ${{ matrix.php-versions }})
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: 'secret'
          MYSQL_DATABASE: 'content_information_test'
          MYSQL_USER: 'homestead'
          MYSQL_PASSWORD: 'secret'
        ports:
          - 33306:3306/tcp
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3
      redis:
        image: redis
        ports:
          - 6379/tcp
        options: --health-cmd="redis-cli ping" --health-interval=10s --health-timeout=5s --health-retries=3
    strategy:
      fail-fast: false
      matrix:
        php-versions: ['7.4']
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup PHP, with composer and extensions
        uses: shivammathur/setup-php@v2 #https://github.com/shivammathur/setup-php
        with:
          php-version: ${{ matrix.php-versions }}
          extensions: mbstring, dom, fileinfo, mysql
          coverage: xdebug #optional
      - name: Start mysql service
        run: sudo /etc/init.d/mysql start
      - name: Get composer cache directory
        id: composercache
        run: echo "::set-output name=dir::$(composer config cache-files-dir)"
      - name: Cache composer dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.composercache.outputs.dir }}
          # Use composer.json for key, if composer.lock is not committed.
          # key: ${{ runner.os }}-composer-${{ hashFiles('**/composer.json') }}
          key: ${{ runner.os }}-composer-${{ hashFiles('**/composer.lock') }}
          restore-keys: ${{ runner.os }}-composer-
      - name: Install Composer dependencies
        run: composer install --no-progress --prefer-dist --optimize-autoloader
      - name: Copy Env File
        run: cp .env.testing .env
      - name: Setup database user
        run: mysql -u runner -e 'CREATE USER 'worker'@'localhost' IDENTIFIED BY 'secret';'
      - name: Flush privileges
        run: mysql -u worker --password=secret -e 'FLUSH PRIVILEGES;'
      - name: Create testing database
        run: mysql -u worker --password=secret -e 'CREATE DATABASE IF NOT EXISTS content_information_test;'
      - name: Migrate Test Database
        run: php artisan migrate --env=testing --seed --force
        env:
          DB_PORT: 33306:3306/tcp
          REDIS_PORT: ${{ job.services.redis.ports['6379'] }}
      - name: Change Directory Permissions
        run: chmod -R 777 storage bootstrap/cache
      - name: Static Analysis via PHPStan
        run: ./vendor/bin/phpstan analyse app/BusinessDomain app/DataDomain app/Infrastructure tests/Unit -c phpstan.neon
      - name: Execute tests (Unit and Feature tests) via PHPUnit
        run: vendor/bin/phpunit
        env:
          DB_PORT: 33306:3306/tcp
          REDIS_PORT: ${{ job.services.redis.ports['6379'] }}
      - name: Run code style fixer on app/
        run: php tools/php-cs-fixer/vendor/bin/php-cs-fixer fix app/
      - name: Run code style fixer on tests/
        run: php tools/php-cs-fixer/vendor/bin/php-cs-fixer fix tests/
      - name: Run code style fixer on database/
        run: php tools/php-cs-fixer/vendor/bin/php-cs-fixer fix database/
      - name: Run code style fixer on routes/
        run: php tools/php-cs-fixer/vendor/bin/php-cs-fixer fix routes/
The problem is that the action always fails at the "Migrate Test Database" step with the following error:
Run php artisan migrate --env=testing --seed --force
  php artisan migrate --env=testing --seed --force
  shell: /usr/bin/bash -e {0}
  env:
    DB_PORT: 33306:3306/tcp
    REDIS_PORT: 49153

In Connection.php line 678:
  SQLSTATE[HY000] [1045] Access denied for user 'homestead'@'localhost' (using password: NO) (SQL: select * from information_schema.tables where table_schema = content_information_test and table_name = migrations and table_type = 'BASE TABLE')

In Connector.php line 70:
  SQLSTATE[HY000] [1045] Access denied for user 'homestead'@'localhost' (using password: NO)

Error: Process completed with exit code 1.
Unfortunately this seems like the correct behaviour to me, since I have never created a user named homestead. But I still don't know how to create a MySQL user that I can use, since I always get the 'SQLSTATE[HY000] [1045] Access denied for user' error when trying to use MySQL via the workflow.
Does anyone know how to make this work?
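For reference, one way to sketch the user-creation step would be to authenticate as root against the service container over TCP (the root password and the 33306 host port come from the service definition above; the '%' host and the GRANT are assumptions, since TCP connections from the runner do not arrive as 'localhost'):

```yaml
- name: Setup database user
  run: |
    mysql --host=127.0.0.1 --port=33306 -u root --password=secret -e "
      CREATE USER IF NOT EXISTS 'worker'@'%' IDENTIFIED BY 'secret';
      GRANT ALL PRIVILEGES ON content_information_test.* TO 'worker'@'%';
      FLUSH PRIVILEGES;"
```

Note the statement is wrapped in double quotes so the single quotes around 'worker'@'%' survive shell parsing, unlike in the original step.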
I also ran into this issue trying to set up a CI/CD pipeline with GitHub Actions and Laravel Dusk, using the default GitHub Actions workflow from the Dusk documentation, which, as I've come to expect for anything related to Dusk, is lacking.
I fixed it by adding my database environment variables from my .env.dusk.testing file to the GitHub Actions workflow, to make sure that the GitHub Actions runner had access to them. I also optionally moved them from the "Run Dusk" step to the job level, making them available to any other steps in the job:
name: CI
on: [push]
jobs:
  dusk-php:
    env:
      APP_URL: "http://127.0.0.1:8000"
      DB_DATABASE: CICD_test_db
      DB_USERNAME: root
      DB_PASSWORD: root
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Prepare The Environment
        run: cp .env.example .env
      - name: Create Database
        run: |
          sudo systemctl start mysql
          mysql --user="root" --password="root" -e "CREATE DATABASE CICD_test_db character set UTF8mb4 collate utf8mb4_bin;"
      - name: Install Composer Dependencies
        run: composer install --no-progress --prefer-dist --optimize-autoloader
      - name: Generate Application Key
        run: php artisan key:generate
      - name: Upgrade Chrome Driver
        run: php artisan dusk:chrome-driver `/opt/google/chrome/chrome --version | cut -d " " -f3 | cut -d "." -f1`
      - name: Start Chrome Driver
        run: ./vendor/laravel/dusk/bin/chromedriver-linux &
      - name: Run Laravel Server
        run: php artisan serve --no-reload &
      - name: Run Dusk Tests
        run: php artisan dusk --env=testing -vvv
      - name: Upload Screenshots
        if: failure()
        uses: actions/upload-artifact@v2
        with:
          name: screenshots
          path: tests/Browser/screenshots
      - name: Upload Console Logs
        if: failure()
        uses: actions/upload-artifact@v2
        with:
          name: console
          path: tests/Browser/console
I have my own PHP image, which I would like to use to run my project's tests on.
container: rela589n/doctrine-event-sourcing-php:latest
services:
  test_db:
    image: postgres:13-alpine
    env:
      POSTGRES_DB: des
      POSTGRES_USER: des_user
      POSTGRES_PASSWORD: p@$$w0rd
steps:
  - uses: actions/checkout@v2
  - whatever_needed_to_run_tests_inside_container
This fails on the checkout action with the following error:
EACCES: permission denied, open '/__w/doctrine-event-sourcing/doctrine-event-sourcing/6977c4d4-3881-44e9-804e-ae086752556e.tar.gz'
And this is logical, as in a fresh Docker container there is no such folder structure. What I thought to do is run the checkout action inside the virtual machine provided by runs-on: ubuntu-20.04 and configure a volume for Docker so that the container has access to the code. However, I have no idea whether this is good practice, nor how to implement it. I guess even if it is possible, it won't work for other actions.
Had the same issue when trying to use my own Docker image. In my case, installing everything I needed on the fly was not an option, so I had to fix this issue.
It appears that GitHub runs the Docker image with user 1001 named runner and group 121 named docker. After adding the group, adding the user, and adding the user to sudoers, the problem was solved.
Notice that the checkout path starts with /__w, which is strange. If I perform actions/checkout@v2 without my container, the path is /home/runner. Not sure how to solve that yet.
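A sketch of what those additions could look like in the image's Dockerfile (the uid/gid values are the ones described above; the groupadd/useradd/sudoers commands assume a Debian-based image):

```dockerfile
# Recreate the user and group the GitHub Actions runner uses inside job containers
RUN groupadd --gid 121 docker \
    && useradd --uid 1001 --gid docker --create-home runner \
    && echo 'runner ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers

# Run subsequent steps as that user so checked-out files are writable
USER runner
```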
Thanks, this really helped me find the issue when trying to deploy a CDK project from within a Docker container on GitHub Actions.
I was getting a permission denied error after checking out the code and trying to deploy it.
Error: EACCES: permission denied, mkdir '/__w/arm-test/arm-test/cdk.out/asset.7d21b14f781f8b0e4ebb3b30c66614a80f71a2c1637298e5557a97662fce0abe'
This issue had the workaround of running the container with the same user and group as the GitHub Actions runner, so that it matches the permissions of the source code directory: https://github.com/actions/runner/issues/691
jobs:
  configure:
    runs-on: ubuntu-latest
    outputs:
      uid_gid: ${{ steps.get-user.outputs.uid_gid }}
    steps:
      - id: get-user
        run: echo "::set-output name=uid_gid::$(id -u):$(id -g)"
  clone-and-install:
    needs: configure
    runs-on: ubuntu-latest
    container:
      image: mcr.microsoft.com/vscode/devcontainers/base:ubuntu
      options: --user ${{ needs.configure.outputs.uid_gid }}
    steps:
      - uses: actions/checkout@v2
I am using SonarQube to analyze the code of my PHP project. Everything is set up and partially working. The problem is as follows: I run the Sonar scanner on my pull requests and on merges with the master branch, and the analysis is carried out, but only on the modified files. I need to analyze all the code, at least on merges with the master branch. When I go to Project -> Code, I only see a few files in the master branch.
I would like to know if there is any parameter that can be passed to the scanner so that it always analyzes all files, as happens when the scanner is run locally.
Code scanner
name: Analyze pull request
on:
  pull_request:
    types: [opened, edited, reopened, synchronize]
    branches:
      - master
jobs:
  SonarQube-Scanner-pull_request:
    runs-on: ubuntu-latest
    steps:
      - name: Setup sonarqube
        uses: warchant/setup-sonar-scanner@v1
      - name: 'Checkout repository on branch: ${{ github.REF }}'
        uses: actions/checkout@v2
        with:
          ref: ${{ github.HEAD_REF }}
      - name: Retrieve entire repository history
        run: |
          git fetch --prune --unshallow
      - name: Run an analysis of the PR
        env:
          # to get access to secrets.SONAR_TOKEN, provide GITHUB_TOKEN
          GITHUB_TOKEN:
        run: sonar-scanner
          -Dsonar.host.url=
          -Dsonar.login=
          -Dsonar.projectKey=Project
          -Dsonar.qualitygate.wait=true
          -Dsonar.pullrequest.key=${{ github.event.number }}
          -Dsonar.pullrequest.branch=${{ github.HEAD_REF }}
          -Dsonar.pullrequest.base=${{ github.BASE_REF }}
          -Dsonar.pullrequest.github.repository=${{ github.repository }}
          -Dsonar.scm.provider=git
          -Dsonar.java.binaries=/tmp
Thank you for your help
Can you try giving sonar.projectBaseDir and sonar.sources in the sonar analysis properties?
You can find more details here: Alternate Analysis Directory
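For example, these could go in a sonar-project.properties file at the repository root (the source directory names are assumptions; point them at your actual code):

```properties
# Analyze the whole project tree, not only changed files
sonar.projectBaseDir=.
sonar.sources=src,app
```

The same keys can also be passed on the command line as -Dsonar.projectBaseDir=... and -Dsonar.sources=... flags.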
I am investigating how I can create an AWS Lambda in PHP using the bref library.
Therefore, according to the documentation, I set up the environment with the following command cocktail:
sudo -H npm install -g serverless
composer require bref/bref
Then I created my first PHP lambda using the following command:
vendor/bin/bref init
And I selected the first option, PHP Function, provided by default, which created the following index.php file:
<?php

declare(strict_types=1);

require __DIR__.'/vendor/autoload.php';

lambda(function ($event) {
    return 'Hello ' . ($event['name'] ?? 'world');
});
Then I changed my serverless.yml to this:
service: app
provider:
  name: aws
  region: eu-central-1
  runtime: provided
  stage: ${opt:stage,'local'}
package:
  exclude:
    - '.gitignore'
plugins:
  - ./vendor/bref/bref
functions:
  dummy:
    handler: index.php
    name: Dummy-${self:provider.stage}
    description: 'Dummy Lambda'
    layers:
      - ${bref:layer.php-73}
And I try to launch it via the following command:
sls invoke local --stage=local --docker --function dummy
But I get the following error:
{"errorType":"exitError","errorMessage":"RequestId: 6403ebee-13b6-179f-78cb-41cb2f517460 Error: Couldn't find valid bootstrap(s): [/var/task/bootstrap /opt/bootstrap]"}
Therefore, I want to ask: why am I unable to run my lambda locally?
Since this question is getting a lot of views, I recommend having a look at the Bref documentation:
Local development for PHP functions
That involves using the bref local CLI command instead of serverless invoke local:
$ vendor/bin/bref local hello
Hello world
# With JSON event data
$ vendor/bin/bref local hello '{"name": "Jane"}'
Hello Jane
# With JSON in a file
$ vendor/bin/bref local hello --file=event.json
Hello Jane
On my local machine, clearing caches before invoking the lambda worked fine. I'm using Linux/Ubuntu:
docker system prune --all
sudo apt-get autoremove
sudo apt-get clean
sudo apt-get autoclean
sudo rm -rf ~/.cache/
sudo rm -rf /var/cache/
This is a known bug in bref. It can be solved by providing the layer manually for your function in serverless.yml. So the functions section of serverless.yml should change from:
functions:
  dummy:
    handler: index.php
    name: Dummy-${self:provider.stage}
    description: 'Dummy Lambda'
    layers:
      - ${bref:layer.php-73}
Into:
functions:
  dummy:
    handler: index.php
    name: Dummy-${self:provider.stage}
    description: 'Dummy Lambda'
    layers:
      - 'arn:aws:lambda:eu-central-1:209497400698:layer:php-73:15'
The reason is that ${bref:layer.php-73} cannot be resolved into a PHP layer, so you need to provide the ARN for the Lambda layer manually.
Keep in mind that the ARN comes in various versions, indicated by the last number in the ARN, separated with :. So the ARN
arn:aws:lambda:eu-central-1:209497400698:layer:php-73:15
indicates that the layer is at version 15, which is the latest at the moment of this answer. The next one should logically be:
arn:aws:lambda:eu-central-1:209497400698:layer:php-73:16