Running Laravel migrations to Cloud SQL as part of continuous deployment - php

I'm working on a Laravel project with fully continuous deployments to Cloud Run, using Cloud SQL as the storage service. Right now, I need to run php artisan migrate manually through the cloud_sql_proxy from my local environment.
Does anyone know whether it is possible to perform this step automatically, possibly as part of the Dockerfile?
This is my current Dockerfile:
FROM php:7
ENV PORT=8080
ENV HOST=0.0.0.0
RUN apt-get update -y \
&& apt-get install --no-install-recommends -y openssl zip unzip git libonig-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN ["/bin/bash", "-c", "set -o pipefail && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer"]
RUN docker-php-ext-install pdo mbstring
WORKDIR /app
COPY . /app
RUN composer validate && composer install
EXPOSE 8080
CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8080"]
Thanks for any help!

It's not recommended to put the migration script in the Dockerfile, as it would be triggered for EVERY future request. You need to run it only ONCE, and that execution should be triggered by a build script or by a developer.
For migrations that introduce breaking changes, either on commit or on rollback, it's mandatory to have a full stop, and of course a rollback planned accordingly.
Also pay attention that a commit/push should not immediately trigger the new migrations. Often these are not part of the regular CI/CD pipeline that goes to production.
Make sure you have a manual deploy step for migrations, not one under CI/CD.
After you deploy a service, you can create a new revision and assign a tag that allows you to access the revision at a specific URL without serving traffic.
A common use case for this is to run and control the first visit to this container. You can then use that tag to gradually migrate traffic to the tagged revision, and to roll back a tagged revision.
To deploy a new revision of an existing service to production:
gcloud beta run deploy myservice --image IMAGE_URL --no-traffic --tag TAG_NAME
The tag allows you to directly test the new revision (or, in this case, trigger the migration with the very first request) at a specific URL, without serving traffic. The URL starts with the tag name you provided: for example, if you used the tag name green on the service myservice, you would test the tagged revision at the URL https://green---myservice-abcdef.a.run.app
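Once the migration has been triggered successfully against the tagged revision, you can shift traffic to it; something along these lines (a sketch, assuming the tag green from the example above):
# shift all traffic to the tagged revision once its first request (the migration) has succeeded
gcloud run services update-traffic myservice --to-tags green=100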

I got the migrations running with every deployment via ENTRYPOINT.
Details are in the reply here: https://stackoverflow.com/a/69088911/867451
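In outline, that kind of setup is a small entrypoint script that runs the migration (with --force, since there is no interactive prompt) before starting the app. A minimal sketch, assuming the Dockerfile from the question and a database reachable at container start:
#!/bin/bash
set -e
# run any outstanding migrations non-interactively on every container start
php artisan migrate --force
# then hand control over to the web server
exec php artisan serve --host=0.0.0.0 --port=${PORT:-8080}
Save it as e.g. docker-entrypoint.sh (the name is arbitrary), COPY it into the image, mark it executable, and point ENTRYPOINT at it. Keep in mind the caveat above: this runs on every container start, not once per deployment.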

Related

WebdriverIO and Docker Setup

I am trying to setup a docker container with WebdriverIO built into it, with the eventual aim of being able to run a CI/CD pipeline in gitlab, but I have absolutely no idea where to start.
My application is a PHP/MySQL based app which was also recently dockerised. I access it locally on http://localhost.
I have tried to create a docker image with wdio built into it, but it fails when trying to run
npm init wdio --yes
because the --yes flag doesn't force any of the default settings, which goes against the official documentation. This then causes the wdio installation to fail.
What is confusing me even more is that there seem to be very few tutorials for this, the wdio documentation doesn't seem great, and what tutorials I can find all seem to mention Selenium. FYI, I am just a dev who has been tasked with taking some existing WDIO scripts and getting them ready for CI/CD; I don't know a massive amount about WDIO in the first place.
Does anyone have any basic steps I could follow that would describe the process of taking some local WDIO scripts, and getting them to run inside a container, with the end goal of being to have them into some sort of CI/CD pipeline?
When trying to create the image, the following command does not seem to work:
npm init wdio --yes
It would be much more appropriate to initialize a wdio project locally and copy it into the image via the Dockerfile.
This is what it might look like:
FROM node:16
USER root
#===============================
# Set default workspace
#===============================
RUN mkdir /home/workspace \
&& chmod 2777 /home/workspace
COPY . /home/workspace
WORKDIR /home/workspace
This way, your docker image contains your whole project built in.
Then you could append the following commands to make sure the environment is ready for WebdriverIO to execute.
#==================================
# Install needed packages
#==================================
RUN apt update && apt upgrade -y
RUN npm install
If you need anything like a browser and a webdriver, you could install them via any of dozens of approaches.
You can use ENTRYPOINT or CMD to make the container execute the specified test suites once it is up.
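A minimal sketch of that last step, assuming a wdio.conf.js in the project root and the test runner already installed by the npm install above:
# run the test suites when the container starts
CMD ["npx", "wdio", "run", "wdio.conf.js"]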
If you want a complete CI/CD flow with Docker containers, it will depend on which service you use.

Symfony 6 AWS Beanstalk run npm run build

I'm new to AWS and I've gotten as far as getting the following error in Symfony:
Asset manifest file "/var/app/current/public/build/manifest.json" does not exist.
Locally, this would be fixed by running npm run build. I've tried adding NPM_CONFIG_PRODUCTION=true to the environment variables, but I think that might just be for Node.js apps?
I've also tried SSHing onto the EC2 instance and installing node on there, but I ran into errors trying to install either npm or nvm. I feel like this is the wrong approach anyway, since it seems like the idea of beanstalk is that you shouldn't need to ssh onto the instance.
Perhaps I should just include the node_modules folder in the zip uploaded, but since one of the recommended ways to produce the zip is to use git, this doesn't seem correct either.
After a lot of digging around, it seems like there are 3 options here:
SSH onto the instance(s) and the following worked for me (Amazon Linux 2 - ARM chip)
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
(cd /var/app/current/;sudo npm add --dev @symfony/webpack-encore)
(cd /var/app/current/;sudo npm install)
(cd /var/app/current/;sudo npm run build)
The problem with this is that if you have multiple instances that scale up and down behind a load balancer, it isn't really practical to do this.
Add the above as a hook:
The following sh file could be put in the following directory: .platform/hooks/predeploy
#!/bin/bash
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
(cd /var/app/current/;sudo npm add --dev @symfony/webpack-encore)
(cd /var/app/current/;sudo npm install)
(cd /var/app/current/;sudo npm run build)
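Note that Elastic Beanstalk requires platform hook files to be executable, so set the execute bit before committing (the file name here is just an example):
chmod +x .platform/hooks/predeploy/01_build_assets.sh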
However, I've since learnt that it's best advised to just include the node_modules in the zip that gets uploaded. I guess this way the time to get the server up is reduced.
Include the node_modules folder in the zip that gets uploaded.
To include the node_modules folder, since this is naturally ignored by GIT, I used the EB CLI and added a .ebignore file, which is a clone of the .gitignore file except that it doesn't exclude the node_modules and public folders, so they end up in the zip. Also be cautious in your build process that you're not including the node dev dependencies.
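.ebignore uses the same syntax as .gitignore; a rough sketch of the idea (the entries are illustrative, not a complete file):
# .ebignore -- start from a copy of your .gitignore...
/.env.local
/var/
# ...but do NOT keep the node_modules or public/build entries here,
# so those folders end up in the deployment zip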

How to set up and run Laravel from git?

Either I'm missing something, or the whole chain lacks something.
Here's my assumption:
The whole point of containerization in development is to reduce the cost of environment setup and create a prepared image with all the required pieces.
So, when I read that Laravel Sail is installing laravel via containerization, I get excited. Thus I install it via their instructions, and everything works.
Then the problem begins. Because:
After a successful installation, I create a git repo, with GitHub's default laravel .gitignore
Then I push the newly installed laravel app into my git repo.
Then I ask a developer to start developing it. Please note that:
He does not have PHP installed
He does not have Composer installed
He clones the repo, and as per the installation guide, runs ./vendor/bin/sail up
But the ./vendor folder is correctly excluded in .gitignore
Thus his command results in:
bash: ./vendor/bin/sail: No such file or directory
He Googles it of course, and finds out that people suggest running composer update
He goes to install Composer, then before that PHP, then all the extensions of PHP, then ...
Do I miss something here? The whole point of containerization was to not install the required environment locally.
What is the proper way of running a laravel app, that is not installed from https://laravel.build, but is cloned from a git repo, WITHOUT having PHP or Composer installed locally?
Update
I found Bitnami laravel docker and it's exactly what containers should be.
You are right, and the other developer doesn't need to have PHP or Composer installed.
All he/she needs is Docker installed on the local machine.
If you scaffolded the project with what is mentioned in the official Laravel docs under the Getting started section, then you will have a docker-compose.yml file in your project root directory.
(Docker installation guides are available for Windows, Linux, and Mac OS.)
All the developer has to do after git cloning the repository is to run
docker-compose up --build -d
That's it.
For those struggling with this issue... I've found a command that works perfectly fine.
First of all, you don't need to have PHP or Composer installed locally; maybe there is a misunderstanding about it. All you need is Docker.
Docker will install everything you need in something I understand is like a sandbox, not locally, for each project.
And for projects downloaded from Git, for example, that do not have a vendor folder and obviously cannot execute sail up, you can simply execute:
docker run --rm --interactive --tty -v $(pwd):/app composer install
That command will download a Composer image for Docker, if you do not have one yet. Then it will run a composer install, and you are free to execute ./vendor/bin/sail up, or just sail up if you have already configured an alias.
That's all.
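If you want the shorter sail form, one common convenience is a shell alias (assuming bash or zsh here):
# add to ~/.bashrc or ~/.zshrc
alias sail='[ -f sail ] && sh sail || sh vendor/bin/sail'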
The official documentation lists the following command.
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
If you were to clone a Laravel project and run this command in the project root, it would create a very small container with php and composer installed and run composer in the project root to install all php dependencies. In effect, this installs the Laravel core code into the cloned project. Once the project is set up this way, the user should create a local .env file to match their development environment.
cp .env.example .env # creates a .env file to be populated for the local environment
With the environment set up, they can now create the application containers in docker and run the application. Laravel provides the Sail helper for this.
./vendor/bin/sail up -d # runs the docker containers in detached mode
Now it's a matter of setting up and running the Laravel app. (I'm assuming the app uses one of the Laravel starter kits that rely on Node.js. If you are using a Blade-only application, you can skip the "npm" commands.)
sail artisan key:generate # (Best Practice) Generate a new application key on each machine
sail artisan migrate # Scaffold the database structure
sail artisan db:seed # (Optional) Seed the database with data
sail npm install # (Optional) Install front-end dependencies (Inertia, Vue, React, others...)
sail npm run dev # (Optional) Run the front-end framework in development mode
With this, the new developer should be running an exact copy of both the project and the development environment as the original developer.
Your project README may include additional steps to set up some other dependencies, but this is the basic workflow for contributing to a Laravel project.
The only prerequisites for this workflow is to have Docker installed with an Internet connection. This is most easily accomplished on Windows, Mac, and Linux by installing Docker Desktop.
Alternate for Older Projects
If you are working on an older project that doesn't use Laravel Sail, but does have a docker-compose.yml file, you should be able to build and run the necessary containers with the following command.
docker-compose up --build -d
Once you have the containers running, you would need to install the project dependencies directly into the container.
docker ps # find the container ID of your project's container
docker exec -it CONTAINER_ID php artisan key:generate
docker exec -it CONTAINER_ID php artisan migrate
docker exec -it CONTAINER_ID php artisan db:seed
docker exec -it CONTAINER_ID npm install
docker exec -it CONTAINER_ID npm run dev
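One assumption worth checking: if the image doesn't already run composer install during its build, the PHP dependencies also have to be installed inside the container first, along the lines of:
# install the composer dependencies inside the running app container
docker exec -it CONTAINER_ID composer install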
Of course, Docker Desktop simplifies this process. With a button click you can have a terminal shell open directly in your container eliminating the need for the docker exec command.

Docker - Installing Composer /bin/sh: 1: php: not found curl: (23) Failed writing body (0 != 16133)

Hello I am creating a dockerfile for my laravel project. This is it so far:
FROM php:7.2-cli
FROM nginx
FROM node:8
MAINTAINER zachary tyhacz
# does not install mysql
# mysql is outside container
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /var/www/public
COPY . /var/www/public
COPY nginx.conf /etc/nginx/sites-available/domain
RUN ln -s /etc/nginx/sites-available /etc/nginx/sites-enabled
RUN npm install
RUN composer install
# sets up the database
CMD php artisan migrate:fresh --seed
# resets configuration files
CMD php artisan config:cache
# refreshes routes
CMD php artisan route:cache
# enables serve
CMD php artisan serve --host=0.0.0.0 --port=436
EXPOSE 8080/udp
EXPOSE 8080/tcp
EXPOSE 80/udp
EXPOSE 80/tcp
EXPOSE 436/tcp
EXPOSE 436/udp
Upon running docker build to tag and create the image, it gets to this instruction:
Step 6/22 : RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
and it throws this error and stops.
/bin/sh: 1: php: not found
curl: (23) Failed writing body (0 != 16133)
I am not sure what is going wrong. I think it could be a permissions issue or a directory issue.
Thanks to anyone for any suggestions helping me out.
Also, my reference for creating this Dockerfile is this:
https://buddy.works/guides/laravel-in-docker
Although a Dockerfile can contain multiple FROM lines, each one starts a new build stage, and only the last stage ends up in the final image. Basically, FROM tells Docker what to start with. In your case, the last one you give it is node:8, so PHP is never installed.
To fix this issue, you'll need to pick a single base image (for example php), and install your other dependencies on top of that, so you could manually install nginx and node on top of the php image using RUN. You may also want to consider building a separate nginx image. This is considered good practice to separate your services into different images when possible.
Also, only the last CMD in a Dockerfile takes effect, so instead of using multiple CMD entries, use a small startup shell script. For example
#!/usr/bin/env bash
set -e
php artisan migrate:fresh --seed
php artisan config:cache
php artisan route:cache
exec php artisan serve --host=0.0.0.0 --port=436
Put that in a script called start.sh or something like that, then in your Dockerfile, use
CMD ["./start.sh"]
Then, you'll probably also want to start a second container for your nginx service. You could do this manually using docker run, but I suggest checking out docker-compose. It helps you build and run multiple containers at once.
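As a rough illustration of the docker run route (the image name, ports, and the assumption that nginx.conf proxies to the app container are all placeholders):
# create a shared network and run the PHP app and nginx side by side
docker network create laravel-net
docker run -d --name app --network laravel-net my-laravel-php
docker run -d --name web --network laravel-net -p 80:80 \
    -v $(pwd)/nginx.conf:/etc/nginx/conf.d/default.conf:ro nginx
docker-compose lets you declare the same thing in a single docker-compose.yml file and start everything with one command.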

Continuous Integration using composer

I have a PHP project in which I load packages through Composer. I also run Continuous Integration using Jenkins, on a dedicated CI server. Once every hour, Jenkins queries my repository for changes, and if there are any, it executes a test run.
The first step of the test run is making a fresh checkout of the repository and performing a build of the application using Phing. One of the steps of the build is running a
composer install
Since Jenkins always works with a fresh checkout, composer will always fetch all packages on every test run, even if none of the packages have been changed since the previous run. This has a couple of disadvantages:
It takes a relatively long time to complete a test run (composer needs to fetch, for example, Zend Framework, which is rather large)
It puts unnecessary strain on the packagist server if new packages are fetched every hour
If, for some reason, the composer install fails, so does my test run.
I was thinking of possibly storing the packages that composer fetches in a central spot on the CI server, so Jenkins would be able to access the packages at that location for every test run. Of course, now I have to rewrite part of my application to handle the fact that the vendor folder is in a different location on the CI server. Secondly, I have to tell Jenkins to keep track of changes to the composer.lock file, to see if it needs to run composer at all. I'm afraid neither of those two things is really trivial.
Does anyone have any suggestions for another/better way to do this, or is the best option to just fetch all packages through composer on every test run? Admittedly, it's the best way to make sure you always use the correct packages, but it sort of feels like a waste of bandwidth, certainly in later stages of development, when the list of packages will hardly change anymore.
One way to speed it up is to use composer install --prefer-dist which only downloads zips even for dev packages. This is preferred for unique builds since it skips the whole history of the project.
As for sparing packagist, don't worry about it too much, one build every hour isn't going to make a huge difference compared to all the open source libs that build on travis at every commit.
One thing you could do is store the vendors in a location outside of the project's workspace in Jenkins so that it remains between builds. You don't necessarily need to change your application. Just update the build script so that it creates a symbolic link to the vendors location; a sketch follows below.
I use capifony for deployment and it uses this approach to keep the vendors between releases.
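A sketch of what that could look like in the Jenkins build shell step (the shared path is an assumption; $WORKSPACE is set by Jenkins):
# keep a per-project vendors dir outside the workspace and symlink it in
SHARED_VENDOR=/var/lib/jenkins/shared/myproject/vendor
mkdir -p "$SHARED_VENDOR"
ln -sfn "$SHARED_VENDOR" "$WORKSPACE/vendor"
composer install --prefer-dist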
One thing to note is that Composer caches the packages that it downloads. So once they are downloaded the first time, they should work even if Packagist is down (not 100% sure), and network bandwidth is spared (100% sure).
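If you want to lean on that cache across fresh checkouts, you can point it at a location that survives between builds; a sketch, with the path being an assumption:
# reuse one Composer cache for all builds on this CI server
export COMPOSER_CACHE_DIR=/var/lib/jenkins/composer-cache
composer install --prefer-dist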
Second thing is: why are you running tests by doing a fresh checkout of the repository? It is entirely possible to keep a copy of your code in the workspace in Jenkins, and just make sure you wipe the caches, logs and other artifacts on every test run. This will speed up not only composer install, but also the git pulls, especially for big repos!
Side note: for our own Jenkins platform, where workspaces are not cleaned between tests, the main drawback we found with composer is the sheer amount of disk space taken by having the full vendor dir in each workspace. I tried to work around this by using symlinks and sharing the vendors (named based on hashes of composer.lock), but then the composer autoloader had a bit of a problem finding where to load classes from...
Steps to install zf2 project on Jenkins
mkdir /path/to/your/project
1. Install the composer
curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer
Note: If the above fails due to permissions, run the mv line again with sudo.
A quick copy-paste version including sudo:
curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
create a composer.json file in the root directory of the project
add all the packages you require
{
    "name": "amarjitsingh",
    "description": "amarjitsingh",
    "license": "BSD-3-Clause",
    "keywords": [
        "framework",
        "zf2"
    ],
    "homepage": "http://domain.com/",
    "require": {
        "php": ">=5.5",
        "zendframework/zendframework": "~2.5",
        "phpoffice/phpword": "dev-master",
        "doctrine/doctrine-orm-module": "0.7.0",
        "imagine/Imagine": "0.5.*",
        "zf-commons/zfc-user": "dev-master"
    },
    "autoload": {
        "psr-0": {
            "": "module/"
        }
    }
}
run 'composer install' to install these packages.
set up git on your machine
if you are using ubuntu you can set up GIT using the folowing commands
sudo apt-get update
sudo apt-get install git
Set Up Git
git config --global user.name "Your Name"
git config --global user.email "youremail@domain.com"
check the config list
git config --list
once you have set up GIT, then
cd /path/to/your/project
Once you have the packages installed, create a '.gitignore' file in the document root and add 'vendor' inside it.
git init
git remote add origin https://username@bitbucket.org/username/zf2ci.git
apply the commands below to ADD, COMMIT, and PUSH the files
git add .
git commit -m 'Initial commit with contributors'
git push -u origin master
git pull
For the cloud you can use AWS. I am using DigitalOcean.
1. create a droplet
2. name it as you wish, in my case it is zf2ci
3. choose a package
4. choose the OS, in my case Ubuntu 14.04
5. in the applications tab choose LAMP
6. once you are done with that you will get an IP address, the username root and a password
7. log in to the IP using putty
8. user root
9. password pass
10. once you get in, it will prompt you to change the password
11. go to the web root, e.g. /var/www/html
12. install GIT
13. apt-get install git
14. clone the repo
15. git clone https://username@bitbucket.org/username/zf2ci.git
16. install composer on this machine
curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer
Note: If the above fails due to permissions, run the mv line again with sudo.
A quick copy-paste version including sudo:
curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
go to the app path /var/www/html/zf2ci
run 'composer install --no-dev'; we are installing it with the no-dev option because we only install well-tested code on the app server
Step 3
Create a Jenkins server
1. set up another droplet for Jenkins
2. image: Ubuntu
3. install LAMP
4. install Jenkins
Installing Jenkins
Before we can install Jenkins, we have to add the key and source list to apt. This is done in 2 steps; first we'll add the key.
1.1
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -
Secondly, we'll create a sources list for Jenkins.
1.2
echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list
1.3
Now, we only have to update apt's cache before we can install Jenkins.
apt-get update
1.4
As the cache has been updated, we can proceed with installing Jenkins. Note that Jenkins has a big bunch of dependencies, so it might take a few moments to install them all.
apt-get install jenkins
1.5 open the ip with port 8080
eg http://127.0.0.1:8080
1.6 install git on jenkins server
sudo apt-get update
sudo apt-get install git
1.7 install composer
curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer
1.8 enable user authentication
1.9
enable bitbucket plugin for Jenkins
1.9.1
Manage Jenkins->Manage Plugins->Bitbucket Plugin->download and install
1.9.2
create job
create job->
project name(eg. zf2ci)->
source code management (git) provide ssh url(git#bitbucket.org:username/zf2ci.git)->
branches to build (*/master); each time any user commits and merges code into the master branch, Jenkins gets invoked
1.9.3
Build Triggers
choose the option (build when a change is pushed); this will work once we create a POST hook on Bitbucket
1.9.4
Build->Execute shell
composer install
./vendor/bin/phpunit ./tests
our tests sit in the tests dir
1.9.5
set up an ssh key pair
log in to the Jenkins server through putty
su jenkins
cd
ls -la (check what is in the jenkins home directory)
ssh-keygen -t rsa (dsa by default, but choose an rsa key, it is faster)
press enter (on the path)
press enter (leave the passphrase empty; the whole point here is to avoid passwords in the automated jobs)
press enter
cd .ssh
ls -la (you will find the id_rsa.pub file there)
cat id_rsa.pub
(select all and copy the contents of the file)
1.9.6
go to Bitbucket
switch to the repo zf2ci
go to settings
click deployment keys->add key
add label (jenkins)
key* (paste the contents of the id_rsa.pub file here)
save key
summary
`zf2ci->settings->deployment keys->add key->type` label and paste id_rsa.pub key->save
1.9.7
register POST hook for repo
Settings->
Integrations->
Hooks->
POST(search for POST Hook)->
Add the URL/IP of the Jenkins server (`172.62.235.100:8080/bitbucket-hook/`)
(the body of the POST contains information about the repository, branch, list of recent commits, and user)
1.9.8
log in to the Jenkins server
su jenkins
cd
cd .ssh
git ls-remote -h git@bitbucket.org:username/zf2ci.git HEAD
1.9.9
save project on Jenkins
1.9.10
add the following command in the
Execute Shell->command
rsync -y -vrzhe "ssh -o StrictHostKeyChecking=no" --exclude vendor/ . root@ipaddress:/var/www/html/zf2ci (the path on the app server)
ssh root@ipaddress <<EOF
cd /var/www/html/zf2ci
composer install --no-dev
EOF
