I have CodeDeploy set up with the same appspec file for 6 different deployments. While CodeDeploy works perfectly for some of the deployments, it gets stuck on others.
The issue is that it gets stuck on random environments on a random basis, sometimes in the Install phase and sometimes in the AfterInstall phase. It also gets stuck on only one of the multiple servers inside the same deployment.
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/current
permissions:
  - object: /var/www/html/current
    pattern: "**"
    owner: root
    group: www-data
    mode: 644
    type:
      - file
  - object: /var/www/html/current
    pattern: "**"
    owner: root
    group: www-data
    mode: 755
    type:
      - directory
hooks:
  BeforeInstall:
    - location: scripts/beforeinstall.sh
      runas: root
  AfterInstall:
    - location: scripts/afterinstall.sh
      runas: root
beforeinstall.sh
#!/bin/bash
php artisan cache:clear
hostname >> /tmp/bhostname.txt
crontab -r
if [ "$HOSTNAME" = "horizon" ]
then
    hostname >> /tmp/ahostname.txt
    cd /var/www/html/current/backend && sudo php artisan down
    sudo supervisorctl stop laravel-worker:*
    sleep 30
    sudo service supervisor stop
    sleep 30
    rm -rf /var/www/html/current/backend/bootstrap/*
    rm -rf /var/www/html/current/backend/storage/*
    rm -rf /var/www/html/current/backend/worker.log
else
    cd /var/www/html/current/backend && sudo php artisan down
    rm -rf /var/www/html/current/backend/bootstrap/*
    rm -rf /var/www/html/current/backend/storage/*
fi
afterinstall.sh
#!/bin/bash
chown -R www-data:www-data /var/www/html/current/backend/bootstrap
chown -R www-data:www-data /var/www/html/current/backend/storage
cd /var/www/html/current/backend && sudo php artisan cache:clear
cd /var/www/html/current/backend && sudo php artisan view:clear
cd /var/www/html/current/backend && sudo php artisan config:cache
export GOOGLE_APPLICATION_CREDENTIALS=/var/www/html/current/backend/storage/gcp_translate.json
# (crontab -l 2>/dev/null; echo "*/2 * * * * cd /var/www/html/current/backend/ && php artisan schedule:run >> /dev/null 2>&1")| crontab -
echo "*/2 * * * * cd /var/www/html/current/backend/ && php artisan schedule:run >> /dev/null 2>&1" > /var/spool/cron/crontabs/www-data
echo "* * * * * /bin/chown -R www-data.www-data /var/www/html/current/backend/storage/logs" > /var/spool/cron/crontabs/root
# append so the second root entry does not overwrite the first
echo "0 0 * * * rm -rf /opt/codedeploy-agent/deployment-root" >> /var/spool/cron/crontabs/root
chmod 600 /var/spool/cron/crontabs/www-data
chmod 600 /var/spool/cron/crontabs/root
service cron restart
if [ "$HOSTNAME" = "horizon" ]
then
    cd /var/www/html/current/backend && php artisan up
    cd /var/www/html/current/ && pwd && ls -altr && mv laravel-worker.conf /etc/supervisor/conf.d
    sudo service supervisor restart
    sleep 30
    sudo supervisorctl restart laravel-worker:*
fi
service php7.4-fpm restart
service nginx restart
The issue was resolved by reducing the build size.
The build used to be around 900 MB after packaging as a .zip file. After trimming it down to 600 MB, AWS CodeDeploy no longer gets stuck on a random basis.
The build was trimmed by removing node_modules (1.7 GB uncompressed) from the frontend.
Therefore, I conclude the issue was the inadequate size of the servers (t3.medium) relative to the larger, highly compressed builds.
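For reference, a minimal sketch of how the packaging step can skip node_modules (the frontend/ path and archive name are assumptions, not taken from my actual pipeline):
# exclude the frontend node_modules directory when zipping the revision
zip -r build.zip . -x "frontend/node_modules/*"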
Related
I have a Laravel 9 application deployed on GKE. It has some background jobs which I have configured to run using supervisor (I will share snippets of config files below).
The Problem
The problem is that when jobs are run by the scheduler or manually via the artisan command, cache files are created in the storage/framework/cache/data path with root as the owner. This causes issues, as errors keep being logged with the message Unable to create lockable file, because all the other folders and files are owned by the www-data user we set in the Dockerfile. To fix it, I have to manually run chown -R www-data:www-data . in that cache path.
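For clarity, that manual workaround looks like this (assuming the app root is /var/www, as in the Dockerfile below):
# reset ownership of the cache files created by root-run jobs
cd /var/www/storage/framework/cache/data
chown -R www-data:www-data .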
Dockerfile
FROM php:8.0-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
libonig-dev \
libicu-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
libzip-dev \
libpq-dev \
ca-certificates \
zip \
jpegoptim optipng pngquant gifsicle \
nano \
unzip \
git \
curl \
supervisor \
cron \
nginx
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo pdo_mysql mbstring zip exif bcmath
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install gd
RUN docker-php-ext-configure intl
RUN docker-php-ext-install intl
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
COPY scripts/supervisor.conf /etc/supervisor/conf.d/supervisor.conf
COPY /scripts/nginx/nginx.conf /etc/nginx/sites-enabled/default
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Setup cron job
COPY scripts/crontab /etc/cron.d/scheduler
RUN chmod 0644 /etc/cron.d/scheduler
RUN usermod -u 1000 www-data
RUN usermod -G staff www-data
COPY --chown=www-data:www-data . /var/www
RUN touch /var/www/storage/logs/laravel.log
RUN mkdir /var/www/storage/framework/cache/data
RUN chown -R www-data:www-data /var/www/storage
RUN chmod -R 777 /var/www/storage
RUN composer install --no-interaction
COPY /scripts/entrypoint.sh /etc/entrypoint.sh
RUN chmod +x /etc/entrypoint.sh
EXPOSE 80 443
ENTRYPOINT ["/etc/entrypoint.sh"]
crontab
* * * * * root echo "cron working..." >> /var/log/cron.log
* * * * * root /usr/local/bin/php /var/www/artisan schedule:run >> /var/log/cron.log
entrypoint.sh
#!/usr/bin/env bash
php artisan config:cache
service supervisor start
service nginx start
php-fpm
supervisor.conf
[program:cron]
process_name=%(program_name)s_%(process_num)02d
command=cron -f
autostart=true
autorestart=true
startretries=5
numprocs=1
stderr_logfile=/var/log/cron.log
stderr_logfile_maxbytes=10MB
stdout_logfile=/var/log/cron.log
stdout_logfile_maxbytes=10MB
Things I have tried so far
I have tried changing the user in the crontab from root to www-data, but that results in cron not working at all.
I have tried changing the supervisor user to www-data so the cron command is run by www-data instead of root.
I have also tried setting the user to www-data in the Dockerfile, but all of these either result in cron not running at all or the files created by jobs still being owned by root.
After much investigation, I found that it is not good practice to run the Laravel scheduler as the root user, because it can create files owned by root.
I updated my crontab file to the following:
* * * * * root su -c "/usr/local/bin/php /var/www/artisan schedule:run >> /var/log/cron.log" -s /bin/bash www-data
This way the cache files are created with www-data as the owner and no root-owned files are created.
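A quick way to verify (mirroring the su invocation above; paths as in my setup) is to run the scheduler once as www-data and check who owns the freshly created cache files:
# run the scheduler by hand as www-data, then inspect ownership of the new cache files
su -c "/usr/local/bin/php /var/www/artisan schedule:run" -s /bin/bash www-data
ls -l /var/www/storage/framework/cache/data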
Hope this helps someone who is facing the same issues.
I am stuck on this problem.
I have a docker compose file with volumes mapped this way:
volumes:
  - ./:/var/www/html:rw
  - ../epossobundle/:/var/www/epossobundle:rw
In composer.json, there is a path repository pointing at the bundle I am working on, like this:
"repositories": [
{
"type": "path",
"url": "../epossobundle"
},
But when I start my container I get this error:
Warning: include(/var/www/html/vendor/composer/../epo/api-auth-sso-bundle/EpoApiAuthSsoBundle.php): failed to open stream: No such file or directory
How can I fix this?
PS: It is not possible to install anything in the Docker image because it is on an intranet network.
Thanks a lot
Serge
I assume that the vendor directory is missing dependencies, specifically epo/api-auth-sso-bundle/EpoApiAuthSsoBundle.php, so you'll need to install them.
Grab a root shell inside the container:
docker exec -it -u root <container_id_or_name> /bin/bash
Install composer (requires root):
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
Then install your dependencies (if you are not in production, there is no need for --no-dev):
composer install --no-dev
Finally, change the permissions back (ls -rvla):
# Nginx
chown -R nginx:nginx .
# Apache2
chown -R www-data:www-data .
Note: this is a hacky fix; if you rebuild the container you will have to do all of this again every time. You should look at doing it inside your Dockerfile as a permanent solution.
Untested example of the Dockerfile:
FROM php:7.4-fpm
WORKDIR /var/www/html
# Nginx
RUN groupadd -g 101 nginx && \
useradd -u 101 -ms /bin/bash -g nginx nginx
# Apache2
RUN groupadd -g 101 www-data && \
useradd -u 101 -ms /bin/bash -g www-data www-data
# Download and install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Nginx
USER nginx
# Apache2
USER www-data
# Or add this as an entry-point (runs as user for permissions)
RUN composer install --no-dev
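If you go the entry-point route mentioned in the last comment above, a rough, untested sketch (the file name and paths are assumptions) could be:
#!/usr/bin/env bash
# entrypoint.sh - install dependencies at container start, then hand off to PHP-FPM
set -e
cd /var/www/html
composer install --no-dev --no-interaction
exec php-fpm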
I am using Laravel 8 Sail, and I want to run a task repeatedly, but it seems the scheduled task is not getting called.
I have installed cron and set up a crontab:
RUN apt-get update && apt-get install -y cron
COPY crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab \
&& crontab /etc/cron.d/crontab \
&& touch /var/log/cron.log
and created a crontab file:
* * * * * cd /var/www/html && php artisan schedule:run --no-ansi >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job
And inside Kernel.php
protected function schedule(Schedule $schedule)
{
    $schedule->command('inspire')->everyMinute();
    $schedule->call(function () {
        DB::table('products')->delete();
    })->everyMinute();
}
Normally it should run the inspire command and delete the products table every minute, but nothing happens.
Note: when I get inside the Docker container and run 'cd /var/www/html && php artisan schedule:run --no-ansi >> /var/log/cron.log 2>&1' manually, it works fine, but it does not run automatically.
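One thing worth double-checking (it is not visible in the snippets above) is whether the cron daemon itself is running inside the Sail container; installing a crontab does nothing if cron is never started. A minimal check, assuming a Debian/Ubuntu-based image:
# from inside the running container
ps aux | grep [c]ron   # is the cron daemon alive?
service cron start     # start it if it is not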
I have a problem with a shared volume that both the nginx alpine and php-fpm containers need to be able to write to and delete folders and files from.
The Nginx container creates static files in /var/run.
FROM nginx:alpine
RUN set -x ; \
addgroup -g 82 -S www-data ; \
adduser -u 82 -D -S -G www-data www-data && exit 0 ; exit 1
RUN chmod 777 -R /var/run
RUN mkdir -p /var/run/static-cache/
I also create the same directory in the php-fpm container so it can clear all the cache when I update a page.
FROM php:7.4-fpm-alpine
RUN apk add shadow && usermod -u 1000 www-data && groupmod -g 1000 www-data
RUN chmod 777 -R /var/run
RUN mkdir -p /var/run/static-cache/
WORKDIR /var/www/html/
A volume mount links these two directories between the two containers.
However, when I save a page, I'm not able to clear any folder from the php-fpm container due to permission denied.
Is there any solution to work around this?
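One observation, offered as a guess: in the snippets above, www-data ends up with UID/GID 82 in the nginx image but 1000 in the php-fpm image, so files written by one container are foreign-owned from the other container's point of view even though the volume is shared. A minimal way to confirm this from each container (the shared path is assumed to be /var/run/static-cache):
# run this in both containers and compare the numeric IDs
ls -ln /var/run/static-cache   # numeric owner/group of the shared files
id www-data                    # UID/GID of www-data in this image
If the numbers differ, picking one UID/GID value and using it in both Dockerfiles (or giving the shared directory a common group with group-write permission) should resolve the permission denied errors.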
I have this multi-stage build in a Dockerfile:
## Stage 1
FROM node:9 as builder
RUN apt-get update && apt-get -y install sudo
RUN echo "newuser ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
RUN useradd -ms /bin/bash newuser
USER newuser
RUN mkdir -p /home/newuser/app
WORKDIR /home/newuser/app
RUN sudo chown -R $(whoami) $(npm config get prefix)/lib
RUN npm set progress=false
RUN npm config set depth 0
RUN npm cache clean --force
COPY dist .
## Stage 2
FROM php:5.6.30
RUN apt-get update && apt-get -y install sudo
RUN echo "newuser ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
RUN useradd -ms /bin/bash newuser
USER newuser
RUN mkdir -p /home/newuser/app
WORKDIR /home/newuser/app
COPY --from=builder /home/newuser/app /home/newuser/app
RUN ls -a /home/newuser/app
CMD ["php", "-S", "localhost:3000"]
The image builds successfully using:
docker build -t x .
Then I run it with:
docker run -p 3000:3000 x
but when I go to localhost:3000 on the host machine, I don't get a response. The webpage is blank.
Does anyone know why that might happen?
I also tried:
CMD ["sudo", "php", "-S", "localhost:80"]
and
docker run -p 3000:80 x
and a few other variations, still nothing.
I'm not sure why the PHP built-in server wasn't working, but I got it working with the Node.js http-server static file server package:
FROM node:9 as builder
RUN apt-get update && \
apt-get -y install sudo
RUN echo "newuser ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
RUN useradd -ms /bin/bash newuser
USER newuser
RUN mkdir -p /home/newuser/app
WORKDIR /home/newuser/app
RUN sudo chown -R $(whoami) $(npm config get prefix)/lib
RUN sudo chown -R $(whoami) $(npm config get prefix)/lib/node_modules
RUN sudo chown -R $(whoami) $(npm config get prefix)/bin
RUN sudo chown -R $(whoami) $(npm config get prefix)/share
RUN sudo chown -R $(whoami) /usr/local/lib
RUN npm set progress=false
RUN npm config set depth 0
RUN npm cache clean --force
COPY dist .
RUN npm install -g http-server
ENTRYPOINT ["http-server", "."]
you build it like so:
docker build -t x .
and then run it like so:
docker run -p 3000:8080 x
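As a side note on the original question: the PHP built-in server was most likely unreachable because php -S localhost:3000 binds only to the loopback interface inside the container, so the published port has nothing to forward to. Binding to all interfaces should also make the first approach work:
CMD ["php", "-S", "0.0.0.0:3000"]
and then:
docker run -p 3000:3000 x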