I am trying to set up a Docker container with WebdriverIO built into it, with the eventual aim of being able to run a CI/CD pipeline in GitLab, but I have absolutely no idea where to start.
My application is a PHP/MySQL based app which was also recently dockerised. I access it locally on http://localhost.
I have tried to create a Docker image with wdio built into it, but it fails when it runs
npm init wdio --yes
because the --yes flag doesn't force any of the default settings, contrary to what the official documentation says. This then causes the wdio installation to fail.
What is confusing me even more is that there seem to be very few tutorials for this, the wdio documentation doesn't seem great, and the tutorials I can find all seem to mention Selenium. FYI, I am just a dev who has been tasked with taking some existing WDIO scripts and getting them ready for CI/CD; I don't know a massive amount about WDIO in the first place.
Does anyone have any basic steps I could follow that describe the process of taking some local WDIO scripts and getting them to run inside a container, with the end goal of having them run in some sort of CI/CD pipeline?
When trying to create the image, the following command does not seem to work:
npm init wdio --yes
It would be much more appropriate to initialize the wdio project locally first and then copy it into the image in your Dockerfile.
This is what it might look like:
FROM node:16
USER root
#===============================
# Set default workspace
#===============================
RUN mkdir /home/workspace \
&& chmod 2777 /home/workspace
COPY . /home/workspace
WORKDIR /home/workspace
This way, your Docker image contains your whole project.
Then you could append the following commands to make sure the environment is ready for WebdriverIO to execute.
#==================================
# Install needed packages
#==================================
RUN apt update && apt upgrade -y
RUN npm install
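As a side note, if you find rebuilds slow, a common variation (just a sketch, not something the setup above strictly needs) is to reorder the COPY steps so that only package.json is copied before the install, letting Docker cache the npm layer:
#==================================
# (optional) cache npm dependencies
#==================================
COPY package*.json /home/workspace/
WORKDIR /home/workspace
RUN npm install
COPY . /home/workspace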
If you need anything extra, such as a browser and a matching driver, there are plenty of ways to install them.
You can use ENTRYPOINT or CMD to make the container execute the specified test suites once it is up.
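For example, a minimal sketch (the config file name wdio.conf.js is an assumption here, so use whatever your project's config is actually called):
#==================================
# Run the test suite on start-up
#==================================
CMD ["npx", "wdio", "run", "wdio.conf.js"]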
If you want a complete CI/CD flow with Docker containers, the details will depend on which service you use.
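For GitLab in particular, a runner job essentially just builds and runs this image; roughly something like the following shell steps (the image and network names are hypothetical, and your PHP app container has to be reachable from the test container):
docker build -t wdio-tests .
docker run --rm --network my-app-network wdio-tests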
Related
I'm new to AWS and I've gotten as far as getting the following error in Symfony:
Asset manifest file "/var/app/current/public/build/manifest.json" does not exist.
Locally, this would be fixed by running npm run build. I've tried adding NPM_CONFIG_PRODUCTION=true to the environment variables, but I think that might just be for Node.js apps?
I've also tried SSHing onto the EC2 instance and installing Node on there, but I ran into errors trying to install either npm or nvm. I feel like this is the wrong approach anyway, since it seems like the idea of Beanstalk is that you shouldn't need to SSH onto the instance.
Perhaps I should just include the node_modules folder in the zip uploaded, but since one of the recommended ways to produce the zip is to use git, this doesn't seem correct either.
After a lot of digging around, it seems like there are three options here:
SSH onto the instance(s). The following worked for me (Amazon Linux 2, ARM chip):
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
(cd /var/app/current/;sudo npm add --dev @symfony/webpack-encore)
(cd /var/app/current/;sudo npm install)
(cd /var/app/current/;sudo npm run build)
The problem with this, is if you have multiple instances that scale up and down with a load balancer, it isn't really practical to do this.
Add the above as a hook:
The following sh file could be put in the .platform/hooks/predeploy directory:
#!/bin/bash
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
(cd /var/app/current/;sudo npm add --dev @symfony/webpack-encore)
(cd /var/app/current/;sudo npm install)
(cd /var/app/current/;sudo npm run build)
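One gotcha: Elastic Beanstalk only runs platform hook files that are executable, so make sure the script's permissions are set before it goes into the bundle, e.g.:
chmod +x .platform/hooks/predeploy/*.sh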
However, I've since learnt that it's generally advised to just include node_modules in the zip that gets uploaded. I guess this way the time needed to get the server up is reduced.
Include the node_modules folder in the zip that gets uploaded.
To include the node_modules folder, which is naturally ignored by Git, I used the EB CLI and added a .ebignore file, which is a clone of the .gitignore file except that the node_modules and public folders are no longer ignored. Also be careful in your build process that you're not including the Node dev dependencies.
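As a rough sketch, creating the .ebignore can be as simple as the following (you then hand-edit it, since the exact entries depend on your .gitignore):
cp .gitignore .ebignore
# remove the node_modules and public/build lines from .ebignore
# so those folders end up in the bundle that gets uploaded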
I have a new job as a vulnerability and load tester at a company that makes and runs ecommerce sites for apparel companies in Japan. I don't have a degree in computer science (some first and second year credits) so I'm struggling quite a bit.
First, I git cloned the repo to my local machine.
Next are the instructions on our company Bitbucket, as follows:
install aws-cli
install composer
install npm
log in to ecr:
$(aws ecr get-login --region ap-northeast-1 --no-include-email)
Get the local environment running with this command:
If there is no SDK in the local directory, set it using "how to update SDK to the newest version"
move to php source directory
cd php/ef
edit composer npm
composer install
npm install
(the contents of package.json and package-lock.json have been somewhat edited)
npm run dev
cd ../..
# set environment variables
export CUSTOMER_NAME=javag(choose the client)
# docker-compose
docker-compose -f docker-compose.yml -f docker-compose.local.yml up
After a short while, php-fpm starts up. Check http://localhost:8000/
You can see changes to files in php/ds3-base-pre by reloading.
I apologize for the poor translation from Japanese, but basically (from my low knowledge level) I think this ecommerce site is being run in a Docker container on an EC2 instance, with Nginx as the web server (plus a Redis server) and PHP 7 running under php-fpm? I'm new to web development, and the only thing I've done so far is get simple MVC examples running on a local server.
However, my question is: how can I open the Laravel project inside this container so that all the folders show up on the left side of my editor, like below:
I want this so that writing Laravel tests would be more organized, as I am somewhat accustomed to the folder structure of Laravel, and also so I can use the php artisan commands to run tests.
I apologize for such a wordy question, but I was thrown this task and really have to start at beginner level with Docker (and pretty much every other technology mentioned above), so any guidance is welcome.
Edit:
I opened the /php/ef directory locally with the subl ef command and, sure enough, I have it Laravel style (folders in the sidebar).
I have a local install of Jenkins CI, installed via the instructions in Chapter 2 of Jenkins: The Definitive Guide. I start Jenkins via a Java Web Start/JNLP file on my Mac running El Capitan. All that went great, and the sample project is working.
I now want to get my Codeception acceptance tests running via Jenkins. I'm following the most recent blog post about this on the Codeception site: http://codeception.com/02-04-2015/setting-up-jenkins-with-codeception.html#.VwWxE2PLRAZ.
Using the Execute shell build step, my build fails with the following message:
Started by user anonymous
Building in workspace /Users/Cosette/.jenkins/workspace/Project Name
[Project Name] $ /bin/sh -xe
/var/folders/ns/ly6hv_513tl6qqslrb2vj_dw0000gn/T/hudson9210778078639547082.sh
composer install
/var/folders/ns/ly6hv_513tl6qqslrb2vj_dw0000gn/T/hudson9210778078639547082.sh:
line 2: composer: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
My guess is that maybe this install type doesn't install the Jenkins user? Please note that I am very much a beginner and this is my first question here on Stack Overflow. Also, that should be a + sign in front of "composer install".
You should download Composer from https://getcomposer.org/, rename it to composer, make it executable with chmod +x, and place it somewhere in Jenkins' PATH.
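In practice that boils down to something like the following (the target directory is just a typical choice, adjust it to wherever Jenkins' PATH actually points):
curl -sS https://getcomposer.org/installer | php
chmod +x composer.phar
sudo mv composer.phar /usr/local/bin/composer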
I have yet to find an answer to ensuring the Jenkins user has access to files under other users. I eventually gave up and installed via Homebrew, eliminating the Jenkins user altogether. For now I'm just placing everything necessary to run in Users/UserName/.jenkins/Home/workspace/Project-name.
I am trying to use Slim on OpenShift with a free node. I can run composer update from the SSH session without any problem.
The only problem is that every time I commit files through git I have to go to the console and run composer install again. My question is: is there any easy way to work around this? I tried a Bash script in /project/.openshift/action_hooks/post_deploy, but the server is not creating the vendor folder under runtime/repo.
I always do it via action hooks:
Inside my project I have /project/.openshift/action_hooks/post_deploy, where post_deploy is a bash script.
Here is what I have been using:
#!/bin/bash
export MY_PHPCOMPOSER=$OPENSHIFT_DATA_DIR/composer.phar
# if composer does not exist yet, download it
if [ ! -f $MY_PHPCOMPOSER ]; then
    cd $OPENSHIFT_DATA_DIR
    echo "Downloading composer..."
    php -r "readfile('https://getcomposer.org/installer');" | php
fi
$MY_PHPCOMPOSER -n -q self-update
cd $OPENSHIFT_REPO_DIR
# install
php -dmemory_limit=1G $MY_PHPCOMPOSER install
So the post_deploy script will run every time you push your repo to OpenShift. It works like a charm!
Side note
Since the Composer version available on OpenShift is not always up to date, it's safer to download a fresh copy of Composer and use that.
Also, don't forget to adjust the permission settings.
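Concretely, the hook won't be executed unless the file itself is executable, so after adding it run something like:
chmod +x .openshift/action_hooks/post_deploy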
Helpful links
Openshift builds
Openshift Default Build Lifecycle
I know that my answer is late, but according to the OpenShift documentation you can enable composer install after each build just by creating a marker file:
touch .openshift/markers/use_composer
I have yet to find an elegant and efficient way to run Laravel Artisan commands in my Docker-based local dev environment.
Could anybody suggest the recommended or "proper" way to do things like migrations?
Or, has anybody found a neat way of doing this? Ideally with examples or suggestions.
Things that I've considered:
A new container (sharing the same volume and db link) with ssh, just for running commands (seems nasty).
Hacks in supervisor that could then end up running on live (not ideal).
Editing db configs, or trying to hack in a "host" environment, so that at least things like migrate can be run from the host.
Creating web front ends to run things (really nasty).
Trying to build a "signal" for these things.
I'm still getting my head around Docker and its new-container-for-everything approach.
I suppose I want to balance cool-dev-ops stuff with why-do-I-need-another-fake-server-just-get-it-working-already.
I'd love to commit to it for my dev workflow, but it seems to become awkward to use under certain circumstances, like this one...
Any suggestions and ideas are welcome. Thanks all.
Docker 1.3 brings the new exec command.
So now you can "enter" a running container like this:
docker exec -it my-container-name /bin/bash
After that you can run any command you want
php artisan --version
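The same approach works for one-off commands without opening a shell first, for example (the container name is whatever yours happens to be):
docker exec -it my-container-name php artisan migrate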
The best practice regarding Docker is to run each process inside its own container. Therefore, the ideal way to run artisan commands is to have an image for creating containers specifically for this purpose.
I've created an image which can be pulled from the Docker Hub dylanlindgren/docker-laravel-artisan and it works really well. It's on GitHub as well if you want to take a look at the Dockerfile behind it.
I've also just written a blog post describing the way that all these separate containers fit together.
There are a couple of possibilities...
Mounting a host directory into your container as the folder in which your Laravel app lives. That way you can just run php artisan migrate or composer update from the host (see the sketch after this list). You might have problems with deployment, though, since you would have to replicate that part of your environment on the server.
Adding an SSH server to your container (which is not recommended; here's a good discussion of that).
Building and using nsenter, a tool for "entering" a running container and getting shell access. Note: I haven't used it, I just found it a while ago via a reference in the link above.
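As a rough sketch of the first option (the image name and the mount path are hypothetical, match them to your own setup):
# mount the project directory from the host over the app directory in the container
docker run -d --name laravel-dev -v "$(pwd)":/var/www/html my-laravel-image
# then, with PHP and Composer installed on the host, run from the project root:
composer update
php artisan migrate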
If you're primarily interested in deployment and you're doing it via a Dockerfile, then the answer would be to add composer install and php artisan migrate to your Dockerfile so they run when the image is built.
I'm interested in hearing more answers to this. It's something that I'm just getting into as well and would like to learn more about.
I use SSH and run migrations from a terminal inside the container.
I personally like Phusion's approach of using Docker as a 'lightweight virtual machine'. So I used their baseimage-docker which I've extended to create my own Docker image for Laravel applications.
I'm aware Phusion's image can be contentious in the Docker community, and that SSH is frowned upon by some who advocate Docker containers as microservices. But I'm happy with Phusion's approach until there are more established tools and practices for the multi-container approach.
I'm in the process of figuring out creating Docker images for Laravel projects, this is what I have so far.
FROM base_image_with_LAMP_stack_and_dependencies
WORKDIR /var/www/html/app
COPY composer.json composer.json
RUN composer install --no-scripts --no-dev --no-autoloader
COPY . .
RUN echo 'chown -R www-data:www-data /var/www/ \
&& composer dump-autoload \
&& cp .env.example .env \
&& php artisan key:generate \
&& php artisan migrate \
&& apachectl -D FOREGROUND' >> /root/container_init.sh && \
chmod 755 /root/container_init.sh
EXPOSE 80
CMD /root/container_init.sh
This way, there is no dependency on the database during build time, and the migration process can run every time a new container is started.
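Usage is then just the normal build/run cycle (the tag is arbitrary):
docker build -t my-laravel-app .
docker run -d -p 80:80 my-laravel-app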