Use behat with jenkins in amazon ec2 server - php

How can I set up and configure Behat, Ahoy, and Docker with Jenkins on an Amazon EC2 server?
I want to run my Behat features every time I push something to my Git account, with the help of Jenkins and Sauce Labs, on the EC2 server.

There are lots of ways to do this. How much do you know about Amazon EC2? And Selenium? And Docker? There are lots of technologies here... Do you want to configure a Selenium grid? I'll try to answer some of this, but you are asking about many things at once. xD
I'll describe my solution (a Selenium grid) first:
First of all, create a Selenium hub on an EC2 Ubuntu 14.04 AMI without a UI (command line only) and link it as a Jenkins slave to your Jenkins master, or make it the master directly, whichever you prefer. Download Selenium Server Standalone (be careful which version you download; if you download the Selenium 3 beta, things could change). With that you can configure the hub. You can also add the Selenium hub as a service configured to run automatically at server start. It is important that you open the Selenium default port (or the one that you configured) so the nodes can connect to it. You can do that in the Amazon EC2 console once you have created your instance: just add a security group with an inbound TCP rule for the port you want, restricted to the IPs you want.
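As a rough sketch (assuming Java is already installed and using Selenium Server Standalone 2.53.1 as an example version; the download URL and port should match whatever you actually use), starting the hub on the Ubuntu instance could look like this:

wget http://selenium-release.storage.googleapis.com/2.53/selenium-server-standalone-2.53.1.jar
# Start the hub; nodes will register against port 4444 unless you change it
java -jar selenium-server-standalone-2.53.1.jar -role hub -port 4444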
Then you can create a Windows Server 2012 instance (for example; that's what I did) and repeat the process. Download the same Selenium version and the ChromeDriver (there is no need to download a Firefox driver for Selenium versions before Selenium 3). Put the Selenium command that registers the machine with the hub as a node into a text file and save it as a *.bat so you can execute it; an example is shown below. If you want to run the .bat at startup, you can create a service with the Task Scheduler or use NSSM (https://nssm.cc/). Don't forget to add the rules to the security groups for this machine too!
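For illustration only, the .bat on the Windows node might contain something like the following (the hub IP, port, Selenium version, and chromedriver path are placeholders for your own values):

REM Register this machine as a node against the hub (hypothetical paths and IP)
java -Dwebdriver.chrome.driver=C:\selenium\chromedriver.exe -jar selenium-server-standalone-2.53.1.jar -role node -hub http://HUB_PRIVATE_IP:4444/grid/register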
You can link as many node servers as you want to your hub.
If you want to use docker, good luck! ;) Haha.
Seriously though, with Docker I recommend starting as simply as possible: create a Dockerfile locally that runs the Jenkins server and the Selenium server, NOT in grid mode. When you have it working locally, push it to a repository. Once all of that is running, create an EC2 instance and install Docker, pull your Selenium Docker image, and run it, mapping the container ports to the host ports.
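If you go that route, a minimal sketch (using the official selenium/standalone-chrome image as an example; your own image name and ports may differ) would be:

docker pull selenium/standalone-chrome
# Map the container's Selenium port 4444 to the host so Behat/Jenkins can reach it
docker run -d -p 4444:4444 selenium/standalone-chrome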
You have a lot of work to do here... but it's very interesting. I recommend you go step by step, building a better infrastructure with every iteration. Don't try to add all those technologies at the same time.
There are lots of sites covering these concepts.
Good luck!


Nodejs and wamp server confusion

The situation
I have been developing in PHP and using WAMP for the past two years. Then I came across a module to implement a chat system followed by instant notifications. So I looked it up and found this awesome "nodejs" that lets you push to connected users in real time.
This guy (nodejs socket.io and php) uploaded a way to integrate Node.js, socket.io, and PHP without a Node server.
So I downloaded his project (from GitHub) and ran it on my computer, but it gave a
connection refused error on port 8080. So
I went to the Node.js site and installed Node.js on my system (Windows). It automatically updated my environment variables, and I could just go to my command line and run an example project with
path(...)node nodeServer.js
and then run the index file of the project from the shared link, and it starts working. Everything runs smoothly.
MY QUESTION
If I cannot run the Node app in the small example project without installing Node.js on my system, then how am I supposed to install Node.js on a live server (Apache) and use the command line to start Node.js?
I know this might be too silly, but I am really new to Node.js, so I don't know if I can run Node on a live PHP server. If it is possible, can anyone tell me how I can do that? Or is it just an ideal situation that can't be done?
Node.js does not need to be installed alongside Apache. Node.js itself provides a server that listens on a port. You can use Apache or Nginx as a proxy in front of it, but you can also run your application without those servers.
Create a file index.js using the code below and run node index.js
var http = require('http');

// Create a server that answers every request with a plain-text "Hello World"
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');

console.log('Server running at http://127.0.0.1:1337/');
Open your browser and enter this URL: http://127.0.0.1:1337/. You will see Hello World there. In this case Node.js is listening on port 1337.
If you are using a cloud server or VPS or any kind of solution that gives you full control over what gets installed, you can just install Node.js there and run what you need...
https://github.com/joyent/node/wiki/installing-node.js-via-package-manager
Some services will allow you to pick what gets installed... so you just pick Node.js and run it alongside your Apache.
However, if you are using a shared hosting solution, only a limited number of those actually host Node (if any), and solving this would be almost impossible for you.
Second edit: Sorry for editing twice, but there is a catch with the "no nodejs server" claim in the mentioned Stack Overflow post: there actually is a server, and the post mentions the need to npm install certain modules... This is not the right way to do it, but if you still want to try it, you need Node installed (and npm along with it), then you need to npm install the mentioned packages, add the simple server file quoted in the post, run it, and then you have all you need for your chat...
If you need some help, ping me, but if this is a time-critical project, rather find some third-party solution... and then learn about this one.
TL;DR: find a hosting service that'll give you admin access and support firewall requests, or self-host with a free DNS subdomain and have a script update your IP so you don't have to pay for a static one.
My Experiences:
You can actually use Node for input/output stream manipulation as well. Look at gulp and Node for more info. Using Bower and Bluebird on top of a Git project makes setting up apps very easy and quick via Node.
As for using socket.io with a Node/WAMP setup, I've actually used this in the past. I had WAMP installed on the server initially, but I used Apache directives to reverse-proxy requests on port 8080 to the Node.js app from the client scripts; a sketch of the kind of directives involved is below.
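For illustration only (this is not the exact config I used), the Apache side could look roughly like this, assuming mod_proxy and mod_proxy_http are enabled (add mod_proxy_wstunnel if you want the WebSocket transport) and the Node.js app listens on 127.0.0.1:8080:

# In the VirtualHost for the site: forward socket.io traffic to the Node app
ProxyPass        /socket.io/ http://127.0.0.1:8080/socket.io/
ProxyPassReverse /socket.io/ http://127.0.0.1:8080/socket.io/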
I did have to install Node separately, though, so you'll need something like RamNode maybe (I think they allow hosted apps like IIS/MVC etc. too).
The easiest hosting setup for development, in my opinion, was self-hosting WAMP/Node with a free subdomain from afraid.org (FreeDNS).
Otherwise RamNode gives you full access to admin features on your VM, I believe. So you may be able to install Node there, as long as you request firewall permissions when needed for extra ports (socket.io used different ports for its requests on the page, so I didn't have to worry about CORS issues or anything).

PHP and Ruby with Docker

Is it possible to run two web apps at the same time, one using PHP and the other using Ruby, each one in a Docker container?
That should be no problem. Normally you have one app per container.
You could create a Docker container for your PHP server and a container for your Ruby server.
You need to choose different host ports, because by default both will run on port 80 or 443; map each container to its own port and it should work, as sketched below.
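A minimal sketch, assuming the official php:apache image for the PHP app and a hypothetical my-ruby-app image (for example a Rails or Sinatra app listening on port 3000):

# PHP app on host port 8080, Ruby app on host port 3000
docker run -d -p 8080:80 --name php-app php:apache
docker run -d -p 3000:3000 --name ruby-app my-ruby-app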
Docker is designed to run one piece of software per container; if you want to run more than one in the same container, you need a tool like supervisor, s6, or daemontools. Check the docs for supervisord:
https://docs.docker.com/articles/using_supervisord/

Where to put jenkins server in my situation

Hi, I have a server setup like this:
I want to update my QA server and development server whenever a change happens in Bitbucket. To automate this, one person suggested I use Git hooks, so I searched about it and found the Jenkins and Bitbucket connector:
jenkins hook management
So I think I have to have a Jenkins server somewhere, and I cannot figure out where.
Where should I have the Jenkins server? Inside the development server? The QA server? Or both servers?
Can anyone please help me and explain how to do this? I am new to Jenkins and Bitbucket.
I am using PHP and my servers run LAMP.
For what it's worth, here is an answer, though not a spectacular one, since there is no need to be spectacular :) You can set it up on either the development server or the QA server; it does not really matter, I guess.
Jenkins will orchestrate deployment from Bitbucket to your environments, and you just need one instance of it. The flow will go something like this:
Push to Bitbucket
Triggers commit hook
Jenkins remotes and runs deployment script on development server
Jenkins remotes and runs deployment script on QA server
Jenkins runs tests on QA server
etc.
Hope it helps. Just to clarify, the deployment script here would pull code, migrate the DB, restart services, and so on; a rough sketch is below.
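Purely as an illustration (the path, migration command, and service name are placeholders, not anything from the question), such a deployment script on each target server might look like:

#!/bin/bash
# Hypothetical deployment script that Jenkins runs over SSH on the dev/QA server
set -e
cd /var/www/myapp                 # placeholder: application directory
git pull origin master            # pull the latest code from Bitbucket
php scripts/migrate.php           # placeholder: whatever DB migration your app uses
sudo service apache2 restart      # restart the LAMP web server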

How to host a PHP file with AWS?

After many hours of reading documentation and messing around with Amazon Web Services, I am unable to figure out how to host a PHP page.
Currently I am using the S3 service for a basic website, but I know that this service does not support dynamic pages. I was able to use Elastic Beanstalk to get the sample PHP application running, but I really have no idea how to use it. I read up on some other services, but they don't seem to do what I want, or they are just way too confusing.
So what I want to be able to do is host a website on Amazon that has dynamic PHP pages. Is this possible, and which services do you use?
For a PHP app, you really have two choices in AWS.
Elastic Beanstalk is a service that takes your code, and manages the runtime environment for you - once you've set it up, it's very easy to deploy, and you don't have to worry about managing servers - AWS does pretty much everything for you. You have less control over the environment, but if your server will run in EB then this is a pretty easy path.
EC2 is closer to conventional hosting. You need to decide how your servers are configured and deployed (what packages get installed, what version of Linux, instance size, etc.), your system architecture (whether you have separate instances for cache or database, whether or not you need a load balancer, etc.), and how you manage availability and scalability (multiple zones, multiple data centers, auto-scaling rules, etc.).
Now, those are all things that you can use; you don't have to. If you're just trying to learn about PHP on AWS, you can start with a single EC2 instance, deploy your code, and get it running in a few minutes without worrying about any of the stuff in the previous paragraph. Just create an instance from the Amazon Linux AMI, install Apache and PHP, open the appropriate ports in the firewall (AKA the EC2 security group), deploy your code, and you should be up and running.
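As a rough sketch on an Amazon Linux instance (package names and service commands vary between AMI generations, so treat this as an outline rather than an exact recipe):

sudo yum install -y httpd php            # install Apache and PHP
sudo service httpd start                 # start Apache now
sudo chkconfig httpd on                  # start Apache on every boot
echo "<?php phpinfo(); ?>" | sudo tee /var/www/html/index.php   # drop in a test page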
Your PHP must run on EC2 machines.
Amazon provides great tools to make life easy (Beanstalk, ECS for Docker, ...), but in the end, you own EC2 machines.
There is no such thing as a place where you can put your PHP code without worrying about anything else ;-(
If you are having problems hosting PHP sites on AWS, then you can go with a service provider like Cloudways. They provide managed AWS servers with one-click installs of PHP frameworks and CMSs.

Pushing to multiple EC2 instances on a load balancer

I am attempting to figure out a good way to push out a new commit to a group of EC2 server instances behind an ELB (load balancer). Each instance is running Nginx and PHP-FPM.
I would like to perform the following workflow, but I am unsure of a good way to push out a new version to all instances behind the load balancer.
Dev is done on a local machine
Once changes are ready, I perform a "git push origin master" to push the changes to Bitbucket (where I host all my Git repos)
After being pushed to Bitbucket, I would like to have the new version pushed out to all EC2 instances simultaneously.
I would like to do this without having to SSH in to each instance (obviously).
Is there a way to configure the remote servers to accept a remote push? Is there a better way to do this?
Yes, I do this all of the time (with the same application stack, actually).
Use a base AMI from a trusted source, such as the default "Amazon Linux" ones, or roll your own.
As part of the launch configuration, use the "user data" field to bootstrap a provisioning process on boot. This can be as simple as a shell script that runs yum install nginx php-fpm -y and copies files down from an S3 bucket or does a pull from your repo; a sketch is below. The Amazon-authored AMIs also include support for cloud-init scripts if you need a bit more flexibility. If you need even greater power, you can use a change management and orchestration tool like Puppet, Chef, or Salt (my personal favorite).
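For illustration only (the bucket name and paths are made up, and this assumes the instance has an IAM role allowing S3 reads and the AWS CLI available), a user-data script along these lines would do it:

#!/bin/bash
# Hypothetical EC2 user-data bootstrap script
yum install -y nginx php-fpm
aws s3 cp s3://my-deploy-bucket/app.tar.gz /tmp/app.tar.gz    # placeholder bucket/object
mkdir -p /var/www/html
tar -xzf /tmp/app.tar.gz -C /var/www/html
service php-fpm start
service nginx start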
As far as updating code on existing instances: there are two schools of thought:
Make full use of the cloud and just spin up an entirely new fleet of instances that grab the new code at boot. Then you flip the load balancer to point at the new fleet. It's instantaneous and gives you a really quick way to revert to the old fleet if something goes wrong. Hours (or days) later, you then spin down the old instances.
You can use a tool like Fabric or Capistrano to do a parallel "push" deployment to all the instances at once. This is generally just re-executing the same script that the servers ran at boot. Salt and Puppet's MCollective also provide similar functionality that mesh with their basic "pull" provisioning.
Option one
Push it to one machine.
Have a Git hook created on it (http://git-scm.com/book/en/Customizing-Git-Git-Hooks); see the sketch after this list.
Make the hook run a pull on the other machines.
Only problem: you'll have to maintain a list of machines to run the update on.
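Purely as an illustration (the user, IPs, and path are placeholders), a post-receive hook on that one machine could fan the update out like this:

#!/bin/bash
# Hypothetical post-receive hook: make every other machine pull the new code
SERVERS="10.0.0.11 10.0.0.12"                    # placeholder list of machines to update
for host in $SERVERS; do
  ssh deploy@"$host" "cd /var/www/html && git pull origin master"
done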
Another option
Have a cron job pull from your Bitbucket account on a regular basis.
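For example (the path and interval are chosen arbitrarily), a crontab entry on each server could pull every five minutes:

*/5 * * * * cd /var/www/html && git pull origin master >> /var/log/git-pull.log 2>&1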
The tool for this job is Capistrano.
I use an awesome gem called capistrano-ec2group in order to map Capistrano roles to EC2 security groups.
This means that you only need to apply an EC2 security group (e.g. app-web or app-db) to your instances in order for Capistrano to know what to deploy to them.
This means you do not have to maintain a list of server IPs in your app.
The change to your workflow would be that, instead of focusing on automating the deploy on every push to Bitbucket, you would push and then execute
cap deploy
If you really don't want to do two steps, make an alias :D
alias shipit='git push origin master && cap deploy'
This solution builds on E_p's idea. E_p says the problem is that you'd need to maintain a server list somewhere in order to tell each server to pull the new update. If it were me, I'd just use tags in EC2 to help identify a group of servers (like "Role=WebServer", for example). That way you can just use the EC2 command line interface to list the instances and run the pull command on each of them.
# Pull the latest code on every running instance tagged WebServer
for i in \
`ec2din --filter "tag-value=WebServer" --region us-east-1 \
| grep "running" \
| cut -f17`\
; do ssh $i "cd /var/www/html && git pull origin"; done
Note: I've tested the code that fetches the ip addresses of all tagged instances and connects to them via ssh, but not the specific git pull command.
You need the Amazon CLI tools installed wherever you want this to run, as well as the SSH keys installed for the servers you're trying to update. Not sure what Bitbucket's capabilities are, but I'm guessing this code won't be able to run there. You'll either need to do as E_p suggests and push your updates to a separate management instance, including this code in your post-commit hook, OR, if you want to save the headache, you could do as I've done and just install the CLI tools on your local machine and run it manually when you want to deploy the updates.
Credit to AdamK for his response to another question which made it easy to extract the ip address from the ec2din output and iterate over the results: How can I kill all my EC2 instances from the command line?
EC2 CLI Tools Reference: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/Welcome.html
Your best bet might be to actually use AMIs for deployments.
Personally, I typically have a staging instance where I can pull any repo changes into. Once I have confirmed it is operating the way I want, I create an AMI from that instance.
For deployment, I use an autoscaling group behind the load balancer (it doesn't need to be dynamically scaling or anything). In a simple setup you have a fixed number of servers in the autoscale group (for example, 10 instances). I would change the AMI associated with the autoscale group to the new AMI, then start terminating a few instances at a time in the autoscale group. So, say I have 10 instances and I terminate two to take it down to 8 instances. The autoscale group is configured to have a minimum of 10 instances, so it will automatically start up two new instances with the new AMI. You can then keep removing instances at whatever rate makes sense for your level of load, so as not to impact the performance of your fleet.
You can obviously do this manually, even without an autoscale group, by directly adding/removing instances from the ELB as well.
If you are looking to make this fully automated (i.e. continuous deployment), then you might want to look at using a build system such as Jenkins, which would allow a commit to kick off a build and then run the necessary AWS commands to make AMIs and deploy them.
I am looking for a solution to the same problem. I came across this post and thought it was an interesting approach
https://gist.github.com/Nilpo/8ed5e44be00d6cf21f22#pc
Go to "Pushing Changes To Multiple Servers"
Basically the idea is to create another remote, call it "production" or whatever you want, and then add multiple URLs (the IPs of all of the servers) to that remote. This can be done by editing .git/config.
Then you can run git push production <branch> and it should push out to all of the URLs listed under "production".
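For illustration, with made-up users, hosts, and repo paths, the extra push URLs can be added like this (each --add --push URL becomes another push target for the production remote):

git remote add production ssh://deploy@10.0.0.11/var/repos/app.git
git remote set-url --add --push production ssh://deploy@10.0.0.11/var/repos/app.git
git remote set-url --add --push production ssh://deploy@10.0.0.12/var/repos/app.git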
One requirement of this approach is that the repos on the servers need to be bare repos, and you will need a post-receive hook to update the working tree.
Here is an example of how to do that: Setting up post-receive hook for bare repo
