Laravel Elastic Beanstalk app deployed in CodePipeline giving 500 SERVER ERROR - php

I've created a Laravel app locally. (This is working fine.)
Then I deployed that app to AWS Elastic Beanstalk as a .zip file. (This is working fine.)
Next, I created a simple pipeline in AWS CodePipeline that grabs the code from a particular GitHub repo and deploys it to that specific AWS Elastic Beanstalk environment. I can see that on every push I make to that repo, CodePipeline does deploy to that Elastic Beanstalk environment.
The problem is that the environment health has now degraded, with the following Recent Event:
Environment health has transitioned from Warning to Severe. 100.0 % of the requests are failing with HTTP 5xx. ELB processes are not healthy on all instances. Application restart completed 42 seconds ago and took 7 seconds. ELB health is failing or not available for all instances. One or more TargetGroups associated with the environment are in a reduced health state: - awseb-AWSEB-CVIEEN5EVRFC - Warning
and if I go to its URL I get a
500 | SERVER ERROR
I deleted the .zip file from the root of the repo, as it could have been causing a conflict. That didn't solve it.
I checked the full logs but couldn't spot anything useful.

Based on the comments.
The issue was caused by a missing .env file in the deployment artifact that CodePipeline deploys. The file had not been committed to the GitHub repository.
To determine the cause, the CodePipeline artifact was inspected. The artifact can be found in CodePipeline's S3 bucket, or via the Source column of the EB Application versions page; it is an object with a random name and no extension (in the OP's case, S40pAMw). The object is just a zip file without an extension, so adding a .zip extension to the downloaded object lets you open it as a regular archive and check its contents.
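For example, a quick way to inspect the artifact from a terminal (the bucket and key below are placeholders; substitute the location your pipeline actually shows):

# Download the artifact object (names are examples; yours will differ).
aws s3 cp s3://codepipeline-us-east-1-123456789012/myapp/SourceArti/S40pAMw artifact.zip

# It is an ordinary zip archive, just without the extension.
unzip -l artifact.zip            # list the contents; check that .env is present
unzip artifact.zip -d artifact   # or extract everything for a closer look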
(Please see @JackPrice-Burns' answer for an alternative way of dealing with env variables.)
The solution was to commit the missing file to the repository. Once that was done, CodePipeline was triggered, and once the deployment finished, the health of the Elastic Beanstalk instance changed to Ok and the 500 | SERVER ERROR was gone.

This issue specifically was caused by missing environment variables.
It is BAD practice to commit .env or any files containing secrets to GitHub or any other source control system.
First of all, if that repository is public in any way, all of your secrets are now public too: database credentials, encryption keys, AWS access credentials.
Secondly, a common attack vector is the .git folder and the underlying source control repository. A malicious user who found the source control details could gain access to your secrets if your GitHub (or other) accounts were compromised.
Thirdly, if you would like to set up multiple environments for your code (for example production / develop / local), you can't easily change these environment variables per environment when they're committed directly to the repository.
In the Elastic Beanstalk console you can go to Configuration -> Software and add environment variables at the bottom of the page. These environment variables will be picked up by Laravel. Set every variable from your .env on that settings page, and do not commit the .env.
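If you prefer scripting this, the same variables can also be set from the AWS CLI. A minimal sketch, with the environment name and all values as placeholders:

# Set Laravel's variables on the EB environment (name/values are examples).
aws elasticbeanstalk update-environment \
  --environment-name my-laravel-env \
  --option-settings \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=APP_ENV,Value=production \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=APP_KEY,Value=base64:REPLACE_ME \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=DB_HOST,Value=mydb.example.com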
Another good practice to follow is not committing your vendor folder. AWS CodePipeline allows you to add a build stage: the build step takes your source code, runs composer install (generating the vendor folder), and then hands the result to Elastic Beanstalk for deployment (a sketch of such a build step follows the reasons below).
Firstly, committing the vendor folder drastically increases the size of your repository and how long it takes to clone it.
Secondly, merging code from different branches becomes difficult when you're dealing with the whole vendor folder as well, which can run to thousands of files and millions of lines of code.
Thirdly, if you would like to track how much work has actually been done per contributor, that becomes hard when someone commits a vendor folder change: they commit whole packages they didn't write themselves.
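As a rough sketch, the commands such a build step might run could be as simple as the following; this assumes a standard Laravel layout and is not a complete CodeBuild buildspec:

# Build the dependencies instead of committing them.
composer install --no-dev --optimize-autoloader --no-interaction

# Package everything, including the fresh vendor folder, into the
# artifact that gets handed to Elastic Beanstalk.
zip -r application.zip . -x '*.git*'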

Related

Laravel Application to Phar

I have a Laravel project. I'm deploying it to an Apache server by copying files over. Is there an alternative using phar, something that works like jar files?
The simplest way would be to rsync the files to the server. A slightly more complex approach that worked out great for me was using Capistrano on the target servers: the deploy script in Jenkins would upload the build file to S3, then SSH into each server and run
bundle exec cap deploy
on the desired servers.
This way the servers deploy themselves. In the case where I used it, Capistrano on the machines did the build and deploy using git archive, but you could split your build step and deploy step.
Basically, you would have a private S3 bucket to upload your builds to; they would always be named application-latest.zip.
Your Jenkins server would always build, upload and trigger the deploy.
Your application servers would have Capistrano download the zip, unzip it and 'deploy'.
Why am I mentioning Capistrano so much? Because it relieves you of a lot of deploy struggles. For example:
- it performs the steps above in a folder called application-timestamp
- this folder is kept alongside the folders from your previous deploys
- there is a symlink called release that points to your latest deployment
- once Capistrano finishes its steps, it switches the symlink to point to the freshly deployed folder
Benefits of this approach and these tech choices:
- you can choose to keep the last x deployments, which lets you roll back instantly, because a rollback is just a symlink change from your current deployment to the current-1 deployment
- that also controls how much disk space the releases take by cleaning up the oldest folders: if you chose to keep 5 deployments for rollback, the oldest folder is removed when you deploy for the sixth time
- it prevents the white screen of death that happens when nginx/apache/php reads a file that is being updated by the deploy script at that very moment
- it lets you create an image of the machine (say, on AWS), put it in an autoscaling group, and have the machine update itself on boot by downloading the latest zip from S3 without Jenkins getting in the way; you just have to set up the autoscale init script to trigger the deploy on whichever provider you chose
- you get a very clear boundary between build and deploy
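To make the flow concrete, here is a minimal shell sketch of what Capistrano automates; the paths and bucket name are made up, and the real tool does considerably more (hooks, locking, shared directories):

#!/bin/sh
# Sketch of a symlink-based deploy (paths and bucket are examples).
set -e
DEPLOY_ROOT=/var/www/application
RELEASE="$DEPLOY_ROOT/application-$(date +%Y%m%d%H%M%S)"

# Fetch and unpack the latest build from the private bucket.
aws s3 cp s3://my-private-builds/application-latest.zip /tmp/app.zip
mkdir -p "$RELEASE"
unzip -q /tmp/app.zip -d "$RELEASE"

# Atomically repoint the 'release' symlink to the fresh folder.
ln -sfn "$RELEASE" "$DEPLOY_ROOT/release"

# Keep only the last 5 releases: rollback stays instant, disk use stays bounded.
ls -dt "$DEPLOY_ROOT"/application-* | tail -n +6 | xargs -r rm -rf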
hope that helps!

NetBeans + FTP + BitBucket

I know this has been asked before, but I couldn't get the answer I needed.
Currently I'm developing a website using PHP. I was using Notepad++ before, and it all worked well because my co-worker and I kept changing different files on the FTP server.
I switched to NetBeans. All went OK: I pulled the entire website via FTP to my local computer, and every time I edited and saved a file it was uploaded to the FTP server. But there is a problem: if my colleague updates a file, it doesn't update in my local folder. So I thought: "Let's try versioning".
Created a team on bitbucket, created a repository. All went ok.
But now I'm struggling to get everything up and running in both NetBeans installations (mine and my colleague's), so that my colleague can edit a file in his NetBeans, keep saving so it lands on the FTP server, and push to Bitbucket only when he stops working on that file, so that I can pull afterwards.
Suggestions?
About setting up your work environment:
In order to set up your bitbucket repository and local clone, go read this link (official doc).
You will need to repeat the cloning part once for each PC (e.g : once on yours, once on your colleague's).
Read the account management part to see how you can tag your actions with your account, and your colleague's actions with his own account.
Start using your git workflow; when you are tired of always typing your password to upload modifications to your Bitbucket account, take the time to read the SSH keys setup part. Read carefully: you will need to execute the procedure once for you and once for your colleague.
Using your local git repository with NetBeans is pretty straightforward:
From NetBeans, run the File > New Project ... command (default: Ctrl+Shift+N),
Select PHP Application with Existing Sources and click Next >,
For the Sources Folder: line, select your local git directory,
Fill in the remaining fields (and, if you want, the last Run Configuration screen), then click Finish.
After the project is created in NetBeans, you can modify the Run Configuration part by right-clicking the project's icon, selecting the Properties menu entry, and going to the Run Configuration item.
About solving your workflow "problem":
Your current FTP workflow can lead you to blindly squash your colleague's modifications (when uploading), or have your colleague's modification blindly squash your own local modifications (when downloading). This is bad, and you will generally notice it only after the bad stuff happened - too late.
Correctly using version control allows you to be warned when this could potentially happen, and to keep an almost infinite undo stack on the modifications of the project's files. The cost, however, is that both of you will have to add several actions to your day-to-day workflow - some choices cannot be made automatically.
You may find it cumbersome in the beginning, but it really pays off, and quite quickly - we're talking big bucks here. So use it and learn.
On top of using Ctrl+S to save your modifications to disk, you and your colleague will need to integrate 3 extra commands into your daily work:
Save your work to your local repository (git add / git commit)
Download the latest modifications shared by your colleague (git pull)
Upload your work to the central repository (git push)
You can access these commands:
- from a terminal,
- from a GUI frontend: you can try TortoiseGit on Windows, or gitk on Linux,
- from NetBeans:
in the contextual menu of the files/folders in the project tree (right-click the item; there is a "Git" entry),
using the Team > Git > ... menu
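From a terminal, a typical day-to-day cycle would look like this (the branch name and commit message are just examples):

git add .                            # stage your local modifications
git commit -m "Describe the change"  # record them in your local repository
git pull origin master               # fetch and merge your colleague's work
git push origin master               # publish your work to Bitbucket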
Since you provided a git tag, I'll describe what to do for Git.
set up a remote bare repo on a server that you both could access (BitBucket in your case):
http://git-scm.com/book/en/Git-on-the-Server-Getting-Git-on-a-Server
you both clone that remote repo to your local machines:
http://git-scm.com/book/en/Git-Basics-Getting-a-Git-Repository#Cloning-an-Existing-Repository
each of you works on her part of the application. When one of you is done, publish the work to the server:
http://git-scm.com/book/en/Git-Basics-Working-with-Remotes#Pushing-to-Your-Remotes
By now, the remote server holds the version that was just pushed. What's missing is the deployment of the website. This has been discussed here:
Using GIT to deploy website
Doing so, you decouple your work from your colleague's, since you're no longer changing files over FTP all the time. You work on your part, your partner works on hers. The work gets merged, and then a new version of the website gets published.
You can create Git or Mercurial repositories on Atlassian Bitbucket (http://bitbucket.org). If your team is new to version control, I advise against using forks in your first project.
The easy solution is to use Atlassian SourceTree (http://www.sourcetreeapp.com/) to manage your code, since there is a bug in NetBeans. See NetBeans + Git on BitBucket.
You need to create a new repository on Bitbucket. I assume you have already configured the SSH keys. Using Git, you need (the repository URL is a placeholder for your own):
git init
git add . && git commit -m "Initial commit"
git remote add origin git@bitbucket.org:youruser/php_project.git
git push -u origin master
Using Mercurial, you need:
hg init
hg add && hg commit -m "Initial commit"
hg push ssh://hg@bitbucket.org/youruser/php_project
Good luck / boa sorte
Pedro

GitHub coding setup

I am new to GitHub. I managed to install GitHub for Windows and created a GitHub repository. I'm a PHP developer, and this is my current situation before GitHub.
Currently, all of my work goes to C:\xampp\Dropbox\* ("htdocs"). Everything I code is in there, with each application under its own subdirectory. Whenever I need to update the production server, I FTP into it and upload the necessary files. This is fine when I am working alone, but working with other developers would be hard because we would need to know who edited which file, when, what was edited, and so on.
Could you help explain how I can maintain my code using GitHub? I suppose I shouldn't make the entire htdocs a local repository. I access my code via http://localhost/ when testing locally. Since I develop web applications in PHP, the code changes regularly; we don't compile anything, and I was used to simply saving all the files and letting Dropbox keep all the versions I made.
It's a bit confusing what to do next, since the GitHub for Windows application created local repositories in the C:\Users\Admin\Documents\GitHub\test-app folder. Should I edit the code in htdocs and ALSO edit the code in My Documents\GitHub? Then also "push" the update to GitHub AND update our production server via FTP?
So, to summarize, from this primitive perspective of web development, what steps must change so that I can enjoy the benefits of version control systems such as GitHub?
Thank you!
The general idea is to use a versioning server to push code directly to your production server, bypassing the tedious FTP method.
You can tell the GitHub application to clone your code into the XAMPP htdocs root instead of into your documents folder, if you have already initialized your repositories.
Every project must be a GitHub (or more generally, Git) repository.
So, you have to:
git init each of your projects on your local server, at the root of the project (so not htdocs, but htdocs\<YOURPROJECT>)
create repositories on GitHub for each of your projects
Follow GitHub's instructions to initialize the projects, then git push to GitHub to finish.
You can do all of that from the command line; in my opinion, it's easier.
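As a sketch, assuming a project under htdocs and a repository you have already created on GitHub (the path and URL are placeholders):

cd C:/xampp/htdocs/myproject                                 # your existing project
git init                                                     # turn it into a local repository
git add . && git commit -m "Initial commit"
git remote add origin git@github.com:youruser/myproject.git  # placeholder URL
git push -u origin master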
Your code is on GitHub now. You won't have to edit your code into your documents AND htdocs if you initialize your repos in htdocs.
Next, it could be "fun" to install Git on your production server to grab the most recent code from the GitHub repository. Without Git, pushing code to a production server is a pain in the a**.
Once your local dev server and your production server are in sync, every time you commit and push to GitHub you can grab the latest copy with a simple git pull on your production server.

Multiple server update architecture

I'm using AWS for my application. The current configuration is:
Load balancer --> multiple EC2 instances (auto-scaled) --> all mount an NFS drive kept in sync via SVN
Every time we want to update the application, we log in to the NFS server (another EC2 instance) and run svn update in the application folder.
I wanted to know if there is a better way (architecture) to work, since I sometimes get permission changes after an SVN update, and the servers take a while to update. (I thought about mounting S3 as a drive.)
My app is PHP + the Yii framework + a MySQL DB.
Thanks for the help,
Danny
You could use a slightly more sophisticated solution:
Move all your dynamic directories (protected/runtime/, assets/, etc.) outside the SVN directory (use svn:ignore) and symlink them into your app directory (step 1). This may require a configuration change in your webserver to follow symlinks, though:
/var/www/myapp/config
/var/www/myapp/runtime
/var/www/myapp/assets
/var/www/myapp/htdocs/protected/config -> ../../config
/var/www/myapp/htdocs/protected/runtime -> ../../runtime
/var/www/myapp/htdocs/assets -> ../assets
On deployment, start with a fresh SVN copy in htdocs.new, where you create the same symlinks and can fix permissions (step 2):
/var/www/myapp/htdocs.new/protected/config -> ../../config
/var/www/myapp/htdocs.new/protected/runtime -> ../../runtime
/var/www/myapp/htdocs.new/assets -> ../assets
Then move htdocs to htdocs.old, and htdocs.new to htdocs (step 3). You may also have to HUP the webserver.
This way you can avoid the NFS mount completely, because you have time to prepare steps 1 and 2. The only challenge is to synchronize step 3 across all machines; you could, for example, use at to schedule the swap on all machines at the same time.
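Put together, a deploy run on a single instance might look like this sketch (the paths follow the layout above; the repository URL and unix user are placeholders):

#!/bin/sh
set -e
cd /var/www/myapp

# Steps 1-2: build the new tree while the old one keeps serving traffic.
svn export https://svn.example.com/myapp/trunk htdocs.new   # placeholder URL
ln -s ../../config  htdocs.new/protected/config
ln -s ../../runtime htdocs.new/protected/runtime
ln -s ../assets     htdocs.new/assets
chown -R www-data:www-data htdocs.new                       # fix permissions (example user)

# Step 3: swap the trees (this is the part to schedule with 'at' everywhere).
mv htdocs htdocs.old && mv htdocs.new htdocs
apachectl graceful   # or HUP your webserver of choice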
As a bonus you can always undo a deployment if something goes wrong.
Things may be more complex if you have to run migrations, though. If the migrations are not backwards-compatible with your app, you probably can't avoid some downtime.

2 cloud servers, one dev, one prod; what's a good deployment process?

Currently using LAMP stack for my web app. My dev and prod are in the same cloud instance. Now I am getting a new instance and would like to move the dev/test environment to the new instance, separating it from the prod environment.
It used to be a simple Phing script that would do an SVN export into the prod directory (pointed to by my vhost.conf). How do I make a good build process now that the environments are separated?
I'm thinking of transferring the SVN repository to the dev server and then doing an ssh+svn push (is this possible with Phing?).
What's the best/common practice for this type of setup?
More Info:
I'm currently using CodeIgniter for MVC framework, Phing for automated builds for localhost deployment. The web app is also supported by a few CRON scripts written in Java.
Update:
Ended up using Phing + Jenkins. Working well so far!
We use Phing for deployments similar to what you have described. We also use the Symfony framework for our projects (not that important here, although Symfony's support for the concept of different environments is a plus).
However, we still need to produce different configuration files for the database, front controllers, etc.
So we ended up with a folder of build.properties files that define the configuration for the different environments (and, in our case, also for the different clients we ship our product to). This folder is linked into the file structure using svn:externals (again, not strictly necessary).
The Phing build.xml file then accepts a property file as a command-line parameter, takes the values from it, and produces all the necessary configuration files, controllers and other environment-specific files.
We store the configuration in template files and then use Phing's copy/filter feature to replace the placeholders in the templates with the environment-specific values.
The whole task of configuring the given environment can then be as simple as something like this:
phing configure-environment -DpropertyFile=./build_properties/build.properties.prod
In your build file, you check whether the propertyFile property that specifies the properties file is defined, and load the file using <property file="./build_properties/build.properties.prod" override="true" />. Then you can do any magic with the values as you need.
You can still use your svn checkout/update and put all the generated configuration files on the svn ignore list (Phing will generate them for you). We actually use additional Phing steps that, in the end, produce a self-deploying Linux shell installation package. The package is built automatically in Jenkins; we then send it to our clients, or the support team grabs it from Jenkins and does the whole deployment just by executing it (we still prefer manual deployments to production servers), or Jenkins deploys it automatically (to test servers, for example).
I'll be happy to write more info if needed.
I recommend using Capistrano (it looks like they haven't updated the docs since they moved the site) and railsless-deploy for deployment. Eventually you are probably going to need to add more app boxes and run other tasks as part of your deployment, so choosing a framework that supports this can save you a lot of time in the future. I have used Capistrano for two PHP deployments (one small, one large) and although it's not perfect, it works well. It also handles all of the code checkout/update, moving symlinks into place, and rolling back if something goes wrong.
Once you have capistrano configured, all you have to do is something like:
cap dev deploy
cap prod deploy
Another option I have explored for this is Fabric. Although I haven't used it, if I had to deploy a complex app again I would consider it: the interface is simple and straightforward.
A third option you might take a look at, though it's still in the early stages of development, is gantry (pardon the self-promotion). It's something I have been working on out of frustration with using Capistrano to deploy a PHP application in an environment with a lot of moving pieces. Capistrano is great and works well for non-PHP application deployments, but you still have to do some poking around in the code to understand what is happening and tweak it to suit your needs. That is also why I suggest giving Fabric a good look.
I use a similar config now: LAMP + SVN + CodeIgniter + prd and dev servers.
I run the SVN repos on dev and check out the repo into the root folder of the dev domain. A post-commit hook then updates that root folder every time any developer commits.
When we are happy and have fully tested the code, I SSH into the prd server and rsync the dev root to the prd root.
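For illustration, the two moving parts could look like this; all paths and the hostname are hypothetical:

#!/bin/sh
# SVN post-commit hook (save as hooks/post-commit in the repo, make it executable).
# It refreshes the dev working copy after every commit.
/usr/bin/svn update /var/www/dev-root --non-interactive >/dev/null

and promoting tested code to production is then a single rsync, for example:

# Run from the dev box once testing is done; --exclude keeps each
# server's own config.ini untouched.
rsync -az --delete --exclude 'config.ini' /var/www/dev-root/ deploy@prd-server:/var/www/prd-root/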
Here's my solution for the different configs: outside the root folder I keep a config.ini file, which I parse in my CodeIgniter constants.php script. This means the prd and dev servers can have separate settings without those settings ever being in the repos.
If you want help with post-commit, rsync and ini code let me know.
