Laravel Application to Phar - php

I have a laravel project. I'm deploying it on an Apache server by copying files to the server. Is there an alternative using phar that works like jar files?

The simplest way would be to rsync the files to the server. A slightly more complex approach that worked out great for me was using Capistrano on the target servers: the deploy script from Jenkins would upload the build file to S3, then SSH into the server and run
bundle exec cap deploy
on the desired servers.
This way the servers deploy themselves. In the case where I used it, Capistrano on the machines did both the build and the deploy using git archive, but you could split your build step and your deploy step.
Basically, you would have a private S3 bucket to upload your builds to; they would always be named application-latest.zip.
Your Jenkins server would always build, upload and trigger the deploy.
Your application servers would have Capistrano download the zip, unzip it and 'deploy' it.
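A minimal sketch of the Jenkins side of that flow, assuming a hypothetical bucket name and server host:

# on the Jenkins server, after a successful build (names are placeholders)
zip -r application-latest.zip . -x '.git/*'
aws s3 cp application-latest.zip s3://my-build-bucket/application-latest.zip
ssh deploy@app-server-1 'cd /var/www/app && bundle exec cap production deploy'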
Why am I mentioning capistrano so much? Because it relieves you from a lot of the deploy struggles, for example:
- it will do the mentioned steps in a folder named with the deploy timestamp
- this folder is kept alongside the folders containing your previous deploys
- there is a symlink (Capistrano calls it current) that points to your latest deployment
- once Capistrano finishes your steps, it changes the symlink to point to your freshly deployed folder (a sketch of the resulting layout follows this list)
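A rough sketch of what that layout and symlink swap look like on disk (timestamps and paths are illustrative, not taken from the question):

# each deploy lands in its own timestamped folder
ls /var/www/app/releases
# 20240101120000  20240102093000

# after a successful deploy, the symlink is atomically repointed
ln -sfn /var/www/app/releases/20240102093000 /var/www/app/current

# instant rollback: point the symlink back at the previous release
ln -sfn /var/www/app/releases/20240101120000 /var/www/app/current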
Benefits of this approach and these tech choices:
- you can choose to keep the last 'x' deployments, which lets you roll back instantly, because a rollback is just a symlink change from your current deployment to your current-1 deployment
- it also controls how much disk space your deployments take by cleaning up the oldest folders: if you chose to keep 5 deployments for rollback, then when you deploy for the sixth time, the oldest folder is removed
- it prevents the white screen of death that happens when nginx/apache/php can't access a file that is currently being overwritten by the deploy script
- it lets you create an image of the machine (say on AWS), put it in an autoscale group, and have each new instance update itself by downloading the latest zip from S3 without Jenkins getting in the way; you just have to set up the autoscale init script to trigger the deploy on whichever provider you chose (see the sketch after this list)
- you get a very clear boundary between build and deploy
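A sketch of what such an autoscale init (user-data) script could look like under the assumptions above (bucket name and paths are hypothetical):

#!/bin/bash
# on instance boot: pull the latest build from S3 and deploy it locally
RELEASE=/var/www/app/releases/$(date +%Y%m%d%H%M%S)
aws s3 cp s3://my-build-bucket/application-latest.zip /tmp/application-latest.zip
mkdir -p "$RELEASE"
unzip -q /tmp/application-latest.zip -d "$RELEASE"
ln -sfn "$RELEASE" /var/www/app/current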
hope that helps!

Related

Laravel Elastic Beanstalk app deployed in CodePipeline giving 500 SERVER ERROR

I've created a Laravel app locally. (This is working fine)
After that, I deployed the app to AWS Elastic Beanstalk with a .zip file. (This is working fine)
Then I created a simple pipeline using AWS CodePipeline to grab the code from a particular GitHub repo and deploy it to that specific AWS Elastic Beanstalk environment. I can see that on any push I make to that repo, CodePipeline does deploy to that Elastic Beanstalk environment.
The problem is that the instance health is now Severe, with the following Recent Event:
Environment health has transitioned from Warning to Severe. 100.0 % of the requests are failing with HTTP 5xx. ELB processes are not healthy on all instances. Application restart completed 42 seconds ago and took 7 seconds. ELB health is failing or not available for all instances. One or more TargetGroups associated with the environment are in a reduced health state: - awseb-AWSEB-CVIEEN5EVRFC - Warning
and if I go to its URL I get a
500 | SERVER ERROR
Deleted the .zip file from the root of the repo, as that could have been causing a conflict. It didn't solve it.
Checked the Full Logs but couldn't spot anything useful.
Based on the comments.
The issue was caused by a missing .env file in the deployment package/artifact that CodePipeline deploys. This happened because the file was not committed into the GitHub repository.
To determine the cause, the CodePipeline artifact was inspected. The artifact can be found in CodePipeline's S3 bucket or in the EB Application versions (in the Source column), and it is an object with a random name and no extension; in the OP's case it was S40pAMw. This object is just a zip file without an extension, so adding a .zip extension to the downloaded object made it straightforward to open the archive and inspect its contents.
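For example, inspecting the artifact from the command line could look like this (the bucket and key path are placeholders; only the object name comes from the OP):

# download the pipeline artifact (a zip stored without an extension)
aws s3 cp s3://<codepipeline-bucket>/<pipeline>/<artifact>/S40pAMw artifact.zip

# list its contents and check whether .env made it into the bundle
unzip -l artifact.zip | grep -F '.env'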
(Please see #JackPrice-Burns answer for an alternative way of dealing with env variables.)
The solution was to commit the missing file into the repository. Once that was done, the CodePipeline was triggered again, and once the deployment finished, the health of the Elastic Beanstalk instance changed to Ok and the 500 | SERVER ERROR was gone.
This issue specifically was caused by missing environment variables.
It is BAD practice to commit .env or any files containing secrets to GitHub or any other source control system.
First of all, if that repository is public in any way, all of the secrets are now also public: database credentials, encryption keys, AWS access credentials.
Secondly, a common attack vector is the .git folder, and the underlying source control repository. Potentially a malicious user (if they found the source control details) could gain access to your secrets if your GitHub (or other accounts) were compromised.
Thirdly, if you would like to set up multiple environments for your code (a production / develop / local environment, for example), you can't easily change these environment variables on a per-environment basis because they're committed directly to the repository.
In the Elastic Beanstalk console you can go to Configuration -> Software and add environment variables at the bottom of the page. These environment variables will be picked up by Laravel. Set every variable that is in the .env on that settings page and do not commit the .env.
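If you prefer the command line, the EB CLI can set the same variables; the key names below are typical Laravel examples, not the OP's actual values:

# set Laravel environment variables on the Elastic Beanstalk environment
eb setenv APP_ENV=production APP_KEY=base64:changeme DB_HOST=mydb.example.com DB_PASSWORD=secret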
Another good practice to follow is not committing your vendor folder. AWS CodePipeline allows you to add a build step which can take your source code, run composer install (to generate the vendor folder) and then send the result to Elastic Beanstalk for deployment; a sketch of those build commands follows below.
Firstly, committing the vendor folder drastically increases the size of your repository and the time it takes to clone it.
Secondly, merging code from different branches can become difficult if you're also dealing with the whole vendor folder, which can amount to thousands of files and millions of lines of code.
Thirdly, if you would like to track how much work has actually been done on the repo per contributor, it becomes difficult, because someone committing a vendor folder change commits whole packages they didn't write themselves.
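As a sketch of that build step (the exact packaging is normally declared in the build stage's own config, but the equivalent shell commands would be roughly):

# run by the build stage instead of committing vendor/ to the repo
composer install --no-dev --prefer-dist --optimize-autoloader

# package everything except VCS metadata into the deployment bundle
zip -r application.zip . -x '.git/*'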

Handling File ownership issues in a PHP apache application

Env: Linux
PHP app runs as "www-data"
PHP files in /var/www/html/app owned by "ubuntu". Source files are pulled from git repository. /var/www/html/app is the local git repository (origin: bitbucket)
Issue: our developers and DevOps would like to pull the latest sources (frequently), and would like to initiate this over the web (rather than using PuTTY and running the git pull command).
However, since the PHP app runs as "www-data", it cannot run a git pull (as the files are owned by "ubuntu").
I am not comfortable with either alternative:
Running the Apache server as "ubuntu", due to the obvious security issues.
Making the git repository files owned by "www-data", as that makes it very inconvenient for developers logging into the server and editing the files directly.
What is the best practice for handling this situation? I am sure this must be a common issue for many setups.
Right now, we have a mechanism where DevOps triggers the git pull from the web (a PHP job running as "www-data" creates a temp file), and a cron job running as "ubuntu" reads the temp-file trigger and then issues the git pull command. The time lag between the trigger and the actual git pull is a minor irritant for now. I am in the process of setting up Docker containers and need to update the repo running in multiple containers on the same host, so I wanted to use this opportunity to solve the problem in a better way, and I'm looking for advice.
We use Rocketeer and groups to deploy. Rocketeer deploys with the user set to the deployment user (ubuntu in your case) with read/write permission, and the www-data group with read/execute permission. Then, as a last step, it modifies the permissions on the web-writable folders so that PHP can write to them.
Rocketeer executes over ssh, so can be triggered from anywhere, as long as it can connect to the server (public keys help). You might be able to setup your continuous integration/automated deployment to trigger a deploy automatically when a branch is updated/tests pass.
In any case, something where the files are owned by one user that can modify them and the web group can read the files should solve the main issue.
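A sketch of that ownership scheme, using the paths from the question (the writable storage paths are Laravel-style examples and may differ for your app):

# code owned by the deploy user, readable/executable by the web server group
chown -R ubuntu:www-data /var/www/html/app
find /var/www/html/app -type d -exec chmod 750 {} \;
find /var/www/html/app -type f -exec chmod 640 {} \;

# only the web-writable folders get group write
chmod -R g+w /var/www/html/app/storage /var/www/html/app/bootstrap/cache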
If you are planning on using docker, the simplest way would be to generate a new docker image for each build that you can distribute to your hosts. The docker build process would simply pull the latest changes on creation and never update itself. If a new version needs to be deployed, a new immutable image with the latest code is created and distributed.
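A minimal sketch of that immutable-image flow (registry, image name and tag are placeholders):

# build an image containing the current code and tag it with the commit
docker build -t registry.example.com/myapp:$(git rev-parse --short HEAD) .
docker push registry.example.com/myapp:$(git rev-parse --short HEAD)

# on each host: pull the new tag and replace the running container
docker pull registry.example.com/myapp:abc1234
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 80:80 registry.example.com/myapp:abc1234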

Continuous Deployment with Jenkins & PHP

I'm sure there are answers for this all over stackoverflow but I was unable to find anything specific.
I have a PHP project which I am revisiting. It's running on a RHEL5 box, and I have SVN on the same box.
Out of curiosity I recently added Jenkins to the machine and have the Jenkins PHP template from
http://jenkins-php.org/
There was a bit of playing around with the setup but I more or less have this all running and doing Continuous Inspection builds when something is committed to SVN.
What I want to do now is have Jenkins copy my updated files across to the server when the build completes.
I am running a simple LAMP setup and would ideally only like to copy across the files that have actually changed.
Should I just use ANT & sync? Currently the files reside on the same box as the server but this may change whereby I will need to sync these files across to multiple remote boxes.
Thanks
Check these: the Copy Artifact Plugin and the job's environment variables.
Now set up 2 jobs, one on the source machine and one on the destination server (make it a slave). Use the plugin to copy the required artifacts by using environment variables.
Do you have your project (not Jenkins itself, but the one with the LAMP setup) under SVN? If yes, I'd recommend creating a standalone job in Jenkins that just does an svn up, and chaining it to your main job: you run your main job, and if the build is OK, Jenkins automatically runs the job that updates your project.
For copying to other servers take a look at Publish Over plugins
It's very easy to set up servers and rules. The downside is that you can't configure it to copy only the files that changed in the current build, which means the entire project is uploaded on every build.
If your project is too big, another solution is to use rsync as a post-build action.
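For example, an rsync post-build step could look like this (host and paths are assumptions):

# push only changed files to the web server; --delete removes files that were
# deleted from the build, --exclude keeps VCS metadata off the server
rsync -az --delete --exclude='.git' ./ deploy@webserver:/var/www/html/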

Multiple server update architecture

I'm using AWS for my application. The current configuration is:
Load balancer --> Multiple EC2 instances (auto scaled) --> all mount a NFS drive with SVN sync
Every time we want to update the application, we log in to the NFS server (another EC2 instance) and execute svn update in the application folder.
I wanted to know if there is a better way (architecture) to work, since I sometimes get permission changes after the SVN update and the servers take a while to update. (I have thought about mounting S3 as a drive.)
My app is PHP + the Yii framework + a MySQL DB.
Thanks for the help,
Danny
You could use a slightly more sophisticated solution:
Move all your dynamic directories (protected/runtime/, assets/, etc.) outside the SVN-Directory (use svn:ignore) and symlink them into your app directory. This may require some configuration change in your webserver to follow symlinks, though.
/var/www/myapp/config
/var/www/myapp/runtime
/var/www/myapp/assets
/var/www/myapp/htdocs/protected/config -> ../../config
/var/www/myapp/htdocs/protected/runtime -> ../../runtime
/var/www/myapp/htdocs/assets -> ../assets
On deployment, start with a fresh SVN copy in htdocs.new, where you create the same symlinks and can fix permissions:
/var/www/myapp/htdocs.new/protected/config -> ../../config
/var/www/myapp/htdocs.new/protected/runtime -> ../../runtime
/var/www/myapp/htdocs.new/assets -> ../assets
Move the htdocs to htdocs.old and htdocs.new to htdocs. You may also have to HUP the webserver.
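A sketch of steps 2 and 3 as a shell script, using the paths above (the SVN URL is a placeholder, and the dynamic folders are assumed to be svn:ignore'd):

# step 2: fresh checkout next to the live docroot, then recreate the symlinks
svn checkout https://svn.example.com/myapp/trunk /var/www/myapp/htdocs.new
ln -s ../../config  /var/www/myapp/htdocs.new/protected/config
ln -s ../../runtime /var/www/myapp/htdocs.new/protected/runtime
ln -s ../assets     /var/www/myapp/htdocs.new/assets
chown -R www-data /var/www/myapp/runtime /var/www/myapp/assets

# step 3: swap the docroots and reload the webserver
mv /var/www/myapp/htdocs /var/www/myapp/htdocs.old
mv /var/www/myapp/htdocs.new /var/www/myapp/htdocs
apachectl graceful    # or: nginx -s reload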
This way you can completely avoid the NFS mount, as you have time to prepare steps 1 and 2. The only challenge is to synchronize step 3 on all machines. You could, for example, use at to schedule the update on all machines at the same time.
As a bonus you can always undo a deployment if something goes wrong.
Situation might be more complex if you have to run migrations, though. If the migrations are not backwards compatible with your app, you probably can't avoid some downtime.

AWS Elastic Beanstalk for PHP uploading a ZIP of app through CLI

I'm wondering if it is possible to upload a zip of my entire application to Elastic Beanstalk through the CLI? Currently I'm using git aws.push, but the problem is that my app has vendor dependencies I need to install after deploy. If I could directly upload a zip from the CLI, I could have my Jenkins build server install all the vendors, zip up the entire app, then upload it to EBS.
You have this tagged as "Elastic Beanstalk", so I'm going to assume that you're trying to push your app to S3, not EBS. App versions are stored in a S3 bucket and the Beanstalk deployment script then downloads the .zip from the bucket to the EBS volume, extracts it in /tmp, and then copies over the code to /var/www/html. You don't actually ever upload anything directly to the EBS volume (well, I mean, you shouldn't).
If your dependencies are PHP libraries, just include them with your application source bundle. However, if your dependencies include files that are installed in a location other than /var/www/html (like Apache modules or other binaries), then no, you can't (easily) do that with Elastic Beanstalk and PHP during deployment. You'll have to SSH in, install your dependencies, "burn" a custom AMI of the instance, and then specify that custom AMI in your Beanstalk environment config.
This is somewhat of an ugly workaround because you now have the responsibility of maintaining this custom AMI, whereas using the stock Amazon-provided images means you can rely on them to periodically release new versions with security fixes, etc. Keep in mind though that the AWS product development teams move at breakneck speed. Just a couple weeks ago the Beanstalk team introduced Puppet-like configuration scripts for provisioning. This is likely exactly what you'd need, but unfortunately it only supports Java and Python environments. I expect them to release PHP support soon though, so keep an eye on it.
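For the upload itself, a hedged sketch with the AWS CLI looks like this (application, environment and bucket names are placeholders): upload the zip your Jenkins build produced to S3, register it as an application version, and point the environment at it.

# upload the build artifact produced by Jenkins
aws s3 cp build.zip s3://my-deploy-bucket/myapp/build-42.zip

# register it as a new application version and deploy it
aws elasticbeanstalk create-application-version \
    --application-name myapp --version-label build-42 \
    --source-bundle S3Bucket=my-deploy-bucket,S3Key=myapp/build-42.zip
aws elasticbeanstalk update-environment \
    --environment-name myapp-env --version-label build-42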
