I am developing a web app (for learning) in Laravel and I'm using Bitbucket for source control. It will be deployed on a couple of servers (20 or so, perhaps more over time), and I would like to be able to update all of them as the app changes over time.
The problem is that I will not have SSH access to most of those servers, so I won't be able to use a simple "git pull" (a server we test on does not even have git installed, so shell_exec is not an option either).
My plan was to make a script that downloads the latest zip from the Bitbucket server, unpacks it over the old code, and then runs a Laravel command to migrate (for any database changes).
Is there maybe a more sensible way of doing this?
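For illustration, a minimal sketch of that plan as shell steps (the <user>/<repo> path and the deploy directory are placeholders; on hosts without any shell access, the same steps would need to be ported to a PHP script):
```
# sketch: pull the latest archive from Bitbucket and deploy it
curl -L -o /tmp/app.zip "https://bitbucket.org/<user>/<repo>/get/master.zip"
unzip -o /tmp/app.zip -d /tmp/app   # the archive unpacks into a subfolder
cp -R /tmp/app/*/. /var/www/app/    # overwrite the old code
cd /var/www/app
php artisan migrate --force         # --force suppresses the production prompt
```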
What you are looking for is CI/CD, i.e. Continuous Integration / Continuous Delivery. There are many ways to automatically deploy or pull code onto a server. You can use the following methods:
Automating Deployment to EC2 Instance With Git
Using Bitbucket for Automated Deployments
CI/CD workflow with Bitbucket Cloud, Bamboo, AWS CodeDeploy
Bitbucket - Manage Web Hooks
Apart from these, you can find many articles on this topic, but if you want to automate the process at the Laravel level, then use Laravel Envoy.
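As a quick illustration, a minimal Envoy task file (Envoy.blade.php) might look like the sketch below; the host and path are placeholders, and the task body is plain shell:
```
@servers(['web' => 'deploy@example.com'])

@task('deploy', ['on' => 'web'])
    cd /var/www/app
    php artisan down
    git pull origin master
    composer install --no-dev
    php artisan migrate --force
    php artisan up
@endtask
```
You would then run it with envoy run deploy from your machine.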
Related
I'm new to the topic of CI & CD, therefore I have some basic questions.
But first about my current environment:
On my server I have Debian running with GitLab and one webpage. I run the update commands manually on my website, like:
php artisan down &&
git checkout . &&
git checkout master &&
git pull &&
composer install &&
php artisan migrate &&
php artisan up
In the future I would like to improve my system landscape to have multiple stages: DEV, QA (demo for future releases) and PROD. In the long term, at least PROD will get its own server. But I'm not sure about the right setup.
Now my questions:
When I have a gitlab-ci.yml, where does this script run? I believe on the server where GitLab runs, right? How can I run the commands on the client side, e.g. my external PROD server?
So far I like the concept of webhooks, but I can't have multiple webhooks for different stages, can I? Does it make sense to split the different branches and validate within the webhook script on the web host's side?
When I have a gitlab-ci.yml, where does this script run? I believe on the server where GitLab runs, right?
No, it runs on a separate server on which the GitLab CI Runner is installed. https://docs.gitlab.com/runner/
How can I run the commands on the client side, e.g. my external PROD server?
Using SSH in the gitlab-ci.yml file; for the authentication you can use secret variables, which will be usable in the build script. Don't forget to ensure you are connected to the right host.
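Roughly, the deploy job's script could run steps like these (a sketch; $SSH_PRIVATE_KEY and $PROD_HOST are hypothetical secret variables you would define in the project settings, not GitLab built-ins, and the remote path is a placeholder):
```
# assumed to run inside a gitlab-ci.yml job script
eval "$(ssh-agent -s)"
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -   # load the deploy key
ssh deploy@"$PROD_HOST" 'cd /var/www/app \
  && php artisan down && git pull && composer install --no-dev \
  && php artisan migrate --force && php artisan up'
```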
So far I like the concept of webhooks, but I can't have multiple webhooks for different stages, can I? Does it make sense to split the different branches and validate within the webhook script on the web host's side?
I wouldn't use webhooks. See the tutorials above.
In the future I would like to improve my system landscape to have multiple stages: DEV, QA (demo for future releases) and PROD. In the long term, at least PROD will get its own server. But I'm not sure about the right setup.
You should have your own server or VM for each stage. Is it possible to do all of this on a single server? Sure, but it is far from recommended. In that case you can use GitLab environments.
Welcome to Stack Overflow!
Recently I investigated the continuous-integration development process, so I installed a TeamCity server, set up the build, and tried to make it automatically deploy the build artifacts (my web app) to the web hosting server through FTP. However, I failed on the last step (deployment) because thousands of PHP files take a very long time to deploy. So, I'm wondering if there is a way to do it more quickly, perhaps using zip archives or something else.
So, my question is: is there a common way to solve such a problem?
Use git for deployment; it transfers only the changes.
Example using hooks: gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa – Łukasz Gawrylu
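The linked gist builds on a bare repository on the web host with a post-receive hook; a minimal version of the idea (paths are placeholders) could look like this:
```
#!/bin/sh
# post-receive hook in a bare repo on the web host; "git push" transfers
# only the changed objects, and the hook checks the code out into the web root
git --work-tree=/var/www/app --git-dir=/home/deploy/app.git checkout -f master
```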
For example:
I have Homestead with Laravel in a virtual machine. After I finish my project, could I just copy the files over to my WAMP server, and export the database and import it into WAMP?
Or is there more behind all this?
Yes, generally this could be one way to deploy a project to a server.
The question is a bit broad to give a good answer, because there are many ways to build a good project deploy chain. First of all, it depends on the server you are deploying to and the access rights you have (e.g. are you allowed to SSH into the server, and can you run git on it?).
If you have SSH access and you are able to run git, a good way could be to pull the git project from your git server, run composer install, and migrate and seed your database with artisan.
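Concretely, those steps might look like this on the target server (the repository URL and paths are placeholders):
```
git clone https://example.com/you/project.git /var/www/project
cd /var/www/project
composer install --no-dev
cp .env.example .env && php artisan key:generate   # then fill in DB credentials
php artisan migrate --seed --force
```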
There are even more ways, up to fully integrated deploy chains where you just need to push your project to a git server to trigger a deploy (e.g. have a look at Capistrano or Laravel Forge for automated deploys).
I have a similar question to "Pushing to an existing AWS Elastic Beanstalk application from the command line" and "Git pushes entire project even if few files are changed. AWS", but did not see the answer I am looking for.
There have been comments about the confusing changes to Amazon's documentation, because different versions of the documentation state they are the latest when some functions have actually been replaced, so I think a new question is warranted now.
I used the "Deploying a Symfony2 Application to AWS Elastic Beanstalk" guide to set up my dev app and it works great. After I make several changes and want to update the AWS app, I use git aws.push, which creates a new version of my app and restarts the server.
I do not have my configuration files finalized (this is just a dev app) and need to manually run several commands on the remote server before my app can be viewed. For very minor temporary changes, I connected to the remote server via SSH and edited the PHP files directly, which works fine; that way the server does not need to be restarted, because every time I use git aws.push the server restarts. I would like to have a method to update those files using git without restarting the entire server/app.
Main question: is there any way I can push only the files that were changed in the recent commit and not have the server restart?
Side question for the new AWS commands: should I use the eb commands ("Getting Started with EB CLI 3.x") and use eb deploy instead of the git command?
No, currently there is no scenario where Elastic Beanstalk pulls changes without reinitializing and restarting the server.
You can try to write your own workaround for that purpose, but you will need to specify the files that need to be updated manually and be sure that the files are delivered to each EB instance. If you are pushing from Windows, don't forget to convert line endings with dos2unix.
eb deploy is the canonical replacement for git aws.push.
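For reference, a sketch of the EB CLI 3.x workflow, run from the project root (the --staged flag deploys staged-but-uncommitted changes, if I recall the CLI correctly):
```
eb init              # one-time setup: choose region and application
eb deploy            # package the latest commit and deploy it
eb deploy --staged   # deploy staged (not yet committed) changes instead
```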
If you experience the "full upload" issue instead of it sending only the changes, please read my fresh answer here:
Elastic Beanstalk "git aws.push" only commited difference?
My question is basically two questions, but since they are closely related I thought it makes sense to ask them en bloc.
Case:
I am running a web application, which is distributed over multiple AWS EC2 instances behind an AWS Elastic Load Balancer.
Intended goals:
a) When deploying new app code (PHP), it should be automatically distributed to all EC2 instances.
b) When new EC2 instances are added, they should automatically "bootstrap" with the latest app code.
My thoughts so far:
ad a)
Phing (http://phing.info) is probably the answer for this part. I would probably add multiple targets, one for each EC2 instance, and running a deploy would deploy to all machines, unfortunately probably not in parallel. But that might even be beneficial when scripting it in a way where each EC2 instance is "paused" in the load balancer, upgraded, "unpaused" again, and then on to the next instance.
ad b)
Not sure how I would achieve that. In a conventional, hardware-based setup I probably would have had an "app code" volume on a network storage device; when adding a new server I'd simply attach that volume, and deploying new app code would be just one deploy operation to that volume. So I need some "central storage" from which a newly bootstrapped machine/instance downloads its app code. I thought about git, but after all, git is not a deploy tool and probably shouldn't be forced to be used as one.
I'd be happy to see your setups for such tasks and hear your hints and ideas for such a situation.
Thanks,
Joshua
This could be done using phing. However, I'm not sure why you want new instances to automatically retrieve the app code. Do you add extra instances very often? And in order for a) to deploy code to several instances, it would still need to know about them, wouldn't it?
This setup requires a master deploy server and uses a push strategy. The master server needs phing, any required phing packages, and optionally ssh keys for the EC2 instances.
Suggestion for a)
(This is just a general outline of the phing tasks required)
Retrieve the instance list (either from a config file or supplied parameters)
Export the app code from the repository to the master server (e.g. from Subversion)
Tar the app code
scp the tarball to all EC2 instances (to a deploy folder)
With a remote shell, unpack the tarball on the EC2 instances
With a remote shell, update the symbolic link on the EC2 instances so the webserver folder points at the new deploy folder
Clear any caches on the webserver
The above could be called after you have made a new release; a rough shell equivalent is sketched below.
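Here is that outline as plain shell, just to make the tasks concrete (hosts, paths, and the repository URL are placeholders; a real phing build file would express the same steps as XML targets):
```
svn export https://svn.example.com/app/trunk build/app   # export the app code
tar -czf build/app.tar.gz -C build app                   # tar it
for host in $(cat instances.txt); do                     # the instance list
  scp build/app.tar.gz deploy@"$host":/var/deploy/
  ssh deploy@"$host" 'mkdir -p /var/deploy/releases/new \
    && tar -xzf /var/deploy/app.tar.gz -C /var/deploy/releases/new \
    && ln -sfn /var/deploy/releases/new /var/www/current' # switch the symlink
done
```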
Suggestion for b)
This could be achieved by running the phing script every couple of hours, having it log in to the EC2 instances and check for the app code. If it doesn't find it, it will deploy the newest final release.
This of course requires the EC2 instances to be set up correctly in regard to webservers, config files, etc. (However, this could also be achieved via remote shell with phing.)
I've previously used similar setups, but haven't tried it with services like EC2.
a) Take a look at Capistrano, and since you're not using Ruby (and RoR), use the railsless-deploy plugin. Capistrano can deploy to multiple servers.
b) I haven't got any experience with this, but I wouldn't be surprised if there is a plugin/gem for Capistrano that can pretty much do this for you.
I believe phing is a good tool for this. Capistrano might also help.
I deploy my application in a very similar environment. So far I've been using simple bash scripts, but I'll probably be moving towards a phing-based solution, mostly because of the difficulty involved in developing shell scripts (you have to know a new syntax that's not very flexible, and not cross-platform either) and the availability of parallel processing, which is present in phing.
Phing is a great tool for managing your deployment tasks; that pretty much covers most of (a).
We use Opscode Chef for infrastructure management. It has a deploy_revision resource that supports both SVN and Git repositories for application-code deployment, and knife (the main Chef command-line tool) has an EC2 plugin that allows you, with one command, to spin up a new EC2 instance with whatever roles and environments you have defined in Chef, and deploy your application code.
For managing this with multiple EC2 instances behind an ELB, we have used the Python boto library to write very simple scripts that connect to the ELB and get a list of all instances attached to it. Then, one by one, each script removes an instance from the ELB, runs the Chef update on that instance (which deploys the new application code and any machine configuration changes), re-attaches the instance to the ELB, and moves on to the next instance.
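As an illustration of that rotation logic, here is a rough sketch using the AWS CLI instead of boto (the ELB name, SSH user, and chef-client invocation are assumptions):
```
ELB=my-elb   # placeholder load balancer name
for id in $(aws elb describe-instance-health --load-balancer-name "$ELB" \
              --query 'InstanceStates[].InstanceId' --output text); do
  aws elb deregister-instances-from-load-balancer \
      --load-balancer-name "$ELB" --instances "$id"     # take it out of rotation
  host=$(aws ec2 describe-instances --instance-ids "$id" \
           --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
  ssh deploy@"$host" 'sudo chef-client'                 # deploy code + config
  aws elb register-instances-with-load-balancer \
      --load-balancer-name "$ELB" --instances "$id"     # put it back
done
```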