I want to deploy a Laravel app from GitLab to a server with no downtime. On the server, I serve the app with php artisan serve. Currently, I'm thinking I would first copy all the files, then stop the old php artisan serve process and start a new one in the directory with the new files. However, this introduces a small downtime. Is there a way to avoid this?
If you are serving with a single server, you cannot achieve zero downtime. If avoiding downtime is crucial for your system, use two servers and load balance between them smartly. Remember, no hosting or VPS provider guarantees 100% availability, so even if deployment itself never causes downtime, your site may still go down at some other time. What I'm saying is: if the tiny moment of restarting php artisan serve matters, then scale up to more than one server.
A workaround would be to use a third-party service (like CloudFlare) that can detect when the server is down and notify users when it is back up; I personally use that.
If you really want full uptime, Docker with Kubernetes is the technology to look at.
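For illustration, here is a rough sketch of a rolling restart across two app servers behind a load balancer; the hostnames, paths, port and pid-file handling are my assumptions, not something the answer prescribes:
#!/bin/sh
# Restart one app server at a time so the load balancer always has a healthy node.
for host in app1.example.com app2.example.com; do
    ssh deploy@"$host" '
        cd /var/www/myapp || exit 1
        git pull origin master
        # stop the previous php artisan serve, if its pid was recorded
        [ -f artisan.pid ] && kill "$(cat artisan.pid)" 2>/dev/null
        nohup php artisan serve --host=0.0.0.0 --port=8000 </dev/null >/dev/null 2>&1 &
        echo $! > artisan.pid
    '
    sleep 10   # give the load balancer time to mark the node healthy again
done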
After many hours of reading documentation and messing around with Amazon Web Services, I am unable to figure out how to host a PHP page.
Currently I am using the S3 service for a basic website, but I know that this service does not support dynamic pages. I was able to use Elastic Beanstalk to get the sample PHP application running, but I really have no idea how to use it. I read up on some other services, but they don't seem to do what I want or they are just way too confusing.
So what I want to be able to do is host a website with Amazon that has dynamic PHP pages. Is this possible, and which services do you use?
For a PHP app, you really have two choices in AWS.
Elastic Beanstalk is a service that takes your code and manages the runtime environment for you - once you've set it up, it's very easy to deploy, and you don't have to worry about managing servers - AWS does pretty much everything for you. You have less control over the environment, but if your app will run in EB then this is a pretty easy path.
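To give a sense of how little there is to manage, a typical EB workflow with the awsebcli command-line tool looks roughly like this (a sketch; the app and environment names are placeholders, and the CLI has changed over the years):
cd my-php-app
eb init --platform php --region us-east-1   # one-time project setup
eb create my-php-env                        # provision the environment (EC2, load balancer, etc.)
eb deploy                                   # package and push the current code
eb open                                     # open the live URL in a browser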
EC2 is closer to conventional hosting. You need to decide how your servers are configured & deployed (what packages get installed, what version of Linux, instance size, etc.), your system architecture (do you have separate instances for cache or database, whether or not you need a load balancer, etc.) and how you manage availability and scalability (multiple zones, multiple data centers, auto-scaling rules, etc.).
Now, those are all things that you can use - you don't have to. If you're just trying to learn about PHP in AWS, you can start with a single EC2 instance, deploy your code, and get it running in a few minutes without worrying about any of the stuff in the previous paragraph. Just create an instance from the Amazon Linux AMI, install Apache & PHP, open the appropriate ports in the firewall (AKA the EC2 security group), deploy your code, and you should be up & running.
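As a sketch, on an Amazon Linux instance those steps boil down to something like this (package and service names may differ slightly between AMI versions, and the app path is a placeholder):
sudo yum install -y httpd php          # Apache + PHP
sudo service httpd start
sudo chkconfig httpd on                # start Apache on boot
sudo cp -R ~/myapp/* /var/www/html/    # deploy your code to the document root
# then allow inbound port 80 in the instance's security group from the EC2 console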
Your PHP must run on EC2 machines.
Amazon provides great tools to make life easy (Beanstalk, ECS for Docker, ...) but in the end, you own EC2 machines.
There is no such thing as a place where you can just put your PHP code without worrying about anything else ;-(
If you are having problems hosting PHP sites on AWS, you can go with a service provider like Cloudways. They provide managed AWS servers with one-click installs of PHP frameworks and CMSes.
I am currently working with a startup that is in a transitional phase.
We have a PHP web application and use continuous integration, with the standard unit and regression tests (Selenium) run via Jenkins. We have a development server which hosts newly committed code and a staging server that holds the build ready for deployment to the production server. The way we deploy to the production server is through a rudimentary script that pulls the latest SVN copy and overwrites the changes in the htdocs directory. Any SQL changes are applied via the sync feature of MySQL Workbench.
This setup works fine for a very basic environment, but we are now transitioning from single-server setups to clusters due to high traffic, and I have come up against a conundrum.
My main concern is how exactly we switch deployment from a single server to a cluster of servers. Each server will have its own htdocs and SQL database, and under the current setup I would need to execute the script on every server, which sounds like an abhorrent thing to do. I was looking into Puppet, which can be used to automate sysadmin tasks, but I am not sure whether it is a suitable approach for deploying new builds to a cluster.
My second problem is to do with the database. My assumption is that the code changes will be applied immediately, but since we will have DB master/slave replication, my concern is that the database changes will take longer to propagate and thus introduce inconsistencies during deployment. How can the code AND database be synchronised at the same time?
My third problem is related to automation of database changes. Does anyone know of a way to automate the process of updating a DB schema without having to run the synchronisation manually? At the moment I have to run the Workbench sync tool by hand, whereas I am really looking for a commit-and-forget approach: I commit, and the DB changes are automatically synchronised across the dev and QA setups.
I am running a similar scenario, but I am using a cloud provider for my production environment so that I do not need to care about DB replication, multi-server instances, etc. (I am using Pagoda Box, but AWS would also work perfectly fine).
I would recommend creating real database migrations, so you can track them via SVN or something else. That way you can also record how to roll back. I am using https://github.com/doctrine/migrations, mainly because I use Doctrine as my ORM.
If you have a migration tool, you can easily add a command in your deployment script to run those migrations after deployment.
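For example, the deployment script could end with the migration step. This is only a sketch; the path is a placeholder and the exact doctrine-migrations command name and flags vary between versions and projects:
#!/bin/sh
cd /var/www/myapp || exit 1
git pull origin master                 # or svn update, depending on your VCS
composer install --no-dev --optimize-autoloader
# apply any pending schema changes without prompting
./vendor/bin/doctrine-migrations migrations:migrate --no-interaction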
I don't think that database synchronisation is a big issue during deployment. That might depend on the actual infrastructure you're using; cloud providers like Pagoda or AWS take care of it for you.
How do I get the changes from live back into my repo? The files running on the Heroku app have changed, and if I push now they will be overwritten.
I have my PHP code running on Heroku, and it stores 'database' things in local files.
{
"id":1,
"date":"12/1/2012",
"topImg":"/img/dates/1.jpg"
.....
So these things are stored in a JSON object and then just saved over.
Don't do this!
Local files are your enemy, because Heroku is a cloud application host that runs applications on multiple anonymous load-balanced nodes.
Perhaps you're running a single dyno right now for development purposes, but if you ever want to take your site live you'll need at least two dynos (Heroku's free tier is qualitatively different from the paid tier, particularly in that a free dyno is spun down when idle, which never happens to a paid dyno). Once you have multiple dynos, using local files for anything other than caching will be totally unmanageable.
Even if you somehow stay with one dyno forever, Heroku dynos are not guaranteed to maintain their local storage -- if for instance there is a hardware failure on the machine your dyno is served from, Heroku will not hesitate to spin down your application, deleting all local storage, and spin it up again with just your application code loaded, because it does not expect your application to be using local storage for anything.
There is no one supported method for getting files off of a dyno, because, again, it's never a good idea to store local files on a dyno. However, if you really, really need to do this, you can use heroku run and run one-off commands to, for instance, open up a shell and upload the files somewhere. Again: do not do this for anything serious, because once you have multiple dynos it'll be nearly impossible to manage files on them.
Totally agree with @Andrew. Prefer to use something like a MongoDB database-as-a-service addon with Heroku: https://addons.heroku.com/catalog/mongolab or Elasticsearch, if you want to add a search function over those documents: https://addons.heroku.com/catalog/searchbox. They are well designed for storing JSON docs, and with those services you can be sure your data will be persistent no matter what happens to your dynos.
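As a rough illustration, attaching such an addon is a one-liner with the Heroku CLI (the addon, plan names and config var may have changed since this was written, and the app name is a placeholder):
heroku addons:create mongolab --app your-app-name
heroku config:get MONGOLAB_URI --app your-app-name   # connection string to use in your code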
Now, to get your Heroku local files back, I would do something like this:
run the Heroku bash with heroku run bash
run scp -P yourPort yourFile(s) userName@yourDestination:/pathToSaveLocation
log out from your Heroku instance
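Put together, such a session could look roughly like this (the app name, port, remote user and file path are placeholders):
heroku run bash --app your-app-name
# ...now inside the one-off dyno's shell:
scp -P 22 storage/data.json user@your-server.example.com:/backups/
exit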
I hope this will help you.
I've got a PHP script on an Amazon EC2 instance. I changed a couple things in it, but the output is the same when I load it in my browser. Does Amazon have some sort of caching in place? I know the East zone was down today, but my instance is running fine now. I've ruled out client-side caching already.
On our EC2 instances this sort of thing happens in two scenarios:
1) a bug in our deployment (for example, Puppet or another deploy tool did something funny), or
2) git: a branch was not pushed to the remote, but the server redeploy happened.
Unless you are using a caching system that you know of, there is no reason to think that EC2 is caching things under the hood - EC2 instances are controlled and configured quite directly.
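If you suspect scenario 2, a quick check is to compare the commit deployed on the server with the remote branch (the path and branch name here are assumptions):
cd /var/www/myapp
git fetch origin
git rev-parse HEAD             # what is actually running on the server
git rev-parse origin/master    # what you think you deployed
# if the two hashes differ, the server is running stale code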
I've been working on a cloud-based (AWS EC2) PHP web application, and I'm struggling with one issue when it comes to working with multiple servers (all under an AWS Elastic Load Balancer). On one server, when I upload the latest files, they're instantly in production across the entire application. But this isn't true when using multiple servers - you have to upload files to each of them, every time you commit a change. This could work alright if you don't update anything very often, or if you just have one or two servers. But what if you update the system multiple times in one week, across ten servers?
What I'm looking for is a way to 'commit' changes from our dev or testing server and have them 'pushed' out to all of our production servers immediately. Ideally the update would be applied to only one server at a time (even though it just takes a second or two per server), so the ELB will not send traffic to it while files are changing and no production traffic flowing through the ELB is disrupted.
What is the best way of doing this? One of my thoughts was to use SVN on the dev server, but that doesn't really 'push' to the servers. I'm looking for a process that takes just a few seconds to commit an update and then begins applying it to the servers. Also, for those of you familiar with AWS, what's the best way to update an AMI with the latest code so the auto-scaler always launches new instances with the latest version of the software?
There have to be good ways of doing this... I can't really picture sites like Facebook, Google, Apple, Amazon, Twitter, etc. going through and updating hundreds or thousands of servers manually, one by one, when they make a change.
Thanks in advance for your help. I'm hoping we can find a solution to this problem... what must be at least 100 Google searches by both myself and my business partner over the last day have proven largely unsuccessful.
Alex
We use scalr.net to manage our web servers and load balancer instances. It has worked pretty well so far. We have a server farm for each of our environments (two production farms, staging, sandbox). We have preconfigured roles for the web servers, so it's super easy to spin up new instances and scale when needed. The web servers pull code from GitHub when they boot up.
We haven't completed all the deployment changes we want to do, but basically here's how we deploy new versions into our production environment:
We use Phing to update the source code and run the deployment on each web server. We created a task that executes a git pull and runs database changes (the dbdeploy Phing task). http://www.phing.info/trac/
We wrote a shell script that executes Phing and added it to Scalr as a script. Scalr has a nice interface for managing scripts.
#!/bin/sh
cd /var/www
phing -f /var/www/build.xml -Denvironment=production deploy
Scalr has an option to execute scripts on all the instances in a specific farm, so for each release we just push to the master branch on GitHub and execute the Scalr script.
We want to create a GitHub hook that deploys automatically when we push to the master branch. Scalr has an API that can execute scripts, so it's possible.
Have a good look at KwateeSDCM. It enables you to deploy files and software to any number of servers and, if needed, to customize server-specific parameters along the way. There's a post about deploying a web application to multiple Tomcat instances, but it's language-agnostic and will work just as well for PHP, as long as you have SSH enabled on your AWS servers.