Version control of web server config files - PHP

A Network Architect and I manage a bunch of web servers (FreeBSD). He's responsible for all server/network related stuff (IPs, firewalls, users/groups, etc.) and I'm responsible for all web-related stuff (Apache, PHP, MySQL). Sometimes the responsibilities overlap.
It has happened a few times that changes were made to the config files which more or less affected the server, and we were not able to figure out which of us made the changes and why.
Being a web developer, I think it would be good practice to put the files under version control (we currently use Subversion), so that whenever we change anything we have to commit and comment the changes. That would solve all the problems of wondering who did what and why.
The particular config files I was thinking of were:
firewall config
apache config (with extras)
php config (php.ini)
MySQL config (my.conf)
I already know that the idea of putting server config files under version control is sound, based on another question asked here. My only worry is how to do it properly on the web server side, since the files are in different locations. Putting the whole /usr/local/etc under version control seems pointless, as it contains more than just the config files.
I was wondering whether to create a new folder, say /config, which would be under version control and would contain all the config files we need, and then replace the originals with symlinks to the ones in the /config folder. E.g.:
/usr/local/etc/apache22/httpd.conf -> /config/apache22/httpd.conf
So the question is: Is this a good idea and if not, what is a better solution?

If you use Git, then putting the whole /usr/local/etc under version control is not pointless at all:
you can track only a handful of files if you so choose
the working directory with all config files tracked is hardly any bigger in size
Just install Git, then go to /usr/local/etc and run git init. This will create a .git folder in your current location (basically making this folder a repository).
Then add the config files you want to track:
git add firewall/firewall_config.conf apache22/httpd.conf and so on,
and commit: git commit -m "Initial Configuration"
Your config files are now being tracked.
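If you'd rather be explicit about what gets tracked, one option is a whitelisting .gitignore. A minimal sketch (the file paths are just examples):
cd /usr/local/etc
git init
# ignore everything by default, then whitelist specific entries
printf '%s\n' '*' '!.gitignore' '!apache22/' '!apache22/httpd.conf' > .gitignore
git add .gitignore apache22/httpd.conf
git commit -m "Initial Configuration"
Note that a parent directory has to be un-ignored (!apache22/) before a file inside it can be whitelisted.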

Since you are versioning sensitive configuration files, I would recommend setting up an internal Git server such as GitLab. Create a Git repository for each server (or server template/image/etc.). Go to the / directory and run git init.
You can be selective about what you put under version control by only running git add /path/to/file for the files that you will be customizing.
Then git commit -m 'commit comment' and git push -u origin master.
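Put together, the flow looks something like this (a sketch; the GitLab host and repository path are made up):
cd /
git init
git remote add origin git@gitlab.example.com:ops/webserver1.git
git add /usr/local/etc/apache22/httpd.conf /usr/local/etc/php.ini
git commit -m "Initial server configuration"
git push -u origin master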

Have you looked at any of the configuration management tools made for just this sort of thing? I recommend Puppet or Chef, having used both in previous jobs.

Related

Packing and deploying Symfony web site

I'm new to Symfony, coming from the .NET world. Using the Symfony (4) documentation, I managed to create a simple web site. Now I want to put it live, but I'm struggling to find any useful information on what I should do to "pack" everything necessary and deploy it. There is a page describing deployment (How to Deploy a Symfony Application), but I find it lacking information about:
what to include/exclude (obviously I don't want to pack dev dependencies, and deploying the composer files also doesn't seem to make any sense)
what to change (there's a .env file - containing APP_ENV and APP_SECRET - where do I use those values?)
my hosting uses the folder www for public presentation; do I have to change/configure something before renaming the public directory to www?
do I have to configure .htaccess so that images/css/js are not routed through PHP?
My current project structure is:
+ bin
+ config
+ public
+ css
- index.php
+ src
+ Controller
- Kernel.php
+ templates
+ var
+ vendor
- .env
- .gitignore
- composer.json
- composer.lock
- symfony.lock
Edit (2018-07-17):
I'm using git
my hosting can deploy from a git branch called production (whenever I push to this branch, it runs composer install --no-dev)
configuring the public directory name is done in composer.json
Example of extra configuration in composer.json:
"extra": {
"symfony": {
"allow-contrib": false
},
"public-dir": "www"
}
Regarding my original question - I'm now using the hosting's git deployment capability, so I do need the composer files after all. My original thought was to build and pack the bare minimum and deploy that package to the server. (Now I still have bin, the composer files, .gitignore (and probably other odd things) deployed as well.)
Well, I will take a shot.
what to include/exclude (obviously I don't want to pack dev dependencies, and deploying the composer files also doesn't seem to make any sense)
I notice you don't have a .gitignore. If you are not using Git (you should), take a look at the default .gitignore; it will tell you which folders you don't need.
You are right, vendors are not necessary. Composer's lock file contains the exact versions you are using; just run composer install once you have copied the files to the server. The simplest way of "deployment" is to use git(hub), set up an SSH key on your server, and grant it read access to your git repo (a deploy key).
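With a deploy key in place, updating the server boils down to something like this (a sketch; the branch name matches the edit above, the path is an assumption):
cd /path/to/site
git pull origin production
composer install --no-dev --optimize-autoloader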
what to change (there's a .env file - containing APP_ENV and APP_SECRET - where do I use those values?)
The .env file only works in dev mode by default (notice the require-dev part in your composer.json).
Symfony recommends using "real" environment variables in production, but you could use the .env file. To do this you will have to move "symfony/dotenv" from require-dev to require in composer.json (and run composer update afterwards).
You will also want to set APP_ENV to prod on your server, and configure db access and the mailer too.
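For example, a production .env could look something like this (every value here is a placeholder):
APP_ENV=prod
APP_SECRET=change-me-to-a-random-string
DATABASE_URL=mysql://db_user:db_password@127.0.0.1:3306/db_name
MAILER_URL=smtp://localhost:25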
my hosting uses the folder www for public presentation; do I have to change/configure something before renaming the public directory to www?
I won't answer this fully, as that would take too long. Configure Apache to point to your public dir using a VirtualHost. More here: https://symfony.com/doc/current/setup/web_server_configuration.html
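As a rough sketch, a VirtualHost along these lines points Apache at the public directory (the server name and paths are assumptions):
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/project/public
    <Directory /var/www/project/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>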
do I have to configure .htaccess so that images/css/js are not routed through PHP?
If you have installed the apache pack, you will get a .htaccess file, and it already serves existing files directly:
# If the requested filename exists, simply serve it.
# We only want to let Apache serve files and not directories.
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule ^ - [L]
This way requested files won't hit your PHP code.
If however a file does not exist, Symfony will process the request.
I recommend ignoring common file extensions, e.g.:
# Do not allow image requests to hit symfony
RewriteCond %{REQUEST_URI} \.(gif|jpg|jpeg|png|ico|map)$ [NC]
RewriteRule .* - [L]
TL;DR
Since PHP is not a compiled language, a "building" process in terms of compiling code into an executable is not pertinent here.
Building a Stateless Container
However, if you are developing a web application and want to deploy it, your best way to go (meaning, the industry standard) is to "build" a stateless container out of it. In that container you will typically have Apache + PHP, or Nginx + PHP-FPM (or even just PHP with PHP-PM). That will be your application process, exposed on port 80: a beautiful, stateless and scalable Docker container.
In the process of building the container, the idea is that you install the PHP version of your choice (with the pertinent extensions) and the web server or process that will listen on the HTTP port. Then you pull your code into the container using git, set your environment variables, and do a composer install. The result will be a Docker image with a version of your application in it, reachable on port 80.
More on that here.
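Building and running such an image is then a couple of commands (a sketch; the image name and env values are made up):
# build the image from the project's Dockerfile, then run it on port 80
docker build -t myapp .
docker run -d -p 80:80 -e APP_ENV=prod -e APP_SECRET=change-me myapp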
Mmmm, nah, I don't do containers
Of course, this may not be the way to go for you, depending on how much your application follows the principles of a twelve factor app and where you are going to host it.
If you want to deploy to shared hosting, you will need to change the public folder to the document root of your hosting provider. Since you can't configure the PHP runtime's environment variables on shared hosting, you will need to copy your .env file and send it to the web hosting (which is a horribly insecure practice).
To set up automatic deployment, you should prefer SSH. If you don't have console access to your web hosting, then git-ftp is probably your best friend here. Using webhooks (on GitHub, for example), on a post-receive event you will deploy your code by running a script that does, among other things, a composer install (with locked dependencies) and a chown of everything to the www-data user (or whichever user the web server runs as).
Now, to the proper answers
Having said that, I'll try to answer some of your questions:
You are correct: vendor/ should stay out. Not the composer.lock, however. Basically, you have to deploy all the code living in your repository.
In your deploy script (yes, you will have one), you should have a newly created .env file added. Yes, you need all the parameters in there, but obviously with the values for the production environment.
Yes, you should rename Symfony's public folder to www. As far as I can tell, you shouldn't run into problems. That is, unless you are using the symfony server for development. But a simple php -S localhost:8000 -t www in your project will suffice for development purposes. I use it all the time.
If you are using Apache, yes, you should include a .htaccess file in your deployment.
Your Update
So I see that your hosting has git deployment capabilities. That's good. In that case, there must be a way to configure a post-receive hook in the repo located there. That post-receive hook can execute any bash command, such as a composer install. However, that requires the composer executable, and I don't think it will be included. What you can do is upload a composer.phar to your hosting root. It's Composer in a single executable.
Then, in your post-receive hook (aka your deploy script), you will run a composer install and the mentioned chown.
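A minimal sketch of such a hook (every path here is an assumption about your hosting layout):
#!/bin/sh
# post-receive: check out the pushed branch into the web root,
# then install locked dependencies and fix ownership
GIT_WORK_TREE=/var/www/site git checkout -f production
cd /var/www/site
php /path/to/composer.phar install --no-dev
chown -R www-data:www-data /var/www/site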
Unsolicited Advice
I understand that your requirements are to deploy to a web hosting. However, you are seeking the benefits of CI/CD in a way that is not the current industry standard. Web hostings are not naturally designed for deployment pipelines. The way you are doing it now could cause you some pain in the future, especially when adding some other backing services, or scaling, etc...
Now, if your application is small and simple, you can probably ignore my words.

GIT & Laravel stopping certain files

I've been running a project written in Laravel which has been fun to use.
The setup I use is the Vagrant box Homestead configuration, so I do the majority of my work on my local machine and push up to the development server once it's ready to go. During the installation I had to push up the logs & vendor folders for it to work properly, but now I'm at a stage where every commit I make via the command line includes storage/logs/laravel.log, and when I then pull down it asks me to stash/commit on the server because the files differ.
I've added it to the .gitignore file in the root directory of the project and it looks like this:
/node_modules
/public/storage
/.idea
Homestead.json
Homestead.yaml
/storage/logs/
/vendor/
Vendor doesn't cause me any problems unless I make changes to it, so it's not much of a bother; it's just the logs which will not stop going up. If I use a GUI tool, I can manually tell it not to push them, but I want to get to the point where I can use the terminal to push and not worry about logs needing stashing on the server.
I believe this is the same for the .env file, so I imagine a solution will work for both. I have also noticed that PhpStorm says they're ignored but tracked with git, if that helps.
If you take a look at the Laravel repo on GitHub, you'll find the following .gitignore file in the storage directory:
https://github.com/laravel/laravel/blob/master/storage/logs/.gitignore
This comes with the default installation to mark the logs directory as ignored. If you've deleted this by mistake, you should be able to reinstate it and resolve the issue you're having.
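That PhpStorm note ("ignored but tracked") is the key: .gitignore only affects untracked files. If the log file has already been committed, you also need to remove it from the index (the file itself stays on disk):
git rm --cached storage/logs/laravel.log
git commit -m "Stop tracking log file"
The same applies to .env if it was ever committed.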
Just as importantly though, your workflow isn't following best practice. With respect to "Vendor doesn't cause me any problems unless I make changes to it" - you should never make changes to your vendor directory. This folder is home to third-party packages and plugins; modifying them directly causes multiple issues, chief amongst them:
You can no longer update a modified package without breaking your application.
Other developers won't be able to replicate your installation easily.
In fact, the vendor directory shouldn't be versioned at all. The best way to handle the files within it is using a package manager, like Composer, to do it all for you. This means you can easily switch between different versions of your packages and, by versioning only the composer files, other developers can run composer install or composer update to synchronise their development environment to yours quickly and accurately.
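In practice that means committing only composer.json and composer.lock; anyone else then runs:
composer install    # installs the exact versions pinned in composer.lock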

Rendering code using Apache/SVN on Ubuntu

I'm not sure if I'm missing the point here...
Our devs want the following...
On a LAMP server with SVN/WebDAV, they want the root Apache directory to be a repository that they can all work on. However, setting the default Apache directory to a repo doesn't work, as the files aren't stored as html/php files but in the SVN db structure that handles changes/revisions/etc.
Is there any way to do this? Or would we have to have a separate repo and copy files to/from the web root when developing?
You have to set up a separate SVN repository and hook scripts. These hook scripts can check out the code to your Apache root on every commit. This is quite common and is also used for automatic testing etc.
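A sketch of such a post-commit hook (the repository layout and document-root path are assumptions):
#!/bin/sh
# hooks/post-commit: svn passes the repository path and the new revision
REPOS="$1"
REV="$2"
# export the committed code into the Apache document root
svn export --force -r "$REV" "file://$REPOS/trunk" /var/www/html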

Looking for a safe way to deploy PHP code

How we do things now
We have a file server (using NFS) that multiple web servers mount and use as the web root. When we deploy our codebase, we SCP an archive (tar.gz) to the NFS server and unarchive the data directly in the "web directory" of the file server.
The issue
During the deploy process we are seeing some I/O errors, mostly when a requested file cannot be read: Smarty error: unable to read resource: "header.tpl". These errors seem to go away after the deploy is finished, so we assume it's because unarchiving the data directly into the web directory isn't the safest of things. I'm guessing we need something atomic.
My Question
How can we atomically copy new files into an existing directory (the web server's root directory)?
EDIT
The files that we are uncompressing into the web directory are not the only files in that directory. We are adding files to a directory that already has files in it, so copying the directory or using a symlink is not an option (that I know of).
Here's what I do.
DocumentRoot is, for example, /var/www/sites/www.example.com/public_html/:
cd /var/www/sites/www.example.com/
svn export http://svn/path/to/tags/1.2.3 1.2.3
ln -snf 1.2.3 public_html
You could easily modify this to expand your .tar.gz before changing the symlink, instead of exporting from svn. The important part is that the change is the atomic application of the symlink.
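For example, the tarball variant could look like this (a sketch; the release name is made up):
cd /var/www/sites/www.example.com/
mkdir 1.2.4
tar -xzf /tmp/site-1.2.4.tar.gz -C 1.2.4
ln -snf 1.2.4 public_html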
I think rsync is a better choice than scp, since only the changed files are synchronized. But deploying code by hand-rolled script is not convenient for development in a team, and the errors you get during deployment are not very readable.
You could look at Capistrano, Magallanes or Deployer, but they are scripts too. I'd recommend trying walle-web, a deployment tool written in PHP with Yii2 out of the box. I have hosted it at our company for months, and it works smoothly for deploying to test, staging and production environments.
It depends on a group of shell tools (rsync, git, ln), but a web UI is generally nicer to operate. Give it a try :)
Why don't you just have two dirs with two different versions of the site? When you have finished deploying to site_2, you just switch the site dir in your web server config (for example Apache) and copy all the files to the site_1 dir. Then you can deploy to site_1 and switch back from site_2 the same way.
RSync was born to run... er... I mean to do this very thing
RSync works over local file systems and ssh - it's very robust and fast - sending/copying only changed files.
It can be configured to delete any files that have been deleted (or are simply just missing from the source), or it can be configured to leave them alone. You can set up exclude lists to exclude certain files/directories when syncing.
Here's a link to a tutorial.
Re: atomic - link to another question on SO
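A typical invocation for this use case might be (a sketch; the host and paths are assumptions):
# copy only changed files; rsync writes each file to a temporary name and
# renames it into place, so readers never see a half-written file
# (add --delete only if the target should mirror the source exactly)
rsync -az --exclude='.svn' /local/build/ user@webserver:/var/www/site/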
I like the NFS idea. We also deploy our code to an NFS server that is mounted on our frontends. In fact, we run a shell script when we want to release a new version. What we do is point a current symlink at the latest release dir, like this:
/fasmounts/website/current -> /fasmounts/website/releases/2013120301/
And apache document root is:
/fasmounts/website/current/public
(in fact the apache document root is /var/www, which is a symlink to /fasmounts/website/current/public)
The shell script updates the current symlink to the new release AFTER everything has been uploaded correctly.
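The switch itself is one atomic command (using the paths from the example above):
ln -snf /fasmounts/website/releases/2013120301 /fasmounts/website/current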

php, user-uploaded files, version control, and website deployment

I have a website that I regularly update the code to. I keep it in version control. When I want to deploy a new version of the site, I do an export and then symlink the served directory name to the directory of the deployment.
There is a place where users can upload files, and I noticed once that, after I had deployed a new version, the user files were gone! Of course, I hadn't added them to the repository, and since the served site came from an export, they weren't uploaded into a version-controlled directory anyway.
PHP doesn't have integrated svn functionality yet, so I couldn't do much programmatically with user-uploaded files. My solution was to create an additional website, files.website.com, which sits in a directory parallel to the served website and is served out of a directory that is under version control. That way the files don't get obliterated when I upgrade the website. From time to time, I manually add uploaded files to the svn project, delete user-deleted ones, and commit the new version. I'm working on a shell script to run from cron to do this, but it isn't my forte, so it's on the back burner as it's not a pressing need.
Is there a better way to do this?
I usually don't keep user-generated data/files in svn - only the code and the db schema/test data. What I usually do to deploy is an rsync from an up-to-date working copy which excludes the upload dir and the .svn dirs. IMO content should be handled by more traditional filesystem/db backup mechanisms, not version control.
EDIT:
Just to be clear, your symlinking strategy seems like good practice; you're just missing the backup part, I think. I'd probably just tar | gzip the uploaded stuff in the cron job instead of interacting with SVN, and then probably have a separate job that uses mysqldump to dump the db and gzips that as well.
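In crontab form, that backup pair might look like this (paths and db name are assumptions; note that % must be escaped in crontab entries):
# nightly tarball of user uploads, then a gzipped database dump
0 2 * * *  tar -czf /backups/uploads-$(date +\%F).tar.gz /var/www/site/uploads
30 2 * * * mysqldump sitedb | gzip > /backups/sitedb-$(date +\%F).sql.gz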
I would continue with the practice of exporting the site as upgrades are needed, but have a symbolic link to a directory outside of the version-controlled tree for the user-uploaded content. That way, when you do an export you only need to recreate the symlink if it gets blown away. You should then, of course, be backing up that user content as needed.
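Recreating that link after an export is a one-liner (both paths are assumptions):
ln -snf /srv/user-uploads /var/www/site/uploads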
Rather than manually doing the export and managing symlinks, you could make your deployment directory a subversion checkout (from a production branch). That way, deploying is as simple as checking in your updates to the production branch.
This works as long as you have sufficient control over your subversion server and hosting setup, and your subversion repository is "ready to run." In this situation, your user directory could be an empty placeholder in subversion and would be left alone by the update process that runs on commit (business as usual for svn update). I'd still recommend (as mentioned by #Flash84x and #prodigitalson) a separate process to back up the user content.
There's an Ars Technica article with a description of how to set this up.
Update: If you follow this approach, make sure that your web server does not allow access to the .svn files in the deployment checkout.
