A common problem I face is placing a holding page on a website while a new deployment is performed or some other maintenance is done (like testing the new deployment). I know this can be accomplished easily if you are using Apache (htaccess), but that is not always the web server in use (IIS, Nginx, etc.). All my current websites redirect every request to an index.php file in the root of the public directory (e.g. Zend Framework, WordPress, Symfony2), so my current solution is the following:
I add the following code to the root index.php file:
$maintenanceFile = 'maintenance.flag';

if (file_exists($maintenanceFile)) {
    // Each line of the flag file is an IP address allowed to bypass the holding page
    $ips = explode("\n", file_get_contents($maintenanceFile));
    foreach ($ips as $key => $value) {
        $ips[$key] = trim($value);
    }

    // Show the holding page unless the visitor has the bypass cookie
    // or comes from an allowed IP address
    if (!isset($_COOKIE['BYPASS_MAINTENANCE']) && !in_array($_SERVER['REMOTE_ADDR'], $ips)) {
        include_once dirname(__FILE__) . '/holding.html';
        exit;
    }
}
With this I can simply add a maintenance.flag file, also in the root, which contains a list of allowed IP addresses like so:
127.0.0.1
203.0.113.101
Then if your IP address exists within the list, you can see the current website and test it before going public again; otherwise you are shown the holding.html page (which also resides in the root) instead. Once finished, I can simply delete the maintenance.flag file.
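One gap worth noting: the snippet checks for a BYPASS_MAINTENANCE cookie but never sets it. A minimal sketch of how it could be set, assuming a hypothetical bypass.php helper in the root and a one-hour lifetime (neither is part of the original setup):

// bypass.php - visit this once from any machine that should skip the holding page
// (hypothetical helper; the filename and one-hour lifetime are assumptions)
setcookie('BYPASS_MAINTENANCE', '1', time() + 3600, '/');
echo 'Bypass cookie set.';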
So my question is, is there a better way of doing this?
I split my website into three copies and use three subdomains:
1) dev.website.com is for developers. You can access it if your IP is on the allowed list, much like your approach.
2) test.website.com is for testers; before I launch the website we need to test it for bugs.
3) live.website.com is the same as website.com, for everyone else.
This way I can develop without causing problems on the live domain. You can easily promote code from dev to test to live, and copying environments is straightforward.
Hope that helps!
The development and production versions of a website should always live in different directories and be accessed using different domain names, ports, or paths. And the two should never share the same DB or other resources like uploaded files, etc.
If you're using Subversion, git, or any code repository (which I hope you are), then it's easy to set up a separate "dev" environment where you can do all your testing. All my sites are set up this way. Once I implement or fix features and have tested them on my "dev" side, I commit my changes to SVN and then on my "prod" side I simply do an svn update to push everything into production.
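For illustration, that round trip is just two commands (the working-copy locations are placeholders):

# in the dev working copy, once the change is tested:
svn commit -m "Implement feature X"

# in the prod working copy:
svn update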
The problem with your $maintenanceFile technique is that every single page request to your site results in a file_exists check against your hard drive. This may not matter for smaller sites, but for larger sites with a lot of hits it could slow overall performance.
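If that overhead ever becomes measurable, the result of the check can be cached in shared memory. A sketch assuming the APCu extension is installed (the 30-second TTL is an arbitrary choice, not from the original answer):

// Re-check the disk at most once per TTL; all other requests hit shared memory
$inMaintenance = apcu_entry('maintenance_flag', function () {
    return file_exists(__DIR__ . '/maintenance.flag');
}, 30);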
I have a few PHP files which I call via AJAX. They all contain a path to my config.php. The problem is that I always have to change the path to that config file by hand when I deploy a new version to my server.
Local Path:
define('__ROOT__', $_SERVER["DOCUMENT_ROOT"].'/mywebsite');
Server Path:
define('__ROOT__', $_SERVER["DOCUMENT_ROOT"].'/../dev.my-website.tld/Modules/');
I want to track changes in all of these PHP files. I'm searching for a solution to automatically change this path.
E.g. this is my current workflow:
Local Environment:
(path version A)
make changes in the code
git add, git commit, git merge, git push to my server
Server:
git reset --hard
change path to version B
You are trying to run different code bases between development and live, which is not recommended -- they should be identical. The way I tackle this is to use an environment variable to specify which of several config files should be loaded.
In my Apache vhost I do something like this:
SetEnv ENVIRONMENT_NAME local
And then I use a function to read the environment name:
function getEnvironmentName()
{
    $envKeyName = 'ENVIRONMENT_NAME';
    $envName = isset($_SERVER[$envKeyName]) ? $_SERVER[$envKeyName] : null;
    if (!$envName)
    {
        throw new \Exception('No environment name found, cannot proceed');
    }
    return $envName;
}
That environment name can then be used to decide which config file to include, or to retrieve values from a single array keyed on environment.
I often keep environment-specific settings in a folder called configs/, but you can store them anywhere it makes sense in your app. So for example you could have this file:
// This is /configs/local.php
return array(
    'path' => '/mywebsite',
    // As many key-values as you want
);
You can then do this (assuming your front controller is one level deep in your project, e.g. in /web/index.php):
$root = dirname(__DIR__);
$config = include($root . '/configs/' . getEnvironmentName() . '.php');
You'll then have access to the appropriate per-environment settings in $config.
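Tying this back to the question, the hard-coded define could then read from the config instead; a sketch assuming the 'path' key shown above:

define('__ROOT__', $_SERVER['DOCUMENT_ROOT'] . $config['path']);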
A pure git way to achieve this would be filters. Filters are quite cool but often overlooked; think of them as git's own form of keyword expansion that you fully control.
The checked-in version of your file would, for example, look like this:
define('__ROOT__', 'MUST_BE_REPLACED_BY_SMUDGE');
Then set up two filters:
on your local machine, you'd set up a smudge filter that replaces
'MUST_BE_REPLACED_BY_SMUDGE'
with
$_SERVER["DOCUMENT_ROOT"].'/mywebsite'
on your server, you'd set up a smudge filter that replaces
'MUST_BE_REPLACED_BY_SMUDGE'
with
$_SERVER["DOCUMENT_ROOT"].'/../dev.my-website.tld/Modules/'
on both machines, the clean filter would restore the line to be
define('__ROOT__', 'MUST_BE_REPLACED_BY_SMUDGE');
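For concreteness, the local-machine half might be wired up roughly like this; the filter name rootpath, the config.php target, and the helper script names are all placeholders of mine, not part of the original answer:

# .gitattributes in the repository root: run config.php through the filter
config.php filter=rootpath

# smudge-root.sh - applied on checkout (placeholder -> local path)
#!/bin/sh
sed "s|'MUST_BE_REPLACED_BY_SMUDGE'|\$_SERVER[\"DOCUMENT_ROOT\"].'/mywebsite'|"

# clean-root.sh - applied on staging (local path -> placeholder)
#!/bin/sh
sed "s|\\\$_SERVER\[\"DOCUMENT_ROOT\"\]\.'/mywebsite'|'MUST_BE_REPLACED_BY_SMUDGE'|"

# register the scripts (they must be executable and on the PATH)
git config filter.rootpath.smudge smudge-root.sh
git config filter.rootpath.clean clean-root.sh

The server would use the same .gitattributes but register a smudge script containing its own path.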
Further information about filters can be found in this answer and in the Git Book.
I am planning to build a multi-tenant application in Laravel, with a master subdomain holding the relevant public files and a subdomain for each customer, each of whom will have their own database pointing at the 'master' files. I am planning to automate this as much as possible: you click a button, a subdomain is created, a database is created, and the relevant config files are set. All this is fine, except I'm not sure whether my practices are the best to use or whether there are any security issues with them.
In the bootstrap/start.php file I have the following:
$env = $app->detectEnvironment(array(
    $_SERVER['SERVER_NAME'] => array($_SERVER['SERVER_NAME'])
));
This essentially means that the environment for test.example.co.uk is test.example.co.uk. My install script will create a config directory test.example.co.uk in app/config and add the relevant database config there.
This all works as I expected, so I am just looking for advice: are there any vulnerabilities with this?
Just to add: users will not be able to use the installation script; it's just for the developers.
I don't think there are any security issues with your code. One thing I notice is that you are limiting yourself to just one environment. Here are my env settings:
$env = $app->detectEnvironment(function()
{
    return getenv("ENV") ?: "local";
});
Now my environment is auto-detected: on the server I provide a "hook" in the form of the getenv() call, and on my local machine it falls back to "local". Also, instead of an array I am passing a callback to detectEnvironment, for more flexibility.
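How the ENV variable gets set is up to the server; both examples below are illustrative rather than part of the original answer:

# Apache vhost running mod_php:
SetEnv ENV production

# or exported before starting the PHP process:
export ENV=production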
I was wondering if anyone had a good way of dealing with the situation of having different URL bases for local, testing and production environments.
I try and use relative addresses, but sometimes this isn't possible.
How do I stop things breaking when the homepage of my application changes from:
local:
http://localhost:8888/mysite/index.php (URL base is /mysite/)
testing:
http://mytestingsite.com/testing/mysite/index.php (URL base is /testing/mysite/)
production:
http://mysite.com/index.php (URL base is /)
Use $_SERVER['SERVER_NAME'] and a switch.
If it matches a given host, define a variable containing the URL base you want to use. Have this happen in header.php, or in some settings.php that is always included, and use this variable wherever necessary!
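A minimal sketch of that switch, using the host names from the question:

// settings.php (always included): pick the URL base by host
switch ($_SERVER['SERVER_NAME']) {
    case 'localhost':
        $baseUrl = '/mysite/';
        break;
    case 'mytestingsite.com':
        $baseUrl = '/testing/mysite/';
        break;
    default:
        $baseUrl = '/'; // production
}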
Alternatively, have a separate settings.php file in each environment with preset values, but this requires a small change in each codebase.
You should use virtual hosts in the web server so that your base URL is always /. See your web server's documentation for how to do this.
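For the local example from the question, an Apache virtual host might look roughly like this (the DocumentRoot path is a placeholder):

<VirtualHost *:8888>
    ServerName localhost
    DocumentRoot /path/to/mysite
</VirtualHost>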
Another solution is to write your site so it can live in whatever directory it is placed in. That probably means prepending all your links with a $baseurl that is calculated in your index.php. However, I strongly recommend the first solution.
I use CloudControl for hosting and I would like to set up a server (possibly with load-balancing support) to host piwik for all of my websites. The only problem is that the only writable directory CloudControl allows you to access is the one defined by $_SERVER['TMPDIR'].
Is it possible to modify piwik to use this directory for all of its file-writing needs?
Also, will I run into any issues with load balancing? For example, automatically generated reports being produced by every node behind my load balancer, since the nodes are not aware of each other?
The idea is to make this change in a way that survives updates.
This is easy to do: create a bootstrap.php inside the piwik main folder.
This is the content of said file:
<?php
// Send all of piwik's file writes to the only writable directory
define('PIWIK_USER_PATH', $_SERVER['TMPDIR']);
You can double-check this: in index.php you'll see that it checks for a bootstrap.php file in the same folder and includes it when available. This lets you make small customizations and keep them even when you update. E.g. I've run piwik from svn for the past three years or so and keep some custom changes in there.
There's far too much code for me to be able to confirm this works, but the constant PIWIK_USER_PATH seems to be used as the base root for file I/O. With that in mind, you could edit index.php around line 23, which originally reads:
if (!defined('PIWIK_USER_PATH'))
{
    define('PIWIK_USER_PATH', PIWIK_DOCUMENT_ROOT);
}
To something like:
if (!defined('PIWIK_USER_PATH'))
{
    define('PIWIK_USER_PATH', $_SERVER['TMPDIR']);
}
This might work, but then what happens when piwik tries to read a file from its original location? And since TMPDIR is a temporary directory, the approach may not be viable at all. In that case, an approach using override_function or a similar method, paired with persistent storage (your database), might also work: override the file functions with database load/save routines. Obviously this opens up another can of worms of biblical proportions, so my final recommendation is that you get another, less restrictive host.
I am trying to get into PHP app deployment with Capistrano. I have two config files that need to be "edited" depending on where I deploy. It's basic stuff like the database name and root URL (CodeIgniter). Can I make Capistrano edit specified files automatically? Let's say I want to edit the following in the file /system/config/edit.php:
$test = '';
// edit to
$test = 'Hello World';
Thanks,
Max
What I generally do in this kind of situation (even though I don't use Capistrano) is to have several config files committed to source control.
For instance:
config.php for development machines
this file is the one that's always used by the application
config.testing.php
config.staging.php
config.production.php
And when deploying the application to the server, I just have to copy the file corresponding to the current environment to config.php, as that is the one the application always uses (a concrete example follows the list below).
It means that I have to do a file copy during the build process, yes, but:
it means there is no need for any search-and-replace, which can break
it also means every config file is committed to SVN (or whatever source control software you are using)
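The deployment step itself is then a single copy, for example (run from the application root during a production deploy):

cp config.production.php config.php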
If your configuration files become too complex and duplicate lots of settings, you can think about having one "default" config file that's always included, plus sub-config files that only define what depends on the environment. With that, what I said before still stands: just include the "default" file at the beginning of each environment-specific one.
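A sketch of that layered pattern; the file names and keys follow the naming above but are otherwise hypothetical:

// config.production.php - start from the defaults, then override
$config = include dirname(__FILE__) . '/config.default.php';

$config['db_name'] = 'myapp_prod';          // hypothetical override
$config['base_url'] = 'http://mysite.com/'; // hypothetical override

return $config;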
My Unix knowledge isn't quite up to scratch, so I can't quite get the syntax perfect for what you want. However, Capistrano allows you to use the Unix command line directly by invoking :run_method within your configs.
The Capistrano code might look something like the following (note the pattern matches the single quotes used in edit.php, and the dollar signs are escaped for the shell and for Perl):
run %q{grep -R --files-with-matches -F "\$test = '';" /system/config/ | xargs perl -pi~ -e 's/\$test = \x27\x27;/\$test = \x27Hello World\x27;/'}
I would verify that the find-and-replace works as expected before running it against a live deployment, though.
If you need any more help, I'd recommend checking out the Capistrano Handbook; it should answer most of your questions.