I'm new to web development and curious how people handle this.
I am writing some PHP code that uses a MySQL DB, and the password is currently hardcoded in the code. This code can be checked out by all devs, so everyone has access to the password. That seems very, very wrong to me, and on top of it I can think of some complications. I'll list the issues in bullet point form:
A password hard-coded in the source is wrong. I don't want all devs to have access to it, since all of them can check out the code.
How to differentiate between production and development servers/credentials? I have the same file containing both prod and dev DB credentials. What is the best way to handle this?
I want to guard against lazy/drunk moments so that devs don't delete/drop tables, etc. I could obviously give different devs different access rights. Is that the solution to all of this?
Potential solution: don't keep the password in code. Ask devs to add the password themselves and make sure it's never checked in.
Problem with that solution: deployment becomes tedious. I'd have to add the password manually for every production/QA deployment and verify the app can connect to the DB every time before deploying. That sounds painful and error-prone. What do people usually do?
Also on the same note (kind of linked to the above question)
If you have 4 devs in the team how do you set up the dev environment? Do all of them use the same DB? If not how do you create the tables and populate the tables with test data? Do you have to write code to populate the test data?
Thanks a lot for any input.
Put the password in a separate PHP file, containing all your app settings, and include it at the top of the page. This file can then be kept out of Version Control, and replaced for each deployment.
Make sure you keep the config.php file (or whatever you choose to name it) out of your web root as well, so that it can't accidentally be served to users of your app. As a further precaution, give it a .php extension, so that if it somehow does get served it is parsed by PHP first and any useful information is (hopefully) removed; a common practice is to name it with a .conf.php or .inc.php extension for this reason.
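For example, the settings file and its usage might look something like this; every name, path, and value below is a placeholder:

<?php
// config.inc.php -- kept out of version control and out of the web root.
return array(
    'db_host' => 'localhost',
    'db_name' => 'myapp',
    'db_user' => 'myapp_user',
    'db_pass' => 'changeme',
);

<?php
// At the top of each page (or in a shared bootstrap file):
$config = require '/home/myapp/config.inc.php';
$db = new mysqli($config['db_host'], $config['db_user'],
                 $config['db_pass'], $config['db_name']);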
As for the Dev Environment, we use a single database shared by all the devs. It was originally created from live client data, cloned into our database, with certain information redacted / replaced for privacy reasons. The same database is used in our development build as well as our localhost builds.
In the situation you describe, you could write a deployment script that fills in the password at the correct spot in the source code automatically. That way your production passwords reside only in your production environment's deployment scripts, and developers can add their own passwords to their local environments manually.
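For instance, such a script might substitute a placeholder in a config template; everything below (file names, the token, the environment variable) is hypothetical:

<?php
// deploy.php -- runs only on the production machine.
// The real password lives in an environment variable there, not in source control.
$template = file_get_contents('config.template.php');
$filled   = str_replace('{{DB_PASSWORD}}', getenv('PROD_DB_PASSWORD'), $template);
file_put_contents('/var/www/app/config.php', $filled);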
Also, you could have a configuration file with all these settings and have your app load them from it, or even a separate PHP file as someone else suggested. Either way, the configuration file should not be in source control; each developer can maintain their own, and the correct one lives in production.
This is often solved by having both a development and a production version of a config file. The development version contains connection information for the development database (server name, database name, username, password). This file can be viewed and edited by all developers.
The production version contains connection information for the production server, and is unreadable by untrusted developers. When code is deployed to the production site, do not deploy the development version of the configuration file. The production server's version will then stay intact.
You can also consider removing the configuration file from version control altogether. Under this scheme, each developer maintains their own version or fetches a development version from a standard location.
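A minimal sketch of that fallback, with illustrative file names:

<?php
// Prefer a developer's uncommitted local config; otherwise use the
// machine-wide one installed at a standard location.
if (file_exists(__DIR__ . '/config.local.php')) {
    $config = require __DIR__ . '/config.local.php';
} else {
    $config = require '/etc/myapp/config.php';
}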
Related
I want to remove the CMS of my SilverStripe web system and host it separately on a different server. What is the best way to do this?
As with any web application, back up the source code, assets and database, then restore them on the target server. There is a utility to aid you with this called sspak. I think you still need to handle .env files manually, as these contain the most sensitive information (i.e. database passwords).
Other considerations: if you have crons installed, make sure they are migrated, and be sure to watch for anything whitelisted/authorised by the existing server's IP.
What is the best process for updating a live website?
I see that a lot of websites (e.g. StackOverflow) have warnings that there will be downtime for maintenance in advance. How is that usually coded in? Do they have a config value which determines whether to display such a message in the website header?
Also, what do you do if your localhost differs from the production server, and you need to make sure that everything works the same after you transfer? In my case, I set up development.mydomain.com (.htaccess authentication required), which has its own database and is basically my final staging area before uploading everything to the live production site. Is this a good approach to staging?
Lastly, is a simple SFTP upload the way to go? I've read a bit about more complex methods like server-side hooks in Git, but I'm not sure how that works exactly or whether it's the approach I should be taking.
Thanks very much for the enlightenment..
babonk
This is (approximately) how it's done on Google App Engine:
Each time you deploy an application, it is associated with a subdomain according to its version:
version-1-0.example.com
version-1-1.example.com
while example.com is associated with one of the versions.
When you have a new version of the server-side software, you deploy it to version-2-0.example.com, and when you're sure you want it live, you associate example.com with it.
I don't know the details, because Google App Engine does that for me, I just set the current version.
Also, when SO or another big site has downtime, it's more likely to be a hardware issue than a software one.
That will really depend on your website and its platform/technology. For a simple website, you just update the files with FTP, or, if the server is locally accessible, you just copy your new files over. If your website is hosted by a cloud service, you have to follow whatever steps they offer, because a cloud-based hosting service usually won't let you access the files directly. For a complicated website with a backend DB, it is not uncommon that whenever you update code you have to update your database as well. In order to make sure both are updated at the same time, you will have to take your website down. To minimize the downtime, you will probably want a well-tested update script to do the actual work. That way you can take down the site, run the script, and fire it up again.
With PHP (and Apache, I assume), it's a lot easier than some other setups (having to restart processes, for example). Ideally, you'd have a system that knows to transfer just the files that have changed (e.g. rsync).
I use Springloops (http://www.springloops.com/v2/) to host my git repository and automatically deploy over [S/]FTP. Unless you have thousands of files, the deploy feels almost instantaneous.
If you really wanted to, you could have an .htaccess file (or equivalent) to redirect to a "under maintenance" page for the duration of the deploy. Unless you're averaging at least a few requests per second (or it's otherwise mission critical), you may not even need this step (don't prematurely optimize!).
If it were me, I'd have an .htaccess file that holds the redirection instructions, and set it to redirect only during your maintenance window. When you don't have an upcoming deploy, rename the file to ".htaccess.bak" or something. Then, in your PHP script:
<?php if (file_exists('/path/to/.htaccess')) : ?>
<h1 class="maintenance">Our site will be down for maintenance...</h1>
<?php endif; ?>
Then, to get REALLY fancy, set up a Springloops pre-deploy hook to make sure your maintenance redirect is in place, and a post-deploy hook to change it back on success.
Just some thoughts.
-Landon
I have been asked to fix a hacked site that was built using osCommerce on a production server.
The site has always existed on the remote host. There is no offline clean version. Let's forget how stupid this is for a moment and deal with what it is.
It has been hacked multiple times and another person fixed it by removing the web shell files/upload scripts.
It keeps getting hacked, frequently.
What can I do?
Because you cannot trust anything on the web host (it might have had a rootkit installed), the safest approach is to rebuild a new web server from scratch; don't forget to update all the external-facing software before bringing it online. Do all the updating on the happy side of a draconian firewall.
When you rebuild the system, be sure to pay special attention to proper configuration. If the web content is owned by a different Unix user than the web server's userid and the permissions on the files are set to forbid writing, then the web server cannot modify the program files.
Configure your web server's Unix user account so it has write access to only its log files and database sockets, if they are in the filesystem. A hacked web server could still serve hacked pages to clients, but a restart would 'undo' the 'live hack'. Of course, your database contents could be sent to the Yakuza or corrupted by people who think your data should include pictures of unicorns. The Principle of Least Privilege will be a good guideline -- what, exactly, does your web server need to access in order to do its job? Grant only that.
Also consider deploying a mandatory access control system such as AppArmor, SELinux, TOMOYO, or SMACK. Any of these systems, properly configured, can control the scope of what can be damaged or leaked when a system is hacked. (I've worked on AppArmor for ten years, and I'm confident most system administrators can learn how to deploy a workable security policy on their systems in a day or two of study. No tool is applicable to all situations, so be sure to read about all of your choices.)
The second time around, be sure to keep your configuration managed through tools such as Puppet or Chef, or at the very least keep it in a revision control system.
Update
Something else, a little unrelated to coming back online, but potentially educational all the same: save the hard drive from the compromised system, so you can mount it and inspect its contents from another system. Maybe there's something that can be learned by doing forensics on the compromised data: you might find that the compromise happened months earlier and had been stealing passwords or ssh keys. You might find a rootkit or further exploit tools. You might find information to show the source of the attack -- perhaps the admin of that site doesn't yet realize they've been hacked.
Be careful when inspecting hacked data -- that .jpg you don't recognize might very well be the exploit that cracked the system in the first place, and viewing it on a 'known good' system might crack it, too. Do the work with a hard drive you can format when you're done. (Virtualized or with a mandatory access control system might be sufficient to confine "passive" data-based hacks, but there's nothing quite like throwaway systems for peace of mind.)
Obtain a fresh copy of the osCommerce version the site was built with, and diff the fresh copy against the hacked site. Also check for files that exist on the server but not in the osCommerce package.
By manually comparing the differences, you can track down all possible places the hack may have created or modified scripts.
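If a plain recursive diff isn't convenient on the host, here is a rough PHP sketch of the same idea; the paths are hypothetical:

<?php
// compare.php -- flag live files that differ from a clean osCommerce copy,
// or that exist only on the live site.
function hashTree($dir) {
    $hashes = array();
    $iter = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS));
    foreach ($iter as $file) {
        $rel = substr($file->getPathname(), strlen($dir) + 1);
        $hashes[$rel] = md5_file($file->getPathname());
    }
    return $hashes;
}

$clean = hashTree('/tmp/oscommerce-clean/catalog');
$live  = hashTree('/var/www/catalog');
foreach ($live as $path => $hash) {
    if (!isset($clean[$path])) {
        echo "EXTRA:    $path\n";   // not part of osCommerce -- inspect it
    } elseif ($clean[$path] !== $hash) {
        echo "MODIFIED: $path\n";   // differs from the clean copy
    }
}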
I know this is a little late in the day to be offering this solution, but the official fix from osCommerce development is here:
http://library.oscommerce.com/confluence/display/OSCOM23/(A)+(SEC)+Administration+Tool+Log-In+Update
Once those code changes are applied, most of the actual work is in cleaning up the website. The admin login bypass exploit is likely what allowed attackers to upload files via the file manager (usually) into writable directories, often the images directory.
There are other files that are often writable too, which can have malicious code appended to them. cookie_usage.php and includes/languages/english/cookie_usage.php are the usual files affected; however, on some server configurations, all site files can be susceptible.
Even though the official osCommerce fix is linked above, I would also suggest making one more change: on the page above, scroll down until you see the link that says "Update PHP_SELF Value", and make those changes as well.
This will correct the way $PHP_SELF reports and prevent attackers from using malformed URLs in attempts to bypass the admin login.
I also suggest you add .htaccess basic authentication to the admin directory.
Also check out an add-on I authored called osC_Sec, an all-in-one security fix which, while it works on most PHP-backed web systems, is specifically designed to deal with the issues that exist in older versions of osCommerce.
http://addons.oscommerce.com/info/8283
I'm using CodeIgniter, if that makes it easier. If a website is live, with a populated database and users accessing it, and I have a new idea to implement, how should I do it? Do you work directly on the live site?
Or do you copy the database and the files to a local server (MAMP/WAMP/XAMPP) and work on it there, then, if it works, update the live site with the changes? For this second method, is there any way to check which files have been changed and upload only those? What if it works on the local server, but after updating the live site it does not work?
CodeIgniter's configuration also has the option of a default database and other databases. I wonder how these can be used for testing?
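From what I can see, the config file defines connection groups something like this (CodeIgniter 2.x style; all the values below are made up):

// application/config/database.php
$active_group = 'default';

$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'dev_user';
$db['default']['password'] = 'dev_pass';
$db['default']['database'] = 'myapp_dev';

$db['testing']['hostname'] = 'localhost';
$db['testing']['username'] = 'test_user';
$db['testing']['password'] = 'test_pass';
$db['testing']['database'] = 'myapp_test';

// and a non-default group can be loaded explicitly:
$test_db = $this->load->database('testing', TRUE);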
Don't work directly on the live site. Instead, have a development environment (using, say, VMware or VirtualBox on your machine) and clone the live environment. Get your code in version control (I'll say it again: GET YOUR CODE IN VERSION CONTROL), and do your development on the development machine against a dev branch. After you're done testing and are happy with the changes, commit them to a 'deployments' or 'live' branch, and deploy to the live site from there. Be sure to back up the database before you roll out the new code.
Edit: use symlinks to stage your new code base on the live site. If it doesn't work, just switch the symlink back to the old directory. Saves you a lot of grief!
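A minimal sketch of that switch in PHP (paths hypothetical; a plain ln -s works just as well):

<?php
// release.php -- repoint the live docroot symlink at a new release.
// /var/www/current is the docroot; releases sit side by side.
$new = '/var/www/releases/r42';
symlink($new, '/var/www/current.tmp');               // build the new link first
rename('/var/www/current.tmp', '/var/www/current');  // swap it into place
// To roll back, point the symlink at the previous release directory.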
Read up on version control (svn, git, et al.).
Never work on a live site. Preferably use another server (to prevent while(1){..} crashes and the like), but at the very least use another document root/domain on the same server, preferably with access limited to your IP only.
Normally I only copy the table definitions (mysqldump -d, i.e. --no-data, is nice for that) and use another database altogether. If you need the latest and greatest data, you can replicate your main database to a test database, which also gives you the advantage of a cheap backup if you haven't got one already.
I usually set a switch in the Apache/vhost configuration (SetEnv DEV 1), so that in code I can use if (getenv('DEV') == 1) to decide whether it's safe to just dump variables on error conditions. This limits the chance of accidentally committing/uploading code with a 'development switch' still turned on.
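Concretely, that might look like this (the variable name is just the convention described above):

<?php
// Only the development vhost contains:  SetEnv DEV 1
if (getenv('DEV') == 1) {
    error_reporting(E_ALL);
    ini_set('display_errors', '1');  // safe to dump variables in dev only
}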
The typical answer to this question is going to be do your work in the test environment, not the production environment. And I agree that that is often the best way to handle changes. If you have the luxury of a test environment, then take full advantage of it. After all, that's what it's there for--to test.
However, that doesn't mean that working in the production environment is completely off-limits. Your decision should be based on a few factors:
Is the operation of your website critical to your business needs?
If so, do all your work in a test environment and deploy it to your live environment when you've fully tested your changes.
Are the changes you're about to make going to have a large impact on the rest of the website?
For example, are you about to change the database schema? Are you about to change the way users log in or out of your website? If so, do your work in the test environment. If you're changing the behavior of a page that doesn't affect anything else, you could get away with making the change in the production environment.
How long will your changes take to implement?
If you can't guarantee that your changes won't take longer than 15-20 minutes, do your work in a test environment.
I would like to log errors/informational and warning messages from within my web application to a log. I was initially thinking of logging all of these onto a text file.
However, my PHP web app would need write access to the log files, and the folder housing the log file may also need write access if log rotation is desired, which my web app currently does not have. The alternative is to log the messages to the MySQL database, since my web app already uses MySQL for all its data storage needs.
This got me thinking that the MySQL option is much better than the file option, since I already have a configuration file with the database access information protected by file system permissions. If I go with the log file option, I need to tinker with the file and folder permissions, and that will only make my application less secure and defeats the whole purpose of logging.
Updated:
The other benefit I see with the DB option is persistent DB connections, which remove the need to re-open the connection for each page and which are not possible with file logging. With file logging I would have to open the log file, write to it, and close it again for every page.
Is this correct? I am using XAMPP for development and am a newbie to LAMP. Please let me know your recommendations for logging. Thanks.
Update:
I am leaning towards logging to a text file with log4php, in a separate folder on my web server, and giving my Apache account write access to that folder.
Logging to a file can be a security hazard. For instance, consider an LFI (local file inclusion) exploit. If an attacker can influence your log files and inject PHP code like <?php eval($_GET[e]);?>, he could execute that code using an LFI attack. Here is an example:
Vulnerable code:
include("/var/www/includes/".$_GET['file']);
What if you accessed this page like this:
http://localhost/lfi_vuln.php?file=../logs/file.log&e=phpinfo();
In general I would store this error information in the database when possible. Note that in order to pull off this attack, the attacker needs the < and > characters, which htmlspecialchars() will neutralize. But even if you protect yourself against LFI attacks, you should take a "defense in depth" approach; perhaps code you didn't write is vulnerable, such as a library you are using.
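As a defense-in-depth sketch for the vulnerable include above, a whitelist avoids passing user input to include at all (all names illustrative):

<?php
// Map request keys to known files; never include raw user input.
$allowed = array('home' => 'home.php', 'about' => 'about.php');
$key = isset($_GET['file']) ? $_GET['file'] : 'home';
if (!isset($allowed[$key])) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
include '/var/www/includes/' . $allowed[$key];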
(P.S. XAMPP is really bad from a security perspective: there is no auto-update, and the project maintainers are very slow to release fixes for very serious vulnerabilities.)
What if your DB is not accessible, where will you log that?
Logs are usually written to text files. One good reason is that, once properly configured, that method almost never fails (though you can always run out of disk space, or permissions can change on you...).
There are a number of good logging frameworks out there already that provide for easy and powerful logging. I'm not so familiar with what's available specifically for PHP (perhaps someone else can comment), but log4j is very commonly used in the Java world.
As well as ensuring correct permissions, it's a good idea to store your log files outside of the web root; i.e., if your web root is /accounts/iama/public_html, store the logs in /accounts/iama/logs.
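A minimal logger along those lines (paths illustrative):

<?php
// Append to a log file that lives outside the web root.
define('APP_LOG', '/accounts/iama/logs/app.log');

function app_log($level, $message) {
    $line = sprintf("[%s] %s: %s\n", date('c'), strtoupper($level), $message);
    file_put_contents(APP_LOG, $line, FILE_APPEND | LOCK_EX);
}

app_log('info', 'user logged in');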
Log files, in my experience, are always best stored in plain text format. This way they are always readable in any situation (i.e. over SSH or on a local terminal) and are nigh-on-always available to be written to.
The second issue is security: read up on setting file permissions under a Linux system, give the directory the minimum permissions PHP needs to write to it, and make sure whoever needs read access gets it. You could even have filesystem-level encryption going on.
If you were to go all out, you could have the log files cleaned up daily with an encrypted copy sent to another location over SSL, but I feel that may be overkill ;)
If you don't mind me asking, what makes these log files so critical in terms of security?
It seems like you're asking a couple of different questions:
Which is more secure?:
Logging to a DB is not more secure than logging to a file and vice versa.
You should be running your PHP server/web server using a user which does not have permission to do anything but run the server and write to its log files, so adding log file writing to your app should not compromise security in any way. Have a look at http://www.linux.com/archive/feature/113744 for more info.
Which is better?:
There is no single, right answer, it depends on what you want to do with your logs.
What do you want to do with the log files? Do you want to pipe them into another app? If so, putting them in a DB might be the way to go. Do you want to archive them? Well, it might be better to toss them into a file.
Notes:
If you use a logging framework like Log4PHP, http://logging.apache.org/log4php/index.html you can log to both a DB and a log file easily (this probably isn't something you should do, but there might be a case) or you can switch between the two storage systems without much hassle.
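For instance, a simple file appender in log4php looks roughly like this (paths are illustrative; swap in log4php's database appender class to log to MySQL instead):

<?php
require_once 'log4php/Logger.php';

Logger::configure(array(
    'rootLogger' => array('appenders' => array('default')),
    'appenders' => array(
        'default' => array(
            'class'  => 'LoggerAppenderFile',
            'layout' => array('class' => 'LoggerLayoutSimple'),
            'params' => array('file' => '/var/log/myapp/app.log', 'append' => true),
        ),
    ),
));

$log = Logger::getLogger('myapp');
$log->info('application started');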
Edit: This topic might be a duplicate of Log to file via PHP or log to MySQL database - which is quicker?