How to optimize upgrading a web application? - php

I maintain a custom PHP application (built for me) that is hosted on a web server. Sometimes I add new features or fix bugs, and after testing locally I upload the changes to the web server. It's not a critical application (it's a game), but most of the time there are some people connected.
The steps that I make to upgrade the application:
Access via FTP (FileZilla)
Upload a .htaccess file that redirects everyone (except my IP) to a mantain.html file
Check that access is denied for every IP except mine.
Backup old code
Upload new code
Go to phpMyAdmin
Backup DB
Execute scripts for the DB
Test that everything works fine (if not, revert the backups)
Remove the .htaccess file
I usually spend an average of 30 minutes doing these steps, and I'm wondering if there is any way to optimize or automate them so I spend less time. I also know that if I can automate some steps, they will be less prone to errors.

Several other answers suggest PHP-specific deployment tools, but since I'm not very familiar with PHP, I'll offer some general tips. These suggestions may be made redundant by some of the tools already suggested, though.
First off, don't upload a new .htaccess file every time; just keep two of them on your server. Perhaps call them .htaccess-permanent and .htaccess-maintenance, and create a symlink pointing at whichever one ought to be active. Once you've tested that access is properly denied a single time, you no longer have to repeat that manual testing phase for every upgrade.
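A minimal sketch of that one-time setup, assuming the DocumentRoot is /var/www/mygame/current (every path here is a placeholder to adjust):

cd /var/www/mygame
cp current/.htaccess .htaccess-permanent       # the normal rules
# (create .htaccess-maintenance next to it, containing the redirect-everyone-but-my-IP rules)
ln -sfn /var/www/mygame/.htaccess-permanent current/.htaccess
# From then on, entering or leaving maintenance mode is a single command:
ln -sfn /var/www/mygame/.htaccess-maintenance current/.htaccess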
I'd also write a shell script to do almost everything for me. My new workflow would look like this:
Upload new code to server in a directory called new/
Log in to the server via shell, and execute the upgrade script
Test the new site
Run upgrade-finalize
The end.
Now for the interesting part: the upgrade script will do this (see the sketch after this list):
It will delete the .htaccess symlink, and re-create it pointing to .htaccess-maintenance.
It will copy the current code in current/ to backup/
It will back up the DB, using the exact same commands that PHPMyAdmin uses
It will move the contents of new/ (which you just uploaded) to current/
It will execute the scripts for the DB
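A rough sketch of such an upgrade script, assuming the layout above lives in /var/www/mygame with the DocumentRoot at /var/www/mygame/current; the database name, MySQL user and migration file name are made-up placeholders:

#!/bin/sh
set -e                                             # stop on the first error
SITE=/var/www/mygame

# 1. Switch the .htaccess symlink to the maintenance rules
ln -sfn "$SITE/.htaccess-maintenance" "$SITE/current/.htaccess"

# 2. Back up the current code
rm -rf "$SITE/backup"
cp -a "$SITE/current" "$SITE/backup"

# 3. Back up the database
mysqldump -u gameuser -p gamedb > "$SITE/db-backup.sql"

# 4. Move the freshly uploaded code into place (keeping the maintenance .htaccess)
rsync -a --delete --exclude='.htaccess' "$SITE/new/" "$SITE/current/"

# 5. Run the DB scripts shipped with the new code
mysql -u gameuser -p gamedb < "$SITE/current/migrations.sql"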
And the upgrade-finalize script will simply:
Delete the .htaccess symlink, and re-create it, pointing to .htaccess-permanent once again
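In the same sketch, upgrade-finalize is a one-liner (same assumed paths as above):

#!/bin/sh
ln -sfn /var/www/mygame/.htaccess-permanent /var/www/mygame/current/.htaccess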
The only potentially tricky part here will be getting the exact commands that phpMyAdmin uses to back up your database, but it's probably a simple mysqldump command, and you can probably get that info from phpMyAdmin, some logs, or elsewhere. Sorry, I don't know enough about phpMyAdmin to help in this specific area.
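For reference, a plain mysqldump roughly equivalent to a phpMyAdmin export might look like this (database name and user are assumptions; --single-transaction only gives a consistent snapshot for InnoDB tables):

mysqldump -u gameuser -p --single-transaction gamedb > backup-$(date +%F).sql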

Look into a deployment tool like Capistrano, which allows you to automate those steps.

I usually spend an average of 30 minutes doing these steps, and I'm wondering if there is any way to optimize, automatize or doing something to spend less time.
There are many ways. For starters, steps one through eight can be done in a single shell script. You could check out Phing, a PHP build and deployment tool. Also, you might want to look into continuous integration for even more control over how and when the software is deployed.
Doing this manually is, like you say, asking for trouble.

For starters, you could upload your files into a new webroot and, when done, switch over the DocumentRoot in Apache, leaving the site available during the copy process. For any shared files (e.g. uploaded images) you could use a symlink to a common folder.
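One common variant of this, sketched below, is to point the DocumentRoot at a symlink and flip the symlink instead of editing the Apache config; the release names, paths and shared uploads/ folder are assumptions:

# Apache config (once): DocumentRoot /var/www/mygame/current
rsync -a new-release/ /var/www/mygame/releases/r42/                          # upload next to the live code
ln -s /var/www/mygame/shared/uploads /var/www/mygame/releases/r42/uploads    # shared files
ln -sfn /var/www/mygame/releases/r42 /var/www/mygame/current                 # switch over in one step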
You could probably take the backup during operation as well, if you don't care about consistency in the database. For migrations that don't "break" functionality, you could also run them and test the result on your new webroot under another hostname, provided consistency isn't a problem.
The best option is always to use multiple web servers, so that you can take one offline for testing while the other one stays operational, but you will still have consistency problems. However, I assume that is not an option, since you don't mention it.

Related

PHP creating .nfs00000 files?

I have a PHP app which works fine for me, on both a test system and a production system.
But another user of my app wrote to me that it creates a lot of .nfs00000* files on his system and slows down page loading.
My app does not create any files on the filesystem; all data is stored in MySQL, so I was really surprised by this. But that user removed my PHP app from his website and the problem disappeared.
I will be honest: I know nothing about .nfs00000* files and I was not able to find anything reasonable about them on Google. Can someone please explain what they are, why they are created, and whether I can do anything to avoid their creation?
Thanx, Honza
Maybe this can help:
Under Linux/Unix, if you remove a file that a currently running process still has open, the file isn't really removed. Once the process closes the file, the OS then removes the file handle and frees up the disk blocks. This process is complicated slightly when the file that is open and removed is on an NFS-mounted filesystem. Since the process that has the file open is running on one machine (such as a workstation in your office or lab) and the files are on the file server, there has to be some way for the two machines to communicate information about this file. The way NFS does this is with the .nfsNNNN files. If you try to remove one of these files and the file is still open, it will just reappear with a different number. So, in order to remove the file completely you must kill the process that has it open.
If you want to know which process has this file open, you can use 'lsof .nfs1234'. Note, however, that this will only work on the machine where the process that has the file open is running. So, if your process is running on one machine (e.g. bobac) and you run lsof on some other burrow machine (e.g. silo or prairiedog), you won't see anything.
(Source)
If your app is deleting or modifying files, that could be the cause of the problem.
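If you want to check this on the affected system, a quick example (the file name and PID below are made up):

lsof /var/www/html/.nfs000000000189806400000001    # shows which process holds the deleted file open
kill 12345                                          # then stop or restart that process (PID from the lsof output)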

Sync phpMyAdmin DBs Across Desktops

So I just set up my XAMPP Apache server to load all the documents I create on my Google Drive. For example, if I type 127.0.0.1, it will show me all my web files on my Google Drive. I set this up so I can develop across my laptop, which I use at school, and my desktop, which I use at home, without having to copy files back and forth between computers. This works the way I want it to, but I forgot one thing: how am I supposed to sync the databases that I create? My question to you is: how can I sync my databases to the cloud or somewhere else so I don't have to export and import every time I switch devices?
Also, I would like to stay away from using hosting, as I won't be online all the time.
The database server (the application itself) expects exclusive access to the data files. If you try to synchronize a data file between two systems, you're going to have issues and probably data loss.
What you could do is synchronize the data directory and make sure you're only running one server at a time. So when you're done working on the laptop, shut down the MySQL server process/service (mysqld), wait for it to finish synchronizing, and then start up the mysqld on the desktop. I suspect this will work, but it's a pretty non-standard usage so anything could happen.
To make it easier, I'd definitely consider writing a wrapper script/batch file that first tests for the presence of a lock file, then (if none exists) creates one and starts mysqld, and on exit makes sure mysqld is stopped before deleting the lock file.
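A very rough sketch of such a wrapper for the Linux side, assuming the synced data directory lives under a Google Drive folder (the paths, and the use of mysqld_safe/mysqladmin, are assumptions):

#!/bin/sh
DATADIR="$HOME/GoogleDrive/mysql-data"
LOCK="$DATADIR/.in-use-by"

if [ -e "$LOCK" ]; then
    echo "Data directory already claimed by: $(cat "$LOCK")" >&2
    exit 1
fi

hostname > "$LOCK"                          # claim the data directory
mysqld_safe --datadir="$DATADIR" &

printf 'Press Enter when you are done working to shut MySQL down... '
read -r _
mysqladmin -u root -p shutdown              # stop the server cleanly
rm -f "$LOCK"                               # release the claim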
Anyway, to make this happen you would first stop mysqld everywhere, take the one mysql data directory that you wish to use, copy it to your Google Drive, then edit all of your MySQL configuration files to point to the new data directory instead of the old one. Whether XAMPP makes this more difficult than it should be, I'm not sure, but with stock MySQL it should be pretty trivial.
Remember that just because it's possible doesn't make it a good idea, and likewise that just because it's not a good idea doesn't mean it won't work. So I'm saying it's not a good idea to do this, but if done with proper attention it will "probably" work.
Hope that helps.

How to upload changes automatically?

I'm developing a simple website (on a VPS) using PHP.
My question is: how can I send the changes I make using a secure protocol?
At the moment I use scp, but I need software that checks whether a file has changed and, if so, sends it to the server.
Any advice?
Use inotify+rsync or lsyncd (http://code.google.com/p/lsyncd/).
Check out this blog post for the inotify method: http://www.cyberciti.biz/faq/linux-inotify-examples-to-replicate-directories/
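A bare-bones version of the inotify+rsync idea (requires inotify-tools; the local path, user and host are assumptions):

#!/bin/sh
while inotifywait -r -e modify,create,delete,move /home/me/mysite; do
    rsync -az --delete /home/me/mysite/ user@myvps:/var/www/mysite/   # push changes over SSH
done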
If you have complete access to your VPS, you can mount your remote home directory locally as a normal filesystem, via SSH or WebDAV. Afterwards you can edit on your computer locally, and any changes are automatically uploaded.
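For example, with sshfs (user, host and paths are assumptions):

mkdir -p ~/mnt/mysite
sshfs user@myvps:/var/www/mysite ~/mnt/mysite   # mount the remote webroot over SSH
# ...edit the files in ~/mnt/mysite as if they were local...
fusermount -u ~/mnt/mysite                      # unmount when done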
Have a look at this relevant article
FTP(S): Set up an FTP server on the target, together with an SSL cert (a personal cert is good enough).
SSH: You can pass files around through SSH itself, but I found it rather unreliable (IMHO).
Shares: You could share a folder with your local machine (as other people suggested). This would be the perfect solution if it weren't so hard to diagnose disconnection issues. I've been through this and it is far from a pleasant experience.
Dropbox: Set up the Dropbox daemon on your VPS and the Dropbox client on your local PC. Using Dropbox (or similar services like SkyDrive or iCloud) also has some added advantages, e.g. versioning, recovery and conflict detection.
All in all, I think the FTPS route is the most reliable and popular one I've used. SSH caused several issues after about two days of use. The folder-sharing method also caused me several issues, mostly because it requires you to be somewhat knowledgeable about the protocol.
The Dropbox route, although a bit time-consuming to set up, is still a good solution. However, I would only use it for larger clients, not the average pet hobby website :)
Again, this is just from experience.
You could also use Subversion to do this, although the setup may take a little while.
You basically create a Subversion repository and commit all of your code. Then on your web server, make the root directory a checkout of that repository.
Create a script that runs an svn update command, and set it as a post-commit hook. Essentially, the script is executed every time you commit from your workstation.
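A minimal sketch of such a hook (hooks/post-commit inside the repository on the server); the checkout path is an assumption, and hooks run with an empty environment, so use absolute paths:

#!/bin/sh
REPOS="$1"   # path to the repository (passed by Subversion)
REV="$2"     # the revision that was just committed

/usr/bin/svn update --non-interactive /var/www/mysite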
So in short, you would...
Update your code locally
Commit via Subversion (use TortoiseSVN, it's easy and simple to use)
The commit automatically triggers the post-commit hook on the server, which runs svn update to synchronize the code in your root directory.
You would of course have to hide all of the .svn folders from the public using .htaccess files (provided you're using Apache).
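One way to do that is to append a mod_alias rule to the site's top-level .htaccess (the path is an assumption):

cat >> /var/www/mysite/.htaccess <<'EOF'
# Return 404 for anything inside a .svn directory
RedirectMatch 404 /\.svn(/|$)
EOF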
I know this would work, as I have done it in the past.
The beauty of this option is that you get two in one: source versioning and easy updates. If you make a mistake, you can also easily revert it.

Process for updating a live website

What is the best process for updating a live website?
I see that a lot of websites (e.g. StackOverflow) have warnings that there will be downtime for maintenance in advance. How is that usually coded in? Do they have a config value which determines whether to display such a message in the website header?
Also, what do you do if your localhost differs from the production server, and you need to make sure that everything works the same after you transfer? In my case, I set up development.mydomain.com (.htaccess authentication required), which has its own database and is basically my final staging area before uploading everything to the live production site. Is this a good approach to staging?
Lastly, is a simple SFTP upload the way to go? I've read a bit about some more complex methods, like using server-side hooks in Git, but I'm not sure how that works exactly or whether it's the approach I should be taking.
Thanks very much for the enlightenment..
babonk
This is (approximately) how it's done on Google App Engine:
Each time you deploy an application, it is associated with a subdomain according to its version:
version-1-0.example.com
version-1-1.example.com
while example.com is associated with one of the versions.
When you have a new version of the server-side software, you deploy it to version-2-0.example.com, and when you are ready to put it live, you associate example.com with it.
I don't know the details, because Google App Engine does that for me; I just set the current version.
Also, when SO or another big site has downtime, it is more likely to be a hardware issue than a software one.
That will really depend on your website and the platform/technology behind it. For a simple website, you just update the files with FTP, or, if the server is locally accessible, you just copy the new files over. If your website is hosted by some cloud service, then you have to follow whatever steps they offer, because a cloud-based hosting service usually won't let you access the files directly. For a complicated website with a backend DB, it is not uncommon that whenever you update code, you have to update the database as well. In order to make sure both are updated at the same time, you will have to take your website down. To minimize the downtime, you will probably want a well-tested update script to do the actual work. That way you can take down the site, run the script, and fire it up again.
With PHP (and Apache, I assume), it's a lot easier than some other setups (no processes to restart, for example). Ideally, you'd have a system that knows to transfer just the files that have changed (e.g. rsync).
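For example, an rsync-over-SSH upload that only sends changed files (user, host and paths are assumptions):

rsync -avz --delete --exclude='.git' ./public_html/ deploy@example.com:/var/www/mysite/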
I use Springloops (http://www.springloops.com/v2/) to host my git repository and automatically deploy over [S/]FTP. Unless you have thousands of files, the deploy feels almost instantaneous.
If you really wanted to, you could have an .htaccess file (or equivalent) to redirect to a "under maintenance" page for the duration of the deploy. Unless you're averaging at least a few requests per second (or it's otherwise mission critical), you may not even need this step (don't prematurely optimize!).
If it were me, I'd have an .htaccess file that holds the redirection instructions, and set it to only redirect during your maintenance hours. When you don't have an upcoming deploy, rename the file to ".htaccess.bak" or something. Then, in your PHP script:
<?php if (file_exists('/path/to/.htaccess')) : ?>
<h1 class="maintenance">Our site will be down for maintenance...</h1>
<?php endif; ?>
Then, to get REALLY fancy, set up a Springloops pre-deploy hook to make sure your maintenance redirect is in place, and a post-deploy hook to change it back on success.
Just some thoughts.
-Landon

PHP application to replicate websites from a single code source

I'm attempting to build an application in PHP to help me configure new websites.
New sites will always be based on a specific "codebase", containing all necessary web files.
I want my PHP script to copy those web files from one domain's webspace to another domain's webspace.
When I click a button, an empty webspace is populated with files from another domain.
Both domains are on the same Linux/Apache server.
But I'm running into permission/ownership issues when copying across domains.
As an experiment, I tried using shell and exec commands in PHP to perform actions as "root".
(I know this can open major security holes, so it's not my ideal method.)
But I still had similar permission issues and couldn't get that method to work either.
Maybe a CGI script is a better idea, but I'm not sure how to approach it.
Any advice is appreciated.
Or, if you know of a better resource for this type of information, please point me toward it.
I'm sure this sort of "website setup" application has been built before.
Thanks!
I'm also doing something like this. The only difference is that I'm not making copies of the core files; the system has one core and only specific files are copied.
If you want to copy files, then you have to take the following into consideration:
An easy (but less secure) way is to use the same user for all websites.
Otherwise (in case you want to provide separate access), you must create a different owner for each website, and you must set the owner/group on the copied files (this has to be done by root).
For the new website setup:
Either the main domain will run as root, and then it will be able to execute the new-website creation, or, if you don't want your main domain to run as root, you can do the following:
Create a cron job (or a PHP script that runs in a loop under the CLI) that is executed by root. It will check some database record every 2 minutes, for example, and from your main domain you can add a record with the setup info for the new hosted website (or just execute some script that gains root access and does it without cron).
The script that does this can be written in PHP, or in any language you wish; it doesn't really matter, as long as it runs with the correct access.
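A very rough sketch of that root-run job (the control database, table, columns, codebase path and user-per-site scheme are all assumptions):

#!/bin/sh
# Called from root's crontab, e.g.: */2 * * * * /usr/local/bin/create-pending-sites.sh
mysql -N -e 'SELECT id, domain FROM pending_sites WHERE created = 0' control_db |
while read -r id domain; do
    useradd --create-home --home-dir "/var/www/$domain" "$domain"    # one owner per site
    cp -a /var/www/codebase/. "/var/www/$domain/"                    # copy the shared codebase
    chown -R "$domain:$domain" "/var/www/$domain"
    mysql -e "UPDATE pending_sites SET created = 1 WHERE id = $id" control_db
done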
In my case I'm using the same user, since they are all my websites. The disadvantage is that the OS won't enforce restrictions; my PHP code will (I'm losing the advantage of user/group permissions between different websites).
Note that open_basedir can cause you some hassle; make sure you include the correct paths (or disable it).
Also, there are some minor differences between FastCGI and suPHP (I believe they won't cause you too much trouble).
