I am working on a Moodle-based project, which I inherited from someone else. Having copied the files into the htdocs folder and started MAMP, the files still don't show in the browser. Instead, the browser automatically initiates a download. I might be required to change the config file; however, since I do not have much experience with PHP and SQL, I am not sure what exactly to change. My part of the project is to develop the HTML and CSS, but I need to be able to run the site locally first.
What do I need to do to get the files to run locally? The URL I use is localhost:8888/whatever/whatever/index.php
In case someone else runs into the same problem - Apache downloads PHP files instead of executing them - here is what helped me.
The .htaccess file may need changes if the application has changed servers.
Delete config.php (or at least rename it if you don't want to remove it) and run the application through the browser. It should start the install automatically.
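For example, from the terminal (the project folder name is a placeholder; /Applications/MAMP/htdocs is the default MAMP location):

    # rename rather than delete, so you keep a backup
    cd /Applications/MAMP/htdocs/yourproject
    mv config.php config.php.bak

Then open localhost:8888/yourproject/ in the browser and the installer should take over.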
To run PHP and SQL I used MAMP.
Let's say you have a big web app with heavy traffic, but you don't want your app to crash, and you don't want people to see the PHP or MySQL errors that happen while you are replacing files over FTP. How do you avoid that? How do you keep executing the old version of a file until the replacement is done?
Thanks
You can follow at least one of these two approaches:
Use an accelerator (like APC) with mtime checking turned off (apc.stat = 0 in php.ini), so that until you clear the cache manually, the old versions are served from memory.
Use a virtual host symlinked to the directory holding your project. Say you store your project at /home/project/www, and /home/project/public_html is your real webroot, symlinked to www. Create /home/project/www2, check the files out there, set up and test whatever you want, and after this just switch the symlink.
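A minimal sketch of that flow, assuming an svn checkout (the repository URL is a placeholder; paths follow the example above):

    # check the new version out next to the live one
    svn checkout http://svn.example.com/project/trunk /home/project/www2

    # ...configure and test /home/project/www2...

    # repoint the webroot symlink at the new version
    # (-n replaces the existing symlink instead of descending into it)
    ln -sfn /home/project/www2 /home/project/public_html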
I use git to upload my changes to a staging website on the same server; after testing, I then push it to the production website. None of the files are changed until they are all received. On the plus side, it only sends the changes, compressed, so I don't even have to send an entire file.
The staging area isn't required; I use it because I work with a lot of servers, and sometimes a server's specific configuration causes problems (mostly I just find that an extension isn't installed).
I'm sure you can do the same with another version control system. You need to be careful, though: the tutorial I linked specifically stores the git information OUTSIDE the document root. Otherwise, someone can just clone all the source code for your website.
If you like SVN, the .svn folder being in every directory can be a little annoying. Make sure that people can't download what they shouldn't be able to.
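The usual shape of such a setup (a sketch under my own assumptions, not necessarily the linked tutorial's exact recipe; paths and the branch name are placeholders) is a bare repo outside the document root plus a post-receive hook:

    # on the server: bare repo, safely outside the webroot
    git init --bare /home/user/site.git

    # /home/user/site.git/hooks/post-receive (chmod +x it):
    #!/bin/sh
    GIT_WORK_TREE=/var/www/site git checkout -f master

    # on your workstation: deploying is just a push
    git remote add production user@server:/home/user/site.git
    git push production master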
Deploy your app into a temporary directory. Then, after you're done, just rename the original app directory to app.old and the directory where you deployed your files to app.
Note that this should work fine in Unix environments. It will also only work if all of the above directories are on the same file system. In a rare case, users might see a 404 error if they happen to access the app after you renamed the original app to app.old and before you renamed the temp dir to the original app directory.
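In shell terms, the whole deployment is just (paths are placeholders):

    # unpack/copy the new release next to the live app, same filesystem
    cp -a /tmp/new-release /var/www/app.tmp

    # two quick renames; only between them can a request 404
    mv /var/www/app /var/www/app.old
    mv /var/www/app.tmp /var/www/app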
I have a website that I regularly update the code to. I keep it in version control. When I want to deploy a new version of the site, I do an export and then symlink the served directory name to the directory of the deployment.
There is a place where users can upload files, and I noticed once that, after I had deployed a new version, the user files were gone! Of course, I hadn't added them to the repository, and since the served site came from an export, they weren't uploaded into a version-controlled directory anyway.
PHP doesn't yet have integrated svn functionality, so I couldn't do much programmatically with user-uploaded files. My solution was to create an additional website, files.website.com, which sits in a directory parallel to the served website and is served out of a directory that is under version control. That way the files don't get obliterated when I do an upgrade to the website. From time to time, I manually add uploaded files to the svn project, delete user-deleted ones, and commit the new version. I'm working on a shell script to run from cron to do this, but it isn't my forte, so it's on the backburner as it's not a pressing need.
Is there a better way to do this?
I usually don't keep user-generated data/files in svn, only the code and db schema/test data. What I usually do to deploy is an rsync from an up-to-date working copy, excluding the upload dir and the .svn dirs. IMO content should be handled by more traditional filesystem/db backup mechanisms, not version control.
EDIT:
Just to be clear, your symlinking strategy seems like a good practice; you're just missing the backup part, I think. I'd probably just tar | gzip the uploaded stuff in the cron job instead of interacting with SVN, and then have a separate job that uses mysqldump to dump the db and gzips that as well.
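Something like these two cron jobs, say (paths and the db name are placeholders):

    # nightly: archive the user uploads
    tar -czf /backups/uploads-$(date +%F).tar.gz /var/www/site/uploads

    # nightly: dump the database and gzip it
    # (credentials in ~/.my.cnf so cron doesn't prompt for a password)
    mysqldump db_name | gzip > /backups/db-$(date +%F).sql.gz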
I would continue with the practice of exporting the site as upgrades are needed, but have a symbolic link to a directory outside the version-controlled directory holding the user-uploaded content. This way, when you do an export, you only need to recreate the symlink if it gets blown away. You should then, of course, be backing up that user content as needed.
Rather than manually doing the export and managing symlinks, you could make your deployment directory a subversion checkout (from a production branch). That way, deploying is as simple as checking in your updates to the production branch.
This works as long as you have sufficient control of your subversion server and hosting setup, and your subversion repository is "ready to run." In this situation, your user directory could be an empty placeholder in subversion and would be left alone by the update process that runs on commit (business as usual for svn update). I'd still recommend (as mentioned by @Flash84x and @prodigitalson) a separate process to back up the user content.
There's an Ars Technica article with a description of how to set this up.
Update: If you follow this approach, make sure that your web server does not allow access to the .svn files in the deployment checkout.
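A minimal post-commit hook for this (the repository and checkout paths are placeholders):

    # hooks/post-commit inside the repository (chmod +x it)
    #!/bin/sh
    # refresh the deployment checkout on every commit;
    # --non-interactive keeps svn from hanging on prompts
    /usr/bin/svn update --non-interactive /var/www/site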
I've been trying to come up with a way to build an automatic update system for a CMS that I'm building, which will potentially be installed on numerous servers (likely with different configurations). What I've come up with is to keep the current version uploaded to my server in a predetermined directory, and have the distributed systems check that directory (on the remote server) every so often to see whether a new version has been uploaded. If the version number of the uploaded version is greater than the one a particular system has, it prompts the admin to update. The files are then copied via FTP into a tmp directory, copied from tmp to replace the older versions of each file, the tmp directory is deleted, and the system's version number is increased.
The problem with this is that I haven't found a way to transfer whole directories via PHP's FTP functions. I know I can zip it and transfer it that way, but I haven't found a reliable way to unzip files in various server environments.
I know I can write my own method that just goes through each directory and transfers each file one at a time, which is fine. I just wanted to know whether someone else had already written something to this effect, or whether someone knew of an alternative solution to this problem, before I dive into it.
Thanks for your help!
Instead of sending a directory, why not zip the folder to save bandwidth and time, then unzip it on the destination servers? (Deleted, since it was not an option for the OP.)
You could use PHAR archives for your app deployment (if you want to be bleeding-edge modern).
Or you could write an update script for Phing that gets the files from your server. You could even do checkouts from SVN then, instead of placing the build in a directory.
Try using exec() to run lftp from the command line:
    /usr/bin/lftp -e 'o ftp://ftp.example.com/path/to/remote/directory && mirror --verbose && quit'
I've always just FTPed files down from sites, edited them, and put them back up when creating sites, but I feel it's worth learning to do things properly.
I've just committed everything to an SVN repo, and have tried sshing into the server and checking out a tagged build, as well as updating that build using svn switch.
All good, but it's a lot slower than my current process.
What's the best way to set something like this up? Most of my time is just bug fixes or small changes rather than large rewrites, so I'm frequently updating things.
You don't necessarily need to use SVN to deploy the files to the server. Keep using FTP for that and just use SVN for revision history.
You should look at installing rsync to upload changes to your server.
Rsync is great because it compares your local copy of the repo to the copy that's currently on the server and then sends only the files that have changed.
This saves you from having to remember every file you changed and select them manually in FTP, or from having to upload your whole local copy to the server again (leaving FTP to do the comparisons).
Rsync also lets you exclude files/folders (e.g. the .svn/ folders) when syncing between your servers.
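A typical invocation from the root of your working copy (the host and remote path are placeholders):

    # -a keep permissions/times, -v verbose, -z compress in transit,
    # --exclude keeps the .svn folders off the server
    rsync -avz --exclude='.svn/' ./ user@example.com:/var/www/site/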
I'd recommend you keep using Subversion to track all changes, even bug fixes. When you wish to deploy to your production server, you should use SSH and call svn update. This process can be automated using Capistrano, meaning that you can sit at your local box and call cap deploy -- Capistrano will SSH into your server and perform the Subversion update. Saves a lot of tedious manual labor.
For quick updates I just run svn update from the server.
Sometimes for really really quick updates I edit the files using vim and commit them from the server.
It's not very proper, but quick and quite reliable.
If you want to do this properly, you should definitely look into setting up a local SVN repository. I would also highly recommend setting up a continuous integration (CI) server such as CruiseControl, which would automatically run any tests against your PHP code whenever you check in to svn. Your CI server could also be used to publish your files via FTP to your host at the click of a button, once it has passed the tests.
Although this sounds like a lot of work, it really isn't and the benefits of a smooth deployment process will more than pay for itself in the long run.
For my projects, I usually have a repo. On my laptop is a working copy, and the live website is a working copy. I make my changes on the local copy, using my local webserver. When everything is tested and ready to go, I commit the changes, then I ssh into the remote server and svn update.
I also keep a folder in this repository which contains SQL files of any changes I've made to the database structure, labelled according to their revision number. For instance, when I commit revision 74 and it has a couple of extra columns in one of the tables, included in the commit will be dbupdates/rev74.sql. That way, after I do my svn update, all I have to do is run my SQL file (mysql db_name -p -u username < dbupdates/rev74.sql) and I'm good to go.
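So deploying, say, revision 74 boils down to something like (db name, username, and paths follow the example above):

    ssh user@server
    cd /var/www/site
    svn update                      # pulls in dbupdates/rev74.sql too
    mysql db_name -p -u username < dbupdates/rev74.sql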
If you want to get real funky with it, you could use a build script to get the current version from SVN, then compile your PHP code, then on a successful build, automatically push the changes to your server.
This will help in debugging and may make your code run faster. Also, getting into the build habit has really improved my coding over just pushing the PHP straight to the server and debugging via Firefox.
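A sketch of such a build script, reading "compile" as a php -l lint pass (the svn URL, paths, and host are placeholders):

    #!/bin/sh
    set -e                          # stop on the first failure

    # clean export: no .svn folders in the build
    rm -rf /tmp/build
    svn export http://svn.example.com/project/trunk /tmp/build

    # "build" step: lint every PHP file
    find /tmp/build -name '*.php' -print0 | xargs -0 -n1 php -l

    # only reached if linting passed: push the build to the server
    rsync -az --delete /tmp/build/ user@server:/var/www/site/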
The benefits of source control reveal themselves as the complexity of the project and number of developers increase. If you are working directly on a remote server, and are only making quick patches most of the time, source control might not be worth the effort to you.
Preferably, you should be working from a local working copy of the repository (meaning you should also set up a local server). Working against a remote server using SVN as the only means to update it would slow you down quite considerably.
Having said that, working with SVN (or any other source control) will yield many benefits in the long run - you have a complete history of changes, you can always be sure the server is up-to-date (if you ran update) and if you add more developers to the project you can avoid costly source overwrites from each other.
What I do at work is use FTP to upload changes to a test server. Then, when I am finished with the section of the site that I was working on, I commit the changes and update both. Sometimes, if I am working on something and I change a lot of files in different directories, I commit it and update the test server, but I don't update the production server. I am the only programmer here, though; I wouldn't recommend committing possibly buggy code if there is more than one programmer.
I use Zend Studio for Eclipse (currently version 6.1), and I use SVN to keep my source code available. Initially I thought the process was somewhat slow due to the commit process (entering a commit comment and waiting for it to finish).
However, after learning the Ctrl+Alt+C shortcut to commit and checking 'Always run in Background', the process doesn't feel slow at all.
Plus, I run everything locally, and only SSH in every once in a while.
I set up a post-commit hook to automatically update my web. It's fast, but you can make mistakes.
IF on a *nix server AND you have the appropriate SSH access AND you have space to keep multiple copies of the website, THEN the single most useful versioning technique I have found is to use a symbolic link to point to the "current" version of the website. (You can still use SVN to version source code -- this is a way to easily/instantly switch between versions of the website on the server.)
Set up the webserver to point to /whatever.com as the root of the website.
Have a folder like /website/r1v00 to which you FTP the website files, then create a symlink called "whatever.com" that points to /website/r1v00.
When you have an updated version of the website, create another folder called /website/r1v01, FTP all the files for the updated site, then change the "whatever.com" symlink to point to /website/r1v01. If there are any problems with the new site, you can back it out instantly by simply pointing the "whatever.com" symlink back to /website/r1v00.
Of course, you can/should set up scripts to automate creating and switching the symlink. In my case, I have an "admin" page written in PHP that lists all the available versions and allows me to switch to any of them. This technique has saved my bacon several times!
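The switch script itself can be tiny; pairing ln with mv -T (GNU coreutils) makes the swap atomic, so no request ever sees a missing symlink (the script name is hypothetical; paths follow the example above):

    #!/bin/sh
    # usage: ./switch.sh r1v01
    NEW="/website/$1"
    ln -s "$NEW" /whatever.com.tmp
    mv -T /whatever.com.tmp /whatever.com   # atomic rename over the old link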
Obviously this does not address any issues with versioning database schemas or database content.