I was thinking about getting a Network Attached Storage (NAS) device so that I can work on dev sites from both my desktop and my laptop without duplicating files, and so that I always have the most current file (just in case I forget to save). My question is: if I put sites on there that use PHP, would I be able to run the sites off the NAS as I would with MAMP / WAMP? Or would I still need something else to make that work?
The point of a NAS is to share files over a network. This is usually done via Windows File & Print Sharing (aka Samba aka SMB) which is supported on most platforms.
Some NAS devices might allow you to run a web server (particularly if you can install custom firmware), but it is a poor choice of platform to run anything remotely complex in terms of web server stacks.
You can certainly store your development files on a NAS, and then access them from webservers running in both your development environments.
… but that said, I'd look at using version control software (Git would be my preference), keeping your repository on the NAS and getting into the habit of saving, committing and pushing. It makes things more manageable in the long run. (You could also use a service like Bitbucket or Github and dispense with the local NAS entirely).
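A minimal sketch of that repository-on-the-NAS workflow, assuming the share is mounted somewhere like /mnt/nas (the paths below default to temp directories purely so the sketch can run anywhere):

```shell
#!/bin/sh
# Sketch of using the NAS as a central git remote. NAS_SHARE stands in for
# the mounted NAS path (e.g. /mnt/nas); project names are examples.
set -eu
NAS_SHARE=${NAS_SHARE:-$(mktemp -d)}
WORK=${WORK:-$(mktemp -d)}

# on the desktop: put the existing site under version control
mkdir -p "$WORK/mysite"
cd "$WORK/mysite"
git init
echo "<?php phpinfo();" > index.php
git add -A
git -c user.name=dev -c user.email=dev@example.com commit -m "Initial import"

# one-time: a bare copy on the NAS share becomes the central remote
git clone --bare . "$NAS_SHARE/mysite.git"
git remote add origin "$NAS_SHARE/mysite.git"

# day-to-day habit: commit locally, then push to the NAS
git push origin HEAD

# on the laptop: clone from the same share and work as usual
git clone "$NAS_SHARE/mysite.git" "$WORK/laptop-copy"
```

Each machine then runs its own local stack (MAMP/WAMP) against its own clone, and the NAS only ever holds the repository.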
You could also go a step further and run a server with CI software on it that monitors your repository and automatically pulls updates from it, runs your automated tests, and then updates a local test server.
I am assuming you use Windows (it's easier to do on a Mac, I think). With WAMP, what you can do is mount a network drive as, say, W:\. Then create a virtual host that points to a folder on the W:\ drive.
With a Mac, all you need to do is mount the remote folder share into your MAMP directory and everything should work as you want.
Though personally I think this is a terrible idea and would much rather suggest you use a VCS (version control system) to share code between multiple places. Many of them are designed with exactly this problem in mind, and they give you a useful history of your code at the same time. If you want to do some research, look at Git (currently the most popular); Bitbucket has free private repositories. You can read about what a VCS can do here: https://en.wikipedia.org/wiki/Version_control
I work a lot with the WindowsAzure4E(clipse) IDE, and it's always a pain to wait for the local test deployment.
Isn't there a way to develop on the deployed PHP files, which must be stored somewhere under inetpub or somewhere similar?
Thanks for your ideas.
Yes! In fact, I just got this working myself yesterday.
After installing PHP 5.3 with CGI support for IIS (making the necessary php.ini modifications of course), I simply created a new site in IIS that mapped to a role in the workspace for my Eclipse project.
Keep in mind that there's one hiccup to this: the php_azure.dll file, used to access the service configuration and mount Azure drives, was built to run in the Azure fabric (either development or hosted). In my case, I don't NEED these features, so I removed references to things like getconfig and, poof, the project loads in IIS just fine. I only need to make sure I start Azure Storage prior to launching the application.
I've been told that some folks are able to update their system's PATH environment variable with the location of the Azure diagnostics DLL (diagnostics.dll) and have it work without this modification, but this route didn't work for me. :(
I'll actually be blogging on this more this weekend as it took me a week of evenings to get things sorted out.
I found out that after the deployment the project files are copied to the folder ServiceDefinition.csx.
When you now edit the source code in this place, you can see the changes directly, without another deployment.
So we are pushing to create good processes in our office. I work in a web shop that has been doing web sites for over a decade, and we don't use version control. I know! It's bad, and not my fault; I'm the guy with a software engineering background pushing for this as a minimum.
The tech lead has been looking into it. We all use Mac workstations and mostly use Coda for editing since it is a great IDE. It has SVN support built in but expects it to work on local files. We're trying to explore mounting the web directory as a local network drive with an SFTP tool.
We are a LAMP shop, BTW.
I am wondering what the model is here. I think typically we would check out the whole site to our local machines, where we have Apache running, and then test it there? This isn't how we work yet; we do everything on the server. We've looked at checking things in and out, but some files are owned by Apache, and the ownership changes when I check them in, because I'm not Apache.
I just want to know a way to do this that works given my circumstances. Would be nice to not have to run apache locally.
You might want to check out the Coda mailing list and ask there. There are lots of Coda enthusiasts with specific experience.
If you don't want to have to run locally, you could make Apache on your server run a copy of the site for every developer, on a different port per person, and then mount those web roots on the local Macs and make them the working directories. If you're a small shop, that's not hard to manage. I find it pretty easy to set up, and it saves a lot of resources on the local machines. One site per person also helps avoid conflicts when multiple people work on the same files at the same time.
What I'd additionally recommend is a script that gets the latest changes from SVN and deploys the entire site to the production server when you're ready. That script could change permissions on the appropriate files/folders so they're owned by Apache as needed. The idea, once you're using source control, is to never manually edit the production files: you should have something that deploys from SVN for you.
A few notes:
Take a look at MacFuse / MacFusion (the latter is the application, the former is the library behind it) to mount remote directories via SSH / FTP as local ones.
Allow your developers to check out into their local environment (with their own LAMP stack if they're savvy), or look into a shared dev environment with individual jails. This way your developers can run their own LAMP stack (which you could deploy for them on the machine) without interfering with others.
The idea being: let them use whatever workflow works best for them, to minimize the pain of adapting to this change (in case change management turns out to be an issue!)
Just as an example, we have a shared dev server where jails are created with a single command for new developers. They have a full LAMP stack ready to go, and we can upgrade and re-deploy jails easily to keep software up to date. Developers have individual control to add custom settings / extensions if they need them for work, while the sysadmins have the ability to reset everything when someone accidentally breaks their environment :)
Those who prefer not to use jails, and are able to, manage their own local environments (typically through Macports or MAMP).
I have installed a XAMPP (Apache + MySQL + PHP) stack on a portable volume (a TrueCrypt volume on a USB drive) that I use on different Linux machines for web app development.
This volume is formatted as ext3, so I run into permissions problems, as I can't be the same user on each machine. For example, I tried to check the git status of a project started on the other machine, and access was denied.
Last time I ran into a similar problem I did chmod 777 (or was it chown?) on the whole directory, so I guess the permission status of my PHP files is messy. This time I am also worried about special cases like the git files and the symfony web framework I installed recently.
How should I deal with this situation?
I would love to set this permission issue properly.
Thanks a lot for your attention.
Edit :
I may need to write a script that locally runs "sudo chown lampp:lamppusers" each time I use a different machine. Or some git voodoo trick with local-only repository exports? Any ideas?
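That per-machine script could be as small as the sketch below; the webdir path and the lamppusers group are just this setup's example names:

```shell
#!/bin/sh
# Sketch of a per-machine "re-own" script. The directory and group name
# would be this setup's values (e.g. /media/xampp/webdir, lamppusers).
set -eu

# reown DIR GROUP -- hand the whole tree back to the local user without
# touching any permission bits, so symfony- and git-specific modes survive.
# SUDO_USER makes the files belong to the invoking user (not root) when
# the script is run via sudo.
reown() {
    chown -R "${SUDO_USER:-$(id -un)}:$2" "$1"
}

# typical use after plugging the drive into another machine (needs sudo,
# since the files are owned by the other machine's uid); assumes this
# file is saved as reown.sh:
#   sudo sh -c '. ./reown.sh && reown /media/xampp/webdir lamppusers'
```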
I doubt there is a way around it like that. You could try using SELinux ACLs instead of the usual permissions, but I think that'd be overkill for you.
I would advise just setting the whole directory to 777 with chmod (chown changes ownership); as the user and group each user is in will be different on each machine, you can't really do better. Your only problem here is security, but as it's on your local box for your own purposes, I don't see a problem. Not until you copy the files to the server, whereupon you'd best set them back correctly.
You could try authenticating against an LDAP server: you'd retrieve the uid for the username you enter, and that would be mapped to the same uid on each PC, so then you could set group or owner once. (I think; you'd have to ask someone more experienced with LDAP uid mappings.)
This seems to be the best solution I have found:
sudo chown -R polypheme:lamppusers webdir
This way I can take ownership of the whole directory again with my local username and my local group.
polypheme is a username
lamppusers is a groupname
webdir is the directory containing all my web development files
Note that on computer 1 my username is polypheme and I belong to the group lamppusers, and on computer 2 my username is polypheme and I belong to the group lamppusers.
The usernames and groupnames are exactly the same.
I am not 100% sure this would work perfectly if names happened to be different.
I have not modified any chmod, which is good, as the problem is much more complex than simply flattening every file's permissions with a global
chmod -R 744 webdir
That would be the wrong thing to do with protected files in the symfony framework, and with the .git/ files too, I guess. (gbjbaanb was right about this.)
At this point, I have recovered the same configuration I used to have on the other machine, which is what I was looking for. I can work again.
Knowing this, my question suddenly seems stupid.
Well, the multi-computer context makes it tricky enough to be valuable. (To me at least)
(This answer will be validated as soon as it has been properly tested)
I've heard the phrase "deploying applications" which sounds much better/easier/more reliable than uploading individual changed files to a server, but I don't know where to begin.
I have a Zend Framework application that is under version control (in a Subversion repository). How do I go about "deploying" my application? What should I do if I have an "uploads" directory that I don't want to overwrite?
I host my application through a third party, so I don't know much other than FTP. If any of this involves logging into my server, please explain the process.
Automatically deploying and running your tests on a staging server is known as continuous integration. The idea is that if you check in something that breaks the tests, you get notified right away. For PHP, you might want to look into Xinc or phpUnderControl.
You'd generally not want to automatically deploy to production though. The normal thing to do is to write some scripts that automates the task, but that you still need to manually initiate. You can use frameworks such as Phing or other build-tools for this (A popular choice is Capistrano), but you can also just whisk a few shell-scripts together. Personally I prefer the latter.
The scripts themselves could do different things, depending on your application and setup, but a typical process would be:
ssh to the production server. The rest of the commands are run on the production server, through ssh.
run svn export svn://path/to/repository/tags/RELEASE_VERSION /usr/local/application/releases/TIMESTAMP
stop services (Apache, daemons)
run unlink /usr/local/application/current && ln -s /usr/local/application/releases/TIMESTAMP /usr/local/application/current
run ln -s /usr/local/application/var /usr/local/application/releases/TIMESTAMP/var
run /usr/local/application/current/scripts/migrate.php
start services
(Assuming you have your application in /usr/local/application/current)
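The steps above (minus the ssh hop) can be sketched as a small script run on the production server. The paths and repository URL are the example values from this answer, and fetch_code is a hypothetical helper that isolates the svn step:

```shell
#!/bin/sh
# Sketch of the release steps above, meant to run on the production server.
# APP_ROOT and REPO mirror this answer's example paths.
set -eu

APP_ROOT=${APP_ROOT:-/usr/local/application}
REPO=${REPO:-svn://path/to/repository}

# fetch_code TAG DEST -- export the tagged code; kept in a function so a
# different VCS (or a test stub) can be dropped in
fetch_code() {
    svn export "$REPO/tags/$1" "$2"
}

deploy() {
    release="$APP_ROOT/releases/$(date +%Y%m%d%H%M%S)"
    fetch_code "$1" "$release"
    # ...stop services (Apache, daemons) here...
    # link the shared var directory into the new release
    ln -s "$APP_ROOT/var" "$release/var"
    # flip the 'current' symlink to the new release
    rm -f "$APP_ROOT/current"
    ln -s "$release" "$APP_ROOT/current"
    # ...run the migration script, then start services again...
}

# usage: deploy RELEASE_VERSION
if [ $# -ge 1 ]; then deploy "$1"; fi
```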
I wouldn't recommend automatic updating. Just because your unit tests pass doesn't mean your application is 100% working. What if someone checks in a random new feature without any new unit tests, and the feature doesn't work? Your existing unit tests might pass, but the feature could be broken anyway. Your users might see something that's half-done. With automatic deployment from a check-in, you might not notice for a few hours if something made it live that shouldn't have.
Anyhow, it wouldn't be that difficult to get an automatic deployment going if you really wanted. You'd need a post-check-in hook, and really the steps would be:
1) Do an export from the latest check-in
2) Upload export to production server
3) Unpack/config the newly uploaded export
I've always performed the last steps manually. Generally it's as simple as SVN export, zip, upload, unzip, configure, and the last two steps I just alias a couple of bash commands together to perform. Then I swap out the root app directory with the new one, ensuring I keep the old one around as a backup, and it's good to go.
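The final swap-with-backup step in that routine might look something like this (directory names are illustrative, not from the original setup):

```shell
#!/bin/sh
# Sketch of "swap the root app directory with the new one, keeping the old
# one around as a backup". Directory names are examples.
set -eu

# swap_in NEWDIR APPDIR -- make NEWDIR the live tree, keeping the previous
# tree around as a timestamped backup for quick rollback
swap_in() {
    if [ -e "$2" ]; then
        mv "$2" "$2.bak.$(date +%Y%m%d%H%M%S)"
    fi
    mv "$1" "$2"
}

# usage: swap_in /tmp/export-unpacked /var/www/htdocs
```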
If you're confident in your ability to catch errors before they'd automatically go live, then you could look at automating that procedure. It gives me the jibbly-jibblies though.
At my webdev company we recently started using Webistrano, which is a Web GUI to the popular Capistrano tool.
We wanted an easy to use, fast deployment tool with a centralized interface, accountability (who deployed which version), rollback to previous versions and preferably free. Capistrano is well-known as a deployment tool for Ruby on Rails applications, but not centralized and targeted mainly to Rails apps. Webistrano enhances it with a GUI, accountability, and adds basic support for PHP deployment (use the 'pure file' project type).
Webistrano is itself a Ruby on Rails app, that you install on a development or staging server. You add a project for each of your websites. To each project you add stages, such as Prod and Dev.
Each stage can have different servers to deploy to, and different settings. Write (or modify) a 'recipe', which is a ruby script that tells capistrano what to do. In our case I just used the supplied recipe and added a command to create a symlink to a shared uploads dir, just like you mentioned.
When you click Deploy, Webistrano SSHs into your remote server(s), does an svn checkout of the code, and any other tasks that you require such as database migrations, symlinking or cleanup of previous versions. All this can be tweaked of course, after all, it's simply scripted.
We're very happy with it, but it took me a few days to learn and set up, especially since I wasn't familiar with Ruby and Rails. Still, I can highly recommend it for production use in small and medium companies, since it's proven very reliable, flexible and has saved us many times the initial investment. Not only by speeding up deployments, but also by reducing mistakes/accidents.
This sort of thing is what you would call "continuous integration". Atlassian Bamboo (commercial), Sun Hudson (free) and CruiseControl (free) are all popular options (in order of my preference), and all have support for handling PHPUnit output (because PHPUnit supports JUnit output).
The deployment stuff can be done with a post-build trigger. Like some other people in this thread, I would exercise great caution before doing automated deployments on check-in (even with tests passing).
Check out Fredistrano, it's a Capistrano clone.
It works great (installation is a little bit confusing, but after that it runs great).
http://code.google.com/p/fredistrano/
To handle uploads, the classic solution is to move the actual directory out of the main webspace, leaving that space free for a fresh version to be checked out (as I do in the script below), and then use Apache to 'Alias' it back into place as part of the website.
Alias /uploads /home/user/uploads/
You have fewer choices if you don't have as much control of the server, however.
I've got a script I use to deploy a given revision to the dev/live sites (they both run on the same server).
#!/bin/sh
REV=2410
REVDIR=$REV.20090602-1027
REPOSITORY=svn+ssh://topbit@svn.example.com/var/svn/website.com/trunk
IMAGES=$REVDIR/php/i
STATIC1=$REVDIR/anothersite.co.uk
svn export --revision $REV $REPOSITORY $REVDIR
mkdir -p $REVDIR/tmp/templates_c
chown -R username: $REVDIR
chmod -R 777 $REVDIR/tmp $REVDIR/php/cache/
chown -R nobody: $REVDIR/tmp $REVDIR/php/cache/ $IMAGES
dos2unix $REVDIR/bin/*sh $REVDIR/bin/*php
chmod 755 $REVDIR/bin/*sh $REVDIR/bin/*php
# chmod -x all the non-directories in images
find $IMAGES -type f -perm -a+x | xargs -r chmod --quiet -x
find $STATIC1 -type f -perm -a+x | xargs -r chmod --quiet -x
ls -l $IMAGES/* | grep -- "-x"
rm dev && ln -s $REVDIR dev
I put in the revision number, and the date/time, which are used for the checked-out directory name. The chmods in the middle also make sure the permissions on the images are OK, as they are also symlinked to our dedicated image server.
The last thing that happens is an old symlink .../website/dev/ is relinked to the newly checked out directory. The Apache config then has a doc-root of .../website/dev/htdocs/
There's also a matching .../website/live/htdocs/ docroot, and again, 'live' is another symlink. This is my other script that will remove the live symlink, and replace it with whatever dev points to.
#!/bin/sh
# remove live, and copy the dir pointed to by dev, to be the live symlink
rm live && cp -d dev live
I'm only pushing a new version of the site every few days, so you might not want to use this several times a day (my APC cache wouldn't like more than a few versions of the site around), but for me it has been very much problem-free for my own deployments.
After 3 years, I've learned a bit about deployment best practices. I currently use a tool called Capistrano because it's easy to set up and use, and it nicely handles a lot of defaults.
The basics of an automated deployment process goes like this:
Your code is ready for production, so it is tagged with the version of the release: v1.0.0
Assuming you've already configured your deployment script, you run your script, specifying the tag you just created.
The script SSH's over to your production server which has the following directory structure:
/your-application
    /shared
        /logs
        /uploads
    /releases
        /20120917120000
        /20120918120000    <-- latest release of your app
            /app
            /config
            /public
            ...etc
    /current --> symlink to latest release
Your Apache document root should be set to /your-application/current/public
The script creates a new directory in the releases directory with the current datetime. Inside that directory, your code is updated to the tag you specified.
Then the original symlink is removed and a new symlink is created, pointing to the latest release.
Things that need to be kept between releases go in the shared directory, and symlinks are created from each release to those shared directories.
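Put together, the layout and symlinking above can be sketched as a small script (with /your-application passed in as the root; the shared directory names match the example tree):

```shell
#!/bin/sh
# Sketch of the release layout described above. The shared directory names
# (logs, uploads) come from the example tree; everything else is generic.
set -eu

# new_release ROOT -- create a timestamped release, wire the shared
# directories into it, and flip the 'current' symlink to it
new_release() {
    root=$1
    rel="$root/releases/$(date +%Y%m%d%H%M%S)"
    mkdir -p "$root/shared/logs" "$root/shared/uploads" "$rel/public"
    # ...the tagged code would be checked out into $rel here...
    # shared data lives outside the release, so it survives deploys
    ln -s "$root/shared/uploads" "$rel/public/uploads"
    ln -s "$root/shared/logs" "$rel/logs"
    # Apache's document root points at $root/current/public, which now
    # resolves to the new release
    rm -f "$root/current"
    ln -s "$rel" "$root/current"
}

# usage: new_release /your-application
```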
It depends on your application and how solid the tests are.
Where I work everything gets checked into the repository for review and is then released.
Auto-updating out of a repository wouldn't be smart for us, as sometimes we check in just so that other developers can pull a later version and merge their changes in.
To do what you are talking about would need some sort of secondary check-in and check-out to allow for collaboration between developers outside the primary check-in area. Although I don't know anything about that, or whether it's even possible.
There are also issues with branching and other such features that would need to be handled.