I am not much of a web developer, so apologies in advance for this dumb question.
I have a test server (CentOS 6.3) with LAMP set up for me to play around with. From what I understand, the server executes whatever is in the /var/www/html directory. How do you edit source files in that directory? Do you do a sudo vim "foo.php" every time you want to fix something (or add something)? I'd imagine that would be a pain when you are building a complex application with many files and directories.
This is what worked for me. For the record, this is a CentOS 6.3 server running LAMP (on Rackspace).
First, I found out that Apache runs as user "apache" and group "apache" on CentOS systems. On other distros, I believe it runs as "www-data" in group "www-data". You can verify this by looking at /etc/httpd/conf/httpd.conf. We need to change ownership of /var/www to this user. Replace "apache" below with "www-data" if that is the case for you.
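For example, a quick way to check (assuming the CentOS paths above):

# Shows the User and Group directives Apache was configured with
grep -iE '^(User|Group)' /etc/httpd/conf/httpd.conf
# Or check what the running worker processes actually run as
ps -eo user,comm | grep httpd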
chown -hR apache:apache /var/www
Now let's make it writable by the group:
chmod -R g+rw /var/www
Add yourself to the apache group:
usermod -aG apache yourusername
Replace apache with www-data in the above if that's the case for you.
Now log out and log back in - you can now edit the files, FTP them to this directory, or do whatever else you want to do.
Comments welcome. TNX!
There are many approaches to modifying and deploying websites/web apps.
CentOS 6 is, by default, accessible with SSH2 on port 22. If you are on Windows you can use a combination of PuTTY and WinSCP (to manage your server, and its files, respectively). If you're on Linux or Mac OS X, SSH is already built into your system and can be accessed with Terminal. I find that using SSH is favourable over other methods because of how widely supported, secure and lightweight it is.
You'll also need a decent text editor or IDE to edit the files if you want proper syntax highlighting. There are tons of choices; my favourites are Notepad++ and Sublime Text 2. Not to say that I haven't edited my PHP files from time to time using the nano text editor directly in PuTTY (yum install nano).
If you're using an edit-save-upload approach, just remember to back up your files regularly; you will find out the hard way if you fail to do so. Also, never use root unless you need to. Creating a user solely to modify your websites is good practice (adduser <username>, then give that user write access to /var/www/html).
To answer your second question:
Once you get into heavier web development you will probably want to use something like Git. Deploying with git is beyond the scope of this question so I won't get into that. In brief, you can set it up so your development environment sits locally and you can use a combination of git commit and git push to deploy.
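As a rough illustration of that push-to-deploy idea (a minimal sketch; the repo path, server name and target directory are all hypothetical):

# One-time setup on the server: a bare repository plus a deploy hook
git init --bare /home/user/site.git

# /home/user/site.git/hooks/post-receive (make it executable):
#!/bin/sh
GIT_WORK_TREE=/var/www/html git checkout -f

# On your local machine: add the server as a remote, then deploy
git remote add production user@yourserver:/home/user/site.git
git push production master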
I use an FTP client (FileZilla) to download files, edit them, and then re-upload them. If you're a one-(wo)man show on a test setup, just playing around to learn, this is probably sufficient. With more than one person, or when moving to a (test and) production setup, you should look at getting more control with something like SVN, as @Markus mentioned in another answer.
You should change the permissions of that directory (with chmod) so you have write permissions, and can then read and write to that directory. Then, you don't need sudo.
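For example (a sketch; substitute your own username and path):

# Take ownership of the web root so sudo is no longer needed for edits
sudo chown -R yourusername /var/www/html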
Dude, read up on version control and source code control systems like Subversion and Git. The idea is to develop on your machine, revision-control the result, then deploy a known working version to the production server.
I'm currently working to make a WordPress plugin of mine compatible with nginx. This plugin requires access to a .conf file in the wp-content/uploads directory, so that it can add in the required rules. Currently, it updates an .htaccess file in the same directory, and the changes take effect immediately without intervention. Because nginx requires service nginx reload to allow configuration changes to take effect, I am looking for a way to do that in my script. I'm not sure that even exec() would work for this, as service nginx reload is required to be run as root or using sudo. I've searched far and wide on StackExchange, Google, and everywhere else that I know of, and I can't even find a starting place.
Security-wise, it would be a VERY bad thing to give the user that runs the web server sudo/root access. Instead, you could use a semaphore file and have a cron job, run by root every 5 minutes (or more frequently if required), that looks for the presence of this file. If present, it issues the service nginx reload command and deletes the file.
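A rough sketch of that approach (all paths are hypothetical):

#!/bin/sh
# /usr/local/bin/nginx-reload-check.sh
# Root's crontab entry would be: */5 * * * * /usr/local/bin/nginx-reload-check.sh
FLAG=/var/www/html/wp-content/uploads/nginx-reload-needed
if [ -f "$FLAG" ]; then
    # The plugin touches $FLAG from PHP whenever a reload is needed
    service nginx reload && rm -f "$FLAG"
fi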
My Suggestion
Overall I think the best solution would be to have your plugin create specific instructions for the user to edit any Nginx config manually and then reload Nginx themselves.
I believe this because giving PHP the ability to run a command that usually requires sudo requires opening up a large security hole.
On top of that, the steps a user would have to take to allow PHP to run the service nginx reload command are not something you can accomplish in PHP alone, and may be equally complex, if not more so, than having them update the Nginx config and reload it themselves. The user needs to do extra work regardless!
If you really want to do this:
If you still choose to go down this course of action, I suggest having users of the plugin edit the server's /etc/sudoers file, which can allow user www-data (or any other user) to run that one specific command using sudo without requiring a password. It also means that the user (perhaps www-data) would NOT have permission to run any other command with sudo, which is at least a bit more secure.
Use the sudo visudo command to edit the /etc/sudoers file. Don't edit it directly.
Here's the line to add to that file for user www-data:
www-data ALL=(ALL:ALL) NOPASSWD:/usr/sbin/service nginx reload
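Once that rule is in place, you can verify it from a root shell (a quick check; -n makes sudo fail instead of prompting for a password):

sudo -u www-data sudo -n /usr/sbin/service nginx reload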
For a worked example of allowing a user (or group) to run specific commands, and more detail on how /etc/sudoers works, see the sudoers man page (man sudoers).
Lastly, note that user www-data is specific to Nginx installs on Debian/Ubuntu Linux distributions. Other very common distributions (CentOS, Red Hat, Fedora, Arch, etc.) may not run Nginx/PHP-FPM as user www-data unless the sysadmin creates those users manually.
Short answer: Bad idea. Don't do it.
Long answer: What you are trying to do cannot be compared with editing a .htaccess file, simply because it does not work the same way. .htaccess files are made to be edited and reloaded on the fly, and they can be created and accessed in almost every directory inside your public directory.
That is not the case for Nginx configuration. Configuration files are loaded when Nginx starts, or manually reloaded when needed. Their location depends on how the administrator set up the server.
Even if you knew where the file was, it could be impossible for you to update it or write in that specific directory.
On top of that, you'd have to figure out which user can read and write in the location where the configuration files live.
In the end, there are so many things that can go wrong. In my humble opinion, the idea is really bad.
What most plugins do is display the Nginx configuration that should be included so the website administrator can copy and paste it. Or, they create a .conf file that the website administrator must copy somewhere before restarting Nginx.
What you are trying to achieve is doable. It can be coded. Don't get me wrong. BUT your plugin would end up having to handle every specificity of every Nginx configuration out there. You really don't want that.
As you don't know every configuration (and honestly, you don't want to know), I'd suggest you focus on developing a great plugin and offering the right Nginx configuration via the plugin. The last part (where the website administrator copies it to the right location and reloads the configuration) is their job.
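The administrator's side of that workflow might look like this (a sketch; the file name and include directory are assumptions):

# Copy the conf the plugin generated into Nginx's include directory
cp /var/www/html/wp-content/uploads/plugin-rules.conf /etc/nginx/conf.d/
# Always test the configuration before reloading
nginx -t && service nginx reload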
The other answers I read in this question suggest ways of "trying" to achieve what you are asking. I think they will create more problems than they solve. By changing permissions and running crons, it may work. But you'll also open the door to security vulnerabilities, as your www-data user or website owner user will now effectively have root powers and could run things that should not be run. I would not advise you to do that.
You must grant the correct permissions to the www-data account, which is the account used to execute commands on many Linux web servers. You can see which account is executing your PHP script by using <?php echo exec("whoami"); ?>
We have an in-house Ubuntu Linux server that we use for development of PHP sites. The /var/www folder is shared via Samba on our network, with settings that force anything created through this share to be owned by www-data:www-data. Not the best, but it works.
We've recently started using Laravel - and this requires you to perform commands through artisan on the command line. I SSH into the Linux server to do this - but as a different username. Any files that are then created on the command line via artisan have my username and group attached to them, and therefore cannot be edited remotely through the Samba share, as they don't have the correct permissions unless I then chmod/chown them as well.
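For reference, that manual cleanup looks something like this (the site path is an example):

# Hand artisan-created files back to the web user so Samba edits work again
sudo chown -R www-data:www-data /var/www/mysite
sudo chmod -R g+rw /var/www/mysite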
I would prefer this to happen automatically or, even better, to not be needed at all. My colleagues will start to build Laravel sites soon and it's not going to look good if they have to do this.
I have a feeling I'm doing something fundamentally wrong with the way we've got all of this set up (aside from the www-data thing) - I'd appreciate any pointers.
I just need to create a web application that could one-click install packages on my Ubuntu server. I really have no idea where to start. I thought of doing it with PHP, but due to security issues, it didn't seem like a fair idea.
Sorry, I'm new to this.
Thanks for any help.
You should not do this.
To answer your question: yes, it is possible. It can be done by running "sudo apt-get ..." within shell_exec(). This would require that the web server has passwordless access to root powers.
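For completeness only (and again, you should not do this), the mechanism would be roughly:

# Grant the web server passwordless sudo for apt-get (a big security hole)
echo 'www-data ALL=(root) NOPASSWD: /usr/bin/apt-get' | sudo tee /etc/sudoers.d/webinstall
sudo chmod 440 /etc/sudoers.d/webinstall
# The PHP side would then be something like:
# shell_exec('sudo apt-get install -y somepackage 2>&1');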
Did I mention that you should not do this?
If you are trying to remotely manage a computer, you should use SSH to log in to it and avoid the unnecessary gymnastics of using PHP, or use a web-based interface like Webmin that can properly do these things for you.
You are on the right track using system()/shell_exec().
I think it "does not work" on your side because your Apache process owner does not have root permission (and you need root to be able to install packages). By the way, its the same for any other programming language implementation you use: you need root to be able to install packages.
You can set your Apache process owner to 'root', but then you'll get this when you try to restart it:
Error: Apache has not been designed to serve pages while
running as root. There are known race conditions that
will allow any local user to read any file on the system.
If you still desire to serve pages as root then
add -DBIG_SECURITY_HOLE to the CFLAGS env variable
and then rebuild the server.
It is strongly suggested that you instead modify the User
directive in your httpd.conf file to list a non-root
user.
You can compile Apache yourself to allow running Apache with root user as indicated above.
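For reference, that rebuild would look roughly like this (please don't actually do it):

# Build Apache from source with the root-serving check disabled (strongly discouraged)
CFLAGS=-DBIG_SECURITY_HOLE ./configure
make && sudo make install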
To summarize, what you're trying to do is possible BUT you are opening a really big security hole.
YOU SHOULD NOT DO THIS.
Use SSH instead.
I have installed a XAMPP (Apache + MySQL + PHP) stack on a portable volume (a TrueCrypt volume on a USB drive) that I use on different Linux machines for web app development.
This volume is formatted as ext3, so I run into permission problems, as I can't be the same user on each machine. For example, I tried to check the git status of a project started on the other machine, and access was denied.
The last time I ran into a similar problem I did chmod 777 (or was it chown?) on the whole directory, so I guess the permission status of my PHP files is messy. This time I am also worried about special cases like Git files and the Symfony web framework I installed lately.
How should I deal with this situation?
I would love to set this permission issue properly.
Thanks a lot for your attention.
Edit:
I may need to write a script to locally "sudo chown lampp:lamppusers" each time I use a different machine. Or some Git voodoo trick with local-only repository exports? Any ideas?
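That per-machine script could be as simple as this (the mount point is an example; the group name is taken from above):

#!/bin/sh
# Re-own the portable web directory for whichever machine I'm on
sudo chown -R "$(whoami)":lamppusers /media/truecrypt1/webdir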
I doubt there is a way around it like that - you could try using SELinux or filesystem ACLs instead of the usual permissions, but I think that'd be overkill for you.
I would advise just setting the whole directory to 777 with chmod (chown changes ownership); as the user and group will be different on each machine, you can't really do better. Your only problem here is security - but as it's on your local box for your own purposes, I don't see a problem. Not until you copy the files to the server, whereupon you'd best set them back correctly.
You could try authenticating against an LDAP server; you'd retrieve the uid for the username you enter, and that would be mapped to the same uid on each PC, so then you could set the group or owner once. (I think - you'd have to ask someone more experienced with LDAP uid mappings.)
This seems to be the best solution I have found:
sudo chown -R polypheme:lamppusers webdir
This way I can own again the whole directory with my local username and my local group.
polypheme is a username
lamppusers is a groupname
webdir is the directory containing all my web development files
Note that on computer 1 my username is polypheme and I belong to the group lamppusers, and on computer 2 my username is polypheme and I belong to the group lamppusers.
The usernames and groupnames are exactly the same.
I am not 100% sure this would work perfectly if the names happened to be different.
I have not modified any file modes, which is good, as the problem is much more complex than simply flattening every file permission with a global
chmod -R 744 webdir
That would be the wrong thing to do with protected files in the Symfony framework. And .git/ files too, I guess. (gbjbaanb was right about this.)
At this point, I have recovered the same configuration I used to have on the other machine, which is what I was looking for. I can work again.
Knowing this, my question suddenly seems stupid.
Well, the multi-computer context makes it tricky enough to be valuable. (To me at least)
(This answer will be validated as soon as it has been properly tested)
I've heard the phrase "deploying applications" which sounds much better/easier/more reliable than uploading individual changed files to a server, but I don't know where to begin.
I have a Zend Framework application that is under version control (in a Subversion repository). How do I go about "deploying" my application? What should I do if I have an "uploads" directory that I don't want to overwrite?
I host my application through a third party, so I don't know much other than FTP. If any of this involves logging into my server, please explain the process.
Automatically deploying and running tests on a staging server is known as continuous integration. The idea is that if you check in something that breaks the tests, you get notified right away. For PHP, you might want to look into Xinc or phpUnderControl.
You'd generally not want to automatically deploy to production, though. The normal thing to do is to write some scripts that automate the task, but that you still need to initiate manually. You can use frameworks such as Phing or other build tools for this (a popular choice is Capistrano), but you can also just whisk a few shell scripts together. Personally I prefer the latter.
The scripts themselves could do different things, depending on your application and setup, but a typical process would be the following (a consolidated sketch appears after the list):
ssh to the production server. The rest of the commands are run on the production server, through ssh.
run svn export svn://path/to/repository/tags/RELEASE_VERSION /usr/local/application/releases/TIMESTAMP
stop services (Apache, daemons)
run unlink /usr/local/application/current && ln -s /usr/local/application/releases/TIMESTAMP /usr/local/application/current
run ln -s /usr/local/application/var /usr/local/application/releases/TIMESTAMP/var
run /usr/local/application/current/scripts/migrate.php
start services
(Assuming you have your application in /usr/local/application/current)
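Put together, a minimal version of such a script might look like this (paths, repository URL and service commands are examples):

#!/bin/sh
# Deploy a tagged release; run on the production server over ssh
RELEASE=$1                         # e.g. RELEASE_1_0_0
TIMESTAMP=$(date +%Y%m%d%H%M%S)
APP=/usr/local/application

svn export svn://path/to/repository/tags/$RELEASE $APP/releases/$TIMESTAMP
apachectl stop                     # stop services
unlink $APP/current && ln -s $APP/releases/$TIMESTAMP $APP/current
ln -s $APP/var $APP/releases/$TIMESTAMP/var   # shared data between releases
$APP/current/scripts/migrate.php   # run migrations
apachectl start                    # start services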
I wouldn't recommend automatic updating. Just because your unit tests pass doesn't mean your application is 100% working. What if someone checks in a random new feature without any new unit tests, and the feature doesn't work? Your existing unit tests might pass, but the feature could be broken anyway. Your users might see something that's half-done. With automatic deployment from a check-in, you might not notice for a few hours if something made it live that shouldn't have.
Anyhow, it wouldn't be that difficult to get automatic deployment going if you really wanted. You'd need a post-check-in hook, and really the steps would be (a rough hook sketch follows the list):
1) Do an export from the latest check-in
2) Upload export to production server
3) Unpack/config the newly uploaded export
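A bare-bones Subversion post-commit hook along those lines (the server name and paths are hypothetical):

#!/bin/sh
# hooks/post-commit -- export the committed revision and ship it to production
REPOS="$1"
REV="$2"
svn export -r "$REV" "file://$REPOS/trunk" /tmp/export-$REV
tar -czf /tmp/export-$REV.tar.gz -C /tmp export-$REV
scp /tmp/export-$REV.tar.gz deploy@production:/tmp/
# unpacking and configuring on the server would still be a separate script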
I've always performed the last steps manually. Generally it's as simple as SVN export, zip, upload, unzip, configure, and the last two steps I just alias a couple of bash commands together to perform. Then I swap out the root app directory with the new one, ensuring I keep the old one around as a backup, and it's good to go.
If you're confident in your ability to catch errors before they'd automatically go live, then you could look at automating that procedure. It gives me the jibbly-jibblies though.
At my webdev company we recently started using Webistrano, which is a Web GUI to the popular Capistrano tool.
We wanted an easy-to-use, fast deployment tool with a centralized interface, accountability (who deployed which version), rollback to previous versions, and preferably free. Capistrano is well-known as a deployment tool for Ruby on Rails applications, but it is not centralized and is targeted mainly at Rails apps. Webistrano enhances it with a GUI and accountability, and adds basic support for PHP deployment (use the 'pure file' project type).
Webistrano is itself a Ruby on Rails app that you install on a development or staging server. You add a project for each of your websites, and to each project you add stages, such as Prod and Dev.
Each stage can have different servers to deploy to, and different settings. You write (or modify) a 'recipe', which is a Ruby script that tells Capistrano what to do. In our case I just used the supplied recipe and added a command to create a symlink to a shared uploads dir, just like you mentioned.
When you click Deploy, Webistrano SSHes into your remote server(s), does an svn checkout of the code, and runs any other tasks that you require, such as database migrations, symlinking or cleanup of previous versions. All of this can be tweaked, of course - after all, it's simply scripted.
We're very happy with it, but it took me a few days to learn and set up, especially since I wasn't familiar with Ruby and Rails. Still, I can highly recommend it for production use in small and medium companies, since it's proven very reliable, flexible and has saved us many times the initial investment. Not only by speeding up deployments, but also by reducing mistakes/accidents.
This sort of thing is what you would call "continuous integration". Atlassian Bamboo (commercial), Sun Hudson (free) and CruiseControl (free) are all popular options (in order of my preference) and have support for handling PHPUnit output (because PHPUnit supports JUnit output).
The deployment stuff can be done with a post-build trigger. Like some other people in this thread, I would exercise great caution before doing automated deployments on check-in (and tests passing).
Check out Fredistrano, it's a Capistrano clone. It works great (the installation is a little confusing, but it runs great once set up).
http://code.google.com/p/fredistrano/
To handle uploads, the classic solution is to move the actual uploads directory out of the main webspace, leaving the webspace free for a fresh version to be checked out (as I do in the script below), and then using Apache to 'Alias' the uploads directory back into place as part of the website.
Alias /uploads /home/user/uploads/
There are fewer choices available to you if you don't have as much control of the server, however.
I've got a script I use to deploy a given revision of the site to the dev/live sites (they both run on the same server).
#!/bin/sh
# Revision number and the dated directory name to check it out into
REV=2410
REVDIR=$REV.20090602-1027
REPOSITORY=svn+ssh://topbit@svn.example.com/var/svn/website.com/trunk
IMAGES=$REVDIR/php/i
STATIC1=$REVDIR/anothersite.co.uk

# Export a clean copy of the given revision (no .svn directories)
svn export --revision $REV $REPOSITORY $REVDIR
mkdir -p $REVDIR/tmp/templates_c
chown -R username: $REVDIR
# Writable dirs for the webserver; cache/image dirs owned by 'nobody'
chmod -R 777 $REVDIR/tmp $REVDIR/php/cache/
chown -R nobody: $REVDIR/tmp $REVDIR/php/cache/ $IMAGES
# Normalise line endings and make the scripts executable
dos2unix $REVDIR/bin/*sh $REVDIR/bin/*php
chmod 755 $REVDIR/bin/*sh $REVDIR/bin/*php
# chmod -x all the non-directories in images
find $IMAGES -type f -perm -a+x | xargs -r chmod --quiet -x
find $STATIC1 -type f -perm -a+x | xargs -r chmod --quiet -x
# List anything that still has execute bits, for a manual check
ls -l $IMAGES/* | grep -- "-x"
# Swap the 'dev' symlink over to the new checkout
rm dev && ln -s $REVDIR dev
I put in the revision number, and the date/time, which is used for the checked-out directory name. The chmods in the middle also make sure the permissions on the images are OK, as they are also symlinked to our dedicated image server.
The last thing that happens is an old symlink .../website/dev/ is relinked to the newly checked out directory. The Apache config then has a doc-root of .../website/dev/htdocs/
There's also a matching .../website/live/htdocs/ docroot, and again, 'live' is another symlink. This is my other script that will remove the live symlink, and replace it with whatever dev points to.
#!/bin/sh
# remove live, and copy the dir pointed to by dev, to be the live symlink
rm live && cp -d dev live
I'm only pushing a new version of the site every few days, so you might not want to be using this several times a day (my APC cache wouldn't like more than a few versions of the site around), but for me, this has been very much problem-free for my own deployment.
After 3 years, I've learned a bit about deployment best practices. I currently use a tool called Capistrano because it's easy to set up and use, and it nicely handles a lot of defaults.
The basics of an automated deployment process goes like this:
Your code is ready for production, so it is tagged with the version of the release: v1.0.0
Assuming you've already configured your deployment script, you run your script, specifying the tag you just created.
The script SSHes over to your production server, which has the following directory structure:
/your-application
    /shared/
        /logs
        /uploads
    /releases/
        /20120917120000
        /20120918120000   <-- latest release of your app
            /app
            /config
            /public
            ...etc
    /current --> symlink to latest release
Your Apache document root should be set to /your-application/current/public
The script creates a new directory in the releases directory with the current datetime. Inside that directory, your code is updated to the tag you specified.
Then the original symlink is removed and a new symlink is created, pointing to the latest release.
Things that need to be kept between releases go in the shared directory, and symlinks are created from each release to those shared directories.
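In shell terms, a single release roughly amounts to this (a sketch of the layout above):

# Create a new timestamped release directory
RELEASE=/your-application/releases/$(date +%Y%m%d%H%M%S)
mkdir -p $RELEASE
# ...check out the tagged code into $RELEASE here...
# Link the shared, persistent directories into the release
ln -s /your-application/shared/logs $RELEASE/logs
ln -s /your-application/shared/uploads $RELEASE/public/uploads
# Atomically repoint 'current' at the new release
ln -sfn $RELEASE /your-application/current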
It depends on your application and how solid the tests are.
Where I work everything gets checked into the repository for review and is then released.
Auto-updating out of a repository wouldn't be smart for us, as sometimes we just check in so that other developers can pull a later version and merge their changes in.
To do what you are talking about would need some sort of secondary check-in and check-out to allow for collaboration between developers in the primary check-in area, although I don't know anything about that or whether it's even possible.
There are also issues with branching and other such features that would need to be handled.