Symlink real world example - PHP

I was reading about PHP functions and I came across symlink, but I couldn't really grasp it, especially its usage in real-world application development. Can anybody please explain it to me with a real-world example?
Thanks

Let's assume you have a src folder in your $HOME directory where your sources are stored. When you open a new shell, you usually start in your $HOME directory. It might be a common step that whenever you open up a new shell, you want to enter the directory ~/src/very_long_project_name afterwards.
This is where symlinks come into play: you could create a symlink in your $HOME directory (for example called vlpn) that points directly to ~/src/very_long_project_name.
When you open your console next time, you could simply type cd vlpn instead of cd src/very_long_project_name. That's it. Nothing PHP specific. Like giraff and gnur already said.
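To tie this back to PHP: the symlink() function creates the same kind of link programmatically. A minimal sketch, assuming the (purely illustrative) paths below exist and $HOME is writable:
<?php
// Hypothetical paths for illustration only.
$target = getenv('HOME') . '/src/very_long_project_name';
$link   = getenv('HOME') . '/vlpn';

// file_exists() returns false for broken links, so check is_link() too.
if (!file_exists($link) && !is_link($link)) {
    symlink($target, $link);
}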

An administrator may create symlinks to arrange storage without messing up filesystems; e.g. a web mirror may have thousands of sites hosted, and dozens of disks mounted:
/mnt/disk1/
/mnt/disk2/
...
and may want to store the data in /var/www/htdocs/ without users caring about which disk holds their data.
/var/www/htdocs/debian -> /mnt/disk1/debian
/var/www/htdocs/ubuntu -> /mnt/disk2/ubuntu
/var/www/htdocs/centos -> /mnt/disk9/centos
Second, you may have a 'latest upload'; your users are uploading photos, or software packages, and you want http://example.com/HOT_STUFF to always be the most recent uploaded photo. You could set the symlink($new_upload, $HOT_STUFF); and users will never need more than the one URL to see the newest thing.
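A minimal sketch of that pattern; the paths are hypothetical, and note that symlink() refuses to overwrite an existing link, so the old one has to be removed first:
<?php
// Hypothetical paths for illustration.
$new_upload = '/var/www/uploads/photo_1234.jpg';
$HOT_STUFF  = '/var/www/htdocs/HOT_STUFF';

// symlink() won't overwrite an existing link, so remove it first.
if (is_link($HOT_STUFF)) {
    unlink($HOT_STUFF);
}
symlink($new_upload, $HOT_STUFF);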
Third, Debian and Ubuntu use the update-alternatives mechanism to allow multiple versions of a tool to be installed at once and yet still allow the administrator to say which one is the default. e.g.,
$ ls -l /usr/bin/vi
lrwxrwxrwx 1 root root 20 2011-01-11 01:07 /usr/bin/vi -> /etc/alternatives/vi
$ ls -l /etc/alternatives/vi
lrwxrwxrwx 1 root root 18 2011-01-11 01:07 /etc/alternatives/vi -> /usr/bin/vim.basic
$ ls -l /usr/bin/vim.basic
-rwxr-xr-x 1 root root 1866904 2010-09-28 04:06 /usr/bin/vim.basic
It's a little circuitous, but the configuration is maintained in a per-system /etc/ directory, and the usual /usr/bin/vi path will execute something that is very much like vi, when there are many choices available (nvi, elvis, vim, AT&T vi, etc.)
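You can inspect such a chain from PHP, too: readlink() resolves one hop, while realpath() follows the whole chain. Assuming a Debian-style layout like the one above:
<?php
// readlink() resolves a single hop of the chain...
echo readlink('/usr/bin/vi'), "\n";   // /etc/alternatives/vi

// ...while realpath() follows it all the way down.
echo realpath('/usr/bin/vi'), "\n";   // /usr/bin/vim.basic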

Symlinks are mostly used on the host OS, not so much by PHP itself.
A symlink creates a shortcut to a file. It can be useful for accessing a much-used file that has a long path: by creating a symlink in the public_html folder to that long path, you can include the file without spelling out the full path.
http://en.wikipedia.org/wiki/Symbolic_link
//edit:
This is better than just copying the file, because you actually use the original file: if the original changes, the symlink will still point to the current contents. It is not a copy!

The PHP function actually only delegates to the operating system's functionality, so it is only as useful as a symbolic link is in general:
Symbolic links operate transparently for most operations: programs which read or write to files named by a symbolic link will behave as if operating directly on the target file.
(from Wikipedia)
I've seen it used by Typo3:
Each Typo3 site has folders that link to the main installation, so several sites can use the same code base, and Typo3 can be updated by extracting the new version and then changing the symlinks (reducing the time the site is offline).
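A rough sketch of that kind of switch-over in PHP (the directory names are hypothetical). The trick is that rename() over an existing symlink is atomic on POSIX systems, so visitors never see a half-switched site:
<?php
// Hypothetical layout: /var/www/typo3_src points at the live version.
$new_version = '/var/www/typo3_src-4.7';
$live_link   = '/var/www/typo3_src';
$tmp_link    = $live_link . '.tmp';

// Build the new link under a temporary name, then atomically
// rename() it over the old link.
symlink($new_version, $tmp_link);
rename($tmp_link, $live_link);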

Here is a pretty good example of usage, with an explanation: http://dogmatic69.com/blog/development/12-symlink-cakephp-plugin-assets-for-faster-page-loads

Related

Hard links between PHP files in different directories: what should be the expected behaviour of __FILE__?

I have two websites that are identical in structure. The index.php file needs to generate different content depending on the domain that the page is being served from. I am using Apache 2.2 and created two virtual hosts using two different folders under /var/www (/var/www/site.domain.com and /var/www/site.domain.ca).
I am using __FILE__ to obtain the full path of the executing index.php, and depending on the path I output the correct content.
Since the files are all the same, I wanted to use links to make editing easier: instead of editing one of the index.php files and then copying it to the other directory, I wanted editing either file to update the other.
I used the cp -al command to copy with hard links:
cp -al /var/www/site.domain.com /var/www/site.domain.ca
The problem is that when I access the index.php file in one site vs. the other, the __FILE__ constant does not reflect the path of whichever index.php is executing. Depending on which domain I visit first, __FILE__ will reflect the path of that domain.
I will try using getcwd() to see if that works, but can someone explain why this is happening? Shouldn't __FILE__ reflect the current script that's executing?
These are hard links, not soft links.
Is apache caching the script, is APC the source of the problem?
Update: getcwd() worked; it seems to always return the correct current directory.
That's how hardlinks work. The system is not going to scan through the entire filesystem to try and figure out alternative names for the file. A hardlink is exactly that.. HARD. "this is an official, unchanging, immutable name for this file".
e.g:
$ cat z.php
<?php
echo __FILE__, "\n";
$ mkdir subdir
$ cd subdir
$ ln ../z.php hardlink.php
$ ln -s ../z.php softlink.php
$ php hardlink.php
/home/marc/test/subdir/hardlink.php
$ php softlink.php
/home/marc/test/z.php
Note how the hardlink displays the location of the hardlink itself, while the softlink (aka symlink) displays the target of the link, not the link itself.

Automatically compress folders

I am creating an automatic backup system. I plan on running a cron job that will, once a week, automatically create a backup and email it to a secure email account. (This way, even if the server explodes into a million pieces, I will have a full recent backup of my files that any administrator can access.)
I found this simple method: system('tar -cvzpf backup.tar.gz /path/to/folder');
It works nearly perfectly for what I need. Only problem is there is one directory that I do not want included in the backup. On this website, users upload their own avatars and the directory in which the images are held is inside the directory I want backed up. Because I'm sending this via email I have to keep the folder relatively small, and a couple thousand images add up. Is there any way I could tell the function to ignore this directory and just compress everything else?
find /path/to/folder -not -ipath '*baddirectory*' -print0 | xargs -0 tar -cvzpf backup.tar.gz, though you might consider passing PHP the full path to all the binaries you use (in the above command: find, xargs, and tar). The -print0/-0 pair keeps filenames with spaces intact.
From the tar man page:
tar --exclude='/path/to/folder/bad'
So you would get:
system("tar -czpf backup.tar.gz --exclude='/path/to/folder/bad' /path/to/folder");
(Note that the archive name must directly follow the f flag, and the outer PHP quotes have to differ from the inner shell quotes.)
You can leave the v (verbose) flag out, since you are not watching the output.
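If the paths come from variables, it is safer to build the command with escapeshellarg(); a minimal sketch with hypothetical paths:
<?php
$folder  = '/path/to/folder';
$exclude = '/path/to/folder/bad';
$archive = 'backup.tar.gz';

// escapeshellarg() quotes each path so spaces and shell
// metacharacters can't break or inject into the command.
$cmd = sprintf(
    'tar -czpf %s --exclude=%s %s',
    escapeshellarg($archive),
    escapeshellarg($exclude),
    escapeshellarg($folder)
);
system($cmd);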
You can exclude something from being included with the --exclude-tag-all=PATTERN long option, as described in the manual.
Unfortunately, I did not find a good example of the pattern.
I guess the following exclude will work:
--exclude-tag-all=/path/foldernotinclude/.
since it should match the "directory" file tag.
With luck, another user will comment on the pattern to use.

Files written through PHP/Apache don't honor directory setgid bit

Scratching my head on this one, seems so basic.
I've got a PHP based content management system for our website written by a contractor. One feature is the ability to upload images to be displayed in various places on the website (like a product gallery). All such uploaded images are stored in a particular directory called "attachments".
drwxrwsr-x 4 www ftpusers 4096 Oct 10 14:47 attachments
As you can see, I've got the setgid bit set on that dir so that any files written there inherit its group, letting the users who need access to those files (like the FTP user) modify/overwrite them. I've set the umask for Apache so that it will write files as group-writable.
When I try this with ANY user in the system by creating a new file in that directory, it correctly inherits the group of the parent. When a new file is created through PHP running in Apache, it always has the apache.apache ownership. Apache seems to be ignoring the setgid bit, which I didn't think it could do as this was done by the file system. Here is one file I uploaded:
-rw-rw-r-- 1 apache apache 30536 Oct 10 14:43 209
I can't test as the apache user directly as it doesn't have a login shell specified (for obvious security reasons).
I can get the same permissions capability by adding the ftpusers group to the apache group, but this doesn't seem wise from a security perspective.
I did find one thing that seemed like it might be related - php safe mode, which I've verified is off in /etc/php.ini, although I'm not positive I found the php.ini file that mod_php in apache is using. The php script is using move_uploaded_file(); as far as I can tell, nothing fancy with permissions is being done in the php code.
My best guess would be that this is an intentional limitation for security, but I can't find anything that seems to indicate that is the case.
Running CentOS 5.6 with Apache 2.2.17 and php 5.2.16.
Anyone have a clue?
When you upload a file, it is created in the dir specified by PHP's "upload_tmp_dir" setting. Then move_uploaded_file() moves it to your target dir. The file keeps the permissions it was given upon creation, not those of the target directory you move it to.
So you want the tmp dir to have the permissions you want, basically those you've given to your target dir. Then the file will be created with the setgid taking effect, and the move will keep those permissions.
IIRC "upload_tmp_dir" is not available in .htaccess, so if you cannot change this setting or the permissions given to the dir, then you will need to do it another way.
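One such way is to fix the permissions yourself right after the move; a rough sketch, with a hypothetical form field and target directory (note that chgrp() only succeeds if the apache user is itself a member of the target group):
<?php
// Hypothetical upload field and target dir.
$target = '/var/www/html/attachments/' . basename($_FILES['photo']['name']);

if (move_uploaded_file($_FILES['photo']['tmp_name'], $target)) {
    chmod($target, 0664);         // make it group-writable
    chgrp($target, 'ftpusers');   // only works if apache is in this group
}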

Zipping files through shell_exec in PHP: problem with paths

I'm working on a web backup service part of which allows the user to download a zip file containing the files they have selected from a list of what they have backed up.
The zipping process is done via a shell_exec command. For this to work I have given the apache user no-password sudo privileges to the zip command.
My problem is that since I can't do "sudo cd /path/to/user/files" to the folder containing the user's backed-up files, I need to specify the full absolute path to the files to be zipped, and those full paths then end up inside the zip file, which I don't want.
Here's what I mean:
The user, 'test1' has their files backed up which are stored as follows
/home/backups/test1/my pics/pic1.jpg
/home/backups/test1/my pics/pic2.jpg
/home/backups/test1/my pics/pic3.jpg
/home/backups/test1/hello.txt
/home/backups/test1/music/band/song.mp3
When these files are zipped up they keep the full paths but I just want it to show:
my pics/pic1.jpg
my pics/pic2.jpg
my pics/pic3.jpg
hello.txt
music/band/song.mp3
Is there some way to tell zip to truncate the paths, or specify a custom path for each file?
Or is there some way I can change the working directory to the user's root backup folder so I can simply specify relative file paths.
Thanks for reading, hope you got this far and it all makes sense!
I have to slightly question why you are making a potentially dangerous system shell call when there are a number of good PHP zipping classes around. The tutorial here shows how to easily create a class that will output a zip file to the browser. There are also a number of classes on phpclasses.org; this seems to be the best one.
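For instance, PHP's built-in ZipArchive avoids the shell entirely, and the second argument of addFile() sets the name stored inside the archive, which solves the path problem directly. A minimal sketch with hypothetical paths:
<?php
$base = '/home/backups/test1';   // hypothetical user backup root
$zip  = new ZipArchive();
$zip->open('/tmp/backup.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);

$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($base, FilesystemIterator::SKIP_DOTS)
);
foreach ($files as $file) {
    // Store each file under its path relative to $base,
    // e.g. "my pics/pic1.jpg" instead of the absolute path.
    $relative = substr($file->getPathname(), strlen($base) + 1);
    $zip->addFile($file->getPathname(), $relative);
}
$zip->close();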
If you have to do it with a system call my suggestions are:
To truncate the path for the file you could use symbolic links.
Can you not increase the permissions of the zip executable so that apache can use it without sudo?
Have you tried using chdir() to change the working directory?
chdir('/home/backups/test1/');
A better idea may be to make a shell script and grant the webserver sudo access to that shell script. This may be more secure, too:
#!/bin/bash
#
# Zip up files for a given user
#
# usage: backupToZipFile.sh <username>
cd "/home/backups/$1"
tar -czf "/tmp/$1.tgz" .
Also have you considered running sudo as the user you're trying to zip files as, instead of as root? This would be more secure as well.
It seems like a simple option, but according to the man page it's just not there. You could of course symlink the stuff you want to archive into the location where you'll create the zip file (I assume you can actually cd to that directory). With the proper options, zip will not archive the symlinks themselves but the symlinked files instead, using the names of the symlinks.
E.g. create symlinks /tmp/$PID/my pics/pic1.jpg etc, and then zip everything in /tmp/$PID.
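A rough sketch of that staging approach with hypothetical paths (zip follows symlinks by default; -y would store the links themselves):
<?php
// Hypothetical staging area keyed by the process id.
$stage = '/tmp/' . getmypid();
mkdir($stage . '/my pics', 0700, true);

// Link the real file into the staging tree under its short name.
symlink('/home/backups/test1/my pics/pic1.jpg', $stage . '/my pics/pic1.jpg');

// Zip from inside the staging dir so only relative names are stored.
chdir($stage);
system('zip -r /tmp/backup.zip .');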

Why are my file permissions on Apache being reset?

We recently switched from using PCs at work to Macs, so I'm new to the *nix way of doing things. I have the default Apache running that shipped with 10.5, but I've noticed that when I drag files from a Windows server to my machine, the permissions are changed. Specifically, I'm writing data to an XML file, and occasionally after swapping some files back and forth, it stops working.
Can someone help me understand why this is happening and how I can either force Windows to respect the original file permissions (they were set on my machine when I created the file) or apply a less secure set of default permissions when the files are moved from Windows to Mac?
A couple facts to be aware of:
I'm using the Cornerstone Subversion client.
I can use Terminal if you spell it out for me.
Ultimately I'm uploading these files via Transmit to a Linux server in another location.
I'm already familiar with using Get Info to change the file permissions, but maybe I'm doing something wrong.
I'm logged in as root. (I know, bad bad bad.)
I should also mention I know this is a simple question that should have a simple answer, but I've googled up and down without finding it. I need your help.
Thanks.
It would be incredibly helpful if you could drop to the Terminal, use cd to navigate to the folder with the files that don't work because of permissions and then type: ls -l (those are both lowercase Ls back there).
If you start from your home folder, it'll look something like this:
macbookpro:~ artlogic$ cd Sites
macbookpro:Sites artlogic$ ls -l
total 8
drwxr-xr-x 6 artlogic staff 204 Mar 11 2008 images
-rw-r--r-- 1 artlogic staff 2628 Mar 11 2008 index.html
macbookpro:Sites artlogic$
Please paste the output into this thread. Knowing what Apache is changing the permissions to would help.
On a side note, Apache generally runs under a different user and permission level than the logged-in user, so if it's somehow creating or overwriting files, it may be changing the permissions that way.
