scandir outside web tree using PHP5

I'm looking for a way to get a directory listing for files that are outside the webserver's tree; for example, I want to list all files and folders under the '/home' directory and put them in an array (just as scandir does).
I can 'sudo su' to a user that has the rights to read the directory contents, but I don't know how to turn into such an array the listing I could get from
exec('ls -la /home');
Or maybe this could be done with a bash script?

Difficult. The 'su' command does not take the password from stdin for security reasons, and that makes it pretty much impossible to use it from PHP.
You need to find an alternative.
Will the list be huge? Does it have to be real time?
Can you perhaps edit the crontab of that other user, who has permission to list the files? If so, you could schedule a command that writes the list of files to a text file, which PHP would then be able to read. Cron runs at most once every minute, but if your directory does not change very often, that would work.
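A minimal sketch of that idea, assuming the privileged user's crontab contains a line such as * * * * * ls -1a /home > /var/cache/homelist.txt (the path and schedule are placeholders):
<?php
// Read the listing produced by the other user's cron job and turn it
// into an array, similar to what scandir() would return.
$listFile = '/var/cache/homelist.txt';
if (!is_readable($listFile)) {
    die("List file not available yet.\n");
}
$entries = file($listFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
print_r($entries);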

Related

How can I find the real location of the tmp dir for a specific program on Ubuntu?

Ubuntu creates a separate /tmp directory for each user, so when Apache asks for /tmp it actually gets /tmp/systemd-private-654e145185f84f6ba097649873c88a9c-apache2.service-uUyzNh/tmp. (That code is different each time.) This is generally a good thing, but it’s annoying me now.
I want to create a bunch of PDF files (using TCPDF in PHP) in /tmp, then use shell_exec() in PHP to run the pdfunite script to create a single output PDF, then load that PDF into memory to serve it to the browser. My problem is that the pdfunite script doesn’t work, which I presume is because it’s not seeing the same path as the files are actually in.
Does PHP’s shell_exec() run code as the Apache user (www-data)? If so, I would assume that it would see the same /tmp dir, but in that case the pdfunite script should work. Is there a sensible workaround here, other than using a different directory and doing the cleanup myself?
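One possible workaround, sketched under the assumption that you can use a dedicated work directory outside /tmp, which both PHP and pdfunite then resolve identically (the /var/www/pdf-work path and the file names are placeholders):
<?php
// Use a private work directory so systemd's per-service /tmp never gets involved.
$workDir = '/var/www/pdf-work/' . uniqid('job_', true);
mkdir($workDir, 0700, true);

// ... TCPDF writes page1.pdf, page2.pdf, ... into $workDir here ...

$inputs = escapeshellarg("$workDir/page1.pdf") . ' ' . escapeshellarg("$workDir/page2.pdf");
$out = "$workDir/combined.pdf";
shell_exec("pdfunite $inputs " . escapeshellarg($out));

// Serve the merged PDF to the browser, then clean up.
header('Content-Type: application/pdf');
readfile($out);
array_map('unlink', glob("$workDir/*"));
rmdir($workDir);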

Running php script with cron: Could not open logfile, no such file or directory

I have a Raspberry Pi set up as a seedbox. I have a cron job that will run every 10 minutes, check for finished files using transmission-remote -l, grep for entries that are done (100%), get the names of the folders, copy these to the external drive, and then delete the original files on my Pi.
For every action that is done (a torrent is added, a torrent is finished, a file transfer has started, a file transfer has finished and the files have been deleted) an entry is written to my logfile.log, which is located in the same directory as all the scripts, /home/pi/dev/. Inside that folder I have a subfolder, logs, that keeps logfiles on all moves from the Pi to the external drive. These logfiles are all named after the folder/file being moved.
As I said, torrentfinished.php is run every 10 minutes through the cron job
*/10 * * * * php -f /home/pi/dev/torrentfinished.php
All output from the job is sent to my mail at /var/mail/pi.
Now, if I run the script manually, I can run it from anywhere by writing
php -f /home/pi/dev/torrentfinished.php
I have some debug lines written in right beneath the execution of each command. (I use shell_exec to run the commands because I'm more comfortable writing in php than bash).
It'll output
Started transfer
Wrote transfer to logfile
In logfile an entry is then added, with the text $timestamp : started transfer of data from torrent $torrentname. A separate file is created in logs/$torrentname.log. Basically, everything works perfectly.
However, when the cron job runs, I get the following output in /var/mail/pi
Unable to open logfile: No such file or directory
Started transfer
Wrote transfer to logfile
But as you've probably guessed, nothing happens. The files remain in their spot on the Pi and are not transferred. In addition, nothing is written to logfile or logs/$torrentname.log.
I've been racking my brain over this and have used chmod 777 on more files than could possibly be considered necessary or safe, simply to make sure this isn't a permissions issue. I may have missed something, of course, but I wouldn't think so. I've also tried renaming the file logfile to something else, but I still get the same error.
I have no more ideas on what to do, so if any of you have ideas, please do tell!
When you run this manually:
php -f /home/pi/dev/torrentfinished.php
you are in the /home/pi/dev/ directory, so the relative paths resolve and logfile is written under /home/pi/dev/logs.
When the script runs from cron, the working directory is different (it may be /bin or /usr/bin, for example), so those relative paths break.
Try using the __DIR__ or __FILE__ constants to build an absolute logfile path, as in the example below.
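For example (the variable names are taken from the question):
<?php
// Resolve paths relative to the script's own location rather than the
// current working directory, so they work the same under cron.
$logfile = __DIR__ . '/logfile.log';
$movelog = __DIR__ . '/logs/' . $torrentname . '.log';
file_put_contents($logfile, "$timestamp : started transfer of data from torrent $torrentname\n", FILE_APPEND);
file_put_contents($movelog, "$timestamp : transfer started\n", FILE_APPEND);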
It might help to see the actual PHP code. My first step here would be to have the code print the path to the logfile, so I could figure out why it thinks it doesn't exist (are you using a relative path or some environment variable? Cron tends to run in a sanitized environment).

Can I change the current shell directory from a PHP script? If so, how?

I'm working on some PHP cli tools for MediaWiki and I want to be able to cd into a directory of the wiki, from the shell. Can I do that from PHP? If not, is it possible to wrap the entire thing into something else that is capable of doing this?
For example:
~$ mewsh #coin cd
Should take me to the appropriate directory where coinwiki is located. chdir is not sufficient, as it will not change the directory after the PHP script has finished.
It's not a matter of whether or not you can do it from PHP: you can't do it from any spawned process. A spawned process is a new process (created usually with fork and exec), and changes to the working directory of the new process cannot affect the working directory of the parent process.
That said, if you want to get crazy you can: Here's a php script that chdirs to the parent directory and then replaces itself with bash. It can't change the directory of the parent process, but it can start a new shell with a new directory.
<?php
chdir('..');             // Chdir to the parent directory in this PHP process.
pcntl_exec('/bin/bash'); // Execute '/bin/bash' in the current process space.
// This is oversimplified in many ways, including that no
// arguments are passed to the shell.
For further hackery, if you're completely in control of how this script is invoked, you can exec it and replace the current shell with the new one.
~ $ exec php chdir.php
/Users $
But if you intend to share this script with someone, you shouldn't have these expectations; it's much saner to manage the working directory only within your own program.
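A common pattern for this kind of tool (an assumption about how mewsh could be structured, not something from the question) is to have the PHP script only print the target directory and let a small shell function do the cd, e.g. mewsh() { cd "$(php /path/to/mewsh.php "$1")"; }. The PHP side would then look something like:
<?php
// Hypothetical helper: resolve a wiki name to its directory and print it,
// so a shell wrapper can cd to it. The lookup table is a placeholder.
$wikis = array('coin' => '/var/www/coinwiki');
$name = isset($argv[1]) ? $argv[1] : '';
if (!isset($wikis[$name])) {
    fwrite(STDERR, "Unknown wiki: $name\n");
    exit(1);
}
echo $wikis[$name], "\n";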

Folder permissions when telling PHP to save a file to that folder?

I'm trying to use this Dagon Design PHP form to help a local non-profit publication enable their readers to submit photos. I've got the "mailer" part working -- the notifications work fine -- but the "saving a file to a folder" part isn't functioning.
On the form page, the author says "the directory must have write permissions," but I'm not sure "who" is writing to that folder -- is this PHP script considered "Owner" when it saves something on my site? Or do I need to allow save permissions for Owner, Group and Others?
I'm not sure why the script isn't saving the photos, but this seems like a good place to start. I've tried looking around on Stack for answers, but most questions seem to have to do with folder creation/permissions.
The page I'm clumsily trying to build is here, if that helps.
As Jon has said already, you don't want to allow write access to everyone.
It's also possible (depending on the hosting) that something like suEXEC is being employed - which will cause your PHP script to run as a user other than the webserver's (as reported by Dunhamzzz).
Probably your best approach, in my opinion, is a script calling whoami:
passthru('whoami');
Or alternatively you could try:
var_dump(posix_getpwuid(posix_geteuid()));
Bear in mind, this does give system information away to the world - so delete the script once you've used it!
Then, as you've correctly asserted in your question, it'll likely be the file permissions.
If you do have CLI access, you can update the permissions safely like so (the first command gets the group):
id -n -g <username>
chmod 770 <directory>
chown <username>:<group> <directory>
(You may have to prepend "sudo" to the "chown" command above, or find other means to run it as "root"; reply back if you get stuck.)
If you've not got access to run commands on the command line, you'll presumably be doing this via an (S)FTP client or the like. I'm afraid the options get a little too broad at that point; you'll have to figure it out (or reply back with the client you're using!)
As always, YMMV.
Finally, bear in mind if this is your own code, people will at some point try uploading PHP scripts (or worse). If that directory is accessible via a public URL ... you're opening the hugest of security holes! (.htaccess, or non-document root locations are your friend.)
If you are not sure how your server is configured (which determines who the final file owner will be), then add write permission for everyone (chmod a+w folder), upload one file, and run ls -l to see the owner. Then you can adjust the permissions to allow write access only to the right user.
The PHP script that saves the files is running with the privileges of some user account on the server; the specific account depends on your OS and the web server configuration. On Linux and when PHP is running as an Apache module this user is the same user that Apache runs as.
Solving your problem reduces to determining which user account we are talking about and then ensuring that this user has permission to write to the save directory (either as owner or as a member of the group; giving write access to everyone is not the best idea).
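A quick diagnostic along those lines (a sketch; the directory path is a placeholder):
<?php
// Report which account PHP runs as and whether it can write to the folder.
$dir  = '/path/to/upload/folder';
$user = trim(shell_exec('whoami'));
echo "PHP runs as: $user\n";
echo is_writable($dir) ? "$dir is writable\n" : "$dir is NOT writable\n";
// Delete this script when you are done; it gives away system information.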
You'll need to set the permissions of the directory to that of the webserver (probably Apache, nginx or similar), as that's what is executing the PHP.
You can quickly find out the apache user with ps aux | grep apache; then you want to set the permissions of the upload directory to that user, something like this:
chown -R www-data:www-data images/uploads

Zipping files through shell_exec in PHP: problem with paths

I'm working on a web backup service part of which allows the user to download a zip file containing the files they have selected from a list of what they have backed up.
The zipping process is done via a shell_exec command. For this to work I have given the apache user no-password sudo privileges to the zip command.
My problem is that, since I can't do "sudo cd /path/to/user/files" into the folder containing the user's backed-up files, I need to specify the full absolute path to each file to be zipped, and those absolute paths then end up inside the zip file, which I don't want.
Here's what I mean:
The user, 'test1' has their files backed up which are stored as follows
/home/backups/test1/my pics/pic1.jpg
/home/backups/test1/my pics/pic2.jpg
/home/backups/test1/my pics/pic3.jpg
/home/backups/test1/hello.txt
/home/backups/test1/music/band/song.mp3
When these files are zipped up they keep the full paths but I just want it to show:
my pics/pic1.jpg
my pics/pic2.jpg
my pics/pic3.jpg
hello.txt
music/band/song.mp3
Is there some way to tell zip to truncate the paths, or specify a custom path for each file?
Or is there some way I can change the working directory to the user's root backup folder, so I can simply specify relative file paths?
Thanks for reading, hope you got this far and it all makes sense!
I have to slightly question why you are making a potentially dangerous system shell call when there are a number of good PHP zipping classes around. The tutorial here shows how to easily create a class that will output a zip file to the browser. There are also a number of classes on phpclasses.org; this seems to be the best one.
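For instance, PHP's bundled ZipArchive extension can solve the path problem directly: the second argument to addFile() sets the name stored inside the archive, so the absolute path never appears in the zip. A sketch using the paths from the question, assuming the PHP process can read the backup files:
<?php
// Add each file under its path relative to the user's backup root.
$base = '/home/backups/test1';
$zip  = new ZipArchive();
$zip->open('/tmp/test1.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);
$iter = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($base));
foreach ($iter as $file) {
    if ($file->isFile()) {
        $abs = $file->getPathname();
        $rel = substr($abs, strlen($base) + 1); // e.g. "my pics/pic1.jpg"
        $zip->addFile($abs, $rel);
    }
}
$zip->close();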
If you have to do it with a system call my suggestions are:
To truncate the path for the file you could use symbolic links.
Could you not increase the permissions of the zip executable so that apache can use it without sudo?
Have you tried using chdir() to change the working directory?
chdir('/home/backups/test1/');
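For example (a sketch reusing the sudo zip setup from the question; the shell spawned by shell_exec inherits PHP's working directory, so the archive entries stay relative):
<?php
// Change into the user's backup root, then zip with relative paths.
chdir('/home/backups/test1');
shell_exec('sudo zip -r /tmp/test1.zip "my pics" hello.txt music');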
A better idea may be to make a shell script and grant the webserver sudo access to that shell script. This may be more secure, too:
#!/bin/bash
#
# Zip up files for a given user
#
# usage: backupToZipFile.sh <username>
cd "/home/backups/$1" || exit 1
tar -czf "/tmp/$1.tgz" .
Also have you considered running sudo as the user you're trying to zip files as, instead of as root? This would be more secure as well.
It seems like a simple option, but according to the man page it's just not there. You could of course symlink the stuff you want to archive into the location where you'll create the zip file (I assume you can actually cd to that directory). With the proper options, zip will not archive the symlinks themselves but the symlinked files instead, using the names of the symlinks.
E.g. create symlinks /tmp/$PID/my pics/pic1.jpg etc, and then zip everything in /tmp/$PID.
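A sketch of that approach (paths from the question; error handling omitted):
<?php
// Mirror the selected files as symlinks under a scratch directory, then zip
// from there; by default zip stores the target file under the link's name.
$scratch = '/tmp/' . getmypid();
mkdir("$scratch/my pics", 0700, true);
symlink('/home/backups/test1/my pics/pic1.jpg', "$scratch/my pics/pic1.jpg");
// ... one symlink per selected file ...
chdir($scratch);
shell_exec('zip -r /tmp/test1.zip .');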
