My rsync bash script isn't parsing my --exclude-from file that's generated via php, but it will if I manually create (as root) the same exact file locally. I've got a web interface on a Xubuntu 12.10 system that writes rsync --exclude-from files locally and then pushes them via rcp to our (CentOS 6) backup boxes that run the rsync script. (Please spare finger wagging about rcp... I know--don't have a choice in this case.)
Webpage writes file:
PHP:
file_put_contents($exclfile, $write_ex_val);
then pushes to the backup box from a local bash script on the webserver with:
Bash:
rcp -p /path/to/file/${servername}_${backupsource}.excl ${server}:/destination/path
I've compared the permissions and ownership of the hand-created file (that works) with the same file that's php/rcp'd (that doesn't), and they're both the same:
Bash:
stat -c '%a' server_backupsource_byhand.excl
644
stat -c '%a' server_backupsource_byphp.excl
644
ls -l server_backupsource_byhand.excl
-rw-r--r-- 1 root root 6 May 11 05:57 server_backupsource_byhand.excl
ls -l server_backupsource_byphp.excl
-rw-r--r-- 1 root root 6 May 11 05:58 server_backupsource_byphp.excl
In case it's relevant, here's my rsync line:
BASH:
rsync -vpaz -v --exclude-from=${exclfile} /mnt/${smbdir} /backup
I suspect php might be writing the file in a different format (e.g. UTF8 instead of ANSI), but I can't figure out how to test this, and have limited knowledge here.
Does anyone have any suggestions on how to get the php/rcp generated file to parse?
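One way to test for a format difference (a minimal sketch; the file names match the stat examples above and may need adjusting) is to compare the two files byte for byte:
Bash:
# file reports encoding/line-ending details, e.g. "ASCII text" vs "ASCII text, with no line terminators"
file server_backupsource_byhand.excl server_backupsource_byphp.excl
# dump every byte so a missing \n or a stray \r stands out
od -c server_backupsource_byphp.excl
# byte-for-byte comparison of the two files
cmp -l server_backupsource_byhand.excl server_backupsource_byphp.excl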
Using diff and file I found out that the php-written version wasn't writing the newlines. In my variable definition I had a stray "." after the variable, as in "$write_ex_val .". Removing that "." let PHP write the newlines. I also removed the space between the variable and "\n", although I'm not sure whether that contributed to the solution. I'd upvote the comment, Earl, but I don't think I have enough rep. Thanks again.
Here is my problem: I would like to create a directory and limit its size using this method.
The thing is, when I try it via the CLI it works perfectly (the file system is mounted on the newly created directory with its size limit), however when I put it in a bash script the directory is created but without its size limit.
Here is my script.sh:
#!/bin/bash
# Set default parameters
limited_directory_name=$1
size=$2
# Setting path
parent_path="/path/to/directory/parent/"
directory_path="$parent_path$limited_directory_name"
# Creating directory/mountpoint
mkdir "$directory_path"
#Creating a file full of /dev/zero
limited_size_file="$directory_path.ext4"
touch "$limited_size_file"
dd if=/dev/zero of="$limited_size_file" bs="$size" count=1
#Formatting the file
sudo mkfs.ext4 "$limited_size_file"
#Mount the disk
sudo mount -o loop,rw,usrquota,grpquota "$limited_size_file" "$directory_path"
I believe (pretty sure, actually) that the problem is in these last two lines:
sudo mkfs.ext4 "$limited_size_file"
and/or
sudo mount -o loop,rw,usrquota,grpquota "$limited_size_file" "$directory_path"
because, as I said, the file and directory are created, just not with the size limit.
Also, when I try to delete the directory ($directory_path/) after running those commands via the CLI, I get rm: cannot delete '$directory_path/': Device or resource busy, which I don't get when deleting it after running the script. So I guess the file system is not mounted when the script runs, and the problem is probably in those last two lines. I don't know if it has something to do with the way sudo is used inside a script, or just with mounting a file system inside a bash script.
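To see where it actually fails, one option (a sketch using the same variable names as the script above; the log path is an arbitrary choice) is to run the privileged commands non-interactively and capture their exit status and error output:
Bash:
# -n makes sudo fail immediately instead of hanging on a password prompt
sudo -n mkfs.ext4 "$limited_size_file" >>/tmp/limitdir.log 2>&1 || echo "mkfs failed with status $?" >>/tmp/limitdir.log
sudo -n mount -o loop,rw,usrquota,grpquota "$limited_size_file" "$directory_path" >>/tmp/limitdir.log 2>&1 || echo "mount failed with status $?" >>/tmp/limitdir.log
# confirm whether the loop file system really ended up mounted
mountpoint "$directory_path" >>/tmp/limitdir.log 2>&1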
I just want to say that I am fairly new to bash scripting, and I am sorry if my mistake is an obvious (noob) error. Feel free to tell me how I can improve my question, and I apologize if it's not clear enough.
And one last thing: I have tried different syntax for the last two lines, like:
sudo $(mkfs.ext4 "$limited_size_file")
or
sudo `mkfs.ext4 "$limited_size_file"`
or just
mkfs.ext4 "$limited_size_file" without sudo.
But nothing seems to work. I am using Debian 10, by the way, and I'm calling the script like this from a PHP page (if that helps):
exec("myscript.sh $dname $dsize");
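Since the script works from an interactive shell, a useful check (a sketch, assuming the web server runs as www-data and you have root access; the script path and arguments are examples) is to reproduce the call as that user so the errors become visible:
Bash:
# run the script exactly as PHP would, but with the output on your terminal
sudo -u www-data /full/path/to/myscript.sh testdir 100M
# check whether www-data may use sudo at all without a password
sudo -u www-data sudo -n -l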
I am trying a POC running a python script in a back-end implemented in PHP. The web server is Apache in a Docker container.
This is the PHP code:
$command = escapeshellcmd('/usr/local/test/script.py');
$output = shell_exec($command);
echo $output;
When I execute the python script through the back-end, I get a permission denied error when creating the file.
My python script:
#!/usr/bin/env python
file = open("/tmp/testfile.txt","w+")
file.write("Hello World")
file.close()
This is the error I'm getting:
IOError: [Errno 13] Permission denied: 'testfile.txt'
For the directory I'm working with, the permissions are as follows:
drwxrwsr-x 2 1001 www-data 4096 May 8 05:35 .
drwxrwxr-x 3 1001 1001 4096 May 3 08:49 ..
Any thoughts on this? How do I overcome this problem?
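A quick way to narrow this down (a sketch; run these through the same shell_exec() path, or with docker exec inside the container) is to check which user the PHP code actually runs as and what it can write to:
Bash:
# effective user and groups of the process running the script
id
# permissions on the target directory and on any file that already exists there
ls -ld /tmp
ls -l /tmp/testfile.txt
# current working directory, in case a relative path is being used somewhere
pwd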
To start, it is incredibly bad practice to use relative paths in any scripting environment. Start by rewriting your code to use a full path such as /usr/local/test/script.py and /tmp/testfile.txt. My guess is your script is attempting to write to a different spot than you think it is.
When you know exactly where the files are being written, go to that directory and run ls -la to check its permissions. You want it to be writable by the same user or group that the web server runs as.
Looking at the permissions you have shown, the owner (uid 1001) and the group www-data can write to that directory, but everyone else cannot. Make sure the web server process really runs as that user or as a member of www-data, and check the same thing for the directory the file is actually being written to; if the permissions don't line up, fix the ownership or add the missing write permission with chown/chmod.
I believe the problem is that you are trying to write to a file that already exists in the /tmp/ directory and is owned by a different user. Typically /tmp/ has the sticky permission bit set, which means only a file's owner (or root) can delete or rename it, so your process cannot simply remove a stale testfile.txt left behind by someone else and create its own; and unless that existing file grants your user or group write permission, it cannot be opened for writing either.
So if this is the contents of your /tmp
$ ls -al /tmp
drwxrwxrwt 5 root root 760 Apr 30 12:00 .
drwxr-xr-x 21 root root 4096 Apr 30 12:00 ..
-rw-r--r--  1 1001 1001      80 May  8 12:00 testfile.txt
then the web server user (www-data) cannot open testfile.txt for writing, because the file belongs to uid 1001 and grants no group or world write permission. It also cannot delete the stale file and recreate it, because . (the /tmp/ directory itself) has the sticky bit set (the t in the permissions section indicates this).
The sticky bit is set on /tmp/ so that everyone can create files there without having to worry that other users might delete or rename their temporary files.
To avoid permission errors, you can use the standard library tempfile module. This code will create a unique filename such as testfile.JCDxK2.txt, so it doesn't matter if testfile.txt already exists.
#!/usr/bin/env python
import tempfile
with tempfile.NamedTemporaryFile(
        mode='w',
        prefix='testfile.',
        suffix='.txt',
        delete=False,  # keep the file after the with block; its path is available as file.name
) as file:
    file.write("Hello World")
I am able to run PhantomJS from both the shell and exec() fine. The server I am using looks for additional fonts in the ~/.fonts/ directory. When those additional fonts are present, from the shell only I am able to take a screenshot with PhantomJS and the expected fonts render well.
> strace ~/public_html/api/libraries/phantomjs-2.1.1-linux-x86_64/bin/phantomjs ~/public_html/api/libraries/phantomjs-2.1.1-linux-x86_64/examples/rasterize.js http://v1.jontangerine.com/silo/typography/web-fonts/ ~/tmp/test.jpg | grep font
open("/home/user1/.fonts/TTF/verdana.ttf", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/AndaleMo.TTF", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/arial.ttf", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/cour.ttf", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/georgia.ttf", O_RDONLY) = 11
open("/home/user1/.fonts/TTF/impact.ttf", O_RDONLY) = 11
...
When I try the same command from exec(), the user fonts directory is not searched.
<?php
exec('strace ~/public_html/api/libraries/phantomjs-2.1.1-linux-x86_64/bin/phantomjs ~/public_html/api/libraries/phantomjs-2.1.1-linux-x86_64/examples/rasterize.js http://v1.jontangerine.com/silo/typography/web-fonts/ ~/tmp/test.jpg | grep font');
The ~/.fonts directory is not searched, but a screenshot is written to disk without the proper fonts being rendered.
I understand exec() to run as the Apache user so user fonts won't be used. However,
> whoami
user1
and
<?php
echo exec('whoami');
user1
both show as the same user, so I suspect this is misleading because this works perfectly (fonts and all) in the shell:
php -r "exec('~/public_html/api/libraries/phantomjs-2.1.1-linux-x86_64/bin/phantomjs ~/public_html/api/libraries/phantomjs-2.1.1-linux-x86_64/examples/rasterize.js http://v1.jontangerine.com/silo/typography/web-fonts/ ~/tmp/test.jpg');"
I understand setuid can allow users to exec a program with the permissions of its owner (user1), but this doesn't help. This particular server is a shared server, and su and sudo are disabled so running as a different user is not permitted.
Linux user configuration isn't my area of expertise, but how can I run the PhantomJS command via exec() so that the user fonts are included?
Research:
Running PhantomJS from PHP with exec() - the problem was with $PATH
PHP + PhantomJS Rasterize - This was a problem with HostGator
exec() and phantomjs issue with absolute paths - One answer suggested running the command as a cron which won't work. Also modifying /etc/sudoers is not possible.
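One way to spot what differs between the login shell and the environment exec() provides (a minimal sketch; the dump file names are arbitrary) is to capture both and diff them:
Bash:
# in the login shell
printenv | sort > /tmp/env_shell.txt
# from PHP: exec('printenv | sort > /tmp/env_php.txt');
# then compare the two dumps
diff /tmp/env_shell.txt /tmp/env_php.txt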
The problem revealed itself when printenv was run in the shell and via exec('printenv'); they returned, respectively,
HOME=/home/user1
and
HOME=/home/user1/tmp
This means that even though whoami and exec('whoami') both return user1, something in the shared host configuration sets the home directory to ~/tmp for PHP, presumably to sandbox the script.
The solution is to prepend export HOME=~; (not HOME=~;) to the exec command. i.e.
exec('export HOME=~; strace ~/public_html/api/libraries/phantomjs-2.1.1-linux-x86_64/bin/phantomjs ~/public_html/api/libraries/phantomjs-2.1.1-linux-x86_64/examples/rasterize.js http://v1.jontangerine.com/silo/typography/web-fonts/ ~/tmp/test.jpg | grep font');
This causes HOME to be set correctly, and the user fonts are now searched.
I'm having trouble using PHP to execute a casperjs script:
<?php
putenv("PHANTOMJS_EXECUTABLE=/usr/local/bin/phantomjs");
var_dump(exec("echo \$PATH"));
exec("/usr/local/bin/casperjs hello.js website.com 2>&1",$output);
var_dump($output);
Which results in the following output:
string(43) "/usr/gnu/bin:/usr/local/bin:/bin:/usr/bin:."
array(1) {
[0]=>
string(36) "env: node: No such file or directory"
}
The only stackoverflow posts I could find hinted that there's a problem with my paths, and that maybe the PHP user can't access what it needs.
I have also tried the following: sudo ln -s /usr/bin/nodejs /usr/bin/node
Does anyone know what I would need to do or change for this error to resolve?
Thanks
My guess is you have something, somewhere, that assumes node is installed.
First, are you running PHP from the command line, i.e. as php test.php in a bash shell? If so, you can run the commands below as they are. If it's through a web server, the environment can be different; I'd start by making a phpinfo(); script, and then run the troubleshooting commands through shell_exec() calls. But as that is a pain, I'd get it working from the command line first, and only mess around with this if the behaviour is different when run through a web server. (BTW, if you are running from a cron job, the environment can again be slightly different. But only worry about that if it works from the command line but not from cron.)
Troubleshoot hello.js
The easy one. Make sure your script does not refer to node anywhere. Also remember you cannot use node modules. So look for require() commands that should not be there.
Troubleshoot your bash shell
Run printenv | grep -i node to see if anything is there. But when PHP runs a shell command, some other files get run too, so check what is in /etc/profile and ~/.bash_profile. Also check /etc/profile.d/, /etc/bashrc and ~/.bashrc. You're basically looking for anything that mentions node.
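A quick way to do that search in one go (a sketch; adjust the file list to whatever your distribution actually uses):
Bash:
# look for anything that mentions node in the usual shell startup files
grep -n node /etc/profile /etc/bashrc ~/.bash_profile ~/.bashrc /etc/profile.d/*.sh 2>/dev/null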
Troubleshoot phantomjs/casperjs
How did you install phantomjs and casperjs? Are the actual binaries under /usr/local/bin, are they symlinks, or are they wrapper scripts pointing at the real binaries? E.g. on my machine:
cd /usr/local/bin
ls -l casperjs phantomjs
gives:
lrwxrwxrwx 1 darren darren 36 Apr 29 2014 casperjs -> /usr/local/src/casperjs/bin/casperjs
lrwxrwxrwx 1 darren darren 57 Apr 29 2014 phantomjs -> /usr/local/src/phantomjs-1.9.7-linux-x86_64/bin/phantomjs
And then to check each file:
head /usr/local/src/casperjs/bin/casperjs
head /usr/local/src/phantomjs-1.9.7-linux-x86_64/bin/phantomjs
The first tells me casper is actually a python script #!/usr/bin/env python, while the second fills the screen with junk, telling me it is a binary executable.
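If the head of your casperjs launcher (or of whatever it points to) instead shows something like #!/usr/bin/env node, that is exactly what produces env: node: No such file or directory. A couple of quick checks (a sketch):
Bash:
# which launcher is actually picked up, and what interpreter its first line asks for
head -n 1 "$(command -v casperjs)"
head -n 1 "$(command -v phantomjs)"
# is a node binary visible on the PATH your PHP output showed above?
command -v node || echo "node not found on PATH"
command -v nodejs || echo "nodejs not found on PATH"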
I am trying to back up all the files on our server using some SSH commands via PHP, and I have a script working to some extent.
The problem is that in the resulting archive only some of the folders actually contain any files, although the folder structure seems to be correct.
This is the script I am using:
<?php
$output = `cd /
ls -al
tar -cf /home/b/a/backup/web/public_html/archive.tar home/*`;
echo "<pre>$output</pre>";
?>
I can't even view the files via SSH commands; an example of this is the test account. If I use the following command, I am unable to view the website files.
<?php
$output = `cd /home/t/e/test/
ls -alRh`;
echo "<pre>$output</pre>";
?>
But if I use the same commands on a different account, I am able to see and download the website files.
Is this a permission problem or am I missing something in my script?
Thanks
First, check what /home actually is on your server:
ls -l / | grep home
the output will be like this:
lrwxr-xr-x 1 root wheel 8 Mar 30 14:13 home -> usr/home
In my case, the owner is root and root's primary group is wheel, so we add the www-data user to the wheel group so it can list the files in there:
usermod -a -G wheel www-data
You can download some files because they are located in directories that www-data has permission to read; where you can't, www-data has no permission on that directory.
I think it's a permission problem; try giving the Apache user (or whichever user you have configured) permission to read the /home/* directories.
To find the user name used by the Apache service, run this:
For linux:
egrep -iw --color=auto 'user|group' /etc/httpd/conf/httpd.conf
For FreeBSD:
egrep -iw --color=auto '^user|^group' /usr/local/etc/apache22/httpd.conf
My guess is that PHP is running in a chroot.
If you just want to execute a backup, consider doing it in a different language. Especially if it is just a sequence of UNIX commands, write a shell script. Perhaps more details on what this script will be used for, and on who is providing and maintaining your hosting, would be useful.
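For example, a minimal standalone script (a sketch only; the archive path is taken from the question and the log path is an arbitrary choice) that could run from cron as a user with read access to /home, instead of through the web server:
Bash:
#!/bin/bash
# archive /home and log anything tar could not read
archive=/home/b/a/backup/web/public_html/archive.tar
log=/home/b/a/backup/web/public_html/backup.log

# -C / makes the paths inside the archive relative to the filesystem root
tar -cf "$archive" -C / home 2> "$log"

# a non-zero status usually means some files were unreadable; check the log
echo "tar exited with status $?" >> "$log"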