Read Windows Samba Share Directory with PHP from LAMP Server

I have a Samba share from a Windows network mounted to a directory on my Linux-based web server. I mounted the directory as follows:
mount -t cifs -o username=admin,password='passsword',domain=mydomain.local,file_mode=0644,dir_mode=0777,uid=client_user,gid=client_user '//192.168.0.x/d$' /home/client_user/mnt
The mount works and I can browse the files and directories in the OS. However, I want to access the share through a PHP script run from the browser, and any file operation on it results in a permission denied error. I have experimented a little and replaced the uid and gid parameter values with apache, but still no luck.
Any suggestions are much appreciated.
Edit
In further tests I created a file with the following code:
if (is_readable('/path/to/mnt')) {
    echo 'Readable';
} else {
    echo 'Not';
}
Running this from the command line on the server results in Readable being printed. I have run it as root and as a regular user on the server, but it does not work from the browser.
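For reference, here is a quick way to see which user the script actually runs as in each context (a small diagnostic sketch; it assumes PHP's posix extension is available, and '/path/to/mnt' is again a placeholder):
<?php
// Print the effective user, then repeat the readability test.
$user = posix_getpwuid(posix_geteuid());
echo 'Running as: ' . $user['name'] . PHP_EOL;
echo is_readable('/path/to/mnt') ? 'Readable' : 'Not readable', PHP_EOL;
From the command line this prints the logged-in user; from the browser it prints the web-server account, which is the user that actually needs access.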

So after some trial and error I worked out that SELinux was not permitting httpd access to the folders.
Running this command allows httpd to access cifs:
setsebool -P httpd_use_cifs on
However, further investigation revealed that I could set the httpd context on just the mounted folder. So I unmounted the drive and amended my mount command to include:
context="system_u:object_r:httpd_sys_rw_content_t:s0",
The full command:
mount -t cifs -o context="system_u:object_r:httpd_sys_rw_content_t:s0",username=admin,password='passsword',domain=mydomain.local,file_mode=0644,dir_mode=0777,uid=client_user,gid=client_user '//192.168.0.x/d$' /home/client_user/mnt
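To make the mount survive a reboot, the same options can go into /etc/fstab. A sketch with the same options as above; the credentials file is an extra assumption (it would hold the username=, password= and domain= lines, readable by root only, so the password stays out of the world-readable fstab):
//192.168.0.x/d$  /home/client_user/mnt  cifs  context=system_u:object_r:httpd_sys_rw_content_t:s0,credentials=/root/.smbcred,file_mode=0644,dir_mode=0777,uid=client_user,gid=client_user  0  0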

Related

Permission denied when ACL says rwx

I changed my server; the old one ran CentOS 8, the new one runs Ubuntu 20.04. Now my PHP scripts have a problem with permissions -- why?
Example:
current user: root
script was executed under user: nobody
Message: fopen(/tmp/RebuildCat_sequence.cnt): failed to open stream: Permission denied
Actually this file is owned by nobody, so I should not get this warning at all:
CS-1 01:47:22 :/tmp# ls -latr /tmp/RebuildCat_sequence.cnt
--wxrwxrwT+ 1 nobody nogroup 480 Oct 23 00:02 /tmp/RebuildCat_sequence.cnt
This PHP file is called by cron (as root), as a rule, and since I noticed that some similar cron-triggered files are owned by systemd-timesync, I issued
setfacl -m u:systemd-timesync:rwX /tmp
setfacl -m u:nobody:rwX /tmp
and even
setfacl -R -m u:nobody:rwX /tmp
setfacl -R -m u:systemd-timesync:rwX /tmp
to no avail. How should I understand this?
In my understanding, both users systemd-timesync and nobody should be able to read and write files in /tmp without problems thanks to the ACL. I think someone has to educate me here.
/tmp resides on /dev/md2
CS-1 01:32:34 :/tmp# tune2fs -l /dev/md2 | grep "Default mount options:"
Default mount options: user_xattr acl
No problem here, I guess.
CS-1 01:46:09 :/tmp# getfacl /tmp
getfacl: Removing leading '/' from absolute path names
# file: tmp
# owner: root
# group: root
# flags: --t
user::rwx
user:systemd-timesync:rwx
user:nobody:rwx
group::rwx
mask::rwx
other::rwx
Looks OK to me, as well. Clueless.
Addendum
OK, I set chmod 777 /tmp, but I still get the PHP error fopen(/tmp/RebuildCat_sequence.cnt): failed to open stream: Permission denied, while file_put_contents works with 777. How do I make sense of this? Why does fopen throw an error when file_put_contents does not?
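A likely explanation, judging from the ls output above: the owner bits on the file are -wx, so the owner (nobody) may write it but not read it. fopen with a mode that includes reading needs the read bit and fails; file_put_contents with FILE_APPEND needs only the write bit and succeeds. A small sketch; that the failing call used a read-write mode such as 'a+' is an assumption:
<?php
// The file is owner-writable but not owner-readable (--wxrwxrwT above).
$f = '/tmp/RebuildCat_sequence.cnt';
$fh = @fopen($f, 'a+');   // 'a+' asks for read AND write -> fails without the read bit
var_dump($fh);            // bool(false)
$n = file_put_contents($f, "x\n", FILE_APPEND);  // write-only -> succeeds
var_dump($n);             // int(2)
// fopen($f, 'a') would also succeed, since plain 'a' is write-only.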
Alas, chmod 777 /tmp in the console is not the answer. It is much more complicated than that. So I wasn't silly at all.
How to reproduce?
In my case, I have two Docker PHP containers writing to a number of case-specific log files, one of them using apache, the other nginx. Both have a mapping /tmp:/tmp, so we can look at a file from the host or from inside a container.
From inside the containers, the owner is apache:apache if the file was created in the apache container, or nginx:nginx in the nginx container, but from the host's perspective it is systemd-timesync:systemd-journal for the same file in both cases. (Only the numeric uid/gid is stored on disk; each side resolves the number against its own /etc/passwd, which is why the same file shows different owner names.)
Of course, the apache container does not have a user nginx and vice versa (neither does the host). So if my PHP script wants to write to a file created by the other container, I get the permission problem described above.
Fix
The remedy is easy if, at creation time, the owner is changed to nobody and the permissions to 666.
If not, we cannot change the owner from the other container and get failed to open stream: Permission denied. See https://serverfault.com/questions/772227/chmod-not-working-correctly-in-docker (see also the discussion below about changing permissions from the host via a system call).
So I wrote a wrapper function str_to_file to add this manipulation:
function str_to_file($str, $filename = "/tmp/tmp.txt", $mode = "a+") {
    if (!file_exists($filename)) {
        touch($filename);            # create the file
        chmod($filename, 0666);      # the owner can chmod
        chown($filename, 'nobody');  # the owner can chown
    }
    $flags = $mode == 'w'
        ? LOCK_EX                    # overwrite
        : FILE_APPEND | LOCK_EX;     # append (the default)
    file_put_contents($filename, $str . PHP_EOL, $flags);
} # str_to_file
This works for both the apache and nginx containers.
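Typical usage (the file name is just an example):
str_to_file('job started', '/tmp/mytask.log');       // append a line
str_to_file('fresh run', '/tmp/mytask.log', 'w');    // overwrite the file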
Explanation
How did I get into this mess anyway?
Either I had something set up differently on CentOS, in which case it most probably has nothing to do with my switch to Ubuntu. I cannot remember exactly what I did, but I don't recall having this kind of permission problem on CentOS.
Or CentOS handles files created in a container differently than Ubuntu. I am sure I never saw the user systemd-timesync:systemd-journal on CentOS. Unfortunately, I cannot investigate without too much effort which user CentOS would substitute from the host's point of view, and what consequences that has.
Oh wait, I remember I do have another server running PHP and nginx on CentOS. Creating a file in /tmp from within the container reveals that the owner, both inside the container and from the host, is nobody. So there is a significant difference between the two distributions.
What's more, this insight explains why I experienced this permission problem only after the switch from CentOS to Ubuntu, which I was forced into because my provider no longer offers CentOS, for obvious reasons.
By the way, I first tried the CentOS substitutes AlmaLinux and Rocky Linux, with bad results of a different nature, which finally forced me to use Ubuntu. This switch in turn revealed lots of problems, this one being the last; it has cost me several weeks so far. I hope this nightmare ends now.
When acting from cron, the apache version is and always was used exclusively, so no problem there.
Testing from the browser lately, I switched by chance, and for no particular reason, between the apache version and the nginx version, creating the problems described.
Actually, I don't know why the containers write on behalf of those users. The user in the container is root in both cases, as is standard with Docker. I guess it is the web-server engine that introduces these users.
Interestingly, when I tried to change permissions and ownership via a system call after creation, it failed, if I remember correctly, although the user should then be root, and root should be capable of doing that.
It turns out that on apache the system user [system('whoami')] is not root but apache, and the file owner is apache, whereas on nginx the system user is nobody and the file owner is nginx. So changing permissions with system should work on apache [system("chmod 0666 $filename"); system("chown nobody:nobody $filename");], but not on nginx. Alas, it does not work on apache either.
Quick check in apache container:
/tmp # whoami
root
/tmp # ls -la *6_0.ins
-rw-r--r-- 1 apache apache 494 Nov 14 18:25 tmp_test_t6_0.ins
/tmp # su apache chmod 0666 tmp_test_t6_0.ins
This account is not available
Sorry, I can't understand this (presumably the apache account's shell is set to nologin, hence the message; something like su -s /bin/sh apache -c 'chmod 0666 tmp_test_t6_0.ins' would be needed). And no idea about the ACL.
apache vs. nginx
Why do I use both web server versions in the first place?
Well, the process I invoke handles random data and may take a long time, so chances are that nginx times out. This is a well-known nginx feature.
Despite all my studies and the obvious instructions, I could not get nginx to behave. Geez!
Finally, as an appropriate workaround, I introduced apache, which does not have this problem. For decent runtimes it makes no difference, of course.
Well, it looks like I was kind of silly after all. I'm sure I double-checked the permissions on /tmp in WinSCP (1777), but I am unsure whether I did it in the console.
Now I issued chmod 777 /tmp in the console and there it is! The problem is gone. chmod 666 /tmp reintroduces the problem; chmod 676 /tmp is OK as well.
I googled quite a bit to educate myself about this topic, but frankly I don't understand it. In particular, why did I have the problem in the first place, and why did the ACL not solve it? I'd appreciate some enlightenment. (A plausible reading: on a directory carrying an ACL, the group bits in a chmod set the ACL mask. chmod 666 therefore caps every named ACL entry, including user:nobody:rwx, at rw-, stripping the search (x) bit needed to reach files inside /tmp; 777 and 676 both keep x in the group position, so the mask stays rwx and the ACL entries remain effective.)

Why can't an executable find a shared library only when run as an HTTP request

I have a PHP file that makes a shell_exec call. The shell_exec function runs a .sh file.
#!/bin/bash
# pick the most recently modified .jpg in the current directory
filename=$(ls *.jpg -Art | tail -n 1)
codegen_dir=/usr/local/codegen/
cd "$codegen_dir"
# run the classifier on that image and echo its output
out=$(./classifier /var/www/$filename)
echo $out
The executable 'classifier' exists in codegen_dir and has one shared-library dependency. The script runs correctly from the command line. The PHP file also runs correctly from the command line. However, when I run the PHP file as an HTTP request I get the following in std_err:
"./classifier: error while loading shared libraries: libreader.so: cannot open shared object file: No such file or directory"
The .so file is in the same directory as the executable.
My PHP server root is /var/www.
All files in the server root have the permissions: -rwxrwxrwx 1 www-data www-data
All files in codegen_dir have the permissions: -rwxrwxrwx 1 ubuntu www-data
I am able to read other files in codegen_dir.
The shared-library path might not be accessible to the apache user. You can allow the classifier program in the sudoers file for apache and use sudo to run the classifier application as the apache user:
out=$(sudo ./classifier /var/www/$filename)
Or
make the shared library and its path accessible to all users by changing their permissions.
To test, log in as the apache user and run the above script, or try to access the shared library directly:
su -s /bin/bash apache
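Another option, not part of the original answer (a sketch; it assumes the dynamic linker simply cannot find libreader.so in Apache's environment because codegen_dir is not in the default search path): point LD_LIBRARY_PATH at the directory in the command PHP executes.
<?php
// Sketch: tell the dynamic linker where libreader.so lives.
// $filename stands in for the image the .sh script picks.
$filename = 'example.jpg';
$cmd = 'cd /usr/local/codegen && LD_LIBRARY_PATH=/usr/local/codegen'
     . ' ./classifier ' . escapeshellarg('/var/www/' . $filename) . ' 2>&1';
echo shell_exec($cmd);
A permanent alternative is to drop a file containing /usr/local/codegen into /etc/ld.so.conf.d/ and run ldconfig, so no environment variable is needed.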

Copy remote file with rsync in php

I'm trying to execute a command (rsync) from PHP to copy folders and files from a remote server to a local folder.
This is the code I wrote in PHP. The command WORKS over SSH (local terminal, and remotely with putty.exe), copying the folders and the files correctly.
But it doesn't work in PHP. What can I do? Do you know a better (more secure/optimal) way to do this?
exec("echo superuserpassword | sudo -S sshpass -p 'sshremoteserverpassword' rsync -rvogp --chmod=ugo=rwX --chown=ftpuser:ftpuser -e ssh remoteserveruser#remoteserver.com:/path/files/folder /opt/lampp/htdocs/dowloadedfiles/", $output, $exit_code);
EDIT:
I read this guide to set up passwordless SSH between my server and my local machine.
Now I can log in to my remote machine over ssh without a password.
I changed my command:
rsync -crahvP --chmod=ugo=rwX --chown=ftpuser:ftpuser remote.com:/path/to/remote/files /path/to/local/files/
This command works in the terminal too, but when I send it with PHP's exec command, it fails again with a different error: exit code 127 (command not found).
As MarcoS suggested in his answer, I checked the error log.
The messages are this:
ssh: relocation error: ssh: symbol EVP_des_cbc, version OPENSSL_1.0.0 not defined in file libcrypto.so.1.0.0 with link time reference
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: remote command not found (code 127) at io.c(226) [Receiver=3.1.1]
Well, after a lot of trial and error, I ended up cutting the problem off at the root:
I read this guide (like the previous one, but better explained), moved the PHP file that executes the rsync command to the remote server (where the files are located), and ran the rsync.php file there; it worked perfectly.
To execute on the machine with the files (the files to copy and rsync.php):
1.- ssh-keygen generates keys
ssh-keygen
Enter an empty passphrase and repeat empty passphrase again.
2.- ssh-copy-id copies public key to remote host
ssh-copy-id -i ~/.ssh/id_rsa.pub remoteserveraddressip(xxx.xxx.xxx.xxx)
The rsync.php file:
exec("rsync -crahvP /path/in/local/files/foldertocopy remoteuser#remoteserveraddress:/path/in/remote/destinationfolder/", $output, $exit_code);
After all of that, navigate to the rsync.php file and everything should work. At least it worked for me...
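When exec fails like this, capturing the output and exit code saves a lot of guesswork. A small sketch along the lines of the call above; the 2>&1 redirect is the addition, so stderr also lands in $output:
<?php
exec("rsync -crahvP /path/in/local/files/foldertocopy remoteuser@remoteserveraddress:/path/in/remote/destinationfolder/ 2>&1", $output, $exit_code);
if ($exit_code !== 0) {
    echo "rsync failed with exit code $exit_code:\n" . implode("\n", $output);
}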
I suppose you are experiencing identity problems... :-)
On the CLI, you are running the command as the logged-in user.
In PHP, you are running the command as the user your web server runs as (apache often runs as www-data, or as an apache user...).
One possible solution I see (if the above is the problem's real cause) is to add your user to the web server's group...
I'd also suggest you check the web server's error logs, to be sure about the real cause of the problem... :-)
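A quick way to confirm the identity difference (a one-line sketch): put this in a script and run it once from the CLI and once through the browser:
<?php echo exec('whoami');  // CLI: your login user; browser: the web-server account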

Docker cannot write to assets folder as web-server process in Yii/PHP application locally on OS X

I'd appreciate it if you could help me.
I'm running a Docker VM on Mac OS X and everything seems okay until I hit a permissions error when my app tries to write files to the assets folder on the server:
CAssetManager.basePath "/var/www/html/assets" is invalid. Please make sure the directory exists and is writable by the Web server process.
I ran ls -l in the Docker container's shell ($ docker exec container) and saw that my folder's permissions are set to
drwxrwxrwx 1000 staff assets. Following that, I tried to set it to www-data, as I thought it might work, so I ran usermod -u 1000 www-data. Now the folder shows drwxrwxrwx www-data staff assets, but the error persists.
In the container's shell, I also tried to run chmod and chown commands, but I get these errors:
chown: changing ownership of 'assets': Read-only file system
chmod: changing permissions of 'assets': Read-only file system
How can I make the directory writable by the web-server process in Docker?
UPDATE:
$ docker ps returns
$ docker info returns
UPDATE 2:
$ docker inspect returns
http://pastebin.com/wM3tT51v
Looking at the docker inspect output
"Mounts": [
{
"Source": "/Users/joelkoh/Sites/merrymaker/php-app",
"Destination": "/var/www/html",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
It looks like your directory is mounted read-only. I'm not familiar with Elastic Beanstalk, but you will need to change that volume so it isn't read-only (for a plain bind mount that means :rw, or simply omitting :ro, in the -v source:destination:mode flag).
You might consider using http://docker-sync.io for mounting shares. Since it syncs instead of mounting, it solves the user-permission problems by mapping your desired uid/gid into the container: https://github.com/EugenMayer/docker-sync/blob/master/example/docker-sync.yml#L47
This way you get a very performant share and a proper user mapping, so you never have to care about permission issues in the container (for host-mounted folders).

Error in using gammu in php exec

I successfully installed Gammu on Ubuntu 11 and can send a text message from the command line:
echo "TEXTMESSAGE" | gammu sendsms TEXT mobilenumber
My problem is that when I use the exec function in my PHP script, I always get the following errors:
Warning: No configuration file found!
Warning: No configuration read, using builtin defaults!
Error opening device, it doesn't exist.
Thanks for the help
You are missing the .gammurc file, and the built-in defaults fail to detect your device.
Try running gammu-detect. It should print something along the lines of:
[gammu]
device = /dev/ttyUSB0
name = Phone on USB serial port HUAWEI_Technology HUAWEI_Mobile
connection = at
If that does not work, run gammu-config and set up the port and connection manually.
I just resolved a similar problem. In my case gammu was executed as the nagios user, so it was not able to find the configuration file until I placed it in /etc/gammurc.
According to the Gammu documentation, on Linux, macOS, BSD and other Unix-like systems the config file is searched for in the following order:
$XDG_CONFIG_HOME/gammu/config
~/.config/gammu/config
~/.gammurc
/etc/gammurc
My file was in /home/user/.gammurc, but when I executed it as the nagios user, ~ was a different directory, so gammu was not able to find it.
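If relocating the file is inconvenient, gammu also accepts an explicit config path with -c, which sidesteps the home-directory lookup entirely. A sketch of calling it that way from PHP; the number and message are placeholders:
<?php
// Pass the config explicitly so the web-server user needs no home directory.
$msg = escapeshellarg('TEXTMESSAGE');
exec("echo $msg | gammu -c /etc/gammurc sendsms TEXT 0123456789 2>&1", $output, $rc);
echo implode(PHP_EOL, $output), PHP_EOL, "exit code: $rc";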
Now, permissions:
To give your user access to /dev/ttyUSB0 (use your path), add the nagios user (www-data or whatever it is in your case) to the dialout group this way:
sudo usermod -a -G dialout nagios
Then set the SUID bit on gammu to allow nagios (www-data in your case) to execute it on behalf of root:
sudo chmod 4755 /usr/bin/gammu
Or try executing gammu as root (you could use the su command).
Hope this is useful.
You can change the path of .gammurc by doing this:
Copy the file (.gammurc) from the home directory and paste it into /etc:
cp .gammurc /etc/gammurc
Don't forget to remove the dot.
I use this on a Raspberry Pi; the gammu directory may differ in your environment.
