PHP and Linux IPC sockets (and Dropbox) - php

I need to get Dropbox's status on linux.
This is done by interacting with Dropbox using a unix socket file as IPC.
Currently, a Python script exists to do this.
I've got this code so far:
echo 'usr='. get_current_user().'<br/>';
$address='/root/.dropbox/iface_socket';
$socket=socket_create(AF_UNIX,SOCK_STREAM,0);
if(!socket_connect($socket,$address))
die('socket_connect '.socket_last_error().': '.socket_strerror(socket_last_error()));
The above works in that it does know what I want to do, but it fails with this error/output:
usr=root
socket_bind 13: Permission denied
It is interesting to note that both PHP and Dropboxd are running under the same user.
Note: I tried using PHP's fsockopen, but failed (something to do with "bad protocol"). Tried it again and this time it worked... until I hit the same permission error shown above...
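For reference, a unix-domain socket can also be opened through PHP's stream layer using the unix:// transport; a minimal sketch, assuming the same socket path as above:
<?php
// Sketch only: talk to the Dropbox control socket via the stream API.
// The port argument is ignored for unix:// transports.
$fp = fsockopen('unix:///root/.dropbox/iface_socket', -1, $errno, $errstr, 5);
if (!$fp) {
    die("fsockopen failed: $errno: $errstr");
}
// ...write whatever command dropboxd expects here, then read the reply...
fclose($fp);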
Edit: Again, I know running as root is bad, spare it, ok? :)
Edit 2: As I said earlier, PHP, Apache, Dropbox and this socket file are all under user "root", group "root".
However, if I run the socket under stat, I get the following:
[root@cov .dropbox]# stat iface_socket
File: `iface_socket'
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 7dh/125d Inode: 255754311 Links: 1
Access: (0600/srw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2011-03-06 17:10:08.000000000 -0600
Modify: 2011-03-06 17:10:08.000000000 -0600
Change: 2011-03-06 17:10:08.000000000 -0600
Could it be that all those -0600 values are what is causing this issue? Note that if I chmod 0777 iface_socket, only the Access: (0600/srw-------) line changes, but not the three timestamp lines underneath it.
Edit 3: I was wondering whether this topic would be better moved to a Unix/Linux site? At this point it's not clear which side is at fault here.
Edit 4: Just ran the PHP script through strace like this:
strace php -nef /var/www/html/index.php
The relevant lines from output:
socket(PF_FILE, SOCK_STREAM, 0) = 3
fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
connect(3, {sa_family=AF_FILE, path="/root/.dropbox/iface_socket"...}, 29) = 0
fcntl(3, F_SETFL, O_RDWR) = 0
close(3) = 0

In the rare case that Apache is running under the same user as Dropbox (as it is here), I would just use the Python command-line interface (/usr/bin/dropbox on Debian) as you normally would from a terminal.
root@DevServer1:~# dropbox help
Dropbox command-line interface
commands:

Note: use dropbox help <command> to view usage for a specific command.

 status       get current status of the dropboxd
 help         provide help
 puburl       get public url of a file in your dropbox
 stop         stop dropboxd
 running      return whether dropbox is running
 start        start dropboxd
 filestatus   get current sync status of one or more files
 ls           list directory contents with current sync status
 autostart    automatically start dropbox at login
 exclude      ignores/excludes a directory from syncing
The frontend script can only effectively be used by the user that dropbox is running under. Everybody else should get a "Dropbox isn't running!" output. In your case you should be able to manipulate Dropbox however you see fit from within PHP. Personally I run Dropbox as a restricted user other than my superuser. Using groups, you can safely link in folders at will and file permissions will be enforced.
<?php
$output = shell_exec('dropbox status');
echo "<pre>$output</pre>";
Output when run as any other user:
Dropbox isn't running!

A viable alternative.
Try this instead:
<?php
$output = shell_exec("ps aux | grep '[d]ropbox'");
echo "<pre>$output</pre>";

Related

Laravel Github Actions CI/CD - err: fatal: could not read Username for 'https://github.com': No such device or address [duplicate]

I have the following problem when I try to pull code using git Bash on Windows:
fatal: could not read Username for 'https://github.com': No such file or directory
I already tried to implement the accepted solution provided here:
Error when push commits with Github: fatal: could not read Username
However the problem still persists.
After adding/removing origin I still get the same error.
Could you advise on this issue?
Thanks!
Follow the steps to setup SSH keys here: https://help.github.com/articles/generating-ssh-keys
OR
git remote add origin https://{username}:{password}@github.com/{username}/project.git
Update: If you get "fatal: remote origin already exists." then you have to use set-url:
git remote set-url origin https://{username}:{password}@github.com/{username}/project.git
I faced the exact same problem. This problem occurred when I cloned a repo using HTTPS URL and then tried to push the changes (using Shell on Linux/Mac or Git Bash on Windows):
git clone https://github.com/{username}/{repo}.git
However, when I used SSH URL to clone, this problem didn't occur:
git clone git@github.com:{username}/{repo}.git
In case you already cloned the repo using HTTPS and don't want to redo everything, you may use set-url to change the origin URL to SSH URL:
git remote set-url origin git@github.com:{username}/{repo}.git
Note: I have SSH key added to my GitHub account. Without setting up SSH key, this method will not work either.
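If you are not sure the key is actually being picked up, GitHub's standard connectivity test is a quick way to verify it (not part of the original steps):
ssh -T git@github.com
# On success GitHub replies: "Hi <username>! You've successfully authenticated, but GitHub does not provide shell access."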
I found my answer here:
edit ~/.gitconfig and add the following:
[url "git#github.com:"]
insteadOf = https://github.com/
Although it solves a different problem, the error code is the same...
Just check the following:
Android Studio -> Preferences -> Version Control -> Git -> Use Credential Helper
For me, nothing suggested above worked. I use the git pull command from a Jenkins shell script, and apparently it picks up the wrong user name. I spent ages before I found a way to fix it without switching to SSH.
In the user's home folder, create a .git-credentials file (if you don't have one already) and put your credentials in the following format: https://user:pass@example.com. Then, in your .gitconfig file, link to those credentials; in my case it was:
[credential]
    helper = store --file /Users/admin/.git-credentials
Now git will always use those credentials no matter what. I hope it will help someone, like it helped me.
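The same setup can also be done from the command line; a sketch, assuming the credentials file lives in your home directory:
# Store the credential line (user and password should be URL-encoded) and point git at the file.
echo 'https://user:pass@example.com' >> ~/.git-credentials
git config --global credential.helper 'store --file ~/.git-credentials'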
If you want to keep using HTTPS instead of SSH, and avoid typing in your username and password for security reasons, you can also try a GitHub OAuth token. Then you can do
git config remote.origin.url 'https://{token}@github.com/{username}/{project}.git'
or
git remote add origin 'https://{token}@github.com/{username}/{project}.git'
This works for me!
Note that if you are getting this error instead:
fatal: could not read Username for 'https://github.com': No error
Then you need to update your Git to version 2.16 or later.
This error can also happen when trying to clone an invalid HTTP URL. For example, this is the error I got when trying to clone a GitHub URL that was a few characters off:
$ git clone -v http://github.com/username/repo-name.git
Cloning into 'repo-name'...
Username for 'https://github.com':
Password for 'https://github.com':
remote: Repository not found.
fatal: Authentication failed for 'https://github.com/username/repo-name.git/'
It actually happened inside Emacs, though, so the error in Emacs looked like this:
fatal: could not read Username for ’https://github.com’: No such device or address
So instead of a helpful error saying that there was no such repo at that URL, it gave me that, sending me on a wild goose chase until I finally realized that the URL was incorrect.
This is with git version 2.7.4.
I'm posting this here because it happened to me a month ago and again just now, sending me on the same wild goose chase again. >:(
TL;DR: check if you can read/write to /dev/tty. If not, and you used su to open the shell, check whether you used it correctly.
I was facing the same problem, but on Linux, and I found the issue. I don't have my credentials stored, so I always enter them at the prompt:
Username for 'https://github.com': foo
Password for 'https://foo#github.com':
Git handles http(s) connections through /usr/lib/git-core/git-remote-https.
You can see the strace here:
stat("/usr/lib/git-core/git-remote-https", {st_mode=S_IFREG|0755, st_size=1366784, ...}) = 0
pipe([9, 10]) = 0
rt_sigprocmask(SIG_SETMASK, ~[RTMIN RT_1], [], 8) = 0
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f65398bb350) = 18177
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
close(10) = 0
read(9, "", 8) = 0
close(9) = 0
close(5) = 0
close(8) = 0
dup(7) = 5
fcntl(5, F_GETFL) = 0 (flags O_RDONLY)
write(6, "capabilities\n", 13) = 13
fstat(5, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
read(5, "fetch\noption\npush\ncheck-connecti"..., 4096) = 38
write(6, "option progress true\n", 21) = 21
read(5, "ok\n", 4096) = 3
write(6, "option verbosity 1\n", 19) = 19
read(5, "ok\n", 4096) = 3
stat(".git/packed-refs", {st_mode=S_IFREG|0664, st_size=675, ...}) = 0
lstat(".git/objects/10/52401742a2e9a3e8bf068b115c3818180bf19e", {st_mode=S_IFREG|0444, st_size=179, ...}) = 0
lstat(".git/objects/4e/35fa16cf8f2676600f56e9ba78cf730adc706e", {st_mode=S_IFREG|0444, st_size=178, ...}) = 0
dup(7) = 8
fcntl(8, F_GETFL) = 0 (flags O_RDONLY)
close(8) = 0
write(6, "list for-push\n", 14) = 14
read(5, fatal: could not read Username for 'https://github.com': No such device or address
"", 4096) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=18177, si_uid=1000, si_status=128, si_utime=6, si_stime=2} ---
exit_group(128) = ?
+++ exited with 128 +++
So I tried to call it directly:
echo "list for-push" | strace /usr/lib/git-core/git-remote-https my
and the result:
poll([{fd=3, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 1 ([{fd=3, revents=POLLIN|POLLRDNORM}])
recvfrom(3, "\27\3\3\1\32", 5, 0, NULL, NULL) = 5
recvfrom(3, "\307|4Q\21\306\334\244o\237-\230\255\336\25\215D\257\227\274\r\330\314U\5\17\217T\274\262M\223"..., 282, 0, NULL, NULL) = 282
openat(AT_FDCWD, "/dev/tty", O_RDONLY) = -1 ENXIO (No such device or address)
openat(AT_FDCWD, "/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=2995, ...}) = 0
read(4, "# Locale name alias data base.\n#"..., 4096) = 2995
read(4, "", 4096) = 0
close(4) = 0
openat(AT_FDCWD, "/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, "fatal: could not read Username f"..., 83fatal: could not read Username for 'https://github.com': No such device or address
) = 83
exit_group(128) = ?
+++ exited with 128 +++
And here it came to me:
openat(AT_FDCWD, "/dev/tty", O_RDONLY) = -1 ENXIO (No such device or address)
...
write(2, "fatal: could not read Username f"..., 83fatal: could not read Username for 'https://github.com': No such device or address
) = 83
git-remote-https tries to read credentials via /dev/tty so I tested if it works:
$ echo ahoj > /dev/tty
bash: /dev/tty: No such device or address
But in another terminal:
# echo ahoj > /dev/tty
ahoj
I knew I had switched to this user using su, so I exited the shell to see how, and found out I had used the command su danman -, so I tested it again:
~# su danman -
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
/root$ echo ahoj > /dev/tty
bash: /dev/tty: No such device or address
I had probably ignored that message and continued working, but this was the reason.
When I switched using the correct su - danman everything worked fine:
~# su - danman
danman@speedy:~$ echo ahoj > /dev/tty
ahoj
After this, git started working correctly
I fixed this by installing a newer version of Git. The version I installed is 2.10.2 from https://git-scm.com. See the last post here: https://www.bountysource.com/issues/31602800-git-fails-to-authenticate-access-to-private-repository-over-https
With newer Git Bash, the credential manager window pops up and you can enter your username and password, and it works!
For those getting this error in a Jenkins pipeline, it can be fixed by using an SSH Agent plugin. Then wrap your git commands in something like this:
sshagent(['my-ssh-key']) {
    sh 'git remote set-url origin git@github.com:username/reponame.git'
    sh 'git push origin branch_name'
}
Tried everything here, didn't work.
However, when I tried to debug via git config --system --list, I got fatal: unable to read config file '/etc/gitconfig': No such file or directory.
So I did the following,
sudo ln -s $HOME/.gitconfig /etc/gitconfig
and voila, it works.
Short Answer:
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/{USER_NAME}/{REPOSITORY_NAME}.git
git push --set-upstream origin master
Ignore the first three lines if it's not a new repository.
Longer description:
Just had the same problem; as none of the above answers helped me, I decided to post the solution that worked for me.
A few notes:
The SSH key was generated.
The SSH key was added to GitHub, and I still had this error.
I made a new repository on GitHub for this project and followed the steps described.
As the command-line tool I used GitShell on Windows (I use Terminal.app on Mac).
GitShell is an official GitHub tool and can be downloaded from https://windows.github.com/
Here is a simple trick that can work:
Android Studio -> Settings -> Version Control -> Git -> Use Credential Helper
Have fun!
Replace your remote url like this:
git remote set-url origin https://<username>@github.com/<username>/<repo>.git
[SSH] executing...
fatal: could not read Username for 'https://github.com': No such device or address
[SSH] completed
[SSH] exit-status: 1
Build step 'Execute shell script on remote host using ssh' marked build as failure
Finished: FAILURE
Solution:
If the repository is private and belongs to another person or organisation, and you are a contributor to that repository, you can still run git commands from the Jenkins job without being prompted for a username and password by using this remote URL format:
https://username:token@github.com/accountname/reponame.git
In my case I had to set up a "personal access token" under GitHub:
Settings -> Developer settings
and enable SSO.
I am using GitHub Actions, where jobs are configured to accomplish a task. This solution applies to any CI (Jenkins, Travis, Circle) where the git command execution environment changes between steps.
Job 1:
Check out the repository with the GitHub Action actions/checkout@v2, using a personal access token (PAT). The job completes successfully and the repository is cloned.
Job 2:
Fetch the tags using git fetch --tags. The job fails with the same error:
fatal: could not read Username for 'https://github.com': No such file or directory
Reason for failure: when the job changes, the environment the command runs in also changes; even though the repository exists, GitHub credentials need to be provided to fetch the tags.
Solution: add this command in job 2 before fetching the tags:
git config remote.origin.url 'https://${{ your_personal_access_token }}@github.com/${{ github.repository }}'
Note: ${{ expression }} is the GitHub Actions YAML syntax for expanding environment variables and evaluating expressions.
This is an issue with your stored credentials in the system credential cache. You probably have the config variable credential.helper set to either wincred or winstore, and it is failing to clear it. If you start the Control Panel and launch the Credential Manager applet, look for items in the generic credentials section labelled git:https://github.com. If you delete these, they will be recreated next time, and the credential helper utility will ask you for your new credentials.
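If you prefer the command line over the Control Panel, these standard git config commands let you inspect or reset the helper:
git config --global credential.helper          # shows which helper (e.g. wincred, winstore, manager) is active
git config --global --unset credential.helper  # stop using the helper so Git prompts for credentials again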
Try using a normal Windows shell such as CMD.
Earlier, before I was granted permission to access the repo, I had also added my SSH pubkey to GitLab. By the time I could access the repo and run go mod vendor, the same problem as yours happened (maybe because of a cache):
go mod vendor
go: errors parsing go.mod:
/Users/macos/Documents/sample/go.mod:22: git ls-remote -q https://git.aaa.team/core/some_repo.git in /Users/macos/go/pkg/mod/cache/vcs/a94d20a18fd56245f5d0f9f1601688930cad7046e55dd453b82e959b12d78369: exit status 128:
fatal: could not read Username for 'https://git.aaa.team': terminal prompts disabled
After trying for a while, I decided to remove the SSH key and let the terminal prompt me for the username and password. Everything was fine after that!
Double check the repository URL; GitHub will prompt you to log in if the repo doesn't exist.
I'm guessing this is probably to check if it's a private repo you have access to. Possibly to make it harder to enumerate private repos. But this is all conjecture. /shrug
I had this problem in an SSH jail (jailkit), where /dev/tty was not present. I added the device file with jk_cp and the error went away.
I fixed this by invalidating caches and then doing a clean build:
Select 'File > Invalidate Caches / Restart' and then click the 'Invalidate and Restart' button.
Select Build > Clean Project, and then select Build > Rebuild Project.
Here is the simple cause for people who get this error: you are trying to pull from a repo you haven't yet set up your local git to access. For me, I got this error when I changed a public repo to a private one. It's clear to me now that a public repo needs no auth, yet a private one does.
Hope this helps clarify why the error occurs and one potential trigger for that cause.
From the Android Studio terminal, execute git pull.
Example: test@test:~/StudioProjects/dummy-project-android$ git pull
Add the user name when prompted.
Example: test@test:~/StudioProjects/dummy-project-android$ "username"
Add the user password:
Password for 'https://dummy-project-android@git.com': "password"
After that, we can perform the rest of the operations.
If your repo is private, then instead of this format
https://github.com/username/<project_name>.git
use this format:
git@github.com:username/<project_name>.git
You can get the correct URL under the SSH option:
Go to your GitHub repo -> Click on Code -> Select SSH -> Copy your repo URL
Hope this helps.
What worked for me is to change the access of the Git repository from private to public.

PHP can't see any files on a mounted filesystem

I have a new file server (FileServerB) that I have had up and running for a few months. I've been moving all my processing servers over to use FileServerB for their PHP code. I recently found a server (server name is susan) that I had missed, and it was still connecting to the old file server (FileServerA). When I mounted FileServerB on susan, none of the code would run on it anymore. Over SSH, when I go to a directory with PHP code in it and run "php cleanISL.php", it says this:
Fatal error: Unknown: Failed opening required 'cleanISL.php' (include_path='.:/local/online/live/common:/local/online/pear') in Unknown on line 0
If I create a new PHP file on the local filesystem, it runs just fine. If I try is_file() or is_dir() on a file or directory that I know exists on the mounted filesystem, it always returns false. However, I can glob directories, and it shows the files in there just fine.
I've tried changing (and removing completely) my include_path, I've tried going back to the old file server (which works) and then unmounting the old one and mounting the new one the exact same way (still doesn't work), and a few other things. I can't tell if the issue is with PHP, or somehow with the way the server is mounted, or something else. I've made sure SELinux is disabled. The issue seems to affect only PHP, only when I mount the new FileServerB, and only on this particular server (susan). But I'm baffled at what could be causing it or how to fix it.
Also, I have mounts to other servers (for data/media) on this same server (susan), and those work just fine, i.e., PHP can see and read files on those mounts too.
UPDATE 1, strace info
This is the relevant line of the strace on a failed is_dir:
stat64("/online/live/tools/test/fixes/", {st_mode=S_IFDIR|0775, st_size=4096, ...}) = 0
And this is the output of a stat command on the same dir:
File: `/online/live/tools/test/fixes/'
Size: 4096 Blocks: 8 IO Block: 32768 directory
Device: 18h/24d Inode: 11815229588 Links: 3
Access: (0775/drwxrwxr-x) Uid: ( 1004/ UNKNOWN) Gid: ( 1010/ UNKNOWN)
Access: 2016-07-06 07:28:46.606024801 -0600
Modify: 2016-06-24 16:23:42.206547505 -0600
Change: 2016-06-24 16:23:42.206547505 -0600
And this is a namei -m result:
dr-xr-xr-x /
lrwxrwxrwx online -> /mnt/code/online/
dr-xr-xr-x /
drwxr-xr-x mnt
drwxr-xr-x code
drwxr-xr-x online
drwxrwxrwx live
drwxrwxr-x tools
drwxrwxr-x test
drwxrwxr-x fixes
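A small diagnostic script along the lines of the checks described above may help narrow this down (a sketch; the path is the one from the strace output):
<?php
$path = '/online/live/tools/test/fixes/';
clearstatcache();
var_dump(is_dir($path));      // the check that keeps returning false
var_dump(@stat($path));       // false here points at PHP's stat layer rather than at include_path
var_dump(glob($path . '*'));  // globbing reportedly still lists the files
print_r(error_get_last());    // whatever warning PHP recorded for the failed stat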

Why does rename() return false despite moving a file successfully to an NFS mounted disk?

I'm getting an Invalid argument warning when moving a file from a local disk to an NFS mounted disk. The file is moved successfully despite the error message:
Warning: rename(/tmp/image.jpg,/mnt/remote.server-disk1/image.jpg): Invalid argument
The mounted disk:
$ df
remote.server:/disk1 917G 269M 871G 1% /mnt/remote.server-disk1
The exported disk on the remote server:
$ cat /etc/exports
/disk1 remote.server(rw,sync,root_squash,secure,crossmnt,anonuid=504,anongid=504)
The file on the local disk before rename():
$ stat /tmp/image.jpg
File: `image.jpg'
Size: 2105 Blocks: 8 IO Block: 4096 regular file
Device: 803h/2051d Inode: 33556339 Links: 1
Access: (0777/-rwxrwxrwx) Uid: ( 501/ apache) Gid: ( 501/ apache)
...
The file on the remote disk after rename():
$ stat /disk1/image.jpg
File: `image.jpg'
Size: 2105 Blocks: 8 IO Block: 4096 regular file
Device: 821h/2081d Inode: 34603214 Links: 1
Access: (0777/-rwxrwxrwx) Uid: ( 501/ apache) Gid: ( 501/ apache)
...
Any ideas?
Thanks
In Unix you can't rename or move a file between filesystems; instead you must copy the file from the source location to the destination location and then delete the source. This explains the error message you're getting. However, it's unclear why it still does the move; it could be related to permissions, or to the NFS-mounted disk being locally cached.
Maybe a bit late, but I think the answer is probably permissions on the target file system.
We had the same issue, and an strace gave us the proper diagnostic:
strace php -r "rename('SOURCE_FILE', 'DST_FILE');";
...
chown("SOURCE_FILE", 0, 0) = -1 EINVAL (Invalid argument)
write(2, "PHP Warning: rename(SOURCE_FILE"..., 192PHP Warning: rename(SOURCE_FILE): Invalid argument in Command line code on line 1
) = 192
write(1, "bool(false)\n", 12bool(false)
In our case, the target file system was NFS version 4, and idmap was enabled.
Even a simple
chown $(whoami) $DST_FILE
wasn't working.
It's the default behaviour (under Debian at least) to have idmap enabled with the nfs-common utils.
So even if you fix it by using a copy + unlink (which is the best approach by the way), it may still hide an issue in your underlying file system.
(to disable idmap : https://forums.aws.amazon.com/thread.jspa?threadID=235501)
Maybe it's because you're doing this with no quotes:
rename(/tmp/image.jpg,/mnt/remote.server-disk1/image.jpg);
try adding quotes
rename('/tmp/image.jpg', '/mnt/remote.server-disk1/image.jpg');
I couldn't resolve this, but copy() and then unlink() worked without error.
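For completeness, the copy() + unlink() workaround looks roughly like this (a sketch; the paths are the ones from the question):
<?php
$src = '/tmp/image.jpg';
$dst = '/mnt/remote.server-disk1/image.jpg';

// Copy across the filesystem boundary, then remove the source.
// copy() does not attempt the chown() call that the strace above shows failing.
if (copy($src, $dst)) {
    unlink($src);
} else {
    trigger_error("Failed to copy $src to $dst", E_USER_WARNING);
}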

Intermittent "too many open files" when including/requiring PHP scripts

On my development box (thank goodness it's not happening in production—that I know of—yet), as I'm working on a PHP site, I get this occasional error:
Warning: require_once(filename.php): failed to open stream: Too many open files in path/functions.php on line 502

Fatal error: require_once(): Failed opening required 'filename.php' (include_path='/my/include/path') in path/functions.php on line 502
Line 502 of functions.php is my "autoload" function which automatically requires PHP files containing classes I use:
function autoload($className)
{
require_once $className . ".php"; // <-- Line 502
}
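(For reference, a loader like this is normally hooked into PHP with spl_autoload_register; a minimal sketch, assuming the class files sit on the include_path:)
<?php
// Sketch: register the loader so that "new Foo()" triggers require_once "Foo.php".
spl_autoload_register(function ($className) {
    require_once $className . '.php';
});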
By an "occasional" error, I mean that it'll work fine for about a day of development, then when I first see this, I can refresh the page and it'll be okay again, then refreshing gives me the same error. This happens just a few times before it starts to show it every time. And sometimes the name of the file it's requiring (I have a script split out into several PHP files) is different... it's not always the first or last or middle files that it bombs on.
Restarting php-fpm seems to solve the symptoms, but not the problem in the long run.
I'm running PHP 5.5.3 on my Mac (OS X 10.8) with nginx 1.4.2 via php-fpm.
Running lsof | grep php-fpm | wc -l tells me that php-fpm has 824 files open. When I examined the actual output, I saw that, along with some .so and .dylib files, the vast majority of lines were like this:
php-fpm 4093 myuser 69u unix 0x45bc1a64810eb32b 0t0 ->(none)
The segment "69u" and the 0x45bc1a6481... number are different on each row. What could this mean? Is this the problem? (ulimit is "unlimited")
Incidentally (though perhaps unrelated), there's also one or two of these:
php-fpm 4093 myuser 8u IPv4 0x45bc1a646b0f97b3 0t0 TCP 192.168.1.2:59611->rest.nexmo.com:https (CLOSE_WAIT)
(I have some pages which use HttpRequest (PECL libraries) to call out to the Nexmo API. Are these not being closed properly or something? How can I crack down on those?)
Try setting these php-fpm options (they're set to unlimited by default) to values more appropriate for your needs.
For example:
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s
Maybe also set this if your app works with lots of files:
rlimit_files = 1024

Reading file from php module fails with errno 13

Good day.
There is a PHP module (.so) loaded within PHP. At the MINIT stage it tries to read a file.
The file is /tmp/aaa.txt.
The directory /tmp belongs to root and its permissions are set to 777.
The file /tmp/aaa.txt belongs to apache user and is also set to 777 permissions.
The module opens the file with VCWD_FOPEN(), which is defined as
#define VCWD_FOPEN(path, mode) virtual_fopen(path, mode TSRMLS_CC)
and eventually boils down to fopen().
VCWD_FOPEN() fails with errno 13 (Permission denied).
The strange thing is, if I invoke the module manually
( # php -r 'echo "hi";' ) it works.
But when it runs from Apache, it doesn't.
Does anybody know why?
Thank you
Found the problem.
The user permission policy was enforced by SELinux.
To disable it I typed:
#setenforce 0
#service httpd restart
It works now.
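For the record, you can confirm that SELinux really is the blocker before disabling it; these standard commands were not part of the original fix:
getenforce                                   # prints Enforcing / Permissive / Disabled
grep denied /var/log/audit/audit.log | tail  # recent AVC denials, if auditd is logging them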
