Laravel permission denied in storage/meta/services.json - php

Well, this error is well known, but in my case I could not fix it on my side. I migrated a Laravel 4 installation to another server, and on the first access I get this error:
file_put_contents(/var/www/html/MyApp/app/storage/meta/services.json): failed to open stream: Permission denied
I have followed several answers found by googling, such as:
https://stackoverflow.com/a/17971549/1424329
Can't make Laravel 4 to work on localhost
http://laravel.io/forum/05-08-2014-failed-to-open-stream-permission-denied
Laravel 4: Failed to open stream: Permission denied
However, none of them fixed my problem. I also tried clearing the cache, loosening permissions, and dumping the autoloader:
php artisan cache:clear
chmod -R 777 app/storage
composer dump-autoload
I also suspected that the web server process might be involved, so I looked up its user like this:
$ ps -ef|grep httpd
apache 11978 11976 0 11:14 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
Then I added apache as the directory's group owner, but the problem persists.
I do not know what else to do; I am going insane, because even dancing naked under the full moon did not fix the problem.

I have discovered the cause of this problem. It looks like SELinux does not allow the httpd service (the Apache web server) to write in my app folder. So I did:
setsebool -P httpd_unified 1
Now everything is working fine!
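For completeness: httpd_unified is a fairly broad boolean. A narrower alternative, sketched here under the assumption of the default targeted policy with semanage installed (the path is taken from the error message above), is to relabel only the directory Laravel actually writes to:

```shell
# Mark just the storage tree as web-writable content and apply the label:
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/MyApp/app/storage(/.*)?"
restorecon -Rv /var/www/html/MyApp/app/storage

# Recent SELinux denials can be inspected with:
#   ausearch -m AVC -ts recent
```

Either approach works; the relabel confines write access to the one directory instead of every httpd-related location.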

This happens to me sometimes, but I just delete the file and Laravel recreates it. As far as I know, it is just a cached list of services and can be safely removed.


Permission denied when acl says rwx

I changed my server; the old one ran CentOS 8, the new one runs Ubuntu 20.04. Now my PHP scripts have a permission problem -- why?
Example:
current user: root
script was executed under user: nobody
Message: fopen(/tmp/RebuildCat_sequence.cnt): failed to open stream: Permission denied
Actually this file is owned by nobody, so I should not get this warning at all:
CS-1 01:47:22 :/tmp# ls -latr /tmp/RebuildCat_sequence.cnt
--wxrwxrwT+ 1 nobody nogroup 480 Oct 23 00:02 /tmp/RebuildCat_sequence.cnt
This PHP file is normally called by cron (as root), and since I noticed that some similar cron-triggered files are owned by systemd-timesync, I issued
setfacl -m u:systemd-timesync:rwX /tmp
setfacl -m u:nobody:rwX /tmp
and even
setfacl -R -m u:nobody:rwX /tmp
setfacl -R -m u:systemd-timesync:rwX /tmp
to no avail. How should I understand this?
In my understanding, both users systemd-timesync and nobody should be able to read and write files in /tmp without problems, thanks to the ACL. I think someone has to educate me here.
/tmp resides on /dev/md2
CS-1 01:32:34 :/tmp# tune2fs -l /dev/md2 | grep "Default mount options:"
Default mount options: user_xattr acl
No problem here, I guess.
CS-1 01:46:09 :/tmp# getfacl /tmp
getfacl: Removing leading '/' from absolute path names
# file: tmp
# owner: root
# group: root
# flags: --t
user::rwx
user:systemd-timesync:rwx
user:nobody:rwx
group::rwx
mask::rwx
other::rwx
Looks OK to me, as well. Clueless.
Addendum
OK, I ran chmod 777 /tmp, but I still get the PHP error fopen(/tmp/RebuildCat_sequence.cnt): failed to open stream: Permission denied, while file_put_contents works with 777. How can that be? Why does fopen throw an error, but file_put_contents does not?
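One detail in the ls output above may already explain this last question: the owner class is -wx (no read bit), and ACL entries for named users never apply to the file's owner. If the script opens the file with a read mode such as 'r' or 'a+' (an assumption; the question does not show the fopen call), the read would be denied for owner nobody, while the write-only open behind file_put_contents succeeds. A minimal sketch of such a mode:

```shell
# Create a file whose owner can write but not read (0377 = --wxrwxrwx),
# mirroring the --wxrwxrwT owner bits shown in the listing above.
f=$(mktemp)
chmod 0377 "$f"
mode=$(stat -c '%A' "$f")
echo "$mode"   # --wxrwxrwx
```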
Alas, chmod 777 /tmp in the console is not the answer. It is much more complicated than that. So I wasn't silly at all.
How to reproduce?
In my case, I have two Docker PHP containers writing to a number of case-specific log files, one using apache, the other nginx. Both have a mapping /tmp:/tmp, so we can look at a file from the host or from inside a container.
From inside the containers, the owner is apache:apache if created in the apache container or nginx:nginx in the nginx container, but from the host's perspective it is systemd-timesync:systemd-journal for the same file in both cases.
Of course, the apache container does not have a user nginx and vice versa (neither does the host). So if my PHP script wants to write to a file created by the other container, I have said permission problem.
Fix
The remedy is easy if, at creation time, the owner is changed to nobody and the permissions to 666.
If not, we cannot change the owner from the other container and get failed to open stream: Permission denied. See https://serverfault.com/questions/772227/chmod-not-working-correctly-in-docker (also see the discussion about changing permissions from host via system call below).
So I wrote a wrapper function str_to_file to add this manipulation:
function str_to_file($str, $filename = "/tmp/tmp.txt", $mode = "a+") {
    if (!file_exists($filename)) {
        touch($filename);           // create the file
        chmod($filename, 0666);     // the owner can chmod
        chown($filename, 'nobody'); // the owner can chown
    }
    // Map the fopen-style mode onto file_put_contents() flags.
    $flags = $mode == 'w'
        ? LOCK_EX
        : FILE_APPEND | LOCK_EX;
    file_put_contents($filename, $str . PHP_EOL, $flags);
} // str_to_file
This works for both the apache and nginx containers.
Explanation
How did I get into this mess anyway?
Either I had set things up differently on CentOS -- I cannot remember what I did, but I do not recall having this kind of permission problem there -- in which case it most probably has nothing to do with my switch to Ubuntu.
Or CentOS handles files created in a container differently than Ubuntu does. I am sure I never saw the owner systemd-timesync:systemd-journal on CentOS. Unfortunately, I cannot investigate without considerable effort which user CentOS would substitute from the host's point of view and what consequences that has.
Oh wait, I remember I do have another server running PHP and nginx on CentOS. Creating a file in /tmp from within the container reveals that the owner, both inside the container and from the host, is nobody. So there is a significant difference between the two distributions.
What's more, this insight explains why I experienced this permission problem only after the switch from CentOS to Ubuntu, a switch I was forced into because my provider no longer offers CentOS, for obvious reasons.
By the way, I first tried the CentOS substitutes AlmaLinux and Rocky Linux, with bad results of various kinds, which finally forced me to use Ubuntu. This switch in turn revealed lots of problems, this one being the latest; it has cost me several weeks so far. I hope this nightmare ends now.
When running from cron, the apache version is (and always was) used exclusively, so there is no problem there.
Testing from the browser recently, I switched by chance, and for no particular reason, between the apache version and the nginx version, creating the problems described.
Actually, I don't know why the containers write on behalf of those users. The user in the container is root in both cases, as is standard with Docker. I guess it is the web server inside the container which introduces these users.
Interestingly, when I tried to change permissions and ownership via a system() call after creation, it failed, if I remember correctly, although the user should then be root, and root should be capable of doing that.
It turns out that in the apache container the system user [system('whoami')] is not root but apache, and the file owner is apache, whereas in the nginx container the system user is nobody and the file owner is nginx. So changing permissions with system() should work on apache [system("chmod 0666 $filename"); system("chown nobody:nobody $filename");], but not on nginx. Alas, it does not work on apache either.
Quick check in apache container:
/tmp # whoami
root
/tmp # ls -la *6_0.ins
-rw-r--r-- 1 apache apache 494 Nov 14 18:25 tmp_test_t6_0.ins
/tmp # su apache chmod 0666 tmp_test_t6_0.ins
This account is not available
Sorry, I can't understand this. And no idea about ACL.
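"This account is not available" is most likely not a permission problem at all: su tries to start the target account's login shell, and service accounts like apache normally have that set to a nologin shell. A sketch (using nobody, which exists on most systems; the commented command mirrors the one above):

```shell
# Show the login shell of a service account:
shell=$(getent passwd nobody | cut -d: -f7)
echo "$shell"   # typically /usr/sbin/nologin, /sbin/nologin or /bin/false

# Workaround: hand su an explicit shell for the one command:
#   su -s /bin/sh apache -c 'chmod 0666 tmp_test_t6_0.ins'
```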
apache vs. nginx
Why do I use both web server versions in the first place?
Well, the process I invoke processes random data which may take a long time, so chances are that nginx times out. This is a well-known nginx feature.
Despite all my studies and the obvious instructions, I could not get nginx to behave. Geez!
Finally, as an appropriate workaround, I introduced apache, which does not have this problem. For decent runtimes it makes no difference, of course.
Well, it looks like I was kind of silly. I'm sure I double-checked the permissions on /tmp in WinSCP (1777), but I am unsure whether I did it in the console.
Now I issued chmod 777 /tmp in the console and there it is! The problem is gone. chmod 666 /tmp reintroduces the problem; chmod 676 /tmp is OK as well.
I googled quite a bit to educate myself about this topic, but frankly I don't understand it. In particular why did I have the problem in the first place and why did ACL not solve the problem? I'd appreciate some enlightenment.
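A plausible explanation for the chmod 666 / 676 / 777 observations, assuming the ACL shown earlier is still in place: when a file or directory carries an ACL, the middle digit that chmod sets acts as the ACL mask, which caps every named-user entry. chmod 666 therefore caps user:nobody:rwx at rw-, silently removing the execute (search) bit a process needs on every directory it traverses; 777 and 676 keep x in the mask. The search-bit effect itself can be sketched with plain modes:

```shell
# A directory needs the execute (search) bit, not just write,
# before anything inside it can even be looked up by name.
d=$(mktemp -d)
chmod 0666 "$d"               # rw- for everyone: no search permission
dmode=$(stat -c '%A' "$d")
echo "$dmode"                 # drw-rw-rw-
chmod 1777 "$d"               # the usual /tmp mode: rwx for all + sticky bit
echo "$(stat -c '%A' "$d")"   # drwxrwxrwt
```

This also fits the getfacl output above, where mask::rwx was what kept user:nobody:rwx effective.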

Laravel Sail - No docker-compose.yml file found, using WSL 2

After successfully making WSL2 work with Docker Desktop (v3.1.0)
NAME STATE VERSION
* Ubuntu-20.04 Running 2
docker-desktop Running 2
docker-desktop-data Running 2
I followed steps from the Laravel Docs: https://laravel.com/docs/8.x#getting-started-on-windows but the error:
[ErrorException]
file_get_contents(): Read of 8192 bytes failed with errno=13 Permission denied
showed up, and I couldn't find the docker-compose.yml file in the directory either, after running ./vendor/bin/sail up:
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
This is probably a Linux permission problem. The safest way to fix it is to chown the directory so it is owned by the same user/group that the PHP process runs under.
The default user is called sail: https://github.com/laravel/sail/blob/1.x/runtimes/8.0/Dockerfile#L46
And the group is read from a .env variable called WWWGROUP
So what you want to do is open your WSL2 console, cd to your project.
And run
chown -R sail:<your WWWGROUP value from .env> .
. means current directory.
Many beginners would do something like chmod 777; however, this is bad practice.
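The two steps above (read WWWGROUP from .env, then chown) can be sketched as follows; the throwaway .env and the group value 1000 are stand-ins for your project's real ones:

```shell
# Stand-in project directory with a minimal .env:
cd "$(mktemp -d)"
printf 'WWWGROUP=1000\n' > .env

# Extract the group and build the chown target:
WWWGROUP=$(grep -E '^WWWGROUP=' .env | cut -d= -f2)
echo "sail:$WWWGROUP"   # sail:1000

# The actual fix, run from your project root in the WSL2 console:
#   chown -R "sail:$WWWGROUP" .
```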

SQLSTATE[HY000]: General error: 13 Can't get stat of './pics' (Errcode: 13 - Permission denied)

I was working on my Ubuntu 16.04 server, on a Symfony 3.4 app.
I accidentally ran a bad command:
sudo chown -R USER /var/
while I meant to enter:
sudo chown -R USER var/
Since then, I can't access my database.
My Symfony app tells me:
An exception occurred in driver: SQLSTATE[HY000] [1049] Unknown database 'pics'
And when I try to create a new database using Doctrine, I get this error:
SQLSTATE[HY000]: General error: 13 Can't get stat of './pics' (Errcode: 13 - Permission denied)
I don't know how my database could be deleted like this.
Can somebody help me?
If you updated the user on /var, then the /var/lib/mysql directory is owned by the wrong user, and the mysqld process cannot write to that directory (and possibly not read it).
You can likely restore permissions for the database by:
cd /var/lib
chown -R mysql:mysql mysql
(Note: assuming the use of the default process owner and default directory locations)
I would likely then restart the mysql process.
However, you may have multiple other issues, including /var/run not having all of the correct owners, and thus while the system may be semi-stable at the moment, a reboot could fail very badly.
While one can, as a comment noted, bypass the issue by allowing full read-write via chmod 777, that simply opens up the system in a way that is not secure. By losing the proper permission sets, you would add another layer of problems.
The correct approach is to fix the ownership of all the directories in the /var hierarchy. Possibly comparing against a known good system would provide the correct owners. But for the database the above will give access again.
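Putting this answer's steps together (assuming the default data directory, process owner, and a systemd-based service name; the find line is only an audit aid for the rest of /var):

```shell
# Restore ownership of the MySQL data directory and restart the server:
chown -R mysql:mysql /var/lib/mysql
systemctl restart mysql

# List /var entries (two levels deep) not owned by root, to spot other
# victims of the recursive chown -- many are legitimately non-root,
# so compare against a known good system:
find /var -maxdepth 2 ! -user root -exec ls -ld {} + 2>/dev/null
```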

Can't scandir() in php referenced from root Centos 7 using apache

After reading lots of questions regarding php scandir() I haven't found one that answered my question. If this is a duplicate, please let me know where the answer is before marking me down.
Problem:
If I do
var_dump(scandir("/"));
Then I get the contents of the system root folder: bin, installs, nvme, var, etc, etc... 😊
If I do
var_dump(scandir("/nvme"));
Then I get false and an error that var/www/public_html/nvme doesn't exist.
So then if I do
var_dump(scandir("../../../../../../../"));
I can see the system root folder
but if I do
var_dump(scandir("../../../../../../../nvme"));
then I get a permission denied error.
I tested that I can scan each directory between the public_html and the root directory, but the moment I try to scan forward a directory then I run into errors. All the directories on the way back are owned by the same user as the nvme directory I'm trying to scan. Using file_get_contents throws the same error.
In SELinux I gave Apache permission to read and write to the nvme folder. I'm running CentOS 7 with SELinux enabled.
Why can't I scan a different directory path up from the root?
EDIT:
My specific error was that I hadn't finished setting the SELinux context.
Previously I had run
semanage fcontext -a -t httpd_sys_content_t "/nvme(/.*)?"
but after that I still needed to run
restorecon -v "/nvme"
so that the new context was actually implemented. Now var_dump(scandir("/./nvme")); and var_dump(scandir("../../../../../../../nvme")); both work.
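Whether the relabel actually stuck can be checked like this (matchpathcon -V compares the on-disk label with what the loaded policy expects):

```shell
ls -Zd /nvme            # show the directory's current SELinux context
matchpathcon -V /nvme   # reports whether the context matches the policy
```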

Permission denied writing to one directory, but not the other -- both have same owner/group/755

This is driving me insane. httpd runs as the user apache. I have two directories within /var/www/html -- uploads and photos. Both have owner:group apache:apache. Both are 755. uploads is writable from PHP -- photos is not.
Some test code:
var_dump(touch('/var/www/html/photos/_test.log'));
var_dump(touch('/var/www/html/uploads/_test.log'));
var_dump(touch('/var/www/html/uploadsasdf/_test.log'));
And results:
Warning: touch(): Unable to create file /var/www/html/photos/_test.log because Permission denied in /var/www/html/test.php on line 2
bool(false)
bool(true)
Warning: touch(): Unable to create file /var/www/html/uploadsasdf/_test.log because Permission denied in /var/www/html/test.php on line 4
bool(false)
I've confirmed the permissions through a shell and a GUI interface. I've chowned and chmodded everything again just to be sure. I've renamed the uploads directory to something else and renamed photos to uploads to see if the name of the directory was the key here, but it wasn't. It's the directory itself: the renamed uploads still works under its new name, and the photos directory that is now called "uploads" does not.
Of note, _test.log does not exist in the folders before testing, so it's not like that file has bad permissions or anything.
Interestingly, if I create a new directory, chown it to apache:apache, chmod it to 777, I can't write to it, so something larger may be wrong here; but the question remains: why then does the uploads directory work?
Has anyone seen this behavior before? Am I missing something obvious?
Thanks in advance for any help!
Edited to add more info:
exec('whoami')
"apache"
var_dump(posix_getpwuid(fileowner('/var/www/html/')));
var_dump(posix_getpwuid(fileowner('/var/www/html/uploads/')));
var_dump(posix_getpwuid(fileowner('/var/www/html/photos/')));
all "apache"
All have the same fileperms() value. However, is_writable() is false on all but "uploads".
mkdir('var/www/html/test');
Warning: mkdir(): Permission denied
ls -alF
drwxr-xr-x. 2 apache apache 286720 Nov 22 15:17 photos/
drwxr-xr-x. 2 apache apache 81920 Nov 22 12:06 uploads/
drwxr-xr-x. 2 apache apache 6 Nov 22 10:31 uploadsasdf/
I have called clearstatcache(); I have rebooted the server. What ... on ... Earth?
Since you are using CentOS and you've tried everything else, my guess would be something related to SELinux. One of the answers to this question may be helpful: Forbidden You don't have permission to access on this server. Centos 6 / Laravel 4
Specifically try this to analyze SELinux permissions (ls -lZ) and temporarily disable SELinux:
If you're using CentOS it could be an issue with SELinux. Check whether SELinux is enabled with 'sestatus'. If it is enabled, you can check whether it is the culprit by (temporarily) running 'sudo setenforce 0'. If Apache can then serve the site, you just need to change the context of the files recursively using 'sudo chcon -R -t httpd_sys_content_t' (you can check the existing context using 'ls -Z').
If selinux is enabled (sestatus will tell you), try sudo restorecon -Rv /var/www/ first. Sounds much like SELinux is getting in the way and you somehow have got a file/directory there which is not labelled correctly. Restorecon will revert labels to default, and -v will show you which labels have been corrected, if any.
Failing that, extended attributes come to mind. Run lsattr <filename>, and if the output looks anything like ------i-----, the immutable flag is set. Change it with chattr -i <filename> and you're good to go.
