Heroku Local with PHP on Mac OS X

Currently I just use additional terminal tabs to manually start worker and clock processes in addition to an always-on apache proxying to php-fpm.
I tried heroku local when I started with heroku but its setup defeated me.
Now I want to give it another try.
I'm on High Sierra with Homebrew PHP but have stuck with the macOS built-in Apache until now. It appears brew's Apache may be a better choice; hopefully we'll find out below.
I realise from the answer provided here (by the top contributor to the buildpack) that any already-running Apache must be stopped at the time heroku local is started. There's a similar apparent quote from Heroku support about this in an answer here.
I also note (my own discovery) that one should install the buildpack locally with *, as in composer require --dev heroku/heroku-buildpack-php "*", to ensure the latest version.
Right now I get the following when I issue heroku local, using macOS's built-in Apache (I have it listening on port 8080 to serve my PHP dev environments, but stopped it for this exercise with sudo apachectl stop).
m$ heroku local
[OKAY] Loaded ENV .env File as KEY=VALUE Format
12:21:03 PM horizon.1 | Horizon started successfully.
12:21:03 PM clock.1 | [2019-09-20 11:21:03] Calling scheduler
12:21:03 PM clock.1 | No scheduled commands are ready to run.
12:21:03 PM web.1 | DOCUMENT_ROOT changed to 'public/'
12:21:04 PM web.1 | 4 processes at 128MB memory limit.
12:21:04 PM web.1 | Starting php-fpm...
12:21:06 PM web.1 | Starting httpd...
12:21:06 PM web.1 | Application ready for connections on port 5000.
12:21:06 PM web.1 | [Fri Sep 20 12:21:06.155117 2019] [core:emerg] [pid 25867] (2)No such file or directory: AH00023: Couldn't create the mpm-accept mutex (file /private/var/run/mpm-accept-0.25867)
12:21:06 PM web.1 | (2)No such file or directory: could not create accept mutex
12:21:06 PM web.1 | AH00015: Unable to open logs
12:21:06 PM web.1 | Process exited unexpectedly: httpd
12:21:06 PM web.1 | Going down, terminating child processes...
[DONE] Killing all processes with signal SIGINT
12:21:06 PM horizon.1 | Shutting down...
12:21:06 PM clock.1 Exited with exit code SIGINT
12:21:06 PM web.1 Exited with exit code null
12:21:07 PM horizon.1 | [2019-09-20 11:21:06][1033] Processing: Laravel\Scout\Jobs\MakeSearchable
12:21:07 PM horizon.1 | [2019-09-20 11:21:06][1032] Processing: Laravel\Scout\Jobs\MakeSearchable
12:21:07 PM horizon.1 | [2019-09-20 11:21:06][1034] Processing: Laravel\Scout\Jobs\MakeSearchable
12:21:08 PM horizon.1 | [2019-09-20 11:21:06][1033] Processed: Laravel\Scout\Jobs\MakeSearchable
12:21:08 PM horizon.1 | [2019-09-20 11:21:06][1032] Processed: Laravel\Scout\Jobs\MakeSearchable
12:21:08 PM horizon.1 | [2019-09-20 11:21:06][1034] Processed: Laravel\Scout\Jobs\MakeSearchable
12:21:08 PM horizon.1 Exited Successfully
My macOS Apache vhosts forward to /tmp/php72-fpm.sock. The permissions there are fine, as Apache in a local browser reaches php-fpm without issue.
I see that the actual error (it scrolls off to the right in the output above due to no line break) is: Couldn't create the mpm-accept mutex. I now know there are a few different Apache multi-processing modules; not sure whether I need to know more about that.
But is that likely to be just permissions? I note macOS Apache needs sudo, as in sudo apachectl start, and I am running heroku local without sudo. I could try sudo heroku local but would rather not until I know what it would do.
So in the absence of more understanding here (which would be nice), I may try installing brew Apache (httpd24). It looks like heroku local will just call httpd, so whichever one is first in the PATH will be picked up.
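A quick way to confirm which httpd would be picked up, and in what order, before installing anything:
# list every httpd on the PATH in lookup order; the first one wins
which -a httpd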
Partial answer
I realised that when heroku local starts the system Apache, the system Apache will still have, of course, all of its current config. That means writing to root-owned locations: error logs, plus the vhosts I have added. Of course, that's going to error without sudo. The first step to fix the above was to remove the Listen directive from httpd.conf, which gave a new error:
(13)Permission denied: AH00091: httpd: could not open error log file /private/var/log/apache2/error_log.
So then I commented out the ErrorLog directive to fix that one, which gave another:
(2)No such file or directory: AH02291: Cannot access directory '/usr/logs/' for main error log
2:17:25 PM web.1 | AH00014: Configuration check failed
2:17:25 PM web.1 | This program requires Apache 2.4.10 or newer with mod_proxy and mod_proxy_fcgi enabled; check your 'httpd' command.
I can see where this is going.
Basically, I would need to remove almost all of my macOS Apache's configuration for it not to error (when started without sudo).
So, let's consider brew apache instead... (below).
Of note: examining /tmp, each time I run heroku local I see files like heroku.xxxxx written, each of zero bytes, as shown below. The Apache log files below appear as Log directives in the default vhost include in the buildpack, in <buildpack>/conf/apache2/heroku.conf, hence their existence here.
mbp:tmp m$ ll
total 8
drwxrwxrwt@ 16 root wheel 512B 20 Sep 08:54 ./
drwxr-xr-x 6 root wheel 192B 31 Dec 2017 ../
srwxrwxrwx 1 root wheel 0B 18 Sep 22:02 .dbfseventsd=
srwxrwxrwx 1 m wheel 0B 20 Sep 07:59 .s.PGSQL.5432=
-rw------- 1 m wheel 49B 20 Sep 07:59 .s.PGSQL.5432.lock
srwxr-xr-x 1 m wheel 0B 8 Sep 21:05 OSL_PIPE_501_SingleOfficeIPC_48607cb6b283d6f2d9ab5973acdb43c=
drwx------ 3 m wheel 96B 27 Aug 17:28 com.apple.launchd.9wuyYAXuof/
drwx------ 3 m wheel 96B 27 Aug 17:28 com.apple.launchd.v9lh33yWhI/
-rw-r--r-- 1 m wheel 0B 20 Sep 08:54 heroku.apache2_access.5000.log
-rw-r--r-- 1 m wheel 0B 20 Sep 08:54 heroku.apache2_error.5000.log
-rw-r--r-- 1 m wheel 0B 20 Sep 08:54 heroku.php-fpm.5000.log
-rw-r--r-- 1 m wheel 0B 20 Sep 08:54 heroku.php-fpm.5000.www.slowlog
-rw-r--r-- 1 m wheel 0B 20 Sep 08:54 heroku.php-fpm.www.5000.log
drwxr-xr-x 3 m wheel 96B 19 Sep 20:23 pear/
srwxrwxrwx 1 m staff 0B 19 Sep 22:11 php72-fpm.sock=
drwxr-xr-x 2 root wheel 64B 19 Sep 08:29 powerlog/
What else can I do to help the web process spin up?
Update - how to install brew apache for exclusive use of heroku local?
Perhaps what would be really useful is to know the steps to install brew Apache, given that it'll be used only for heroku local. Is it a few simple brew commands plus removing the Listen directive? I'm cautious here, as I would ideally like not to harm my current use of the built-in Apache; these should be able to run alongside one another, I think. I just need to be reasonably sure before I do it.
Also
Homebrew no longer supports options on formulae, so brew install homebrew/apache/httpd24 --with-mpm-event, as given at the link above, no longer seems to be an option. It is possible to do brew edit httpd24 to edit the formula directly; is that necessary? What are the correct install steps?

I'm posting a partial answer here which I'll update as I find out more rather than append this further information to the question itself.
I got it working with brew apache as follows:
brew install httpd
Edit /usr/local/etc/httpd/httpd.conf:
comment out the line Listen 8080
uncomment the line LoadModule proxy_module lib/httpd/modules/mod_proxy.so
uncomment the line LoadModule proxy_fcgi_module lib/httpd/modules/mod_proxy_fcgi.so
uncomment ServerName; I set it to ServerName localhost:5000
(this suppresses a complaint httpd would otherwise make; is that a correct value for ServerName?)
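For reference, after those edits the relevant lines in httpd.conf look roughly like this (module paths can differ between Homebrew versions):
#Listen 8080
LoadModule proxy_module lib/httpd/modules/mod_proxy.so
LoadModule proxy_fcgi_module lib/httpd/modules/mod_proxy_fcgi.so
ServerName localhost:5000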
Edit /usr/local/etc/httpd/extra/httpd-ssl.conf as follows:
comment out Listen 8443
Don't do brew services start httpd, as you normally would for a brew service; that isn't relevant to us here.
(Now that we've commented out the Listen directives, httpd won't respond anyway,
until heroku local starts it up and injects a Listen directive.)
There is one area I don't fully understand yet.
When I run heroku local now, I can access my site at localhost:5000, but only the front page. Clicking on any subpage returns:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL was not found on this server.</p>
</body></html>
Perhaps we can fix that later. But let's consider php-fpm now.
Firstly, I had one easily fixable error, which is that heroku local was using macOS's php-fpm (version 7.1 on my High Sierra). I fixed that by ensuring my php@7.2 formula (which is keg-only; I'll shortly move to php@7.3) comes first in the PATH.
Now which php-fpm returns /usr/local/opt/php@7.2/sbin/php-fpm and not /usr/sbin/php-fpm (the macOS built-in). So now we have it using the version we intend. A small but important fix.
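For reference, a minimal sketch of how to put the keg-only formula first, assuming the default Homebrew prefix (goes in ~/.bash_profile or similar):
# put keg-only php@7.2 binaries (including sbin, where php-fpm lives) ahead of the system ones
export PATH="/usr/local/opt/php@7.2/bin:/usr/local/opt/php@7.2/sbin:$PATH"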
However, we must now consider what happens when heroku local calls (i.e. starts up) php-fpm. Traditionally, I do brew services start php@7.2, which is always on (a daemon). However, with Activity Monitor open, I see that heroku local spawns a new parent process and workers. Perhaps that's how it should be: it simply ignores the instance run by brew services and creates its own.
The mystery arises, though, when I exit heroku local by pressing Ctrl-C. Is that the correct way to exit it? If I do Ctrl-C, I get the command prompt back, but, looking at the spawned php-fpm processes in Activity Monitor, the php-fpm instances are not terminated. Then, if I issue heroku local a second time after stopping it the first time, I get this:
5:44:37 PM web.1 | [20-Sep-2019 17:44:37] ERROR: An another FPM instance seems to already listen on /tmp/heroku.fcgi.5000.sock
5:44:37 PM web.1 | [20-Sep-2019 17:44:37] ERROR: FPM initialization failed
... which is quite understandable. That is fixable by manually terminating the process in Activity Monitor. However, I wonder what's happening here; what's the correct way to terminate heroku local?
To be specific, Ctrl-C quits a single php process which has a parent process of node, but does not quit five php-fpm processes, one of those five being the parent of the other four, and that one having a parent process of bash.
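As a stopgap, the same cleanup can be done from the terminal; a sketch (pkill matches by process name, so be sure no other php-fpm instance you want to keep is running):
# show the leftover php-fpm master and workers together with their parent pids
ps -axo pid,ppid,command | grep '[p]hp-fpm'
# terminate all matching php-fpm processes
pkill php-fpm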
So, we've come a long way today; the remaining issues still to improve upon:
How to correctly terminate the php-fpm processes (Ctrl-C doesn't do it; see the cleanup sketch above).
Also, if I have two terminals open, one of which previously ran heroku local, and I run it again in the other window, the window which showed a regular command prompt waiting for an instruction suddenly jumps back to life and receives new information, which is a bit surprising. I wonder how that's working and whether it's considered normal.
Find out why apache isn't serving any page other than root as noted above.
It works!
Now we are motoring! The one important change I omitted in /usr/local/etc/httpd/httpd.conf was to uncomment the line:
LoadModule rewrite_module lib/httpd/modules/mod_rewrite.so
I'm using Laravel, which serves all pages via index.php. That means you want to rewrite every page request that is not to index.php back to index.php. Thus, the default .htaccess in Laravel has the line RewriteRule ^ index.php [L], which does just that...
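For context, the core of Laravel's default public/.htaccess looks roughly like this (simplified; the conditions skip real files and directories before everything else is handed to the front controller):
<IfModule mod_rewrite.c>
    RewriteEngine On
    # don't rewrite requests for real directories or files
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    # send everything else to index.php
    RewriteRule ^ index.php [L]
</IfModule>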
Now the only problem remaining that I'm currently aware of is that Ctrl-C doesn't terminate child processes.... but first I'm gonna set all my local apps to use fpm 7.3, as I saw a reference in the buildpack specifically to 7.3, so wondering if that might help...

Related

Execute Openwrt UCI command through PHP

I'm developing a simple (dead simple) front end for OpenWrt using PHP. To do this I need to call many OpenWrt UCI (Unified Configuration Interface) commands through PHP's shell_exec() or system() functions. All the UCI commands that I tried in the terminal work perfectly fine, but as soon as I run them through the above functions they simply don't work.
As an example, I ran the following two commands, which worked well in the terminal:
uci set wireless.@wifi-iface[0].ssid=test
uci commit
But as soon as I run them through PHP, nothing happens; they simply don't work. Then I made a .sh file containing the above two lines and ran that file using PHP, but again the results were the same. Yet when I execute the .sh file through the terminal, it works!
For testing I set both files' permissions to 777, but that doesn't help. Are there any additional requirements to run shell commands through PHP, like root access for PHP or Apache? I'm new to this and would be thankful if someone can help.
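A debugging sketch worth trying here: redirect stderr into the captured output and check the exit code; that usually shows why a command fails under the web server's user (the uci command below is the one from above):
<?php
// run the uci command, capturing stderr along with stdout, plus the exit code
$output = [];
$status = 0;
exec('uci set wireless.@wifi-iface[0].ssid=test 2>&1', $output, $status);
echo "exit code: $status\n";
echo implode("\n", $output), "\n";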
my apache error_log
[Wed Aug 19 08:26:53 2015] [error] [client 192.168.2.117] uci
[Wed Aug 19 08:26:53 2015] [error] [client 192.168.2.117] :
[Wed Aug 19 08:26:53 2015] [error] [client 192.168.2.117] I/O error
[Wed Aug 19 08:26:53 2015] [error] [client 192.168.2.117]
I'm using Apache as the web server and OpenWrt Chaos Calmer 15.05-rc3 as my base firmware, on top of a Raspberry Pi 2.
I managed to solve my problem by using the uhttpd web server instead of Apache. Apache somehow doesn't have enough privileges to execute UCI commands directly; uhttpd, the default web server in OpenWrt, can execute these commands directly.
I tried to figure out the same problem, and my conclusion so far is to run PHP with root permissions. I know this is not secure, but at least it works. Here is the relevant line from /etc/init.d/php5-fpm to run php-fpm with root privileges:
service_start $PROG -R -y $CONFIG -g $SERVICE_PID_FILE
The key flag here:
-R, --allow-to-run-as-root Allow pool to run as root (disabled by default)
Both answers are right. What sameera mentions is the fact that uhttpd has special rights running on LEDE or OpenWrt (as default), but what Anton Glukhov wrote is also correct: it allowed me to run as root, but in a limited way, avoiding errors. I was not able to run my scripts (.sh) as with uhttpd, but PHP runs OK and doesn't have any bugs while running as root. I guess it's a file permissions issue by default with Nginx on OpenWrt. My solution was to leave uhttpd running on a different port and run my program there with all the rights and permissions, while running everything else as non-root in Nginx. Security is not an issue in my case; it's an offline server.
service_start $PROG -R -y $CONFIG -g $SERVICE_PID_FILE
Works; in my case, by adding the -R flag in /etc/init.d/php7-fpm.

Segfault and/or "Fatal error: Unknown: Failed opening required..." when running scripts located in CIFS mounts

I compiled PHP 5.4.22 and 5.5.6. In both versions I can't run any scripts that are located in a CIFS (SMB/Samba) mount.
If I try to run it the usual way, I get a weird error message:
user@machine:/mnt/windows# /opt/php/bin/php test.php
Fatal error: Unknown: Failed opening required 'test.php' (include_path='.:/opt/php/lib/php') in Unknown on line 0
If I try to use the built-in server, I simply get a segmentation fault:
user@machine:/mnt/windows# /opt/php/bin/php -S 0.0.0.0:8000
PHP 5.5.6 Development Server started at Thu Dec 5 23:04:53 2013
Listening on http://0.0.0.0:8000
Document root is /smb
Press Ctrl-C to quit.
--------> here I load the website from a browser <--------
Segmentation fault
It turns out that for some reason the inode number in the mounted folder was huge and that made PHP freak out:
user@machine:/mnt/windows# ls -i test.php
69524319247729677 -rwxrwxrwx 1 65535 65535 26 Dec 4 23:28 test.php
It looks like it's a feature of CIFS:
The UniqueID value is unique over the scope of the entire server and is often greater than 2 power 32. This value often makes programs that are not compiled with LFS (Large File Support), to trigger a glibc EOVERFLOW error as this won't fit in the target structure field. It is strongly recommended to compile your programs with LFS support (i.e. with -D_FILE_OFFSET_BITS=64) to prevent this problem. You can also use "noserverino" mount option to generate inode numbers smaller than 2 power 32 on the client. But you may not be able to detect hardlinks properly.
So I added noserverino to my mount options in /etc/fstab. After doing that and remounting, the inode number is much smaller and everything works flawlessly:
user@machine:/mnt/windows# ls -i test.php
89 -rwxrwxrwx 1 65535 65535 26 Dec 4 23:28 test.php
user@machine:/mnt/windows# /opt/php/bin/php test.php
hello world
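For reference, a hypothetical /etc/fstab entry with the option added (the server, share, and credentials file are made-up names):
//server/share  /mnt/windows  cifs  credentials=/root/.smbcredentials,noserverino  0  0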
There seems to be a related PHP bug, but the fix appears to have side effects on performance and other functions.
Update: after doing this everything seems to be working, except that the server can't send files bigger than a few dozen KB. So we're at square one again.

Files created by php cannot be deleted through FTP

I have searched the web for possible solutions, but they only seem to have made me more and more stuck.
Once a file is created by PHP, that file cannot be deleted through FTP, while other files in the same directory can be.
If a directory is created by PHP, the whole directory cannot be used through FTP: I can see the files in that directory, but cannot delete, modify, or upload to it.
I do see a lot of these questions here, but mostly they involve permission problems; in this case all permissions, umask, and attributes are the same as for a file that can be deleted through FTP.
I also thought it might be that the file is in use; however, this is not the case (lsof results are empty, and killing Apache and nginx, which normally releases a file, did not help either).
The server is running PLESK 11.0.9
FTP server is ProFTPD.
Is this some limitation that is caused by PHP, PLESK or is it in ProFTPD itself?
I hope you can help me with it, as I'm getting quite frustrated.
If you have any questions that might help in answering this question, feel free to ask :).
[ EDIT ]
Some more details on why I think it is not a permission problem.
[root@srv domains]# lsattr
-------------e- ./test
-------------e- ./test2
[root@srv domains]# ls -als
total 32
4 drwxrwxrwx. 7 user0041 psacln 4096 Mar 22 06:41 .
4 drwxr-x---. 12 user0041 psaserv 4096 Mar 22 07:28 ..
4 drwxrwxrwx. 2 user0041 psacln 4096 Mar 22 06:30 test
4 drwxrwxrwx. 2 user0041 psacln 4096 Mar 22 06:34 test2
[root@srv domains]# lsof test
[root@srv domains]#
I cannot do anything with the directory test through FTP, but I can do anything with the directory test2.
test was created with PHP's mkdir command, while test2 was created through SSH and has been chowned to the right user.
Also, I removed some unrelated entries from the outputs.
More information about the user proftpd runs as when logged in:
[root@srv domains]# ps aux | grep ftp
10000 8508 0.5 0.0 152192 3684 ? SNs 08:37 0:00 proftpd: user0041 - x: IDLE
[root@srv domains]# cat /etc/passwd | grep 10000
user0041:x:10000:505::/var/www/vhosts/domain.ltd:/bin/false
For privacy reasons I have changed the domain name to domain.ltd.
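A further check worth making is the FTP user's group membership, since delete rights here would hinge on the psacln group:
# list uid, gid and supplementary groups for the FTP user
id user0041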
Looks like your FTP user doesn't have permissions to delete anything created by the user that PHP is running as. You'll need to contact your hosting company regarding this.
Most likely a user permission problem, but it could help if you posted a use case with more details describing what exactly you are doing (file names, process, ...).

PHP fopen() fails on files even with wide-open permissions

I'm currently migrating my LAMP from my Windows Server to a VPS running Debian 6. Most everything is working, however, one of the PHP scripts was failing to write to its configured log file. I could not determine why, so I wrote a new, simple, contrived PHP script to test the problem.
<?php
ini_set('display_errors', 1);
error_reporting(E_ALL);
echo exec('whoami');
$log = fopen('/var/log/apache2/writetest/writetest.log', 'a');
if ($log != NULL)
{
    fflush($log);
    fclose($log);
    $log = NULL;
}
?>
However, it fails with the result:
www-data Warning: fopen(/var/log/apache2/writetest/writetest.log): failed to open stream: Permission denied in /var/www/_admin/phpwritetest.php on line 5
While I would never do it normally, to help diagnose, I set /var/log/apache2/writetest/writetest.log to chmod 777.
Both the directory and the file are owned by www-data:www-data.
The file was created with touch.
I ran strace to verify which process was performing the open:
[pid 21931] lstat("/var/log/apache2/writetest/writetest.log", 0x7fff81677d30) = -1 EACCES (Permission denied)
[pid 21931] lstat("/var/log/apache2/writetest", 0x7fff81677b90) = -1 EACCES (Permission denied)
[pid 21931] open("/var/log/apache2/writetest/writetest.log", O_RDWR|O_CREAT|O_TRUNC, 0666) = -1 EACCES (Permission denied)
I checked, and pid 21931 was indeed one of the apache2 child processes running as www-data. As you can see, I also included echo exec('whoami'); in the script, which confirmed the script was being run by www-data.
Other notes:
PHP is not running in safe mode
PHP open_basedir is not set
Version info: Apache/2.2.16 (Debian) PHP/5.3.3-7+squeeze3 with Suhosin-Patch mod_ssl/2.2.16 OpenSSL/0.9.8o
uname -a: 2.6.32-238.19.1.el5.028stab092.2 #1 SMP Thu Jul 21 19:23:22 MSD 2011 x86_64 GNU/Linux
This is on a VPS running under OpenVZ
ls -l (file): -rwxrwxrwx 1 www-data www-data 0 Sep 8 18:13 writetest.log
ls -l (directory): drwxr-xr-x 2 www-data www-data 4096 Sep 8 18:13 writetest
Apache2's parent process runs under root, and the child processes under www-data
selinux is not installed (thanks to Fabio for reminding me to mention this)
I have restarted apache many times and rebooted the server as well
Remember that in order to reach a file, ALL parent directories must be readable (and traversable) by www-data. Your strace output seems to indicate that even accessing /var/log/apache2/writetest is failing. Make sure that www-data has permissions on the following directories:
/ (r-x)
/var (r-x)
/var/log (r-x)
/var/log/apache2 (r-x)
/var/log/apache2/writetest (rwx)
/var/log/apache2/writetest/writetest.log (rw-)
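Two quick ways to verify the whole chain, assuming Debian's standard tools (namei is part of util-linux and prints the permissions along a path; sudo -u reproduces the access as www-data):
namei -l /var/log/apache2/writetest/writetest.log
sudo -u www-data cat /var/log/apache2/writetest/writetest.log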
Does the php file doing the writing have proper permissions set? Try changing those to see if that's the issue.
Could be a SELinux issue; even though Debian doesn't ship it enabled in the default installation, your provider could have enabled it. Look for messages in /var/log with:
grep -i selinux /var/log/{syslog,messages}
If that's the cause and you need to disable it, here are instructions: look for the file /etc/selinux/config; below is its default content. Change the SELINUX directive to disabled and reboot the system.
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted

is_dir returning false on symlinks in apache

I run a third-party PHP application on my local AMP stack on my Mac. I recently bought a new Mac Mini with Lion and am trying to set it up. My previous computer was a MacBook Air with MAMP. Now I'm using the built-in Apache/PHP and a Homebrew-installed MySQL.
Here's my problem: I have a directory with symbolic links. These symlinks are to directories, and the PHP application is checking these with is_dir().
On my Lion AMP setup, this is_dir() is failing. The same setup on my Snow Leopard MAMP is_dir() works fine with my symlinks.
Here's where it gets more curious. If I do php -a (php interactive command line mode), and do is_dir() on the very same directories, it returns true. It only returns false in the context of an apache request. This makes me think it has something to do with the apache user (which is _www) not being able to access the symlinks. Troubleshooting this falls outside of my expertise.
Other notes:
Yes, I have FollowSymLinks turned on in my Apache config, and in fact the directory where the symlinks in question reside is a symlink itself. Apache has no problem with it, until PHP is_dir() is used.
No, I cannot edit the PHP application and just fall back on is_link() and readlink().
This exact same setup worked on my Snow Leopard/MAMP setup.
Any ideas?
Ah, saw your comment on changing them to 777, but still wondering why it's not working.
My solution below might not help you.
EDIT:
If you have access to /etc/apache2/httpd.conf, edit it via sudo vi /etc/apache2/httpd.conf.
Then change one of these lines, or both of them:
User _www
Group _www
Here is an example of my directory listing.
ace:remote-app ace (git::master)$ ls -al
total 72
drwxr-xr-x 24 ace staff 816 7 Aug 00:24 .
drwxr-xr-x 11 ace staff 374 4 Aug 13:46 ..
drwxr-xr-x 3 ace staff 102 12 Jul 17:06 .bundle
drwxr-xr-x 14 ace staff 476 7 Aug 02:29 .git
-rw-r--r-- 1 ace staff 100 1 Aug 19:20 .gitignore
-rw-r--r-- 1 ace staff 9 1 Aug 19:20 .rspec
drwxrwxr-x 10 ace staff 340 14 Jul 15:58 public
Now my public directory has 775 permissions, meaning owner and group have full permissions while other users can only read and execute.
It depends on whether you want the Apache user to become ace (from the default _www) or the Apache group to become staff (from the default _www).
Once you've decided which to change, restart Apache:
/usr/sbin/apachectl graceful
And your page should now have access to the directories / files.
One thing to note is that you have to change ownership of files that have already been written by your webpage, as those have _www:_www ownership and you won't have access to them after the restart.
You can change their ownership like this; -R makes it recursive:
sudo chown -R newapacheuser:newapachegroup <path>
Did you check permissions/owner?
From the PHP manual: Note: The results of this function are cached.
I had a similar issue. I created the following link:
cd /home/mike/uploads
ln -s /home/mike/uploads/data /sites/www.test.com/docroot/data
Then I created a test.php file in /sites/www.test.com/docroot that just did the following:
$dir = "/sites/www.test.com/docroot/data";
"is_dir\t\t" .is_dir($dir) ."\n";
When I ran test.php from the command line, is_dir() returned true, but when I loaded test.php in a browser through Apache, it returned false.
I went to /sites/www.test.com/docroot/data and did a
chmod -R 755 .
That didn't change anything. Then I realized the parent of the actual symlinked dir needed proper permissions set (/home/mike/uploads). I did a chmod on that dir, and everything worked!
Check the open_basedir directive in your PHP config; that path should also be included.
On Linux, you can list multiple folders by separating them with a colon.
https://www.php.net/manual/en/ini.core.php#ini.open-basedir
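A sketch using the paths from the example above; both the docroot and the symlink target directory would need to be listed:
; php.ini
open_basedir = /sites/www.test.com/docroot:/home/mike/uploads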
