Executing suPHP from memory

I manage a Linux server that has Apache with PHP and suPHP. Like most setups, all program files are currently stored on disk.
I want to run suPHP from RAM. To that end, I copied everything in the suphp folder (config files, folders, and programs totaling about 2.8MB) to RAM, using cp's -a switch to preserve permissions and ownership. Then, on disk, I renamed the folder so the old version doesn't get accessed.
I then made two attempts, both of which failed.
First, I made a symbolic link (using ln -s) named suphp pointing to the suphp folder in RAM. When I browsed the file/folder structure, everything looked identical, as if the setup was ready to work.
Since that didn't work, I made another attempt: I removed the suphp symbolic link, created an empty suphp folder, and mount-bound it (using mount --bind) to the suphp folder in RAM. That did not work either.
I then looked at my Apache error_log. From the time I tried these steps until I restored the original setup, I received various error messages that all had text similar to this in common:
"(2)No such file or directory: couldn't create child process: (suphp folder location)/sbin/suphp for (full path to php file on website)".
What baffles me is why it would report "no such file or directory" instead of some other error.

suPHP will be loaded into RAM by Apache, since it is an Apache module (mod_suphp). There is no reason to move the actual files/executables to the system's "ram" directory and set up symlinks like you have done.
If your intent is to keep PHP scripts in memory, consider using an opcode cache such as the OPcache built into PHP 5.5 or APC; see How to use PHP OPCache?
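For example, on PHP 5.5+ OPcache can usually be switched on with a few php.ini lines (a sketch; the values are illustrative defaults, not tuned recommendations):

zend_extension=opcache.so      ; only needed if the extension is not already loaded
opcache.enable=1
opcache.memory_consumption=128 ; MB of shared memory for compiled scripts
opcache.max_accelerated_files=4000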

Related

How can I find the real location of the tmp dir for a specific program on Ubuntu?

Ubuntu creates a separate /tmp directory for each user, so when Apache asks for /tmp it actually gets /tmp/systemd-private-654e145185f84f6ba097649873c88a9c-apache2.service-uUyzNh/tmp. (That code is different each time.) This is generally a good thing, but it’s annoying me now.
I want to create a bunch of PDF files (using TCPDF in PHP) in /tmp, then use shell_exec() in PHP to run the pdfunite script to create a single output PDF, then load that PDF into memory to serve it to the browser. My problem is that the pdfunite script doesn’t work, which I presume is because it’s not seeing the same path as the files are actually in.
Does PHP’s shell_exec() run code as the Apache user (www-data)? If so, I would assume that it would see the same /tmp dir, but in that case the pdfunite script should work. Is there a sensible workaround here, other than using a different directory and doing the cleanup myself?
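Since shell_exec() starts the child from the same PHP worker process, the child should inherit the same private /tmp mount, so building every path from sys_get_temp_dir() keeps both sides in agreement. A minimal sketch (file names are illustrative; it assumes pdfunite is on the PATH):

<?php
$tmp   = sys_get_temp_dir();                 // resolves inside the private /tmp
$parts = array($tmp . '/part1.pdf', $tmp . '/part2.pdf'); // written earlier by TCPDF
$out   = $tmp . '/combined.pdf';
$cmd   = 'pdfunite ' . implode(' ', array_map('escapeshellarg', $parts))
       . ' ' . escapeshellarg($out) . ' 2>&1';
$log   = shell_exec($cmd);                   // runs as the same user (www-data)
if (is_file($out)) {
    header('Content-Type: application/pdf');
    readfile($out);                          // serve the merged PDF to the browser
} else {
    error_log('pdfunite failed: ' . $log);
}
?>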

Prevent Apache/PHP from running code that affects another vHost

THE SITUATION
I have multiple folders in my /var/www/ directory.
Users are created that each have control over a specific directory: /var/www/app1 belongs to app1:app1 (www-data is a member of the app1 group).
This works fine for what I want.
THE PROBLEM
If the app1 user uploads a PHP script that changes the file/folder permissions for something in app2's directory structure, the Apache process (there is only one installed on the server) will be more than happy to run it, as it has the necessary permissions to access both the /var/www/app1 and /var/www/app2 folders and files.
EDIT:
To the best of my knowledge, something like /var/www/app1/includes/hack.php:
<?php
chmod("/var/www/app2", 0777); // mode must be octal (0777), not decimal 777
?>
The Apache process (owned by www-data) will run this, as it has permission to change both the /var/www/app1 and /var/www/app2 directories. The app1 user will then be able to cd into /var/www/app2, rm -rf /var/www/app2, and so on, which is obviously not good.
THE QUESTION
How can I avoid this cross-contamination of the Apache process? Can I instruct Apache to only run PHP scripts that affect the files/folders that reside within the relevant vHost root directory and below?
While open_basedir would help, there are several ways of bypassing this constraint, and while you could break a lot of functionality in PHP to close off all the backdoors, a better solution is to stop executing PHP as a user who has access to all the files. To do that, you need to use php-fpm with a separate process pool/uid/gid for each vhost; see the sketch below.
You should still keep the uid used for PHP execution separate from the uid owning the files, with a common group allowing default read-only access to the files.
You also need separate storage directories for session data.
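A minimal per-vhost pool might look like this (a sketch; the file path, user names, and socket path are illustrative):

; /etc/php/7.4/fpm/pool.d/app1.conf
[app1]
user = app1-php                 ; PHP executes as its own uid
group = app1                    ; common group grants read access to the files
listen = /run/php/app1.sock
listen.owner = www-data         ; the web server connects through this socket
listen.group = www-data
php_admin_value[open_basedir] = /var/www/app1/:/tmp/
php_admin_value[session.save_path] = /var/www/app1/sessions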
A more elaborate mechanism would be to use something like Apache Traffic Server in front of a container per owner, with each site running on its own instance of Apache: much better isolation, but technically demanding and somewhat more resource-intensive.
Bear in mind, if you are using MariaDB or similar, that the DBMS can also read and write arbitrary files (SELECT ... INTO OUTFILE / LOAD DATA INFILE).
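One common mitigation (a sketch; the directory is illustrative) is to confine those statements to a dedicated directory with the secure_file_priv server option:

[mysqld]
secure_file_priv = /var/lib/mysql-files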
UPDATE
Rather than the effort of maintaining separate containers, better isolation can be achieved with less effort by setting the home directory of the php-fpm uid appX to the base directory of the vhost (which should contain, not be, the document_root; see below) and using AppArmor to constrain access to the common files (e.g. .so libraries) and @{HOME}. Hence each /var/www/appX might contain:
.htaccess
.user.ini
data/ (writeable by fpm-appX)
html/ (the document root)
include/
sessions/ (writeable by fpm-appX)
You should add an open_basedir directive to each site's vhost file. The open_basedir directive limits the directories that a site can access.
You can read more about open_basedir in the PHP manual.
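For a mod_php vhost that can look like the following (a sketch; names and paths are illustrative; with the php-fpm setup above you would use php_admin_value[open_basedir] in the pool file instead):

<VirtualHost *:80>
    ServerName app1.example.com
    DocumentRoot /var/www/app1/html
    php_admin_value open_basedir "/var/www/app1/:/tmp/"
</VirtualHost>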

Clearing cache manifest via CLI

I've automated the deployment of my site, and I have a script that runs framework/sake /dev/build "flush=1". This works; however, it clears the cache directory of the user who runs it, which is different from the Apache user (which I can't run the script as).
I've read a few bug reports and forum threads on the SS forum; however, either there is no answer or the suggestion doesn't work, for example:
define('MANIFEST_FILE', TEMP_FOLDER . "/manifest-main");
I thought about just deleting the cache directory; however, its name includes a randomised string, so it's not easy to script.
What's the best way to clear the cache via the command line?
To get this to work, you first need to move the cache from the default directory to within the web directory by creating a silverstripe-cache folder at the web root; a setup sketch follows below. Also make sure the path is readable and writable by the web server (SilverStripe's default config blocks it from being readable by the public).
Then you can script:
sudo -u apache /path/to/web/root/framework/sake dev/build "flush=1"
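The one-time setup might look like this (a sketch; the apache user/group and web root path are illustrative):

mkdir /path/to/web/root/silverstripe-cache
chown apache:apache /path/to/web/root/silverstripe-cache
chmod 775 /path/to/web/root/silverstripe-cache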

Silverstripe TEMP_FOLDER differs between fpm and cli

We have a lot of SilverStripe installations, each on its own vServer.
Deployment is done by a deployment service.
Each instance is powered by nginx and php5.6-fpm.
When a deployment is running, the typical build/flush actions are executed by the deployment service as ssh commands.
The CLI tasks are run by the same user that php5.6-fpm runs as.
But the PHP versions (fpm and cli) are not identical.
This results in two different cache directories:
/tmp/silverstripe-cache-php5.6.23... (fpm)
and
/tmp/silverstripe-cache-php5.6.29... (cli)
This is really bad. Example:
A new static class variable is stored in the ConfigManifest.
But it is only stored in the manifest of the cache directory that matches the CLI version.
The worst case: when browsing the website (php5.6-fpm), this config variable is not known. This can lead to server errors (500), because the fpm manifest does not know about the new config class variable.
Any idea how to fix this?
Kind regards, Robert
The only way to mix slightly different PHP versions is to use a temp folder in the root of the project:
Create a silverstripe-cache folder in the project root folder.
Add putenv('APACHE_RUN_USER=php-fpm'); in your _ss_environment.php file to force the name of the cache subfolder to be 'php-fpm' (see the sketch below).
It is then a matter of system configuration to ensure write access to the 'silverstripe-cache/php-fpm' folder from both php-fpm and the CLI.
See framework\core\TempPath.php for the logic.
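Putting steps 1 and 2 together (a sketch; 'php-fpm' is just a label and only has to match the folder name you create):

<?php
// _ss_environment.php, in the project root.
// With a silverstripe-cache folder present in the project root, this
// makes both fpm and cli resolve their cache to silverstripe-cache/php-fpm
// instead of version-specific folders under /tmp.
putenv('APACHE_RUN_USER=php-fpm');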

Debugging PHP error on IIS (as it relates to calling com objects)

This question is related to another question I wrote:
Trouble using DOTNET from PHP.
There, I was using the DOTNET() function in PHP to call a DLL I had written.
I was able to get it working fine by running php.exe example.php from the command line (with the DLLs still in the PHP folder).
I moved the PHP file to an IIS 7 web server folder on the same machine (leaving the DLLs in the same PHP folder), but I keep getting a 500 internal server error.
I've checked the server logs (in c:\inetpub\logs\ and in c:\windows\temp\php53errors) but there doesn't seem to be any relevant information about what caused the error. I even tried changing the php.ini settings to get more error feedback, but that doesn't seem to help.
I can only guess that the issue may be related to:
the PHP file not having the proper permissions (my DLL does some file reading/writing)
PHP not being able to find the DLLs
The actual error I get is:
The FastCGI process exited unexpectedly.
Any idea on how to debug this problem?
The problem here is almost certainly related to file permissions.
When you run php.exe from the command line, you run as your own logged-in user. When running a PHP script from IIS, in response to an HTTP request, php.exe runs as a different user. Depending on your version of Windows, it could be:
IUSR_machine - on IIS6 and prior
IUSR on IIS7 and later
These users need permissions on the PHP file for it to be executed.
On IIS7 and later I use a command-line tool called icacls.exe to set the permissions on directories or files that need to be read by IIS and the processes it starts (like php.exe). This security stuff applies to all IIS applications: PHP, ASPNET, ASP-classic, Python, and so on.
IIS also needs to be able to read static files, like .htm, .js, .css, .jpg, .png files and so on. You can set the same permissions for all of them: Read and Execute.
You can grant permissions directly to the user, like this:
icacls.exe YOUR-FILE-GOES-HERE /grant "NT AUTHORITY\IUSR:(RX)"
You can also grant permissions to the group, to which IUSR belongs, like this:
icacls.exe YOUR-FILE-HERE /grant "BUILTIN\IIS_IUSRS:(RX)"
In either case you may need to stop and restart IIS after setting file-level permissions.
If your .php script reads and writes other files or directories, then the same user needs permissions on those other files and directories. If you need the .php script to be able to delete files, then you might want:
icacls.exe YOUR-FILE-HERE /grant "BUILTIN\IIS_IUSRS:(F)"
...which grants full rights to the file.
You can grant permissions on an entire directory, too, specifying that all files created in that directory in the future will inherit the file-specific permissions set on the directory. For example, set the permissions on the directory, then copy a bunch of files into it, and all the files will get those permissions from the parent. Do this with the OI and CI flags (the initials stand for "object-inherit" and "container-inherit").
icacls.exe DIRECTORY /grant "BUILTIN\IIS_IUSRS:(OI)(CI)(RX)"
copy FILE1 DIRECTORY
copy FILE2 DIRECTORY
...
When I want to create a new vdir in IIS, to allow running PHP scripts, or ASPX or .JS (yes, ASP Classic) or Python or whatever, I do these steps:
appcmd.exe add app /site.name:"Default Web Site" /path:/vdirpath /physicalPath:c:\docroot
icacls.exe DIRECTORY /grant "BUILTIN\IIS_IUSRS:(OI)(CI)(RX)"
Then I drop files into the directory, and they get the proper permissions.
Setting the ACL (access control list) on the directory will not change the ACL for the files that already exist in the directory. If you want to set permissions on the files that are already in the directory, you need to use icacls.exe on the particular files. icacls accepts wildcards, and it also has a /t switch that recurses.
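For example, to apply read/execute rights to a directory and to everything already beneath it (placeholder in the same convention as above):

icacls.exe DIRECTORY /grant "BUILTIN\IIS_IUSRS:(RX)" /t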
