Best way to run php scripts on website? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
I wanted to check what would be the most appropriate way to run a PHP script on a website that performs several updates and makes dynamic changes to the site.
Should these scripts be run by putting the PHP files in the same FTP directory as the rest of the website and accessing
them as web pages? If so, how could I control access so that only the web admins can reach these links or PHP scripts?
Thank you!

You might use .htaccess protection on the folder containing your admin scripts.
.htaccess
AuthName "Restricted Area"
AuthType Basic
AuthUserFile /var/www/mysite/.htpasswd
AuthGroupFile /dev/null
<Files my-protected-file.php>
require valid-user
</Files>
.htpasswd (user:john, pw:john):
john:cH/Bl.u9Yl2x.
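If you need to generate such an entry yourself, here is a minimal sketch using PHP's crypt(); the two-character salt selects the traditional DES scheme that Apache's Basic auth accepts (for production, the htpasswd utility with bcrypt via -B is a better choice):
<?php
// Sketch: build an .htpasswd line from PHP. 'cH' is the two-character
// DES salt; the result is a 13-character hash beginning with the salt.
$user     = 'john';
$password = 'john';
echo $user . ':' . crypt($password, 'cH') . "\n";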
If the files you are protecting live in an FTP-accessible folder, move the .htaccess/.htpasswd files one level up and adjust the paths, OR set permissions that disallow reading them over FTP (see comment).
/var/www/mysite/ftp (contains your admin scripts and has ftp access)
/var/www/mysite (has no ftp access, so add your protection here)

Put your PHP files in a directory outside the web root.
For example, keep PHP source files out of the public_html directory so they can never be accessed directly from a browser.

You gave us little detail, but I'll try my best to answer your question from as many standpoints as I can think of.
Should these be run by putting the php files in the same FTP directory as the rest of the website and accessing them as webpages? If so, how could I control it so that only the web admins can access these links or php scripts?
If I were you, I'd add my own account management system in PHP rather than rely on .htpasswd. Then I'd serve it over HTTPS so that passwords cannot be sniffed with packet analyzers; HTTP Basic auth (.htpasswd) sends credentials in an easily decoded form with every request, so without HTTPS it offers little real protection.
That way, in order to execute the updates, I'd have to click a button or something similar after logging in. This allows you to easily extend your admin panel in future and makes it human-friendly.
Doing some work just by visiting a URL seems very fragile. It's useful when it's supposed to be part of an API, so that it's bot-friendly (see also: REST APIs), but in this case it probably shouldn't be exposed to everybody. If it's going to be used by bots but should be available only to "good" bots, you might want to read about REST authentication methods (most of them rely on accounts of some form anyway).
Finally, if you want to run jobs automatically, look into cron. Then again, nothing prevents you from creating both an admin panel and cron jobs that execute the same code.
If you don't put it in your public WWW folder, you make it accessible only to those who have SSH access (not to be confused with FTP access). These people can execute the script using the php script.php command. I guess that's not what you're looking for, since you never mentioned SSH access. Additionally, making your administrators connect to the server via SSH to do some tasks may or may not be a good idea, depending on the nature of your script and application. Other than through SSH, there is no way to execute a script outside the WWW folder, except for cron jobs.
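As a hedged sketch of the cron approach mentioned above (the file name and paths are invented for illustration): a crontab entry such as 0 3 * * * php /var/www/mysite/scripts/update.php would run the script nightly, and guarding the script so it only runs from the command line keeps a stray copy inside the web root from being triggered by a browser.
<?php
// update.php - hypothetical maintenance script, for illustration only.
// Refuse to run unless invoked from the command line (e.g. by cron),
// so the script can never be executed through the web server.
if (php_sapi_name() !== 'cli') {
    header('HTTP/1.0 403 Forbidden');
    exit('This script can only be run from the command line.');
}

// ... perform the updates here ...
echo 'Maintenance run finished at ' . date('c') . "\n";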

Are you familiar with the security implications of what you are designing? I assume that you have some concerns since you mentioned authenticating admins.
Survive the Deep End
OWASP Home Page
Personally, I would not make the scripts you are describing accessible via the web interface, and I would not use FTP either. I have used cron jobs plus wget/curl to call PHP scripts in the past; the only benefit I can see in that approach is a consistent language and a consistent environment definition. If there is nothing special in your environment that these admin scripts need and they run on a schedule, you could just as easily invoke them on the server from cron via the command line (and not expose the scripts via the web interface at all).
Cron works best if the maintenance scripts run on a schedule and never need to be run manually. Do you ever use SSH? Did you know that you can execute commands on the remote server with a single command run from your local system? It works quite well and would address your concerns about authenticating admins on the server: SSH is already a strong authentication framework when configured properly (not hard at all).
$ ssh username@server.domain.com "php /path/to/scripts/task1.php"
password:
The credentials (i.e. the username/password requested by SSH) are those that you defined on the system itself (unless you are using Kerberos, LDAP, or similar credential management infrastructure).
You can also install and update your scripts on the server without using FTP; yes, I would keep them away from the "normal" webserver script files. You might find it easiest to create a dedicated user account on the server, say maintenance, set up key authentication, and authorize admins to use the account by adding their public keys to the /home/maintenance/.ssh/authorized_keys file. This way you limit the potential damage in case of an accident, an angry admin, a crazy girlfriend, aliens... it doesn't matter, because you give the maintenance user permissions and access only to the things required to do its job: it can write only to the areas that are safe for those processes to write, and its read/execute permissions are limited everywhere else. Jails or chroot are wonderful, but probably a bit too much to worry about at this point.
BTW, your FTP and web server should be running as users (not root) with limited access as well. Hopefully you are already familiar with the concept I am trying to describe.
Imagine these accounts.
admin1
Description: some webdude I decided to trust
- id_rsa
- id_rsa.pub
admin2
Description: webdude's friend; I'm skeptical
- id_rsa
- id_rsa.pub
adminN
Description: the Nth admin that I let manage the server
- id_rsa
- id_rsa.pub
youruser
- id_rsa
- id_rsa.pub
server
User: httpd / www-data
Description: the user account under which the webserver runs
User: ftpd
Description: the user account under which the FTP server runs
User: root
Description: default account; SSH login disabled for root (learn how to use sudo)
User: admin
Description: The first user account created when you set up the server; this might be the account that you log in as remotely unless you created an account to match your username or have centralized credential management. DO NOT GIVE YOUR ADMINS ACCESS TO THIS ACCOUNT.
User: maintenance
Description: Newly created shared account that admins will be allowed to use to execute the server maintenance / update scripts as they see fit. Alternatively, you could create each admin their own account based on a template of limited privileges similar to this account, but that is a burden if there are a lot of admins or high turnover. The main drawback of a shared account is that it becomes a little more difficult to determine who is logging in, because the account is shared; it's possible, but not as easy as just looking at the username, obviously.
Workflow. When you add/authorize a new admin to your team, you ask him/her to create an SSH key pair (with a passphrase that is not empty) and send you the public key. By default that file will be called "id_rsa.pub", and it is perfectly fine if anybody in the world sees its contents; it is the "public" key of the key pair. They should keep the counterpart (similarly named "id_rsa") private key in a directory that does not allow other users to read it (this is the default for the ~/.ssh directory), and if they suspect that their private key has been compromised, they simply create a new pair and throw away the old one.
When you receive their public key, you will add it to the server's maintenance account authorized_keys file like this: from your local system you copy the "id_rsa.pub" key they created and shared with you up to the server.
# just to set up the example (not required once you understand)
$ ssh maintenance@server.example.com "ls ~/.ssh/"
id_rsa  id_rsa.pub  admin1_id_rsa.pub
known_hosts  authorized_keys2  ssh_config
# That's an example of the files you might see if you, admin1, and the
# maintenance account itself were already set up. The public key files are
# just for record keeping and not actually required once their contents are
# added to the authorized_keys file. If you don't keep the admins' public key
# files, then the following two commands could be done in a single step, but
# I've shown them as two here for clarity.
#
# Copy the new guy's public keyfile up to the server (admin2)
$ scp id_rsa.pub maintenance@server.example.com:~/.ssh/admin2_id_rsa.pub
$ ssh maintenance@server.example.com "cat ~/.ssh/admin2_id_rsa.pub >> ~/.ssh/authorized_keys2"
That's it. You just added a new admin to the server. If you want to restrict them from logging into the server too, you could configure the account to do that while still allowing them to execute the php scripts remotely via ssh.
Did that make sense? Let me know if not and I'll try and clarify for you.
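As a sketch of that last point (the key is truncated and the path is a placeholder): OpenSSH lets you prefix a key in the authorized_keys file with a forced command and restrictive options, so the key holder can trigger the maintenance script but cannot open an interactive shell:
# One line in /home/maintenance/.ssh/authorized_keys2 (key truncated):
command="/usr/bin/php /path/to/scripts/task1.php",no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAB3Nza... admin2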

Related

What would be a safe way or alt. to run a command as root from php script?

Just as the question says... I've read a few articles; others say just don't do it, but they fail to mention a safe way. I know it's hazardous to give a script sudo or root access, but I was thinking about running the script through root anyway.
One post talked about a binary wrapper, but I did not fully understand it when I attempted it, and when I searched for a better explanation I didn't find anything that explained it well.
So, what would be a good, safe way? I don't even need a detailed explanation; you can just point me to a good source to start reading.
Thanks.
Specs:
Ubuntu Server 14.04
EDIT:
The commands I am talking about are mkdir and rmdir with absolute paths, plus creating users and removing users (which is why I need root) and editing some Apache files for me.
They fail to provide a safe way because, IMHO, there isn't one. Or, to put it another way: are you confident that the code protecting your create-user and add-user functions is cleverer than the hacker's code that tries to gain access to your system via the back door you've built?
I can't think of a good reason for a web site to create a new system-level user. Usually web applications run using system users that are created for them by an administrator. The users inside your web site only have meaning for that web site so creating a new web site user gains that user no system privileges at all. That said, it's your call as to whether you need to do it or not.
In those cases where system operations are necessary, a common approach is to build a background process that carries out those actions independently of the web site. The web site and that background process communicate via anything that works and is secure: sockets, a shared database, a text file, TCP/IP, etc. That separation allows you to control what actions can be requested and to build in the necessary checks and balances. Of course it's not a small job, but you're not the first person to want to do this, so I'd look for an existing tool that supports this kind of administration.
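As a rough, hypothetical sketch of that separation (the spool directory, job format, and action names are all invented for illustration): the web site drops small job files into a directory it can write to, and a privileged background process, run by root via cron or a service manager, consumes them and honors only a fixed whitelist of actions.
<?php
// worker.php - hypothetical privileged worker; run by root via cron or a
// service manager, NOT by the web server. Names and paths are made up.
$spool   = '/var/spool/myapp';            // the web site drops job files here
$allowed = array('useradd', 'userdel');   // the only actions we will honor

foreach (glob($spool . '/*.job') as $jobFile) {
    $job = json_decode(file_get_contents($jobFile), true);
    unlink($jobFile);                     // consume the job either way

    $action = is_array($job) && isset($job['action']) ? $job['action'] : '';
    $arg    = is_array($job) && isset($job['arg'])    ? $job['arg']    : '';

    // Honor only whitelisted actions with strictly validated arguments.
    if (!in_array($action, $allowed, true) || !preg_match('/^[a-z][a-z0-9_-]{0,31}$/', $arg)) {
        error_log('worker: rejected job ' . basename($jobFile));
        continue;
    }

    exec($action . ' ' . escapeshellarg($arg), $out, $rc);
    error_log("worker: $action exited with status $rc");
}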

Securely Allow PHP Read & Write Access to System Files

I have not been able to find solid information on preferred (best-practice) and/or secure methods of allowing PHP to access config files or other files on a Linux server that are not contained in the public web directory or owned by the Apache user, so I'm hoping to find some answers here.
I am a fairly competent PHP programmer but am increasingly tasked with writing web applications (most of which are not publicly accessible via the web however) that require updating, changing or adding to config files or files generated by some service or application on the server.
For instance, I need to create a web interface that will view, add or remove entries from a /etc/mail/spamassassin/white-list.cf file owned by root.
Another scenario is that I need php to parse mime messages in /var/vmail that are owned by user vmail.
These are just a couple of examples; there will be other files in locations owned by other processes/users. How can I write PHP applications that securely access and manipulate these files without opening security holes?
If I needed to implement something like this, I would probably look at using sudo to fine-tune permissions. I'm not a Linux CLI expert, so I'm sure there are issues I haven't taken into account while typing this out.
I would determine what tasks need to be done and write a separate script for each task. Using sudo, I'd grant the necessary level of permission for each script only.
Obviously, as the number of tasks increase, so would the complexity and the amount of work involved. I'm not sure how this would affect you at the moment.
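A hedged illustration of that per-task idea for the white-list.cf example above (the script path and sudoers entry are assumptions, not tested configuration): a single-purpose, root-owned script does exactly one job, a sudoers rule lets the web user run only that script, and the PHP side validates and escapes the argument before it ever reaches a shell.
<?php
// Illustrative only - the script name and sudoers entry are assumptions.
// An administrator would create /usr/local/sbin/whitelist-add.sh (a small,
// root-owned script that appends one validated address to white-list.cf)
// and a sudoers rule such as:
//   www-data ALL=(root) NOPASSWD: /usr/local/sbin/whitelist-add.sh
$email = isset($_POST['email']) ? $_POST['email'] : '';
if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
    exit('invalid address');
}
// The argument is validated above and escaped here; sudo permits only
// this one script, so a PHP compromise cannot run arbitrary commands as root.
exec('sudo /usr/local/sbin/whitelist-add.sh ' . escapeshellarg($email), $out, $rc);
echo $rc === 0 ? 'whitelisted' : 'failed';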

Execute shell commands with sudo via PHP

So far my search has shown the potential security holes that open up when trying to run a sudo'd command from within PHP.
My current problem is that I need to run a bash script with sudo on my work web server via PHP's exec() function. We currently host a little less than 200 websites. The website that will be doing this is restricted so that it is only accessible from my office's IP address. Will this remove the potential security issues that come with the available solutions?
One of the ways is to add the apache user to the sudoers file; I assume this will apply to the entire server, so it would still pose an issue for all the other websites.
Is there any solution that will not pose a security threat when used on a website that has access restricted to our office?
Thanks in advance.
Edit: A brief background
Here's a brief description of exactly what I'm trying to achieve. The company I work for develops websites for tourism-related businesses, amongst other things. At the moment, when creating a new website, I need to set up a hosting package, which includes: creating the directory structure for the new site, creating an Apache config file which is included into httpd.conf, adding a new FTP user, and creating a new database for use with the website CMS, to name a few.
At the moment I have a bash script on the server which creates the directory structure, adds the user, creates the Apache config file and gracefully restarts Apache. That's just one part; what I'm looking to do is use this shell script from a PHP script to automate the entire website generation process in an easy-to-use way, for other colleagues and just for general efficiency.
You have at least four options:
Add the apache user to the sudoers file (and restrict it to running that one command!)
In this case, some security hole in your PHP apps may run the script too (if they can include the calling PHP file, for example, or bypass the restriction to your IP by reaching the script through another URL, e.g. via mod_rewrite).
Flag the script with the setuid bit
Dangerous, don't do it.
Run another web server that binds only to a local interface and is not accessible from outside
This is my preferred solution, since the page calling the PHP can be linked from your main webserver while its security is handled separately. You can even create a new user for this server. A simple server does the job; there are server modules for Python and Perl, for example. It is not even necessary to enable exec() in your main PHP installation at all!
Run a daemon (using inotify, for example, to watch file events) or a cron job that reads a file or database entry and then runs the command
This may be too complex, and it has the disadvantage that the daemon cannot check which script generated the entry. A minimal sketch of this option follows.
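As a minimal sketch of that fourth option (the paths, field names, and validation pattern are invented for illustration): the web side only records a request and never touches sudo itself; a daemon or cron job running as a privileged user reads the queue and performs the actual work.
<?php
// enqueue.php - illustrative only; runs as the ordinary web user and needs
// no sudo rights. It records a request that a separate privileged daemon
// or cron job (running as its own user) picks up and executes.
$site = isset($_POST['site_name']) ? $_POST['site_name'] : '';
if (!preg_match('/^[a-z0-9-]{1,32}$/', $site)) {
    exit('invalid site name');
}
$job = json_encode(array('action' => 'create_site', 'arg' => $site, 'ts' => time()));
file_put_contents('/var/spool/hosting/' . uniqid('job_', true) . '.job', $job, LOCK_EX);
echo 'queued';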

Securing Passwords in a Multi-Dev nginx setup

We have an Ubuntu 12.04 + PHP + nginx setup on our servers. Our developers have access to both the /usr/lib/php5/ and /var/www/ folders. We work on a lot of projects and at any given time have 50-100 different apps/modules, each with an active DB.
We would like to come up with a mechanism to secure our DB passwords with the following considerations:
The sysadmins create the password and register it somewhere (a file, or a sqlite db or some such)
The apps provide a key indicating which DB and what permission level they want, and this module returns an object that contains everything needed for the connection. Something like "user_manager.client1.ro" or "user_manager.client1.rw".
The mechanism should provide the specific password to the app (and hence be accessible by 'www-data'), but none of the other passwords should be visible unless their keys are known.
We have managed to get a prototype going for this, but the central password-providing module runs in www-data space, and hence the file/sqlite store can always be accessed by any other file in /var/www/ or /usr/lib/php5, so all passwords can be compromised.
Is there a way to set things up so that the password-providing module runs with root privileges and the apps request passwords from it? I know we could build a whole new service for this, but that seems too much to build and maintain (especially because this service would become our single point of failure).
Any suggestions?
Using permissions, you could do something like:
1) give each developer their own user account
2) chown every folder under /var/www/ to user www-data and a specific group for that site, something like:
/var/www/site-a www-data group-a
/var/www/site-b www-data group-b
etc.
3) chmod every directory (and all subdirectories and files, with -R) to 770
4) add each developer to every group for which he is actually developing.
A different approach, as I mentioned in another answer, would be to
provide the crypto keys via an API when an application asks for them.
Your trusted devs would then query the API with a unique key to get the relevant credentials. The key can be mapped to a set of credentials (for devs on several projects).
If you protect the API with either a client certificate or IP filtering, you reduce the risk of a data leak (if the access key is lost, an attacker still needs to be on the right network, or to have the certificate, to access the API). I would favor the certificate if you trust the developers (per your comment).
The simplest solution is to run the application that manages the credentials and hands them out to the developers from a different instance of the webserver (obviously listening on a different port). You can then run that instance as a different user and tighten down the permissions so that only that user has access to the secret files it needs.
But create an additional user; don't run it as root.
Under Apache I'd point you to suexec or suPHP, but since you don't use Apache, that's not an option for you.
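A rough sketch of that separate instance (the user name, port, file location, and key names are all assumptions): a tiny endpoint served by the second webserver process, running as its own user, reads a credentials file that only that user can read and answers local requests for a single key.
<?php
// secrets.php - illustrative sketch. Served by a second webserver instance
// that runs as its own user (say 'secrets') and listens only on
// 127.0.0.1:8081. /etc/app-secrets/db.ini is owned by 'secrets' with mode
// 0600, so code running as www-data cannot read it directly.
if ($_SERVER['REMOTE_ADDR'] !== '127.0.0.1') {
    http_response_code(403);
    exit;
}
$store = parse_ini_file('/etc/app-secrets/db.ini', true);
$key   = isset($_GET['key']) ? $_GET['key'] : '';   // e.g. "user_manager.client1.ro"
if (!isset($store[$key])) {
    http_response_code(404);
    exit;
}
header('Content-Type: application/json');
echo json_encode($store[$key]);   // host, dbname, user, password for that key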

Is it possible to restrict what commands php can pass through exec at an OS level?

I am currently hosting a Drupal 6 site on a CentOS machine. The Drupal (CMS) configuration contains a few dozen third-party modules that should not be forked as a general best coding practice. However, some of these modules make use of the php exec command in order to function properly.
The site allows for admins to embed php code snippets in any page via a UI configuration, granted that they have access to the php code input format. I need to keep this input format available to admins because there are several nodes (pages) and panel panes that make use of small, harmless php code snippets, like embedding a specific form into the content region, for example.
The issue is that if someone were to compromise an admin account, then they could run arbitrary php code on the site, and thus run shell commands via php's exec, passthru, etc. Is there any way, from an operating system level, to restrict what shell commands php can pass through to the machine? Could this be done via restricting file permissions to some programs from php?
Note: I cannot use the php.ini disable_functions directive as I still need exec to function normally for many cases, where modules make use of certain shell commands, like video encoding, for example.
If your administrator accounts get hacked, you are doomed. You are trying to be less doomed, but this will not work.
Disabling exec()
This is only a fraction of all the functions that are able to make calls to the system. There are plenty more, like passthru(), shell_exec(), popen(), proc_open(), the backtick operator, and so on. Will not help.
Restricting available executables
This would work only if the attacker cannot bring his own executables. But file_put_contents() can write an executable to the hard disk, and it can then be called. Will also not help.
PHP cannot do any harm itself, can it?
Wrong. Executing stuff on the server via exec() might seem the best idea, but PHP itself is powerful enough to wreak havoc on your server and anything that is connected to it.
I think the only real solution is:
Do not allow your admin accounts to get hacked.
And if they do get hacked, be able to know it immediately. Be able to trace an attack back to an administrator. Be able to know exactly what the attacker did to your machine so that you might be able to undo it. A very important part is implementing an audit-trail logger that saves its data on a different machine.
And you'd probably implement tighter restrictions on who can log in as an administrator in the first place. For example, there probably is no need to allow the whole world's IP range to log in if you know for sure that one particular admin always works from the IP range of his local ISP. At the very least, ring a bell and inform somebody that a login from China is going on if this is not expected (and unless you are operating in China :-)).
And there is two-factor authentication. You could send an SMS with an additional login code to the admin's phone number. Or you might be able to completely outsource the login by implementing Google or Facebook authentication; these players already have the infrastructure to support this.
Additionally, you get higher resistance against inside jobs. People value their personal social network logins more highly than the login for their employer: getting someone's Facebook password costs $30 on average, while the company login is already shared at $8. Go figure...
To answer your question: in theory, if you created an extremely restricted user account that PHP could run commands as, you could tune your installation to be more secure. However, the real problem is that administrator users are able to execute arbitrary commands. If that happens, you will have a significantly larger problem on your hands.
The real solution here is:
The ability to submit and run arbitrary code from within Drupal is a significant risk that you can mitigate by not doing it. I strongly recommend re-designing those "harmless" bits of code, as they will lead to a compromise; arbitrary code execution is only one kind of exploit, and there are many others to worry about.
The modules that require running shell commands are also significant security vulnerabilities. In some cases I've been able to fork/patch or replace modules executing commands with ones that don't, but in some cases (e.g. video encoding) it cannot be avoided. In that situation, I would set up a very restricted backend service that the frontend can communicate with. This separates your concerns and leaves Drupal to do what it was intended to do: manage and serve content.
This isn't at the OS level, but Drupal core includes a module called PHP. You could use it as a base and create a custom module that extends its functionality, then enable your module instead of the Drupal 6 core module. The big problem with this, however, comes with disabling the Drupal 6 core module and then enabling your new one. I'd test it on a dev install to make sure previous content is not deleted and that the new module correctly parses stored PHP input. It should be alright, as the module's install code only has a disable hook warning that PHP content will now show as plain text.
As for extending the core module: it's a very simple module to start from. You could hardcode a list of allowed or disallowed commands, then check the exec statement, with its variables resolved, against this list and do whatever is appropriate. It's not a perfect match for blocking the programs themselves at the OS level, but it's better than nothing. To hardcode a list, you'll simply want to modify the php_filter hook at the bottom of the module file and do your test before calling drupal_eval.
You could also extend this module to be configurable in the Drupal admin interface, to create that list instead of hardcoding it. Hope this idea helps.
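As a very rough sketch of that hardcoded-list idea (the function name is invented, and wiring it into the module's filter hook is left out), the check performed before drupal_eval() could look something like the following. Note that this is a best-effort filter, not a security boundary; obfuscated code can evade it.
<?php
// Hypothetical helper for a custom Drupal 6 PHP-filter module.
// Scans submitted code for exec()-style calls and rejects anything
// whose first shell word is not on the allow list.
function mymodule_code_is_allowed($code) {
  $allowed = array('ffmpeg', 'convert');   // commands the site legitimately needs
  if (!preg_match_all('/\b(?:exec|shell_exec|system|passthru)\s*\(\s*[\'"]([^\'"\s]+)/',
                      $code, $matches)) {
    return TRUE;                           // no shell calls found
  }
  foreach ($matches[1] as $cmd) {
    if (!in_array(basename($cmd), $allowed)) {
      return FALSE;                        // unlisted command: reject the snippet
    }
  }
  return TRUE;
}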
Another approach:
We believe that we need to create a test user that only has access to the system to perform a telnet to another machine on the network. Since we only need to run telnet, we must restrict the other commands available in a standard bash session. Let's configure everything step by step.
1) Create the user test
This will be a regular user of the system, created like any normal user. The only peculiarity is that we change that user's shell. The default is usually /bin/bash, and we will set /bin/rbash instead. rbash is actually a copy of bash, but it is a "restricted bash".
shell> adduser --shell /bin/rbash test
2) Create the .bash_profile file
We must create this file in the home directory of the user we just created and to whom we want to apply the restrictions. The contents of the file are as follows:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
PATH=$HOME/apps
export PATH
3) Prevent changes
Once the file is created, we make it immutable so that nobody can modify it:
shell> chattr +i /home/test/.bash_profile
4) Create the apps directory and link the 'allowed' programs
Now that everything is set up, we only have to create the apps directory and, inside it, create links to the programs the user should be allowed to run. The user can run the programs inside apps, but nothing else.
shell> mkdir /home/test/apps
shell> ln -s /usr/bin/telnet /home/test/apps/
5) Verify that it works
Now we can log into the system and verify that it works correctly.
shell> ssh test@remote
test@remote's password:
shell@remote> ls
-rbash: ls: command not found
shell@remote> cd
-rbash: cd: command not found
shell@remote> telnet
telnet>
Team:
Here's my approach.....
I created a small script with a list of excluded commands; depending on the command, execution is controlled to avoid issues. I'm not sure whether this is within the scope of the question. The sample code uses Windows commands, but you can use the same approach on Linux as well...
<?php
ini_set('display_errors', 1);
error_reporting(E_ALL);

function executeCommand($my_command) {
    // Commands that must never be executed.
    $commandExclusions = array('format', 'del');
    if (in_array(strtolower($my_command), $commandExclusions)) {
        echo "Sorry, <strong>" . $my_command . "</strong> command is not allowed";
        exit();
    } else {
        // exec() returns the last line of the command's output; the full
        // output is collected in $result and the exit status in $errorCode.
        echo exec($my_command, $result, $errorCode);
    }
}

echo "<h3>Is it possible to restrict what commands php can pass through exec at an OS level?</h3><br>";
echo "********************************<br>";
// test of an accepted command
echo "test of an accepted command:<br>";
executeCommand("dir");
echo "<br>********************************<br>";
echo "test of an unaccepted command:<br>";
// test of an unaccepted command
executeCommand("format");
echo "<br>********************************<br>";
?>
Output:
Is it possible to restrict what commands php can pass through exec at an OS level?
test of an accepted command:
117 Dir(s) 11,937,468,416 bytes free
test of an unaccepted command:
Sorry, format command is not allowed
