There are many ways to detect whether PHP code is running on localhost or on a remote server. However, most of them rely on $_SERVER values and HTTP headers, which can be faked by the user.
This matters to me because I have built an interactive PHP developer shell on my website, which should return a 404 unless it is accessed from localhost.
The most straightforward answer is $_SERVER["REMOTE_ADDR"]. It's generally considered reasonably safe.
However, if you provide access to command line commands through your script, it may not be enough. It may be possible to send a request to your script from the outside through IP spoofing. That may be enough to trigger a destructive command even though IP spoofing usually means that the attacker will not receive a response. (This is a very esoteric scenario and I know little more about it than that it may be possible.)
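As a minimal sketch of this REMOTE_ADDR check (the helper name is my own, and the caveats above about spoofing still apply):

```php
<?php
// Hypothetical helper: true only when the request originates from the
// loopback interface (IPv4 or IPv6). REMOTE_ADDR comes from the TCP
// connection itself, so unlike HTTP headers it is not trivially forged.
function is_local_request(string $remoteAddr): bool
{
    return in_array($remoteAddr, ['127.0.0.1', '::1'], true);
}

// In the web script: pretend the page does not exist for outsiders.
if (PHP_SAPI !== 'cli' && !is_local_request($_SERVER['REMOTE_ADDR'] ?? '')) {
    http_response_code(404);
    exit;
}
```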
What you could do:
Instead of checking the IP from within PHP, make sure the page can not be accessed from the outside using tougher means, for example by setting up a hardware or software firewall that prevents any access from outside, or configuring your web server to listen only to local requests.
Instead of checking the IP from within PHP, protect the page by using some sort of password authentication.
Talk to a security expert (maybe on http://security.stackexchange.com), explain your network setup and ask for opinions whether IP spoofing is a possibility in your specific scenario.
Make your script available through CLI, the server's local command line, instead of the web server. Place your script outside the web server's root. (This option will probably defeat your specific purpose of having an interactive shell, though)
Or you can of course trust that no one will ever find out. If this is for a low-risk, private project, thinking about IP spoofing is probably overthinking it massively.
I believe you are looking for $_SERVER['REMOTE_ADDR'].
Check it against 127.0.0.1 (or ::1 for IPv6), or a LAN IP of your choice.
Pekka 웃's answer goes into further detail on how this may be spoofed.
$serverList = array('127.0.0.1', '::1');
if (!in_array($_SERVER['REMOTE_ADDR'], $serverList)) {
    exit;
}
Note that $_SERVER['HTTP_HOST'] is taken from the request's Host header, which the client controls, so it can be faked; $_SERVER['REMOTE_ADDR'] comes from the TCP connection and is much harder to spoof.
Related
I have some PHP scripts on my server which I use for periodic cron jobs (for daily rapports and updating leaderboards for example).
To prevent outsiders from running these scripts manually (for example by opening http://url/script.php in a browser), I included the following code to check the IP address before running the actual script, where XX.XX.XX.XX represents the IP address of my own network.
$remote = isset($_SERVER["REMOTE_ADDR"]) ? $_SERVER["REMOTE_ADDR"] : '127.0.0.1';
$whitelist = array('XX.XX.XX.XX', '127.0.0.1');
if (!in_array($remote, $whitelist))
{
exit;
}
So now I have the following questions:
How safe is this?
What are the risks?
How can I make this more safe?
Are there other (better) solutions?
PS. My previous question was closed because someone thought it was a duplicate of PHP IP Address Whitelist with Wildcards. But this is not the case! That question is about using wildcards in a whitelist, while this question is about the safety and risks of this approach.
The presented method is not completely secure.
PHP acts as a text preprocessor, which means that in the event of a web server gateway error the script source can be served with the MIME type text/html, risking disclosure of sensitive data such as SQL database passwords or (S)FTP credentials.
Administrative scripts placed in a public directory also carry the risk of unauthorized execution if the IP address checked in the script is shared (or dynamically assigned). Cron scripts are executed with php-cli, so the web server gateway is not needed at all, and the IP check becomes unnecessary once the script lives outside the public directory.
Remote execution (e.g. via curl) is usually the only reason to place administrative scripts in the public space of the web server. This is generally a weak solution, because the script then runs under the web server's PHP interpreter (not php-cli) with different settings, usually a drastically limited execution time. If it is necessary for some reason, the script should live in a separate directory with access limited to specific IP addresses using .htaccess (and/or iptables), and with an assigned username and password via htpasswd (Basic Auth).
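As an illustration, the kind of lockdown described above might look like this in an Apache 2.2-style .htaccess for that directory (the IP, realm name, and htpasswd path are placeholders; Apache 2.4 would use Require ip / Require valid-user instead):

```apache
# Restrict the admin directory to one IP and additionally require Basic Auth
AuthType Basic
AuthName "Admin area"
AuthUserFile /home/username/.htpasswd
Require valid-user

Order deny,allow
Deny from all
Allow from 203.0.113.10
# Require both the IP check and the password to pass
Satisfy all
```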
The ideal situation is when the public directory of the www server (hereinafter referred to as public) contains only static content (img, css, js ... files) and the application trigger located in the parent directory. Example structure is:
/home/username/domainname/(apps,crons,public,tmp)
The apps directory should contain all application files and directories. The public directory should contain only static content (organized into subdirectories for tidiness) and a symbolic link to the application's main file, which can be created with the command:
ln -s ../apps/app.php index.php
Some server configurations do not allow the use of symlinks. Then you can use the index.php file containing:
<?php
include('/home/username/domainname/apps/app.php');
This solution is slightly worse because, in the event of a gateway failure, the directory structure is revealed. However, sensitive data remains secure, because the web server cannot display the contents of files that are not inside its public directory.
The presented IP analysis can be used to show parts of the content to authorized addresses, assuming the PHP file itself is outside the public web root. If these are entire websites, however, I would prefer to manage access to them with iptables or .htaccess.
How safe is this?
Realistically, it's pretty safe as long as you're in control of the address (127.0.0.1 is okay, XXX.XXX.XXX.XXX might not be). By pretty safe I mean that the chance of someone abusing this check is small compared with the chance of them abusing the rest of the web application.
What are the risks?
Someone might call your script from outside if they had a way of assuming the IP address XXX.XXX.XXX.XXX in some way, or of tricking the system into believing they had.
How can I make this more safe?
You can include a secret in the original call, and check it against a hash of the same secret. The secret is not revealed even if someone can read the script.
if (!array_key_exists('key', $_GET)) {
die('Access denied');
}
if (sha1($_GET['key']) !== '713dca7cf928f23a2347cae828d98879629e1e80') {
die('Access denied');
}
You can also place the script outside the web root and call it through a require statement. This way, either the PHP subsystem works and the script cannot be read, or it does not work and all that's revealed is the name of an inaccessible directory. You can even merge the two approaches:
if (sha1($_GET['key']) !== '713dca7cf928f23a2347cae828d98879629e1e80') {
die('Access denied');
}
$realScript = $_GET['key'];
require $realScript;
Now, the only script that can be included is the one whose name has that specific SHA1 hash, and no other (the risk of collisions is realistically negligible: you would need a collision with a valid filename, and the means of creating such a filename). So you know that the script is valid, but unless the name is supplied in the call, the whole construction will not work, and it will not even tell an attacker why.
curl http://yoursite.internal.address/cron/cron.php?key=../scripts7ab9ceef/mycron.php
Are there other (better) solutions?
Yes and no. Calling the script using the command line interface is just as safe, and does not need a working webserver. It also allows running as a different user if needed.
On the other hand, it requires a command-line interface to be installed, which may create other security issues. Even if the same user and home directory are used, the two interfaces might still behave subtly differently (or not so subtly: you might have a PHP 7.3 web module and a PHP 5.2 CLI installation, or vice versa, which would make a script using short array syntax, or constructs like if (empty(some_function($_GET['x']))), not even load in one or the other interface).
All in all, a crontab call to curl or lynx is probably more maintainable and simpler to use, even if it is undoubtedly less efficient.
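For reference, the two scheduling variants discussed above might look like this in a crontab (the paths, URL, and key are placeholders):

```
# Run directly with the CLI interpreter (script kept outside the web root):
*/15 * * * * /usr/bin/php /home/username/scripts/mycron.php

# Or fetch the web-exposed script with curl, passing the secret key:
*/15 * * * * curl -s 'http://yoursite.internal.address/cron/cron.php?key=SECRET'
```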
I'm hosting a couple of sites for my friends, which they can edit using SFTP. But I recently stumbled upon something quite alarming.
Using: <?php echo $realIP = file_get_contents("http://ipecho.net/plain"); ?>
They are able to get the real IP of the server. I'm using CloudFlare to "mask" the IP from the outside, so that part is quite safe. I know I could use a VPN for this, but that is a rather expensive option. Is there any way to prevent them from using methods like this to obtain the real server IP?
I would just generally secure the server in a way that is similar to typical shared hosting accounts.
While this is on the edge of what I know (I am by no means an expert on server security), I do know a few things I have run into over the years.
disable stream wrappers for remote files. This is controlled by the allow_url_fopen setting in PHP, and disables opening remote files with fopen, file_get_contents, and the like.
disable some of the more dangerous PHP functions. These include, but are not limited to, shell_exec, exec, and popen. They can likewise be disabled in php.ini by adding them to the disable_functions list. (Note that eval is a language construct, not a function, so disable_functions cannot block it.)
remove shell access for the users. They will still be able to authenticate for SFTP file transfers, but they will not be able to log in via SSH through something like PuTTY. You can modify the users like this: usermod -s /sbin/nologin myuser. For more details, see this post on Unix.StackExchange.
set up a test account with the same access your clients (the people you provide hosting for) have, and test what works and what doesn't. This will give you a bit of an idea of what they can and can't do, and it gives you a place to test configuration changes before applying them globally.
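The first two bullets translate into php.ini settings roughly like this (a sketch; tune the function list to your own needs):

```ini
; Disallow fopen()/file_get_contents() on http:// and ftp:// URLs
allow_url_fopen = Off
allow_url_include = Off

; Block the most dangerous execution functions
; (the directive is disable_functions, not "disabled_functions";
; eval is a language construct and cannot be disabled this way)
disable_functions = shell_exec,exec,system,passthru,popen,proc_open
```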
I am sure there are many more things you can do, and I can't really go into a whole article on server security here. As I said, I am by no means an authority on the subject, so the last thing I would say is: do as much research as you can and see what others are doing for shared hosting servers, because that is basically what you have.
I did find this Post on 14 best practices on server security.
http://www.hostingadvice.com/how-to/web-hosting-security-best-practices/
This just gives a high-level overview of some of the concerns and doesn't get overly technical.
This is a pretty big topic, with many pitfalls, but I hope my limited knowledge at least gets you started down the road of securing your server. And remember, it's your server you get to say what the policies are on it.
That said, it is very important to communicate with your users about any policy changes. They have pretty much had free rein up to this point. But if you explain that it's in their interest, because it not only protects the server but also their data, it may go over a bit easier. They have a right to know, and you have an obligation to tell them, so that they can make any necessary changes to their code. But again, it is ultimately your server, and it's your responsibility to make it as secure as you can.
Good Luck!
I was wondering about creating something along the lines of what the title implies.
There are many websites that compare prices on goods, and how they go about it is quite simple.
Place a file on the client's server and target it with your own server at a specific point in time.
Within that file, any executable code would only execute upon authorisation.
What I commonly see is:
$required_ip = gethostbyname('admin.mydomain.com');
if ($_SERVER['REMOTE_ADDR'] != $required_ip) {
die('This file is not accessible.');
}
// Do some stuff like turn the remote product data into xml format and export to your local server
What I would like to find out is, firstly, how secure is this method? I am quite sure there are a few ways to get around it, and if anyone could suggest a way to bypass it, that would be great!
My goal however, is to reverse this process. So that once authenticated, data can be pushed to the remote server. It is one thing to extract but another to input so I am worried that this type of functionality could create serious security issues. What I would like to do, is find out how I could possibly work around that to make what could be a safe "datapusher".
Any advice, feedback or input would be greatly appreciated; thanks in advance!
(Paraphrasing your questions:)
How secure is it to do a DNS lookup and use the result to authenticate a client?
Reasonably secure, though by no means perfect. The first problem is that the IP it resolves to may encompass quite a number of different machines, if it's pointing towards a NATed network. An attacker could pose as the correct remote IP if they're able to send their requests from somewhere within that network; or simply by tunnelling requests through it in one way or another. Essentially, the security lies in the hands of the owner of that domain/IP address, and there are numerous ways to screw it up.
In reverse, an attacker may be able to poison the DNS resolver that's used to resolve that IP address, allowing the attacker to point it to any IP address he pleases.
Both of these kinds of attacks are not infeasible, though not trivial either. If you're sending information which isn't terribly confidential, it's probably a "good enough" solution. For really sensitive data it's a no go.
How to ensure the identity of a remote server I'm pushing data to?
With your push idea, all your server really needs to do is to send some HTTP request to some remote server. There isn't even really any need for anyone to authenticate themselves. Your server is voluntarily pushing data to another system, that system merely needs to receive it; there's no real case of requiring an authentication.
However, you do want to make sure that you're sending the data to the right remote system, not to someone else. You also want to make sure the communication is secured. For that, use SSL. The remote system needs to have a signed SSL certificate which verifies its identity, and which is used to encrypt the traffic.
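A sketch of such a push using PHP's curl extension, with peer certificate verification left on so the remote system's identity is actually checked against its SSL certificate (the URL and payload shape are placeholders of my own):

```php
<?php
// Push a JSON payload to the remote collector over HTTPS. Verifying the
// peer certificate (curl's default) is what ties the connection to the
// remote system's identity, as described above.
function push_data(string $url, array $payload): bool
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode($payload),
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_SSL_VERIFYPEER => true, // the default; never turn this off
        CURLOPT_SSL_VERIFYHOST => 2,    // hostname must match the certificate
    ]);
    $body = curl_exec($ch);
    $ok   = $body !== false
        && curl_getinfo($ch, CURLINFO_HTTP_CODE) < 300;
    curl_close($ch);
    return $ok;
}
```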
Can XAMPP configured with virtual hosts pose a security threat if it runs on a computer used as a normal computer and not as a server? Example: I want to run XAMPP with virtual hosts on my mom's Mac laptop. She uses it for email, web surfing, etc. Would this put her at risk of being hacked, or her personal information at risk? Thank you in advance!
You can deny all IP addresses and add a single exception, so that no one else can access the web server.
I think the phpMyAdmin bundled with XAMPP and AppServ has had vulnerabilities that can lead to shell access.
.htaccess file to deny by IP:
order deny,allow
deny from all
allow from 111.222.333.444
111.222.333.444 => the allowed IP (a placeholder; use your own address)
If you deny port 80 for every other IP, it should be safe :D
By the way, you should also take care with the services you run on your operating system.
There is no such thing as "a server", not in the way you apparently understand the term. "A server" is nothing but an ordinary computer that "serves" something, hence the term. Every computer can be a server and every computer can be a client. Somehow the term "a server" has become common shorthand for systems used mainly to serve things, such as a "web server". That does not change the fact that the expression does not really make sense from a technical point of view.
Certainly, running server software on any system adds a security risk to that system, since it exposes the system and makes it communicative. That is the point of running server software, so there is no way around it. You can only try to limit the security threat; you cannot prevent it.
In the end, it absolutely depends on what you serve with XAMPP used as "server software". There are many, many web applications out there that pose high security risks on the systems they run on: badly programmed applications, unknown issues, the whole spectrum.
In addition, the configuration of the server software is crucial when it comes to security. You can easily leave wide-open gaps and accidentally make private data accessible to the public, or risk the system being taken over and turned into a bot or a zombie. So you should limit access to the system as far as possible: block everything you do not really need, and preferably do not connect such a system to the internet if you only need it internally.
So the answer clearly is YES, THERE IS A SECURITY RISK.
You should only run server software if you know what you do and if you know how to handle it.
And you certainly should never do that on someone else's system!
The answer here is: it depends.
If you are using this for internal support, then no. This will not cause a serious security risk on her computer. At least no more than anything else you do. Yes, by adding the server software, you can have other computers connect to it, but it depends on what you are doing with this server. If you are using it for internal development, you'll be fine. If you are THAT worried about it, make sure you aren't connected to the internet. However, if you are trying to host a web site with high traffic, then you may want to consider another option, like an actual server, rather than a local host.
To be honest, there is a security risk in EVERYTHING you do. Connecting to the internet in a coffee shop. Visiting web sites. Opening emails. They are ALL risks. So you just have to understand what you are doing. By adding virtual hosts, you can connect one computer to another; however, if this is for testing, and doing things internally, the chances of someone getting that IP address is extremely low.
Is it possible to implement P2P using just PHP? Without Flash or Java, and obviously without installing some sort of agent/client on anyone's computer.
So even though it might not be "true" P2P, it could use a server to establish the connection somehow, but the rest of the communication must be done peer to peer.
I apologize for the slight miscommunication: by "PHP" I meant not the PHP binary, but a PHP script hosted on a web server remote from both peers, so each peer has nothing but a browser.
without installing some sort of
agent/client on one's computer
Each computer would have to have the PHP binaries installed.
EDIT
I see in a different post that you mentioned this is browser-based. Security restrictions in JavaScript would prohibit this type of interaction.
No.
You could write a P2P client / server in PHP — but it would have to be installed on the participating computers.
You can't have PHP running on a webserver cause two other computers to communicate with each other without having P2P software installed.
You can't even use JavaScript to help — the same origin policy would prevent it.
JavaScript running a browser could use a PHP based server as a middleman so that two clients could communicate — but you aren't going to achieve P2P.
Since 2009 (when this answer was originally written), the WebRTC protocol was written and achieved widespread support among browsers.
This allows you to perform peer-to-peer between web browsers but you need to write the code in JavaScript (WebAssembly might also be an option and one that would let you write PHP.)
You also need a bunch of non-peer server code to support WebRTC (e.g. to allow peer discovery and to proxy data around firewalls), which you could write in PHP.
It is not feasible, because a server-side application (PHP) does not have access to a peer's system, which is required to define ports, IP addresses, etc. in order to establish a socket connection.
ADDITION:
But if you were to run PHP on each peer's own web server, that might give you what you're looking for.
Doesn't peer-to-peer communication imply that communication is going directly from one client to another, without any servers in the middle? Since PHP is a server-based software, I don't think any program you write on it can be considered true p2p.
However, if you want to enable client to client communications with a php server as the middle man, that's definitely possible.
Depends on if you want the browser to be sending data to this PHP application.
I've made IRC bots entirely in PHP though, which showed their status and output in my web browser in a fashion much like mIRC. I just set the timeout limit to infinite and connected to the IRC server using sockets. You could connect to anything though. You can even make it listen for incoming connections and handle them.
What you can't do is get a browser to keep a two-way connection open without breaking off requests (not yet, anyway...).
Yes, but it's not what's generally called P2P, since there is a server in between. I have a feeling, though, that what you want is simply to have your peers communicate with each other, rather than a direct connection between them with no 'middleman' server (which is what is normally meant by P2P).
Depending on the scalability requirements, implementing this kind of communication can be trivial (simple polling script on clients), or demanding (asynchronous comet server).
In case someone comes here to see whether you can write P2P software in PHP, the answer is yes. In this case, Quentin's answer to the original question is correct: PHP would have to be installed on the computer.
You can do whatever you want in PHP, including writing true P2P software. To create a true P2P program in PHP, you would use PHP as an interpreted language WITHOUT a web server, and you would use sockets, just like you would in C/C++. The original accepted answer is both right and wrong; if, however, the original poster was asking whether PHP running on a web server could be a P2P client, then the answer is of course no.
To do this, you'd write a PHP script that:
Opens a server socket connection (stream_socket_server/socket_create)
Finds a list of peer IPs
Opens a client connection to each peer
...
Prove everyone wrong.
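A CLI sketch of steps 1 to 3 above, using stream sockets (the peer addresses are placeholders; a real program would discover them via some tracker, and would then multiplex all sockets with stream_select()):

```php
<?php
// Step 1: open a listening socket. 'tcp://0.0.0.0:6001' would accept
// peers on all interfaces; port 0 asks the OS for any free port.
function open_listener(string $addr = 'tcp://127.0.0.1:0')
{
    $server = stream_socket_server($addr, $errno, $errstr);
    if ($server === false) {
        die("listen failed: $errstr\n");
    }
    return $server;
}

// Steps 2 and 3: dial out to each known peer, skipping unreachable ones.
function connect_peers(array $peers, float $timeout = 5.0): array
{
    $connections = [];
    foreach ($peers as $peer) {
        $c = @stream_socket_client($peer, $errno, $errstr, $timeout);
        if ($c !== false) {
            $connections[] = $c;
        }
    }
    return $connections;
}

// Example wiring (placeholder addresses):
// $server = open_listener('tcp://0.0.0.0:6001');
// $peers  = connect_peers(['tcp://192.0.2.10:6001', 'tcp://192.0.2.11:6001']);
```

Run it with the CLI interpreter (php node.php); the "..." step, the actual protocol spoken over the sockets, is where the real work lives.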
No, not really. PHP scripts are meant to run for only a short time; the default maximum runtime (max_execution_time) is 30 seconds, which will normally not be enough for P2P communication. After that the script is terminated, though the server administrator can raise or disable the limit. But even then, the HTTP connection between the server and the client must be held open for the whole download, and the client's browser will show its page-loading indicator that entire time. If the connection breaks, most web servers will kill the PHP script, so the P2P download is canceled.
So it may be possible to implement the p2p protocol, but in a client/server scenario you run into problems with the execution model of php scripts.
Both parties would need to be running a web server such as Apache, although for demonstration purposes you could get away with just the built-in PHP test server. Next, you will have to research firewall hole punching in PHP; I saw a script for it, I think on GitHub, but that was a long time ago. Yes, it can be done. If your client is not a savvy programmer type, you would probably need to ensure that they have PHP installed and running. On Windows, the PATH variable may not work unless it is added to the system environment, so make sure you provide a .bat file that ensures the path is set so Windows can find PHP. (Sorry, I am not a Linux user.)
Next you have to develop the code. There are instructions for how hole punching works, and it does require a server on the public internet so that the two computers can find each other's IP addresses. Maybe you could rig up something on a free host such as www.000.webhost.com, or alternatively use some built-in mechanism, such as the person's email address, to report the current IP.
The biggest problem is routers and firewalls, but since packets directed at a public IP still need to know the destination on the LAN, the information on how to write the packet should be straightforward. With any luck you might find a script that has done most of the work for you.