Security vulnerabilities with file_get_contents() using variable location - php

Part of my site's application process is that a user must prove ownership of a website. I quickly threw together some code but until now didn't realize that there could be some vulnerabilities with it.
Something like this:
$generatedCode="9s8dfOJDFOIesdsa";
$url="http://anyDomainGivenByUser.com/verification.txt";
if(file_get_contents($url)==$generatedCode){
//verification complete!
}
Is there any threat to having a user-provided url for file_get_contents()?
Edit: The code above is just an example. The generatedCode is obviously a bit more elaborate but still just a string.

Yes, this could possibly be a Server Side Request Forgery vulnerability - if $url is dynamic, you should validate that it is an external internet address and the scheme specifies the HTTP or HTTPS protocol. Ideally you'd use the HTTPS protocol only and then validate the certificate to guard against any DNS hijacking possibilities.
If $url is user controllable, they could substitute internal IP addresses and probe the network behind the firewall using your application as a proxy. For example, if they set the host in $url to 192.168.123.1, your script would request http://192.168.123.1/verification.txt and they might be able to ascertain that another machine is in the hosted environment due to differences in response times between valid and invalid internal addresses. This is known as a Timing Attack. This could be a server that you might not necessarily want exposed publicly. Of course, this is unlikely to attack your network in isolation, but it is a form of Information Leakage and might help an attacker enumerate your network ready for another attack.
You would need to validate the URL, or the DNS it resolves to, each time it is requested; otherwise an attacker could point it at an external address to pass validation, then immediately re-point it to an internal address and begin probing.
file_get_contents in itself appears safe, as it retrieves the URL and places it into a string. As long as you're not processing the string in any script engine or using it as an execution parameter, you should be safe. file_get_contents can also be used to retrieve a local file, but if you validate that it is a valid internet-facing HTTP URL as described above, this measure should prevent reading of local files should you decide to show the user what verification.txt contained in case of a mismatch. In addition, if you were to display the contents of verification.txt anywhere on your site, you should make sure the output is properly encoded to prevent XSS.
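A minimal validation sketch along those lines (illustrative only, not a complete SSRF defence; the function name and the private-range check are my own choices, and note the DNS re-resolution caveat above):
// Sketch: validate a user-supplied domain before fetching its verification file.
// Allows HTTP/HTTPS only and rejects hosts resolving to private/reserved ranges.
function fetch_verification_file($userUrl)
{
    $parts = parse_url($userUrl);
    if ($parts === false || !isset($parts['scheme'], $parts['host'])) {
        return false;
    }
    if (!in_array(strtolower($parts['scheme']), array('http', 'https'), true)) {
        return false; // no file://, php://, ftp://, etc.
    }
    // Resolve the host and reject private/reserved ranges (192.168.x.x, 10.x.x.x, ...)
    $ip = gethostbyname($parts['host']);
    if (filter_var($ip, FILTER_VALIDATE_IP,
            FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE) === false) {
        return false;
    }
    // Fetch only the expected path on that host, never a user-supplied path.
    // Caveat: file_get_contents resolves the host again, so this check alone
    // is not airtight against an attacker re-pointing the DNS record.
    return file_get_contents($parts['scheme'] . '://' . $parts['host'] . '/verification.txt');
}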

Related

PHP E-Commerce Platforms - Reversing a "datafeed" to create a "datapush" - Risks involved?

I was wondering about creating something along the lines of what the title implies.
There are so many websites that compare prices on goods and how they go about it is quite simple.
Place a file on the client's server, and target it with your own server at any specific point in time.
So, within that file, any code that is executable would only execute on authorisation.
What I commonly see is:
$required_ip = gethostbyname('admin.mydomain.com');
if ($_SERVER['REMOTE_ADDR'] != $required_ip) {
die('This file is not accessible.');
}
// Do some stuff like turn the remote product data into xml format and export to your local server
What I would like to find out is firstly, how secure is this method? I am quite sure there are a few ways to get around this and if anyone could suggest a way to bypass this situation then that would be great!
My goal however, is to reverse this process. So that once authenticated, data can be pushed to the remote server. It is one thing to extract but another to input so I am worried that this type of functionality could create serious security issues. What I would like to do, is find out how I could possibly work around that to make what could be a safe "datapusher".
Any advice, feedback or input would be greatly appreciated; thanks in advance!
(Paraphrasing your questions:)
How secure is it to do a DNS lookup and use that to authenticate a client?
Reasonably secure, though by no means perfect. The first problem is that the IP it resolves to may encompass quite a number of different machines, if it's pointing towards a NATed network. An attacker could pose as the correct remote IP if they're able to send their requests from somewhere within that network; or simply by tunnelling requests through it in one way or another. Essentially, the security lies in the hands of the owner of that domain/IP address, and there are numerous ways to screw it up.
In reverse, an attacker may be able to poison the DNS resolver that's used to resolve that IP address, allowing the attacker to point it to any IP address he pleases.
Both of these kinds of attacks are not infeasible, though not trivial either. If you're sending information which isn't terribly confidential, it's probably a "good enough" solution. For really sensitive data it's a no go.
How to ensure the identity of a remote server I'm pushing data to?
With your push idea, all your server really needs to do is send an HTTP request to some remote server. There isn't even really any need for anyone to authenticate themselves: your server is voluntarily pushing data to another system, and that system merely needs to receive it, so there's no real case for requiring authentication.
However, you do want to make sure that you're sending the data to the right remote system, not to someone else. You also want to make sure the communication is secured. For that, use SSL. The remote system needs to have a signed SSL certificate which verifies its identity, and which is used to encrypt the traffic.
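As a rough sketch of what that push might look like with certificate verification turned on (the endpoint URL and payload variable are placeholders):
// Sketch: push product data to the remote server over HTTPS, verifying the
// remote certificate so we know who we're talking to.
$ch = curl_init('https://client-shop.example.com/datapush.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('data' => $xmlPayload));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);  // verify the certificate chain
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);     // verify the hostname matches the cert
$response = curl_exec($ch);
if ($response === false) {
    error_log('Push failed: ' . curl_error($ch));
}
curl_close($ch);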

Use of .htaccess to mitigate denial of service attacks

I have an application that requires logon.
It is only possible to access the site via a single logon page.
I am concerned about DDoS and have (thanks to friends here) been able to write a script that will recognise potential DDoS attacks and lock the particular IP to prevent site access (also a security measure to prevent multiple password/username combination guesses).
Is there any value in blocking the offending IPs with .htaccess? I can simply modify the file to prevent my server allowing access to the offending IP for a period of time, but will it do any good? Will the incoming requests still bung up the system even though .htaccess prevents them being served, or will it reduce the load, allowing genuine requests in?
It is worth noting that most of my requests will come from a limited range of genuine IPs, so the implementation I intend is along the lines of:
If a DDoS attack is suspected, allow access, for a set time period, only from IPs from which there has been a previous good logon. Permanently block all suspect IPs from which there has been no good logon, unless a manual request to unblock has been made.
Your sage advice would be greatly appreciated. If you think this is a waste of time, please let me know!
Implementation is pretty much pure PHP.
Load caused by a DDoS attack will be lower if blocked by .htaccess, as the unwanted connections will be refused early and not allowed to call your PHP scripts.
Take, for example, a request made for the login script: your Apache server will call the PHP script, which will (I'm assuming) do a user lookup in a database of some kind. This is load.
Request <---> Apache <---> PHP <---> MySQL (maybe)
If you block an IP (say 1.2.3.4), your .htaccess will have an extra line like this:
Deny from 1.2.3.4
And the request will go a little like this:
Request <---> Apache <-x-> [Blocked]
And no PHP script or database calls will happen; this is less load than the previous example.
This also has the added bonus of preventing bruteforce attacks on the login form. You'll have to decide when to add IPs to a blocklist, maybe when they give incorrect credentials 20 times in a minute or continuously over half an hour.
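If you go the .htaccess route, a rough sketch of the PHP side might look like this (the file path and the decision of when to call it are assumptions, not part of the question):
// Sketch: once an IP has been flagged as abusive, append a Deny rule to .htaccess.
function block_ip_via_htaccess($ip, $htaccessPath = '/var/www/html/.htaccess')
{
    if (filter_var($ip, FILTER_VALIDATE_IP) === false) {
        return false; // never write unvalidated input into .htaccess
    }
    $rule = "Deny from " . $ip . "\n";
    return file_put_contents($htaccessPath, $rule, FILE_APPEND | LOCK_EX) !== false;
}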
Firewall
It would be better to block the requests using a firewall, though, rather than with .htaccess. This way the request never gets to Apache; it's a simple action for the server to drop the packet based on an IP address rule.
The line below is a shell command that (when run as root) will add an iptables rule to drop all packets originating from that IP address:
/sbin/iptables -I INPUT -s 1.2.3.4 -j DROP
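Since you say the implementation is pretty much pure PHP, one rough way to tie the two together is to have PHP shell out to that command. This assumes the web server user has been granted a tightly scoped sudoers entry for this one iptables invocation, which is a significant assumption and needs care:
// Naive sketch: ask the firewall to drop an offending IP from PHP.
function block_ip_via_iptables($ip)
{
    if (filter_var($ip, FILTER_VALIDATE_IP) === false) {
        return false; // never pass unvalidated input to a shell
    }
    $cmd = '/usr/bin/sudo /sbin/iptables -I INPUT -s ' . escapeshellarg($ip) . ' -j DROP';
    exec($cmd, $output, $status);
    return $status === 0;
}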

Better way to get where a request actually came from

I am aware that $_SERVER['HTTP_REFERER'] can be used to detect where a request comes from, but it turns out that it can be spoofed, and a browser might not even send it as part of the request.
Could anyone suggest a better way to detect, or rather to allow, requests from my own domain only, to avoid spoofing, DoS, and other security attacks?
I could be wrong, but I think you are referring to a CSRF attack. The first tutorial that I found is this one.
As @afuzzyllama pointed out, DoS consists of sending more data than your server/network connection can handle. In such a case, your PHP script will not be accessible any more, so you cannot implement DoS protection in your PHP application. This must be done by your network administrator or hosting company.
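For the CSRF side of the question, the usual mitigation is a per-session anti-CSRF token that another site cannot know. A minimal sketch (the field and session key names are just illustrative; hash_equals needs PHP 5.6+):
session_start();
// When rendering the form:
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(32));
}
echo '<input type="hidden" name="csrf_token" value="'
   . htmlspecialchars($_SESSION['csrf_token']) . '">';
// When handling the POST:
if (!isset($_POST['csrf_token'])
        || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
    http_response_code(403);
    die('Request did not originate from this site.');
}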

Consequences of turning off session.cookie_secure in PHP

What are the security risks associated with turning off "session.cookie_secure" in PHP under secure connections? I'm itching to turn this off since I'm unable to access session data set on HTTPS pages from HTTP pages.
The risk is that the cookie data is transferred over plain HTTP. Anyone sniffing packets on the network would be able to view the data in the cookie. Then they can pretend to be you (session hijacking).
Now, some would argue that if someone can sniff packets on the network, they are in a position to execute a MITM attack, so it's not a huge deal. However, this is not 100% correct. Look at what happened with Google: they were able to sniff raw WiFi traffic without actually compromising the network (which would be required for a MITM attack). Sending cookies over HTTP can open up session hijacking attacks that would not be possible if you kept them to HTTPS only.
If you need access to be secure, keep secure_only set. If you don't care about the data (or use multiple-factors, or want to risk it), then open it up...
One potential workaround is to use a custom session handler and set 2 session identifiers (one secure-only). Then you can "log in" via both, yet require the secure one for anything important (such as accessing important data). This would require some work to do correctly, but could be a decent solution to the problem...
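A rough sketch of that two-identifier idea, assuming you manage the second, HTTPS-only token yourself (the cookie and session key names are illustrative, and hash_equals needs PHP 5.6+):
// Ordinary PHP session works over HTTP and HTTPS; a second, secure-only
// cookie is additionally required for sensitive actions.
session_start(); // session.cookie_secure = 0, so this cookie travels over HTTP too
if (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off'
        && empty($_COOKIE['secure_token'])) {
    $token = bin2hex(openssl_random_pseudo_bytes(32));
    $_SESSION['secure_token'] = $token;
    // 6th argument "secure" keeps this cookie off plain HTTP; 7th is HttpOnly.
    setcookie('secure_token', $token, 0, '/', '', true, true);
}
function has_secure_session()
{
    return isset($_COOKIE['secure_token'], $_SESSION['secure_token'])
        && hash_equals($_SESSION['secure_token'], $_COOKIE['secure_token']);
}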

PHP Proxy - Basic Explanation

How does a PHP proxy work?
I am looking to make a little script which is similar to other PHP proxies.
But how does it actually work?
I'm thinking of a PHP proxy used to get around the AJAX Same Origin Policy. If you need a real HTTP proxy, the process is much more complex.
Simplest pseudocode:
get the URL from request (e.g. from $_POST['url'])
reject invalid URLs (e.g. don't make requests to localhost (or within your private subnet, if you have several servers))
(optional) check your script's cache, return cached response if applicable
make request to target URL, e.g. with cURL
(optional) cache response, if applicable
return response
Note: in this simplest form, you are allowing anyone to access any URL on the Internet through your PHP Proxy; some access control should be implemented (e.g. logged-in users only, depending on what you use the proxy for).
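To make the pseudocode concrete, here is a deliberately restrictive sketch using cURL; the allow-list entries are placeholders, and the cache and login steps are omitted:
// Minimal sketch: fetch an allow-listed URL with cURL and pass the response through.
$allowedHosts = array('api.example.com', 'feeds.example.org');
$url   = isset($_POST['url']) ? $_POST['url'] : '';
$parts = parse_url($url);
if ($parts === false || !isset($parts['scheme'], $parts['host'])
        || !in_array($parts['scheme'], array('http', 'https'), true)
        || !in_array($parts['host'], $allowedHosts, true)) {
    http_response_code(400);
    die('Invalid or disallowed URL.');
}
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // don't follow redirects off the allow-list
$body = curl_exec($ch);
$type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);
header('Content-Type: ' . ($type ?: 'application/octet-stream'));
echo $body;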
That's more work than you might think. Simply calling a remote web page and displaying its contents is not enough (that would be readfile('http://google.com') in the simplest case): you have to rewrite the URLs in the HTML document to point back to your own proxy, you need to be able to handle HTTPS (or you would be exposing sensitive data, if the target page needs HTTPS), and many other issues (some of which have been compiled in RFC 3143).
Maybe Apache's mod_proxy has all you need, but if you really want to write one yourself, studying the source code of other projects (like php-proxy) might give you more insight into the matter.
