Consequences of turning off session.cookie_secure in PHP - php

What are the security risks associated with turning off "session.cookie_secure" in PHP under secure connections? I'm itching to turn it off, since session data set on https pages is otherwise inaccessible from http pages.

The risk is that the cookie data is transferred over plain HTTP. Anyone sniffing packets on the network would be able to view the data in the cookie, and could then use it to impersonate you (session hijacking).
Now, some would argue that anyone in a position to sniff packets on the network is also in a position to execute a MITM attack, so it's not a huge deal. However, that is not 100% correct. Look at what happened with Google: they were able to sniff raw Wi-Fi traffic without actually compromising the network (which would be required for a MITM attack). Sending cookies over HTTP opens up session hijacking attacks that would not be possible if you kept them to HTTPS only.
If you need access to be secure, keep session.cookie_secure set. If you don't care about the data (or use multiple factors, or are willing to risk it), then open it up...
One potential workaround is to use a custom session handler and set 2 session identifiers (one secure-only). Then you can "log in" via both, yet require the secure one for anything important (such as accessing sensitive data). This would require some work to do correctly, but could be a decent solution to the problem...
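The dual-identifier idea above could be sketched roughly like this (the cookie names, the helper functions, and the array-based storage are all made up for illustration; a real implementation would persist both tokens server-side, keyed to the account): a normal session token is issued alongside a second token whose cookie carries the Secure flag, and sensitive actions require both.

```php
<?php
// Hypothetical sketch of the dual-cookie approach: one ordinary session
// cookie plus a second token that is only ever sent over HTTPS.

// Issue both tokens at login. The names "sid" and "sid_secure" are invented.
function issue_session_cookies(): array
{
    $sid        = bin2hex(random_bytes(32)); // sent over HTTP and HTTPS
    $sid_secure = bin2hex(random_bytes(32)); // Secure flag: HTTPS only

    setcookie('sid', $sid, ['httponly' => true, 'path' => '/']);
    setcookie('sid_secure', $sid_secure, [
        'secure'   => true,   // never transmitted over plain HTTP
        'httponly' => true,
        'path'     => '/',
    ]);

    // Return what should be stored server-side for later comparison.
    return ['sid' => $sid, 'sid_secure' => $sid_secure];
}

// Sensitive pages require the HTTPS-only token in addition to the normal one.
function is_fully_authenticated(array $cookies, array $stored): bool
{
    return isset($cookies['sid'], $cookies['sid_secure'])
        && hash_equals($stored['sid'], $cookies['sid'])
        && hash_equals($stored['sid_secure'], $cookies['sid_secure']);
}
```

A plain-HTTP page would then only ever see `sid` and could show low-value content, while anything important checks `is_fully_authenticated()` and therefore implicitly requires HTTPS.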

Security vulnerabilities with file_get_contents() using variable location

Part of my site's application process is that a user must prove ownership of a website. I quickly threw together some code but until now didn't realize that there could be some vulnerabilities with it.
Something like this:
$generatedCode = "9s8dfOJDFOIesdsa";
$url = "http://anyDomainGivenByUser.com/verification.txt";
if (file_get_contents($url) == $generatedCode) {
    // verification complete!
}
Is there any threat to having a user-provided url for file_get_contents()?
Edit: The code above is just an example. The generatedCode is obviously a bit more elaborate but still just a string.
Yes, this could possibly be a Server Side Request Forgery vulnerability - if $url is dynamic, you should validate that it is an external internet address and the scheme specifies the HTTP or HTTPS protocol. Ideally you'd use the HTTPS protocol only and then validate the certificate to guard against any DNS hijacking possibilities.
If $url is user controllable, they could substitute internal IP addresses and probe the network behind the firewall using your application as a proxy. For example, if they set the host in $url to 192.168.123.1, your script would request http://192.168.123.1/verification.txt and they might be able to ascertain that another machine is in the hosted environment due to differences in response times between valid and invalid internal addresses. This is known as a Timing Attack. This could be a server that you might not necessarily want exposed publicly. Of course, this is unlikely to attack your network in isolation, but it is a form of Information Leakage and might help an attacker enumerate your network ready for another attack.
You would need to validate the URL (or the resolved DNS) each time it is requested; otherwise an attacker could point it at an external address to pass validation and then immediately re-point it at an internal address in order to begin probing.
file_get_contents in itself appears safe, as it retrieves the URL and places it into a string. As long as you're not processing the string in any script engine or using it as an execution parameter, you should be safe. file_get_contents can also be used to retrieve a local file, but if you validate that it is a valid internet-facing HTTP URL as described above, this measure should prevent reading of local files should you decide to show the user what verification.txt contained in case of a mismatch. In addition, if you were to display the contents of verification.txt anywhere on your site, you should make sure the output is properly encoded to prevent XSS.
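As a rough illustration of the validation described above (the function name and the exact filter flags are my own choice here, and this is a sketch of the idea rather than a complete SSRF defence):

```php
<?php
// Sketch: pre-request validation for a user-supplied verification URL.
// Resolves the host and rejects private/reserved ranges. As noted above,
// repeat this check on every request, since DNS answers can change.
function is_safe_verification_url(string $url): bool
{
    $parts = parse_url($url);
    if ($parts === false || !isset($parts['scheme'], $parts['host'])) {
        return false;
    }
    if (!in_array(strtolower($parts['scheme']), ['http', 'https'], true)) {
        return false; // blocks file://, ftp://, php://, etc.
    }

    // Resolve the hostname; a literal IP passes through unchanged.
    $ip = gethostbyname($parts['host']);

    // Reject private (10/8, 172.16/12, 192.168/16) and reserved ranges.
    return filter_var(
        $ip,
        FILTER_VALIDATE_IP,
        FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE
    ) !== false;
}
```

With a check like this in place, a $url pointing at 192.168.123.1 would be rejected before file_get_contents() ever runs.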

What is gained by changing the name of the PHPSESSID cookie?

What is best practice regarding the naming of the PHPSESSID cookie? Symfony2 allows you to change this via the configuration and you can also change it in php.ini's session.name.
Why would you want to though?
It allows you to run multiple applications on the same site that each need their own cookies to perpetuate the session id. Of course, the same could also be accomplished by setting the session cookie path and/or cookie domain properly.
Another reason could be that you want to hide the fact that you're using PHP and the name PHPSESSID is pretty indicative of that fact.
Or you just don't like the name; it's up to you, the developer, to choose a prettier one if you want to.
It may also be considered as a kind of trivial "security through obscurity" practice. Various HTTP fingerprinting applications try to detect the technologies used to implement a web application by monitoring the server header, page prefixes, session ID cookie name (which you'll change) and behavior of the web server upon receiving crafted requests.
Although measures like this barely increase the security of a web app, they may be used to fool potential attackers. Jack's answer points out the main benefit.
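For reference, renaming the session cookie in code is a one-liner; "appsession" below is an arbitrary example name:

```php
<?php
// Rename the session cookie from the default PHPSESSID.
// session_name() must be called before session_start().
session_name('appsession');
session_start();
// The Set-Cookie header now reads "appsession=..." instead of "PHPSESSID=...".
// The same effect can be had globally via php.ini: session.name = appsession
```

Frameworks such as Symfony2 expose the same php.ini setting through their own configuration, as the question notes.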

Better way to get where a request actually came from

I am aware that $_SERVER['HTTP_REFERER'] can be used to detect where a request comes from; however, it turns out it can be spoofed, and a browser might not send it as part of the request at all.
Could anyone suggest a better way to detect, or rather allow, requests from my own domain only, to avoid spoofing, DoS and other security attacks?
I could be wrong, but I think you are referring to a CSRF (cross-site request forgery) attack; the standard defence is a per-session token that must accompany every state-changing request.
As @afuzzyllama pointed out, DoS consists of sending more data than your server or network connection can handle. In such a case your PHP script will no longer be reachable, so you cannot implement DoS protection in your PHP application. That must be done by your network administrator or hosting company.
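A minimal sketch of the token-based CSRF defence mentioned above (the session is passed in as a plain array here just to keep the example self-contained; in a real app you would use $_SESSION directly):

```php
<?php
// Sketch: per-session CSRF token. The token is generated once, embedded
// in every form, and checked on every state-changing request.

function csrf_token(array &$session): string
{
    if (empty($session['csrf_token'])) {
        // 32 random bytes -> 64 hex characters.
        $session['csrf_token'] = bin2hex(random_bytes(32));
    }
    return $session['csrf_token'];
}

function csrf_check(array $session, string $submitted): bool
{
    // hash_equals() compares in constant time.
    return isset($session['csrf_token'])
        && hash_equals($session['csrf_token'], $submitted);
}
```

In a form you would emit the token as a hidden input, e.g. `<input type="hidden" name="token" value="...">`, and reject any POST whose token fails `csrf_check()`. A request forged from another site cannot know the token, regardless of what Referer header it sends.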

https login form

What should I consider when switching a simple (user + pass) login form from http to https?
Are there any differences when using https compared to http?
From what I know, the browser won't cache content served over https, so page loading might be slower, but other than that I know nothing about this.
Does anyone have experience with these things?
Do not mix secure and non-secure content on the same site as browsers will display annoying warnings if you do so.
Additionally, set cookies as https-only (the Secure flag) when the user uses https, so they are never sent over an http connection.
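For the session cookie itself this can be done before session_start(); a minimal sketch (the array form of session_set_cookie_params() requires PHP 7.3+):

```php
<?php
// Sketch: mark the session cookie Secure (and HttpOnly) before starting
// the session, so it is never sent over a plain-HTTP connection.
session_set_cookie_params([
    'secure'   => true,  // only transmitted over HTTPS
    'httponly' => true,  // not readable from JavaScript
    'path'     => '/',
]);
session_start();
```

The same flags can be set globally in php.ini via session.cookie_secure and session.cookie_httponly.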
When switching over to https, consider that ALL web assets (images, js, css) must come from an https domain, otherwise your users will get warnings about insecure transmission of data. If you've got any hard-coded urls you'll need to change them to https.
I would add that you should prefer to send your parameters via POST instead of GET, otherwise you may be leaving private data all over the place in logfiles, browser history, etc.
The security layer is implemented in the webserver (e.g. Apache), while your login is implemented at the business logic (your application).
There's no difference for your business logic to use http or https, by the time you receive the request, it's going to be the same, because you receive it decrypted. The web server does the dirty job for you.
As you say, it might be a little bit slower because the web server has to encrypt / decrypt the requests.
As Ben says, all the resources have to come from the secure domain, otherwise some browsers get really annoying (such as IE) with the warnings.
From what I know, the browser won't cache content served over https
Provided you send caching instructions in the headers then the client should still cache the content (MSIE does have a switch hidden away to disable caching for HTTPS - but it defaults to caching enabled, Firefox probably has similar).
The time taken for the page to load will be higher - and much more affected by network latency, due to the additional overhead of the SSL handshake (once encryption has been negotiated the overhead isn't that much, but depending on your webserver and how it's configured you probably won't be able to use KeepAlives with MSIE).
Certainly there will be no difference to your PHP code.
C.

Should http be used for https login subsequent pages?

I've seen many threads on SO suggesting that a password can't be securely transferred without SSL. So suppose I have an https login page, but:
Should I switch back to http after the user has been authenticated over https (assuming no sensitive information is sent after login)? It might load pages a bit faster.
Would it create extra overhead in terms of development (with Zend Framework)? Like maintaining different directory structures and all that.
If the data is not sensitive you could switch back to http after authenticating users to get a small speed benefit. You do have to remember to switch to https again if any kind of sensitive data would appear on site (like user profile or such). It may actually be easier to have the whole session always encrypted so you won't have to worry about turning encryption on and off depending on the page contents.
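If you do switch protocols per page, the "switch back to https" part can be enforced with a small redirect helper; a sketch (the function name is made up, and it takes the server array as a parameter only to keep the example testable):

```php
<?php
// Sketch: force HTTPS on pages that handle sensitive data by computing
// the https:// equivalent of a plain-HTTP request.
function require_https(array $server): ?string
{
    $isHttps = !empty($server['HTTPS']) && $server['HTTPS'] !== 'off';
    if ($isHttps) {
        return null; // already secure, no redirect needed
    }
    return 'https://' . $server['HTTP_HOST'] . $server['REQUEST_URI'];
}

// In a real page:
// if ($url = require_https($_SERVER)) { header('Location: ' . $url); exit; }
```

As the answer notes, it may be simpler to skip all of this and serve the whole session over https.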
SSL is transparent for developers, you create your app exactly the same as you would for non secure server. You do need to have a SSL certificate that you can buy or generate yourself and set up your server to handle it. Then depending on the protocol (http or https) your session will be or won't be encrypted automatically. So it's a matter of setting correct https:// links for pages where you need an encryption and standard http:// links for other pages.
The time it takes for an SSL connection to encrypt and decrypt data (after it has been initialized) is negligible compared to the time it takes to transfer the data. So no, it won't even load "a bit" faster.
The extra folders depend on your server, not your framework. If your server routes all https requests through a /httpsdocs folder or something, you could put a .htaccess in it which redirects to the /httpdocs folder.
VolkerK is right, but his response errs on the side of caution. The session can be compromised by all sorts of methods. There are ways around this (e.g. using a cached javascript client side to generate hashes against a fixed salt of a challenge generated with each page) but they are messy. By far the simplest solution is to always use SSL. However you might consider using digest authentication combined with a session cookie.
Tor Valamo is wrong. These days bandwidth is very cheap; what is difficult to eliminate is latency - and latency is the primary determinant of HTTP transfer speed (where most of the content is relatively small). An HTTP request takes at least 2 round trips to the server - the TCP handshake, then the request/reply. It varies with file sizes and other considerations, but typically round-trip latency accounts for 50-70% of the elapsed time taken to fetch an object.
Using Keep-alives eliminates one of the round trips and therefore improves throughput greatly.
With SSL, there is at least one additional round trip required (for resumption of an existing SSL session) and more than one for initial SSL negotiation. The real killer is that Microsoft's non-standard implementation of SSL means that you can't use keep-alives from anything other than MSIIS when talking to an MSIE client (see the mod_ssl docs for more info).
Yes, you can switch back to http after transferring the user's password. There is no need to encrypt all content when none of it is sensitive. If you encrypt ALL pages, the server has to encrypt all data, and performance will be worse than without encryption.
