PHP: Posting to SSL - Clarification?

Probably pretty basic question, but I’m a bit confused after reading similar questions and answers so hoping for more clarification.
If I have a non-secure page and have a form which posts to a protected one, will the data be encrypted once submitted?
Example:
http://example.com/login.php
has a form that posts to secure page:
<form action="https://example.com/do.php" method="post">
.....
Is that enough to make it a secure post or does the initial page have to be secure as well? No ajax or anything, just php...

Anything you post over SSL is secure in transit. But that is not where your problem would be.
If you have an insecure page http://example.com/login.php, an attacker can change the page when a user downloads it, because there is no security (no encryption, no integrity protection, nothing). The attacker can change the form so that it sends the login credentials in plaintext. Or send them to the attacker's server instead. Or inject any javascript, like for example to send every keystroke to the attacker right as keys are pressed. The attacker has full control over a page downloaded via plain http.
Why would an attacker then bother with the encryption in the next post? He can already have all the info before that post takes place.
So in short, if the original page was downloaded via plain http, there is (almost) no point in making a subsequent request over ssl - anything in the ssl request can be known to the attacker already. (Note that some special cases may be exceptions from this, but in general, this is the case.)
One notable exception is when it's not the request that needs to be protected, but the response - that will be encrypted in an ssl exchange, and I think a different protocol, https vs http will count as a different origin, so it will be harder to get info from the response even if javascript is injected into the original plaintext page. But this is not the case for the supposed functionality of a login.php - that must be served over https.

Related

Encrypting data before it gets to the server

Say I have a PHP application and want the users' data to be encrypted before it gets to the server (to prove to users that their data will not be data mined or resold for advertising).
A similar question was asked here ( Secure Javascript encryption library? ) and implies that this is not going to work, but with the increase in privacy interest amongst users this requirement is only going to get greater over time.
Example: using the Stanford library (http://crypto.stanford.edu/sjcl/), a web form has an additional 'long' password field which the user pastes in (probably from email for example):
sjcl.encrypt(txtPassword, txtFormFieldToBeEncrypted)
The encrypted data is sent to the PHP page, and the process is reversed when the page is loaded.
Would this work if the users used Chrome or another browser that remembers form values? Obviously this is not a secure result, but would this be effective enough to keep the users' information private from the host server?
EDIT: Just to be clear, I am only interested in making the information invisible to the host server, and understand that this solution won't protect from 3rd party attacks.
Protection on the page is useless, for the simple fact that the encryption key / mechanism will also be in the scope of the page and can thus be tampered with by a malicious party (or by the user itself when inspecting the page).
To avoid data going over the line unencrypted there is also no reason to "roll your own"(tm), because for that there is SSL.
If you want to make sure that the data that you receive on the server was actually originating from a page that you control, you can rely on CSRF protection.
First of all, use SSL; it is the only way to secure the communication. If you do encryption in JavaScript, it is trivial to decrypt your message (because all your code, along with the keys, is public).
If you worry about CSRF attacks, use an anti-forgery token (more here: http://bkcore.com/blog/code/nocsrf-php-class.html).
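A minimal sketch of such an anti-forgery token check in plain PHP (this is not the NoCSRF class linked above; the field and session key names are made up, and random_bytes() needs PHP 7+):

<?php
// Render the form: create a random token once and remember it in the session.
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
echo '<input type="hidden" name="csrf_token" value="' . htmlspecialchars($_SESSION['csrf_token']) . '">';

// Handle the submission: reject the request unless the submitted token matches the session token.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (!isset($_POST['csrf_token']) || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
        http_response_code(403);
        exit('Invalid anti-forgery token');
    }
    // ... process the form ...
}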
It's perfectly possible to do this; LastPass, for instance, built their business model on it. All their server does is store an encrypted blob which they cannot do anything with; all encryption and decryption happens on the client, including a JavaScript implementation in the browser. The entire blob of encrypted data is downloaded into the client, where the user's password decrypts it, and in reverse on the way back up to the server.
So if your question is whether it's possible: absolutely. It's also a lot of work, since you will need to provide the same en-/decryption code for as many platforms as you want to support. You'll also need to secure every context where that code will run, to prevent third parties from injecting code which would allow them to access the client-side decrypted data. So, everything needs to go over SSL with no 3rd party content being allowed to be injected.
Here are a bunch of reasons why javascript encryption in the browser is almost always a bad idea.
You need to think deeply about your trust model. Do the users trust the server? If not, there is no hope for trustworthy javascript crypto since the crypto software itself comes from the server. If the users do trust the server, why does the data need to be encrypted client-side? Just use SSL to secure the connection, then have the server encrypt the data before storing it.
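If the server is trusted, the server-side encryption step could be sketched like this, assuming PHP 7.2+ with the sodium extension (key handling is deliberately simplified; in practice the key lives in a secrets store, not next to the data):

<?php
// Generate a secret key once and keep it outside the web root / in a secrets store.
$key = sodium_crypto_secretbox_keygen();

function encrypt_for_storage($plaintext, $key) {
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    // Store the nonce alongside the ciphertext so the value can be decrypted later.
    return base64_encode($nonce . sodium_crypto_secretbox($plaintext, $nonce, $key));
}

function decrypt_from_storage($stored, $key) {
    $decoded    = base64_decode($stored);
    $nonce      = substr($decoded, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $ciphertext = substr($decoded, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    return sodium_crypto_secretbox_open($ciphertext, $nonce, $key); // false if the data was tampered with
}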

Using a session token or nonce for Cross-site Request Forgery Protection (CSRF)?

I inherited some code that was recently attacked where the attacker sent repeated remote form submissions.
I implemented a prevention using a session auth token that I create for each user (not the session id). While I realize this specific attack is not CSRF, I adapted my solution from these posts (albeit dated).
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29
http://tyleregeto.com/a-guide-to-nonce
http://shiflett.org/articles/cross-site-request-forgeries
However, it still feels there is some vulnerability here. While I understand nothing is 100% secure, I have some questions:
Couldn't a potential attacker simply start a valid session then include the session id (via cookie) with each of their requests?
It seems a nonce would be better than a session token. What's the best way to generate and track a nonce?
I came across some points about these solutions being single-window only. Could someone elaborate on this point?
Do these solutions always require a session? Or can these tokens be created without a session? UPDATE: this particular page is just a single-page form (no login). So starting a session just to generate a token seems excessive.
Is there a simpler solution (not CAPTCHA) that I could implement to protect against this particular attack that would not use sessions?
In the end, I am looking for a better understanding so I can implement a more robust solution.
As far as I understand, you need to do three things: make all of your data-changing actions available only via POST requests, disallow POST requests without a valid referrer (it must be from the same domain), and check an auth token in each POST request (the POST token value must be the same as the token in the cookie).
The first two will make it really hard to do any harmful CSRF request, as those are usually hidden images in emails, on other sites etc., and making a cross-domain POST request with a valid referrer should be impossible/hard to do in modern browsers. The third will make it completely impossible to do any harmful action without stealing the user's cookies / sniffing his traffic.
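Roughly, those three checks could look like this in PHP (a sketch only; the cookie/field name auth_token and the domain are made up):

<?php
// 1. Data-changing actions only via POST.
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405);
    exit;
}
// 2. The referrer must point at our own domain.
$referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
if (parse_url($referrer, PHP_URL_HOST) !== 'example.com') {
    http_response_code(403);
    exit('Bad referrer');
}
// 3. The token posted with the form must match the token in the cookie.
$cookieToken = isset($_COOKIE['auth_token']) ? $_COOKIE['auth_token'] : '';
$postToken   = isset($_POST['auth_token'])   ? $_POST['auth_token']   : '';
if ($cookieToken === '' || !hash_equals($cookieToken, $postToken)) {
    http_response_code(403);
    exit('Bad auth token');
}
// ... perform the data-changing action ...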
Now about your questions:
This question really confuses me: if you are using auth tokens correctly, then the attacker must know the user's token from the cookie to send it along with the request, so how would starting a valid session of his own do any harm?
Nonces will make all your links ugly - I have never seen anyone using them anymore. And I think your site can be DoSed using them, as you must save/search all the nonces in a database - a lot of requests to generate nonces may increase your database size really fast (and searching for them will be slow).
If you allow only one nonce per user_id to prevent the DoS attack from (2), then if a user opens a page, then opens another page and then submits the first page, his request will be denied, as a new nonce was generated and the old one is already invalid.
How else will you identify a unique user without a session ID, be it in a cookie, GET or POST variable?
UPD: As we are not talking about CSRF anymore: you may implement many obscure defences that will prevent spider bots from submitting your form:
Hidden form fields that should not be filled (bots usually fill most of the form fields that they see that have good names, even if they are really hidden from the user) - see the sketch at the end of this answer
Javascript mouse trackers (you can analyse recorded mouse movements to detect bots)
File request log analysis (when a page is loaded, javascript/css/images should be loaded too in most cases, but some (really rare) users have them turned off)
Javascript form changes (when a hidden (or not) field is added to a form with javascript and is required on the server side: bots usually don't execute javascript)
Traffic analysis tools like Snort to detect Bot patterns (strange user-agents, too fast form submitting, etc.).
and more, but at the end of the day some modern bots use total emulation of real user behaviour (using real browser API calls) - so if anyone really wants to attack your site, no defence like this will help you. Even CAPTCHA today is not very reliable - besides complex image recognition algorithms, you can now buy 1000 CAPTCHAs solved by humans for any site for as low as $1 (you can find services like this mostly in developing countries). So really, there is no 100% defence against bots - each case is different: sometimes you will have to create a complex defence system yourself, sometimes just a little tweak will help.
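As an example of the hidden-field idea from the list above, a honeypot check might look roughly like this (the field name is made up):

<?php
// Honeypot sketch: a field real users never see or fill in, but naive bots usually do.
// In the form, hide it with CSS rather than type="hidden", which some bots skip:
//   <input type="text" name="website" style="display:none" autocomplete="off">
if (!empty($_POST['website'])) {
    // The hidden field was filled in, which a real user would not have done - treat it as a bot.
    http_response_code(400);
    exit;
}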

After login, should all pages be https?

This will be a bit difficult to explain but I will try my best. There is a website that has the login form on every page with username/password fields. These pages are not using SSL. After the user fills in the username/password and submits the form, the form is sent to an authentication page which is https.
I have a few questions about this situation.
1. When submitting a form to an https page, is the data encrypted? Or only after going from an https page (I assume only going from)?
2. If the answer to number one is the latter, does this mean I would need to use https for all pages, because the login form is being redirected from there?
3. After a user is authenticated using https, can the user be redirected back to http and continue using session data? Or should the user remain in https?
4. Is it better/worse to leave the user in https?
Thanks a lot for any help!
Metropolis
CONCLUSION
Ok, so after thinking about this for a while I have decided to just make the whole thing https. #Mathew + #Rook, your answers were both great and I think you both make great points. If I was in a different situation I may have done this differently, but here are my reasons for making the whole thing https:
It will be easier to control the page requests, since I only have to stay in https.
I'm not overly concerned with the performance (in another situation I may have been).
I will not need to wonder if the users' data is being secured in all places.
I will be following the OWASP guideline as Rook stated.
According to the OWASP Top 10, at no point can an authenticated session id be used over HTTP. If you create a session over HTTP and that session then becomes authenticated, you have violated the OWASP Top 10 and you are allowing your users to be susceptible to attack.
I recommend setting the secure flag on your cookie. This is a terrible name for this feature but it forces cookies to be https only. This shouldn't be confused with "HttpOnly cookies", which is a different flag that is helpful in mitigating the impact of XSS.
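In PHP that can be done before starting the session, for example (a sketch; the positional arguments are lifetime, path, domain, secure, httponly):

<?php
// Mark the session cookie as Secure (https only) and HttpOnly.
session_set_cookie_params(0, '/', '', true, true);
session_start();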
To make sure your users are safe I would force the use of HTTPS all of the time. SSL is a very lightweight protocol; if you run into resource problems, then consider chaining your https policies.
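A simple way to force HTTPS everywhere is to redirect any plain-http request, roughly like this (a sketch):

<?php
// Redirect any plain-http request to its https equivalent.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}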
1. Yes. If the action URL is https, the form data is encrypted.
2. Because of #1 you don't have to make the page https, but you may get mixed content warnings. And of course, a man-in-the-middle attacker could manipulate the login page to point to a different action URL.
3. This is a decision for you to make. Clearly, any data transmitted over HTTP, whether cookies (including session cookies) or user data, can be intercepted and manipulated.
4. Again, this is a trade-off based on performance and security.
In addition to what The Rook says, submitting a form from http to https is a risk for a couple of reasons:
There is no "lock" icon on the page where people type in their username and password, so they have no way of knowing that their details are encrypted (except by "trusting you")
If someone hijacked your page, your users would have no way to know that they're about to type in their username and password and be redirected to a malicious page (this is somewhat of a corollary to #1).
This is a much simpler attack than http cookie interception, so it's actually an even bigger risk...
But The Rook's point is important: you should never mix http and https traffic. On our websites, as soon as you're logged in, everything is https from that point on.
Apart from the previous answers, since people tend to want to go from HTTPS to HTTP for performance reasons, this article about HTTPS at Google might be of interest. Its main message is:
SSL/TLS is not computationally expensive any more.

Is there a way to verify the integrity of javascript files at the client?

I'm working on what aims to be a secure method of user registration and authentication using php and javascript but not ssl/tls.
I realise this may well be considered an impossible task that's been tried 1000 times before but I'm going to give it a go anyway. Every example I see online that claims to do it seems to have some huge fatal flaw!
Anyway, my current problem is verifying javascript at the client. The problem being that if my sha1 implementation in javascript is modified by some man-in-the-middle it's not secure at all. If I could just verify that the received javascript has not been tampered with then I think I can pull this off.
The real problem though, is that the only way to do things on the client side is javascript. Simple: write a javascript to verify the integrity of the other javascript files. Nope! Man-in-the-middle can just modify this file too.
Is there a way around this?
To secure your JavaScript, you must examine it in a guaranteed untampered environment. Since you can't create such an environment in the browser, this is not possible. Your best option is to download the JavaScript via HTTPS. That isn't completely safe, but it is better. Possible attack vectors left:
A virus can modify the browser to load some malicious JavaScript for every page
A keylogger can monitor what the user is typing
A proxy can act as a man-in-the-middle for an HTTPS connection. The proxy will actually decode what you send via HTTPS and encode it again (with a different certificate) for the browser.
A proxy can add iframes to the pages you send
Matt
I believe (despite the naysayers) that what you're asking is not impossible, merely extremely difficult. What you're asking is that code that is completely accessible to abuse nevertheless permits the user to identify herself to the server, and vice versa. One possible way is to use a zero-knowledge proof, which will leak no information to the eavesdropper (Eve). For example, the server might provide javascript that draws a representation of a graph that combines user-provided information of no value to Eve on its own with server-provided information also of no value. The javascript may have been modified, but will either fail to provide the correct graph (at which point the user WALKS AWAY) or succeed. In the latter case, the user similarly provides 'zero-knowledge' evidence that they have an isomorphic representation of the graph; again, this either succeeds or fails. Also look at the SRP protocol, but the problem with this is that it's a patent minefield. See Practical Cryptography by Ferguson and Schneier.
Jo
There's no way around it. As you said: if you cannot verify the source a man-in-the-middle attacker can replace anything the client receives, i.e. anything the client interprets and executes.
You say your only issue is a man in the middle modifying the javascript you use to perform a SHA1. I therefore guess you are using username + SHA1 of password for login....
That is completely insecure even with no Javascript tampering. Even though a man in the middle may not know the plain password if the javascript is not modified, he will know the hash, and he can simply use that hash to perform a login of his own by just replaying it.
Even if you include a salt / nonce, the man in the middle would still be able to use those tokens at that moment, and could even steal the account by performing a password / email change.
Even ignoring this, and assuming you could actually get around all that plus actually get a javascript to test the integrity of a second javascript, how would you prevent that "verification script" from being tampered with too? You keep depending on a script sent over an insecure channel to assure the security of such data (and you could recursively go on forever, having a script that checks the integrity of a script that checks the integrity of a script...), all of which can be tampered with since they are sent over an insecure channel.
The only way to do this would be to build yourself a secure channel on top of http, which would need some client-side extras (a Firefox plugin / an ActiveX extension); but with native support for https already available, that's just absurd.
As they are in the client, you cannot access them.
This is the nature of web pages...
Try doing the important things on the server side...
If your security architecture somehow requires functions to run in Javascript, then your security is flawed.
With JavaScript one can protect against passive network attacks (such as eavesdropping on WiFi traffic), but you cannot protect yourself from active network attacks where the intruder is capable of controlling your HTTP response headers and body data.
If you don't want to pay for an SSL certificate, you can create a self-signed certificate instead. However, this will only prevent passive network attacks, and it is a lot easier than any hacky JavaScript implementation you could ever create.
Essentially you need a CA signed SSL certificate to prevent active network attacks (a man in the middle).
You can only verify the integrity of Javascript files at the client if, and only if, server and client previously share a secret. That is most often not the case on the Internet. If such a secret is not available, then any attempt to verify transferred Javascript can be broken. It is a catch 22 situation.
Most often, people want to ensure JS integrity because it makes them feel like they can delegate security checks to the client side. In cryptography, there is a fundamental rule that should not be broken: never trust remote user input. Always double-check.
SSL/TLS can make middle-man attacks harder to achieve, but it is not watertight.

Is POST as secure as a Cookie?

While implementing a flash-based uploader, we were faced with an issue: Flash doesn't provide the correct cookies.
We need our PHP Session ID to be passed via a POST variable.
We have come up with and implemented a functional solution, checking for a POST PHPSESSID.
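Roughly, the check looks like this (a sketch of the idea, not necessarily the exact code in use):

<?php
// Accept the session ID from a POST field, because the Flash uploader does not send the browser's cookies.
if (isset($_POST['PHPSESSID'])) {
    // Ideally validate the format of the ID before adopting it.
    session_id($_POST['PHPSESSID']);
}
session_start();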
Is POSTing the Session ID as secure as sending it in a cookie?
Possible reason for: Because both are in the http header, and equally possible for a client to forge.
Possible reason against: Because it's easier to forge a POST variable than a Cookie.
It is as secure - forging the POST is just as easy as forging the cookie. These are both done by simply setting flags in cURL.
That being said, I think you've got a good solution as well.
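For illustration, both forgeries are a single option away with PHP's cURL extension (the URL and values here are made up):

<?php
// Forging a cookie is no harder than forging a POST field.
$ch = curl_init('https://example.com/do.php');
curl_setopt($ch, CURLOPT_COOKIE, 'PHPSESSID=whatever_we_want');     // forged cookie header
curl_setopt($ch, CURLOPT_POSTFIELDS, 'PHPSESSID=whatever_we_want'); // forged POST field (implies a POST request)
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);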
If you are able to obtain the session ID from active content in order to POST it, this presumably means that your session cookie is not marked HttpOnly, which one of our hosts claims is otherwise a good idea for defending against cross-site scripting attacks.
Consider instead a JavaScript-based or even refresh-based uploader monitor, which should integrate well enough with everything else that the cookie can be HttpOnly.
On the other hand, if your site does not accept third-party content, cross-site scripting attacks may not be of any concern; in that case, the POST is fine.
I think sending it via GET would also work fine, since you can fake anything in an HTTP request (using curl or even flash).
The important thing is what is encrypted in your cookie/post/get parameter, and how it is encrypted and checked on the server side.
Really, if you are worried about which one is easier to forge, you're worrying about the wrong thing. Simply put, either will be trivial to an experienced attacker. You might keep out the "script kiddies" by choosing one over the other, but those people aren't the ones you need to be worried about. The question you should ask yourself is "what defenses do you have against someone forging an id?" It will happen. If your id is unencrypted and easy to guess, that's a problem. It will get hacked. Since you are asking which is more secure, I would say that you are concerned.
Here's another thing to consider: since your application is Flash, it's susceptible to modification (just like JavaScript or HTML code), because the compiled code is on the attacker's machine. They can look through the binary and figure out how the code works, and what it needs to retrieve from the server.
POST data is not an HTTP header, but it is sent as part of the TCP stream which makes it just as easy to read/forge as an HTTP header. If you intercepted an HTTP request it would look something like this:
POST /path/to/script HTTP/1.1
Host: yourdomain.com
User-Agent: Mozilla/99.9 (blahblahblah)
Content-Type: application/x-www-form-urlencoded
Cookie: __utma=whateverthisisacookievalue; phpsessid=somePHPsessionID

data=thisisthepostdata&otherdata=moredummydata&etc=blah
So as others have said, POST and cookies (and GET data, i.e. query strings) are all easy to spoof, since they're all just text in the same HTTP packet.
I just want to reiterate that both Cookie and Post are equally insecure.
