I have a web service (which returns data) that is accessible only to a few "whitelisted" remote servers. When a remote server sends a request to my server, I check the $_SERVER['HTTP_REFERER'] field for the whitelisted domain name and the corresponding IP address (which is known to my web service through a global array). Can this method of whitelisting requests be bypassed?
I know it is easy to implement referer spoofing, but keep in mind that I am checking both the referer and the corresponding IP address, both of which are known to my app with certainty.
If this is NOT a safe thing to do, does anyone have an alternate method of allowing only "whitelisted" domains to access a given web service?
As commented, I'm not sure why an HTTP Referer header would be set in the first place in your scenario, but let's assume it is and that its domain corresponds to the IP of the client. The Referer header is an arbitrary value sent by the client; it's trivially spoofed. The client's IP OTOH is not spoofable (excluding elaborate network-level attacks which require the attacker to have basically already compromised one side or the other). What you're asking is whether it makes sense to use an insecure, meaningless value to confirm a value which is already as secure as you can get. And the answer is no. Just stick to the IP filter; that's already good enough.
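To illustrate, a minimal sketch of such an IP filter in PHP (the whitelist entries and the 403 handling are assumptions for the example):

```php
<?php
// Hypothetical whitelist of known client IP addresses.
$ipWhitelist = ['203.0.113.10', '198.51.100.22'];

// REMOTE_ADDR is filled in by the web server from the TCP connection,
// so it cannot be forged the way request headers can (barring
// network-level attacks or misconfigured reverse proxies).
$clientIp = $_SERVER['REMOTE_ADDR'];

if (!in_array($clientIp, $ipWhitelist, true)) {
    header('HTTP/1.1 403 Forbidden');
    exit('Access denied');
}

// ... serve the web service response as usual ...
```

One caveat: if your service sits behind a load balancer or reverse proxy, REMOTE_ADDR will be the proxy's address rather than the client's, so the whitelist has to be applied at the proxy instead.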
If you want to strengthen authentication further, use a proper authentication scheme in which you share a secret with your clients (username/password, API token, OAuth or similar).
I do not believe checking the Referer HTTP header in addition to the originating IP address yields any security benefits at all. Having said this, IP-based auth itself isn't the safest practice. If you really want to protect your API, better look into SSL and some form of HTTP authentication.
Related
How safe is it to pass passwords / username in POST or GET requests to an external server?
I will use PHP / CURL and I have second thoughts about security.
Alternatives will be considered as well!
If you use HTTP without SSL encryption, everything is transmitted in the clear, which includes the full URL, the HTTP request/response headers, and the body of the POST request and response.
If you place a password in the GET parameters, it will additionally be displayed in the address bar and quite likely saved in browser history, proxy server logs, and sent to other websites in the referrer header. Sending the password in the POST body or in the standard Authorization header avoids this obvious problem, but it is still visible to an observer who can sniff or proxy your traffic.
Digest Authentication avoids transmitting the password in the clear, and only a non-reusable signature is exposed to the outside observer. It is still vulnerable to man-in-the-middle attacks; see HTTP Digest Authentication versus SSL.
The correct solution is to use an SSL certificate and exclusively use HTTPS. When you do so, the URL string, HTTP headers, and POST body are all encrypted, and the browser verifies that no third party is operating a server in the middle. HTTP Basic Authentication is permissible in this case.
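Since the question mentions PHP/cURL, here is a minimal sketch of sending credentials over HTTPS with Basic Authentication (the URL and credentials are placeholders):

```php
<?php
// Hypothetical HTTPS endpoint; the request line, headers and body
// all travel inside the encrypted channel.
$ch = curl_init('https://api.example.com/data');

// Send the credentials in the standard Authorization header.
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
curl_setopt($ch, CURLOPT_USERPWD, 'alice:s3cret');

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
```

Note that cURL verifies the server's certificate by default; do not disable CURLOPT_SSL_VERIFYPEER, or you lose the man-in-the-middle protection HTTPS is supposed to give you.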
By themselves, not necessarily. You shouldn't use GET for things aside from queries, in general, because GET parameters can end up stored in the user's browser history. POST data is relatively easy to encrypt using existing libraries, and you shouldn't implement your own encryption.
Also, getting an SSL certificate would help. If you use HTTPS (rather than HTTP), it will be even more secure.
You didn't give many details as to what the page is (read: the language), so I can't really recommend a good encryption library, but just Google it and I'm sure you'll find something.
Consider a scenario where user authentication (username and password) is entered by the user into the page's form element, which is then submitted. The POST data is sent via HTTPS to a new page (where the PHP code will check the credentials). Now, if a hacker sits on the network and, say, has access to all the traffic, is application-layer security (HTTPS) enough in this case? I mean, would there be adequate URL encryption, or is there a need for transport-layer security?
Yes, everything (including the URL) is going through the encrypted channel. The only thing that the villain would find out is the IP address of the server you are connecting to, and that you are using HTTPS.
Well, if he was monitoring your DNS requests as well, he might also know the domain name behind that IP address. But that's all: the path, query parameters, and everything else are encrypted.
Yes. With HTTPS, only the handshake is done unencrypted; even the HTTP GET/POST queries are encrypted.
It is, however, impossible to hide which server you are connecting to: since he can see your packets, he can see the IP address your packets go to. If you want to hide this too, you can use a proxy (though the hacker would know that you are sending to a proxy, just not where your packets go afterwards).
HTTPS is sufficient "if" the client is secure. Otherwise someone can install a custom certificate and play man-in-the-middle.
As a web developer, not much can be done other than disallowing plain HTTP requests. This can be done via mod_rewrite in Apache, as sketched below.
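A typical mod_rewrite rule for this (a sketch for an Apache config or .htaccess file, assuming mod_rewrite is enabled):

```apache
RewriteEngine On
# Redirect any plain-HTTP request to the HTTPS equivalent.
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```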
It is adequate: even if he has access to all your traffic, it doesn't matter which encryption protocol you use; he would need to mount a man-in-the-middle attack against either protocol just the same.
I've learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So, how can I ensure that an incoming HTTP request is coming from a website that I have whitelisted?
For example, I have some JS code that is sent from the client site to my server (something like a sniper, cross-platform). However, I only want to allow this from several websites, not others, so that even if other people copy the code and put it onto their website, it won't work.
In the general case you simply can't do it. You are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all those several websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need to have a way to check the token for authenticity against the other websites. Needless to say this is very likely impossible unless you control all the sites.
See also this question on HTTP_REFERER
Haven't used this in practice, so there might be practicality issues I wasn't counting on, but I thought I'd contribute the idea anyway. If I interpret it correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The js on the client places the ID in its request to the server and sends the request.
4. When the server receives the js request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
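A minimal sketch of this scheme in PHP (the storage here is the PHP session for brevity; the scheme really implies a server-side store shared across requests, such as a database, and all names are illustrative):

```php
<?php
session_start();

// Steps 1-2: when serving the page, generate an ID and record it
// together with the client's IP address.
function issue_page_id(): string {
    $id = bin2hex(random_bytes(16));           // unpredictable ID
    $_SESSION['page_ids'][$id] = [
        'ip'      => $_SERVER['REMOTE_ADDR'],
        'expires' => time() + 600,             // step 5: 10-minute lifetime
    ];
    return $id;                                // embed this in the served page
}

// Step 4: when the js request comes in, only respond if the
// ID/IP pair matches one that is on file and hasn't expired.
function validate_page_id(string $id): bool {
    $entry = $_SESSION['page_ids'][$id] ?? null;
    if ($entry === null || $entry['expires'] < time()) {
        return false;
    }
    return hash_equals($entry['ip'], $_SERVER['REMOTE_ADDR']);
}
```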
All http headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server) then you can either set up a VPN between that remote server and yours or you can change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request/response mechanism incorrectly.
"For example, I have some JS code that is sent from the client site to my server (something like a sniper, cross-platform)."
I assume what you're saying is that you have some javascript code served up on some website on your 'whitelist' which redirects the user to your website. It's on your website that you want to check that the user came from the 'whitelisted' site?
Aside from setting a cookie (which might not be possible cross-domain), you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
"So, how can I ensure that an incoming HTTP request is coming from a website that I have whitelisted?"
I think you could sign every request (from the whitelist) with a value that is valid for that request only (used once), as sketched below. I assume using uniqid() for this is safe (enough?).
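As a hedged sketch of what such per-request signing could look like in PHP, assuming each whitelisted site shares a secret with your server (all names here are illustrative):

```php
<?php
// Shared secret known to your server and one whitelisted site.
$secret = 'per-site shared secret';

// --- Whitelisted site: sign the request ---
$nonce     = uniqid('', true);                      // one-time value
$signature = hash_hmac('sha256', $nonce, $secret);
// Send $nonce and $signature along with the request.

// --- Your server: verify the signature, and reject replays ---
function verify(string $nonce, string $signature, string $secret): bool {
    $expected = hash_hmac('sha256', $nonce, $secret);
    if (!hash_equals($expected, $signature)) {
        return false;                               // wrong or missing secret
    }
    // A real implementation must also remember used nonces
    // (e.g. in a database) so each one is accepted only once.
    return true;
}
```

Note that uniqid() is time-based and predictable; random_bytes() would be a safer source for the nonce.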
How would you use HTTPS? Would sending information via GET and POST be any different while using HTTPS?
Any information and examples on how HTTPS is used in PHP for something simple like a secure login would be useful.
Thank you!
It will be no different for your PHP scripts; the encryption and decryption are done transparently at another layer.
Both GET and POST get encrypted, but GET will leave a trace in the web server log files.
HTTPS is handled at the SSL/TLS Layer, not at the Application Layer (HTTP). Your server will handle it as aularon was saying.
SSL and/or HTTPS is used to provide some level of confidentiality for data in transit between the web users and the web server. It can also be used to provide a level of confidence that the site the users are communicating with is in fact the one they intend to be.
In order to use SSL, you'll need to configure these capabilities on the server itself, which includes either purchasing an authority-signed certificate or creating a self-signed one. If you create your own self-signed certificate, the level of confidence that the site is the intended one is significantly reduced for your users.
PHP
Once your webserver is able to serve SSL-protected pages, PHP will continue to operate as usual. Things to look out for are port numbers (normal HTTP is usually on port 80, while HTTPS traffic is usually on port 443), if your code relies on them.
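For example, a small sketch of how PHP code might detect whether the current request came in over HTTPS (the helper name is an assumption):

```php
<?php
// Hypothetical helper: true if the current request used HTTPS.
function is_https(): bool {
    // $_SERVER['HTTPS'] is set (non-empty, not 'off') on SSL requests.
    if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
        return true;
    }
    // Fall back to the port if code relies on it (443 = HTTPS default).
    return (int)($_SERVER['SERVER_PORT'] ?? 80) === 443;
}
```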
GET & POST Data
Pierre 303 is correct: GET data may end up in the logs and POST data will not, but this is no different from a non-SSL web server. SSL is meant to protect data in transit; it does nothing to protect you and your customers from web servers and their administrators that you may not trust.
Secure Login
There is also (normally) a performance hit when using SSL, so some sites will configure their pages to use HTTPS only when the user is sending sensitive information, for example their password or credit card details. Other traffic would continue to use the normal HTTP server.
If this is the sort of thing you'd like to do, you'll want to ensure that your login form in HTML uses an ACTION that points to the HTTPS server's pages. Once the server accepts this form submission, it can send a redirect to send the user back to the page they requested, using just HTTP again.
Just ensure you're sending the correct headers when allowing files to be downloaded over SSL... IE can be a bit quirky. See http://support.microsoft.com/kb/323308 for details of how to resolve this.
I know next to nothing when it comes to the how and why of https connections. Obviously, when I'm transmitting secure data like passwords or especially credit card information, https is a critical tool. What do I need to know about it, though? What are the most common mistakes you see developers making when they implement it in their projects? Are there times when https is just a bad idea? Thanks!
An HTTPS, or Secure Sockets Layer (SSL) certificate is served for a site, and is typically signed by a Certificate Authority (CA), which is effectively a trusted 3rd party that verifies some basic details about your site, and certifies it for use in browsers. If your browser trusts the CA, then it trusts any certificates signed by that CA (this is known as the trust chain).
Each HTTP (or HTTPS) request consists of two parts: a request, and a response. When you request something through HTTPS, there are actually a few things happening in the background:
The client (browser) does a "handshake", where it requests the server's public key and identification.
At this point, the browser can check for validity (does the site name match? is the date range current? is it signed by a CA it trusts?). It can even contact the CA and make sure the certificate is valid.
The client creates a new pre-master secret, which is encrypted using the server's public key (so only the server can decrypt it) and sent to the server
The server and client both use this pre-master secret to generate the master secret, which is then used to create a symmetric session key for the actual data exchange
Both sides send a message saying they're done with the handshake
The server then processes the request normally, and then encrypts the response using the session key
If the connection is kept open, the same symmetric session key will be used for each subsequent request.
If a new connection is established, and both sides still have the master secret, new session keys can be generated in an 'abbreviated handshake'. Typically a browser will store a master secret until it's closed, while a server will store it for a few minutes or several hours (depending on configuration).
For more on the length of sessions see How long does an HTTPS symmetric key last?
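On the client side, the validity checks in step 2 are what the CURLOPT_SSL_VERIFYPEER / CURLOPT_SSL_VERIFYHOST options control if you make HTTPS requests from PHP; a brief sketch:

```php
<?php
$ch = curl_init('https://example.com/');

// Verify the certificate chain against trusted CAs (step 2 above)...
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
// ...and check that the certificate's name matches the host.
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
if ($body === false) {
    // e.g. self-signed, expired, or wrong-hostname certificates fail here.
    echo 'SSL error: ' . curl_error($ch);
}
curl_close($ch);
```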
Certificates and Hostnames
Certificates are assigned a Common Name (CN), which for HTTPS is the domain name. The CN has to match exactly; e.g., a certificate with a CN of "example.com" will NOT match the domain "www.example.com", and users will get a warning in their browser.
Before SNI, it was not possible to host multiple HTTPS domain names on one IP address. Because the certificate is sent before the client even sends the actual HTTP request, and it is the HTTP request's Host: header that tells the server which site is wanted, the server has no way to know which certificate to serve for a given connection. SNI adds the hostname to the TLS handshake itself, so as long as it's supported on both client and server (and in 2015, it is widely supported), the server can choose the correct certificate.
Even without SNI, one way to serve multiple hostnames is with certificates that include Subject Alternative Names (SANs), which are essentially additional domains the certificate is valid for. Google uses a single certificate to secure many of its sites, for example.
Another way is to use wildcard certificates. It is possible to get a certificate like "*.example.com", in which case "www.example.com" and "foo.example.com" will both be valid for that certificate. However, note that "example.com" does not match "*.example.com", and neither does "foo.bar.example.com". If you use "www.example.com" for your certificate, you should redirect anyone at "example.com" to the "www." site. If they request https://example.com, unless you host it on a separate IP and have two certificates, they will get a certificate error.
Of course, you can mix both wildcards and SANs (as long as your CA lets you do this) and get a certificate for "example.com" with SANs "*.example.com", "example.net", and "*.example.net", for example.
Forms
Strictly speaking, if you are submitting a form, it doesn't matter if the form page itself is not encrypted, as long as the submit URL goes to an https:// URL. In reality, users have been trained (at least in theory) not to submit forms unless they see the little "lock icon", so even the form itself should be served via HTTPS.
Traffic and Server Load
HTTPS traffic is much bigger than its equivalent HTTP traffic (due to encryption and certificate overhead), and it also puts a bigger strain on the server (encrypting and decrypting). If you have a heavily-loaded server, it may be desirable to be very selective about what content is served using HTTPS.
Best Practices
If you're not just using HTTPS for the entire site, it should automatically redirect to HTTPS as required. Whenever a user is logged in, they should be using HTTPS, and if you're using session cookies, the cookie should have the secure flag set. This prevents interception of the session cookie, which is especially important given the popularity of open (unencrypted) wifi networks.
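For example, in PHP the secure flag (and, while you're at it, HttpOnly) can be set on the session cookie before the session starts; a minimal sketch:

```php
<?php
// Send the session cookie only over HTTPS, and hide it from JS.
session_set_cookie_params(
    0,      // lifetime: until the browser closes
    '/',    // path
    '',     // domain: default to the current host
    true,   // secure: never transmit over plain HTTP
    true    // httponly: not readable from JavaScript
);
session_start();
```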
Any resources on the page should come from the same scheme being used for the page. If you try to fetch images over http:// when the page is loaded with HTTPS, the user will get security warnings. You should either use fully-qualified https:// URLs, or, as an easy alternative, use absolute URLs that do not include the hostname (e.g., src="/images/foo.png"), because those work for both.
This includes external resources (eg, Google Analytics)
Don't do POSTs (form submits) when changing from HTTPS to HTTP. Most browsers will flag this as a security warning.
I'm not going to go in depth on SSL in general; gregmac did a great job on that, see above ;-).
However, some of the most common (and critical) mistakes made (not specifically PHP) with regards to use of SSL/TLS:
Allowing HTTP when you should be enforcing HTTPS
Retrieving some resources over HTTP from an HTTPS page (e.g. images, IFRAMEs, etc)
Directing to HTTP page from HTTPS page unintentionally - note that this includes "fake" pages, such as "about:blank" (I've seen this used as IFRAME placeholders), this will needlessly and unpleasantly popup a warning.
Web server configured to support old, insecure versions of SSL (e.g. SSL v2 is common, yet horribly broken)
(okay, this isn't exactly the programmer's issue, but sometimes no one else will handle it...)
Web server configured to support insecure cipher suites (I've seen NULL ciphers only in use, which basically provides absolutely NO encryption)
(ditto)
Self-signed certificates - prevents users from verifying the site's identity.
Requesting the user's credentials from an HTTP page, even if submitting to an HTTPS page. Again, this prevents a user from validating the server's identity BEFORE giving it his password... Even if the password is transmitted encrypted, the user has no way of knowing if he's on a bogus site - or even if it WILL be encrypted.
Non-secure cookie - security-related cookies (such as sessionId, authentication token, access token, etc.) MUST be set with the "secure" attribute set. This is important! If it's not set to secure, the security cookie, e.g. SessionId, can be transmitted over HTTP (!) - and attackers can ensure this will happen - thus allowing session hijacking etc. While you're at it (though this is not directly related), set the HttpOnly attribute on your cookies, too (it helps mitigate some XSS).
Overly permissive certificates - say you have several subdomains, but not all of them are at the same trust level. For instance, you have www.yourdomain.com, download.yourdomain.com, and publicaccess.yourdomain.com. So you might think about going with a wildcard certificate... BUT you also have secure.yourdomain.com, or finance.yourdomain.com - even on a different server. publicaccess.yourdomain.com will then be able to impersonate secure.yourdomain.com...
While there may be instances where this is okay, usually you'd want some separation of privileges...
That's all I can remember right now, might re-edit it later...
As for when it is a BAD idea to use SSL/TLS: if you have public information which is NOT intended for a specific audience (either a single user or registered members), AND you're not particular about clients retrieving it specifically from the proper source (e.g. stock ticker values MUST come from an authenticated source...), then there is no real reason to incur the overhead (and not just performance... dev/test/cert/etc).
However, if you have shared resources (e.g. same server) between your site and another MORE SENSITIVE site, then the more sensitive site should be setting the rules here.
Also, passwords (and other credentials), credit card info, etc should ALWAYS be over SSL/TLS.
Be sure that, when on an HTTPS page, all elements on the page come from an HTTPS address. This means that elements should have relative paths (e.g. "/images/banner.jpg") so that the protocol is inherited, or that you need to do a check on every page to find the protocol, and use that for all elements.
NB: This includes all outside resources (like Google Analytics javascript files)!
The only down-side I can think of is that it adds (nearly negligible) processing time for the browser and your server. I would suggest encrypting only the transfers that need to be.
I would say the most common mistakes when working with an SSL-enabled site are
The site erroneously redirects users from an https page to http
The site doesn't automatically switch to https when it's necessary
Images and other assets on an https page are being loaded via http, which will trigger a security alert from the browser. Make sure all assets are using fully-qualified URIs that specify https.
The security certificate only works for one subdomain (such as www) but your site actually uses multiple subdomains. Make sure to get a wildcard certificate if you will need it.
I would suggest that any time user data is stored in a database and communicated, you use https. Consider this requirement even if the user data is mundane, because many of these mundane details are used by that user to identify themselves on other websites. Consider all the random security questions your bank asks you (like what street do you live on?); the answers can be taken from address fields really easily. In this case, the data is not what you would consider a password, but it might as well be. Furthermore, you can never anticipate what user data will be used for a security question elsewhere. You can also expect, given the intelligence of the average web user (think of your grandmother), that some tidbit of this information makes up part of that user's password somewhere else.
One pointer if you use https:
make it so that if the user types
http://www.website-that-needs-https.com/etc/yadda.php
they will automatically get redirected to
https://www.website-that-needs-https.com/etc/yadda.php
(personal pet peeve)
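A minimal PHP sketch of such a redirect (placed at the top of a page that requires HTTPS):

```php
<?php
// If the request arrived over plain HTTP, bounce to the HTTPS URL.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    $url = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
    header('Location: ' . $url, true, 301);
    exit;
}
```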
However, if you're just doing a plain html webpage, that will be essentially a one-way transmission of information from the server to the user, don't worry about it.
All very good tips here... but I just want to add something.
I've seen some sites that give you an HTTP login page and only redirect you to HTTPS after you post your username/password. This means the credentials are transmitted in the clear before the HTTPS connection is established.
In short, make the page you log in from SSL, instead of just posting to an SSL page.
I found that trying to <link> to a non-existent style sheet also caused security warnings. When I used the correct path, the lock icon appeared.