I want to set up some basic web server protection to protect against replay attacks and data manipulation hacks.
At the moment, I make a REST request from the client (android) side such as:
http://www.example.com/add_book.php?user_name=eddy&nonce=534365756756&book_title=My%20First%20Book
Here, I am using a nonce that will be stored against the user's id and checked for duplicate requests. However, in this unencrypted system someone can simply insert their own (random) nonce. What I want to do is convert the above request to something along the lines of:
http://www.example.com/add_book.php?fsjdfhdhsjfhdsjf538537854rj34i5348ur4rf4r3g4yrg4y32210dfsdjfsdhfjshru99jifjknjsdfnfjsfhuruwe
that can be decrypted server side back into the equivalent plain URL, so the parameters can be accessed with $_GET['book_title'] in the usual manner.
In the ideal world, the request itself could be encrypted with the user's hashed password as an easy way to certify the user is who they say they are.
I'm not really prepared to pay for an HTTPS certificate at this stage so that's not really an option.
Does anyone know how I can do this? My requests are plain text at the moment, so they are incredibly vulnerable.
Thanks.
Android encrypting URL parameters and values and decrypting server side
Don't encrypt URL parameters!
I want to set up some basic web server protection to protect against replay attacks and data manipulation hacks.
If you're trying to stop someone on the network from intercepting plaintext HTTP requests and manipulating them maliciously, HTTPS is the only solution you have.
If you're trying to stop a local user from doing the same, rethink the security model of your application.
I'm not really prepared to pay for an HTTPS certificate at this stage so that's not really an option.
You don't have to pay anything for HTTPS. It's free! Let's Encrypt, for example, issues certificates at no cost.
If you're using a hosting provider that makes it difficult to get HTTPS for free, set up a cheap VPS (LowEndBox, DigitalOcean droplets, etc.) and stop giving them your business. They'll adapt or die.
I have a web service (which returns data) that is accessible only to a few "whitelisted" remote servers. When a remote server sends a request to my server, I check the $_SERVER['HTTP_REFERER'] field for the whitelisted domain name and the corresponding IP address (both of which are known to my web service through a global array). Can this method of whitelisting requests be bypassed?
I know it is easy to implement referer spoofing, but do keep in mind that I am checking both the referer and the corresponding IP address, both of which are known to my app with certainty.
If this is NOT a safe thing to do, does anyone have an alternate method of allowing only "whitelisted" domains to access a given web service?
As commented, I'm not sure why an HTTP Referer header would be set in the first place in your scenario, but let's assume it is and its domain corresponds to the IP of the client. The Referer header is an arbitrary value sent by the client; it's trivially spoofed. The client's IP, on the other hand, is not spoofable (excluding elaborate network-level attacks which require the attacker to basically already have compromised one side or the other). What you're asking is whether it makes sense to use an insecure, meaningless value to confirm a value which is already as secure as you can get. And the answer is no. Just stick to the IP filter; that's already good enough.
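For illustration, a minimal sketch of such an IP filter in PHP (the addresses below are placeholders, not from the original post):

    // Allow only requests coming from known server addresses.
    $allowedIps = ['203.0.113.10', '198.51.100.25']; // placeholder addresses

    if (!in_array($_SERVER['REMOTE_ADDR'], $allowedIps, true)) {
        http_response_code(403);
        exit('Forbidden');
    }

    // ...serve the whitelisted request as usual...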
If you want to strengthen authentication further, use a proper authentication scheme in which you share a secret with your clients (username/password, API token, OAuth or similar).
I do not believe checking the Referer HTTP header in addition to the originating IP address yields any security benefits at all. Having said this, IP-based auth itself isn't the safest practice. If you really want to protect your API, better look into SSL and some form of HTTP authentication.
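As a rough sketch of what "some form of HTTP authentication" could look like on the PHP side (Basic authentication, which is only sensible over SSL since the credentials are merely base64-encoded in transit; the username and secret here are placeholders):

    // Reject the request unless the expected Basic-auth credentials were supplied.
    // Note: availability of PHP_AUTH_* depends on how PHP is run (works with mod_php).
    if (!isset($_SERVER['PHP_AUTH_USER'], $_SERVER['PHP_AUTH_PW'])
        || !hash_equals('expected-user', $_SERVER['PHP_AUTH_USER'])
        || !hash_equals('expected-secret', $_SERVER['PHP_AUTH_PW'])) {
        header('WWW-Authenticate: Basic realm="API"');
        http_response_code(401);
        exit('Authentication required');
    }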
How safe is it to pass passwords / username in POST or GET requests to an external server?
I will use PHP / CURL and I have second thoughts about security.
Alternatives will be considered as well!
If you use HTTP without SSL encryption, everything is transmitted in the clear, which includes the full URL, the HTTP request/response headers, and the body of the POST request and response.
If you place a password in the GET parameters, it will additionally be displayed in the address bar and quite likely saved in browser history, proxy server logs, and sent to other websites in the referrer header. Sending the password in the POST body or in the standard Authorization header avoids this obvious problem, but it is still visible to an observer who can sniff or proxy your traffic.
Digest Authentication avoids transmitting the password in the clear, and only a non-reusable signature is exposed to the outside observer. It is still vulnerable to man-in-the-middle attacks; see HTTP Digest Authentication versus SSL.
The correct solution is to use an SSL certificate and exclusively use HTTPS. When you do so, the URL string, HTTP headers, and POST body are all encrypted, and the browser verifies that no third party is operating a server in the middle. HTTP Basic Authentication is permissible in this case.
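To make that concrete, here is a sketch of the PHP/cURL side over HTTPS with Basic authentication (the URL and credentials are placeholders):

    // Authenticate with HTTP Basic auth over an HTTPS connection.
    $ch = curl_init('https://api.example.com/resource');    // placeholder URL
    curl_setopt_array($ch, [
        CURLOPT_HTTPAUTH       => CURLAUTH_BASIC,
        CURLOPT_USERPWD        => 'alice:s3cret',            // placeholder credentials
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_SSL_VERIFYPEER => true,                      // verify the server certificate
        CURLOPT_SSL_VERIFYHOST => 2,                         // and that it matches the hostname
    ]);
    $response = curl_exec($ch);
    curl_close($ch);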
By themselves, not necessarily. You shouldn't use GET for anything other than queries, in part because GET URLs can end up stored in the user's browser history. POST data is relatively easy to encrypt using existing libraries; you shouldn't implement your own encryption.
Also, getting an SSL certificate would help: if you use HTTPS (rather than HTTP), it will be even more secure.
You didn't give many details as to what the page was (read: the language) so I can't really recommend a good encryption library, but just Google it and I'm sure you'll find something.
How would you use HTTPS? Would sending information via GET and POST be any different while using HTTPS?
Any information and examples on how HTTPS is used in PHP for something simple like a secure login would be useful.
Thank you!
It will be no different for your PHP scripts; the encryption and decryption are done transparently at another layer.
Both GET and POST get encrypted, but GET will leave a trace in the web server log files.
HTTPS is handled at the SSL/TLS Layer, not at the Application Layer (HTTP). Your server will handle it as aularon was saying.
SSL and/or HTTPS is used to provide some level of confidentiality for data in transit between the web users and the web server. It can also be used to provide a level of confidence that the site the users are communicating with is in fact the one they intend to be.
In order to use SSL, you'll need to configure these capabilities on the server itself, which includes either purchasing an authority-signed certificate or creating a self-signed one. If you create your own self-signed certificate, the level of confidence your users can have that the site is the intended one is significantly reduced.
PHP
Once your webserver is able to serve SSL-protected pages, PHP will continue to operate as usual. Things to look out for are port numbers (normal HTTP is usually on port 80, while HTTPS traffic is usually on port 443), if your code relies on them.
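If your code does depend on the scheme or port, here is a minimal sketch of detecting it (assuming a standard server setup that populates $_SERVER['HTTPS']):

    // Detect whether the current request arrived over HTTPS.
    $isHttps = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off')
            || (isset($_SERVER['SERVER_PORT']) && (int) $_SERVER['SERVER_PORT'] === 443);

    // Build a self-referencing URL with the right scheme.
    $scheme  = $isHttps ? 'https' : 'http';
    $selfUrl = $scheme . '://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];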
GET & POST Data
Pierre 303 is correct, GET data may end up in the logs, and POST data will not, but this is no different than a non-SSL web server. SSL is meant to protect data in transit, it does nothing to protect you and your customers from web servers and their administrators that you may not trust.
Secure Login
There is also a performance hit (normally) when using SSL, so, some sites will configure their pages to only use https when the user is sending sensitive information, for example, their password or credit card details, etc. Other traffic would continue to use the normal, http server.
If this is the sort of thing you'd like to do, you'll want to ensure that your HTML login form uses an ACTION that points to the https server's pages. Once the server accepts this form submission, it can send a redirect to return the user to the page they requested over plain http again.
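A rough sketch of that arrangement (the example.com URLs and the credentials_ok() helper are placeholders, not part of the original answer):

    <!-- Served over plain http, but the form submits to the https site. -->
    <form method="post" action="https://www.example.com/login.php">
        <input type="text" name="username">
        <input type="password" name="password">
        <input type="submit" value="Log in">
    </form>

    <?php
    // login.php, served over HTTPS: after verifying the credentials,
    // redirect the user back to a plain-http page.
    if (credentials_ok($_POST['username'], $_POST['password'])) { // hypothetical helper
        header('Location: http://www.example.com/account.php');
        exit;
    }
    ?>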
Just ensure you're sending the correct headers when allowing files to be downloaded over SSL... IE can be a bit quirky. See http://support.microsoft.com/kb/323308 for details of how to resolve it.
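For reference, a hedged sketch of download headers that avoid the IE-over-SSL problem that article describes (the gist being that "no-cache"/"no-store" cache headers break IE downloads over HTTPS; the file name and path are placeholders):

    // Serve a file download over HTTPS without the cache headers that trip up IE.
    header('Content-Type: application/pdf');
    header('Content-Disposition: attachment; filename="report.pdf"');
    header('Cache-Control: private, must-revalidate');  // avoid "no-cache"/"no-store" here
    header('Pragma: public');                            // override PHP's default "Pragma: no-cache"
    readfile('/path/to/report.pdf');                     // placeholder path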
I know next to nothing when it comes to the how and why of https connections. Obviously, when I'm transmitting secure data like passwords or especially credit card information, https is a critical tool. What do I need to know about it, though? What are the most common mistakes you see developers making when they implement it in their projects? Are there times when https is just a bad idea? Thanks!
An HTTPS, or Secure Sockets Layer (SSL) certificate is served for a site, and is typically signed by a Certificate Authority (CA), which is effectively a trusted 3rd party that verifies some basic details about your site, and certifies it for use in browsers. If your browser trusts the CA, then it trusts any certificates signed by that CA (this is known as the trust chain).
Each HTTP (or HTTPS) request consists of two parts: a request, and a response. When you request something through HTTPS, there are actually a few things happening in the background:
The client (browser) does a "handshake", where it requests the server's public key and identification.
At this point, the browser can check for validity (does the site name match? is the date range current? is it signed by a CA it trusts?). It can even contact the CA and make sure the certificate is valid.
The client creates a new pre-master secret, which is encrypted using the server's public key (so only the server can decrypt it) and sent to the server
The server and client both use this pre-master secret to generate the master secret, which is then used to create a symmetric session key for the actual data exchange
Both sides send a message saying they're done with the handshake
The server then processes the request normally, and then encrypts the response using the session key
If the connection is kept open, the same symmetric session key will be used for each subsequent request.
If a new connection is established, and both sides still have the master secret, new session keys can be generated in an 'abbreviated handshake'. Typically a browser will store a master secret until it's closed, while a server will store it for a few minutes or several hours (depending on configuration).
For more on the length of sessions see How long does an HTTPS symmetric key last?
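If you want to see pieces of this handshake from PHP, here is a hedged sketch (example.com is a placeholder) that opens a TLS connection, lets PHP verify the chain and hostname, and then inspects the certificate the server presented:

    // Open a TLS connection and capture the peer certificate for inspection.
    $context = stream_context_create([
        'ssl' => [
            'verify_peer'       => true,  // validate the trust chain against the CA bundle
            'verify_peer_name'  => true,  // check that the certificate matches the hostname
            'capture_peer_cert' => true,  // keep the certificate so we can look at it below
        ],
    ]);

    $client = stream_socket_client('ssl://example.com:443', $errno, $errstr, 10,
                                   STREAM_CLIENT_CONNECT, $context);
    if ($client === false) {
        die("Handshake failed: $errstr ($errno)");
    }

    $params = stream_context_get_params($client);
    $cert   = openssl_x509_parse($params['options']['ssl']['peer_certificate']);

    echo $cert['subject']['CN'], "\n";              // Common Name (see next section)
    echo date('c', $cert['validTo_time_t']), "\n";  // certificate expiry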
Certificates and Hostnames
Certificates are assigned a Common Name (CN), which for HTTPS is the domain name. The CN has to match exactly, eg, a certificate with a CN of "example.com" will NOT match the domain "www.example.com", and users will get a warning in their browser.
Before SNI, it was not possible to host multiple HTTPS domain names on one IP. Because the certificate is presented before the client even sends the actual HTTP request, and it is the HTTP request that contains the Host: header telling the server which site is wanted, there was no way for the server to know which certificate to serve for a given request. SNI adds the hostname to part of the TLS handshake, so as long as it's supported on both client and server (and in 2015, it is widely supported) the server can choose the correct certificate.
Even without SNI, one way to serve multiple hostnames is with certificates that include Subject Alternative Names (SANs), which are essentially additional domains the certificate is valid for. Google uses a single certificate to secure many of its sites, for example.
Another way is to use wildcard certificates. It is possible to get a certificate like "*.example.com", in which case "www.example.com" and "foo.example.com" will both be valid for that certificate. However, note that "example.com" does not match "*.example.com", and neither does "foo.bar.example.com". If you use "www.example.com" for your certificate, you should redirect anyone at "example.com" to the "www." site. If they request https://example.com, unless you host it on a separate IP and have two certificates, they will get a certificate error.
Of course, you can mix both wildcard and SANs (as long as your CA lets you do this) and get a certificate for "example.com" with SANs "*.example.com", "example.net", and "*.example.net", for example.
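To illustrate the wildcard matching rule above, here is a small helper (purely illustrative, not from the original answer) that applies it the way described: one extra label only, and never the bare domain.

    // Returns true when $host is covered by $pattern under the wildcard rule above.
    function matchesWildcard(string $pattern, string $host): bool {
        if (strpos($pattern, '*.') !== 0) {
            return strcasecmp($pattern, $host) === 0;        // non-wildcards must match exactly
        }
        $suffix = substr($pattern, 1);                        // ".example.com"
        if (strcasecmp(substr($host, -strlen($suffix)), $suffix) !== 0) {
            return false;
        }
        $label = substr($host, 0, strlen($host) - strlen($suffix));
        return $label !== '' && strpos($label, '.') === false;
    }

    matchesWildcard('*.example.com', 'www.example.com');     // true
    matchesWildcard('*.example.com', 'example.com');         // false
    matchesWildcard('*.example.com', 'foo.bar.example.com'); // false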
Forms
Strictly speaking, if you are submitting a form, it doesn't matter if the form page itself is not encrypted, as long as the submit URL goes to an https:// URL. In reality, users have been trained (at least in theory) not to submit forms unless they see the little "lock icon", so the form itself should also be served via HTTPS so that the lock appears.
Traffic and Server Load
HTTPS traffic is much bigger than its equivalent HTTP traffic (due to encryption and certificate overhead), and it also puts a bigger strain on the server (encrypting and decrypting). If you have a heavily-loaded server, it may be desirable to be very selective about what content is served using HTTPS.
Best Practices
If you're not just using HTTPS for the entire site, it should automatically redirect to HTTPS as required. Whenever a user is logged in, they should be using HTTPS, and if you're using session cookies, the cookie should have the secure flag set. This prevents interception of the session cookie, which is especially important given the popularity of open (unencrypted) wifi networks.
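A minimal sketch of the cookie side of this advice (assuming PHP sessions; the cookie name and the $token value are placeholders):

    // Mark the session cookie Secure (and HttpOnly while we're at it).
    ini_set('session.cookie_secure', '1');    // only ever send the session cookie over HTTPS
    ini_set('session.cookie_httponly', '1');  // keep it out of reach of JavaScript
    session_start();

    // The same flags on a hand-rolled cookie (last two arguments: secure, httponly).
    setcookie('remember_me', $token, time() + 86400, '/', '', true, true); // $token assumed to exist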
Any resources on the page should come from the same scheme being used for the page. If you try to fetch images over http:// when the page is loaded with HTTPS, the user will get security warnings. You can either use fully-qualified https:// URLs, or, even easier, use absolute URLs that do not include the hostname (eg, src="/images/foo.png"), because those work for both schemes.
This includes external resources (eg, Google Analytics)
Don't do POSTs (form submits) when changing from HTTPS to HTTP. Most browsers will flag this as a security warning.
I'm not going to go in depth on SSL in general; gregmac did a great job on that above ;-).
However, some of the most common (and critical) mistakes made (not specifically PHP) with regards to use of SSL/TLS:
Allowing HTTP when you should be enforcing HTTPS
Retrieving some resources over HTTP from an HTTPS page (e.g. images, IFRAMEs, etc)
Directing to an HTTP page from an HTTPS page unintentionally - note that this includes "fake" pages such as "about:blank" (I've seen these used as IFRAME placeholders); this will needlessly and unpleasantly pop up a warning.
Web server configured to support old, insecure versions of SSL (e.g. SSL v2 is common, yet horribly broken)
(okay, this isn't exactly the programmer's issue, but sometimes no one else will handle it...)
Web server configured to support insecure cipher suites (I've seen only NULL ciphers in use, which basically provides absolutely NO encryption)
(ditto)
Self-signed certificates - prevents users from verifying the site's identity.
Requesting the user's credentials from an HTTP page, even if submitting to an HTTPS page. Again, this prevents a user from validating the server's identity BEFORE giving it his password... Even if the password is transmitted encrypted, the user has no way of knowing if he's on a bogus site - or even if it WILL be encrypted.
Non-secure cookie - security-related cookies (such as sessionId, authentication token, access token, etc.) MUST be set with the "secure" attribute. This is important! If it's not set to secure, the security cookie, e.g. SessionId, can be transmitted over HTTP (!) - and attackers can ensure this will happen - thus allowing session hijacking etc. While you're at it (though this is not directly related), set the HttpOnly attribute on your cookies, too (it helps mitigate some XSS).
Overly permissive certificates - say you have several subdomains, but not all of them are at the same trust level. For instance, you have www.yourdomain.com, download.yourdomain.com, and publicaccess.yourdomain.com. So you might think about going with a wildcard certificate.... BUT you also have secure.yourdomain.com, or finance.yourdomain.com - even on a different server. publicaccess.yourdomain.com will then be able to impersonate secure.yourdomain.com....
While there may be instances where this is okay, usually you'd want some separation of privileges...
That's all I can remember right now, might re-edit it later...
As for when it is a BAD idea to use SSL/TLS - if you have public information which is NOT intended for a specific audience (either a single user or registered members), AND you're not particular about them retrieving it specifically from the proper source (e.g. stock ticker values MUST come from an authenticated source...) - then there is no real reason to incur the overhead (and not just performance... dev/test/cert/etc).
However, if you have shared resources (e.g. same server) between your site and another MORE SENSITIVE site, then the more sensitive site should be setting the rules here.
Also, passwords (and other credentials), credit card info, etc should ALWAYS be over SSL/TLS.
Be sure that, when on an HTTPS page, all elements on the page come from an HTTPS address. This means that elements should have relative paths (e.g. "/images/banner.jpg") so that the protocol is inherited, or that you need to do a check on every page to find the protocol, and use that for all elements.
NB: This includes all outside resources (like Google Analytics javascript files)!
The only downside I can think of is that it adds (nearly negligible) processing time for the browser and your server. I would suggest encrypting only the transfers that need to be encrypted.
I would say the most common mistakes when working with an SSL-enabled site are:
The site erroneously redirects users to http from a page served as https
The site doesn't automatically switch to https when it's necessary
Images and other assets on an https page are being loaded via http, which will trigger a security alert from the browser. Make sure all assets are using fully-qualified URIs that specify https.
The security certificate only works for one subdomain (such as www) but your site actually uses multiple subdomains. Make sure to get a wildcard certificate if you will need it.
I would suggest that any time user data is stored in a database and communicated, you use https. Consider this requirement even if the user data is mundane, because many of these mundane details are used by that user to identify themselves on other websites. Consider all the random security questions your bank asks you (like what street you live on). These can be taken from address fields really easily. In this case, the data is not what you would consider a password, but it might as well be. Furthermore, you can never anticipate what user data will be used for a security question elsewhere. You can also expect that, with the intelligence of the average web user (think your grandmother), that tidbit of information might make up part of that user's password somewhere else.
One pointer if you use https
make it so that if the user types
http://www.website-that-needs-https.com/etc/yadda.php
they will automatically get redirected to
https://www.website-that-needs-https.com/etc/yadda.php
(personal pet peeve)
However, if you're just doing a plain HTML webpage that is essentially a one-way transmission of information from the server to the user, don't worry about it.
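A minimal sketch of that redirect in PHP (assuming the server sets $_SERVER['HTTPS'] in the usual way):

    // Force HTTPS: bounce any plain-http request to the same URL over https.
    if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
        header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
        exit;
    }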
All very good tips here... but I just want to add something.
I've seen some sites that give you an http login page and only redirect you to https after you post your username/password. This means the credentials are transmitted in the clear before the https connection is established.
In short, serve the login page itself over SSL, instead of only posting to an SSL page.
I found that trying to <link> to a non-existent style sheet also caused security warnings. When I used the correct path, the lock icon appeared.