If I call a single PHP file that in turn uses GETs and POSTs to build the HTML page, as well as process other data and store it in the SESSION, do I need to mirror the entire site into an HTTPS-capable directory, or does only the page being called need to be in that directory?
So for example my computer sends my name via POST to the server, specifically to Index.php.
If the address of Index.php is an https:// URL, is the data secure going to the server?
Is the data returning, most specifically the SESSION data, also secure?
Also, I apologize if this question has been answered a hundred times; for some reason I could not think of the proper search terms to find the answer.
do I need to mirror the entire site into an HTTPS-capable directory, or does only the page being called need to be in that directory?
No. The web server can be set up to serve the same directory over both HTTP and HTTPS; in Apache, for example, the port 80 and port 443 virtual hosts can point at the same DocumentRoot.
If the address of Index.php is an https:// URL, is the data secure going to the server?
If the https protocol is specified in the URL, then all the traffic between client and server (the request from client to server and the response from server to client) is encrypted.
Is the data returning, most specifically the SESSION data, also secure?
Session data is never sent to the client; it is stored on the server. Only the session ID (usually carried in a cookie) travels between client and server.
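Because the session ID cookie is the one piece that does travel, it is worth flagging it as secure when the site runs over HTTPS. A minimal sketch using PHP's built-in session functions (the flag values shown are the only assumptions here):

// Mark the session cookie "secure" (sent over HTTPS only) and "httponly"
// (hidden from JavaScript) before starting the session.
// Arguments: lifetime (0 = until browser closes), path, domain, secure, httponly.
session_set_cookie_params(0, '/', '', true, true);
session_start();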
Related
I have a shared SSL certificate from my web host which (for this post's sake) looks like this:
https://some-ssl-cert/mysite
Going to that link takes you to my site and displays it over https:// with a green padlock.
The normal site is http://
How do I display the main login for the website as https://?
Obviously I cannot tell or redirect my users to https://some-ssl-cert/mysite, so I am very confused about how to implement this.
Lastly, when I need to send sensitive information from other pages that aren't https://, would I simply send that information to https://some-ssl-cert/mysite?
So, for instance, if I needed to make a secure AJAX request, would I access the .php file via https://some-ssl-cert/mysite?
How do I display the main login for the website as https://?
You need an SSL certificate for the host name used for your site. You also need your host to support it.
Lastly, when I need to send sensitive information from other pages that aren't https://, would I simply send that information to https://some-ssl-cert/mysite?
If you need to send sensitive information, then you need to do it over HTTPS. If you are using plain HTTP then you need to redirect to the HTTPS site.
So, for instance, if I needed to make a secure AJAX request, would I access the .php file via https://some-ssl-cert/mysite?
The entire webpage needs to be served over HTTPS. Otherwise:
- It will be a cross-origin request and the AJAX will fail (CORS/JSONP/et al. excepted).
- The non-secured page could be interfered with (e.g. JS added that would steal the securely acquired data).
Sorry for the confusion. To clarify my question: the session will be created over SSL and will stay encrypted. While users browse over normal HTTP, I'm asking whether, if I "require" an SSL page that verifies the user's session, it will run over SSL, or simply become part of the parent page, which is served over HTTP and therefore unable to retrieve the session ID, because the session was saved under HTTPS.
I'm currently working on a secure member login with PHP.
The login form will redirect to an SSL URL (i.e. https) to keep the password safe for people who are logging in over an unencrypted network/wifi.
The only problem is, I can't think of any way to "securely" pass a user's login session from https to http.
So I was thinking of using PHP's "require_once" with a file URL starting with https. The included file would create a session under https, and all I would have to do is require that page in every authentication-required page.
The only issue is, I'm not sure whether the "required file" will run under https or its code will simply be included in the parent page and run under http.
In other words, how exactly does include or require work (does the function run the code as a separate page, or simply include the code in the parent page and run it there)? I searched the PHP manual but wasn't able to find the answer. Also, I can't test it myself because I don't have an SSL certificate yet.
Also, any suggestions on building a secure login using https (just for the login) in combination with http for the rest of the user interface?
include() and require() will only go 'external' and perform an HTTP-type request if the path you provide to them looks like a URL (e.g. 'http://...'). Otherwise it's interpreted as a local file request and does NOT involve the HTTP layer.
There's no practical difference to PHP whether a script was requested via HTTP or HTTPS, except that there will be extra SSL-specific entries in $_SERVER. Includes/requires still work as if the script were running in a non-SSL environment, and the script can still make cURL requests and whatnot. Remember that the SSL link is established by the server and the client browser BEFORE PHP is invoked, and applies only to the client<->server communication. Anything the script does with external resources will only involve SSL if those resources are themselves requested via a completely separate SSL request.
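To make the distinction concrete, here is a minimal sketch (both file paths are hypothetical):

// Local file: PHP reads it straight off the filesystem, no HTTP involved,
// and its code runs as part of the current (HTTP or HTTPS) request.
require_once '/var/www/includes/check_session.php';

// URL include: this would be a brand-new HTTP(S) request made by the server,
// and it only works at all if allow_url_include is enabled in php.ini
// (it is off by default, and enabling it is a security risk).
// require_once 'https://www.example.com/check_session.php';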
You cannot "turn on" SSL from within a PHP script. There's no mechanism in HTTP to dynamically migrate a connection from the regular unencrypted port 80 to the encrypted port 443 within the same request. You can redirect the client towards an SSL URL, but that involves a completely new HTTP request: the original request started as non-SSL and will stay non-SSL.
Edit: The below is an answer to the original question, which was phrased in a way that made it sound like the author only wanted the login page to be protected.
I assume that the reason you want to redirect back to HTTP is that the site contents itself isn't confidential, and that you only care about protecting the user's password and account. However, if you redirect the user back to HTTP after logging in, your site will be almost as insecure as if you didn't use HTTPS at all. Granted, HTTPS login will prevent the user's password from being sniffed, but anyone can use Firesheep or similar applications to steal the user's session id after login if you redirect back to HTTP - then, they can take over the account by changing the password (or simply act as the user without changing the password).
(While we're on the subject: why on Earth doesn't StackOverflow use HTTPS after login?) :-(
In order to maintain security, you need to ensure the https:// is in the user's address bar at all times. You can't just include a file and expect it to be secure.
Think of it this way. Say you have a form on http:// and you make a cURL call to an https:// URL at Verisign to post a credit card payment. That unencrypted form data can easily be intercepted before it ever reaches Verisign's secure page.
If it's SSL, keep it SSL throughout the entire session. You'll notice on bank sites there is usually a login button which directs you to an https:// page containing the form, OR they mix it by grabbing your username on the http:// page and then posting that to the https:// page before asking for your password. US Bank does this just to get the user engaged on the home page.
EDIT:
To respond to the new clarification. I would not let a user browse http:// pages while logged in via https://. I would add this logic:
if (isset($_SESSION['LOGGED_IN_SSL']))
{
    // $_SERVER['HTTPS'] is absent entirely on plain HTTP requests (and set to
    // "off" on some IIS setups), so test for both instead of comparing to "on".
    if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] == "off")
    {
        $url = "https://" . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'];
        header("Location: $url");
        exit();
    }
}
That would force the user to view the https:// version of whatever page he/she wishes to view.
When using PHP I can use file_get_contents or cURL to get a URL.
jQuery runs on the client
In jQuery there is a function called jQuery.getJSON(). JavaScript is run on the client. What server is used for the download of the JSON code of the external URL? What information does the called URL know about? Does it know of the domain? The IP of the client user? It's a client language.
Preferred for many requests
To make many requests, is it safer to do this with JavaScript than with PHP, because it runs on every client instead of on one server point?
What server is used for the download of the JSON code of the external URL?
The one that the domain name in the URL passed to that function resolves to.
What information does the called URL know about?
It is an HTTP request, like any other. The usual information will be available.
Does it know of the domain? The IP of the client user?
Of course.
It's a client language.
… making an HTTP request.
To make many requests, is it safer to do this with JavaScript than with PHP, because it runs on every client instead of on one server point?
You control the server. You don't control the client. JavaScript can be disabled. It is safer to make the request from your server.
(For a value of "safe" equal to "Less likely to fail assuming the service you are using doesn't impose rate limiting")
Because of the Same Origin Policy, all requests made in JavaScript must go to the domain from which the document was loaded. It's a standard HTTP request, so the server will have the same information it would have if a user were just navigating around (including cookies, etc.). From the phrasing of your question it appears you need to make requests to some external site, in which case making those requests from your server, which is not subject to such a security policy, would likely be best.
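For example, a minimal server-side proxy sketch (the external URL is hypothetical): the page requests a script like this on its own domain, and PHP makes the external request server-to-server:

// proxy.php: fetch the external JSON on the server and hand it to the client.
// Assumes allow_url_fopen is enabled; otherwise use cURL for the fetch.
$json = file_get_contents('https://api.example.com/data.json');
header('Content-Type: application/json');
echo $json;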
In jQuery there is a function called jQuery.getJSON(). JavaScript is run on the client. What server is used for the download of the JSON code of the external URL? What information does the called URL know about? Does it know of the domain? The IP of the client user? It's a client language.
The code that runs your web browser is only on your PC, too, yet it is perfectly capable of retrieving content via the HTTP protocol from a web server, and has done so for several decades.
AJAX requests are no different. jQuery creates an XMLHttpRequest object that performs an HTTP request in a manner uncoupled from the general page context. As far as the server's concerned, it's just an HTTP request like any other.
The text contents of the result you get back happen to be written in JSON format, but the HTTP layer neither knows nor cares about that.
I have learned that HTTP_REFERER, like any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So, how can I ensure an incoming HTTP request is coming from a website that I white-list?
For example, I have JS code that will send data from the client site to my server (something like a sniper, cross-platform). However, I only want this to happen from several websites, not others, so that even if other people copy the code and put it on their own websites, it won't work.
In the general case you simply can't do it; you are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all those several websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check each token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
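If you do happen to control all the sites, a rough sketch of such a token (the shared secret, the field name, and $userId are hypothetical; the secret has to be known to both sides):

// On the whitelisted site, when serving the page:
$secret  = 'shared-secret-known-to-both-sites'; // distributed out of band (assumption)
$payload = $userId . '|' . time();              // $userId from that site's own login system
$token   = $payload . '|' . hash_hmac('sha256', $payload, $secret);
// ...embed $token in the page so the JS sends it along with its request...

// On your server, when the request arrives:
list($userId, $issuedAt, $mac) = explode('|', $_POST['token'], 3);
$expected = hash_hmac('sha256', $userId . '|' . $issuedAt, $secret);
$valid = hash_equals($expected, $mac)           // authentic?
      && (time() - (int)$issuedAt < 300);       // and no older than five minutes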
See also this question on HTTP_REFERER
I haven't used this in practice, so there might be practicality issues I'm not counting on, but I thought I'd contribute the idea anyway. If I interpret it correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
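A rough PHP sketch of this scheme, using the session as the "on file" store (the parameter name and the expiry window are assumptions):

// When serving the page: file the ID/IP pair and embed the ID in the page.
session_start();
$id = bin2hex(random_bytes(16)); // unpredictable per-page-serve ID
$_SESSION['served_ids'][$id] = array(
    'ip'     => $_SERVER['REMOTE_ADDR'],
    'issued' => time(),
);

// When handling the JS request (e.g. ajax.php?id=...): check the pair first.
$sent  = isset($_GET['id']) ? $_GET['id'] : '';
$entry = isset($_SESSION['served_ids'][$sent]) ? $_SESSION['served_ids'][$sent] : null;
if ($entry !== null
    && $entry['ip'] === $_SERVER['REMOTE_ADDR'] // same place the page was served to
    && time() - $entry['issued'] < 600) {       // entries expire after ten minutes
    // ...serve the JS response...
} else {
    http_response_code(403);                    // no matching IP/ID pair on file
}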
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server) then you can either set up a VPN between that remote server and yours, or you can change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
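In that server-to-server case, a firewall-style check can also be approximated in PHP itself; a minimal sketch (the addresses are hypothetical):

// Only service requests whose source address is on the allow-list.
$allowed = array('203.0.113.10', '203.0.113.11'); // the remote servers you trust
if (!in_array($_SERVER['REMOTE_ADDR'], $allowed, true)) {
    http_response_code(403);
    exit('Forbidden');
}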
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request-response mechanism incorrectly.
For example, I have JS code that will send data from the client site to my server (something like a sniper, cross-platform).
I assume what you're saying is that you have some JavaScript code served up on some website on your 'whitelist' which redirects the user to your website, and it's on your website that you want to check that the user came from the 'whitelisted' site?
Aside from setting a cookie (which might not be possible across domains) you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
So, how can I ensure an incoming HTTP request is coming from a website that I white-list?
I think you could sign every request (from the whitelist) so that each signature is valid for that one request only. I assume using uniqid() for this is safe (enough?).
So what I am trying to do is this:
log in to the other server with a PHP script on my own server (either with my username and password, or with my cookies)
then have access to the page I want to display/download
I want to write a PHP script, located on my own server, that automatically logs in to another server which uses HTTPS and a web form for login.
After the login I have access to the page that I am trying to download.
I don't know whether it would be possible to log in and download the HTML using only the cookies I have in my browser from a previous login, or whether I need to perform the login in my PHP script through some HTTPS login method.
Can I do any of this with cURL or fsockopen, or what would be the best way to do this?
Thanks in advance!
You just have to try. In most cases you should be fine if you export your cookies and use them in your cURL request.
However, the website may have tied the cookies to the remote address, or given them a timeout.
Then you probably have to log in from the server. With PHP/cURL you can do all of that.
The only things that may be a problem are JavaScript and captcha codes.
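As a minimal sketch of the login-then-fetch flow with cURL (the URLs and form field names are hypothetical; match them to the real login form):

// Log in by POSTing the form fields, saving whatever cookies come back.
$cookieJar = tempnam(sys_get_temp_dir(), 'cookies');
$ch = curl_init('https://other-server.example/login.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'username' => 'me',
    'password' => 'secret',
)));
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);  // store the session cookie
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow the post-login redirect
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);

// Reuse the saved cookies to download the protected page.
$ch = curl_init('https://other-server.example/protected-page.php');
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar); // send the session cookie back
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$html = curl_exec($ch);
curl_close($ch);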
In addition, you should definitely check out the Zend HTTP client; it has functionality that makes "browsing" easy, for example saving cookies and automatically passing them on in the next request, and also deleting them if the server tells you to, etc.
Use the PEAR HTTP_Request class.