So, very similar to "Detecting https requests in php":
Want to have https://example.com/pog.php go to http://example.com/pog.php or even vice versa.
Problems:
Can't read anything from $_SERVER["HTTPS"] since it's not there
Server is sending both requests over port 80, so can't check for 443 on the HTTPS version
apache_request_headers() and apache_response_headers() are sending back the same thing
Can't tell the loadbalancer anything or have it send extra somethings
Server feedback data spat out by the page on both URL calls is exactly the same save for the session ID. Bummer.
Are there any on-page ways to detect whether it's being called via SSL or non-SSL?
Edit: $_SERVER["HTTPS"] isn't there, switched on or not, no matter whether you're looking at the site via SSL or non-SSL. For some reason the hosting has chosen to serve all the HTTPS requests encrypted, but down port 80. And thus, $_SERVER["HTTPS"] is never on, not there, just no helpful feedback on that server point. So that parameter is always empty.
(And yeah, that means it gets flagged in say FF or Chrome for a partially invalid SSL certificate. But that part doesn't matter.)
Also, the most that can be detected from the URL is everything from the slashes onward; the PHP can't see whether the request has https or http at the front.
Keyword -- Load Balancer
The problem boils down to the fact that the load balancer is handling SSL encryption/decryption and it is completely transparent to the webserver.
Request: Client -> 443or80 -> loadbalancer -> 80 -> php
Response: PHP -> 80 -> loadbalancer -> 443or80 -> Client
The real question here is "do you have control over the load balancer configuration?"
If you do, there are a couple of ways to handle it. Configure the load balancer to have separate service definitions for HTTP and HTTPS. Then send HTTP traffic to port 80 of the web servers, and HTTPS traffic to port 81 of the web servers (port 81 not being used by anything else).
In apache, configure two different virtual hosts:
<VirtualHost 1.2.3.4:80>
ServerName foo.com
SetEnv USING_HTTPS 0
...
</VirtualHost>
<VirtualHost 1.2.3.4:81>
ServerName foo.com
SetEnv USING_HTTPS 1
...
</VirtualHost>
Then the environment variable USING_HTTPS will be either 1 or 0, depending on which virtual host picked the request up. That will be available in the $_SERVER array in PHP. Isn't that cool?
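For instance, a minimal PHP check against that variable could look like the following. This is only a sketch: it assumes the SetEnv names from the virtual hosts above and that Apache passes SetEnv values through to $_SERVER (as with mod_php).
<?php
// USING_HTTPS comes from the SetEnv lines in the two virtual hosts above.
$isHttps = isset($_SERVER['USING_HTTPS']) && $_SERVER['USING_HTTPS'] === '1';

if (!$isHttps) {
    // Request came in through the port-80 (plain HTTP) service definition;
    // for example, force the secure version of the current page.
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
    exit;
}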
If you do not have access to the load balancer configuration, then things are a bit trickier. There will not be a way to definitively know whether you are using HTTP or HTTPS, because HTTPS is just HTTP carried over an encrypted connection: in either case you are using HTTP 1.1 to make the request, and there is no information in the actual request to say whether it arrived encrypted or not.
But don't lose heart. There are a couple of ideas.
The 6th parameter to PHP's setcookie() function can instruct a client to send the cookie ONLY over HTTPS connections (http://www.php.net/setcookie). Perhaps you could set a cookie with this parameter and then check for it on subsequent requests?
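Roughly along these lines, as a sketch only (the cookie name is made up, and as noted below this isn't bulletproof):
<?php
// The 6th ("secure") argument tells the browser to send this cookie
// back over HTTPS connections only.
if (!isset($_COOKIE['via_ssl'])) {
    setcookie('via_ssl', '1', 0, '/', '', true);
}

// On later requests, seeing the cookie suggests the browser thinks
// it is talking to us over HTTPS.
$probablyHttps = isset($_COOKIE['via_ssl']);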
Another possibility would be to use JavaScript to update the links on each page depending on the protocol (adding a GET parameter).
(Neither of the above would be bulletproof.)
Another pragmatic option would be to get your SSL on a different domain, such as secure.foo.com. Then you could resort to the VirtualHost trick above.
I know this isn't the easiest issue because I deal with it during the day (load balanced web cluster behind a Cisco CSS load balancer with SSL module).
Finally, you can always take the perspective that your web app should switch to SSL mode when needed, and trust the users NOT to move it back (after all, it is their data on the line (usually)).
Hope it helps a bit.
$_SERVER["HTTPS"] isn't there, switched on or not, no matter if you're looking at the site via SSL or non-SSL. For some reason the hosting has chosen to serve all the HTTPS requests encrypted, but down port 80. And thusly, the $_SERVER["HTTPS"] is never on, not there, just no helpful feedback on that server point. So that parameter is always be empty.
You gotta make sure that the provider has the following line in the VHOST entry for your site: SSLOptions +StdEnvVars. That line tells Apache to include the SSL variables in the environment for your scripts (PHP).
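If the provider adds it, the familiar checks should start returning useful values again; a sketch of the standard test:
<?php
// With SSLOptions +StdEnvVars, Apache exports the SSL-related variables
// to the script environment for requests handled by mod_ssl.
$viaSsl = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';
// Any of the exported SSL_* variables being present also works:
// $viaSsl = isset($_SERVER['SSL_PROTOCOL']);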
I've searched a lot but I couldn't find a PHP proxy server which can run on a shared host, so I decided to build a very simple one from scratch, but I'm still on the first step. I've created a subdomain, httpp.alvandsoft.com, and redirected all its subdirectories (REQUEST_URI) to the main index.php so requests get logged and I can find out what exactly a proxy server would receive and send.
(The log is accessible through https://httpp.alvandsoft.com/?log=1&log_filename=log.txt)
But whenever I set it as a proxy for Telegram or other apps, it doesn't receive ANY requests at all, even when I use port 443 or 80, and regardless of the proxy type (HTTP, SOCKS or MTProto).
Is a proxy something that depends on the server's settings and works in a way other than regular HTTP requests and responses, or am I missing something?
I found it out myself. HTTP(S) proxies send the requested host in the Host request header, and many hosts and websites check this header; if it isn't one of their valid domains/IPs, they redirect the request immediately.
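In other words, the kind of check those servers apply looks roughly like this (a sketch; the domain is only an example):
<?php
// If the Host header doesn't match a name the server considers valid,
// redirect the request away immediately. The domain is just an example.
$validHost = 'httpp.alvandsoft.com';
if (!isset($_SERVER['HTTP_HOST']) || strcasecmp($_SERVER['HTTP_HOST'], $validHost) !== 0) {
    header('Location: https://' . $validHost . '/', true, 302);
    exit;
}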
I'm facing two basic problems which I'm unable to rectify:
1) I have a subdomain (with a virtual host) with HTTPS enabled via Let's Encrypt. The subdomain works great with SSL when I visit sub.domain.com and the browser shows the green padlock.
Now when I type sub.domain.com:8080 it serves my Node application, but as soon as I change it to https:// the browser says it is unable to connect, which is beyond my imagination how this is happening.
2) When the first method didn't work, I jumped to a second method: on my root domain domain.com, inside the html folder, I placed the Node application under a URL like domain.com/nodeapp. When I visit this URL it shows the folder structure with various files and folders, but as soon as I add the port number (domain.com/nodeapp:8080/) the browser shows a 404. Note that I'm already running a PHP application on my root domain, so it might conflict with it. How can I solve this? We have one module which has to be in Node, and we are unable to find the perfect solution.
I suppose your PHP application is served by an Apache web server, right? Apache has a module which does the HTTPS encryption if it is configured correctly (which Let's Encrypt normally does).
Your Node.js application is configured to listen on port 8080, but it doesn't handle encryption automatically. Also, Let's Encrypt doesn't configure it for you (yet).
Have a look at this article on how to configure encryption for Node.js yourself.
I have two websites:
1) httpwebsite.com, where I run my web application which uses Apache, PHP and MySQL;
2) wss.com, where I run a Node.js websocket server, used for a multiplayer game;
I want to host the JavaScript client-side files that communicate with the websocket server on httpwebsite.com, so I don't have to configure an HTTP server in Node.js, for many reasons, like security and lack of experience with using Node.js as an HTTP server.
I want to use Node.js only for the websocket server, for performance and flexibility reasons, among many others.
I've heard that the same-origin policy restricts communication from httpwebsite.com with wss.com, but can this be reconfigured to actually allow communication between two different domains that want to communicate with each other on purpose?
Do I have other options besides actually running an HTTP server on the Node.js server?
You can use CORS for secure requests from one domain to another domain.
http://www.html5rocks.com/en/tutorials/cors/
2 options:
You can add CORS headers to wss.com to allow website.com to load its resources (see the header sketch after the proxy config below). The link Matt gave should explain how this works; you just need to add this HTTP header on each Node server you need to access.
You can proxy your requests through your Apache server to the Node server, so the web browser thinks it's talking to a service on the same origin. This is often used to make only your web server publicly available, with your app server (running Node) not directly available and protected behind a firewall, though obviously Apache needs to be able to access it.
You can use this config in Apache to achieve option 2, forwarding http://website.com/api calls to a service running on wss.com on port 3000.
#send all /api requests to node
ProxyPass /api http://wss.com:3000
#Optionally change all references to wss.com to this domain on return:
ProxyPassReverse /api http://wss.com:3000
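For option 1, the headers are the same no matter what serves them. Purely to illustrate the header names, here is a sketch in PHP; in this setup the equivalent would have to be sent by the Node service on wss.com, and the origin value is an assumption you should adjust.
<?php
// CORS response headers the wss.com service needs to return so that pages
// served from httpwebsite.com may call it. Shown in PHP only to illustrate
// the header names; the Node server must send the equivalent.
header('Access-Control-Allow-Origin: http://httpwebsite.com'); // adjust scheme/host
header('Access-Control-Allow-Methods: GET, POST, OPTIONS');
header('Access-Control-Allow-Headers: Content-Type');

// Preflight (OPTIONS) requests can stop here.
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    exit;
}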
I need to make a transparent redirect (without the user seeing the address change in the address bar). When the user types example.com he/she should be redirected to 123.123.123.123:9090 (IP:PORT). I cannot use a CNAME or A record because they do not accept a port. How can I do that? I know that using SRV I could do it, but my webserver does not allow it.
I also tried using mod_proxy on Apache to rewrite the request from example.com -> 123.123.123.123:9090, but that seems absurd because the user requests the content, Apache requests the content from my IP, and then the response has to go all the way back. I need the user's request to reach the webserver directly, without a proxy.
Well, DNS doesn't care about ports, only about hostnames. That's why you can access every server like example.com:8080 (it will probably fail for 99% of all servers though ;)). So what should happen when I try to request your site via example.com:8080? Does port 8080 or 9090 take precedence?
Long story short: there is no way around a proxy. But it would be easier to let the application listen on port 80 directly.
I have partially developed a property website that fetches property data from a RETS IDX. You may know that the RETS server listens on port 6103 over the HTTP protocol. My website is deployed on shared hosting, due to which I cannot connect to port 6103. I do have a dedicated server (which does allow connections to port 6103). I want to use this dedicated server as a middle tier between my website and the RETS IDX server. My problem is that I need to develop that middle-tier script, i.e. an HTTP tunnel.
My website will send all RETS requests to this tunnel, which will pass them on to the RETS IDX server and send the response back to the website right away.
Website (shared hosting) --port 80--> Tunnel (dedicated hosting) --port 6103--> RETS Server
The RETS server also requires a login, so the session should be maintained properly.
I want the quickest/best solution to do the job, maybe through .htaccess or a streaming PHP script; some third-party script could also save me some time.
I would love to hear any thoughts or suggestions you have.
P.S.: I cannot move my website to a dedicated server, because in the near future I am going to have plenty of these sites and they would cost too much.
I'd personally go for the Reverse Proxy approach. This will allow you to intelligently forward requests, based on configurable criteria.
Both Apache and nginx have reverse proxy capabilities (in fact it was nginx's original purpose). For Apache you need to use mod_proxy, while nginx has the functionality built in unless you explicitly disable it before compiling.
Of these two options I personally prefer nginx, it is robust and lightweight, and completely fit for purpose. I find Apache more cumbersome, but if you already have Apache set up on your dedicated server, you may prefer to use that instead.
The beauty of using web servers to proxy is that they understand the underlying protocol. They will preserve headers, modify cookies (preserve sessions), and translate hostnames correctly.
Apache Config
In both cases configuration is very straightforward; the Apache config looks something like the following:
<Location /proxy-location/>
    ProxyPass http://rets-server:6103/api/path
    ProxyPassReverse http://rets-server:6103/api/path
</Location>
There's also options for tweaking cookies, setting timeouts etc. All of which can be found in the mod_proxy documentation
You should note that this cannot go in a .htaccess file. It must go in the main server config.
nginx Config
Equally as simple
location /proxy-location {
proxy_pass http://rets-server:6103/api/path;
}
Again tons of options in the HttpProxyModule documentation for caching, rewriting urls, adding headers etc.
Please do consult the docs. I've not tested either of these configurations and they may be a little off as they're from memory + a quick google.
Make sure you test your app by proxying to an unreachable server and ensure it handles failures correctly since you are introducing another point of failure.
I'm working on the assumption that you are able to configure your own dedicated server. If this is not the case, your hosts may be willing to help you out. If not, leave me a comment and I'll try to come up with a more robust curl option.
You can achieve this by using curl's PHP extension.
An example code could be :
// URL to proxy is passed as ?url=... (make sure to protect this, see below)
$url = $_GET['url'];

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
// Return only the body; echoing the raw response headers would corrupt the output
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$content = curl_exec($ch);
curl_close($ch);

echo $content;
Obviously you have to add protection, perhaps .htaccess/.htpasswd protection.
A more complete script, with cookie support and such, can be found here: https://github.com/cowboy/php-simple-proxy