Suggestion on developing a RETS PHP Tunnel

I have partially developed a property website that fetches property data from a RETS IDX feed. As you may know, RETS servers listen on port 6103 over HTTP. My website is deployed on shared hosting, so I cannot connect to port 6103. I do have a dedicated server (which allows connections to port 6103), and I want to use this dedicated server as a middle tier between my website and the RETS IDX server. My problem is developing that middle-tier script, i.e. an HTTP tunnel.
My website will send all RETS requests to this tunnel, which will forward them to the RETS IDX server and relay the responses back to the website.
Website (shared hosting) --[port 80]--> Tunnel (dedicated hosting) --[port 6103]--> RETS Server
The RETS server also requires a login, so the session must be maintained properly.
I want the quickest/best solution for the job. Maybe it can be done through .htaccess or a streaming PHP script, or maybe a third-party script could save me some time.
I would love to hear any thoughts or suggestions you have.
P.S.: I cannot move my website to a dedicated server because in the near future I am going to have plenty of these sites, and dedicated servers for all of them would cost too much.

I'd personally go for the Reverse Proxy approach. This will allow you to intelligently forward requests, based on configurable criteria.
Both Apache and nginx have reverse proxy capabilities (in fact it was nginx's original purpose). For Apache you need to use mod_proxy, while nginx has the functionality built in unless you explicitly disable it before compiling.
Of these two options I personally prefer nginx; it is robust, lightweight, and completely fit for purpose. I find Apache more cumbersome, but if you already have Apache set up on your dedicated server, you may prefer to use that instead.
The beauty of using web servers to proxy is that they understand the underlying protocol. They will preserve headers, rewrite cookies (preserving sessions), and translate hostnames correctly.
Apache Config
In both cases configuration is very straightforward. The Apache config looks something like the following (note that inside a <Location> block, ProxyPass takes no path argument):
<Location /rets>
ProxyPass http://rets-server:6103/api/path
ProxyPassReverse http://rets-server:6103/api/path
</Location>
There are also options for tweaking cookies, setting timeouts, etc., all of which can be found in the mod_proxy documentation.
You should note that this cannot go in a .htaccess file. It must go in the main server config.
nginx Config
Equally simple:
location /proxy-location {
    proxy_pass http://rets-server:6103/api/path;
}
Again, there are tons of options in the HttpProxyModule documentation for caching, rewriting URLs, adding headers, etc.
Please do consult the docs. I've not tested either of these configurations and they may be a little off as they're from memory + a quick google.
Make sure you test your app by proxying to an unreachable server and ensure it handles failures correctly since you are introducing another point of failure.
I'm working on the assumption that you are able to configure your own dedicated server. If this is not the case, your hosts may be willing to help you out. If not, leave me a comment and I'll try to come up with a more robust cURL option.

You can achieve this using PHP's cURL extension.
An example could be:
// WARNING: proxying an arbitrary user-supplied URL creates an open proxy.
// Validate $_GET['url'] against a whitelist of RETS endpoints before using it.
$url = $_GET['url'];
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
$content = curl_exec($ch);
echo $content;
curl_close($ch);
Obviously you have to add protection, perhaps .htaccess/.htpasswd authentication.
More complete code, with cookie support and such, can be found here: https://github.com/cowboy/php-simple-proxy
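If a plain PHP tunnel is still preferred over a reverse proxy, here is a rough, untested sketch of the two things the script above is missing for RETS: building the upstream URL from the incoming request rather than a raw ?url= parameter, and persisting the RETS session cookies between calls. The RETS host, port, and header handling below are assumptions; adjust for your server.

```php
// Minimal sketch of a cookie-preserving tunnel (e.g. tunnel.php on the
// dedicated server). Restrict access (e.g. .htaccess auth) before deploying.

// Pure helper: map the incoming path/query onto the RETS server.
function upstream_url($base, $path, $query) {
    return $base . $path . ($query !== '' ? '?' . $query : '');
}

session_start(); // one PHP session per website client

$url = upstream_url(
    'http://rets-server:6103', // assumed RETS host/port
    isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '/',
    isset($_SERVER['QUERY_STRING']) ? $_SERVER['QUERY_STRING'] : ''
);

// One cookie jar per client session, so the RETS login (which the server
// tracks via a session cookie) survives across requests.
$jar = sys_get_temp_dir() . '/rets-' . session_id() . '.cookies';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5); // fail fast if RETS is down
curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);   // write cookies here
curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);  // and read them back
if (isset($_SERVER['HTTP_AUTHORIZATION'])) {
    // Pass the website's RETS credentials through to the IDX server.
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: ' . $_SERVER['HTTP_AUTHORIZATION']));
}
echo curl_exec($ch);
curl_close($ch);
```

The website would then call http://dedicated-server/tunnel.php/Login, /Search, etc., and the tunnel replays each path against port 6103 while keeping the RETS session alive.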

Related

Websockets on shared hosting: not possible because of shared hosting itself or because of non-php integrations?

I want to run a basic chat application on my shared hosting.
I would use a PHP WebSockets implementation library, Ratchet.
However, when I go to my shared hosting's (HostGator) WebSockets information page, it states:
PHP Socket Support?
If you are connecting out, it should work. We do not allow clients to bind to local ports for incoming.
What does it mean? Can I create my own WebSocket server by running the command via SSH? I would use this basic code to run it.
require dirname(__DIR__) . '/vendor/autoload.php';

use Ratchet\Http\HttpServer;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;
use MyApp\Chat; // your chat component (namespace as in the Ratchet tutorial)

$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new Chat()
        )
    ),
    8081
);
$server->run();
I noticed there were similar questions, but most of the answers said it's not possible because the asker was trying to use Node.js or Python WebSocket libraries, which are not supported on most shared hosts.
Main answer
Shared hosting will generally allow you to listen on a high port, e.g. the one you are using. However, there will be a number of problems in practice.
Firstly, the web server will probably only allow 80 (HTTP) and 443 (HTTPS) inbound, so a port of 8081 would be blocked by the firewall. Your PHP listener would attach to the port, but would wait patiently for traffic that never comes.
Secondly, some shared hosts will have a load balancer in front of them, and they might only be configured to forward HTTP traffic. Since Web Sockets are a different protocol, they won't be set up to forward that. The same problem with ports is repeated here too - non-standard ports won't be forwarded.
To solve these problems, you need your own web server, where you can open ports (and set up load balancers) in whatever way you like. This is pretty cheap to do these days - for the price of a couple of cafe coffees per month, you can rent a small virtual server. It won't have as much RAM as a shared server, but it will be a lot more flexible.
Design issues
I would also draw attention to the use of non-standard ports for Web Sockets to serve a web application running on the standard 80/443 ports. This is not always a good idea. Non-standard ports will work fine on desktops and standard home internet connections, but on some office or mobile internet connections you might get into a pickle.
It is better to put a load balancer in front of your app and then let it route traffic (Web Socket or HTTP) based on the protocol signature. This will allow you to use multiple protocols per port. If you are interested in exploring this, I recommend Traefik with Docker containers - I have set this up and it works very well indeed.
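As an illustration of that routing, here is roughly what it looks like in nginx (hostnames and ports are assumptions; a Traefik setup achieves the same thing with routing rules on the containers):

```nginx
server {
    listen 80;
    server_name chat.example.com;

    # WebSocket traffic: upgrade the connection and hand it to Ratchet.
    location /ws {
        proxy_pass http://127.0.0.1:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Everything else: the normal HTTP application.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Both protocols then share port 80, so no client firewall ever sees a non-standard port.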

Apache behind corporate proxy

I'm developing a PHP application. I'm using WAMP and I'm behind a corporate proxy, using cntlm for NTLM authentication. I need to cURL the Google Geocoding API; to do this I used the following settings, which work:
curl_setopt($ch,CURLOPT_PROXY, '127.0.0.1:3128');
curl_setopt($ch, CURLOPT_PROXYPORT, 3128);
I'd like to find a way to avoid setting those cURL options. I believe I can play with Apache settings, but I'm not sure. The reasons I need this solution are:
In the production environment there will be no such proxy (at the moment the above options are used only if an environment variable is set to the DEV environment, but still, it's not the best solution)
If I want to use a third-party SDK (such as the Facebook PHP SDK), those use cURL internally but do not necessarily expose methods to change cURL options (the Facebook SDK, for example, doesn't). I don't want to change the SDK source code to fit my proxy
What I tried:
So far I have turned on proxy_module in Apache and added the following line to httpd.conf, with no success:
ProxyRemote * http://127.0.0.1:3128
I still can't access the internet. I googled a lot but couldn't find a solution. Any ideas?
I could find many people discussing the same issue, but no comprehensive solution; for this reason I am raising a bounty.
What I found:
There is this bug report in which I found the following, but I'm not sure whether it will work for cURL, and anyway I can't find out how to modify php.ini accordingly:
[2010-12-20 14:03 UTC]
jani#php.net
-Summary: changing the default http-wrapper
+Summary: Add php.ini option to set default proxy for wrappers
-Package: Feature/Change Request
+Package: Streams related
-PHP Version: 5CVS
+PHP Version: *
and
[2011-04-05 11:29 UTC] play4fun_82 at yahoo dot com Hi, I have the
same problem. My solution was to pass through the proxy server with the
cntlm tunneling tool. You configure cntlm to forward HTTP requests
received on localhost on a port to the destination with proper
authentication.
And in PEAR you just configure
pear config-set http_proxy 127.0.0.1:3128
3128 is the port configured in cntlm (it can be any other free port).
Thanks very much
You're saying you want this functionality on WAMP, for your development computer only, right? The SDKs work without modification in production, so you can just take advantage of your Windows hosts file to redirect requests.
Here's a walkthrough.
After reading this article I was about to throw in the towel, but there is actually an easy solution. I had to play with Windows environment variables; in the end, setting a system variable https_proxy=https://localhost:3128 worked! Before, it was not working because I was setting it to 127.0.0.1:3128. Run the command
reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "proxyserver"
then copy/paste the answer into the system variable, and now it's working!
I think you can do it with a proxy tool like Proxifier (http://www.proxifier.com).
AFAIK, such tools can capture traffic from selected applications and redirect it to a proxy automatically.
Not a perfect solution, but it should be useful on a dev machine.

Installing ssl on an open source application with PHP cURL

I'm working on an open source PHP application. The application may need to connect to my server, to transfer sensitive data. I have SSL installed on my server and I think I have set it up properly, but I'm hoping someone here can confirm.
The application will be used on other users servers, so it will be server to server communication.
I will treat users' servers as clients when connecting to my server. My server will never connect to their servers, so they don't need SSL on their end (right?).
I use cURL to make the calls (to my server) and POST data during the connection. So I cURL to a https address.
Now, I thought that was it. Once I cURL to an https address, everything is secure and I can send whatever I like (credit card numbers, passwords, etc.) without worrying about a man in the middle. End of story.
But after reading around, I've noticed that some people do other things in their cURL sessions, like including a certificate (.crt) file:
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_CAINFO, getcwd() . "/CAcerts/BuiltinObjectToken-EquifaxSecureCA.crt");
Is that safe for open source? Should I do it too? Or am I safe with what I've got?
Depending on the system you're installing cURL on, it may or may not have enough information to verify an SSL certificate (this can be improved by including the intermediate and root certificates in your website's certificate chain). You can also read about it here: http://curl.haxx.se/docs/sslcerts.html
It sometimes makes sense to ship a CA bundle explicitly, especially since cURL tends to be shipped with old certificate bundles. You can download a more recent one (extracted from the Firefox source code) here: http://curl.haxx.se/docs/caextract.html
If your software will exclusively talk to your own server, you could also ship a bundle containing only your own public certificate. This would allow you to use self-signed certificates, which are free :)
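A rough sketch of that approach (the endpoint URL and bundle path below are placeholders for illustration, not real values):

```php
// Verify the server strictly against a CA bundle shipped with the app,
// so even a self-signed server certificate can be validated.
$ch = curl_init('https://api.example.com/endpoint');      // hypothetical endpoint
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);           // verify the certificate chain
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);              // verify the hostname matches
curl_setopt($ch, CURLOPT_CAINFO, __DIR__ . '/certs/my-ca.pem'); // the shipped bundle
// ... then curl_exec($ch) as usual.
```

Shipping the bundle next to the application code means verification behaves the same on every user's server, regardless of how old their system CA store is.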
You attach a client certificate to cURL calls only if the server at the other end has specifically provided you with one and does not accept connections from clients that lack it.
If you are talking about a website here, you don't even need to worry about that.
But if you are talking about service providers that you access for secured resources, and they require you to present client certificates, they will issue them to you and explicitly tell you to use them.
For example, we have a system that only private members can access. We have an RPC endpoint that other members send requests to, and since we only allow access to our members (private, not public like a website), we issue them client certificates and explicitly direct them to attach those to their service calls.

Setting up a web interface to http proxies?

I want a way to allow users to go through my HTTP proxy server (Squid, Privoxy, etc.) without having to type the IP/port into their browser settings. I was hoping I could use a simple web interface.
I'm envisioning this:
User goes to a website on my server (http://proxy.com) and types a URL
into the form.
The user's browser URL looks like (http://proxy.com/url=URL)
All connections to any future links are passed through my http proxy
running on a different port.
And I do NOT want to use existing php/cgi web proxy scripts.
My only reasoning is that I feel it would be much more efficient to route connections through a native proxy server than to have many PHP instances proxy them. Please tell me if you think this would not actually be the case.
Are there any simple ways of doing this? Thanks!
You may want to set up a transparent proxy. That way clients do not know they are using a proxy, so they do not have to set the proxy IP in their browsers. This obviously does not work for HTTPS. There is some information for Squid here: http://www.cyberciti.biz/tips/linux-setup-transparent-proxy-squid-howto.html

Detecting HTTPS vs HTTP on server sending back nothing useful

So, this is very similar to "Detecting https requests in php":
I want https://example.com/pog.php to go to http://example.com/pog.php, or even vice versa.
Problems:
Can't read anything from $_SERVER["HTTPS"] since it's not there
Server is sending both requests over port 80, so can't check for 443 on the HTTPS version
apache_request_headers() and apache_response_headers() are sending back the same thing
Can't tell the loadbalancer anything or have it send extra somethings
Server feedback data spat out by the page on both URL calls is exactly the same save for the session ID. Bummer.
Are there any on page ways to detect if it's being called via SSL or non-SSL?
Edit: $_SERVER["HTTPS"] isn't there, switched on or not, no matter whether you're viewing the site via SSL or non-SSL. For some reason the host has chosen to serve all the HTTPS requests encrypted, but down port 80. Thus $_SERVER["HTTPS"] is never on and never present; there is no helpful feedback on that server point. That parameter is always empty.
(And yeah, that means it gets flagged in say FF or Chrome for a partially invalid SSL certificate. But that part doesn't matter.)
Also, the most that can be detected from the URL is the part after the slashes. PHP can't see whether the request has https or http at the front.
Keyword -- Load Balancer
The problem boils down to the fact that the load balancer is handling SSL encryption/decryption and it is completely transparent to the webserver.
Request: Client -> 443or80 -> loadbalancer -> 80 -> php
Response: PHP -> 80 -> loadbalancer -> 443or80 -> Client
The real question here is "do you have control over the load balancer configuration?"
If you do, there are a couple of ways to handle it. Configure the load balancer to have separate service definitions for HTTP and HTTPS, then send HTTP traffic to port 80 of the web servers and HTTPS traffic to port 81 of the web servers (port 81 is not used by anything else).
In apache, configure two different virtual hosts:
<VirtualHost 1.2.3.4:80>
ServerName foo.com
SetEnv USING_HTTPS 0
...
</VirtualHost>
<VirtualHost 1.2.3.4:81>
ServerName foo.com
SetEnv USING_HTTPS 1
...
</VirtualHost>
Then the environment variable USING_HTTPS will be either 1 or 0, depending on which virtual host picked up the request, and it will be available in the $_SERVER array in PHP. Isn't that cool?
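On the PHP side, reading that variable is then a one-liner; a tiny helper (the variable name matches the SetEnv lines above):

```php
// Decide whether the request came in over HTTPS, based on the
// USING_HTTPS environment variable set by the Apache vhosts above.
function request_was_https(array $server) {
    return isset($server['USING_HTTPS']) && $server['USING_HTTPS'] === '1';
}

// Typical call site:
// $secure = request_was_https($_SERVER);
```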
If you do not have access to the Load Balancer configuration, then things are a bit trickier. There will not be a way to definitively know if you are using HTTP or HTTPS, because HTTP and HTTPS are protocols. They specify how to connect and what format to send information across, but in either case, you are using HTTP 1.1 to make the request. There is no information in the actual request to say if it is HTTP or HTTPS.
But don't lose heart. There are a couple of ideas.
The 6th parameter to PHP's setcookie() function can instruct a client to send the cookie ONLY over HTTPS connections (http://www.php.net/setcookie). Perhaps you could set a cookie with this parameter and then check for it on subsequent requests?
Another possibility would be to use JavaScript to update the links on each page depending on the protocol (adding a GET parameter).
(neither of the above would be bullet proof)
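A rough sketch of the cookie idea (the cookie name is made up, and as noted it is not bullet proof, since a client could set the cookie itself):

```php
// On every response, set a cookie the browser will only return over
// HTTPS connections (6th setcookie() argument = secure flag).
if (!headers_sent()) {
    setcookie('came_via_https', '1', 0, '/', '', true);
}

// On subsequent requests, its presence suggests an HTTPS connection.
function looks_like_https(array $cookies) {
    return isset($cookies['came_via_https']);
}

// Typical call site:
// $probablyHttps = looks_like_https($_COOKIE);
```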
Another pragmatic option would be to get your SSL on a different domain, such as secure.foo.com. Then you could resort to the VirtualHost trick above.
I know this isn't the easiest issue because I deal with it during the day (load balanced web cluster behind a Cisco CSS load balancer with SSL module).
Finally, you can always take the perspective that your web app should switch to SSL mode when needed, and trust the users NOT to move it back (after all, it is their data on the line (usually)).
Hope it helps a bit.
$_SERVER["HTTPS"] isn't there, switched on or not, no matter whether you're viewing the site via SSL or non-SSL. For some reason the host has chosen to serve all the HTTPS requests encrypted, but down port 80. Thus $_SERVER["HTTPS"] is never on and never present; that parameter is always empty.
You have got to make sure that the provider has the following line in the vhost entry for your site: SSLOptions +StdEnvVars. That line tells Apache to include the SSL variables in the environment passed to your scripts (PHP).
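For reference, that line sits inside the SSL virtual host; a minimal sketch (names and paths are placeholders):

```apache
<VirtualHost *:443>
    ServerName foo.com
    SSLEngine on
    SSLCertificateFile    /path/to/cert.pem
    SSLCertificateKeyFile /path/to/key.pem
    # Export the standard SSL_* environment variables to CGI/PHP scripts:
    SSLOptions +StdEnvVars
</VirtualHost>
```

Note this only helps if SSL actually terminates at Apache; when a load balancer terminates SSL (as in this question), Apache never sees the encrypted connection and has nothing to export.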
