I've searched a lot, but I couldn't find a PHP proxy server that can run on a shared host, so I decided to build a very simple one from scratch. I'm still on the first step: I've created a subdomain, httpp.alvandsoft.com, and redirected all of its subdirectories (REQUEST_URI) to the main index.php so that requests are logged and I can see exactly what a proxy server would receive and send.
(The log is accessible through https://httpp.alvandsoft.com/?log=1&log_filename=log.txt)
But whenever I set it as the proxy for Telegram or other apps, it doesn't receive ANY requests at all, even on ports 443 or 80, and regardless of the proxy type I choose: HTTP, SOCKS or MTPROTO.
Is a proxy something that depends on the server's settings and works differently from regular HTTP requests and responses, or am I missing something?
I found it out myself. HTTP(S) proxies put the requested site's name in the Host request header, and many hosts and websites check this header: if it doesn't match one of the sites they actually host, they reject or redirect the request immediately.
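For illustration, here is a minimal sketch of the kind of logging index.php described above (writing to the log.txt mentioned earlier); it also shows why proxied requests never arrive - a request proxied for another site carries that site's name in the Host header, so the shared host's front server drops it before PHP ever runs:

<?php
// Log what the server actually receives. Requests whose Host header
// names a foreign site are rejected by the shared host before this runs.
$line = sprintf("%s %s Host: %s\n",
    $_SERVER['REQUEST_METHOD'],
    $_SERVER['REQUEST_URI'],
    isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '-');
file_put_contents(__DIR__ . '/log.txt', $line, FILE_APPEND);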
I have my site hosted on GoDaddy and I'm working on an application developed in CodeIgniter (PHP). In my application I am using the Grocery CRUD library, but whenever it accesses URLs for the assets and other files, it calls them by IP address. On my local server it ran fine, but as soon as I deployed it, this problem appeared.
I know the particular IP address is shared among many sites, which is why this problem occurs, but how do I solve it? Is there something that needs to be configured in CodeIgniter or somewhere else?
I understand that you are in a shared hosting environment:
I know the particular IP address is shared among many sites
That's why you cannot access your site with its IP address. It's not possible. Period.
There's nothing to configure in CodeIgniter, because this is the HTTP server's configuration: the HTTP server is what handles requests.
When you type a URL in your browser, it resolves the corresponding IP address, then sends the request to that IP address while stating (in the Host header) which hostname you are trying to reach. Based on this information, the HTTP server can route your request to the appropriate website.
When you type a bare IP address in your browser, the HTTP server has no way of knowing which hostname you want. Depending on its configuration it will do something, but probably not hand the request to your site: in a shared hosting environment, there is no reason for the hoster to route such requests to one specific website among the many it hosts. It will most probably return a 404 or 403, or redirect to its own homepage.
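To illustrate the point (a sketch with a made-up IP and hostname): a name-based virtual host can only be reached by IP if the client supplies the hostname explicitly, for example with curl in PHP:

<?php
// Request the shared IP directly, but tell the server which site we want.
$ch = curl_init('http://203.0.113.10/');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Host: example.com'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
echo curl_exec($ch);
curl_close($ch);

A plain browser request to the bare IP sends no useful Host header, so the server can't pick your site.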
Many hosting providers assign a temporary hostname for your website, generally as a subdomain of theirs. You should temporarily use this hostname for your website.
To configure this hostname, open application/config/config.php and set the base_url parameter.
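For example (a sketch - substitute the temporary hostname your provider actually assigned):

// application/config/config.php
$config['base_url'] = 'http://yoursite.yourhostingprovider.com/';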
You can load different configuration files depending on your environment (for example: development, staging, production). See Handling multiple environments in the CI documentation.
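A minimal sketch of how that switch works (the environment name here is just an example):

// index.php (CodeIgniter front controller)
define('ENVIRONMENT', 'production'); // or 'development', 'testing'
// CodeIgniter will then also load overrides from
// application/config/production/config.php if that file exists.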
So this is the situation: I have a bunch of Arduinos and Raspberry Pis along with an Ubuntu server on a local network. The Arduinos and Pis routinely communicate with that local server using PHP GET & POST requests.
Now this local server sometimes "fetches" something from a remote server in the cloud (also using PHP GETs) to respond to local requests from Arduinos and Pis.
Now here's the problem: The local server has no issues communicating with the remote server by GETs, but what if I want the server in the cloud to send a GET to the local server?
This part is kind of confusing to me as the local server is on a regular LAN and connects to the internet via a router through a local commercial ISP that issues dynamic IPs.
How can I send PHP GETs from an "online" server to a local server?
Please note that both servers are running Apache/PHP/MySQL on Ubuntu 14.04.
Thanks a ton in advance!
You will need two steps to accomplish that.
step 1 - make router forward external requests to LAN server
step 2 - make external server know the current dynamic WAN ip
step 1:
The router has to be configured to forward WAN requests to your LAN server. Assuming you use a normal home router, you typically point your browser at the router's IP and log in. Then you have to find where forwarding is configured (unfortunately the naming of this feature varies from router to router).
While you can typically define an "exposed host" where simply all external requests go, you are better off, in terms of security, forwarding only specific ports to your server. Since you are going to use the HTTP protocol, the standard ports here would be 80 (HTTP) and 443 (HTTPS). So assuming you use HTTPS on the default port, a typical forwarding rule would be:
router WAN ip, port 443 --> server LAN ip, port 443
This forwards any external request to the router on port 443 to your internal server on port 443.
Now your server should be able to receive those requests, but you still would need to know your router's current dynamic WAN ip.
step 2:
As your router's WAN ip changes from time to time, you need to somehow announce that ip to your external server.
One easy way of doing this is to use an external service that provides you with a URL which always resolves to your current IP. This is often referred to as DDNS or dynamic DNS. One quite well-known DDNS provider is https://dyn.com/dns/ - but there are plenty of others, and you will even find free ones. After registering with such a provider, you will be given a URL which your external server can use instead of the IP.
Now you still have to announce your current dynamic WAN IP to the DDNS provider. The easiest way to do this again involves your router: check its configuration for DDNS settings. Routers typically support this feature, and often some providers are even pre-configured. Set up your router with the credentials you got from the DDNS provider.
Now everything is set. You should be able to send requests to your internal server using the URL you got from your DDNS provider, while your router both forwards those requests and notifies the DDNS provider about any IP changes.
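From the cloud server's side it's then just an ordinary request; a sketch, with a made-up DDNS hostname and path:

<?php
// On the cloud server: an ordinary GET, but addressed via the DDNS name.
$response = file_get_contents('https://myhome.example-ddns.net/status.php?sensor=1');
if ($response === false) {
    // router offline, DDNS not yet updated, or port forwarding misconfigured
}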
A word of warning - you just exposed your local server to the internet. So you will have to treat it like any server on the internet to keep it safe, including careful configuration, installing security updates and so on.
You have to open a port on your router and specify where the router should lead requests to. Let's assume your external IP is 80.82.71.24: going to this IP address (e.g. http://80.82.71.24) leads to your router. The router then decides what to do with the request; normally it would be timed out / refused. But if you specify on the router that requests of a certain kind (TCP/UDP, to a specified port) should be forwarded to a certain internal IP (the local server's IP), then it's possible to do what you want.
But to do this, you need to read up on your router - first of all, see whether you can log in to it. Could you specify which router you use, and whether your internet connection is your own or shared (e.g. campus, school, etc.)?
By the way, it would not be a good idea to open the port to the whole world, so you should consider allowing only your cloud server's IP to access that specific port.
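Router-level filtering is best, but as a fallback you can also check on the LAN server itself; a sketch, with a made-up cloud server IP:

<?php
// At the top of the LAN server's endpoint: reject everything that
// didn't come from the cloud server.
$allowed = '203.0.113.42'; // the cloud server's IP (made-up value)
if ($_SERVER['REMOTE_ADDR'] !== $allowed) {
    http_response_code(403);
    exit('Forbidden');
}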
I need to make a transparent redirect (without the user seeing the address change in the address bar). When the user types example.com, he/she should be redirected to 123.123.123.123:9090 (IP:PORT). I cannot use a CNAME or A record because they don't accept a port. How can I do that? I know an SRV record can do it, but my webserver does not allow it.
I also tried mod_proxy on Apache to rewrite requests from the domain example.com -> 123.123.123.123:9090, but that is wasteful: the user requests the content, Apache requests the content from my IP, and the response then has to travel all the way back. I need the user's request to reach the webserver directly, without a proxy.
Well, DNS doesn't care about ports, only about hostnames. That's why you can access every server like example.com:8080 (it will probably fail for 99% of all servers, though ;)). So what should happen when I try to request your site via example.com:8080 - does port 8080 or 9090 take precedence?
Long story short: there is no way around a proxy. But easier would be to let your application listen on port 80 directly.
I have a system where users can enter their purchased domain into their profile, so that accessing their purchased domain serves their custom page on my site, e.g.
http://domain.com/custom-name becomes http://purchaseddomain.com.
So when they access their purchased domain, it should take them to their profile, and the navigation links on their page should be rewritten to use the purchased domain too; for example, viewing their records would be:
http://domain.com/custom-name/records becomes http://purchaseddomain.com/records.
Tumblr offers this feature, but I have no idea how it all works. This is exactly the kind of feature I'd like to have; I've searched on SO, but it didn't help.
Now this is the problem: I'm not sure how I can validate, confirm, and merge their purchased domain into my server cleanly using PHP - I'm using CodeIgniter for this.
Is there a solid, stable plugin/library, or a detailed tutorial, on letting custom domains mask an internal domain?
My server is running nginx 1.0.6 on Ubuntu 11.10.
The templating will be fine for me - I can do that. All I need help with is how to safely accept their domain and connect it to my server.
EDIT: I just looked into the nginx VirtualHostExample; this looks good overall, but how will I be able to dynamically add/remove those domain entries while each domain has an A record pointing to my server?
You won't merge their domain to your server.
In fact, when your users register their domains, they will point them at your server.
On your server, you'll have to dynamically create rules that internally rewrite each request to the page the user created on your server.
So users will see http://purchaseddomain.com/one-uri while you actually serve the page http://domain.com/custom-name/one-uri
For example, it's as if you added the following to an .htaccess - even though you don't use Apache, it illustrates what the "system" must do:
RewriteCond %{HTTP_HOST} purchaseddomain\.com$ [NC]
# guard against rewriting the already-rewritten URI in a loop
RewriteCond %{REQUEST_URI} !^/custom-name/
RewriteRule (.*) /custom-name/$1 [L]
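Since you're on nginx, the equivalent idea would be something like this (a sketch, not tested against your setup):

server {
    listen 80;
    server_name purchaseddomain.com;
    # serve /custom-name/... while the visitor still sees purchaseddomain.com
    rewrite ^(.*)$ /custom-name$1 last;
}

You'd then generate one such server block per registered domain and reload nginx.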
The accepted answer mentions customers pointing their DNS to your web server. But, that's not enough to make it work in this day and age.
If your customers just CNAME to your domain or create an A record to your IP, and you don't handle TLS termination for these custom domains, your app will not support HTTPS - and without HTTPS, it won't work in modern browsers on those custom domains.
You need to set up a TLS-termination reverse proxy in front of your webserver. This proxy can run on a separate machine, or on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy sees the hostname of the incoming request, e.g. app.customer1.com or customer2.com, in the TLS SNI extension, and uses it to decide which certificate to serve.
The proxy can be set up to issue and renew certificates for these custom domains automatically. On the first request from a new custom domain, the proxy sees that it doesn't have the appropriate certificate and asks Let's Encrypt for a new one. Let's Encrypt first issues a challenge to verify that you control the domain; since the customer already created a CNAME or A record pointing to your proxy, the challenge succeeds and Let's Encrypt lets you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, or OpenResty (nginx).
tl;dr on what happens here:
Caddyserver listens on 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
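A Caddyfile sketch of that setup (assuming Caddy v2 with on-demand TLS; the backend address, ports, and approval endpoint are placeholders):

{
    on_demand_tls {
        # your endpoint that approves known customer domains
        ask http://localhost:5555/check
    }
}
https:// {
    tls {
        on_demand
    }
    reverse_proxy localhost:8080 {
        header_up X-Serve-For {host}
    }
}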
How to handle it on my backend
Your proxy terminates TLS and proxies requests to your backend. However, your backend doesn't know who the original customer behind the request is. This is why you need to tell your proxy to include an additional header in proxied requests to identify the customer: just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com - whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and know which customer is behind the request. You can implement your logic based on that: show the data belonging to this customer, and so on.
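In PHP, for instance, reading that header is a one-liner (a sketch, using the header name suggested above):

<?php
// PHP exposes custom request headers in $_SERVER with an HTTP_ prefix.
$customerDomain = isset($_SERVER['HTTP_X_SERVE_FOR'])
    ? $_SERVER['HTTP_X_SERVE_FOR']
    : $_SERVER['HTTP_HOST'];
// Look up which customer owns $customerDomain and render their data.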
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may be waking up in the middle of the night to restart machines or your proxy manually.
If you need more detail you can DM me on Twitter: @dragocrnjac
This is what is working for me:
server {
server_name *.mydomain.com;
root /var/www/$host;
...
}
Then you need to make directories like: /var/www/user1.mydomain.com/, /var/www/user2.mydomain.com/, ...
I couldn't figure out how to leave the '.mydomain.com' out of the directory name. If anyone has any idea, pls let me know :)
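One way that should work (a sketch I haven't verified on your exact version) is a regex server_name with a named capture, which makes just the subdomain part available as a variable:

server {
    server_name ~^(?<sub>.+)\.mydomain\.com$;
    root /var/www/$sub;   # directories can then be just /var/www/user1/
    ...
}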
So kind of very similar to "Detecting https requests in php":
Want to have https://example.com/pog.php go to http://example.com/pog.php or even vice versa.
Problems:
Can't read anything from $_SERVER["HTTPS"] since it's not there
Server is sending both requests over port 80, so can't check for 443 on the HTTPS version
apache_request_headers() and apache_response_headers() are sending back the same thing
Can't tell the loadbalancer anything or have it send extra somethings
Server feedback data spat out by the page on both URL calls is exactly the same save for the session ID. Bummer.
Are there any on page ways to detect if it's being called via SSL or non-SSL?
Edit: $_SERVER["HTTPS"] isn't there, switched on or not, no matter whether you're looking at the site via SSL or non-SSL. For some reason the hosting has chosen to serve all the HTTPS requests encrypted, but down port 80. And thus $_SERVER["HTTPS"] is never on, never present - no helpful feedback on that server point. So that parameter is always empty.
(And yeah, that means it gets flagged in say FF or Chrome for a partially invalid SSL certificate. But that part doesn't matter.)
Also, the most the PHP can see of the URL is the part from the slashes onward; it can't see whether the request had https or http at the front.
Keyword -- Load Balancer
The problem boils down to the fact that the load balancer is handling SSL encryption/decryption and it is completely transparent to the webserver.
Request: Client -> 443or80 -> loadbalancer -> 80 -> php
Response: PHP -> 80 -> loadbalancer -> 443or80 -> Client
The real question here is "do you have control over the load balancer configuration?"
If you do, there are a couple of ways to handle it. Configure the load balancer to have separate service definitions for HTTP and HTTPS, then send HTTP traffic to port 80 of the web servers and HTTPS traffic to port 81 of the web servers (port 81 is not used by anything else).
In apache, configure two different virtual hosts:
<VirtualHost 1.2.3.4:80>
ServerName foo.com
SetEnv USING_HTTPS 0
...
</VirtualHost>
<VirtualHost 1.2.3.4:81>
ServerName foo.com
SetEnv USING_HTTPS 1
...
</VirtualHost>
Then the environment variable USING_HTTPS will be either 1 or 0, depending on which virtual host picked the request up, and it will be available in the $_SERVER array in PHP. Isn't that cool?
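Checking it from PHP is then trivial (a sketch):

<?php
// USING_HTTPS was set by SetEnv in the vhost; '1' only on the port-81 vhost.
$usingHttps = isset($_SERVER['USING_HTTPS']) && $_SERVER['USING_HTTPS'] === '1';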
If you do not have access to the Load Balancer configuration, then things are a bit trickier. There will not be a way to definitively know if you are using HTTP or HTTPS, because HTTP and HTTPS are protocols. They specify how to connect and what format to send information across, but in either case, you are using HTTP 1.1 to make the request. There is no information in the actual request to say if it is HTTP or HTTPS.
But don't lose heart. There are a couple of ideas.
The 6th parameter to PHP's setcookie() function can instruct a client to send the cookie ONLY over HTTPS connections (http://www.php.net/setcookie). Perhaps you could set a cookie with this parameter and then check for it on subsequent requests?
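Something along these lines (a sketch; the cookie name is made up, and remember the browser honors the secure flag based on what it believes the scheme is):

<?php
// 6th parameter (secure) = true: the browser will return this cookie
// only over connections it believes are HTTPS.
setcookie('was_https', '1', 0, '/', '', true);
// On a subsequent request:
$probablyHttps = isset($_COOKIE['was_https']);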
Another possibility would be to use JavaScript to update the links on each page depending on the protocol (adding a GET parameter).
(neither of the above would be bullet proof)
Another pragmatic option would be to get your SSL on a different domain, such as secure.foo.com. Then you could resort to the VirtualHost trick above.
I know this isn't the easiest issue because I deal with it during the day (load balanced web cluster behind a Cisco CSS load balancer with SSL module).
Finally, you can always take the perspective that your web app should switch to SSL mode when needed, and trust the users NOT to move it back (after all, it is their data on the line (usually)).
Hope it helps a bit.
$_SERVER["HTTPS"] isn't there, switched on or not, no matter whether you're looking at the site via SSL or non-SSL. For some reason the hosting has chosen to serve all the HTTPS requests encrypted, but down port 80. And thus $_SERVER["HTTPS"] is never on, never present - no helpful feedback on that server point. So that parameter is always empty.
You've got to make sure that the provider has the following line in the VHOST entry for your site: SSLOptions +StdEnvVars. That line tells Apache to include the SSL variables in the environment it passes to your scripts (PHP).
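With that in place, the usual check works again (a sketch):

<?php
// With StdEnvVars enabled, Apache sets HTTPS to 'on' for SSL requests.
$isHttps = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';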