I have my site hosted on GoDaddy and I am working on an application developed in CodeIgniter (PHP). In my application I am using the Grocery CRUD library. But whenever it accesses URLs for assets and other files, it calls them by IP address. It ran fine on my local server, but as soon as I deployed it, this problem appeared.
I know the IP address is shared among many sites, which is why this particular problem is occurring. But how do I solve it? Does something need to be configured in CodeIgniter or somewhere else?
I understand that you are in a shared hosting environment:
I know as the particular IP address is shared among many sites
That's why you cannot access your site with its IP address. It's not possible. Period.
There's nothing to configure in CodeIgniter, because this is HTTP server configuration: the HTTP server is what handles requests.
When you type a URL in your browser, it resolves the corresponding IP address, then sends a request to that IP address, stating which hostname you are trying to reach. Based on this information, the HTTP server can handle your request and route it to the appropriate website.
When you type an IP address in your browser, the HTTP server does not know which hostname you want to reach. Depending on its configuration it will do what it has to, but probably not send the request to your site: in a shared hosting environment, there is no reason for the host to route such requests to one specific website it hosts. It will most probably return a 404 or 403, or redirect to its own homepage.
Many hosting providers assign a temporary hostname to your website, generally as a subdomain of theirs. You should temporarily use this hostname for your website.
To configure this hostname, open application/config/config.php and set the base_url parameter.
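For example (the hostname below is a placeholder for whatever temporary hostname your provider assigns):

// application/config/config.php
$config['base_url'] = 'http://yoursite.yourhostingprovider.example/';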
You can load different configuration files depending on your environment (for example: development, staging, production). See Handling multiple environments in the CI documentation.
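For instance, the stock CodeIgniter 2/3 index.php selects the environment roughly like this (the 'production' default here is an assumption; CodeIgniter then also loads any overrides from application/config/production/config.php and so on):

// index.php
define('ENVIRONMENT', isset($_SERVER['CI_ENV']) ? $_SERVER['CI_ENV'] : 'production');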
I've searched a lot but I couldn't find a PHP proxy server that can run on a shared host, so I decided to build a very simple one from scratch; I'm still on the first step. I've created a subdomain httpp.alvandsoft.com and redirected all its subdirectories (REQUEST_URI) to the main index.php, which logs each request so I can learn exactly what a proxy server would receive and send.
(The log is accessible through https://httpp.alvandsoft.com/?log=1&log_filename=log.txt)
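The catch-all logger boils down to something like this (a sketch; the log path is illustrative):

<?php
// index.php: append every incoming request line and its headers to log.txt
$headers = function_exists('getallheaders') ? getallheaders() : [];
$lines = [];
foreach ($headers as $name => $value) {
    $lines[] = "$name: $value";
}
$entry = $_SERVER['REQUEST_METHOD'] . ' ' . $_SERVER['REQUEST_URI'] . ' '
       . $_SERVER['SERVER_PROTOCOL'] . "\n" . implode("\n", $lines) . "\n\n";
file_put_contents(__DIR__ . '/log.txt', $entry, FILE_APPEND | LOCK_EX);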
But whenever I set it as a proxy for Telegram or other apps, it doesn't receive ANY requests at all, even when I use ports 443 or 80, and regardless of the proxy type: HTTP, SOCKS, or MTProto.
Does a proxy depend on the server's settings and work in some way other than regular HTTP requests and responses, or am I missing something?
I found it out myself. Clients of an HTTP(S) proxy send the URL they want to reach in the request, with the target host in the Host request header; many hosting servers and websites check this header, and if it doesn't name one of the sites they actually serve, they redirect the request immediately.
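In PHP terms, the kind of check the host performs looks something like this (names below are illustrative):

<?php
// Serve only requests whose Host header names a site this server hosts;
// anything else (e.g. a proxied request for telegram.org) gets redirected.
$hostedSites = ['httpp.alvandsoft.com'];
$host = strtolower($_SERVER['HTTP_HOST'] ?? '');
if (!in_array($host, $hostedSites, true)) {
    header('Location: https://hosting-provider.example/', true, 302);
    exit;
}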
So this is the situation: I have a bunch of Arduinos and Raspberry Pis, along with an Ubuntu server, on a local network. The Arduinos and Pis routinely communicate with that local server using PHP GET & POST requests.
Now this local server sometimes "fetches" something from a remote server in the cloud (also using PHP GETs) to respond to local requests from Arduinos and Pis.
Now here's the problem: The local server has no issues communicating with the remote server by GETs, but what if I want the server in the cloud to send a GET to the local server?
This part is kind of confusing to me as the local server is on a regular LAN and connects to the internet via a router through a local commercial ISP that issues dynamic IPs.
How can I send PHP GETs from an "online" server to a local server?
Please note that both servers are running Apache/PHP/MySQL on Ubuntu 14.04.
Thanks a ton in advance!
You will need two steps to accomplish that.
step 1 - make router forward external requests to LAN server
step 2 - make external server know the current dynamic WAN ip
step 1:
The router has to be configured to forward WAN requests to your LAN server. Assuming you use a normal home router, you typically point your browser at the router's IP and log in. Then you have to find where to configure forwarding (unfortunately the naming of this feature varies from router to router).
While you can typically define an "exposed host" that receives all external requests, you are better off in terms of security if you forward only specific ports to your server. As you are going to use the HTTP protocol, the standard ports here are 80 (http) and 443 (https). So assuming you use HTTPS on the default port, a typical forwarding would be:
router WAN IP, port 443 --> server LAN IP, port 443
This forwards any external request to the router on port 443 to your internal server on port 443.
Now your server should be able to receive those requests, but you still need to know your router's current dynamic WAN IP.
step 2:
As your router's WAN IP changes from time to time, you need to somehow announce that IP to your external server.
One easy way of doing this is to use an external service that provides you with a hostname which always resolves to your current IP. This is often referred to as DDNS or dynamic DNS. One quite well known DDNS provider is https://dyn.com/dns/ - but there are plenty of others, and you will even find free ones. After registering with such a provider you will be given a hostname which your external server can use instead of the IP.
Now you still have to let the DDNS provider know your current dynamic WAN IP. The easiest way to do this again involves your router: check its configuration for DDNS settings. Routers typically support this feature, and often some specific providers are even pre-configured. Set up your router with the credentials you got from the DDNS provider.
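If your router can't do this, the LAN server itself can push the update. Many providers accept a dyndns2-style update request, but the exact URL, credentials, and parameters vary by provider, so treat this as a rough sketch:

<?php
// Hypothetical dyndns2-style update from the LAN server (endpoint,
// hostname, and credentials depend entirely on your DDNS provider).
$ch = curl_init('https://members.ddnsprovider.example/nic/update?hostname=myhome.ddnsprovider.example');
curl_setopt_array($ch, [
    CURLOPT_USERPWD        => 'ddns-user:ddns-password',
    CURLOPT_RETURNTRANSFER => true,
]);
echo curl_exec($ch); // providers typically answer something like "good 203.0.113.10"
curl_close($ch);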
Now everything is set. You should be able to send requests to your internal server using the hostname you got from your DDNS provider, while your router both forwards those requests and notifies the DDNS provider about any IP changes.
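From the cloud server, the request is then an ordinary PHP cURL GET addressed to the DDNS hostname (hostname and path below are placeholders):

<?php
// Cloud server -> LAN server, addressed through the DDNS hostname.
$ch = curl_init('https://myhome.ddnsprovider.example/status.php?device=pi1');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 10,
]);
$response = curl_exec($ch);
if ($response === false) {
    error_log('LAN server unreachable: ' . curl_error($ch));
}
curl_close($ch);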
A word of warning - you just exposed your local server to the internet. So you will have to treat it like any server on the internet to keep it safe, including careful configuration, installing security updates and so on.
You have to open a port on your router and specify where the router should route the request. Let's assume your external IP is 80.82.71.24; going to this IP address (e.g. http://80.82.71.24) leads to your router. The router then decides what to do with the request; normally it would be timed out or refused. But if you specify on the router that a certain request (TCP or UDP, to a specified port) should be forwarded to a certain internal IP (the local server's IP), then what you want is possible.
But to do this, you need to read up on your router - first of all, see if you can log in to it. Could you specify which router you use and whether your internet connection is your own or shared (e.g. campus, school, etc.)?
By the way, it would not be a good idea to open up the port to the whole world, so you should consider allowing only your cloud server's IP to access that specific port.
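On top of the firewall rule, an application-level check on the local server could look like this (the cloud server IP below is a placeholder):

<?php
// Refuse requests that don't come from the cloud server.
$allowed = ['203.0.113.10'];
if (!in_array($_SERVER['REMOTE_ADDR'] ?? '', $allowed, true)) {
    http_response_code(403);
    exit('Forbidden');
}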
I have a hosting account with a domain; the domain does not yet resolve to the IP address (no DNS records inserted yet). I know the IP of the server, and the domain is configured on the server.
Now I want to get the content of the website using PHP (with curl for example).
On my (Windows) computer I can simply edit the hosts file (windows/system32/drivers/etc/hosts) to achieve this, but how can this be achieved within a PHP script?
Background information:
I need to run a PHP script on a hosting account whose domain name is still in the registration process. I use DirectAdmin on my own server, and normally the website would also be visible under ipaddress/~username. But as the server uses strict suPHP settings, it will not run PHP code that way. As the script should work on any webserver, adjusting the suPHP settings to allow this is not an option.
You can directly use the IP address of the web server and manually set the "Host" request header to your not-yet-resolving domain name.
Assuming your domain name is foo-test.com: it is not yet in DNS, but it is already virtually hosted on a web server whose IP address is 1.1.1.1.
(The example below uses the command-line curl HTTP client)
curl --header "Host: foo-test.com" "http://1.1.1.1"
(Note: the Host header value is used by the web server to identify virtually hosted websites)
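Since the question asks for a PHP script, the same request via PHP's cURL bindings looks like this (same placeholder domain and IP as above):

<?php
// Request the site by IP while presenting the not-yet-resolving domain
// name in the Host header.
$ch = curl_init('http://1.1.1.1/');
curl_setopt_array($ch, [
    CURLOPT_HTTPHEADER     => ['Host: foo-test.com'],
    CURLOPT_RETURNTRANSFER => true,
]);
$html = curl_exec($ch);
curl_close($ch);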
I'm building a system where users can enter a domain they have purchased into their profile, so that accessing their purchased domain shows their profile, e.g.
http://domain.com/custom-name to http://purchaseddomain.com.
So when they access their purchased domain, it should take them to their profile, and the navigation links on their page should be rewritten to use the purchased domain; for example, viewing their records would be:
http://domain.com/custom-name/records to http://purchaseddomain.com/records.
Tumblr offers this feature, and it's exactly what I'd like to implement, but I have no idea how it all works. I've searched on SO, but nothing seemed to help.
The problem is that I'm not sure how to validate, confirm, and map their purchased domain onto my server safely using PHP - I'm using CodeIgniter for this.
Is there a solid, stable plugin/library, or a detailed tutorial, for letting custom domains mask an internal domain?
My server is running nginx 1.0.6 on Ubuntu 11.10.
The templating side is fine - I can do that. All I need help with is how to safely accept and map their domains to my server.
EDIT: I just looked into the nginx VirtualHostExample; this looks good overall, but how will I be able to dynamically add/remove those domain entries once the domain has an A record pointing to my server?
You won't merge their domain into your server.
In fact, when they register their domains, they will point them to your server.
On your server configuration, you'll have to dynamically create rules that implicitly rewrite the request to the page they created on your server.
So, users will see http://purchaseddomain.com/one-uri but you serve the page http://domain.com/custom-name/one-uri
For example, it's as if you added this to an .htaccess - even if you don't use Apache, it just illustrates what the "system" must do:
RewriteCond %{HTTP_HOST} purchaseddomain\.com$ [NC]
RewriteRule (.*) /custom-name/$1
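In PHP terms the same idea is a front-controller lookup - a hypothetical sketch, not a stock CodeIgniter API:

<?php
// Map the incoming Host header to the owning user's custom-name and
// rewrite the URI internally before the framework routes it.
$host = strtolower($_SERVER['HTTP_HOST'] ?? '');

// Illustrative lookup; in practice, a DB query against the column where
// users stored their purchased domain.
$domainOwners = ['purchaseddomain.com' => 'custom-name'];

if (isset($domainOwners[$host])) {
    // Browser shows http://purchaseddomain.com/records while the app
    // serves /custom-name/records.
    $_SERVER['REQUEST_URI'] = '/' . $domainOwners[$host] . $_SERVER['REQUEST_URI'];
}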
The accepted answer mentions customers pointing their DNS to your web server. But that alone is not enough to make it work in this day and age.
If your customers just CNAME to your domain or create an A record to your IP, and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it your app won't work in modern browsers on those custom domains.
You need to set up a TLS-termination reverse proxy in front of your webserver. This proxy can run on a separate machine, or on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the Host header of the incoming request, e.g. app.customer1.com or customer2.com etc., and then it will decide which TLS certificate to use by checking the SNI.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend Caddyserver, greenlock.js, or OpenResty (Nginx).
tl;dr on what happens here:
Caddyserver listens on 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
How to handle it on my backend
Your proxy terminates TLS and proxies requests to your backend. However, your backend doesn't know who the original customer behind the request is. This is why you need to tell your proxy to include additional headers in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com, or whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and know which customer is behind the request. You can implement your logic based on that: show data belonging to this customer, and so on.
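On a PHP backend, reading that header could be as simple as this sketch (header name as chosen above; the lookup is hypothetical):

<?php
// PHP exposes the proxied "X-Serve-For" header as HTTP_X_SERVE_FOR.
$customerHost = $_SERVER['HTTP_X_SERVE_FOR'] ?? null;
if ($customerHost === null) {
    http_response_code(400);
    exit('Unknown tenant');
}
// Hypothetical lookup: load the tenant whose custom domain this is.
// $tenant = $db->fetchTenantByDomain($customerHost);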
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may be waking up in the middle of the night restarting machines, or your proxy, manually.
If you need more detail you can DM me on Twitter @dragocrnjac
This is what is working for me:
server {
server_name *.mydomain.com;
root /var/www/$host;
...
}
Then you need to make directories like: /var/www/user1.mydomain.com/, /var/www/user2.mydomain.com/, ...
I couldn't figure out how to leave the '.mydomain.com' out of the directory name. If anyone has any idea, pls let me know :)
I'm doing a PHP cURL post, using a complete URL (http://www.mysite.com), from one page to another on the same site. (I know this isn't the best way to do it, but for my purpose it's what I need.)
My question is:
Will the cURL post still go out across the internet, do a name lookup, and travel a route as though it were a post coming from a different site? Or will the post stay on the server's local network?
There are multiple parts to the request: the DNS lookup, and the GET or POST to the site.
DNS records are usually cached by most OSes, so it's rather unlikely that the server would have to do a DNS lookup for its own external IP, but it's possible.
As for the post, let's assume a basic layout:
Firewall => DMZ Apache PHP Server (www.mysite.com)
222.xxx.xxx.123 => 192.168.0.2
If mysite.com resolves to 222.xxx.xxx.123, then your request will go out to your firewall's external interface and bounce back in. That's not exactly public traffic, but it leaves the server nonetheless.
However, if you want to bypass that, you could put an entry in the server's hosts file saying:
127.0.0.1 mysite.com
(assuming you control the server, i.e. not shared hosting)
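If you can't edit the hosts file (e.g. shared hosting), cURL itself can pin the name for a single request via CURLOPT_RESOLVE (available since PHP 5.5; the target path and fields below are placeholders):

<?php
// Force www.mysite.com to resolve to loopback for just this request.
$ch = curl_init('http://www.mysite.com/target.php');
curl_setopt_array($ch, [
    CURLOPT_RESOLVE        => ['www.mysite.com:80:127.0.0.1'],
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => ['key' => 'value'],
    CURLOPT_RETURNTRANSFER => true,
]);
$result = curl_exec($ch);
curl_close($ch);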
No. The post itself (unless you have multiple interfaces and your routing is totally screwed up) will not traverse the internet. Your local host ought to be able to resolve its own name as well, but there is a possibility that a DNS request will be made to determine the IP address corresponding to the name. I would hope that the network stack implementation on your system would prevent the post's packets from even hitting the wire (similar to localhost), but I wouldn't count on it.
It depends on your network setup. Many sites have a domain name pointing to the IP address of a front-facing router or load balancer which forwards requests to the web server.
If that's the case, a request to your own site can make a round trip to the router, though it's unlikely the request will go through the internet unless you have a very unusual setup (such as round-robin DNS with multiple datacenters).
You can avoid the round trip by associating the site's FQDN with the loopback interface in your web server's /etc/hosts, which will also save you a DNS request.