I have a SaaS app at
http://example.com/
and each user has their own website, for example:
https://example.com/p/user1
https://example.com/p/user2
https://example.com/p/user3
What I want is to support a custom domain for each user, for example:
https://custom-domain1.com points to https://example.com/p/user1
https://custom-domain2.com points to https://ehlquran.com/p/user2
Note that I use PHP and CodeIgniter 3.
Waiting for your solutions.
The question comes across as quite lazy, but I'll attribute that to the language barrier rather than to laziness on the asker's part.
With that being said, I will try to answer the question in more detail.
As liki-crus mentioned in the comment, your customers will have to point their DNS records to your server. But, that brings a whole lot of stuff into consideration.
If your customers just CNAME to your domain or create an A record pointing to your IP, and you don't handle TLS termination for these custom domains, your app won't support HTTPS; and without HTTPS, it won't work in modern browsers on those custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can be run on a separate machine but you can run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the SNI (Server Name Indication) of the incoming TLS handshake, e.g. app.customer1.com or customer2.com, and use it to decide which TLS certificate to serve.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, OpenResty (Nginx).
tl;dr of what happens here:
Caddyserver listens on 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
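As a sketch of that setup, here's a minimal Caddyfile (Caddy v2) using on-demand TLS; the backend address, the "ask" endpoint, and the X-Serve-For header name are assumptions you'd adapt to your app:

```
{
    # Ask your app whether a domain is allowed before issuing a certificate,
    # so strangers can't point random domains at you and exhaust rate limits.
    on_demand_tls {
        ask http://127.0.0.1:8000/check-domain
    }
}

https:// {
    tls {
        on_demand
    }
    # Forward to the backend and tell it which custom domain was requested.
    reverse_proxy 127.0.0.1:8000 {
        header_up X-Serve-For {host}
    }
}
```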
How to handle it on my backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know who is the original customer behind the request. This is why you need to tell your proxy to include additional headers in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com or whatever the Host header is of the original request.
Now when you receive the proxied request on the backend, you can read this custom header and you know who is the customer behind the request. You can implement your logic based on that, show data belonging to this customer, etc.
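A minimal sketch of that backend lookup in plain PHP; the X-Serve-For header name follows the example above, and the function name is mine:

```php
<?php
// Resolve which customer a proxied request belongs to.
// The proxy adds X-Serve-For; PHP exposes it as HTTP_X_SERVE_FOR in $_SERVER.
function resolveCustomerDomain(array $server): ?string
{
    // Prefer the proxy-added header; fall back to the Host header.
    $domain = $server['HTTP_X_SERVE_FOR'] ?? $server['HTTP_HOST'] ?? null;
    if ($domain === null) {
        return null;
    }
    // Strip an optional port and normalize case before the lookup.
    return strtolower(explode(':', $domain)[0]);
}
```

You'd then look the returned domain up in your customers table and render that customer's site.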
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may be waking up in the middle of the night restarting machines, or restarting your proxy manually.
Alternatively, there have been a few services like this recently that allow you to add custom domains to your app without running the infrastructure yourself.
If you need more detail you can DM me on Twitter #dragocrnjac
I use a simple PHP cURL call for Facebook user-account checks: I fetch the page and run my detection on the data that comes back.
But because I send many queries, Facebook blocks them and my PHP code stops working. How can I make each request look as if it came from a different computer, by changing the IP (through a proxy) and the user agent before running it?
Thank you.
You're asking how to fetch data through a different IP (a proxy) because your IP is blocked from getting data through the API. If that's your concern, try instead to find out why your IP was blocked and get it whitelisted with Facebook!
First, access canhazip.com or jsonip.com from the server to make sure it has the public IP you think.
Second, make sure that IP address is in "Server IP Whitelist" in the app's Settings > Advanced section of the Developer console (https://developers.facebook.com/apps/[APP ID]/settings/advanced/).
Looking for some PHP help. What I'd like to try (and find out if its feasible) is to redirect all traffic coming from origin back to the Akamai CDN url. Obviously if I did this globally I would run into a loop. So instead I've set up a header sent only by Akamai that would be ignored by my app if it was found.
What I'm looking for is the best method to accomplish this with PHP on my app. Something along the lines of:
if (!$header_exists && $current_baseurl === 'origin.site.com') {
    // 301 redirect to the www.site.com version of the same request URL
}
This would allow me to make sure requests coming in from outside of Akamai are properly redirected. Is this method sound? Does anyone currently have a code sample using a similar method?
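The check described above could be sketched in PHP like this; the header name (X-From-Akamai) and host names are placeholders, not Akamai-defined values:

```php
<?php
// Return the CDN URL to redirect to, or null if the request should be
// served normally (it already came through the CDN, or isn't on origin).
function cdnRedirectTarget(array $server): ?string
{
    // Custom header configured in the Akamai property and sent only by Akamai.
    $viaAkamai = isset($server['HTTP_X_FROM_AKAMAI']);
    $host      = $server['HTTP_HOST'] ?? '';
    if ($viaAkamai || $host !== 'origin.site.com') {
        return null;
    }
    return 'https://www.site.com' . ($server['REQUEST_URI'] ?? '/');
}

// Caller would do:
// if ($target = cdnRedirectTarget($_SERVER)) {
//     header('Location: ' . $target, true, 301);
//     exit;
// }
```

One caveat: a client can forge such a header unless the CDN strips it from incoming requests, which is one reason an IP-based allow list at the origin is stronger.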
This is completely the wrong approach. What you need to do is implement Site Shield in Akamai. Site Shield gives you a fixed set of Akamai IPs; if you allow only those IPs at the origin, that should solve your problem. Akamai will make sure all requests to the origin are sent from one of the Site Shield map's addresses, so any request sent directly to the origin is denied while requests from Akamai are allowed. Contact Akamai support to help you create and map a Site Shield for your domains. No code changes are required for this.
Additionally, you can allow your office IP if you want the origin domain to be open for your own testing.
I'm trying to run a SOAP method on another domain which must be received from a whitelisted IP Address, but the WSDL seems to somehow think the request is coming from the client and not the server. The call collects information from a form, uses AJAX to post to a PHP function, which formats the fields into the API-friendly format, which is then sent via PHP SoapClient.
I thought the best way to do this was to investigate the headers being sent with the SoapClient, but the headers don't list any IP address (Host, Connection, User-Agent, Content-Type, SOAPAction, and Content-Length).
First, I read the documentation of the SOAP endpoint. It doesn't apparently specify any specific parameter to be passed, which wouldn't make sense anyway because I'd just be able to fake an IP address. Then, I read the documentation for PHP SoapClient. Interestingly I couldn't find quite where the IP addresses were set, but I did find a comment which mentions using 'stream_context', but had no luck with that either.
I am also logging every request, including $_SERVER['SERVER_ADDR'] and $_SERVER['REMOTE_ADDR'], which are both reporting IP addresses as expected; technical support on their end tell me that they are receiving requests from the 'REMOTE_ADDR' value.
I tried sending a bare-bones request and expected to get a different error besides the IP address, but I keep getting IP address problems.
Is there any way I can be more sure that I am sending the SOAP request with the proper (server) IP?
Okay, I figured it out, at least for my own situation. The comment in the PHP manual that you read was correct (assuming we're talking about the same one); however, I needed to include the port on my IP address as well. So the code that I ended up using was:
$options = array('socket' => array('bindto' => 'xxx.xxx.xx.xxx:0'));
$context = stream_context_create($options);
$client  = new SoapClient($url, array(
    'trace'          => 1,
    'exceptions'     => 0,
    'stream_context' => $context,
));
See, before, I had not included the ":0"; I had merely included the IP address. So don't forget that!
This is a tough one, because the question does not really draw a clear picture on what systems are actually involved and where and what kind of IP whitelisting is in use.
When using SOAP, the primary source of information about the service is included in the WSDL resource. It is supposed to be obtained via a HTTP request, and it might trigger additional HTTP request if the XML of the primary resource has xi:include elements. All these requests originate from the system that acts as the SOAP client. You cannot choose which IP address to use here (unless you have a very exotic setup of having TWO interfaces that BOTH have a valid route to the target system, and choosing the right IP is the task of the application - I wouldn't think this is the case here, and I stop thinking about it - you'd need to configure a stream context for this, set the "bindto" option, and pass it into the SoapClient).
Inside the WSDL, the URL of the real SOAP server is contained. Note that the server itself might be on a completely different domain than the WSDL description, although such a setup would also be unusual. You can override that location by passing an options array to the SoapClient with an entry "location" => "http://different.domain.example/path/to/service". This would not change the loading of WSDL resources, but all requests for the SOAP service would go to that different base URL.
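Putting both knobs together, here's a sketch; the IP address and URLs are placeholders:

```php
<?php
// Bind outgoing SOAP traffic to a specific local IP (":0" = any local port)
// and override the service endpoint taken from the WSDL.
$context = stream_context_create(array(
    'socket' => array('bindto' => '203.0.113.10:0'),
));
$options = array(
    'stream_context' => $context,
    'location'       => 'http://different.domain.example/path/to/service',
    'trace'          => 1,
    'exceptions'     => true,
);
// $client = new SoapClient('http://example.com/service?wsdl', $options);
// $client->someOperation(/* ... */);
```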
The call collects information from a form, uses AJAX to post to a PHP function, which formats the fields into the API-friendly format, which is then sent via PHP SoapClient.
There are plenty of clients and servers mentioned here. AJAX is mentioned, which makes me believe a browser is involved. This is system 1, acting as a client. A request gets sent to some PHP. This target is system 2, acting as a server here. It transforms it and, acting as a client, sends the SOAP request to system 3, which is acting as another server.
So where is the whitelist? If it is on system 3, it must list the IP that is used by system 2. Note that every networked computer has more than one IP address: At least the one from that network device, plus 127.0.0.1. With IPv6, there are even multiple addresses per each device. Using $_SERVER['SERVER_ADDR'] does not really make sense here - and additionally, systems that are on the transport way, like transparent proxies, might also influence the IP. You shouldn't use the SERVER_ADDR as the entry for the whitelist. You should really check the network setup on a shell to know which network device is used, and what IP it has. Or ask the server you are about to contact to check which IP they are seeing.
I am also logging every request, including $_SERVER['SERVER_ADDR'] and $_SERVER['REMOTE_ADDR'], which are both reporting IP addresses as expected; technical support on their end tell me that they are receiving requests from the 'REMOTE_ADDR' value.
This is the strange thing. SERVER_ADDR and REMOTE_ADDR on system 2 are set as expected. I read this as REMOTE_ADDR being the IP of system 1, and SERVER_ADDR being that of system 2. But system 3 sees the IP from system 1? What is happening here? I'd really like to have more feedback on this from the original poster, but the question is already half a year old.
And it does not have a proper description of the experienced "IP address problems". How are they seen? Is it a timeout with "connection failed", or is it any proper HTTP rejection with some 4xx status code that triggers a SoapFault? This problem can only really be solved if the description would be better.
I do suspect however, that there might be complicated code involved, and the real SOAP request is in fact sent by the browser/system 1, because the generated XML on system 2 is mirrored back to system 1 and then sent to system 3. It would at least explain why the IP of system 1 is seen on system 3.
Ok, so I found out that it is impossible to get the server IP rather than the client IP this way. Very unfortunate...
I guess you will have to keep track of credentials by using a login/password instead.
Had the same problem. You have two solutions. Both require an extra step for calling the web service.
1) Use file_get_contents to access your page that actually calls the web service via PHP SOAP. Example:
public function test() {
    $result = file_get_contents('http://your.domain.com/pages/php-soap?params=your-params');
    header("Content-type: text/xml; charset=utf-8");
    echo $result;
}
2) Use CURL for the same purpose.
That way, you'll pass the server IP to the web service. You can also secure your pages by checking, in php-soap (in this example), the IP of the caller (the server IP in your case, which is unique).
Quick and dirty.
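For option 2, a hedged cURL sketch of the same server-side hop; the URL in real use would be your own endpoint:

```php
<?php
// Fetch a URL from the server so the web service sees the server's IP,
// not the browser's. Returns the body as a string, or false on failure.
function fetchViaCurl($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return instead of printing
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}
```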
Today I came across some pretty strange behaviour in a PHP-based application of mine. In a certain part of the system there's a UI that uses AJAX calls to fill list boxes with content from the backend.
Now, the AJAX listener performs a security check on all incoming requests, making sure that only valid client IPs get responses. The valid IPs are stored in the backend too.
To get the client's IP I used plain old
$_SERVER['REMOTE_ADDR']
which works for most clients. Today I ran into an installation where REMOTE_ADDR contained the IP of a network adapter which wasn't the one that performed the actual communication for my application.
Googling around gave me Roshan's blog entry on the topic:
function getRealIpAddr()
{
    if (!empty($_SERVER['HTTP_CLIENT_IP'])) // check IP from shared internet
    {
        $ip = $_SERVER['HTTP_CLIENT_IP'];
    }
    elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) // check if IP is passed from a proxy
    {
        $ip = $_SERVER['HTTP_X_FORWARDED_FOR'];
    }
    else
    {
        $ip = $_SERVER['REMOTE_ADDR'];
    }
    return $ip;
}
Sadly the problem persists.
Did anybody ever stumble into this sort of problem (I don't actually think I've discovered a completely new issue ^^), and does anyone have an idea how I could fix it?
EDIT:
I'm on
PHP Version 5.2.9-1
Apache/2.2.9 (Win32)
The communication is done via a regular LAN card. The actual client, though, has several more devices: VMNet adapters and such.
I'm wondering how a client configuration can 'disturb' a web server that much...
TIA
K
Unfortunately, you have to take all IP information with a grain of salt.
IP addresses are gathered during the request by taking the packet and request information into account. Sadly, this information can easily be spoofed or simply be incorrect (proxies, NAT, and multiple network adapters all affect it), and it should not be relied upon for anything more than vanity purposes.
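If you do need a forwarded address, a safer sketch than trusting the headers blindly is to honour X-Forwarded-For only when the direct peer is one of your own proxies; the proxy list here is an example:

```php
<?php
// Return the best-guess client IP, trusting X-Forwarded-For only
// when the TCP peer (REMOTE_ADDR) is a known proxy of ours.
function clientIp(array $server, array $trustedProxies)
{
    $remote = $server['REMOTE_ADDR'] ?? '';
    if (in_array($remote, $trustedProxies, true)
        && !empty($server['HTTP_X_FORWARDED_FOR'])) {
        // XFF is a comma-separated chain; the left-most entry is the
        // client as reported by the first proxy in the chain.
        $chain = array_map('trim', explode(',', $server['HTTP_X_FORWARDED_FOR']));
        return $chain[0];
    }
    return $remote;
}
```

Requests arriving directly (not via a trusted proxy) fall back to REMOTE_ADDR, which a remote attacker cannot spoof on an established TCP connection.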