How to reliably identify a website - PHP

I have a file that is being linked to from other sub websites.
The file: http://site.com/file.img
Website A links to it: <img src="http://site.com/file.img">
Website B links to it: <img src="http://site.com/file.img">
I need to reliably identify which of these websites has accessed the file, but I know that $_SERVER['HTTP_REFERER'] can be spoofed. What other ways do I have to reliably confirm the requesting site? By IP? Should I get them to register an IP, or set up an API key? What options are there?

If a website is only linking to a file, the "website" itself will never actually access your image. Instead, the client who's viewing the site will make a request for the image.
As such, you're depending on information sent by the client, which is completely out of your control and not reliable at all. If you have the opportunity to set some sort of unique cookie on the client, you may be able to use this in some fashion for extended identification, but even that won't be reliable.
There is no 100% reliable solution.
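For what it's worth, here is a minimal sketch of that cookie idea in PHP; the cookie name and lifetime are arbitrary choices for the example, and as said above it only gives you a soft hint, not proof:
<?php
// Minimal sketch: tag each visiting client with a random identifier cookie.
// "visitor_id" and the 30-day lifetime are arbitrary choices for this example.
if (!isset($_COOKIE['visitor_id'])) {
    $visitorId = bin2hex(random_bytes(16));      // PHP 7+; hard-to-guess value
    setcookie('visitor_id', $visitorId, time() + 30 * 24 * 3600, '/');
} else {
    $visitorId = $_COOKIE['visitor_id'];
}
// You can now log $visitorId alongside $_SERVER['HTTP_REFERER'], but both can
// be cleared or spoofed by the client, so treat them as hints, not proof.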

Getting the referrer is the best you can do without getting into complicated territory.
If you don't mind complicated, then read on: set up your Web server to serve file.img only to Website A and Website B, then require that Website A and Website B set up a proxy configuration on their end that will retrieve file.img on behalf of their visitors.
Example:
1. A visitor to Website A loads a page that contains an image tag like <img src="http://websiteA.com/file.img"/> (note the reference to Website A rather than your site).
2. The visitor's browser requests file.img from WebsiteA.com accordingly.
3. Website A is configured to proxy requests for the path /file.img to your server, http://site.com/file.img.
4. Your site verifies that it is in fact Website A requesting the image and then serves it to Website A's proxy.
5. Website A then serves the image to the visitor.
Basically, that makes it a pain for Websites A and B, gives you a performance hit, and also requires further configuration on your part. But I imagine that would satisfy your requirement.
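For illustration, the verification step on your end (site.com) could look something like the sketch below, where file.img is served only to proxies presenting a pre-shared key; the header name, the key values, and the file path are assumptions for this example, not part of any standard:
<?php
// Sketch: serve file.img only to Website A's or Website B's proxy, identified
// by a pre-shared secret sent in a request header. The header name (X-Api-Key),
// the key values and the file path are assumptions for this example.
$validKeys = array(
    'long-random-secret-for-A' => 'Website A',
    'long-random-secret-for-B' => 'Website B',
);

$sentKey = isset($_SERVER['HTTP_X_API_KEY']) ? $_SERVER['HTTP_X_API_KEY'] : '';

if (!isset($validKeys[$sentKey])) {
    header('HTTP/1.1 403 Forbidden');
    exit('Not allowed');
}

error_log('file.img served to the proxy of ' . $validKeys[$sentKey]);

header('Content-Type: image/png');      // adjust to the actual image type
readfile('/path/to/file.img');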

Have a look at how an OpenID relying party is implemented; it allows one site to authenticate against another. The protocol specification will give you a hint of the effort and overhead required to reliably implement such a scheme.
http://googlecode.blogspot.com/2010/11/googles-sample-openid-relying-party.html

Related

A script that logs in to multiple websites at once. CURL and PHP tried.

I run a computer lab for grade schoolers (3-14 y.o.) and would like to create a desktop/dashboard page consisting of a number of iframes, each pointing at a different external website
(for which we have created individual accounts for each child); and when a kid logs in (to the dashboard) a script will log her in to those websites, so she does not have to.
I have 1 server and 20 workstations, I'll refer to them as 'myserver' and 'mybrowser'(s) respectively. All these behind the same router (dynamic IP).
A kid gets on a 'mybrowser' workstation, fires up Firefox and runs desktop.php (hosted in 'myserver') and gets a login screen (for 'myserver')
'mybrowser' ---http---> 'myserver'
Once logged in, 'myserver' will retrieve a username and password stored in its database and run a cURL script to send them to an 'external web server'.
'mybrowser' ---http---> 'myserver' ---curl---> 'external web server'
Successful, or so I thought.
It turns out that cURL, being run from 'myserver', logs in 'myserver' instead of 'mybrowser'.
The session inside the iframe, after refresh, is still NOT logged in. Now I know.
Then I thought of capturing the cookies from 'myserver' and setting them in 'mybrowser', so that 'mybrowser' can browse (within the iframe)
as a logged-in user. After all, we (all the 'mybrowsers') are behind the same router as 'myserver', and thus share the same IP address.
In other words, I only need 'myserver' to log a user in to several external websites all at once, and once that's done, pass control back to the individual users' browsers.
I hope the answer will not resort to using cURL to display and control the external websites for the whole session; aside from being a drag, that would lead to some other sticky issues.
I am getting the sense that this is not permitted due to security issues, but what if all the 'mybrowsers' and 'myserver' are behind the same router? Assuming there's a way to copy the login cookies from 'myserver' to the 'mybrowsers', would the 'external web server' know that a request came from a different machine?
Can this be done?
Thanks.
The problem you are facing relates to the security principles of cookies. You cannot set cookies for other domains, which means that myserver cannot set a cookie for facebook.com, for example.
You could run an HTTP proxy on your server so that all queries pass through it, doing some kind of URL translation (e.g. facebook.com => facebook.myserver). That in turn allows you to set cookies for the clients (since you're running on facebook.myserver); you then translate the cookies you receive from the clients and feed them to the third-party websites.
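A very stripped-down sketch of that idea, assuming a catch-all script running on facebook.myserver (the target host is hard-coded, and error handling, POST support, and forwarding the client's cookies back upstream are all left out):
<?php
// Stripped-down sketch of a translating proxy on facebook.myserver: fetch the
// page from the real site and re-issue its cookies under your own domain.
// The target host and the lack of any access control are simplifications.
$target = 'https://www.facebook.com' . $_SERVER['REQUEST_URI'];

$ch = curl_init($target);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);            // keep the response headers
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false);
$response   = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

$rawHeaders = substr($response, 0, $headerSize);
$body       = substr($response, $headerSize);

// Strip the original domain attribute so the browser accepts the cookies
// as belonging to facebook.myserver.
foreach (explode("\r\n", $rawHeaders) as $line) {
    if (stripos($line, 'Set-Cookie:') === 0) {
        header(preg_replace('/;\s*domain=[^;]*/i', '', $line), false);
    }
}

echo $body;   // a real proxy would also rewrite URLs inside the body
A real deployment would also have to forward the client's cookies and POST data upstream, which is exactly the kind of work an existing proxy script can save you.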
An example of a non-transparent proxy that you could begin with: http://www.phpmyproxy.com/
Transparent proxies (in which URLs remain "correct" / untranslated) might be worth considering too. Squid is a pretty popular one. Can't say how easy this would be, though.
After all that you'll still need to build a local script for myserver that takes care of the login process, but at least a proxy should make it all possible.
If you have any say in the login process itself, it might be easier to set up all the services to use OpenID or similar login services, with Stack Overflow and its sister sites being a prime example of how easily login across multiple sites can be achieved.

Website Administration Location + PHP CURL

I'm building an online dating website at the moment.
There needs to be an admin backend to the site to approve users/photos etc.
I can add this admin part of the site/login etc to the same domain.
eg: www.domainname.com/admin
Or from my experience with PHP CURL I can put this site on a different domain and CURL the requests through.
Question: is it more secure to put the admin code/site on a completely different domain, or does it really not matter if it sits on the same domain? Hacking/security is the real point of this question.
thx
Technically it might be more secure if you ran it from a different server and hosted it on a subdomain using a different IP/vhost, or used a proxy module for your web server (see Apache mod_proxy) to proxy requests from yourdomain.com/admin to admin.otherdomain.com, enforcing additional IP or access control on the proxied URL with .htaccess or an equivalent.
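If you also want a check inside PHP itself (on top of the web server or .htaccess rules), a small guard included at the top of every admin script is one way to do it; the addresses below are placeholders for this sketch:
<?php
// admin_guard.php - sketch of an extra IP allow-list for the admin area.
// The addresses are placeholders; keep .htaccess/firewall rules as the
// primary control and treat this as an additional layer only.
$allowedIps = array('203.0.113.10', '198.51.100.25');   // e.g. office / VPN exits

if (!in_array($_SERVER['REMOTE_ADDR'], $allowedIps, true)) {
    header('HTTP/1.1 403 Forbidden');
    exit('Access denied');
}
// Then: require 'admin_guard.php'; at the top of each admin page.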
Of course, if those other domains are web accessible, then they are only as secure as the users and passwords that use them.
For corporate applications, you may want to make the admin interface accessible from a VPN connection, but I don't know if that applies to you.
If there is a vulnerability on your public web server that allows someone to get shell access, the separation may make it slightly more difficult for them to gain administrative access, since the code for the administration portion isn't on that server.
In other words, it can provide additional security depending on the lengths you go to, but is not necessarily a solid solution.
Using something like cURL is a possibility, but you'd have far less troubleshooting to do using a more conventional method like a proxy or a subdomain on another server.

How to protect download URLs from being stolen with PHP?

I have several programs linked and hosted on my server. I need to protect the URLs from being stolen and placed on other sites because they'll use my bandwidth.
How can I do that in PHP?
Should I just check referrer or do something else?
If you have the binary files on your server and someone gets the address, you can't use PHP to prevent them from being downloaded. You want to protect them at the web server level. Assuming you're using Apache, look into doing this with custom .htaccess directives.
This question, involving the direct download of MP4 videos, may point you in the right direction:
Disable hot linking or direct download of my videos and only stream the video when it's displayed from a page in my website
If you don't want them downloaded/stolen, then don't put them on your site.
On the plus side, if they are stolen, then your bandwidth will only get used once. Checking referer is easiest to do, and also easiest to bypass/subvert.
If you're concerned that your server is only hosting the files while the users who download them never see where they come from, you can do the following:
Check the referrer. This can be fooled; however, if you're mainly concerned about links from forums etc., it is an option.
Basically, you check whether the HTTP Referer header is set and matches your site's pattern. If it doesn't, you could block the traffic; however, if you actually want to offer downloads, I would not block the user outright.
Instead, you can display a download facade page in your site's design that then offers the download. With some session logic, you can allow users to download the files.
This also lets you build a much better hotlinking check than one based on HTTP headers alone.
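A rough sketch of that facade/session approach, where the file list, token name, and paths are assumptions for the example (hash_equals needs PHP 5.6+):
<?php
// download.php - sketch of a session-gated download instead of a raw file URL.
// The facade page stores a fresh random token in $_SESSION['dl_token'] and
// links here with ?file=manual.pdf&token=...; names and paths are examples.
session_start();

$files = array('manual.pdf' => '/var/files/manual.pdf');   // only these are served

if (!isset($_GET['file'], $_GET['token'], $_SESSION['dl_token'])
    || !isset($files[$_GET['file']])
    || !hash_equals($_SESSION['dl_token'], $_GET['token'])) {
    header('HTTP/1.1 403 Forbidden');
    exit('Please use the download page.');
}

unset($_SESSION['dl_token']);    // make the token single-use
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($_GET['file']) . '"');
readfile($files[$_GET['file']]);
The facade page itself just calls session_start(), stores a random value in $_SESSION['dl_token'], and prints the download link with that token attached.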

Secure way to import third-party web content to my site so it is served over SSL?

As an example, let's say I'm working with a product that requires I put <iframe>s onto my page, which then get embedded into my existing website.
The issue is that my website is served over SSL, and the iframe comes from another origin which is not SSL.
I'm not worried about JavaScript here, as the other origin cannot access the DOM due to the same-origin policy, but my huge concern is that annoying "mixed content" warning that pops up in IE, or the broken lock in other browsers, etc. If a user doesn't know to click "no," they don't get the content, which is critical to the website.
What I want to do instead is provide a way to take this content and scoop it into my own script so that it is sent to the browser from my domain with my SSL certificate, for all resources it links to (i.e., recursively parse the file and send the resources as my own). I realize this could open a huge hole because it's now coming from my origin.
What recommended approach should I take to get this third-party content onto my site? Right now the content I'm pulling is the base HTML file, the CSS, and 9 images, all of which are dynamic. This is similar to a proxy, I suppose.
Get a separate domain (mydomain.com -> thirdpartyname.mydomain-proxy.com) to serve as an HTTPS proxy for third-party content. That way they cannot run JavaScript in the same origin as your website.
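On that separate domain, the proxy endpoint itself can stay very small; here is a bare-bones sketch where the upstream host and the allowed paths are assumptions, and only a fixed whitelist of resources is ever fetched:
<?php
// ssl_proxy.php - sketch: fetch a resource from the non-SSL third party and
// re-serve it over your own HTTPS domain. The upstream host and the path
// whitelist are assumptions; never proxy arbitrary user-supplied URLs.
$upstream = 'http://thirdparty.example.com';
$allowed  = array('/widget.html', '/widget.css', '/img/logo.png');

$path = isset($_GET['path']) ? $_GET['path'] : '';
if (!in_array($path, $allowed, true)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

$ch = curl_init($upstream . $path);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
$type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

header('Content-Type: ' . ($type ? $type : 'application/octet-stream'));
echo $body;   // delivered to the visitor under your own SSL certificate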
Alternatively: Pressure them to adopt HTTPS. It's fast and it's free.

How to ensure an HTTP request is coming from the right place?

I have learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable though.
So, how can I ensure that an incoming HTTP request is coming from a website that I have whitelisted?
For example, I have some JS code that will send from a client site to my server (something like a sniper, cross-platform). However, I only want to allow this from several websites, not others, so that even if other people copy the code and put it onto their own website, it won't work.
In the general case you simply can't do it. You are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all those websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check the token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
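If you do control (or can coordinate with) the whitelisted sites, a sketch of such a token could look like the following; the HMAC construction, the shared secrets, and the five-minute window are assumptions for this example, not an established protocol:
<?php
// Sketch of a CSRF-style token shared between your server and a whitelisted
// site. The partner site embeds make_token() output in its page; your server
// calls check_token() on incoming requests. Secrets and the 5-minute lifetime
// are assumptions for this example. hash_equals() needs PHP 5.6+.
$secrets = array(
    'siteA.example' => 'shared-secret-A',
    'siteB.example' => 'shared-secret-B',
);

// On the whitelisted site, when rendering the page that embeds your JS:
function make_token($site, $secret) {
    $expires = time() + 300;
    $sig = hash_hmac('sha256', $site . '|' . $expires, $secret);
    return $site . '|' . $expires . '|' . $sig;
}

// On your server, when the JS request arrives with the token attached:
function check_token($token, array $secrets) {
    $parts = explode('|', $token);
    if (count($parts) !== 3) {
        return false;
    }
    list($site, $expires, $sig) = $parts;
    if (!isset($secrets[$site]) || (int) $expires < time()) {
        return false;
    }
    $expected = hash_hmac('sha256', $site . '|' . $expires, $secrets[$site]);
    return hash_equals($expected, $sig);
}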
See also this question on HTTP_REFERER
I haven't used this in practice, so there might be practicality issues I'm not counting on, but I thought I'd contribute the idea anyway. If I interpret correctly, this is similar to (if not the same as) the idea Seldaek posted.
1. Your Server generates a unique ID for each page-serve and embeds the ID in the page.
2. Server stores the ID and the Client's IP address.
3. The js on the client places the ID in its request to the Server and sends the request.
4. When the Server receives the js request from the Client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
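A compact way to express that scheme in PHP, using the session as the "on file" store (the parameter names and the use of sessions are assumptions made for this sketch):
<?php
// Sketch of the ID/IP scheme above, with the PHP session as the "on file" store.
// Parameter names (id, js_request) and session-based storage are assumptions.
session_start();

if (!isset($_GET['js_request'])) {
    // Steps 1-2: serve the page, generate an ID, remember it with the client IP.
    $id = bin2hex(random_bytes(16));                  // PHP 7+
    $_SESSION['page_id'] = $id;
    $_SESSION['page_ip'] = $_SERVER['REMOTE_ADDR'];
    echo "<script>var pageId = '$id'; // the js sends this back with its request</script>";
} else {
    // Step 4: only answer the js request if the ID/IP pair matches what we stored.
    $ok = isset($_GET['id'], $_SESSION['page_id'], $_SESSION['page_ip'])
        && hash_equals($_SESSION['page_id'], (string) $_GET['id'])
        && $_SESSION['page_ip'] === $_SERVER['REMOTE_ADDR'];
    if (!$ok) {
        header('HTTP/1.1 403 Forbidden');
        exit;
    }
    echo 'service response';                          // step 5 (expiry) is left out here
}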
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As Billy says, this simply isn't possible; you're thinking about the internet's request/response mechanism incorrectly.
For example, I have some JS code that will send from a client site to my server (something like a sniper, cross-platform).
I assume what you're saying is that you have some JavaScript code served up on some website on your 'whitelist' which redirects the user to your website, and it's on your website that you want to check that the user came from the 'whitelisted' site?
Aside from setting a cookie (which might not be possible across domains), you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
So, how can I ensure that an incoming HTTP request is coming from a website that I have whitelisted?
I think you could sign every request (from the whitelist) with a signature that is valid for that request only (used once). I assume using uniqid for this is safe (enough?).
