So I was asked to look at reconstructing a section of a website which I didn't build. One of the issues I'm running into is a contact form which is being loaded through an iFrame from another server. Obviously, the form's action submits to the other server, and the information is stored in a database for the client to see later.
I've never had to deal with something like this before and I'm wondering if I need to go through some sort of API the host may be able to provide, or can I recreate the form so I can style it and just have it submit to the same server. Sorry for the noob level of this question, but I'm just looking to be pointed in the right direction.
While what you are planning to do will technically work (I have done it myself on several occasions), it is possible the remote host might reject POST data coming from locations other than itself.
For example, say your site is running at www.example.com and the host site is running at www.host.com. The server at host.com will be able to determine that you are sending POST data from example.com. Again, this is only a problem if they are doing cross-site checking.
Since you don't have access to their server to know, you will just have to try it and see.
Actually, this type of rejection might or might not happen: to reject foreign POSTs, the server needs to read the referrer, but the referrer isn't sent by every browser.
Additionally, beware of protection mechanisms like session IDs, or some kind of authorization hash injected into the form as a hidden field.
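To illustrate, the kind of referrer check the remote host might be running could look something like this in PHP (purely a guess at their implementation; the host name is taken from the example above):
// Hypothetical check the remote host *might* run; note HTTP_REFERER is optional and spoofable.
$refererHost = parse_url($_SERVER['HTTP_REFERER'] ?? '', PHP_URL_HOST);
if ($refererHost !== 'www.host.com') {
    http_response_code(403);
    exit('POST data is only accepted from www.host.com');
}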
Related
I have a use case where I need to be able to access my site from the local server. Specifically, it's for an HTML-to-PDF export of parts of various pages, but this would be nice for testing parts of the website as well.
The problem is that we have a login splash page, which needs to be dealt with before I can access any part of the website. It would be really nice if I could just call a command like "wkhtmltopdf 'localhost/[myurl]'" and have it PDF some stuff, but it hits this splash page.
Is there some way that I can permanently persist just one session on the server? Or enable login-less access from localhost? Or could I just add a new Apache entry that serves our site, whitelists only localhost, and somehow circumvents the login?
What's the best solution?
You can pass your session cookie as a parameter to wkhtmltopdf to solve your problem.
You can also execute it from a PHP file like this:
// wkhtmltopdf needs an output file; escapeshellarg() guards against shell metacharacters
exec("wkhtmltopdf --cookie " . escapeshellarg($cookieName) . " " . escapeshellarg($cookieValue) . " http://example.com output.pdf");
Soliciting feedback on this solution now:
I whitelisted localhost via $_SERVER['REMOTE_ADDR'] in the login scripts to bypass the usual user authentication and get an automatic localhost-user login. However, the server is running on a university LAN, and that LAN may be really big, possibly enabling bidirectional TCP spoofing.
Should I be worried about this, or does someone need admin rights on the routers or something? I trust the IT folks, but not others.
I realize that this sounds like a separate question, but I feel that security relates to whether or not this is a good solution.
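For reference, the whitelist check is essentially this (a simplified sketch, not the exact script; the session key is illustrative):
// Bypass normal authentication for requests originating on this machine.
// 127.0.0.1 is the IPv4 loopback; ::1 is the IPv6 loopback.
if (in_array($_SERVER['REMOTE_ADDR'], ['127.0.0.1', '::1'], true)) {
    $_SESSION['user'] = 'localhost-user'; // hypothetical session key used by the login system
}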
I have a website with some PHP scripts, some of which are called via AJAX.
I'd like to protect my site from malicious users who would try calling and using those scripts from another site, or from a dummy localhost site.
I thought about filtering by domain name, but with tools like EasyPHP and virtual host managers, you can run a local website that fakes the "domain" name.
I also thought about filtering by the caller's IP address, but I guess that if you can fake the "domain" name, you can also fake the localhost IP.
So, how can I set this up so the security actually works?
What you are referring to is called Cross-Site Request Forgery (CSRF).
Calling one of your scripts from another website will be forbidden by the same-origin policy. Taking this into consideration, along with the fact that an AJAX request can contain only a few headers without the server's consent via Cross-Origin Resource Sharing (CORS), you can send a custom HTTP header and check for that header on the server side in PHP. If the header is missing, the request is most likely not coming from your own application.
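For example, jQuery adds an X-Requested-With: XMLHttpRequest header to its AJAX calls, which you could verify server-side; a minimal sketch:
// Reject requests missing the custom AJAX header.
// PHP exposes X-Requested-With as HTTP_X_REQUESTED_WITH in $_SERVER.
if (($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '') !== 'XMLHttpRequest') {
    http_response_code(403);
    exit('Forbidden');
}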
You could also require each client to send a unique token with each request in order to fetch the data. The most commonly used token method is called the Synchronizer Token Pattern.
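A bare-bones, session-based version of that pattern might look like this (a sketch only; the token name is illustrative):
// When rendering the page: create a per-session token and embed it in the form/JS.
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
// e.g. <input type="hidden" name="csrf_token" value="...">

// When handling the request: compare tokens in constant time.
if (!hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'] ?? '')) {
    http_response_code(403);
    exit('Invalid token');
}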
Sorry for the long list of links included in this answer, but I consider the subject a delicate one, and like any security problem, I think it is crucial to read as much as you can, from many sources, in order to understand the problem from different perspectives, see the available solutions, and pick the right one for your use case.
Resources to read:
How to stop other website to send cross domain ajax requests?
What's the point of X-Requested-With header?
Using CORS
If I have a WordPress website, and a user on the website enters some survey information, is it possible to send the results to a local server inside a company (assuming the website is hosted on some other company's server)? From looking around I see people using JSON formats and GET, PUT, etc., but I haven't seen this demonstrated with WordPress. Is there a standard way to do this? I can see it is possible to send it via email, but I was hoping for something more like TCP/IP communication.
If it must run through the front-end WordPress installation, then the easiest way would be a simple HTTP POST request to a server you control. PHP has several different ways to accomplish this with minimal effort.
The other way you can do this is to set up a form that sends an AJAX request to your server. Just make sure your receiving server is configured to allow the originating domain.
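With WordPress specifically, the built-in HTTP API makes the POST straightforward; a minimal sketch (the endpoint URL and field names are placeholders):
// Forward submitted survey data to an internal server after the user submits.
$response = wp_remote_post('https://internal.example.com/survey-endpoint', array(
    'body' => array(
        'name'   => $name,   // assumed to hold the submitted survey fields
        'answer' => $answer,
    ),
));
if (is_wp_error($response)) {
    error_log('Survey forwarding failed: ' . $response->get_error_message());
}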
I have a main website (with a backend SQL database), and I have satellite websites which are all separate domains. Each of these websites is hosted by a provider and has its own SQL database; however, I don't want to maintain 6 or 7 different databases. Instead I would like just one centralised one.
What I would like is that when a user submits a form on one of the satellite websites, the data gets transmitted to and stored in the database of the main website. It may have to be via a special URL or something - I really don't know.
Is this possible and if so, how?
I think AJAX may have something to do with it, but I can't seem to get to grips with it and it doesn't seem to work for me, so I'm hoping this is possible using simple PHP. Any help would be appreciated.
Thanks in advance.
On the server where you are hosting the database, you can set up a PHP web service that receives POST requests from the remote forms and handles the communication with the database. You can pass some extra parameters in your POST requests to differentiate between the sources the requests are coming from.
You will have to be extra careful with such a design, as your script would be receiving cross-domain requests from different sources and might be prone to CSRF attacks unless you take extra security measures by validating the sources and forms the requests are coming from.
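As a rough sketch, the receiving web service on the main server could look like this (the shared secrets, credentials, and table name are all placeholders):
// Central endpoint receiving form posts from the satellite sites.
$secrets = array('site-a' => 'changeme-a', 'site-b' => 'changeme-b'); // per-site shared secrets
$source  = $_POST['source'] ?? '';
$token   = $_POST['token'] ?? '';

// Only accept requests that prove they come from a known satellite.
if (!isset($secrets[$source]) || !hash_equals($secrets[$source], $token)) {
    http_response_code(403);
    exit('Unknown or unverified source');
}

$pdo  = new PDO('mysql:host=localhost;dbname=main', 'dbuser', 'dbpass');
$stmt = $pdo->prepare('INSERT INTO submissions (source, name, message) VALUES (?, ?, ?)');
$stmt->execute(array($source, $_POST['name'] ?? '', $_POST['message'] ?? ''));
echo 'OK';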
In addition to the above-mentioned solution, you can also simply allow your satellite sites to connect to your database directly, if such a remote DB connection to your server is supported/enabled.
You can have your satellite sites connect to your central database directly as well. They don't have to be on the same servers.
All you need for that to work is a user account on your DB server which allows access from addresses other than localhost.
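A minimal sketch of that direct connection from a satellite site (host, credentials, and table are placeholders; the MySQL account must be granted access from the satellite's address rather than just localhost):
// Connect straight from the satellite site to the central database.
$pdo  = new PDO('mysql:host=db.main-site.example;dbname=main', 'formuser', 'secret');
$stmt = $pdo->prepare('INSERT INTO submissions (name, message) VALUES (?, ?)');
$stmt->execute(array($_POST['name'] ?? '', $_POST['message'] ?? ''));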
Yes, it's certainly possible, and probably better to do it server-side with PHP rather than client-side with AJAX, because on the client you'll run into cross-origin (same-origin policy) restrictions. You'll probably need to build your own API endpoints, and I suggest looking at this article for more info on making the requests.
You can generate POST requests and submit them to any domain; that's not a problem. Doing cross-site requests can be problematic, but I'd like to see your code!
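For completeness, generating that POST from the satellite side with cURL might look like this (URL, secret, and field names are placeholders, matching the endpoint sketch above):
// Re-post the submitted form data from the satellite site to the central endpoint.
$ch = curl_init('https://main-site.example/receive.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'source'  => 'site-a',
    'token'   => 'changeme-a',
    'name'    => $_POST['name'] ?? '',
    'message' => $_POST['message'] ?? '',
)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);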
I have learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So, how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?
For example, I have some JS code that will send from the client site to the server (something like a sniper, cross-platform). However, I only want to allow this from several websites, not others, so that even if other people copy the code and put it onto their website, it won't work.
In the general case you simply can't do it; you are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all those websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check the token's authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
See also this question on HTTP_REFERER
I haven't used this in practice, so there might be practicality issues I haven't counted on, but I thought I'd contribute the idea anyway. If I interpret correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
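A rough PHP sketch of steps 1-4 (the saveIdIpPair/loadIdIpPair helpers are hypothetical stand-ins for a database or cache, which would also enforce the expiry from step 5):
// Steps 1-2: when serving the page, mint an ID and record it with the client's IP.
$id = bin2hex(random_bytes(16));
saveIdIpPair($id, $_SERVER['REMOTE_ADDR'], time() + 600); // hypothetical storage helper
// ...embed $id in the served page so the JS can send it back...

// Step 4: when the JS request arrives, only respond if the ID/IP pair is on file.
$pair = loadIdIpPair($_POST['id'] ?? ''); // hypothetical lookup helper
if (!$pair || $pair['ip'] !== $_SERVER['REMOTE_ADDR'] || $pair['expires'] < time()) {
    http_response_code(403);
    exit;
}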
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real time, but it will at least prevent someone from making another web page that piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall configuration to only allow communication from a specific set of IP addresses. However, even the latter can be spoofed by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request-response mechanism incorrectly.
"For example, I have a js code that will send from client site to server (something like a sniper, cross platform)."
I assume what you're saying is that you have some JavaScript code served up on some website on your 'whitelist' which redirects the user to your website, and it's on your website that you want to check that the user came from the whitelisted site?
Aside from setting a cookie (which might not be possible across domains), you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
"so, how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?"
I think you could sign every request (from the whitelist) so that the signature is valid for that one request only. I assume using uniqid() for this is safe (enough?).
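A bare sketch of that single-use idea (session storage here is just a placeholder; as noted above, it really only works if you control the whitelisted sites):
// Issue a single-use token when serving the page to a whitelisted site.
$token = uniqid('', true); // random_bytes() would be cryptographically stronger
$_SESSION['request_tokens'][$token] = true;
// ...embed $token in the page/JS served to the whitelisted site...

// Validate and burn the token when the request comes back.
$sent = $_POST['token'] ?? '';
if (empty($_SESSION['request_tokens'][$sent])) {
    http_response_code(403);
    exit;
}
unset($_SESSION['request_tokens'][$sent]);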