This is a question of security, so I am not looking for a solution on how to do this, I just want to make sure that it cannot be done.
Let's say I have a file called login.php, hosted online and running live at http://www.rimmer.sk/login.php
Now, let's imagine this file looks like this:
<?php
if (isset($_POST['register'])){
echo 'all is done !';
}
?>
Question: Can you, externally, send $_POST['register'] to my file, or can this be done only internally from files hosted within the same virtualhost?
It can be done. Everyone can send you a POST (or a GET, for that matter) request. There is no limit that forbids requests from outside your virtualhost.
(Maybe that's a problem for you, maybe it isn't; but note that not everyone can set your $_SESSION, so an external domain cannot alter that.)
In short: yes, it can be posted from an external site.
Yes this can be done very easily. Take a look at: http://php.net/manual/en/book.curl.php
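For illustration, a minimal cURL sketch that any machine on the internet could run against the URL from the question (nothing about it depends on your virtualhost):
<?php
// Sends $_POST['register'] to login.php from a completely external machine.
$ch = curl_init('http://www.rimmer.sk/login.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(['register' => '1']));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

echo $response; // prints "all is done !" from your script
?>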
Of course I can post from an external location; after all, that's what the user's browser does when they submit the form. I can therefore write a script to post the register field to your server with ease.
What use case are you imagining? There are lots of security options (firewalls etc) but without knowing what you are trying to achieve, it's hard to give specifics.
One way of denying script attacks is to generate one time passwords on the server that you send to the browser with each registration form, then when you get a response back, check that the OTP is valid. This at least adds another layer of security.
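A minimal sketch of that idea, reusing the register field from the question (the otp field name and session key are illustrative, not from the original answer):
<?php
session_start();

if (isset($_POST['register'])) {
    // Reject the request unless it carries the one-time password we issued earlier.
    if (!isset($_POST['otp'], $_SESSION['otp']) || !hash_equals($_SESSION['otp'], $_POST['otp'])) {
        http_response_code(403);
        exit('Invalid or missing one-time password');
    }
    unset($_SESSION['otp']);   // single use: invalidate immediately
    echo 'all is done !';
    exit;
}

// Issue a fresh OTP and embed it in the registration form.
$_SESSION['otp'] = bin2hex(random_bytes(32));
echo '<form method="post">
    <input type="hidden" name="otp" value="' . $_SESSION['otp'] . '">
    <button type="submit" name="register" value="1">Register</button>
</form>';
?>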
But as I say ... without knowing more it's hard to be specific.
or can this be done only internally from files hosted within the same virtualhost?
Quite the contrary. This cannot be done only internally. In fact, a form is never sent from server internals at all; it is always sent from the user's browser (or any other HTTP client), which is external by definition.
Related
I am new to the world of programming and I have learnt enough about basic CRUD-type web applications using HTML-AJAX-PHP-MySQL. I have been learning to code as a hobby and as a result have only been using a WAMP/XAMP setup (localhost). I now want to venture into using a VPS and learning to set it up and eventually open up a new project for public use.
I notice that whenever I send form data to my PHP file using AJAX or even a regular POST, if I open the Chrome debugger and go to "Network", I can see the data being sent, and also which backend PHP file it is being sent to.
If a user can see this, can they intercept this data, modify it, and send it to the same backend PHP file? If they create their own simple HTML page and send the POST data to my PHP backend file, will it work?
If so, how can I avoid this? I have been reading up on using HTTPS but I am still confused. Would using HTTPS mean I would have to alter my code in any way?
The browser is obviously going to know what data it is sending, and it is going to show it in the debugger. HTTPS encrypts that data in transit and the remote server will decrypt it upon receipt; i.e. it protects against any 3rd parties in the middle being able to read or manipulate the data.
This may come as a shock to you (or perhaps not), but communication with your server happens exclusively over HTTP(S). That is a simple text protocol. Anyone can send arbitrary HTTP requests to your server at any time from anywhere, HTTPS encrypted or not. If you're concerned about somebody manipulating the data being sent through the browser's debugger tools, your concerns are entirely misdirected. There are many simpler ways to send any arbitrarily crafted HTTP request to your server without even going to your site.
Your server can only rely on the data it receives and must strictly validate the given data on its own merits. Trying to lock down the client side in any way is futile.
This is even simpler than that.
Whether you are using GET or POST to transmit parameters, the HTTP request is sent to your server by the user's client, whether it's a web browser, telnet or anything else. The user can know what these POST parameters are simply because it's the user who sends them - regardless of the user's personal involvement in the process.
You are taking the problem from the wrong end.
One of the most important rules of programming is:
Never trust user input. Users can and will make mistakes, and some of them will try to damage you or steal from you.
Welcome to the club.
Therefore, you must not allow your code to perform any operation that could damage you in any way if the POST or GET parameters you receive aren't what you expect, be it by mistake or out of malicious intent. If your code, by the way it's designed, leaves you vulnerable to harm simply because someone sends specific POST values to one of your pages, then your design is at fault and you should redo it with that problem in mind.
Since this is a major issue in program design, you will find plenty of documentation, tutorials and tips on how to prevent your code from turning against you.
Don't worry, it's not that hard to handle, and the fact that you came up with this concern by yourself shows how good you are at figuring things out and how committed you are to producing good code; there is no reason why you should fail.
Feel free to post another question if you are stuck regarding a particular matter while taking on your security update.
HTTPS encrypts data in transit, so it won't address this issue.
You cannot trust anything client-side. Any data sent via a webform can be set to whatever the client wants. They don't even have to intercept it. They can just modify the HTML on the page.
There is no way around this. You can, and should, do client side validation. But, since this is typically just JavaScript, it can be modified/disabled.
Therefore, you must validate all data server side when it is received. Digits should be digits, strip any backslashes or invalid special characters, etc.
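For example, a short validation sketch (the field names and limits are made up purely for illustration):
<?php
// Strict server-side validation: reject anything that is not exactly what we expect.
$age   = filter_input(INPUT_POST, 'age', FILTER_VALIDATE_INT, ['options' => ['min_range' => 0, 'max_range' => 150]]);
$email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$name  = trim($_POST['name'] ?? '');

if ($age === false || $age === null || $email === false || $email === null || $name === '' || strlen($name) > 100) {
    http_response_code(400);
    exit('Invalid input');
}
// The values now have the right shape; still escape them for wherever they end up (HTML, SQL, ...).
?>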
Everyone can send whatever they want to your application. HTTPS just means that they can't see and manipulate what others send to your application. But you always have to work under the assumption that what is sent to your application as POST, GET, COOKIE or whatever is evil.
In HTTPS, the TLS channel is established before any HTTP data is transferred, so from that point of view there is no difference between GET and POST requests.
It is encrypted, but that is only meant to protect against man-in-the-middle (MITM) attacks.
Your PHP backend has no idea where the data it receives comes from, which is why you have to assume any data it receives comes straight from a hacker.
Since you can't protect against unsavoury data being sent, you have to ensure that you handle all data received safely. Some steps to take involve ensuring that any files uploaded can't be executed (e.g. if someone uploads a PHP file instead of an image), ensuring that data received never directly interacts with the database (e.g. https://xkcd.com/327/), and ensuring you don't trust someone just because they say they are logged in as a user.
To protect further do some research into whatever you are doing with the received post data and look up the best practices for whatever it is.
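For the database point in particular, parameterised queries are the usual way to keep received data from being interpreted as SQL. A minimal PDO sketch (the connection details and table/column names are placeholders, not from the answer above):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// The user-supplied value is bound as data, never spliced into the SQL string.
$stmt = $pdo->prepare('SELECT id, name FROM students WHERE name = :name');
$stmt->execute([':name' => $_POST['name'] ?? '']);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
?>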
It's rare, but I have to pay MS a compliment: the ASP.NET WebMethod (AJAX) authorization is a dream, given my desire for both security and laziness.
Encosia's "ASP.NET page methods are only as secure as you make them" absolutely fits those needs. ASP.NET is actually workable for me now. Free at last! (From the noble but disastrous AJAXControlToolkit.)
Anyways, the problem is, that's for work. I'm not buying the MS architecture when LAMP's out there for free. I'm new to AJAX, and I can't seem to find a clear answer on how to authorize AJAX calls to PHP in the same way as Encosia above.
Can anyone suggest the PHP equivalent of what Encosia does in the link above?
Thanks in advance!
More Details
OK, let me be more specific. Encosia's solution above gives a 401 denied to anyone not logged in trying to access a webmethod. Neat, clean, easy. Before, I tried to use session data to grant access, but it, unknown to me, forced synchronous mode. No no.
I need both, for my site. I need to be able to give 401 denieds on certain pages if a user isn't logged in. I need to be able to allow anyone to call other PHP scripts via AJAX regardless of login.
Clarity
Bottom line: I don't want anyone accessing certain AJAX PHP scripts unless they are logged in. I don't care about the response or any other details as long as it's still AJAX. How to?
Not really clear from the question, but if you want to allow access to your AJAX server-side listening scripts (maybe XML or JSON output) only to users that have either authed or are on the related page, then how about adding a session identifier to your JS AJAX requests? In the server-side script you can check that identifier against, say, a DB table holding your current sessions.
For extra security, you could check against IP, a cookie etc. These are all values that you can set when the session is started.
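A rough PHP sketch of the 401 behaviour being asked for (the file name and session key are illustrative, not a standard API):
<?php
// protected_ajax.php - refuses to run unless the session says the user is logged in.
session_start();

if (empty($_SESSION['user_id'])) {
    http_response_code(401);   // the JS caller sees 401 Unauthorized, much like the ASP.NET behaviour
    exit;
}

header('Content-Type: application/json');
echo json_encode(['data' => 'only for logged-in users']);
?>
Endpoints that should stay open to anyone would simply omit the check.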
The main thing you need to ask yourself is this:
If a user is either logged in or browsing, what kind of access to the database do you really want / need to give? Each application will have its own needs. If you are going to have AJAX listeners on your server, then all that's needed is a quick look at Firebug (example) to see where your scripts are and the format of the requests. This could allow a potential security hole to be found. Make sure all your incoming requests are correctly treated so as to remove the possibility of injection attacks.
I am designing a website, and I really want it to be as secure as possible.
I have a private folder that cannot be accessed (.htaccess) which contains all my PHP classes (and similar structures), and a public folder that has my Javascript, CSS and a PHP file for the Javascript (via AJAX) to interface with, which in turn accesses the classes in the private folder.
Now here is my issue, and for the life of me I just cannot seem to get my head around this one:
If someone were to look at the JS code they would see the commands / data being sent to the publicly available PHP script (as described above), therefore getting an idea of what commands to use to interface with that script and potentially gain access to stored data etc.
Now, I know that AJAX won't work remotely etc., but as long as you got the commands from the AJAX script you could interface directly with it, so I thought I would do a referrer check on the interface script. That worked perfectly until I realized how easy it is to spoof your referrer header!
Does anyone have any ideas on how to secure this? If this just sounds like complete garbage, tell me and I'll try to break it down further.
AJAX and JS are client-based - everything they do, any user can do. If you expose an API method to AJAX, you expose it to the user - there's nothing you can do about that. That's your design choice. You could of course obfuscate your API calls, but that doesn't really do anything other than make it less user-friendly.
The bottom line: don't trust any user input, regardless of whether it came from your AJAX code or somewhere else.
Well, someone scripting your site directly would only be able to access the same stuff they already can in the UI, right?
If you have a script function doAdminStuff(), you would check server side that the user is logged in AND is an admin before taking any actions.
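A hypothetical server-side counterpart to doAdminStuff(), just to show where the check lives (the session keys are made up):
<?php
// admin_stuff.php - the client-side JS may expose this URL, but the action only
// happens if the server-side session says this user is an admin.
session_start();

if (empty($_SESSION['user_id']) || empty($_SESSION['is_admin'])) {
    http_response_code(403);
    exit('Admins only');
}

// ... perform the admin action ...
?>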
Relax, dude.
This is SPARTA! WEB.
Every site in the world is "exposed" like this. That's the way the web works, Ajax or non-ajax based.
You can't help it, yet there is no harm in this. There is nothing to secure.
Here are my recommendations:
Use SSL if you are not already.
Use a (software) token for all requests that you want to protect.
To discourage others from reading your javascript files, you can obfuscate them. Dean Edwards' packer is a famous one.
Write a script that sniffs logs and data for potentially bad activity. If you are not logging all the activity you need to (like if the apache logs are not enough) consider writing activity to your own log.
Don't be paranoid, just filter input params. Maybe you should switch on SSL so your AJAX request content will be hard to sniff, etc.
Are you using the ajax-thing only for security reasons or for any other reason? Because you can build up an architecture like this (a PHP file as "gateway" and all other PHP files in an access-restricted folder) without using AJAX as well. If you want to check it out, you could take a look at the default folder structure of Zend Framework. This structure has the advantage that there is no logic visible to your users at all.
Also important is that IE (at least IE 6 & 7, I think) does not send a referrer at all by default, so this probably wouldn't work anyway.
I have a script that uses JSONP to make cross-domain AJAX calls. This works great, but my question is: is there a way to prevent other sites from accessing and getting data from these URLs? I basically would like to make a list of sites that are allowed and only return data if they are in the list. I am using PHP and figure I might be able to use "HTTP_REFERER", but have read that some browsers will not send this info... Any ideas?
Thanks!
There really is no effective solution. If your JSON is accessible through the browser, then it is equally accessible to other sites. To the web server, requests originating from a browser or from another server are virtually indistinguishable aside from the headers. Like ILMV commented, referrers (and other headers) can be falsified. They are, after all, self-reported.
Security is never perfect. A sufficiently determined person can overcome any security measures in place, but the goal of security is to create a deterrent high enough that most people would be dissuaded from putting in the time and resources necessary to compromise it.
With that thought in mind, you can create a barrier to entry high enough that other sites would probably not bother making requests. You can generate single-use tokens that are required to grab the JSON data; once a token is used, it is invalidated. In order to retrieve a token, the web page must be requested; the token is embedded within the page in JavaScript and is then put into the AJAX call for the JSON data. Combine this with time-expiring tokens and sufficient obfuscation in the JavaScript and you've created a high enough barrier.
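A minimal sketch of the endpoint side of that idea (the session key, parameter name and five-minute expiry are illustrative; the page that embedded the token in its JavaScript is assumed to have stored it in the session as shown in the comment):
<?php
// json_data.php - single-use, time-expiring token check as described above.
// The page rendering the JavaScript is assumed to have done:
//   $_SESSION['json_token'] = ['value' => bin2hex(random_bytes(32)), 'expires' => time() + 300];
session_start();

$supplied = $_GET['token'] ?? '';
$stored   = $_SESSION['json_token'] ?? null;
unset($_SESSION['json_token']);   // single use: invalidated whether or not the check passes

if (!$stored || time() > $stored['expires'] || !hash_equals($stored['value'], $supplied)) {
    http_response_code(403);
    exit;
}

header('Content-Type: application/json');
echo json_encode(['items' => []]);
?>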
Just remember, this isn't impossible to circumvent. Another website could extract the token out of the JavaScript, and/or intercept the AJAX call and hijack the data at multiple points.
Do you have access to the servers/sites that you would like to give access to the JSONP?
What you could do, although it's not ideal, is to add a record to a db on page load recording the IP that is allowed to view the JSONP, then on the JSONP load check whether that record exists. Perhaps have an expiry on the record if appropriate.
e.g.
http://mysite.com/some_page/ - user loads page, add their IP to the database of allowed users
http://anothersite.com/anotherpage - as above, add to database
load JSONP, check the IP exists in the database.
After one hour delete the record from the db, so another page load would be required for example
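A rough sketch of the JSONP-endpoint side of that flow (the SQLite storage, table, column and file names are all illustrative; the allowed page loads are assumed to insert the visitor's IP and a timestamp into the same table):
<?php
// jsonp_endpoint.php - only answers if this client's IP was recorded by an allowed
// page load within the last hour.
$db = new PDO('sqlite:' . __DIR__ . '/allowed_ips.sqlite');
$stmt = $db->prepare('SELECT seen FROM allowed WHERE ip = :ip');
$stmt->execute([':ip' => $_SERVER['REMOTE_ADDR']]);
$seen = $stmt->fetchColumn();

if ($seen === false || time() - (int)$seen > 3600) {
    http_response_code(403);   // unknown or expired IP
    exit;
}

$callback = preg_match('/^[A-Za-z_$][\w$]*$/', $_GET['callback'] ?? '') ? $_GET['callback'] : 'callback';
header('Content-Type: application/javascript');
echo $callback . '(' . json_encode(['items' => []]) . ');';
?>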
This could quite easily be worked around, though, if the scraper (or other sites) managed to work out what method you are using to allow users to view the JSONP; they'd only have to hit the page first.
How about using a cookie that holds a token used with every jsonp request?
Depending on the setup you can also use a variable if you don't want to use cookies.
Working with importScripts from a Web Worker is much the same as JSONP.
Do a double check like theAlexPoon said: main script to web worker, web worker to server and back, with a security query each way. If the web worker answers the main script without being asked, or with the wrong token, it's better to forward your website to nirvana. If the server is asked with the wrong token, don't answer. Cookies will not be sent with an importScripts request, because document is not available at web worker level. Always send security-relevant cookies with a POST request.
But there are still a lot of risks. The man in the middle knows how.
I'm certain you can do this with htaccess -
Ensure your headers are sending "HTTP_REFERER" - I don't know any browser that won't send it if you tell it to. (If you're still worried, fall back gracefully.)
Then use htaccess to allow/deny access from the right referer.
# allow only requests whose Referer header matches domain.com
SetEnvIf Referer "domain\.com" allowed_referer
Order Deny,Allow
Deny from all
Allow from env=allowed_referer
I'm currently building an app that renders RSS and ATOM Feeds on the client side. I can't directly send an ajax request to "https://stackoverflow.com/feeds/tag/php", but I can send a request to my server that just echos the XML File like:
<?php
echo file_get_contents('https://stackoverflow.com/feeds/tag/php');
?>
What are the security implications(if any) on doing this?
StackOverflow is now in a position to tamper with the data your clients get and replace it with something malicious or annoying. (To be fair, they could do so even if you were able to use the URL directly.)
Your clients can now cause your server to make a lot of requests to StackOverflow, which may block you for DoSing the site or something like that. (I do hope you apply a modicum of caching.)
You may be able to use the Filter functions to sanitize the data before the echo. In general, unless the host you're getting the data from is controlled by you and doesn't allow general users to upload or add data that will be echo'ed then I wouldn't trust it. You just don't ever know what someone might be able to get through.
I would write a script which runs on cron, fetches the data and writes it to your own database/filesystem/cache (your choice), and serves it to users asynchronously.
You never know how slowly the other server responds, and if it really responds slowly, it slows down your site as well.
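A rough sketch of the cached variant (the file path and refresh interval are arbitrary); a cron job could handle the refresh instead, with the script then only serving the local copy:
<?php
// feed_proxy.php - serve a locally cached copy of the feed and refresh it at most
// once per hour, so a slow or unreachable upstream cannot slow down your own site.
$cacheFile = __DIR__ . '/cache/stackoverflow-php.xml';
$maxAge    = 3600; // seconds

if (!is_file($cacheFile) || time() - filemtime($cacheFile) > $maxAge) {
    $fresh = @file_get_contents('https://stackoverflow.com/feeds/tag/php');
    if ($fresh !== false) {
        file_put_contents($cacheFile, $fresh);   // only replace the cache on a successful fetch
    }
}

if (!is_file($cacheFile)) {
    http_response_code(503);                     // nothing cached yet and the fetch failed
    exit;
}

header('Content-Type: application/xml');
readfile($cacheFile);
?>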
You have to send
header('Content-Type: application/xml');
Then the client will handle it as XML, and no XSS can occur as far as I can tell.