Is it possible to modify JavaScript source code on an HTTPS connection before it gets executed by the browser? This modification doesn't necessarily have to come from a man-in-the-middle; it can also come from the intended recipient of the script. Also, is there a particular type of request that cannot be made to a PHP server using cURL? In other words, what are the limitations of cURL?
If you have secrets to hide, don't include them in your Javascript.
Yes, it's quite possible. With something like Greasemonkey, or a browser plug-in, the user could certainly transform the source before executing it.
Moreover, since your source is eventually delivered in plain text, even if obfuscated, the user can reverse engineer it to find the secret, and often can do so quite quickly.
I am new to the world of programming and I have learnt enough about basic CRUD-type web applications using HTML-AJAX-PHP-MySQL. I have been learning to code as a hobby and as a result have only been using a WAMP/XAMP setup (localhost). I now want to venture into using a VPS and learning to set it up and eventually open up a new project for public use.
I notice that whenever I send form data to my PHP file using AJAX or even a regular POST, if I open the Chrome debugger and go to "Network", I can see the data being sent, and also which backend PHP file it is being sent to.
If a user can see this, can they intercept this data, modify it, and send it to the same backend PHP file? If they create their own simple HTML page and send the POST data to my PHP backend file, will it work?
If so, how can I avoid this? I have been reading up on using HTTPS but I am still confused. Would using HTTPS mean I would have to alter my code in any way?
The browser is obviously going to know what data it is sending, and it is going to show it in the debugger. HTTPS encrypts that data in transit and the remote server will decrypt it upon receipt; i.e. it protects against any 3rd parties in the middle being able to read or manipulate the data.
This may come as a shock to you (or perhaps not), but communication with your server happens exclusively over HTTP(S), a simple text protocol. Anyone can send arbitrary HTTP requests to your server at any time from anywhere, HTTPS-encrypted or not. If you're concerned about somebody manipulating the data being sent through the browser's debugger tools, your concerns are entirely misdirected: there are many simpler ways to send an arbitrarily crafted HTTP request to your server without even going to your site.
Your server can only rely on the data it receives and must strictly validate the given data on its own merits. Trying to lock down the client side in any way is futile.
This is even simpler than that.
Whether you are using GET or POST to transmit parameters, the HTTP request is sent to your server by the user's client, whether it's a web browser, telnet or anything else. The user can know what these POST parameters are simply because it's the user who sends them - regardless of the user's personal involvement in the process.
You are taking the problem from the wrong end.
One of the most important rules of programming is: never trust user input! Users can and will make mistakes, and some of them will try to damage you or steal from you.
Welcome to the club.
Therefore, you must not allow your code to perform any operation that could damage you if the POST or GET parameters you receive aren't what you expect, whether by mistake or with malicious intent. If your code, by the way it's designed, leaves you vulnerable simply because specific POST values were sent to one of your pages, then the design is at fault and you should redo it with that problem in mind.
Since this is a major issue when designing programs, you will find plenty of documentation, tutorials and tips on how to prevent your code from turning against you.
Don't worry, it's not that hard to handle, and the fact that you came up with this concern by yourself shows how good you are at figuring things out and how committed you are to producing good code; there is no reason why you should fail.
Feel free to post another question if you are stuck regarding a particular matter while taking on your security update.
HTTPS encrypts in-transit, so won't address this issue.
You cannot trust anything client-side. Any data sent via a webform can be set to whatever the client wants. They don't even have to intercept it. They can just modify the HTML on the page.
There is no way around this. You can, and should, do client side validation. But, since this is typically just JavaScript, it can be modified/disabled.
Therefore, you must validate all data server side when it is received. Digits should be digits, strip any backslashes or invalid special characters, etc.
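As a minimal sketch of that kind of server-side validation (field names and rules are illustrative assumptions, written here as JavaScript for a Node-style backend, though the same logic applies directly to PHP):

```javascript
// Minimal server-side validation sketch (illustrative field names, not from
// the question). Never trust client input: re-check every field on arrival.
function validateSignup(input) {
  const errors = [];

  // "age" must be digits only
  if (!/^\d+$/.test(String(input.age ?? ''))) {
    errors.push('age must be a whole number');
  }

  // "username" restricted to a whitelist of safe characters
  if (!/^[A-Za-z0-9_]{3,20}$/.test(String(input.username ?? ''))) {
    errors.push('username must be 3-20 letters, digits or underscores');
  }

  return { ok: errors.length === 0, errors };
}
```

Whitelisting (only accept known-good patterns) is generally safer than trying to strip out known-bad characters.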
Everyone can send whatever they want to your application. HTTPS just means that they can't see and manipulate what others send to your application. But you always have to work under the assumption that what is sent to your application as POST, GET, COOKIE or whatever is evil.
In HTTPS, the TLS channel is established before any HTTP data is transferred, so from that point of view there is no difference between GET and POST requests.
The traffic is encrypted, but that only protects against man-in-the-middle attacks.
Your PHP backend has no idea where the data it receives comes from, which is why you have to assume any data it receives comes straight from a hacker.
Since you can't protect against unsavoury data being sent you have to ensure that you handle all data received safely. Some steps to take involve ensuring that any files uploaded can't be executed (i.e. if someone uploads a php file instead of an image), ensuring that data received never directly interacts with the database (i.e. https://xkcd.com/327/), & ensuring you don't trust someone just because they say they are logged in as a user.
To protect further do some research into whatever you are doing with the received post data and look up the best practices for whatever it is.
This question already has answers here: How do I hide javascript code in a webpage? (12 answers). Closed 9 years ago.
Is it possible to hide code written in JavaScript (jQuery)?
I have written a program in which I use the load() function a lot.
Everyone can see my page addresses; is that a risk?
Something like this:
load('account/module/message/index.php');
load('account/module/ads/index.php');
load('account/module/stat/index.html');
No.
JavaScript is client side, therefore all code written is, in some fashion, directly visible to the client (end user). You can obfuscate it to make it more difficult to decipher, but in the end it is still accessible.
If security is a concern, you can keep "business logic" in PHP and access it from JavaScript (e.g. Ajax calls), but the endpoints would still be visible.
On every site that uses JavaScript, that JavaScript code is visible to the end user. Not only that, but the end user is able to debug it, and to change either the variable contents or even the code itself at any moment.
Despite this, millions of sites use Javascript, and many of those sites are considered secure. The point is that while JS code may be visible to the end user, it doesn't necessarily mean your system is insecure. You just have to write your system with the understanding of how it works.
Here are some pointers:
If you put secrets (eg passwords or business logic that must be kept private) into your JS code, then those secrets are not secure. Don't do this; keep those details on the server.
If your JS code does any kind of validation, then that validation could be bypassed, so your server-side code must also do the same validation.
If your JS code makes calls that run code on the server (e.g. your load(...) calls), then the server must verify that the user has permission to do so; don't rely on the JS code to do that check.
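A hedged sketch of what that last server-side check might look like (the role model and route table are assumptions, not from the question; the paths mirror the load() calls above):

```javascript
// Sketch of a server-side authorization check. The client's JS deciding
// what to load() is not a security boundary; the server must re-check the
// session's permissions on every request.
const ROUTE_ROLES = {
  'account/module/message/index.php': ['user', 'admin'],
  'account/module/ads/index.php':     ['admin'],
};

function isAuthorized(session, path) {
  const allowed = ROUTE_ROLES[path];
  if (!allowed) return false;              // unknown route: deny by default
  return allowed.includes(session.role);   // only listed roles may proceed
}
```

Deny-by-default for unknown routes means a forgotten entry fails closed rather than open.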
You can't "hide" client-side code; the most you could hope to do is obfuscate it, which to me is largely pointless in the context of the web - code that is delivered to the client should be exposable without being dangerous - and you can hardly obfuscate URLs anyway.
For parts that shouldn't be exposed, don't expose them. Do server-side generation and output only what is needed, what is "safe". Some trouble can come from mixing the two (say, wanting to hide logic by running it on the server, but still delivering the results dynamically using AJAX), because your logic is then indirectly exposed: although the code can't be seen, the results can be gathered, perhaps from a different domain that wants to use your content.
You can try using a minification tool like YUI Compressor: http://yui.github.io/yuicompressor/
That way your code will not be readable for the end user... but hiding it is impossible.
Hiding values
If you want to keep your values private so the user can't read them, obfuscation won't really be your choice: your source will be minified and a mess to read, but it's still there.
Your option here is some kind of encoding that is decoded when the page loads, applied only to the strings or values you care about - for example base64. (Note that base64 is an encoding, not encryption, and a hash like SHA-1 is one-way and can't be reversed at all.) But anyone can decode it if they really want to.
Definitely not, because JavaScript is executed client side. So either, if possible, do all the operations with server-side scripting (JSP/PHP/ASP), or minify/compress your JavaScript code after moving it to a separate file.
Unfortunately not.
Javascript runs on the client machine in the web browser and cannot be hidden from someone looking at the source code.
However this does not pose a security risk for your application provided nothing internal is visible should you visit those pages in your browser.
Process all your "secret" code on the server, where the user doesn't have access to it. Send only "non-secret" things to the client, like the UI. If you can't avoid sending secret code to the client, obfuscate it to make it more difficult to read.
Put your JavaScript code in an external file, and then minify it; maybe this helps you.
To convert normal JavaScript into minified JavaScript, try http://jscompress.com/
I am curious to know if detecting the visitor browser with client-side script is more reliable than server-side script?
It is easy and popular to get the visitor's browser both in PHP and in JavaScript. In the former, we analyze $_SERVER['HTTP_USER_AGENT'], sent in the request headers. However, headers are not always reliable. Can JavaScript be more reliable, as it gets the visitor's browser from the visitor's machine?
I mean is it possible to miss the USER AGENT in header and get the browser by javascript?
UPDATE: Please do not suggest methods such as jQuery, as I am familiar with them. I just want to know whether it's possible for the header's user agent to fail while JavaScript can still detect the browser - a comparison of client-side and server-side methods.
The User-Agent can be tested server side or client side, either way it can be spoofed.
You can finger print the browser with JavaScript (seeing what methods and objects the browser provides) and use that to infer the browser, but that is less precise and JavaScript can be disabled / blocked / edited by the client.
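A rough sketch of that kind of object-probing (the specific markers are illustrative assumptions: they vary between browser versions, and any script can define or delete them, so treat this as a fragile heuristic):

```javascript
// Infer the browser family by probing for vendor-specific globals rather
// than trusting navigator.userAgent. Takes the global object as a
// parameter so the logic can be exercised outside a browser.
function guessBrowser(win) {
  if (win.chrome) return 'chromium';                 // Chromium-based browsers expose window.chrome
  if ('InstallTrigger' in win) return 'firefox';     // legacy Firefox-only global
  if (win.document && win.document.documentMode) return 'ie'; // IE-only documentMode
  return 'unknown';
}
```

In a page you would call `guessBrowser(window)`; the parameter exists only to keep the sketch testable.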
So neither is entirely reliable.
It is generally a bad idea to do anything based on the identity of the browser, though.
OK. So the User-Agent header is not required by the RFC:
User agents SHOULD include this field with requests.
https://www.rfc-editor.org/rfc/rfc2616#section-14.43
Which means the server side detection is not guaranteed.
Similarly client side detection typically relies on navigator.userAgent but that is also provided by the user agent (browser or what not) and similarly cannot be guaranteed.
Thus the answer to your question is 50/50 :)
Now, if you are trying to figure out how to handle different browsers - feature detection is your safest bet here - but that's a different question ;)
I would just use the server side detection.
If a user wants to mask their browser, their browser will likely be masked on both ends.
If you want to find out their browser for HTML compatibility, they should be expecting mildly broken pages if they've masked their browser (but you should always try your best not to have browser specific HTML). If it's for javascript compatibility, they should also be expecting some broken javascript.
Take a look at $.browser in jQuery (note that it was deprecated and removed in jQuery 1.9).
A different angle: why do we want to detect the browser?
In the case of analytics, there isn't much you can do really. Anyone that does a little research can send whatever user agent string they like, but who's going to go through all the trouble ;)
If we're talking about features to enable/disable on a website, you should really be going for feature detection. By focusing on what the browser can/can't do, instead of what it calls itself, you can generally expect that browser to perform whatever action reliably if the feature you need is present.
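A minimal feature-detection sketch (the features probed are just examples; the global object is passed in so the logic is testable outside a browser):

```javascript
// Feature detection: ask whether a capability exists instead of asking
// which browser claims to be running.
function supports(win, feature) {
  switch (feature) {
    case 'geolocation':
      return !!(win.navigator && 'geolocation' in win.navigator);
    case 'localStorage':
      return typeof win.localStorage !== 'undefined';
    default:
      return false;
  }
}
```

In a page: `if (supports(window, 'geolocation')) { /* use it */ } else { /* fall back */ }` - the branch taken depends on what the browser can do, not on its name.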
More info: http://jibbering.com/faq/notes/detect-browser/
One big advantage to use client-side javascript is that you can get much more information about the browser.
Here is an interesting example: https://panopticlick.eff.org/
I'm using AMFPHP to stream content from my server to my Flex application. Since Flash is a client-side technology, I would like to make it harder for people to grab protected files, so I created a system that streams an SWF file to another Flash player. I've done all the testing on URL streaming; now I want to pass the SWF file to the player as a ByteArray, since I think it's safer and harder to break, and in the future I might even do some encryption of my own once I become more familiar with bytes and bits. Anyway, is my method the best one (SWF to ByteArray), or is there a better one? If the ByteArray method is the best, I am facing a problem outputting the SWF file in the right format. I'm using a very primitive method:
$file = file_get_contents($file);        // read the whole SWF into a binary string
$byteArr = str_split($file);             // split into one-byte chunks
$byteArr = array_map('ord', $byteArr);   // convert each chunk to its integer byte value
$return['fileBin']->data = $byteArr;     // attach to the AMF response
...and I return the $return variable.
Your help is highly respected and appreciated.
Rami
Hm...
I use something very similar currently (I'm developing an MMORPG...) - I decided I needed the game content preloaded so the player doesn't have to wait so much. Unfortunately, these files would be easy to just browse and even decompile.
Because of that I made a custom compression+encryption combo that needs a password. This password is sent by the server when needed.
However, in your case this is not the best thing to do. ByteArray is not anything hard to break - essentially it will send raw byte data. Unless of course you encrypt it (+1).
Another good thing to do would be to tokenize the requests (+1). I.e. in PHP you generate a "token" and write it to a token-list file. The token would be something like 32 alphanumeric characters. This token is also passed to the SWF object / page requested, which immediately uses it in its request. The request works ONLY with a valid token, that is, one that was recorded in the token list; when it is used, it is removed instantly. There could also be a time limit on each token, like 15 or 20 seconds: if it is not used by then, remove it. Client-side, if loading takes too long and the time limit is exceeded, the SWF would need to be reloaded (not necessarily manually - a script or an iFrame can reload just the SWF).
EDIT : the author of the question asked differently - his aim is apparently to make the SWF file requested reachable / loadable ONLY by the loader application. Nothing else. So:
I'm afraid this is a hard thing, but... it's not really possible to make something impossible to crack or hack into. You CAN'T make something that can be loaded only by your application. If you think about it, this is impossible even logically: in both cases (user-requested AND application-requested), the user's computer requests one file, and it is easy to track that request and replicate it, or to simply intercept it. Decompilation of the SWF would be used if neither of those works. A little bit about countering all three possibilities:
A) track and replicate
This is easily doable with tools such as Firebug on Firefox, or the equally good (really) Inspector in Safari. With these, it is easy to see what was requested, the request headers, and the response headers (luckily the requested file itself is not recorded unless it is an HTML or plain-text MIME type).
There is no use in obfuscating the request URL in code, if it will ultimately be shown as requested in the console of one of these tools. Solutions would be:
make the request one-time only (shown above - tokenization)
use a special header for the request, such as "Connection: keep-alive" - this makes it quite a bit harder for casual attackers, because they will often just copy the URL and request it in a browser, where the connection will typically be "Connection: close"; check for that in server-side code and accept only the keep-alive (or your "special") requests
use a protocol different from HTTP - unfortunately this involves server-side socket functions for communicating on ports other than HTTP's 80. Most hosting providers don't let users do this, but if you can - and want security - do it: don't use any well-known protocol, but rather something that suits just your needs. After all, you control both the server side and the client side, so the communication can be done however you like
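The header trick above might be sketched as a server-side predicate like this (the accepted value is an assumption; as the answer notes, any client can forge headers, so this is hardening, not security):

```javascript
// Accept only requests whose Connection header matches what our own
// client sends ("keep-alive" here). Naively pasting the URL into another
// tool often sends a different value; a determined attacker can still
// forge it, so treat this as a speed bump, not a wall.
function looksLikeOurClient(headers) {
  const conn = (headers['connection'] || '').toLowerCase();
  return conn === 'keep-alive';
}
```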
B) interception
This is a somewhat higher-level attack, but if the attacker is skilled and has SOME resources, it is not so hard to do. Essentially it comes down to having a proxy of a kind (hence the resources - it needs a server with sockets enabled), which the attacker connects through with his browser. The proxy will not only forward content, but also record it. Countering:
use different protocol
encryption of that protocol - because even if the data is recorded, it doesn't matter: it is no longer just HTTP headers followed by raw file data (the headers are easily removed and "voilà") - only the client-side application knows how to use this data
C) decompilation
This isn't even so hard to do - SWF decompilers are nothing new now, and any code in your application is freely available to the attacker. Even if you use a different protocol, the attacker can, with more or less effort, break into it. Solutions:
NONE - you can just make it harder for the attacker - obfuscate your code, have a lot of it (if you really WANT the security... chaos might just be your friend), use cross-client cross-server requests for the actual code - dynamically loaded secondary SWFs that load the code...
Is there a way to detect in my script whether a request is coming from a normal web browser or from a script executing cURL? I can see the headers and can distinguish using the User-Agent and a few other headers, but fake headers can be set in cURL, so I am not able to track the request.
Please suggest ways of identifying cURL or other similar non-browser requests.
The only way to catch most "automated" requests is to code in logic that spots activity that couldn't possibly be human with a browser.
For example: hitting pages too fast, filling out a form too fast, or referencing an external resource in the HTML (like a fake CSS file served through a PHP script) and checking whether the requesting IP downloaded it in the previous stage of your site (a kind of reverse honeypot). You would need to exclude certain IPs/user agents from being blocked, though, otherwise you'll block Google's web spiders, etc.
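One of those heuristics, flagging requests that arrive faster than a human plausibly could, might be sketched like this (the window, threshold, and in-memory storage are illustrative assumptions):

```javascript
// Track recent request timestamps per IP and flag bursts that are too
// fast to be human. Thresholds here are guesses; tune for your traffic,
// and remember that shared IPs (NAT, proxies) cause false positives.
const WINDOW_MS = 10 * 1000;   // look at the last 10 seconds
const MAX_HITS  = 20;          // more than 20 hits in the window is suspicious

const hits = new Map();        // ip -> array of timestamps (ms)

function looksAutomated(ip, now = Date.now()) {
  const recent = (hits.get(ip) || []).filter(t => now - t < WINDOW_MS);
  recent.push(now);
  hits.set(ip, recent);
  return recent.length > MAX_HITS;
}
```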
This is probably the only way of doing it if curl (or any other automated script) is faking its headers to look like a browser.
Strictly speaking, there is no way.
Although there are indirect techniques, I would never discuss them in public, especially on a site like Stack Overflow, which encourages screen scraping, content swiping, autoposting and all this dirty robotic stuff.
In some cases you can use CAPTCHA test to tell a human from a bot.
As far as I know, you can't see the difference between a "real" call from your browser and one from cURL.
You can compare the headers (User-Agent), but that's all I know.