I'm using LoadVars to load a PHP URL that returns lots of sensitive information required by the Flash application. The only problem is that the URL can also be accessed via a web browser, which raises security issues if someone gets hold of it. Is it possible to make the PHP page accessible only via my Flash application?
Appreciate the help!
Thanks
No, there is no way to limit a page to a specific app, browser or user agent, since all of those things can be mimicked. If you are passing around sensitive information then you need to do authentication and use encrypted data transfer (HTTPS).
Regardless of how you attempt to make it accessible only from your Flash application, a determined user will still be able to view the page. It can be as simple as routing the requests through an HTTP debugging proxy like Charles, or watching the traffic with Firebug or Wireshark.
There are things you can do to make it harder to figure out what the data is when viewing the page directly. For instance, you can encrypt the data or output it as binary. But since SWF is an openly documented format, users can run your SWF through a decompiler or just inspect the ABC (ActionScript Byte Code) to see what is really going on.
The short answer is NO, you cannot protect the information available to the client side (Flash) from being accessible from other clients.
As long as the page is reachable over HTTP, a determined user can always find a way around any user-agent restrictions you impose.
One way to protect the data (other than using HTTPS) is to encrypt it at the server, send it over HTTP and then decrypt it in Flash using as3Crypto or some other cryptography library.
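For illustration, here is a minimal sketch of the server half of that idea, assuming AES-128-CBC (a mode as3Crypto can handle) and a hypothetical shared key. Keep in mind that any key compiled into the SWF can be recovered with a decompiler, so this only raises the bar:

    <?php
    // Minimal sketch: encrypt the sensitive payload before sending it over HTTP.
    // $sharedKey is hypothetical -- in practice it would be compiled into the SWF,
    // where a determined user can still extract it (see the answers above).
    $sharedKey = '0123456789abcdef';             // 16 bytes for AES-128
    $payload   = json_encode(['secret' => 'value']);

    $iv     = openssl_random_pseudo_bytes(16);   // fresh IV per response
    $cipher = openssl_encrypt($payload, 'aes-128-cbc', $sharedKey, OPENSSL_RAW_DATA, $iv);

    // Prepend the IV so the Flash client can initialize its decryption.
    header('Content-Type: application/octet-stream');
    echo base64_encode($iv . $cipher);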
Hope this helps,
Is it possible to read the cookies that are sent by a third-party homepage using PHP?
Concretely, I want to find out whether a page using GTM also sets _ga cookies.
I was thinking of a "virtual browser" solution on the server. Is that possible, and is anybody experienced with that?
Thanks!
No, because PHP runs on the server, and it only gets the cookies of its own domain.
Cookies are stored on the client (the browser), while PHP is executed on the server. The browser sends the cookie values along with the HTTP request, but only for the domain the request is addressed to.
Therefore, the PHP process only ever sees the cookies of its own domain.
And if you think about it, anything else would be a security flaw, because every site could read, for example, the session secrets of sessions that are open on another site!
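To make the mechanism concrete, this is the entire cookie surface a PHP script ever sees: the $_COOKIE superglobal, populated from the Cookie header the browser sent for this domain only:

    <?php
    // $_COOKIE is filled from the Cookie request header, which the browser
    // only sends for the current domain. Third-party cookies (e.g. _ga set
    // for another site) never show up here.
    var_dump($_COOKIE);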
I use a JSON API to get data for a website. I am aware of various methods I could use to secure it, but my situation is different from the common ones.
Because of cross-domain issues, I had to create an API folder with various PHP files that make cURL requests to the RESTful API. I then request these local PHP files through AJAX on my site. The next release should use JSONP to avoid this issue.
Many of these JSON requests contain sensitive information, so the first thing I did was check the HTTP referrer so people don't just grab the URL while inspecting the JavaScript code and run it in their browser. This is obviously not safe, nor should I rely on it.
Any data I post with the request will go through JavaScript, so something like an API key or token would be visible and would defeat the whole purpose.
Is there a way I can prevent these PHP files from being run outside the website, or basically make them inaccessible to visitors?
This has nothing to do with REST. You have a server-side REST client: you call the REST service with cURL, and the browser cannot see anything of that process. Unless you want to build your own REST service for this AJAX client, this is just a regular web application (from the perspective of the browser and the AJAX client, of course). As Lorenz said in the comment, you should use sessions as you normally would. That's all. If you want to restrict access to certain pages, you can use an access control solution; for example, role-based access control is very common.
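A minimal sketch of that, assuming your login code already sets a session key such as $_SESSION['user_id'] (a hypothetical name): each proxy file refuses to run the cURL request unless the AJAX call arrives with a valid session:

    <?php
    // api/get-data.php -- proxy endpoint, reachable only with a valid session.
    session_start();

    if (empty($_SESSION['user_id'])) {       // hypothetical session key set at login
        http_response_code(403);
        exit('Forbidden');
    }

    // The cURL call to the real REST API happens server-side; the browser
    // never sees the upstream URL or its credentials.
    $ch = curl_init('https://api.example.com/data');   // placeholder upstream URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $json = curl_exec($ch);
    curl_close($ch);

    header('Content-Type: application/json');
    echo $json;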
I've learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So, how can I ensure the incoming HTTP request is coming from a website that I whitelist?
For example, I have JS code that will send from a client site to my server (something like a sniper, cross-platform). However, I only want this to happen from several websites, not others. So even if other people copy the code and put it on their website, it won't work.
In the general case you simply can't do it; you are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you cannot make it impossible.
The only way to do this reliably is to have all those websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check each token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
See also this question on HTTP_REFERER
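If you do happen to control all the sites, a rough sketch of the token idea might look like this (the shared secret, parameter names, and user identifier are all hypothetical):

    <?php
    // Shared secret distributed to every whitelisted site (hypothetical setup).
    $secret = getenv('SHARED_SECRET');

    // --- On a whitelisted site: issue a token bound to the user ---
    $userId = 'user-42';                                 // hypothetical identifier
    $token  = hash_hmac('sha256', $userId, $secret);
    // Embed $userId and $token in the JS snippet served to that user.

    // --- On your server: verify the pair sent along with the request ---
    $sentUser  = $_GET['user']  ?? '';
    $sentToken = $_GET['token'] ?? '';
    $expected  = hash_hmac('sha256', $sentUser, $secret);
    if (!hash_equals($expected, $sentToken)) {
        http_response_code(403);
        exit('Forbidden');
    }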
I haven't used this in practice, so there might be practicality issues I haven't counted on, but I thought I'd contribute the idea anyway. If I interpret it correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID together with the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
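A rough sketch of steps 1-4 in PHP, with a placeholder file-based store standing in for whatever database or cache you would really use:

    <?php
    // Steps 1-2: when serving the page, mint an ID and remember who it went to.
    function issue_id(string $clientIp): string {
        $id = bin2hex(random_bytes(16));
        // Placeholder store; a real app would use a database or cache with a TTL (step 5).
        $store = json_decode(@file_get_contents('ids.json'), true) ?: [];
        $store[$id] = ['ip' => $clientIp, 'ts' => time()];
        file_put_contents('ids.json', json_encode($store));
        return $id;   // embed this in the served page for the JS to echo back
    }

    // Step 4: service the JS request only if the ID/IP pair is on file.
    function check_id(string $id, string $clientIp): bool {
        $store = json_decode(@file_get_contents('ids.json'), true) ?: [];
        return isset($store[$id])
            && $store[$id]['ip'] === $clientIp
            && (time() - $store[$id]['ts']) < 600;   // 10-minute expiry (step 5)
    }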
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall configuration to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request/response mechanism incorrectly.
"For example, I have JS code that will send from a client site to my server (something like a sniper, cross-platform)."
I assume what you're saying is that you have some JavaScript code served up on a website on your whitelist which redirects the user to your website, and it's on your website that you want to check that the user came from the whitelisted site?
Aside from setting a cookie (which might not be possible across domains), you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
"So, how can I ensure the incoming HTTP request is coming from a website that I whitelist?"
I think you could sign every request (from the whitelist) so that each signature is valid for that request only (used once). I assume using uniqid() for this is safe (enough?).
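A sketch of that signing idea with hypothetical names; note that uniqid() is time-based and not cryptographically strong, so random_bytes() plus an HMAC would be a safer variant of the same scheme:

    <?php
    $secret = getenv('SHARED_SECRET');   // known to the whitelisted sites and your server

    // Whitelisted site: attach a one-time nonce and its signature to the request.
    $nonce = uniqid('', true);           // as the answer suggests; random_bytes(16) is stronger
    $sig   = hash_hmac('sha256', $nonce, $secret);
    // ...send $nonce and $sig along with the request...

    // Your server: recompute the signature; storing used nonces (omitted here)
    // is what enforces the "valid once" part.
    $expected = hash_hmac('sha256', $_GET['nonce'] ?? '', $secret);
    $valid    = hash_equals($expected, $_GET['sig'] ?? '');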
My friend and I are working on a program. This program is going to submit GET data to our webpage. However, we don't want users accessing the webpage any other way than the program. We can prevent users from sharing the program using HWID authentication, but nothing prevents them from using a packet scanner to get the URL of the webpage. We thought about user-agent authentication, which we will implement, but user-agents can easily be spoofed.
So my question is, how can we prevent users from accessing the webpage directly, instead of through the program?
Even if you don't have an answer that will completely work, anything that will help deter them would be nice.
Currently we will be implementing:
HWID Authentication to use the program
User-Agent Authentication to access the web page
Instant IP Blacklisting for anyone accessing the webpage without the proper User-Agent
Do not rely on the user agent or any other kind of browser fingerprint; HTTP headers are easily forged/spoofed.
You could add some secret token (eg. password/login) to the request and send it through SSL to prevent eavesdropping.
Or better, use an SSL client certificate.
Edit: Are you going to distribute the VB program? If so, as bobince mentioned, there's no way you can prevent a determined hacker from forging requests. You can raise the bar, but it will be security through obscurity. Even with client certs, the hacker will be able to extract the cert from your program and send modified requests.
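For the client-certificate route, the server-side check might look like this sketch, assuming Apache with mod_ssl configured with SSLVerifyClient require and SSLOptions +StdEnvVars (the expected CN is a placeholder):

    <?php
    // mod_ssl exposes the handshake result to PHP via environment variables
    // when "SSLOptions +StdEnvVars" is enabled.
    if (($_SERVER['SSL_CLIENT_VERIFY'] ?? '') !== 'SUCCESS') {
        http_response_code(403);
        exit('A valid client certificate is required.');
    }
    // Optionally pin the expected certificate subject (placeholder CN below).
    if (($_SERVER['SSL_CLIENT_S_DN_CN'] ?? '') !== 'my-program-client') {
        http_response_code(403);
        exit('Unexpected client certificate.');
    }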
As long as you accept requests from the client, these requests can be forged. Deal with it.
One option is to set an encrypted token in the request header.
The token can be used only a single time. If the same token is sent again, the server will reject it, which means you have to maintain a copy of the used tokens on the server side.
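A sketch of that single-use check with a placeholder file-based ledger (a real application would use a database or cache, and would also validate the token itself):

    <?php
    // Reject any token that has been seen before. Validating the token's
    // authenticity (decryption/signature check) is omitted in this sketch.
    $token = $_SERVER['HTTP_X_AUTH_TOKEN'] ?? '';      // hypothetical header name
    $used  = file_exists('used_tokens.txt')
        ? file('used_tokens.txt', FILE_IGNORE_NEW_LINES)
        : [];

    if ($token === '' || in_array($token, $used, true)) {
        http_response_code(403);
        exit('Invalid or reused token.');
    }
    file_put_contents('used_tokens.txt', $token . "\n", FILE_APPEND);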
One option is to use and verify a custom header that a web browser does not send; I did a similar thing for a program of my own. Do that on top of the other verifications you are doing. On the server side, have your script verify the custom header and simply redirect if the header is wrong.
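A minimal sketch with a hypothetical header name; PHP exposes a request header X-My-Program as $_SERVER['HTTP_X_MY_PROGRAM']. Remember that anyone who discovers the header can still send it:

    <?php
    // Redirect any request that lacks the custom header our program sends.
    if (($_SERVER['HTTP_X_MY_PROGRAM'] ?? '') !== 'expected-value') {
        header('Location: /');   // redirect, as the answer suggests
        exit;
    }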
Try encrypting all your webpages using a long key (512 bits or more) and use the HWID as a salt.
That way, only your program can decode them and render them as webpages.
en.wikipedia.org/wiki/Salt_%28cryptography%29
C# & VB.net here:
obviex.com/samples/hash.aspx
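On the server side, a sketch of that idea might look like this (the master secret, HWID parameter, and PBKDF2 derivation are my own placeholders). The usual caveat applies: the scheme ships inside your program, so this deters casual inspection rather than a determined attacker:

    <?php
    $masterSecret = getenv('MASTER_SECRET');      // long secret, 512+ bits as suggested
    $hwid         = $_GET['hwid'] ?? '';          // sent by the program; hypothetical parameter

    // Derive a per-machine key using the HWID as the salt.
    $key = hash_pbkdf2('sha256', $masterSecret, $hwid, 10000, 32, true);

    // Encrypt the page body; only a client knowing its HWID and the scheme can decrypt.
    $iv   = openssl_random_pseudo_bytes(16);
    $page = '<html>...sensitive page...</html>';
    echo base64_encode($iv . openssl_encrypt($page, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv));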
I have a client running an ASP.NET application. Inside of that, there's a self-contained PHP wiki. The problem is that the wiki won't use the .NET authentication, so requests directly to http://foobar/path/wiki/ will resolve without forcing a login.
My simple solution for this is to run the PHP application in an iFrame from an .aspx file that will force authentication, and then use PHP to detect if the page is loaded outside of a frame and redirect if so.
I know this can be done with JavaScript quite easily, but I would prefer to do this test server-side, before the wiki content loads. I need help figuring out a way this can be done. Referrer comparison, perhaps?
Any suggestions?
Thanks!
There is no way to tell on the server side whether a client's browser is loading a page within a frame, a tab, or a dedicated window.
What you can do is have your .NET application set a cookie after authenticating that the PHP application will read. If the cookie doesn't exist then do a redirect to the authentication page.
Even with JavaScript this is not secure. One could simply request the wiki pages and ignore the JavaScript. For example, I could use wget to pull down all your content without ever authenticating.
If security is important, I would highly recommend figuring out a way to make the PHP app aware of the authentication.
The simplest approach, if this is all on one server, would be to have the .NET application store some sort of token after authenticating, somewhere PHP can access it. Then set a cookie whose value the PHP wiki receives and checks against those stored tokens on each request.
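The PHP half of that handshake might look like this sketch, assuming the .NET side writes valid tokens to a shared SQL Server table (the cookie name, table, and connection details are all hypothetical):

    <?php
    // Wiki bootstrap: reject the request unless the auth cookie set by the
    // .NET application matches a token it stored after authenticating.
    $token = $_COOKIE['auth_token'] ?? '';         // cookie name is hypothetical

    $pdo  = new PDO('sqlsrv:Server=localhost;Database=app', 'user', 'pass');
    $stmt = $pdo->prepare('SELECT COUNT(*) FROM auth_tokens WHERE token = ? AND expires > GETDATE()');
    $stmt->execute([$token]);

    if ($token === '' || !$stmt->fetchColumn()) {
        header('Location: /login.aspx');           // back to the .NET login page
        exit;
    }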