SWF to bytearray from a PHP script - php

I'm using AMFPHP to stream content from my server to my Flex application. Since Flash is a client-side technology, I want to make it harder for people to grab protected files, so I built a system that streams an SWF file into another Flash player. I've done all my testing with URL streaming; now I want to pass the SWF file to the player as a ByteArray, since I think it's safer and harder to break, and in the future I might even add some encryption of my own once I'm more familiar with bytes and bits. Anyway, is my method (SWF to ByteArray) the best one, or is there a better way? If the ByteArray method is the best, I'm facing a problem outputting the SWF file in the right format - I'm using a very primitive method:
$file = file_get_contents($file);
$byteArr = str_split($file);
$byteArr = array_map('ord', $byteArr);
$return['fileBin']->data = $byteArr;
and then I return the $return variable.
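For what it's worth, the same conversion can be done in one step with unpack(); a sketch (the function name is mine, not part of AMFPHP):

```php
<?php
// Read the SWF and convert it to a 0-indexed array of byte values.
// unpack('C*', ...) does the str_split + ord mapping in one step,
// but returns a 1-indexed array, so re-index it with array_values().
function swfToByteArray(string $path): array
{
    $data = file_get_contents($path);
    if ($data === false) {
        throw new RuntimeException("Cannot read $path");
    }
    return array_values(unpack('C*', $data));
}
```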
Your help is highly respected and appreciated.
Rami

Hm...
I currently use something very similar (I'm developing an MMORPG...) - I decided I needed the game content preloaded so the player doesn't have to wait so much. Unfortunately, those files would be easy to just browse and even decompile.
Because of that I made a custom compression+encryption combo that needs a password. This password is sent by the server when needed.
However, in your case this is not the best thing to do. A ByteArray is not hard to break - essentially you are still sending raw byte data. Unless, of course, you encrypt it (+1).
Another good thing to do would be to tokenize the requests (+1). That is: in PHP you generate a "token" (something like 32 alphanumerals) and write it to a token-list file. The token is also passed to the SWF object / page requested, which uses it immediately in its request. The request works ONLY with a valid token, i.e. one recorded in the token list, and as soon as a token is used it is removed. You can also put a time limit on each token, say 15 or 20 seconds; if it isn't used by then, remove it. On the user side, if loading takes too long and the limit is exceeded, the SWF would need to be reloaded (not manually - a script or an iFrame can reload just the SWF).
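A minimal sketch of that token scheme, using a flat file as the token list (the file path and the 15-second lifetime are illustrative choices, not anything prescribed):

```php
<?php
// Issue a one-time token: 32 alphanumerals, recorded with its creation time.
// TOKEN_FILE and the 15-second lifetime are illustrative choices.
const TOKEN_FILE = '/tmp/tokens.txt';
const TOKEN_TTL  = 15; // seconds

function issueToken(): string
{
    $token = bin2hex(random_bytes(16)); // 32 hex characters
    file_put_contents(TOKEN_FILE, time() . ' ' . $token . "\n", FILE_APPEND);
    return $token;
}

// Redeem a token: valid only if it is on the list and not expired.
// Either way it is removed, so it can never be used twice.
function redeemToken(string $token): bool
{
    $lines = @file(TOKEN_FILE, FILE_IGNORE_NEW_LINES) ?: [];
    $valid = false;
    $keep  = [];
    foreach ($lines as $line) {
        [$created, $stored] = explode(' ', $line, 2);
        if ($stored === $token) {
            $valid = (time() - (int)$created) <= TOKEN_TTL;
            continue; // drop it: one-time use
        }
        if (time() - (int)$created <= TOKEN_TTL) {
            $keep[] = $line; // prune expired tokens as we go
        }
    }
    file_put_contents(TOKEN_FILE, $keep ? implode("\n", $keep) . "\n" : '');
    return $valid;
}
```

In a real deployment the list would live outside the web root, and a database or session store would handle concurrent requests more safely than a flat file.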
EDIT: the author of the question asked something different - his aim is apparently to make the requested SWF file reachable / loadable ONLY by the loader application, and nothing else. So:
I'm afraid this is a hard problem: it's not really possible to make something impossible to crack or hack into. You CAN'T make a file that can be loaded only by your application. If you think about it, that's impossible even logically - in both cases (user-requested AND application-requested) the user's computer requests one file, and it is easy to track that request and replicate it, or simply to intercept it. Decompiling the SWF is the fallback if neither of the first two works. A little bit about countering each possibility:
A) track and replicate
This is easily doable with tools such as Firebug on Firefox or the (really just as good) Inspector in Safari. With these it is easy to see what was requested, along with the request and response headers (luckily the requested file itself cannot be downloaded from there - it is not recorded unless it has an HTML or plain-text MIME type).
There is no use obfuscating the request URL in code if it will ultimately show up in the console of one of these tools. Solutions would be:
make the request one-time only (shown above - tokenization)
use a special header for the request, such as "Connection: keep-alive" - this makes it quite a bit harder for casual attackers, who will often just copy the URL and request it in a browser, where the connection will typically be "Connection: close"; check for that in server-side code and accept only the keep-alive (or your "special") requests
use a protocol other than HTTP - unfortunately this requires server-side socket functions to communicate on ports other than HTTP's 80, and most hosting providers don't let users do this, but if you can - and want security - do it. Don't use any known protocol; use something that suits just your needs. After all, you control both the server side and the client side, so the communication can be done any way you like
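The "special header" check from the list above could be sketched like this on the PHP side ($_SERVER exposes the Connection request header as HTTP_CONNECTION; the 403 handling is one possible choice):

```php
<?php
// Accept the request only if the client sent the expected "special"
// header - here Connection: keep-alive, as suggested above.
function hasSpecialHeader(array $server): bool
{
    return isset($server['HTTP_CONNECTION'])
        && strtolower($server['HTTP_CONNECTION']) === 'keep-alive';
}

// In the protected endpoint:
//   if (!hasSpecialHeader($_SERVER)) { http_response_code(403); exit; }
```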
B) interception
This is a slightly higher-level attack, but if the attacker is skilled and has SOME resources, it is not so hard to do. Essentially it comes down to having a proxy of some kind (hence the resources - it requires a server with sockets enabled, which I happen to have :D) that the attacker connects through with his browser. The proxy not only forwards the content, but records it at the same time. Countering:
use different protocol
encryption of that protocol - even if the data is recorded, it no longer matters: it isn't just HTTP headers followed by raw file data any more (headers are easily stripped off, and "voilà") - only the client-side application knows how to use the data
C) decompilation
This isn't even that hard to do - SWF decompilers are nothing new, and any code in your application is freely available to the attacker. Even if you use a different protocol, the attacker can break into it with more or less effort. Solutions:
NONE - you can only make it harder for the attacker: obfuscate your code, have a lot of it (if you really WANT the security... chaos might just be your friend), and use cross-client / cross-server requests for the actual code - dynamically loaded secondary SWFs that load the code...

Related

Does HTTPS make POST data encrypted?

I am new to the world of programming and I have learnt enough about basic CRUD-type web applications using HTML-AJAX-PHP-MySQL. I have been learning to code as a hobby and as a result have only been using a WAMP/XAMP setup (localhost). I now want to venture into using a VPS and learning to set it up and eventually open up a new project for public use.
I notice that whenever I send form data to my PHP file using AJAX or even a regular POST, if I open the Chrome debugger, and go to "Network", I can see the data being sent, and also to which backend PHP file it is sending the data to.
If a user can see this, can they intercept this data, modify it, and send it to the same backend PHP file? If they create their own simple HTML page and send the POST data to my PHP backend file, will it work?
If so, how can I avoid this? I have been reading up on using HTTPS but I am still confused. Would using HTTPS mean I would have to alter my code in any way?
The browser is obviously going to know what data it is sending, and it is going to show it in the debugger. HTTPS encrypts that data in transit and the remote server will decrypt it upon receipt; i.e. it protects against any 3rd parties in the middle being able to read or manipulate the data.
This may come as a shock to you (or perhaps not), but communication with your server happens exclusively over HTTP(S), which is a simple text protocol. Anyone can send arbitrary HTTP requests to your server at any time, from anywhere, HTTPS-encrypted or not. If you're concerned about somebody manipulating the data being sent through the browser's debugger tools, your concerns are entirely misdirected: there are many simpler ways to send an arbitrarily crafted HTTP request to your server without even visiting your site.
Your server can only rely on the data it receives and must strictly validate the given data on its own merits. Trying to lock down the client side in any way is futile.
This is even simpler than that.
Whether you are using GET or POST to transmit parameters, the HTTP request is sent to your server by the user's client, whether it's a web browser, telnet or anything else. The user can know what these POST parameters are simply because it's the user who sends them - regardless of the user's personal involvement in the process.
You are taking the problem from the wrong end.
One of the most important rules of programming is: never trust user input. Users can and will make mistakes, and some of them will actively try to damage you or steal from you.
Welcome to the club.
Therefore, you must not allow your code to perform any operation that could damage you in any way if the POST or GET parameters you receive aren't what you expect, whether by mistake or out of malicious intent. If your code, by the way it's designed, leaves you vulnerable simply because specific POST values were sent to one of your pages, then the design is at fault and you should redo it with that problem in mind.
Since this is a major issue in program design, you will find plenty of documentation, tutorials and tips on how to prevent your code from turning against you.
Don't worry, it's not that hard to handle, and the fact that you came up with this concern by yourself shows how good you are at figuring things out and how committed you are to producing good code - there is no reason you should fail.
Feel free to post another question if you are stuck regarding a particular matter while taking on your security update.
HTTPS encrypts in transit, so it won't address this issue.
You cannot trust anything client-side. Any data sent via a webform can be set to whatever the client wants. They don't even have to intercept it. They can just modify the HTML on the page.
There is no way around this. You can, and should, do client side validation. But, since this is typically just JavaScript, it can be modified/disabled.
Therefore, you must validate all data server-side when it is received: digits should be digits, backslashes and invalid special characters should be stripped, and so on.
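A small sketch of that kind of strict server-side validation (the field names "age" and "name", and the exact rules, are made up for illustration):

```php
<?php
// Validate incoming form fields strictly: digits must be digits,
// strings are length-limited and must not contain control characters
// or backslashes. Field names and limits are illustrative only.
function validateInput(array $post): array
{
    $errors = [];
    if (!isset($post['age']) || !ctype_digit($post['age'])) {
        $errors[] = 'age must be a non-negative integer';
    }
    if (!isset($post['name'])
        || $post['name'] === ''
        || strlen($post['name']) > 50
        || preg_match('/[\x00-\x1F\\\\]/', $post['name'])) {
        $errors[] = 'name is missing, too long, or contains invalid characters';
    }
    return $errors; // an empty array means the data passed
}
```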
Everyone can send whatever they want to your application. HTTPS just means that they can't see and manipulate what others send to your application. But you always have to work under the assumption that what is sent to your application as POST, GET, COOKIE or whatever is evil.
In HTTPS, the TLS channel is established before any HTTP data is transferred, so from that point of view there is no difference between GET and POST requests.
It is encrypted, but that only protects against man-in-the-middle (MITM) attacks.
Your PHP backend has no idea where the data it receives comes from, which is why you have to assume all of it comes straight from a hacker.
Since you can't prevent unsavoury data from being sent, you have to handle everything you receive safely. Some steps to take: ensure that uploaded files can't be executed (e.g. if someone uploads a PHP file instead of an image), ensure that received data never interacts directly with the database (see https://xkcd.com/327/), and don't trust someone just because they say they are logged in as a user.
To protect yourself further, research whatever you are doing with the received POST data and look up the best practices for it.

best practice to cache dynamic content at client side

Sorry about this, but I really concluded that it's better to ask directly than to browse tons of pages in vain. I've already looked through plenty of resources, but haven't found a decent explanation that satisfies my curiosity about the simplest of questions.
Assume there's a URI located at http://example.com/example (involving a PHP script and queries to the database). Let's imagine I've loaded it in my browser, clicked some link, and then hit "back" to return to http://example.com/example.
As far as I understand, what happens behind the scenes looks something like this: after "back" is clicked, the browser checks its cache for http://example.com/example, which matches the requested file exactly, finds that it hasn't changed in the short time since it was first loaded, and returns it from the cache.
Wait!!!!
But the page is produced by server-side scripts, database queries and so forth, so the browser should reach the web server again, which would request the same data from MySQL and output it as a page.
So what's the best strategy for caching dynamic content, client-side vs. server-side?
In which cases is it useful to cache content server-side, and which practice is best?
Please, can someone point me to resources covering this subject that can be understood by dummies like me, and confirm or correct the scheme above about what actually happens.
While browsing the issue I ran into one service I liked very much - http://gtmetrix.com/. It mentioned something about making AJAX requests cacheable - I assume this could be used for client-side caching of dynamic content retrieved from the database. Can someone please confirm or refute this?
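For reference, making a dynamic (including AJAX) response cacheable client-side comes down to the response headers PHP sends; a minimal sketch, where the 60-second lifetime is an arbitrary choice:

```php
<?php
// Tell the browser it may reuse this dynamic response for 60 seconds
// without re-contacting the server. Applies equally to AJAX responses.
// The headers are returned as an array so they can be inspected.
function sendCacheHeaders(int $maxAge = 60): array
{
    $headers = [
        'Cache-Control: public, max-age=' . $maxAge,
        'Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxAge) . ' GMT',
    ];
    foreach ($headers as $h) {
        if (!headers_sent()) {
            header($h);
        }
    }
    return $headers;
}
```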

Restricting access to php file

I'm currently writing an Android app at the moment, that accesses a PHP file on my server and displays JSON data provided by my MYSQL database.
Everything works great and I love the simplicity of it, but I'm not too comfortable with the fact that someone could just type in the URL of this PHP file and be presented with a page full of potentially sensitive data.
What advice would you give me to prevent access to this PHP file from anyone except those using my android app?
Thanks very much for any information.
The keyword is authentication. HTTP-Authentication is designed just for that purpose!
There are 2 forms of HTTP auth:
Basic: easy to set up, less secure
Digest: harder to set up, more secure
Here is the php manual.
And this is what you can do in your android app.
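A minimal PHP sketch of the Basic variant (the hard-coded credentials are for illustration only; in practice you would check against a real user store):

```php
<?php
// HTTP Basic authentication: PHP exposes the decoded credentials as
// PHP_AUTH_USER / PHP_AUTH_PW in $_SERVER. hash_equals() avoids
// leaking information through timing differences.
function checkBasicAuth(array $server, string $user, string $pass): bool
{
    return isset($server['PHP_AUTH_USER'], $server['PHP_AUTH_PW'])
        && hash_equals($user, $server['PHP_AUTH_USER'])
        && hash_equals($pass, $server['PHP_AUTH_PW']);
}

// In the JSON endpoint:
//   if (!checkBasicAuth($_SERVER, 'app', 'secret')) {
//       header('WWW-Authenticate: Basic realm="API"');
//       http_response_code(401);
//       exit;
//   }
```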
There isn't really a foolproof way to do this. However, you can require the User-Agent to match that of your application. You can also hide a private key in your application that is passed as POST data to your PHP file. Neither of these will stop someone who is determined to get at the raw output, but they will slow down the people who are just screwing around, killing a little time seeing what they can accomplish.
Why not only return a valid response if the request is sent with the following header:
Content-Type: application/json
If the request doesn't pass that Content-Type, you just terminate the script (regular browsers usually ask for text/html or similar). It's not really worth locking everything tight shut: if your app can get the data from your server, any user has the opportunity too.
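That check could look like the sketch below (it only deters casual browsing, as noted above, since any client can set the header; the 406 status is one possible choice):

```php
<?php
// Serve the JSON only when the client declared a JSON Content-Type.
// A plain browser request (text/html) is refused.
function isJsonRequest(array $server): bool
{
    return isset($server['CONTENT_TYPE'])
        && stripos($server['CONTENT_TYPE'], 'application/json') === 0;
}

// In the endpoint:
//   if (!isJsonRequest($_SERVER)) { http_response_code(406); exit; }
```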

Workaround for cross-site parameters reaching max URL limit

We are developing a bookmarklet, and we use JSONP to communicate with the server. We have reached a phase where we must send parameters from the browser to the server that will exceed the well-known 2000-ish character URL length limit.
We are looking for solutions to overcome this problem. Please note that the bookmarklet will be executed on 3rd-party URLs, some of them HTTP and some of them HTTPS, and that JSONP is limited to GET requests only.
The only thing I can think of would be to do multiple requests: send an id along with each request, build up the state server-side in a persistent way, and then request the data.
Multiple requests are pretty ugly too - what if one message gets lost while another makes it, etc.
Unfortunately, JSONP doesn't have a lot of flexibility, since it's just simulating script loads - and there's really no way around this under current browser security standards.
Given the known limitations, I see only three ways:
Send less data. Potentially you could compress it?
Use more than one request. This can be complicated for blobs, but should be possible.
Extend the URL length limit - there are server configs for that
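The multiple-request idea could be sketched on the PHP side like this: the client splits the oversized payload into numbered GET chunks under one id, and the server reassembles them once all have arrived. The parameter names and the in-memory store stand in for whatever persistence (e.g. $_SESSION) you would actually use:

```php
<?php
// Reassemble a payload sent as numbered JSONP/GET chunks, e.g.
//   ?id=abc&seq=0&total=3&chunk=...   (repeated for seq 1..2)
// $store holds the per-id chunks between requests ($_SESSION in practice).
// Returns the full payload once every chunk has arrived, else null.
function collectChunk(array &$store, string $id, int $seq, int $total, string $chunk): ?string
{
    $store[$id][$seq] = $chunk;
    if (count($store[$id]) < $total) {
        return null; // still waiting for the other requests
    }
    ksort($store[$id]);           // chunks may arrive out of order
    $payload = implode('', $store[$id]);
    unset($store[$id]);           // free the state once consumed
    return $payload;
}
```

A real implementation would also expire half-assembled payloads, since a lost chunk would otherwise leave state behind forever.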

Is there a way to check if pipelining is enabled in php or javascript?

Is there a way to check if pipelining is enabled in php or javascript?
There are certain settings in the browser, accessible via about:config, that speed the browser up, so I want to check whether some of these are enabled - or even get the value as a string if needed - from PHP or JavaScript. Is this possible?
If it is possible in jQuery, I would also be pleased to get a reference.
Mozilla browser
Reading about:config from JS would be a really big privacy/security hole! I suppose you could do a timing attack instead: load a bunch of PHP-generated images and check the order in which their requests land on the server. It will be somewhat unreliable, because even if HTTP request A is started before request B, network latencies may differ and swap the order, but if you load enough images you can make a fairly good guess. I.e., if the request order looks like A-B-C-D-E-F and PHP sees B-A-C-D-E-F, then it's not pipelined, but if you see A-D-B-E-C-F, then it likely is.
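The server side of that timing experiment could be sketched like this: each image request logs its label with a high-resolution timestamp, and the arrival order is then compared with the order the page issued the requests (the log file path and the ?label= parameter are my own choices):

```php
<?php
// Log the arrival order of image requests, e.g. img.php?label=A.
// Comparing this order with the order the page issued the requests
// gives a (rough) hint about whether the client pipelined them.
function logArrival(string $label, string $logFile): void
{
    $line = sprintf("%.6f %s\n", microtime(true), $label);
    file_put_contents($logFile, $line, FILE_APPEND | LOCK_EX);
}

// Read back the labels in arrival order.
function arrivalOrder(string $logFile): array
{
    $labels = [];
    foreach (file($logFile, FILE_IGNORE_NEW_LINES) as $line) {
        $labels[] = explode(' ', $line, 2)[1];
    }
    return $labels;
}
```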