Workaround for cross-site parameters reaching max URL limit - php

We are developing a bookmarklet, and we use JSONP to communicate with the server. We have reached a phase where we must send parameters from the browser to the server that will exceed the well-known ~2000-character URL length limit.
We are looking for solutions to overcome this problem. Please note that the bookmarklet will be executed on third-party URLs, some of them HTTP and some of them HTTPS, and JSONP is limited to GET requests only.

The only thing I can think of would be to do multiple requests: throw an id in with each request, set up the state server-side in a persistent way, and then request the data.
Multiple requests are pretty ugly too - what if one message gets lost while another makes it, etc.?
Unfortunately, JSONP doesn't have a lot of flexibility, since it's just simulating script loads - and there's really no way around this with current browser security standards.

With the known limitations, I see only three ways:
Send less data. Potentially you could compress it?
Use more than one request. This can be complicated for blobs, but should be possible (a rough sketch follows below).
Extend the URL length limit - there are configs for that in your server.
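For illustration, here is a minimal client-side sketch of the multi-request idea, assuming a hypothetical JSONP endpoint that reassembles chunks by a shared id; the parameter names, chunk size, and callback name are assumptions, not something from the original question.

function sendInChunks(endpoint, id, payload, chunkSize) {
  // Keep chunkSize well below the ~2000-character URL limit;
  // URL-encoding can expand a chunk up to 3x, so leave headroom.
  var total = Math.ceil(payload.length / chunkSize);
  for (var i = 0; i < total; i++) {
    var part = payload.slice(i * chunkSize, (i + 1) * chunkSize);
    var script = document.createElement('script');
    // Hypothetical endpoint: the server stores each chunk under id + seq
    // and reassembles the payload once all "total" chunks have arrived.
    script.src = endpoint + '?id=' + encodeURIComponent(id) +
                 '&seq=' + i + '&total=' + total +
                 '&chunk=' + encodeURIComponent(part) +
                 '&callback=chunkAck';
    document.body.appendChild(script);
  }
}

// Hypothetical JSONP callback: the server reports which chunks it has received,
// so lost requests can be detected and re-sent.
function chunkAck(response) {
  if (response.missing && response.missing.length) {
    // re-send the chunks listed in response.missing here
  }
}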

Related

node.js how to handle intentionally large (and bad) http requests

I'm working on a few projects using node, and each line of code that I write brings up lots of ideas of how someone could destroy my node process.
Right now I'm thinking of this:
require('http').createServer(function(req, res) {
  //DEAL WITH REQUEST HERE
}).listen(port, host);
That's standard code for setting up a server and dealing with requests.
Let's say that I want to bring down that node process: I could send POST requests with loads of data in them, and node.js would spend lots of time (and bandwidth) receiving all of them.
Is there a way to avoid this?
PHP Pros: How do you normally deal with this?
Is there a way to tell node or PHP (maybe Apache) to just ignore requests from certain IPs?
You can limit the http request size.
Here is the middleware you can use.
https://github.com/senchalabs/connect/blob/master/lib/middleware/limit.js
http://www.senchalabs.org/connect/middleware-limit.html
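If you are not using Connect, a minimal sketch of the same idea in plain Node.js could look like the following; the 1 MB limit and the port are assumptions for illustration.

var http = require('http');
var MAX_BODY_BYTES = 1024 * 1024; // assumed limit: 1 MB

http.createServer(function (req, res) {
  // Reject early if the client declares an oversized body up front
  var declared = parseInt(req.headers['content-length'] || '0', 10);
  if (declared > MAX_BODY_BYTES) {
    res.writeHead(413); // Request Entity Too Large
    return res.end();
  }

  // Also count what actually arrives, in case Content-Length lies or is missing
  var received = 0;
  req.on('data', function (chunk) {
    received += chunk.length;
    if (received > MAX_BODY_BYTES) {
      res.writeHead(413);
      res.end();
      req.connection.destroy(); // stop spending bandwidth on the rest
    }
  });

  req.on('end', function () {
    res.end('ok');
  });
}).listen(8080);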
P.S. Possibly a duplicate of maximum request lengths in node.js.
For getting the IP address in node.js, you can try request.connection.remoteAddress.
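And as a minimal sketch of ignoring requests from certain IPs at the node level (the blacklist is made up for illustration; in practice this is often better handled by a firewall or by Apache/nginx in front of node):

var http = require('http');
var blocked = { '203.0.113.5': true }; // hypothetical blacklist

http.createServer(function (req, res) {
  var ip = req.connection.remoteAddress;
  if (blocked[ip]) {
    // Drop the connection without spending any more time or bandwidth on it
    return req.connection.destroy();
  }
  res.end('ok');
}).listen(8080);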

Caching data retrieved by jQuery

I am using jQuery's $.ajax method to retrieve some JSON from an API.
Each time the page is loaded, a call to the API is made, regardless of whether the user has received this data before - which means that when a large number of users are on the page, the API rate limiting would come into effect.
My thought on how to deal with this would be to first push the data to a database (via a PHP script), and then check the database to see if anything is cached before going back to the API to get more up-to-date information if required.
Is this a viable method? What are the alternatives?
It just seems like jQuery is actually a hurdle here, rather than doing it all in PHP to begin with - but as I'm learning the language, I would like to use it as much as I can!
In order to help distinguish between opinions and recommended techniques, let's first break down your problem to make sure everyone understands your scenario.
Let's say we have two servers: 'Server A' and 'Server B'. Call 'Server A' our PHP web server and 'Server B' our API server. I'm assuming you don't have control over the API server, which is why you are calling it separately and can't scale the API server in parallel with your demand. Let's say it's some third-party application like Flickr or Harvest or something... and let's say this third-party API server throttles requests per hour by your developer API key, effectively limiting you to 150 requests per hour.
When one of your pages loads in the end user's browser, the source is coming from 'Server A' (our PHP server), and in the body of that page is some nice jQuery that performs an .ajax() call to 'Server B', our API server.
Now your developer API key only allows 150 requests per hour, while hypothetically you might see 1,000 requests in one hour to your 'Server A' PHP server. So how do we handle this discrepancy in load, given the assumption that we can't simply scale up the API server (the best choice, if possible)?
Here are a few things you can do in this situation:
Just continue as normal, and when jQuery.ajax() returns a 503 Service Unavailable error due to throttling (which is what most third-party APIs do), tell your end user politely that you are experiencing higher than normal traffic and to try again later. This is not a bad idea to implement even if you also add in some caching.
Assuming that the data being retrieved from the API is cacheable, you could run a proxy on your PHP server. This is particularly well suited to cases where the same ajax request would return the same response repeatedly over time (e.g., maybe you are fetching a description for an object; the same object request should return the same description response for some period of time). This could be a PHP pass-through proxy or a lower-level proxy like the Squid caching proxy. In the case of a PHP pass-through proxy, you would use the "save to DB or filesystem" strategy for caching, and could even rewrite the Expires headers to suit your desired cache lifetime.
If you have established that the response data is cacheable, you can allow the client side to also cache the ajax response using cache: true. jQuery.ajax() actually defaults to cache: true, so you simply need to not set cache to false for it to cache responses.
If your response data is small, you might consider caching it client-side in a cookie. But experience says that most users who clear their temporary internet files will also clear their cookies. So maybe the built-in caching of jQuery.ajax() is just as good?
HTML5 local storage for client-side caching is cool, but lacks widespread support. If you control your user base (such as in a corporate environment) you may be able to mandate the use of an HTML5-compliant browser. Otherwise, you will likely need a cookie-based fallback or polyfill for browsers lacking HTML5 local storage, in which case you might just reconsider the other options above.
To summarize, you should be able to present the user with a friendly "service unavailable" message no matter what caching techniques you employ. In addition, you may employ server-side caching, client-side caching, or both, to reduce the impact. Server-side caching saves repeated requests to the same resource across all users, while client-side caching saves repeated requests to the same resource by the same user and browser. Given the scenario described, I'd avoid HTML5 localStorage, because you'll need to support fallback/polyfill options, which makes the built-in request caching just as effective in most scenarios.
As you can see, jQuery won't really change this situation for you much either way versus calling the API server from PHP on the server side. The same caching techniques could be applied whether you perform the API calls in PHP on the server side or via jQuery.ajax() on the client side. The same friendly "service unavailable" message should be implemented one way or another for when you are over capacity.
If I've misunderstood your question please feel free to leave a comment and clarify and/or edit your original question.
No, don't do it in PHP. Use HTML5 LocalStorage to cache the first request, then do your checking. If you must support older browsers, use a fallback (try these plugins).
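A minimal sketch of that localStorage approach might look like this; the cache key prefix, the one-hour lifetime, and the endpoint are assumptions for illustration, and browsers without localStorage simply fall through to a normal request.

function getApiData(url, callback) {
  var cacheKey = 'apiCache:' + url;
  var maxAgeMs = 60 * 60 * 1000; // assumed cache lifetime: one hour

  // Serve from localStorage if a fresh copy exists
  if (window.localStorage) {
    var cached = localStorage.getItem(cacheKey);
    if (cached) {
      var entry = JSON.parse(cached);
      if (new Date().getTime() - entry.savedAt < maxAgeMs) {
        return callback(entry.data);
      }
    }
  }

  // Otherwise hit the API and store the result for next time
  $.ajax({
    url: url,
    dataType: 'json',
    success: function (data) {
      if (window.localStorage) {
        localStorage.setItem(cacheKey, JSON.stringify({ savedAt: new Date().getTime(), data: data }));
      }
      callback(data);
    }
  });
}

A cookie-based fallback (as discussed above) would slot in where the localStorage checks are, for browsers that lack it.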

Guidance on the number of http calls; is too much AJAX a problem?

I am developing a website where every front-end thing is written in JavaScript, and communication with the server is done through JSON. So I am hesitating: is it OK that I ask for every single piece of data with its own HTTP request, or is that completely unacceptable? (After all, many web developers combine multiple image requests into CSS sprites.)
Can you give me a hint, please?
Thanks
It really depends upon the overall server load and bandwidth use.
If your site is very low-traffic and is under no CPU or bandwidth burden, write your application in whatever manner is (a) most maintainable and (b) least likely to introduce bugs.
Of course, if the latency involved in making thirty HTTP requests for data is too awful, your users will hate you :) even if your server is very lightly loaded. Thirty times even 30 milliseconds equals an unhappy experience. So it depends very much on how much data each client will need to render each page or action.
If your application starts to suffer from too many HTTP connections, then you should look at bundling together the data that is always used together - it wouldn't make sense to send your entire database to every client on every connection :) - so try to hit the lowest-hanging fruit first and combine that data to reduce extra connections.
If you can request multiple related things at once, do it.
But there's no real reason against sending multiple HTTP requests - that's how AJAX apps usually work. ;)
The reason for using sprites instead of single small images is to reduce loading times, since only one file has to be loaded instead of tons of small files - or to make sure an image is already available at the moment it needs to be displayed.
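For illustration, requesting several related things at once could look something like this on the client; the /api/batch endpoint and the resource names are assumptions.

// One request that asks the server for several related resources at once,
// instead of issuing /user, /notifications and /settings separately.
$.ajax({
  url: '/api/batch',                                   // hypothetical bundling endpoint
  data: { resources: 'user,notifications,settings' },  // the pieces that are always used together
  dataType: 'json',
  success: function (bundle) {
    // bundle.user, bundle.notifications and bundle.settings arrive in one round trip
  }
});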
My personal philosophy is:
The initial page load should be AJAX-free.
The page should operate without JavaScript well enough for the user to do all basic tasks.
With JavaScript, use AJAX in response to user actions and to replace full page reloads with targeted AJAX calls. After that, use as many AJAX calls as seem reasonable.

SWF to bytearray from a PHP script

I'm using AMFPHP to stream content from my server to my Flex application. Since Flash is a client-side technology, I would like to make it harder for people to grab protected files, so I created a system that streams an SWF file to another Flash player. I've done all the testing with URL streaming; now I want to pass the SWF file to the player as a ByteArray, since I think it's safer and harder to break, and in the future I might even do some encryption of my own once I become more familiar with bytes and bits. Anyway, is my method the best one (SWF to ByteArray), or is there a better one? If the ByteArray method is the best, I am facing a problem outputting the SWF file in the right format. I'm using a very primitive method:
$file = file_get_contents($file);      // read the SWF from disk into a string
$byteArr = str_split($file);            // split the string into single characters
$byteArr = array_map('ord', $byteArr);  // convert each character to its integer byte value
$return['fileBin']->data = $byteArr;    // attach the byte array to the AMFPHP response object
and I return the $return variable ...
Your help is highly respected and appreciated.
Rami
Hm...
I use something very similar currently (I'm developing an MMORPG...) - I decided I needed the game content preloaded so the player doesn't have to wait so much. Unfortunately, these files would be easy to just browse and even decompile.
Because of that I made a custom compression+encryption combo that needs a password. This password is sent by the server when needed.
However, in your case this is not the best thing to do. ByteArray is not anything hard to break - essentially it will send raw byte data. Unless of course you encrypt it (+1).
Another good thing to do would be to tokenize the requests (+1). I.e., in PHP you generate a "token" and write it to a token-list file. The token would be something like 32 alphanumeric characters. This token would also be passed to the SWF object / page requested, which would immediately use it in its request. The request works ONLY with a valid token, that is, one that was recorded in the token list. When it is used, it is removed instantly. Also, there could be a time limit on each token, like 15 or 20 seconds; if it is not used by then, remove it. On the user side, if loading takes too long and the time limit is exceeded, the SWF would need to be reloaded (although not manually - it can be some script, or an iFrame that reloads just the SWF). A rough sketch of this token flow follows below.
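The answer describes doing this in PHP; purely for illustration, here is a minimal sketch of the same one-time-token idea written as a Node.js handler. The route names, the in-memory store, and the 20-second expiry are assumptions.

var http = require('http');
var crypto = require('crypto');

var tokens = {};               // in-memory token list: token -> issue time
var TOKEN_TTL_MS = 20 * 1000;  // assumed time limit: 20 seconds

function issueToken() {
  var token = crypto.randomBytes(16).toString('hex'); // 32 alphanumeric characters
  tokens[token] = new Date().getTime();
  return token;
}

function consumeToken(token) {
  var issuedAt = tokens[token];
  if (issuedAt === undefined) return false;               // unknown or already used
  delete tokens[token];                                    // one-time use: remove instantly
  return new Date().getTime() - issuedAt <= TOKEN_TTL_MS;  // reject if it expired
}

http.createServer(function (req, res) {
  if (req.url === '/token') {
    // the page / SWF loader embeds this token and uses it immediately
    res.end(issueToken());
  } else if (req.url.indexOf('/protected?token=') === 0) {
    var token = req.url.split('token=')[1];
    if (consumeToken(token)) {
      res.end('...the protected SWF bytes would be streamed here...');
    } else {
      res.writeHead(403);
      res.end('invalid or expired token');
    }
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);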
EDIT: the author of the question asked differently - his aim is apparently to make the requested SWF file reachable / loadable ONLY by the loader application, and nothing else. So:
I'm afraid this is a hard thing, but... it's not really possible to make something impossible to crack or hack into. (Fortunately it is easier to do over networks or the internet, although they are attacked more often.) You CAN'T make something that can be loaded only by your application. If you think about it, it is impossible even logically: in both cases (user-requested AND application-requested), the user's computer requests one file, and it is easy to track that request and replicate it, or to simply intercept it. Decompilation of the SWF would be used if neither of the former two works. A little bit about countering all the possibilities:
A) track and replicate
This is easily doable with tools such as Firebug on Firefox, or the equally good (really) Web Inspector in Safari. With these, it is easy to see what was requested, the request headers and the response (luckily it is not possible to download the requested file - it is not recorded as long as it is not an HTML or plain-text MIME type), and also the response headers.
There is no use in obfuscating the request URL in code if it will ultimately be shown as requested in the console of one of these tools. Solutions would be:
make the request one-time only (shown above - tokenization)
use a special header for the request, such as "Connection: keep-alive" - this makes it quite a bit harder for casual attackers, because they will often just copy the URL and request it in a browser, where the connection will automatically be "Connection: close"; check for that in the server-side code and accept only the keep-alive (or your "special") requests (see the sketch after this list)
use a protocol different from HTTP - unfortunately this involves server-side socket functions for communicating on ports other than HTTP's 80, and most hosting providers don't let users do this; but if you can - and want security - do it. Don't use any known protocol, but rather something that suits just your needs - after all, you control both the server side and the client side, so the communication can be done any way you like
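As an illustration of the special-header check in the second point, a minimal Node.js sketch following the answer's suggestion might look like this (the response texts and port are assumptions, and a determined attacker can of course forge headers):

var http = require('http');

http.createServer(function (req, res) {
  // The answer's idea: the loader application always sends "Connection: keep-alive",
  // while a URL pasted straight into a browser may come through as "Connection: close".
  var connection = (req.headers.connection || '').toLowerCase();

  if (connection.indexOf('keep-alive') === -1) {
    res.writeHead(403);
    return res.end('direct requests are not served');
  }

  res.end('...protected content would be streamed here...');
}).listen(8080);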
B) interception
It is a somewhat higher-level attack - but if the attacker is skilled and has SOME resources, it is not so hard to do. Essentially it comes down to having a proxy of some kind (hence the resources - the need for a server with sockets enabled, which I myself have :D) that he will use to connect through with his browser. The proxy will, however, not only forward content, but also record it at the same time. Countering:
use a different protocol
encryption of that protocol - because even if the data is recorded, it doesn't matter - it is no longer just HTTP headers followed by raw file data (where the headers are easily removed and "voilà") - only the client-side application knows how to use this data
C) decompilation
This isn't even that hard to do - SWF decompilers are nothing new, and any code in your application is freely available to the attacker. Even if you use a different protocol, the attacker can, with more or less effort, break into it. Solutions:
NONE - you can only make it harder for the attacker: obfuscate your code, have a lot of it (if you really WANT the security... chaos might just be your friend), use cross-client cross-server requests for the actual code - dynamically loaded secondary SWFs that load the code...

XmlHttpRequest vs cURL

I was wondering if anyone has done any testing on the speed differences of cURL and XHR (in regards to the time it takes to complete a request, or series of requests).
Specifically I'm wondering because I would like to use XHR to call a PHP script, and use cURL from there to grab a resource. The PHP page will ensure the data is in the right format, and alter it if it isn't. I would like to avoid doing this on the JavaScript end because it's my understanding that if the user's computer is slow it could take noticeably longer.
If it makes a difference, all data will be retrieved locally.
There is no speed difference between the two. You're comparing an HTTP request to an... HTTP request. For our purposes they both do the exact same thing, only one does it in JavaScript and one in PHP. Having a chain will take at least twice as long (probably more), since you're making a request to your server, and then your server is making a request to another server.
I don't understand why you wouldn't want to just get the resource with JavaScript and scrap the PHP middleman. I don't see any problem with doing it that way. (Unless your data is on another domain - then it gets trickier, but it's still doable.)
If I understand the question correctly, the difference is that XmlHttpRequest would be on the client side (JavaScript), and cURL would be on the server side (PHP).
This would impact performance one way or the other, depending on where the resource is (you say local), and how many concurrent requests you'll get.
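For reference, the chained approach being discussed would look roughly like this on the JavaScript side; the proxy script name and its query parameter are assumptions, and the PHP script behind it would do the cURL call plus any reformatting.

var xhr = new XMLHttpRequest();
// Ask our own server-side script to fetch (and normalize) the resource via cURL
xhr.open('GET', '/proxy.php?resource=' + encodeURIComponent('items.json'));
xhr.onload = function () {
  if (xhr.status === 200) {
    var data = JSON.parse(xhr.responseText); // already validated/reformatted server-side
    // use data here
  }
};
xhr.send();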
