node.js hmac.digest() output seems wrong - php

I'm trying to implement a Facebook library with node.js, and the request signing isn't working. I have the PHP example seen here translated into node. I'm trying it out with the example given there, where the secret is the string "secret". My code looks like this:
var crypto = require('crypto');
var b64url = require('b64url'); // base64-url helpers used below
var signedRequest = request.signed_request.split('.');
var sig = b64url.decode(signedRequest[0]);
var expected = crypto.createHmac('sha256', 'secret').update(signedRequest[1]).digest();
console.log(sig == expected); // false
I can't console.log the decoded strings themselves, because they have special characters that cause the console to clear (if you have a suggestion to get around that please let me know) but I can output the b64url encodings of them.
The expected encoded sig, as you can see on the FB documentation, is
vlXgu64BQGFSQrY0ZcJBZASMvYvTHu9GQ0YM9rjPSso
My expected value, when encoded, is
wr5Vw6DCu8KuAUBhUkLCtjRlw4JBZATCjMK9wovDkx7Dr0ZDRgzDtsK4w49Kw4o
So why do I think it's digest that's wrong? Maybe the error is on my side? Well, if I execute the exact PHP example given in the documentation, the correct result comes out. But if I change the hash_hmac call so that the last parameter is false (hex output) and base64-encode the result, I get
YmU1NWUwYmJhZTAxNDA2MTUyNDJiNjM0NjVjMjQxNjQwNDhjYmQ4YmQzMWVlZjQ2NDM0NjBjZjZiOGNmNGFjYQ==
Now, if I go back to my JavaScript code, change my hmac code to .digest("hex") instead of the default "binary", and log the base64 encoding of the result, I get... surprise!
YmU1NWUwYmJhZTAxNDA2MTUyNDJiNjM0NjVjMjQxNjQwNDhjYmQ4YmQzMWVlZjQ2NDM0NjBjZjZiOGNmNGFjYQ
The same, except the == padding signs are missing from the end, but I think that's a console thing. I can't imagine that being the issue; without them it isn't even a valid base64 string length.
So, how come the digest method outputs the correct result when using hex, but the wrong answer when using binary? Is node's binary output not quite the same as the "raw" output of the PHP equivalent? And if that's the case, what is the correct way to call it?

We have discovered that this was indeed a bug in the crypto lib, and it was a known issue logged on GitHub. We will have to upgrade to get the fix.

I am Tesserex's partner. I believe the answer may have been a combination of both Tesserex's self-posted answer and Juicy Scripter's answer. We were still using Node ver. 0.4.7. The bug Tesserex mentioned can be found here: https://github.com/joyent/node/issues/324. I'm not entirely certain that this bug affected us, but it seems a good possibility. We updated Node to ver. 0.6.5, applied Juicy Scripter's solution, and everything is now working. Thank you.
As a note about the suggestion of using existing libraries: most of them require Express, which is something we are trying to avoid due to some of the specifics of our application. The existing libraries also tend to assume that you're using node.js like a web server, answering a single user's request at a time. We are using persistent connections over websockets, and our Facebook client will be handling session data for multiple users simultaneously. Eventually I hope to make our Facebook client open source for use with applications like ours.

Actually there is no problem with digest; the result of b64url.decode is a utf8-encoded string by default (the encoding can be specified with a second parameter). If you use:
var sig = b64url.decode(signedRequest[0], 'binary');
var expected = crypto.createHmac('sha256', 'secret').update(signedRequest[1]).digest();
// sig === expected
the signature and the result of digest will be the same.
You may also check this by turning the digest result into a utf8-encoded string:
var sig = b64url.decode(signedRequest[0]);
var expected = crypto.createHmac('sha256', 'secret').update(signedRequest[1]).digest();
var expected_buffer = new Buffer(expected, 'binary');
// sig === expected_buffer.toString()
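For reference, here is what the comparison looks like on the PHP side the question was translating from. This is only a sketch based on the hash_hmac parameters discussed above; base64_url_decode is the usual helper for Facebook's URL-safe base64, and $signed_request stands in for the incoming value:
<?php
// Sketch of the PHP side that the node.js code above translates.
// $signed_request is assumed to hold the raw signed_request value.
function base64_url_decode($input) {
    // URL-safe base64 uses '-' and '_' in place of '+' and '/'
    return base64_decode(strtr($input, '-_', '+/'));
}
list($encoded_sig, $payload) = explode('.', $signed_request, 2);
$sig = base64_url_decode($encoded_sig);
// Last argument true  => raw binary digest (the bytes the signature is compared against)
// Last argument false => lowercase hex string (what .digest('hex') produced in node)
$expected_raw = hash_hmac('sha256', $payload, 'secret', true);
$expected_hex = hash_hmac('sha256', $payload, 'secret', false);
var_dump($sig === $expected_raw); // bool(true) for a correctly signed request
?>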
You may also consider using existing libraries to do this kind of work (and probably more); to name a few:
facebook-wrapper
everyauth

Related

parse special character xml file using simplexml

Apologies if there is an obvious answer (and I know there are about 1000 of these similar questions) - but I have spent two days trying to attack this without success. I cannot seem to crack why I get a null response...
Short background: the following works just fine
$xurl= new SimpleXMLElement('https://gptxsw.appspot.com/view/submissionList?formId=GP_v7&numEntries=1', NULL, TRUE);
$keyname = $xurl->idList->id[0];
echo $keyname;
this provides a response: a unique key like uuid:d0721391-6953-4d0b-b981-26e38f05d2e5
However, when I try a similar request (which ultimately would be based on the first request), I get a failure. I've simplified the code as follows...
$xdurl= new SimpleXMLElement('https://gptxsw.appspot.com/view/downloadSubmission?formId=GP_v7[#version=null%20and%20#uiVersion=null]/GP_v7[#key=uuid:d0721391-6953-4d0b-b981-26e38f05d2e5]', NULL, TRUE);
$keyname2 = $xdurl->data->GP_v7->SDD_ID_N[0];
echo $keyname2;
this provides null. And if I try something like
echo $xdurl->asXML();
I get an error response from the site (not from PHP).
Do I need to eject from SimpleXMLElement for the second request? I've read about using XPath and about defining the namespace, but I'm not sure that either would be required: the second file does have two namespaces, but one of them isn't used and the other has no prefix for elements. Plus I have tried variations of those - enough to think that my problem is either more global in nature or an oversight due to inexperience.
For purposes of this request I have no control over the formatting of either XML file.
Here we go: SimpleXMLElement seems to re-escape (or incorrectly handle in some way) already url-escaped characters like white spaces. Try:
$xdurl= new SimpleXMLElement('https://gptxsw.appspot.com/view/downloadSubmission?formId=GP_v7[#version=null and #uiVersion=null]/GP_v7[#key=uuid:d0721391-6953-4d0b-b981-26e38f05d2e5]', NULL, TRUE);
$keyname2 = $xdurl->data->GP_v7->SDD_ID_N[0];
echo $keyname2;
and you should be fine.
(FYI: I debugged this by manually creating a local copy of the XML request result named "foo.xml" which worked perfectly.)
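A minimal sketch of that local-copy debugging step, assuming the response has been saved next to the script as foo.xml:
<?php
// Debugging sketch: parse a locally saved copy of the response ("foo.xml" above)
// so the URL handling is taken out of the picture entirely.
$xlocal = new SimpleXMLElement('foo.xml', NULL, TRUE);
echo $xlocal->data->GP_v7->SDD_ID_N[0];
?>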
Thanks to @Matze for getting me on the right track.
The issue is that the URL has special characters that SimpleXMLElement cannot handle without help.
Solution: wrap the URL in urlencode() like the following:
$fixurl = urlencode('https://gptxsw.appspot.com/view/downloadSubmission?formId=GP_v7[#version=null and #uiVersion=null]/GP_v7[#key=uuid:d0721391-6953-4d0b-b981-26e38f05d2e5]');
$xdurl= new SimpleXMLElement($fixurl, NULL, TRUE);
$keyname2 = $xdurl->data->GP_v7->SDD_ID_N[0];
echo $keyname2;
This provided the answer (in this case 958).

Remove double-quotes from a json_encoded string on the keys

I have a json_encoded array which is fine.
I need to strip the double-quotes from all of the keys of the JSON string when returning it from a function call.
How would I go about doing this and returning it successfully?
Thanks!
I do apologise, here is a snippet of the json code:
{"start_date":"2011-01-01 09:00","end_date":"2011-01-01 10:00","text":"test"}
Just to add a little more info:
I will be retrieving the JSON via an AJAX request, so if it would be easier, I am open to ideas on how to do this on the JavaScript side.
EDITED as per anubhava's comment
$str = '{"start_date":"2011-01-01 09:00","end_date":"2011-01-01 10:00","text":"test"}';
$str = preg_replace('/"([^"]+)"\s*:\s*/', '$1:', $str);
echo $str;
This certainly works for the above string, although there may be some edge cases that I haven't thought of for which this will not work. Whether this suits your purposes depends on how static the format of the string and the elements/values it contains will be.
TL;DR: Missing quotes are how Chrome shows that it is a JSON object instead of a string. Ensure that you send header('Content-Type: application/json; charset=UTF-8'); in PHP's AJAX response to solve the real problem.
DETAILS:
A common reason for wanting to solve this problem is finding this difference while debugging the processing of returned AJAX data.
In my case I saw the difference using Chrome's debugging tools. When connected to the legacy system, upon success, the debugger showed no quotes around the keys in the response. This allowed the object to be treated as an object immediately, without a JSON.parse() call. Debugging my new AJAX destination, there were quotes shown in the response, and the variable was a string, not an object.
I finally realized the true issue when I tested the AJAX response externally and saw that the legacy system actually DID have quotes around the keys. This was not what the Chrome dev tools showed.
The only difference was that on the legacy system there was a header specifying the content type. I added this to the new (WordPress) system and the calls were now fully compatible with the original script and the success function could handle the response as an object without any parsing required. Now I can switch between the legacy and new system without any changes except the destination URL.
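For illustration, a minimal sketch of the PHP side of such an AJAX response, using the sample object from the question; sending the Content-Type header is what lets the browser treat the body as JSON rather than plain text:
<?php
// Sketch of the AJAX endpoint: the header is the fix described above,
// the payload is the sample object from the question.
header('Content-Type: application/json; charset=UTF-8');
echo json_encode(array(
    'start_date' => '2011-01-01 09:00',
    'end_date'   => '2011-01-01 10:00',
    'text'       => 'test',
));
?>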

Why doesn't jQuery.parseJSON() work on all servers?

Hey there, I have an Arabic contact script that uses Ajax to retrieve a response from the server after the form is filled in.
On some Apache servers, jQuery.parseJSON() throws an invalid JSON exception for the same JSON it parses perfectly on other servers. The exception is thrown only in Chrome and IE.
The JSON content gets encoded using PHP's json_encode() function. I tried sending the correct header with the JSON data and setting the charset to UTF-8, but that didn't help.
This is one of the JSON responses I try to parse (I removed the second part of it because it's long):
{"pageTitle":"\u062e\u0637\u0623 \u0639\u0646\u062f \u0627\u0644\u0625\u0631\u0633\u0627\u0644 !"}
Note: the language of this data is Arabic; that's why it looks like this after being encoded with PHP's json_encode().
You can make a request on the examples given below and look at the full response data using Firebug or the WebKit developer tools. The response passes JSONLint!
Finally, I have two URLs using the same version of the script; try browsing them with Chrome or IE to see the error in the broken example.
The working example : http://namodg.com/n/
The broken example: http://www.mt-is.co.cc/my/call-me/
Updated: To clarify, I managed to fix this by using the old eval() to parse the content. I released another version with this fix; it was like this:
// Parse the JSON data
try {
    // Use jQuery's default parser
    data = $.parseJSON(data);
}
catch (e) {
    /*
     * Fix a bug where strange unicode chars in the JSON data make jQuery's
     * parseJSON() throw an error (only on some servers), by falling back to
     * the old eval() - slower though!
     */
    data = eval("(" + data + ")");
}
I still want to know if this is a bug in jQuery's parseJSON() method, so that I can report it to them.
Found the problem! It was very hard to notice, but I saw something funny about that opening brace... there seemed to be a couple of little dots near it. I used this JavaScript bookmarklet to find out what it was:
javascript:window.location='http://www.google.com/search?q=u+'+('000'+prompt('String?').charCodeAt(prompt('Index?')).toString(16)).slice(-4)
I got the results page. Guess what the problem is! There is an invisible character, repeated twice actually, at the beginning of your output. The zero width non-breaking space is also called the Unicode byte order mark (BOM). It is the reason why jQuery is rejecting your otherwise valid JSON and why pasting the JSON into JSONLint mysteriously works (depending on how you do it).
One way to get this unwanted character into your output is to save your PHP files using Windows Notepad in UTF-8 mode! If this is what you are doing, get another text editor such as Notepad++. Resave all your PHP files without the BOM to fix your problem.
Step 1: Set up Notepad++ to encode files in UTF-8 without BOM by default.
Step 2: Open each existing PHP file, change the Encoding setting, and resave it.
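For anyone who wants to confirm the diagnosis from PHP instead of a bookmarklet, a small sketch (response.txt is a hypothetical saved copy of the raw AJAX response):
<?php
// Sketch: detect the invisible UTF-8 BOM described above at the start of a
// saved copy of the AJAX response ("response.txt" is a hypothetical filename).
$response = file_get_contents('response.txt');
if (strpos($response, "\xEF\xBB\xBF") === 0) {
    echo "Response starts with a UTF-8 BOM (EF BB BF), which breaks JSON parsing\n";
}
// Stripping any leading BOMs makes the payload valid JSON again.
$clean = preg_replace('/^(\xEF\xBB\xBF)+/', '', $response);
var_dump(json_decode($clean, true));
?>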
You should try using json2.js (it's on https://github.com/douglascrockford/JSON-js)
Even John Resig (creator of jQuery) says you should:
This version of JSON.js is highly recommended. If you're still using the old version, please please upgrade (this one, undoubtedly, cause less issues than the previous one).
http://ejohn.org/blog/the-state-of-json/
I don't see anything related to parseJSON()
The only difference I see is that in the working example a session cookie is set (I guess it is needed for the "captcha", the mathematical calculation), while in the other example no session cookie is set. So maybe the comparison of the calculation result fails without the session cookie.

char limit with php file_get_contents() and Google Chart API?

The specific issue I am working on is enabling https with the Google Charts API, and a possible character limit when using PHP's file_get_contents on a URL string. Let me take you through what is going on. I have made good progress using some tutorials on the net, specifically to enable https. I am using the 'basic method' from this tutorial:
http://webguru.org/2009/11/09/php/how-to-use-google-charts-api-in-your-secure-https-webpage/
I have a chart.php file with this code in it:
<?php
$url = urldecode($_GET['api_url']);
$image_contents = file_get_contents($url);
echo $image_contents;
exit;
?>
I am calling this file from my main page, passing a 'test' Google chart URL (I have used many different ones) to it, which is 513 chars long:
$chartUrl = urlencode('http://chart.apis.google.com/chart?chxl=0:|Jan|Feb|Mar|Jun|Jul|Aug|1:|100|75|50|25|0&chxt=x,y&chs=300x150&cht=lc&chd=t:60.037,57.869,56.39,51.408,42.773,39.38,38.494,31.165,30.397,26.876,23.841,20.253,16.232,13.417,12.677,15.248,16.244,13.434,10.331,10.58,9.738,10.717,11.282,10.758,10.083,17.299,6.142,19.044,7.331,8.898,14.494,17.054,16.546,13.559,13.892,12.541,16.004,20.026,18.529,20.265,23.13,27.584,28.966,31.691,36.72,40.083,41.538,42.788,42.322,43.593,44.326,46.152,46.312,47.454&chg=25,25&chls=0.75,-1,-1');
To display the image in my main page I am using this code:
<img src="https://mysite.com/chart.php?api_url=<?php echo $chartUrl; ?>" />
The example $chartUrl string should display nothing. It will work fine until the $chartUrl string exceeds 512 characters in length (unencoded). For example, if you use this string below (512 chars long):
$chartUrl = urlencode('http://chart.apis.google.com/chart?chxl=0:|Jan|Feb|Mar|Jun|Jul|Aug|1:|100|75|50|25|0&chxt=x,y&chs=300x150&cht=lc&chd=t:60.037,57.869,56.39,51.408,42.773,39.38,38.494,31.165,30.397,26.876,23.841,20.253,16.232,13.417,12.677,15.248,16.244,13.434,10.331,10.58,9.738,10.717,11.282,10.758,10.083,17.299,6.142,19.044,7.331,8.898,14.494,17.054,16.546,13.559,13.892,12.54,16.004,20.026,18.529,20.265,23.13,27.584,28.966,31.691,36.72,40.083,41.538,42.788,42.322,43.593,44.326,46.152,46.312,47.454&chg=25,25&chls=0.75,-1,-1');
The chart should show up. The difference between the strings is one character. The 'real' Google chart API string that I will be using in the final version is about 1250 chars long.
So is this a limit of file_get_contents()? I have looked at cURL as an alternative, but its specifics go over my head. Can someone confirm the char limit, and if possible make some suggestions?
Many thanks,
Neil
Edit: Contrary to what I stated below, this is probably not a server problem: Apache's limit on GET strings is said to be around 4000 bytes. The workaround I suggest is still valid, though, so I'm leaving this answer in place.
This is an awful lot of data to put in a GET string, and it could be a server-side limitation (Apache handling the request) as much as a client-side one (file_get_contents sending the request).
I would look for an alternative way of doing this, e.g. by storing the long URL in a session variable with a random key:
$_SESSION["URL_1923843294284"] = $loooooong_url;
and pass that random key in the URL:
<img src="https://mysite.com/chart.php?api_url=1923843294284" />
Update: There does not seem to be a native length limit to file_get_contents() according to this question. This may well be a server issue.
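A sketch of how chart.php might look with that workaround in place; the session handling and the image Content-Type header are assumptions, not part of the original snippet:
<?php
// Sketch of chart.php for the session-based workaround above: the long chart URL
// is looked up by the random key instead of arriving in the query string.
session_start();
$key = preg_replace('/\D/', '', $_GET['api_url']); // keep only the digits of the random key
if (isset($_SESSION['URL_' . $key])) {
    header('Content-Type: image/png'); // the chart API returns a PNG image
    echo file_get_contents($_SESSION['URL_' . $key]);
}
exit;
?>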

What JSON does this CF code return?

Trying to implement the excellent jQuery bidirectional infinite scroll as explained here:
http://www.bennadel.com/blog/1803-Creating-A-Bidirectional-Infinite-Scroll-Page-With-jQuery-And-ColdFusion.htm
For the server side, which returns JSON, the example is in ColdFusion. I'm trying to implement it in PHP.
I need to find out what the format of the JSON is.
Right now, I am returning
[{"src":"https:\/\/s3.amazonaws.com\/gbblr_2\/100\/IMG_1400 - original.jpg","offset":"5"},{"src":"https:\/\/s3.amazonaws.com\/gbblr_2\/100\/IMG_1399 - original.jpg","offset":6},{"src":"https:\/\/s3.amazonaws.com\/gbblr_2\/100\/IMG_1398 - original.jpg","offset":7}]
which doesn't work; in the HTML that is generated, it shows "UNDEFINED" for both the src and the offset variables.
So my question: what kind of JSON does that ColdFusion code generate? What is the format of the JSON that I need to return?
Thanks for any tips!!
CF's JSON mentioned in Ben's post is similar to this:
[{"SRC":"http:\/\/example.com\/public","OFFSET":3.0},{"SRC":"http:\/\/example.com\/public","OFFSET":3.0}]
I'd check the key names first. Yes, CF makes them uppercase, and JS doesn't like that sometimes. Check his applyListItems() function and see whether its RegExp finds anything or not.
If that doesn't help, a little Firebug line debugging and console.log will do the trick, I guess.
Looks like the JSON you're creating should be equivalent to his. He is creating an array of structures, where each structure contains the keys "src" and "offset".
He is converting to base64 and binary for streaming purposes, but I don't know how that would work, or if it would be required, for a PHP implementation.
I would use Firebug to figure out exactly where in your JavaScript the error is being thrown. That will tell you more about what exactly the problem is.
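If it helps, a minimal PHP sketch that produces the array-of-structures shape described above (the URLs and offsets are placeholder values):
<?php
// Sketch: building the array-of-structures shape described above in PHP.
$items = array();
foreach (array(5, 6, 7) as $offset) {
    $items[] = array(
        'src'    => 'https://example.com/images/' . $offset . '.jpg', // placeholder URL
        'offset' => (int) $offset, // keep offsets numeric
    );
}
header('Content-Type: application/json');
echo json_encode($items);
?>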
