I ran a security scan on my website and the report shows a security threat for the URL below, saying "HTTP header injection vulnerability in REST-style parameter to /catalog/product/view/id".
The following URL adds the custom header XSaint: test/planting-a-stake/category/99 to the HTTP response (see the last line of the response headers below).
I have tried different solutions with no luck. Can anyone suggest how to prevent the HTTP response headers from being modified?
URL: /catalog/product/view/id/1256/x%0D%0AXSaint:%20test/planting-a-stake/category/99
Response Header:
Cache-Control:max-age=2592000
Content-Encoding:gzip
Content-Length:253
Content-Type:text/html; charset=iso-8859-1
Date:Fri, 26 May 2017 11:27:12 GMT
Expires:Sun, 25 Jun 2017 11:27:12 GMT
Location:https://www.xxxxxx.com/catalog/product/view/id/1256/x
Server:Apache
Vary:Accept-Encoding
XSaint:test/planting-a-stake/category/99
An HTTP header injection vulnerability means that someone can inject data into your application which is then used to insert arbitrary headers into the response (see https://www.owasp.org/index.php/HTTP_Response_Splitting).
In this specific case, the scanner assumes the vulnerability might come from the URI placed in the Location header:
Location:https://www.xxxxxx.com/catalog/product/view/id/1256/x
What you need to ensure is that the data placed into this URI cannot embed line-break characters. To quote the OWASP HTTP Response Splitting page:
CR (carriage return, also given by %0d or \r)
LF (line feed, also given by %0a or \n)
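As an illustration only (not your application's actual code), here is a minimal PHP sketch of the idea: strip CR and LF from any user-influenced value before it goes into a redirect header. The sanitizeHeaderValue() function and the example URL are hypothetical.

// Hypothetical sketch: remove CR/LF from user-controlled data before it reaches a response header.
function sanitizeHeaderValue($value)
{
    // Strip \r and \n so an attacker cannot start a new header line
    // (the %0D%0A in the scanner's URL decodes to exactly these characters).
    return str_replace(array("\r", "\n"), '', $value);
}

$requestPath = $_SERVER['REQUEST_URI'];                   // attacker-influenced data
$safePath    = sanitizeHeaderValue($requestPath);
header('Location: https://www.example.com' . $safePath); // note: recent PHP versions also reject header() values containing newlines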
I use WordPress 4.1.1.
I installed the JSON API plugin.
Strange characters are displayed above the JSON content, and they change every time I refresh the page.
I tried printing an extra character from the plugin code, and it appeared below those figures, so is the problem in WordPress itself?
Please help me understand and remove them, because I can't parse my JSON.
On localhost it works fine with the same settings and data...
The characters are: 7b00c, 78709, 6eb3d... and they change on each refresh.
The strange characters are probably chunk sizes.
Content-Length
When a server-side process sends a response through an HTTP server, the data will typically be stored in a buffer before it is transmitted to the client (browser). If the entire response fits in the buffer in a timely manner, the server will declare the size in a Content-Length: header, and send the response as-is to the client.
Chunked Transfer Coding
If the response does not fit in the buffer, or the server decides to vacate the buffer for other reasons before the full size is known, it will instead send the response in chunks. This is indicated by the Transfer-Encoding: chunked header. Each chunk is preceded by its length in hexadecimal (followed by a CRLF pair). The end of the response is indicated by a 0 chunk-size. The exact syntax is detailed below.
Solution
If you are parsing the HTTP response yourself, there are all sorts of intricacies that you need to consider. Chunked encoding is one of them. You need to check for the Transfer-Encoding: chunked header and assemble the response by parsing and stripping out the chunk-size parts.
It's much easier to use a library such as cURL which will handle all the details for you.
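For illustration, a minimal PHP cURL sketch (the URL is a placeholder, and I'm assuming the endpoint returns JSON); cURL decodes Transfer-Encoding: chunked transparently, so the returned body already has the hex chunk sizes stripped out:

$ch = curl_init('https://example.com/wp-json/get_posts'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);           // return the body instead of printing it
$body = curl_exec($ch);                                   // chunked decoding is handled for you
curl_close($ch);
$data = json_decode($body, true);                         // clean JSON, no stray hex prefixes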
One hack to avoid chunks is to send the response using HTTP/1.0 rather than HTTP/1.1. In HTTP/1.0, the length is indicated either by the Content-Length: header, or by closing the connection.
Syntax
This is the syntax for chunked bodies specified in RFC 7230 - "Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing" (ABNF notation):
4.1. Chunked Transfer Coding
chunked-body   = *chunk
                 last-chunk
                 trailer-part
                 CRLF

chunk          = chunk-size [ chunk-ext ] CRLF
                 chunk-data CRLF
chunk-size     = 1*HEXDIG
last-chunk     = 1*("0") [ chunk-ext ] CRLF

chunk-data     = 1*OCTET ; a sequence of chunk-size octets
trailer-part   = *( header-field CRLF )
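If you really do need to dechunk a raw body yourself, here is a rough PHP sketch that follows the grammar above; it ignores chunk extensions and the trailer, and assumes $raw contains only the chunked body (headers already stripped):

function dechunk($raw)
{
    $decoded = '';
    $offset  = 0;
    while (true) {
        $lineEnd = strpos($raw, "\r\n", $offset);
        if ($lineEnd === false) {
            break;                                    // malformed body
        }
        // chunk-size is hexadecimal; anything after ';' is a chunk-ext we ignore
        $sizeLine = substr($raw, $offset, $lineEnd - $offset);
        $size     = hexdec(trim(strtok($sizeLine, ';')));
        if ($size === 0) {
            break;                                    // last-chunk reached
        }
        $decoded .= substr($raw, $lineEnd + 2, $size);
        $offset   = $lineEnd + 2 + $size + 2;         // skip chunk-data and its trailing CRLF
    }
    return $decoded;
}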
My database stores some texts which I have to fetch with AJAX. This works fine, but only when the text does not contain special characters such as ë or ä. I found some articles about this topic which told me to change the charset of the AJAX request, but none of them worked for me.
When I open Firebug, it shows the following headers:
Antwoordheaders (Dutch for response headers)
Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection close
Content-Length 94
Content-Type text/html; charset=ISO-8859-15
Date Wed, 26 Sep 2012 09:52:56 GMT
Expires Thu, 19 Nov 1981 08:52:00 GMT
Pragma no-cache
Server Apache
X-Powered-By PleskLin
Verzoekheaders (Dutch for request headers)
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language nl,en-us;q=0.7,en;q=0.3
Authorization Basic c3BvdGlkczp6SkBVajRrcw==
Connection keep-alive
Content-Type text/html; charset=ISO-8859-15
Cookie __utma=196329838.697518114.1346065716.1346065716.1346065716.1; __utmz=196329838.1346065716.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=2h4vu8gu9v8fe5l1t3ad5agp86
DNT 1
Host www.spotids.com
Referer http://www.spotids.com/private/?p=16
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0) Gecko/20100101 Firefox/14.0.1
Both sets of headers specify charset=ISO-8859-15, which should include characters like ë, but it still doesn't work for me.
The code I used for this (PHP):
$newresult = mysql_query($query2);
$result = array();
while ($row = mysql_fetch_array($newresult)) {
    array_push($result, $row);
}
$jsonText = json_encode($result);
echo $jsonText;
Make sure you set the headers to UTF-8:
header('Content-Type: application/json; charset=utf-8');
Make sure your database connection is made with UTF-8 encoding before running any queries:
$query = mysql_query("SET NAMES 'UTF8'");
As far as I know, json_encode escapes any characters that cannot be represented in plain ASCII, and you should decode that JSON when handling the response.
Also, try to move to PDO, as the mysql_* functions are deprecated.
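If you do switch, a minimal PDO sketch with the connection forced to UTF-8 might look like this (host, database name, credentials, and the query are placeholders):

$pdo = new PDO(
    'mysql:host=localhost;dbname=mydb;charset=utf8',   // charset=utf8 in the DSN replaces SET NAMES
    'dbuser',
    'dbpass'
);
$stmt = $pdo->query('SELECT * FROM texts');             // placeholder query
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

header('Content-Type: application/json; charset=utf-8');
echo json_encode($rows);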
From the JSON RFC (RFC 4627): "JSON text SHALL be encoded in Unicode. The default encoding is UTF-8."
Use mb_convert_encoding or iconv to change string encoding.
And send the correct header:
header('Content-Type: application/json;charset=utf-8');
echo json_encode($data);
Also verify the Content-Type meta tag in your HTML:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
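Putting those pieces together, a rough sketch based on the code in the question (assuming the rows really come back from the database as ISO-8859-15, as the response headers suggest):

$result = array();
while ($row = mysql_fetch_assoc($newresult)) {
    foreach ($row as $key => $value) {
        // Convert each field from ISO-8859-15 to UTF-8 so json_encode receives valid UTF-8;
        // iconv('ISO-8859-15', 'UTF-8', $value) would work as well.
        $row[$key] = mb_convert_encoding($value, 'UTF-8', 'ISO-8859-15');
    }
    $result[] = $row;
}

header('Content-Type: application/json; charset=utf-8');
echo json_encode($result);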
I want to download this link from Google and save it to a txt file with PHP.
When I open it in a browser, the Unicode is correct and everything looks right, but when I fetch it with cURL or file_get_contents the result contains broken characters.
What is the difference, and how can I solve it?
Downloaded by browser:
[[["سلام","hello","",""]],[["interjection",["سلام","هالو","الو"],[["سلام",["hello","hi","aloha","all hail"]],["هالو",["hallo","hello","halloo"]],["الو",["hello"]]]]],"en",,[["سلام",[5],0,0,1000,0,1,0]],[["hello",4,,,""],["hello",5,[["سلام",1000,0,0],["خوش",0,0,0],["میهمان گرامی",0,0,0],["خوش آمدید",0,0,0],["درود کاربر",0,0,0]],[[0,5]],"hello"]],,,[["en"]],65]
Downloaded by the following PHP script:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<?php
$t = file_get_contents("http://translate.google.com/translate_a/t?client=t&hl=en&sl=auto&tl=fa&multires=1&prev=btn&ssel=0&tsel=3&uptl=fa&alttl=en&sc=1&text=hello");
$f = fopen("t.txt", "w+");
fwrite($f, $t);
fclose($f);
?>
</body></html>
[[["ÓáÇã","hello","",""]],[["interjection",["ÓáÇã","åÇáæ","Çáæ"],[["ÓáÇã",["hello","hi","aloha","all hail"]],["åÇáæ",["hallo","hello","halloo"]],["Çáæ",["hello"]]]]],"en",,[["ÓáÇã",[5],0,0,1000,0,1,0]],[["hello",4,,,""],["hello",5,[["ÓáÇã",1000,0,0],["ÎæÔ",0,0,0],["ã\u06CCåãÇä ÑÇã\u06CC",0,0,0],["ÎæÔ ÂãÏ\u06CCÏ",0,0,0],["ÏÑæÏ ÇÑÈÑ",0,0,0]],[[0,5]],"hello"]],,,[["en"]],4]
The response headers are:
HTTP/1.1 200 OK
Pragma: no-cache
Date: Fri, 25 May 2012 22:29:12 GMT
Expires: Fri, 25 May 2012 22:29:12 GMT
Cache-Control: private, max-age=600
Content-Type: text/javascript; charset=UTF-8
Content-Language: fa
Set-Cookie: PREF=ID=b6c08a0545f50594:TM=1337984952:LM=1337984952:S=Sf1xcow2qPZrFeu0; expires=Sun, 25-May-2014 22:29:12 GMT; path=/; domain=.google.com
X-Content-Type-Options: nosniff
Content-Disposition: attachment
Server: HTTP server (unknown)
X-XSS-Protection: 1; mode=block
Transfer-Encoding: chunked
Add the parameters ie=UTF-8 and oe=UTF-8 to the query string of the URL:
$t = file_get_contents("http://translate.google.com/translate_a/t?ie=UTF-8&oe=UTF-8&client=t&hl=en&sl=auto&tl=fa&multires=1&prev=btn&ssel=0&tsel=3&uptl=fa&alttl=en&sc=1&text=hello");
This worked for me once, when I was about to throw a lot of code in the garbage! Maybe it will help you too:
iconv( 'CP1252', 'UTF-8', $string);
Echoing what you get from file_get_contents straight into the PHP output should work fine, as you are going from a UTF-8 JSON response to a UTF-8 HTML response. It works for me with the given URL.
When you store the response to a file, you then have to worry about what encoding the tools you use to read the file are working in. Just calling fwrite is fine as long as the text editor you view it in knows the output is UTF-8. On Windows, Notepad may instead try to read it in the locale-dependent default ('ANSI') codepage, which won't be UTF-8. On a Western European install that would be code page 1252, and you'd get output like Ø³Ù„Ø§Ù… for سلام.
(One way around that is to put a UTF-8 fake-BOM at the front of the file with fwrite($f, "\xef\xbb\xbf");. This is a bit dodgy because UTF-8 doesn't need a Byte Order Mark (its byte order is fixed) and it breaks UTF-8's ASCII-compatibility, but Windows tools like fake-BOMs. The other way around it is to get a better text editor that allows you to default to handling files as UTF-8.)
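For completeness, a small sketch of that workaround, using the URL with the ie/oe parameters from the earlier answer; the fake-BOM line is only there to keep Windows editors happy:

$t = file_get_contents("http://translate.google.com/translate_a/t?ie=UTF-8&oe=UTF-8&client=t&hl=en&sl=auto&tl=fa&multires=1&prev=btn&ssel=0&tsel=3&uptl=fa&alttl=en&sc=1&text=hello");
$f = fopen("t.txt", "w");
fwrite($f, "\xef\xbb\xbf");   // optional UTF-8 fake-BOM so Notepad and friends guess the encoding
fwrite($f, $t);               // the bytes themselves are already UTF-8
fclose($f);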
You've got something slightly different here, as ÓáÇã is what you get when you save سلام in the Windows default Arabic encoding (code page 1256) and then read it in the Windows default Western encoding (code page 1252). This implies there's some kind of extra store-and-load step involved in your testing, that's messing up the encoding.
If it's anything to do with Windows command line tools you might as well give up, because the Command Prompt and MSVCRT apps don't really play well with Unicode at all.
Hi guys, below is what I receive from a cURL response:
HTTP/1.1 200 OK
X-Account-Object-Count: 4
X-Account-Bytes-Used: 3072798
X-Account-Container-Count: 3
Accept-Ranges: bytes
Content-Length: 15
Content-Type: text/plain; charset=utf-8
Date: Thu, 12 Jan 2012 04:07:33 GMT
a1
abc
testing
I found a good function which parses the headers, so grabbing the key/value pairs from the headers is not a problem. The problem I have is how to grab the names in the body:
a1
abc
testing
I think maybe a regex could do the job, but I don't know if regex is the best approach, or whether there is another function that can return the headers and the body separately.
Any help is appreciated. Thanks.
Update
Now I am getting the response as:
HTTP/1.1 200 OK
X-Account-Object-Count: 4
X-Account-Bytes-Used: 3072798
X-Account-Container-Count: 3
Accept-Ranges: bytes
Content-Length: 115
Content-Type: application/json; charset=utf-8
Date: Thu, 12 Jan 2012 04:47:36 GMT
[{"name":"a1","count":0,"bytes":0},{"name":"abc","count":0,"bytes":0},{"name":"testing","count":4,"bytes":3072798}]
So the names are now in JSON.
Seems like it should be as simple as:
list($headers, $body) = explode("\r\n\r\n", $response, 2); // HTTP separates headers from body with a blank CRLF line
$bodyValues = explode("\r\n", $body);
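Since the updated body is JSON, a small follow-up sketch (using the $body from the split above):

$names = array();
foreach (json_decode($body, true) as $container) {
    $names[] = $container['name'];   // yields "a1", "abc", "testing"
}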