I use WordPress 4.1.1.
I tried to install the JSON API plugin.
Strange letters are displayed above the JSON content, and they change every time I refresh the page.
I tried echoing another letter from the plugin's code, and it appeared below these figures, so is the problem in the WordPress system itself?
Please help me understand and remove them, because I can't parse my JSON.
On localhost it works fine with the same settings and data...
The letters are: 7b00c, 78709, 6eb3d... and they change on every refresh.
The strange characters are probably chunk sizes.
Content-Length
When a server-side process sends a response through an HTTP server, the data will typically be stored in a buffer before it is transmitted to the client (browser). If the entire response fits in the buffer in a timely manner, the server will declare the size in a Content-Length: header, and send the response as-is to the client.
Chunked Transfer Coding
If the response does not fit in the buffer, or the server decides to vacate the buffer for other reasons before the full size is known, it will instead send the response in chunks. This is indicated by the Transfer-Encoding: chunked header. Each chunk is preceded by its length in hexadecimal (followed by a CRLF pair). The end of the response is indicated by a 0 chunk-size. The exact syntax is detailed below.
Solution
If you are parsing the HTTP response yourself, there are all sorts of intricacies that you need to consider. Chunked encoding is one of them. You need to check for the Transfer-Encoding: chunked header and assemble the response by parsing and stripping out the chunk-size parts.
It's much easier to use a library such as cURL which will handle all the details for you.
One hack to avoid chunks is to send the response using HTTP/1.0 rather than HTTP/1.1. In HTTP/1.0, the length is indicated either by the Content-Length: header, or by closing the connection.
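For comparison, here is a minimal sketch of fetching such a JSON endpoint with PHP's cURL extension (the URL is just a placeholder for your own JSON API URL); cURL removes the chunk-size framing before handing the body back, so json_decode sees clean JSON:

<?php
// cURL strips the Transfer-Encoding: chunked framing for you.
$ch = curl_init('http://example.com/?json=get_recent_posts'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);               // return the body instead of printing it
$body = curl_exec($ch);
curl_close($ch);

$data = json_decode($body, true); // no chunk-size lines left to strip
var_dump($data);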
Syntax
This is the syntax for chunked bodies specified in RFC 7230 - "Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing" (ABNF notation):
4.1. Chunked Transfer Coding
chunked-body = *chunk
               last-chunk
               trailer-part
               CRLF

chunk        = chunk-size [ chunk-ext ] CRLF
               chunk-data CRLF
chunk-size   = 1*HEXDIG
last-chunk   = 1*("0") [ chunk-ext ] CRLF

chunk-data   = 1*OCTET ; a sequence of chunk-size octets
trailer-part = *( header-field CRLF )
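Purely as an illustration of that grammar (the library approach above is still the recommended one), here is a rough PHP sketch of decoding a chunked body by hand; it assumes there are no trailer headers and simply discards any chunk extensions:

<?php
function decode_chunked($body)
{
    $decoded = '';
    $offset  = 0;

    while (($eol = strpos($body, "\r\n", $offset)) !== false) {
        // chunk-size is hexadecimal; ignore any chunk extension after ';'
        $sizeLine = substr($body, $offset, $eol - $offset);
        $parts    = explode(';', $sizeLine, 2);
        $size     = hexdec(trim($parts[0]));
        $offset   = $eol + 2;

        if ($size === 0) {
            break; // last-chunk reached
        }

        $decoded .= substr($body, $offset, $size);
        $offset  += $size + 2; // skip chunk-data and its trailing CRLF
    }

    return $decoded;
}

// Example wire format: two chunks ("Wiki" and "pedia") followed by the 0 last-chunk.
echo decode_chunked("4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"); // prints "Wikipedia"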
While evaluating the performance of PHP frameworks I came across a strange problem:
Sending JSON as application/json seems to be much slower than sending it with no extra header (which appears to fall back to text/html).
Example #1 (application/json)
header('Content-Type: application/json');
echo json_encode($data);
Example #2 (text/html)
echo json_encode($data);
Testing with apache bench (ab -c10 -n1000) gives me:
Example #1: 350 #/sec
Example #2: 440 #/sec
which shows that setting the extra header seems to be a little bit slower.
But:
Getting the same JSON via "ajax" (jQuery.getJSON('url', function(j){console.log(j)});) makes the difference much larger (timing as seen in the Chrome Web Inspector):
Example #1: 340 ms / request
Example #2: 980 ms / request
What is the reason for this difference?
Is there a reason to use application/json despite the performance difference?
I'll take up the last part of the question:
Is there a reason to use application/json despite the performance
difference?
Answer: Yes
Why:
1) With text/html, malformed JSON will often go uncaught until you try parsing it. With application/json it will fail right away, so you can easily debug whenever the JSON is malformed.
2) If you are viewing JSON in the browser, having the correct header type will present it with user-friendly formatting; text/html will show it more as a blob.
3) If you are consuming this JSON on your web page, application/json will immediately be converted into a JS object, and you can access its members as obj.firstnode.childnode etc.
4) The callback feature can work with application/json, but not with text/html.
Note:
Using gzip will largely alleviate the performance problem. text/html will still be a bit faster, but it is not the recommended way to fetch JSON objects.
I would like to see more insight on the performance side, though. The header length is definitely not causing the performance issue; it is more likely to do with how your web server processes that header.
Does your server handle gzip/deflate differently depending on the content type? Mine does. I believe ab does not accept gzip by default (you can enable it in ab by adding a custom header with the -H flag), but Chrome will always say it accepts gzip.
You can use curl test to see if the files are different sizes:
curl http://www.example.com/whatever --silent -H "Accept-Encoding: gzip,deflate" --write-out "size_download=%{size_download}\n" --output /dev/null
You can also look at the headers to see if gzipping is applied:
curl http://www.example.com/whatever -I -H "Accept-Encoding: gzip,deflate"
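If those checks point at compression, a rough sketch of forcing gzip output on the PHP side (assuming the zlib extension is available; $data is the same array as in the examples above) lets you compare like with like:

<?php
// Compress the output when the client sends Accept-Encoding: gzip (requires zlib).
ob_start('ob_gzhandler');
header('Content-Type: application/json');
echo json_encode($data);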
I want to download this link from Google, which generates a txt file, using PHP.
When I download it with the browser the Unicode is correct and everything is fine, but when I do it with curl or file_get_contents it contains garbled characters.
What is the difference, and how should I solve it?
Downloaded by browser:
[[["سلام","hello","",""]],[["interjection",["سلام","هالو","الو"],[["سلام",["hello","hi","aloha","all hail"]],["هالو",["hallo","hello","halloo"]],["الو",["hello"]]]]],"en",,[["سلام",[5],0,0,1000,0,1,0]],[["hello",4,,,""],["hello",5,[["سلام",1000,0,0],["خوش",0,0,0],["میهمان گرامی",0,0,0],["خوش آمدید",0,0,0],["درود کاربر",0,0,0]],[[0,5]],"hello"]],,,[["en"]],65]
Downloaded by the following PHP script:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<?php
$t = file_get_contents("http://translate.google.com/translate_a/t?client=t&hl=en&sl=auto&tl=fa&multires=1&prev=btn&ssel=0&tsel=3&uptl=fa&alttl=en&sc=1&text=hello");
$f = fopen("t.txt", "w+");
fwrite($f, $t);
fclose($f);
?>
</body></html>
[[["ÓáÇã","hello","",""]],[["interjection",["ÓáÇã","åÇáæ","Çáæ"],[["ÓáÇã",["hello","hi","aloha","all hail"]],["åÇáæ",["hallo","hello","halloo"]],["Çáæ",["hello"]]]]],"en",,[["ÓáÇã",[5],0,0,1000,0,1,0]],[["hello",4,,,""],["hello",5,[["ÓáÇã",1000,0,0],["ÎæÔ",0,0,0],["ã\u06CCåãÇä ÑÇã\u06CC",0,0,0],["ÎæÔ ÂãÏ\u06CCÏ",0,0,0],["ÏÑæÏ ÇÑÈÑ",0,0,0]],[[0,5]],"hello"]],,,[["en"]],4]
The response headers are:
HTTP/1.1 200 OK
Pragma: no-cache
Date: Fri, 25 May 2012 22:29:12 GMT
Expires: Fri, 25 May 2012 22:29:12 GMT
Cache-Control: private, max-age=600
Content-Type: text/javascript; charset=UTF-8
Content-Language: fa
Set-Cookie: PREF=ID=b6c08a0545f50594:TM=1337984952:LM=1337984952:S=Sf1xcow2qPZrFeu0; expires=Sun, 25-May-2014 22:29:12 GMT; path=/; domain=.google.com
X-Content-Type-Options: nosniff
Content-Disposition: attachment
Server: HTTP server (unknown)
X-XSS-Protection: 1; mode=block
Transfer-Encoding: chunked
Add the parameters ie=UTF-8 and oe=UTF-8 to the query string of the URL:
$t = file_get_contents("http://translate.google.com/translate_a/t?ie=UTF-8&oe=UTF-8&client=t&hl=en&sl=auto&tl=fa&multires=1&prev=btn&ssel=0&tsel=3&uptl=fa&alttl=en&sc=1&text=hello");
This worked for me once, when I was about to throw lots of code in the garbage! Maybe it will help you too:
iconv( 'CP1252', 'UTF-8', $string);
Echoing what you get from file_get_contents into the PHP output should work fine, as you are going from a UTF-8 JSON response to a UTF-8 HTML response. It works for me with the given URL.
When you store the response to a file, you then have to worry about what encoding the tools you use to read the file are working in. Just fwrite-ing it is fine as long as the text editor you view it in knows the output is UTF-8. On Windows, Notepad may instead try to read it in the locale-dependent default ('ANSI') code page, which won't be UTF-8. On a Western European install that would be code page 1252, and you'd get output like Ø³Ù„Ø§Ù… for سلام.
(One way around that is to put a UTF-8 fake-BOM at the front of the file with fwrite($f, "\xef\xbb\xbf");. This is a bit dodgy because UTF-8 doesn't need a Byte Order Mark (its byte order is fixed) and it breaks UTF-8's ASCII-compatibility, but Windows tools like fake-BOMs. The other way around it is to get a better text editor that allows you to default to handling files as UTF-8.)
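Put together with the script from the question, the fake-BOM workaround is just a small sketch like this (it only helps viewers such as Notepad that use the BOM to guess the encoding):

<?php
$f = fopen('t.txt', 'w+');
fwrite($f, "\xEF\xBB\xBF"); // UTF-8 fake-BOM so Windows editors pick the right encoding
fwrite($f, $t);             // $t is the UTF-8 body from file_get_contents()
fclose($f);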
You've got something slightly different here, as ÓáÇã is what you get when you save سلام in the Windows default Arabic encoding (code page 1256) and then read it in the Windows default Western encoding (code page 1252). This implies there's some kind of extra store-and-load step involved in your testing, that's messing up the encoding.
If it's anything to do with Windows command line tools you might as well give up, because the Command Prompt and MSVCRT apps don't really play well with Unicode at all.
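If you do end up with text that has been through that cp1256-saved / cp1252-read round trip, a hypothetical repair (assuming the iconv extension, and that the damage really is exactly this pair of code pages) looks like:

<?php
$broken   = 'ÓáÇã';                             // what سلام looks like after the round trip
$bytes    = iconv('UTF-8', 'CP1252', $broken);  // recover the raw cp1256 bytes
$repaired = iconv('CP1256', 'UTF-8', $bytes);   // decode them as Arabic
echo $repaired;                                 // سلام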
I am parsing emails with Zend_Mail, and strangely some content gets truncated without an obvious reason and malforms the email parts.
For example
Content-Disposition: attachment; filename="file.sdv"
DQogICAgICBTT05FO0xBTkRJTkdTREE7U0FMR1NEQVRPIDtOQVNKIDtSRURTS0FQICAgICAgICAg
ICAgIDsgRklTS0VTTEFHO1BSRVNFUlYgICA7ICBUSUxTVEFORDsgU1TYUlJFTFNFOyAgS1ZBTElU
RVQ7T01TVFlQRSAgO01JTlNURVBSSVM7ICAgICBWRVJESTsgICBLVkFOVFVNOyAgUlVORFZFS1Qg
IA0KLS0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS07LS0tLS0tLS0tLS0tLS0t
LS0tLS07LS0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS0tLS0tLTstLS0tLS0t
LS0tOy0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS0tLS0tLTstLS0tLS0tLS0t
ICANCiAgICAgICAgIDA7MjAxMC4wOS4wODsyMDEwLjA5LjA4O05vcnNrO0dhcm4gICAgICAgICAg
ICAgICAgOyAgICAgIDEwMjE7RkVSU0sgICAgIDsgICAgICAgMjEwOyAgIDQwMjA5OTk7ICAgICAg
ICAyMDtFZ2Vub3ZlcnQ7ICAgICAgICAgIDsgICAzMDcyLDE2OyAgICAgICAyMTE7ICAgICAyNTMs
MiAgDQogICAgICAgICAwOzIwMTAuMDkuMDg7MjAxMC4wOS4wODtOb3JzaztHYXJuICAgICAgICAg
Gets truncated to
Content-Disposition: attachment; filename="file.sdv"
DQogICAgICBTT05FO0xBTkRJTkdTREE7U0FMR1NEQVRPIDtOQVNKIDtSRURTS0FQICAgICAgICAg
ICAgIDsgRklTS0VTTEFHO1BSRVNFUlYgICA7ICBUSUxTVEFORDsgU1TYUlJFTFNFOyAgS1ZBTElU
RVQ7T01TVFlQRSAgO01JTlNURVBSSVM7ICAgICBWRVJESTsgICBLVkFOVFVNOyAgUlVORFZFS1Qg
IA0KLS0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS07LS0tLS0tLS0tLS0tLS0t
LS0tLS07LS0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS0tLS0tLTstLS0tLS0t
LS
A var_dump on each line shows this:
string(78) "DQogICAgICBTT05FO0xBTkRJTkdTREE7U0FMR1NEQVRPIDtOQVNKIDtSRURTS0FQICAgICAgICAg
"
string(78) "ICAgIDsgRklTS0VTTEFHO1BSRVNFUlYgICA7ICBUSUxTVEFORDsgU1TYUlJFTFNFOyAgS1ZBTElU
"
string(78) "RVQ7T01TVFlQRSAgO01JTlNURVBSSVM7ICAgICBWRVJESTsgICBLVkFOVFVNOyAgUlVORFZFS1Qg
"
string(78) "IA0KLS0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS07LS0tLS0tLS0tLS0tLS0t
"
string(78) "LS0tLS07LS0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS0tLS0tLTstLS0tLS0t
"
string(5) "LS)
"
string(17) "TAG5 OK Success
"
Or, in another email, at:
DQogICAgICBTT05FO0xBTkRJTkdTREE7U0FMR1NEQVRPIDtOQVNKIDtSRURTS0FQICAgICAgICAg
ICAgIDsgRklTS0VTTEFHO1BSRVNFUlYgICA7ICBUSUxTVEFORDsgU1TYUlJFTFNFOyAgS1ZBTElU
RVQ7T01TVFlQRSAgO01JTlNURVBSSVM7ICAgICBWRVJESTsgICBLVkFOVFVNOyAgUlVORFZFS1Qg
IA0KLS0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS07LS0tLS0tLS0tLS0tLS0t
LS0tLS07LS0tLS0tLS0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS0tLS0tLTstLS0tLS0t
LS0tOy0tLS0tLS0tLTstLS0tLS0tLS0tO
I cannot figure out why it stops there. The transmission should have stopped only at the end of the line. This is the line that gets the string from the IMAP server:
$line = @fgets($this->_socket);
The encoded text contains a string like the one below, but again it is truncated at various points in different emails.
----------;----------;----------;-----;--------------------;----------;----------;--
I've tried adding a size to fgets(), but with no result.
I also enabled/disabled the "auto_detect_line_endings" php.ini setting, again with no result.
I've also opened a bug report with ZF, although the error does not seem to be in the library.
Do you see anything strange with this encoded string?
UPDATE
Further investigation shows that the emails get truncated after 584 characters. I still don't know why.
I sent a question to Google as well. See here.
A bad email's headers:
Delivered-To: email#removed.com
Received: by 10.216.3.208 with SMTP id 58cs248812weh;
Fri, 20 Nov 2009 05:14:14 -0800 (PST)
Received: by 10.204.153.217 with SMTP id l25mr1285471bkw.108.1258722853863;
Fri, 20 Nov 2009 05:14:13 -0800 (PST)
Return-Path: <>
Received: from MTX4.mbn1.net (mtx4.mbn1.net [213.188.129.252])
by mx.google.com with SMTP id 2si1800716bwz.60.2009.11.20.05.14.12;
Fri, 20 Nov 2009 05:14:13 -0800 (PST)
Received-SPF: pass (google.com: best guess record for domain of MTX4.mbn1.net designates 213.188.129.252 as permitted sender) client-ip=213.188.129.252;
Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of MTX4.mbn1.net designates 213.188.129.252 as permitted sender) smtp.mail=
Resent-From: <email#removed.com>
Content-Type: multipart/mixed; boundary="===============1703099044=="
MIME-Version: 1.0
From: <email#removed.com>
To: <email#removed.com>
CC:
Subject: some subject
Message-ID: <FLYNDRElQ080Gxw8Zw500000f46email#removed.com>
X-OriginalArrivalTime: 20 Nov 2009 13:14:08.0121 (UTC) FILETIME=[5792C690:01CA69E3]
Date: Fri, 20 Nov 2009 14:14:08 +0100
X-STA-Metric: 0 (engine=030)
X-STA-NotSpam: tlf: vedlagt skip:__ 40 fil cc:2**0
X-STA-Spam: header:MIME-Version: charset:us-ascii header:Subject:1 to:2**0 header:From:1
X-BTI-AntiSpam: score:0,sta:0/030,dnsbl:passed,sw:off,bsn:38/passed,spf:off,bsctr:passed/1,dk:off,pbmf:none,ipr:0/3,trusted:no,ts:no,bs:no,ubl:passed
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
Resent-Message-Id: <19740416124736.CF5804B33EF632B0email#removed.com>
Resent-Date: Fri, 20 Nov 2009 14:14:11 +0100 (CET)
--===============1703099044==
Content-Type: application/octet-stream
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="file.sdv"
DQpHUlVQUEVOQVZOICAgICAgICAgIDtLSthQRTtQUk9EQU5MO1BBS0tFTlI7TU9UVEFLTkFWTiAg
ICAgICAgICAgICAgICAgICAgO1NPTjtMQU5ESU5HU0RBO1NBTEdTREFUTyA7TkFTSiA7UkVEU0tB
UCAgIDtGSVNLRVNMQUcgO1BSRVNFUlYgICA7VElMU1RBTkQ7U1TYUlJFTFM7S1ZBTElURVQ7TUlO
U1RFUFJJUzsgICAgICAgIFZFUkRJOyAgICAgS1ZBTlRVTTsgICAgUlVORFZFS1QgICAgDQotLS0t
LS0tLS0tLS0tLS0tLS0tLTstLS0tLTstLS0tLS0tOy0tLS0tLS07LS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tOy0tLTstLS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS07LS0tLS0tLS0tLTst
LS0tLS0tLS0tOy0tLS0tLS0tLS07LS0tLS0tLS07LS0tLS0tLS07LS0tLS0tLS07LS0tLS0tLS0t
LTstLS0tLS0tLS0tLS0tOy0tLS0tLS0tLS0tLTstLS0tLS0tLS0tLS0gICAgDQpMb3JlbnR6ZW4g
....
For those interested in an answer and not in the (expired) bounty, here are more clues.
Gmail is returning a short value in response to RFC822.SIZE, which can lead to truncated messages. (They are off by one byte for each header line, apparently not counting two characters for CR/LF.)
I think you're looking in the wrong place.
The imap server gives you the mail message truncated, and then returns its status line TAG5 OK Success.
I don't see how your (or PHP's) handling of the socket could make a few KB worth of stream disappear and then magically fix the stream right before this status line.
So either the message itself is truncated (have you verified the message contents some other way?) or the IMAP server is just broken.
The first things I would do, are:
find a sufficiently quiet environment for your project, where you can strace -f -s 10240 -p <pid> Apache's process to verify the socket interaction (assuming a Linux/Apache environment)
and/or: use tcpdump, ethereal or equivalent to check what's coming in on the line
My guess is that you will see the exact same truncated strings coming in on the wire. Meaning you can shift your focus to the imap server.
Reassuring yourself that you're looking in the right place can save a lot of time.
1: Try removing the @ error suppression for more verbosity.
2: Try using fread (http://www.php.net/manual/en/function.fread.php) instead of fgets.
This might have something to do with the IMAP server, because I see TAG5 OK Success as a response even though it's not supposed to be there.
Have you tried issuing another fgets to see if you get the rest of the data? You may be retrieving a multi-part email, which would require multiple requests.
Regardless, you are using functions designed for file access on a network. Usually this works fine, but depending on the network, issues can arise. For example, you can use file_get_contents to retrieve a web page, but if the site issues a redirect it fails, whereas curl handles it much more successfully.
If you truly want to read the network socket, you should try socket_read. That is designed with the network in mind, like curl.
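For example, a rough sketch of reading a known number of bytes from a network stream (read_exact is a hypothetical helper, not part of Zend_Mail), looping because a single fread may return less than was asked for:

<?php
function read_exact($socket, $length)
{
    $data = '';
    while (strlen($data) < $length && !feof($socket)) {
        $chunk = fread($socket, $length - strlen($data));
        if ($chunk === false || $chunk === '') {
            break; // read error or the connection closed early
        }
        $data .= $chunk;
    }
    return $data;
}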
I don't know Zend and have forgotten most of my PHP, but I have worked with MIME and HTTP before (in C++).
I suggest you start by looking for a way to add a Content-Length header entry. It gives the "message decoder/loader" a hint to expect a certain size of content (message payload). (I'm not sure if IMAP does that.)
In the code above I would try to convince fgets to read a specific amount of expected data from the network. It could be that the data is buffered or not yet sent over the network (asynchronous communication) and fgets only reads an internal buffer, thus stopping before the whole message has been read.
To see if this is the case, send a small message that falls under your "584 breaking point".
Do some network tracing to see if all the data actually flows. (You would probably need to do some local setup.)
Is the code you are referring to here?
Most likely some of your server hardware is compromised, so you may want to replace it completely or just change the RAM modules or disk drives. I have some experience with web- and mail-based encoding, and I can confirm that a base64-encoded string is very secure; at least it uses a texture mapping algorithm.
Why/when does one have to use CRLFs at the end of a header in PHP?
Here is one example (it's not necessarily correct):
header("method: POST\r\n");
header('Host: '.get_option('transact_url')."\r\n");
header('Content-type: application/x-www-form-urlencoded');
header('Content-length: '.strlen($transaction)."\r\n");
header($transaction."\r\n\r\n");
header("Connection: close\r\n\r\n");
header("Location: ".$key_client_url."\r\n");
You should never add manual line breaks inside header(). The current implementation removes line breaks, so you're safe, but this could change in the future (although there's no reason why it should).
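For example, a sketch of the response headers from the question without the manual CRLFs; header() terminates each line itself:

<?php
header('Content-Type: application/x-www-form-urlencoded');
header('Content-Length: ' . strlen($transaction)); // $transaction as in the question
header('Location: ' . $key_client_url);            // $key_client_url as in the question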
If it's PHP, this code is nonsense.
The header() function is used to send response headers, while some of these headers are request headers.
You are seeing this code because whoever wrote it had no clue.
I got my log-ins all mixed up, so I'm posting this as an answer, although it's really a comment for halfdan. Feel free to correct it if someone can.
link
I saw that in the link and nobody mentioned what you've just told me, so I thought it had something to do with Linux line endings. Granted, it was the only time I saw "\r\n" in header().
Halfdan is correct.
Here is the explanation.
The request line and headers must all end with CRLF (that is, a carriage return followed by a line feed). The empty line must consist of only CRLF and no other whitespace.
Request = Request-Line              ; Section 5.1
          *(( general-header        ; Section 4.5
           | request-header         ; Section 5.3
           | entity-header ) CRLF)  ; Section 7.1
          CRLF
          [ message-body ]          ; Section 4.3
source: w3c.org - Hypertext Transfer Protocol -- HTTP/1.1
In PHP 4 there was a way to add \r\n to header(), which was a vulnerability that could be exploited using CRLF injection attacks; see PHP HTTP Header Multiple Vulnerabilities.