How exactly does WebSocket fragmentation work? - php

I am working on a WebSocket server in PHP and I have run into a problem. When a WebSocket message is too long, it is split into several fragments. However, I cannot find a WebSocket decoder in PHP that can handle fragmentation of long data, so I decided to implement the decoder myself. I have read the RFC for the WebSocket protocol, but I still don't understand exactly how fragmentation works.
Here are the questions:
When applying fragmentation, every fragment has an independent FIN bit (for example, the last fragment's FIN is set while the other fragments' FIN is zero) and opcode. But does every fragment also have an independent mask bit and masking key?
Does the first fragment's payload length field describe the whole original message, or only the payload of that first fragment, with every fragment carrying its own payload length field?
Hope you can answer! I am quite confused.

Yes, every fragment carries its own mask bit and masking key.
And the payload length field describes the payload of that individual fragment only, not the whole message.
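To make those two points concrete, here is a minimal decoder sketch (my own illustration, not a complete implementation: it assumes the whole message is already in the buffer, ignores control frames interleaved between fragments, and needs 64-bit PHP for the 8-byte length case). Each frame is parsed with its own FIN, opcode, mask bit, masking key, and per-fragment payload length; fragment payloads are concatenated until a frame with FIN set arrives.

```php
<?php
// Decode one WebSocket frame from $buf at $offset. Every frame carries
// its own FIN, opcode, MASK bit, masking key and payload length -- the
// length describes this fragment only.
function decode_frame(string $buf, int $offset = 0): array {
    $b0 = ord($buf[$offset]);
    $b1 = ord($buf[$offset + 1]);
    $fin    = ($b0 >> 7) & 1;
    $opcode = $b0 & 0x0F;
    $masked = ($b1 >> 7) & 1;
    $len    = $b1 & 0x7F;
    $pos    = $offset + 2;
    if ($len === 126) {            // 16-bit extended payload length
        $len = unpack('n', substr($buf, $pos, 2))[1];
        $pos += 2;
    } elseif ($len === 127) {      // 64-bit extended payload length
        $len = unpack('J', substr($buf, $pos, 8))[1];
        $pos += 8;
    }
    $key = '';
    if ($masked) {                 // per-frame masking key
        $key = substr($buf, $pos, 4);
        $pos += 4;
    }
    $payload = substr($buf, $pos, $len);
    if ($masked) {
        for ($i = 0; $i < $len; $i++) {
            $payload[$i] = $payload[$i] ^ $key[$i % 4];
        }
    }
    return ['fin' => $fin, 'opcode' => $opcode,
            'payload' => $payload, 'next' => $pos + $len];
}

// Reassemble a fragmented message: opcode 0x1/0x2 starts it,
// continuation frames (opcode 0x0) extend it, FIN=1 ends it.
function decode_message(string $buf): string {
    $offset  = 0;
    $message = '';
    do {
        $frame    = decode_frame($buf, $offset);
        $message .= $frame['payload'];
        $offset   = $frame['next'];
    } while (!$frame['fin']);
    return $message;
}
```

Note that only the first fragment carries the real opcode (0x1 for text, 0x2 for binary); every following fragment uses the continuation opcode 0x0.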

Related

byte array over $_GET request feasibility?

I hope this will be a relatively quick question.
If I send a byte array via the URL and retrieve it from $_GET in a PHP server-side script, will the URL be capable of transmitting the byte array? Is a URL capable of being long enough for this purpose, or do I need another way to transmit the byte array?
Example of what I'm attempting: http://www.website.com/scrypt.php?image="bytearray"
Better yet, is there a best practice for transmitting this data from, say, an Android app to PHP?
As long as it doesn't exceed the limit for a URL or contain reserved characters that would be interpreted by the CGI...you're all set. Go for it.
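If you do go the $_GET route, the reserved-character problem can be sidestepped by base64url-encoding the bytes before putting them in the URL. A small sketch (the function names are my own, not a standard API):

```php
<?php
// base64url makes arbitrary bytes URL-safe: '+' and '/' are replaced
// with '-' and '_', and the '=' padding is dropped.
function base64url_encode(string $bytes): string {
    return rtrim(strtr(base64_encode($bytes), '+/', '-_'), '=');
}

function base64url_decode(string $text): string {
    // base64_decode tolerates the missing padding in non-strict mode.
    return base64_decode(strtr($text, '-_', '+/'));
}

// Server side (hypothetical parameter name):
// $bytes = base64url_decode($_GET['image'] ?? '');
```

For anything beyond a couple of kilobytes, though, sending the bytes in a POST body is the safer choice, since it avoids URL length limits entirely.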

PHP_NORMAL_READ cutting off the data being sent

I need some help please. I have a problem reading the binary data that a device sends via a socket: I do not receive exactly the data that was sent. I am using this code
$data = @socket_read($read_sock, 2048, PHP_NORMAL_READ);
I am using PHP_NORMAL_READ because it stops reading at "\r\n",
but when I receive the data, it is not exact; I only receive part of the binary data.
The length parameter specifies the maximum length that will be read from the stream. The PHP documentation is a bit misleading on this subject, but what I think it means is that you will get:
less than or exactly 'length' bytes
at least one byte
no '\r' or '\n' in the response, unless it is the only character
Most of the socket APIs you will encounter work this way: they may give you fewer bytes than requested, because more bytes may not be available yet, and the data may arrive in smaller parts than the device sent them in. The solution is to read from the socket repeatedly until you get what you want (in your case, until you get a string ending with a newline).
You may also want to consult http://php.net/manual/en/function.socket-read.php, where the commenters suggest the function is somewhat buggy when used with PHP_NORMAL_READ. It might be worth searching for a socket library for PHP that supports readLine.
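A sketch of that read-repeatedly loop, written around a generic reader callback so the accumulation logic is visible (the "\r\n" terminator and the 2048-byte read size are taken from your description; the rest is an assumption):

```php
<?php
// Accumulate reads until the buffer ends with "\r\n" (assumed to be
// the device's message terminator). The reader is a callback so the
// loop itself is easy to test; a socket read may legally return fewer
// bytes than requested, hence the loop.
function read_until_crlf(callable $read): string {
    $buf = '';
    while (substr($buf, -2) !== "\r\n") {
        $chunk = $read();
        if ($chunk === false || $chunk === '') {
            break; // error, or the peer closed the connection
        }
        $buf .= $chunk;
    }
    return $buf;
}

// With a real socket (hypothetical $read_sock):
// $data = read_until_crlf(fn() => socket_read($read_sock, 2048, PHP_BINARY_READ));
```

Using PHP_BINARY_READ inside the loop avoids the PHP_NORMAL_READ quirks mentioned above, since the terminator check is done in your own code.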

maximum URI length for file_get_contents()

Is there a maximum length for the URI in the file_get_contents() function in PHP?
I suppose there is a maximum length, but you'll be hard pressed to find it. If you do hit the maximum, you're doing something wrong. :)
I haven't been able to find a number for PHP specifically, but MS IIS, Apache and the Perl HTTP::Daemon seem to have limits between 4,000 and 16,384 bytes, PHP will probably be somewhere around there as well.
What you need to consider is not really how much your side can handle, but how much the other server you're querying can handle (which is presumably what you're doing). As such, any URL longer than ~1000 characters is usually already way too long and rarely encountered in the real world.
As others have stated, it is most likely limited by the HTTP protocol.
You can view this answer for more info on that: What is the maximum length of a URL?
HTTP itself places no length limit on the URI, and the manual page for file_get_contents() doesn't mention one either, so I don't think you need to worry about this.
That said, URI length is limited by some browsers and web servers. For example, in IE the length must be less than 2083 characters, while in Firefox it is 65,536. When I tested this against my Apache server on Ubuntu, I found that only URLs of no more than 8182 bytes worked, because of Apache's limit.

PHP <-> JavaScript communication: Am I stuck with ASCII?

I am passing a lot of data between PHP and JavaScript. I am using JSON and json_encode in PHP, but the problem is that I am passing a lot of numbers stored as strings, for example numbers like 1.2345.
Is there a way to pass the data directly as numbers (floats, integers) and not have to convert it to ASCII and then back?
Thanks,
No. HTTP is a byte stream protocol(*); anything that goes down it has to be packed into bytes. You can certainly use a more compact packed binary representation of values if you like, but it's going to be much more work for your PHP to encode and your JS to decode.
Anyhow, for the common case of small numbers, text representations tend to be very efficient. Your example 1.2345 is actually smaller as a string (6 bytes) than a double-precision float (8 bytes).
JSON was invented precisely to allow non-string types to be transferred over the HTTP connection. It's as seamless as you're going to get. Is there any good reason to care that there was a serialise->string->parse step between the PHP float and the JavaScript Number?
(* exposed to JavaScript as a character protocol, since JS has no byte datatype. By setting the charset of the JSON response to iso-8859-1 you can make it work as if it were pure bytes, but the default utf-8 is usually more suitable.)
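To illustrate the "it's as seamless as you're going to get" point: when the PHP values are strings, json_encode quotes them, but casting them to float/int first (or, with some caution, using the JSON_NUMERIC_CHECK flag) emits real JSON numbers that arrive in JavaScript as Number with no extra parsing. A small sketch, assuming PHP 5.3.3+ for the flag:

```php
<?php
// Strings are quoted by json_encode; casting them (or the
// JSON_NUMERIC_CHECK flag) produces real JSON numbers instead.
$row = ['price' => '1.2345', 'qty' => '7'];

echo json_encode($row), "\n";                      // {"price":"1.2345","qty":"7"}
echo json_encode($row, JSON_NUMERIC_CHECK), "\n";  // {"price":1.2345,"qty":7}

// Explicit casting works too, and is more predictable than the flag
// (the flag converts every numeric-looking string, wanted or not):
$typed = ['price' => (float)$row['price'], 'qty' => (int)$row['qty']];
```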
If you didn't want to use JSON, there are other encoding options. The data returned from an HTTP request is an octet stream (and not a 7-bit clean ASCII stream -- if it were, there would be no way to serve UTF-8 encoded documents or binary files, as simple counterexamples).
Some binary serialization/data protocols are ASN.1, Thrift, Google Protocol Buffers, Avro, or, of course, some custom format. The advantage of JSON is "unified human-readable simplicity".
But in the end -- JSON is JSON.
Perhaps of interest to someone: JavaScript Protocol Buffer Implementation

http_post_data adding extra characters in response

Hey guys, I am getting extra characters like '5ae' and '45c' interspersed with the valid data when using http_post_data. The data I am sending is XML, and so is the response. The response contains these weird characters, which makes the XML invalid. If I use fsockopen I do not have this issue. I would really like some input on this.
Your question does not give much detail, but (quite a wild guess, though this reminds me of it) this could be related to Chunked transfer encoding (quoting):
If a Transfer-Encoding header with a value of chunked is specified in an HTTP message, the body of the message is made of an unspecified number of chunks ending with a last, zero-sized, chunk.
Each non-empty chunk starts with the number of octets of the data it embeds (size written in hexadecimal), followed by a CRLF (carriage return and line feed), and the data itself.
The 5ae and 45c you're getting in your data could correspond to the size of each chunk.
If you are trying to send HTTP requests by hand, that might not be such a good idea: HTTP is not such an easy protocol, and you should use existing libraries that will deal with these kinds of problems for you.
For instance, you could take a look at curl -- see curl_setopt for the impressive list of possible options.
Edit : I realize that http_post_data is a function provided by the PECL http extension.
There's a function that might interest you, in that library, to decode chunked data : http_chunked_decode
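For illustration, the dechunking that http_chunked_decode performs can be sketched by hand. This is only a sketch: it assumes well-formed input and ignores chunk extensions and trailers, which a robust parser would have to handle.

```php
<?php
// Decode a chunked HTTP body: each chunk is a hexadecimal length line,
// a CRLF, that many bytes of data, and a trailing CRLF; a zero-sized
// chunk marks the end of the body. The '5ae' and '45c' sequences in
// the question are exactly these hexadecimal size lines.
function chunked_decode(string $body): string {
    $out = '';
    $pos = 0;
    while (($eol = strpos($body, "\r\n", $pos)) !== false) {
        $size = hexdec(substr($body, $pos, $eol - $pos));
        if ($size === 0) {
            break;                     // zero-sized final chunk
        }
        $out .= substr($body, $eol + 2, $size);
        $pos  = $eol + 2 + $size + 2;  // skip data and trailing CRLF
    }
    return $out;
}
```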
Yes, it is a result of Chunked Transfer Encoding.
You can observe this behavior in Fiddler by unchecking 'Chunked Transfer-Encoding' in the Transformer tab of the response pane.
