I need some help please. I have a problem reading binary data sent by a device via socket; I cannot receive the exact data that was sent. I am using this code:
$data = @socket_read($read_sock, 2048, PHP_NORMAL_READ);
I am using PHP_NORMAL_READ because it stops reading at "\r\n".
But when I read, the data is not exact; I only receive part of the binary data.
The length parameter specifies the maximum length that will be read from the stream. The PHP documentation is a bit misleading on this subject, but what I think it means is that you will get:
less than or exactly 'length' bytes
at least one byte
no '\r' or '\n' in the response, unless it is the only character
Most of the socket APIs you encounter work this way: they may give you fewer bytes than requested, because more bytes may not be available yet, and the data may arrive in smaller parts than the device sent them in. The solution is to read from the socket repeatedly until you get what you want (in your case, until you get a string ending with a newline).
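For example, a minimal sketch of such a read loop (assuming $read_sock is a connected, blocking socket, and keeping error handling simple):

// Minimal sketch: accumulate data until a full line ending in "\n" arrives.
$line = '';
while (strpos($line, "\n") === false) {
    $chunk = socket_read($read_sock, 2048, PHP_BINARY_READ);
    if ($chunk === false) {
        die('socket_read failed: ' . socket_strerror(socket_last_error($read_sock)));
    }
    if ($chunk === '') {
        break; // peer closed the connection before sending a full line
    }
    $line .= $chunk;
}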
You may also want to consult http://php.net/manual/en/function.socket-read.php, where the commenters suggest the function is somewhat buggy when used with PHP_NORMAL_READ. It might be worth searching for a socket library for PHP that supports readLine.
I have a program in LabVIEW which is sending UDP packets, and I am receiving those packets with a PHP program. So the LabVIEW program is the sender and the PHP program is the receiver.
In the LabVIEW program, the float array is type cast to a string using the Type Cast function block and sent as UDP packets. When I receive those packets in PHP, the data is not in a readable format.
I have tried converting the string array into a float array using array_map('floatval', $array), but the values still do not come out in a readable format.
Please help me to solve this issue.
The LabVIEW help for Type Cast points you at the document on flattened data, which mentions that the representation is big-endian (most significant byte first). The entry on How LabVIEW Stores Data in Memory shows the actual representation of a single-precision floating-point number (SGL).
Now that you know what LabVIEW is sending, your question becomes how to decode this in PHP - if you can't solve this yourself, I suggest asking a new question.
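If it helps, here is a hedged sketch of what that decoding could look like in PHP, assuming a reasonably recent PHP (the 'G' unpack code for big-endian 32-bit floats was added in 7.0.15/7.1.1) and a payload that contains nothing but the flattened SGLs:

// Hedged sketch: decode a UDP payload of big-endian SGL values.
// Assumes strlen($payload) is a multiple of 4 (no extra header bytes).
$floats = array_values(unpack('G*', $payload));
print_r($floats);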
If you can change the LabVIEW code, you could alter the format in which the data is sent so as to make it easier to decode at the other end. Possible options there might include:
If network bandwidth is not an issue, use a standard text-based format such as JSON
If JSON is too big but you can afford eight bytes per value, convert to DBL - using a conversion 'bullet' from the numeric palette - before flattening to string, then reorder the bytes of the string to little-endian at the LabVIEW end (see the sketch after this list). From Ton Plomp's comment, that might be correct for your current PHP code.
If you really can't afford more than four bytes per value, but the range of your data values is not too wide, you could scale them to an integer value (U32 or I32) before flattening; again, that might be easier to decode at the other end.
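For the DBL option above, the PHP side could be a hedged one-liner under the same assumptions, using the 'e' unpack code for little-endian 64-bit doubles:

// Hedged sketch: decode a payload of little-endian DBL values.
$doubles = array_values(unpack('e*', $payload));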
Note that although the data format you get from Type Cast and/or Flatten to String is documented and historically has been stable, I don't think it's absolutely guaranteed not to change between LabVIEW versions.
Also, the unreadable section of data could be header info added by the UDP function. You may be able to parse and discard that data.
Another thing to try is to read the UDP Rx data in LabVIEW and compare it to the Tx data, to try to identify what is going on.
I am working with a WebSocket server in PHP and I am encountering a problem. When a WebSocket message is too long, it is split into several fragments. However, I cannot find a WebSocket frame decoder in PHP which can handle the fragmentation of long data, so I decided to implement the decoder myself. I read the RFC for the WebSocket protocol, but I still don't understand exactly how fragmentation works.
Here are the questions:
When applying fragmentation, every fragment has its own FIN (for example, the last fragment's FIN is set while the other fragments' FIN is zero) and opcode. But does every fragment have an independent mask bit and masking key?
Does the payload length field of the first fragment give the length of the whole original message, or only of the payload in that fragment, with each fragment carrying its own payload length field?
Hope you can answer! I am quite confused.
Every fragment should have an independent mask bit and masking key.
The payload length represents the payload length of each individual fragment.
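To make both points concrete, here is a hedged sketch of per-fragment header parsing per RFC 6455 (the function name is my own, and error handling is omitted):

// Hedged sketch: parse one WebSocket frame header (RFC 6455).
// Each fragment repeats this header, so the mask bit, masking key and
// payload length all apply to that fragment alone.
function parseFrameHeader($data)
{
    $b0 = ord($data[0]);
    $b1 = ord($data[1]);
    $fin    = ($b0 >> 7) & 1;  // set only on the final fragment
    $opcode = $b0 & 0x0F;      // 0x0 = continuation fragment
    $masked = ($b1 >> 7) & 1;  // per-fragment mask bit
    $len    = $b1 & 0x7F;      // payload length of this fragment only
    $offset = 2;
    if ($len === 126) {        // 16-bit extended length
        $len = unpack('n', substr($data, $offset, 2))[1];
        $offset += 2;
    } elseif ($len === 127) {  // 64-bit extended length ('J' needs PHP 5.6.3+)
        $len = unpack('J', substr($data, $offset, 8))[1];
        $offset += 8;
    }
    $maskKey = $masked ? substr($data, $offset, 4) : ''; // per-fragment masking key
    return ['fin' => $fin, 'opcode' => $opcode, 'len' => $len, 'mask' => $maskKey];
}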
I'm writing a command line application in PHP that accepts a path to a local input file as an argument. The input file will contain one of the following things:
JSON encoded associative array
A serialize()'d version of the associative array
A base64 encoded version of the serialize()'d associative array
A base64 encoded JSON encoded associative array
A plain old PHP associative array
Rubbish
In short, there are several dissimilar programs that I have no control over that will be writing to this file, in a uniform way that I can understand, once I actually figure out the format. Once I figure out how to ingest the data, I can just run with it.
What I'm considering is:
If the first byte of the file is { , try json_decode(), see if it fails.
If the first byte of the file is < or $, try include(), see if it fails.
If the first three bytes of the file match a:[0-9], try unserialize().
If not the first three, try base64_decode(), see if it fails. If not:
Check the first bytes of the decoded data, again.
If all of that fails, it's rubbish.
That just seems quite expensive for quite a simple task. Could I be doing it in a better way? If so, how?
There isn't much to optimize here. The magic bytes approach is already the way to go. But of course the actual deserialization functions can be avoided. It's feasible to use a verification regex for each instead (which despite the meme are often faster than having PHP actually unpack a nested array).
base64 is easy enough to probe for.
json can be checked with a regex (see "Fastest way to check if a string is JSON in PHP?" for an RFC-derived version used for securing it in JS). But it would be feasible to write a complete json (?R) match rule.
serialize is a bit more difficult without a proper unpack function. But with some heuristics you can already assert that it's a serialize blob.
php array scripts can be probed a bit faster with token_get_all. Or if the format and data is constrained enough, again with a regex.
The more important question here is, do you need reliability - or simplicity and speed?
For speed, you could use the file(1) utility and add "magic numbers" in /usr/share/file/magic. It should be faster than a pure PHP alternative.
You can try json_decode() and unserialize(), which return NULL and FALSE respectively when they fail, then base64_decode() and run those two again. It's not fast, but it's infinitely less error-prone than hand parsing them...
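For what it's worth, a hedged sketch of that try-in-order approach (the include() case for a plain PHP array is left out, and note that a serialize()'d boolean FALSE is indistinguishable from failure here):

// Hedged sketch: try each format in turn; returns NULL for rubbish.
function ingest($raw)
{
    $data = json_decode($raw, true);
    if ($data !== null) {
        return $data;
    }
    $data = @unserialize($raw); // FALSE on failure (or on a serialized FALSE)
    if ($data !== false) {
        return $data;
    }
    $decoded = base64_decode($raw, true); // strict mode: FALSE on invalid input
    if ($decoded !== false) {
        $data = json_decode($decoded, true);
        if ($data !== null) {
            return $data;
        }
        $data = @unserialize($decoded);
        if ($data !== false) {
            return $data;
        }
    }
    return null; // rubbish
}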
The issue here is that if you have no idea which format it will be, you will need to develop a detection algorithm. Conventions should be set with an extension (check the extension; if that fails, tell whoever put the file there to use the correct extension), otherwise you will need to check yourself. Most algorithms that detect what type a file actually is use heuristics on its contents (exe, jpg etc.), because such files generally have some sort of signature that identifies them. So if you have no idea what the content will be for certain, it's best to look for features that are specific to each of those contents. This does sometimes mean reading more than a couple of bytes.
Is there a maximum length for the URI in the file_get_contents() function in PHP?
I suppose there is a maximum length, but you'll be hard pressed to find it. If you do hit the maximum, you're doing something wrong. :)
I haven't been able to find a number for PHP specifically, but MS IIS, Apache and the Perl HTTP::Daemon seem to have limits between 4,000 and 16,384 bytes, PHP will probably be somewhere around there as well.
What you need to consider is not only how much your side can handle, but also how much the other server you're querying can handle (which is presumably what you're doing). As such, any URL longer than ~1000 characters is usually already way too long and rarely encountered in the real world.
As others have stated, it is most likely limited by the HTTP protocol.
You can view this answer for more info on that : What is the maximum length of an url?
HTTP itself places no length limit on URIs, and the manual page for file_get_contents() doesn't mention one, so I think you needn't worry about this.
By the way, URI length is limited by some browsers and web servers. For example, in IE the length must be less than 2,083 characters, and in Firefox it's 65,536. When I tested this against my Apache server on Ubuntu, only URIs of 8,182 bytes or less worked, because of Apache's limit.
Hey guys, I am getting some extra characters like '5ae' and '45c' interspersed with the valid data when using http_post_data. The data I am sending is XML and so is the response. The response contains these weird characters, which makes the XML invalid. If I use fsockopen I do not have this issue. I would really like some input on this.
Your question doesn't give many details, but (quite a wild guess) this reminds me of chunked transfer encoding (quoting):
If a Transfer-Encoding header with a value of chunked is specified in an HTTP message, the body of the message is made of an unspecified number of chunks ending with a last, zero-sized, chunk.
Each non-empty chunk starts with the number of octets of the data it embeds (size written in hexadecimal) followed by a CRLF (carriage return and line feed), and the data itself.
The 5ae and 45c you're getting in your data could correspond to the size of each chunk (0x5ae is 1454 bytes, 0x45c is 1116).
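For illustration, a hedged sketch of decoding such a body by hand (ignoring chunk extensions and trailers):

// Hedged sketch: naive decoder for a chunked transfer-encoded body.
function decodeChunked($body)
{
    $out = '';
    while (($pos = strpos($body, "\r\n")) !== false) {
        $size = hexdec(substr($body, 0, $pos)); // e.g. "5ae" -> 1454 bytes
        if ($size === 0) {
            break; // the zero-sized chunk terminates the body
        }
        $out .= substr($body, $pos + 2, $size);
        $body = substr($body, $pos + 2 + $size + 2); // skip chunk data + trailing CRLF
    }
    return $out;
}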
If you are trying to send HTTP requests by hand, that might not be such a good idea: HTTP is not such an easy protocol, and you should use already-existing libraries that will deal with this kind of trouble for you.
For instance, you could take a look at curl -- see curl_setopt for the impressive list of possible options.
Edit : I realize that http_post_data is a function provided by the PECL http extension.
There's a function in that library that might interest you, to decode chunked data: http_chunked_decode
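A hedged usage sketch, assuming the PECL http (v1) extension is loaded:

// http_chunked_decode() returns the decoded string, or FALSE on failure.
$decoded = http_chunked_decode($response_body);
if ($decoded !== false) {
    // $decoded should now be the XML without the interleaved chunk sizes
}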
Yes, it is a result of Chunked Transfer Encoding.
We can observe this behavior in Fiddler by unchecking 'Chunked Transfer-Encoding' in the Transformer tab of the response pane.