So I have my PHP API (an HTTP GET API for Flash Builder and C# apps). To submit data to it, you use a string like
http://localhost/cms/api.php?method=someMethod&string=Your_String
If there are only English letters in it, it's fine. But what if I need to pass a UTF-8 string like Русское Имя to my API? What shall I do?
Use the rawurlencode() function. It will encode your string byte by byte, but that is not a problem, since UTF-8 is an ASCII-aware representation: all code positions below 128 are identical to their ASCII counterparts, and all code positions above 127 are represented by byte sequences whose bytes are all between 128 and 255, so you will not have problems with it. The input wrapper will decode the parameters into your $_REQUEST array properly.
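For example (a minimal sketch; the method and parameter names are taken from the URL above):

<?php
// rawurlencode() percent-encodes the UTF-8 bytes so they survive the GET request.
$string = 'Русское Имя';
$url = 'http://localhost/cms/api.php?method=someMethod&string=' . rawurlencode($string);
// -> ...&string=%D0%A0%D1%83%D1%81%D1%81%D0%BA%D0%BE%D0%B5%20%D0%98%D0%BC%D1%8F

// On the server side, $_GET['string'] (and $_REQUEST['string']) already hold
// the decoded UTF-8 text; PHP undoes the percent-encoding for you.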
I have an online service that uses MD5s of JSON-encoded REQUEST objects as a form of verification that the object sent over POST is the object received, and that it has not been edited or truncated in transit.
We have come across a problem when working with clients written in VBA, which convert non-ASCII characters to uppercase Unicode escapes when converting to JSON through WebHelpers.ConvertToJson.
For example, in PHP, json_encode("£") will return '\u00a3', but in VBA, WebHelpers.ConvertToJson('£') will return '\u00A3'.
When I MD5 those two strings, the MD5s are obviously very different.
So, my question is: how do I get WebHelpers.ConvertToJson to output lowercase Unicode escapes?
Or, how would I transform the JSON string after the conversion to lowercase the Unicode escapes?
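If the VBA side can't be changed, one workaround (a sketch, not from the original thread) is to normalize the escapes to lowercase on the PHP side before comparing MD5s:

<?php
// Lowercase only the hex digits inside \uXXXX escapes, leaving everything else alone.
function lowercase_unicode_escapes(string $json): string {
    return preg_replace_callback(
        '/\\\\u([0-9A-Fa-f]{4})/',
        fn(array $m) => '\u' . strtolower($m[1]),
        $json
    );
}

echo lowercase_unicode_escapes('{"price":"\u00A3"}');  // {"price":"\u00a3"}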
I want to hash a string with SHA-1, then convert it to hex, with a length of 40 characters. This is a Java web service and the client side is to be done in PHP.
The original code is Java (I don't have the source, only the documentation) and it hashes the following string:
chNFe=43120910585504000174650010000000541123456781&nVersao=100&tpAmb=2&
dhEmi=323031322d30392d32375431363a32303a33342d30333a3030&vNF=1000.00&vICMS=180.00&digVal=37327151612b623074616f514f3966414a7766646c5875715176383d&cIdToken=0000011058550420130001
To the following hex:
3FACB55248244D98C658FC8A826413BCEF10A4AE
The example above is from the web service documentation, and it says the string was hashed with SHA-1, then the result was encoded to hex.
I tried sha1() then dechex() and many other ways, but cannot get the same result. Does anyone have an idea of what PHP has to do to get this hash?
Thank you.
The NFe manual is wrong. The example string has a white space in it, at the end of the digVal value.
Where it appears as
5176383d&cIdToken=000001105855042013000
it is really
5176383d &cIdToken=000001105855042013000
Conventional functions that hash this with sha1 then resolve the problem ;)
In MySQL you can do:
sha1(yourExampleString)...
In PHP you could have something like the following (a sketch; the string is rebuilt from the documentation example above, with the missing space restored after digVal):
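<?php
// Documentation string, with the missing space restored at the end of the digVal value.
$data = 'chNFe=43120910585504000174650010000000541123456781&nVersao=100&tpAmb=2&'
      . 'dhEmi=323031322d30392d32375431363a32303a33342d30333a3030&vNF=1000.00&vICMS=180.00&'
      . 'digVal=37327151612b623074616f514f3966414a7766646c5875715176383d '
      . '&cIdToken=0000011058550420130001';

// sha1() already returns hex; strtoupper() matches the uppercase form in the docs.
echo strtoupper(sha1($data));  // should reproduce 3FACB55248244D98C658FC8A826413BCEF10A4AE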
I'm trying to convert a string to UTF-8, in both Obj-C and PHP.
I get different results:
"\xd7\x91\xd7\x93\xd7\x99\xd7\xa7\xd7\x94" //Obj-C
"\u05d1\u05d3\u05d9\u05e7\u05d4" //PHP
Obj-C code:
const char *cData = [@"בדיקה" cStringUsingEncoding:NSUTF8StringEncoding];
PHP code:
utf8_encode('בדיקה')
This difference breaks my hash algorithm that follows.
How can I make the two strings encode the same way? Should I change the Obj-C or the PHP?
Go to http://www.utf8-chartable.de/unicode-utf8-table.pl
In the combo box switch to “U+0590 … U+05FF Hebrew”
Scroll down to “U+05D1” which is the rightmost character of your input string.
The third column shows the two UTF-8 bytes: “d7 91”
If you keep looking you will see that the PHP and the Objective-C strings are actually the same. The “problem” you are seeing is that while PHP uses a Unicode escape (\u), Objective-C uses direct hexadecimal byte escapes (\x). Those are only visual representations of the strings; the bytes in memory are actually the same.
If your hash algorithm deals with bytes correctly, you should not see differences.
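You can check this in PHP itself (a minimal sketch; json_decode() is used only to turn the \u escapes back into raw bytes):

<?php
// The \uXXXX escapes and the raw \xXX bytes decode to the identical string.
$fromEscapes = json_decode('"\u05d1\u05d3\u05d9\u05e7\u05d4"');
$rawBytes    = "\xd7\x91\xd7\x93\xd7\x99\xd7\xa7\xd7\x94";

var_dump($fromEscapes === $rawBytes);            // bool(true)
var_dump(md5($fromEscapes) === md5($rawBytes));  // bool(true): same bytes, same hash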
What are you using to do the encoding in PHP? It looks like you're generating a UTF-16 string.
Try utf8_encode() and see if that gives better results.
I want to encode some binary strings with something like base64, but only with alphanumeric chars. I know bin2hex could do this, but it makes the encoded string much longer (I tried gzcompress on the strings, but it didn't make much difference).
Is there any other existing encoding method to do this?
http://en.wikipedia.org/wiki/Binary-to-text_encoding
The most used forms of binary-to-text encodings are:
hexadecimal
base64
quoted-printable
uuencoding
yEnc
Ascii85
BinHex
Percent encoding
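None of these is purely alphanumeric except hexadecimal, though. If you need strictly alphanumeric output that is denser than hex, one option (a sketch, assuming PHP's GMP extension is available; note that leading NUL bytes are not preserved by the round trip) is base-62 conversion:

<?php
// Treat the binary string as a big integer and print it in base 62 (0-9, A-Z, a-z).
function base62_encode(string $bin): string {
    return gmp_strval(gmp_import($bin), 62);
}

function base62_decode(string $txt): string {
    return gmp_export(gmp_init($txt, 62));
}

$data = "\x01\x23\x45\x67\x89\xab\xcd\xef";    // sample binary, no leading NUL
$enc  = base62_encode($data);
var_dump($enc, base62_decode($enc) === $data); // alphanumeric string, bool(true)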
I am trying to send something to a serial port (RS-232) with PHP.
I am using this class: http://www.phpclasses.org/browse/package/3679.html
The problem is that I am allowed to send only 1 byte.
But if I send something like "1", I am actually sending 49 (the ASCII code for "1").
Instead of send("1") I tried send(1), but that is no good either, because it is an integer, which is more than one byte.
So is there a way to send a "real" char, not its ASCII equivalent?
The chr() function returns the character corresponding to a given integer, i.e. the character with that ASCII code.
It looks like the library is expecting characters as input. If you need to send the byte 0x01, you can just send "\001". Note that chr() converts an integer to the corresponding character, so chr(1) yields the same byte.
One more thing: the byte size of an integer depends on the underlying system and is usually 4 bytes.
I'm not sure what you are trying to accomplish. Are you trying to send the integer 1? Not being familiar with the class, have you tried giving just the value 1 as an argument? If that doesn't work, try wrapping it in the chr() function.
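A minimal sketch of the difference (the send() call is the hypothetical usage from the question, not a verified API):

<?php
// chr(1), "\001" and "\x01" are all the same single byte with value 1.
var_dump(chr(1) === "\001");  // bool(true)
var_dump(chr(1) === "\x01");  // bool(true)
var_dump(ord("1"));           // int(49): the ASCII code you were sending by mistake

// Hypothetical, following the usage in the question:
// $serial->send(chr(1));     // sends the byte 0x01, not the character "1"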