What about using more than one encoder to encode my code? If you take a look at ionCube, you will see that some people can decode it.
My idea here is to use an unfamiliar encoder on top of the ionCube-encoded code.
I mean I will encode two times:
first with ionCube,
second with any other encoder.
Will that cause a problem?
Well, it would depend on the type of encoding done. If it's non-destructive, meaning it only obfuscates the code, then you can probably run multiple passes. However, if it does something more like ionCube, then the person you give access to it will need to reverse both encoding steps, in the opposite order, to do the decoding.
I don't think it would do any harm, but as far as I know ionCube encodes the code using a license file, which means that even someone with the software can't decode your code without having a license.
Example:
$fire = '🔥';
I know PHP 5+ supports this functionality natively, but is it best practice, or should I be storing them using their codepoints instead? And if so, why?
As far as your editor and the PHP compiler are concerned, it's all just text, and '🔥' is no different from 'fire' or 'Φωτιά'.
When PHP runs, it will read the bytes in from the file and put them in memory, without caring what they mean. This leads to the most likely problem you'll have: if you save the file in your text editor as UTF-16, and then echo the string to a browser telling it that it's UTF-8, the browser won't show the right thing. But that's easily avoided by making sure your editor always uses UTF-8, and your output headers tell the browser that's what you're using.
If you don't trust your editor to do that, and you're running PHP 7, you could write it in the escaped notation "\u{1f525}", but when it runs, the same bytes will end up in memory.
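For instance, a minimal sketch (assuming PHP 7+ and a source file saved as UTF-8):
// Both lines put the same four UTF-8 bytes (F0 9F 94 A5) into memory.
$fire1 = '🔥';            // literal, as saved by a UTF-8 editor
$fire2 = "\u{1f525}";     // escape notation, PHP 7+; needs double quotes
var_dump($fire1 === $fire2); // bool(true)
var_dump(strlen($fire1));    // int(4), because strlen counts bytes, not characters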
You might have similar problems if you send the text elsewhere - to a database, for instance - and that somewhere else doesn't know to handle it as UTF-8. How you write the string in your source file won't make any difference to that, though; it's just a case of making sure everything is configured to match.
Note: you don't actually have to use UTF-8 for this, you could use UTF-16, or some other encoding, as long as you're consistent; but UTF-8 is by far the most common these days, particularly on the web.
OK, so we have a script that takes emails sent to Thunderbird, converts part of the message to HTML, and saves it to a MySQL database. Every file, every part written, is set to UTF-8. Finally, on my end of the work, the CRM (written in PHP 5.3, expected output in Chrome and Firefox), I pull the message, along with other info, and display something resembling Gmail, but as a "task list" for our employees.
The problem I'm having, if you haven't guessed already, is that some customer emails are obviously using different encodings. Thus, some (not all, and certainly not the majority) of the emails don't show all characters correctly.
At first I made use of utf8_encode to get the email messages to look right, and this helps with most email messages coming from the database; however, a few slip by with bad characters.
In the DB these "bad apostrophes" appear as ’, but after utf8_encode they come through as �??. I've tried various encoding tricks to guess and convert as needed; however, this tends to hurt the vast majority of the other emails.
Any suggestions, on one end of the pipe or the other, for how I might get these few emails to match everything else, or how I might at least create a possible preg_replace filter at the end?
Update
It seems even the emails with bad characters are passed to the PHP end as UTF-8, according to mb_detect_encoding. This is before any extra encoding. iconv does detect the ones that have problems, but this really gives me no way to solve them, and it just puts a PHP error box up on the screen instead of the simple FALSE return it's supposed to give, so this too seems to be no solution.
The problem is that you don't know the encoding of the mail. utf8_encode encodes only from ISO-8859-1 to UTF-8. So you could try to get the encoding with mb_detect_encoding and then convert to UTF-8 with iconv.
EDIT: You could also try to read the charset from the mail's Content-Type header.
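For example, a rough sketch of that approach, assuming $body holds the raw message text (the variable name and the candidate list are mine; tune the list to your data):
// Guess the source encoding, then normalise to UTF-8.
// mb_detect_encoding is a heuristic, so keep the candidate list short;
// ISO-8859-1 matches any byte sequence, so it acts as the fallback guess.
$candidates = array('UTF-8', 'ISO-8859-1');
$from = mb_detect_encoding($body, $candidates, true); // strict mode
if ($from !== false && $from !== 'UTF-8') {
    $body = iconv($from, 'UTF-8//TRANSLIT', $body);
}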
Found My Answer!
Let me start by saying thanks to Sebastián Grignoli for creating this VERY handy class (raw). I ended up working it into my final solution.
Second, I added the class to CodeIgniter. For any of you using CI, this is an easy implementation. Simply create a file in application/libraries named Encoding.php (yes, with the capital E). Then copy the code into that file, but comment out (or remove) namespace ForceUTF8 on line 40.
My end result looks something like:
// Undo the earlier utf8_encode pass, then let the class repair any mixed encoding
echo Encoding::fixUTF8(utf8_decode($msgHTML));
I'm still double checking, but thus far, I've yet to find one single error!
If I do find another encoding issue after this, I'll make sure to update.
SO Question I found that helped.
I'm trying to figure out the best method for encrypting or encoding a URL in my code. This is what it looks like:
$ProUpdateChecker = new PluginUpdateChecker('http://the-url-is-here.com/file.json', __FILE__, 'pluginslug');
I want users to not be able to see that URL. I'm not too worried about them being able to decrypt it; I just want a little bit of added security, since mostly newbies will be using my script anyway.
What is the best method to accomplish this? I used ionCube to encode the entire code (not just the URL), but it broke some of the functionality.
You could just store the base64-encoded version of the URL in your scripts as a string and, everywhere you use it, run it through base64_decode.
$url = base64_decode('VGhpcyBpcyBhbiBlbmNvZGVkIHN0cmluZw==');
Sort of a silly, poor man's option, but you said keep it away from noobs...
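A quick sketch of the round trip, using the placeholder URL from the question:
// One-off step: generate the string to paste into your plugin code.
echo base64_encode('http://the-url-is-here.com/file.json');
// prints: aHR0cDovL3RoZS11cmwtaXMtaGVyZS5jb20vZmlsZS5qc29u
// In the shipped script, decode it where it's needed:
$url = base64_decode('aHR0cDovL3RoZS11cmwtaXMtaGVyZS5jb20vZmlsZS5qc29u');
$ProUpdateChecker = new PluginUpdateChecker($url, __FILE__, 'pluginslug');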
If you are just trying to make it so the URL isn't readable ASCII text, try Base64 Encoding:
http://php.net/manual/en/function.base64-encode.php
I'm doing a kind of roundabout experiment where I'm pulling data from tables in a remote page to turn it into an ICS file, so that I can find out when this sports team is playing (because I can't find anywhere the information is more readily available than in this table), but that's just to give you some context.
I pull this data using cURL and parse it with DOMDocument. Then I take it and parse it for the info I need. What's giving me trouble is the opposing team's name. When I display the data on the initial PHP page, it's correct. But when I write to an ICS file, special UTF-8 characters get messed up. I thought utf8_encode would solve that problem, but it actually seems to have the opposite effect: when I run the function on my data, even the stuff displayed on the page (which had been displaying correctly) becomes incorrect, not just the separate ICS file (which was already writing incorrectly). As an example: it turns "Inđija" into "InÄija".
Any tips or resources for dealing with UTF-8 strings in PHP? My server (a remote host) doesn't have mbstring installed either, which is a pain.
utf8_encode encodes a string in ISO 8859-1 as UTF-8. If you put UTF-8 into it, it's going to interpret it as if it was ISO 8859-1, and hence produce mojibake.
To help with your first problem, though, I'd want to know what sort of "special" characters are being messed up in the original problem, and in what way they are being messed up.
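To make that failure mode concrete, here is a minimal demonstration using the example string from the question:
// "đ" is U+0111, stored in UTF-8 as the two bytes 0xC4 0x91.
$team = "Inđija";
// utf8_encode re-encodes each of those bytes as if it were ISO 8859-1,
// so 0xC4 becomes "Ä" and 0x91 becomes an invisible control character:
echo utf8_encode($team); // displays as "InÄija", the mojibake from the question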
The Interwebs are no help on this one. We're encoding data in ColdFusion using serializeJSON and trying to decode it in PHP using json_decode. Most of the time, this is working fine, but in some cases, json_decode returns NULL. We've looked for the obvious culprits, but serializeJSON seems to be formatting things as expected. What else could be the problem?
UPDATE: A couple of people (wisely) asked me to post the output that is causing the problem. I would, except we just discovered that the result set is all of our data (listing information for 2300+ rental properties for a total of 565,135 ASCII characters)! That could be a problem, though I didn't see anything in the PHP docs about a max size for the string. What would be the limiting factor there? RAM?
UPDATE II: It looks like the problem was that a couple of our users had copied and pasted Microsoft Word text with "smart" quotes. Those pesky users...
You could try operating in UTF-8 and also letting PHP know that fact.
I had an issue with PHP's json_decode not being able to decode a UTF-8 JSON string (with some "weird" characters other than the curly quotes that you have). My solution was to hint to PHP that I was working in UTF-8 mode by inserting a Content-Type meta tag in the HTML page that was doing the submit to the PHP script. That way the content type of the submitted data, which is the JSON string, would also be UTF-8:
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
After that, PHP's json_decode was able to properly decode the string.
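If you'd rather not rely on a meta tag, the same hint can be sent as a response header from the PHP page that serves the form, which likewise makes the browser submit its data as UTF-8 (this must run before any output):
// Server-side equivalent of the meta tag above
header('Content-Type: text/html; charset=utf-8');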
Can you replicate this issue reliably? And if so, can you post sample data that returns null? I'm sure you know this, but for the sake of others stumbling on this who may not: RFC 4627 describes JSON, and it's a common mistake to assume valid JavaScript is valid JSON. It's better to think of JSON as a subset of JavaScript.
In response to the edit:
I'd suggest checking to make sure your information is being populated in your PHP script (before it's passed off to json_decode), and also validating that information (especially if you can reliably reproduce the error). You can try an online validator for convenience. Based on the very limited information, it sounds like perhaps it's timing out and not grabbing all the data? Is there a need for such a large dataset?
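One way to do that check on the PHP side (json_last_error is available from PHP 5.3, which the asker is running; $json here stands for your raw JSON string):
$data = json_decode($json);
// json_decode returns NULL both on failure and for a literal "null",
// so check the error code to tell the two cases apart:
if ($data === null && json_last_error() !== JSON_ERROR_NONE) {
    // JSON_ERROR_UTF8 would point to a character-encoding problem
    echo 'json_decode failed, error code: ' . json_last_error();
}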
I had this exact problem, and it turned out to be due to ColdFusion putting non-printable characters into the JSON packets (these characters did actually exist in our data), but they can't go into JSON.
Two questions on this site fixed this problem for me, although I went for the PHP solution rather than the ColdFusion solution as I felt it was the more elegant of the two.
PHP solution
Fix the string before you pass it to json_decode()
// Strip ASCII control characters and any high-bit (non-ASCII) bytes;
// note this also removes any legitimate multi-byte UTF-8 characters
$string = preg_replace('/[\x00-\x1F\x80-\xFF]/', '', $string);
ColdFusion solution
Use the cleanXmlString() function from that SO question after calling serializeJSON().
You could try parsing it with another parser and looking for an error; I know Python's JSON parsers are very high quality. If you have Python installed, it's easy enough to run the text through demjson's syntax checker. If it's a very large dataset, you can use my library jsonlib; memory use will be higher than with demjson, but it will run faster because it's written in C.