Multi-tier applications with PHP?

I am relatively new to PHP, but an experienced Java programmer in complex enterprise environments with SOA architectures and multi-tier applications. There, we would normally implement business applications with the business logic on the middle tier.
I am programming an alternative currency system, which should be easily deployable and customizable by individuals and communities; it will be open source. That's why PHP/MySQL seems the best choice to me.
Users have accounts, and they get a balance. Also, the system calculates prices depending on total services delivered and total available assets.
This means a series of calculations happens on each purchase; the balance and the totals get updated. These are derived figures, something normally not stored in a database.
Nevertheless, I resorted to putting triggers and stored procedures into the DB, so that none of these updates are made in the PHP code.
What do people think? Is that a good approach? My experience suggests this is not the best solution, and prompts me to implement a middle tier. However, I would not even know how to do that. On the other hand, what I have so far with stored procedures seems the most appropriate to me.
I hope I made my question clear. All comments appreciated. There might not be a "perfect" solution.

As is the tendency these days, getting away from the DB is generally a good thing: you get easier version control and you get to work in just one language. Beyond that, I find stored procedures a hard way to go. If you like that stuff and feel comfortable with SPs in MySQL, they're not bad, but my feeling has always been that they're harder to debug and harder to maintain.
On the triggers issue, I'm not sure they're necessary for your app. Since the events that trigger the calculations are invoked by the user, those calculations can happen in PHP, even if the user is redirected to a "waiting" page or another page in the meantime. True triggers can only be done at the DB level; you could fall back on a daemon that runs a PHP script every X seconds, but avoid that at all costs and try to have the user's own request trigger the work.
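For example, the purchase-time updates could live in plain PHP wrapped in a single transaction, so integrity doesn't depend on triggers. A minimal sketch using PDO; the table and column names are invented for illustration, and PDO is assumed to be in exception mode:

    <?php
    // Minimal sketch: do the derived-figure updates in application code,
    // inside one transaction, instead of a DB trigger. Table and column
    // names are invented; assumes PDO::ERRMODE_EXCEPTION is set.
    function recordPurchase(PDO $db, int $buyerId, float $amount): void
    {
        $db->beginTransaction();
        try {
            $stmt = $db->prepare(
                'UPDATE accounts SET balance = balance - :amt WHERE id = :id');
            $stmt->execute([':amt' => $amount, ':id' => $buyerId]);

            $stmt = $db->prepare(
                'UPDATE system_totals
                    SET services_delivered = services_delivered + 1,
                        available_assets   = available_assets - :amt');
            $stmt->execute([':amt' => $amount]);

            $db->commit();
        } catch (Exception $e) {
            $db->rollBack();   // balance and totals stay consistent on failure
            throw $e;
        }
    }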
All of this said, I want to plug my favorite solution for the data access layer in PHP: Doctrine. It's not perfect, but PHP being what it is, it's good enough. It does most of what you want, and it keeps you working with objects instead of database procedures and so forth.
Regarding your title: multiple tiers are totally doable in PHP, but you have to build them and respect them. PHP code can call other PHP code, and as of 5.2+ the language is nicely OO. Ignore the fact that a lot of PHP code you'll see around is total crap that doesn't even use methods, let alone tiers and decent OO modelling; it's all possible if you want to do it, including rolling your own (or using an existing) MVC solution.

One issue with pushing lots of features down to the DB level, instead of into a data abstraction layer, is that you get locked into your DBMS's feature set. Open source software is often (though certainly not always) written so that it can be used with different DBs. It's possible that down the road you will want to make it easy to port to Postgres or some other DBMS; using lots of MySQL-specific features now will make that harder.
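To make the lock-in point concrete: if the data access sticks to PDO and vanilla SQL, moving to another DBMS is (mostly) a one-line change. A hedged sketch, with invented table names:

    <?php
    // Sketch: a portable data layer. Only the DSN differs between MySQL
    // and PostgreSQL as long as the SQL itself stays standard.
    $dsn = 'mysql:host=localhost;dbname=app';   // or 'pgsql:host=localhost;dbname=app'
    $db  = new PDO($dsn, 'user', 'secret');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $db->prepare('SELECT id, balance FROM accounts WHERE id = ?');
    $stmt->execute([1]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);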

There is absolutely nothing wrong with using triggers and stored procedures and other features that are provided by your DB server. It works and works well, you are using the full potential of the DB, instead of simply relegating it to being a simplistic data store.
However, I'm sure that for every developer on here who agrees with you (and me), there are at least as many who think the exact opposite and have had good experiences with doing that.

Thanks guys.
I was using DB triggers because I thought it might be easier to control transaction integrity that way. As you might have realized, I am a developer who is also trying to get a grip on the DB side of things.
Now I see there is the option of spreading the PHP code across multiple tiers, not only logically but also physically, by deploying on different servers.
However, at this stage of development, I think I'll stick with my triggers/SP solution, as it doesn't feel all that wrong. Distributing the logic across multiple layers would require me to redesign my app considerably.
Also, thinking open source: if someone likes the alternative money system, it might be easier for people to just change the layout for their requirements, while I would not need to worry about calculations going wrong if people touch the PHP code.
On the other hand, of course, I agree that DB stuff can get very hard to debug.
The DB init scripts are in source control, as are the PHP files :)
Thanks again

Related

Is it a good idea to create a 'middleware backend' between customer backend and my frontend?

The thing is this: I have several projects where the customer has a horrific backend definition, returning data in several formats and with lots of stuff I don't need. Since I'm building the mobile webapps, I'm creating a middle layer in PHP using Slim Framework (www.slimframework.com), which basically gives me RESTful syntax while removing all the data I don't need and returning it in the format I want (JSON). Of course this middle layer will be deployed on the customer's backend, so even though it makes the frontend implementation much easier for me, I'm a little worried about the performance, and about adding another break-point to the chain. For performance, I save every call that reaches my Slim layer as a unique JSON file acting as a cache, and I have a text file where I can easily configure the maximum age in seconds for each request.
More technically, I read the real web service with curl, convert it to a PHP object, remove and change the data as I need, and then json_encode it. I've also thought of other ideas, like a cron batch process that pulls all the web services from the customer and generates local JSON files. Don't worry about not getting the latest data; this is a video catch-up application, so I cache every WS but the final video URL is not cached.
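Roughly, each endpoint in my middle layer boils down to something like this (a simplified, framework-agnostic sketch; the URL, cache path, and field names are placeholders):

    <?php
    // Fetch the customer's web service with curl, keep only what the
    // frontend needs, and cache the resulting JSON to disk for a
    // configurable number of seconds.
    $maxAge   = 300;   // per-endpoint TTL, read from the config file in practice
    $cacheKey = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.json';

    if (is_file($cacheKey) && (time() - filemtime($cacheKey)) < $maxAge) {
        header('Content-Type: application/json');
        readfile($cacheKey);            // serve the cached copy
        exit;
    }

    $ch = curl_init('https://backend.example.com/horrific/endpoint');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $raw = curl_exec($ch);
    curl_close($ch);

    $data = json_decode($raw, true);                 // assuming JSON here
    $slim = ['items' => $data['results'] ?? []];     // strip what we don't need

    $json = json_encode($slim);
    file_put_contents($cacheKey, $json);
    header('Content-Type: application/json');
    echo $json;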
Is there any simpler solution for my workflow?
Sounds good to me.
Sure, you're adding a new potential point of failure, but you're also adding a new place at which problems can be caught and handled resiliently — it sounds like the existing back-end cannot be trusted to do that itself. Unit/stress test the heck out of your intercept layer and gain all confidence that you're not adding undue new risk.
As for performance? Well, as with anything, you need to benchmark it and then balance the results with the other benefits. I love a good abstraction layer† and as long as you're not seeing service-denying levels of performance drop (and I don't see why you should) it's almost certainly well worth it.
If nothing else you're abstracting away this data backend that you appear to have no control over, which will effectively allow you complete flexibility to switch it out for something else someday.
And if the backend changes spontaneously? Well, at least you only need to adjust some isolated portion of your intercept layer, and not every piece of your customer-facing front-end that relies on pieces of that third-party data.
In conclusion, it seems to me like a perfectly robust solution and I think you should absolutely go ahead with it.
† of course, you don't want too many of them. It's up to us to decide how many is appropriate. I usually find zero to be an unacceptable answer. :)

Server side execution of user submitted code

Here is my situation. I am building an application that contains some heavy mathematical calculations where the formula needs to be editable by a sufficiently privileged, but untrusted, user.
I need a secure server side scripting language. I need to be able to access constants and values from 4+ database tables, the results of previous calculations, define user variables and functions, use if/then/else statements, and I'm sure more that I can't think of right now.
Some options I've considered:
I have considered using something like this matheval library but I would end up needing to extend it considerably for my use case. I would essentially be creating my own custom language.
PHP runkit sandbox. I've never used this before but am very concerned about the security issues involved. Considering the possible security issues, I don't think that this is a viable option.
One other idea that has crossed my mind, though I don't know if it is possible, would be to use something like JavaScript on the server side. I've seen JS used as a scripting platform in desktop applications to extend functionality, and it seems a similar approach may be feasible here. Ideally I could define the environment that things run in, such as disabling filesystem access, etc. Again, security seems like it would be an issue.
From the research I have done, it seems like #1 is probably my only option, but I thought I would check with a larger talent pool. :-)
If #3 is possible, it seems that it would be the way to go, but I can't seem to turn up anything that is helpful. On the other hand, there may not be much difference between #2 and #3.
Performance is another consideration. There will be roughly 65-odd formulas, each executing about 450 times. Each formula will have access to approximately 15 unique variables, a hundred or so constants, and the results of previous formulas. (Yes, there is a specific order of execution.)
I can work with an asynchronous approach to calculation where the calculation would be initiated by a user event and stored in the db, but would prefer to not have to.
What is the best way to work with this situation? Are there any other third party libraries that I haven't turned up in my research? Is there another option in addition to my 3 that I should consider?
There's almost no reason to create a custom language today. There are so many available and hackable ones that writing your own is really a waste of time.
If you're not serving a zillion users (for assorted values of a zillion), most any modern scripting language is securable, especially if you're willing to take draconian measures to do so (such as completely eliminating I/O and system interfaces).
JavaScript is a valid option. It's straightforward to create mini-sandboxes within JS itself to run foreign code. If you want folks to be able to persist state across runs, simply require them to store it in "JSON-like" JS structures that can be readily serialized from the system on exit, and just as easily reloaded. These can even be the results of the function.
If there's a function or routine you don't want them to use, you can un-define it before firing off the foreign code. Don't want them using read to read a file? read = function(s) {};
Obviously you should talk to the mailing lists of the JS implementation you want to use to get some tips for better securing it.
But JS is well supported and well documented, and the interpreters are really accessible.
You have two basic choices:
a) Provide your own language, in which you completely control what is done, so nothing bad can happen.
b) Use some other execution engine, and check everything it does to verify that nothing bad happens.
My problem with b) is that it is pretty hard to figure out all the bad things somebody might do in obscure ways.
I prefer a), because you only have to give them the ability to do what you allow.
If you have a rather simple set of formulas you want to process, it is actually pretty easy to write a parser/evaluator. See Is there an alternative for flex/bison that is usable on 8-bit embedded systems?
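To give a feel for how small such an evaluator can be, here is a sketch of a recursive-descent parser in PHP for arithmetic with variables (my invention for illustration, not production code). Because it never calls eval() and the grammar has no I/O or function calls, there is nothing to sandbox:

    <?php
    // Tiny formula evaluator: numbers, variables, + - * / and parentheses.
    class FormulaEvaluator
    {
        private array $tokens = [];
        private int $pos = 0;
        private array $vars = [];

        public function evaluate(string $formula, array $vars): float
        {
            preg_match_all('/\d+\.?\d*|[a-zA-Z_]\w*|[()+\-*\/]/', $formula, $m);
            $this->tokens = $m[0];
            $this->pos    = 0;
            $this->vars   = $vars;
            return $this->expr();
        }

        private function expr(): float   // expr := term (('+'|'-') term)*
        {
            $v = $this->term();
            while (in_array($this->peek(), ['+', '-'], true)) {
                $v = $this->next() === '+' ? $v + $this->term() : $v - $this->term();
            }
            return $v;
        }

        private function term(): float   // term := factor (('*'|'/') factor)*
        {
            $v = $this->factor();
            while (in_array($this->peek(), ['*', '/'], true)) {
                $v = $this->next() === '*' ? $v * $this->factor() : $v / $this->factor();
            }
            return $v;
        }

        private function factor(): float // factor := number | variable | '(' expr ')'
        {
            $t = $this->next();
            if ($t === '(') {
                $v = $this->expr();
                $this->next();           // consume ')'
                return $v;
            }
            if (is_numeric($t)) {
                return (float)$t;
            }
            if (!array_key_exists($t, $this->vars)) {
                throw new InvalidArgumentException("unknown variable: $t");
            }
            return (float)$this->vars[$t];
        }

        private function peek(): ?string { return $this->tokens[$this->pos] ?? null; }
        private function next(): ?string { return $this->tokens[$this->pos++] ?? null; }
    }

    // $e = new FormulaEvaluator();
    // $e->evaluate('rate * (hours + 2)', ['rate' => 40, 'hours' => 3]);  // 200.0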
It isn't clear to me that you have a performance problem. Yes, you want to execute something 450 times, but that includes database accesses, whose cost will dominate any computation involving a thousand arithmetic steps. You may find that your speed is limited by DB access, and that you need to cache the DB accesses to get it to go faster.

A form that is saved real-time. Practical?

I am trying to build a very user-friendly interface for my site. The standard right now is to use client-side as well as server-side validation for forms. Right? I was wondering if I could just forgo client-side validation and rely solely on server-side. The validation would be triggered on blur and would use Ajax.
To go one step ahead, I was also planning to save a particular field in the database if it has been validated as correct. Something like a real-time form update.
You see, I am totally new to programming, so I don't know if this approach can work practically. I mean, will there be speed or connection problems? Will it take a toll on the server in case of high traffic? Will the site slow down on HTTPS?
Are there any sites out there which have implemented this?
Also, the way I see it, I would need a separate PHP script for every field! Is there a shorter way?
What you want to do is very doable. In fact, this is the out-of-the-box functionality you would get if you were using JSF with a rich component framework like ICEfaces or PrimeFaces.
Like all web technology, being able to do it with one language means you can do it with others. I have written forms like you describe in PHP manually. It's a substantial amount of work, and when you're first getting started it will definitely be easiest with one script per field backing the form. As you get better, you will discover how you can include the field name in the request and get it down to one Ajax script per form. You can of course reduce the burden even further.
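As a sketch of that single-script idea (field names and rules are invented for illustration):

    <?php
    // validate.php -- one Ajax endpoint for the whole form. The client
    // posts the field name and its value; we dispatch on the name.
    $field = $_POST['field'] ?? '';
    $value = trim($_POST['value'] ?? '');

    $validators = [
        'email'    => fn($v) => filter_var($v, FILTER_VALIDATE_EMAIL) !== false,
        'username' => fn($v) => (bool)preg_match('/^\w{3,20}$/', $v),
        'age'      => fn($v) => ctype_digit($v) && (int)$v >= 18,
    ];

    header('Content-Type: application/json');
    if (!isset($validators[$field])) {
        echo json_encode(['ok' => false, 'error' => 'unknown field']);
        exit;
    }
    echo json_encode(['ok' => $validators[$field]($value)]);
    // On success the same script could also persist the field, as you propose.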
PHP frameworks may be able to make this process less onerous, but I haven't used them and would recommend you avoid them initially until you get your bearings. The magic that a system like Cake or Rails provides is very helpful but you have to understand the tradeoffs and the underlying technology or it will be very hard to build robust systems atop their abstractions.
Calculating the server toll is not intuitive. On the one hand, handling large submissions is more work than handling smaller ones. It may be that you are replacing one big request with several tiny ones for a net gain. It's going to depend on the kind of work you have to do with each form field. For example, auto completion is much more expensive than checking for a username already being taken, which is more expensive than (say) verifying that some string is actually a number or some other obvious validation.
Since you don't want to repeat yourself it's very tempting to put all your validation on one side or the other, but there are tradeoffs either way, and it is true that server-side validation is going to be slower than client-side. But the speed of client-side validation is no substitute for the fact that it will introduce security problems if you count on it. So my general approach is to do validation on the server-side, and if I have time, I will add it to the client side as well so as to improve responsiveness. (In point of fact, I actually start with validation in the database as much as possible, then in the server-side code, then client-side, because this way even if my app blows up I don't have invalid data sticking around to worry about).
It used to be that you could expect your site to run about 1/3 as fast under SSL. I don't have up-to-date numbers but it will always be more expensive than unencrypted. It's just plain more work. SSL setup is also not a great deal of fun. Most sites I've worked on either put the whole thing under SSL, or broke the site into some kind of shopping cart which was encrypted and left the rest alone. I would not spend undue energy trying to optimize this. If you need encryption, use it and get on with your day.
At your stage of the game I would not lose too much sleep over performance. Since you're totally new, focus on the learning process, try to implement the features that you think will be gratifying, and aim for improvement. It's easy to obsess about performance, but you're not going to have the kind of traffic that will squash you for a long time, unless half the planet wants to buy your product while your site is extremely heavy and your host extremely weak. When the traffic comes, profile your code, find where you are doing too much work, and fix that; you will get much further than if you try to design a performant system up front. You just don't have enough data yet to do that.
Most servers these days are well equipped to handle fairly heavy load; you're probably not going to have hundreds of visitors per second sustained in the near future, and it will take a lot more than that to bring down a $20 VPS running a fairly simple PHP site. Consider that one visitor per second works out to about 80,000 hits a day; you'd need over 8 million hits a day to reach 100/second. And you're not going to need a whole second to render a page unless you've done something stupid. Which we all do, a few times, when we're learning. :)
Good luck on your journey!

What might be the best way to benchmark a user's PC, PHP or JS?

PHP - Apache with Codeigniter
JS - typical with jQuery and in house lib
The Problem: Determining (without forcing a download) a user's PC capability and/or virus issues
The Why: We put out software that is mostly used in clinics, but it can be used from home. However, we need to know, before they go to our main site, whether their PC can handle the demands of our web-based, browser-served software.
Progress: So far, we've come up with a decent way to test download speed, but that's about it.
What we've done: In PHP we create roughly a 2.5Gb array of data to send to the user in a view; from there the view calculates the time it took to get the data and then subtracts the PHP benchmark from this time to get a point of reference for upload/download time. This is not enough.
Some of our (local) users have been found to have "crappy" PCs or to be virus-infected, and this can lead to two problems: (1) they crash in the middle of performing a task in our program, or (2) their viruses could try to inject into our JS, creating a bad experience that may make us look bad to the average user (uneducated in how this stuff works), thus hurting "our" integrity.
I've done some googling around, but most plug-ins or advice forums/blogs I've found simply give ways to benchmark the speed of your JS, and that is simply not enough. I need a simple bit of code (with no visual interface included; another problem I found with one nice JS library that did this was that it would have taken days to remove all of the author's personal visual code) that will allow me to test the following three things:
The user's data transfer rate (I think we have this covered, but if a better method is presented I won't rule it out)
The user's processing speed; how fast the computer is in general
A possible test for infection by malware, adware, or whatever may be harmful to the user's experience
What we are not looking to do: repair their PC! We don't care if they have problems; we just don't want to lead them into our site if they have too many. If they can't do it from home, they will be recommended to go to their nearest local office to use this software "in house", so to speak.
Further Explanation
We know you can't test the user-side stuff with PHP; we're not that stupid. PHP is mentioned because it can still be useful, either in determining connection speed or in delivering a script that may do what we want. Also, this is not software for just anyone on the net to sign up and use; if you find it online, unless you are affiliated with a specific clinic and have a login name and whatnot, you're not meant to use the site, and if you get in otherwise, it's illegal. I can't really reveal a whole lot of information yet, as the site is not live. What I can say is that it is mostly used by clinics/offices for customers to perform a certain task. If they don't have the time or transport, and need to do it from home, then the option is available. However, if their home PC is not up to snuff, it will be nothing but a problem for them and will turn the 2-hour task they are meant to perform into a 4-6 hour nightmare. That is why I'm at one of my favorite question sites, asking whether anyone has had experience with this and may know a good way to test a user's PC, so they can have the best possible resolution: either do it from home (because their PC is suitable) or be told they need to go to their local office. Hopefully this clears things up enough that we can refrain from the sillier answers. I need a REAL, viable solution and/or suggestions, please.
PHP has (virtually) no access to information about the client's computer. Data transfer can just as easily be limited by network speed as computer speed. Though if you don't care which is the limiter, it might work.
JavaScript can reliably check how quickly a set of operations are run, and send them back to the server... but that's about it. It has no access to the file system, for security reasons.
EDIT: Okay, with that revision, I think I can offer a real suggestion - basically, compromise. You are not going to be able to gather enough information to absolutely guarantee one way or another that the user's computer and connection are adequate, but you can get a general idea.
As someone suggested, use a 10MB-20MB file and several smaller ones to test actual transfer rate; this will give you a reasonable estimate. Then, use JavaScript to test their system speed. But don't just stick with one test, because that can be heavily dependent on browser. Do the research on what tests will best give an accurate representation of capability across browsers; things like looping over arrays, manipulating (invisible) elements, and complex math. If there is a significant discrepancy between browsers, then use different thresholds; PHP does know what browser they're using, so you can give the system different "good enough" ratings depending on that. Limiting by version (like, completely rejecting IE6) may help in that.
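As a rough sketch of the per-browser thresholds (the numbers and user-agent checks are purely illustrative):

    <?php
    // PHP sees the user agent, so the pass mark for the JS timing test
    // can vary by browser. Thresholds here are made-up examples.
    $ua = $_SERVER['HTTP_USER_AGENT'] ?? '';

    if (strpos($ua, 'MSIE 6') !== false) {
        exit('Unsupported browser; please use a clinic workstation.');
    }

    $thresholdMs = 800;                            // default "good enough" time
    if (strpos($ua, 'Chrome') !== false) {
        $thresholdMs = 500;                        // faster engine, stricter mark
    } elseif (strpos($ua, 'Firefox') !== false) {
        $thresholdMs = 600;
    }
    // Embed $thresholdMs in the page; the JS benchmark compares its elapsed
    // time against it and reports back to the server.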
Finally... inform the user. Gently. First let them know, "Hey, this is going to run a test to see if your network connection and computer are fast enough to use our system." And if it fails, tell them which part, and give them a warning. "Hey, this really isn't as fast as we recommend. You really ought to go down to the local clinic to perform this task; if you choose to proceed, it may take a lot longer than intended." Hopefully, at that point, the user will realize that any issues are on them, not on you.
What you've heard is correct: there's no way to effectively benchmark a machine from JavaScript, especially because the JavaScript engine mostly depends on the actual browser the user is running, amongst numerous other variables, and there are no file system permissions, etc. A computer is hardly going to let a browser's sub-process stress it anyway; the browser would simply crash first. PHP is obviously out, as it's server-side.
Sites like System Requirements Lab have the user download a Java applet that runs in its own scope.

Challenge: maximize cost of obfuscation's reverse engineering

Disclaimer: Similar questions have been asked a number of times on SO; however, this question is much more specific and has not been adequately addressed so far.
We're developing a new packaged software product which, for business security reasons, must run on our customers' servers, in PHP. The software is sold with a per-user end-license; the price range is $20-80 per user, and the target market is small (and web-savvy) consultancies and IT agencies.
To discourage piracy (e.g. removing the user-license enforcement), we'd like to maximize the protection of the PHP code by any means technologically available that do not inconvenience the user.
Let's break this down:
does not inconvenience the user: no additional server-side installs (no zend decoder, or other binaries). Has to run on a plain-vanilla shared PHP host out-of-the-box.
Maximize the protection: breaking the protection has to outweigh the cost of buying an additional license. That is, it has to take at least 3-5 working days for a professional hacker to remove the user license protection.
Any means technologically available: might call home, might use high-end crypto, might implement a c64 emulator.
To pro-actively address the so far highest-voted non-solutions:
NOT looking for perfect obfuscation, just an extremely hard one (defined as: it has to take at least 3-5 working days to break), OR other anti-piracy methods
NOT looking for "black-box" software packages, which I don't know how they work, and can't determine whether it fits our purpose; looking for algorithmic ,and out-of-the-box ideas.
NOT looking for license/law-side protection, we already have that covered.
We DO know, that given enough time, and focus, all obfuscation will be hacked sooner or later; we merely want this not to be the economical solution.
Given the above constraints, what methods, or ideas would you use to maximize anti-piracy measures?
Bounty-hunt: the points go to the hardest algorithmic method to reverse-engineer, given the constraints above.
Update / Bounty-hunt: I've accepted Ira Baxter's answer, mostly because the rest failed to answer the core question, and attempted to question the underlying assumptions (business, closed source, yadda yadda). Thanks all!
I think what you want to do is to transform the code algorithmically, to obfuscate not only what is executed but also the data structures. We assume we start with a clean version of the program, produced by the developer; he always works with the clean version. Obfuscation produces the to-ship version. Good obfuscation will produce a to-ship version with exactly the same functionality as the original, so no further testing is (arguably) needed.
For control flow scrambling, the idea is to take the nicely written code you have at the start and push it through transformations that make static (and human) analysis of the decisions that control the flow difficult, by multiplying the set of assumptions that have to be analyzed. For instance, if you have two pointers, and store a value through one, can it affect the value seen by the other? Depending on whether the pointers are aliased or not, you can get two different answers. Now take N pointers, each of which may be aliased: you get 2^N possible aliasing relations. If the reader doesn't know the exact combination, he won't be able to determine whether a decision is true, false, or conditional. Of course, the tool that generates this produces conditionals whose outcome it knows, because it designs (generates) the pointer rat's nest to produce a specific outcome.
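PHP has no raw pointers, but references give the same flavor. Here is a toy opaque predicate in that spirit (hand-written for illustration; a real obfuscator would generate thousands of these and know every outcome):

    <?php
    // The generator wires up the aliasing, so it knows which branch runs;
    // a reader or static analyzer must resolve the alias chain first.
    function realWork()      { echo "intended path\n"; }
    function plausibleJunk() { echo "decoy path\n"; }

    $a = 1;
    $r = &$a;     // $r aliases $a
    $r = 42;      // a write through the alias; $a is now 42

    if ($a === 42) {          // always true, but only once the alias is resolved
        realWork();
    } else {
        plausibleJunk();      // dead code that looks live
    }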
See Code Obfuscation Literature Survey (not my paper), which discusses a variety of control flow and data flow obfuscations. It is likely not the most recent summary of what is possible, but it's pretty instructive. You should note that doing this kind of obfuscation has some impact on execution time.
What the papers on this topic make clear is that control and data flow obfuscated programs are extremely hard for static analyzers to "understand"; the papers provide/reference demonstrations of the algorithmic complexity of processing such obfuscated programs.
Now, you might argue that people aren't static analyzers and therefore don't suffer the same limitations. You might be right; Roger Penrose famously argues that people do not have the same constraints as Turing machines, and the argument isn't settled by a long shot. But the entire foundation of encryption/hashing technology is built on essentially the same kind of computational complexity arguments. And to date, nobody has proven smart enough to crack these technologies in ways that can be used in daily life by thieves (a good thing, or your bank accounts would be empty).
To do this to a PHP program, you need tools that can parse the PHP code, and carry out such transformations. Our DMS Software Reengineering Toolkit has robust PHP parsers, and can apply very complex transformations to code. To do this really well, you want to apply the transformations globally across all your code, not just on a file-by-file basis. We don't have this kind of obfuscation transformation implemented on PHP, but if you really wanted to do it, this would be the way. We have applied complex transformations to PHP programs for other commercial products that we sell.
When you are all done, ideally you'd compile this result to machine code, say using the HipHop compiler. (Just compiling would defeat some folks, but not the serious software engineers).
EDIT: Obfuscation != AntiPiracy is a theme in other answers. So how does obfuscation help?
First you need to deal with the anti-piracy issue. The obvious things to do are:
Add copyright comments to each file. These serve as warnings to thieves, though not good ones.
Add copyright strings in various places and print them out occasionally; these will end up in memory and play a role if a pirate steals the code, since he stole those strings too.
Add a string to your application saying "licensed to <customer>". This makes your customer unenthusiastic about letting it be stolen.
Add a check to your application that it is running on the intended customer's machine. (Since your app is intended to be very cheap, you'll probably need to automate a registration process.)
Have the application phone home with its machine ID occasionally (see the sketch below).
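A hedged sketch of those last two items; the endpoint, the machine-ID recipe, and the constant are all invented, and the phone-home assumes allow_url_fopen:

    <?php
    // Check that we're on the licensed machine, and occasionally phone home.
    define('LICENSED_MACHINE_ID', 'a1b2c3d4e5f6');   // issued at registration

    function currentMachineId(): string
    {
        // Tie the ID to things a shared host exposes; adjust to taste.
        return substr(sha1(php_uname('n') . '|' . ($_SERVER['SERVER_NAME'] ?? '')), 0, 12);
    }

    if (currentMachineId() !== LICENSED_MACHINE_ID) {
        error_log('license mismatch');               // or nag, degrade, refuse...
    }

    // Fire-and-forget phone home on ~1% of requests; ignore failures so a
    // network hiccup never takes the application down.
    if (mt_rand(1, 100) === 1) {
        @file_get_contents('https://vendor.example.com/ping?id=' . currentMachineId());
    }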
Now, these steps deter someone (legally and technically) from stealing your code. If this is all you have, though, an unfazed pirate will simply remove the technical checks, and it's stolen.
It is very hard to prevent somebody from copying the bit stream that makes up your product; computers are far too good at copying. So your goal is to arrange for it to be hard for him to derive value if he does, and that's where obfuscation comes in.
If the code is sufficiently obfuscated, he will have a difficult time locating the license check and phone-home mechanisms to disable them. (I suggest several checks, none of them always called, to make it hard for the thief to tell when he has succeeded.)
The obfuscation, well done, should also protect the printing of the original owner's name, which means the original owner will have some interest in preventing it from being stolen, as you'll name him along with the pirate in any lawsuit.
If they defeat the licenses, copyright printing, and phone-home mechanisms, and simply want to run it in the back room without telling you, you might be stuck. (For $80.00, I can't imagine why they'd go to all this trouble just for this effect.)
But many thieves want to modify the software to "improve" it, especially if they want your market. Serious obfuscation will prevent them from doing this; it will even make it hard for them to add their own license controls. That limits the value pretty severely.
They may simply steal it and release it to the world for free; your hope here is that the application is hard to crack. If they succeed, your only good defense is a continuing stream of upgrades that licensed owners get.
Obfuscation is a key to successful piracy defense, IMHO.
Obfuscation != anti-piracy. For instance, you could have a heavily obfuscated class, but I can use reflection to see all the methods that class implements. I can then extend the class and override any methods I don't like. Are you storing a secret? Any secret value can be pulled from memory using a debugger.
3-5 days? Even with Zend-Guard it takes 3-5 seconds to break using some open source tool. Most obfuscation tools are very primitive and easy to break.
I'm sorry but I don't think there is a good solution for this.
The best anti-piracy method is no method.
If you don't want to use tools such as Zend, then you are better off doing absolutely nothing.
Take it from me: you can waste more time and lose more sales trying to stop pirates; you will only hurt yourself. They don't care, and it's good fun; the harder you make it, the more satisfaction they get from doing it. And once it's done it will be available to all via a torrent, so no one needs to repeat the effort.
Make a good application. Make it work well. Give fantastic service, and the customers you want will gladly pay. The customers you don't want will NEVER pay, so don't waste time on them. And guess what: they actually become good advertising. People see your software on more sites and come looking for it.
So in effect you are getting free advertising.
So don't stress, don't waste your time, and don't blame pirates if your software fails. Blame yourself, because you got too distracted trying to do the impossible.
I wanted to add a little bit of my personal experience.
Back in the '90s I spent many months creating encryption techniques to reduce/prevent piracy of a heavily pirated piece of software; in the end I 'mostly' succeeded.
I used custom encryption, junk insertion, random number generators, cross-module CRC checking, blah blah blah.
I used to hang out in the newsgroup devoted to hacking my software and others like it, and even struck up conversations. One polite fellow said, "Why are you wasting your time? We do this for fun." But I was hooked. It was a competition.
If I had spent the time and effort on improving the software instead, I would have earned 10x the amount I thought I had lost to piracy.
It was a fools victory.
I thought about this a lot, and what you are asking is essentially impossible. You can obfuscate to no end and people will still steal your software; there is little you can do about it. If you write code to call home, someone will strip it out and just put true in instead. Your best bet is to write quality software so people want to buy it. It's either that or use a commercial solution like ionCube or Zend.
Only a few things can really work. The most basic approach I can think of that would be effective (since this market sounds fairly controlled and finite) would be to use something similar to a licensing server, but with a two-way communication channel (which you can encrypt, etc.).
Now, of course someone can disable that communication channel, but between the code you will add to disable the software and the fact that your company will be able to follow up with the client (since you will know exactly who it is that is "down"), that will help.
The third part of the logic is for each license that is given out to play a role in generating the "checks" that occur between the software and your licensing server. This means you generate, on-premise, unique hash codes that are used as part of the answer your software sends back to the server. That pretty much rules out casual hacking, because the hacker would have to know what algorithm you are using to generate the license codes (since they are pre-generated, there is no logic to use to decipher them) and would have to feed you a valid licensing key.
The fourth step, optionally, would be to push updates to clients to refresh the security mechanisms you have in place and run "tamper" checks on your code, possibly periodically feeding some sort of hash to be used in the logic your software uses to connect to the licensing server.
This still isn't perfect: someone will be able to clone a production machine, circumvent/redirect the licensing (and you won't know, since it will be a copy), and chip away at the checks in your code that require a license (as someone above mentioned, setting all the logic to "true")... but you can definitely spend the time putting checks and encryption on your licensing system and make cracking it a time-consuming and risky process. Unless... as a final touch... you could have some deliverable of your product generated by your server (so none of that code is in what the client has) and pushed to the software that has this licensing mechanism in place, but I don't know how feasible that is.
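To sketch the challenge/response piece of that (URLs and key handling are invented; hash_hmac does the heavy lifting):

    <?php
    // The licensing server sends a random challenge; the client answers with
    // an HMAC keyed by its per-customer license secret. Without that secret,
    // a cracker cannot forge valid answers.
    $licenseKey = 'per-customer-secret-issued-on-premise';

    $challenge = file_get_contents('https://license.example.com/challenge');
    $answer    = hash_hmac('sha256', $challenge, $licenseKey);

    $verdict = file_get_contents(
        'https://license.example.com/verify?c=' . urlencode($challenge)
        . '&a=' . urlencode($answer));

    if (trim($verdict) !== 'valid') {
        // policy decision: nag, degrade gracefully, or refuse to run
    }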
Artificial code bloat
By using post-processors to automatically bloat the code and insert logic multipliers, you make the code hard to modify.
I use tags in the original source to indicate the type of code in each method and which code multiplier to use. Randomisers help too, as each release looks very different.
The code bloat is achieved by a variety of processes, e.g. repeated and random fiddling with variables before and after they are officially in scope; lots of extra logic steps that will never be followed; and breaking single statements into many random small steps, interlaced with as many other statements as possible, as long as the final steps run in the correct order. And so on.
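A toy before/after of the statement-splitting step (hand-written here; a real post-processor would generate and randomise this):

    <?php
    // Before: $total = $price * $qty;
    // After a toy bloat pass, the same computation smeared across junk steps:
    $price = 10;
    $qty   = 3;

    $z9 = 0; $k4 = 1;       // variables fiddled with before they matter
    $k4 = $price;           // the real value, hidden among the noise
    $z9 = $qty + 7;         // junk arithmetic...
    $z9 = $z9 - 7;          // ...that cancels out
    $tmp = $k4 * $z9;
    $k4  = $k4 ^ $k4;       // dead store, never read again
    $total = $tmp;          // same result as the one-liner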
The final and most important part of this process is to interlace key generation and call-home processes through this mess, and to make them part of the mess (remember the "random fiddling with variables before and after they are officially in scope"), so that the time taken to remove the key generation and call-home becomes unwieldy.
The call-home server has to act like a rolling-code remote control, so that while the attacker might discover the call-home functions, taking them out will result in incorrect initialisation values for general variables in general methods, in as many cases as you can manage.
Over time you can build a general-purpose code re-parser and a library of functions to mess the code up. Keep adding to the code-mess library to improve the obfuscation level.
You need a unit and integration test library with good coverage to validate the code after it has been messed up.
I have not done this with PHP, but I have with other languages that have similar constraints to PHP.
Note: this technique works fine for complex scientific software, where there are large amounts of cryptic logic and maths anyway. It may not work so well for typical web software like a CMS, unless your code multipliers are very convincing.
If I understand this right: why not invest in a server, delivered within the cost of the application, which can be placed at the customer's site with only one port opened, for HTTP access? I mean, for $1000 you can get a machine that works as a safe for your software. If anyone attempts to hack into it, you will know.
Another solution might be:
Currently I am working for a huge company that has approximately 350 selling points (shops) all over the country. As we cannot rely on the internet connection 100%, we have a server at each shop. This server handles the business required for actual selling and is linked to a local database; the rest of the stuff sits on the head-office server.
Now, the clerks have computers in front of them, and all these computers work with the application hosted on the local server. The catch on the local server is a registry which knows whether a certain service is located locally (on the same machine) or remotely (at the head office) and executes the call as required (over HTTP to the remote location, or as a direct call to the local service). Services can be placed anywhere (local or remote), and all one needs to do is configure their location in the registry by entering one of the keywords: local, remote, or application (the application keyword means the service is first called remotely and, if that fails, called locally). This way you can make an acceptable compromise: highly necessary stuff can sit locally, and the rest of the business logic can reside on your server, where nobody can touch it.
The short answer is no, there is no way to obfuscate code in such a complex manner that it takes days to crack. The simple explanation: obfuscation is a two way process. It can be done and undone. If a computer can do it, a determined person can do it too.
Instead of wasting so much time on protecting your code, why not take a hint from the popular TV show 24 (side note: it should never have been canceled!)? To ensure scripts weren't stolen or revealed to the public, they watermarked each copy with a number specific to the cast member, director, producer, etc. You can do something similar with your scripts by "watermarking" each PHP file. This can be as simple as changing a variable name to reflect a client ID, or as complex as spreading identifying characters over multiple variable and function values/names. Try working this identifier and/or parts of it into as many inconspicuous places in your scripts as possible. Only you know the exact combination that creates the identifying information, so if the code is leaked you can sue the responsible party.
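A sketch of a build-step version of that watermarking (the placeholder token, salt, and paths are invented):

    <?php
    // Stamp a per-client identifier into every shipped file before delivery.
    $clientId  = 'client-0042';
    $watermark = substr(sha1($clientId . 'private-salt'), 0, 12);

    foreach (glob('build/*.php') as $file) {
        $src = file_get_contents($file);
        // Replace a known placeholder; variable renaming would work too.
        file_put_contents($file, str_replace('__WM__', $watermark, $src));
    }
    // Only you know the salt, so a leaked copy identifies its licensee.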
Just a suggestion: you might want to add extra lines of code that don't really do anything, except that they look like they do.
