I made a web-based program for a customer, and I want to install the app on a local server of his.
I don't want to give him all the source until he has paid for it, so my idea was to store most of the core code on an external server, and only have a kind of include on his server, so he would not be able to see / copy / change the actual PHP code.
I know I can use include() with a URL once I've changed the corresponding entry (allow_url_include) in the php.ini file, but is there a more secure way of doing this?
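To illustrate what I mean, the remote include would look roughly like this (the URL is just a placeholder; as far as I know the relevant php.ini setting is allow_url_include, which can't be enabled at runtime):

<?php
// Needs allow_url_include = On (and allow_url_fopen = On) in php.ini;
// neither can be enabled with ini_set().
// The URL is a placeholder for the server that holds the core code.
include 'http://code-host.example.com/core/functions.php';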
Also, what configuration should my server have so that the PHP code on his local server would be able to read the PHP on mine? Wouldn't that pose a huge security risk if I allow other servers to "load" my PHP code?
(Notice that I use a free Web hosting service as the "second server" and I don't have any access to the conf files.)
I hope I've explained my situation well enough.
Including your PHP remotely is (a) yes, a huge security risk and (b) not accomplishing much, since your customer can also "see" that remote code, copy/paste it, and have it all in his possession. (For the remote include to pull in actual code, your server has to return the raw PHP source over HTTP, so anyone who finds the URL can fetch it.)
Option 1: Don't give away the app!
If your customer wants to test the app, deploy it to a server that you control. Let him see/use/test the app, without access to the source code.
Option 2: Encode it
If you absolutely have to give your app to the customer and yet need to protect it, look at encoding solutions. We use http://www.ioncube.com/ to encode/protect PHP code that we deploy to a customer's server.
Related
I've a rather odd requirement: I want users to be able to verify the live source code of a web app before they input data or extract data from it.
Or, at a higher level, the users need to be reasonably assured of what is being done (and not done) in the back end. Of course, if you inspect the stream from a process external to the web server, this becomes a useless exercise. But I only need a reasonable level of assurance.
What are the options? I'm willing to use pretty much any server side language/platform, provided it serves the purpose better than the alternatives. It cannot be a method that can be used to easily spoof the source code -- there has to be some assurance that the code is live and not a separate copy (something equivalent to making /var/www/app and apache conf world-readable, but not exactly).
Update: this should be read-only
Giving them access to your Git sources is simple and straightforward. If you cannot convince them that you deploy what you show, you lose anyway. There is no way to prove that with a more convoluted system either (short of giving them write access!)
No server-side solution will do. If the users don't trust the server to begin with then showing them some code will not convince them that the code is actually what processes their input, or that no one is listening in on the traffic or on the server-side process.
If the server is not a trusted platform as far as the users are concerned, then you will have to execute the code somewhere the users do trust. On a trusted 3rd party, or even better on the user's machine itself. Be that as a downloadable module they can inspect and run themselves (something interpreted, most likely, like Python or node) or even better: in their browser.
I have a situation where I have lots of system configurations/logs from which I have to generate a quick review of the system that is useful for troubleshooting.
To start, I'd like to build a kind of web interface (most probably a PHP site) that gives me a rough snapshot of the system configuration using the available information from the support logs. The support logs reside on mirrored servers (call them the log servers), and the server on which I'll be hosting the site (call it the web server) will have to use SSH/SFTP to access them.
My rough sketch:
The PHP script on the web server will make some kind of connection to the log server and go to the location of the support logs.
It'll then trigger a Perl script on the log server, which will collect the relevant data from all the config/log files into some useful XML (there'd be multiple of those).
These XML files are then somehow transferred to the web server, and PHP will use them to create the HTML.
I'm very new to PHP and would like to know if this is feasible, or if there's any alternative/better way of doing this.
It would be great if someone could provide more details.
Thanks in advance.
EDIT:
Sorry, I forgot to mention that the logs aren't ones generated on a live machine. I'm dealing with sustaining activities for a NAS storage device, and there'll be plenty of support logs coming from different end customers that folks from my team would like to have a look at.
Security is not a big concern here (I'm OK with using plain-text authentication to the log servers), as these servers can be accessed only through the company's VPN.
Yes, PHP can process XML. A simple way is to use SimpleXML: http://php.net/manual/en/book.simplexml.php
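As a minimal sketch (the file path and the element/attribute names are assumptions about what your Perl script might produce):

<?php
// Hypothetical report file produced by the Perl script on the log server.
$xml = simplexml_load_file('/var/reports/system_snapshot.xml');
if ($xml === false) {
    die('Could not parse the report XML');
}

// Element and attribute names below are assumptions about the XML layout.
echo '<h1>', htmlspecialchars((string) $xml->hostname), '</h1>';
foreach ($xml->disk as $disk) {
    echo '<p>', htmlspecialchars((string) $disk['name']), ': ',
         htmlspecialchars((string) $disk['usage']), '% used</p>';
}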
While you can do this using something like Expect (I think there is something for PHP too), I would recommend doing it in two separate steps:
A script, running via cron, retrieves data from the servers and stores it locally
The PHP script reads only from the locally stored data in order to generate the reports (see the sketch after the list of benefits below).
This way, you have these benefits:
You don't have to worry about how to make your PHP script connect to the servers via SSH
You avoid the security risks of allowing your web server user to log in to other servers (a high risk in case your script gets hacked)
In case of slow/absent connectivity to the servers, long log-retrieval times, etc., your PHP script will still be able to show the data quickly -- maybe along with an error message explaining what went wrong during the latest update
In any case, your PHP script will finish much quicker since it only has to retrieve data from local storage.
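As a hedged sketch of the second step (the file location and staleness threshold are placeholders), the PHP page then only needs to do something like this:

<?php
// Hypothetical location where the cron job drops the collected XML files.
$report = '/var/cache/logreports/server1.xml';

if (!is_readable($report)) {
    die('No report available yet - has the cron job run?');
}

// Warn if the cached data looks stale (older than one hour here).
if (time() - filemtime($report) > 3600) {
    echo '<p class="warning">Data may be out of date (last update: ',
         date('Y-m-d H:i', filemtime($report)), ')</p>';
}

$xml = simplexml_load_file($report);
// ... build the HTML report from $xml as usual ...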
Update: SSH client via PHP
OK, from your latest comment I understand that what you need is more of a "front-end browser" to display the files than a report-generation tool; in this case you can use Expect (as I mentioned before) to connect to the remote machines.
There is a PECL extension for PHP providing expect functionality. Have a look at the PHP Expect manual and in particular at the usage examples, showing how to use it to make SSH connections.
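Roughly along the lines of the manual's SSH example (host, user, password, and the remote command are placeholders; this assumes the PECL expect extension is installed and loaded):

<?php
// Value returned by expect_expectl() on a match; the constant is arbitrary.
define('PASSWORD_PROMPT', 1);

ini_set('expect.timeout', 30);

// Placeholder host, user and command.
$stream = fopen('expect://ssh loguser@logserver cat /nas/support/summary.log', 'r');

$cases = array(
    array('password:', PASSWORD_PROMPT),
);

switch (expect_expectl($stream, $cases)) {
    case PASSWORD_PROMPT:
        fwrite($stream, "secret\n");        // send the (placeholder) password
        while (($line = fgets($stream)) !== false) {
            echo htmlspecialchars($line);   // display the remote output
        }
        break;
    default:
        die('Could not connect to the log server');
}

fclose($stream);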
Alternate way: taking files from an NFS/Samba share
Another way, which avoids SSH altogether, is to browse the files on the remote machines via a locally mounted share.
This is especially useful if the interesting files are already shared by a NAS, though I wouldn't recommend it if it would mean sharing the whole root filesystem or huge parts of it.
What is the best process for updating a live website?
I see that a lot of websites (e.g. StackOverflow) have warnings that there will be downtime for maintenance in advance. How is that usually coded in? Do they have a config value which determines whether to display such a message in the website header?
Also, what do you do if your localhost differs from the production server, and you need to make sure that everything works the same after you transfer? In my case, I set up development.mydomain.com (.htaccess authentication required), which has its own database and is basically my final staging area before uploading everything to the live production site. Is this a good approach to staging?
Lastly, is a simple SFTP upload the way to go? I've read a bit about more complex methods like using server-side hooks in Git, but I'm not sure how this works exactly or whether it's the approach I should be taking.
Thanks very much for the enlightenment.
babonk
This is (approximately) how it's done on Google App Engine:
Each time you deploy an application, it is associated with a subdomain according to its version:
version-1-0.example.com
version-1-1.example.com
while example.com is associated with one of the versions.
When you have a new version of the server-side software, you deploy it to version-2-0.example.com, and when you are sure you want to put it live, you associate example.com with it.
I don't know the details, because Google App Engine does that for me, I just set the current version.
Also, when SO or another big site has downtime, it is more likely to be a hardware issue than a software one.
That will really depend on your website and the platform/technology behind it. For a simple website, you just update the files with FTP, or if the server is locally accessible, you just copy the new files over. If your website is hosted by some cloud service, then you have to follow whatever steps they offer, because a cloud-based hosting service usually won't let you access the files directly. For a complicated website with a backend DB, it is not uncommon that whenever you update code, you have to update the database as well. In order to make sure both are updated at the same time, you will have to take your website down. To minimize the downtime, you will probably want a well-tested update script to do the actual work. That way you can take down the site, run the script, and fire it up again.
With PHP (and Apache, I assume), it's a lot easier than some other setups (having to restart processes, for example). Ideally, you'd have a system that knows to transfer just the files that have changed (i.e. rsync).
I use Springloops (http://www.springloops.com/v2/) to host my git repository and automatically deploy over [S/]FTP. Unless you have thousands of files, the deploy feels almost instantaneous.
If you really wanted to, you could have an .htaccess file (or equivalent) to redirect to a "under maintenance" page for the duration of the deploy. Unless you're averaging at least a few requests per second (or it's otherwise mission critical), you may not even need this step (don't prematurely optimize!).
If it were me, I'd have an .htaccess file that holds the redirection instructions, and set it to only redirect during your maintenance hours. When you don't have an upcoming deploy, rename the file to ".htaccess.bak" or something. Then, in your PHP script:
<?php if (file_exists('/path/to/.htaccess')) : ?>
<h1 class="maintenance">Our site will be down for maintenance...</h1>
<?php endif; ?>
Then, to get REALLY fancy, set up a Springloops pre-deploy hook to make sure your maintenance redirect is in place, and a post-deploy hook to change it back on success.
Just some thoughts.
-Landon
I'm currently using SSH+SVN for a web project developed primarily in PHP. There is another developer working with me, and we both check out from the repo into our own sandboxes, which are viewable from the web.
I want to bring in new implementers and restrict them to certain parts of the project code. How do I achieve this and still allow them to have a sandbox to preview the site with their changes in it?
For example, I have a piece of code called proprietary_algo.php that needs to be restricted to only privileged developers (read, write, execute). All other new implementers can still view the site via their sandbox, which requires the execution of proprietary_algo.php, but they cannot copy the code or read the code inside of it.
I'm open to moving away from SVN or setting up a whole new process if I can achieve this.
Added note: no, NDAs and trust will not cut it. For our business need and situation, the specific source files need to be restricted.
MORE INFO:
I set up a virtual host and DNS entry that points to each sandbox dir (example: devuser1.mydomain.com) so they can do testing. They check out code directly from trunk into their sandbox and edit code in their IDEs, connected remotely via SSH. As mentioned above, there is some code in the repo that should be off limits, but it is still needed to run the site when they edit and test in their sandboxes. All devs share the same MySQL DB instance.
You can do that if you use svn+httpd (Apache with mod_dav_svn), which supports path-based authorization, so you can deny read access to individual files or directories in the repository.
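As a rough sketch (repository, group, and user names are placeholders), an AuthzSVNAccessFile along these lines would hide the one file from everyone outside a privileged group:

[groups]
privileged = alice, bob

# Everyone can read the trunk...
[myrepo:/trunk]
* = r
@privileged = rw

# ...except the protected file, which only privileged devs can access.
[myrepo:/trunk/proprietary_algo.php]
* =
@privileged = rw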
Addressing "requires the execution of proprietary_algo.php, but they cannot copy the code or read the code inside of it." If NDAs won't cut it, you are in for a world of pain.
Even once you've set things up with the SVN access controls, you won't be able to stop their PHP script copying the secret scripts to HTTP output.
Actually, you can stop it, but you'd have to either:
Have the untrusted code call the secret script via an HTTP request (e.g. with curl). You'll need to implement an XML/JSON/name-your-HTTP-RPC-method interface between the trusted and untrusted code (see the sketch after this list).
Allow the untrusted code to execute the protected code only as separate CGI-mode scripts (running as a different OS user), rather than including it directly.
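A hedged sketch of the first option (the endpoint name algo_service.php, the function run_proprietary_algo(), and the URL are all assumptions, not an existing API): the trusted server wraps the secret code behind a small HTTP endpoint, and the sandbox code calls it with curl instead of including proprietary_algo.php.

<?php
// --- algo_service.php: lives on a server the implementers cannot read ---
require 'proprietary_algo.php';            // the protected code
header('Content-Type: application/json');
// run_proprietary_algo() is a hypothetical function inside the protected file.
echo json_encode(run_proprietary_algo($_POST['input']));

<?php
// --- untrusted sandbox code: calls the algorithm over HTTP instead ---
$ch = curl_init('https://trusted.example.com/algo_service.php');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => http_build_query(array('input' => 'some value')),
));
$response = curl_exec($ch);
curl_close($ch);
$result = ($response !== false) ? json_decode($response, true) : null;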
So, if this question has been asked before, I'm sorry. I'm not exactly sure what to search for.
Introduction:
All the domains I maintain now are hosted on my server, so I have not run into this problem yet.
I have created a structure, similar to WordPress, for uploading and editing images.
I regularly make changes to the functions and upload them to a single folder. When a user logs in, the contents are automatically downloaded into their folder.
What I am wanting to do:
Now, say I have a user that is not hosted on my server. I cannot use copy(), but is there a safe and secure way to echo the contents of each PHP file (obviously, I can echo) into another file on the user's server?
For example:
Currently I can copy from jasonleodurbin.com to geodun.com (same server), but say I want to copy jasonleodurbin.com/test.php to somedomain.com/test.php.
I had some thoughts, like giving each user a private key and sending it to a file like echo.php. echo.php would grab the contents of every file (that has been modified recently) and echo them to the screen. The requesting server would take that content and copy it into its respective .php file.
I assume I could send the key through GET, but since I have never dabbled in the security implications of anything (I am a hobbyist), I don't know how secure this is.
Are there any suggestions or directions that someone could send me?
I appreciate the help!
I'm assuming this is sensitive data. If that's the case, then I would suggest encrypting the file using PGP keys. Either way, you need a method to send the file from your server to their server. I can't recall exactly how I did it, but I used to send encrypted data files from our remote server to a server in house. We used PGP keys to encrypt, and decrypted once the file arrived in house. As for the method we used to send the file across the web, I believe we used SCP (you need shell access on the server).
You could use FTP, but how about setting it up so that they only have access to a particular directory and can't touch anything else? You'll need a script to grab the file from the FTP location and store it in the appropriate directory per user.
Just thought of something: store the file in a protected folder and have the user download it using curl. I believe you can specify a username/password with curl.
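A minimal sketch of that idea, assuming the file sits in an .htaccess-protected folder and is served as plain text (the URL, credentials, and paths are placeholders):

<?php
// Fetch the updated file over HTTP basic auth; publishing it with a .txt
// extension stops the source host from executing it.
$ch = curl_init('https://jasonleodurbin.com/protected/test.php.txt');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_USERPWD        => 'updateuser:secret',   // HTTP basic auth
));
$contents = curl_exec($ch);

if ($contents !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200) {
    file_put_contents(__DIR__ . '/test.php', $contents);
}
curl_close($ch);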
Several options:
Upload the newest version of test.php as test.phps (PHP source file, will be displayed instead of run) in a location known to the client. It is then up to them to download this file and install it on their web server.
pros: not much effort required on your part, no keys or encryption required.
cons: everyone can view the contents of your PHP file if they know where to look, no guarantee that clients will actually get updated versions of the file.
Copy the file to the client's web server. Use scp, ftp, or some such method to update test.php on the client's web server whenever you change it.
pros: the file will always be up to date. Reasonably secure if you use scp (see the sketch after this list)
cons: extra step required for you, you will have to remember to do this each time you change test.php. You will need access to the client's web server for this to work
Automated copy at a timed interval. Set up a cron script that syncs test.php to the client's web server at a certain time each hour/day/week/whatever
pros: Not much repeated effort required on the part of either party. Reasonably secure if you use scp
cons: could break if something changes and you aren't emailed when an error occurs. You will also still need access to the client's machine for this to work.
There are probably a lot of other ways to do this as well, but these are just a few to get you started
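To make options 2 and 3 a bit more concrete, here's a hedged sketch using the PECL ssh2 extension (host, user, key paths, and file locations are placeholders); run it by hand for option 2, or from cron for option 3:

<?php
// Connect to the client's server over SSH (PECL ssh2 extension required).
$conn = ssh2_connect('client-server.example.com', 22);
if (!$conn) {
    die('Could not connect to the client server');
}

// Key-based auth is nicer than a password for an unattended cron job.
if (!ssh2_auth_pubkey_file($conn, 'deploy',
        '/home/deploy/.ssh/id_rsa.pub', '/home/deploy/.ssh/id_rsa')) {
    die('SSH authentication failed');
}

// Push the updated file into the client's web root.
if (!ssh2_scp_send($conn, '/var/www/myapp/test.php',
        '/var/www/clientsite/test.php', 0644)) {
    die('Upload failed');
}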
Use a version control system, such as Subversion. Just check your code in to the repository each time you make changes you want to push, and run an update from the clients. If you're already using a version control system, create a production branch where you commit your changes when they're ready to be pushed to clients.
It can be done from the clients in pure PHP (slightly experimental) with a library from here or here, with a PHP extension, or with a wrapper around the native svn client (see the sketch at the end of this answer).
This gives you security, as each user can have their own password, which you can revoke if you so please. You can also get encryption by running the traffic through an SSH tunnel (which limits your library choices to the wrapper, I think), but really, I wouldn't worry too much about encryption; who's going to be looking at the traffic between the servers, unless you're doing top-secret-type stuff?
It also gives you automatic change detection; you don't have to roll your own way of keeping track of which files have been updated, as this is done when you commit your changes.
It's a proven way of keeping code bases up to date, so I don't see why you would implement your own. It also gives you the extra advantage of being able to roll back changes if (when) there's a problem with a code update.
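As a minimal sketch of the "wrapper around the native svn client" approach mentioned above (paths are placeholders), the update step on a client can be as small as this:

<?php
// Update the client's working copy of the production branch and show the result.
$output = shell_exec('svn update /var/www/myapp 2>&1');
echo '<pre>', htmlspecialchars((string) $output), '</pre>';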