I'm currently using SSH+SVN for a web project developed primarily in PHP. There is another developer working with me, and we both check out from the repo into our own sandboxes, which are viewable from the web.
I want to bring in new implementers and restrict them to certain parts of the project code. How do I achieve this and still allow them to have a sandbox to preview the site with their changes in it?
For example, I have a piece of code called proprietary_algo.php that needs to be restricted to privileged developers only (read, write, execute). All other new implementers can still view the site via their sandboxes, which requires proprietary_algo.php to execute, but they must not be able to copy or read the code inside it.
I'm open to moving away from SVN or setting up a whole new process if I can achieve this.
Added note: no, NDAs and trust will not cut it. For our business need and situation, the specific source files need to be restricted.
MORE INFO:
I set up a virtual host and DNS that points to their sandbox dir (example: devuser1.mydomain.com) so they can do testing. They check out code directly from trunk into their sandbox and edit code in their IDEs, connected remotely via SSH. As mentioned above, there is some code in the repo that should be off limits but is still needed to run the site when they edit and test in their sandboxes. All devs share the same MySQL DB instance.
You can do that if you use svn+httpd: serving the repository through Apache (mod_dav_svn with mod_authz_svn) gives you per-path read/write rules instead of all-or-nothing access.
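As a rough illustration (the repository, path, and group names are placeholders), the AuthzSVNAccessFile could deny reads on just that one file while leaving the rest of trunk open:

[groups]
privileged = you, otherdev

[webproject:/]
* = rw

[webproject:/trunk/lib/proprietary_algo.php]
* =
@privileged = rw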
Addressing "requires the execution of proprietary_algo.php, but they cannot copy the code or read the code inside of it." If NDAs won't cut it, you are in for a world of pain.
Even once you've set things up with the SVN access controls, you won't be able to stop their PHP script copying the secret scripts to HTTP output.
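For instance, in a naive shared setup where the sandbox's PHP process can read the file (which it must, if it is to execute it in-process), one line is enough to leak it:

<?php
// Any page in the untrusted sandbox can dump the source verbatim.
echo htmlspecialchars(file_get_contents('proprietary_algo.php'));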
Actually, you can stop it, but they'd have to either:
Call the secret script via an HTTP request (e.g. with curl). You'll need to implement an XML/JSON/name-your-HTTP-RPC-method interface between the trusted and untrusted code (a sketch follows below).
Or run the untrusted code as CGI-mode scripts under separate OS users (suEXEC/suPHP), so filesystem permissions keep the restricted files unreadable from the sandboxes.
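A minimal sketch of the HTTP-RPC option, assuming the privileged code is exposed at a hypothetical internal endpoint (the URL, parameters, and response format here are made up for illustration):

<?php
// Sandbox-side stub: untrusted code only ever talks to an HTTP endpoint
// served by the trusted code, so proprietary_algo.php never has to be
// readable (or even present) in the sandboxes.
function run_proprietary_algo(array $params)
{
    $ch = curl_init('https://internal.mydomain.com/algo-service.php');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);
    return json_decode($response, true); // trusted side answers with JSON only
}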
Just as the question says... I've read a few articles; others say just don't do it, yet fail to mention a safe way. I know it's hazardous to give a web script sudo or root access, but I was thinking about having the web app trigger a script that runs as root.
One post was talking about a binary wrapper, but I did not fully understand it when I attempted it, and when I searched for an explanation I didn't find anything that explains it well.
So, what would be a good, safe way? I don't even need a detailed explanation; you can just point me to a good source to start reading.
Thanks.
Specs:
Ubuntu Server 14.04
EDIT:
The commands I am talking about are mkdir and rmdir with an absolute path, creating and removing users (which is why I need root), and editing some Apache files for me.
They fail to provide a safe way because, IMHO, there isn't one. Or, to put it another way: are you confident that the code protecting your create-user and remove-user functions is cleverer than the hacker's code that tries to gain access to your system via the back door you've built?
I can't think of a good reason for a web site to create a new system-level user. Usually web applications run using system users that are created for them by an administrator. The users inside your web site only have meaning for that web site so creating a new web site user gains that user no system privileges at all. That said, it's your call as to whether you need to do it or not.
In those cases where system operations are necessary, a common approach is to build a background process that carries out those actions independently of the web site. The web site and that background process communicate via anything that works and is secure: sockets, a shared database, a text file, TCP/IP, etc. That separation lets you control which actions can be requested and build in the necessary checks and balances. Of course it's not a small job, but you're not the first person to want to do this, so I'd look for an existing tool that supports this kind of administration.
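A minimal sketch of that split, assuming a hypothetical admin_tasks queue table (all names are placeholders): the web site only enqueues validated requests, and a separate root-run worker performs them.

<?php
// Web side, running as the unprivileged web user: validate, then enqueue.
function request_new_user(PDO $db, $username)
{
    // Whitelist the username format rather than trusting raw input.
    if (!preg_match('/^[a-z][a-z0-9_-]{2,31}$/', $username)) {
        throw new InvalidArgumentException('Invalid username');
    }
    $stmt = $db->prepare(
        "INSERT INTO admin_tasks (action, argument, status) VALUES ('adduser', ?, 'pending')"
    );
    $stmt->execute(array($username));
}
// A separate CLI script, owned and run by root (e.g. from cron), reads the
// pending rows, re-validates them, performs only a fixed set of actions
// (useradd, mkdir, ...), and marks each row done. The web user can ask, but never act.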
I have developed a web app in PHP, but my client is editing files on his own, using his own knowledge, without me. I want to stop him: to hide, protect, or lock my PHP code so that he has to come back to me for edits.
If you put your code into source control, you can easily monitor and control what code changes are made to your application (and easily reverse them if needed). Git is a very popular source control system; you should investigate it. This tutorial is very easy to follow: http://gitimmersion.com/
I made a web-based program for a customer, and I want to install the app on a local server of his.
I don't want to give him all the source until he has paid for it, so my idea was to store most of the core code on an external server, and only have a kind of include on his server, so he would not be able to see / copy / change the actual PHP code.
I know I can use include() with a URL once I have changed the corresponding entry (allow_url_include) in php.ini, but is there a more secure way of doing this?
Also, what configuration should my server have so that the PHP code on his local server would be able to read the PHP on mine? Wouldn't that pose a huge security risk if I allow other servers to "load" my PHP code?
(Notice that I use a free Web hosting service as the "second server" and I don't have any access to the conf files.)
I hope I've explained my situation well enough.
Including your PHP remotely is (a) yes, a huge security risk, and (b) not accomplishing much: for the remote include to deliver runnable code, your server has to hand out the raw PHP source, so your customer can also "see" that remote code, copy/paste it, and have it all in his possession.
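To illustrate (the URL is hypothetical), anyone who can reach the remote file can keep a permanent copy with one line:

<?php
// If http://yourserver.example/core.php is served as raw PHP so that
// include() works, this grabs the source you meant to hide.
file_put_contents('core_copy.php', file_get_contents('http://yourserver.example/core.php'));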
Option 1: Don't give away the app!
If your customer wants to test the app, deploy it to a server that you control. Let him see/use/test the app, without access to the source code.
Option 2: Encode it
If you absolutely have to give your app to the customer and yet need to protect it, look at encoding solutions. We use http://www.ioncube.com/ to encode/protect PHP code that we deploy to a customer's server.
What is the best process for updating a live website?
I see that a lot of websites (e.g. StackOverflow) post advance warnings that there will be downtime for maintenance. How is that usually implemented? Do they have a config value which determines whether to display such a message in the website header?
Also, what do you do if your localhost differs from the production server, and you need to make sure that everything works the same after you transfer? In my case, I set up development.mydomain.com (.htaccess authentication required), which has its own database and is basically my final staging area before uploading everything to the live production site. Is this a good approach to staging?
Lastly, is a simple SFTP upload the way to go? I've read a bit about more complex methods like server-side hooks in Git, but I'm not sure how they work exactly or whether that's the approach I should be taking.
Thanks very much for the enlightenment.
babonk
This is (approximately) how it's done on Google App Engine:
Each time you deploy an application, it is associated with a subdomain according to its version:
version-1-0.example.com
version-1-1.example.com
while example.com is associated with one of the versions.
When you have a new version of the server-side software, you deploy it to version-2-0.example.com, and when you are ready to put it live, you associate example.com with it.
I don't know the details, because Google App Engine does that for me; I just set the current version.
Also, when SO or another big site has downtime, that is more likely to be a hardware issue than a software one.
That will really depend on your website and the platform/technology behind it. For a simple website, you just update the files with FTP, or, if the server is locally accessible, you just copy the new files over. If your website is hosted by some cloud service, then you have to follow whatever steps they offer, because a cloud-based hosting service usually won't let you access the files directly. For a complicated website with a backend DB, it is not uncommon that whenever you update code, you have to update your database as well. To make sure both are updated at the same time, you will have to take your website down. To minimize the downtime, you will probably want a well-tested update script to do the actual work. That way you can take down the site, run the script, and fire it up again.
With PHP (and Apache, I assume), it's a lot easier than some other setups (having to restart processes, for example). Ideally, you'd have a system that knows to transfer just the files that have changed (e.g. rsync).
I use Springloops (http://www.springloops.com/v2/) to host my git repository and automatically deploy over [S/]FTP. Unless you have thousands of files, the deploy feels almost instantaneous.
If you really wanted to, you could have an .htaccess file (or equivalent) to redirect to a "under maintenance" page for the duration of the deploy. Unless you're averaging at least a few requests per second (or it's otherwise mission critical), you may not even need this step (don't prematurely optimize!).
If it were me, I'd have an .htaccess file that holds the redirection instructions and put it in place only during your maintenance window. When you don't have an upcoming deploy, rename the file to ".htaccess.bak" or something. Then, in your PHP script:
<?php if (file_exists('/path/to/.htaccess')) : ?>
<h1 class="maintenance">Our site will be down for maintenance...</h1>
<?php endif; ?>
Then, to get REALLY fancy, set up a Springloops pre-deploy hook to make sure your maintenance redirect is in place, and a post-deploy hook to change it back on success.
Just some thoughts.
-Landon
I'm attempting to build an application in PHP to help me configure new websites.
New sites will always be based on a specific "codebase", containing all necessary web files.
I want my PHP script to copy those web files from one domain's webspace to another domain's webspace.
When I click a button, an empty webspace is populated with files from another domain.
Both domains are on the same Linux/Apache server.
But I'm running into permission/ownership issues when copying across domains.
As an experiment, I tried using shell and exec commands in PHP to perform actions as "root".
(I know this can open major security holes, so it's not my ideal method.)
But I still had similar permission issues and couldn't get that method to work either.
Maybe a CGI script is a better idea, but I'm not sure how to approach it.
Any advice is appreciated.
Or, if you know of a better resource for this type of information, please point me toward it.
I'm sure this sort of "website setup" application has been built before.
Thanks!
I'm also doing something like this. The only difference is that I'm not making copies of the core files; the system has one core, and only specific files are copied.
If you want to copy files, then you have to take the following into consideration:
An easy (less secure) way is to use the same user for all websites.
Otherwise (if you want to provide different levels of access), you must create a different owner for each website, and you must set the owner/group on the copied files (this will be done by root).
For the new website setup:
Either the main domain will run as root, and then it will be able to carry out the new website creation itself, or, if you don't want your main domain to be root, you can do the following:
Create a cron job (or a PHP script that runs in a loop under the CLI) that is executed by root. It will check some database table, say every 2 minutes, and from your main domain you can add a record with the setup info for the new hosted website (or just execute some script that gains root access and does it without cron).
The script that does the creation can be written in PHP; it can be done in any language you wish, it doesn't really matter as long as it runs with the correct access (a rough sketch follows below).
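A minimal sketch of that root-run worker, assuming a hypothetical sites table with domain, owner, and status columns and a fixed codebase directory; all names are placeholders:

<?php
// create_sites.php - run by root from cron, never reachable from the web.
// Assumed schema: sites(id, domain, owner, status), status 'pending' or 'done'.
$db = new PDO('mysql:host=localhost;dbname=hosting', 'site_worker', 'secret');

foreach ($db->query("SELECT id, domain, owner FROM sites WHERE status = 'pending'") as $site) {
    // Accept only simple hostnames and usernames so a bad record can't escape.
    if (!preg_match('/^[a-z0-9.-]+$/', $site['domain']) ||
        !preg_match('/^[a-z][a-z0-9_-]*$/', $site['owner'])) {
        continue;
    }
    $target = '/var/www/' . $site['domain'];
    // Copy the shared codebase into the new webspace, then hand ownership
    // to the per-site user (created beforehand, e.g. with useradd).
    exec('cp -a /var/www/codebase ' . escapeshellarg($target));
    exec('chown -R ' . escapeshellarg($site['owner'] . ':' . $site['owner']) . ' ' . escapeshellarg($target));
    $db->prepare("UPDATE sites SET status = 'done' WHERE id = ?")->execute(array($site['id']));
}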
In my case I'm using the same user, since they are all my websites. The disadvantage is that the OS won't enforce any restrictions; my PHP code will (I'm losing the advantage of user/group permissions between different websites).
Notice that open_basedir can cause you some hassle; make sure the paths you need are allowed (or disable it).
Also, there are some minor differences between FastCGI and suPHP (I believe they won't cause you too much trouble).