I'm planning an application that allows users to create a specific type of website. I want to map account names to 'account.myapp.com', so that going to 'account.myapp.com' serves up that user's website. I haven't a clue how to set up this mapping. I'll be using CodeIgniter as my development framework.
I would also like to give users the ability to use their own registered domain name for their website, rather than the standard subdomain. Any tips/methods on this?
Also, what are some pitfalls and problems I should be looking for when developing something with this design? My biggest concern is backing myself into a corner with bad database design and creating a nightmare of an app to maintain or update.
Thanks for your time!
Your plan for having a single app that serves all the sites is going to be quite an undertaking. That is not to say it isn't possible (plenty of enterprise CMSes, including SharePoint, allow you to run 'virtual sites' etc. from a single install).
You are going to need to do a lot of planning and design, specifically on the security front, to make sure the individual sites operate in isolation. I assume that each site will have its own account(s) - you are going to have to do a lot of work to make sure users can't accidentally (or maliciously) start editing another site.
And you are right to consider maintenance - if you have all the sites running under a single application, and therefore a single database, that database is going to get big and messy very quickly. It also becomes a single point of failure.
A better plan would be to develop a self-contained solution (for a single website) - this can then run from its own directory, with its own database and its own set of accounts. It would be significantly smaller (in terms of both code and database) and therefore probably perform a lot better. Day-to-day maintenance would be easier (restore a website from backup), while software updates (to add a new feature) would be a bit trickier - though as it's PHP, an update is just file uploads and SQL patches, so you can automate it with ease.
In terms of domains: if you went with the individual app (one per website) approach, you could use Apache's Dynamic Virtual Hosts feature, which effectively maps a hostname to the filesystem (so website.mydomain.com could automatically be served from /home/vhosts/com/mydomain/website). Deploying a new website would then be a matter of simply copying the files into the correct directory, creating the database, and updating a config file, all of which could be automated with ease.
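For illustration, a minimal sketch of such a configuration (mod_vhost_alias must be enabled; the paths and domain simply mirror the example above):

UseCanonicalName Off
<VirtualHost *:80>
    ServerAlias *.mydomain.com
    # %1 expands to the first dot-separated part of the Host header,
    # so website.mydomain.com is served from .../website
    VirtualDocumentRoot /home/vhosts/com/mydomain/%1
</VirtualHost>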
If users want to use their own domains, they would firstly have to update their DNS to point at your server, and secondly you would need to configure an Apache vhost for that domain, which means touching the server configuration and reloading Apache (a graceful reload at least, so as not to interrupt the other users).
This is very easy to do in CodeIgniter and can be done entirely using routes.php and a pre-controller hook.
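As a minimal sketch (CodeIgniter 2.x-style hook; the constant name is a hypothetical illustration, not part of the framework):

// application/config/hooks.php
// (requires $config['enable_hooks'] = TRUE in config.php)
$hook['pre_controller'][] = array(
    'class'    => '',                          // plain function, no class
    'function' => 'resolve_account_subdomain',
    'filename' => 'subdomain_hook.php',
    'filepath' => 'hooks',
);

// application/hooks/subdomain_hook.php
function resolve_account_subdomain()
{
    // "myaccount.myapp.com" -> "myaccount"
    $parts = explode('.', $_SERVER['HTTP_HOST']);
    if (count($parts) > 2 && $parts[0] !== 'www') {
        define('ACCOUNT_NAME', $parts[0]);     // available to every controller
    }
}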
THAT BEING SAID... Don't do this. It's generally not a good idea. Even 37signals, who made this sort of account management famous, is recanting and moving towards centralized accounts. Check http://37signals.com/accounts
If I'm understanding your question correctly, you need to set up wildcard DNS, and use Apache mod_rewrite to internally rewrite (for example) myaccount.myapp.com to myapp.com/?account=myaccount. Your app logic can take it from there.
I just Googled "wildcard dns mod_rewrite account" (without quotes) and found some examples with instructions, such as:
http://www.reconn.us/content/view/46/67/
This is a valid and desirable way to structure certain web apps IMO. I'm not aware of serious drawbacks.
I don't really know of a great (automated/scalable) way to allow the users to specify their own individual domain names but you might be able to do it if you had them modify their domain's DNS to point to your web server, then added a ServerAlias directive to your "myapp" Apache configuration. You're still left with the problem of your myapp runtime instance understanding that requests coming through a customer's domain are specific to a customer account. So (example) customeraccount.com really equates to myapp.com/?account=customeraccount. Some more mod_rewrite rules could probably take care of this, but it's not automated (perhaps it could be though with an include file or such).
Sorry, you said you were using CodeIgniter ... substitute myapp.com/account/myaccount wherever I wrote myapp.com/?account=myaccount.
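As a hedged sketch of that rewrite (untested; it assumes the wildcard DNS record is already in place, and uses the CodeIgniter-style URL):

RewriteEngine On
# Map myaccount.myapp.com internally to /account/myaccount
RewriteCond %{HTTP_HOST} ^(?!www\.)([^.]+)\.myapp\.com$ [NC]
RewriteCond %{REQUEST_URI} !^/account/
RewriteRule ^ /account/%1 [L]
# (a real rule set would also pass the original path through)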
First, you have to set up a wildcard DNS record and configure Apache (or whatever web server you're using) to answer for every subdomain. Then, use CodeIgniter's routing features to parse the subdomain out of the request, set it as a parameter (or whatever that's called in CodeIgniter) and have some fun with it in your controller.
About pitfalls and problems: that depends on what you want to do. ;)
I have a media center which also serves as a low-volume personal nginx server.
Currently, Sick Beard, SABnzbd and Maraschino are all reached through subdomains, such as sickbeard.domain.com, each proxied through nginx to the appropriate port for that service's server. They are each secured by their own auth systems, which I don't entirely understand (I tried reading the code, but it's way over my head and in Python, which I know very little about), but they all use the basic auth popup window, which I think is hideous and redundant.
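For reference, each of those proxies looks something like this (the port is per-service; Sick Beard's common default of 8081 is used here as an example):

server {
    listen 80;
    server_name sickbeard.domain.com;
    location / {
        proxy_pass http://127.0.0.1:8081;  # the service's local port
        proxy_set_header Host $host;
    }
}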
I also have a website, secured by session-based authorization with a nice form, using PHP, that I created as part of a PHP tutorial (Fort Knox, this ain't).
What I want is to go to my website, log in to my pretty form, and have links there that take me to all of my services, without having to go through a challenge screen every time. How can I begin to do this? I tend to think my Google-fu is pretty good, but I'm not even sure where to start.
Additional notes:
I put the bones of this together years ago now, but if I recall I went with the subdomain scheme because I was having trouble getting nginx's proxy_pass to work with subfolders. I'm not wedded to it, but I do think it looks nice and clean.
Ideally, I would also like to somehow serve the above services through nginx, so I don't have to have so many open ports.
I also wouldn't mind advice on my PHP auth scheme. I had a hard time finding tutorials pitched between basic auth and complex systems involving a database of users. I am the only user. I keep my credentials in a flat file outside the path of the site, and I have no need to grow beyond that. I just want an attractive, integrated login form, instead of a popup straight out of the 90s.
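For reference, the core of my check looks roughly like this (a simplified sketch - the real path differs, and password_verify needs PHP 5.5+):

<?php
session_start();
// bcrypt hash stored in a flat file outside the web root
$hash = trim(file_get_contents('/etc/myserver/admin.hash'));
if (isset($_POST['password']) && password_verify($_POST['password'], $hash)) {
    $_SESSION['authenticated'] = true;  // the session now marks me as logged in
}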
Sab and Sick Beard are WSGI-based and use the CherryPy libraries. I did a lot of research and decided I could create a new auth method that manually pulled from my PHP session files and used bcrypt for the hash checks. But I realized I'd run the risk of my changes being overwritten every time I updated.
Maraschino is also WSGI-based, but uses the Flask framework. I had the same realization as above, but while going through the documentation and code for it, I realized that Maraschino is a lot more powerful than I thought, and the only thing I would want to do on Sab or Sick Beard that I can't do with Maraschino is non-routine system maintenance, like changing ports or API keys.
So my conclusion is that I'm going to close the ports for Sab and Sick Beard to outside calls, do all my routine activities through Maraschino, and focus my development efforts on getting a better login screen for that. I'll still have multiple ugly auth screens, but I'll encounter them much less frequently. The biggest issue is that when I change my password, I'll have to do it in three different places.
What is the best process for updating a live website?
I see that a lot of websites (e.g. Stack Overflow) post warnings in advance that there will be downtime for maintenance. How is that usually implemented? Do they have a config value that determines whether to display such a message in the site header?
Also, what do you do if your localhost differs from the production server, and you need to make sure that everything works the same after you transfer? In my case, I set up development.mydomain.com (.htaccess authentication required), which has its own database and is basically my final staging area before uploading everything to the live production site. Is this a good approach to staging?
Lastly, is a simple SFTP upload the way to go? I've read a bit about some more complex methods, like using server-side hooks in Git... Not sure how this works exactly or whether it's the approach I should be taking.
Thanks very much for the enlightenment.
babonk
This is (approximately) how it's done on Google App Engine:
Each time you deploy an application, it is associated with a subdomain according to its version:
version-1-0.example.com
version-1-1.example.com
while example.com is associated with one of the versions.
When you have a new version of the server-side software, you deploy it to version-2-0.example.com, and when you are confident enough to put it live, you associate example.com with it.
I don't know the details, because Google App Engine does that for me; I just set the current version.
Also, when SO or another big site has downtime, it is more likely to be a hardware issue than a software one.
That will really depend on your website and the platform/technology behind it. For a simple website, you just update the files over FTP, or, if the server is locally accessible, copy your new files over. If your website is hosted by some cloud service, you have to follow whatever steps they provide, because a cloud-based hosting service usually won't let you access the files directly.
For a complicated website with a backend DB, it is not uncommon that whenever you update code, you have to update your database as well. To make sure both are updated at the same time, you will have to take your website down. To minimize the downtime, you will probably want a well-tested update script to do the actual work. That way you can take down the site, run the script, and fire it up again.
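A hedged sketch of what such an update script might look like as a PHP CLI tool (every path, credential, and the one-statement-per-patch convention here is an assumption):

<?php
$flag = '/var/www/site/maintenance.flag';
touch($flag);                                   // take the site offline
$pdo = new PDO('mysql:host=localhost;dbname=site', 'deploy', 'secret');
foreach (glob('/var/www/patches/*.sql') as $patch) {
    $pdo->exec(file_get_contents($patch));      // apply schema patches in filename order
}
// ... copy the new code files into place here ...
unlink($flag);                                  // bring the site back up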
With PHP (and Apache, I assume), it's a lot easier than some other setups (no long-running processes to restart, for example). Ideally, you'd have a system that knows to transfer just the files that have changed (e.g. rsync).
I use Springloops (http://www.springloops.com/v2/) to host my git repository and automatically deploy over [S/]FTP. Unless you have thousands of files, the deploy feels almost instantaneous.
If you really wanted to, you could have an .htaccess file (or equivalent) to redirect to a "under maintenance" page for the duration of the deploy. Unless you're averaging at least a few requests per second (or it's otherwise mission critical), you may not even need this step (don't prematurely optimize!).
If it were me, I'd have an .htaccess file that holds the redirection instructions, and put it in place only during your maintenance window. When you don't have an upcoming deploy, rename the file to ".htaccess.bak" or something. Then, in your PHP script:
<?php if (file_exists('/path/to/.htaccess')) : ?>
<h1 class="maintenance">Our site will be down for maintenance...</h1>
<?php endif; ?>
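The redirection side might look something like this (a sketch; the maintenance page path is an assumption):

RewriteEngine On
# Send every request except the maintenance page itself to /maintenance.html
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ /maintenance.html [R=302,L]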
Then, to get REALLY fancy, set up a Springloops pre-deploy hook to make sure your maintenance redirect is in place, and a post-deploy hook to change it back on success.
Just some thoughts.
-Landon
I'm attempting to build an application in PHP to help me configure new websites.
New sites will always be based on a specific "codebase", containing all necessary web files.
I want my PHP script to copy those web files from one domain's webspace to another domain's webspace.
When I click a button, an empty webspace is populated with files from another domain.
Both domains are on the same Linux/Apache server.
But I'm running into permission/ownership issues when copying across domains.
As an experiment, I tried using shell and exec commands in PHP to perform actions as "root".
(I know this can open major security holes, so it's not my ideal method.)
But I still had similar permission issues and couldn't get that method to work either.
Maybe a CGI script is a better idea, but I'm not sure how to approach it.
Any advice is appreciated.
Or, if you know of a better resource for this type of information, please point me toward it.
I'm sure this sort of "website setup" application has been built before.
Thanks!
I'm also doing something like this. The only difference is that I'm not making copies of the core files: the system has one core, and only specific files are copied.
If you want to copy files, then you have to take the following into consideration:
An easy (less secure) way is to use the same user for all websites.
Otherwise (in case you want to provide different access levels), you must create a different owner for each website, and you must set the owner/group for the copied files (this will be done by root).
For the new website setup:
Either the main domain runs as root, and then it will be able to execute the creation of a new website, or, if you don't want your main domain to run as root, you can do the following:
Create a cronjob (or a PHP script that runs in a loop under the CLI) that is executed by root. It will check some database record every two minutes, for example, and from your main domain you can add a record with the setup info for a new hosted website (or just execute some script that gains root access and does it without cron).
The script that does this can be written in PHP - actually in any language you wish; it doesn't really matter, as long as it runs with the correct access.
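A rough sketch of that root-run script in PHP (the table, columns, and paths are all hypothetical):

<?php
// Runs from root's crontab; every name here is illustrative.
$pdo = new PDO('mysql:host=localhost;dbname=hosting', 'provisioner', 'secret');
foreach ($pdo->query('SELECT id, domain, owner FROM pending_sites') as $site) {
    $dest = '/var/www/' . $site['domain'];
    mkdir($dest, 0755, true);
    // copy the core files, then hand ownership to the site's own user
    shell_exec('cp -a /var/www/codebase/. ' . escapeshellarg($dest));
    shell_exec('chown -R ' . escapeshellarg($site['owner']) . ' ' . escapeshellarg($dest));
    $pdo->prepare('DELETE FROM pending_sites WHERE id = ?')->execute(array($site['id']));
}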
In my case I'm using the same user, since they are all my own websites. The disadvantage is that the OS won't enforce any restrictions; my PHP code will (I'm losing the advantage of user/group permissions between different websites).
Notice that open_basedir can cause you some hassle; make sure you exclude the correct paths (or disable it).
Also, there are some minor differences between FastCGI and suPHP (I believe they won't cause you too much trouble).
I have a dedicated server, and I need to build a new version of my personal PHP5 CMS for my customers. Setting aside the question of whether I should consider using open source, I need your opinions regarding CMS architecture.
The first approach (since the server is completely under my control) is to build a centralized system that can support multiple sites from a single administration panel. The basic idea is that I am able to log in as a superuser, create a new site (technically this creates a new web root and a new database, and maybe some other things), assign modules and plug-ins to a specific customer, or develop new ones if needed. If a customer logs in at this panel, he/she sees and can manage only their own site content.
I have seen such a system (it was custom built); it's very nice that bug fixes and new features affect all customers instantly, without the need to patch every CMS, which could also be on another hosting server...
The negative aspect I can see is scalability - if I need to add a second server, how do I merge them so as to maintain a single core?
The second approach is the classical one - a stand-alone CMS for every customer.
Which way would you go, and why?
Thank you for your time.
If you were to have one central system for all clients, scalability could become easier. You can have one big database server and several identical web servers (probably behind a load balancer), and that way you don't have to worry about dividing the clients up into different servers. Your resources are pooled so if one client has a day with heavy traffic it can be taken up by several servers, rather than bringing one server (and all other clients' sites on it) to its knees.
You can get PHP sessions to work across multiple servers either by using 'sticky sessions' on your load-balancing configuration, or by getting PHP to store the data somewhere accessible to all servers (e.g. a database).
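For example, one common way to share the session data (shown here with the memcached extension; the hostnames are assumptions):

<?php
// Point PHP's session handler at storage reachable by all web servers.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', 'sess1.internal:11211,sess2.internal:11211');
session_start();
// $_SESSION now behaves the same no matter which server handled the request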
Keeping the web application files synchronised to one code base shouldn't be too difficult, there are tools like rsync that you could use to help you.
It really depends on the types of sites. That said, I would suggest that you consider using version control software to manage the multiple installations. In practice, this can give you much the same as the centralised approach, but with the freedom to postpone updating a single site (or a number of sites).
What is the best practice for setting up a LAMP server in terms of Linux users and groups? If there are multiple sites hosted on the same server, is it best to have a single user that owns all site source files (and uploads) and is in the same group as Apache - or to have a different user for each site (so that each site has its own crontab)? Or something else entirely?
For some reason, this question never seems to be addressed in PHP/MySQL/Linux books that I've encountered.
On our platform, each site's htdocs etc. has its own user. This means that if one site is compromised, the others should be fine.
If this is a small number of large sites, you may find that splitting your server into multiple VMs using something like Xen is a better option than simply segregating by user. This will improve the isolation of your sites, and make it easier to move a site to its own hardware if, in future, one starts to become much heavier on resource usage than the others.
I assume you don't want to go crazy and get WHM/cPanel, and may want to do this inexpensively.
I think it's best practice to have each user access their space under their own username and group - especially if unrelated users may be using the web server.
If you have over 10 domains and users and want to keep accounts segregated in their own space, I would consider using Webmin with Virtualmin installed on the server. It easily handles these kinds of issues in a nice, free install. Otherwise, you'll have to purchase a commercial product or handle everything manually - a real pain, but it can be done (not recommended for a commercial venture).
Also, Xen and VMs might be overkill, and they're not as easy to manage as Webmin/Virtualmin for 10-100+ accounts.
The best choice is to create a VirtualHost for each domain, using Apache with the suPHP module. This way, each site is owned by a user and runs with that user's permissions. The webroot of each site should be put under the user's home directory to prevent local attacks.
If you use the same user for every website, it means a user from websiteA can read/write the files of websiteB.
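A sketch of such a vhost (whether suPHP_UserGroup is available depends on how mod_suphp was built; the names are illustrative):

<VirtualHost *:80>
    ServerName websitea.com
    DocumentRoot /home/usera/public_html
    suPHP_Engine on
    # run this site's PHP as its own user/group
    suPHP_UserGroup usera usera
</VirtualHost>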
I did some small-scale hosting over several years, and my answer is "It depends".
First of all, there is a difference between the Apache module (mod_php), CGI, and FastCGI.
A good list with all the pros and cons can be found here:
Apache php modes
When it comes to security all of the modes have pros and cons.
Since we only hosted a relatively small number of domains with moderate traffic, I decided to stay with mod_php and a vhost configuration.
I also used different FTP users for each vhost root dir (of course).
Configuring vhosts (one per customer) allows you to switch off domains the easy way, without digging through a ridiculously big httpd.conf and producing errors along the way.