I have a media center which also serves as a low-volume personal nginx server.
Currently, Sickbeard, SABnzbd and Maraschino are each reached through a subdomain, such as sickbeard.domain.com, which nginx proxies to the appropriate port for that service. Each is secured by its own auth system, which I don't entirely understand (I tried reading the code, but it's way over my head and in Python, which I know very little about), but they all use the basic-auth popup window, which I think is hideous and redundant.
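For reference, each of those proxies is just a small nginx server block along these lines (a sketch, not my exact config; the port is a placeholder, and binding the backend to 127.0.0.1 keeps its port closed to the outside):

    server {
        listen 80;
        server_name sabnzbd.domain.com;

        location / {
            # Backend bound to 127.0.0.1, so only nginx's port is exposed
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }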
I also have a website, secured by session-based authorization with a nice form, using PHP, which I created as part of a PHP tutorial (Fort Knox, this ain't).
What I want is to go to my website, log in to my pretty form, and have links there that take me to all of my services, without having to go through a challenge screen every time. How can I begin to do this? I tend to think my Google-fu is pretty good, but I'm not even sure where to start.
Additional notes:
I put the bones of this together years ago now, but if I recall correctly, I went with the subdomain scheme because I was having trouble getting nginx's proxy_pass to work with subfolders. I'm not wedded to it, but I do think it looks nice and clean.
Ideally, I would also like to somehow serve the above services through nginx, so I don't have to have so many open ports.
I also wouldn't mind advice on my PHP auth scheme. I had a hard time finding tutorials that fall between basic auth and complex systems involving a database of users. I am the only user: I keep my credentials in a flat file outside the site's document root, and I have no need to grow beyond that. I just want an attractive, integrated login form instead of a popup straight out of the 90s.
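Something along these lines is what I'm picturing (a minimal sketch, assuming the flat file holds a bcrypt hash and PHP >= 5.5 for password_verify(); the paths are placeholders):

    <?php
    // login.php -- sketch of a single-user, flat-file, session-based login.
    session_start();

    if ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_POST['password'])) {
        // Hash lives in a flat file outside the web root (placeholder path).
        $hash = trim(file_get_contents('/etc/myapp/credentials'));
        if (password_verify($_POST['password'], $hash)) {
            session_regenerate_id(true);      // guard against session fixation
            $_SESSION['authenticated'] = true;
            header('Location: /');
            exit;
        }
    }

    if (empty($_SESSION['authenticated'])) {
        include __DIR__ . '/login-form.html'; // the pretty form, not a popup
        exit;
    }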
Sab and Sickbeard are WSGI-based and use the CherryPy libraries. I did a lot of research and decided I could create a new auth method that manually pulled from my PHP session files and used bcrypt for the hash checks, but I realized I'd run the risk of my changes being overwritten every time I updated.
Maraschino is also WSGI-based, but uses the Flask framework. I had the same realization as above, but while going through its documentation and code I realized that Maraschino is a lot more powerful than I thought, and the only thing I would want to do on either Sab or Sickbeard that I can't do through Maraschino is non-routine system maintenance, like changing ports or API keys.
So my conclusion is that I'm going to close the Sab and Sickbeard ports to outside calls, do all my routine activities through Maraschino, and focus my development efforts on getting a better login screen for that. I'll still have multiple ugly auth screens, but I'll encounter them much less frequently. The biggest remaining issue is that when I change my password, I'll have to do it in three different places.
Related
I am currently creating a website in php that has a database backend (can be either MySQL or SQL Server) and I realized recently that if my database crashes at any time, my website will not run properly and probably cause some headaches.
So what is the proper thing to display on the website if my database (or any crucial outside component) goes down? My particular website relies heavily on its database and will be almost useless without it.
One option I have been told about is to email the website admin and display an Error 500 page that says something is wrong with the server, essentially making the website unusable until the issue is fixed. Is there anything else I could do to work around this problem? Are there ways to design a website so that the database (or any crucial component) crashing isn't an issue?
I am looking for general rules of thumb as well as specific examples of how people have worked around this in the past. Also, these examples don't just have to be for my website example.
If you only have one database server, and the website cannot work without its database, there is no magic: you'll have to display some sort of nice error page, informing users that there is a technical problem and that the website will be back shortly.
Generally speaking:
Chances of such a problem are pretty low
If your website is a normal one, people will tend to accept a problem once in a while, especially if you communicate about it.
If you can afford it (and have the technical knowledge to set it up), you could use two database servers with replication between them (MySQL supports this): one master, which you use, and one slave, which serves as a backup.
Then, if the master fails, your application can fall back to the slave.
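For example, a minimal sketch of that fallback in PHP (host names are placeholders; a real setup should also flag the app read-only while it's on the slave):

    <?php
    // Try the master first; if the connection fails, fall back to the slave.
    function connectDb() {
        try {
            return new PDO('mysql:host=master.db.internal;dbname=app',
                'user', 'pass', array(PDO::ATTR_TIMEOUT => 2));
        } catch (PDOException $e) {
            error_log('Master down, using slave: ' . $e->getMessage());
            return new PDO('mysql:host=slave.db.internal;dbname=app',
                'user', 'pass');
        }
    }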
Of course, this greatly reduces the risk of a database-related outage (having two servers crash at the same time is quite unlikely), but you'll still have problems with every other component -- like your web server: if you only have one, you might want to consider running two, with the second one as a fallback.
After that, if you still have money (and think you need even better uptime for your website), you'll want to think about what happens when your datacenter has a problem -- setting up servers in two separate locations...
The proper thing to display is a simple "oops" error message that gives away no information that would be helpful to hackers. Something along the lines of "We're experiencing technical difficulties" or "website unavailable". This is for security purposes.
It would be good to have an error logging and notification system in place to notify an administrator in case of a crash. That would be fairly simple to write, but I'm sure there are already libraries that handle this. (There's a tutorial with code samples at http://net.tutsplus.com/tutorials/php/404403-website-error-pages-with-php-auto-mailer/ and a simpler example at http://www.w3schools.com/php/php_error.asp)
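As a rough idea of the notification part, a sketch (the admin address is a placeholder; requires PHP >= 5.3 for the closure):

    <?php
    // Catch fatal errors at shutdown, log them, and email the admin.
    register_shutdown_function(function () {
        $err = error_get_last();
        if ($err !== null && in_array($err['type'], array(E_ERROR, E_PARSE), true)) {
            $msg = sprintf('%s in %s:%d', $err['message'], $err['file'], $err['line']);
            error_log($msg);                           // details stay server-side
            mail('admin@example.com', 'Site error', $msg);
        }
    });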
There are ways to design for a crucial component crashing, but it's not just architecting your website, it's architecting the whole environment -- for example, database clustering for high availability (http://en.wikipedia.org/wiki/High-availability_cluster). It's not cheap.
Overall, you just need to ensure that you're doing your error handling properly. A database crash is a classic example of why we need error handling. There are plenty of resources and guidance for this.
http://www.google.com/search?q=Error+Handling+Guidelines&rls=com.microsoft:en-us&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1
Edit
I found this and thought it was a very nice resource for answering how to handle the errors:
http://www.nyphp.org/PHundamentals/7_PHP-Error-Handling
It is considered best practice to return an HTTP 500 status code whenever your database being down, or any other crippled service, prevents your website from functioning properly. Depending on your website's functionality, this could be on a page-by-page basis or site-wide. For example, your "About Us" page may not need database capabilities while your search page does; you could keep the "About Us" page up and running but return a 500 status code when someone visits your search page.
Do not give any technical information about why the site is not working to the end user. This could be a security risk.
If you are using apache, this document will tell you how to setup custom error pages:
http://httpd.apache.org/docs/2.0/custom-error.html
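In short, it comes down to a directive like this (the path is a placeholder):

    ErrorDocument 500 /errors/500.html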
I recommend you use plain HTML for your 500 status code pages. You can also have your PHP pages send a 500 status code via the header() function, documented here:
http://php.net/manual/en/function.header.php
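Putting both points together, a minimal sketch (paths and credentials are placeholders):

    <?php
    // If a crucial component is down, send a 500 and a plain, non-revealing page.
    try {
        $db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
            array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
    } catch (PDOException $e) {
        error_log('DB down: ' . $e->getMessage());     // log details server-side only
        header('HTTP/1.1 500 Internal Server Error');  // via header(), as above
        readfile(__DIR__ . '/errors/500.html');        // plain HTML, no internals
        exit;
    }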
I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building an app and should only start worrying when traffic increases. However, this being my first web application, I am not quite sure whether I should design things in an ad-hoc manner and later "fix" them. I have been reading stories about people whose apps get millions of users within a week or two. Not that I will face the same situation, but I can't help wondering: how do these people do it?
Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable-manner using the cloud, for instance, Amazon's EC2. From my understanding, I can see a couple of components:
There is a load balancer that first receives requests and then decides where to route each request
This request is then handled by a server replica that then processes the request and updates (if required) the database and sends back the response to the client
If a similar request comes in, a caching mechanism like memcached kicks in and returns objects from the cache (a minimal sketch follows this list)
A black box that handles database replication
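For instance, I picture the caching step looking roughly like this (a sketch, assuming the PECL memcached extension; table and key names are made up):

    <?php
    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);

    function getUser($id, Memcached $cache, PDO $db) {
        $key = 'user:' . $id;
        $user = $cache->get($key);
        if ($user === false) {                      // cache miss: hit the database
            $stmt = $db->prepare('SELECT * FROM users WHERE id = ?');
            $stmt->execute(array($id));
            $user = $stmt->fetch(PDO::FETCH_ASSOC);
            $cache->set($key, $user, 300);          // cache for five minutes
        }
        return $user;
    }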
Specifically, I am trying to do the following:
Setting up a load balancer (my homework revealed that HAProxy is one such load balancer)
Setting up replication so that databases can be synchronized
Using memcached
Configuring Apache to work with multiple web servers
Partitioning application to use Amazon EC2 and Amazon S3 (my application is something that will need great deal of storage)
Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably get by with 2-3 servers behind a simple load balancer, plus replication, but I want to avoid accidentally paying loads of money.
I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
Personally, I think you should be considering how your app will scale initially - as otherwise you'll run into problems down the line.
I'm not saying you need to build it initially as a multi-server system, but if you think you'll need to do it later, be mindful of the concerns now.
In my experience, this includes things like:
Sessions. Unless you use 'sticky' load balancing, you will need some way of sharing session state between servers. This probably means storing session data on either shared storage or in a DB (the first sketch after this list shows a DB-backed handler).
File uploads and replication. If you allow users to upload files, or you have a CMS that allows you to upload images/documents, it needs to cater for the fact that these files will also need to find their way onto other nodes in your cluster. However, if you've gone down the shared storage route mentioned above, this should cover it.
DB scalability. If you're using traditional DB servers, you might want to think about how you'll implement scalability at that level. This may mean coding your app to use one connection string for reads and another for writes (the second sketch after this list). You are then free to implement replication, with one master node handling the inserts/updates and cascading the changes to read-only nodes that handle the bulk of the read traffic.
Middleware. You might even want to go down the route of implementing some kind of message-oriented middleware to completely hand off business-logic functions; this gives you a great deal of flexibility in how you scale that layer in the future, although initially it is a lot of complication and work for not much payoff.
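A minimal sketch of the DB-backed session handler mentioned in the first point (assumes PHP >= 5.4 for SessionHandlerInterface and a sessions table with id, data and expires columns; names are placeholders):

    <?php
    class DbSessionHandler implements SessionHandlerInterface
    {
        private $pdo;
        public function __construct(PDO $pdo) { $this->pdo = $pdo; }
        public function open($savePath, $name) { return true; }
        public function close() { return true; }
        public function read($id) {
            $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ? AND expires > ?');
            $stmt->execute(array($id, time()));
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            return $row ? $row['data'] : '';
        }
        public function write($id, $data) {
            $stmt = $this->pdo->prepare('REPLACE INTO sessions (id, data, expires) VALUES (?, ?, ?)');
            return $stmt->execute(array($id, $data, time() + 1440));
        }
        public function destroy($id) {
            return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute(array($id));
        }
        public function gc($maxlifetime) {
            return $this->pdo->prepare('DELETE FROM sessions WHERE expires < ?')->execute(array(time()));
        }
    }

    // Any server behind the load balancer now sees the same session data.
    $pdo = new PDO('mysql:host=db.internal;dbname=app', 'user', 'pass');
    session_set_save_handler(new DbSessionHandler($pdo), true);
    session_start();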
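And a sketch of the read/write split from the DB scalability point (host names are placeholders):

    <?php
    // Writes go to the master; reads go to a replica.
    function db($purpose = 'read') {
        static $conns = array();
        $host = ($purpose === 'write') ? 'master.db.internal' : 'slave.db.internal';
        if (!isset($conns[$host])) {
            $conns[$host] = new PDO("mysql:host=$host;dbname=app", 'user', 'pass');
        }
        return $conns[$host];
    }

    $rows = db('read')->query('SELECT * FROM posts')->fetchAll();    // replica
    db('write')->exec("INSERT INTO posts (title) VALUES ('hello')"); // master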
Have you considered playing around with VMs first? You can run 2-3 VMs on your local machine and set them up like you would actual servers, they just won't be able to handle real traffic levels. If all you're looking for is the learning experience, it might be an ideal way to go about it.
I asked a recent question regarding the use of readfile() for remotely executing PHP, but maybe I'd be better off setting out the problem to see if I'm thinking the wrong way about things, so here goes:
I have a PHP website that requires users to login, includes lots of forms, database connections and makes use of $_SESSION variables to keep track of various things
I have a potential client who would like to use the functionality of my website, but on their own server, controlled by them. They would probably want to restyle the website using content and CSS files local to their server, but that's a problem for later
I don't want to show them my PHP code, since that's the value of what I'd be providing.
I had thought to do this with calls to include() from the client's server to mine, which at least keeps variable scope intact, but many sites (and the PHP docs) seem to recommend readfile(), file_get_contents() or similar. Ideally I'd like to have a simple wrapper file on the client's server for each "real" one on my server.
Any suggestions as to how I might accomplish what I need?
Thanks,
ColmF
As suggested, comment posted as an answer & modified a touch
PHP is an interpreted language: it reads the source files and parses them. Yes, it can store cached bytecode in certain cases, but it's not like higher-level languages that compile to bytecode and run from that, which means the PHP 'compiler' requires your actual source code to work. Check out zend.com/en/products/guard, which might do what you want, though I believe it means your client has to use Zend Server.
Failing that, sign a contract with the company that includes clauses forbidding reuse of your code, etc. That's your best protection in this case. Be careful, though: if you're using anything under an 'open source' license, your entire app may be considered open source, and this is all moot.
This is standard practice for many companies. I have produced software I'm particularly proud of, and a company wants to use it. Because they believe in their own information security, whether for 'personal' reasons or because they have to comply with a standard such as PCI, there are times my application must run in their environment. I have also offered my products as 'web services', where they query my servers with data and receive responses; in that case my source is completely protected, as it is no different from any other closed API. In every case I have licensed the copy to the client with provisions that they are not allowed to modify or distribute it. This is a legally binding contract and completely expected on the client's side. Of course there were provisions that I would provide support, etc., but that's neither here nor there.
Short answers:
Legal agreement, likely your best bet from everyone's point of view
A Zend Guard-like product; I've never used one, so I can't vouch for it
A private API, but this won't really work for you, as the client needs to host it themselves
Good luck!
If they want it wholly contained on their server then your best bet is a legal solution not a technical one.
You license the software to them and make sure the contract states that the intellectual property belongs to you and that it cannot be copied or distributed without prior permission (obviously you'll need better legalese than that, but you get the idea).
Rather than remote execution, I suggest you use a PHP source protection system, such as Zend Guard, ionCube or sourceguardian.
http://www.zend.com/en/products/guard/
http://www.ioncube.com/
http://www.sourceguardian.com/
Basically, you're looking for a way to proxy your application out to a remote server (i.e. your clients'). Using something like readfile() on the client's site is fine, but you're still going to need multiple scripts on their end. readfile() fetches what's available at a particular file path or URL and pipes it to the end user; if I were to do readfile('http://google.com'), it would output the source of Google's homepage.
Assuming you don't just want a dummy form on your clients' sites, you're going to need some code on their end. That code has to intercept form submissions (so you'll need a URL parameter on the page you fetch with readfile() to tell your code that the form-submission URL is your client's site and not your own). The form-submission handler page will then need to make calls back to your own site. Think something like this:
readfile("https://your.site/whatever?{$_SERVER['QUERY_STRING']}");
Your site is then going to process the response and then pass everything back to your clients' sites.
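To make that concrete, here is a sketch of a wrapper page on the client's server (the URL is a placeholder; requires allow_url_fopen, and note that plain readfile() only relays GET, so POSTs need a stream context -- and cookies/sessions would still need separate handling):

    <?php
    // Relay a form POST to your server and pipe the response back to the browser.
    $context = stream_context_create(array(
        'http' => array(
            'method'  => 'POST',
            'header'  => 'Content-Type: application/x-www-form-urlencoded',
            'content' => http_build_query($_POST),
        ),
    ));
    readfile('https://your.site/whatever?' . $_SERVER['QUERY_STRING'], false, $context);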
Hopefully I've gotten you on the right path. Let me know if I was unclear; I realize this is a lot of info.
I think you're going to have a hard time with this unless you want some kind of funny wrapper that makes curl-type requests to your server, especially when it comes to handling things like sessions and cookies.
Are you sure a PHP obfuscator wouldn't be sufficient for what you are doing?
Instead of hosting it yourself, why not do what most PHP applications do and simply distribute the program to your client with an auto-update feature? Hosting it yourself is complicated, from managing the websites to deciding who pays for the hosting.
If you don't want it to be distributed, then find a pre-written license that allows you to do this. If you can't find one then it's time to talk to a lawyer.
You can't stop them from seeing your code. You can make it very hard for them to understand your code, which is a good second best. See our SD PHP Obfuscator for a tool that scrambles the identifiers and whitespace in the code, making it much more difficult to understand.
I'm planning an application that allow users to create a specific type of website. I wanted to link account names to 'account.myapp.com'. Going to 'account.myapp.com' will serve up a website. I haven't a clue on how to map this. I'll be using Code Igniter as my development tool.
I would like to give users the ability to add a registered domain name for their website, rather than use the standard sub domain. Any tips/methods on this?
Also, what are some pitfalls and problems I should be looking out for when developing something with this design? My biggest concern is backing myself into a corner with bad database design, and creating a nightmare of an app to maintain or update.
Thanks for your time!
Your plan for having a single app that serves all the sites is going to be quite an undertaking and a lot of work. That is not to say it isn't possible (plenty of enterprise CMSes, including SharePoint, let you run 'virtual sites' etc. from a single install).
You are going to need a lot of planning and design, specifically on the security front, to make sure the individual sites operate in isolation. I assume each site will have its own account(s); you'll have to do a lot of work to make sure users can't accidentally (or maliciously) start editing another site.
And you are right to consider maintenance: if all the sites run under a single application, and therefore a single database, that database is going to get big and messy very quickly. It also becomes a single point of failure.
A better plan would be to develop a self-contained solution for a single website: it can then run from its own directory, with its own database and its own set of accounts. It would be significantly smaller (in terms of both code and database) and would therefore probably perform a lot better. Day-to-day maintenance would be easier (restore a single website from backup), and while software updates (to add a new feature) would be a bit trickier, since it's PHP they are just file uploads and SQL patches, so you can automate them with ease.
In terms of domains: if you went with the individual-app (one per website) approach, you could use Apache's dynamic virtual hosts feature, which effectively maps a URL onto the filesystem (so website.mydomain.com could automatically be served from /home/vhosts/com/mydomain/website). Deploying a new website would then be a matter of copying the files into the correct directory, creating the database, and updating a config file, all of which can be automated with ease.
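A sketch of what that looks like with mod_vhost_alias (assuming the module is enabled and a wildcard DNS record, *.mydomain.com, points at the server):

    <VirtualHost *:80>
        ServerAlias *.mydomain.com
        UseCanonicalName Off
        # website.mydomain.com -> /home/vhosts/com/mydomain/website
        VirtualDocumentRoot /home/vhosts/%-1/%-2/%-3
    </VirtualHost>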
If users want to use their own domains, they first have to point their DNS at your server, and second you would need to configure an Apache vhost for that domain, which would probably involve a restart of Apache and thus affect all other users.
This is very easy to do in CodeIgniter and can be done entirely using routes.php and a pre-controller hook.
THAT BEING SAID... Don't do this. It's generally not a good idea. Even 37signals, who made this sort of account management famous, is recanting and moving towards centralized accounts. Check http://37signals.com/accounts
If I'm understanding your question correctly, you need to set up wildcard DNS and use Apache mod_rewrite to internally rewrite (for example) myaccount.myapp.com to myapp.com/?account=myaccount. Your app logic can take it from there.
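A minimal sketch of the rewrite side (myapp.com is a placeholder; assumes a wildcard DNS record for *.myapp.com):

    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\.myapp\.com$ [NC]
    RewriteCond %{HTTP_HOST} ^([^.]+)\.myapp\.com$ [NC]
    # Internally map myaccount.myapp.com to /index.php?account=myaccount
    RewriteRule ^ /index.php?account=%1 [QSA,L]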
I just Googled "wildcard dns mod_rewrite account" (without quotes) and found some examples with instructions, such as:
http://www.reconn.us/content/view/46/67/
This is a valid and desirable way to structure certain web apps IMO. I'm not aware of serious drawbacks.
I don't really know of a great (automated/scalable) way to let users specify their own individual domain names, but you might be able to do it by having them point their domain's DNS at your web server and then adding a ServerAlias directive to your "myapp" Apache configuration. You're still left with the problem of your myapp runtime instance understanding that requests coming through a customer's domain are specific to a customer account, so (for example) customeraccount.com really equates to myapp.com/?account=customeraccount. Some more mod_rewrite rules could probably take care of this, but it's not automated (though perhaps it could be, with an include file or such).
Sorry, you said you were using CodeIgniter ... substitute myapp.com/account/myaccount wherever I wrote myapp.com/?account=myaccount.
First, set up your Apache (or whatever web server you're using) to route every subdomain to the same site (a wildcard match, with a matching wildcard DNS record). Then use CodeIgniter's routing features to parse the subdomain from the request, set it as a param (or whatever that's called in CodeIgniter), and have some fun with it in your controller.
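The parsing itself is just a few lines; a sketch of what a pre-controller hook might do (the domain is a placeholder):

    <?php
    // Pull the account name out of the Host header, e.g. account.myapp.com.
    $host = isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '';
    if (preg_match('/^(?!www\.)([^.]+)\.myapp\.com$/i', $host, $m)) {
        define('SITE_ACCOUNT', $m[1]);  // "account" for account.myapp.com
    } else {
        define('SITE_ACCOUNT', '');     // main site or unknown host
    }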
About pitfalls and problems: that depends on what you want to do. ;)
I have a dedicated server and I need to build a new version of my personal PHP5 CMS for my customers. Setting aside the question of whether I should consider using open source, I need your opinions regarding CMS architecture.
The first approach (since the server is completely under my control) is to build a centralized system that supports multiple sites from a single administration panel. The basic idea is that I can log in as a super user, create a new site (technically this creates a new web root and a new database, and maybe some other things), and assign modules and plug-ins to a specific customer, or develop new ones if needed. If a customer logs in at this panel, he or she sees and can manage only their own site's content.
I have seen such a system (it was custom-built), and it's very nice: bug fixes and new features affect all customers instantly, with no need to patch every CMS install, which might even live on another hosting server...
The negative aspect I can see is scalability: what if I need to add a second server? How do I merge the two while maintaining a single core?
The second approach is the classical one: a stand-alone CMS for every customer.
Which way would you go, and why?
Thank you for your time.
If you were to have one central system for all clients, scaling could actually become easier. You can have one big database server and several identical web servers (probably behind a load balancer), so you don't have to worry about dividing the clients up between servers. Your resources are pooled, so if one client has a day with heavy traffic, the load can be absorbed by several servers rather than bringing one server (and all the other clients' sites on it) to its knees.
You can get PHP sessions to work across multiple servers either by using 'sticky sessions' in your load-balancing configuration, or by getting PHP to store the session data somewhere accessible to all servers (e.g. a database).
Keeping the web application files synchronised to one code base shouldn't be too difficult, there are tools like rsync that you could use to help you.
It really depends on the types of sites. That said, I would suggest you consider using version control software to manage multiple installations. In practice, this can give you the same benefits as a centralised approach, while leaving you free to postpone updating a single site (or a number of sites).