PHP Code on Separate Server From Apache?

This is something I've never seen done, and it's not turning up in my research, but my boss is interested in the idea. We are looking at some load balancing options, and wonder if it is possible to have Apache and PHP installed on multiple servers, managed by a load balancer, but keep all the actual PHP code on one server, with the various Apache servers pointing to the one central code base?

NFS mounts, for instance, are certainly possible, but I wouldn't recommend them. A lot of the advantage of load balancing is lost, and you're reintroducing a single point of failure. For syncing code, an rsync cronjob can handle it very nicely, or a manual rsync can be run on deployment.
What is the reason you would want this central code base? I'm about 99% sure there is a better solution than a single server dishing out code.
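For what it's worth, here is a rough sketch of what such an rsync push could look like, wrapped in a PHP CLI script; the host names and paths are made up, and it assumes rsync and key-based SSH are already set up between the machines:

<?php
// deploy.php - push the code base from the deploy box to each web server.
$webServers = array('web1.example.com', 'web2.example.com'); // assumptions
$source = '/var/www/app/'; // trailing slash: sync the contents of the dir
$target = '/var/www/app';

foreach ($webServers as $host) {
    // -a preserves permissions/times; --delete removes files gone from source
    $cmd = sprintf('rsync -a --delete %s %s:%s',
        escapeshellarg($source), $host, escapeshellarg($target));
    passthru($cmd, $status);
    if ($status !== 0) {
        fwrite(STDERR, "rsync to $host failed with status $status\n");
    }
}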

I believe it is possible. To add to Wrikken's answer, I can imagine NFS could be a good choice. However, there are some drawbacks and caveats. For one, when Apache tries to access files on an NFS share that has gone away (connection dropped, host failed, etc.), very bad things happen: Apache locks up and keeps trying to retrieve the file. The processes attempting to access the share, for whatever reason, do not die, and it becomes necessary to reboot the server.
If you do wind up doing this, I would recommend an opcode cache, such as APC. APC will cache the pre-processed PHP locally and eliminate round trips to your storage. Just be prepared to clear the opcode cache whenever you update your application.
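For illustration, clearing APC's cache after a deploy can be as simple as the sketch below; it assumes the APC extension is loaded (on PHP 5.5+ with OPcache, the equivalent call is opcache_reset()):

<?php
// clear-cache.php - run after new code has been pushed out
if (function_exists('apc_clear_cache')) {
    apc_clear_cache();       // empty the opcode (system) cache
    apc_clear_cache('user'); // empty the user-data cache too, if used
}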

PHP has to run under something that acts as a web processor; Apache is the most popular. I've done NFS mounts across servers without problems. Chances are that if NFS is down, the network is down. But it doesn't take long to do an rsync across servers to replicate files, and that's really a better idea.
I'm not sure what your content is like, but you can separate static files like JavaScript, CSS, and images so they are on their own server. lighttpd is a good, lightweight web server for things like this. Then you end up with a "dedicated" PHP server. You don't even need a load balancer for this setup.
Keep in mind that PHP stores sessions on the local file system by default. So if you are using sessions, you need to make sure users always return to the same server. Otherwise you need to do something like store sessions in memcached.
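As a sketch, pointing PHP's session handler at a shared memcached instance looks roughly like this. The host and port are assumptions, and this uses the older "memcache" extension; the newer "memcached" extension uses a different handler name and save_path format:

<?php
// Every web server in the pool points at the same memcached daemon,
// so it no longer matters which server a user lands on.
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://cache.example.com:11211');
session_start();

$_SESSION['user_id'] = 42; // now visible from any balanced server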

Websocket complications

This is complicated, and not necessarily one question. I'd appreciate any possible help.
I've read that it is possible to have websockets without server access, but I cannot seem to find any examples that show how. I've come to the conclusion that I need this based on the following two things:
I've been struggling for the past several hours trying to figure out how to even get websockets to work with the WAMP server I have on my machine, to which I have root access. I installed Composer, but cannot figure out how to use the composer.phar file to install Ratchet. I have tried other PHP websocket implementations (I would prefer that it be in PHP), but still cannot get them to work.
My current webhost, which I'm using to test things out on, is a free host and doesn't allow SSH access. So even if I could figure out how to get websockets working with root access, it is a moot point when it comes to the host.
I've also found free VPS hosts by googling (with limited everything, of course, but full root access), but I'd prefer to keep something that allows more bandwidth (my free host is currently unlimited). And I've read that you can (and should) host the websocket server on a different subdomain than the HTTP server, and that it can even be run on a different domain entirely.
It also might eventually be cheaper to host my own site; of course I have no real clue about that, but in that case I'd need to figure out how to even get websockets working on my machine.
So, if anyone can understand what I'm asking, there are several questions here. Is it possible to use websockets without root access, and if so, how? How do I properly install Ratchet when I cannot figure out the composer.phar file (I have a composer.json with the Ratchet requirement in it, but I'm not sure it's in the right directory)? This second question only matters if the first is not truly possible. Finally, is it possible to have the websocket server on a VPS and the HTTP server on an entirely different domain, and if so, is there any documentation about it anywhere?
I mean, of course, there is the option of using AJAX and forcing the browser to reload a JS file periodically that would use jQuery AJAX to update a series of divs regardless of whether anything has changed, but that could get complicated, and I'm not even sure that is possible (I don't see why it wouldn't be). Then again, I'd prefer websockets, since I hear they are much less resource-hungry than that sort of polling would be.
A plain PHP file running under vanilla LAMP (i.e. mod_php under Apache) cannot handle WebSocket connections. It wouldn't be able to perform the protocol upgrade, let alone actually perform real-time communication, at least through Apache. In theory, you could have a very-long-running web request to a PHP file which runs a TCP server to serve WebSocket requests, but this is impractical and I doubt a shared host will actually allow PHP to do that.
There may be some shared hosts that make WebSocket hosting with PHP possible, but they can't offer that without either SSH/shell access or some other way to run PHP outside the web server. If they're just giving you a directory to upload PHP files to, and serving them with Apache, you're out of luck.
As for your trouble with Composer, I don't know if it's possible to run composer.phar on a shared host without some kind of shell access. Some hosts (e.g. Heroku) have specific support for Composer.
Regarding running a WebSocket server on an entirely different domain: you can indeed do that. Just point your JavaScript at that domain, and make sure the WebSocket server accepts connections from your page's origin (strictly speaking, WebSockets are not governed by CORS; browsers send an Origin header, and it's up to the server to check it).
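For reference, once you do have shell access somewhere, a minimal standalone Ratchet server is only a few lines. This sketch assumes "composer require cboden/ratchet" has already been run, and the port is arbitrary:

<?php
// server.php - run from the CLI with "php server.php"; no Apache involved.
require __DIR__ . '/vendor/autoload.php';

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

class Echoer implements MessageComponentInterface {
    public function onOpen(ConnectionInterface $conn) {}
    public function onMessage(ConnectionInterface $from, $msg) {
        $from->send($msg); // echo the frame straight back to the sender
    }
    public function onClose(ConnectionInterface $conn) {}
    public function onError(ConnectionInterface $conn, \Exception $e) {
        $conn->close();
    }
}

// Listens on its own port (8080 here), completely outside the web server.
IoServer::factory(new HttpServer(new WsServer(new Echoer())), 8080)->run();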
OK... you have a few questions, so I will try to answer them one by one.
1. What to use
You could use Socket.IO. It's a library for developing realtime web applications in JavaScript. It consists of two parts: a client side (which runs in the visitor's browser) and a server side. Basic usage requires almost no background knowledge of Node.js. There is an example tutorial for a simple chat app on the official Socket.IO website.
2. Hosting
Most hosting providers have a control panel (cPanel) with the capability to install/activate different Apache plugins and so on. First you should check whether Node.js isn't already available; if not, you could contact support and ask them whether adding it would be an option.
If you don't have any luck with your current hosting provider you could always switch hosts quickly as there are a lot of good deals out there. Google will definitely help you here.
Here is a list containing a few of the (maybe) best options. Keep in mind that although some hosting deals may be paid there are a lot of low cost options to choose from.
3. Bandwidth
As you are worried about "resource hungry" code, maybe you can try hosting some of your content on Amazon CloudFront. It's a widely used content delivery network that guarantees quick connections and fast resource loading, since files are served from the server closest to the client. The best part is that you only pay for what you actually use, so if you don't have that much traffic it will be really cheap to run, and still reliable!
Hope this helps ;)

Is there a better way to store my functions on one server to use on multiple servers?

Here is my quick code. This allows me to keep one file for my commonly used PHP functions, and retrieve it as needed. This is mostly my private work and not for clients.
// Fetch the shared functions from the central server. Note: this only works
// if the remote server returns the raw PHP source (minus the opening tag)
// rather than executing it.
$include_contents = '<?php ' . file_get_contents('http://example.com/inc/functions.php');

// Write the fetched code to a local file, then include it as normal PHP.
$handle = fopen('_functions.php', 'w');
fwrite($handle, $include_contents);
fclose($handle);
include '_functions.php';
Don't do it!
The better way of doing this is to just host all the files you need on the local filesystem and include/require them normally.
There are no advantages to the approach you're suggesting, and numerous disadvantages of such severity that there is nothing whatsoever to be gained by doing this.
If the host that provides all your includes goes down, then every site that uses them goes down too. Even when it stays up, the extra latency of an additional HTTP request to fetch the remote file adds to the total execution time of your script.
Worse, if the remote server is compromised, or your DNS server is co-opted to point the remote site's domain at a malicious server, an attacker can introduce malicious code with ease.
Servers typically have huge amounts of storage available; there's no reason why you can't keep all your PHP scripts on the server that will serve the pages they generate.
Note: if you insist on persisting with such insanity, you can enable allow_url_include and then include directly from a URL.
But like I said, this is a really really REALLY bad idea, and there's a reason why allow_url_include is off by default.
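Purely for reference, the (strongly discouraged) remote include looks like this; the same caveat applies that the remote host must serve raw PHP source rather than execute it:

// php.ini: allow_url_include = On   (off by default, for good reason)
include 'http://example.com/inc/functions.php'; // runs whatever comes back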
What you need is a deployment strategy. Composer should be a good starting point to investigate. It is software, written in PHP, that resolves external dependencies not at runtime but at installation time. Start with the link I gave you.
Besides Composer there are other deployment strategies and tools for PHP; the most notable (though it needs more effort to get working) is PEAR, or even PEAR2.

Make Apache read PHP documents from RAM / Memory?

Is there any way to make Apache read PHP documents from RAM?
I'm thinking of creating a virtual disk in the memory and then modify httpd.conf to change the document root directory to the virtual disk in the memory.
Is this viable?
Basically, what I want to do is distribute my PHP code to my users' computers so they can run it. But I don't want them to be able to look at the PHP source code easily - the code can't be stored on the hard disk in plain text; instead, it is stored in a data file and then read by my program into memory, where Apache reads it.
Is this viable? Is it easy to create a virtual disk in memory in C++ such that the virtual disk can't be accessed by any other means, such as My Computer?
Update:
Thank you all for the questions, which help me clarify my goals, but I think I know what I'm doing. Please just suggest any solutions you may have toward my needs.
The hard part thus far is getting Apache to read from somewhere other than a plain directory on the hard disk containing all the source code of my project. I would like it to be as concealed as possible. I know a little about Windows desktop development and thought a virtual disk might be a solution, but if you have better ones, please suggest them.
You can, in theory, have Apache serve files out of a Samba share. You would need to configure the server to mount a specific file share made by the user. This won't work if the user is behind a firewall or NAT gateway of any variety.
This will be:
Slower than molasses in January ... in Alaska. Apache does a lot of stat calls on each request by default. This is going to add a lot of overhead before even finding the file, transferring it over, and then executing it.
Hard to configure. Adding mounts is a non-trivial task at the server level and Samba can be rather finicky on both sides. Further, if you are using RHEL/CentOS or any other distro running SELinux, you're going to have to do the chcon/setsebool tapdance to even get it working. The default settings expressly prohibit Apache from touching any file that came to the system through a Samba share.
A security nightmare. You will be allowing Apache to serve up files to anyone from a computer that is not under your direct control. The malicious possibilities are endless. This is a horrible idea that you should not seriously consider.
A safer-but-still-insane alternative might be available. FastCGI. The remote systems can run a FastCGI process and actually host and execute the code directly. Apache can be configured to pass PHP requests to the remote FastCGI process. This will still break if the users are firewalled or NATted. This will only be an acceptable solution if the user can actually run a FastCGI process and you don't mind the code actually executing on their system instead of the server.
This has the distinct advantage of the files not executing in the context of the server.
Perhaps I've entirely misunderstood -- are you asking for code to be run live from users' systems? Because I wrote this answer under that interpretation.

Caching issue with CakePHP on distributed server architecture

We have been using multiple web servers for our CakePHP application.
The problem is that there are two cache directories: server 2 clears its cache before doing any insertion into the database, but server 1 doesn't know the database has been changed, so server 1's cache is not cleared.
When a new web request comes to server 2, it creates new cache files and returns good results. But when it comes to server 1, it shows the same old results :(.
I wonder, is there any way we can share the cache directory among the various physical servers without compromising performance?
We may add more web servers, so please recommend a good long-term solution.
Thanks for reading
I would look into using Memcache as your caching mechanism. You can set up the Memcache daemon (memcached) on one server and then connect both servers to that one cache. See core.php for the setup details. Of course, you will also have to install the memcache PHP extension as well as the daemon, but it will be worth it.
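As a rough sketch, the core.php configuration could look something like this (CakePHP 2.x style; the host name and prefix are assumptions):

// app/Config/core.php - point the default cache at a shared memcached daemon
Cache::config('default', array(
    'engine'  => 'Memcache',
    'servers' => array('cache.example.com:11211'), // one daemon, many web servers
    'prefix'  => 'myapp_',
));

With both web servers reading and writing the same cache, a clear performed by server 2 is immediately visible to server 1.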
The easiest way would be to set up the cache directory to be served from an NFS (file) server. You wouldn't have to change anything else.
For performance I'd also stick with Memcache, as bucho said, but it will likely require you to change your application code.

Methods for caching PHP objects to file?

In ASP.NET, I grew to love the Application and Cache stores. They're awesome. For the uninitiated, you can just throw your data-logic objects into them and, hey presto, you only need to query the database once for a bit of data.
By far one of the best ASP.NET features, IMO.
I've since ditched Windows for Linux, and therefore PHP, Python and Ruby for webdev. I use PHP most because I dev several open source projects, all using PHP.
Needless to say, I've explored what PHP has to offer in terms of caching data-objects. So far I've played with:
Serializing to file (a pretty slow/expensive process)
Writing the data to file as JSON/XML/plaintext/etc (even slower for read ops)
Writing the data to file as pure PHP (the fastest read, but quite a convoluted write op)
I should stress now that I'm looking for a solution that doesn't rely on a third party app (eg memcached) as the apps are installed in all sorts of scenarios, most of which don't have install rights (eg: a cheap shared hosting account).
So, back to what I'm doing now: is persisting to file secure? Rule 1 of production server security has always been to disable file-writing, but I really don't see any way PHP could cache if it couldn't write. Are there any tips and/or tricks to boost the security?
Is there another persist-to-file method that I'm forgetting?
Are there any better methods of caching in "limited" environments?
Serializing is quite safe and commonly used. There is an alternative, however, and that is to cache to memory. Check out memcached and APC; they're both free and highly performant. This article on different caching techniques in PHP might also be of interest.
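To give a feel for APC's user cache, a minimal sketch (the key name, TTL, and loader function are all made up):

$user = apc_fetch('user_42', $hit);
if (!$hit) {
    $user = load_user_from_db(42);    // hypothetical expensive query
    apc_store('user_42', $user, 300); // keep the result for 5 minutes
}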
Re: Is there another persist-to-file method that I'm forgetting?
It's of limited utility but if you have a particularly beefy database query you could write the serialized object back out to an indexed database table. You'd still have the overhead of a database query, but it would be a simple select as opposed to the beefy query.
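A sketch of that idea, assuming an existing PDO connection in $pdo and a table cache(cache_key VARCHAR PRIMARY KEY, payload BLOB) that I've invented for illustration:

$stmt = $pdo->prepare('SELECT payload FROM cache WHERE cache_key = ?');
$stmt->execute(array('report_q3'));
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row) {
    $report = unserialize($row['payload']); // cheap indexed lookup
} else {
    $report = run_beefy_query($pdo); // hypothetical expensive query
    // REPLACE INTO is MySQL-specific; use an upsert suited to your database
    $ins = $pdo->prepare('REPLACE INTO cache (cache_key, payload) VALUES (?, ?)');
    $ins->execute(array('report_q3', serialize($report)));
}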
Re: Is persisting to file secure? (and "cheap shared hosting account")
The sad fact is that cheap shared hosting isn't secure. How much do you trust the 100, 500, or 1000 other people who have access to your server? For historic and (ironically) security reasons, shared hosting environments have PHP/Apache running as an unprivileged user (with PHP running as an Apache module). The security rationale here is that if the world-facing Apache process gets compromised, the exploiters only have access to an unprivileged account that can't screw with important system files.
The bad part is that this means whenever you write a file from PHP, the owner of that file is that same unprivileged Apache user. This is true for every user on the system, which means anyone has read and write access to the files. The theoretical hackers in the above scenario would also have access to them.
There's also a persistent bad practice in PHP of giving directories and files permissions of 777 to enable the unprivileged Apache user to write files out, and then leaving them in that state. That gives anyone on the system read/write access.
Finally, you may think obscurity saves you. "There's no way they can know where my secret cache files are", but you'd be wrong. Shared hosting sets up users in the same group, and most default file masks will give your group users read permission on files you create. SSH into your shared hosting account sometime, navigate up a directory, and you can usually start browsing through other users files on the system. This can be used to sniff out writable files.
The solutions aren't pretty. Some hosts will offer a CGI Wrapper that lets you run PHP as a CGI. The benefit here is PHP will run as the owner of the script, which means it will run as you instead of the unprivileged user. Problem averted! New Problem! Traditional CGI is slow as molasses in February.
There is FastCGI, but FastCGI is finicky and requires constant tuning. Not many shared hosts offer it. If you find one that does, chances are they'll have APC enabled, and may even be able to provide a mechanism for memcached.
I had a similar problem, and thus wrote a solution: a memory cache written in PHP. It only requires the PHP build to support sockets. Other than that, it is a pure PHP solution and should run just fine on shared hosting.
http://code.google.com/p/php-object-cache/
What I always do if I have to be able to write is to ensure I'm not writing anywhere I have PHP code. Typically my directory structure looks something like this (it's varied between projects, but this is the general idea):
project/
    app/
    html/
        index.php
    data/
    cache/
app is not writable by the web server (neither is index.php, preferably). cache is writable and used for caching things such as parsed templates and objects. data is possibly writable, depending on need. That is, if the users upload data, it goes into data.
The web server gets pointed to project/html and whatever method is convenient is used to set up index.php as the script to run for every page in the project. You can use mod_rewrite in Apache, or content negotiation (my preference but often not possible), or whatever other method you like.
All your real code lives in app, which is not directly accessible by the web server, but should be added to the PHP path.
This has worked quite well for me for several projects. I've even been able to get, for instance, Wikimedia to work with a modified version of this structure.
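As a concrete sketch of the html/index.php entry point under this layout (the bootstrap file name is my invention):

// html/index.php - the only PHP file the web server can reach directly
define('APP_ROOT', dirname(__DIR__) . '/app');

// Make everything in app/ loadable without exposing it to the web server.
set_include_path(APP_ROOT . PATH_SEPARATOR . get_include_path());

require 'bootstrap.php'; // hypothetical: dispatches the request from app/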
Oh... and I'd use serialize()/unserialize() to do the caching, although generating PHP code has a certain appeal. All the templating engines I know of generate PHP code to execute, making post-parse very fast.
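A rough sketch of such a serialize()-based cache writing into the cache/ directory above; the helper names and hashing scheme are mine, and the temp-file-plus-rename dance keeps readers from ever seeing a half-written file:

function cache_set($key, $value) {
    $file = dirname(__DIR__) . '/cache/' . md5($key) . '.cache';
    $tmp  = $file . '.' . uniqid('', true);
    file_put_contents($tmp, serialize($value));
    rename($tmp, $file); // atomic on the same filesystem
}

function cache_get($key) {
    $file = dirname(__DIR__) . '/cache/' . md5($key) . '.cache';
    if (!is_file($file)) {
        return false;
    }
    return unserialize(file_get_contents($file));
}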
If you have access to the database query cache (e.g. MySQL's), you could serialize your objects and store them in the DB. The database will take care of holding the query results in memory, so that should be pretty fast.
You don't spell out -why- you're trying to cache objects. Are you trying to speed up a slow database query, work around expensive object instantiation, avoid repeated generation of complex page, maintain application state or are you just compulsively storing away objects in case of a long winter?
The best solution, given the atrocious limitations of most low-cost shared hosting, is going to depend on what you're trying to accomplish. Going for bottom of the barrel shared-hosting means you have to accept that you won't be working with the best tools. The numbers are hard to quantify, but there's a trade off between hosting costs, site performance & developer time (ie - fast, cheap or easy).
It's in theory possible to store objects in sessions. That might get you past the disabled-file-writing problem. Additionally, you could store the sessions in a MySQL MEMORY-backed table to speed up the query.
Some hosting places may have APC compiled in... That would allow you to store the objects in memory.
