Make Apache read PHP documents from RAM / Memory?

Is there any way to make Apache read PHP documents from RAM?
I'm thinking of creating a virtual disk in memory and then modifying httpd.conf to change the document root directory to that virtual disk.
Is this viable?
Basically, what I want to do is distribute my PHP code to my users' computers so they can run it. But I don't want them to be able to look at the PHP source code easily - the code can't be stored on the hard disk in plain text; instead, it is stored in a data file and then read by my program into memory, where Apache reads it.
Is this viable? Is it easy to create a virtual disk in memory in C++ such that the disk can't be accessed by any other means, such as My Computer?
Update:
Thank you all for the questions, which help me express my goals more clearly, but I think I know what I'm doing. Please just suggest any solutions you may have for my needs.
The hard part so far is getting Apache to read from somewhere other than a plain directory on the hard disk containing all of my project's source code. I would like it to be as concealed as possible. I know a little about Windows desktop development and thought a virtual disk might be a solution, but if you have better ones, please suggest them.

You can, in theory, have Apache serve files out of a Samba share. You would need to configure the server to mount a specific file share made by the user. This won't work if the user is behind a firewall or NAT gateway of any variety.
This will be:
Slower than molasses in January ... in Alaska. Apache does a lot of stat calls on each request by default. This is going to add a lot of overhead before even finding the file, transferring it over, and then executing it.
Hard to configure. Adding mounts is a non-trivial task at the server level and Samba can be rather finicky on both sides. Further, if you are using RHEL/CentOS or any other distro running SELinux, you're going to have to do the chcon/setsebool tapdance to even get it working. The default settings expressly prohibit Apache from touching any file that came to the system through a Samba share.
A security nightmare. You will be allowing Apache to serve up files to anyone from a computer that is not under your direct control. The malicious possibilities are endless. This is a horrible idea that you should not seriously consider.
A safer-but-still-insane alternative might be available: FastCGI. The remote systems can run a FastCGI process and actually host and execute the code directly, and Apache can be configured to pass PHP requests to that remote FastCGI process. This will still break if the users are firewalled or NATted. It will only be an acceptable solution if the user can actually run a FastCGI process and you don't mind the code executing on their system instead of on the server.
This has the distinct advantage of the files not executing in the context of the server.
Perhaps I've entirely misunderstood -- are you asking for code to be run live from users' systems? Because I wrote this answer under that interpretation.

Related

Best way to manage file transfer of heavy folders

The situation is a bit complicated here.
We receive large folders with lots of files on a remote file server (accessible over FTP; let's call it FTP1).
Those folders can have a complex tree structure and weigh between 50 MB and 4 GB.
With PHP, the goal is to remove unwanted files (.exe, .pdf, ...),
take all the files, put them in the root folder, and then organize them by creating a newly defined tree structure.
After this processing, the web server should send everything to another remote FTP server (FTP2).
Then the folders/files can be removed from FTP1.
With Laravel and Storage everything is easy to build, but my main concern is speed.
Is it better to:
Copy the files to the web server, run the process, copy to the remote server, and then clean up
Process directly on FTP1 and then copy to FTP2
Copy to FTP2 and then process directly on FTP2
I don't have much experience in IT infrastructure/architecture, but both FTP servers are only accessible over the internet and will never be on the same network as the web server.
The connection between the FTP servers and the web server is supposed to be highly available, but we all know what that means ...
I don't expect a definitive answer, more of a guideline or the usual way of dealing with this case.
I don't quite understand everything, but since you are just looking for a rough guide, from what I see, you should:
Reduce the need for FTP, because FTP is extremely slow; if you can get away with one FTP transfer instead of two, that is definitely better.
Check your FTP servers' hardware specifications. You obviously want the faster one to do the processing.
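Since you mention Laravel and Storage, here is a minimal sketch of the first option (pull from FTP1 to the web server, filter and flatten, push to FTP2, then clean up). The disk names ftp1, ftp2 and local, and the blocked extensions, are assumptions for the example, not part of your actual setup:
<?php
// Hypothetical Laravel command body; assumes 'ftp1', 'ftp2' and 'local' disks
// are configured in config/filesystems.php.
use Illuminate\Support\Facades\Storage;

$blocked = ['exe', 'pdf'];                       // unwanted extensions

// One download per file from FTP1, filtering and flattening into the root.
foreach (Storage::disk('ftp1')->allFiles() as $path) {
    $ext = strtolower(pathinfo($path, PATHINFO_EXTENSION));
    if (in_array($ext, $blocked, true)) {
        continue;                                // drop unwanted files
    }
    Storage::disk('local')->put(basename($path), Storage::disk('ftp1')->get($path));
}

// ... your re-ordering logic runs here on the local copy ...

// One upload pass to FTP2, then remove the source files from FTP1
// only once FTP2 has everything.
foreach (Storage::disk('local')->allFiles() as $path) {
    Storage::disk('ftp2')->put($path, Storage::disk('local')->get($path));
}
Storage::disk('ftp1')->delete(Storage::disk('ftp1')->allFiles());
For very large files, readStream()/writeStream() on the same disks avoids loading whole files into memory.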

Can PHP move around and edit root system files on a server?

This might seem like a stupid question, but I've Googled to no avail.
I've always thought of PHP as a language for creating dynamic, database-driven sites, and I've never thought about using it to move system files around on the actual server (as I have never had a need to). My question is:
Can a standard PHP 5.3.x installation move, copy or edit system files (I'm using a Linux server as an example) in /bin or maybe /etc?
Is this a good idea/practice?
It has never occurred to me that if a malicious hacker were able to inject some PHP into a site, they would effectively be granted access to the entire Linux server (and all its system files). I have only ever thought of PHP as something that operates inside the /vhosts directory (perhaps naively).
Sorry if this sounds like a stupid question, but I can't really test my theory: if my boss saw me writing/uploading/executing a script that moved stuff around in the Linux file system, I would be dead.
Thanks for your help guys! :)
PHP can do to your server whatever the permissions of the user account it runs as allow it to do. PHP as a language is not restricted in any way (at least in terms of permissions); it is the user account that is restricted.
This is why people will usually create a dedicated user for Apache/nginx/insert web server here to run as, and only give it permission to manipulate files and directories related to the web server. If you don't give this user access to /bin or /etc, it can't do anything that will affect them.
is this a good idea/practice?
Normally not. Leave system administration to your sysadmin and not the user requesting your PHP scripts.
PHP can attempt to call many system commands to move or directly edit files on the hard disk. Whether it succeeds depends on the security settings.
Let's assume you're running PHP through Apache, and Apache is set up to run all its processes as the user www-data - a default setup for OSes like Debian. If you give the user www-data permission to edit /etc, then yes, PHP can read and write files in /etc.
There is only one major drawback, as you identified: security, security and security. You had also better be sure that your PHP works properly, as one wrongly written file could now take down the entire server.
I would also definitely not practise on your server behind your boss's back. Look into getting a cheap virtual machine, either hosted elsewhere or on your own machine courtesy of VirtualBox.
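On a test box like that, a harmless sketch along these lines will show you which account PHP runs as and what it can actually touch (the paths are just examples):
<?php
// Report the account PHP runs as and whether common system paths are
// readable/writable for it. Needs the POSIX extension, which most Linux
// builds of PHP include.
$user = posix_getpwuid(posix_geteuid());
echo 'PHP is running as: ' . $user['name'] . "\n";

foreach (array('/etc', '/bin', '/var/www') as $dir) {
    printf("%-10s readable: %-3s writable: %s\n",
        $dir,
        is_readable($dir) ? 'yes' : 'no',
        is_writable($dir) ? 'yes' : 'no');
}
If /etc shows up as writable for that account, the situation described above applies and your scripts really could edit files there.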
Yes, it can. It's a programming language; it can do anything.
It depends entirely on who is running it. If it's root, it can do anything. If it's just a normal user, bob, it can't do much outside bob's home directory, /home/bob. Apache is just like bob: it usually runs under user names such as www-data, www or apache.

PHP Code on Separate Server From Apache?

This is something I've never seen done, and it's not turning up in my research, but my boss is interested in the idea. We are looking at some load balancing options, and wonder if it is possible to have Apache and PHP installed on multiple servers, managed by a load balancer, but with all the actual PHP code on one server and the various Apache servers pointing to that one central code base?
NFS mounts, for instance, are certainly possible, but I wouldn't recommend them. A lot of the advantage of load balancing is lost, and you're reintroducing a single point of failure. For syncing code, an rsync cron job can handle it very nicely, or a manual rsync can be run on deployment.
What is the reason you would want this central code base? I'm about 99% sure there is a better solution than a single server dishing out code.
I believe it is possible. To add to Wrikken's answer, I can imagine NFS could be a good choice. However, there are some drawbacks and caveats. For one, when Apache tries to access files on an NFS share that has gone away (connection dropped, host failed, etc.), very bad things happen: Apache locks up and keeps trying to retrieve the file. The processes attempting to access the share do not die, for whatever reason, and it becomes necessary to reboot the server.
If you do wind up doing this, I would recommend an opcode cache such as APC. APC will cache the pre-processed PHP locally and eliminate round trips to your storage. Just be prepared to clear the opcode cache whenever you update your application.
PHP has to run under something that acts as a web processor; Apache is the most popular. I've done NFS mounts across servers without problems. Chances are that if NFS is down, the network is down. But it doesn't take long to rsync files across servers to replicate them, and that is really the better idea.
I'm not sure what your content is like, but you can separate static files like JavaScript, CSS and images so they are served from their own server. lighttpd is a good, lightweight web server for things like this. Then you end up with a "dedicated" PHP server. You don't even need a load balancer for this setup.
Keep in mind that PHP stores sessions on the local file system by default. So if you are using sessions, you need to make sure users always return to the same server; otherwise you need to do something like storing sessions in memcache.
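If the memcache extension is available on the web nodes, pointing the session handler at a shared memcached instance is a small change made before session_start(); the host and port below are placeholders:
<?php
// Store sessions in a shared memcached instance instead of the local
// filesystem, so any balanced node can serve the user. Assumes the pecl
// "memcache" extension is loaded; 10.0.0.5:11211 is a placeholder address.
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://10.0.0.5:11211');
session_start();
The same two settings can go in php.ini instead, which keeps application code untouched.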

Securing a shared lighttpd setup

(Yes, I know that questions pertaining to lighttpd fit better on SF, but I thought it was more apt to be asked here since it's primarily concerned with security policy.)
We're planning to set up a small web server in my college, so that people could get some web space to put up web pages and the like. They could also upload PHP pages. The whole setup runs from within a chroot jail.
We are thinking about using the same infrastructure to put up some more services, for instance a discussion forum. My problem is that putting the forum in the same document root (or indeed the same chrooted environment) pretty much allows any user to place small PHP scripts in their directories that can access the forum configuration files (using, say, file_get_contents). This is a massive security risk! Is there any way to solve this, short of disabling PHP for the user accounts and keeping it enabled only for the discussion forum and the like, or serving the forum elsewhere and proxying it through lighttpd?
I doubt setting ownerships/permissions would do anything to fix this, since, the way I see it, the PHP FastCGI process is spawned by the web server, and hence any page that can be accessed by the server (they all must be, seeing as it is the server that must ultimately serve them) can be accessed by the PHP scripts uploaded by a user.
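To make the risk concrete, this is the sort of one-liner any user could drop into their web space (the forum's config path here is made up):
<?php
// Runs as the same FastCGI user as the forum, so nothing stops the read.
echo htmlspecialchars(file_get_contents('/srv/www/forum/config.php'));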
Any help would be appreciated!
Well, a few points.
First off, while lighttpd is GREAT for high-performance needs, it was not designed to be used in a shared hosting setting. Apache would likely be the better choice for that, since it supports things like .htaccess...
Secondly, PHP does not need to run as the same user as lighttpd. You can use the spawn-fcgi program to launch each FastCGI listener as the user of that website. You would declare a fastcgi backend for each virtual host. Note that you likely won't be able to use any of the built-in vhost modules (simple_vhost, etc.); simply use the regular expression matching:
Either by IP and Port:
$SERVER["socket"] == "127.0.0.2:80" {
fastcgi.server = (
".php" => (
"username" => (
"socket" => "/tmp/user_php.fastcgi",
)
)
)
)
Or by host name:
$HTTP["host"] =~ "example\.com" {
# ...
}
You would likely need to modify the init script to also run spawn-fcgi to launch the PHP processes for each user.
Each user needs to have their own Linux user account. Then you need to use suPHP + lighttpd to make sure that the PHP code runs with the privileges of that user. Next, you should make sure that all files are owned by the correct user and chmod 700 or chmod 500 (best for .php files). The last two zeros in the chmod, along with suPHP, make it so that users cannot file_get_contents() each other's files.

Methods for caching PHP objects to file?

In ASP.NET, I grew to love the Application and Cache stores. They're awesome. For the uninitiated, you can just throw your data-logic objects into them and, hey presto, you only need to query the database once for a bit of data.
By far one of the best ASP.NET features, IMO.
I've since ditched Windows for Linux, and therefore moved to PHP, Python and Ruby for web dev. I use PHP the most because I develop several open source projects, all of which use PHP.
Needless to say, I've explored what PHP has to offer in terms of caching data-objects. So far I've played with:
Serializing to file (a pretty slow/expensive process)
Writing the data to file as JSON/XML/plaintext/etc (even slower for read ops)
Writing the data to file as pure PHP (the fastest read, but quite a convoluted write op)
I should stress now that I'm looking for a solution that doesn't rely on a third-party app (e.g. memcached), as the apps are installed in all sorts of scenarios, most of which don't have install rights (e.g. a cheap shared hosting account).
So, back to what I'm doing now: is persisting to file secure? Rule 1 of production server security has always been to disable file writing, but I really don't see how PHP could cache anything if it couldn't write. Are there any tips and/or tricks to boost the security?
Is there another persist-to-file method that I'm forgetting?
Are there any better methods of caching in "limited" environments?
Serializing is quite safe and commonly used. There is an alternative, however, and that is to cache to memory. Check out memcached and APC; they're both free and highly performant. This article on different caching techniques in PHP might also be of interest.
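If you do stay with serializing to files, a minimal sketch of that approach (the cache directory, key hashing and TTL are all illustrative) is:
<?php
// Tiny file cache: serialize() on write, unserialize() on read, with a TTL.
define('CACHE_DIR', '/tmp/app-cache');           // illustrative location

function cache_put($key, $value)
{
    if (!is_dir(CACHE_DIR)) {
        mkdir(CACHE_DIR, 0700, true);            // keep it private to the PHP user
    }
    file_put_contents(CACHE_DIR . '/' . md5($key), serialize($value), LOCK_EX);
}

function cache_get($key, $ttl = 300)
{
    $file = CACHE_DIR . '/' . md5($key);
    if (!is_file($file) || filemtime($file) < time() - $ttl) {
        return false;                            // missing or expired
    }
    return unserialize(file_get_contents($file));
}
The 0700 mode on the cache directory matters on shared hosts, for exactly the reasons the shared-hosting answer below goes into.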
Re: Is there another persist-to-file method that I'm forgetting?
It's of limited utility, but if you have a particularly beefy database query, you could write the serialized object back out to an indexed database table. You'd still have the overhead of a database query, but it would be a simple SELECT as opposed to the beefy query.
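A sketch of that with PDO; the object_cache table, its columns, and the REPLACE INTO (MySQL syntax) are assumptions for the example:
<?php
// Cache a serialized object in an indexed table so the expensive query only
// runs on a miss. $pdo is an existing PDO connection to MySQL.
function db_cache_put(PDO $pdo, $key, $value)
{
    $stmt = $pdo->prepare(
        'REPLACE INTO object_cache (cache_key, payload) VALUES (?, ?)'
    );
    $stmt->execute(array($key, serialize($value)));
}

function db_cache_get(PDO $pdo, $key)
{
    $stmt = $pdo->prepare('SELECT payload FROM object_cache WHERE cache_key = ?');
    $stmt->execute(array($key));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    return $row ? unserialize($row['payload']) : false;
}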
Re: Is persisting to file secure? (and the cheap shared hosting account)
The sad fact is that cheap shared hosting isn't secure. How much do you trust the 100, 500, or 1000 other people who have access to your server? For historical and (ironically) security reasons, shared hosting environments have PHP/Apache running as an unprivileged user (with PHP running as an Apache module). The security rationale is that if the world-facing Apache process gets compromised, the exploiters only gain access to an unprivileged account that can't screw with important system files.
The bad part is that whenever you write a file using PHP, the owner of that file is that same unprivileged Apache user. This is true for every user on the system, which means anyone has read and write access to those files. The theoretical hackers in the above scenario would have access to them too.
There's also a persistent bad practice in PHP of giving directories and files permissions of 777 so the unprivileged Apache user can write files out, and then leaving them in that state. That gives anyone on the system read/write access.
Finally, you may think obscurity saves you ("there's no way they can know where my secret cache files are"), but you'd be wrong. Shared hosts set up users in the same group, and most default file masks will give your group read permission on files you create. SSH into your shared hosting account sometime and navigate up a directory; you can usually start browsing through other users' files on the system. This can be used to sniff out writable files.
The solutions aren't pretty. Some hosts offer a CGI wrapper that lets you run PHP as a CGI. The benefit is that PHP runs as the owner of the script, which means it runs as you instead of the unprivileged user. Problem averted! New problem: traditional CGI is slow as molasses in February.
There is FastCGI, but FastCGI is finicky and requires constant tuning, and not many shared hosts offer it. If you find one that does, chances are it will have APC enabled, and it may even be able to provide a mechanism for memcached.
I had a similar problem, and thus wrote a solution: a memory cache written in PHP. It only requires the PHP build to support sockets. Other than that, it is a pure PHP solution and should run just fine on shared hosting.
http://code.google.com/p/php-object-cache/
What I always do, if I have to be able to write, is ensure I'm not writing anywhere I have PHP code. Typically my directory structure looks something like this (it has varied between projects, but this is the general idea):
project/
    app/
    html/
        index.php
    data/
    cache/
app is not writable by the web server (neither is index.php, preferably). cache is writable and used for caching things such as parsed templates and objects. data is possibly writable, depending on need. That is, if the users upload data, it goes into data.
The web server gets pointed to project/html and whatever method is convenient is used to set up index.php as the script to run for every page in the project. You can use mod_rewrite in Apache, or content negotiation (my preference but often not possible), or whatever other method you like.
All your real code lives in app, which is not directly accessible by the web server but should be added to the PHP include path.
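A minimal html/index.php for that layout might be nothing more than this; bootstrap.php is a hypothetical entry point inside app/:
<?php
// html/index.php - the only PHP file the web server can reach directly.
// Put app/ on the include path and hand control to the real application code.
set_include_path(__DIR__ . '/../app' . PATH_SEPARATOR . get_include_path());
require __DIR__ . '/../app/bootstrap.php';       // hypothetical entry point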
This has worked quite well for me for several projects. I've even been able to get, for instance, Wikimedia to work with a modified version of this structure.
Oh... and I'd use serialize()/unserialize() to do the caching, although generating PHP code has a certain appeal. All the templating engines I know of generate PHP code to execute, which makes the post-parse stage very fast.
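For comparison, the "cache as pure PHP" option from the question can be done with var_export(); writing through a temporary file and rename() keeps readers from including a half-written cache. Paths are illustrative:
<?php
// Write the value out as includable PHP, so a read is just an include
// (and benefits from an opcode cache). Suitable for arrays and scalars.
function php_cache_put($file, $value)
{
    $code = '<?php return ' . var_export($value, true) . ';';
    $tmp  = $file . '.' . uniqid('', true) . '.tmp';
    file_put_contents($tmp, $code);
    rename($tmp, $file);                         // atomic on the same filesystem
}

function php_cache_get($file)
{
    return is_file($file) ? include $file : false;
}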
If you have access to the database query cache (e.g. MySQL's), you could serialize your objects and store them in the DB. The database will take care of holding the query results in memory, so that should be pretty fast.
You don't spell out why you're trying to cache objects. Are you trying to speed up a slow database query, work around expensive object instantiation, avoid repeated generation of a complex page, maintain application state, or are you just compulsively storing away objects in case of a long winter?
The best solution, given the atrocious limitations of most low-cost shared hosting, is going to depend on what you're trying to accomplish. Going for bottom-of-the-barrel shared hosting means you have to accept that you won't be working with the best tools. The numbers are hard to quantify, but there's a trade-off between hosting costs, site performance and developer time (i.e. fast, cheap or easy).
It's theoretically possible to store objects in sessions. That might get you past the file-writing-disabled problem. Additionally, you could store the sessions in a MySQL memory-backed table to speed up the queries.
Some hosting places may have APC compiled in. That would allow you to store the objects in memory.
