Questions regarding Lighttpd for Windows - PHP

I am using Lighty for Windows. Yes, I know it's not Linux, but at the moment I can only afford local hosting, which lets me do a lot of learning and practice my web skills.
I am aware that FastCGI does not work on Windows, but I am wondering what other ways there are to improve performance.
Also, I was wondering how to hide all those lighttpd.exe console windows that pop up every time anyone (or a bot) visits the site... can lighttpd be run in the background? I am running it as a service, and that is fine...
But all in all, why is there so little support for Lighty on Windows?
And I really couldn't care less for one more lecture on why everything should be on Linux or Windows... that discussion is really a waste of time, mine and yours.
If you have some useful information, I definitely want to hear it.
I guess I am one of those guys who always wants to learn how to improve things; it's like a drug for me to eke out any extra percent of performance.
For example, I have added a subdomain, because YSlow loves having images, CSS, and JavaScript served from a subdomain.
I really like Lighty; I just hope I am not the only one out there using it on Windows. All the Lighty-for-Windows sites seem to be dead or forgotten.
Thank you for your time.
-Craig

I also run lighttpd for Windows, and I've made my own well-optimized lighttpd build with PHP and Python support which I run from a USB pen drive. Since I switched to Windows 7, the command-line windows also keep appearing whenever I access the server (I don't know how to stop that from happening either).
I did several things to make my lighttpd server faster (since I run it from a USB pen drive):
disable all kinds of logs (especially access logs)
keep the config file as small as possible (mine has only 20 lines)
activate PHP only on .php files and Python only on .py files
disable all the modules you don't need, like SSL and so on (I only have 5)
Here is my config file:
# LightTPD Configuration File
var.Doo = "C:/your/base/path/here"
server.port = 80
server.name = "localhost"
server.tag = "LightTPD/1.4.20"
server.document-root = var.Doo + "/WWW/"
server.upload-dirs = ( var.Doo + "/TMP/" )
server.errorlog = var.Doo + "/LightTPD/logs/error.log"
server.modules = ( "mod_access", "mod_cgi", "mod_dirlisting", "mod_indexfile", "mod_staticfile" )
# mod_access
url.access-deny = ( ".db" )
# mod_cgi
cgi.assign = ( ".php" => var.Doo + "/PHP/php-cgi.exe", ".py" => var.Doo + "/Python/python.exe" )
# mod_dirlisting
dir-listing.activate = "enable"
# mod_indexfile
index-file.names = ( "index.php", "index.html" )
# mod_mimetype
mimetype.assign = (
  ".css" => "text/css",
  ".gif" => "image/gif",
  ".html" => "text/html",
  ".jpg" => "image/jpeg",
  ".js" => "text/javascript",
  ".png" => "image/png",
  ".txt" => "text/plain",
  ".xml" => "text/xml"
)
# mod_staticfile
static-file.exclude-extensions = ( ".php", ".py" )
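As a sanity check before going live (assuming the lighttpd binary is on your PATH), lighttpd can validate a config file like this one without starting the server:
lighttpd -f lighttpd.conf -t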
And here are the modules I have active:
mod_access
mod_cgi
mod_dirlisting
mod_indexfile
mod_staticfile
Bottom line: even when running from the USB pen drive, the server is still blazing fast.
PS: I also considered switching to nginx, but given the performance I currently get and nginx's even smaller user base, I decided to keep LightTPD.

By local hosting, I'm guessing you mean on your own box, so essentially free. If you're not too strapped for cash, you could probably pick up a cheap box and install a headless Linux on it. Well, that's only if you're averse to using Linux as a desktop...
So, first, since you're only learning, I'm assuming you're not trying to put up a production site yet, so you can shut down Lighty when you're not using it (getting rid of the boxes popping up for bots). Excuse me if this is unacceptable, since there is probably a solution out there (and how are you getting bots for a sandbox site?). The same goes for performance: it's just a testing ground, so optimization shouldn't matter too much yet (don't worry about it; remember the maxim that premature optimization is the root of all... something). If you still want FastCGI, there's another Stack Overflow question/answer on that: FastCGI on Windows and Lighttpd. Also, check out SCGI, which might be a different story on Windows.
Also, here are some thoughts from Atwood on YSlow: codinghorror.com/blog/archives/000932.html
Finally: last I checked, Lighty was nowhere near as popular as Apache, meaning a much smaller user base, and once you also consider IIS, Lighty really doesn't have many users on Windows. Just noting that you might have a not-so-smooth road ahead of you if you want to continue with lighttpd on Windows. Also note that you'll probably end up shifting the server to another box or offsite eventually; I've served stuff from my desktop, and it's not all that fun in the long run.

Try nginx, another lightweight alternative to Apache: fast and stable, and FastCGI on Windows works OK.
Regarding your question: I think the reason is that lighttpd is losing popularity; look at the web server stats. The fewer people use it, the fewer features get tested and the more bugs lurk around.

Related

Serving large files (>1GB) via PHP on Plesk / Nginx / Apache

I'm trying to serve arbitrarily large files via PHP.
Since I need to check permissions first, I cannot let the web server handle the file downloads directly.
The server is running Plesk 17.0. As far as I know, Plesk uses Nginx as a proxy in front of Apache by default, but this can be turned off so that Apache serves everything directly.
My problem:
On downloads I get a network error a few MB after the 1GB mark if I use the default configuration (i.e., with Nginx running).
I've read many suggestions on how to handle large file downloads in PHP. Currently I'm using essentially this:
// Prefer handing the download off to Apache's mod_xsendfile when it is available
if ( isset( $_SERVER['MOD_X_SENDFILE_ENABLED'] ) &&
     $_SERVER['MOD_X_SENDFILE_ENABLED'] == 1 ) {
    header( "X-Sendfile: " . $this->options['base_path'] . $file );
} else {
    // Fall back to streaming the file from PHP itself
    readfile( $file );
}
As you can see, I've tried offloading the file download to Apache, and that works fine if I turn off Nginx.
With Nginx turned on, PHP's max_execution_time marks the point at which Nginx produces a network error, though at least 1GB is always served. It seems to me that there is some kind of block-size limit between Apache and Nginx set to 1GB, but I could not find such an option. For example, with max_execution_time set to 5 seconds, the download still delivers 1GB, even if it takes 10 minutes.
This error is logged in the proxy_error_log when 1GB has been served and max_execution_time has passed:
[error] 3524#0: *796853 upstream prematurely closed connection while reading upstream
With Apache serving directly and mod_xsendfile active, max_execution_time does not matter; with PHP's readfile, it does. This also makes sense to me.
But according to the Plesk documentation, it is beneficial to keep Nginx in front for serving.
So I'm looking for a way to keep both Nginx and Apache running without being limited by max_execution_time when serving multi-GB files.
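For what it's worth, the 1GB boundary is suspiciously close to Nginx's default proxy_max_temp_file_size of 1024m, which caps how much of a proxied response Nginx will buffer to disk; that would explain why roughly 1GB always arrives no matter how long the download takes, with the rest stalling once PHP is killed. And if PHP itself has to stream the file, one common pattern is to send it in chunks and restart the execution timer per chunk; a minimal sketch (the permission check and path validation are assumed to happen before this point):
$handle = fopen( $file, 'rb' );
if ( $handle === false ) {
    http_response_code( 404 );
    exit;
}
header( 'Content-Type: application/octet-stream' );
header( 'Content-Length: ' . filesize( $file ) );
while ( ! feof( $handle ) ) {
    echo fread( $handle, 131072 ); // 128 KB per chunk
    flush();                       // hand the chunk to the web server (disable output buffering first)
    set_time_limit( 30 );          // restart PHP's execution timer for each chunk
}
fclose( $handle );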

Using WEBrick to serve PHP web applications

I am a PHP developer who has started learning Ruby on Rails. I love how easy it is to get up and running developing Rails applications. One of the things I love most is WEBrick. It makes it so you don't have to configure Apache and Virtual Hosts for every little project you are working on. WEBrick allows you to easily start up and shut down a server so you can click around your web application.
I don't always have the luxury of working on a Ruby on Rails app, so I was wondering how I might configure (or modify) WEBrick to be able to use it to serve up my PHP projects and Zend Framework applications.
Have you attempted this? What would be the necessary steps in order to achieve this?
To get PHP support in WEBrick you can use a handler for PHP files. To do this you extend HTTPServlet::AbstractServlet and implement the do_GET and do_POST methods, which are called for GET and POST requests from the browser. There you just feed the incoming request to php-cgi.
To get the PHPHandler to handle PHP files you have to add it to the HandlerTable of a specific mount. You can do it like this:
require 'webrick'
include WEBrick  # so HTTPServer and HTTPServlet resolve without the WEBrick:: prefix

s = HTTPServer.new(
  :Port         => port,
  :DocumentRoot => dir,
  :PHPPath      => phppath
)
# Hand .php files to the PHPHandler; everything else is served statically
s.mount("/", HTTPServlet::FileHandler, dir,
        { :FancyIndexing => true,
          :HandlerTable  => { "php" => HTTPServlet::PHPHandler } })
The first statement initializes the server; the second adds options to the DocumentRoot mount, enabling directory listings and handing .php files to the PHPHandler. After that, the server can be started with s.start().
I have written a PHPHandler myself, as I couldn't find one elsewhere. It is based on WEBrick's CGIHandler, but reworked to make it work with php-cgi. You can have a look at the PHPHandler on GitHub:
https://github.com/questmaster/WEBrickPHPHandler
You can use nginx or lighttpd.
Here's a minimal lighttpd config.
Install PHP with FastCGI support and adjust the "bin-path" option below for your system. With MacPorts you can install it using sudo port install php5 +fastcgi.
Name this file lighttpd.conf, then simply run lighttpd -f lighttpd.conf from any directory you'd like to serve.
Open your web browser to localhost:8000.
lighttpd.conf:
server.bind = "0.0.0.0"
server.port = 8000
server.document-root = CWD
server.errorlog = CWD + "/lighttpd.error.log"
accesslog.filename = CWD + "/lighttpd.access.log"
index-file.names = ( "index.php", "index.html",
"index.htm", "default.htm" )
server.modules = ("mod_fastcgi", "mod_accesslog")
fastcgi.server = ( ".php" => ((
"bin-path" => "/opt/local/bin/php-cgi",
"socket" => CWD + "/php5.socket",
)))
mimetype.assign = (
".css" => "text/css",
".gif" => "image/gif",
".htm" => "text/html",
".html" => "text/html",
".jpeg" => "image/jpeg",
".jpg" => "image/jpeg",
".js" => "text/javascript",
".png" => "image/png",
".swf" => "application/x-shockwave-flash",
".txt" => "text/plain"
)
# Making sure file uploads above 64k always work when using IE or Safari
# For more information, see http://trac.lighttpd.net/trac/ticket/360
$HTTP["useragent"] =~ "^(.*MSIE.*)|(.*AppleWebKit.*)$" {
server.max-keep-alive-requests = 0
}
If you'd like to use a custom php.ini file, change bin-path to this:
"bin-path" => "/opt/local/bin/php-fcgi -c" + CWD + "/php.ini",
If you'd like to configure nginx to do the same, here's a pointer.
I found this, but I really don't think it's worth the hassle. Is creating a virtual host (which isn't even necessary) that difficult? In the time it would take you to get this working with PHP, if you can get it working at all, you could have written a script that creates virtual host entries for you, making it as easy as WEBrick.
It looks like WEBrick has CGI support, which implies that you can get PHP running by invoking it as a CGI script. The #! line at the top of each executable file would just need to point to the absolute path of php-cgi.exe.
It's worth noting that you'd need to remove the #! line when moving the file to any other server that doesn't treat PHP as a CGI script, which would be... uh... all of them.
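For illustration, such a file might look like this (the interpreter path is an assumption; php-cgi emits its own CGI headers):
#!C:/php/php-cgi.exe
<?php
// WEBrick runs this file through the interpreter named on the #! line.
echo "Hello from PHP via WEBrick's CGI support";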

Running Rails and PHP on Lighttpd on Linux

Well, I'm wondering if there's a way to run both Rails and PHP on Lighty, on Ubuntu. I want to run both my PHP projects and Rails projects on the one server/domain.
I have little experience with Linux really, so forgive my naivety.
If there's a way of doing this, please let me know :)
It's really quite simple to run them both. I do it all the time (RoR to run Redmine, and PHP for the rest).
You have two real options for RoR: either serve it from FastCGI (what I do), or run it with a standalone server (like Mongrel) and proxy to it. Both have advantages: FastCGI is self-contained (no secondary server to run), while the standalone server is easier to configure.
If you have specific questions, I can guide you, but there are guides on the internet on how to do this.
My lighttpd.conf:
$HTTP["host"] =~ "my.ror.site" {
server.error-handler-404="/dispatch.fcgi"
fastcgi.server = (".fcgi" => ("ror_1" => (
"min-procs"=>8,
"max-procs" => 8,
"socket" => "/tmp/myrorlock.fastcgi",
"bin-path"=> "/path/to/ror/site/public/dispatch.fcgi",
"kill-signal" => 9,
"bin-environment" => ( "RAILS_ENV" => "production" )
)))
}
fastcgi.server = ( ".php" =>
(
(
"socket" => "/tmp/php-fastcgi.socket",
"bin-path" => "/usr/bin/php-cgi -c /etc/php.ini",
"min-procs" => 1,
"disable-time" => 1,
"max-procs" => 1,
"idle-timeout" => 20,
"broken-scriptfilename" => "enable",
"bin-copy-environment"=> (
"PATH", "SHELL", "USER"
),
"bin-environment" => (
"PHP_FCGI_CHILDREN" => "40",
"PHP_FCGI_MAX_REQUEST" => "50000"
)
)
)
)
And that's it. Note the kill-signal option; that's important, otherwise you'll wind up with zombie processes everywhere every time you restart the server...
Check out fastcgi.conf in the conf.d subdirectory of Lighty's configuration directory (not sure where it's located on Ubuntu, but a quick search suggests /etc/lighttpd). There are commented-out examples for both PHP and Rails; by combining the two, you should be able to get the setup you're looking for (though I'd suggest getting one working first and then setting up the other).
FastCGI is the mechanism by which Lighty communicates with runtimes like Ruby or PHP. Lighty can also use SCGI, though I've never used it myself and am not sure how well it works (last I heard it was still experimental-ish).
You may also find the Optimizing FastCGI page on Lighty's documentation wiki helpful, though it's fairly PHP/MySQL-specific.
I don't use Lighty. Rails is best served with Passenger and Apache, considering the power of the Passenger add-on for Apache. I served WordPress (PHP) on the same domain as my Rails app by pointing its path somewhere else. Here's an article to follow. HTH.

What Apache/PHP/Server settings might affect the speed of a CodeIgniter model instantiation from one server to another?

One of the pages in one of my apps runs very slowly on the web server compared to my local test server. There are some radical differences in the environments that might explain it, but I'm hoping for a more solvable solution than that.
Server:
Solaris 10
Apache 2.2.9 Prefork
PHP 5.2.6
The server runs on a cluster of four not-even-a-year-old Sun boxes and shouldn't be having any performance issues.
Local Test Server:
Windows XP
Apache 2.2.14 WinNT
PHP 5.3.1
This is actually my own desktop: a decent machine, but one that should pale in comparison to the Sun boxes.
The application is written with CodeIgniter, and I've used the profiling features within to trace the slowdown to Model::Model(). For example, Model::Model() runs in 0.0006s locally and 0.0045s on the server. When you're loading hundreds of models on a page, this is obviously an issue.
I've cross-posted this here from ServerFault, as it could, potentially, be more closely related to CodeIgniter.
From local, the page takes 2-3 seconds to load. From the server, it's 11-15.
Modules on Local, but not remote:
mod_actions
mod_asis
mod_dav
mod_dav_fs
mod_dav_lock
mod_isapi
mod_autoindex_color
Modules on remote, not Local:
mod_authn_dbm
mod_authn_anon
mod_authz_dbm
mod_authz_owner
mod_cache
mod_mem_cache
mod_deflate
mod_authnz_ldap
mod_ldap
mod_mime_magic
mod_expires
mod_unique_id
mod_autoindex
mod_suexec
mod_userdir
libphp5
mod_dtrace
mod_security2
Edit:
I've been moving my benchmarking progressively down, level by level, and have found the largest discrepancy lies within this chunk of code (which is in the CodeIgniter function Model::_assign_libraries, and is called during a model's constructor):
$time = microtime(true) * 1000; // microtime(true) returns a float; bare microtime() returns a string
foreach (array_keys(get_object_vars($CI)) as $key)
{
    if ( ! isset($this->$key) AND $key != $this->_parent_name)
    {
        // In some cases using references can cause
        // problems so we'll conditionally use them
        if ($use_reference == TRUE)
        {
            $this->$key = NULL; // Needed to prevent reference errors with some configurations
            $this->$key =& $CI->$key;
        }
        else
        {
            $this->$key = $CI->$key;
        }
    }
}
if (get_class($this) == 'SeatType')
    echo sprintf('%.5f ms|', (microtime(true) * 1000 - $time));
Locally, this prints around 0.48 ms per iteration. On the cluster, it prints around 3.9 ms per iteration.
I'm beginning to wonder if this problem is outside of Apache/PHP: I copied both the php.ini and httpd.conf files to my local server and, after removing mod_dtrace and pretty much nothing else, I actually saw increased performance. (The above check now prints 0.2 ms locally.)
What we have discovered is that although the SPARC servers look as though they should outperform the Core 2 Quad in my PC, they achieve their performance entirely through threading; any single thread will actually perform worse. The slowdown is likely due to this.

PHP Subversion Setup FTP

I work at a small PHP shop, and I recently proposed that we move away from using our NAS as a shared code base and start using Subversion for source control.
I've figured out how to make sure our dev server gets updated with every commit to our development branch, and I know how to merge into trunk and have that update our staging server, because we have direct access to both of those. My biggest question is how to write a script that will update the production server, which we often have only FTP access to. I don't want to upload the entire site every time... is there any way to write a script that is smart enough to upload only what has changed when we execute it? (We don't want it uploading to the production environment automatically; we want to execute it manually.)
Does my question even make sense?
Basically, your issue is that you can't use Subversion on the production server. What you need to do is keep a copy of your production checkout on a separate (ideally identically configured) server, and copy that to the production server by whatever method works. You could think of this as your staging server, actually, since it will also be useful for final tests on releases before rolling them out.
As far as the copy goes, if your provider supports rsync, you're set. If you have only FTP, you'll have to find some way of doing the equivalent of rsync over FTP. This is not the first time anybody's had that need, so a web search will help you out there. But if you can't find anything, drop me a note and I'll look around a little further myself.
EDIT: I hope the author doesn't mind me adding this, but I think it belongs here. To do something approximately similar to rsync over FTP, look at weex: http://weex.sourceforge.net/. It's a wrapper around command-line FTP that uses a local mirror to keep track of what's on the remote server, so that it can send only changed files. Works great for me.
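If you end up scripting the FTP side yourself, the core idea is small enough to sketch in PHP (the host, credentials, mirror path, and timestamp file are all illustrative assumptions):
<?php
// Sketch: upload only files changed since the last deploy, over plain FTP.
// Assumes a local mirror of the production tree, a file recording the time
// of the previous run, and that the remote directory tree already exists.
$lastRun = (int) @file_get_contents('.last-deploy');
$conn = ftp_connect('ftp.example.com');
ftp_login($conn, 'user', 'password');
ftp_pasv($conn, true);
$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('mirror', FilesystemIterator::SKIP_DOTS)
);
foreach ($it as $file) {
    if ($file->isFile() && $file->getMTime() > $lastRun) {
        // strip the local mirror prefix to get the remote path
        $remote = substr($file->getPathname(), strlen('mirror/'));
        ftp_put($conn, $remote, $file->getPathname(), FTP_BINARY);
    }
}
ftp_close($conn);
file_put_contents('.last-deploy', time());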
It doesn't sound like SVN plays well with FTP, but if you have HTTP access, that may prove sufficient to push changes using svnsync. That's how we push changes to our production servers: we use svnsync to keep a read-only mirror of the repository available.
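For reference, the mirroring itself boils down to two commands (the URLs are placeholders, and the mirror repository must allow revision property changes via its pre-revprop-change hook):
svnsync initialize https://mirror.example.com/repo https://source.example.com/repo
svnsync synchronize https://mirror.example.com/repo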
I use the following solution. Just install the SVN client on your web server, and put something like this behind a privately accessible URL:
<?php
// make sure you have a robot account that can't commit ;)
$username = Settings::Load()->Get('svn', 'username');
$password = Settings::Load()->Get('svn', 'password');
$repos = Settings::Load()->Get('svn', 'repository');
echo '<h1>updating from svn</h1><pre>';
// for security, define an array of folders that you do want to be synced from svn. The rest will be skipped.
$svnfolders = array( 'includes/', 'plugins/', 'images/', 'templates/', 'index.php' => 'index.php' );
$svnfiles = $svnfolders; // sync everything by default
if (!empty($_GET['justthisone']) && array_search($_GET['justthisone'], $svnfolders) !== false) {
    // you can also update just one of the above by passing it in $_GET
    $svnfiles = array($_GET['justthisone']);
}
foreach ($svnfiles as $targetlocation)
{
    echo system("svn export --username={$username} --password={$password} {$repos}{$targetlocation} " . dirname(__FILE__) . "/../{$targetlocation} --force");
}
die("</pre><h1>Done!</h1>");
I'm going to make an assumption here and say you are using a post-commit hook to do your merging/updating of your staging server. This may work, but I would strongly recommend you look into a continuous integration solution. The following are some that I am aware of:
Xinc - http://code.google.com/p/xinc/ (PHP-specific)
CruiseControl - http://cruisecontrol.sourceforge.net/ (wildly popular; PHP integration is made possible with http://phpundercontrol.org/about.html)
Hudson - https://hudson.dev.java.net/ (Java-based, but allows for plugins/extensions)
LFTP is capable of synchronizing directories over FTP.
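A one-liner along these lines (host, credentials, and paths are placeholders) mirrors a local tree to the server, uploading only files that are newer:
lftp -u user,password -e "mirror -R --only-newer ./local /remote; quit" ftp.example.com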
Just an idea:
You could hold a working copy of your project on a host you have access to where Subversion is installed. This single revision reflects the production server's version.
You could then write a PHP script that updates this working copy from SVN and collects all files changed by that update; those are the files you upload.
Such a script could look like this:
$path = realpath( '/path/to/production/mirror' );
chdir( $path );
$start = time();
shell_exec( 'svn update' ); // bring the working copy up to date
$list = array();
$i = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator( $path ),
    RecursiveIteratorIterator::SELF_FIRST
);
foreach ( $i as $node )
{
    // collect every file the update just touched
    if ( $node->isFile() && $node->getCTime() > $start )
    {
        $list[] = $node->getPathname();
    }
    // directories should also be handled
}
$conn = ftp_connect( ... ); // upload everything in $list
// and so on
Just as it came to mind.
I think this will help you:
https://github.com/midhundevasia/deploy
It works well on Windows.
