I'm attempting to rapidly deploy a PHP application under Apache2/PHP on a Unix host. The sysadmin hasn't heard of PHP, so I'm looking to build and install it myself. Unfortunately root access is two weeks of bureaucracy away, so I'm looking for a way to use PHP and its requisite libxml2 without a system-wide install.
You can build Apache and PHP as a non-privileged user. You can have your own prefix paths for the installs, and if the development headers for the necessary libraries are available you can use them. You will hit some issues: as non-root you'll have to start Apache on a port above 1024, and you won't have the system package management available, so updates will likewise have to be built by hand. In short, it's doable, but depending on which Unix you're actually using, it may not be horrendously pleasant. PHP in particular needs a great many libraries to make a usable system (and if you're building from source, the runtime libraries aren't enough; you need the dev header files too).
Good luck.
You can't avoid make install, but you can point it at a tree you own with --prefix.
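For example, something along these lines; the prefix paths, version numbers and port are assumptions, not a recipe:

    # Build Apache into a prefix you own -- no root needed
    cd httpd-2.2.x
    ./configure --prefix=$HOME/apache2
    make && make install

    # Build PHP against that Apache; point it at libxml2's headers if
    # they live somewhere non-standard
    cd ../php-5.x.y
    ./configure --prefix=$HOME/php \
                --with-apxs2=$HOME/apache2/bin/apxs \
                --with-libxml-dir=/usr
    make && make install

    # As non-root you must Listen on a port above 1024: in
    # $HOME/apache2/conf/httpd.conf set e.g. "Listen 8080", then:
    $HOME/apache2/bin/apachectl start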
I have been trying to run HHVM as a standalone web server for multiple domains, and it looks like they are switching to FCGI mode only: https://github.com/facebook/hhvm/wiki/runtime-options
Is that the case, or is running it standalone still possible in production?
Yes, you should use FastCGI mode and let nginx/Apache/whatever deal with being the webserver. HHVM's old built-in webserver has been deprecated for quite a number of releases now -- I can't find the old wiki page on its deprecation, but it's been about six months or so. This more closely mirrors how PHP is often used, and removes a whole host of complicated HHVM-specific configuration mess. Many people are already familiar with making nginx/Apache serve files the way they want, so we can keep the HHVM-specific stuff in HHVM and let the full-featured webservers do what they are good at.
The getting started guide has a very quick, basic intro to getting FastCGI set up if you're using our prebuilt debian/ubuntu packages, and the FastCGI wiki page contains all the details to get set up in some other environment.
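For a rough idea of the nginx side, the relevant bit is just a standard FastCGI handoff; the port here is an assumption (check the wiki page for what your HHVM is actually configured to listen on):

    server {
        listen 80;
        root /var/www;
        index index.php;

        location ~ \.php$ {
            # hand .php requests to HHVM over FastCGI
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }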
I have been compiling PHP for years with the configuration options I want, and I compile the extensions I use from source. Is there an advantage to doing this versus installing it from a package manager like apt-get or yum? I assumed it would also give me a leaner binary. I noticed that there are PHP modules in the repos, such as "php53-gd". What if there weren't a package available for something I wanted, such as cURL for PHP?
I understand the disadvantages of compiling such as needing to download/install dependencies based on my configuration options. I'm not really concerned with that.
So the question is:
Compile PHP on Linux or just use apt-get / yum? Can I get all the things I need from the repos? Does anyone out there still compile it from source?
Any insight is appreciated! Thanks.
I compile from source every time. It's not hard to manage the issues mentioned above when compiling manually. For example, my ./configure settings are saved to a file which is version controlled, so when a new version of PHP is stable and I am ready to make the switch, I download and extract the new source, then run this command:
./configure `sh /path/to/my/configure/php.sh`
Not too difficult. And because it's in version control, I can add notes as to why a module was added or removed.
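For illustration, such a file can be as simple as a script that echoes the flags; these particular flags are just an example, not my actual list:

    # php.sh -- prints my ./configure flags; the file lives in version
    # control, so each flag can carry a note about why it's there
    echo --prefix=$HOME/php
    echo --disable-all       # start minimal...
    echo --enable-json       # ...then add back only what the apps need
    echo --with-curl
    echo --enable-mbstring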
Another benefit of manual compilation is that it lets me keep the PHP footprint as minimal as possible: I pass the --disable-all flag, then add the modules I need. There is a downside to this minimalist approach, though. Recently I needed to install Magento, so I had to recompile with the --enable-hash and --with-mcrypt flags. Even so, it wasn't difficult to add them to the configure file and recompile.
Compiling from source has a few quirks:
There are hundreds of configure parameters and flags, and you might not know which ones are optimal.
If you rely on apt-get's PHP, you can be assured of getting the latest patches and security updates if you set up auto-upgrade on your server.
The php.ini configuration varies a lot; sometimes your OS picks defaults for you that work better with the rest of the system.
Installing extensions like xdebug or other packages is a lot easier with apt (see the example below).
However, it's worth compiling PHP from scratch if you want to learn. And if you don't use some portions of it, you can always disable them at configure time, though that may not make much difference to performance.
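For comparison, the package-manager route is a couple of one-liners (Debian/Ubuntu package names of that era; adjust for your distro):

    # install PHP plus a few extensions straight from the repos
    sudo apt-get install php5 php5-gd php5-curl php5-xdebug
    # patches and security updates then ride along with normal upgrades
    sudo apt-get update && sudo apt-get upgrade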
I compiled PHP only for specific needs, like (in any combination):
- very small disk space, requiring a minimalist PHP build;
- needing only a few specific modules or extensions;
- building for one specific application;
- optimizing performance: compiling on the machine where PHP will run, with options tuned for that system, yields some improvement;
- needing multiple, different PHP versions on the same machine;
- a specialized Linux distro (say, just a busybox), where there is no option but to compile.
But for common usage, say 80% of the cases, it's not worth the time spent compiling; better to use the repository version. I did learn a lot by compiling, though.
Personally, it's a matter of opinion. If you are in a hurry, apt-get it; if you have time to learn and possibly need to reinstall 20 times... compile it.
There are tons of guides out there for compiling PHP. It has a ton of flags for configuration, especially for GD and other libraries. Personally, if this is for learning and development, just get a LAMP stack or use apt-get, especially if you need to use Apache.
I feel the primary reason for compiling is to have the latest binary (stable or nightly); the package managers of most distros are often annoyingly slow in this respect.
The other reason is that it's very common for production systems not to be wholesale-upgraded through the package manager, even when that would be easy: package managers create dependency chains, and you may not want to upgrade those other items. To pick up just one item, compiling is an option that keeps everything else as it is. You of course still have to study the upgrade issues and make sure nothing else will break.
I have installed PHP5 - PHP5-MEMCACHE - PHP-APC.
Can they work together like that? Will loading be fast with these modules?
I tried to use them, but I don't "see" any particular difference; maybe the CPU is used less with these modules. My website doesn't have high traffic, but if I can save resources, all the better!
Thank you
APC keeps a cache of compiled PHP bytecode. Memcache keeps a cache of the variables that you set in it.
So the answer is yes, they can; they're made for different things.
They work together very well; you just need to use them properly:
Memcached is a distributed cache system. What that means in a nutshell is that if you have a cluster of servers, all of them can access the same cache pool.
APC is an opcache and a local cache system. As an opcache, it caches the compiled form of your PHP scripts, so less work is done on each request and the code executes considerably faster. Its other use is as a local cache: you can store values in it and access them from the machine running the code.
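A minimal sketch of the difference in code, assuming both extensions are loaded; the key names, host, port and TTLs here are made up:

    <?php
    // APC user cache: lives in this one server's shared memory
    apc_store('site_config', $config, 300);   // keep for 5 minutes
    $config = apc_fetch('site_config');

    // Memcache: lives on a cache server every node in the cluster can reach
    $mc = new Memcache();
    $mc->connect('127.0.0.1', 11211);          // host/port are assumptions
    $mc->set('user_42', $userData, 0, 300);    // flags = 0, 5-minute expiry
    $userData = $mc->get('user_42');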
Yes, they can work together. Whether they will on a production system is another story...
Personally, I had to give up trying to get the following to work for any extended period of time:
Ubuntu 10.04
NGINX 0.7.65
PHP 5.3.2
php-apc
php5-memcache
It will run for a while, but after stress testing, PHP errors out. I can restart php-fastcgi via /etc/init.d/php-fastcgi and things will roll along for some time more, but it always crashes again sooner rather than later.
I can run either one without issue, but the two together won't cooperate for me. FYI, I tried using binaries (apt-get packages), installing as PECL extensions, and building from source, but all roads led me to the same sad fate. I also tried running the memcache daemon both locally and remotely on my web host, with the same outcome.
I'm working on an MMO game based on JavaScript and PHP, and we use both of them. I can't tell you more, because I am only a frontend developer, but I think that if APC and memcache were bad, we would not be using them.
I recently purchased a dedicated server hosted at iWeb and have it administered by them.
After registration, I asked them to add php_apc and php_imagick to the available libraries. According to them, that is not possible because it is not supported with cPanel.
Apparently I would need to do that myself. Are there any risks in installing those two libraries? What kinds of problems could they cause, in case I have to debug this myself?
Installing ImageMagick on your system in itself won't be too much of a problem.
However, adding support for it and APC to PHP may be a tricky procedure. It may be best for you to stop using the PHP provided by cPanel and install PHP yourself, into /usr/local, running the configure script yourself and compiling in whatever extensions you need. It'll mean that you'll need to keep PHP up to date yourself, but it'll also mean that you won't have all your customisations wiped the next time cPanel upgrades it.
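A rough sketch of what that looks like (the version number and apxs path are assumptions):

    # build a self-managed PHP under /usr/local
    cd php-5.x.y
    ./configure --prefix=/usr/local --with-apxs2=/usr/local/apache/bin/apxs
    make && make install

    # add the two extensions via PECL, then enable them in php.ini
    pecl install apc imagick
    echo 'extension=apc.so'     >> /usr/local/lib/php.ini
    echo 'extension=imagick.so' >> /usr/local/lib/php.ini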
If there are better suggestions I'd also be interested to hear them.
So my group is trying to set up a shared-server environment for various and sundry web services. I think we've settled, for PHP scripts, on setting disable_functions and disable_classes site-wide in php.ini, plus php_admin_value to force open_basedir in each app's httpd.conf; and on Passenger's user switching for Ruby scripts.
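In concrete terms, that's roughly the following (the function list and paths are placeholders, not a recommendation):

    ; php.ini -- applies site-wide
    disable_functions = exec,passthru,shell_exec,system
    disable_classes   = SomeDangerousClass

    # httpd.conf -- per-application vhost; open_basedir can't be
    # overridden by the app because it's set with php_admin_value
    <VirtualHost *:80>
        DocumentRoot /srv/www/app1
        php_admin_value open_basedir "/srv/www/app1:/tmp"
    </VirtualHost>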
We still need to find something for Python, though. Passenger does support Python, but not per-application security for specific sub-directories (it's all or nothing at the domain level).
Any suggestions?
(And if any of the previous doesn't make sense - well, I'm the guy who's supposed to set up the python support, not the guy who set up the php or ruby support, so there's still some "and then some magic happens" steps in there from my perspective).
Well, there is a system called virtualenv which allows you to run Python in a sort of isolated environment, and to create/load/tear down these environments on the fly. I don't know much about it, but you should take a serious look at it; here is the description from its web page (just Google it and you'll find it):
The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.4/site-packages (or whatever your platform's standard location is), it's easy to end up in a situation where you unintentionally upgrade an application that shouldn't be upgraded.
Or more generally, what if you want to install an application and leave it be? If an application works, any change in its libraries or the versions of those libraries can break the application.
Also, what if you can't install packages into the global site-packages directory? For instance, on a shared host.
In all these cases, virtualenv can help you. It creates an environment that has its own installation directories, that doesn't share libraries with other virtualenv environments (and optionally doesn't use the globally installed libraries either).
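In practice it boils down to a few commands; the paths are assumptions, and --no-site-packages was the flag of that era (it later became the default):

    # one isolated environment per application
    virtualenv --no-site-packages /srv/envs/app1
    # packages installed here don't touch the global site-packages
    /srv/envs/app1/bin/pip install SomeLibrary
    # run the app with that environment's interpreter
    /srv/envs/app1/bin/python app.py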