How is the Twelve-Factor App manifesto applied to PHP projects?

I just read the Twelve-Factor App, which looks like a pretty comprehensive set of rules to apply to a web-based application. It uses Python or Rails in its examples, but never PHP... I was wondering which factors of the manifesto can be applied to PHP projects, and how?
Thanks

Short answer:
All points apply to PHP, as the twelve-factor manifesto is aimed specifically at web apps.
PHP has a very hard time complying with twelve-factor, in particular on items 2, 7, 8, 9 (as a side effect of 7 and 8) and 12 (partially). Actually, that's the first really well-grounded argument I have heard in the whole "PHP sucks" rant that is common in the Ruby and Python communities (don't get me wrong: I think Ruby and Python are better languages, but I don't hate PHP, and I definitely hate the "my language is better" rants).
That being said, it may be that your PHP project is not a web app or SaaS but just a simple website, so you may deem that twelve-factor is not a requirement.
Long answer: A point-by-point analysis would be:
Codebase: not an issue
Dependencies: the way PEAR works goes quite against this point, as PEAR dependencies are installed system-wide and you usually don't have a consolidated manifest declaring them. It is also usual for a PHP setup to require you to add packages to your OS installation to make some libraries available. Finally, AFAIK there isn't a tool in PHP that provides isolation like "virtualenv", "rbenv" or "rvm" (or if it exists, it is not popular in the PHP community). Edit: Composer (http://getcomposer.org/) seems to do the right thing regarding dependencies; it still doesn't isolate the PHP version, but for all the rest it should be fine.
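For illustration, a minimal composer.json manifest could look like the following sketch (monolog/monolog is just a common example package, not something this answer depends on):

    {
        "require": {
            "php": ">=5.3.0",
            "monolog/monolog": "1.0.*"
        }
    }

Running composer install resolves the declared packages into a per-project vendor/ directory and generates vendor/autoload.php, which is essentially the "declare and isolate dependencies" model that factor 2 asks for.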
Config: some PHP frameworks are not very well suited to this, but there are certainly others that do it well, so it's not a flaw of the platform itself.
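Reading configuration from the environment, as the manifesto prescribes, is straightforward even in plain PHP. A minimal sketch (DATABASE_URL is an assumed variable name, borrowed from the manifesto's examples):

    <?php
    // Factor III: read config from the environment, not from a file in the repo.
    $databaseUrl = getenv('DATABASE_URL');
    if ($databaseUrl === false) {
        throw new RuntimeException('DATABASE_URL is not set');
    }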
Backing Services: shouldn't be much of an issue, although you may have to hack some frameworks a little in order to manage the services as attached resources.
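Continuing the sketch above, a database URL taken from the environment can be turned into a connection, which is all "attached resource" really means here (this assumes a DSN-style value like mysql://user:pass@host:3306/dbname):

    <?php
    // Treat the database as an attached resource located by a URL.
    $parts = parse_url(getenv('DATABASE_URL'));
    $dsn = sprintf(
        'mysql:host=%s;port=%d;dbname=%s',
        $parts['host'],
        isset($parts['port']) ? $parts['port'] : 3306,
        ltrim($parts['path'], '/')
    );
    $db = new PDO($dsn, $parts['user'], $parts['pass']);

Swapping the local MySQL for a hosted one then becomes a config change, not a code change.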
Build, release, run: this is totally applicable to PHP, as it definitely doesn't concern only "compiling". One may argue that several projects and hosting platforms in the PHP community abuse direct FTP, etc., but that's not a flaw of PHP itself, and there is no real impediment to doing things right on this item.
Processes: this definitely concerns PHP. PHP is quite capable of running purely stateless processes (the emphasis is on the word stateless), and several frameworks actually make this easy for you. For example, Symfony provides out-of-the-box session management with memcached or database storage instead of regular file-based sessions.
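Even without a framework, PHP's native session machinery can be pointed at an external store. A sketch assuming the memcached extension is installed (host and port are placeholders):

    <?php
    // Keep the process stateless: sessions go to memcached, not local files.
    ini_set('session.save_handler', 'memcached');
    ini_set('session.save_path', 'sessions.example.com:11211');
    session_start();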
Port binding: oversimplifying it, this point basically demands that the actual web server be embedded in the app, with a reverse proxy in front, instead of the server being a separate component. This puts PHP in a very hard position to comply. Although there are ways to do this (see the reply about using PHP as FastCGI), it is definitely neither the most common nor the best-supported way to serve a PHP app, as it is in other communities (e.g. Ruby, Node.js).
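Worth noting: since PHP 5.4 the CLI ships a built-in web server, which at least demonstrates the port-binding model, although it is documented as a development tool rather than a production server:

    php -S 0.0.0.0:5000 -t public/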
Concurrency: this is not impossible in PHP. However, several elements put PHP in a hard position to comply: namely, the lack of good support for items 6 and 7; the fact that PHP's API for spawning new processes isn't really very nice to work with; and especially the way Apache's mod_php handles its workers (which is by far the most common deployment scheme for PHP).
Disposability: if you use the right tools, there is nothing inherent in PHP that prevents you from creating fast, disposable, tidy web and worker processes. However, since the underlying process model is hard to implement per items 7 and 8, item 9 becomes a bit cumbersome as a side effect.
Dev/prod parity: this is very platform-agnostic, and I would say one of the hardest to get right. PHP is no exception to this rule, but it doesn't have any particular impediment either. Actually, most of the tools named in the manifesto can be applied to a PHP project.
Logs: keeping your app agnostic of the execution environment's log system is totally doable in PHP.
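The factor asks apps to write their event stream, unbuffered, to stdout and let the environment route it. In PHP that can be as simple as this sketch (the log line format is illustrative):

    <?php
    // Factor XI: one event per line to stdout; the execution environment
    // decides where the stream goes (console in dev, log router in prod).
    $stdout = fopen('php://stdout', 'w');
    fwrite($stdout, date('c') . " user=42 action=signup\n");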
Admin processes: the most important flaw of PHP regarding this point is its lack of a REPL shell. Regarding the rest, several frameworks such as Symfony allow you to write admin tasks (e.g. Doctrine-based database migrations) and run them in the same environment as your "regular" web environment.
Since the PHP community is evolving, it may be that it has already righted some of the wrongs mentioned.

"Build, release, Run: Applicable to compiled code which is not the case in PHP. So this point is not something you need to look at."
I don't claim any authority on this 12 factor stuff, but my read of that section is that the author would disagree. It's not just about compiling, it's about managing dependencies both in the small (the snapshot of the code) and in the large (any libraries the code uses).
Imagine you're a new dev and they say, "Okay, this is a custom php app, so...
a) The custom code is really two subprojects, which are in repo A and repo B.
b) You'll need to create a directory layout like so, and then
c) check the code for each subproject out into this subdirectory and this subdirectory.
d) You'll also need these three open source PHP libraries:
version 3.1 of library Foo,
version 2.3 of library Bar, and
version 5.6 of library Bat.
e) download them from their home project sites and unpack them, then copy them into this directory, that directory, and the other directory.
f) then you'll need to set these configurations in the external library,
g) and these configs in our two custom code projects.
h) once that's all done, tar/gzip it all up, upload it up to the QA server and untar it into htdocs.
There's no compiling going on in that set of steps, but you can bet there's a lot of building.
Getting all of that set up and working is the build step.
Using tar/gzip to take a snapshot of the working build is the release step.
SCP'ing/unpacking it into the QA server's htdocs directory is the runtime step.
You might say that some of those steps above are unnecessary - the libraries should be deployed at the system level and merely imported. From the 12factors.net site I'd say the author prefers you to import them uniquely and individually for the app. It sidesteps dependency versioning problems at the cost of more disk space (not that anybody cares). There are more hassles in managing all those dependencies as local-to-the-app, but then that's the point of the build/release/runtime scheme.

It might have changed since you read it - there are a few PHP examples now, although a few of them seem like negations of the twelve-factor concept.
One of the ways that normal mod_php sites violate twelve-factor comes with VII. Port binding. From the manifesto:
The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port.
However, apache/php can be coaxed into this worldview with something like this:
https://gist.github.com/1398498
Not perfect, but it works for the most part. To test it out, install foreman:
gem install foreman
then run it in the directory you cloned the gist into:
foreman start
then visit
http://localhost:5000/foo
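For reference, foreman drives everything from a Procfile that maps process names to commands. The linked gist's exact contents aren't reproduced here, but with a modern PHP (5.4+) the web process could be as simple as:

    web: php -S 0.0.0.0:$PORT

foreman sets $PORT itself, starting at 5000 for the first web process, which is why the URL above uses that port.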

DO NOT TAKE THIS POST AS A REFERENCE, THIS WAS WRITTEN IN 2011, MANY THINGS HAVE CHANGED SINCE THEN... THE WORLD OF PROGRAMMING IS IN CONSTANT EVOLUTION. A POINT OF VIEW OF 2011 IS NOT NECESSARILY STILL VALID IN 2018.
I just read a few lines of each point, and here is my analysis of the document:
Codebase: Everyone, PHP or not, should have a codebase, even for little projects.
Dependencies: PHP uses includes and code libraries that you should always be able to port easily by simply copying the code to a server. Sometimes you use PEAR, and then if the server doesn't support it, you have to copy and install PEAR manually. I use Zend Framework most of the time, so it's just a matter of copying the framework's code along with the FTP upload.
Config: It is common for php apps to have a writable config file that you store configurations into. Where you store it is your choice as a developer, but it is usually either at the root of your app or in a settings/config folder.
Backing Services: PHP does use backing services most of the time, such as MySQL. Other common services, depending on your app, are Twitter and Facebook. You use their APIs to communicate with them and store/retrieve data to work with.
Build, release, Run: Applicable to compiled code which is not the case in PHP. So this point is not something you need to look at.
Processes: HTTP is stateless and is served; thus, you usually have no process apart from the web server. This is not entirely true, as a web service may be bundled with applications you package with it or create for it. But for the sake of standards, you usually don't have to use processes in the web world.
Port binding: PHP doesn't do port binding at all, because it is not a process that binds to an address; Apache, Nginx or Lighttpd does that for you. Reading #6/7 makes me understand that a website could never be a twelve-factor app.
Concurrency: Again, this point deals with processes, which do not apply to PHP web pages. This point applies to the server serving the content.
Disposability: This point discusses how fast a process should be. Obviously, PHP websites not being a process, this shouldn't apply, but always note that your website should execute pages as fast as possible... Always look at this or that block of code and check whether it is optimized.
Dev/Prod Parity: This point is crucial in any app development. You never want a large gap between two app versions, or upgrading can become a hassle. Furthermore, never create client-specific versions of an app. Always find ways to allow your app to be configured/customized at the template level, so you can keep your app as close as possible across all installed versions everywhere.
Logs: Logs are always a good thing to have; they allow you to follow the flow of your code without having to output it to the screen. My favorite tactic is to "tail -f logfile" in a Linux console and watch what happens as I execute my code.
Admin processes: Not applicable; in PHP you don't have processes, but you do have pages that you can secure with usernames and passwords.

Related

Is there a good method to distribute PHP based software in a hardware using Docker? [duplicate]

We all know situations where you cannot go open source and freely distribute software, and I am in one of those situations.
I have an app that consists of a number of binaries (compiled from C sources) and Python code that wraps it all into a system. This app used to work as a cloud solution, so users had access to app functions via the network but no chance to touch the actual server where binaries and code are stored.
Now we want to deliver the "local" version of our system. The app will be running on PCs that our users will physically own. We know that everything could be broken, but at least want to protect the app from possible copying and reverse-engineering as much as possible.
I know that Docker is a wonderful deployment tool so I wonder: is it possible to create encrypted Docker containers where no one can see any data stored in the container's filesystem? Is there a known solution to this problem?
Also, maybe there are well known solutions not based on Docker?
The root user on the host machine (where the docker daemon runs) has full access to all the processes running on the host. That means the person who controls the host machine can always get access to the RAM of the application as well as the file system. That makes it impossible to hide a key for decrypting the file system or protecting RAM from debugging.
Using obfuscation on a standard Linux box, you can make it harder to read the file system and RAM, but you can't make it impossible or the container cannot run.
If you can control the hardware running the operating system, then you might want to look at the Trusted Platform Module which starts system verification as soon as the system boots. You could then theoretically do things before the root user has access to the system to hide keys and strongly encrypt file systems. Even then, given physical access to the machine, a determined attacker can always get the decrypted data.
What you are asking about is called obfuscation. It has nothing to do with Docker and is a very language-specific problem; for data you can always do whatever mangling you want, but while you can hope to discourage the attacker, it will never be secure. Even state-of-the-art encryption schemes can't help, since the program (which you provide) has to contain the key.
C is usually hard enough to reverse engineer, for Python you can try pyobfuscate and similar.
For data, I found this question (keywords: encrypting files game).
If you want a completely secure solution, you're searching for the 'holy grail' of confidentiality: homomorphic encryption. In short, you want to encrypt your application and data, send them to a PC, and have this PC run them without its owner, OS, or anyone else being able to snoop on the data.
Doing so without a massive performance penalty is an active research area. There has been at least one project that managed this, but it still has limitations:
It's Windows-only.
The CPU has access to the key (i.e., you have to trust Intel).
It's optimised for cloud scenarios. If you want to install this on multiple PCs, you need to provide the key in a secure way (i.e. just go there and type it in yourself) to one of the PCs you're going to install your application on, and this PC should be able to securely propagate the key to the other PCs.
Andy's suggestion on using the TPM has similar implications to points 2 and 3.
Sounds like Docker is not the right tool, because it was never intended to be used as a full-blown sandbox (at least based on what I've been reading). Why aren't you using a more full-blown VirtualBox approach? At least then you're able to lock up the virtual machine behind logins (as much as a physical installation on someone else's computer can be locked up) and run it isolated, with encrypted filesystems and the whole nine yards.
You can either go lightweight and open, or fat and closed. I don't know that there's a "lightweight and closed" option.
I have exactly the same problem. What I have been able to discover so far is below.
A. Asylo (https://asylo.dev)
Asylo requires programs/algorithms to be written in C++.
The Asylo library is integrated with Docker, and it seems feasible to create a custom Docker image based on Asylo.
Asylo depends on several not-so-popular technologies like protocol buffers and Bazel. To me it seems that the learning curve will be steep, i.e. the person creating the Docker images/programs will need a lot of time to understand how to do it.
Asylo is free of charge.
Asylo is brand new, with all the advantages and disadvantages that brings.
Asylo is produced by Google, but it is NOT an officially supported Google product according to the disclaimer on its page.
Asylo promises that data in the trusted environment can be protected even from users with root privileges. However, there is a lack of documentation, and currently it is not clear how this could be implemented.
B. Scone (https://sconedocs.github.io)
It is bound to Intel SGX technology, but there is also a simulation mode (for development).
It is not free. Only a small set of functionalities is available free of charge.
It seems to support a lot of security functionality.
It is easy to use.
They seem to have more documentation and instructions on how to build your own Docker image with their technology.
For the Python part, you might consider using PyInstaller; with appropriate options it can pack your whole Python app into a single executable file, which end users can run without a Python installation. It effectively runs a Python interpreter on the packaged code, but it has a cipher option which allows you to encrypt the bytecode.
Yes, the key will be somewhere around the executable, and a very savvy customer might have the means to extract it, thus unraveling not-so-readable code. It's up to you to decide whether your code contains some big secret you need to hide at all costs. I would probably not do it if I wanted to charge big money for any bug-fixing in the deployed product. I could use it if the client has good compliance standards and is not a potential competitor, nor expected to pay for more licenses.
While I've done this once, I honestly would avoid doing it again.
Regarding the C code: if you compile it into executables and/or shared libraries, these can be included in the executable generated by PyInstaller.

Alternatives to Chef/Fabric/Puppet for Simple LAMP Development?

I've finally committed to really learning the software design process correctly in order to advance my skills and grow my business. This means embracing version control (git), setting up a development-staging-production environment and keeping these environments as similarly configured as possible.
I'm getting really caught up on the last step: picking a solution to automate and sync my server settings. I've looked into Chef, Puppet & Fabric, but for my purposes they all seem overly complex. I am:
Developing a small web app on a single server
Will be developing in a LAMP environment with intermediate PHP & UNIX skills
Won't be heavily modifying environmental variables (primarily php.ini, apache configs)
I would appreciate any recommendations on solutions that would be easier to implement than mastering the complex Chef environment or learning Python to use Fabric. I can do this if necessary, but am hoping there is a more basic / elegant solution given my very simplistic needs.
In the company I work for, where we have more or less the same needs, we just set up a couple of bash scripts.
Basically, they set up the Git repo (local, and a distant bare one), install Apache2 and PHP5 (and some PHP extensions), configure Apache's vhost and php.ini, and install frameworks and bootstrap the project if needed (for us it's Symfony).
We have another script that fires up an EC2 instance, runs the previously mentioned script, launches the test suite, and downloads the resulting reports.
Chef & Puppet work well, but they're a little overkill unless you have many projects running at the same time.
Edit:
If you want to run a script after committing/pushing (like deploying to a staging/pre-production server, launching your continuous integration build, etc.), there's a way to do this using Git hooks (post-receive); see Deploy a project using Git push.
I'd strongly recommend having a look at Ansible for this purpose.
It is a full solution, which means it can handle configuration management, deployment and so forth. However, in my experience it is far easier to learn than Chef or Puppet, as you can start by doing basic shell command execution and move on from there.
There's no need to learn a new language; all the configuration and specification you would be doing is done in YAML, which is just structured text.
Overall, Ansible will give you much of what Chef or Puppet will at your level and hopefully you will find it more straightforward to get started with.
If you're serious about professional web development, I would strongly recommend taking a second look at Chef. It works really great for us (me and my co-workers). I know it may seem like overkill, but in my opinion, the advantages far outweigh the learning curve. It's a lot more work to try to maintain different server environments (and local development environments among co-workers). Plus, Chef makes it super easy to install Apache, PHP, and MySQL since there are already cookbooks/recipes available.
Also, make sure you check out Vagrant. It works with Chef and VirtualBox, making it really simple to set up a local development environment.
Also, if you're working on a Zend Framework project, you may be interested in the Zend Framework Boilerplate project which is an all-in-one LAMP development environment which uses Vagrant.
For Simple LAMP Development you don't need anything at all. It is not that simple development can't be automated; it's that for simple development it is usually sufficient (easier and faster) to write some scripts yourself (even in Python).
When you realise that your custom scripts are hard to maintain or not enough, you are ready for tools like Fabric (shell command automation) and/or Chef/Puppet (server configuration management). They are not easy to learn, because the system interconnections they manage are not simple (which is not your case, obviously).
For your single server, I'd say a README + Mercurial (Git if you need GitHub) plus some symlinking should be more than enough to manage configs, sources and server setup. For automation and deployment, just write a script that uploads your site via FTP/SSH, restarts the server, executes tests, whatever you decide. That is Simple Deployment for Simple Development. If you'd rather avoid writing PHP for that and don't know shell, then Fabric will save you some time.
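As an illustration of such a script in PHP itself, here is a bare-bones FTP upload using the standard ftp_* functions (host, credentials and paths are placeholders):

    <?php
    // Minimal deploy: push one file to the server over FTP.
    // A real script would loop over a list of changed files.
    $conn = ftp_connect('ftp.example.com') or die("Could not connect\n");
    ftp_login($conn, 'deploy_user', getenv('FTP_PASSWORD'));
    ftp_pasv($conn, true); // passive mode plays nicer with firewalls
    ftp_put($conn, '/htdocs/index.php', 'build/index.php', FTP_ASCII);
    ftp_close($conn);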
Once your scripts are ready and you know your problems, you can learn Chef/Puppet in the background to see if it is worth complicating things for your environment.
If you choose to try Chef, don't start with Chef Solo; it's a poisonous snack for a starter. Use Hosted Chef + client, which is free for your setup. Can't say anything about Puppet; I chose Chef because my mom said I need to know how to cook.

PHP Code Deployment Tips

In the past, I have been developing in a very amateurish fashion, meaning I had a local machine where I developed and tested code and a production machine to which I copied the code when I was done. Recently I modified this slightly to where I developed locally, checked the code into SVN and then updated the production machine through SVN.
Now I would like to start a new project and improve my workflow. Ideally I had the following in mind:
Have one or more local dev environments
Develop and test on local machine(s)
Use SVN (or Git) as code repository
Use a build tool to set up new environments (either dev, staging or production) and deploy code
Since I am not very familiar with this process, I am looking for suggestions on how to best set this idea up and the tools to use, especially when it comes to the build tools. I was looking into Ant and Phing (possibly make), but I am so new to this that I would really like to get some guidance. Are there any good tutorials or books about PHP deployment, especially for beginners? What I am especially interested in are the following topics:
Deployment to different types of servers with different settings (e.g. dev uses different db, db passwords, PHP error reporting than production or staging).
Deployment that automatically pulls code from SVN.
Deployment that temporarily sets a "Maintenance" page for production environment.
Once I mastered the above, maybe even do some testing in the build process.
I know my question might sound quite confused... I admit, I am new to this and might be a little off the target in what I really need. That's why any help is greatly appreciated.
I would suggest making your testing deployment strategy a production-ready install script, since you're going to need one of those eventually anyway.
A few tips that may seem obvious to some, but are worth pointing out:
Your config file saved in your VCS should be a template, and should be named differently from the file that will eventually contain the actual settings. E.g. config-dist.php or config-sample.conf or sample/config-mysql.php or something along those lines. Otherwise you will end up accidentally checking in a server-specific configuration file over your template.
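For instance, a template along these lines (all names here are illustrative):

    <?php
    // config-dist.php -- commit this template; copy it to config.php on each
    // server and fill in real values. Keep config.php itself out of version
    // control (svn:ignore / .gitignore).
    return array(
        'db_host' => 'localhost',
        'db_name' => 'CHANGE_ME',
        'db_user' => 'CHANGE_ME',
        'db_pass' => 'CHANGE_ME',
    );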
For PHP deployment, anticipate that some users will not be able to run server-side scripts through any mechanism other than the web server itself. A PHP-based installer is almost non-negotiable.
You should include a consumer-friendly update mechanism, and for that, wordpress is a great example of a project to emulate. A PHP script can (a) download the latest build, (b) use the ftp functions to update your application's files, and (c) execute an update script which makes the appropriate changes to the database, etc.
For heaven's sake don't do like [redacted] and make your users download and install separate patches for each point release. Have them download the latest (final) release, which contains all the updates to date and applies the correct ALTER TABLE statements in sequence.
Whether the files are deployed via SVN or through FTP, the install/update mechanism should be the same: get the latest files, run the update script. The updater uses the version listed in the PHP script and the version listed in the DB, and uses that knowledge to apply the appropriate DB patches in order. As for how to generate those patches, there are other questions here that you can refer to for more info.
As for the "Maintenance" page, just use the version trick mentioned above to trigger it (compare the version in the DB against the version in the PHP code). It's also useful to be able to mark a site as "down" to the public but make it visible to admins (like Joomla does), which you can trigger through database or filesystem flags.
As for automatically pulling code from SVN, I'd say you're better off with either a cron script or with commit triggers than working that into your application, since it wouldn't be relevant to end users.
This isn't exactly part of your question, but it's relevant:
If you go into distributing code intended for a wide audience, I would advise you to go with building and distributing OpenSSL-signed PHAR packages. You can distribute them over HTTP without a problem, and because they're OpenSSL-signed, you're also mitigating the risk of man-in-the-middle attacks and protecting end-users/customers/clients from someone injecting code if you want to setup an automatic or one-click update.
There's a set of tools I've contributed to in the past that work great for this, but you'll either need PHP 5.3, or you'll need PHP 5.2 with PHAR installed via PECL. https://github.com/koto/phar-util
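Creating and signing such a package uses PHP's built-in Phar class; a sketch (requires phar.readonly=0 in php.ini, and all paths are placeholders):

    <?php
    // Build a PHAR from the source tree and sign it with an OpenSSL key.
    // Consumers verify the signature against app.phar.pubkey, which must be
    // distributed next to the archive.
    $phar = new Phar('app.phar');
    $phar->buildFromDirectory(__DIR__ . '/src');
    $phar->setDefaultStub('index.php');
    $phar->setSignatureAlgorithm(
        Phar::OPENSSL,
        file_get_contents(__DIR__ . '/private.pem')
    );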
As far as testing goes, PHPUnit is the de facto standard.
If you are interested in using Git, then you should check out this build system from CodeMeme. From what you described, it sounds like it would be a good fit. You can add it to any project as a submodule, and with the included code you can tailor a build script that will deploy to multiple servers in multiple environments. It uses Git to build the code for deployment, but unfortunately SVN is not supported.
https://github.com/CodeMeme/Phingistrano

Setting up a deployment / build / CI cycle for PHP projects

I am a lone developer most of my time, working on a number of big, mainly PHP-based projects. I want to professionalize and automate how changes to the code base are handled, and create a Continuous Integration process that makes the transition to work in a team possible without having to make fundamental changes.
What I am doing right now is, I have a local test environment for every project; I use SVN for each project; changes are tested locally, and then transferred to the on-line version, usually via FTP. API documentation is generated manually from the source code; Unit tests are something I am getting into slowly, and it's not yet part of my daily routine.
The "build cycle" I am envisioning would do the following:
A changeset gets checked into SVN after having been tested locally.
I start the build process. The SVN HEAD revision gets checked out, modified if necessary, and made ready for upload.
API Documentation gets generated automatically - if I haven't set it up in detail yet, using a default template, scanning the whole code base.
The new revision is deployed to the remote location via FTP (including some directory renaming, chmodding, importing databases, and the like). This is something I already like phing for very much, but I'm open to alternatives of course.
Unit tests residing in a predefined location are run. I am informed about their failure or success using E-Mail, RSS or (preferably) HTML output that I can grab and put into a web page.
(optionally) an end-user "changelog" text file in a pre-defined location gets updated with a pre-defined part of the commit message ("It is now possible to filter for both 'foo' and 'bar' at the same time"). This message is not necessarily identical to the SVN commit message, which probably contains much more internal information.
Stuff like code metrics, code style checking and so on is not my primary focus right now, but in the long run it certainly will be. Solutions that bring this out of the box are very kindly looked upon.
I am looking for
Feedback and experiences from people who are or were in a similar situation, and have successfully implemented a solution for this
Especially, good step-by-step tutorials and walkthroughs on how to set this up
Solutions that provide as much automation as possible, for example by creating a skeleton API, test cases and so on for each new project.
and also
Product recommendations. What I know so far is phing/ant for building, and phpUnderControl or Hudson for the reporting part. I like them all as far as I can see, but I have of course no detailed experience with them.
I am swamped with work, so I have a strong inclination towards simple solutions. On the other hand, if a feature is missing, I'll cry about it being too limited. :) Point-and-click solutions are welcome, too. I am also open to commercial product recommendations that can work with PHP projects.
My setup
I am working on Windows locally (7, to be exact) and most client projects are run on a LAMP stack, often on shared hosting (= no remote SSH).
I am looking for solutions that I can run in my own environment. I am ready to set up a Linux VM for this, no problem. Hosted solutions are interesting for me only if they provide all of the aspects described, or are flexible enough to interact with the other parts of the process.
Bounty
I am accepting the answer that I feel will give me the most mileage. There is a lot of excellent input here, I wish I could accept more than one answer. Thanks everyone!
I've been through buildbot, CruiseControl.NET, CruiseControl and Hudson. Although I really liked CruiseControl*, it was just too much of a hassle with really complex dependency cases. buildbot is not easy to set up, but it's got a nice aura (I just like Python, that's all). But Hudson won over the former three because:
It's just easy to set up
It's easy to customize
It looks good and has nice overview functionality
It has point-and-click updates, for itself and all installed plugins. This is a really nice feature that I appreciate more and more
Caveat: I have only ever used Linux as the base for the build servers mentioned above (CC.NET ran on Mono), but they should all, according to the docs, run cross-platform.
Setting up a hudson server
Prerequisites:
Java (1.5 will serve you just fine)
Read access to the subversion server (I have a separate account for the hudson user)
From here, it's just:
java -jar hudson.war
This will run a small server instance right off your console, and you should be able to browse the installation at http://localhost:8080, provided you don't have anything else running on that port already (you can specify another port by passing the --httpPort=ANOTHER_HTTP_PORT option to the above command) and everything went well in the 'installation' process.
If you go to the available plugins directory (http://localhost:8080/pluginManager/available), you'll find plugins for supporting your above mentioned tasks (subversion support is installed per default).
If that has whet your appetite, you should install a Java application server, such as Tomcat or Jetty. Installation instructions are available for all major application servers.
Update: Kohsuke Kawaguchi has built a Windows service installer for Hudson.
Setting up a project in hudson
The links in the following walk-through assumes a running instance of hudson located at http://localhost:8080
Select new Job (http://localhost:8080/view/All/newJob) from the menu on the left
Give the job a name and tick Build a free-style software project on the list
Pressing 'ok' will take you to the configuration page of the job. All the options have a little question mark besides them. Pressing this will bring up a help text regarding the option.
Under the option group 'Source Code Management' you would be using Subversion. Hudson accepts both URL access and local module access
Under the option group 'Build Triggers', you would use 'Poll SCM'. The syntax used here is that of cron, so polling the subversion repository every 5 minutes would be */5 * * * *
The process of building the project is specified under the option group 'Build'. If you already have an ant build file with all the targets you need, you're in luck. Just choose 'Invoke ant' and write the name of the target. The option group supports maven and shell commands as well out of the box, but there is also a plugin available for phing.
Tick off additional build actions in 'Post Build Actions', such as e-mail notifications or archiving of build artefacts.
For setting up processes for which Hudson has no plugins, you can either call them directly through a shell script from within the build setup, or you can write your own plugin
Pitfalls:
If you have it produce build artefacts, remember to have hudson clean up after itself in regular intervals.
If you have more than 20 projects set up, consider not displaying their build status as the default main page on hudson
Good luck!
The term you are looking for is "continuous integration."
Here is an example of someone who uses GIT + phpundercontrol: http://maff.ailoo.net/2009/09/continuous-integration-phpundercontrol-git/
CruiseControl (which is a CI server) can use hosted SVN/Git as a source, so you can even use it with GitHub or Beanstalk or something else.
Then you can integrate that with the following kind of software:
PHPUnit
php-codesniffer
phpdocumentor
PHP Gcov
PHPXref
Yasca
etc.
You could also try this hosted CI: http://www.php-ci.net/hosting/create-project
Keep in mind though, that those tools need custom support if you integrate them yourself.
Have you also thought about project management and patch management?
You can use Redmine for project management. It has integrated continuous integration support, but only on the client side (not as a CI server).
Try using a hosted SVN/GIT/etc. solution, because they will cover your backups and keep their servers running, so you can focus on development.
For a tutorial on how to setup Hudson, see: http://toptopic.wordpress.com/2009/02/26/php-and-hudson/
I use Atlassian's Bamboo continuous integration server for my main PHP project (along with their other products such as FishEye (repository browsing), JIRA (issue tracking) and Clover (code coverage)).
It supports SVN and now supports Git, and it has a great user interface. It is available for Linux, Windows and Mac, and it can run standalone on its own Tomcat server, which is great for people (like me) who don't like to take days to set up their tools. Although it may look expensive, being a lone developer myself I purchased the starter kit license for $10 ($10 per product). This is great for small teams, and it is worth a look.
PHPTesting PHPCI: this is a nice continuous integration server built in PHP.
Plus, it's free and open source. :)
It has a number of plugins.
PHPCI includes integration plugins for:
Atoum
Behat
Campfire
Codeception
Composer
Email
Grunt
IRC
PHP Lint
MySQL
PDepend
PostgreSQL
PHP Code Sniffer
PHP Copy/Paste Detector
PHP Spec
PHP Unit
Shell Commands
Tar / Zip
I am mostly a sysadmin, but sometimes I code PHP as well. As a side project I created some scripts that make it simple and painless to set up a full-blown PHP CI environment using Jenkins. It also runs a sample project for you, so you can see how each build step is configured.
If you want to try it out all you need is a Debian/Ubuntu box and shell access.
http://yauh.de/articles/379/setting-up-a-ci-environment-for-php-projects-using-jenkins-ci
Update: to add some content to my answer:
You can simply set up Jenkins CI for PHP using Ansible. Since v1.4 it supports roles, which you can download from the galaxy.ansibleworks.com community site and which will do the heavy lifting for you. The role is called jenkins-php.
I would suggest using Jenkins (http://jenkins-ci.org/); it's free and it's open source.
It's pretty straightforward to set up, works on multiple platforms and integrates well with other continuous integration tools like SonarQube (+ SQUALE) to measure technical debt and Thucydides for test automation.
I would highly suggest using Git or GitHub for version control instead of SVN. From my point of view it's just a better version control system that will help you scale your development efforts later.
Since you're working mostly with PHP projects, there are some other tools you can use:
PHPUnit - For unit testing
PHP CodeSniffer - Check for coding standards
PHP Depend - Shows your PHP code dependencies
XDEBUG - For performance testing
All of these tools can be triggered by a Jenkins job and help with the quality and performance of your code.
Good luck and Enjoy!
I do not use many of the products, or even the types of products, that you use, but I will give you my experience.
I run a TEST environment in parallel with my PROD environment. I have no local testing per se. If it is too hard to get something up into a real TEST environment, then I fix my build process. I don't see the point in testing locally, as the environments are different. Update: The only thing I do locally is run "php -l" before I upload anything. Stops the stupid mistakes.
The build process works with whatever is in the current workspace, which includes uncommitted code. This is not everyone's cup of tea, but I am going to TEST very often. Everything gets committed before going to PROD.
Part of my build process (similar to yours) creates two META files. One contains the last (typically) 100 changes and also gives me the current changelist number; it shows me what changes are installed. The other contains the CLIENTSPEC (in Perforce terms), which shows me exactly what branches were used in this build. Together these give me reproducible builds.
I do not build straight to the target environment, but to a staging area on the server. I use SSH, so this makes sense. This gives me a few advantages. Most importantly, it avoids dying halfway through a large upload. It also gives me a place to store META files, and all the build files are automatically archived (so I can go straight back to any build). The script also logs the update (so there is an entry in the log stream and I can see pre- and post-) and kicks all daemons (I use daemontools, so "svc -t"). All of this is better done on the target machine.
One other issue is DB changes. I keep a master script of the DB schema, which I update every time the schema changes. Each of the changes also goes into a changes.sql script, which is uploaded with the build to the staging area. The script is run as part of the install script.
I've recently begun the same kind of process, and am using Beanstalk for svn hosting.
There are two nifty features in the paid accounts (which start at $15 pm, I think):
deployment allows the user to create FTP targets for staging and production servers, which can be deployed to at the click of a button (including specifying a revision and branch)
webhooks allow the user to set up a URL that is called on each commit/deploy, passing across things like revision number, description and user. This could be used to update docs, run unit tests and update changelogs (a receiver sketch follows below).
I'm sure there are other hosted or self-hosted SVN servers with these two features, but Beanstalk is the one I have experience of, and it's working very, very well.
There's also an API, which I imagine could be used to integrate deployment further in to your process.
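To give an idea of the webhook side, here is a hedged sketch of a receiver endpoint; the JSON field names are illustrative guesses, not Beanstalk's actual payload schema:

    <?php
    // Webhook receiver: the host POSTs a JSON payload on each commit/deploy.
    // We just append a line to a log; this is where you could trigger
    // doc updates, unit tests or changelog generation.
    $payload = json_decode(file_get_contents('php://input'), true);
    if ($payload !== null) {
        $line = sprintf(
            "[%s] r%s by %s: %s\n",
            date('c'),
            isset($payload['revision']) ? $payload['revision'] : '?',
            isset($payload['author']) ? $payload['author'] : '?',
            isset($payload['message']) ? $payload['message'] : ''
        );
        file_put_contents(__DIR__ . '/deploys.log', $line, FILE_APPEND);
    }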
Consider fazend.com, a free hosted CI platform that automates configuration and installation procedures. You don't need to set up version control, bug tracking, a CI server, a test environment, etc. Everything is done on demand.

Run Apache / PHP / MySQL (CakePHP) application on a USB stick?

I have an existing CakePHP app that runs in a LAMP environment, and I need to install it on a USB drive for mass public distribution.
There are a few requirements:
Protect the source code
No installation required
Windows support essential
Mac & Linux support would be a bonus
Must run offline, without Internet connection
Ability to sync with server for data transfer and updates
I have conducted a large amount of research into the options and am keen to learn what other developers think.
Potential solutions:
- Flash / XML
- Adobe AIR app
- USB webserver (Server2Go, Portable Apps XAMPP)
Has anyone used any of the above? Any comments would be greatly appreciated.
Thanks
Similar thread here:
Portable USB Webserver
If you ask me, XAMPP should do, because it offers a "plain unzip" version. There's lots of variety out there; Bitnami also offers a nice bunch of stacks, although they may not be good for this particular task.
To keep the same scripts on both Windows and Linux, you could consider using UnxUtils, which is a port of the common Linux commands. This will be very handy if you are good at Linux bash shell scripting but not at Windows batch files.
Protecting the source code is a bit troublesome. Do you really, really need to do so? Because there's a ton of great open source code out there which already does practically everything in most common business domains - sourceforge.net.
And if someone's taking your code and calling it their own, you can just name them on the internet if you can prove it. That itself will be bad publicity for them. That said, I obviously don't know your specific need. So that is just my opinion.
You will have problems with this, no matter how you go about it. Each step is a little more unusual it seems.
You'll need to use a source code obfuscator to protect your source. I recommend the one by Zend, not from experience, but because Zend makes awesome products; I've never used a source protector myself.
You'll need three custom LAMP/MAMP/XAMPP installs, one for each target OS. They should point to a directory that is shared on the USB drive. Make sure you configure them to use an unprivileged port; otherwise the user will need admin privileges to run the server software. And getting the server stuff up and running will likely produce a few hiccups as well.
I would actually recommend finding something that will allow you to distribute a binary, or something like an AIR app that is intended for this type of distribution. You may have to rewrite a lot of code, but it'll be easier than fixing all the niggling little install errors you'll see on the client end. To package scripts into binaries without rewriting anything, check out http://www.scriptol.com/apollo.php and similar products.
But I'd suggest you make a standalone app in Adobe AIR that will sync with your server (maybe even with some Google Gears integration, so it can function offline). Don't try to force a PHP app into this distribution model; it'll create nightmarish problems.
This is what I used to run a CakePHP app from a DVD. Worked on USB too (while I was still developing it).
http://www.server2go-web.de/
Server2Go is a webserver that runs out of the box without any installation and on write-protected media. This means that web applications based on Server2Go can be used directly from CD-ROM, a USB stick or any folder on a hard disk without the hassle of configuring Apache, PHP or MySQL.
Server2Go allows you to create a standalone working web site or PHP application on a CD-ROM.
It's really nice.
You can use MAMP for Mac; you'll just need to edit the config to point the sites directory to the right place.
However, you would have the problem that the MySQL db would not necessarily work with Windows. If you switched the db to SQLite, you could sync the SQLite db file fairly easily.
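With PDO that switch is mostly a DSN change; a minimal sketch (the path is a placeholder):

    <?php
    // Plain PDO connects to a single-file SQLite database like so; the whole
    // database then lives on the stick and can be copied around to sync.
    $db = new PDO('sqlite:' . __DIR__ . '/data/app.sqlite');
    $db->exec('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');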
XAMPP would work for the Windows side.
Sorry, I don't know about the Linux side.
There is a CakePHP InstaWeb Server out there:
http://bakery.cakephp.org/articles/view/the-cakephp-instaweb-webserver
It runs on Python and doesn't need an installation. This plus some additional goodies should already get you half the way.
