When you set up a stack in OpsWorks, does it lock in the current built-in cookbooks version or will it use the most up-to-date version each time a lifecycle event is triggered?
For custom cookbooks, I understand that OpsWorks caches the recipes when they are first provided rather than fetching the newest version each time, but I wonder whether the same is true for the built-in cookbooks.
I'm concerned about this for a few reasons. What if the cookbooks are updated to install a different version of Apache or PHP, or to slightly vary their default configuration? What if I then set up a new instance in a layer in which the old recipe was used and end up with multiple servers with slightly different configurations?
Also, there doesn't appear to be a way to customize which PHP 5 version gets installed, so am I just at the mercy of the Ubuntu package manager's decision to use the latest stable version?
I do want to continue using the latest and greatest software versions, but I would like to deploy them on my own time after I have been able to test that my application works in the new version.
When you set up a stack in OpsWorks, does it lock in the current built-in cookbooks version or will it use the most up-to-date version each time a lifecycle event is triggered?
When you provision a new machine, the built-in cookbooks and your custom cookbooks are fetched onto the server at the same time. They are updated only when a custom cookbook update is requested. This is why the recommendation is NOT to copy the entire AWS cookbook into your custom cookbook: copy only the things you are modifying, so you can still benefit from updates to the standard community cookbooks.
I'm concerned about this for a few reasons. What if the cookbooks are updated to install a different version of Apache or PHP, or to slightly vary their default configuration? What if I then set up a new instance in a layer in which the old recipe was used and end up with multiple servers with slightly different configurations?
This is not just a bane but a benefit too; it depends on how you perceive it. An update may bring a performance improvement, but it may also introduce bugs. This needs to be kept in sync by your operations people.
You can override parts of the built-in cookbooks by placing duplicate (or customised) versions in the same place in your custom cookbook (the converge option); see the sketch below.
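A minimal sketch of that kind of override, assuming you want to customise a template from the built-in apache2 cookbook (the cookbook name and template path here are illustrative, not taken from the question):
$ git clone https://github.com/aws/opsworks-cookbooks /tmp/opsworks-cookbooks
$ mkdir -p my-cookbooks/apache2/templates/default
$ cp /tmp/opsworks-cookbooks/apache2/templates/default/apache2.conf.erb my-cookbooks/apache2/templates/default/
# edit the copied template; a file at the same cookbook/path in your custom cookbooks wins over the built-in one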
Or the more complex but surer way:
Implement a custom cookbook.
Import the opsworks-cookbooks repository as a submodule into a folder inside your cookbooks folder.
Symlink the cookbooks that you need version-controlled into the main folder.
Evaluate and update specific ones as needed.
For example:
$ cd cookbook
$ git submodule add https://github.com/aws/opsworks-cookbooks external-cookbooks/opsworks-cookbooks
$ ln -s external-cookbooks/opsworks-cookbooks/rails rails
This way you can update and keep your infrastructure code under version control. Make and evaluate changes, and only import them after you've tested them.
I still recommend using converge mode with only the minor changes you require hardcoded. It will mean less duplication, and you will benefit from updates that may be in the community version of the cookbooks.
If you're using a cookbook that installs PHP the Ubuntu way, then you will be at the mercy of whatever is in the repo. If you use one that builds a custom-compiled version instead, then you can compile specific versions. You may have to either write your own or find one that builds PHP and lets you specify the version via an attribute on the cookbook.
I have to update CakePHP from the current, outdated version (2.7.7) to the latest on the 2.x branch, because of PHP 7 support.
While I've done numerous framework upgrades before, I found book.cakephp.org more than a little cryptic about key things, which I ask here:
Can it be done by replacing directories?
Which directories are intended to be replaced (never-edit dirs, like system in CodeIgniter)?
Which directories are partially replaced, if any?
Are there SQL commands that should be run?
Are there other commands that should be run?
Any clue is appreciated, but questions 2 and 3 are of the most value, I guess. Thanks in advance.
Depending on how you've installed CakePHP, you either use composer to update the CakePHP core dependency:
$ composer update
or require a specific constraint/version if your current constraint doesn't allow upgrading:
$ composer require cakephp/cakephp:^2.10.3
If you're not using composer (I'd suggest switching to it), then you download the latest release package manually and completely replace the /lib/Cake directory. With respect to the core, the upgrade is then complete.
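A rough sketch of that manual replacement; it assumes the release archives follow GitHub's tag-archive URL pattern, and uses the 2.10.3 tag from above as a stand-in for whatever the latest 2.x release is:
$ wget https://github.com/cakephp/cakephp/archive/2.10.3.tar.gz
$ tar -xzf 2.10.3.tar.gz
$ rm -rf lib/Cake
$ cp -R cakephp-2.10.3/lib/Cake lib/Cake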
Then read the migration guides to figure out the possible changes that you have to apply to your application code or database schemas, and also compare the "application template" changes (/app/) to your local application, applying changes where necessary. After this, run your test suite to ensure that everything works as expected.
With that being said, the upgrade from 2.7 to the latest 2.10 should be pretty easy, as it is said to be fully API compatible.
I recommend using composer to manage your framework and extensions.
With composer installed, updating would be much easier. If you decide to use composer, let me know if you need any more help with installation, setup, or updates.
I'm building an SDK for developers to use to build modules for ecommerce platforms that will consume our API for a new startup.
Obviously it would be ideal to use composer, which I am doing right now. But as I examine most of the e-commerce platforms out there, or at least the most popular ones, they don't use composer.
So I'm wondering what's the best way to gather all the dependencies my current packages need and build them into a freestanding SDK.
This way I can have a version that will work for both composer and non-composer enabled platforms.
Is there a standardized way to do this in terms of a design pattern? How would I lay out all the dependency packages in an organized way?
Just because those e-commerce platforms don't use composer doesn't mean you have to exclude composer from the equation. You can't distribute your package as a composer-managed plugin/module for that particular e-commerce platform, but you can still use composer's autoloader in production.
You could prepare the package for deployment on your machine or on a build server, archive the result and distribute the archive.
For the sake of simplicity, my example will assume that you will prepare your package on your local machine:
Create a temporary working directory:
$ mkdir -p ~/.tmp && cd ~/.tmp
Clone your package:
$ git clone <package>
Install dependencies (see note 1):
$ cd ~/.tmp/<package> && composer.phar install --no-dev --optimize-autoloader
or if you do this from an automated tool:
$ cd ~/.tmp/<package> && composer.phar install --no-ansi --no-dev --no-interaction --no-progress --no-scripts --optimize-autoloader
Remove the .git directory.
Create the zip/tar archive from ~/.tmp/<package> (a sample command follows this list).
Distribute the archive.
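For the archive step, a minimal command could look like this, keeping the <package> placeholder from the steps above:
$ cd ~/.tmp && tar -czf <package>.tar.gz <package>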
Assuming that your package is already a plugin/module for that e-commerce platform, it can be installed as usual from that zip/tar archive.
1) Regarding --optimize-autoloader, please read this answer from Sven, which explains why in some cases it doesn't help your application become faster.
Don't have dependencies!
Yes, seriously. If you'd develop an API client that would use Guzzle as the HTTP client, you'd have to make a choice: Use Guzzle version 3, 4, 5 or 6?
Guzzle 3 is out of maintenance and abandoned. You wouldn't want to use it.
Guzzle 4 is also considered end-of-life, because version 5 came out very fast. Nobody really uses this version.
This boils down to using either version 5 or 6. Guzzle uses the same namespace and likely the same class names in both versions, yet the two are incompatible with each other. No matter which version you choose, your customer may have made the opposite choice, and then you have a codebase where two versions of Guzzle would have to run at the same time. This will not work.
If you don't have dependencies, but deliver everything within your own codebase, you have all of your code under your control, and are reducing the need to use Composer as a tool to easily install all your dependencies. Your package will have everything already included, it's unlikely that there will be any namespace conflicts.
You'd be able to offer a ZIP file for download. And if you additionally offer a composer.json to allow developers to include your package that way, everyone will be happy.
Update
Now, after finding out that everyone thinks I am crazy for proposing not to use stuff invented elsewhere, I challenge you to think about the situation once again: you find that you have to produce code that will likely be included in a codebase that is NOT managed with Composer. That means you have no idea what kind of software is put together there.
It may simply be that there already is a version of Guzzle in the existing codebase, undetectable because there is no composer.json. Now you provide your own package with a bundled Guzzle version (however it ended up being bundled). This will likely crash the entire software at some point because of conflicts: the autoloading will of course be merged at some point, and then some part of the code will request some Guzzle class to be loaded, which is included twice from two different versions of Guzzle.
WHAT SHOULD HAPPEN IN THIS CASE? THINGS WILL CRASH!
And it is unavoidable that this will happen. Even in the lucky case of being able to use Composer, it will conflict - the software won't crash, but the entire package won't be installed. The good thing is: You will notice this immediately.
If the primary goal is to deliver an API client anyone can use in every situation, without using a dependency manager: Don't have dependencies!
Alternatively, be completely sure that you know which software is already being used, and create a package that will not conflict in any case. However, this is still an effort, because there might be other addons also being installed, which might include conflicting software.
My central point is: If you don't have a dependency manager like Composer being able to manage the dependencies, you are better off NOT to have dependencies in your own code to make it super easy to include your own code in someone else code base.
And the question above clearly states that Composer is not an option in the general case.
Now there is one light at the end of the tunnel: when it comes to general tasks, the PHP-FIG has started to standardize interfaces that should improve interoperability. For HTTP, the standard is PSR-7.
You COULD provide an API SDK that depends on (and ships with) the PSR-7 interface and requires the user of the SDK to provide an HTTP client that implements this interface.
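A minimal way to express that in your SDK is to depend only on the interface package; psr/http-message is the package that contains the PSR-7 interfaces:
$ composer require psr/http-message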
The problem I see with this approach is that you will still run into trouble if you try to use, for example, Guzzle, for the same reason: the only valid choice for the SDK is now Guzzle 6. What if Guzzle 5 was already used elsewhere? Conflict! The good thing is: you can avoid Guzzle 6 if you are already using Guzzle 5 by picking any other PSR-7 capable HTTP client.
I am going to rewrite this question to be more clear. I have the following application structure:
applications/
api/
public/
composer.json
frontend/
public/
composer.json
backend/
public/
composer.json
common/
vendor/
... composer libraries here
How can I make every application's composer install place its packages into common/vendor, so that I can keep the most up-to-date version of each library everywhere it is used with just one composer update, while at the same time loading only the libraries listed in each application's own composer.json? That way, when I include vendor/autoload.php, only the needed libraries are loaded.
EDIT: Edited the whole question. Please reread
You have to create one bigger meta project that requires the API package, the frontend, and the backend. You can define which directory should be used for placing the dependencies of this meta project, and you should be able to define for the special packages API, frontend, and backend that they should go into their respective directories and not the common folder. A sketch follows.
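A rough sketch of such a meta project (the myorg/* package names are hypothetical placeholders; vendor-dir is a real composer configuration key, while steering individual packages into their own directories would likely need an installer plugin on top):
$ cd applications
$ composer init --name myorg/meta --no-interaction
$ composer config vendor-dir common/vendor
$ composer require myorg/api myorg/frontend myorg/backend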
Updating that meta package will have to check more dependencies, but it is guaranteed that you get the newest possible versions that conform to your version constraints (which may NOT be the newest versions available, if one of your packages requires a lower version). That way you would avoid installing dependencies that are not allowed for one of the projects, and you would be immediately notified if you attempt to install conflicting versions.
Note that I wouldn't recommend this at all. I would instead write a script, placed at applications/updatecomposer.sh, containing all the commands necessary to update each project individually (a sketch of such a script appears after the next paragraph). That way you keep all the flexibility that Composer is about; what you are essentially asking for is the central library installation of PEAR, brought back. That central installation, and the resulting inability to update any of the PEAR packages without risking breaking something, is one of the reasons that PEAR is considered dead.
Or think about any pre-Composer originating framework like Zend Framework 1. Having this installed in a central point that every application is using will effectively prevent you from ever updating it, if you are not prepared to also deal with incompatibilities in ALL your applications at the same time. Just an example: updating from any ZF 1.11 to ZF 1.12 (the currently maintained up-to-date version) is a potentially backwards-incompatible change, because at least one abstract class (dealing with REST interfaces) got new abstract methods that have to be implemented.
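A sketch of such an update script, using the application directories from the question:
#!/bin/sh
# applications/updatecomposer.sh: update each application's dependencies individually
for app in api frontend backend; do
    (cd "$app" && composer update)
done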
I have an application built in 2007 that makes extensive use of PEAR's Auth and DB packages. It had been mothballed, but it is out again now. Since those packages are not available anymore and PEAR has completely changed, it no longer works on my system.
Outside of rewriting the entire software, is there any way to get the previous functionality of the DB and Auth packages back?
Thanks.
If you don't mind doing some investigative work yourself, you could look at the changelog pages for both of those packages - at http://pear.php.net/package/Auth/download/All and http://pear.php.net/package/DB/download/All to determine which version of these packages you had installed and used when you developed your application.
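Once you have identified the versions, pear can install a specific release by appending the version to the package name. For example (the version numbers below are placeholders; substitute the ones you determined from the changelogs):
$ pear install Auth-1.6.4
$ pear install DB-1.9.2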
Once you've confirmed and installed the specific versions of these packages that you need, you might want to consider writing what's called a "PEAR Meta Package" and committing it to your version control system, so that you can ensure these specific packages can easily be installed again (on other servers, or wherever) with minimum hassle.
I've installed composer and added some packages via 'composer install'. It installed them under the "my_project\vendor" path, but some of the packages were cloned using git, so when I committed "my_project", those cloned packages were ignored.
The problem is that when other developers are cloning "my_project", they are missing the packages that were ignored. Is there a way to automatically add the packages to "my_project" so other developers will fetch them from me?
I think this should be done using submodules, but I don't know how to automatically add every new package from composer as a submodule to my project.
Preface: Jordi - I love composer, keep up the great work, and if I'm mistaken on any of this let me know and I'll both update my workflow and edit the post :D
Unfortunately this isn't the "general recommendation" depending on who you ask; it's very much a developer-only perspective. And the caveats to using the practice prescribed in the composer FAQ involve many more considerations than I can cover here, so I'll leave a couple of major points for the consideration of others.
By @Seldaek's own admission, composer isn't really 100% stable; it's far better than a year ago, but it is still a very active project. So relying on composer to produce an identical environment on a dev server vs. a staging server vs. a production server wouldn't be a general recommendation from any QA/deployment group. This is not meant as a slight to Jordi, but rather an expression of the meticulous nature of QA people.
From the FAQ, it states when merging vendor libs into your own repository you should:
Limit yourself to installing tagged releases (no dev versions)
However, if you use composer to manage your CI or automated deployments, the same constraint would apply, only more so, because deploying a master or dev tag to your production environment could mean a very different package than what you tested in staging only a day or even an hour ago.
Even outside of changes introduced in third-party libs (which would be avoided by using only tagged versions, regardless of dev or production deployments), unless you can rely on composer doing the exact same thing every time, you risk introducing bugs into production. This is not really a risk-case I would concern myself with, but then again I'm a developer too ;) But issues can result from simple changes like this, where unless you maintain the exact same version of composer.phar on all environments, you could really muck up a staging or production server.
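One way to reduce that particular risk is to pin every environment to the same composer release; newer composer builds accept an explicit version argument for self-update (the version number below is only an example):
$ composer --version
$ composer self-update 1.10.26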
The other major issue I have is really related to all of the points listed under this heading:
While it can be tempting to commit it in some environment, it leads to a few problems:
I don't see the consequences as problems, but instead as benefits! A large vcs repository isn't that big of a deal in modern high-bandwidth environments. And unless you are using a very active vendor lib, your diffs won't be that big either. Even if they were big, git/hg/dvcs systems are all capable of re-using chunks, compressing chunks, and keeping all your ducks in a row. But more so, they are an alert to the developer when changes are introduced to those packages, and diff -w is a great summary view of the total changesets, especially if you are on dev/master tags.
Duplication of the history of all your dependencies in your own VCS.
This is worded a little incorrectly: it won't duplicate the entire commit history of the vendor lib, just a single commit (your commit) covering the full delta between now and the last time you ran a composer update that resulted in changes. You're probably not updating all of your libs every time you update, even if you don't specify individual packages. And if you did accidentally update a vendor lib, you can easily revert, whereas if you did so on a dev/master tag and it broke your environment, you'd have to figure out what version you were previously using, specify the tag in composer.json, and update again to revert it. git checkout vendor/3rdpartylib --force just seems easier to me.
Adding dependencies installed via git to a git repo will show them as submodules. This is problematic because they are not real submodules, and you will run into issues.
Ideally, composer would give you a config option: it could automatically delete the .git directory from git pulls, and automatically rm the directory (or temporarily mv it) before updating a lib, when and only when an updated version exists. Doing so would be far more reliable than leaving that manual process up to individual developers (a one-liner for it appears below). There are an equal number of reasons to have vendor libs integrated into your version control repo, so the choice really depends on the details of your situation.
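Until composer offers that, one common way to strip the nested .git directories by hand before committing the vendors (run from the project root, and double-check the path before using rm -rf):
$ find vendor -type d -name .git -prune -exec rm -rf {} +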
The biggest reason for versioning all of your files is being able to reliably deploy the exact package you tested in development to staging to production, a key purpose of vcs and automated deployments to begin with. Unless you configure your development environment to use specific tags for every package and you version control your composer.phar you should not rely on composer to deploy your software.
You should ideally just add vendor/ to your .gitignore, and then every developer of the project would run composer install to get the vendors on their setup.
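A minimal version of that workflow, run from the repository root:
$ echo "vendor/" >> .gitignore
$ composer install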
You can read the FAQ entry on committing vendors for more details.