We have a huge code base (I mean huge, about 2M+ lines) in PHP. I would like to know how you managed to integrate Composer in this kind of situation.
Especially when the code cannot be decoupled into smaller projects (right now) because of its complexity (it's even mixed with legacy code) and it's all held in the same SVN repository.
Why should I be confident in the quality of the composer/packagist libraries?
What happens if packagist goes down?
What should I do if my vendor repository goes down (Github/Bitbucket/Whatever)?
What happens if some of my vendors decide to delete their library?
What if they've been hacked and set the next version tag empty?
I know that these potential problems can be worked around one way or another. But the fact that the livelihood of a lot of people could depend on this makes me a bit uneasy about this kind of decision.
What do you think? What are my best options?
For the first point: if you have a legacy, tightly-coupled 2M+ line codebase, the quality of common open source projects shouldn't be what worries you ;).
For the rest: you can use staging to build your project together with its dependencies and then build a full package there (by that I mean with all the dependencies downloaded and bundled). Of course you will still depend on external packages during your development cycle, but not in deployment/production. Whenever a package goes down, you have the time and the option to replace it.
Composer is a really great tool for bundling your project together with its dependencies, so it is the answer both to "how do I use external dependencies" and to "how do I stay independent from them"; you only need to choose the point at which you bring that independence into your project.
I think you should develop with external dependencies in mind, keeping your own code base as small as possible, and not put these problems on your developers' shoulders; they want to use code and libraries, and play with them. Then, somewhere in your deployment process, bundle it all together (staging is a good place for that). Even if a dependency disappears and you have to spend development time replacing it:
It will probably still cost you less than handling everything on your own.
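The "bundle at staging" step can be sketched roughly like this (paths and package names are made up; in a real build the vendor/ tree would come from `composer install --no-dev` rather than the placeholder files used here):

```shell
set -e
# Mock a staging checkout that already contains app code plus a
# Composer-installed vendor/ tree (simulated with placeholder files).
mkdir -p /tmp/myapp-staging/src /tmp/myapp-staging/vendor/acme/http
echo '<?php // application entry point' > /tmp/myapp-staging/src/index.php
echo '<?php // third-party dependency'  > /tmp/myapp-staging/vendor/acme/http/Client.php

# Bundle the app AND its dependencies into one self-contained release artifact.
tar czf /tmp/myapp-release.tar.gz -C /tmp/myapp-staging .

# Production never talks to Packagist or GitHub; it only unpacks the archive.
mkdir -p /tmp/myapp-prod
tar xzf /tmp/myapp-release.tar.gz -C /tmp/myapp-prod
```

If Packagist or a vendor repository disappears later, production deployments keep working from the archived artifact while you look for a replacement at your leisure.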
Related
I'm developing an application that consists of several modules/packages which I also want to offer as standalone packages. I know how to create Composer packages, but I'm not exactly sure of the best way to do the actual development, and I need your help on this.
One way would be installing the packages with Composer, but that would mean that, for each change, I would have to commit and then run composer update on my app just to be able to test it. Not very practical.
Another way would be to have them included in my app while keeping the package-internal structure. That would work fine for development, but would pose a problem for publishing individual packages, since all the code would live in the same repository.
I think a good example on this is the way modern frameworks, like Laravel, are available. They have the whole code available in a repository but, at the same time, have each individual component available standalone.
What's the best way (in your opinion) to accomplish this?
Thanks in advance.
Symfony2 uses a Git subtree split. That is, a single development repository which is split into multiple repositories later.
Make no mistake about it though: the code is the same, but they are different repositories, and the procedure for maintaining them is rather long-winded.
http://www.craftitonline.com/2012/03/git-subtree-split-this-is-what-symfony2-does-every-night-to-set-standalone-components/
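A minimal illustration of what a subtree split does (repository layout and names are invented here; the linked article covers the real Symfony2 setup):

```shell
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# One development monorepo containing a component in a subdirectory.
rm -rf /tmp/monorepo && mkdir -p /tmp/monorepo/src/Component/Console
cd /tmp/monorepo && git init -q
echo '<?php // console component' > src/Component/Console/Console.php
git add . && git commit -qm 'Add Console component'

# Split the subdirectory's history into its own branch; pushing that branch
# to a separate (read-only) repository yields the standalone package.
git subtree split --prefix=src/Component/Console -b console-standalone
git ls-tree --name-only console-standalone
```

The split branch contains only the component's files, rebased to the repository root, which is exactly the shape a standalone Composer package needs.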
I'm currently working on a PHP driven website that has a number of dependencies via Composer.
Recently, a dependency had a minor version change (as per SemVer) containing a backwards-compatibility break. The website broke unexpectedly. I pinned my dependency to a more precise version number, like 1.2.16 instead of 1.*.
But this makes versions harder to track, because minor versions change often.
How do you keep track of dependencies in a way that is both simple and avoids breaking changes to break your project?
How do you keep track of dependencies in a way that is both simple and avoids breaking changes to break your project?
Never take automatic updates on live production sites. You probably shouldn't take them on developer sites either, but that depends on how many developers are involved and the rate at which changes rain in. Best practice is to set up a branch and a matching test site where such updates are allowed. You can run periodic (daily, hourly, whatever...) build/test cycles of your project and site there, then automatically generate dependency configurations that pin the stated ranges to whatever has been tested and proven good, and check those into the active developer branch(es), from where they will eventually find their way to the dev/test and production site(s).
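In Composer terms, the "tested and proven good" configuration is usually captured by committing composer.lock and keeping composer.json constraints conservative (package names below are hypothetical):

```json
{
    "require": {
        "acme/fragile-lib": "1.2.16",
        "acme/trusted-lib": "~1.2.16"
    }
}
```

`composer install` on production reads only the lock file and reproduces the exact versions you tested; `composer update`, the only command that actually moves versions, is run on the test branch. The `~1.2.16` constraint allows 1.2.x patch releases but refuses 1.3.0, which matches the "patch-only until reviewed" policy above.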
Some background first
Our company, a small startup with only four developers, is starting the refactoring of our products into reusable modules to simplify the development process, increase productivity and, along the way, we would like to introduce unit tests where fits.
As usual in a small startup, we can't afford to waste too much development time, but as we see it, this is extremely important to the success of our business in the medium and long term.
Currently, we have two end-user products. Both are Laravel (PHP) applications built on top of our own internal business layer, mainly composed of webservices, restful apis and a huge database.
This business layer provides most of the data for these products, but each of them makes completely different use of it. We plan to build other products in the near future besides maintaining and improving those two, which are almost finished.
For that to happen, we intend to abstract the common logic of those (and the future) products into reusable and decoupled modules. The obvious choice seems to be Composer, even with our little knowledge about it.
Now to the real question
I would like to ask for other opinions on how to develop internal packages in a test-driven fashion. Should each module be a Composer package with its own unit tests, requiring its own dependencies, or should we build a single package with each module namespaced?
To clarify a bit, we would like to have, for instance, a CurlWrapper module that would be required by our InternalWebserviceAPI module (and a few others).
I personally like the idea of having completely separate packages for each module and declaring dependencies in composer.json, which would mentally enforce decoupling and would allow us to publish some of those packages as open source someday. It may also make breaking changes in those modules easier, because we could freeze a module's version in its dependents until they are updated.
On the other hand, I also think this separation may add a lot of complexity and be harder to maintain and test, since each module would need to be a project in its own right, and we don't have the manpower to keep track of so many small projects.
Is Composer really the ideal solution for our problem? If so, which would you recommend: a single package or multiple packages?
Edit 1:
I would like to point out that most of these modules are going to be:
Libraries (e.g. obtaining an ID from a YouTube URL or converting dates to "x seconds ago")
Wrappers (like a chainable cURL wrapper)
Facades (of our multiple webservices; these require the other two kinds)
Yes, Composer is the way to go, and I recommend using single packages.
You don't know when you will need these modules. It is better to create many single packages and be able to include them all (or just one) than to create big packages and have to put in more time later breaking a package into multiple pieces when you only need some classes from it.
For instance, look at the Symfony2 project. It consists of a lot of components which are all required by the full-stack Symfony2 framework, but you can also use individual components in your own project (as Drupal 8 is doing). Moreover, Symfony2 keeps gaining packages; it is apparently so useful to have small packages that people put time into breaking big packages into pieces.
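Consuming one such component on its own is then a one-line requirement; the version constraint here is just illustrative:

```json
{
    "require": {
        "symfony/console": "~2.3"
    }
}
```

This is the payoff of many small packages: a consumer pulls in exactly the piece it needs instead of the whole framework.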
An alternative to using single packages: use separate composer.json files for each subproject.
This has the benefit of letting you keep all of your libraries in the same repository. As you refactor the code, you can also partition autoload and dependencies by sub-library.
If you get to the point that you want to spin the library off into its own versioned package, you could go the final step and check it into its own repository.
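With recent Composer versions, one way to sketch this same-repository layout is a `path` repository, which symlinks the sub-library into vendor/ so edits are visible immediately without a commit/update cycle (directory and package names are made up):

```json
{
    "repositories": [
        { "type": "path", "url": "libs/curl-wrapper" }
    ],
    "require": {
        "acme/curl-wrapper": "*"
    }
}
```

Each sub-library keeps its own composer.json (declaring the `acme/curl-wrapper` name, its autoload section, and its dependencies), so spinning it off into its own repository later requires no code change, only repointing or removing the repository entry.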
I am wondering what the best way (for a lone developer) is to
develop a project that depends on code of other projects
deploy the resulting project to the server
I am planning to put my code in svn, and have shared code as a separate project. There are problems with svn:externals which I cannot fully estimate.
I've read
subversion:externals considered to be an anti-pattern, and
How do you organize your version control repository,
but there is one special thing with php-projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code.
Ideally, I really want to be able to develop simultaneously on one project and the projects it depends on.
Possible way:
Check out a project's dependency into a subfolder as a working copy of the trunk. Problems I foresee:
When you want to deploy a project, you might want to freeze its dependencies, right?
The dependency code should not end up duplicated in the project's repository, I think.
(Update 1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks; see my comment. I am still looking for suggestions that do not require the use of junction points; they are a sort of unsupported hack in WinXP which may break some programs.)
This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies?
I've looked into Buildout for Python, but it seems to be tightly tied to the Python ecosystem (resolving and fetching Python modules from the web, etc.).
I am very eager to learn about your best practices.
One approach might be:
one repository per dependency
a requirements configuration file for your project which documents the dependencies and their versions (probably even your own versions of the dependencies)
automation scripts that handle setup of the development, testing and deployment environments (can be as simple as documenting the setup procedure once and making it configurable and executable)
This has several benefits:
you can easily (or even automatically) check whether your dependencies have become outdated (another better library is available), or have known security vulnerabilities.
more awareness about dependencies
easier to debug/fix/patch problems caused by dependencies
ignoring svn:externals might also ease the pain when you switch to distributed version control like git, bzr, hg in the future.
if you want to set up your environment on another machine (or eventually another developer takes over or joins) it will save you tons of time
Some KISS automation tools that are popular in web-development and server administration:
fabric (python)
buildout (python)
capistrano (ruby)
Summary:
Document your requirements (preferably machine readable -> yaml, ini, json, xml) and handle dependencies outside of your project. It provides you with a bit of indirection which makes automated setup and deployment easier and less dependent on your version control system (separation of concerns, best tool for the job, etc).
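A machine-readable requirements file in this spirit can be as small as the following (the scheme and URLs are invented for illustration; any format your automation scripts can parse will do):

```json
{
    "dependencies": {
        "http-client": {
            "version": "1.4.2",
            "source": "https://svn.example.com/http-client/tags/1.4.2"
        }
    }
}
```

A small setup script reads this file, checks each dependency out into a known location, and the same file doubles as documentation of exactly which versions the project was built against.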
This may sound cheap, but I think I have an answer for you in this question's thread: svn folder structure organization.
As long as you stay within your own repository, I wouldn't consider svn:externals as harmful as stated. Just don't overdo it.
Deployment with this strategy is also a piece of cake since it is ALL in one tag (checkout, run it, profit). Your directory structure remains the same on all layers, branches, tags, trunk.
By directing externals to tags (and making tags read-only on the svn server) you can be 100% sure you get the library you expect.
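The svn:externals property value for such a tag-pinned external looks like this (set with `svn propset svn:externals` on the parent directory; the URL is invented):

```
lib/shared https://svn.example.com/repo/shared/tags/1.4.2
```

Because the external points at an immutable tag rather than trunk, every checkout of the project tag reproduces exactly the same library code.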
How long do you normally test an update to Zend Framework before pushing it out to a production project? We can break this question up into minor updates (1.6.0 -> 1.6.1) and major updates (1.6.2 -> 1.7.0). Obviously you don't release it if it adds bugs to your code.
Also, as with most other server software updates, people normally have a window of time during which they like to wait and watch the community before even attempting an update in a development environment. How long do you wait before even starting the process?
It seems like the best method would be to have a comprehensive set of tests that exercised all the functionality in your application. With a good method for testing it seems like you could push it into production pretty quickly.
Another simple thing you can do to help make your decision would be to do a diff against the repository to see what changes were applied to any modules that you use. If there were no changes, then upgrading shouldn't make any difference. If something underwent a major rewrite, you would probably want to investigate a lot more deeply.
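The "diff only the modules you use" check can be mocked up like this (file contents are invented; in practice you would diff two framework release checkouts):

```shell
set -e
# Two framework versions; only one changed a module we depend on.
mkdir -p /tmp/zf-1.6.2/Zend/Form /tmp/zf-1.7.0/Zend/Form
echo 'class Zend_Form {}' > /tmp/zf-1.6.2/Zend/Form/Form.php
echo 'class Zend_Form { /* reworked upload handling */ }' > /tmp/zf-1.7.0/Zend/Form/Form.php

# A non-empty diff for Zend/Form flags the module for closer review;
# an empty diff would mean the upgrade is low-risk for our code.
diff -ru /tmp/zf-1.6.2/Zend/Form /tmp/zf-1.7.0/Zend/Form || true
```

Restricting the diff to the directories your application actually loads keeps the review effort proportional to your real exposure, not to the size of the framework.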
I'll often jump through update releases (1.7.1 -> 1.7.2) without much hesitation. When the minors roll in, it's another bag of tricks though. For example, there were a lot of changes with Zend's file upload elements, and Zend form in between 1.5, 1.6 and 1.7.
Whether or not I even move on a new release depends on what's been done. Checking the update lists provided is pretty important for deciding on whether or not to go.
As for timing, it varies. There's no set in stone process.
Finding "what breaks" is quickly accomplished with the unit tests. But, who really has a full set of unit tests for their application, right? ;)
Using unit testing will help catch some of the deltas. Zend Framework now comes with Zend_Test to make testing applications a bit easier. I upgrade between projects (so new projects that are coming up will get the latest version).