process for creating and applying svn patch - php

I am using a 3rd-party library as part of my code base. It has a bug, and I have two choices:
Create an overridden class to provide the bug-free behaviour.
Create an SVN patch for the bug.
I prefer option 2 because it makes more sense logically; however, I am not sure how to do it.
Do I do the following?
Modify the 3rd party library code
Create the patch file
Revert my repository
Apply the patch file
Add the patch file to the repository
Commit my changes
If I should use a patch, where should it be stored in the repository?
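The steps above can be sketched end to end. This is a minimal, self-contained demo using a throwaway local repository in place of the real project (all file and path names are hypothetical; it assumes Subversion 1.7+, which provides svn patch):

```shell
set -e
# Throwaway repo standing in for the real project:
svnadmin create "$PWD/repo"
svn checkout -q "file://$PWD/repo" wc
cd wc
mkdir lib patches
printf 'buggy\n' > lib/Foo.php
svn add -q lib patches
svn commit -q -m "import 3rd-party lib"

# 1. Modify the 3rd-party library code, 2. create the patch file:
printf 'fixed\n' > lib/Foo.php
svn diff > patches/fix-foo.patch
# 3. Revert, so the pristine vendor code stays untouched:
svn revert -q -R lib
# 4. Apply the patch file ('svn patch' exists since 1.7):
svn patch patches/fix-foo.patch
# 5. Add the patch file to the repository, 6. commit everything:
svn add -q patches/fix-foo.patch
svn commit -q -m "fix 3rd-party bug, keep the patch on record" .
```

A patches/ directory next to the vendored library, as above, is one reasonable place to store the patch file so it travels with the code it fixes.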

Check for Vendor Branches:
But sometimes you want to maintain custom modifications to third-party code in your own version control system. [...] These modifications might include new functionality or bug fixes, [...].
Now you face an interesting situation. Your project could house its custom modifications to the third-party data in some disjointed fashion, such as using patch files or full-fledged alternative versions of files and directories. But these quickly become maintenance headaches, requiring some mechanism by which to apply your custom changes to the third-party code and necessitating regeneration of those changes with each successive version of the third-party code that you track.
A vendor branch is basically a directory (call it A) containing the unmodified version of the third-party source code. You branch this directory into your project (or any other location; call this X) and apply your patch to that branch. If the vendor updates the component, you import the new version into another directory (call it B). To update your own modified library, you merge the differences between A and B and apply them to X. This way you keep your modifications but also get all the changes between the vendor's software versions.
Keep in mind to use the --ignore-ancestry option on the merge command; otherwise you will replace the changed files instead of applying the diffs.

Related

how to maintain different versions of a symfony web site

I'm working on a symfony 2 web site, it is a product our company sells (every client gets his own installation).
Each of our client gets a slightly different version of the site: sometimes the differences are small (different strings, slightly different templates), and sometimes they are bigger (different bundles activated, different database schema, different security.yml file etc.)
Currently each version sits on its own git branch. This works for now, as we only started selling and don't have many clients (= branches) yet.
But I want to move to a better solution.
Any ideas?
Thanks,
A
I disagree. If it's a single application, then branches per client or release, i.e. configurations that need separate histories (change tracking), are precisely the correct practice. Clients use/share the same app, "core", whatever, but you may want to control when changes in one client's features/bug fixes are merged into the other configurations, i.e. branches. Each client will have to be "upgraded" by merging from the development branch into the client's branch, separately and manually. You pay this overhead to gain stability and isolation.
As for the specific differences you mentioned:
Different strings: client specific configs, constants, deployment stuff, should probably be in a per-client/deployment module (include file, whatever), that you can simply ignore when merging, so that the core app code will have no client/deployment specific differences, ever.
Slightly different templates: meh, this smells bad; if you can use symbolic constants or partials or whatever, and move the differences to a config.php, then see the above solution. Otherwise, don't the template diffs suggest it's not the same app anymore?
Different bundles activated: if you can put the include statements in the config.php, no problem.
Different database schema: nasty business. Can this app be made schema-independent? Can you do polymorphism somehow? Nu, I don't know, haven't touched PHP in twenty years, I only do CoffeeScript and NoSQL now. ;o)
Different security.yml: same as with config.php, not a problem, just don't merge that file, or even stop tracking it (remove from repo) altogether.
HTH.
;o)
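The "just don't merge that file" advice can be wired into git itself with a merge driver, so each client branch keeps its own copy of the per-client files on every merge. A minimal sketch, assuming hypothetical file paths (config.php, app/config/security.yml):

```shell
set -e
# Identity only needed for the demo commit:
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

git init -q client-acme && cd client-acme
# Define an "ours" merge driver that simply keeps this branch's own version:
git config merge.ours.driver true
# Mark the per-client files so merges never overwrite them:
printf '%s\n' 'config.php merge=ours' 'app/config/security.yml merge=ours' > .gitattributes
git add .gitattributes && git commit -q -m "keep client config out of merges"
```

With this in place, merging the development branch into a client branch updates the core code but leaves the client-specific configuration untouched.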
I don't think that a branch is a good solution, as it's not intended for this kind of use case. As I see it, branches should be used for staging or for features/fixes.
The way I see it, you have a few options (in case you are not familiar with them, you should read more about them in the links I attached):
Git submodules. "A submodule allows you to keep another Git repository in a subdirectory of your repository. The other repository has its own history, which does not interfere with the history of the current repository. This can be used to have external dependencies such as third party libraries for example." http://git-scm.com/docs/git-submodule
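A minimal end-to-end sketch of the submodule approach, using throwaway local repositories in place of a real remote (names invented; in real use you would pass a remote URL to git submodule add):

```shell
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
mkdir subdemo && cd subdemo

# A stand-in for the shared "core" repository:
git init -q core
( cd core && echo '<?php // core' > core.php && git add . && git commit -q -m "core v1" )

# Each client site embeds it as a submodule pinned to a specific commit:
git init -q site-a && cd site-a
git commit -q --allow-empty -m "site skeleton"
# protocol.file.allow is only needed because the "remote" is a local path:
git -c protocol.file.allow=always submodule --quiet add "$PWD/../core" vendor/core
git commit -q -m "track shared core as a submodule"
```

Each site now records exactly which commit of the shared code it uses; bumping the submodule to a newer commit is a deliberate, per-site change.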
Composer. This is what I think would suit you best. Use a regular dependency manager (Composer for PHP, npm for Node.js, etc.). It will give you full control over the version being used, plus easy deployment options. https://getcomposer.org/
Another option would be to do it "old school": in case they are all on the same server, just keep the "core" stuff in the root folder and include it in each project. It's not a recommended way to go, but it might be enough, and it won't require many changes from you.

Composer and third party bugs

While developing a Symfony2 project, I often come across bugs in third party bundles. Most of the time the bugs are subtle but hard to find. For example this week alone I have found three bugs where a value was tested using a simple if ( $value ) construct but required the use of ( $value !== null) or ( $value !== false ).
Without sufficient permissions on the relevant GitHub pages for the projects in question, the best I can do is submit a pull request. It usually takes quite some time for the request to be merged. In the meantime, especially when using the master version, other pull requests are merged, which in turn leads Composer to update. When that happens, any local bug fixes revert back to the original code.
Is there any method to handle this situation?
Ideally, I would like the third party bundle to update but have my modifications persist. Until the pull request is merged of course.
There is a project that allows you to apply patches after downloading packages with composer. It is created to be used with the Drupal project but I believe it should work with your own patches just as well.
https://github.com/jpstacey/composer-patcher
Otherwise, you could fork the project, make your improvements, submit a pull request, and in the meantime use your own forked repository in Composer. See [this answer](https://stackoverflow.com/a/14637668/3492835) for a detailed description of how to achieve that.
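In concrete terms, the fork approach is a composer.json along these lines (the repository URL, package name, branch name, and version are all placeholders):

```json
{
    "repositories": [
        { "type": "vcs", "url": "https://github.com/your-user/forked-bundle" }
    ],
    "require": {
        "acme/some-bundle": "dev-my-bugfix as 2.3.x-dev"
    }
}
```

The inline alias ("as 2.3.x-dev") lets Composer install your bugfix branch while the rest of the dependency tree keeps resolving as if it were the upstream version.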
Edit:
The stars say it is about to be 2016 now, and a few things have changed.
jpstacey/composer-patcher is considered deprecated in favour of the netresearch/composer-patches-plugin project. This is a Composer plugin which does basically the same, but it is able to apply local patches as well.
Composer does not support this functionality out of the box. The reason is simple: one should not work with the development versions of other libraries. But fear not, you can easily work around this by forking the projects on GitHub. Of course this means a lot of overhead, but it is the best solution I can think of for tackling this problem.
Note that this approach has several advantages over the patch approach:
You can directly create your pull request from your fork.
The git merge process will identify any conflicts.
A script to automate this process is easy:
#!/bin/sh
# Sync the fork's master branch with its upstream remote.
set -e
git fetch upstream
git checkout master
git merge upstream/master
You could create a Composer post-update/install script which executes these commands in each project's local directory if it is one of your forks. (I leave this implementation to the reader. But one would need to create the repository locally first, since Composer only downloads the latest files without any repository data. This might add huge .git folders to a project, since some projects are, well, huge.)
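The hook itself is straightforward; wiring the sync logic is the part left to the reader. A sketch of the composer.json entry (the script path bin/sync-forks.sh is hypothetical):

```json
{
    "scripts": {
        "post-install-cmd": "sh bin/sync-forks.sh",
        "post-update-cmd": "sh bin/sync-forks.sh"
    }
}
```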

Can we use Composer with sparse SVN checkout to share dependencies?

Our current development setup uses a single Subversion repository containing multiple projects, each with branches, tags, and trunk. We then use a "sparse checkout" to select the projects, and branches of those projects, to work with.
The result is that the directory structure of a working copy matches that of the repository, including branch information, and we never use svn switch. (This style of working will probably be familiar to anyone who uses SVN, but may be surprising to those who don't.)
We are thinking of using Composer to manage both external and internal dependencies, but I'm not sure how this can work with the sparse checkout style of working.
I would like some way of using a directory within the existing checkout to satisfy a dependency, rather than each "root project" needing a separate copy.
For example:
sites/Foo/trunk
depends on lib Aaa, so references lib/Aaa/trunk
depends on lib Bbb 1.5.*, so references lib/Bbb/branches/release-1.5
sites/Bar/trunk
depends on lib Aaa 1.0.*, so references lib/Aaa/branches/release-1.0
depends on lib Bbb 1.5.*, so references lib/Bbb/branches/release-1.5
At present, if I edit the code in lib/Bbb/branches/release-1.5, I can test those changes on both sites, without needing to commit one and update the other.
Is there any way of using Composer to manage these dependencies?
(PS: Please don't answer with "give up on SVN, use Git, it is teh awesomez"; that is an answer to a different question.)
No - I do not believe that you can do this with Composer as standard: it expects to copy the files from whichever source (Packagist/VCS/Zips) to the local vendor folder, which is not what you want.
That said, I believe there are two potential ways you could get this working (at least in part):
Autoloader
You could try using the autoload field in the composer.json file to include the correct files into the project. You would still need to manage the checkouts of the relevant branches/versions manually (as I assume you do now), but you would be able to manage the inclusion of the internal libraries through Composer. This will probably require that your libraries are properly namespaced. The paths to the files for each namespace are relative to the root of the project, but they can point outside the root (via ../) if required.
To be honest though, if you already have an autoloader for these files, there may not be much advantage to this solution. Relevant Docs
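A sketch of such an autoload entry, mapping namespaced internal libraries that live elsewhere in the sparse checkout (the Acme namespaces and src/ paths are made up; the lib layout follows the question's example):

```json
{
    "autoload": {
        "psr-4": {
            "Acme\\Aaa\\": "../../lib/Aaa/trunk/src/",
            "Acme\\Bbb\\": "../../lib/Bbb/branches/release-1.5/src/"
        }
    }
}
```

Because the paths point at the shared working copy, edits under lib/Bbb/branches/release-1.5 are immediately visible to every site that maps it, which matches the current workflow.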
Composer Plugin
You could also write a Composer plugin/"custom installer" that could probably manage this. This would have the advantage that you could have it manage checking out the correct parts of the sparse repository so the correct version is available, as well as doing correct wildcard version checking, but it would be a much more difficult and riskier venture.
The basic process would be that you define a new package type (e.g. 'internal-svn-package'). You would create the plugin as an external library that gets installed normally via Composer, which declares (via its composer.json) that it handles this new type of package. Your custom logic would then be used for any packages that are listed with this custom type. I'm not sure how much of Composer's internal logic for SVN checkouts you would be able to reuse, however. Relevant Docs

Drupal6 : Dev - Stage -Online for a small group of developers

I am hiring a new developer tomorrow. Until now I have worked alone; now I need to set up an environment for development and a stage → online workflow.
What are the leading tools (even paid ones) to do that?
So far I have seen WebEnabled...
You'll need some sort of version control system (VCS) for your project code. Since Drupal.org now uses Git, which is pretty good and awesome, you should too. There are several hosting solutions for Git; the most popular seems to be GitHub.
In your code repository, I recommend not putting the whole site directory, but only your own custom code. Regardless of the VCS used, here is what I put in my code repository:
A .make file used to download Drupal core, contrib modules and contrib themes and apply patches (if required)
a module folder with only the custom modules
a themes folder with only the custom themes
A build script to
run drush make on the .make file to download Drupal core and contribs to a (VCS ignored) dist folder
copy the modules folder to dist/sites/all/modules/custom
copy the themes folder to dist/sites/all/themes/custom
This allows you to:
properly track changes to your project custom code
properly track used core and contribs versions (in the .make file)
prevent core or contribs hack but allow patching when required (Drush Make requires the applied patches to be available at a publicly accessible HTTP address)
For the build script, I use Phing, but any scripting language (Ant, Bash, PHP, Ruby, etc.) could be used. With some additional work, the build script can also be used to run automated tests (SimpleTest) and code validation (php -l and Coder Review). In the end, the build script produces an updated dist folder ready for deployment.
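A bare-bones version of such a build script might look like this (the project.make file name and folder layout are assumptions; the fallback branch exists only so the overlay steps can be followed on a machine without drush):

```shell
#!/bin/sh
set -e
rm -rf dist
if command -v drush >/dev/null 2>&1; then
    # Download Drupal core and contribs (and apply patches) per the .make file:
    drush make --yes project.make dist
else
    mkdir -p dist/sites/all   # stand-in tree so the copy steps below still run
fi
# Overlay our own code on the freshly built tree:
mkdir -p dist/sites/all/modules/custom dist/sites/all/themes/custom
if [ -d modules ]; then cp -R modules/. dist/sites/all/modules/custom/; fi
if [ -d themes ]; then cp -R themes/. dist/sites/all/themes/custom/; fi
```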
For multi-developer projects, I try to have as much configuration as possible exported into code instead of stored at the database level: mainly by using exportables through the Features module, and by having a project-specific profile to define and update non-exportable configurations through its hook_install and hook_update_N implementations. See The Development -> Staging -> Production Workflow Problem in Drupal and the Code driven development: using Features effectively in Drupal 6 and 7 presentation.
There are a few options for this. There is the deployment module, which is alpha but apparently works well. Then there is plain old SVN (or even rsync). That gets the job done pretty fast and gives you the added bonus of source code management, but you need to transfer databases manually.
Last but not least, and the most powerful method of the three mentioned, is Drush.
Which one you choose depends on the time you are willing to invest in this step: short-term, they all involve a little more time than just copying a site into another folder, but once set up the task is automated, so long-term you can easily repeat the deployment, and this is where these tools will save you time.
Good-luck!

What is the Best Practices to share PHP scripts among different PHP applications?

I have a folder of PHP scripts, they are mostly utility scripts. How to share those scripts among different PHP applications so that reuse and deployment are easy?
I would have to package my app into an installer, and let the user install it.
I could ship the lib and hardcode the include path, but that means I have to change the PHP code every time I deploy the web application to a new customer. This is not desirable.
Another route I am considering is to copy the lib into the other apps; but since the lib is constantly being updated, I would need to copy it constantly, and this will introduce a lot of problems. I want an automated way to do this.
Edit: Some of the applications are Symfony, some are not.
You could create a PEAR package.
See Easy PEAR Package Creation for more information on how to do this.
This assumes that when you say anyone, you mean outside your immediate organisation.
Updated: You do not need to upload to a website to install the PEAR package. Just extract your archive into the pear folder to use in a PHP application.
Added: Why not create a new SVN repository for your library? Let's say you create a library called FOO. Inside the repository you could use the folder hierarchy trunk\lib\foo. Your modules could then go into trunk\lib\foo\modules, and you could have a file called trunk\lib\foo\libfoo.php. Now libfoo.php can include_once or require_once all the modules as required.
PHP now supports Phar archives. There's full documentation on php.net.
There's a complete tutorial on IBM website as well.
One neat thing you can do with Phar archives is package an entire application and distribute it that way.
http://php.net/phar
http://www.ibm.com/developerworks/opensource/library/os-php-5.3new4/index.html
Ahh, libraries...
There are two conflicting purposes here:
Sanity when updating scripts (ie. not breaking 10 other apps).
Keeping things in one organized logical place for developer efficiency.
I suggest you take a close look at git and git submodules
We use git submodules extensively for this very purpose. It allows the best of both worlds because shared scripts can be upgraded at will in any project, and then that change can be moved to the other projects (deliberately) when you have time to do so and test correctly.
Of course, you need to be using git to take advantage of submodules, but if you are not using git, and you start, you'll eventually wonder how you ever lived without it.
Edit: Since the original poster is using svn, consider using SVN Externals.
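For completeness, svn:externals in practice looks like this. A self-contained sketch with a throwaway file:// repository (all names invented; in real use you would also pin the external to a revision with -r for reproducible checkouts):

```shell
set -e
svnadmin create "$PWD/extrepo"
URL="file://$PWD/extrepo"
svn mkdir -q -m "layout" --parents "$URL/libfoo/trunk" "$URL/app/trunk"

# Put something in the shared library:
echo '<?php // shared code' > foo.php.tmp
svn import -q -m "add foo.php" foo.php.tmp "$URL/libfoo/trunk/foo.php"
rm foo.php.tmp

# Wire the library into the app as an external:
svn checkout -q "$URL/app/trunk" app
cd app
svn propset -q svn:externals "lib/foo $URL/libfoo/trunk" .
svn commit -q -m "share libfoo via svn:externals"
svn update -q   # fetches the external into lib/foo
```

Every checkout of app/trunk now pulls the shared library automatically on svn update.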
UPDATED:
You just have to put the lib in some place reachable by your apps (somewhere you can reach it via HTTP, FTP, HTTPS, or something else) and include it.
If you have to update it often, you can package your library in a single Phar file and then provide your clients with a function that pulls the library from some remote path and updates a parameter in their local configuration accordingly, like:
function updateLocalLibrary(){
    // read the remote library into a variable
    $file = file_get_contents($remoteLibraryRepository . $libraryPharFile);
    // give it a unique name
    $newLibraryName = $libraryPharFile . "_" . date('YmdHis');
    // store the library in a local file
    file_put_contents($localLibraryPath . $newLibraryName, $file);
    // update the configuration, letting your app point to the new library
    updateLatestLibraryPathInConfig($newLibraryName);
    // possibly delete the old lib
}
Then, in your include, you don't necessarily have to hardcode a path; you can include a parameter based on your config, like:
include( getLatestLibraryPathFromConfig() );
(You are responsible for securing the retrieval so that only your clients can see the library.)
Your conf can be in a DB, so that when you call updateLatestLibraryPathInConfig() you can perform an atomic operation and be sure no client reads dirty data.
The clients can then update their library as needed. They may even schedule regular updates.
There are a lot of options:
tar + ftp/scp
PEAR (see above #Wayne)
SVN
rsync
NFS
I recommend using continuous integration software (Atlassian Bamboo, CruiseControl): check out your repository, build a package, and then use rsync. Automatically.
You should also look into using namespaces in order to avoid conflicts with other libraries you might use. PEAR is probably a good choice for the delivery method; alternatively, you can just place the library in the standard path /usr/share/php/, or any other place that is set as the include path in your PHP settings file.
Good question, and probably one that doesn't have a definite answer. You can basically pick between two different strategies for distributing your code: Either you put commonly used code in one place and let individual applications load from the same shared place, or you use a source-control-system to synchronise between local copies. They aren't mutually exclusive, so you'll often see both patterns in use at the same time.
Using the file system to share code
You can layer the include_path to create varying scopes of inclusion. The most obvious application of this pattern is a globally maintained PEAR repository and a local application. If your IT system consists of multiple applications that share a common set of libraries, you can add a layer in between these (a framework layer). If you structure the include_path such that the local paths come before the global paths, you can use this to make local overrides of files. This is a rather crude way to extend code, since it works per-file, but it can be useful in some cases.
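For instance, a php.ini include_path (the same value can be set per-app via ini_set or .htaccess) where local paths shadow the shared layers; all directory names here are invented:

```ini
; Searched left to right: app-local overrides first, then the shared
; framework layer, then the global PEAR install.
include_path = ".:/var/www/myapp/lib:/opt/ourframework/lib:/usr/share/php"
```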
Use source-control
Another strategy is to make a lot of local checkouts of a single shared repository. One benefit over the layered-include pattern is that you can make more fine-grained local changes. It can be a bit of a challenge to manage the separation between application layers (infrastructure, framework, application). svn:externals can work, but it has some limitations. It's also slightly more complicated to propagate global changes to all applications. An automated deployment process can help with that.
