Continuous Integration and CakePHP 3.x internationalisation with multiple plugins in a project - php

In our use case we're using CakePHP 3's plugins to separate different front-ends for the same data (simplified explanation) - and because of this, we have a lot of `__('random_strings');` calls spread out in files under the following paths:
src/*
plugins/plugin_name(s)/*
I'm using the following command to extract the .pot files:
bin/cake i18n extract --app /cake/ --paths src,plugins --merge yes --output /cake/src/Locale --exclude test,vendors --overwrite --extract-core yes
We're using Jenkins with an Ant script for building our application, and we're about to choose an online web GUI for translating these .pot files - it's a requirement that translators without developer knowledge can go into that web GUI, translate the files, and somehow synchronize the resulting .po files back into the Git repo.
I'm thinking of having a Jenkins job that runs before the current one, whose only job is to run the script for generating the .pot files and then commit + push them back into our development branch - and then have the web GUI "check" these .po files for each language.
However, this feels kinda hack-ish, and some better solution might exist :-)

I ended up pulling/pushing translations from our GitHub project 3 times daily with POEditor.com's API - but I haven't yet automated the generation of the .pot files, so that's manual for now.
I don't know yet whether we'll implement a pre-commit Git hook, or a pre-production build check in Jenkins :-)
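Automating the .pot generation (the part still done manually) could be a small shell step in a dedicated Jenkins job. A sketch - the extract command and paths come from the question; the branch name and the `[ci skip]` commit convention are assumptions:

```shell
#!/bin/sh
# Sketch of a Jenkins "Execute shell" step: regenerate the .pot files and
# push them back only when they actually changed. "[ci skip]" is an assumed
# convention to keep the commit from re-triggering the build.
regenerate_pot() {
    if [ ! -x bin/cake ]; then
        echo "bin/cake not found; skipping extraction"
        return 0
    fi
    bin/cake i18n extract --app /cake/ --paths src,plugins --merge yes \
        --output /cake/src/Locale --exclude test,vendors --overwrite \
        --extract-core yes
    # Commit only if extraction changed something, to avoid empty commits.
    if ! git diff --quiet -- src/Locale; then
        git add src/Locale
        git commit -m "Regenerate .pot files [ci skip]"
        git push origin development
    fi
}
regenerate_pot
```

The `git diff --quiet` guard is the important part: without it, every run produces a noise commit even when no translatable string changed.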


Strategy to deploy versioned assets (for PHP app's assets to Amazon S3)

I currently deploy a PHP app's static assets using s3cmd.
If an asset for Amazon S3 has changed (like a JavaScript script), I'll rename it manually, so that the new version is synced and served by Amazon Cloudfront.
Are there documented practices for deploying a PHP app's static assets to Amazon S3? I've seen that sites use a hash to refer to a certain deployed version of their assets. I'm wondering what the approach is to get that hash (the git commit SHA?) referenced by the web application.
The approach I could see working is writing the current SHA to a stand-alone config file, and reading from it during deployment.
Update 1 with current process
I'm looking to make the deployment of assets more automated:
Make a change to the app's JavaScript
Rename the script from app.23.js to app.24.js
Edit the site's header HTML template to refer to app.24.js
Commit all changes to git
Merge the develop branch into master
Tag the commit with a new version number
Push code to BitBucket
Use s3cmd to synchronise the new script to Amazon S3 (and therefore the Cloudfront CDN)
SSH into server, and pull the latest git tree. The production server serves the master branch.
I would like to think there is no single answer to this question, but here are a few things.
If you only want to automate the manual work you are doing, then it might be worthwhile to look into a few deployment tools. Capistrano and Magallanes are two names that come to mind, but you can google for this and I am sure you will find a lot of options.
The Rails framework was built on the philosophy that there is a best way to do things. It also uses hashes to version static assets and does this out of the box, on the fly. You could look into implementing similar hashing in your case.
Grunt is another automation tool you can look into. I found a module that might come in handy: https://github.com/theasta/grunt-assets-versioning
I would say that, for me, steps 2, 3 and 4 are the problem areas in your workflow. Renaming manually and updating code every time does not sound too nice. As you have pointed out, Git hashes are unique, so perhaps append the Git hash to your assets during deployment and sync them to S3/CloudFront?
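Appending a hash during deployment can be sketched in a few lines of shell. Here the hash comes from the file contents rather than the commit (so an unchanged file keeps its name and stays cached), and a manifest file stands in for the manual template edit; `app.js` and `assets.manifest` are invented names for the example:

```shell
#!/bin/sh
# Version an asset by its content hash, so the name only changes when the
# content does, and record the mapping for the HTML template to read.
set -e
mkdir -p assets
echo 'console.log("hello");' > assets/app.js    # stand-in for the real script

hash=$(md5sum assets/app.js | cut -c1-8)        # first 8 hex chars are plenty
cp "assets/app.js" "assets/app.${hash}.js"
echo "app.js assets/app.${hash}.js" > assets.manifest

# The header template then reads assets.manifest instead of hard-coding
# app.24.js, and `s3cmd sync` only uploads the new assets/app.<hash>.js.
```

This replaces steps 2 and 3 of the manual workflow: no renaming by hand, and the template lookup is data-driven.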

Jenkins guide needed for build, deploy, provision and rollback, keeping 5 releases

I am pretty new to Jenkins, and have some sort of understanding, but need further guidance.
I have a PHP application on a Git repo, that uses Composer, has Assets, has user uploaded Media files, uses Memcache/Redis, has some Agents/Workers, and has Migration files.
So far I've understood that I need to create two jobs in Jenkins.
Job 1 = Build
Job 2 = Deploy
In the Build job, I set up the Git repo as source, and I set up a post-build shell script with one single line: composer update.
1) My first question relates to how/where the files are cloned. I understand there is a workspace - does everything get cloned there every time, or are only new things pulled?
2) composer update seems to load the same stuff again and again, and it looks like it's not being cached across builds. I'd love to hear opinions here, but I was expecting that on the next build it would check for changes and get only the diff. Doing a full composer update takes several minutes.
In the Deploy job, I would love to set up a process that takes the most recent stable build and moves the files to a dedicated folder like releases2, then runs some provisioning scripts and, in the end, updates the /htdocs symlink to point at the new releases2 folder, so the webserver starts serving the website from there.
3) How can I get the latest build (in the build folder I saw only a couple of log and XML files, and couldn't locate the files from Git) and move it to a fresh destination?
4) How shall I set up the destination so that I can keep media files between deploys?
5) When shall I deal with the assets (like publishing to a CDN): after a successful build, or before the deploy is finished? Should this be a pre/post hook, or a different job?
6) When shall I clear the caches (Memcache, Redis)?
7) How can I roll back to previous versions? And how can I set it up to keep the last 5 successful releases?
8) How can I get email alerts for failed builds and failed deploys?
9) How can operations get a list of recent commit messages by email, after a successful deploy?
I noticed Jenkins has a lot of plugins. I'm not sure whether these tasks are handled by those plugins, but feel free to recommend anything that gets them done. I also read about Phing, but I'm not sure what it is or where I should use it.
I understand there are a lot of questions in this topic, but if you know the answer to a few of them, please post it as an answer.
Warning: tl;dr ahead.
OK - you want it all. Lots of questions - long story.
Jenkins is "just" a continuous integration server.
Continuous integration basically means that you don't run the compilation and unit-testing step on the developer machine, but pull it over to a central server, right?
Because compilation and linking now happen on a central server, the developer has more time to develop instead of waiting for the compilation to finish. That's how this CI thing started.
Now, when looking at PHP projects, there isn't any compilation or linking process involved.
The job of a continuous integration server working on PHP projects boils down to just running the unit tests and maybe some report generation.
You can clearly see that when looking at helper projects like Jenkins-PHP, which provides a template setup for PHP projects on Jenkins - http://jenkins-php.org/example.html
The starting point is: "Jenkins does something, after you committed source."
You already have a configuration for your Git repository. It is monitored and whenever a new commit arrives, a new "build process" is triggered.
What is this "build process"?
The build process can partly be configured in the Jenkins GUI.
Partly means the focus is on the configuration of "triggers", "notifications" and "report generation". Report generation means that when certain build tools have finished their jobs, their log files are processed and turned into a better format.
E.g. when PHPUnit has finished its job, it's possible to take the code-coverage log, turn it into a nice HTML page and move it to the /www folder for public viewing.
But most of the real work of this build process is described in a build configuration file. This is where build tools like "Phing", "Ant" (the big brother of Phing) and "NAnt" (Windows) come into play.
The build tool provides the foundation for scripting tasks.
This is where your automation happens!
You will have to script the automation steps yourself.
Jenkins is just a GUI on top of that, providing some buttons for displaying the build log and reports and for re-starting a build, right.
Or in other words: you can't simply stick Jenkins and your PHP project together, hoping that you can click your build and deployment process together in the GUI.
We are not there, yet! The tools are getting better, but it's a long way.
Let's forget about Jenkins for a while. Let's focus on the build steps.
How would you build and deploy your project, when you are on the CLI only?
Do it! You might want to write down all the commands and steps involved in a simple text file.
Now, turn these steps into automation steps.
Create a "build.xml" in the root folder of your project.
<?xml version="1.0" encoding="UTF-8"?>
<project name="name-of-project">
... build steps ..
</project>
Now we need some build steps.
Build tools refer to them as "targets". A build target groups tasks.
You can execute each target on its own, and you can also chain them.
<?xml version="1.0" encoding="UTF-8"?>
<project name="name-of-project" default="build">
<target name="build">
<!-- tasks -->
</target>
<target name="deploy">
<!-- tasks -->
</target>
</project>
Rule: keep targets small - maximum 5-7 cli commands in one target.
Now let's introduce target chaining with dependencies.
Let's assume that your target "build" should run "phpunit" first.
On the CLI you would just run phpunit, then your build commands.
Inside a build config, you have to wrap such calls in exec tasks.
So, you create a "phpunit" target and add it as a dependency of the target "build".
Dependencies are executed before the target specifying them.
<?xml version="1.0" encoding="UTF-8"?>
<project name="name-of-project" default="build">
<target name="phpunit" description="Run unit tests with PHPUnit">
<exec executable="phpunit" failonerror="true"/>
</target>
<target name="build" depends="phpunit">
<!-- tasks -->
</target>
<target name="deploy">
<!-- tasks -->
</target>
</project>
A build tool like Phing provides a lot of core tasks, like chown, mkdir, delete, copy, move, exec (...) and additional tasks (for git, svn, notify). Please see the documentation of Phing http://www.phing.info/docs/master/hlhtml/index.html or Ant http://ant.apache.org/manual/
A nice thing about Phing is the possibility to write ad hoc tasks in PHP inside the build configuration file and run them. This is also possible with Ant: just build an exec task executing PHP and the script.
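To make the ad hoc task idea concrete, here is a minimal sketch in Phing's build.xml syntax - the task name, property and message are invented for the example, so check the Phing manual for the exact adhoc-task details:

```xml
<target name="adhoc-demo">
    <!-- Define a task in PHP, right inside the build file -->
    <adhoc-task name="greet"><![CDATA[
        class GreetTask extends Task {
            private $who;
            public function setWho($who) { $this->who = $who; }
            public function main() { $this->log("Hello " . $this->who); }
        }
    ]]></adhoc-task>
    <!-- ...and use it like any built-in task -->
    <greet who="operations"/>
</target>
```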
OK - let's fast-forward: you have re-created the complete build and deployment procedures inside this build configuration. You are now in a position to use the targets standalone. Now we switch back to Jenkins CI (or any other CI server) and configure it to run the build tool with the target tasks. Normally you would have a default target, called main or build, which chains all your targets (steps) together.
Now, when a new commit arrives, Jenkins starts the build process by executing the build script.
Given these pieces of information about how Jenkins interacts with a build tool,
some of your questions answer themselves. You just have to create the steps in order to get done what you want...
Let's start the Q&A round:
Q1: Jenkins workspace folder
Workspace is where the projects live. New commits arrive there.
Under "Advanced" you select a working directory for the projects without changing the Jenkins home directory. Check the "Use custom workspace" box and set the directory that Jenkins will pull the code to and build in.
It's also possible to configure the build folders there and the amount of builds to keep.
Q2: Composer
Composer keeps a local cache - it lives in $COMPOSER_HOME/cache. The local cache is used when the same dependencies are requested again, which avoids re-downloading them. If a new dependency is introduced or a version has changed, then that thing gets fetched, and it will be re-used on the next composer install or composer update.
Composer installs/updates are always fresh from the net or from the cache.
There is no keeping the vendor folder alive: dependencies are deleted and re-installed.
If it takes long, it takes long. End of story.
If it takes too long, run Composer one time, then add new build targets "zip-copy-vendor-folder" and "copy-unzip-vendor-folder". I guess you can imagine what these do.
Now you have to introduce an if-check for the zipped vendor file: if the vendor zip exists, you skip the composer install target and proceed with "copy-unzip.." .. OK, you got it. This is a tweak - do it only if your dependencies are pretty stable and far from changing often.
In general, you will need a build target "get-dependencies-with-composer", which executes composer install. Cache will be used automatically by Composer.
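The zip-vendor tweak above can be keyed to a hash of composer.lock, so the archive is only rebuilt when the locked dependency set changes. A sketch - the cache location is an assumption, and the composer command is parameterized so the caching logic can be exercised on its own:

```shell
#!/bin/sh
# Reuse a tarred vendor/ folder for as long as composer.lock is unchanged.
# CACHE_DIR and COMPOSER are assumptions; adjust them to the build node.
CACHE_DIR="${CACHE_DIR:-/tmp/build-cache}"
COMPOSER="${COMPOSER:-composer install --no-interaction --prefer-dist}"

restore_or_install_vendor() {
    mkdir -p "$CACHE_DIR"
    key=$(md5sum composer.lock | cut -c1-32)   # lock file identifies the dep set
    cached="$CACHE_DIR/vendor-$key.tar"
    if [ -f "$cached" ]; then
        echo "cache hit: restoring vendor/"
        tar xf "$cached"
    else
        echo "cache miss: installing dependencies"
        $COMPOSER
        tar cf "$cached" vendor
    fi
}
# In the Jenkins shell step: cd "$WORKSPACE" && restore_or_install_vendor
```

Because the key is derived from composer.lock, a dependency bump automatically invalidates the archive - no manual cleanup target needed.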
Q3: get the latest build and move to a fresh destination
The latest build is in the build folder - or, if you defined a step to move the file, it's already in your desired folder.
Q4: how to get media files in
Just add a build target for copying the media folders into the project.
Q5: add build targets for asset handling
You already know the position: it's "after build". That means it's a deployment step, right? Add a new target to upload your folder, maybe via FTP, to your CDN.
Q6: when shall I clear the caches (Memcache, Redis)?
I suggest going with a simple "deploy - flush cache - re-warm caches" strategy.
Hot-swapping of PHP applications is complicated. You would have to have a PHP class supporting the change of underlying components while the system runs two versions: the old version fades out of the cache, the new version fades in.
Please ask this question standalone!
This is not as easy as one would think.
It's complicated and also one of the beloved topics of Rasmus Lerdorf.
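The simple "deploy - flush cache - re-warm" strategy can be a deploy target of its own. A sketch - the warm-up URLs are invented, and `REDIS`/`CURL` are variables only so the commands can be swapped out for a dry run:

```shell
#!/bin/sh
# Flush and re-warm caches after the new release goes live.
# The URL list is hypothetical; warm the pages whose caches matter most.
REDIS="${REDIS:-redis-cli}"
CURL="${CURL:-curl -fsS}"

flush_and_rewarm() {
    $REDIS FLUSHALL                     # drop everything in Redis
    # (memcached: restart it, or send flush_all over its text protocol)
    for url in / /products /api/menu; do
        $CURL "http://localhost$url" >/dev/null || echo "warm-up failed: $url"
    done
}
# Run as the last step of the deploy target: flush_and_rewarm
```

Re-warming right after the flush keeps the first real visitors from paying the cold-cache cost.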
Q7.1: How can I roll back to previous versions?
By running the deploy target/tasks against the folder of the previous version.
Q7.2: And how can I set it up to keep the last 5 successful releases?
Jenkins has a setting for "how many builds to keep" in the build folder.
Set it to 5.
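Both Q7 answers can be combined into a small release-layout sketch: deploy into releases/<id>, switch a symlink, and prune down to the newest 5. The directory names are assumptions based on the question's releases2 and /htdocs:

```shell
#!/bin/sh
# Symlink-based releases: point htdocs at a release folder, keep only the
# newest 5, and roll back by re-pointing the link at an older folder.
RELEASES="${RELEASES:-releases}"
LINK="${LINK:-htdocs}"
KEEP=5

activate() {                 # usage: activate <release-id>
    ln -sfn "$RELEASES/$1" "$LINK"
}

prune() {                    # drop all but the $KEEP newest releases
    ls -1t "$RELEASES" | tail -n +"$((KEEP + 1))" | while read -r old; do
        rm -rf "$RELEASES/${old:?}"
    done
}

# Deploy:  activate release42 && prune
# Rollback: activate release41
```

The symlink switch is atomic from the webserver's point of view, which is what makes the instant rollback possible.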
Q8: How can I get email of failed build and failed deploy email alerts?
Automatically - email notifications are enabled by default. If I'm wrong, look into the "email" notifiers.
Q9: How can operations get a list of recent commit messages by email, after a successful deploy?
Add a build target "send-git-log-via-email-to-operations".
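Such a target's body might look like this sketch - the recipient address, subject, and the prev_deploy marker file are all assumptions, and `MAIL` is a variable so the sketch can run without a mail setup:

```shell
#!/bin/sh
# Sketch for a "send-git-log-via-email-to-operations" target.
MAIL="${MAIL:-mail -s deploy-log ops@example.com}"

send_deploy_log() {
    prev=$(cat prev_deploy 2>/dev/null)
    {
        # commits since the last recorded deploy (full log on the first run)
        git log --oneline "${prev:+$prev..}HEAD" 2>/dev/null \
            || echo "(no git history available)"
    } | $MAIL
    # remember where this deploy happened, for next time
    git rev-parse HEAD > prev_deploy 2>/dev/null || true
}
# Call it as the last step of a successful deploy: send_deploy_log
```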
I feel like I wrote a book today...
I don't have answers to all of them, but here are the few that I have:
1) My first question relates to how/where the files are cloned. I understand there is a workspace - does everything get cloned there every time, or are only new things pulled?
You're correct in your understanding that files are cloned into the workspace. However, if you want, you can set your own custom workspace in Advanced Project Options (enable "Use custom workspace"), which is just above the 'Source Code Management' section.
2) composer update seems to load the same stuff again and again, and it looks like it's not being cached across builds. I'd love to hear opinions here, but I was expecting that on the next build it would check for changes and get only the diff. Doing a full composer update takes several minutes.
I have no idea about Composer, but if this thing is also getting checked out from Git, then a shallow clone might be what you're looking for. This is present under: Source Code Management > Git > Additional Behaviours > Advanced clone behaviours.
In the Deploy job, I would love to set up a process that takes the most recent stable build and moves the files to a dedicated folder like releases2, then runs some provisioning scripts and, in the end, updates the /htdocs symlink to point at the new releases2 folder, so the webserver starts serving the website from there.
I'm not sure why you need a separate job for deployment. All that you've stated above can be accomplished in the same job, I guess. In the Build section (just above the Post-build Actions section), you can specify a script (Windows batch/bash/Perl/...) that performs all the actions required on the stable build that just got created.
3) How can I get the latest build (in the build folder I saw only a couple of log and XML files, and couldn't locate the files from Git) and move it to a fresh destination?
From your description, I'm almost sure you don't have a master-slave setup for Jenkins. In that case, the easiest way to find the location of the files fetched from Git is to check the 'Console Output' of the latest build (or any build, for that matter). In the first two or three lines of the console output, you will see the path to your build workspace. For example, in my case, it's something like:
Started by timer
Building remotely on Slave1_new in workspace /home/ec2-user/slave/workspace/AutoBranchMerge
4) How shall I set up the destination so that I can keep media files between deploys?
I'm not really sure what you're looking for. It looks like something that has to be handled by your script. Please elaborate.
5) When shall I deal with the assets (like publishing to a CDN): after a successful build, or before the deploy is finished? Should this be a pre/post hook, or a different job?
If you mean artifacts, then you should check 'Post-build Actions'. It has several options, such as 'Archive the artifacts' and '[ArtifactDeployer] - Deploy artifacts from workspace to remote repositories'. The number of such options you see in this section depends on the number of plugins you've installed.
One useful artifact-related plugin is https://wiki.jenkins-ci.org/display/JENKINS/ArtifactDeployer+Plugin
6) When shall I clear the caches (Memcache, Redis)?
Sorry, no idea about this.
7) How can I roll back to previous versions? And how can I set it up to keep the last 5 successful releases?
Previous version of what? The build is always overwritten in the workspace; only the logs of past builds remain to view. You will have to explicitly put a mechanism (script) in place to take backups of builds. Also check the Discard Old Builds section at the top of the project configuration page. There are a few plugins available too, which you can install; these will help you configure which builds to keep, which to delete, etc.
8) How can I get email alerts for failed builds and failed deploys?
This option is available in Post-build Actions. There is 'E-mail Notification', which provides basic functionality. If you need better features, I suggest you install the 'Email-ext' plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin
You can search from hundreds of plugins by going to Jenkins > Manage Jenkins > Manage Plugins > Available tab
9) How can operations get a list of recent commit messages by email, after a successful deploy?
This functionality might not be directly available through a plugin; some scripting effort will be required, I guess. This might be of some help for you: How to include git changelog in Jenkins emails?

How to deploy only modified/new files with Git+Jenkins+PHP?

I am trying to use the Jenkins CI server for my PHP application. As we are using our own Git repository, I am using Jenkins's Git plugin to take files from the central repo.
Currently, when my Jenkins job runs, it takes the files from the Git repo and makes a build, but that build contains all the files.
As per my current scenario, I only want the modified + new files in that build, so that I can deploy only them, not the whole bunch of files.
Is this possible somehow, or is it fundamentally wrong in build environments?
You need two things: The commit of the previous build and then the changed files between the previous build commit and current HEAD.
For the first: there might be ways to find the commit from Jenkins via the REST API (as it is displayed on the build page), but I think it will be easier if you put the Git commit into a file and archive it as a build artifact. Then you can use the Copy Artifact plugin to get the file from the previous build.
For the second: Maybe you can use git diff --name-status HEAD
To tie all of this together:
Set up the build to Copy artifacts from the same job, last successful build.
Assuming the file where you store the commit ID is called "commit_id", add a build step that runs something like:
git diff --name-status `cat commit_id` HEAD |
while read status file; do
    case $status in
        D)   echo "$file was deleted" ;;           # handle (or ignore) deletions here
        A|M) echo "$file was added or modified" ;; # deploy the changed file here
    esac
done
# record the current commit, to be archived for the next build
# (rev-parse is used instead of `git describe`, which requires tags)
git rev-parse HEAD > commit_id
In the post-build actions, configure the job to archive the commit_id file.
There is nothing "fundamentally wrong" with that; however, at least your release builds should be clean, full builds (not incremental).
As for "how" to do it... you have to do it yourself. I haven't seen any plugins for this. Why? Because in the majority of compiled "builds" there is no 1-to-1 relationship between a source file and the corresponding compiled file. How would the system know which new files produced which new artifacts? (In PHP it's clear; in other languages, not.)
You've got to write your own build script that would:
Parse the console log for SCM changes, or query the SCM directly.
Build
Archive/Package/Zip only the files that were changed, based on your parsing in step 1.
Deploy that subset of files.

Drupal 6: Dev - Stage - Online for a small group of developers

I'm hiring a new developer tomorrow. Until now I've worked alone, so now I need to set up an environment for development and a stage - online workflow.
What are the leading tools (even paid ones) to do that?
So far I've seen WebEnabled...
You'll need some sort of version control system (VCS) for your project code. Since Drupal.org now uses Git, which is pretty good and awesome, you should too. There are several hosting solutions for Git; the most popular seems to be GitHub.
In your code repository, I recommend not putting the whole site directory, but only your own custom code. Regardless of the VCS used, here is what I put in my code repository:
A .make file used to download Drupal core, contrib modules and contrib themes and apply patches (if required)
a modules folder with only the custom modules
a themes folder with only the custom themes
A build script to
run drush make on the .make file to download Drupal core and contribs to a (VCS ignored) dist folder
copy the modules folder to dist/sites/all/modules/custom
copy the themes folder to dist/sites/all/themes/custom
This allows you to:
properly track changes to your project custom code
properly track used core and contribs versions (in the .make file)
prevent core or contrib hacks, but allow patching when required (Drush Make requires the applied patches to be available at a publicly accessible HTTP address)
For the build script, I use Phing, but any scripting language (Ant, Bash, PHP, Ruby, etc.) could be used. With some additional work, the build script can also be used to run automated tests (SimpleTest) and code validation (php -l and Coder Review). In the end, the build script produces an updated dist folder ready for deployment.
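For illustration, the build steps listed above could be sketched in plain shell instead of Phing. The .make file name is an assumption, and drush is guarded so the script degrades gracefully where it isn't installed:

```shell
#!/bin/sh
# Sketch of the build script described above: drush make into dist/, then
# copy the custom modules/themes in. Folder names follow the answer.
build_dist() {
    if command -v drush >/dev/null 2>&1; then
        drush make --yes project.make dist   # core + contribs per the .make file
    else
        echo "drush not installed; reusing existing dist/"
    fi
    mkdir -p dist/sites/all/modules/custom dist/sites/all/themes/custom
    cp -R modules/. dist/sites/all/modules/custom/ 2>/dev/null || true
    cp -R themes/.  dist/sites/all/themes/custom/  2>/dev/null || true
}
build_dist
```

Because dist/ is regenerated from the .make file plus the repo's custom folders, it never needs to be committed - exactly the "VCS ignored" setup the answer describes.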
For multi-developer projects, I try to have as many configurations as possible exported into code instead of stored at the database level: mainly by using exportables through the Features module, and by having a project-specific profile that defines and updates non-exportable configurations through its hook_install and hook_update_N implementations. See The Development -> Staging -> Production Workflow Problem in Drupal and the Code driven development: using Features effectively in Drupal 6 and 7 presentation.
There are a few options for this. There is the Deployment module, which is alpha but apparently works well. Then there is plain old SVN (or even rsync); that gets the job done pretty fast and gives you the added bonus of source code management, but you need to transfer databases manually.
Last but not least, and the most powerful method of the three mentioned, is Drush.
Whatever you choose depends on the time you are willing to invest in this step: short-term, they all involve a little more time than just copying a site into another folder, but once set up the task is automated, so long-term you can easily repeat the deployment - and this is where these tools will save you time.
Good-luck!

Why have a "build/" folder with a PHP project and Phing

What is the benefit of having a "build/" folder where all the sources are placed and "built"?
Maybe it's a silly question, but I'm trying to understand continuous integration with PHP. Every example of build.xml for Phing uses such a build/ folder, but what's the sense in that for PHP, where a checked-out project doesn't require compilation, only basic configuration? Copying it all into build/ just seems to complicate things, because you'll have duplicated files and one more folder in the web root path (if you'd like to have a web UI to run Selenium tests on).
In particular, I need Phing for two cases:
1) letting a new user set up his first installation (or update an old one), right on a working copy
2) running unit/functional tests, phpcc, phpcs, phpdoc, etc. (all of that usually on a CI server)
Should I have a "build/" folder for the second task? What is the best practice for PHP?
There are several good reasons to have a build directory: deployment to multiple environments, performing text replacements, minimizing and combining CSS and JS, optimizing images, handling config files, etc.
However, these may not apply to your use cases. There is no rule saying you need this directory; depending on your approach to testing and deployment, it may or may not be worth keeping.
