Include git commit information inside a project - PHP

I am working on a PHP project. Is it possible to integrate Git into the project so that the footer shows info from the last commit?
I am not using a build framework.

I'd use a so-called Git hook to achieve this. Hooks are scripts that Git executes automatically on specific actions. The one you're interested in is the post-commit hook, a script that runs automatically after every commit. Here's how to add one:
Create a script .git/hooks/post-commit:
#!/bin/sh
git log -1 --format=%cd --date=local > version.txt
Make it executable:
chmod +x .git/hooks/post-commit
That's it. Now after every commit, your version.txt will be updated with the date of your last commit.
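In the footer, PHP can then read that file and print it. A minimal sketch, assuming version.txt sits next to the footer script (the helper name and fallback are my own choices):

```php
<?php
// Hypothetical helper: return the last-commit date written by the post-commit
// hook, or a fallback when version.txt does not exist (e.g. a fresh checkout).
function last_commit_date(string $file = __DIR__ . '/version.txt'): string
{
    if (is_readable($file)) {
        return trim(file_get_contents($file));
    }
    return 'unknown';
}

echo '<footer>Last updated: ' . htmlspecialchars(last_commit_date()) . '</footer>';
```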

There are multiple options for doing that, but I think the best one is to do it during your build.
If you're using something like Jenkins, Bamboo or any other CI/CD system, you can create a task that retrieves the git commit and saves it in a location readable by the application. This could be:
Set it as part of your configuration file
Set it as an environment variable that you can read
Save it in a file and read it for each request
...
Really, the options are endless, but it depends on the way you're building your project and how it's being deployed.
If you don't use a build system, maybe you should be using one then!
Still, you have the option to get the latest commit by executing something like:
echo exec('git rev-parse --short HEAD');
Which will give you the short commit hash. I'd really recommend using one of the alternative options though.
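If you do go the `exec()` route, you can at least avoid shelling out on every request by caching the result. A rough sketch, assuming a writable cache file path (function name and TTL are made up):

```php
<?php
// Hypothetical: cache the short hash for an hour so git isn't run per request.
function short_commit_hash(string $cache = '/tmp/commit_hash', int $ttl = 3600): string
{
    if (is_file($cache) && (time() - filemtime($cache)) < $ttl) {
        return trim(file_get_contents($cache));
    }
    $hash = trim((string) exec('git rev-parse --short HEAD'));
    if ($hash !== '') {
        file_put_contents($cache, $hash);
    }
    return $hash;
}
```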

Actually, after searching, I found something like this that works without a build step:
echo exec('git log -1 --format=%cd --date=local');
That will display the date and time of the last commit.


How would I programmatically determine all the currently existing dirs in a GitHub dir page?

I have this: https://github.com/bitcoin-core/guix.sigs/tree/main/22.0
In PHP, I'm trying to grab a list of all subdirs in that dir, as it exists at that given moment. For all I know, they sometimes remove and add (or even rename) the existing ones. In other words: [ '0xb10c', 'CoinForensics', 'Emzy', ... ];
What would be the best way to accomplish this?
Do I really have to cURL-fetch the webpage (Github/Microsoft loves blocking my bots) and then try to parse them out from the absolute clusterduck of HTML code?
Do they really not provide this list of "independent verifiers" as some sort of computer-parseable list somewhere?
My ultimate goal is to be able to fetch all of their verifications for the current version of Bitcoin Core, such as: https://raw.githubusercontent.com/bitcoin-core/guix.sigs/main/22.0/fanquake/all.SHA256SUMS, and compare them with the "official" one on BitcoinCore.org; if they don't all match, I will not install the new update. To be able to do this, I need to know the list of "users" to construct the URLs to fetch.
I don't understand why they always seem to actively make one step impossible or near-impossible to automate, even in highly technical and security-related contexts where it makes no sense. I really hope that I'm missing something obvious.
The best option is probably to sparsely clone the remote repository locally, then scan the local filesystem for directory changes.
Do the following once to set up a sparse clone:
git init guix.sigs
cd guix.sigs
git remote add -f origin https://github.com/bitcoin-core/guix.sigs
git config core.sparseCheckout true
echo "22.0/" >> .git/info/sparse-checkout
Now do the following at the start of each 'run' where you want to see what directories have updated:
git pull origin main
You can then look in guix.sigs/22.0 for any changes.
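From PHP, that checkout can then be scanned for subdirectories. A minimal sketch (the helper name is made up; the path is wherever you cloned guix.sigs):

```php
<?php
// Hypothetical helper: list the names of all subdirectories of $path.
function list_subdirs(string $path): array
{
    $dirs = [];
    foreach (scandir($path) as $entry) {
        if ($entry !== '.' && $entry !== '..' && is_dir($path . '/' . $entry)) {
            $dirs[] = $entry;
        }
    }
    sort($dirs);
    return $dirs;
}

// e.g. list_subdirs('guix.sigs/22.0') might return ['0xb10c', 'CoinForensics', 'Emzy', ...]
```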

Jenkins guide needed for build, deploy, provision and rollback, keeping 5 releases

I am pretty new to Jenkins, and have some sort of understanding but need guidance further.
I have a PHP application on a Git repo, that uses Composer, has Assets, has user uploaded Media files, uses Memcache/Redis, has some Agents/Workers, and has Migration files.
So far I understood I need to create two jobs in Jenkins.
Job 1 = Build
Job 2 = Deploy
In the Build job, I set up the Git repo as source, and I set up a post-build shell step that has a single line: composer update.
1) My first question relates to how/where the files are cloned. I understand there is a Workspace; does everything get cloned there every time, or are only new things pulled?
2) composer update seems to load the same stuff again and again, and it looks like it's not being cached across multiple builds. I'd love to hear opinions here, but I was expecting that on the next build it would check for changes and get only the diff. Doing a full composer update takes several minutes.
In the Deploy job, I would love to setup a process that takes the most recent stable build, and moves the files to a dedicated folder like releases2. Then runs some provision scripting and in the end, it updates the /htdocs folder symlink to the new releases2 folder, so the webserver starts to serve from this folder the website.
3) How can I get the latest build (in the build folder I saw only a couple of log and xml files, couldn't locate the files from git) and move to a fresh destination.
4) How shall I setup the destination, so that I can keep Media Files between different deploys.
5) When shall I deal with the Assets (like publishing to a CDN) after successful build, and before deploy is finished. Shall this be a pre/post hook, or a different job.
6) When shall I clear the caches (memcache, redis).
7) How can I rollback to previous versions? And how can I setup to keep last 5 successful releases.
8) How can I get email of failed build and failed deploy email alerts?
9) How can operations get a list of recent commit messages by email, after a successful deploy?
I noticed Jenkins has a lot of plugins. Not sure if these tasks are handled by those plugins, but feel free to recommend anything that gets them done. I also read about Phing, but I'm not sure what it is or where I should use it.
I understand there are lots of questions in this topic, but if you know the answer to a few of them, please post it as an answer.
Warning: tl;dr
OK - you want it all. Lots of questions - long story.
Jenkins is "just" a continuous integration server.
Continuous integration basically means that you don't run the compilation and unit-testing step on the developer machine, but pull it over to a central server, right?
Because compilation and linking now happen on a central server, the developer has more time to develop instead of waiting for compilation to finish. That's how this CI thing started.
Now, when looking at PHP projects, there isn't any compilation or linking process involved.
The job of a continuous integration server working on PHP projects boils down to running the unit tests and maybe some report generation.
You can see that clearly when looking at helper projects like Jenkins-PHP, which provides a template setup for PHP projects on Jenkins - http://jenkins-php.org/example.html
The starting point is "Jenkins does something after you committed source".
You already have a configuration for your Git repository. It is monitored and whenever a new commit arrives, a new "build process" is triggered.
What is this "build process"?
The build process can partly be configured in the Jenkins GUI.
Partly means the focus is on configuring "triggers", "notifications" and "report generation". Report generation means that when certain build tools have finished their jobs, their log files are processed and turned into a better format.
E.g. when phpunit has finished its job, it's possible to take the code-coverage log, turn it into a nice HTML page and move it to the /www folder for public viewing.
But most of the real work of this build process is described in a build configuration file. This is where build tools like "Phing", "ant" (the big brother of Phing) and "nant" (Windows) come into play.
The build tool provides the foundation for scripting tasks.
This is where your automation happens!
You will have to script the automation steps yourself.
Jenkins is just a GUI on top of that, providing some buttons for displaying the build.log and reports and for restarting a build.
Or in other words: you can't simply stick Jenkins and your PHP project together, hoping that you can click your build and deployment process together on the GUI.
We are not there, yet! The tools are getting better, but it's a long way.
Let's forget about Jenkins for a while. Let's focus on the build steps.
How would you build and deploy your project, when you are on the CLI only?
Do it! You might want to write down all the commands and steps involved in a simple text file.
Now, turn these steps into automation steps.
Create a "build.xml" in the root folder of your project.
<?xml version="1.0" encoding="UTF-8"?>
<project name="name-of-project">
... build steps ..
</project>
Now we need some build steps.
Build tools refer to them as "targets". A build target groups tasks.
You can execute each target on its own, and you can also chain them.
<?xml version="1.0" encoding="UTF-8"?>
<project name="name-of-project" default="build">
<target name="build">
<!-- tasks -->
</target>
<target name="deploy">
<!-- tasks -->
</target>
</project>
Rule: keep targets small - a maximum of 5-7 CLI commands per target.
Now let's introduce target chaining with dependencies.
Let's assume that your target "build" should run "phpunit" first.
On the CLI you would just run phpunit, then your build commands.
Inside a build config you have to wrap such calls into exec tasks.
So, you create a "phpunit" target and add it as a dependency of the target "build".
Dependencies are executed before the target that specifies them.
<?xml version="1.0" encoding="UTF-8"?>
<project name="name-of-project" default="build">
<target name="phpunit" description="Run unit tests with PHPUnit">
<exec executable="phpunit" failonerror="true"/>
</target>
<target name="build" depends="phpunit">
<!-- tasks -->
</target>
<target name="deploy">
<!-- tasks -->
</target>
</project>
A build tool like Phing provides a lot of core tasks, like chown, mkdir, delete, copy, move, exec (...) and additional tasks (for git, svn, notify). Please see the documentation of Phing http://www.phing.info/docs/master/hlhtml/index.html or Ant http://ant.apache.org/manual/
A good thing about Phing is the possibility to write AdhocTasks in PHP inside the build configuration file and run them. This is also possible with ant: just build an exec task executing PHP and the script.
OK - let's fast-forward: you have re-created the complete build and deployment procedures inside this build configuration. You are now in a position to use the target commands standalone. Now we switch back to Jenkins CI (or any other CI server) and configure it to run the build tool with the target tasks. Normally you would have a default target, called "main" or "build", which chains all your targets (steps) together.
Now, when a new commit arrives, Jenkins starts the build process by executing the build script.
Given these pieces of information about how Jenkins interacts with a build tool, some of your questions answer themselves. You just have to create the steps that get done what you want...
Let's start the Q&A round:
Q1: Jenkins workspace folder
Workspace is where the projects live. New commits arrive there.
Under "Advanced" you select a working directory for the projects without changing the Jenkins home directory. Check the "Use custom workspace" box and set the directory that Jenkins will pull the code to and build in.
It's also possible to configure the build folders there and the amount of builds to keep.
Q2: Composer
Composer keeps a local cache - it lives in $COMPOSER_HOME/cache. The local cache is used when the same dependencies are requested, which avoids re-downloading them. If a new dependency is introduced or a version changes, that thing gets fetched once and is then re-used on composer install and composer update.
Composer installs/updates are always fresh from the net or the cache.
There is no keeping the vendor folder alive: dependencies are deleted and re-installed.
If it takes long, it takes long. End of story.
If it takes too long, run Composer once, then add new build targets "zip-copy-vendor-folder" and "copy-unzip-vendor-folder". I guess you can imagine what these do.
Then introduce an if-check for the zipped vendor file: if the vendor zip exists, you skip the composer-install target and proceed with "copy-unzip..". OK, you got it. This is a tweak - do it only if your dependencies are pretty stable and far from changing often.
In general, you will need a build target "get-dependencies-with-composer", which executes composer install. Cache will be used automatically by Composer.
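The vendor-zip tweak above could be sketched in Phing roughly like this (target, property and file names are made up, not a tested setup):

```xml
<target name="check-vendor-zip">
    <available file="vendor.zip" property="vendor.zip.present"/>
</target>

<!-- runs only when no vendor.zip exists yet -->
<target name="get-dependencies-with-composer" depends="check-vendor-zip"
        unless="vendor.zip.present">
    <exec executable="composer" failonerror="true">
        <arg value="install"/>
    </exec>
</target>

<!-- runs only when vendor.zip exists -->
<target name="copy-unzip-vendor-folder" depends="check-vendor-zip"
        if="vendor.zip.present">
    <unzip file="vendor.zip" todir="."/>
</target>
```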
Q3: get the latest build and move to a fresh destination
The latest build is in the build folder - or, if you defined a step to move the file, it's already in your desired folder.
Q4: how to get media files in
Just add a build target for copying the media folders into the project.
Q5: add a build targets for asset handling
You already know the position: it's "after build". That means it's a deployment step, right? Add a new target to upload your folder, maybe via FTP, to your CDN.
Q6: when shall I clear the caches (memcache, redis)
I suggest to go with a simple: "deploy - flush cache - rewarm caches" strategy.
Hot-swapping PHP applications is complicated. You have to have a PHP class supporting the change of underlying components while the system runs two versions at once. The old version fades out of the cache, the new version fades in.
Please ask this question standalone!
This is not as easy as one would think.
It's complicated and also one of the beloved topics of Rasmus Lerdorf.
Q7.1: How can I rollback to previous versions?
By running the deploy target/tasks in the folder of the previous version.
Q7.2: And how can I setup to keep last 5 successful releases.
Jenkins has a setting for "how many builds to keep" in the build folder.
Set it to 5.
Q8: How can I get email of failed build and failed deploy email alerts?
Automatically. Email notifications are a default. If I'm wrong, look into the "email" notifiers.
Q9: How can operations get a list of recent commit messages by email, after a successful deploy?
Add a build target "send-git-log-via-email-to-operations".
I feel like I wrote a book today...
I don't have answers to all of them, but here are the few I have:
1) My first question relates to how/where the files are cloned. I understand there is a Workspace; does everything get cloned there every time, or are only new things pulled?
You're correct in your understanding that files are cloned into the workspace. However, if you want, you can set your own custom workspace under Advanced Project Options (enable "Use custom workspace"), which is just above the 'Source Code Management' section.
2) composer update seems to load the same stuff again and again, and it looks like it's not being cached across multiple builds. I'd love to hear opinions here, but I was expecting that on the next build it would check for changes and get only the diff. Doing a full composer update takes several minutes.
I have no idea about Composer, but if this is also being checked out from Git, then a shallow clone might be what you're looking for. This is found under: Source Code Management section > Git > Additional Behaviours > Advanced clone behaviours
In the Deploy job, I would love to setup a process that takes the most recent stable build, and moves the files to a dedicated folder like releases2. Then runs some provision scripting and in the end, it updates the /htdocs folder symlink to the new releases2 folder, so the webserver starts to serve from this folder the website.
I'm not sure why you need a separate job for deployment. All that you've stated above can be accomplished in the same job, I guess. In the Build section (just above the Post-build Actions section), you can specify a script (Windows batch/bash/perl/...) that performs all the actions required on the stable build that was just created.
3) How can I get the latest build (in the build folder I saw only a couple of log and xml files, couldn't locate the files from git) and move to a fresh destination.
From your description, I'm almost sure you don't have a master-slave setup for Jenkins. In that case, the easiest way to find the location of the files fetched from Git is to check the 'Console Output' of the latest build (or any build, for that matter). In the first two or three lines of the console output, you will see the path to your build workspace. For example, in my case it's something like:
Started by timer
Building remotely on Slave1_new in workspace /home/ec2-user/slave/workspace/AutoBranchMerge
4) How shall I setup the destination, so that I can keep Media Files between different deploys.
I'm not really sure what you're looking for. It looks like something that has to be handled by your script. Please elaborate.
5) When shall I deal with the Assets (like publishing to a CDN) after successful build, and before deploy is finished. Shall this be a pre/post hook, or a different job.
If you mean artifacts, then you should be checking 'Post-build Actions'. It has several options such as 'Archive the artifacts', '[ArtifactDeployer] - Deploy artifacts from workspace to remote repositories' etc... The number of such options you see in this section depends on the number of plugins you've installed.
One useful artifact-related plugin is https://wiki.jenkins-ci.org/display/JENKINS/ArtifactDeployer+Plugin
6) When shall I clear the caches (memcache, redis).
Sorry, no idea about this.
7) How can I rollback to previous versions? And how can I setup to keep last 5 successful releases.
Previous version of what? The build is always overwritten in the workspace; only the logs of past builds remain viewable. You will have to explicitly put a mechanism (script) in place to take backups of builds. Also check the Discard Old Builds section at the top of the project configuration page. There are a few plugins available too which you can install; these will help you configure which builds to keep, which to delete, etc.
8) How can I get email of failed build and failed deploy email alerts?
This option is available in Post-build Actions. There is 'E-mail Notification', which provides basic functionality. If you need better features, I suggest you install the 'Email-ext' plugin.
https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin
You can search from hundreds of plugins by going to Jenkins > Manage Jenkins > Manage Plugins > Available tab
9) How can operations get a list of recents commit messages, after a successful deploy by email.
This functionality might not be directly available through a plugin. Some scripting effort will be required, I guess. This might be of some help to you: How to include git changelog in Jenkins emails?

How to deploy only modified/new files GIT+Jenkins+PHP?

I am trying to use the Jenkins CI server for my PHP application. As we are using our own Git repository, I am using Jenkins's Git plugin to take files from the central repo.
Currently when my Jenkins job runs, it takes files from the Git repo and makes a build, but that build contains all the files.
In my current scenario I only want the modified+new files in that build, so that I can deploy only them, not the whole bunch of files.
Is this possible somehow, or is it fundamentally wrong in build environments?
You need two things: The commit of the previous build and then the changed files between the previous build commit and current HEAD.
For the first: there might be ways to find the commit from Jenkins via the REST API (as it is displayed on the build page), but I think it will be easier if you put the git commit into a file and archive it as a build artifact. Then you can use the Copy Artifact plugin to get the file from the previous build.
For the second: you can use git diff --name-status &lt;previous-commit&gt; HEAD
To tie all of this together:
Set up the build to Copy artifacts from the same job, last successful build.
Assuming the file where you store the commit id is called "commit_id", set a build step to run something like:
git diff --name-status "$(cat commit_id)" HEAD |
while read -r status file; do
    case $status in
        D)   echo "$file was deleted" ;;           # deploy (or ignore) file deletion here
        A|M) echo "$file was added or modified" ;; # deploy file modification here
    esac
done
# record this commit so it can be archived for the next build
git rev-parse HEAD > commit_id
In the post build actions, configure the job to archive the file commit_id.
There is nothing "fundamentally wrong" with that; however, at least your release builds should be "clean full" builds (not incremental).
As for "how" to do it... you have to do it yourself. I haven't seen any plugins for this. Why? Because in the majority of compiled "builds", there is no 1-to-1 relationship between a source file and the corresponding compiled file. How would the system know which new source files produced which new artifacts? (In PHP it's clear; in other languages, not.)
You've got to write your own build script that would:
Parse the console log for SCM changes, or query the SCM directly.
Build
Archive/Package/Zip only the files that were changed, based on your parsing in step 1.
Deploy that subset of files.
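Step 3 above can be sketched as a small shell function (the function name and the tar format are my own choices, not part of any plugin):

```shell
# Rough sketch: archive only the files added or modified between commit $1
# and the current HEAD, into the tarball named by $2.
pack_changed_files() {
    prev="$1"
    out="$2"
    git diff --name-only --diff-filter=AM "$prev" HEAD | tar czf "$out" -T -
}
```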

Git - Few git repos clones and what not (PHP - framework Lithium, ORM Doctrine 2)

I'm fairly new to git (and VCS in general), so I need help with this case.
I want to start working on a new project, which will be built using php lithium framework, and doctrine 2.
Case:
I have a main project git repository, and now I want to add (clone) lithium framework inside, from github.
Next, I need to clone li3 extension for doctrine 2 (it automatically clones itself and doctrine 2).
Questions:
Is this the right way? (I suppose not.)
How do you manage cloning inside an existing repository (especially the second part, with the li3 extension and Doctrine 2)?
Thanks in advance.
In git there is no such thing as "cloning inside an existing repository" (well, technically there is, but let's not make this more complicated than needed). What you describe sounds like you want to use the Lithium framework and Doctrine as libraries.
Normally you do not need to put external libraries into your repository. You only need to do this if you plan to modify the library code and put it under version control.
But you should first think about what you would like to do: integrate them into the repository or not. I think the latter is the easier option.
You just create your own git repository first. Then you exclude the library folders that you don't want to have under version control. That way you can keep things apart quite easily from the beginning.
To set this up, first create your project on disk w/o git. Create the file system and directory layout. Then initialize the git repository within the project's main directory. That's just calling git init inside that directory.
Git will now show the status of all files you have in there when you type git status. Before you do the first commit you can use the methods described in gitignore(5) Manual Page to exclude the libraries and (perhaps configuration files of your IDE) that you do not want to have inside the git repository.
You can always check if the configuration you're editing matches your need by checking the output of git status.
Keep in mind that git ignores empty directories, so if there is a folder you don't want to be added, it will start to show in the status only if it contains at least a file.
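A .gitignore for such a layout might look roughly like this (the libraries/ paths are assumptions about where you put Lithium and Doctrine):

```
# third-party code managed outside this repository
/libraries/lithium/
/libraries/li3_doctrine2/

# IDE configuration
/.idea/
*.swp
```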
When all the files you don't want to have under version control have disappeared from the status listing, you can do your first commit: git commit -m "Initial Commit.".
Depending on whether you have configured git yet, it may give you an error about your name and email. The error message points you to what you need to do. It's just that each commit has an author name and email, which is useful.
And that's it already. Check the web for the commands:
git init
git status
git commit
It's quite easy with git help &lt;command&gt;, like git help init. It takes some time to learn git, so you should probably create a test repository to play around with. Once you've learned the commands and gotten used to them (in case of doubt, google your problem), it's supercool to use.

Can I use subversion in such a way that "commit" automatically updates my server files?

I would like to know if there is any way to make it so that when I commit changes using Subversion, that the changes are automatically reflected on my server in the testing folder? Perhaps this idea is irrational. I'm struggling to get a good big-picture view here. If I kept the subversion repository on the server, would that be reasonable?
You could set something up as simple as a cron job to check for changes every minute or so.
However in my experience, I've found it more flexible to set up a continuous integration / build server like TeamCity or Hudson with a job that checks the svn repository every minute or so for updates. If there are updates you can "deploy" them to the testing directory. The advantage of going this route is that you could automate additional tasks such as restarting the web server and/or running unit tests and only updating on success, etc.
Also, it is not entirely necessary for the subversion server to be on the same server as your environment that needs updating.
A post-commit hook should help you: I'm managing a website in my repository. How can I make the live site automatically update after every commit?
I would like to know if there is any way to make it so that when I commit changes using Subversion, that the changes are automatically reflected on my server in the testing folder?
Yes, certainly. What you want is called a post-commit hook -- see this link for the relevant SVN FAQ.
The general strategy is to have a working copy on your server. When you commit, the script gets called, and its job is to cause your server's working copy to initiate an svn update.
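A minimal sketch of installing such a hook, run from the repository root on the server (the /var/www/testing working-copy path is an assumption about your layout):

```shell
# Create the post-commit hook; Subversion invokes it with
# $1 = repository path and $2 = revision number.
mkdir -p hooks
cat > hooks/post-commit <<'EOF'
#!/bin/sh
# Update the server's working copy so the testing folder reflects the commit.
svn update /var/www/testing --non-interactive
EOF
chmod +x hooks/post-commit
```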
An alternative (and for me, preferable) is to use a post-build push after a successful build on a CI server. There is no sense in putting code on a server that does not do the right thing.
Try svnsync?
If that isn't interesting, you could do something custom with incron & rsync pretty easily
