I'm not a PHP or Moodle developer. I worked as a Python developer for many years and now work as a DevOps engineer.
One of my clients uses the Moodle framework for their site, with no source control. I've spoken with their lead developer, and he insists there's no way to have a Moodle repo without the entire directory structure in the repository (that is, all the auth, admin, backup, badges, etc. directories), since many files in those directories have been touched by their development team.
I did a file count and it's over 50K files, which is insane for a code repo.
Has anyone managed to solve this problem for a moodle site before? Specifically a clean CI process using source control?
I have been through a similar process on several occasions. Your best bet is to clone a clean copy of Moodle from the github repo. Then look at the version.php file on the client site to identify the exact version they are using. Next checkout that same version in the clean copy (use gitk to search for that version number). Finally copy across the code from the client site and then the standard git commands should allow you to audit what has changed and commit it in sensible steps.
Once cleaned up, keep all the changes in a branch.
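A rough command-line sketch of that audit process (the paths and the release tag are placeholders; the real tag has to match the version in the client's version.php):

git clone https://github.com/moodle/moodle.git moodle-audit
cd moodle-audit
# Check out the release the client is actually running, on its own branch
git checkout -b client-changes v3.1.2
# Overwrite the clean tree with the client's files, then let git show what changed
rsync -a --delete --exclude=.git /path/to/client-site/ .
git status    # lists modified, added and deleted files
git add -A
git commit -m "Import client customisations on top of the matching Moodle release"

From there you can split the changes into sensible commits before merging anything.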
I'm still new to coding and I'm learning everything on my own. This may be a silly question for you, but after reading a dozen articles I am still confused.
I have a php based website on a shared host. After reading the various articles on benefits of using repositories and Composer, I decided to give it a try. These are my difficulties so far:
Which operating system version of Composer should I download to be able to install/update packages on my cPanel-based shared hosting?
If I install the Windows version, how do I connect to my shared hosting to install/update the packages?
My apologies for my silly questions, but it would really help.
If you are using shared hosting, you are unlikely to be able to use Composer on the host itself. Furthermore, you are not encouraged to use Composer "on production".
I would recommend you use Composer locally (on the O/S of your local machine), to compose your project and install your dependent packages. Once it's all working and tested with your own code, you upload your entire development directory tree including the resulting vendor library - as one big FTP/SCP upload of "flat files".
Once you get more advanced you could adventure into automated deployment techniques, but I feel for now you would be best to stick to using Composer as a local development tool to manage your codebase.
Update, further details:
Composer is really a tool to help you manage your codebase in development. It's not intended as a "deployment" tool. Previously you would find a library you liked, download it, unzip it into your codebase somewhere random like "lib/stuff", link to it, and commit it into your version control system (VCS). OK, but then a year later you want to update it, and you have to download it again, figure out where you saved it and how to overwrite the files, or delete old ones... it gets hard. Also your VCS repository fills up with 3rd-party components - even duplicates of the same one! Composer solved this by bringing order to this long-term dependency management chaos.
The reason you don't want to run Composer "on production" (i.e. your live website) is that during the download/update/composition process your website will probably be broken. Even if the composer process works, this could mean several minutes of a broken site. And after the update has finished, you have a completely new set of 3rd-party packages: how do you know they are compatible with your codebase?
So you only run composer updates locally, test everything, amend your code to work with the shiny new updates, and only then decide to upload the whole new site to the server - just as if you'd cobbled it all together manually. The deployment is independent.
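A minimal sketch of that local-then-upload cycle (the host, user and paths are placeholders):

# On your local machine: install or update the dependencies and test the site
composer install        # or "composer update" when you deliberately want newer packages
# ... run your tests / click through the site locally ...
# Then upload the whole tree, vendor/ included, to the shared host
rsync -av --exclude='.git' ./ user@yourhost:/home/user/public_html/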
In my project the deployable version needs to have a copy of each of the external libs, a different config file, and install and setup files; for security reasons, the main project is set to refuse to run if they are present. Thus the upstream copies of the other projects need to be committed to the repo. How can I work on code running on localhost, where the file layout and sometimes the file contents used in dev and testing differ from what I need to commit?
Background
I am working on a project hosted on GitHub and my main IDE is NetBeans, which has imperfect Git support (good enough for >99% of my needs). The project is in PHP and uses several other projects as libraries.
As Netbeans does not have the best support for sub-repos I have chosen to keep each additional project in a separate project. This is fine as the central project looks at the config data for where to find these outside libs.
Half an answer
My instinct is to suppose that there will need to be some "build stage" prior to committing to the github repo but how on earth do I go about setting all that up?
I could write some sort of homebrew thing, but then when I pull other people's contributions I would need to reverse the process, unless we had a branch for builds and a branch for working copies, which seems needlessly complex and could leave the devs' config data on public display (not to mention updates being a mess).
I have seen that others have wrestled with somewhat similar problems to no conclusion (at time of asking) (How to push and pull from github without sharing sensitive information? Smudge & clean?) so I am looking for anything that might help me come up with a solution
my main IDE is netbeans which has imperfect git support
Most devs just use the command line. I switch to the NetBeans conflict resolver occasionally, which is very good, but for normal stuff the console is usually faster.
My instinct is to suppose that there will need to be some "build stage" prior to committing to the github repo
... unless we had a branch for builds and a branch for working copies
No, there is only ever one repository. It is better to think of your repo as your code history, rather than your deployment state. Branches should just be for features or large changes, which merge into your mainline/master.
There are a good number of options available to you when deploying. The first is Composer, which Mark points out: when deploying, you issue an install or update command, which fetches the dependencies that satisfy your library requirements recursively. You can use Bower to do the same thing for your JavaScript dependencies.
Some deployment strategies prefer to build locally and then scp/rsync to a remote server. Composer and Bower are still probably a good idea, but you write a build script (using Ant or Phing, for example) to create a build copy in a local temporary folder, and then send it to the server. It is common here also to push it to a new release folder on the server, and then swap a symlink or Apache config file when it's ready to go live.
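As a rough sketch of that build-and-swap approach (the build script, paths and release naming are assumptions, not a fixed recipe):

# Build into a local temporary folder (e.g. a Phing or Ant target that copies the app and runs composer install)
./build.sh /tmp/build
# Push the result to a new release folder on the server
rsync -av /tmp/build/ user@server:/var/www/releases/20160115/
# Swap the symlink that the Apache DocumentRoot points at
ssh user@server 'ln -sfn /var/www/releases/20160115 /var/www/current'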
the deployable version needs to have a copy of each of the external libs, a different config file and install and setup files, for security concerns
Assuming this is a web project, have you tried adding your sensitive environment data to your Apache configuration file? This can be trivially read in PHP, and of course PHP does not care that this information is different according to whether you are developing, testing, demoing a branch or operating live.
Further reading: an excellent PHP deployment book, free of charge, that suggests Phing and Capistrano.
I know this has been asked before, but I couldn't get the answer I needed.
Currently I'm developing a website using PHP, and I was using Notepad++ before. It all worked well because I'm developing with a co-worker, so we both keep changing different files on the FTP.
I switched to NetBeans. All went OK: I pulled the entire website via FTP to my local computer, and every time I edited a file and saved it, it was uploaded to the FTP. But there is a problem. If my colleague updates a file, it doesn't update in my local folder. So I thought: "Let's try versioning".
Created a team on bitbucket, created a repository. All went ok.
But now I'm struggling to get everything up and running on both NetBeans installations (mine and my colleague's), so that my colleague can edit a file in his NetBeans and keep saving it to the FTP, and only push it to Bitbucket once he stops working on that file, so that I can pull it afterwards.
Suggestions?
About setting up your work environment:
In order to set up your bitbucket repository and local clone, go read this link (official doc).
You will need to repeat the cloning part once for each PC (e.g : once on yours, once on your colleague's).
Read the account management part to see how you can tag your actions with your account, and your colleague's action with his own account.
Start using your git workflow; when you are tired of always typing your password to upload modifications to your Bitbucket account, take the time to read the SSH keys setup part. Read it carefully: you will need to execute the procedure once for you and once for your colleague.
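A condensed command-line sketch of those setup steps (the repository name, user name and email are placeholders; the official documentation linked above remains the reference):

# Clone the Bitbucket repository (once on each PC)
git clone git@bitbucket.org:yourteam/yoursite.git
cd yoursite
# Tag your actions with your own identity (your colleague does the same with his)
git config user.name "Your Name"
git config user.email "you@example.com"
# Generate an SSH key and add the public part to your Bitbucket account settings
ssh-keygen -t rsa -b 4096 -C "you@example.com"
cat ~/.ssh/id_rsa.pub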
Using your local git repository with NetBeans is pretty straightforward:
From netbeans, run the File > New Project ... command (default: Ctrl+Shift+N),
Select PHP application with Existing Sources and click Next >,
For the Sources Folder: line, select your local git directory,
Fill in the remaining fields (and, if you want, the last Run configuration screen), then click Finish.
After the project is created in NetBeans, you can modify the Run configuration part by right-clicking on the project's icon, selecting the Properties menu entry, and going to the Run configuration item.
About solving your workflow "problem":
Your current FTP workflow can lead you to blindly squash your colleague's modifications (when uploading), or have your colleague's modification blindly squash your own local modifications (when downloading). This is bad, and you will generally notice it only after the bad stuff happened - too late.
Correctly using version control allows you to be warned when this could potentially happen, and to keep an almost infinite undo stack on the modifications of the project's files. The cost, however, is that both of you will have to add several actions to your day-to-day workflow - some choices cannot be made automatically.
You may find it cumbersome in the beginning, but it really pays off, and quite quickly - we're talking big bucks here. So use it and learn.
On top of using Ctrl+S to save your modifications on disk, you and your colleague will need to integrate 3 extra commands into your daily work:
Save your work to your local repository (git add / git commit)
Download the latest modifications shared by your colleague (git pull)
Upload your work to the central repository (git push)
You can access these commands:
from a terminal,
from a GUI frontend: you can try TortoiseGit for Windows, or gitk for Linux,
from NetBeans:
in the contextual menu of the files/folders in the project tree (right click on the item, there is a "Git" entry),
using the Team > Git > ... menu
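For reference, the same daily cycle from a terminal might look like this (the commit message and branch name are just examples):

git add -A                          # stage your local modifications
git commit -m "Describe the change" # save them to your local repository
git pull origin master              # download the latest modifications shared by your colleague
git push origin master              # upload your work to Bitbucket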
Since you added the git tag, I'll describe what to do for Git.
set up a remote bare repo on a server that you both could access (BitBucket in your case):
http://git-scm.com/book/en/Git-on-the-Server-Getting-Git-on-a-Server
you both clone that remote repo to your local machines:
http://git-scm.com/book/en/Git-Basics-Getting-a-Git-Repository#Cloning-an-Existing-Repository
each of you works in her part of the application. When one is done, publish the work to the server:
http://git-scm.com/book/en/Git-Basics-Working-with-Remotes#Pushing-to-Your-Remotes
By now, the remote server holds the version that was just pushed. What's missing is the deployment of the website. This has been discussed here:
Using GIT to deploy website
Doing so, you will decouple your work from that of your colleague, since you're no longer changing files over FTP all the time. You work on your part, your partner works on her part. The work gets merged, and then a new version of the website gets published.
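One common pattern from that linked question is a bare repository on the web server with a post-receive hook that checks the pushed code out into the web root. A minimal sketch, assuming hypothetical paths:

# On the web server (run once): create a bare repository to push to
git init --bare /home/user/site.git
# Contents of /home/user/site.git/hooks/post-receive (make it executable with chmod +x):
#!/bin/sh
GIT_WORK_TREE=/var/www/site git checkout -f master
# Back on your machine: add the server as a second remote and deploy with a push
git remote add live ssh://user@server/home/user/site.git
git push live master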
You can create Git or Mercurial repositories in Atlassian Bitbucket (http://bitbucket.org). If your team is new to version control, I advise against using forks in your first project.
The easy solution is to use Atlassian SourceTree (http://www.sourcetreeapp.com/) to manage your code, since there is a bug in NetBeans. See NetBeans + Git on BitBucket
You need to create a new repository in Bitbucket. I assume you have already configured your SSH keys. Using Git you need:
git init && git add -A && git commit -m "Initial import"
git remote add origin git@bitbucket.org:<your-account>/php_project.git
git push -u origin master
Using Mercurial you need:
hg init
hg add
hg commit -m "Initial import"
hg push https://bitbucket.org/<your-account>/php_project
Good luck / boa sorte
Pedro
Basically I've got various projects all version controlled using Subversion. This is for many reasons: backup of files in case of bugs/issues in the future; backup of files in case of local system failure; collaboration from others in the company; etc.
One of the systems we work with is WordPress, which does updates and installs plugins through its administration panel and such; plus, on installing it, the system creates various files (including a wp-config.php file and a .htaccess file). This means that on install there are files on the server integral to the running of the system which aren't on the local systems and aren't in svn. Plus any installed plugins and updates aren't mirrored in version control or the local copy.
Plus it feels wrong (specifically when you compare with data normalisation in databases and such) to be working with two copies of the same code - one in version control and one on the server.
So my question is am I using the tools in the right way? Is there any way that the public_html folder from the server can "point" to the latest version in the repo? Or can SVN be configured to read from the public_html folder and automatically add+commit any files created/edited on the server?
Or do people just literally download anything that gets changed/created and add them to SVN manually? Or do people not care? Maybe I've misinterpreted what SVN is for? I'm using it for backup effectively.
Thanks
Tom
I have only versioned my own WordPress theme. All the other stuff, including the data, lives on the server and is solely backed up from there.
The code of WordPress and of the plugins used is developed elsewhere; they have their own repositories, and I do not clutter mine with code I will never touch.
The question is how to deal with configurations. I am currently running a wiki where I document all the plugins installed live and what configuration properties I have set up.
A sync of live to local then goes like this:
Update wordpress version and plugins to the versions written in the wiki
Setting all configuration options as written in the wiki.
Importing the database (except wp_options). Converting the static URLs of wp-content files to the local scheme.
Synchronisation of the wp-content directory
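A rough sketch of such a sync from the command line (host names, database names and URLs are placeholders; the URL rewrite uses wp-cli so that serialized values are handled correctly):

# Dump the live database, leaving out wp_options, and import it locally
ssh user@livehost "mysqldump --ignore-table=livedb.wp_options livedb" > live.sql
mysql localdb < live.sql
# Convert the live URLs to the local scheme
wp search-replace 'https://www.example.com' 'http://localhost/mysite'
# Sync the wp-content directory
rsync -av user@livehost:/var/www/site/wp-content/ ./wp-content/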
In many cases your hosting provides regular backups. But if you use a VPS you have more freedom to do whatever you want. I have put my public_html folder under version control and created a small script to commit every night. So I have a complete version history of my site with changes traced. You can also create a script just to copy this folder elsewhere. There may be other, better solutions for enterprises, but this may be enough for a small project.
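A minimal sketch of such a nightly commit script using SVN (the paths are placeholders; a Git version would look much the same):

#!/bin/sh
# nightly-commit.sh: snapshot public_html into the repository every night
cd /home/user/public_html
svn add --force . > /dev/null       # pick up any files created on the server
svn commit -m "Nightly snapshot $(date +%F)"
# crontab entry: 0 3 * * * /home/user/bin/nightly-commit.sh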
I've read a number of topics in the same sort of ballpark as this one, but in all honesty I'm still not exactly sure of the best approach (as a starting point). I am a solo developer in a small office and I have around 30 websites which are hosted on a Linux VPS. I want to start using version control (probably SVN) and also set up a staging server. At the moment, I do development either locally on my machine before using FTP to upload to the live server, or occasionally, for small changes, I edit the remote files directly, which is not an ideal approach.
I'm looking for some guidance on how to improve my development environment. I imagine I should be installing SVN on the web server, which would then allow me to check out versions to my local machine (which would also require SVN, I think). Also, if I want to set up a staging server, should I just set up subdomains for each of the live websites, then use these subdomains for showing clients changes to the site before making them live?
Hope this makes sense!
This is what we do at work:
We have a staging server running Apache and a Subversion server. We have a post commit hook that updates a working copy in the htdocs directory, that way, when a developer commits something it automatically gets updated on the staging server, so everyone can see the latest code.
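A minimal post-commit hook along those lines might look like this (the htdocs path and log file are assumptions):

#!/bin/sh
# hooks/post-commit in the Subversion repository: refresh the staging working copy after every commit
REPOS="$1"
REV="$2"
/usr/bin/svn update /var/www/htdocs --non-interactive >> /var/log/staging-update.log 2>&1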
On the client's production servers (the ones we can control) we have the Subversion client installed and the website is a working copy. When we need to update the live site we login to a shell and run svn up. If you do something like this, make sure to limit access to the .svn directories, either with .htaccess files or from the main Apache config.
We have a custom app that manages the projects, but that is only because we're lazy and don't want to setup each project by hand, the app creates the necessary directories and working copies. You could write a quick script to do this.
We never, ever, edit files via FTP on the live site. All in all we have been using this setup for almost 2 years, and aside from the occasional conflict on the staging server, we have never had any problems.
You can actually install the SVN server on your local machine, which I would recommend in lieu of installing it on the web server (assuming you make backups). The easiest thing to do, since it’s only you using it, would be to use the file:// protocol, but using svnserve is a little more robust, and the preferred method if you want to take the time to do it.
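A quick sketch of both options (the repository path is a placeholder):

# Create a repository on your local machine
svnadmin create /home/you/svnrepo
# Option 1: the file:// protocol, simplest when it's only you on one machine
svn checkout file:///home/you/svnrepo mysite-wc
# Option 2: run svnserve instead; the same repository is then reachable as svn://localhost/
svnserve -d -r /home/you/svnrepo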
#Michael, I disagree - I would say it's better to install on the linux vps, especially if you are already paying for the hosting service. I find it very helpful to be able to browse and download stuff from my svn repo wherever I am, from whatever computer I'm on.
#nicky, I started with svn (and version control) several years ago and I took baby steps which made it easier to tackle.
If I had to do it over again, I'd read the svn book to start with. The book is very well laid out and didn't take more than 1-2 days to plow thru.
While you're reading, install svn on your linux vps with an apache front end.
Once you have that up, pick one of your websites and import it into svn. This is how I structure my svn repo. For example, say my repo is hosted at http://mysvn.mydomain.com/svn/:
mywebsite1
  - trunk
  - tags
  - branches
mywebsite2
  - trunk
  - tags
  - branches
Don't worry about creating the perfect structure. It's pretty easy to re-organize especially when you're starting out. After you import a few projects into svn, you'll start to get a feel for which projects should have their own "trunk/tags/branches" dir structure and which can be combined.
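For example, creating that layout for one site and importing its code might look like this (the repository URL follows the example above; local paths are placeholders):

# Create the trunk/tags/branches layout for one site
svn mkdir --parents -m "Layout for mywebsite1" \
    http://mysvn.mydomain.com/svn/mywebsite1/trunk \
    http://mysvn.mydomain.com/svn/mywebsite1/tags \
    http://mysvn.mydomain.com/svn/mywebsite1/branches
# Import the existing code into trunk
svn import /path/to/mywebsite1 http://mysvn.mydomain.com/svn/mywebsite1/trunk -m "Initial import of mywebsite1"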
For creating test environments, I do exactly what you describe. I use build scripts to check out from svn and download files into dirs that are mapped to subdomains like "test.clientsite.com" (I work primarily in Java and use Ant and Maven, but I think you can use whatever scripting language you're familiar with).
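A bare-bones version of such a build script in shell (the subdomain, document root and repository URL are assumptions):

#!/bin/sh
# Refresh the test environment for one client site
SITE=mywebsite1
svn checkout http://mysvn.mydomain.com/svn/$SITE/trunk /var/www/test.clientsite.com
# (add any config rewriting or asset download steps your site needs here)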
Once you get used to version control, you'll never go back, good luck!