Need a solution for a development environment and version control - php

I will try to describe what I'm looking for as best as I can.
I have a team of developers working on different sites all written with PHP.
For the example, they are now working on a site mysite.com
I want them to have something like a git solution where the master will be the staging area but I will be able to access the different branches and see them.
Let's give an example.
Dan is working on mysite.com
He has a local development environment which he forked from staging.mysite.com (the master).
Dan is creating a branch called dan-branch-1 and another one called dan-branch-2.
Both branches need to be available at the addresses staging.mysite.com/dan/dan-branch-1 and staging.mysite.com/dan/dan-branch-2.
By "available" I mean a browsable website: a copy of staging.mysite.com at each branch's address, where my client will be able to see the changes we made for him.
After the changes are approved, Dan will merge the branches back to the master.
Finally at some point we will upload the master to the production server.
Is there such a solution that can help me with that?
Thanks !

After a lot of searching I found the exact solution to my issue.
It's called GitLab and it offers everything I looked for.
The interface seems a bit old-fashioned and the learning curve seems steep, but it is a complete solution, so I'm going to learn it and try it.

NetBeans has remote server options; you can try those.
Start a new NetBeans PHP project with remote server access, or for an existing project right-click it, open Properties > Run Configuration, and enter the remote server details: FTP credentials and folder path.
If two developers are working on the same file they have to synchronize manually; otherwise the code merges on upload, so no time is spent merging and the application is up to date at all times.
Thanks.

Related

Deploy Laravel PHP apps to development and production server

I'm trying to figure out a way to deploy my company intranet PHP apps automatically, using GitLab, but so far, I'm not sure if the options that I've found on the internet will do the job.
Here's the scenario:
VM1: Remote Git server, that uses GitLab to administrate projects repos.
VM2: Development server. Each project has its own development server.
VM3: Production server. Each project has its own, as well.
Developers: Each developer uses a vagrant box, based on the project's development server.
What I want to do:
Whenever a developer pushes commits to the development branch, a hook on the remote server must update the development server tree with the latest commit on that branch.
The same must occur when the master branch is updated, but it must update the production server tree.
However, since we use Laravel, we must run some extra console commands using Artisan, depending on each case.
We are following Vincent Driessen's Git Branching Model.
What I know so far:
I know that GitLab supports web hooks, but I'm not sure that will work in this case: I would need to expose a custom URL, which doesn't sound very safe. But if it's the only solution, I can write a script to handle just that.
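For example, something along these lines - just a sketch, where the token check, paths, and artisan calls are placeholders I'd still have to adapt:

<?php
// Sketch of a GitLab web hook receiver - the token, the branch name,
// and the paths/commands below are placeholders, not a finished setup.
if (!isset($_GET['token']) || $_GET['token'] !== 'some-long-secret') {
    http_response_code(403);
    exit('Forbidden');
}
$payload = json_decode(file_get_contents('php://input'), true);
$ref = isset($payload['ref']) ? $payload['ref'] : ''; // e.g. "refs/heads/develop"
if ($ref === 'refs/heads/develop') {
    // update the development tree, then run the Laravel-specific steps
    shell_exec('cd /var/www/dev && git pull origin develop 2>&1');
    shell_exec('cd /var/www/dev && php artisan migrate --force 2>&1');
}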
The other possible solution is to use Jenkins, but I'm not an expert and I don't know yet whether Jenkins is too much for my case, since I'm not using unit tests yet.
Do you guys have implemented a solution that could help in my case? Can anyone suggest an easy and elegant way to do that?
Thanks guys! Have a nice one
We do it the following way:
Developers check out whatever Git branches, and as many branches as they want/need, locally (Debian in VMware on a Mac)
All branches get pulled to the dev server whenever a change is pushed to Git. They are available at feature-x.dev.domain.com, feature-y.dev.domain.com etc., running against the dev database.
Release branches are manually checked out on live system for testing, made available on release-x.test.domain.com etc. against the live database (if possible, depending on migrations).
We've semi-automated this using our own scripts (a sketch follows below).
Database changes are made manually, due to their sensitive nature. However, we don't find that a hassle after getting used to migrations - we just remember to note the alterations. Cloning databases locally for each branch that needs changes supports this well, and an automated schema comparison quickly helps if changes to a migration file have been forgotten.
(The second point is the most productive one, making instant testing on the dev platform available to everyone as soon as the first commit of a new branch is pushed)
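The scripts are specific to our setup, but the per-branch checkout at their core could be sketched like this (the bare repo path, web root, and domain scheme are assumptions, not our actual code):

<?php
// Hypothetical per-branch checkout sketch: give every branch its own
// browsable directory. Paths are placeholders.
$repo    = '/var/git/project.git';   // bare repository (assumed)
$webRoot = '/var/www/branches';      // one subdirectory per branch

exec("git --git-dir=$repo branch --list", $branches);
foreach ($branches as $branch) {
    $branch = trim($branch, " *\t");  // strip the current-branch marker
    if ($branch === '') {
        continue;
    }
    $target = $webRoot . '/' . $branch;
    if (!is_dir($target)) {
        mkdir($target, 0755, true);
    }
    // force-checkout this branch's tree into its own web directory
    exec("git --git-dir=$repo --work-tree=$target checkout -f $branch");
}

A wildcard vhost that maps feature-x.dev.domain.com onto the matching subdirectory of the web root then makes each branch browsable.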
I would suggest keeping things simple and working with git, hooks and remote repositories.
Pulling out heavy guns like Jenkins or GitLab for this task could be a bit too much.
I see your request as the following: "git after push and/or git after merge: push to remote repo".
You could setup "bare" remote repositories - one for "dev-stage", one for "production-stage".
Their only purpose is to receive pushes.
Each developer works on his feature-branch, based on the development branch.
When the feature-branch is ready, it is merged back into the main development branch.
Both trigger a "post-merge" or "post-receive" hook, which executes a script.
The executed script can do whatever you want.
(Same approach for production: When the dev-branch has enough new features, it is merged to prod branch - triggers merge event - scripts...)
Here you want two things:
You want to push a specific branch to a specific remote repo.
In order to do this, you have to find out the specific branch in your hook script.
That's tricky, but solvable; see: https://stackoverflow.com/a/13057643/1163786 (writing a "git post-receive hook" to deal with a specific branch)
You want to execute additional steps for configuration/setup, like artisan, etc.
You might add these steps directly or as triggers to the hook script.
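For illustration (this is not the code from the linked answer): since a git hook can be any executable, a post-receive hook written in PHP that reacts to a single branch might look like this, with the work tree path and extra steps as assumptions:

#!/usr/bin/env php
<?php
// Hypothetical post-receive hook sketch. Git feeds one
// "oldrev newrev refname" line per updated ref on stdin.
while (($line = fgets(STDIN)) !== false) {
    list($oldrev, $newrev, $refname) = explode(' ', trim($line), 3);
    if ($refname !== 'refs/heads/develop') {
        continue; // ignore every other branch
    }
    // populate the dev web root from the bare repo this hook lives in
    shell_exec('GIT_WORK_TREE=/var/www/dev git checkout -f develop 2>&1');
    // additional setup steps, e.g. Laravel's artisan (placeholder)
    shell_exec('cd /var/www/dev && php artisan migrate --force 2>&1');
}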
I think this request is related to internal and external deployment via git.
You might also search for tutorials, like "deployment with git", which might be helpful.
For example: http://ryanflorence.com/deploying-websites-with-a-tiny-git-hook/
http://git-scm.com/book/en/Git-Basics-Working-with-Remotes
http://githooks.com/ & https://www.kernel.org/pub/software/scm/git/docs/githooks.html
If you prefer to keep things straightforward and don't mind using paid third-party options, check out one of these:
http://deploybot.com/
https://www.deployhq.com/
https://envoyer.io/
Alternatively, if you fancy shifting to an integrated solution, I haven't used anything better than Beanstalk.

How do I synchronise two servers for changes in files as well as database?

I have a PHP/MySQL desktop server. I also installed PHP/MySQL on my laptop and on one of my friends' laptops for developing a project. Is there any way that we can synchronise all the changes (i.e. to the PHP files as well as database changes, whether structural or data-wise) we make on our laptops to the desktop PC?
For sure you want to start using Git, possibly with a free private repository somewhere like www.bitbucket.org. Within your versioning, you can back up a version of your SQL from the phpMyAdmin tool, and clone the entire repository every time you want to move it.
There are a few choices:
https://github.com/axkibe/lsyncd#readme will sync in real time and uses inotify.
And if you want to go real nerd, http://www.drbd.org/ offers high-availability clustering.
The best way would be to purchase a VPS from a company like Digital Ocean for $5.00 a month. You can then set up a MySQL database that you can both easily connect to. In addition, you will be able to use it to test and deploy (smaller) projects to the server.
Forgot to address the other part of your question. You will be able to set up git/SVN/CVS on this server as well. There are good tutorials online for this. You can both access and commit your changes to this repository. You could also use a website like GitHub for this version control.
Probably the easiest way (if you just want the files synced as is on each change) would be to use something like Google Drive or a similar service. Though if you want revision history and more advanced tools, you'd probably need some sort of version control repository (git, svn, etc.).
However, I definitely recommend version control if that works for you. And if you go with git, I highly recommend Bitbucket, as they have free closed-source hosting for teams of up to 10 people (last time I checked).
Along with a version control system (such as SVN, Git, or Mercurial), you can't go wrong with a tool such as FreeFileSync (free as in freeware) for a quick and easy way to push the changes you make to a project to a centralized location. Once your project is in a VCS repository, the VCS you choose will give you control over how your project is synchronized.
This has helped me a lot over the years and has even made the process faster.

How do you take your project from development to production?

Good day to you all,
I am currently developing a project on Laravel. So far I have always developed online, directly editing my files on the webserver through FTP (using PSPad or similar simple editing tools).
What I want to do now (and what I believe most people actually do) is set up a (W)LAMP stack on my local machine and program locally. However, it is a little bit unclear to me how to keep my local code (including databases) in sync with the live website. How do you folks do that? I know there are probably lots of ways and tools to do that, but what would be your advice for a best practice? Any advice would be very welcome :)
What many companies do is build offline, then push their edits up to a server using git.
I'm no expert on the software, so I'll describe what you do in a basic form:
My advice would be to create an online repo (repository) to store your project while you edit/update.
There are several git project management systems, such as GitHub or Bitbucket. I personally use Bitbucket.
With git, once you have built or added what you need offline on your local (W)LAMP stack, you git push the changes up to your repo or server. The changed files then get merged with the existing ones on the repo or the server. If you'd like the most recent version of your project, you simply git pull it down.
Read the full documentation here to see the wide range of options available when using git
We have a settings array within our platform available as $res::Config.
At runtime, a variable is changed from 'dev' to 'live' after checking the HTTP host (or, depending on the setup, the IP address).
Within our framework bootstrapping, depending on the value of $res::Config->$env, or the environment set previously as either dev or live, the settings for the database connection are set. You store these settings in the Config array as db_live or db_dev.
However you do it, use an environment variable to figure out whether you want live or dev, and set up an array of settings accordingly.
We also have sandbox and staging environments for intermediate development stages.
As for version control, use git or subversion.
Edit: It's also possible that within our vhost file we set an environment variable as either live or dev, and our application reads from this accordingly. I'd suggest this approach :)
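A stripped-down sketch of that pattern - the setting names, hosts, and credentials here are invented, not our platform's actual $res::Config layout:

<?php
// Hypothetical environment detection: prefer a vhost-set variable,
// fall back to the HTTP host. All names and hosts are placeholders.
// Apache vhost line (assumed): SetEnv APP_ENV live
$env = getenv('APP_ENV');
if ($env === false) {
    $env = ($_SERVER['HTTP_HOST'] === 'www.example.com') ? 'live' : 'dev';
}
$config = array(
    'db_live' => array('host' => 'db.example.com', 'name' => 'app',     'user' => 'app'),
    'db_dev'  => array('host' => 'localhost',      'name' => 'app_dev', 'user' => 'dev'),
);
$db = $config['db_' . $env]; // the connection settings for this environment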
There are a number of ways of doing this. But this is a deceptively HUGE question you've asked.
Here is some good practice advice - go and research these items, then have a look at my approach.
Typically you use a process called version control, which allows you to create "versions" or snapshots of your system.
The commonly used "SVN" software is good, but the new (not really, any more) kid on the block is Git, and I personally recommend that.
You can use this system to push the codebase live in a controlled fashion. While the files/upload feature is essentially similar to FTP, it allows you to dump a specific version of your site live.
In environments where there are multiple developers, this is ideal - you can compare/test and work around each other, and version control tends to stop errors between devs.
So - advice part 1: Look up and understand version control, then use it to release CODE to the live environment.
Part 2: I use database dumps and farm them back to my machine to work with.
If the live database needs updating, I can work locally and simply export, then re-import on the live system.
For example: on a recent Moodle project I worked on, to refresh the whole database took seconds... I could push a patch and database update in a few minutes.
However: you should think about maintenance and scheduling... if the site is live and has ongoing data changes then you need to be careful with this. Consider adding a maintenance page.
Advice 2: go research SQL dump/export and importing.
I personally use phpMyAdmin to dump and re-import, as it's very convenient.
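The command-line equivalent, wrapped in PHP so it could run as a scheduled refresh script - credentials, database names, and the file path are all placeholders:

<?php
// Hypothetical dump/re-import sketch; everything here is a placeholder.
// Dump the live database to a file:
shell_exec('mysqldump -u deploy -psecret live_db > /tmp/live_db.sql');
// ...transfer the file to the target machine, then re-import it:
shell_exec('mysql -u deploy -psecret local_db < /tmp/live_db.sql');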
Advice 3: Working locally then pushing live is MUCH BETTER PRACTICE. You're starting down a much safer and better road than you're on!
Hope that helps... but bear in mind - this is a big subject, so you'll need to research a fair bit.

Wordpress Development VS Production Environments

I am about to use WordPress as CMS with a lot of customization, but my question is, how should I sync development to production?
Assuming WordPress is not used, a typical development cycle is to develop the pages locally, and create the DB scripts. Then when everything is ready, it is posted to the site. And then again, more db and code changes, then only the changes are updated/applied, and so on.
Now, with WordPress, you get all the great features (one example is blogging, comments, almost ready CMS ...etc). However deployment is a pain! Ideally, I would rather keep my typical development cycle described above. Therefore I am going to be implementing WordPress locally (from wordpress.org) and then pushing the changes to my production server.
So, assuming that I create a new page in WordPress locally (I will never create pages on the server, all locally, I will find a way to disable wp-admin for the server), a number of files are created. This is not a problem so far. HOWEVER, if I want to add content to that newly created page, that content is saved to my local database. Even though that content is a database change, it is considered (from my point of view) a new change that should be pushed to server rather than add that content via the live server (because that content is considered static, it is not a blog post or a comment, it is a static page).
Now, that new page content is saved to the DB, and therefore, the DB will have changes done on my local machine that I should push to the server along with the files that I will FTP to the server.
My questions are:
Is this approach correct? If not, what do you suggest?
What is the best way to find database diffs? What is a tool to use? Does MySQL Workbench provide something like that? I intend to use that tool to find diffs and then generate an update script for the DB. The reason for this question is I normally make the changes myself, and I know how to track them, but now, those DB changes are generated by WordPress and I need to reverse engineer them to find out which changes are made.
Assuming I got step 2 to work, is there anything in that script that should be modified? Such as server names? Does WordPress hard-code server names for example?
So to summarize and give you more information about my development environment, I use XAMPP for development on Windows 7, with PHP and MySQL setup. I also use Mercurial for source control. As mentioned above, I will use WordPress as part of the solution and I intend to use it to help me create a CMS solution. I will use it locally for page generation, and disable that feature for online (keeping online for blog posts and similar entries only). I am doing that so as to keep things in-sync. If I create a page locally, some data is saved to the DB. Now, how do I sync/upload?
Thanks.
OK, after further investigation, here is what I concluded.
All theme development should be version-controlled
All plugin development should be version-controlled
Content of pages and posts is not part of the development process; it is content and should only be backed up.
This way, you do not need to worry about DB changes ...etc.
Hope this helps everyone.
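One wrinkle when restoring that backed-up content on another host: WordPress stores absolute URLs in the database (the siteurl and home options). You can pin them per environment in wp-config.php rather than editing the dump - the hosts below are placeholders:

<?php
// In wp-config.php: override the database-stored URLs per environment.
define('WP_HOME',    'http://localhost/mysite');
define('WP_SITEURL', 'http://localhost/mysite');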
You might use a version control system? What OS is the development on, e.g. Windows or Linux? And what is the production OS? I use http://serverpress.com for my testing environment, though there are others: WAMP, LAMP, etc.

Proper Way To Use Git/GitHub - PHP System with Dev/Testing/Production servers

I apologize if this is obvious or easy, I have looked at a good number of git/github tutorials and read other articles, but I want to make sure what I'm doing is right.
I want to incorporate VC (for obvious reasons) into my development team and process.
Current development process (using Dreamweaver):
* Receive a ticket (or work order)
* Download file on Development server
* Make changes to the file
* Upload file back to development server
* Changes tested/verified
* Send to production server
I'm trying to figure out how to make our new development process with using Git.
I am switching over to PhpStorm (which is an actual PHP IDE with direct integration with Git).
Would it be something like
* Receive a ticket (or work order)
* Checkout/Update/Download file(s)
* Change Files
* Upload file (which I assume is also the current working directory...?)
* At the end of the day, do a commit
* Have build script send data to testing server (nightly build)
Or would it be better to do something like
* Receive a ticket (or work order)
* Checkout/Update/Download file(s)
* Change Files
* Upload file/commit
* Have build script send data to testing server (nightly build)
Or is there another way? I'm having a bit of trouble understanding what the optimal flow would be.
Any help would be greatly appreciated.
Edit
I'm trying to see if it is best to have a version of the server locally (for every developer), and if so, how does that work if you have 7 or so branches?
If not, how do you deal with 7 or so branches on the web? Do you FTP files up, or use Git hooks to make them auto-update?
Update 07/26/2012
After working successfully with Git for quite a while now I've been following this branching model with great success:
A Successful Git Branching Model
The answer to the above was yes -- should definitely have a local version of the server.
Assuming you have a live server and a development server I would do something along these lines.
Before even starting with a development cycle I would at least have two branches:
Master - the development server runs on this branch
Stable - the live server runs on this branch.
So if a developer gets a ticket or a work order he/she will perform the following actions:
git pull origin master
git branch featureBranch (named as the ticket id or as a good description for the work order)
git checkout featureBranch
Make changes which will accomplish the desired change. Commit as often as is necessary. Do this because you will create valuable history. For instance you can try an approach to a problem and if it doesn't work, abandon it. If a day later you see the light and want to re-apply the solution, it is in your history!
When the feature is fully developed and tested locally, checkout master.
git merge featureBranch
git push origin master
Test the pushed changes on your development server. This is the moment to run every test you can think of.
If all is working out, merge the feature or fix into the stable branch. Now the change is live for your customers.
Getting the code on the server
The updating of servers shouldn't be a problem. Basically I would set them up as users, just like your developers are. At my company we've set up the servers as read-only users. Basically that means the servers can never push anything but can always pull. Setting this up isn't trivial though, so you could just as well build a simple web interface which only allows a git pull. If you can keep your developers from doing stuff on live implementations, you're safe :)
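Such a web interface really can be minimal. A sketch, with the token and path as placeholders:

<?php
// Hypothetical pull-only deploy page - the server's git user can
// only ever pull, never push. Token and path are placeholders.
if (!isset($_GET['token']) || $_GET['token'] !== 'change-me') {
    http_response_code(403);
    exit('Forbidden');
}
// updates whichever branch is checked out (master on testing, stable on live)
$out = shell_exec('cd /var/www/site && git pull 2>&1');
echo '<pre>' . htmlspecialchars((string) $out) . '</pre>';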
[EDIT]
In response to the last questions asked in the comments on this answer:
I don't know if I understand your question correctly, but basically (simplified a bit) this is how I would do it, were I in your shoes.
The testing machine (or the webroot which acts as the testing implementation) has its source code based in a git repository with the master branch checked out. While creating this repository you could even remove all references to all other branches, so you can be sure no one can check out a wrong branch in this repository. So basically the testing machine has a Git repository with only a master branch, which is checked out.
For the live servers I would do exactly the same, but this time with the stable branch checked out. Developers should have a local clone of the repository in which all branches exist, and a local implementation of the software you guys build. This software gets its source from the local git repository - in other words, from the currently checked-out branch in this repository.
Actual coding
When a new feature is wanted, a local feature branch can be made based on the current master. When the branch is checked out the changes can be made and checked locally by the developer (since the software is now running on the source of the feature branch).
If everything seems to be in order, the changes get merged from feature branch to master and pushed to your "git machine". "your github" so to speak. Testing can now pull the changes in so every test necessary can be done by QA. If they decide everything is ok, the developer can merge the changes from master to stable and push again.
All that's left now is pulling from your live machines.
