(cutting a few quotations of myself to keep things readable)
I sure don't follow you there. If there has been no integration (no merge or rebase), there really isn't a need for an integration test.
Well, that may be the core of the misunderstanding. CI is about integrating every meaningful change (commit) as soon as you have it. If you don't do that, your integration is not continuous.
Ok, I'll also use that term: 'meaningful change'. It seems to imply that some changes are not meaningful. We are not applying any rule here as to what counts as meaningful, leaving that to the developer shop. It makes sense that a change that is not meaningful is not worth putting through integration alongside meaningful changes.
Any CI tool that can run a job from one branch of a vcs can certainly be configured to run multiple jobs, each from a different branch. Any branch that CI runs from would be an 'integration' branch. This applies to git, svn, whatever. In svn, run a CI job on trunk and any number of 'branches' that have ongoing development. Same with git.
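As a sketch of that point, 'a CI job per branch' is really just the same build run against each branch. Everything here is a self-contained illustration with assumed names (the branches `trunk` and `release-1.x`, and a trivial `build.sh` standing in for the real job), not anything from a specific CI tool:

```shell
#!/bin/sh
# Self-contained sketch: one CI 'job' per branch is the same build step
# run against each configured branch. All names here are illustrative.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name  dev
git checkout -qb trunk
printf 'echo building %s\n' '"$1"' > build.sh   # stand-in for the real build
git add .; git commit -qm "trunk"
git branch release-1.x                          # a second branch under development

# the 'CI server': run the job once per configured branch
for branch in trunk release-1.x; do
    git checkout -q "$branch"
    sh build.sh "$branch"
done
```

The same loop works whether the branches live in a git repo or in svn's `trunk` and `branches/` directories; only the checkout command changes.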
Indeed, there are no problems with setting up a job against a branch, or several jobs against several branches. The problem is how you get an up-to-date branch that integrates all changes from all committers. With svn or another centralized vcs that happens naturally: people commit to a shared trunk, so every commit is integrated and available for the CI tool to pull from the remote repo.
To be more precise, only meaningful changes are committed into a shared branch ('trunk' is merely the name of a branch in svn; by community convention, not by anything in the svn software, it is the 'next release' branch). Users of git can also have a central repo with a 'trunk' branch; they just have not carried over the convention of calling it 'trunk'.
You lose me with the term 'happens naturally'. I'm left guessing what you mean. Users have to have write permission to the repo. They have to deal with merging their commits with changes on that branch. They have to test the result of their merge. They have to be responsible for the result of the commit. Integration of a meaningful change is intentional, manual and consumes some amount of time.
With git you have topic branches. In a team of 10 developers working in topic branches, what is the way to get a branch that integrates their work in an automated fashion? I'm personally not sure there is a way.
'topic branch' is a matter of user convention and can be supported by svn, by convention, as well. svn lets developers create 'topic branches' by convention in the 'branches' subdirectory. So, your paragraph above could have easily started as: "With svn you have topic branches...". I'm not sure what you intend to say by 'integrates in an automated fashion', but the question is due to the use of topic branches, not to any inherent feature of the vcs tool. In other words, if topic branches don't work, don't use them. svn and git do not prevent the creation of a topic branch, so in both cases this is a matter of a developer conforming to the site's workflow.
If I can take a guess at what you want with 'integrates topic branches in an automated fashion': maybe you expect some automated tool to gather all topic branches together into a single build for integration testing? The issues here are that the tool does not know which topic branches have meaningful changes and which do not, and that it has no way of resolving the merge conflicts that would occur. Both require manual, deliberate decisions. So, if this is what you mean, I would agree there is no automated solution.
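To make that concrete, here is a self-contained sketch of what such a hypothetical 'gather the topic branches' tool might do, and where it has to stop. The branch names, the `topic-*` naming convention, and the throwaway `integration` branch are all assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical 'gather all topic branches' tool. It merges whatever matches
# topic-*, cannot tell meaningful branches from abandoned ones, and must
# give up on conflicts. All names are illustrative.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name  dev
git checkout -qb trunk
echo base > base.txt; git add .; git commit -qm "base"
git checkout -qb topic-a; echo a > a.txt; git add .; git commit -qm "a"
git checkout -q trunk
git checkout -qb topic-b; echo b > b.txt; git add .; git commit -qm "b"
git checkout -q trunk

# the 'tool': build a throwaway integration branch from every topic branch
git checkout -qB integration trunk
for topic in $(git for-each-ref --format='%(refname:short)' 'refs/heads/topic-*'); do
    if ! git merge -q --no-edit "$topic"; then
        git merge --abort            # conflicts need a human; the tool stops here
        echo "conflict on $topic; manual integration required" >&2
        exit 1
    fi
done
```

Here the two topics touch different files, so the merges succeed; had they both edited the same lines, the script could only abort and ask for manual integration, which is exactly the limitation described above.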
But the bottom line for me is that dvcs and cvcs both support topic branches and both do not require use of topic branches. Whether they are put to use or not depends on the developer shop's workflow.
The idea of pushing a branch is to 'publicize' it to a central location. With respect to CI, there is no distinction between using git to push a branch and using svn to commit to a branch.
Yes, somewhat. It does also show the difference between git commit and svn commit for CI. With svn you make a meaningful change (commit), and it is available for CI immediately. That is not the case for git. To achieve the same effect, it would be the developer's responsibility to push the branch immediately after each commit, which would be a really strange use of git.
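The point is easy to demonstrate: a git commit stays invisible to the central repo (and hence to CI) until an explicit push. This is a self-contained demo with illustrative names; `central.git` stands in for the shared repo the CI tool pulls from:

```shell
#!/bin/sh
# Self-contained demo: a local git commit only reaches the central repo
# (and hence CI) after an explicit push. All names are illustrative.
set -e
work=$(mktemp -d); cd "$work"
git init -q --bare central.git      # stands in for the shared repo CI polls
git clone -q central.git dev
cd dev
git config user.email dev@example.com
git config user.name  dev
git checkout -qb my-topic
echo change > file.txt; git add .; git commit -qm "one unit of work"

# at this point the commit is purely local; CI at central.git cannot see it
git ls-remote origin refs/heads/my-topic    # prints nothing

git push -q origin my-topic                 # now the commit is visible to CI
git ls-remote origin refs/heads/my-topic    # prints the pushed ref
```

Doing that `git push` after every single commit is the 'really strange use of git' in question: it turns the local history into a mirror of the central one, which is what svn gives you by construction.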
Another way of looking at this: commits to a cvcs are visible to everyone, so every commit must be a meaningful change; commits to a dvcs are not visible to everyone, so not every commit has to be a meaningful change. A dvcs allows me to work through a series of private commits toward a single meaningful change. This gives the advantage of further incrementalizing (?) the change. I can privately commit a single unit of work even if my effort as a whole is still unstable. Smaller increments of change are easier to manage. cvcs users might avoid committing anything until the full single meaningful change is ready. On changes that take a week or more to implement, I prefer private change control (i.e., all intermediary change is 'not meaningful' :). For testing of these private commits, I invoke unit tests manually.
Not sure what you mean by 'strange'. "Life is strange, but compared to what?" I have the feeling your constraint to get every commit under a CI run ("every commit is a meaningful change") is somewhat contrived. If I had to work that way, I would defer committing until I knew the CI test would pass, potentially retaining a large amount of uncommitted work in my local file system with no change history log. I would have to work out an alternate backup (not using the vcs). Sorry, I don't like this constraint.
It seems you have an assumption that every branch has to pass test on every commit. The vcs technology in use is irrelevant to this question.
Well, here is the short, vcs-neutral requirement for CI: every meaningful change by every developer should be integrated into the result of the team's work right away, and validated automatically right away.
Yup. I'm working from that view also. I, for one, have not been convinced that dvcs is obstructing this requirement, which I understand was your main point to begin with.
Define a workflow that targets maybe two branches for CI: a release maintenance branch and a next release branch. The workflow won't release commits until they are committed to one or the other of these branches.
Thank you. This is similar to what my team is doing now - we have a workflow that includes review, rebase against the mainline (or 2 mainlines, if the branch targets the "current" and "next" release, just like you described), and a build on Hudson. After that, the branch can be merged.
Sorry, I don't use rebase. It changes history, which tangles merges. I like 'merge --squash' for merging a 'meaningful change' into a shared branch.
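For reference, the squash workflow mentioned here collapses a topic branch full of private 'not meaningful' commits into one commit on the shared branch. A self-contained demo, with all branch and file names as illustrative assumptions:

```shell
#!/bin/sh
# Self-contained demo of 'merge --squash': several private commits on a
# topic branch become one 'meaningful change' on the shared branch.
# All names are illustrative.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name  dev
git checkout -qb trunk
echo base > base.txt; git add .; git commit -qm "base"

# developer works through a series of private commits
git checkout -qb my-topic
echo a > a.txt; git add .; git commit -qm "wip: part 1"
echo b > b.txt; git add .; git commit -qm "wip: part 2"

# integrate as a single 'meaningful change' on the shared branch
git checkout -q trunk
git merge -q --squash my-topic   # stages the combined diff; no merge commit
git commit -qm "feature X as a single meaningful change"

git rev-list --count trunk       # prints 2: the base commit plus the squash
```

Unlike rebase, nothing in the existing history is rewritten: the topic branch keeps its commits, and trunk gains exactly one new, ordinary commit with no merge parentage to tangle later merges.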
(On frequently changing binary content)
That is indeed an extreme condition.
Several times I've seen projects keeping requirement specs, design artifacts and project plans under version control side by side with code. Another interesting case I've seen was drools rules in excel in the repo. That doesn't happen often, but I wouldn't call it an extreme condition. I would for database dumps or build products under version control.
(There are also old school ant-backed projects that have libraries checked in, but these don't change often).
Sorry, I meant that binary files having a performance impact on git is an extreme condition. It's not extreme to commit binaries into the repo. It's extreme to have large binaries modified often enough to impact the performance of git.
Compare the download patterns of git and svn. Let's say a project has 3 images, each changed 3 times: 9 image versions in the repo. One git clone copies down all 9 versions once and for all. Each svn checkout copies down 3 images. It would take only 3 checkouts in this pattern for git to break even. Then, let's say somebody changes one of the images. In git, the changed image is fetched once. In svn, the image has to be downloaded for every local checkout. git wins.
That would be true if you checked out the svn project each and every time. But the normal workflow is to check out once and update onwards. Update only fetches the most recent version, and that is also all the working copy stores. I'd say for this scenario the only chance for git to win on bandwidth/storage is if the usage pattern implies going back in history often.
I'll have one checkout for the release maintenance branch so I can respond to production emergencies. I'll have one for the 'next release' branch for my main project of the week. I'll get re-prioritized before I'm done with my main project of the week, and I'll have a branch or two hanging out that my manager has not yet declared as 'meaningful change'. Sometimes I screw up a merge so badly that it's better if I re-check out head and load my changes atop that. So, I will normally have more than one checkout; for me, git wins.