Continuous Integration: Was Fowler Wrong?



  1. Continuous Integration: Was Fowler Wrong? (25 messages)

    While rereading Martin Fowler's classic paper, Continuous Integration, it struck me that its approach to Continuous Integration (CI) is fundamentally flawed. Fowler, like most of the CI community, seems to argue that CI is about building rather than testing. This basic misconception, permeating an otherwise good paper, has contributed to poor tool designs focused on build automation and, perhaps more importantly, to untold numbers of teams following bad practices. The problem is in Fowler's definition of CI:

    "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible."

    The ever-present focus on builds, which Fowler's paper reinforces, fundamentally corrupts a practice that should be founded on good, fast testing. Tests need to take center stage, and the build needs to be considered just a simple test of compilation. By discarding build as a focus, what remains are integrations and tests of those integrations. In practice, developers continue to integrate many times a day, and tests are run to see whether errors were introduced during those many integrations. Each set of tests is run as often as there is something new to test and resources are available. Read the rest at
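    The scheduling rule in that last sentence - run each suite whenever there is something new to test and a machine is free - can be sketched in a few lines. This is a hypothetical illustration, not code from any CI tool; all names are invented:

```python
# Hypothetical sketch of "run each set of tests as often as there is
# something new to test and resources are available". All names invented.

def pick_suites_to_run(suites, latest_revision, free_workers):
    """Return the suites that should start right now.

    suites: list of dicts with 'name', 'last_tested_revision', 'running'
    latest_revision: the newest integrated revision
    free_workers: number of idle test machines
    """
    # A suite is runnable if it is idle and has not yet tested the
    # latest integration.
    runnable = [s for s in suites
                if not s["running"]
                and s["last_tested_revision"] < latest_revision]
    # The most out-of-date suites get the free slots first.
    runnable.sort(key=lambda s: s["last_tested_revision"])
    return runnable[:free_workers]

suites = [
    {"name": "unit",        "last_tested_revision": 42, "running": False},
    {"name": "functional",  "last_tested_revision": 40, "running": False},
    {"name": "performance", "last_tested_revision": 41, "running": True},
]
# Only 'functional' is idle and behind revision 42.
print([s["name"] for s in pick_suites_to_run(suites, 42, 1)])  # -> ['functional']
```

    Note that nothing here mentions a "build": compilation is just one more suite competing for a free worker.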

    Threaded Messages (25)

  2. No, Martin Fowler is not wrong. As far as I'm aware, Martin never stated that CI ends with a build. Of course, if the build fails, you can't go much further. Everything I've read about CI, and seen in practice, involves a build, an automated deployment, and the execution of an automated test suite. I'm curious how you got the impression that Martin only focused on the build, especially given the fact that he's one of the founding fathers of refactoring and test-driven development, which both stress testing. Mark
  3. the founding fathers
    I am pretty sure everything Mr. Fowler proposed was either an old concept or a variation of an old concept, probably from academic circles. If you want to know who is your (plural) daddy, check this: Now this is something which can be called innovation and research, aka being the founding father of something. If you rename some old thing to 'backlog', it's hardly innovation.
  4. Martin Fowler, more than anyone I know in this industry, helped popularize refactoring as a first class development activity. He may not have invented it, but he sure put it in the spotlight. Code a little. Test a little. Refactor. Unit tests are your safety net. Continuous working code. Academia? Last time I looked "academia" was still teaching waterfall as the only software development methodology. Fowler was at the forefront of agile software development. The agile community has never claimed that it invented anything, but rather it brought together and popularized a cohesive set of practices and principles, including CI. Mark
  5. Martin Fowler, more than anyone I know in this industry, helped popularize refactoring as a first class development activity. He may not have invented it, but he sure put it in the spotlight.

    Code a little. Test a little. Refactor. Unit tests are your safety net. Continuous working code.

    Academia? Last time I looked "academia" was still teaching waterfall as the only software development methodology. Fowler was at the forefront of agile software development. The agile community has never claimed that it invented anything, but rather it brought together and popularized a cohesive set of practices and principles, including CI.

    My understanding could be wrong, but there is a big difference between being a founding father and popularizing something (or slightly adjusting it). I am very, very confident that almost every process/methodology/thing/concept used in the current IT market was used years before in government and research agencies such as NASA, ESA (the European Space Agency), and similar organizations. That is where the founding fathers of the current IT industry reside. Now, it is a different thing that the true founding fathers are not interested in being in the spotlight, nor do they care what they will be called (Tesla syndrome).
  6. I agree with Mark. Even assuming Fowler actually focuses on building (I don't think so), does it actually change anything? It's a matter of redefining the responsibilities of the build phase, isn't it? Just as Mark says, it wouldn't make sense to stop at build/compile time. Alessandro
  7. Mark, Fair enough, and I wouldn't say that Fowler doesn't care about testing. It's just not where his emphasis is with respect to CI. My point is that "a build, an automated deployment and the execution of an automated test suite" doesn't cut it, because "a", "an" and "an" are not sufficient except in the simplest case. Once our collection of automated tests exceeds what can be run in a "short" amount of time, we need multiple processes. When we try to frame those multiple processes in the context of build, we encounter inefficiencies and confusion. We are either rebuilding unchanged code repeatedly for secondary types of tests, or we are calling something that has no compile/package pieces a "build" - which is confusing at the least.

    The last time I met Fowler, whom I respect immensely, he was giving a talk about why dependency-based scripting languages (like Ant) were poor choices for build scripts. His point was essentially that deployments and such tend to require more functional programming. This is where thinking about all these processes as "build" gets you in trouble. Ant (like Maven) is a fine build scripting language. It will compile/package in reasonable ways and kick off your quick unit tests. Using another scripting language for deployments makes great sense, but that doesn't make Ant a bad language for build.

    Where I bicker with Fowler is essentially over the "self-testing build". I think the build can be responsible for a small subset of tests, but trying to make the build do all the tests can get you in trouble. All the tests that don't fit neatly into the build are still part of CI, though, and CI needs to account for them - and be focused on them rather than give special significance to the build.
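    The rebuild-for-every-stage inefficiency described above disappears if the pipeline builds once and passes the same artifact to every later test stage. A minimal sketch with invented names (no real CI tool's API is shown):

```python
# Minimal sketch of "build once, test many": later stages reuse the
# artifact instead of rebuilding unchanged code. All names are invented.

def build(revision):
    """Stand-in for compile/package; produces a single artifact."""
    return {"artifact": f"app-{revision}.jar", "revision": revision}

def run_stage(stage, artifact):
    """Stand-in for a test stage that consumes an existing artifact."""
    return f"{stage} ran against {artifact['artifact']}"

artifact = build("r1234")                      # built exactly once
results = [run_stage(stage, artifact)          # slower stages reuse it
           for stage in ("unit", "functional", "performance")]
for line in results:
    print(line)
```

    The later stages have no compile/package step at all, so calling them "builds" would be exactly the naming confusion the message above complains about.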
  8. I agree with most of what has been said here, i.e. you need a successful build in order to test. However, what CI also gives you is the opportunity to update your tests incrementally should things change (which, in a perfect world, of course never happens :-). Otherwise, we can end up with an archive of out-of-date tests that are of no use because they don't address the current system.
  9. For the past 3.5 years I've been focused purely on CI at TW, and everything I've seen points to it all being correct. I'm quite interested to see how people are picking at the semantics of the original paper. Remember that it's almost 8 years since that paper was published, and the meanings of words change... I'm actually going to be speaking on this very subject at JAOO this year. As well as looking at how the language has changed, I'll be discussing how I've seen the concepts applied (and mis-applied) and how things are progressing. I know Martin will be at the conference as well, so why not come along and we can all have a chat about it...
  10. You are wrong

    I don't know why I read this thread. The author's vision is very poor, and he has no idea about the basic concepts of Fowler's paper. Writing for the sake of writing. Time lost.
  11. I think you are talking about unit tests and integration tests. The first focus on testing a component's functionality by itself, regardless of whether other components are there; integration tests, on the other hand, test the whole system - or you could say the integration - and this is quite normal in open-source software development. But we can say that CI itself has evolved from just a build process to looking more at the purpose of the automated build/integration processes, which is making sure that the components and the whole system are functional - and functional according to the specs/requirements - as early as possible.
  12. Couldn't be more wrong.

    Sure, developers integrate... if they get updates from their SCM. Sure, developers test... if they feel like it and don't take shortcuts. Sure, developers have their environments set up correctly... if they're sticklers for detail. CI is the gatekeeper. If Mr. Developer didn't take into account something about your local environment, if Mr. Developer didn't test locally, if Mr. Developer didn't get the latest updates to see if it all worked with their code, CI stops all of that. If developers were perfect, you wouldn't need CI. However, as it was proved pre-CI that developers aren't perfect, and it continues to be true post-CI, CI is of great benefit, especially with offshoring.
  13. Don't forget... you are talking about the "classical paper" of Martin Fowler. When integration is not the problem, you will find that proper testing becomes a major goal of CI. But I have worked on quite a lot of Java projects where integration - i.e. the build process - did not work. Developers did not care about correct versions, correct configuration, correct environment, ... In such cases, CI should support the build process first. My bad experiences don't date from before Christ, and such developers also came from the world-leading (:)) software houses. So my perspective is: CI supports the build process and the testing. Where build is no problem, it can do the rest. But only then. Martin
  14. Fowler Was Right

    There is no problem with Fowler's definition of CI. Though Fowler did not invent it (IBM was said to rebuild code for OS/360 four times a day in the '60s), he is right. In fact, your quote explicitly includes tests. What else should Fowler have done, write TEST in bold font? You are trying to accuse Fowler of something he didn't say. This is not pretty. Myself, I have always treated Continuous Integration, even more formally, as a set of steps to ensure that new changes successfully integrate into the codebase. These steps may include building, testing, running static code analysis, deploying, releasing and anything else that is necessary to ensure successful integration. Dismissing the build step doesn't make sense because there may be no builds (any project with a scripting language as its main means of programming). Or there may be no tests (though there should be). An integration may be just running a package utility successfully. Also, there is nothing new about using the result of shorter builds for running other steps. Parabuild has supported that since its first release. As for the build and test time, even if a single build-and-test round takes 24 hours, it is providing feedback on the changes 7 times a week, which is an infinite improvement for a team that has not been doing it before. Slava
  15. Yikes Slava! I'm sorry if I offended you, my friend. I thought I was extremely clear throughout my post (although less so in the title) that Fowler was big on tests. He just goes about it wrong. He puts tests largely in the context of a self-testing build, which just doesn't scale and leads those who follow the "staged build" methodology too closely to rebuild excessively. I certainly wouldn't claim that Fowler invented rebuilds. He's always struck me more as a documenter and communicator than an inventor. I wouldn't think of dismissing the build step either. It's a vital test and the one that gets you feedback fastest. But for teams with large test suites, it's best as a step and not the entirety of what happens. It's the first test of integrations and hopefully one of many. Besides, just as Fowler includes tests in his definition, I include build in mine. Should I put it in bold font? Anyway, congratulations, you got your plug in. :)
  16. Hmm, I thought it was you who was offended by Fowler's generally peaceful paper.
    As for "excessive" rebuilding, you contradict yourself. If a build takes a minute and a test takes two hours, the time gained from building a hairy staged build-test-what-not infrastructure is diminishing (under 1%).
    Also, the "enterprise QA" that, according to you, all of a sudden is not subject to change management is perfectly runnable in automated mode - in a CI cycle, in separate test runs, or in other processes - by using virtualization or putting scripts into a VCS.
    With all this said, I take my hat off to you for calling Fowler wrong. This is indeed a smart marketing move. Such moves shall not be left unplugged :) Slava
  17. verified integrations

    As I interpret Martin Fowler's CI article, the strongest point is on verified integrations. And as I see it: verified integration = build + test. Automation plays a big part in the CI practice, but the practice is described independently of tools. In the past I saw CI working (even manually), but nowadays I cannot imagine doing CI without good build scripts, automation, and a CI server like Cruise. Another similar practice-versus-tools topic that Fowler strongly influenced was Refactoring. When Martin Fowler wrote about refactorings, he focused on the practice. Later on, the tools evolved to support the practice that the industry was following.
  18. Ahhh, testing... warms my heart. The interesting thing about CI builds is that the build is a hard-stop test. If it doesn't work and it doesn't deploy, it's wrong. Fairly easy to judge. Full stop. End of story. Kick some ass if you like (and designate the poor sod who did the last check-in to be the test manager). The thing with tests is, unfortunately, that, in particular with new functionality that enters the codebase, all we can say is "it does not break any existing tests", which is very different from "it does not break any existing functionality". This is an extremely (alas, "Extreme Programming") weak statement. As a basic reminder, consider that, to test saving master data into a relational database with only ten ASCII character data fields, you will usually need well over a hundred tests at the very least to even grasp the basic constraints and permutations of the business case.
  19. Let's not be so hard on Eric here. The title of his blog post ("was Fowler wrong?") was more dramatic than the content, but I guess it serves its purpose, which is to drive traffic to the blog. We probably wouldn't even be discussing it otherwise :-) The main theme of the post makes sense to me. In complex enterprise environments, the "build" part of CI is really only the first, small step, and we should place a lot less emphasis on it. Deployment and testing of integration between independently built components is often a much larger part of the CI task in these environments. We should consider adopting terminology that reflects this. Unfortunately, Eric doesn't appear to propose an alternative to the word "build"; at least not one that has a chance of catching on. Until someone does, we will continue to say "build" when we mean "checkout, compile, unit-test, package, collect quality metrics, deploy and integration-test".
  20. 10 minute builds are the point.

    I think Fowler is right on the money, particularly when it comes to the importance of frequent builds - the goal being a build with every commit. The catch is that build scripts must be written to support an incremental build process that matches the incremental nature of agile development. Yes, ten-minute builds are possible, but your build scripts had better be designed to handle incremental processing and perform good dependency management, not to mention the ability to repeat the IDE build so that coding practices such as refactoring are automatically reflected in the build scripts. If not, the infrequency of your builds will be the ultimate bottleneck, and testing will be left as the last thing done, at the last minute, no matter how agile your coding process is.
  21. In my opinion, this article just demonstrates that the word "build" is hideously overloaded and we have to be very careful about using it. Although Eric makes valid criticisms about a naive approach to staged or pipelined builds, I don't think if you spoke to Martin he would disagree with anything Eric says. I think the confusion just arises because of this term "build".

    Martin is focussed on getting people to automate their compilation and testing process, which he calls an "automated build", and says that if you write functional tests it is best to automate them and include them in your automatic "build" process. Eric is saying that you can't always automate the testing process, but you still need a process that encompasses these manual testing steps, and your "continuous integration server" should help you with that, but you can't call it the "build" because it's not all susceptible to automation. I am fine with not calling it "the build" -- a rose by any other name would smell as sweet. But it's not in any way at odds with Martin's position -- it's just a different emphasis, because Eric has a different goal (consulting around a product that helps manage software lifecycle) from Martin (convincing people to automate their compile and test process).

    If you really want to know what Martin thinks, he and I are doing a webcast on the 28th: Although Eric makes many valid criticisms of the "staged build" or build pipeline metaphor, they exclusively refer to how hard it is to implement, rather than fundamentally saying a build pipeline is bad. Cruise (, out later this month, resolves all the problems Eric talks about, so that you can in fact automate long-running functional tests effectively without making it hard to get feedback. And if you can automate pragmatically, why wouldn't you?
  22. Jez, You're right on target in a lot of ways - the word build being hideously overloaded is the most important. With that in mind, it's impossible to read Mr. Fowler's description of a staged build and have a firm idea of what he actually intends for us to do there. My choice to interpret that section as a lot of rebuilds is less a criticism of Fowler than it is a criticism of how I see people in the industry doing things.

    My actual problem with Fowler is that the last time I talked to him, he seemed to define build as "all the stuff that takes source code to production": compilation, packaging, testing, deployments, etc. I think this much overloading even gets him confused. At CITCON 2007 (we missed TW in '08 in Denver), Martin suggested that Ant (and Maven, MSBuild, etc.) were poor build scripting languages since they weren't great at the functional style of programming that's ideal for things like deployments. While most of the room was content to be impressed by the famous wise man preaching to us, a couple of folks (I'm sad to say, not me) observed that Ant was a fine language for build scripts, but a poor one for deployment scripts. They were using a smaller definition.

    I'm a big fan of using a very small definition of build. It gives the word meaning instead of being a nebulous thing. I'd prefer something like "Build is the process of converting source code into installable software." So: compile, link, package, creation of installers, etc. Continuous Compilation is totally inadequate, though, and I'd agree with my friend Paul Duvall on that. Our tightest CI loop should be running whatever tests quickly give us confidence that our integrations are non-disastrous. Compilation is definitely part of that, as are fast tests and perhaps some static analysis. Anyway, I'm not saying that just because something can't be automated it shouldn't be "build". I'm saying that I want a ten-minute build and a three-hour execution of automated test suites. My three-hour test suite execution is just a set of tests; it requires that some earlier build passed so it has something to test, but it is in no way a build. Automation and build are not the same thing.

    I do think manual testing has its place in CI, though. Questions like "Did the latest changes negatively impact usability?" are really best addressed by manual checking and are also best found early. This might not even happen daily, but the same drive to run every test as often as we have the resources and something new to test can be applied here. And yes, we're aware of Cruise (and CCE before it) and are eager to see what it brings to the table that's new. For Anthillers, most of the features on your short announcement page are part of what we use every day. It's good to see the industry moving this way. I think as an industry (and Urbancode shares blame here) we provided build-focused tools for so long that we tried to make everything a build and caused a lot of confusion in the process. We really should start using tighter, clearer definitions that are in line with what practitioners who are not even using rudimentary CI expect.
  23. Sounds like projection to me

    In his original post on the AnthillPro site, Peter makes his emphasis explicit: "The problem is clear in Fowler's definition of CI (emphasis added):" Peter chose to emphasize build to mean compilation in his post; this does not reflect how Martin or others at ThoughtWorks use the term. "Continuous compilation", an automated compilation build with little or no regression tests, is an established anti-pattern in the CI community. Here's Paul Duvall railing against it: - search for 'continuous compilation'.
  24. Wrong author

    I meant to say Eric, not Peter. :-P
  25. In my opinion, Continuous Integration (CI) is emphasized in Java/J2EE/J2ME environments because these do not have a common IDE like Microsoft's Developer Studio. Though IDEs are present in Java/J2EE/J2ME environments, they are not uniform in their approach to CI. For example, in the case of the Eclipse IDE, there are provisions to integrate with a source code management system; this automatically makes checked-in sources available to any configured user, and options are available to auto-compile the sources. This, I believe, should eliminate the need for CI as a separate exercise in Java/J2EE/J2ME environments.
  26. you make me laugh

    No continuous integration in the Microsoft world? At least we need to understand something before we get involved in a discussion.