Todd Huss: Time-boxed vs. Feature-boxed releases


  1. Todd Huss has blogged about time-boxed versus feature-boxed releases - for example, "Foo 2006Q2" is a time-boxed release and "Foo 2.4" (with feature "X") is a feature-boxed release.

    From the blog entry:
    1. Feature boxed releases allow you to focus on getting the features the business side needs in the release and getting them right. However, feature boxing is more prone to scope creep, continually pushing the release date, and spending too much time getting a feature "just right" instead of getting it in front of customers.

    2. Time boxed iterations continuously keep the organization focused on what's most important and reliably get things done on a certain date, at the cost of cutting features when estimates turn out to have been too low. You also spend a little less time actually implementing and more time planning, estimating, and meeting.

    What are your opinions of the two approaches? Do you prefer to use libraries that use one or the other?

    Threaded Messages (30)

  2. Agile versus RUP

    This is an Agile versus RUP argument in disguise. Time based is an Agile approach - deliver something/anything in 6 week increments. RUP is "you get it when we're finished", and often devolves into a waterfall project.

    Most people here would say time-based is better and more organized, but in the real world, unacceptable bugs and business politics often force us into a release-based schedule.
  3. Re: Agile versus RUP

    I agree with David: experience suggests to me that in small-to-medium development groups, an Agile approach and a release-based schedule is the best solution for handling frequent user requests and bug fixes.
  4. Agile versus RUP

    I don't think so. Agile development is supposed to focus on delivering value to the customer, and delivering "something/anything" in six week increments hardly ensures that the customer will receive any value.

    Remember there is a difference between increments and releases. A project may go through a dozen increments before it is ready for a release.

    Interestingly, Todd favors time based releases because "they provide a very light weight yet key form of discipline that many organizations lack," which implies that they are more agile than feature based releases, while also asserting that with time based releases you "spend a little less time actually implementing and more time planning, estimating, and meeting," which hardly sounds agile at all.

    I think the real question is: should releases be treated like macro-increments?

    It depends, because now you have to define "release." "Point releases" (aka "maintenance releases," "patches," "bug fixes," etc.) that have a very low transition cost (transition being moving from version 1.x to 1.(x+1)) should be made at regular intervals.

    But anything with a non-trivial transition cost (trivial being subjective, but meaning < 1 week of labor in my view) should be rolled into a feature-driven release.

    Why?

    Because, as a customer, when a vendor tells me I should "upgrade" to a new release, I always ask "What's changed, and how does that benefit me?" Every change represents a cost that will need to be offset by some benefits. Usually the vendor will think there is a one-to-one relationship between changes and benefits, and usually I will think there are a ton more changes than there are benefits. Ultimately, if the benefits don't significantly outweigh the costs of upgrading, I don't do it.

    The problem is that skipping releases tends to put the upgrade cost on an exponential growth curve, where the cost of going from release X to release X+5 somehow manages to exceed the cost of doing each upgrade in serial.

    Consequently, making useless releases is a very efficient way for a vendor to get replaced, because it is a strong indication that the vendor's roadmap is out of alignment with my needs.
  5. Agile versus RUP

    Totally agree. Just look at Microsoft: everything used to be time-released, and they ended up with the blue screen hell :)

    Time-based releases are good for maintenance fixes, adding small features, and for testing purposes, in my mind. You don't go, for example, from Java 1.4 to Java 1.5 just for fun. You need some features that appeal to you to do so. I think Apache uses this kind of hybrid approach, doesn't it?

    Interesting article by the way!
  6. Agile versus RUP

    The "something/anything" that I said sounds valueless, I didnt mean it that way. I mean that it could be either a new feature, a set of change requests, or a requirement doc, or some other tangible deliverable(s) - depending on the scope of your project. The point is to show progress to the customer.
  7. Agile versus RUP

    The "something/anything" that I said sounds valueless, I didnt mean it that way. I mean that it could be either a new feature, a set of change requests, or a requirement doc, or some other tangible deliverable(s) - depending on the scope of your project. The point is to show progress to the customer.

    We use a combination of time-based releases and feature releases. If none of the features takes longer than our development timeframe, then we do a time-based release where we get in the features that we can. The date will not move. We develop based on the priorities the project manager sets, and anything that doesn't make the cutoff date is dropped. We typically have a 6-8 week development-to-production cycle when doing this type of release.

    If there is a feature that cannot be accomplished in 6-8 weeks, and it is the top priority, it becomes a feature release. We won't release (to production) an incomplete feature, as that makes no sense. However, we will release stages to QA so that the feature can be incrementally tested and verified by the project manager. While those stages are feature based, we attempt to develop a schedule for the stages and stick to the timeframe as best we can.

    We've found that this process works rather well for us. Large projects will typically occupy a large portion of the development team. Those not working on the "feature" project will work on other "agile" projects. By doing this we can rotate developers through material, so we keep everyone fresh while keeping sites updated frequently. We use the Scrum methodology to accomplish this.
  8. Agile versus RUP

    Mike,

    At my shop, we have just begun to use Scrum. In fact, we will finish our very first sprint next week :-)

    We are not using the hybrid approach you are, but it sounds very logical to me. For my situation, I would be hesitant to use that approach until we've gotten some more experience under our belt (we switched from "modified RUP").

    We are a bit confused on a couple of things that I thought maybe you could help refine ...

    QA Testing:
    What is the recommended approach for Scrum? I get the impression that each sprint should be tested, and then a final regression test done at the very end. I assume that the testing done at the end of each sprint will only cover the functionality completed during that sprint? Also, is the testing part of the 30 days, or is it done by overlapping it onto the next sprint? That is, would the testing of sprint 1 occur during the first x days of sprint 2?

    Release Planning:
    We've found that Scrum excels at making you immediately productive. We can be up and running on our first sprint within a couple of days of working the product backlog. However, we're struggling with how we can properly commit to our customer that we can achieve what they want for a "release". For the most part, we use time-based releases, with some flexibility on what the final dates are. That is, "we'll do a release sometime in September or October". I guess what I'm saying is: how can we make the commitment to the customer without fully estimating (and doing some amount of high-level design) for the entire release? Seems more RUP based to me ...

    (Apologies for the long message)

    - Joe
  9. Scrum (was Agile versus RUP)

    Joe, if I may...

    Something to keep in mind is that Scrum is an overall product development management process, and doesn't talk specifically about testing.

    Agile Software Development in general does rely heavily on rigorous programmer and acceptance testing, preferably automated. The practice of continuous integration ensures that the product as a whole is relatively clean, and several free tools exist that will automatically build from source control and run tests.

    In terms of regression testing, if you can get your acceptance tests automated, you will be way ahead of the game. This allows you to perform full regression tests at will, rather than scheduling a full regression at the end of development (which is a 'waterfall' practice anyway). This does require quite a bit of work by your testers, but it's an investment that will pay off significantly over time. What technologies are you using? Java, .NET, C/C++?
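
    For what it's worth, here's a minimal sketch of what the automation hook can look like in Java with JUnit. The static suite() method is the standard JUnit 3 convention, so any JUnit runner - including one kicked off by a CI tool such as CruiseControl - can execute the full regression on every check-in. The test classes listed are placeholders I've invented for illustration:

    // A regression suite that aggregates the automated acceptance tests so a
    // CI build can run all of them after every check-in. The test classes
    // added here are placeholders; list your real test classes instead.
    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    public class RegressionSuite {

        // Stand-in for a real acceptance test class.
        public static class LoginAcceptanceTest extends TestCase {
            public void testKnownUserCanLogIn() {
                assertTrue(true); // replace with a real check
            }
        }

        public static Test suite() {
            TestSuite suite = new TestSuite("Full regression");
            suite.addTestSuite(LoginAcceptanceTest.class);
            // suite.addTestSuite(OrderAcceptanceTest.class); // ...and so on
            return suite;
        }
    }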

    A goal towards which to aim is to have your system in a releasable state at the end of each sprint, i.e. it's in a state where it could be placed into production. That isn't to say that you would do that, as it may not make sense from a business perspective. What it does do, however, is focus the team on keeping the system as clean as possible. Again, it does have a cost, but it's an investment.

    With regards to when the QA testing occurs, the ideal situation is to have the testers get their hands on a feature the minute it has been completed. If you are using continuous integration, you can have a build package ready quite soon after code is checked in. If this isn't feasible, generally what I have seen done is to have the testers work on the previous iteration's (sprint's) features.
    Release Planning: ...how can we make the commitment to the customer without fully estimating (and doing some amount of high-level design) for the entire release? Seems more RUP based to me ...

    I typically use Extreme Programming's planning process, in which you do provide estimates for all of the User Stories (features). You then determine in conjunction with the Customer whether you need to ship by a certain date or with certain features. If it's date-driven you determine, based on the estimates, how much you can deliver in that time. If it's feature-driven you figure out the date by which the finished product will be delivered. In both cases, you schedule the highest priority or highest business value stories first. You also work under the assumption that you have made estimates, and that the schedule will change as you and the Customer learn more during development. For a date-driven schedule, if you run out of time you should already have built the highest priority or highest business value features.
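
    To make that arithmetic concrete, here's a toy sketch in Java. The story names, estimates, and velocity are all invented for illustration; the point is just the schedule-by-priority, cut-what-doesn't-fit rule for a date-driven release:

    // Toy illustration of date-driven release planning: fill a fixed time
    // budget with the highest-priority stories first, defer the rest.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class ReleasePlan {
        static class Story {
            final String name;
            final int days;     // estimate in ideal developer-days
            final int priority; // 1 = highest business value
            Story(String name, int days, int priority) {
                this.name = name; this.days = days; this.priority = priority;
            }
        }

        public static void main(String[] args) {
            List<Story> backlog = new ArrayList<Story>();
            backlog.add(new Story("Customer search", 8, 1));
            backlog.add(new Story("Order history", 5, 2));
            backlog.add(new Story("PDF invoices", 13, 3));
            backlog.add(new Story("Saved filters", 6, 4));

            // Velocity is measured from past iterations; the ship date fixes
            // how many iterations remain (both values invented here).
            int budget = 10 /* days per iteration */ * 2 /* iterations */;

            // Highest business value first.
            Collections.sort(backlog, new Comparator<Story>() {
                public int compare(Story a, Story b) { return a.priority - b.priority; }
            });

            int spent = 0;
            for (Story s : backlog) {
                boolean fits = spent + s.days <= budget; // greedy first-fit
                if (fits) spent += s.days;
                System.out.println((fits ? "In release: " : "Deferred:   ") + s.name);
            }
        }
    }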

    Remember that this is a very collaborative process, and you don't just go away and build the system and show it to the Customer when you're done. They need to be involved at all times in order to provide the guidance and feedback required to build what's actually required.

    I apologize for the long reply!

    Regards,

    Dave Rooney
    Mayford Technologies
  10. Scrum (was Agile versus RUP)

    In terms of regression testing, if you can get your acceptance tests automated, you will be way ahead of the game.

    Don't get me wrong, I'm all for automating testing as much as possible. But acceptance testing is about a lot more than ensuring the system meets a few predefined criteria. It's about ensuring the system is acceptable for production use.

    The customer is the only one who can do acceptance testing, because the customer is the only one who can determine if a system is acceptable. The customer may decide to automate portions of the acceptance tests, but unless the system never directly interacts with a human being, I would say a good chunk of the acceptance testing must involve real usage.
  11. Scrum (was Agile versus RUP)

    Dave answered your question quite well. I'll only add what my experiences are.

    Once we release a build to QA, they immediately start testing the features unless they are backlogged from the previous release. But the feature is still verified by dev once it reaches QA. Here's more or less our entire process (there's a state-machine sketch of it after the steps):

    1a) For a new feature, the business analyst creates an STR detailing the general functionality of the feature. Depending on the complexity of the feature, s/he will sometimes reference an external document.

    1b) For a bug, which anyone can enter (BA, developer, QA), the functional spec is a description of the bug and the "Steps to Reproduce" section of the STR is filled in.

    2) The STR is then assigned to development and QA to provide estimates for the STR. Development will enter brief technical specs into the STR as to what will need to change (whether it be modifying or creating new actions/jsps/ejbs/etc). This way another developer can pick up the STR and quickly get up to speed.

    3) The STR is assigned back to the business analyst so they can analyze the ROI, assign it to a build, and assign it a priority. The BA will work with dev and QA to generate a rough estimate and target date for a build, keeping 6-8 weeks as the typical target. For feature releases, this obviously changes.

    4) Developers begin working on a build, working on the STRs based on priority.

    5) Once a developer has fixed a feature/bug, it is assigned as "fixed" to another developer on the project. The developer will enter "Steps to Reproduce" for new features IF testing the feature is complex. We try not to provide too many details because we don't want QA following the exact same testing patterns.

    6) Prior to deploying to QA, the developers review the code changes of the "fixed" STRs that are assigned to them. This is to attempt to catch errors in code/logic.

    7) Immediately after deploying to QA, the developer tests the STRs assigned to him. This is to ensure that an STR was properly deployed and the build was successful. Sometimes database scripts aren't run or files aren't properly checked in. This catches that.

    8) Once a fixed STR is verified by dev, it is assigned to QA. At this point, QA should be familiar with the functionality as they have already seen the STR and have estimated it.

    9) If QA verifies the STR, they mark it as such and the STR is "closed". If it fails, it is assigned back to development with "Steps to reproduce" and the process repeats.

    10) We typically have a coding freeze about a week prior to production release. QA regression tests the build to ensure that the new features/bug fixes didn't break other areas.
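
    As a sketch of how the steps chain together, here is the lifecycle modeled as a simple state machine in Java. The state names are my paraphrase of the steps above, not TestTrack Pro's actual terminology:

    // The STR lifecycle from the steps above, modeled as states. The names
    // are paraphrases for illustration, not TestTrack Pro terminology.
    public enum StrState {
        ENTERED,       // 1) created by the BA, a developer, or QA
        ESTIMATED,     // 2) dev and QA have added estimates and tech notes
        SCHEDULED,     // 3) BA has assigned it to a build with a priority
        IN_PROGRESS,   // 4) a developer is working on it
        FIXED,         // 5) assigned to a second developer for code review
        DEV_VERIFIED,  // 6-8) reviewed, deployed to QA, smoke-tested by dev
        CLOSED         // 9) verified by QA; a QA failure instead sends the
                       //    STR back to IN_PROGRESS with steps to reproduce
    }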

    We have developed this process over the course of a couple of years, and found that it has greatly improved our time to production and reduced bugs in our builds. The part we like most is steps 5-8.

    By having a different person from development verify the STR in QA, we ensure that the QA environment is working in the "happy path". We have had developers catch bugs in code before the deployment, allowing us to save time fixing it. We also occasionally find similarities in code and refactor it to improve code reuse. It's similar to XP, but definitely not the same.

    By doing steps 5-8, QA's responsibility is to further verify the "happy path" and then find those unusual paths that dev doesn't think of. The likelihood of them discovering a problem with the happy path is greatly reduced, so we don't waste time going back and forth from QA to Dev. The bugs we do find are often extremely odd. Not all bugs are fixed. We usually have bugs that don't follow the happy path entered as separate STRs. This allows the BA to assess the severity of the bug and assign it a priority. Sometimes s/he will assign it to a future build.

    During the Scrums (brief 15-minute meetings), we discuss any issues we have and which STRs will be in each release. We attempt to get out at least one release per week so QA isn't slammed with a huge build. The BA/QA is responsible for developing test cases prior to the build being released to QA. They store these for future regression testing.

    Another thing we try to keep in mind is that our process is to aid the development of our websites, not hinder it. We will sometimes have post production deployment meetings to discuss things that went well with the build and things that could be improved.

    Hope this helps! As you can see, the process works well for both feature and time based releases, as we have estimates prior to the formation of the build. We are still trying to perfect estimating large features (who isn't), but we have gotten extremely good at estimating small features.

    For those that are curious, we use Seapine's TestTrack Pro to keep track of our STRs. It's a great tool and I highly recommend it. Sorry for the lengthy reply.

    Mike Schanberger
  12. Agile versus RUP

    I guess what I'm saying is how can we make the commitment to the customer without fully estimating (and doing some amount of high-level design) for the entire release?

    Interesting question...I would like to know as well...

    Actually, my impression of agile methods is that they shift a lot of responsibility off of the development team and onto the customer. Because the customer is allowed to change the requirements at any point, the customer is completely in charge of the scope of the project as a whole. So you provide your customer with a ROM (rough-order-of-magnitude) estimate for the project based on his initial concept, and then it's up to the customer to be focused enough to ensure that the requirements the development team is implementing are in line with the goals the business wants to achieve through the system, and are minimalistic enough to fit within the allotted budget.

    In other words, make sure you are working on a pure time and materials contract. Or, if you are the customer, make sure your lead assigned to the project has a clear idea of the big picture, and will actively manage scope despite objections from other team members.

    Maybe I'm alone here, but I find customers typically have a decent idea about what they want the system to do, a vague (often delusional) idea of how they expect to benefit from it, and no sense whatsoever of how much it will cost to build. Agile methods can be useful in these situations, because they ensure you will build something and that something will be useful to the customers actively involved in the project. The problem is that I think projects like that deserve to die a quick and painless death, because resources are better spent on projects that have clear, attainable objectives with a quantitatively verifiable ROI.
  13. This is about our inability

    to accurately measure when a feature is finished.

    Time-boxed is essentially easier to manage and maintains momentum on a project better because we, as an industry, find it difficult to measure when features are complete.
  14. Agile versus RUP

    This is an Agile versus RUP argument in disguise. Time based is an Agile approach - deliver something/anything in 6 week increments. RUP is "you get it when we're finished", and often devolves into a waterfall project.

    RUP is not a methodology in the way XP is - thus you cannot compare the two directly. RUP is just - as Rational/IBM says - a set of best practices. You cannot just "get" RUP and use it as a methodology. It doesn't have practices like "if you do this thing like that, you'll succeed." It's rather a kind of meta-methodology or "framework" and can be used to build your own methodology. For example, you can use RUP infrastructure/artifacts/techniques/best practices to manage your projects using PRINCE2/PMI/XP/and many others (not always convenient, but possible).

    I strongly disagree that RUP is somehow connected with the waterfall model. Its core is the iterative model! See the 1st RUP practice - "Develop software iteratively". Note that it's the 1st practice not by accident.

    Artur
  15. As explained by Todd, people who have never practiced time-boxed releases (with XP or Scrum, for example) could get the impression that features don't matter with that kind of approach. That's of course not the case, as most agile methodologies promote feature-driven development inside time boxes. The post's title should have been "Time boxed versus Not Time boxed releases" to prevent such ambiguity.
  16. One of the things I find some people have a very hard time understanding, despite the fact that it seems very simple to me, is that there is no _necessary_ connection between iterations and releases to customers. Yes, one of the iterations will get released to a customer, but nothing says you have to release every iteration and in fact that would often be fairly stupid to even attempt.

    In all Agile processes, you will be doing frequent iterations, where frequent is anywhere from 1 week (Evo) to 1+ month (RUP, XP, others). Each of those iterations should deliver value to stakeholders, but stakeholders are NOT the customer in most (all?) cases. Whoever the stakeholder is internally (product manager, direct manager, CEO, etc) can then be responsible for making release decisions.

    Engineers in most organizations will not be making release decisions, so you'd think this would be clear, but sometimes it seems that people building webapps are less able to make the distinction. This is probably a symptom of the typical HTML website methodology that many people learn before they ever use a true software development method, better known as the 'no' Method (ie, live editing of pages that maybe are backed up, maybe not, depending on how technical the person is and how badly they've been burnt).

    As far as open source libraries go, my preference is that they set goals for feature releases that are off in the future, and make time-boxed steps towards that goal. Once all the goals are met, they can make the release and write their next goals document.

    Unfortunately, the more common process seems to be to write a goals document for every major release and then simply spin off minor releases willy nilly until they get so tired of the current codebase and/or there's enough developer turnover that no one can stand making evolutionary changes, at which point there's another major release push, starting over from alpha. There are plenty of projects which avoid this lifecycle, but more that seem to think this is a good way to do things.

    James
  17. I fully agree with you, James. I too cannot understand why people confuse iterations with releases. Planning your releases independently of the iterations gives better control over the project.
    While iterations help us make visible progress in small steps, we can hold off on the delivery for some more time, especially if the next iteration brings a few dependent modules to the system, thus reducing the amount of planning required for each release.
  18. I agree with James that there is no connection between iterations and releases.

    The 'public' release schedule will depend on the nature of the product. Having worked with financial applications in the past, we can't roll the release out to the end-users, but we can integrate into test environments and work with the releases there. Working with community-driven game development is a totally different matter - you have to keep your time-based release schedule (in our view), but add features that the end-user will find of value. These things don't always add up, but if you can show the users that you're on the right track, I think they'll understand that a feature may not be fully implemented.

    As James is saying: the buck always stops at the company management. We pay if we can't deliver - be it a public release or 'just' an internal release.

    As for open source libraries and products, I would say it depends on how much your business depends on the product. Most development can be done using stable releases that have a feature-based timebox - but when it comes to your run-time platform (J2EE server, web frameworks and so on) it may be necessary to maintain an internal build of that product. You need to have a 100% stable platform where you can merge new features in when needed. Why? Not all open source projects are doing QA to fit business needs. How do we overcome this problem? Give your changes back to the community and your internal branches will be smaller - everybody wins.

    Just my $0.02
     Jesper
     CEO, World League Sports
  19. I may be confusing my terms here, but I believe that the end of an iteration (for the development team) should result in a "work package" - a binary (i.e., a war file) of working code, with the features that were agreed upon at the beginning of the iteration (and that the development team has unit tested). Maybe this is the "release" that the author is talking about time-boxing. Since development has little control over QA, testing, and even deployment, the official "release" to production of the package should go through its own process. If you are working on a large project with a clear set of roles - developer, tester, business analyst, systems administrator - then this is a more appropriate model.

    However, I am not an expert in XP, but I believe in classic XP there is an explicit set of rules, and it does not make a distinction between iteration and release. There are only two roles - developer and customer. On a smaller project, this may be the better approach.
  20. I may be confusing my terms here, but I believe that the end of an iteration (for the development team) should result in a "work package" - a binary (i.e., a war file) of working code, with the features that were agreed upon at the beginning of the iteration (and that the development team has unit tested).

    That's the ideal goal, although it isn't always achievable. You ideally want something that's releasable at the end of the iteration.
    Since development has little control over QA, testing, and even deployment - the official "release" to production of the package should go through its own process.

    That's a classic symptom of a waterfall organization. First, acceptance testing people should be integrated with the developers, as should DBAs, infrastructure people, etc. There shouldn't be any sort of team to whom you throw the system over the wall.

    As soon as you have these disparate teams that have to get their hands on the system, you're losing agility. I've been there:

    http://www.mayford.ca/download/XP%20Meets%20Corporate%20Reality.pdf
    However, I am not an expert in XP, but I believe in classic XP there is an explicit set of rules, and it does not make a distinction between iteration and release.

    Again, the idea is to be as close to releasable as possible after each iteration. If you need to release early, then you're in a position to do so in relatively short order.
    There are only two roles - developer and customer. On a smaller project, this may be the better approach.
    That's oversimplifying it somewhat. You will always have multiple roles (Customer, Programmer, Manager, Gold Owner, Tester, etc.), but one person may fill more than one role.

    Regarding the size of the project, are you saying that on a smaller project it's more appropriate to be releasable at the end of each iteration?

    Dave Rooney
    Mayford Technologies
  21. What are your opinions of the two approaches? Do you prefer to use libraries that use one or the other?

    I have used both time and feature-boxed releases with agile development. My experience has been that time-boxed releases provide better focus to the team as a whole (including the Customer). Feature-boxed releases have had a tendency to include just one more story, and there seems to be some drift in the team's focus.

    A couple of years ago, Kent Beck was researching the applicability of Lean Manufacturing principles to software development. One of the concepts he spoke of at the time was "Software in Process". Essentially, it means that inventory is waste and software that hasn't been released to production is inventory. That's something that really resonated with me, and I have seen too many times what happens when software sits in inventory while it's being polished or a couple of new features are being added.

    So, my personal opinion is that time-boxed is better, shorter time-boxes are better, and that it takes a great deal of discipline and leadership to do it.

    Dave Rooney
    Mayford Technologies
  22. Software in process

    A couple of years ago, Kent Beck was researching the applicability of Lean Manufacturing principles to software development. One of the concepts he spoke of at the time was "Software in Process". Essentially, it means that inventory is waste and software that hasn't been released to production is inventory. That's something that really resonated with me, and I have seen too many times what happens when software sits in inventory while it's being polished or a couple of new features are being added.

    I think in many situations this is a flawed concept. There are costs to change that developers may or may not see. For example, critical business systems often have a lot of training material for end users. Every time you make a change, you potentially invalidate that material, just like changing an API in a library can invalidate a bunch of code in the client code base. Furthermore, you invalidate the tacit knowledge that the user community has acquired.

    Not to mention regression testing. Yes, much of testing can be automated. But not all, and not enough to bring the cost of regression testing a non-trivial system down to a point where it can be done frequently.

    Consequently, I think the coding to manufacturing analogy is fundamentally flawed. Coding is more like detailed design in the hardware world. In manufacturing, you don't want inventory. You also don't want engineering pushing design changes down on you every day (or week or month), because it means you have to retool and reprogram your machines, and it causes you to slide backwards down your learning curves. Changes can invalidate existing inventory of feeder parts, and agreements with suppliers.

    I'm not saying change is bad. I'm saying change costs money. Different changes cost different amounts. In other words, change must be carefully managed. Managing change for a departmental webapp is different from a webapp with a few hundred users is different from an ERP system with thousands of users that's capable of shutting down a business.
  23. Software in process

    I think in many situations this is a flawed concept. There are costs to change that developers may or may not see. For example, critical business systems often have a lot of training material for end users. Every time you make a change, you potentially invalidate that material, just like changing an API in a library can invalidate a bunch of code in the client code base.

    Documentation and training materials should be treated as part of the system just as much as code. They too can be created iteratively and incrementally.
    Furthermore, you invalidate the tacit knowledge that the user community has acquired.

    Not necessarily - if your increments are small enough, the change the users face is also small. Having said that, I do agree that users do not want an application to churn in front of them every couple of weeks.
    Not to mention regression testing. Yes, much of testing can be automated. But not all, and not enough to bring the cost of regression testing a non-trivial system down to a point where it can be done frequently.

    This I simply don't agree with. I'm not talking about using tools such as WinRunner or Rational Robot to test through the GUI (though they do have their place), but writing quickly executable tests that execute just under the UI (assuming the system has one). This, of course, is predicated on proper separation of business logic from interface logic.
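
    As a concrete illustration in Java, a test like this exercises a business rule directly against the service layer, with no GUI in the loop, so a full regression of hundreds of such tests runs in seconds. OrderService and Order are invented stand-ins, not anyone's real API:

    // A test "just under the UI": it drives the service layer directly, so
    // it needs no GUI robot and runs in milliseconds. OrderService and Order
    // are invented stand-ins for your real business-logic classes.
    import junit.framework.TestCase;

    public class OrderPricingTest extends TestCase {

        // Minimal stand-in business logic so the sketch is self-contained.
        static class Order {
            private double lineTotal;
            void addLine(String sku, int qty, double unitPrice) {
                lineTotal += qty * unitPrice;
            }
            double total() { return lineTotal * 1.05; } // 5% tax
        }
        static class OrderService {
            Order createOrder(String customer) { return new Order(); }
        }

        public void testTotalIncludesFivePercentTax() {
            Order order = new OrderService().createOrder("ACME Corp");
            order.addLine("widget", 4, 25.00); // 4 widgets at $25.00 = $100.00

            // The acceptance criterion agreed on with the Customer:
            // total = line total plus 5% tax.
            assertEquals(105.00, order.total(), 0.001);
        }
    }
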
    Consequently, I think the coding to manufacturing analogy is fundamentally flawed.

    That I agree with. What I also agree with is that software that hasn't been released is inventory. What's bad about inventory? It isn't producing value to the company. It's becoming obsolete. Any defects in that inventory are hidden. Customer feedback from the product is delayed because the product is sitting in inventory.

    That analogy holds very well for software. The sooner you can get software off of a developer's machine and into production, the sooner you receive real feedback and return on the investment in that software.

    There always has to be a balance between shipping to production the moment code is checked in and gargantuan multi-year dinosaurs that are obsolete before they ever see the light of day. My experience has consistently been that shorter releases are much better than longer ones.

    Dave Rooney
    Mayford Technologies
  24. Software in process

    Documentation and training materials should be treated as part of the system just as much as code. They too can be created iteratively and incrementally.

    Yes and no. I'm 100% in support of involving the people who develop the training material in the development. They are frequently more useful than the domain experts.

    But in my experience, the released materials typically contain a lot of screenshots and action-by-action instructions. Simple changes in screen layout or application navigation can obsolete hundreds or even thousands of screenshots and sets of instructions. Making these updates is a lot of work.

    If you have a tool or technique for automatically regenerating screenshots and validating instructions when the system changes, I'm all ears.
    This I simply don't agree with. I'm not talking about using tools such as WinRunner or Rational Robot to test through the GUI (though they do have their place), but writing quickly executable tests that execute just under the UI (assuming the system has one). This, of course, is predicated on proper separation of business logic from interface logic.

    Yes and no again. Having automated validations for business logic can be absolutely critical. I've worked on systems that would never have been validated without such automated tests, and would probably be busy producing subtle errors in production as we speak. But automated tests assume the person creating them has a complete understanding of what the system is supposed to do. In agile development, you assume that no one has such a complete understanding, and that it will only be reached by iteratively developing the system and the requirements together, based on what the customer learns from interacting with the system.

    Automated tests are great for finding "provable" bugs, like the application throwing an exception or returning miscalculated numbers. But they don't find all the "strange" things that the system does, because the person writing the tests cannot anticipate every strange behavior of the system. They also don't find cases where the developers failed to understand the requirements, nor do they find cases where the GUI is "hard" to use.

    On a side note, I should answer the question: why is it so important that testing uncover ease-of-use issues? Those aren't really bugs...

    Well, in my opinion, they are. Anything within the system boundaries that could cause the system to fail to achieve its projected ROI is a bug. If the construction of a system was justified based on labor savings, but it simply automates one task while making another obscenely clunky, there's a good chance it's not going to achieve the labor savings it's supposed to achieve.

    Consequently, if you expect a system to accomplish its goals, and you don't want it to stray away from them across the iterations, then you need tests to ensure that they aren't in jeopardy.
  25. I think it is similar to the age-old difference between theory and real life. In real life, a software deliverable is neither purely "time-boxed" nor completely "feature-boxed". As most of what is wanted is driven by market demands, "when" and "what" will always determine what gets developed and delivered.

    Business management will always want a control on "when" and "what", as they directly influence "bottom-line" and "top-line" respectively and hence the ROI.

    Engineering practice might want to go one way or the other, but ultimately only those practices that directly meet the market and business expectations will succeed and flourish. The rest we will have to honor in the textbooks and research papers.
  26. Sorry for this reality check

    From time to time I stand astonished at how development happens in perfect worlds. It leads me to the rather short and disillusioning conclusion that in most of the described organizations where some of the discussed methodologies work, any other would work as well. The reality that I see most often will usually leave you with both a "time boxed" and a "feature boxed" release one way or the other, since

    • Sales has promised a release date
    • Sales has committed to a broad and rather abstract feature set

    More often than not, neither of these can be shifted, since the current product may expire, or a reduction in the feature set may create a financial or, even worse, a security hazard.

    The world is no longer the world where Mr Beck creates hospital software from scratch, or where we do the very first internet bank, or where the grocery store from Iowa figures it might be a good idea to have an electronic order management system.

    It is rather the world where a website needs desperate replacement because it runs on a hopelessly outdated platform with bizarre licensing costs. Or it is the world where someone finally figures that it might be nice, no, actually mandatory, for a system to support names with 8-bit-wide characters. Or it is the world where 20000 ticket validation machines or 30000 electronic cashiers have already been purchased with a defined roll-out plan, and there is no option whatsoever in the software that needs to be available when that starts.

    The real problem - deep down - with software development is that software developers and, worse, software developers' managers have, after half a century of the craft, still not learned to be firm and fair with their estimates. They easily allow for feature creep, they commit to absurd development schedules and delivery plans - some because they are young or overoptimistic or plainly lucky, some because they are pressured by their customers and employers. And ironically the "methodologies" support this approach by pretending to "manage the chaos", using "iterations" or "agility". Where this fails miserably, compared to classical engineering (or even a classical craft), is that it negates the difference between design (or sketch) and construction. There are no iterations when constructing a bridge or an airplane or a car.
  27. Sorry for this reality check

    • Sales has promised a release date
    • Sales has committed to a broad and rather abstract feature set

    ...and the result is the type of defect-laden crap that companies like M$ have traditionally shipped. Even they have moved to agile development to help improve their responsiveness and quality.

    Don't lecture me about perfect worlds! It's that kind of attitude that will ensure that the status quo of late, poor quality software with features that no one uses continues.
    The world is no longer the world where Mr Beck creates hospital software from scratch, or where we do the very first internet bank, or where the grocery store from Iowa figures it might be a good idea to have an electronic order management system.

    Who says you can't apply Agile Development to existing systems? I've done it:

    http://tinyurl.com/7fa7s

    It is obviously somewhat more difficult than greenfield development, but the most important thing is to establish the core values and practices - communication and feedback, iterative and incremental development, frequent small releases, rigorous testing.
    The real problem - deep down - with software development is that software developers and, worse, software developers' managers have, after half a century of the craft, still not learned to be firm and fair with their estimates.

    Absolutely - I couldn't agree more! That's exactly what XP strives to address by saying that the Customer (in the case of a shrink-wrapped application, the Sales people or Product Manager) dictates which features are a priority, but it's the Developers who dictate the estimates of how long it will take. Also bear in mind that it isn't the Development manager or a few Architects or Sr. Developers that make the estimates, but rather the actual Programmers themselves.
    There are no iterations when constructing a bridge or an airplane or a car.

    In the construction, no. However, in the design process there most certainly is. Here's an example from Airbus Industries:

    http://tinyurl.com/e4ko5

    Developing software isn't construction, it's design. The construction of software is the build and packaging process.

    Dave Rooney
    Mayford Technologies
  28. Sorry for this reality check

    ...and the result is the type of defect-laden crap that companies like M$ have traditionally shipped.
    Well, I haven't worked at Microsoft, but from what I have read they embraced "Agile Development" long before the term was even around. And for them it makes perfect sense: they control the release cycle about as much as you can!
    In the construction, no. However, in the design process there most certainly is. Here's an example from Airbus Industries: http://tinyurl.com/e4ko5
    Yes! But is the answer we come up with to gather all the different carpenters that build a house and have each one estimate the room they will build? No, you get a carpenter, a plumber, a bricklayer and have them give their respective estimates. Not one per profession - every single one.
    Developing software isn't construction, it's design. The construction of software is the build and packaging process.
    I do believe this is a serious misconception. The people who actually build the aircraft are seriously qualified engineers, craftsmen, welders, aircraft technicians... yet they deliver a flying prototype as a result of a construction process with rather small iterations. There is not yet anything repetitive about the construction process, as there would be on an assembly line. This is a lot like software development, with the notable difference that the delivery time and the result are predefined and (relatively well) met!
  29. Karl, I suggest these articles describing agile methodology and software development as a continuous design process:
    o http://martinfowler.com/articles/newMethodology.html#SeparationOfDesignAndConstruction
    o http://www.bleading-edge.com/Publications/C++Journal/Cpjour2.htm
  30. Karl, I suggest these articles describing agile methodology and software development as a continuous design process:

    http://martinfowler.com/articles/newMethodology.html#SeparationOfDesignAndConstruction

    http://www.bleading-edge.com/Publications/C++Journal/Cpjour2.htm

    (Sorry for the messy first reply, didn't grok formatting rules... and no preview available. Hope this one works better.)
  31. planning requires both

    A time-based roadmap is meaningless without any planned features (let's just wait until 2007 and release what we have then). Similarly, a list of features does not actually constitute a plan you can execute in a project setting (you know, allocate resources, set milestones). You need to do both. The problem is that most software engineers are not very good at doing time estimates and also have a poor sense of what they are going to build (feature creep). They do have a good sense of what they'd like to build in the next few weeks/months. That's why agile programming has become popular (with developers). Set some short-term goals, allocate resources, execute, and repeat until everybody is happy.