TDD is an Approach, not a Task

  1. TDD is an Approach, not a Task (98 messages)

    Tim Ottinger (no relation to your Humble Editor, although I'd be proud to say otherwise) has written "Not A Task, But An Approach" on ObjectMentor, saying that he's "been getting a lot of contact from frustrated people who don’t really have a good handle on the 'drive' part of Test Driven Development. A question heard frequently is: 'I’ve almost completed the coding, can you help me write the TDD?'"
    It seems as though Test-Driven Development has been taken backward, as if the developers are driven to write tests. The practitioner winces, realizing that he again faces The Great Misunderstanding of TDD. TDD stands for Test-Driven Development, which is not as clear as TFD (Test-First Development). If the consultant strove to always say the word "first" in association with testing, most people would more surely grasp the idea. In fact, I've begun an experiment in which I will not say the word "test" without the word "first" in close proximity. I'll let you know how that works out for me. If the tests are providing nothing more than reassurance on the tail end of a coding phase, then the tests aren't driving the development. They are like riders instead of drivers. Test-Ridden Development (TRD) would be a better term for such a plan. Even though it is better to have those tail-end tests than to have no automated testing, it misses the point and could not reasonably be called TDD. An old mantra for TDD and BDD is "it's not about testing". The term BDD was invented largely to get the idea of "testing" out of the way. People tend to associate "test" with a release-preparation activity rather than an active approach to programming. BDD alleviates some of that cognitive dissonance. In TDD, tests come first. Each unit test is written as it is needed by the programmer. Tests are just-in-time and are active in shaping the code. Acceptance Tests likewise tend to precede programming by some short span of time. Through months of repetition I have developed the mantra:
    TDD isn't a task. It is not something you do. It is an approach. It is how you write your programs.
    I wonder if we shouldn't resurrect the term Test-First Programming or Test-First Development for simple evocative power. Admittedly there are some who would see that as a phase ordering, but maybe enough people would get the right idea.
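    As a minimal sketch of what "test first, just in time" looks like in practice (JUnit 4, with a hypothetical PasswordRule class that exists only because the tests demanded it):
    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // The tests below were written before PasswordRule existed; their failure
    // (first a compile error, then a red bar) is what pulled the code into existence.
    public class PasswordRuleTest {

        @Test
        public void rejectsPasswordsShorterThanEightCharacters() {
            assertFalse(new PasswordRule().accepts("short"));
        }

        @Test
        public void acceptsLongerPasswords() {
            assertTrue(new PasswordRule().accepts("long enough"));
        }

        // Just enough production code to turn the bar green; it grows only
        // when another test demands it.
        static class PasswordRule {
            boolean accepts(String candidate) {
                return candidate.length() >= 8;
            }
        }
    }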

    Threaded Messages (98)

  2. Let's say you have a method with three Boolean decisions. With simple statement coverage, you test almost nothing more than the compiler does. To achieve full path coverage, you need eight tests. And with more decisions... the number of paths (and tests needed) increases exponentially. How do you plan all these tests with a test-first approach? What kind of coverage is sufficient? Is it OK to spend the whole day planning tests for a method you would implement in 15 minutes (with a probability of error of 0.01% based on your coding history)? These are the questions real application coders struggle with daily. And things get even more complicated when you start automating user interface and database tests. This is the reason TDD is more popular in books than in real life (yeah, you say you do it, but give me the code and I'll show you big holes in it). The value of basic statement coverage is not much in real-life applications. I'm not saying we should give up TDD. But we need more good practices and guidelines. The fact is that sometimes it's impossible to use a test-first approach for testing all combinations of those AJAX component states. But what is the reasonable amount of automated testing? Tomi
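    For concreteness, here is a sketch of the kind of method Tomi describes, with made-up shipping rules (JUnit 4). Three independent Boolean decisions give 2^3 = 8 paths; when the decisions are independent like this, exhaustive coverage is cheap, and the hard cases are the ones where the decisions interact:
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShippingCostTest {

        // Three independent boolean decisions => 2^3 = 8 distinct paths.
        static class ShippingCalculator {
            int cost(boolean rush, boolean oversized, boolean international) {
                int cents = 500;                      // base cost
                if (rush)          cents += 1000;
                if (oversized)     cents += 750;
                if (international) cents += 2000;
                return cents;
            }
        }

        // All eight paths, written out as the eight cases they are.
        @Test
        public void coversAllEightPaths() {
            ShippingCalculator calc = new ShippingCalculator();
            assertEquals( 500, calc.cost(false, false, false));
            assertEquals(1500, calc.cost(true,  false, false));
            assertEquals(1250, calc.cost(false, true,  false));
            assertEquals(2500, calc.cost(false, false, true));
            assertEquals(2250, calc.cost(true,  true,  false));
            assertEquals(3500, calc.cost(true,  false, true));
            assertEquals(3250, calc.cost(false, true,  true));
            assertEquals(4250, calc.cost(true,  true,  true));
        }
    }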
  3. Let's say you have a method with three Boolean decisions. With simple statement coverage, you test almost nothing more than the compiler does. To achieve full path coverage, you need eight tests. And with more decisions... the number of paths (and tests needed) increases exponentially. How do you plan all these tests with a test-first approach? What kind of coverage is sufficient? Is it OK to spend the whole day planning tests for a method you would implement in 15 minutes (with a probability of error of 0.01% based on your coding history)? These are the questions real application coders struggle with daily.
    I think answers to these questions become obvious with TDD experience. The overall answer is that you write the tests based on what you actually expect your program to do. Do you need to cover every eventuality? Then write tests for each. It doesn't take all day. If you find that your tests are taking too long to write then there is something wrong with your design. And that's really the point - to find out if your design is correct. You don't spend all day writing tests. In TDD you write a test, code to pass it, write a test, code to pass it. In the process, you discover what the API for your module should look like. The truth is, TDD is as much about iterative design as anything else. Because of the constant test-code-test-code-test-code cycle, the iterations are tiny. The idea is that nobody knows what the design should be until you write the code that uses it. BUT THE CODE KNOWS, so write that test code! In TDD you do end up with a decent regression test suite. But that's just a bonus. It's not the real reason you do TDD in the first place.
    And things get even more complicated when you start automating user interface and database tests.
    True, UI code is tougher to test with TDD. This is a function of the limitations of TDD tools more than anything else. I'm working to solve this tool problem for JSF/AJAX applications right now.
    This is the reason TDD is more popular in books than in real life (yeah, you say you do it, but give me the code and I'll show you big holes in it).
    For several years I've been writing almost all my code this way. When I don't, I end up with crap. Popular or not, in real life TDD works. In the long run it will save you time and your code will be cleaner. I would contend that even in the short run it saves you time, but I can't back that up. The fact is, if you are writing code you need some way to ensure that it works. The old way is just to manually test it at the UI level. The problem with that is, the code/compile/test cycle is very long and slow. You have to write a lot of code that may or may not work before you can tell if you've done anything good. So it's much better to write smaller, simpler, targeted, automated tests at a lower level.
    But what is the reasonable amount of automated testing?
    I had a wise old English teacher in high school. Whenever a student asked, "How long should this paper be?", she would just reply, "Sufficient to cover the subject." I think the same applies to unit testing. Stan Silvert http://www.jsfunit.org
  4. Do you need to cover every eventuality? Then write tests for each. It doesn't take all day. If you find that your tests are taking too long to write then there is something wrong with your design. And that's really the point - to find out if your design is correct.
    Ok, a method with 10 Boolean decisions means 1024 paths. A day is not enough for 100% coverage. There's an article about this here: Statement, Branch, and Path Coverage Testing in Java. Sometimes there's nothing wrong with your design, but requirements are complex. You can refactor and clean up your code, but you can't escape the fact that in the end there are still 1024 paths, because that's how the business is. That article presents one solution to the problem. But overall, these kinds of problems should be talked about more in the community.

    I had a wise old English teacher in high school. Whenever a student asked, "How long should this paper be?", she would just reply, "Sufficient to cover the subject." I think the same applies to unit testing.
    That's a stupid answer, guiding developers to write simple statement-coverage tests just because they are told to. If using TDD, they should write tests that have sufficient path coverage in all layers of the application. But what is that? Is there any research on that? Or is it really just my decision? There's also a lot of suspicion about TDD in this thread. Doesn't that show that "sufficient" is something that should be talked about more? Tomi
  5. If using TDD, they should write tests that have sufficient path coverage in all layers of the application. But what is that? Is there any research on that? Or is it really just my decision? There's also a lot of suspicion about TDD in this thread. Doesn't that show that "sufficient" is something that should be talked about more?

    Tomi
    There has been discussion about these issues, unfortunately under another acronym, BDD (Behaviour-Driven Development). You might want to take a peek here for a primer and other links: http://behaviour-driven.org/ Some of the concepts are interesting and address the limitations of TDD, but I'm still a little worried about overemphasizing "ubiquitous language" instead of domain behavior and dynamics.
  6. There has been discussion about these issues, unfortunately under another acronym, BDD (Behaviour-Driven Development).
    BDD sounds quite reasonable. Using mock objects to test domain objects is not new, but it is very useful advice. But does database or UI component testing have any role in BDD? Should we use TDD only in the domain layer? I see that TDD within BDD has a certain role in design. But still, when you think about a domain Service returning different objects with different parameters... In my opinion we would have to achieve a certain coverage when deciding what kind of test is suitable. I don't need TDD just to realize that findCustomers returns me some Customers. But maybe I'm just one of those "who just don't get it" and should change business... Tomi
  7. Do you need to cover every eventuality? Then write tests for each. It doesn't take all day. If you find that your tests are taking too long to write then there is something wrong with your design. And that's really the point - to find out if your design is correct.

    Ok, a method with 10 Boolean decisions means 1024 paths. A day is not enough for 100% coverage. There's an article about this here:
    Statement, Branch, and Path Coverage Testing in Java.
    The second sentence of that article makes my point exactly: "Coverage is the perfect complement to unit testing: unit tests tell you whether your code performed as expected, and code coverage tells you what remains to be tested." You are talking about code coverage. TDD is about unit testing.

    Sometimes there's nothing wrong with your design, but requirements are complex.
    Again, we are talking about TDD. If you don't understand the requirements then you've got different problems. A full Agile process might help, but there are many ways to deal with that.
    You can refactor and clean up your code, but you can't escape the fact that in the end there are still 1024 paths, because that's how the business is. That article presents one solution to the problem. But overall, these kinds of problems should be talked about more in the community.
    Agreed. More research is always needed.

    I had a wise old English teacher in high school. Whenever a student asked, "How long should this paper be?", she would just reply, "Sufficient to cover the subject." I think the same applies to unit testing.

    That's a stupid answer, guiding developers to write simple statement-coverage tests just because they are told to. If using TDD, they should write tests that have sufficient path coverage in all layers of the application. But what is that? Is there any research on that? Or is it really just my decision?
    First, I didn't say anything about guiding anyone toward statement coverage. Second, just because "sufficient" is not a precise term does not make it wrong or stupid. The statement is wise because if you ask an imprecise question you will get an imprecise answer. Your question has a different answer for every project and for every module within that project.
    There's also a lot of suspicion about TDD in this thread. Doesn't that show that "sufficient" is something that should be talked about more?

    Tomi
    I don't detect any suspicion about TDD in this thread. The only real debate is about what to call it so that it is better understood. Your posts demonstrate the need for this. And you should apologize to my dear old English teacher. Stan Silvert http://www.jsfunit.org
  8. Just a couple of things I want to share with you... Coverage is about how useful your tests are. If you don't think about coverage (what percent of defects are automatically detected), there's no way to justify your time spent on tests, from my point of view. There has to be a way to measure how useful your tests are. If we don't justify unit testing and the coverage needed properly, people will write very simple tests. If these tests won't reveal any errors and only cause extra trouble in development and maintenance, what's the point? Somebody arguing "It's good for your design and you will surely code better" is not a good enough reason for me. I need the evidence. There are people who have written excellent code before TDD. So it must not be a prerequisite of all quality code. Unit testing and TDD/test-first thinking are great tools, but with or without them, excellent coders still produce excellent code, and then there are those who don't succeed that well. And I haven't criticized Stan's teacher, so I am not going to apologize to her. Comparing English essays with the amount of unit testing needed in a software process does not sound very reasonable to me, but I'm sorry if the word "stupid" was too strong here. But at least we are generating some discussion. And feel free to call my opinions idiotic if you wish :) Tomi
  9. Just a couple of things I want to share with you...

    Coverage is about how useful your tests are. If you don't think about coverage (what percent of defects are automatically detected), there's no way to justify your time spent on tests, from my point of view. There has to be a way to measure how useful your tests are.
    Again, we are talking about two completely different things here. Code coverage tests and unit tests are different things that serve different purposes. Unit tests aid in good design and later serve as a regression suite. Code coverage tests, integration tests, system tests, acceptance tests, UI scripts, etc. have finding bugs as their main purpose. You can measure their effectiveness by how many bugs they detect. For these kinds of tests, the only "successful" test is one that fails. These tests can detect the presence of bugs, but not their absence. Unit tests in the TDD world are much different in purpose. Admittedly, their usefulness is more subjective. That is because as a design tool, they aid in producing something that is fundamentally subjective -> good design.
    Somebody arguing "It's good for your design and you will surely code better" is not a good enough reason for me. I need the evidence.

    There are people who have written excellent code before TDD. So it must not be a prerequisite of all quality code. Unit testing and TDD/test-first thinking are great tools, but with or without them, excellent coders still produce excellent code, and then there are those who don't succeed that well.
    I would submit to you that more excellent code is written today because of TDD and other best practices. Surely you are not suggesting that we throw out good techniques just because it is possible to do good work without them. Stan Silvert http://www.jsfunit.org
  10. Ok, a method with 10 Boolean decisions means 1024 paths. A day is not enough for 100% coverage. There's an article about this here:
    Statement, Branch, and Path Coverage Testing in Java.
    Sometimes there's nothing wrong with your design, but requirements are complex. You can refactor and clean up your code, but you can't escape the fact that in the end there are still 1024 paths, because that's how the business is.
    I'll try to comment on this if you don't mind. It seems like you're treating unit tests much like functional tests here. I don't think the requirements say something like "There should be only one method that performs all the logic, resulting in 1024 possible paths"; it's more likely that the requirements say the method should return specific values for specific inputs, but it doesn't have to be exactly one method doing this job. Here is where TDD comes into play. You could decompose the method into a facade over many other methods for which you can easily write unit tests (see the sketch below). And you only write tests for the functionality you are currently working on, not for the whole facade. Besides, how could you test this 1024-path method without unit tests? Should you have a QA person who will perform 1024 different test cases by hand?
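    A rough sketch of that decomposition (hypothetical names, JUnit 4): the query-building decisions are pulled out of the big findCustomers facade into a small class that can be tested one criterion at a time:
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CustomerQueryBuilderTest {

        // Extracted from the facade: each criterion is one small, independent decision.
        static class CustomerQueryBuilder {
            String build(boolean byName, boolean byCity, boolean activeOnly) {
                StringBuilder sql = new StringBuilder("select * from customer where 1=1");
                if (byName)     sql.append(" and name like ?");
                if (byCity)     sql.append(" and city = ?");
                if (activeOnly) sql.append(" and active = 1");
                return sql.toString();
            }
        }

        // Each decision gets a focused test; nobody enumerates the full product of paths.
        @Test
        public void nameFilterAddsOnlyTheNameClause() {
            String sql = new CustomerQueryBuilder().build(true, false, false);
            assertEquals("select * from customer where 1=1 and name like ?", sql);
        }

        @Test
        public void cityAndActiveFiltersCompose() {
            String sql = new CustomerQueryBuilder().build(false, true, true);
            assertEquals("select * from customer where 1=1 and city = ? and active = 1", sql);
        }
    }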
  11. Thanks, I get your point. But let's say you have that findCustomers facade method, ending in a DAO method that still has those 10 parameters. Ok, I create a mock DAO returning two or three customers; that's easy. But there's a lot of code in the DAOImpl that has not been tested. To verify that even your dynamic SQL syntax is correct, you should test all 10 parameters in different combinations. And the real work starts when you start testing the results of all the combinations. I don't know if this is the purpose of TDD anyway, but what I'm trying to say is that writing unit tests that really improve code quality by revealing errors is hard. Writing sophisticated TDD tests is also hard, because I'm not happy with those "hey, let's assert findCustomers returns Customers" tests. Should I be? Maybe you know how to do it for your DAOs, services, entities, MVC classes and UI components, but I haven't found enough answers or a fully comprehensive TDD resource for these issues. And there are thousands of execution paths in real apps; if we could test them all by hand or automatically, we would never have any errors in production.
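    For illustration, a sketch of the "mock DAO" step being described, using a hand-rolled stub and hypothetical names (JUnit 4). Note that the real DAO implementation's SQL is deliberately not exercised here, which is exactly the gap Tomi is pointing at:
    import java.util.Arrays;
    import java.util.List;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CustomerServiceTest {

        interface CustomerDao {
            List<String> findCustomers(String nameFilter, boolean activeOnly);
        }

        static class PreferredCustomerService {
            private final CustomerDao dao;
            PreferredCustomerService(CustomerDao dao) { this.dao = dao; }

            // The logic under test: preferred customers are always restricted to active ones.
            List<String> preferredCustomers(String nameFilter) {
                return dao.findCustomers(nameFilter, true);
            }
        }

        @Test
        public void alwaysAsksTheDaoForActiveCustomersOnly() {
            // Stub DAO: returns a canned answer only when activeOnly was requested.
            CustomerDao stub = (nameFilter, activeOnly) ->
                    activeOnly ? Arrays.asList("Alice", "Bob") : Arrays.asList("should not happen");
            PreferredCustomerService service = new PreferredCustomerService(stub);
            assertEquals(Arrays.asList("Alice", "Bob"), service.preferredCustomers("A%"));
        }
    }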
  12. I think I understand you. Testing a DAO is a real pain and I think it always will be. But you didn't come up with alternatives in your posts. You say it's hard to test a method with many combinations, but then how can you be sure that other code that depends on this method is correct? The whole point of TDD, IMO, is to be sure about the small pieces. I use TDD to be sure that code works as expected, even though it could be completely wrong from the client's point of view. At least I know that it does what I meant it to do.
    And there are thousands of execution paths in real apps; if we could test them all by hand or automatically, we would never have any errors in production.
    It's not only about programming and software applications. It's everywhere in our lives. Buildings are a good example. There are so many ways to build a house that will have defects, but if every building company put effort into testing all the possibilities, it would never build anything. Instead they test each block in isolation, reducing the possible combinations. The same can be applied to software development.
  13. I think I understand you. Testing a DAO is a real pain and I think it always will be. But you didn't come up with alternatives in your posts. You say it's hard to test a method with many combinations, but then how can you be sure that other code that depends on this method is correct? The whole point of TDD, IMO, is to be sure about the small pieces. I use TDD to be sure that code works as expected, even though it could be completely wrong from the client's point of view. At least I know that it does what I meant it to do.


    And there are thousands of execution paths in real apps; if we could test them all by hand or automatically, we would never have any errors in production.


    It's not only about programming and software applications. It's everywhere in our lives. Buildings are a good example. There are so many ways to build a house that will have defects, but if every building company put effort into testing all the possibilities, it would never build anything. Instead they test each block in isolation, reducing the possible combinations. The same can be applied to software development.
    According to Rod Johnson, unit testing your DAOs is most of the time worthless, and I agree. DAOs should be tested by integration tests, since they are only a thin integration layer over a database. Given the complexity of any persistence API, your tests are usually very brittle and you gain almost nothing by unit testing this stuff. I think unit testing should be used mainly to test the domain and UI layers, except in cases where you have some complex query-creation logic performed by your DAO.
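    A minimal sketch of that integration-style DAO test, assuming an in-memory H2 database is on the test classpath; the table, columns and query are made up for illustration:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CustomerDaoIntegrationTest {

        @Test
        public void generatedQueryRunsAgainstARealDatabase() throws Exception {
            // H2's in-memory mode gives a throwaway schema per test run.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:daotest");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("create table customer (name varchar(50), city varchar(50))");
                stmt.execute("insert into customer values ('Alice', 'Oslo'), ('Bob', 'Turku')");

                // The SQL a DAO would generate for a "city" criterion, inlined here for brevity.
                try (ResultSet rs = stmt.executeQuery(
                        "select count(*) from customer where city = 'Turku'")) {
                    rs.next();
                    assertEquals(1, rs.getInt(1));   // both the syntax and the mapping get exercised
                }
            }
        }
    }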
  14. It's not only about programming and software applications. It's everywhere in our lives. Buildings are a good example. There are so many ways to build a house that will have defects, but if every building company put effort into testing all the possibilities, it would never build anything. Instead they test each block in isolation, reducing the possible combinations. The same can be applied to software development.
    Actually, this kind of approach to validation has been shown not to work in the 'real' world. A carpenter waits until the second-to-last panel is placed before measuring and cutting the last. Trying to do it all up front not only produces worse results, it increases the difficulty and cost of the work. I read an article recently about how Honda and Toyota do not focus on part tolerances as much as they do on the tolerances for the assembled system. That is, the per-part tolerance on their automobiles is lower than that for a lot of cars that have much lower reliability. In other words, perfect parts do not guarantee a perfect system. I've also often seen broken units in systems that work flawlessly.
  15. You're right about the importance of good quality in any kind of production. But that isn't the domain of TDD. I don't know much about Honda's or Toyota's approach to QA, but I don't think they ignore the parts entirely. Of course working parts don't guarantee a working car, but at least they can be sure that the problems are in the integration of the different working parts, which reduces the number of possible failure causes. I may be wrong here, but I think this can be applied to software development at least. I have also seen projects where crappy parts somehow worked well (not saying perfectly here). But each new piece of functionality added produced a ripple effect on the whole system.
  16. You're right about the importance of good quality in any kind of production. But that isn't the domain of TDD. I don't know much about Honda's or Toyota's approach to QA, but I don't think they ignore the parts entirely.
    No, that's not what I am saying. They don't focus as much on parts as they do on the whole system. They produce higher tolerances for the finished product with lower tolerances on the parts. It's a tenuous analogy, but I feel there is often an implied relationship between manufacturing QA and TDD that is used as an argument for TDD. You made a similar one about houses. If we use that analogy, we have to note that it implies that unit testing is not as important as it is often made out to be.
    Of course working parts don't guarantee a working car, but at least they can be sure that the problems are in the integration of the different working parts, which reduces the number of possible failure causes. I may be wrong here, but I think this can be applied to software development at least.

    I have also seen projects where crappy parts somehow worked well (not saying perfectly here). But each new piece of functionality added produced a ripple effect on the whole system.
    I'm not arguing against unit testing. What I am arguing against is inflating the importance of unit testing and ignoring the costs. Unit tests must be written and maintained. They should hit all of a unit's requirements, but trying to use them to ensure there will be no failures is hopeless. The other thing that I am completely appalled by is the claim by apparent TDD experts that having complete coverage means there is no risk in changing code. I guess my problem with TDD is that it seems to lose focus on the true goal of development: producing high-quality maintainable systems.
  17. Re: TDD is an Approach, not a Task

    I'm not arguing against unit testing. What I am arguing against is inflating the importance of unit testing and ignoring the costs. Unit tests must be written and maintained.
    If done correctly, there is no cost to unit testing and TDD. TDD saves you time and money. The cost is in NOT writing and maintaining unit tests.
    They should hit all of a unit's requirements, but trying to use them to ensure there will be no failures is hopeless.

    The other thing that I am completely appalled by is the claim by apparent TDD experts that having complete coverage means there is no risk in changing code.
    I've never heard anyone make either of these claims. You are correct that they are both false. TDD mitigates risk. It doesn't claim to eliminate it. I suspect a straw man lurks...
    I guess my problem with TDD is that it seems to lose focus on the true goal of development: producing high-quality maintainable systems.
    Yep, there he is. Where did you get that idea? TDD is laser-focused on high quality and maintenance. Read the literature and try TDD for yourself. I think you will like it. Stan Silvert http://www.jsfunit.org
  18. Re: TDD is an Approach, not a Task

    I'm not arguing against unit testing. What I am arguing against is inflating the importance of unit testing and ignoring the costs. Unit tests must be written and maintained.

    If done correctly, there is no cost to unit testing and TDD. TDD saves you time and money. The cost is in NOT writing and maintaining unit tests.
    I agree 'if' unit testing is done correctly. But what I think is correct may not be what you think is correct.
    They should hit all of a unit's requirements, but trying to use them to ensure there will be no failures is hopeless.

    The other thing that I am completely appalled by is the claim by apparent TDD experts that having complete coverage means there is no risk in changing code.

    I've never heard anyone make either of these claims. You are correct that they are both false. TDD mitigates risk. It doesn't claim to eliminate it. I suspect a straw man lurks...
    http://www.artima.com/weblogs/viewpost.jsp?thread=210575 "In other words, the change risk is linearly coupled with the complexity. We had a lot of heated discussions about this. Some very smart people we talked with believe that if you have 100% test coverage, your risk of change should be 0." I'm often told that no one is making these claims but I keep seeing them implied if not outright stated.
    I guess my problem with TDD is that it seems to lose focus on the true goal of development: producing high-quality maintainable systems.

    Yep, there he is. Where did you get that idea? TDD is laser-focused on high quality and maintenance.
    It's laser focused on producing high-quality units. This is not the same thing.
  19. Re: TDD is an Approach, not a Task

    I'm not arguing against unit testing. What I am arguing against is inflating the importance of unit testing and ignoring the costs. Unit tests must be written and maintained.

    If done correctly, there is no cost to unit testing and TDD. TDD saves you time and money. The cost is in NOT writing and maintaining unit tests.


    I agree 'if' unit testing is done correctly. But what I think is correct may not be what you think is correct.
    OK, we agree then. As I stated before, I'm not going to get into a pointless argument about what "correct" means.
    They should hit all of a unit's requirements, but trying to use them to ensure there will be no failures is hopeless.

    The other thing that I am completely appalled by is the claim by apparent TDD experts that having complete coverage means there is no risk in changing code.

    I've never heard anyone make either of these claims. You are correct that they are both false. TDD mitigates risk. It doesn't claim to eliminate it. I suspect a straw man lurks...


    http://www.artima.com/weblogs/viewpost.jsp?thread=210575

    "In other words, the change risk is linearly coupled with the complexity. We had a lot of heated discussions about this. Some very smart people we talked with believe that if you have 100% test coverage, your risk of change should be 0."

    I'm often told that no one is making these claims but I keep seeing them implied if not outright stated.
    So it's just hearsay. You still haven't produced your TDD expert saying that it eliminates risk. As far as I know, nobody actually said this, expert or not. TDD doesn't make these silly claims and I certainly don't either.
    I guess my problem with TDD is that it seems to lose focus on the true goal of development: producing high-quality maintainable systems.

    Yep, there he is. Where did you get that idea? TDD is laser-focused on high quality and maintenance.


    It's laser focused on producing high-quality units. This is not the same thing.
    Again, where did you get the idea that a programming and design technique has some other goal than improving quality and maintainability? And yes, I mean of the whole system. Improving the parts DOES improve the quality of the whole. It's not the only factor, but it is an important factor. If you don't want to do TDD then you are just missing out on a powerful technique. Stan Silvert http://www.jsfunit.org
  20. So it's just hearsay. You still haven't produced your TDD expert saying that it eliminates risk. As far as I know, nobody actually said this, expert or not. TDD doesn't make these silly claims and I certainly don't either.
    What are the qualifications of a TDD expert? Is there some sort of certification? Is there an IEEE, ACM, or other industry association that has a standard or at least a generally accepted definition?
  21. So it's just hearsay. You still haven't produced your TDD expert saying that it eliminates risk. As far as I know, nobody actually said this, expert or not. TDD doesn't make these silly claims and I certainly don't either.


    What are the qualifications of a TDD expert? Is there some sort of certification? Is there an IEEE, ACM, or other industry association that has a standard or at least a generally accepted definition?
    I don't know of any certifications in TDD. I believe that there are certs to be had in some of the methodologies that rely heavily on TDD as part of the process. Regardless, if you do hear silly claims like "TDD eliminates risk" you should disregard them. Stan Silvert http://www.jsfunit.org
  22. Re: TDD is an Approach, not a Task

    I'm not going to get into a pointless argument about what "correct" means.
    Nor do I want to. All I mean to point out is that you might look at what I call the correct way to do unit testing and think it's all wrong. I might look at yours and think that you've crossed the cost/return threshold.
    So it's just hearsay.
    This isn't a court of law. If we want to start raising objections we can. I see no reason for Mr. Savoia to lie about this and as a major test-first and unit testing proponent he has no reason to undermine it.
    You still haven't produced your TDD expert saying that it eliminates risk. As far as I know, nobody actually said this, expert or not. TDD doesn't make these silly claims and I certainly don't either.
    I'm not saying that you do, and honestly I don't feel any need to prove myself to you. If you want to call me a liar, fine, I'm not going to get too worked up about it. But I know that I've seen people make such claims. I never said they were experts. I wouldn't think an expert would say something like that. I don't think this is the goal of TDD. But I do think that a lot of developers have a hard time with subtleties.
    Again, where did you get the idea that a programming and design technique has some other goal than improving quality and maintainability? And yes, I mean of the whole system. Improving the parts DOES improve the quality of the whole.
    I never said it had another goal. I said that it was focused on unit testing. And improving the parts does not necessarily improve the whole. Unit tests overlap with high-level testing. Where they don't overlap, there are two possible reasons. One is that the unit test is not important for the application's overall correctness. That is, the unit has requirements that are not necessary for that application to behave properly. The second is that the unit test is checking something important to application correctness that the high-level testing misses. It's only the latter that contributes to high-quality systems. And this space shrinks as the high-level testing becomes more extensive. While this is a benefit of unit testing, it's better to put more resources into improving high-level testing than to hope to catch missed issues with unit tests. The real benefit of unit tests is speed/cost. High-level functional tests are expensive and often difficult to automate. The less time you spend doing this, the better. Unit tests weed out a lot of issues long before you get to this stage and are cheap and easy to automate.
    It's not the only factor, but it is an important factor.

    If you don't want to do TDD then you are just missing out on a powerful technique.
    I've tried writing tests first but I don't find it to be cost-effective. Assumptions made before development commences are often wrong. Instead of rewriting the tests again and again, I wait until the dust settles and write the tests once. It's not that I am not testing. I'm just not investing time into a formal test suite. The first iteration of development is too unstable. Maybe I'm just too set in my ways, but I just don't like to waste time.
  23. Re: TDD is an Approach, not a Task

    I've tried writing tests first but I don't find it to be cost-effective. Assumptions made before development commences are often wrong. Instead of rewriting the tests again and again, I wait until the dust settles and write the tests once. It's not that I am not testing. I'm just not investing time into a formal test suite. The first iteration of development is too unstable. Maybe I'm just too set in my ways, but I just don't like to waste time.
    I think I'm seeing a pattern here. There is a general misunderstanding of TDD, which is exactly what the original thread is about. It sounds like you tried writing a bunch of tests before coding, but you didn't do TDD. TDD is a great tool for helping you to find the holes in those assumptions. But like any technique, you have to do it correctly to see the benefit. See my post above about "TDD in a nutshell". It's not something you can just sit down and do right away. You have to learn how. It takes a bit of time to get good at it. Stan Silvert http://www.jsfunit.org
  24. I think I'm seeing a pattern here. There is a general misunderstanding of TDD, which is exactly what the original thread is about. It sounds like you tried writing a bunch of tests before coding, but you didn't do TDD.

    TDD is a great tool for helping you to find the holes in those assumptions. But like any technique, you have to do it correctly to see the benefit.

    See my post above about "TDD in a nutshell".
    I keep seeing the pattern that any criticism or suspicion of TDD gets a "you're misunderstanding it" or "you're doing it incorrectly" response. From your nutshell post, I'm not seeing the step where ALL tests are rerun to make sure refactoring didn't cause any side effects. This is where TDD has problems -- especially with scaling. Any "agileness" in the early stages, where the test suites are small, is nullified as the test suites get bigger and teams wait for incremental changes to be validated by lengthy test suite runs.
  25. I think I'm seeing a pattern here. There is a general misunderstanding of TDD, which is exactly what the original thread is about. It sounds like you tried writing a bunch of tests before coding, but you didn't do TDD.

    TDD is a great tool for helping you to find the holes in those assumptions. But like any technique, you have to do it correctly to see the benefit.

    See my post above about "TDD in a nutshell".


    I keep seeing the pattern that any criticism or suspicion of TDD gets a "you're misunderstanding it" or "you're doing it incorrectly" response.
    The original post was correct. "TDD is an Approach, not a Task". A lot of people have trouble understanding that, and it keeps them from learning this important skill. I didn't give that kind of response until I realized that a lot of posters here really do misunderstand it. Possibly, they tried TDD but didn't really know how to do it correctly. It takes a while to get the hang of it. That's OK. It took me a while to catch on as well. And I was skeptical like anyone else. But like any other skill, you get better at it with practice. Eventually, I saw its value. If you do give TDD a try, don't throw it out just because it didn't come easy at first. It's a different way of doing things and it takes a bit of time to get your head around it. That's why I said that it helps to have someone who can teach it in person.
    From your nutshell post, I'm not seeing the step where ALL tests are rerun to make sure refactoring didn't cause any side effects.
    It's there, but it didn't fit into the nutshell. How often to run the full regression is a judgment call. But generally, you want to run it pretty often.
    This is where TDD has problems -- especially with scaling. Any "agileness" in the early stages, where the test suites are small, is nullified as the test suites get bigger and teams wait for incremental changes to be validated by lengthy test suite runs.
    That's a valid point, but it's one that's easy to deal with. You simply create sub-regressions which apply to the immediate task. Then you run the full regression a bit less often. I've found Maven to be a big help with this. If you break your project into subprojects you can have unit tests that apply only to that subproject. Then, as needed, you can back up to the root and run the full suite. The requirement that your test suite run quickly drives you to modularize your project. So this "problem" can turn around to be a nice benefit. On the team level, there is no waiting. I'm not sure I understand why you need to wait for something. Stan Silvert http://www.jsfunit.org
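    If you are not splitting by Maven module, one way to carve out such a sub-regression is a plain JUnit 4 suite; the test class names here are hypothetical:
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Fast sub-regression for the billing module: run this constantly while working there,
    // and fall back to the full suite (all modules) before committing.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        InvoiceCalculatorTest.class,   // hypothetical test classes belonging to this module
        DiscountPolicyTest.class,
        BillingServiceTest.class
    })
    public class BillingSubRegressionSuite {
        // intentionally empty: the annotations define the suite
    }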
  26. That's a valid point, but it's one that's easy to deal with. You simply create sub-regressions which apply to the immediate task. Then you run the full regression a bit less often.

    I've found Maven to be a big help with this. If you break your project into subprojects you can have unit tests that apply only to that subproject. Then, as needed, you can back up to the root and run the full suite.
    But sub-regressions would not expose side effects to the overall project set so you might get a false sense of warm and fuzzy if just the sub-regression was compliant. So a separate task group could be blindsided by refactorings that haven't been fully vetted by a full regression. This idea seems to be outside the scope of TDD and more of a Feature-Driven Development (FDD) methodology (by creating feature sets, i.e. sub-projects), which I agree makes the iteration cycle much more manageable. I personally like the coarser-grained modularity of FDD-style development better than TDD's. FDD seems to be more client-centric than developer-centric. The TDD "thought leaders" at a few of the places I've been would spew acid if this type of modular style was used. TDD had to be holistic in all forms or nothing.
    On the team level, there is no waiting. I'm not sure I understand why you need to wait for something.

    Stan Silvert
    http://www.jsfunit.org
    Isn't every refactoring considered a transaction within the overall scope of a project with TDD? Developers have to wait for test suite compliance and sync before they can proceed themselves, hence the waiting. Otherwise you'll see nothing but merging nightmares, don't you think? When test suites are small and fast, you're right, the time delta is negligible. However, with an evolutionary and comprehensive test suite, the time delta becomes significant. (Notwithstanding the sub-project idea above.)
  27. That's a valid point, but it's one that's easy to deal with. You simply create sub-regressions which apply to the immediate task. Then you run the full regression a bit less often.

    I've found Maven to be a big help with this. If you break your project into subprojects you can have unit tests that apply only to that subproject. Then, as needed, you can back up to the root and run the full suite.


    But sub-regressions would not expose side effects to the overall project set so you might get a false sense of warm and fuzzy if just the sub-regression was compliant. So a separate task group could be blindsided by refactorings that haven't been fully vetted by a full regression.
    I said that you run the full regression 'a bit' less often. You definitely run the full regression before you commit any code to the repository. So you won't mess up anything in the code base. But you will run the full regression more often than just before a commit. I think you just have to get a feel for the rhythm of it, for knowing when to run the subsuite and when to run the full suite. Truthfully though, you end up writing tests and making changes that are so small and focused, there is a low likelihood of breaking something in another module. But if you do, you will know it soon because you can run the full regression at any time. And you will still run it often.
    This idea seems to be outside the scope of TDD and more of a Feature-Driven Development (FDD) methodology (by creating feature sets, i.e. sub-projects), which I agree makes the iteration cycle much more manageable. I personally like the coarser-grained modularity of FDD-style development better than TDD's. FDD seems to be more client-centric than developer-centric. The TDD "thought leaders" at a few of the places I've been would spew acid if this type of modular style was used. TDD had to be holistic in all forms or nothing.
    No acid spewed here. Doing TDD the "right" way doesn't mean you won't follow other best practices such as modularization. Also, I don't think using FDD keeps you from using TDD at the same time. Even Jeff De Luca, the father of FDD, agrees. http://www.featuredrivendevelopment.com/node/617
    Isn't every refactoring considered a transaction within the overall scope of a project with TDD? Developers have to wait for test suite compliance and sync before they can proceed themselves, hence the waiting. Otherwise you'll see nothing but merging nightmares, don't you think? When test suites are small and fast, you're right, the time delta is negligible. However, with an evolutionary and comprehensive test suite, the time delta becomes significant. (Notwithstanding the sub-project idea above.)
    I'm not sure what you mean by "transaction". If you mean that you have to commit after every refactoring then the answer is no. You are refactoring constantly. You do need to run the full suite before you commit. That shouldn't take more than 10 to 20 minutes. If it does, then you have some problems with your test suite. Kent Beck talks about a test-driven system that contains 250,000 lines of functional code and 250,000 lines of test code. Over 4,000 tests execute in under 20 minutes. Stan Silvert http://www.jsfunit.org
  28. I'd like to add a few thoughts about TDD in general. As far as I understand, all the posters feel that unit testing is a more or less helpful technique. But many of us think that TDD isn't helpful, because it just seems unnatural to waste time on tests that could change over time. For me, the main value of TDD is that it is a way to know when to stop writing tests. When I wrote tests after the code was completed, I didn't know when to stop. I just had knowledge of how the code actually works and felt that I needed to write tests that would confirm my assumptions. But how many should I write? Will two or three different sets of data be sufficient? Now that I use TDD, I always know how many tests are actually enough. Because I don't know what the implementation will look like, I can just make sure that, from the client's point of view (the code's client), I have covered all the cases and those cases are implemented. At first I didn't understand the common examples where you write a test for a method, expect some output, and then just return the expected value from the method. I thought it was just silly, because I'll never implement methods like that. But then I realized that by writing such small implementations I can be sure that my tests are just enough for the method and I don't write anything redundant. So for me the TDD pros are:
    • It helps me write the simplest implementation that is still sufficient
    • It drives the design of the code, so I never make it overcomplicated
    • It keeps me from writing too many tests
    • It gives me the psychological satisfaction of seeing, all the time, that I have actually implemented some small piece of functionality (read: green bar)
    I'm not a seasoned developer and not a guru in methodologies, but I have had the opportunity to compare different approaches, and TDD is a natural way of working as soon as you get used to it. It took me a long time before I came to this feeling anyway.
  29. I'm not arguing against unit testing. What I am arguing against is inflating the importance of unit testing and ignoring the costs. Unit tests must be written and maintained.

    If done correctly, there is no cost to unit testing and TDD. TDD saves you time and money. The cost is in NOT writing and maintaining unit tests.

    No, there is a cost. The code does not write itself. What you are saying is that TDD is "net positive" in terms of cost. So it either replaces activities that would have been done otherwise and/or it reduces the cost of performing other activities. What activities does it eliminate? What activities does it reduce? By how much? What conditions are required in order to achieve that reduction? These are extremely important questions. Without answers to them claims about savings are meaningless.
    They should hit all of a unit's requirements, but trying to use them to ensure there will be no failures is hopeless.

    The other thing that I am completely appalled by is the claim by apparent TDD experts that having complete coverage means there is no risk in changing code.

    I've never heard anyone make either of these claims. You are correct that they are both false. TDD mitigates risk. It doesn't claim to eliminate it. I suspect a straw man lurks...

    I've read plenty of them. Usually they come from people who are repeating what they read somewhere without repeating the caveats, or taking it out of context. For example, your statement that there is no cost in writing unit tests. I believe what you mean is that it is not a cost driver and ultimately saves you money. But it wouldn't be uncommon to see a statement like that repeated or even quoted as meaning that writing unit tests is free.
    I guess my problem with TDD is that it seems to lose focus on the true goal of development: producing high-quality maintainable systems.

    Yep, there he is. Where did you get that idea? TDD is laser-focused on high quality and maintenance. Read the literature and try TDD for yourself. I think you will like it.

    Stan Silvert
    http://www.jsfunit.org
    The problem I've always had was that TDD and unit testing never seemed like anything special. When I first learned about it, I couldn't imagine someone would develop software that way. Then I realized it was just jargonization of good programming practices. I don't think it matters if you write a little application code and then write a few tests to make sure that code works (the way I work), or you write a few tests and then write some application code for them (TDD the way I understand it). The important thing is that you're coding and testing in a quick enough cycle for there to be an effective feedback loop.
  30. I'm not arguing against unit testing. What I am arguing against is inflating the importance of unit testing and ignoring the costs. Unit tests must be written and maintained.

    If done correctly, there is no cost to unit testing and TDD. TDD saves you time and money. The cost is in NOT writing and maintaining unit tests.



    No, there is a cost. The code does not write itself. What you are saying is that TDD is "net positive" in terms of cost.
    Right. That's what I'm saying. A "reduction in cost" is not a cost, it's called a savings.
    So it either replaces activities that would have been done otherwise and/or it reduces the cost of performing other activities.

    What activities does it eliminate?
    What activities does it reduce? By how much? What conditions are required in order to achieve that reduction?

    These are extremely important questions. Without answers to them claims about savings are meaningless.
    Yes, they are important questions. There have been many studies that quantify my statements. You can quibble with any or all of the studies, but my interpretation is that they definitely show a cost savings through increased productivity and increased quality when using TDD. http://www.computer.org/portal/cms_docs_software/software/homepage/2007/s3024.pdf
    The problem I've always had was that TDD and unit testing never seemed like anything special. When I first learned about it, I couldn't imagine someone would develop software that way. Then I realized it was just jargonization of good programming practices. I don't think it matters if you write a little application code and then write a few tests to make sure that code works (the way I work), or you write a few tests and then write some application code for them (TDD the way I understand it). The important thing is that you're coding and testing in a quick enough cycle for there to be an effective feedback loop.
    It's certainly not jargonization of an old technique. 20 years ago, nobody wrote software this way. TDD is a specific way to design code. The studies suggest that it does matter if you test first or test last. There are lots of reasons for this presented in TDD literature, but I'll leave that to your own research. Stan Silvert http://www.jsfunit.org
  31. Right. That's what I'm saying. A "reduction in cost" is not a cost, it's called a savings.
    I hope you have a translator for when you go talk to financial people.
    Yes, they are important questions. There have been many studies that quantify my statements. You can quibble with any or all of the studies, but my interpretation is that they definitely show a cost savings through increased productivity and increased quality when using TDD.
    I haven't read everything, but most of what I've read seems to point towards the actual benefit coming from tightening the feedback loop, with TDD being a means of doing so, as opposed to TDD itself. That's important because it means that organizations that already have tight feedback loops won't experience a significant benefit.
    It's certainly not jargonization of an old technique. 20 years ago, nobody wrote software this way. TDD is a specific way to design code.
    It must be that all the jargon and dogma that seem to surround almost everything I read about TDD have completely blinded me to the innovative techniques involved.
    The studies suggest that it does matter if you test first or test last. There are lots of reasons for this presented in TDD literature, but I'll leave that to your own research.
    So for my next development project I should allocate 6 months to writing tests prior to writing any application code? No, that would be silly. It would be even sillier than developing code for 6 months without any testing. It's all about cycle times. If you spend 15 minutes writing application code and then 15 minutes writing code to test it, are you doing TDD? I suppose not, because the application code came first. That would violate the dogma. But I personally find it to be a more natural process. And I don't think it makes much of a difference. Once you start the cycle, the first/second ordering doesn't matter -- it's a cycle. That's the whole point. You're arguing about which side of the tire should be facing down while ignoring its size.
  32. I hope you have a translator for when you go talk to financial people.
    I try to avoid those people. ;-)
    So for my next development project I should allocate 6 months to writing tests prior to writing any application code?

    No, that would be silly.
    OK, it appears that we have a misunderstanding about what TDD is. In a nutshell, here is what you do:
    1) Write a simple test that fails.
    2) Write application code that makes the test pass.
    3) Refactor.
    4) Goto step 1.
    You don't spend 6 months writing tests before writing application code. In most cases, you don't even spend 6 minutes. Give it a try. You'll like it. It's best to find someone to teach you, but you can probably master the technique from the literature as well. Stan Silvert http://www.jsfunit.org
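    To make the nutshell concrete, here is one full turn of that loop narrated in comments (JUnit 4, hypothetical VAT example):
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class VatTest {

        // Step 1 (first turn): this test was written first and failed -- Vat didn't exist yet.
        @Test
        public void addsTwentyPercentVat() {
            assertEquals(120, new Vat().withVat(100));
        }

        // Step 1 (second turn): a second failing test forced the hard-coded
        // "return 120;" from the first pass to become real arithmetic.
        @Test
        public void worksForOtherAmounts() {
            assertEquals(240, new Vat().withVat(200));
        }

        // Steps 2 and 3: the simplest code that passes both tests, then refactored
        // just enough to give the rate a name.
        static class Vat {
            private static final int VAT_PERCENT = 20;

            int withVat(int netAmountInCents) {
                return netAmountInCents + netAmountInCents * VAT_PERCENT / 100;
            }
        }
    }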
  33. Re: TDD is an Approach, not a Task

    So for my next development project I should allocate 6 months to writing tests prior to writing any application code?

    No, that would be silly.

    OK, it appears that we have a misunderstanding about what TDD is.
    I don't think you understood his point. This was just a hypothetical example that was intended to be absurd. His point came later.
  34. Re: TDD is an Approach, not a Task

    So for my next development project I should allocate 6 months to writing tests prior to writing any application code?

    No, that would be silly.

    OK, it appears that we have a misunderstanding about what TDD is.


    I don't think you understood his point. This was just a hypothetical example that was intended to be absurd. His point came later.
    You're right. I think I did miss his point. As I said before, studies strongly suggest that it does matter if you test first or test last. The act of writing the test forces you to think about the consumer of your code. It forces you to think about the design of the API before you write it. From there you get a better design and all the benefits that come with it. If you test last then you are just looking for bugs. You don't tend to question the design. In some ways, TDD is a "design first" technique that does design in tiny manageable chunks. Stan Silvert http://www.jsfunit.org
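    A sketch of what "the test shapes the API" can mean in practice (hypothetical names throughout): writing the call site first makes a long Boolean-parameter signature painful to type, which nudges the design toward a small criteria object:
    import java.util.Collections;
    import java.util.List;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class CustomerSearchApiTest {

        // The criteria object the test asked for; its fields will be read by CustomerSearch as it grows.
        static class Criteria {
            private boolean activeOnly;
            private String city;
            Criteria activeOnly() { this.activeOnly = true; return this; }
            Criteria inCity(String city) { this.city = city; return this; }
        }

        static class CustomerSearch {
            List<String> find(Criteria criteria) {
                return Collections.emptyList();   // simplest thing for now; grows test by test
            }
        }

        @Test
        public void theCallSiteReadsLikeTheRequirement() {
            // This line was written FIRST; the production API above was back-filled to match it.
            List<String> result =
                    new CustomerSearch().find(new Criteria().activeOnly().inCity("Turku"));
            assertTrue(result.isEmpty());
        }
    }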
  35. As I said before, studies strongly suggest that it does matter if you test first or test last.
    Ahhh, but those are two extremes. I've skimmed through several very dry papers on TDD I found on ACM (as opposed to the usual exuberant testing-revival stuff I see). Based on what I've read, the advantage of "test first" as opposed to "test second" would probably be that procrastination can kick in and turn "test second" into "test last." The consistent point of my reading was that TDD decreased productivity and increased quality when compared to more traditional methods. Now, I have to seriously question productivity measures that ignore quality, but that's an aside. There are situations where quality isn't that important. Anyway, I've tried TDD and I really didn't like it. Most of the principles make sense and I can't ever remember not applying them. The precise rules seem too dogmatic to me, so I ignore them.
  36. Re: TDD is an Approach, not a Task

    You're right. I think I did miss his point.

    As I said before, studies strongly suggest that it does matter if you test first or test last. The act of writing the test forces you to think about the consumer of your code. It forces you to think about the design of the API before you write it.
    I always think about the consumer of my code before I write it. Why do I need to write a test to do that? I usually write the consumer parts of the code first. The problem I run into is that I often decide to break what I thought was a unit into more pieces before I am done. Why should I write a bunch of tests just so I can change or toss them? Can you explain how writing a test while I'm still making incorrect assumptions about how the unit will be structured is helping me? My experience is that I actually lock myself into my assumptions even more.
    From there you get a better design and all the benefits that come with it.

    If you test last then you are just looking for bugs. You don't tend to question the design.
    No one has ever explained to me why testing is the best place to question the design. I question the design at least a dozen times before writing a line of code. Well, actually I often hack out a semi-working prototype and then design before rewriting from scratch.
    In some ways, TDD is a "design first" technique that does design in tiny manageable chunks.
    Did you not design first before you did TDD? This is one of the most disturbing aspects of TDD. There's an element of religious fanaticism that insists that TDD is the best way and that people who don't buy it wholesale are just lost souls writing crap code in the wilderness. No one seems to be able to explain why TDD is the ultimate methodology, we are just asked to have faith and if we don't buy into it, we just never did it right.
  37. Re: TDD is an Approach, not a Task

    You're right. I think I did miss his point.

    As I said before, studies strongly suggest that it does matter if you test first or test last. The act of writing the test forces you to think about the consumer of your code. It forces you to think about the design of the API before you write it.
    I always think about the consumer of my code before I write it. Why do I need to write a test to do that?
    Because code knows best. It's one thing to think you are right about the consumer, it's quite another to actually test your assumptions.
    I usually write the consumer parts of the code first.
    How do you design that consumer? Once you have the consumer running, are you able to use it to test the other parts to help in the design of the module that it consumes? In a methodical, automated way?
    The problem I run into is that I often decide to break what I thought was a unit into more pieces before I am done. Why should I write a bunch of tests just so I can change or toss them?
    That's where the refactor step comes in. You make changes in very small increments so that you evolve your code into a better design instead of throwing stuff away.
    Can you explain how writing a test while I'm still making incorrect assumptions about how the unit will be structured is helping me?
    I hope so. The tests help you discover the incorrect assumptions very quickly. Again, the code knows. The code is irrefutable evidence when you are wrong.
    My experience is that I actually lock myself into my assumptions even more.
    TDD helps with that a lot. But you need to get good at refactoring as well. The skills are complementary. I recommend Martin Fowler's original book on the subject. Refactoring is another subject that is often misunderstood because people just think they already know how to do it.
    No one has ever explained to me why testing is the best place to question the design. I question the design at least a dozen times before writing a line of code. Well, actually I often hack out a semi-working prototype and then design before rewriting from scratch.
    Ouch! Refactoring is much less painful. Again, the answer is that code knows best. So write some code and test your assumptions. Then refactor as needed. Then you won't be throwing stuff away and rewriting. Your code will evolve into the solution it should be.
    In some ways, TDD is a "design first" technique that does design in tiny manageable chunks.
    Did you not design first before you did TDD?
    I didn't do it in small chunks and I did it in a less disciplined way. I'm still not as disciplined as I would like to be.
    This is one of the most disturbing aspects of TDD. There's an element of religious fanaticism that insists that TDD is the best way and that people who don't buy it wholesale are just lost souls writing crap code in the wilderness. No one seems to be able to explain why TDD is the ultimate methodology, we are just asked to have faith and if we don't buy into it, we just never did it right.
    Well, my religion has nothing to do with programming. I don't think TDD is the ultimate anything. It's just a good practice, that's all. Stan Silvert http://www.jsfunit.org
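    The evolve-in-small-increments idea Stan describes (write a failing test, make it pass with the simplest thing possible, then refactor while the tests stay green) is easiest to see in code. Below is a deliberately tiny, hypothetical JUnit 4 sketch of one such increment; the Greeting class and its method are invented for illustration only.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class GreetingTest {

            // Step 1 (red): written before Greeting exists, so it fails at first.
            @Test
            public void greetsAda() {
                assertEquals("Hello, Ada!", new Greeting().forName("Ada"));
            }

            // Step 3 (red again): a second test forces the hard-coded version to generalize.
            @Test
            public void greetsGrace() {
                assertEquals("Hello, Grace!", new Greeting().forName("Grace"));
            }
        }

        // Step 2 (green) could be as dumb as: return "Hello, Ada!";
        // Step 4 (green, then refactor): the general version below keeps both tests
        // passing, and any later cleanup happens with the tests acting as the safety net.
        class Greeting {
            String forName(String name) {
                return "Hello, " + name + "!";
            }
        }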
  38. Re: TDD is an Approach, not a Task

    I always think about the consumer of my code before I write it. Why do I need to write a test to do that?

    Because code knows best. It's one thing to think you are right about the consumer, it's quite another to actually test your assumptions.
    How is writing a test better than writing the consumer? How does writing tests based on your assumptions test them? How does a unit test check the assumptions about the consumer any better than the consumer itself does?
    I usually write the consumer parts of the code first.

    How do you design that consumer?
    Depends on the consumer. I'm not sure what you are asking.
    Once you have the consumer running, are you able to use it to test the other parts to help in the design of the module that it consumes?
    Of course. Why wouldn't I be able to?
    In a methodical, automated way?
    Automated design? Are you changing the subject?
    The problem I run into is that I often decide to break what I thought was a unit into more pieces before I am done. Why should I write a bunch of tests just so I can change or toss them?

    That's where the refactor step comes in. You make changes in very small increments so that you evolve your code into a better design instead of throwing stuff away.
    I already do that. Why do you think writing a test first is required for this to happen?



    Can you explain how writing a test while I'm still making incorrect assumptions about how the unit will be structured is helping me?

    I hope so. The tests help you discover the incorrect assumptions very quickly.
    How? Give me a concrete example. It can be anecdotal.
    Again, the code knows. The code is irrefutable evidence when you are wrong.
    Sure but if the tests are built around a design, they cannot tell you you have a problem with your design. I think you may be talking about trivial implementation issues inside a unit. I'm talking about the design at a high-level. Unit tests don't expose high-level design problems.


    My experience is that I actually lock myself into my assumptions even more.

    TDD helps with that a lot.
    This kind of answer doesn't do anything for me.
    No one has ever explained to me why testing is the best place to question the design. I question the design at least a dozen times before writing a line of code. Well, actually I often hack out a semi-working prototype and then design before rewriting from scratch.

    Ouch! Refactoring is much less painful. Again, the answer is that code knows best. So write some code and test your assumptions. Then refactor as needed. Then you won't be throwing stuff away and rewriting.
    Creating a prototype is not painful. It's actually a lot of fun. There's nothing better to validate an idea and correct fundamental design flaws than actually producing a working application. Testing units doesn't even come close. If you don't want to do prototyping then you are just missing out on a powerful technique. Give it a try. You'll like it. It's best to find someone to teach you, but you can probably master the technique from the literature as well. If you do give prototyping a try, don't throw it out just because it didn't come easy at first. It's a different way of doing things and it takes a bit of time to get your head around it. That's why I said that it helps to have someone who can teach it in person.
    Well, my religion has nothing to do with programming.

    I don't think TDD is the ultimate anything. It's just a good practice, that's all.
    Then why do you avoid addressing arguments questioning it?
  39. Re: TDD is an Approach, not a Task

    Well, my religion has nothing to do with programming.

    I don't think TDD is the ultimate anything. It's just a good practice, that's all.


    Then why do you avoid addressing arguments questioning it?
    Avoid? Was that meant to be a joke? I've addressed this topic ad nauseam. If anyone has a new question or comment on TDD I'll try to address that too. Stan Silvert http://www.jsfunit.org
  40. Re: TDD is an Approach, not a Task

    Well, my religion has nothing to do with programming.

    I don't think TDD is the ultimate anything. It's just a good practice, that's all.


    Then why do you avoid addressing arguments questioning it?


    Avoid? Was that meant to be a joke? I've addressed this topic ad nauseam. If anyone has a new question or comment on TDD I'll try to address that too.
    A lot of your responses have boiled down to, you misunderstand or you must have done it wrong. This is not addressing the questions. Assuming that someone who disagrees with you doesn't know what you know is one of the most naive forms of flawed reasoning. For example, I've repeatedly proposed that a test written before code is written will incorporate all the assumptions that the developer has before writing the code. I haven't seen any meaningful response to this.
  41. Creating a prototype is not painful. It's actually a lot of fun. There's nothing better to validate an idea and correct fundamental design flaws than actually producing a working application. Testing units doesn't even come close.
    +1 The neat thing about prototypes is that customers/users can give you feedback on them. Like James said, a developer's assumptions are always going to be buried in his code. Prototyping key functionality allows those assumptions to be challenged all the way back to requirements and even the business case. In other words, prototypes get to the root of flawed assumptions as opposed to addressing the symptoms.
  42. Creating a prototype is not painful. It's actually a lot of fun. There's nothing better to validate an idea and correct fundamental design flaws than actually producing a working application. Testing units doesn't even come close.


    +1

    The neat thing about prototypes is that customers/users can give you feedback on them.

    Like James said, a developer's assumptions are always going to be buried in his code. Prototyping key functionality allows those assumptions to be challenged all the way back to requirements and even the business case. In other words, prototypes get to the root of flawed assumptions as opposed to addressing the symptoms.
    Of course (and I think we've discussed this in the past) the downside of prototyping is that people who see rewriting the program from scratch as a waste of time might actually have a say in whether you do so. For this reason I always make it clear that the prototype is not production quality and purposely leave it full of holes. But even still, sometimes people will want you to just 'finish' the prototype. Getting feedback from the users is a huge benefit but I like doing it from a design perspective. By the time I get the thing working, I've realized a number of problems in my approach. Instead of living with these flaws or trying to rework the design halfway through development, I start off with a much better design when I go to really implement it. The second version of an application is usually much better than the first, but that second application is often not as good as it could be because it is held back by baggage from the first. My personal experience is that this is a huge time saver in the long run. High level design issues are much more difficult and costly than coding errors. Most of the time we end up having to live with them because they are so entrenched.
  43. Creating a prototype is not painful. It's actually a lot of fun. There's nothing better to validate an idea and correct fundamental design flaws than actually producing a working application. Testing units doesn't even come close.
    +1

    The neat thing about prototypes is that customers/users can give you feedback on them.
    I agree with that too. Building a prototype is fun and useful. What is painful is throwing away the prototype and starting over. What is even more painful is when someone forces/goads/convinces you to keep a prototype that you threw together quickly. Then you are stuck with an unstable code base to work from. As an alternative, you can build your prototype with TDD and allow it to evolve so that you never have to start over. Some methodologies have this as a central concept. You are in a constant state of prototyping where you are able to get feedback at any time and make changes at any time. TDD is an approach to coding and design. It is compatible with most other best practices such as prototyping. Stan Silvert http://www.jsfunit.org
  44. Re: TDD is an Approach, not a Task

    As an alternative, you can build your prototype with TDD and allow it to evolve so that you never have to start over.
    This is the key point I disagree with. How does TDD prevent you from implementing high-level design flaws? I mean the kind where all your producers should have been consumers or an assumption about the cardinality between entities in the system was incorrect? How does validating that things work the way you expect them to show you that your expectations are wrong? Are you saying that software developed with TDD will never contain major design flaws? Has there never been an application created with TDD that had a design overhaul?
  45. Re: TDD is an Approach, not a Task

    As an alternative, you can build your prototype with TDD and allow it to evolve so that you never have to start over.


    This is the key point I disagree with. How does TDD prevent you from implementing high-level design flaws? I mean the kind where all your producers should have been consumers or an assumption about the cardinality between entities in the system was incorrect? How does validating that things work the way you expect them to show you that your expectations are wrong?

    Are you saying that software developed with TDD will never contain major design flaws? Has there never been an application created with TDD that had a design overhaul?
    TDD doesn't prevent you from implementing a high-level design flaw. I think it makes it less likely, but it doesn't prevent that. What it does do, is make sure that you have a running system at all times. If you see that there is some major rework to do then you have a methodical way of getting to where you want to be without breaking the whole system. You don't have to be afraid of major rework because you know that you can methodically evolve the system without breaking it. Because you have a nice test suite, you can go back and revisit your assumptions. You can see which ones were right and which ones were wrong. And because all those assumptions are written in code, you don't have to go back and try to remember why you did what you did. Stan Silvert http://www.jsfunit.org
  46. Re: TDD is an Approach, not a Task

    TDD doesn't prevent you from implementing a high-level design flaw. I think it makes it less likely, but it doesn't prevent that.
    Whew! I'm glad we cleared that up. I could quibble over that last sentence but let's call it complete agreement.
    What it does do, is make sure that you have a running system at all times. If you see that there is some major rework to do then you have a methodical way of getting to where you want to be without breaking the whole system.
    Agreed. Agreed.
    You don't have to be afraid of major rework because you know that you can methodically evolve the system without breaking it.

    Because you have a nice test suite, you can go back and revisit your assumptions. You can see which ones were right and which ones were wrong. And because all those assumptions are written in code, you don't have to go back and try to remember why you did what you did.
    Completely on board and I think that was a very eloquent and concise way to put it. There is one thing you don't address here, and it's where I am a little hung up; amazingly, it's actually on topic with the original article. What does writing tests first buy you? I thought you were suggesting that they help you with your design. To me writing a test just so it can fail doesn't seem like a huge achievement. To me the biggest benefit of doing it first is that if you are disciplined about it, you will not be tempted to not write the test when you are done. It makes sure you don't put it off too long. But I think there is also a downside. I might write a test. Write a method. Test the method and then tomorrow realize that method isn't needed and craps up the design. I might even decide that whole class is unnecessary fluff. So if I can be likewise disciplined about writing the unit tests once the structure of the unit is fairly stable, what's the downside? Realize I am not talking about writing them at the end of the development cycle. I mean before the unit is completed.
  47. What does writing tests first buy you? I thought you were suggesting that they help you with your design. To me writing a test just so it can fail doesn't seem like a huge achievement. To me the biggest benefit of doing it first is that if you are disciplined about it, you will not be tempted to not write the test when you are done. It makes sure you don't put it off too long. But I think there is also a downside. I might write a test. Write a method. Test the method and then tomorrow realize that method isn't needed and craps up the design. I might even decide that whole class is unnecessary fluff. So if I can be likewise disciplined about writing the unit tests once the structure of the unit is fairly stable, what's the downside? Realize I am not talking about writing them at the end of the development cycle. I mean before the unit is completed.
    That's the million dollar question. I await the answer with bated breath.
  48. What does writing tests first buy you?


    That's the million dollar question.

    I await the answer with bated breath.
    I'll throw out these.
    1. I think it leads to cleaner APIs because you are starting to use the class right away and with no implementation you won't bleed the implementation through the API.
    2. I think it helps you produce better factored code because to get at the stuff you want to test, you tend to have to isolate the logic from other elements. This is why an approach like dependency injection also helps make classes easier to test. So I think you test more thoroughly if you write up front. I've seen lots of people try to write tests after the fact and then they say "Well this type of code really isn't testable".
    3. Once you get used to writing negative tests, it gets you thinking about how an API should respond or report error conditions (this is maybe an extension of 1). To some degree you can use a test to review how thorough someone's thinking or understanding of the solution is. If you don't see a test case for it, it probably doesn't handle the case gracefully. At a minimum it won't guarantee that it will always handle it.
    4. Tests written up front are usually less tied to the implementation. This leads to less test breakage due to minor refactoring. Obviously it's no fun culling or fixing tests that break due to changing assumptions. If a test asserts a basic truth about the class via a simple API, usually that sort of test doesn't need to change as much.
    5. Defining the tests up front allows you to know when you are done. If you capture the basic requirements as tests. Throw in negative edge cases. Just code till they all pass and stop. This is probably more useful if the class has lots of functionality. You also won't be tempted to check in code that is 'done, but I haven't written tests yet'.
    6. You'll code faster with TDD because you can make sure a particular implementation makes a particular test pass as soon as it compiles. Just methodically flesh out the implementation and run the tests. Without unit tests, most developers test end to end, and this is slower. I mean you still have to test end to end to verify the code, but why not start that when the basic unit is working, instead of debugging minor stuff in an end-to-end test.
    Here are a couple more thoughts on testing. I prefer lots of short simple test cases as opposed to one or a few uber test cases that try to assert everything at once (a short sketch follows below). This is mostly a practical suggestion, and here's my line of thinking.
    1. If a test is reduced to just a few lines, the odds of the test code having a bug are lower. I've seen lots of people struggle with back and forth bugs, where sometimes it's a test class bug, sometimes it's a target class bug. A hundred lines of any code is likely to take several passes to get correct. 3 - 5 lines of code, much less so. If you can keep it short enough that it is intuitively correct, that is better.
    2. Since a framework like JUnit fails a test case on the first failed assertion, an uber test tells you less. You usually have to inspect it to figure out what is wrong, and there could be more wrong and you haven't gotten to it yet. If you've broken out your tests into small little assumptions about the code, you can usually tell from the test name what type of bug was introduced.
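    As a sketch of what "lots of short simple test cases", including a negative test, can look like in practice, here is a small JUnit 4 example. The Discount class is a made-up stand-in, included only so the sketch is self-contained; the point is that each test asserts one small assumption, so the name of a failing test tells you what broke.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // A hypothetical class under test, included only so the sketch compiles.
        class Discount {
            private final int percent;
            Discount(int percent) {
                if (percent < 0 || percent > 100) {
                    throw new IllegalArgumentException("percent must be 0..100");
                }
                this.percent = percent;
            }
            long apply(long cents) {
                return cents - (cents * percent / 100);
            }
        }

        // Each test asserts one small assumption about the API.
        public class DiscountTest {

            @Test
            public void zeroPercentLeavesPriceUnchanged() {
                assertEquals(1000, new Discount(0).apply(1000));
            }

            @Test
            public void tenPercentOffAThousandCentsIsNineHundred() {
                assertEquals(900, new Discount(10).apply(1000));
            }

            // A negative test: the API must report bad input, not silently accept it.
            @Test(expected = IllegalArgumentException.class)
            public void percentOverOneHundredIsRejected() {
                new Discount(101);
            }
        }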
  49. I completely agree with you. TDD is a way not only to make a better design for production code but also to write better tests.
  50. 1. I think it leads to cleaner APIs because you are starting to use the class right away and with no implementation you won't bleed the implementation through the API.
    But you will bleed your assumptions about what the implementation will be. Are you saying all testing should be black-box testing?
    2. I think it helps you produce better factored code because to get at the stuff you want to test, you tend to have to isolate the logic from other elements.
    Better factored or simply more factored? I could see the need to test breaking the logical coherency of the code. I could also see errors being pushed out of relatively clear code paths and into opaque assemblages of objects.
    4. Tests written up front are usually less tied to the implementation. This leads to less test breakage due to minor refactoring. Obviously it's no fun culling or fixing tests that break due to changing assumptions. If a test asserts a basic truth about the class via a simple API, usually that sort of test doesn't need to change as much.
    Again - blackbox versus whitebox. I think both are needed.
    5. Defining the tests up front allows you to know when you are done. If you capture the basic requirements as tests.
    You also won't be tempted to check in code that is 'done, but I haven't written tests yet'
    I think it would be reasonable to assert that you aren't done if you haven't written the tests yet.
    Throw in negative edge cases. Just code till they all pass and stop. This is probably more useful if the class has lots of functionality.
    Sometimes implementation drives what the edge cases are.
    6. You'll code faster with TDD because you can make sure a particular implementation makes a particular test pass as soon as it compiles.
    No, you'll know as soon as all of your tests pass. Running the tests may be an automatic part of your build process, but they are not part of the compiler. Also, most of the studies I've read indicate that people code slower with TDD.
    Without unit tests, most developers test end to end, and this is slower. I mean you still have to test end to end to verify the code, but why not start that when the basic unit is working, instead of debugging minor stuff in an end to end.
    I agree with that, but this refers to the benefits of doing unit testing concurrently with development of application logic, not the benefit of doing tests first.
    My thought so far is that TDD helps because many people find writing tests boring. Consequently, writing them first (1) ensures they aren't excessively delayed or skipped entirely, and (2) encourages the programmer to work in smaller chunks, because he's chomping at the bit to write application code. It also seems that TDD focuses on details and keeping things small to the point where the integrity of the whole may be sacrificed. Testability is part of good design, but it is not all of good design or even the majority of good design.
  51. My thought so far is that TDD helps because many people find writing tests boring. Consequently, writing them first (1) ensures they aren't excessively delayed or skipped entirely, and (2) encourages the programmer to work in smaller chunks, because he's chomping at the bit to write application code.

    It also seems that TDD focuses on details and keeping things small to the point where the integrity of the whole may be sacrificed. Testability is part of good design, but it is not all of good design or even the majority of good design.
    Something that's been fomenting in my head through this thread is that some of the benefits that Bryant has listed may really be the case for developers without many years of experience. It's a kind of wax on, wax off training program. Writing the test first does force you to write more modular implementations and if you haven't already learned to do that from years of painful maintenance work, I can see where it would help get you on the right path. I think it's a little oversold by some and that an experienced developer can make a determination when to write tests instead of slavishly following a rule.
  52. Something that's been fomenting in my head through this thread is that some of the benefits that Bryant has listed may really be the case for developers without many years of experience. It's a kind of wax on, wax off training program. Writing the test first does force you to write more modular implementations and if you haven't already learned to do that from years of painful maintenance work, I can see where it would help get you on the right path.

    I think it's a little oversold by some and that an experienced developer can make a determination when to write tests instead of slavishly following a rule.
  53. Something that's been fomenting in my head through this thread is that some of the benefits that Bryant has listed may really be the case for developers without many years of experience. It's a kind of wax on, wax off training program. Writing the test first does force you to write more modular implementations and if you haven't already learned to do that from years of painful maintenance work, I can see where it would help get you on the right path.

    I think it's a little oversold by some and that an experienced developer can make a determination when to write tests instead of slavishly following a rule.
    ???
  54. Something that's been fomenting in my head through this thread is that some of the benefits that Bryant has listed may really be the case for developers without many years of experience. It's a kind of wax on, wax off training program. Writing the test first does force you to write more modular implementations and if you haven't already learned to do that from years of painful maintenance work, I can see where it would help get you on the right path.

    I think it's a little oversold by some and that an experienced developer can make a determination when to write tests instead of slavishly following a rule.
    Not surprisingly I disagree. Three of my Engineers are very senior. They have 25+ years of experience. All were new to TDD. To a man they've all admitted that their code is better and they can move faster with the safety net good unit tests provide with regards to refactoring. As far as always writing tests first, there's no question there are times when I'm prototyping with a new unfamiliar library that I just focus on understanding how the library works. There are also times when I'm writing a very trivial class that I'll just fill in the implementation before I write the tests. When working with unfamiliar code, code I didn't write, sometimes it's easier to read the code, then make the changes and test it end to end. This is usually easier if the original code has no tests and I'd have to drastically refactor it to make it testable. I maybe think TDD is a philosophy much more rooted in other Engineering disciplines and goes way beyond unit testing. If I'm asked to build a system that scales, I want a way of measuring scalability, performance and stress testing early on. I'd prefer to automate it and run it every night so we know where we are at. This way if a major change goes in that kills scalability it is detected when it happens. One thing I've learned is that there is a big difference between someone understanding how to technically write a unit test (i.e. I know how to use JUnit) as opposed to having a philosophy that I should guarantee everything works and make sure the system cannot regress, it's always getting better, etc. There's a bit of a leap of faith between writing one simple test and, say, the first 100 tests, when the safety net of change starts to form that lets you refactor so much more quickly.
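    A minimal sketch of the kind of automated, scheduled performance check described above might look like the JUnit 4 test below. SearchService, the query, and the 200 ms budget are all invented for illustration (a trivial stand-in implementation is included only so the sketch compiles); in practice the test would exercise the real system and run from the nightly build.

        import static org.junit.Assert.assertTrue;

        import java.util.Collections;
        import java.util.List;

        import org.junit.Test;

        public class SearchPerformanceTest {

            // Hypothetical system under test; a stand-in so the sketch is self-contained.
            static class SearchService {
                List<String> find(String query) {
                    return Collections.singletonList(query);
                }
            }

            @Test(timeout = 5000)  // hard stop so a regression never hangs the nightly build
            public void typicalQueryStaysUnderBudget() {
                SearchService search = new SearchService();
                long start = System.nanoTime();
                search.find("widgets");
                long elapsedMillis = (System.nanoTime() - start) / 1000000L;
                assertTrue("query took " + elapsedMillis + " ms", elapsedMillis < 200);
            }
        }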
  55. That was odd. For some reason I only saw the quote in your response at first.
    Not surprisingly I disagree. Three of my Engineers are very senior. They have 25+ years of experience. All were new to TDD. To a man they've all admitted that their code is better and they can move faster with the safety net good unit tests provide with regards to refactoring.
    You are conflating two different arguments. I made no suggestion that unit tests don't need to be written. TDD != unit testing. I should have been more precise. I work with some senior developers that don't know what unit testing is. They are actually pretty fearful of anything that was introduced to software development after the 1970s. I don't really mean that. What I really mean are master developers. A developer can remain an apprentice or journeyman for an entire career.
    As far as always writing tests first, there's no question there are times when I'm prototyping with a new unfamiliar library that I just focus on understanding how the library works. There are also times when I'm writing a very trivial class that I'll just fill in the implementation before I write the tests.
    Then you aren't following the dogmatic test-first rule. What's your disagreement with what I am saying?
    When working with unfamiliar code, code I didn't write, sometimes it's easier to read the code, then make the changes and test it end to end. This is usually easier if the original code has no tests and I'd have to drastically refactor it to make it testable.

    I maybe think TDD is a philosophy much more rooted in other Engineering disciplines and goes way beyond unit testing. If I'm asked to build a system that scales, I want a way of measuring scalability, performance and stress testing early on.
    I agree completely with this. Unit tests don't really test scalability and performance and don't do stress testing either. At least not the way I've seen them done. This is why I dislike being overly focused on unit tests.
    I'd prefer to automate it and run it every night so we know where we are at. This way if a major change goes in that kills scalability it is detected when it happens.

    One thing I've learned is that there is a big difference between someone understanding how to technically write a unit test (i.e. I know how to use JUnit) as opposed to having a philosophy that I should guarantee everything works and make sure the system cannot regress, it's always getting better, etc. There's a bit of a leap of faith between writing one simple test and, say, the first 100 tests, when the safety net of change starts to form that lets you refactor so much more quickly.
    I agree but this is not really an argument for test-first development. It's also the reason that I wish there were more tools focused on automating high-level regression and functional testing. I often write scripts that test high-level behaviors automatically. These tend to be much more stable than unit tests and when behavior changes, I can use them to verify the behaviors have changed as desired and nothing else was affected. I've also used a second form of validation with random input to test code. This is where scripting languages can play a big role in improving testing.
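    As a rough sketch of the random-input style of validation mentioned above, the JUnit 4 test below generates many inputs and checks a single invariant that must hold for all of them. The clamp() utility is a made-up stand-in so the example is self-contained; the technique, not the function, is the point.

        import static org.junit.Assert.assertTrue;

        import java.util.Random;

        import org.junit.Test;

        // Random-input validation: instead of hand-picking cases, generate many inputs
        // and check an invariant that must hold for every one of them.
        public class RandomInputTest {

            static int clamp(int value, int lo, int hi) {
                return Math.max(lo, Math.min(hi, value));
            }

            @Test
            public void clampedValueAlwaysFallsInsideTheRange() {
                Random random = new Random(42);  // fixed seed so failures are reproducible
                for (int i = 0; i < 10000; i++) {
                    int lo = random.nextInt(1000) - 500;
                    int hi = lo + random.nextInt(1000);
                    int value = random.nextInt(100000) - 50000;
                    int clamped = clamp(value, lo, hi);
                    assertTrue("out of range: " + clamped, clamped >= lo && clamped <= hi);
                }
            }
        }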
  56. That was odd. For some reason I only saw the quote in your response at first.
    Oh that's just because I'm an idiot and hit the wrong button.
    I agree but this is not really an argument for test-first development.
    Well, I agree, if the end results are the same. If your code is well tested, then it really doesn't matter how you did it. In my experience, developers are reluctant to change code that they think is working, so the end result is it's not equivalent. If you want to write it, test it end to end, then think about unit testing, refactor it to be testable, refactor it to simplify the API, then retest it end to end I guess that could work, but I think you'll probably just end up testing less thoroughly. So maybe I agree, TDD is an approach to give you the benefits of testing, the same benefits you theoretically could get through other approaches.
  57. If you want to write it, test it end to end, then think about unit testing, refactor it to be testable, refactor it to simplify the API, then retest it end to end I guess that could work,
    I'm going to repeat myself but only because it is important. Testing end-to-end and then implementing unit tests is not what we are talking about when we question test-first development. Test-first is writing the test before writing the code. You have even stated that you don't always do this. This is the only thing in question. I am not advocating writing unit-tests at the end of development. I don't think you intend to set up a strawman here but it could be interpreted that way.
  58. But you will bleed your assumptions about what the implementation will be.
    Or maybe you'll just be writing tests that assert the requirements.
    Better factored or simply more factored?
    Better factored.
    I think it would be reasonable to assert that you aren't done if you haven't written the tests yet.
    Actually the number 1 thing I struggle with is getting developers to buy in on what 'done' means. To me it means coded, javadoced, reviewed, unit tested, and tested within the system. So many times I have developers check in code that's not done and they say, 'You can start testing, I just need to go back and write my tests.' This leads to other people troubleshooting their code. Under time pressure this often leads to them not writing the tests at all (usually because they are tracking down their own bugs up until the deadline).
    No, you'll know as soon as all of your tests pass. Running the tests may be an automatic part of your build process, but they are not part of the compiler.
    Well, I guess you have me here. Now I can just tell you really don't have much of an argument if you're going to nitpick about having to right-click and run it as a JUnit test.
    Also, most of the studies I've read indicate that people code slower with TDD.
    Well my guess is there aren't lots of studies, maybe you found a link, and if you did why not include it? I'll read it. Well tested code is much easier to work with and can be refactored safely and quickly. My guess would be that if you timed a simple algorithm without taking into account a real world lifecycle you might find someone who writes tests is slower, but since you didn't provide any evidence to back this up it's sort of pointless for me to try to refute it.
  59. Also, most of the studies I've read indicate that people code slower with TDD.


    Well my guess is there aren't lots of studies, maybe you found a link, and if you did why not include it? I'll read it.

    Well tested code is much easier to work with and can be refactored safely and quickly. My guess would be that if you timed a simple algorithm without taking into account a real world lifecycle you might find someone who writes tests is slower, but since you didn't provide any evidence to back this up it's sort of pointless for me to try to refute it.
    I'd like to see those studies too. Earlier in this thread I posted a summary of several studies on TDD. Here it is again: http://www.computer.org/portal/cms_docs_software/software/homepage/2007/s3024.pdf Of the 15 studies where productivity was measured:
    * 13 showed a productivity gain with TDD
    * 2 showed no effect on productivity
    * None showed a loss in productivity
    Stan Silvert http://www.jsfunit.org
  60. Of the 15 studies where productivity was measured: * 13 showed a productivity gain with TDD * 2 showed no effect on productivity * None showed a loss in productivity
    I read at least two, maybe three, that said there was a net reduction in productivity. Unfortunately I didn't save the links (my bad). Search the ACM library for "test driven development" and they are near the top. One was a study at Microsoft (reduction in productivity), one at IBM (I think reduction), and I believe two studies conducted at universities (one reduction and one no change). At least that's what I remember. Hopefully I'll get some time on Monday to look them up again (home now, no access to ACM). I should note that all of them were very positive on TDD, and at least subjectively believed that it increased code quality. I don't think I encountered a single paper against TDD. The common thread was really that automated unit testing is an integral part of programming activities, not something to be delayed until the end.
  61. Re: TDD is an Approach, not a Task

    If your method has 10 boolean decisions, I would seriously question the design.
  62. Re: TDD is an Approach, not a Task

    I had a wise old English teacher in high school. Whenever a student asked, "How long should this paper be?", she would just reply, "Sufficient to cover the subject." I think the same applies to unit testing.
    Tomi's right. It is a stupid answer. Well, maybe not stupid but it's not a real answer. I guarantee that some of the subjects you were supposed to cover are the subjects of fairly large books. If a teacher asks grade-school students for a paper on the decline of Rome, I don't think they expect something on the scale of Toynbee's works on the subject.
    For several years I've been writing almost all my code this way. When I don't, I end up with crap.
    Really? Have you considered that this may not be true for everyone? Perhaps test-driven development is more helpful to some developers than others. I think unit-tests are great but my biggest problem is that I read too many things coming out of the TDD world suggesting that they are the most important thing or that they guarantee that the program is correct. The reality is that tests cannot properly specify behavior or even verify it for non-trivial programs.
    The fact is, if you are writing code you need some way to ensure that it works. The old way is just to manually test it at the UI level. The problem with that is, the code/compile/test cycle is very long and slow. You have to write a lot of code that may or may not work before you can tell if you've done anything good. So it's much better to write smaller, simpler, targeted, automated tests at a lower level.
    Beyond the bias towards programs with user interfaces, it's truly dangerous to suggest that unit tests are "better" than high-level functional tests. Just because all the parts of a system are correct, it doesn't mean that the system as a whole is correct. The high-level testing burden is just as high with unit tests. The value of unit tests is that they make it more likely that the code will be correct when it comes time to do functional testing. Unit testing is optional, functional testing is not.
  63. Re: TDD is an Approach, not a Task

    I had a wise old English teacher in high school. Whenever a student asked, "How long should this paper be?", she would just reply, "Sufficient to cover the subject." I think the same applies to unit testing.


    Tomi's right. It is a stupid answer. Well, maybe not stupid but it's not a real answer. I guarantee that some of the subjects you were supposed to cover are the subjects of fairly large books. If a teacher asks grade-school students for a paper on the decline of Rome, I don't think they expect something on the scale of Toynbee's works on the subject.

    It is a real answer - and a real good one. It acknowledges that good work is subjective. Just as you can't judge the quality of an English paper by the number of pages, you can't judge the quality of your unit tests by the number of tests. And, BTW, the assignments in my English class were specific enough that you didn't need to write a book to "sufficiently cover the subject". The same is true in the real world of programming. The situation determines how thorough you need to be.
    For several years I've been writing almost all my code this way. When I don't, I end up with crap.


    Really? Have you considered that this may not be true for everyone? Perhaps test-driven development is more helpful to some developers than others.
    Perhaps, but I'll say this: All the best programmers I know write unit tests and do some form of TDD.


    I think unit-tests are great but my biggest problem is that I read too many things coming out of the TDD world suggesting that they are the most important thing or that they guarantee that the program is correct. The reality is that tests cannot properly specify behavior or even verify it for non-trivial programs.

    Yes, it's not a silver bullet - just an important best practice. I'm not sure anything can guarantee correctness, but I would quibble with you a bit about it specifying proper behavior. In pure TDD, proper behavior is actually defined by the tests. So, by definition, the test actually is specifying proper behavior.
    The fact is, if you are writing code you need some way to ensure that it works. The old way is just to manually test it at the UI level. The problem with that is, the code/compile/test cycle is very long and slow. You have to write a lot of code that may or may not work before you can tell if you've done anything good. So it's much better to write smaller, simpler, targeted, automated tests at a lower level.


    Beyond the bias towards programs with user interfaces, it's truly dangerous to suggest that unit tests are "better" than high-level functional tests. Just because all the parts of a system are correct, it doesn't mean that the system as a whole is correct. The high-level testing burden is just as high with unit tests. The value of unit tests is that they make it more likely that the code will be correct when it comes time to do functional testing. Unit testing is optional, functional testing is not. I wasn't suggesting that. Sorry if I wasn't clear. You need both kinds of tests. I was just saying that it's better to write a low level test for low level code. Obviously, you still need to do higher level integration and system tests as well. But you don't want to wait until you have all those pieces put together before you start testing your lower level modules. Stan Silvert http://www.jsfunit.org
  64. Re: TDD is an Approach, not a Task

    I screwed up the blockquotes at the end of my last message. I guess I needed a better test tool. :-) Here is how it should have looked:
    The fact is, if you are writing code you need some way to ensure that it works. The old way is just to manually test it at the UI level. The problem with that is, the code/compile/test cycle is very long and slow. You have to write a lot of code that may or may not work before you can tell if you've done anything good. So it's much better to write smaller, simpler, targeted, automated tests at a lower level.


    Beyond the bias towards programs with user interfaces, it's truly dangerous to suggest that unit tests are "better" than high-level functional tests. Just because all the parts of a system are correct, it doesn't mean that the system as a whole is correct. The high-level testing burden is just as high with unit tests. The value of unit tests is that they make it more likely that the code will be correct when it comes time to do functional testing. Unit testing is optional, functional testing is not.

    I wasn't suggesting that. Sorry if I wasn't clear.

    You need both kinds of tests. I was just saying that it's better to write a low level test for low level code. Obviously, you still need to do higher level integration and system tests as well. But you don't want to wait until you have all those pieces put together before you start testing your lower level modules.

    Stan Silvert
    http://www.jsfunit.org
  65. Re: TDD is an Approach, not a Task

    I wasn't suggesting that. Sorry if I wasn't clear.

    You need both kinds of tests. I was just saying that it's better to write a low level test for low level code. Obviously, you still need to do higher level integration and system tests as well. But you don't want to wait until you have all those pieces put together before you start testing your lower level modules.
    This, I have no argument with. I will note that I write a lot of code that doesn't involve UI and I wish there was more focus on automated regression and functional testing. It's something I end up having to hand-roll.
  66. Re: TDD is an Approach, not a Task

    It acknowledges that good work is subjective. Just as you can't judge the quality of an English paper by the number of pages, you can't judge the quality of your unit tests by the number of tests.

    And, BTW, the assignments in my English class were specific enough that you didn't need to write a book to "sufficiently cover the subject". The same is true in the real world of programming. The situation determines how thorough you need to be.
    But that's exactly the point: it's subjective. And given that the teacher is the one making the subjective judgement, it's reasonable to ask her what her expectations are. Now, in that context, I like your teacher's answer. Don't get me wrong. But it's not really an answer. It amounts to 'figure it out for yourself'. While English teachers and Zen masters are supposed to torture their pupils in such ways, it's generally not how we like to treat our peers, right?
    Perhaps, but I'll say this: All the best programmers I know write unit tests and do some form of TDD.
    It's a very effective strategy and one of the benefits of good design. But good code can be written without TDD. As you pointed out, it's almost impossible with poorly designed modules. Perhaps that's the really the point you are making.
    but I would quibble with you a bit about it specifying proper behavior. In pure TDD, proper behavior is actually defined by the tests. So, by definition, the test actually is specifying proper behavior.
    I'd actually like to discuss this with some rational persons and you seem to be one. My last attempt to discuss this was derailed with arguments about what the definition of 'correct' is, which is (to me) terribly boring and not really useful in any way. Anyway, I maintain that it's impossible to specify the requirements of many useful applications purely as tests and that this follows from the proof that there is no general solution to the Turing Halting Problem. Another way to state this is that there are requirements that can be specified in more 'standard' ways that cannot be stated as tests that can be executable within a finite time frame. Even for many applications that could be tested in a finite time frame, the amount of time required to do so would be astronomical. Secondly, on a more pragmatic level, having all requirements specified only as tests would not make things easier on a developer. It means that the developer must analyze the test cases and infer from that what problems the program should solve. As an example, how would you specify the required behavior of a SAXParser only as tests?
  67. Re: TDD is an Approach, not a Task

    I'd actually like to discuss this with some rational persons and you seem to be one. My last attempt to discuss this was derailed with arguments about what the definition of 'correct' is, which is (to me) terribly boring and not really useful in any way.
    Love to. I also think 'correct' is a loose, subjective term. A lot of programmers have trouble with anything subjective, fuzzy, nondeterministic, etc. Maybe it's just in our nature.
    Anyway, I maintain that it's impossible to specify the requirements of many useful applications purely as tests and that this follows from the proof that there is no general solution to the Turing Halting Problem. Another way to state this is that there are requirements that can be specified in more 'standard' ways that cannot be stated as tests that can be executable within a finite time frame. Even for many applications that could be tested in a finite time frame, the amount of time required to do so would be astronomical.
    I'll buy that. In some sense, saying that the tests define the behavior is a way to punt on the larger issue of "Does my program do everything I would like it to do?" "Real" TDD just says that this can never really be resolved. So you simply define the behavior in terms of the tests. As long as the tests pass, you have at least proven that your program does everything you expected it to do. (And yes, you can give a recursive counterargument that says you must also prove that the test is behaving right. But let's just put that aside as something that would obviously reap diminishing returns.)
    Secondly, on a more pragmatic level, having all requirements specified only as tests would not make things easier on a developer. It means that the developer must analyze the test cases and infer from that what problems the program should solve.
    So you are not talking about the programmer who wrote the tests and the code, right? We're talking about a programmer who sees it for the first time and wants to know what the program does. I don't think TDD intends to be the only source of communication about what the program does. You still have comments and javadoc and user documentation as needed. But that kind of human communication is subject to interpretation. So we say that the tests provide the true definitive word on behavior. In practice, a programmer looks at the human docs first and then goes to the tests to see the definitive record of what the original programmer specified for behavior. The tests tend to be a far better look into the programmer's intent.
    As an example, how would you specify the required behavior of a SAXParser only as tests?
    You would do it the same as any other program. Something like a SAX parser is a rare example of a program that has very good written specs. So the question of what it SHOULD do can be well understood from the docs. But the behavior of a specific parser is still only defined by the tests. Stan Silvert http://www.jsfunit.org
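    As a tiny illustration of "the behavior of a specific parser is defined by its tests", here is a JUnit 4 sketch against the standard javax.xml.parsers SAX API. It pins down exactly one observable behavior, the order of start-element events, and nothing more; anything the suite does not assert remains, from the tests' point of view, unspecified, which is exactly the limitation raised in the reply below.

        import static org.junit.Assert.assertEquals;

        import java.io.StringReader;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;

        import org.junit.Test;
        import org.xml.sax.Attributes;
        import org.xml.sax.InputSource;
        import org.xml.sax.helpers.DefaultHandler;

        // Asserts one concrete, checkable behavior: start-element events arrive
        // in document order. Nothing else about the parser is specified here.
        public class SaxEventOrderTest {

            @Test
            public void startElementEventsArriveInDocumentOrder() throws Exception {
                final List<String> seen = new ArrayList<String>();
                DefaultHandler handler = new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String localName,
                                             String qName, Attributes attributes) {
                        seen.add(qName);
                    }
                };

                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                parser.parse(new InputSource(new StringReader("<order><item/><total/></order>")), handler);

                assertEquals(Arrays.asList("order", "item", "total"), seen);
            }
        }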
  68. Re: TDD is an Approach, not a Task

    I'll buy that. In some sense, saying that the tests define the behavior is a way to punt on the larger issue of "Does my program do everything I would like it to do?"

    "Real" TDD just says that this can never really be resolved. So you simply define the behavior in terms of the tests. As long as the tests pass, you have at least proven that your program does everything you expected it to do.

    (And yes, you can give a recursive counterargument that says you must also prove that the test is behaving right. But let's just put that aside as something that would obviously reap diminishing returns.)
    I don't really disagree with this but saying that the code passes all the tests would then mean the application works the way it should even if the tests are not compatible with the requirements or if the tests don't adequately verify that requirements were met.
    So you are not talking about the programmer who wrote the tests and the code, right? We're talking about a programmer who sees it for the first time and wants to know what the program does.

    I don't think TDD intends to be the only source of communication about what the program does. You still have comments and javadoc and user documentation as needed. But that kind of human communication is subject to interpretation. So we say that the tests provide the true definitive word on behavior.
    In traditional development approaches, the requirements are gathered from the proper sources and documented in a specification. The specification becomes the ultimate arbiter of whether the software is working as required. When you say the tests are the specification, then it suggests to me that the tests are the last word on whether the code works as intended. But if something that was clearly not intended passes through the tests, you wouldn't say the code is correct, would you? Another thing about this is that it's kind of like letting the fox guard the hen-house. The specs are not necessarily defined by the programmer of a unit but tests generally are. There have been many times where I clearly misunderstood a requirement and coded something perfectly to that misunderstanding. My unit tests would have likewise been geared towards that incorrect understanding. I just don't see how the tests can be "the true definitive word on behavior".
    In practice, a programmer looks at the human docs first and then goes to the tests to see the definitive record of what the original programmer specified for behavior. The tests tend to be a far better look into the programmer's intent.
    This I agree with but that's different than the specifications. Often the programmers intent is not what was asked of him or her. Saying that what the programmer intended is necessarily correct seems a little off-kilter to me.
    As an example, how would you specify the required behavior of a SAXParser only as tests?

    You would do it the same as any other program. Something like a SAX parser is a rare example of a program that has very good written specs. So the question of what it SHOULD do can be well understood from the docs.
    But a SAXParser has an effectively infinite set of inputs and outputs. No matter how many tests you create, it's possible to have two implementations that pass the tests yet exhibit different behavior for other untested inputs.

    "Real" TDD just says that this can never really be resolved. So you simply define the behavior in terms of the tests. As long as the tests pass, you have at least proven that your program does everything you expected it to do.

    (And yes, you can give a recursive counterargument that says you must also prove that the test is behaving right. But let's just put that aside as something that would obviously reap diminishing returns.) I don't really disagree with this but saying that the code passes all the test would then mean the application works the way it should even if the tests are not compatible with the requirements or if the tests don't adequately verify that requirements were met.
    So you are not talking about the programmer who wrote the tests and the code, right? We're talking about a programmer who sees it for the first time and wants to know what the program does.

    I don't think TDD intends to be the only source of communication about what the program does. You still have comments and javadoc and user documentation as needed. But that kind of human communication is subject to interpretation. So we say that the tests provide the true definitive word on behavior.
    In traditional development approaches, the requirements are gathered from the proper sources and documented in a specification. The specification becomes the ultimate arbitrator of whether the software is working as required. When you say the tests are the specification, then it suggests to me that the tests are the last word on whether the code works as intended. But if something that was clearly not intended passes through the tests, you wouldn't say the code is correct, would you? Another thing about this is that it's kind of like letting the fox guard the hen-house. The specs are not necessarily defined by the programmer of a unit but tests generally are. There have been many times where I clearly misunderstood a requirement and coded something perfectly to that misunderstanding. My unit tests would have likewise been geared towards that incorrect understanding. I just don't see how the tests can be "the true definitive word on behavior".
    In practice, a programmer looks at the human docs first and then goes to the tests to see the definitive record of what the original programmer specified for behavior. The tests tend to be a far better look into the programmer's intent.
    This I agree with, but that's different than the specifications. Often the programmer's intent is not what was asked of him or her. Saying that what the programmer intended is necessarily correct seems a little off-kilter to me.
    But the behavior of a specific parser is still only defined by the tests.
    That implies any behavior that is not tested is undefined. The untested behaviors are unlimited. This means that the behavior of the SAXParser is effectively undefined. It is possible to specify the exact behavior of a SAXParser. It may not be possible to verify that any implementation meets this specification, but at least it can be specified. And there are some classes of specifications that code can be verified against without testing.
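    For illustration of the "tests are the definitive record of behavior" view being debated here, a minimal JUnit 4 sketch (class and test names are hypothetical) of a test that pins down one small slice of a SAX parser's behavior might look like the following; any input the suite never feeds the parser remains, on that view, unspecified.

        import static org.junit.Assert.assertEquals;

        import java.io.StringReader;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;

        import org.junit.Test;
        import org.xml.sax.Attributes;
        import org.xml.sax.InputSource;
        import org.xml.sax.helpers.DefaultHandler;

        public class SaxBehaviorTest {

            // Collects the element names the parser reports, in document order.
            static class ElementRecorder extends DefaultHandler {
                final List<String> names = new ArrayList<String>();

                public void startElement(String uri, String localName,
                                         String qName, Attributes attributes) {
                    names.add(qName);
                }
            }

            @Test
            public void reportsStartElementsInDocumentOrder() throws Exception {
                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                ElementRecorder recorder = new ElementRecorder();

                parser.parse(new InputSource(new StringReader(
                        "<order><item/><item/></order>")), recorder);

                // Only this slice of behavior is pinned down; inputs the suite
                // never exercises remain, on the "tests are the spec" view, unspecified.
                assertEquals(Arrays.asList("order", "item", "item"), recorder.names);
            }
        }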
  69. One does not imply the other

    It's a very effective strategy and one of the benefits of good design. But good code can be written without TDD. As you pointed out, it's almost impossible with poorly designed modules. Perhaps that's really the point you are making.

    I think however, that by using TDD, you raise the likelihood of actually WRITING good code. With a set of test cases, you have proof that code validly implements specs, and that to me is a major aspect of "good" code.

    If you have a poor design of modules but a decent set of user tests then you can:
    - Go ahead and revamp that design, and know that the inputs still produce the required outputs -- that you haven't broken something.

    If you have bad design and no tests, then you spend the time to generate the tests. That should both give you insight into a better design for refactoring AND give you a tool to validate the new design. If you don't have a test suite, then it's sheer dumb luck or heroics.
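    A minimal sketch (hypothetical names, JUnit 4 assumed) of the kind of behavior-level test described above: it states the required inputs and outputs through a public entry point, so the internals can be revamped while the assertions stay put.

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class DiscountTest {

            // Hypothetical facade; the tests depend only on this public entry
            // point, not on how the calculation is structured internally.
            interface PriceCalculator {
                double priceAfterDiscount(double listPrice, int quantity);
            }

            // Whatever implementation is current; a revamped design can be
            // swapped in here and the assertions below still state the
            // required outputs.
            static class SimplePriceCalculator implements PriceCalculator {
                public double priceAfterDiscount(double listPrice, int quantity) {
                    double total = listPrice * quantity;
                    return quantity >= 10 ? total * 0.9 : total; // 10% bulk discount
                }
            }

            private final PriceCalculator calculator = new SimplePriceCalculator();

            @Test
            public void smallOrdersPayListPrice() {
                assertEquals(50.0, calculator.priceAfterDiscount(10.0, 5), 0.0001);
            }

            @Test
            public void bulkOrdersGetTenPercentOff() {
                assertEquals(90.0, calculator.priceAfterDiscount(10.0, 10), 0.0001);
            }
        }

    Tests written against stable, behavior-level entry points like this are the ones that survive a redesign; tests pinned to the old internals generally do not, which is part of the disagreement in the replies below.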
  70. Re: One does not imply the other

    It's a very effective strategy and one of the benefits of good design. But good code can be written without TDD. As you pointed out, it's almost impossible with poorly designed modules. Perhaps that's really the point you are making.



    I think however, that by using TDD, you raise the likelihood of actually WRITING good code. With a set of test cases, you have proof that code validly implements specs, and that to me is a major aspect of "good" code.

    Untested code is almost guaranteed to be broken and automation of testing is a huge timesaver. I don't think anyone is debating that.
    If you have a poor design of modules but a decent set of user tests then you can:

    -Go ahead and revamp that design, and know that the inputs still produce the required outputs -- that you haven't broken something
    OK, perhaps I am missing something but 'user tests' are very different from 'unit tests'. As I understand it, user tests are not part of the TDD cycle. If you actually meant unit test, then no, I disagree. Unit tests only validate units that have not been changed. If I change a unit's behavior the tests associated with that unit must be modified. If I change the structure of one or more units entirely, the tests may need to be effectively written from scratch for those units.
    If you have bad design and no tests, then you spend the time to generate the tests. That should both give you insight into a better design for refactoring
    When I read statements like this, it makes me think we mean different things by design. Are you talking about the design of the internal implementation of a unit? When I say design, I mean the overall structure of the application consisting largely of the relationships between units. If you mean the latter, then how does a unit test tell you that the overall design is flawed? It seems to me that a unit test can only tell you whether a given behavior is incorrect.
    AND give you a tool to validate the new design.
    I don't understand how a unit test that verifies an old behavior will tell me if the new behavior is correct. Can you explain?
    If you don't have a test suite, then it's sheer dumb luck or heroics.
    Where I work we don't receive any specific requirements. This makes producing a correct implementation much easier. We are then guided to use tools that allow only monolithic approaches to software design so there are no units. This greatly reduces the unit testing burden. There is no real QA department so testing consists of superficial verification of the most obvious use cases. And because the users have no faith in us, they don't bother to point out anything but major issues. It's a really great system.
  71. Testing is for suckers.
  72. Re: TDD is an Approach, not a Task

    Testing is for suckers.
    And that is why you fail, young Jedi.
  73. Do I need to write code to test my test code? If so do I write that first?
  74. Do I need to write code to test my test code? If so do I write that first?
    Yeah baby! Now you get it!
  75. But we need more good practices and guidelines.
    I believe this is the crux of the problem. Practices and guidelines are like pizza toppings. Everyone will argue over which combinations yield the best results, or the better pizza. Some practices are dead simple, such as using the default package, or not. There are technical reasons and social reasons not to do that, and the benefit is quite upfront. Multithreaded programming and the patterns behind it are not. People at times will screw it up getting to the goal of a project, much like TDD. Some of it requires talent, be it natural or developed. Some of it is political, similar to the mythical man-month. Anyone who is in charge of development can guide those who are not in charge. When there is poor social infrastructure and a lack of talent, it seems like the entire thing falls apart over time.
  76. It cracks me up that when Agile came out it was all about not doing redundant work like updating documentation. So now the idea is to always create your test cases first. Great. But the idea is also that you know the software requirements are going to change. Of course you're not going to know how it's going to change until it's written and the customer tells you to re-write it. So then, as a developer, knowing this, why would I spend all my time up front making a really thorough unit test when in a few days or a week I know I'm going to have to rip out everything I've done and do it over? It's hilarious to me that the same crowd that evangelized Agile is now evangelizing TDD. Not that they're evangelizing TDD, but that they're doing it with a straight face. I'd rather write tracer bullets, then once I feel that what we've got is close I'll go ahead and write the test case.
  77. No one is actually doing TDD

    I have also been thinking that "Test Driven Development" is a bit misleading but for a slightly different reason.

    If you compare it to Use Case Driven Development, the use cases actually are the ones that define what you write; in TDD, the tests are not.

    Let's say we really were doing TDD, and say for example that you should write a function that returns the square of what you put in. If you only had the tests, which might be 1->1, 2->4 and 3->9, the most obvious implementation could look something like this:

        public int testCase1(int in) {
            if (in == 1) return 1;
            if (in == 2) return 4;
            if (in == 3) return 9;
            throw new UnknownInputException(in);
        }

    So why don't people write stupid implementations like this? Well, simply because they have the requirements, and that means that they are actually doing RDD (Requirement Driven Development). The test cases you write are only there to assist you. A better term would be TAD (Test Assisted Development), even if the acronym could be a target for jokes and is used in the original article to denote Test After Development.
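    For concreteness, the three driving examples above (1->1, 2->4, 3->9) might be written as a JUnit 4 test roughly like the sketch below; both the lookup-table implementation and the general in * in version pass it, which is exactly the point being made here: the requirement, not the test, forces the general implementation.

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class SquareTest {

            // General implementation; the lookup-table version from the post
            // would satisfy these assertions just as well.
            int square(int in) {
                return in * in;
            }

            @Test
            public void squaresTheThreeExamples() {
                assertEquals(1, square(1));
                assertEquals(4, square(2));
                assertEquals(9, square(3));
            }
        }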

  78. Re: No one is actually doing TDD

    TAD (Test Assisted Development), even if the acronym could be a target for jokes and is used in the original article to denote Test After Development.
    I am slow, what kind of joke could be made? (smile)
  79. I personally think that the problem with TDD is that it is elevated to the status of a process. I treat TDD as a programming technique: useful sometimes and not useful other times. For example, I often use TDD when I am programming fiddly string manipulations because it is helpful to have a few up-front tests to catch off-by-1 errors and the like. I often don't bother with up-front tests for simple code where I have a solid idea of how the code should be written. It is easier for me to just write the code, then create a few tests to ensure proper code coverage and make it less likely future changes will break backwards compatibility.
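    As a sketch of that string-manipulation case (the truncate helper and its behavior are hypothetical, JUnit 4 assumed), the up-front tests would write down the boundary cases where off-by-one errors usually hide:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class TruncateTest {

            // Hypothetical helper: shorten a string to maxLength characters,
            // appending "..." when anything was cut off.
            static String truncate(String s, int maxLength) {
                if (s.length() <= maxLength) {
                    return s;
                }
                return s.substring(0, maxLength) + "...";
            }

            // The boundary cases are written down before the implementation
            // is trusted.
            @Test
            public void exactLengthIsLeftAlone() {
                assertEquals("abcde", truncate("abcde", 5));
            }

            @Test
            public void oneCharacterOverIsCutAndMarked() {
                assertEquals("abcde...", truncate("abcdef", 5));
            }

            @Test
            public void emptyStringStaysEmpty() {
                assertEquals("", truncate("", 5));
            }
        }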
  80. I personally think that the problem with TDD is that it is elevated to the status of a process. I treat TDD as a programming technique: useful sometimes and not useful other times.
    I'm not sure I 100% agree with this. If your business analysts are encouraged to include test cases with their feature specs and you implement those test cases, this becomes part of a process. If you are asked to build a system that scales, so you put together a test environment up front to measure scalability (as opposed to waiting until the end to start tuning), that's a form of TDD. I think it's more of a philosophy about how things are built that pervades all aspects of software. In other forms of engineering this approach is commonplace.

    At the coding level, I'd assert the following:

    1. People who practice TDD develop more reusable code. A unit test is a chance at reusing code (to test it) and therefore leads to better implemented and factored classes. In most cases when I work with someone and they say their code can't be tested, they are right: the code is so poorly factored that a test can't access the interesting stuff, and it is unlikely the code would ever be reusable. I'd say TDD-developed code is far more reusable than, say, code produced from a UML-based design approach.

    2. People who practice TDD write better, more usable APIs. Again, when writing a test you actually are using the classes, so cumbersome multi-step calls or indirect "handleIt" type methods that are awkward to use get fixed early.

    3. TDD combined with Javadocs that explain usage is a more useful form of documentation than a design document (and is therefore more important). Of course a detailed requirements/spec doc is more important than either, but a year down the road, well-commented how-to-use/tutorial-style code combined with tests that assert the assumptions and rules of the system is much more important than stale design docs.

    If you don't believe this, ask yourself which you would rather find on a project you are new to: design docs, which are out of date and give you a close but not exact idea of the system, or a self-testing system where, whenever you make a change and break a test, you go to the test, it explains why this assertion was made, and the code explains the approach, so you can determine if your change is valid or you misunderstood something. Said another way, having design docs in no way proves that the code implements the design, whereas a testing approach leaves a design trail validating what the original author intended the code to do.
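    A toy sketch of the factoring that points 1 and 2 describe (names hypothetical): instead of a "handleIt"-style method that reads a request, computes, and writes a response in one lump, the interesting calculation is pulled into a plain method that a test, or any later caller, can use directly.

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class OrderTotalTest {

            // Before: a hypothetical handleIt(request) that parsed the request,
            // computed the total and wrote the response in one lump could only
            // be exercised by starting the whole application.
            // After: the calculation is a plain, reusable method.
            static int orderTotal(int[] itemPrices) {
                int total = 0;
                for (int price : itemPrices) {
                    total += price;
                }
                return total;
            }

            @Test
            public void totalIsSumOfItemPrices() {
                assertEquals(60, orderTotal(new int[] {10, 20, 30}));
            }
        }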
    Design docs, which are out of date and give you a close but not exact idea of the system, or a self-testing system where, whenever you make a change and break a test, you go to the test, it explains why this assertion was made, and the code explains the approach, so you can determine if your change is valid or you misunderstood something.

    Said another way, having design docs in no way proves that the code implements the design, whereas a testing approach leaves a design trail validating what the original author intended the code to do.
    Unit tests that are out of date are generally more useless than design documents that are out of date. I've worked with both.
  82. Unit tests that are out of date are generally more useless than design documents that are out of date. I've worked with both.
    This is going to sound really obvious, but a good test does two things:
    1. It passes when the system is behaving correctly.
    2. It fails when the system is not behaving correctly.

    Tests that pass when the area they are testing is broken, or fail when that area is working, are not good tests. I agree that as the system changes tests can become obsolete and should be updated or removed, whichever is more relevant, but this should be obvious because they would start failing as the changes occur. If you have out of date tests that are passing, the issue is your test writing, not the process. If you have out of date tests because you stopped running them, well, that's a different issue entirely.
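    To make those two properties concrete, a small sketch (hypothetical leap-year example, JUnit 4 assumed) of a test that satisfies them, next to one that exercises the code but can never fail:

        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;

        import org.junit.Test;

        public class GoodVersusUselessTest {

            // System under test (hypothetical): the Gregorian leap-year rule.
            static boolean isLeapYear(int year) {
                return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
            }

            // A good test in the sense above: it passes while the century rule
            // is implemented correctly and fails the moment someone breaks it.
            @Test
            public void centuriesAreLeapYearsOnlyEveryFourHundredYears() {
                assertFalse(isLeapYear(1900));
                assertTrue(isLeapYear(2000));
            }

            // A useless test: it runs the code but asserts nothing that can
            // fail, so it stays green no matter how the behavior changes.
            @Test
            public void exercisesTheCodeButCanNeverFail() {
                isLeapYear(1900);
                assertTrue(true);
            }
        }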
  83. Unit tests that are out of date are generally more useless than design documents that are out of date. I've worked with both.
    ...I agree that as the system changes tests can become obsolete and should be updated or removed, whichever is more relevant, but this should be obvious because they would start failing as the changes occur.

    If you have out of date tests that are passing the issue is your test writing, not the process.
    I agree but the same thing could be said for design documents. I think what is valid is that it's easier to determine that tests are out of date.
  84. On a side note, do you have to wear a bush hat to be an 'Agilist'?
  85. A matter of Cost Benefit

    After reading a large part of this thread, it seems the matter of whether TDD (or whatever you call it) is justified boils down to a simple matter of cost versus benefit. I've attempted to develop with unit tests in both a test first and test last approach and found it provides a negative benefit to my code. If anything it becomes intrusive in the design since it forces my design to conform to automated unit testing. In addition, it actually hinders agility. If I change my code, I then have to change my test and make sure it's correct, even though the code is not broken - the test is broken. Or I begin to think not what is the best way to clean up this code so that there is less coupling, more clarity, higher cohesion, more simplicity, etc. Rather, I begin to think how do I write this so that I can test it or how do I write this so that it fits the tests. It becomes an opposing or hindering force against my design.

    In addition, I've found unit tests to be unreliable. Simply because all tests pass does not guarantee the tests are correct or complete. The tests are just as susceptible to human error as much as the code that is being tested.

    In my experience, it actually promotes poor design since it imposes itself upon your design, it's unreliable, and adds maintenance overhead. It's simply a nuisance. Anyway, that's just my 2 cents.
  86. Re: A matter of Cost Benefit

    I've attempted to develop with unit tests in both a test first and test last approach and found it provides a negative benefit to my code. If anything it becomes intrusive in the design since it forces my design to conform to automated unit testing. In addition, it actually hinders agility. If I change my code, I then have to change my test and make sure it's correct, even though the code is not broken - the test is broken. Or I begin to think not what is the best way to clean up this code so that there is less coupling, more clarity, higher cohesion, more simplicity, etc. Rather, I begin to think how do I write this so that I can test it or how do I write this so that it fits the tests. It becomes an opposing or hindering force against my design.

    In addition, I've found unit tests to be unreliable. Simply because all tests pass does not guarantee the tests are correct or complete. The tests are just as susceptible to human error as much as the code that is being tested.

    In my experience, it actually promotes poor design since it imposes itself upon your design, it's unreliable, and adds maintenance overhead. It's simply a nuisance. Anyway, that's just my 2 cents.
    I know I'll get blasted once again for saying this, but it happens to be true. TDD is a skill that can't be learned overnight. I've seen others have the same kinds of problems that Paul had with TDD. And I had them myself when I first tried it. Most likely, Paul made some of these mistakes: TDD Anti-patterns

    For those wanting to learn TDD, it is best to find someone who can teach you in person. Short of that, these two books are a must-read:
    * Refactoring by Martin Fowler
    * Test Driven Development by Kent Beck

    The estimates I've heard for learning TDD are that it takes about three months to get the hang of it and about a year to get really good at it. In my experience, that sounds about right. Some won't want to invest that much time and energy in a skill where they are skeptical about the payoff. But if you do give it a try, you will at the very least improve your refactoring skills.

    Stan Silvert
    http://www.jsfunit.org
  87. Re: Re: A matter of Cost Benefit

    The estimates I've heard for learning TDD are that it takes about three months to get the hang of it and about a year to get really good at it. In my experience, that sounds about right. Some won't want to invest that much time and energy in a skill where they are skeptical about the payoff. But if you do give it a try, you will at the very least improve your refactoring skills.
    I think that's the crux of the debate. A full year to get good at something that appears to give marginal benefit. In the meantime, my code is still solid and I'm getting things done quickly. I see the tests perhaps finding a bug that would otherwise take me at most an hour to find and fix, but most likely a few minutes. I appreciate your point of view, but at least for me, again, the cost doesn't justify the benefit.
  88. Re: Re: A matter of Cost Benefit

    The estimates I've heard for learning TDD are that it takes about three months to get the hang of it and about a year to get really good at it. In my experience, that sounds about right.

    Some won't want to invest that much time and energy in a skill where they are skeptical about the payoff. But if you do give it a try, you will at the very least improve your refactoring skills.


    I think that's the crux of the debate. A full year to get good at something that appears to give marginal benefit. In the meantime, my code is still solid and I'm getting things done quickly. I see the tests perhaps finding a bug that would otherwise take me at most an hour to find and fix, but most likely a few minutes. I appreciate your point of view, but at least for me, again, the cost doesn't justify the benefit.
    That's a fair comment. From my experience, the payoff is quite big though. That bug that you fix in an hour can be very expensive if it makes it to production. Stan Silvert http://www.jsfunit.org
  89. Re: Re: Re: A matter of Cost Benefit

    That bug that you fix in an hour can be very expensive if it makes it to production.
    Of course, but as the slew of anti-patterns you point to attests, there's no guarantee that you'll be bug-free regardless of what approach you take, while there is a guaranteed cost to a TDD approach. Anyway, I'll check out the Beck book to see if any of my assumptions are incorrect. However, perusing a number of websites on the subject, it seems the approach is fraught with pitfalls.
  90. Re: Re: A matter of Cost Benefit

    I think that's the crux of the debate. A full year to get good at something that appears to give marginal benefit.
    Paul, the same could be said about any new technology or tool we use now. Some time ago they were the takes-a-year-to-learn stuff, much like TDD. I think that nowadays there are few new big things that really boost your productivity. Most of them just add a little here or there. The same is true for TDD.
    In the meantime, my code is still solid and I'm getting things done quickly.
    Sorry, but your words reminded me of my third-year students, who said "it just works, so why should I refactor this 200-line method?". I don't mean you write bad code, I'm sure you write good code, but there is always room for self-improvement, and an investment in learning TDD could give you benefits. Or couldn't it?
  91. What is good about TDD and Agile stuff is that the everyday problems of a coder are considered. Suddenly, that coder is not alone with the code and that ridiculous analysis document. New ideas are introduced and the coder is listened to. What is wrong with the process? Would this help? Try this one, that could make your life better... This listening itself must be motivating and lead to better results. Also, other people in a project will be aware of problems and try to create a better atmosphere for that poor coder. It's no wonder that all Agile research shows excellent results.

    About coding methods... I think TDD is really good for novice coders. It forces you to think first about what you are doing. I think it's really useful on the server side and with "POJOs". Of course unit testing is one of my coding tools. But when my customer wants me to implement a big group of use cases into production in two weeks because of new government laws that nobody thought of in time... What do I do? (AATDD method introduced) I will code one week without running the application, run and test two days, and then let the customer test three days. This is not test-first, it's purely think-first. I call this Anti-Agile-TDD for serious coders. It does not work in a group or with new technologies. And I know it's very Dirty, but there is no other way to do this in time (believe me, I've tried). How does it work in production? The customer is happy and gives my company a lot of money. After you have learned TDD, try my method :)

    Tomi
  92. Re: Re: A matter of Cost Benefit

    Sorry, but your words reminded me of my third-year students, who said "it just works, so why should I refactor this 200-line method?". I don't mean you write bad code, I'm sure you write good code, but there is always room for self-improvement, and an investment in learning TDD could give you benefits. Or couldn't it?
    Yeah, I guess it came across as a bit puerile, but my point was that before there was XP and TDD, good design existed and good code existed. I have no problem taking a year learning stuff. It took me about a year to feel very fluent in the GoF patterns. But again, my point is cost versus benefit, not just to myself but to the code as well: minor benefit (if your code works and is well designed, how much more do you gain?) against a big cost (a piece of test code can be twice as large as the code that's actually being tested, and still may not work or be correct).
  93. Re: Re: A matter of Cost Benefit

    The estimates I've heard for learning TDD are that it takes about three months to get the hang of it and about a year to get really good at it. In my experience, that sounds about right.

    Some won't want to invest that much time and energy in a skill where they are skeptical about the payoff. But if you do give it a try, you will at the very least improve your refactoring skills.


    I think that's the crux of the debate. A full year to get good at something that appears to give marginal benefit. In the meantime, my code is still solid and I'm getting things done quickly. I see the tests perhaps finding a bug that would otherwise take me at most an hour to find and fix, but most likely a few minutes. I appreciate your point of view, but at least for me, again, the cost doesn't justify the benefit.
    Right on. Strong developers don't need test code. What a waste of time.
  94. A task has a fixed price that can be put on an invoice and completed. An approach is billable consulting time (all the time) that lasts indefinitely.
  95. TDD on a team

    It seems to me that unless you can get everyone on a team doing TDD, it's not going to be successful. Do people who are TDD advocates agree or is there a way to do it without getting everyone on-board?
  96. Re: TDD on a team

    It seems to me that unless you can get everyone on a team doing TDD, it's not going to be successful. Do people who are TDD advocates agree or is there a way to do it without getting everyone on-board?
    You can definitely do it by yourself. It's better to have the whole team doing it, but there is no reason that your own code shouldn't get the benefits of TDD. I can remember several years ago I was at a new job and was the only one on the team doing TDD. I did an update from the repository and one of about 50 tests suddenly failed. I quickly traced down the cause and went to the person who made the change. He was amazed at what I found, which was an edge-case bug. He promptly fixed the problem and we moved on. Had our entire team not been outsourced to Israel, I think I would have been able to eventually get the rest on board. There is a certain amount of positive peer pressure that can be brought to bear. If others on the team see that you have well-factored code with a nice unit test suite, they will want to have that as well. Stan Silvert http://www.jsfunit.org
  97. Re: TDD on a team

    It seems to me that unless you can get everyone on a team doing TDD, it's not going to be successful. Do people who are TDD advocates agree or is there a way to do it without getting everyone on-board?

    You can definitely do it by yourself. It's better to have the whole team doing it, but there is no reason that your own code shouldn't get the benefits of TDD.
    But if other people are modifying the code I wrote and not updating the test cases, they will quickly become out of date. If anyone comes back to try and update the test cases, they will be stuck trying to do it after the code is complete. They will not be doing test-first development.
  98. Re: TDD on a team

    It seems to me that unless you can get everyone on a team doing TDD, it's not going to be successful. Do people who are TDD advocates agree or is there a way to do it without getting everyone on-board?

    You can definitely do it by yourself. It's better to have the whole team doing it, but there is no reason that your own code shouldn't get the benefits of TDD.


    But if other people are modifying the code I wrote and not updating the test cases, they will quickly become out of date. If anyone comes back to try and update the test cases, they will be stuck trying to do it after the code is complete. They will not be doing test-first development.
    That's true. But most of the time you will maintain the code you wrote for as long as you work at the company. That's just a fact of life. So it's usually not so hard to keep your stuff in good shape. As I've said earlier though, TDD isn't just about regression. Regression tests are a bonus. The big benefits come in the initial development of well-factored code. Stan Silvert http://www.jsfunit.org
  99. Re: TDD on a team

    That's true. But most of the time you will maintain the code you wrote for as long as you work at the company. That's just a fact of life. So it's usually not so hard to keep your stuff in good shape.
    That is not really the case where I work now. We have contractors leaving their marks on the code at all times. They complain bitterly when we require that their code be reviewed and that they document their "unit testing". I can't even imagine what would happen if we required them to do TDD. Actually, that might be fun to try. It was the same at my last employer. Outsourcing is king regardless of the fact that its effect on quality is mostly negative, in my experience.