Discussions

News: Rod Johnson Highlights Technologies for Developers to Watch

  1. Reporting from last week's TSS Symposium, eWeek writes about Rod Johnson's trend keynote, in which Rod outlined technologies we should be on top of. Rod also said that J2EE today is in a very healthy state, with ongoing innovation and successful execution, and is better positioned to take on .NET.

    Read Open-Source Leader Highlights Technologies for Developers to Watch.

    An interesting point Rod brought up concerned what should and shouldn't be standardized:
    Johnson also questioned the notion of applying standards for every technology. Speaking on the recent skirmish regarding Java persistence, Johnson said, "What would have happened if O/R mapping had not been standardized in Java? We wouldn't have lost two to three years with entity beans and we wouldn't have wasted six to 12 months with the JDO/EJB3 [Java Data Objects/Enterprise JavaBeans] war."

    Threaded Messages (23)

  2. I'm a Luddite but...

    The constant focus of developers on testing has caused more problems than it has solved, even though it *can* solve problems in the long run. People have learned to mangle their solutions to cater for testing - and unit testing is often the concept they shoot for, as opposed to being able to solve actual requirements.

    This isn't to say that unit testing isn't important - but I think an architecture is more valuable, and TDD proponents tend to see architecture as "whatever enables testing" as opposed to "whatever enables fulfillment of requirements."

    Or so it seems to me.
  3. I'm a Luddite but...

    The constant focus of developers on testing has caused more problems than it has solved, even though it *can* solve problems in the long run. People have learned to mangle their solutions to cater for testing - and unit testing is often the concept they shoot for, as opposed to being able to solve actual requirements.

    This isn't to say that unit testing isn't important - but I think an architecture is more valuable, and TDD proponents tend to see architecture as "whatever enables testing" as opposed to "whatever enables fulfillment of requirements."

    Or so it seems to me.

    TDD is mostly used as part of agile development processes. And it is about fulfilling business requirements, not just "enabling testing". TDD is one of the tools for getting your business requirements into the code, which, by the way, helps you arrive at a better architecture.
  4. I'm a Luddite but...

    TDD is mostly used as part of agile development processes. And it is about fulfilling business requirements, not just "enabling testing". TDD is one of the tools for getting your business requirements into the code, which, by the way, helps you arrive at a better architecture.

    If that's the case, why are so many agile projects infested with architecture that caters to testing rather than scalability?
  5. I'm a Luddite but...

    If that's the case, why are so many agile projects infested with architecture that caters to testing rather than scalability?

    In my opinion, scalability is something that needs a holistic approach. Doing a big upfront design would put a lot of code into the application that I most probably will not need. Agile development helps me keep my code clean, doing exactly what I (and/or my customer) want it to do. And it is much easier to optimize performance in well-refactored code than to keep optimization in my head from the very beginning and let it drive my development.

    Fuad.
  6. I'm a Luddite but...

    If that's the case, why are so many agile projects infested with architecture that caters to testing rather than scalability?
    That's because a scalable application that _does not work_ is completely useless. First make it work, then make it better.
  7. I'm a Luddite but...

    The constant focus of developers on testing has caused more problems than it has solved, even though it *can* solve problems in the long run. People have learned to mangle their solutions to cater for testing - and unit testing is often the concept they shoot for, as opposed to being able to solve actual requirements.

    This isn't to say that unit testing isn't important - but I think an architecture is more valuable, and TDD proponents tend to see architecture as "whatever enables testing" as opposed to "whatever enables fulfillment of requirements."

    Or so it seems to me.

    I think the problem is in the diverse and obscure definition (or lack thereof) of the notion of "Software Architecture" in the industry. Martin Fowler has an interesting article about it, which you most probably have already read.

    According to him (and I cannot help agreeing), architecture in building construction and in software engineering are two similar but different things. The difference is that in construction it is almost impossible to change the architecture at a later date. In software engineering, nothing is impossible, or even hard, if approached the right way.

    So, it is WRONG to over-emphasize the importance of the software architecture and position it as something holy, untouchable or eternal :)

    Martin defines software architecture as a common understanding of the software project's design, as shared by the expert developers on the project. That common understanding can (and probably will) change as the project progresses and/or business requirements change. It is normal for the architecture to change constantly - that is one of the main principles of iterative development and Agile processes.

    Why do I like TDD?

    1) By the nature of software engineering, any code needs refactoring. Having code covered with unit tests gives me confidence during refactoring. I do not know any other reliable way of gaining the same kind of confidence. Without it, I would be afraid to refactor often, which would make the software product of lesser quality.

    2) Due to the nature of unit tests (emphasis: unit), it is nearly impossible to test "badly-smelling" code. In order to be able to unit-test your code, it has to be cleaner and better designed: short, focused methods; writing to interfaces; loose coupling; etc.

    So, by their nature, unit tests require better code design; writing unit tests affects and improves the design.
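    To make that concrete, here is a minimal sketch (all names hypothetical) of how a unit test pushes toward interfaces and loose coupling: the calculator depends on a `RateSource` interface, so the test can substitute a fixed stub instead of a live feed.

```java
// Hypothetical example: coding to an interface makes the unit testable.
interface RateSource {
    double rateFor(String currency);
}

class PriceCalculator {
    private final RateSource rates;

    PriceCalculator(RateSource rates) {
        this.rates = rates; // dependency is injected, not hard-wired
    }

    double inLocalCurrency(double usdAmount, String currency) {
        return usdAmount * rates.rateFor(currency);
    }
}

public class PriceCalculatorTest {
    public static void main(String[] args) {
        // Stub the collaborator with a fixed rate; no live feed needed.
        RateSource fixed = currency -> 2.0;
        PriceCalculator calc = new PriceCalculator(fixed);
        double result = calc.inLocalCurrency(10.0, "EUR");
        if (result != 20.0) throw new AssertionError("expected 20.0, got " + result);
        System.out.println("ok");
    }
}
```

    If `PriceCalculator` instead fetched rates from a hard-coded remote service inside the method, there would be no seam at which a test could isolate the unit - which is exactly the design pressure described above.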

    I think this might not have been foreseen initially but discovered later. My guess is that initially people just wanted to make the computer do what it is best at - repetitive work (in this case, testing) - and have code test the application, not humans. But as a "byproduct" they noticed that testing requires design improvement, too. When they discovered it, somebody smart enough had an idea: if that is so, why not write tests first? Design is supposed to come before implementation, and if tests affect design, it makes sense to write them before the implementation, too.

    And the TDD started...

    I find this approach fair and useful. It, also, helps my inherent laziness. If I write code first, then I may be lazy to add unit-tests to it, later :-)
  8. TDD with existing code

    Most of the discussion here is about new development, where use cases exist and tests can be written in advance of implementation.
    However, there is often already a large body of code that needs to be refactored. Here there are no test cases, the use cases are hidden deep in the code, and documentation is sparse.
    I am interested in hearing about others' experiences in similar situations. How do you ensure that the refactored code will behave like the old code in the absence of tests?

    -Bijal

    My definition of TDD: "Use constants to verify changes."
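    One commonly used answer is to write "characterization tests" first: before refactoring, run the legacy code, capture what it actually does today, and pin that down as the expected output. A hypothetical sketch (the legacy routine and its recorded outputs are invented for illustration):

```java
// Characterization test: record what the legacy code does today,
// then refactor with that recorded behavior as the safety net.
public class LegacyCharacterizationTest {
    // Hypothetical legacy routine whose "spec" lives only in the code.
    static String legacyFormat(double amount) {
        return "$" + Math.round(amount * 100) / 100.0;
    }

    public static void main(String[] args) {
        // These expected values were captured by running the OLD code,
        // not derived from a spec -- they pin down current behavior,
        // quirks included.
        check(legacyFormat(3.14159), "$3.14");
        check(legacyFormat(10.0), "$10.0");
        System.out.println("characterization intact");
    }

    static void check(String actual, String expected) {
        if (!actual.equals(expected))
            throw new AssertionError("behavior changed: " + actual + " != " + expected);
    }
}
```

    The tests assert current behavior, not correct behavior; once they pass against the old code, any refactoring that keeps them green preserves what the system actually did.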
  9. I'm a Luddite but...

    I agree but there is something else that troubles me with this ongoing TDD euphoria. When you look at a set of test cases, what you normally see is a huge number of tests for trivial and completely obvious cases and almost none to test the really ugly stuff.

    To some degree this is simply a sophisticated attempt at self-deception on the part of developers, but I believe it is also a reflection of the fact that some things will always be very, very hard to test. How do you test whether somebody has used a wrong transaction isolation level or cursor sensitivity somewhere? The effects might only be visible once a year as a small inconsistency in the all-important report.

    How do you test for character set problems that could be introduced somewhere in a chain of edit, transform, load, and save operations distributed over various client and server machines? A single erroneous environment setting on some client machine could be the source.

    There are sources of errors that cannot be effectively tackled by writing large numbers of tests. Some can be found by writing a few very well targeted tests and some types of issues can only be prevented by a deep understanding of the design and logic of the system. For example, the correctness of everything connected to concurrency has to be proved logically, not simply empirically shown to work at some point in time.
  10. Well, yes and no. No technique is a silver bullet and, realistically, you may only be able to cover 80% of your code with unit tests, but I do not think it is as hopeless as you say.
    How do you test for character set problems that could be introduced somewhere in a chain of edit, transform, load, and save operations distributed over various client and server machines? A single erroneous environment setting on some client machine could be the source.

    That's why we write UNIT tests, not just any kind of tests. You separate your complex application into very small pieces that are testable. You never test a complex structure as one whole entity. Small pieces _are_ testable, even if together they produce a highly complicated component.

    And of course unit tests are not the only kind of tests; you have to use other types of testing as well: regression, acceptance, profiling, etc.

    The big difference is that code without unit tests is chaos, an unknown jungle. With unit tests it becomes much clearer and more reliable.
  11. I'm a Luddite but...

    I think I agree with some of the objections raised with TDD, but the problem seems more with how people use it than a basic flaw in the concept. The 2 primary objections raised are:
    1. There is a lot of test-centric coding with less focus on ensuring overall business requirement is fulfilled.
    2. Developers seem to think that having run the product through the Unit tests ensures that the product is well-tested and production ready.

    All that TDD should be expected to do is run a sanity check at the end of a new build to ensure that the contracts that each unit of code is supposed to fulfill have not been broken. You still need to go through your entire QA cycle to ensure you have a functional product. So, the automated unit tests are just additional checks and do not replace QA. But, unfortunately I have seen situations where people believe that once they have a proper build and unit tested they can reduce the time they spend in QA. Not really true!!! It just ensures that if the unit tests were written properly, the quality of the code reaching QA will be a little better.
  12. I think what Srinivas says is worth repeating. As far as quality 'assurance' goes, TDD ensures you have a stable codebase. Period.

    A stable codebase means: refactorable & buildable. Confidence in refactoring your code on an ongoing basis (which leads to more productivity, cleaner/easier to understand/maintain code, better designs that fit the 'evolved' requirements, etc.); plus, a build that isn't dead on arrival (which allows QA teams, customers, etc. to be able to reliably try to use your product). That's all.

    Higher level functionality testing (not the domain of TDD) requires customers, QA teams, system tests, ad hoc tests, etc., in short, a well-thought out strategy for ensuring that requirements have been met.

    Here are Srinivas' comments again (like I said, they're worth repeating :-) -->
    I think I agree with some of the objections raised with TDD, but the problem seems more with how people use it than a basic flaw in the concept. The 2 primary objections raised are:
    1. There is a lot of test-centric coding with less focus on ensuring overall business requirement is fulfilled.
    2. Developers seem to think that having run the product through the Unit tests ensures that the product is well-tested and production ready.
    All that TDD should be expected to do is run a sanity check at the end of a new build to ensure that the contracts that each unit of code is supposed to fulfill have not been broken. You still need to go through your entire QA cycle to ensure you have a functional product. So, the automated unit tests are just additional checks and do not replace QA. But, unfortunately I have seen situations where people believe that once they have a proper build and unit tested they can reduce the time they spend in QA. Not really true!!! It just ensures that if the unit tests were written properly, the quality of the code reaching QA will be a little better.
  13. Well, yes and no. No technique is a silver bullet and, realistically, you may only be able to cover 80% of your code with unit tests, but I do not think it is as hopeless as you say.
    How do you test for character set problems that could be introduced somewhere in a chain of edit, transform, load, and save operations distributed over various client and server machines? A single erroneous environment setting on some client machine could be the source.
    That's why we write UNIT tests, not just any kind of tests.

    I don't understand your reasoning there. Unit tests normally don't cover situations like the one I have described. Of course there are other kinds of tests, but I find it questionable to throw lots of tests of whatever kind at a problem that is not well understood. If you run into problems connected to an overly complex distributed system and its configuration, it's probably better to change the architecture or learn to understand it better. I'm well aware though, that one doesn't preclude the other.
  14. I'm a Luddite but...

    There are sources of errors that cannot be effectively tackled by writing large numbers of tests.

    That's likely why the Agile Manifesto doesn't discuss testing. Indeed, "agile development assumes that the requirements are incomplete and that the customer is the only one able to refine the requirements". Requirements changes often mean lost work, compounded by whatever effort was put into test development. To some extent, Agile and testing are conflicting forces.
  15. I'm a Luddite but...

    Requirements changes often mean lost work, compounded by whatever effort was put into test development. To some extent, Agile and testing are conflicting forces.

    I just don't see this. Agility is not about doing the least amount of typing possible in any situation. Rather, a core principle is accepting that change will happen. If you have no way to verify a change as you are working, other than another QA cycle, the team tends to fall into paralysis as it tries to deflect any changes, much to the detriment of the customer.


    Keep in mind that these changes take (at least) two forms: new/modified requirements, and refactoring to simplify the system in preparation for the next shift in direction. You may be able to localize requirements changes at first, but without refactoring my experience has been that the overall complexity of the system tends to rise without bound, making each incremental change that much more difficult and risky. Having a suite of unit tests helps nail down the necessary invariants and ensures that the contract of each unit (however that is defined) is upheld with each transformation. I have found that having that test suite allows developers to tackle changes more effectively and with greater confidence, leading to actually implementing features and refactorings correctly rather than as a chain of fragile but allegedly low-impact hacks.
  16. I'm a Luddite but...

    Vincent, great way to put it. I agree whole-heartedly. TDD is indeed very important.

    It would be nice to test every object to a great degree by mocking every aspect of every object; however, in my experience this has become overwhelming for large projects, so I now subscribe to the theory of (1) unit-testing features and sub-features valued by BA/QA, paralleling QA test cases as much as possible, and (2) unit-testing objects deemed complex by the architect/developer.

    The first is essentially comprehensive from a testing perspective, but the second is important for dealing with complex internals that may be modified in isolation and unit-tested.

    Of course, this is in regards to POJO unit-testing only and not the final integration-level testing with all databases, queues, etc. hooked up.

    This seems to reduce and focus the number of unit tests.
  17. I'm a Luddite but...

    My experience with this mirrors yours. I believe, however, that there is a common misunderstanding about what TDD should entail.

    A test-driven approach does not consist of writing tests for every implementation detail of every object. That effectively pins down the entire implementation of the unit as an invariant to be rechecked and preserved with each transformation. Rather, the tests that actually add value are the ones that verify exactly the requirements you describe. When one uses the tests that actually reflect intended behaviors to drive development, additional classes/methods/entities may fall out of the implementation. However, those implementation details should not be pinned down by writing ex post facto tests for them. Doing so adds little value, as the intended behavior is already exercised elsewhere, and actually makes future changes more difficult: one would have to determine for each test whether it actually reflects an intended invariant and either rewrite or remove it.
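    A hypothetical contrast between the two kinds of tests: the test below pins down an intended behavior (a discount rule, invented for illustration) and survives any internal refactoring, while a test asserting which internal collection or helper methods the class uses would only pin down implementation details.

```java
import java.util.List;

// Hypothetical order class used to contrast behavior tests
// with implementation-detail tests.
class Order {
    private final List<Double> items;
    Order(List<Double> items) { this.items = items; }

    // Intended behavior: orders over 100 get a 10% discount.
    double total() {
        double sum = items.stream().mapToDouble(Double::doubleValue).sum();
        return sum > 100 ? sum * 0.9 : sum;
    }
}

public class OrderTest {
    public static void main(String[] args) {
        // GOOD: verifies a requirement -- still passes if the internals
        // are rewritten with a loop, an array, or a different helper.
        double discounted = new Order(List.of(60.0, 60.0)).total();
        if (Math.abs(discounted - 108.0) > 1e-9)
            throw new AssertionError("discount rule broken: " + discounted);

        double plain = new Order(List.of(10.0)).total();
        if (Math.abs(plain - 10.0) > 1e-9)
            throw new AssertionError("small order should be undiscounted");

        // A BAD test here would assert, say, that Order stores an
        // ArrayList internally -- a detail no requirement mentions.
        System.out.println("behavior verified");
    }
}
```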

    I have noticed that in the cases where TDD led to a profusion of low-value tests, the developers frequently were not taking a test-driven approach at all. Rather, they wrote a profusion of tests after the fact with the expectation that the more tests they had lying around, the better the system would be, regardless of what was actually tested. To a great extent, these exercised internal implementation details that were not reflected in actual requirements. This has the dual effect of creating a sea of tests that do not add much value and of pinning down as invariants attributes and behaviors that need not be.
    An extreme (in the bad way :) ) example I saw was a developer who would run the system, write down the output, and then write a test to assert that output. Every time any component on which his code depended changed, he would have to tweak his tests. He figured it was better to have a larger number of tests, even if their failure had no meaning other than that it was time to change the expected values.
  18. I'm a Luddite but...

    "A test-driven approach does not consist of writing tests for every implementation detail of every object."

    Well stated. I have yet to see the dramatic benefits of TDD, or Agile for that matter, or be convinced that they offer a huge improvement over what went before. They might help on your CV, though. Nobody ever bothered with full-blown RUP, but that doesn't mean it was all bad and that Agile/TDD is now all good.

    It seems to me that if you are developing a fairly generic layer of an SOA whose future uses (i.e. clients) may be unknown, writing lots of unit tests that exercise the internal workings of that layer is not a good approach - you are locking down the functionality of that layer, making future changes more difficult and costly. Better to have end-to-end unit tests invoked from the client layers that exercise the real business requirements.

    I also find the agile concept of only implementing known requirements and disregarding possible future requirements counter-intuitive. Maybe it's a pushback from all those overengineered systems with functionality that never got used. Once a system is in production, it is expensive to go back and reengineer it to make it more flexible. So I think a reasonable balance needs to be achieved.

    My simple advice is to take what works for you from Agile/TDD, RUP, or whatever your favourite methodology is, and adapt it to your circumstances, using common sense and experience (if you're lucky enough to have any of either!) as your guides.
  19. I'm a Luddite but...

    Better to have end-to-end unit tests invoked from the client layers that exercise the real business requirements.

    Agile emphasizes end-to-end readiness (steel thread), and the Manifesto mentions neither testing nor modular development. I don't write unit tests, except sometimes for a subsystem, but almost never for a class. I've found that design-by-contract assertions exercised during sub/system testing are the quickest way to uncover bugs.
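    A small sketch of what that design-by-contract style can look like in plain Java (the `Account` class is hypothetical): precondition and postcondition `assert` statements document the contract inline and fire during sub/system testing when assertions are enabled with `java -ea`.

```java
// Design-by-contract style: assertions state and enforce the contract
// while system tests exercise realistic scenarios on top.
public class Account {
    private long balanceCents;

    public void deposit(long cents) {
        assert cents > 0 : "precondition: amount must be positive";
        balanceCents += cents;
    }

    public void withdraw(long cents) {
        assert cents > 0 : "precondition: amount must be positive";
        assert cents <= balanceCents : "precondition: sufficient funds";
        long before = balanceCents;
        balanceCents -= cents;
        assert balanceCents == before - cents : "postcondition: balance reduced";
    }

    public long balance() { return balanceCents; }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(500);
        a.withdraw(200);
        System.out.println("balance=" + a.balance()); // prints balance=300
    }
}
```

    The appeal is that the contract is checked on every code path the system tests happen to drive, without writing a separate unit test per class; the trade-off is that assertions are typically disabled in production, so they are a testing-time net only.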

    Remember that you have to bill for the labor it takes to write unit tests. If the system tests can catch the same bugs, then it is cheaper for the developer to validate his modules by reusing the system tests. In this case, reuse saves test-development labor. I.e., more time for TheServerSide.
    I also find the agile concept of only implementing known requirements and disregarding possible future requirements counter-intuitive.

    I'm not particularly bright, and I independently learned that rule before ever knowing of Agile or XP. That rule is a naturally inescapable fact. It's not just the obvious saving of engineering labor; it's an affirmation that the customer's explicit description is always right, as against corrupting forces such as a developer's hope for reusability.
  20. I'm a Luddite but...

    Requirements changes often means lost work, compounded by whatever effort was put into test development. To some extent Agile and testing are conflicting forces.

    There may very well be changes to the JUnit tests that cover the functionality that changed. (Which is why developers should realize that unit tests need to be maintainable as well.) The idea of a comprehensive test suite is that you can run all your tests and be relatively comfortable that the change you made didn't affect other parts of the system. Without this test suite, any change to the system is a scary thing that takes a long QA cycle to verify that the system is still stable. This is how Agile makes embracing change and testing go hand in hand.
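    The regression-suite idea in miniature (hypothetical units, plain Java instead of JUnit so it stands alone): after any change, every check is rerun, so a breakage in an unrelated part of the system surfaces immediately instead of in the next QA cycle.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Miniature regression suite: all checks run on every change,
// so an unintended side effect fails fast and is easy to localize.
public class MiniSuite {
    // Hypothetical units under test.
    static int add(int a, int b) { return a + b; }
    static String greet(String name) { return "Hello, " + name; }

    public static void main(String[] args) {
        Map<String, Boolean> results = new LinkedHashMap<>();
        results.put("addition", add(2, 2) == 4);
        results.put("greeting", greet("TSS").equals("Hello, TSS"));

        boolean allGreen = true;
        for (Map.Entry<String, Boolean> r : results.entrySet()) {
            System.out.println(r.getKey() + ": " + (r.getValue() ? "PASS" : "FAIL"));
            if (!r.getValue()) allGreen = false;
        }
        if (!allGreen) throw new AssertionError("regression detected");
    }
}
```

    In practice this role is played by a JUnit suite wired into the build, so the "run everything" step costs the developer nothing beyond keeping the tests maintainable.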
  21. Interesting. Does make sense.

    Rest of the industry goes like this:
    1. Here is a problem.
    2. Innovate; create and improve a solution.
    3. Use it; go to step 2 if not satisfactory.
    4. Hmm... let's standardize.

    If you observe, many fields (not only computer science) follow this approach.

    JCP goes like this:
    1. Here is a problem.
    2. Here is a group of bright guys; let's make a solution.
    3. Innovate and make a solution, then standardize ("We think this will work; we have NOT worked with it ourselves"), with a crappy reference implementation if you want to prove it works. Yay!!! The pet store runs.
    4. Make the industry use it for 2 years. Hmm, they hate it.
    5. OK industry, we screwed up; here is another standard (go to step 2).

    In short, this is the difference between innovate-then-standardize and standardize-then-innovate.

    For the persistence part of EJB3, the JCP is following the innovate-then-standardize model (based on Hibernate).
  22. Interesting. Does make sense.

    Rest of the industry goes like this:
    1. Here is a problem.
    2. Innovate; create and improve a solution.
    3. Use it; go to step 2 if not satisfactory.
    4. Hmm... let's standardize.

    If you observe, many fields (not only computer science) follow this approach.

    JCP goes like this:
    1. Here is a problem.
    2. Here is a group of bright guys; let's make a solution.
    3. Innovate and make a solution, then standardize ("We think this will work; we have NOT worked with it ourselves"), with a crappy reference implementation if you want to prove it works. Yay!!! The pet store runs.
    4. Make the industry use it for 2 years. Hmm, they hate it.
    5. OK industry, we screwed up; here is another standard (go to step 2).

    In short, this is the difference between innovate-then-standardize and standardize-then-innovate.

    For the persistence part of EJB3, the JCP is following the innovate-then-standardize model (based on Hibernate).

    But this isn't the model of the JCP. The JCP uses existing technology if it's there.
    For the persistence part of EJB3, the JCP is following the innovate-then-standardize model (based on Hibernate).
    But this isn't the model of the JCP. The JCP uses existing technology if it's there.

    Didn't I already mention that?
  24. Error in Firefox using link

    Active Server Pages error 'ASP 0126'

    Include file not found

    /component/display_module_group_component/0,2443,s%3D26706%26t%3D1,00.asp, line 348

    The include file '/component/display_whitepapers/0,1291,c%3D34,00.asp' was not found.