TMC Releases Productivity Case Study Results


  1. TMC Releases Productivity Case Study Results (47 messages)

    The Middleware Company has released the results of its productivity case study. The study took two J2EE development teams and had each write the PetStore from scratch: one using an MDA approach, the other a more traditional code-centric approach.

    The paper discusses what was done each week, design patterns that were used, issues that came up along the way, and more.

    This makes for an interesting read, as we get to observe two parallel processes and see the differences. Take a quick peek at the PDF and let us know what you think.

    2003 Productivity case study

    Middleware Company Case Study Site

    Threaded Messages (47)

  2. Please explain.

    Take a quick peek at the PDF and let us know what you think.


    According to the study, the best productivity result was obtained with the MDA approach and took about 330 hours.
    So let's see... according to Clinton Begin, "JPetStore was developed by a single developer in his spare time (i.e. not his full time job) over about two weeks." He also refactored the code in 40-50 hours to make JPetStore2.
    I have no reason to believe that Clinton is lying, because I also developed a sample Petshop in a very similar timeframe.
    I don't know exactly what to think, but the following thoughts are crossing my mind.

    1 - Clinton and I are some kind of super-duper developers... unlikely... at least as far as I'm concerned :-)

    2 - Both approaches mentioned in the study suck, one just sucks a little less than the other.

    3 - Maybe this is a case of "Too Many Cooks in the Kitchen Spoil the Broth". The cooks being the "One senior J2EE architect" and "Two experienced J2EE programmers".


    Do the time measurements mentioned in the article seem reasonable to anyone?
    They sure don't seem reasonable to me.
    Can the time inflation be explained by the use of EJBs?

    Regards,
    Luis Neves
  3. Please explain.

    Luis,

    The "petshop" you implemented was called the same (or similar) thing as this application, but this is a different application. Below, I am copy-pasting from the Specification document, the section on "History of the Specification".

    Another data point is that Clinton Begin of iBATIS/JPetStore was part of the expert panel that wrote the spec and was very involved. He spent more time than what you describe working with us just to make JPetStore a compliant non-EJB reference implementation of the specification, which we were very interested in having from a third party.

    The relevant paragraphs from the spec:

    "The Middleware Company Application Server Platform Baseline Specification derives from Sun’s original PetStore sample application, but contains several new and important aspects."

    "The first, obvious, and important one is that Sun’s PetStore was only an implementation, not a specification. Until this document, there has never been a functional and behavioral specification."

    ... 2 more paragraphs not being included ...

    "The “modern” PetStore implementations which conform to this specification will have almost nothing to do with Sun’s original implementation. Practically, the only thing the modern PetStore codebases have in common with Sun’s original PetStore is that the application domain involves pets being purchased."

    Salil Deshpande
    The Middleware Company
  4. Hi all,

    To be fair, and to add further clarification to what Salil has said, the majority of the (new) time spent on JPetStore was not spent on functionality and compliance. Rather, it was spent somewhat on non-functional requirements and mostly on implementing a non-EJB distributed transaction via JTA that works on multiple app servers (configuration and tinkering). TMC was involved because they have the environment in which the PetStore implementations are best tested for compliance with the new spec, and they were very helpful!

    The changes to the actual functional requirements were implemented in a very short timeframe of 2.5 days (can be verified via SourceForge CVS). It should also be noted that those were regular workdays for me (Mon, Tue), and although not verifiable, I was not likely able to work on JPetStore for the majority of that time. A list of the functional changes required to make JPetStore 2.1.0 compliant is included at the bottom of this post.

    I should state that the productivity benchmark that I set out to beat (and did) with JPetStore was that of Microsoft's .Net PetShop 1.5 implementation. Scott Stanfield (CEO, Vertigo Software) stated it took 5 weeks and 2 developers, with the last week being for performance tuning. Although the .Net Pet Shop 1.5 is not compliant, it's likely that Vertigo beat these benchmarks (or could have), without a spec. I almost think this would be harder, because they had to extrapolate their requirements from Sun's J2EE Pet Store, much as I had to extrapolate from the .Net Pet Shop. In all honesty, that's what took most of my time! I think with a spec, I could have implemented it faster (I even looked for one!).

    Are all of these comparisons fair? No. Here are a few reasons why:

    1) I implemented JPetStore by myself. Therefore, there was no "team" overhead. No meetings, no communication, no debates, no conference calls, no emails etc. We all know that this takes time. I would say that the PetStore application is too small for any more than one person to implement. Beyond one person, productivity is lost. Plus, watching the Swordfish DVD while you code increases productivity by a multiple of 10x. ;-)

    2) JPetStore makes use of a code-saving persistence framework called the iBATIS Database Layer (DAO + SQL Maps --NOT a code generator). This framework actually makes JPetStore non-compliant, as the spec clearly states that persistence frameworks are not allowed. By contrast, it would seem that raw JDBC was used in the case of the "Traditional" team, which certainly cost them time (as the article states). For the record, I find it unfair that JDBC frameworks are not allowed, but presentation frameworks and code generators [compatible with the presentation framework] are...

    3) I didn't use EJB. This study stated that "We did mandate that the teams use EJB in their code bases". EJBs do not save time (unless they solve an evident problem), especially when they are hand coded (was xdoclet allowed?)!

    4) I used scary deadly tools to develop JPetStore. Like IntelliJ IDEA. :-)

    5) Finally, Luis and I are in fact some kind of super-duper developers! (okay I'm just kidding about that one) ;-)

    I hope this helps clear things up.

    Cheers,
    Clinton


    ---JPetStore Change Log----

      - Category and Product lists now display 4 items per page
      - Item list now displays 4 items per page
      - Pets favorite list now displays 4 items per page
      - Shopping cart now shows real-time "in-stock" indicator
      - Checkout page now shows line totals for each row
      - Shopping cart now shows line totals for each row
      - Order page now shows line totals for each row.
      - The favorites list is now displayed after AddItemToCart
      - Session timeout set to 10 minutes
      - Banner now only displays on Index and Shopping Cart pages
      - Order confirmation only displays address information (no payment info)
      - Order ID is now generated upon Order completion rather than beforehand
      - Search functionality for multiple keywords ("any", "or")
      - Shopping cart paginated to 4 items per page
      - Checkout summary paginated to 4 items per page
      - Index page is no longer dynamic (all links static)
      - Improved authentication
      - Implemented pluggable PetStoreLogic (see logic.properties)
      - Implemented OraclePetStoreLogic to support Oracle sequences
      - Implemented MsSqlPetStoreLogic to support auto-generated IDs
      - Tuned the catalogue cache models
  5. Bad press for Java development

    I have to agree with Luis Neves on this one. We are supposed to be impressed with 330 hours to develop this Pet Store website.

    Both approaches are classic examples of the over-engineering which pollutes Java development. This relatively simple website should have been developed in less than 80 hours.
  6. I don't know how long most of you folks have been writing code, but it has been my experience that every time we try to move to a higher level of abstraction we encounter the same resistance. When assembly was overtaken by compiled/functional languages. When they were overtaken by OO languages. And now for MDA.

    In every case developers bemoaned the methodology until dragged kicking and screaming into the fold. Then, once they discovered they were much more productive, they became converts. So it will be with MDA. You don't lose your development skills, you just concentrate them on the business logic, not the mundane glue between layers.

    I have no idea whether OptimalJ's code generator produces better code than I could by hand. As long as it's better than my average day, I'm better off.

    doug
  7. A very interesting article. I'm certainly going to investigate MDA in a bit more detail.

    However, what I would really like to see now is a follow-up article that compares MDA to the other approach over a maintenance cycle where new requirements are appearing and new features are constantly being added. Seeing as how I spend 80% of my time maintaining existing applications, being able to complete the development 30%(ish) faster using MDA doesn't actually save much unless that saving is carried forward into the whole product lifespan.
  8. However, what I would really like to see now is a follow up article that compares MDA to the other approach over a maintenance cycle where new requirements are appearing and new features being constantly added.

    Yes, a maintenance competition would be nice. But maintenance is more than adding "new features" and "new requirements", as you say. What I'd rather see is a loss-of-work comparison as existing requirements are refined. For honing existing requirements and then stuffing the refinements into the deliverable, I'm guessing MDA would be even better than this paper suggests. I.e., this paper is conservative in its praise of MDA.

    Seeing as how I spend 80% of my time maintaining existing applications, being able to complete the development 30%(ish) faster using MDA doesn't actually save much unless that saving is carried forward into the whole product lifespan.

    Agreed.
  9. Chris Turner and Brian Miller,

    Thank you. Yes, this was out of scope of this study, but I agree it is very important (and perhaps more so than development, as you say). We are hoping to do such a follow on study in the future.

    Salil Deshpande
    The Middleware Company
  10. MDA vs libraries

    I don't know much about MDA development, but here is a question for those that do.

    It sounds to me like MDA is suitable for solving known problems. The petstore would seem to be a typical known problem (call it the 'ecommerce' problem).

    Similarly, template solutions (example solutions that one copies, pastes, and then modifies) and libraries are also solutions to known problems that can greatly speed up development time (especially template solutions). I have copy-and-pasted and customized an open-source bulletin board solution in a couple of days. Developing the bulletin board from scratch would obviously have taken an order of magnitude longer. I suspect the same can be said for an app like the petstore (which is itself a template, ironically ;-). A colleague of mine (with an MS focus) has used some of Microsoft's new template sites (which apparently are actually pretty good now, according to him) to develop a production-quality ecommerce solution in a couple of weeks (i.e. 2 developers for a total of around 160-200 hrs of development), which kicks both examples' butts in the above study.

    1.) Is MDA better than a 'template' solution? My anecdotal evidence suggests to me that for known problems, a working template solution is faster.

    2.) Can MDA assist much for problems not in its template repository? It seems unlikely that it could. For example, say I want to write a program to locate submarines by matched field inversion (simulating possible locations of the submarine, solving the acoustic wave equation in water and comparing wave pressure values to the values at my hydrophones). Is MDA going to help me out here? I don't think so.

    3.) MDA is using UML as the new language, but is UML really detailed enough to be an implementation language? Can someone point me to some UML that is detailed enough that it could be transformed directly into an application? I have never seen UML that could come anywhere close to this and am curious how this is done. Is UML really the 'best' high-level language?

    Cheers.
  11. MDA Discussion

    Check out this very interesting discussion at TSS for MDA:

    http://www.theserverside.com/home/thread.jsp?thread_id=20314

    Lofi.
    http://www.openuss.org
  12. MDA vs libraries

    MDA is using UML as the new language, but is UML really detailed enough to be an implementation language?

    Sadly, no, since UML lacks fine-grained process modelling. In most cases it is impossible to describe an algorithm using UML; e.g., quicksort can't be written in UML. The OMG is requesting proposals for adding action semantics to UML, and then UML will be a complete programming language. The proposal from the Action Semantics Consortium is the best I've seen. It gives an abstract syntax, with unspecified mappings possible to arbitrary text or visual flow languages.
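    To make the gap concrete, here is the kind of fine-grained, step-by-step logic (recursion, index arithmetic, swaps) that plain UML has no standard notation for. A minimal Java quicksort, purely illustrative and not tied to any UML tooling:

```java
import java.util.Arrays;

// A plain in-place quicksort: the kind of step-by-step control flow
// (recursion, index arithmetic, element swaps) that UML without action
// semantics cannot express in its standard diagrams.
public class QuickSort {
    public static void sort(int[] a) {
        sort(a, 0, a.length - 1);
    }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;            // last element as pivot
        for (int j = lo; j < hi; j++) {       // partition around the pivot
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t; // place pivot at its final slot
        sort(a, lo, i - 1);                    // recurse on both halves
        sort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] data = {5, 3, 8, 1, 2};
        sort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 8]
    }
}
```

    Without action semantics, a UML model could at best declare that a `sort` operation exists; the partitioning logic itself would have to live outside the model.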
  13. MDA vs libraries

    I wouldn't call the petstore the "ecommerce" problem. First of all, there is no commerce involved in it. Secondly, it covers only one potential aspect of ecommerce: the shopping cart. It may sound banal to refer to such a glorious application as that, but that's all it is, and you could buy a $40 CGI program off the shelf at your local office supply store in 1998 to do the same thing, with comparable performance. So, yes, it's a solved problem.
  14. MDA vs libraries

    I wouldn't call the petstore the "ecommerce" problem. First of all, there is no commerce involved in it. Secondly, it covers only one potential aspect of ecommerce: the shopping cart. It may sound banal to refer to such a glorious application as that, but that's all it is, and you could buy a $40 cgi program off the shelf at your local office supply store in 1998 to do the same thing, with comparable performance. So, yes, it's a solved problem.


    Agreed. None of the above messages get to the heart of the issue for me, i.e. is MDA anything beyond a set of parameterized template architectures, and in this case wouldn't you be better off to use an engine/library/API approach?

    Clinton Begin pointed out in a message that the non-MDA team was prevented from using frameworks, libraries, etc, which to me makes the comparison between the two processes pretty meaningless.

    Imagine this scenario. You have to write a 3D Game. You have the option to license a Game engine (basically a specialized library and some tools) or you can purchase an MDA tool and design and build your 3D game from scratch. Which method do you think will produce a game the fastest and with the best performance, most features, etc ;-)

    I guess I don't see the value of code generation. To me, anything that can be done with code generation can be done with a library/engine/template approach, and is likely to be more usable, because there is better abstraction and separation of the problem.

    Interested to get comments, but probably too late in the life of this thread. Oh well.

    Cheers.
  15. I think that it is extremely commendable that The Middleware Company has the courage to address such a potentially controversial issue.

    An old saying is that "Anything labor-intensive will go offshore"... and much work IS going offshore. Are we doing too much manual labor? (I'm not expert enough to answer that.)

    FYI:

    The UML standards committee ( http://www.omg.org/ ) approved a major upgrade (UML 2.0) in early June, which is supposed to greatly expand its capabilities. I don't know when the specs get released to the public; I heard a rumor of September.

    IBM / Rational has a new MDA tool, details at:

    http://www.rational.com/products/rapiddeveloper/index.jsp?SMSESSION=NO

    (I believe that Rational bought this company, then IBM bought Rational, which would account for IBM having at least 2 productivity paths - this new tool and all the built-ins in WebSphere V5. I believe that they do require a J2EE Architect to set some of the underlying stuff up, which is good. A separation of App Programmer & Architect / Infrastructure did have advantages in some old legacy worlds, the specialization did allow each to become better at each role.)

    Best to all - GH
  16. I think that it is extremely commendable that The Middleware Company has the courage to address such a potentially controversial issue.

    Commendable?!? ... I don't know about that.
    Controversial... well, I think yes. The study was, after all, funded by
    Compuware Corporation, a company that among other things sells MDA tools.
    I am not surprised that a study funded by an MDA tool vendor concludes that MDA results in "productivity gains"... are you?
    You can believe that "The Middleware Company pledges to you that it has conducted itself in a fair and impartial manner in this case study."... I'm a cynic.

    But even if you believe what The Middleware Company is telling you, after reading the study I'm a little surprised by the conclusions.
    When I was in school, one of the things I learned was that to measure the variation caused by a variable in a complex system, you keep everything else the same and change just that variable.
    What did The Middleware Company do? They gave *different* development approaches to *different* teams and then concluded that MDA results in higher productivity because one team finished the job in less time... to me this makes no sense.
    Why didn't they use the same team?!
    Why didn't they give the two teams the opportunity to use both approaches?

    Am I the only programmer who thinks that the team is one of the most crucial aspects of software development?

    I must say that I'm a little baffled by this study... in my mind it gives further evidence that The Middleware Company should stay well clear of any kind of product comparison. Their past and present history doesn't qualify them for that.

    Does MDA result in increased productivity? ... Perhaps; in theory it looks like it, but this study fails to show that.

    Regards,
    Luis Neves
  17. <Luis Neves>
    What did The Middleware Company do? They gave *different* development approaches to *different* teams
    Why didn't they use the same team?!
    Why didn't they give the two teams the opportunity to use both approaches?
    </Luis Neves>

    Luis: that makes no sense. When writing the app the second time, the team would be biased from writing it the first time. The way TMC did it is not perfect, for the reasons you mention, but better than the alternatives you are suggesting.
  18. That was part of the reason. The project manager spent a great deal of time making sure that the skills and experience of the team members were equivalent. I believe two to three weeks were spent just balancing the team and skillsets, with a great deal of input from the team members themselves.

    I agree with John and Luis that having two separate teams is not perfect, but the teams were as close as they could get them.

    Using one team would have certainly been "easier" and it would have achieved the objective of keeping the team constant, but unfortunately the results would have been less meaningful in this case than with the approach we took.

    Salil Deshpande
    The Middleware Company
  19. <Luis Neves>

    > What did The Middleware Company do? They gave *different* development approaches to *different* teams
    > Why didn't they use the same team?!
    > Why didn't they give the two teams the opportunity to use both approaches?
    > </Luis Neves>
    >
    > Luis: that makes no sense. When writing the app the second time, the team would be biased from writing it the first time.

    What do you mean the team would be biased?
    Do you mean that on the "second round" the team would be influenced by
    the knowledge of the previous round? (in which a different
    development approach was used)

    That may very well be true, but keep in mind that this application follows a very clear specification and very well known problem space... also... they used *experienced* J2EE developers in both teams, so in a way the developers are already biased to boot.

    Nonetheless, you have a good point; I wonder how the influence of past experiences can be measured.

    Regardless of the correctness of my proposed alternatives I stand by my position, the conclusions of the study are built over very shaky foundations.

    Regards,
    Luis Neves
  20. What do you mean the team would be biased?

    >Do you mean that on the "second round" the team would be influenced by
    >the knowledge of the previous round? (in which a different
    >development approach was used)

    Yes. I do not think the foundations are shaky. I would instead argue that the study is valid but not profound. All it is showing is that using higher-level tools can increase productivity. That is intuitive, not shaky.
  21. <John Wong>
     Yes. I do not think the foundations are shaky. I would instead argue that the study is valid but not profound.
    </John>

    Well... you got me humming the song Let's Call The Whole Thing Off with Ella Fitzgerald & Louis Armstrong ;-)

    <John Wong>
    All it is showing is using higher level tools can increase productivity. That is intuitive, not shaky.
    </John>

    Actually, you touch upon another problem with the study... the tool.
    The Middleware Company went out of their way not to mention the tool used in the MDA approach, when it's one of the most important factors in the whole thing.
    Wouldn't you agree that the quality of the MDA tool is highly relevant to the productivity gain?
    It's conceivable that the use of a poor-quality MDA tool could result in a longer development time.

    I agree with you: it's intuitive that higher-level tools can increase productivity. But we should measure, not speculate.
    I find this attempt at measurement flawed at best, and a poorly disguised advert for MDA tool vendors at worst.

    Regards,
    Luis Neves
    I agree. Not only does this study assert that "increased productivity is necessarily good" (when in fact this statement ignores quality, maintainability, and usability), it also doesn't sufficiently measure MDA in an environment where code generation is controlled properly. In this study it was up to the "traditional" group to find their own code generation if they wanted to.

    When you aren't measuring for UI or API usability, maintainability or quality, code generation is always going to increase productivity (especially if your code generator generates lots of code :).

    A better way to do this study would be have more than two groups, and measure for quality and maintainability.

    We get, at the end, ONE metric for each code base. That would not be sufficient for a senior project in any undergrad program. Embarrassing, to say the least.

    I applaud TSS's desire to present some scientific data, but they really need to learn how to do it. A basic overview of software metrics seems to be in order, as well as a review of the scientific method.

    Of course, if it is a veiled advertisement for whatever that company is, then poo on TSS.
  23. <Luis Neves>
    Why didn't they use the same team?!
    Why didn't they give the two teams the opportunity to use both approaches?
    </Luis Neves>

    I think they need to specify some design parameters, and have multiple teams implement the application like so:

    Team A: MDA, Traditional
    Team B: MDA, Traditional
    Team C: Traditional, MDA
    Team D: Traditional, MDA

    On the second round a team is most likely going to be faster, since they implemented the application in the first round. Therefore you need 4 teams to reduce differences between the teams (even if the teams are of equal skill, you can still get differences; BTW, how did they compare skill levels?).

    Before any study of this kind is done, they need to read "The Basic Practice of Statistics" by David S. Moore
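    The value of crossing the order can be illustrated with made-up numbers. In this Java sketch (all figures hypothetical: a true effort per approach plus a flat round-two learning discount), averaging each approach over teams that did it first and teams that did it second cancels the learning effect:

```java
// Hypothetical numbers only: each approach has a "true" effort, and any
// second-round run gets a flat learning discount. Crossing the order
// (teams A,B do MDA first; teams C,D do Traditional first) and averaging
// each approach over all four runs cancels the learning effect, leaving
// the true MDA-vs-Traditional gap.
public class Counterbalance {
    static final double MDA = 330, TRAD = 500; // assumed true efforts (hours)
    static final double LEARN = 100;           // assumed round-2 discount

    static double run(double trueEffort, boolean secondRound) {
        return trueEffort - (secondRound ? LEARN : 0);
    }

    static double averagedGap() {
        double mdaAvg  = (run(MDA, false) + run(MDA, false)        // A, B: round 1
                        + run(MDA, true)  + run(MDA, true)) / 4;   // C, D: round 2
        double tradAvg = (run(TRAD, true) + run(TRAD, true)        // A, B: round 2
                        + run(TRAD, false) + run(TRAD, false)) / 4; // C, D: round 1
        return tradAvg - mdaAvg;
    }

    public static void main(String[] args) {
        System.out.println(averagedGap()); // 170.0, i.e. exactly TRAD - MDA
    }
}
```

    Even though every round-two run is 100 hours faster, the averaged gap comes out at exactly the assumed 170-hour difference between the approaches; with a single team doing both rounds, the learning discount would contaminate whichever approach went second.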
  24. Rollin' now.

    MDA is commonly bashed as being clumsier at round tripping than traditional coding. This paper should dispel that myth.
  25. Rolling?

    MDA is commonly bashed as being clumsier at round tripping than traditional coding. This paper should dispel that myth.

    Hi, Brian. I read through the entire paper, and couldn't find anything that tended to 'dispel that myth'. Could you indicate which part of the paper you are referring to?

    In fact, roundtripping was my primary concern once I finished reading the study. From the paper:

    '...there are other interesting aspects to MDA that we had not evaluated in this case study, such as application performance and maintainability... When you refactor an MDA-based system, you modify the original UML and re-generate code from that UML.'

    YIKES! The subtext here is that you cannot directly refactor your code, because this will get it out of sync with the PIM. This might work great in an environment where a single master architect is the only one allowed to toy with the design, and the unquestioning code monkeys are only supposed to implement stubbed methods. But in a team of capable developers you probably want to allow people to continuously refactor as they code.

    I think this study represented the best possible environment for an MDA tool, where all of the requirements are static and known up front. Because it appears that in an environment that requires the system to evolve more dynamically, you may rapidly descend into a PIM-code synchronization nightmare.
  26. Rolling?

    Could you indicate which part of the paper you are referring to?

    The paper's description of the traditional team's 2nd week is:

    "They also had some challenges with their IDE, in that if they tried to generate J2EE components from their IDE, and needed to modify them later, the components did not round-trip back into the IDE very easily."

    Because it appears that in an environment that requires the system to evolve more dynamically, you may rapidly descend into a PIM-code synchronization nightmare.

    The paper doesn't mention which MDA tool was used. Does OptimalJ suffer from the "nightmare" you describe?
  27. Reply - Rolling

    Code is generated from a pattern. Patterns are accessible to skilled, knowledgeable architects via pattern authoring. Generated code is guarded, so that if, as a developer, you want to change something, you check in with the architect to make sure you really need to change the originally designed code. Once you convince the architect that a change is necessary, he can make changes to the pattern, push a button, and shazam: your generated code does what you want it to. The point here is that the architects remain in control of their original designs as far as architecture is concerned.

    Developers are able to add code into free blocks where generated code was created, in order to further enhance/refine the application or make it more enterprise-ready.
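    To illustrate the guarded-blocks idea, here is a generic Java sketch of protected-region markers (the marker comments are invented for illustration and are not the syntax of any particular MDA tool):

```java
// Generated class with guard markers: a generator following this
// convention owns everything inside the GENERATED section and rewrites
// it on regeneration, while preserving whatever the developer put
// between the FREE BLOCK markers.
public class CustomerBean {

    // GENERATED CODE - DO NOT EDIT (regenerated from the pattern)
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    // END GENERATED CODE

    // FREE BLOCK BEGIN - developer code below survives regeneration
    public String displayName() {
        return name == null ? "(unknown)" : name.toUpperCase();
    }
    // FREE BLOCK END

    public static void main(String[] args) {
        CustomerBean b = new CustomerBean();
        b.setName("ada");
        System.out.println(b.displayName()); // ADA
    }
}
```

    On regeneration, a tool using markers like these would rewrite only the guarded section and leave the free block untouched, which is how hand-written enhancements coexist with architect-controlled generated code.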

    Michael
  28. Reply - Rolling

    Code is generated from a pattern. Patterns are accessible to skilled, knowledgeable architects via pattern authoring. Generated code is guarded, so that if, as a developer, you want to change something, you check in with the architect to make sure you really need to change the originally designed code. Once you convince the architect that a change is necessary, he can make changes to the pattern, push a button, and shazam: your generated code does what you want it to. The point here is that the architects remain in control of their original designs as far as architecture is concerned.

    Sure. In many ways MDA is a great step toward Capability Maturity Model levels 2 "Repeatable" and 3 "Defined".
  29. Thanks!

    Thanks - I haven't had a really good laugh from TSS in a long time. My own version is far less funny "and shazam - your architects are now a major bottleneck on the project!".

        -Mike
  30. Hand coding back to model?

    When auto generated code won't do and hand coded changes become scattered across the application how does your MDA tool degenerate (yuk yuk) these changes back into the model and regenerate code with hand coded changes intact?
  31. This paragraph is interesting:

    "In this study, we will not be mentioning the names of the tools being used by team members, although to ensure fair representation of the code-centric approach, we can verify that one of the market’s leading IDEs was used. We want this study to be an educational evaluation of the productivity gains that may be obtained from tools that apply the MDA approach, as compared to traditional, code-centric environments. We don’t want this study to turn into a “vendor shoot-out.”"

    So the study was sponsored by Compuware, which makes OptimalJ, an MDA tool. Yet the actual MDA tool used for the study isn't disclosed, though I have a strong suspicion it's OptimalJ. It's hard to believe Compuware would sponsor this if a different MDA tool were used. And if OptimalJ was used, I think that makes the results a little less sound.

    Overall I believe there is potential for MDA to shorten the development cycle, increase quality, etc. I just haven't tried it myself so I'm skeptical. I plan to get a copy of MDA explained and find out more.

    Michael
  32. Was this really impartial? Yes

    Although the whitepaper inadvertently creates some mystery about the MDA tool used, Compuware has publicly identified it as OptimalJ. Their intent was to test and document the claims of increased developer productivity coming from OptimalJ customers.

    From my viewpoint as a member of the MDA team, the study was impartial. We made a serious, good faith effort to equalize the skill sets of the two teams and minimize the peripheral factors. Compuware supported that effort and never tried to bias the outcome. Once we began developing, the teams worked independently of any outside influences. The MDA team's leader had substantial knowledge of OptimalJ. He functioned as any competent team leader would, helping us complete the project quickly and properly.

    I went into this study with minimal knowledge of MDA and less of OptimalJ. I come away from it convinced that the results are legitimate and that MDA has an important future in software development.
  33. Hi,

    This case study showed how MDA works in an environment where a fairly comprehensive and rigid specification is available (written by IT people). I wonder how it would stand up to a real customer with little more than a vision.

    An interesting case study would be one where the specification was not in a white paper, but rather a script that a "customer" role-plays throughout the course of the study. Teams are allowed (required) to engage the customer, ask questions and gather the requirements. Each team would be provided with their own "customer", so no tug-o-customer occurs.

    The script would include some significant changes to both the functional and non-functional requirements, as well as some annoying "oops, I forgot this requirement" and "actually, I said no but meant yes" events. For control, the hosting body would slowly "leak" these major script events to the "customers", so that both teams hear the information at the same time.

    I think this would be an excellent and more realistic study. Anyone up to hosting or competing in such a competition?

    Cheers,
    Clinton
  34. That would indeed be an interesting case-study!

    I think that this case study is indeed valid, but within a very restricted scope: IF all requirements are well known and well understood, AND you won't use any persistence frameworks at all, AND your code for some reason (use of EJB's) will have to contain a lot of duplication, THEN an MDA approach is faster.

    Some might say that the assumption of well-known requirements is almost never valid (wonder what Kent Beck, Ron Jeffries or Martin Fowler would think of that)!

    Some might say that not using any persistence framework at all is a very bad idea (wonder what Gavin King, Clinton Begin or Robin Roos would think of that)!

    Some might say that code generation is a design smell, covering the smell of duplicated code.

    Well, I am not "Some", and code generation is indeed a valid approach if all else fails. Code generation will only save you work if your code is going to contain a lot of duplication and you are unable to refactor it away, because you are using a framework that disallows such refactorings, say EJBs. Otherwise I believe simply removing the duplication is the better option. You might say that instead of removing the duplication, you hide it by using another language (UML). Maybe very soon it will be possible to actually remove it by using AOP?

    Code generation will also work best in a waterfall project (the underlying assumption of stable, well-understood requirements is the same).
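    To make the duplication argument concrete, here is a toy, hypothetical generator sketch in Java. It is not how any real MDA tool works internally; it only shows why generation pays off when (and only when) the output is repetitive boilerplate that a framework prevents you from refactoring away:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of template-driven code generation: the repeated
// getter/setter boilerplate lives in ONE template instead of N classes.
// Purely a sketch; class and field names are invented for illustration.
public class ToyGenerator {

    // Emit the source of a simple DTO class from a field-name -> type map.
    static String generateDto(String className, Map<String, String> fields) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        for (Map.Entry<String, String> f : fields.entrySet()) {
            String name = f.getKey(), type = f.getValue();
            String cap = Character.toUpperCase(name.charAt(0)) + name.substring(1);
            src.append("    private ").append(type).append(' ').append(name).append(";\n");
            src.append("    public ").append(type).append(" get").append(cap)
               .append("() { return ").append(name).append("; }\n");
            src.append("    public void set").append(cap).append('(')
               .append(type).append(" v) { ").append(name).append(" = v; }\n");
        }
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("id", "String");
        fields.put("price", "double");
        System.out.println(generateDto("ItemDTO", fields));
    }
}
```

    The point of the sketch: every generated line is pure duplication, which is exactly the kind of code that is cheap to generate but would also be cheap to eliminate in a language or framework that allowed the abstraction.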

    The study Clinton is suggesting would indeed be interesting, and probably more valid than the performed one. I believe though, that it should also be allowed for the standard team to use any persistence framework, even CMP.

    But why use EJBs at all for this study? I fail to see the business requirements that make their use necessary, and it might be argued that code generation is just hiding the duplication introduced by unnecessary use of EJBs.
  35. This case study showed how MDA works in an environment where a fairly comprehensive and rigid specification is available (written by IT people). I wonder how it would stand up to a real customer with little more than a vision.

    MDA's lead widens when requirements are fuzzy and emerge gradually. Hand coding has a tendency to affect developers detrimentally: with hand code, developers are taken down fruitless implementation paths, distracted by petty or irrelevant implementation details, and often denied an overview of their application. The frustration level is higher with hand code, and hand coding penalizes design speculation. Fuzzy and evolving requirements favor a methodology that allows rapid refinement of the analysis by minimizing the derivative labor required to validate each refinement. That is, less strain gives a more nimble response to requirements flux.

    An interesting case study would be one where the specification was not in a white paper, but rather a script that a "customer" role-plays throughout the course of the study. Teams are allowed (required) to engage the customer, ask questions and gather the requirements.

    Indeed the Shlaer-Mellor (MDA's precursor) training I got emphasized requirements gathering by evaluating the interviews of subject matter experts. A developer who cares more about twiddling bits by hand might be socially handicapped at interactive gathering of requirements from computer illiterate folk. Maybe the hand coder comes up short.
  36. Productivity measured in LOC?[ Go to top ]

    I know all measurements can be misleading for several reasons... but I'm just asking for rough numbers: what is a generally acceptable average LOC per day per person?

    To remain somewhat article-related, what was the average LOC/day of the two teams (including generated code) taking part in the case study? (I'd also be interested in the total LOC produced by each team, from a maintenance point of view.)
  37. LOC is evil :)[ Go to top ]

    I wouldn't worry about the LoC marks. They are deceiving.

    - Better designed software will probably have less LoC
    - A more experienced programmer will have a better design (and hence less LoC)
    - As soon as code generation comes in LoC goes out of the window

    I prefer to benchmark on output metrics like:

    - Are the requirements accounted for?
    - Do all of the tests run?
    - How many bugs are there?
  38. LOC is evil - not necessarily[ Go to top ]

    Lines of Code is a *decent* normalizing metric within a project team. Given that the language, developers and development tools are constant, LOC can be used to normalize other metrics.

    The only other normalizing metric would be function point analysis, which is just as debatable as LOC.

    There are several normalizing metrics similar to lines of code, such as McCabe's code complexity metric (basically a count of the number of conditionals). This is relatively simple to compute.

    Then you get into more difficult things to compute, like the number of paths through the code, data flow (probably the best complexity metric), and so on.

    Yes, LOC is not perfect. But, it's easy to calculate, and if the values are used in a comparative fashion with other values being held equal, it CAN give you some insights.

    So, you cannot easily say "300 LOC/day is good; less than that is bad", but you CAN say "this project has 10 bugs/KLOC and this one has 20; the former is of higher quality", or "John's productivity was 200 LOC/day on Project A and 300/day on Project B. What happened on Project B to make John write more code?" Obviously the last one requires some analysis of defect density and other quality measures.

    No one metric is illustrative, but all are useful when looked at together. Including LOC.
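    The comparative use of LOC described above can be sketched as a trivial calculation. All project names and counts below are invented purely for illustration:

```java
// Sketch: LOC as a normalizing metric, used only comparatively,
// with other factors (language, team, tools) held equal.
public class DefectDensity {

    // Defects per thousand lines of code (KLOC).
    static double defectsPerKloc(int defects, int linesOfCode) {
        return defects * 1000.0 / linesOfCode;
    }

    public static void main(String[] args) {
        // Two hypothetical projects of equal size:
        double projectA = defectsPerKloc(120, 12_000); // 10 defects/KLOC
        double projectB = defectsPerKloc(240, 12_000); // 20 defects/KLOC
        // With LOC held equal, the comparison says something;
        // either number in isolation says very little.
        System.out.printf("A: %.1f, B: %.1f defects/KLOC%n", projectA, projectB);
    }
}
```

    The design point is the one the post makes: the ratio is only meaningful as a comparison within a controlled setting, never as an absolute "good/bad" threshold.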
  39. How can you compare productivity when each team used different design patterns?

    <table>
    <tr><th>Pattern</th><th>Traditional Team</th><th>MDA Team</th></tr>
    <tr><td>Session-entity wrapper</td><td>Yes</td><td>Yes</td></tr>
    <tr><td>Primary Key generation in EJB components</td><td>Yes</td><td>Yes</td></tr>
    <tr><td>Business delegate</td><td>Yes</td><td>Yes</td></tr>
    <tr><td>Data Transfer Objects (DTOs)</td><td>Yes</td><td>Yes</td></tr>
    <tr><td>Custom DTOs</td><td>Yes</td><td>Yes</td></tr>
    <tr><td>DTO Factory</td><td>Yes</td><td>No</td></tr>
    <tr><td>Service locator</td><td>Yes</td><td>No</td></tr>
    <tr><td>JDBC for Reading via Data Access Objects (DAOs)</td><td>Yes</td><td>No</td></tr>
    <tr><td>Business interface</td><td>Yes</td><td>No</td></tr>
    <tr><td>Model driven architecture</td><td>No</td><td>Yes</td></tr>
    </table>

    In summary the Traditional Team used these extra patterns:
    * DTO Factory
    * Service Locator
    * JDBC for Reading via DAO
    * Business interface

    If neither team had used MDA, the team that didn't use these extra patterns would have been faster. Therefore these results don't mean anything.
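    For context on the extra patterns listed, the Service Locator idea can be sketched roughly as follows. This is a self-contained illustration, not code from either team: the Supplier stands in for a real JNDI lookup, and all names are invented:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of the Service Locator pattern: centralize and cache expensive
// lookups (in real J2EE code these would be JNDI lookups of EJB homes).
// The Supplier parameter stands in for the JNDI call so the example
// runs on its own, outside any container.
public class ServiceLocator {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    public <T> T lookup(String jndiName, Supplier<T> expensiveLookup) {
        // Perform the lookup once per name; serve later calls from cache.
        return (T) cache.computeIfAbsent(jndiName, k -> expensiveLookup.get());
    }

    public static void main(String[] args) {
        ServiceLocator locator = new ServiceLocator();
        // First call performs the "lookup"; the second hits the cache,
        // so both references point at the same cached instance.
        String home1 = locator.lookup("ejb/OrderHome", () -> new String("OrderHome"));
        String home2 = locator.lookup("ejb/OrderHome", () -> new String("other"));
        System.out.println(home1 == home2);
    }
}
```

    This also illustrates the poster's point: the pattern costs a little code up front, so whether it speeds a team up or slows it down depends on how many lookups the application actually performs.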
  40. In the end it is just code. . .[ Go to top ]

    I was the team lead on the non MDA team.

    <rebuttal>
    As stated in the white paper, some of the patterns we used actually sped up our productivity and/or reduced the lines of code we had to debug.
    </rebuttal>

    <my-two-cents>
    I approached the development of this application as a *real* development problem and did not attempt to use any shortcuts beyond the code-generation facilities of the IDE and the reuse of my team's knowledge. It is my belief that the MDA team had the same attitude, and we both strove to build a solid implementation of the spec while only faintly paying attention to the ticking clock.

    In my opinion, the study addresses a real and existing context:

    Take an IT shop where open-source tools such as XDoclet and Middlegen are not the norm, and a standard IDE is used by all developers on a particular project. (These places do exist.)
    For many reasons these developers are happy using the IDE: familiarity through consistent use across many projects (not just J2EE), extensive documentation (they never have to look at the source code when a new feature is being used), and management is happy knowing there is a vendor behind the methodologies used and that they can likely find other developers who also know the tool and can presumably become productive and continue to maintain the code.

    Will those developing software in such a context fare better using "traditional" methods of code production:
    *The IDE's built-in wizards
    *Occasional bits of copy-paste
    *Typing new code
    Or using another vendor's product (one that is comparable to the first in that it is supported by a vendor, includes extensive documentation, can be reused, etc.) that *enforces* a model-driven development process and also requires that developers produce code using:
    *The IDE's built-in wizards
    *Occasional bits of copy-paste
    *Typing new code

    The major difference is the emphasis, in the second case, on using the model as source code. It reminds me of a visual IDL enforcer. I always thought it a good thing that in CORBA you defined your contracts before any implementation could begin, and the MDA approach also insists on this. The MDA team gained some time by never being out of sync with their model. My team had to maintain the model in their heads or spend additional time documenting it, and so we did have a few meetings where we determined that we had slipped a bit off track. It is interesting to me that we used the Business Interface pattern to help us stay on track and clearly define our contracts, while they used a visual model to accomplish much the same thing. (We were able to use our IDE to generate stub code based on our Business Interfaces, so we benefited time-wise as well.)
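    As a rough, hypothetical illustration (not the team's actual code), the Business Interface idea looks like this in plain Java: one interface defines the contract, and in EJB 2.x both the remote/local interface and the bean class would implement or mirror it, so the compiler keeps them in sync. All names below are invented:

```java
// The shared business interface: the contract is defined here, once.
interface OrderService {
    double orderTotal(String orderId);
}

// In a real deployment this would be the EJB bean class; here it is a
// plain class so the sketch compiles and runs on its own.
class OrderServiceBean implements OrderService {
    public double orderTotal(String orderId) {
        return 42.0; // stand-in for real pricing/lookup logic
    }
}

public class BusinessInterfaceDemo {
    public static void main(String[] args) {
        // Clients code against the interface, never the bean class,
        // which is what kept the contract visible and stable.
        OrderService service = new OrderServiceBean();
        System.out.println(service.orderTotal("A-1"));
    }
}
```

    The parallel to the post is that this interface plays the same "contract first" role for a hand-coding team that the visual model played for the MDA team.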

    I personally was very pleased with the tempo at which my team completed the application development. I was surprised at the difference in development time between the teams (I thought all along that we were winning), and I am satisfied that both teams created a working implementation of the spec. It will be up to future studies to more carefully determine the maintainability and performance aspects of the code created by each team. I, for one, hope these do take place.

    Considering the context of the study and the development strategies it is designed to address, I have come away from the experience with a new respect for MDA as a very useful technology/strategy, and am very interested to see whether it stands up to further testing with similar (or superior) results.

    - Oh yeah, in the end (beginning) they have a model that can potentially be used to generate a completely different implementation (using other pattern templates or even other languages), presumably in just as rapid a fashion.

    </my-two-cents>

    Owen Taylor
    Senior Enterprise Architect
    The Middleware Company
  41. In the end it is just code. . .[ Go to top ]

    So one team got to use an MDA tool which is supposedly there to increase their productivity, while the second team was artificially limited to using nothing but their IDE and was forbidden from using non-MDA productivity-enhancing tools.

    And the results showed this. PetStore is actually small enough that an average developer _could_ easily keep the whole model in his head, and I've led teams of 1-3 people who have accomplished a lot more in the same time frame. The fact that the graphics, HTML, and DB schema were pre-supplied only exacerbates this.

    And no, I'm not saying I'm a super programmer, or trying to get into a pissing contest, or saying your team was sub-par. What I'm saying is that a typical commercial development team shouldn't take anywhere near 5 weeks to develop a system as tiny as pet store in the year 2003. And throwing in static requirements _and_ pre-made HTML and DB schema only makes it more strange (heck, the MDA implementation time wasn't even very interesting given these huge legs up). If I apply _either_ your MDA results or "traditional" results (with a static pet store) to the application group I'm working with right now (and their true enterprise application, with mondo reliability/throughput issues), they'd be entering production sometime in 2004 or later instead of this month. So are the guys and gals I work with cyborg super programmers? I think not.

        -Mike
  42. Version Control[ Go to top ]

    At the beginning of this article I read this:

    "StarTeam is recognized as a leading version control system. We use it internally at The Middleware Company for many purposes."

    While I certainly believe it is all right to give everyone a free choice of VCS, I certainly DO NOT understand what the word "leading" means in this context.

    Is it "leading" in the sense of market share?

    Is it "leading" in the sense of features?

    C'mon guys... have you ever heard of BitKeeper, CVS, Perforce, SourceSafe?

    What is the point in reading this report further when you write such b******t at the beginning? Are you paid to promote StarTeam?

    I have nothing against StarTeam. But it is certainly NOT RECOGNIZED as the leading VCS tool on the market, and it never has been.
  43. Seeing the bigger picture[ Go to top ]

    Nicolas Cugnot introduced the first steam-powered road vehicle in 1769. Imagine the TSS discussion thread it might have provoked: Many would express curiosity and intrigue over the idea of a self-propelled vehicle. But others would be skeptical. One critic would observe that a horse-drawn wagon could provide equivalent cargo-hauling functionality with greater ease of maintenance, at lesser cost, and sans all that noise and air pollution. Another would point out that a man on a bicycle or even on foot could easily outrun the new contraption. And a third, particularly acerbic commentator might dismiss the invention out of hand because the inventor was French.

    The critics would be missing the bigger picture.

    MDA is not just another, "cooler" way to build Petstore. And it's not just about code generation. It's fundamentally a way to connect code to a model that makes the model more useful and thus improves the development and maintenance of software.

    It seems developers have a kind of love-hate relationship with modeling: At one extreme are those who see models as unnecessary constraints on the real work, i.e. writing code. At the other extreme are those who see models as the centerpiece of development, with code as simply a byproduct. (They may also see modeling as a necessary control over loose cannons in the first group.) In between are those (probably the majority) who use models up to a point but set them aside after the development process reaches a certain stage.

    MDA purports to make models more useful by:

    * Abstracting to a higher level, allowing for a more complete and comprehensive model. This makes it possible to generate a "complete" app.
    * Inserting the PSM (Platform Specific Model) between the top-level model and code. This makes it possible to model (and thus auto-generate) platform-specific features (like EJB custom finders) w/o compromising the top-level abstraction.
    * Opening the transformation process, so you can control the generated output.
    * Standardizing the whole thing.

    While MDA is a much more mature technology than Cugnot's automobile (there are viable implementations out there now, including OptimalJ), it is still maturing. For example, the round tripping issue needs to be addressed. Nevertheless, looking at the bigger picture, I repeat my conviction that MDA has an important future in software development.
  44. Seeing the bigger picture[ Go to top ]

    A lot of the negativity you may be seeing here is due to the fact that people have heard _exactly_ the same line for the past 20 years. And each new modelling fad tells how it's magically different from the older (failed) one, and how this time it really will make a difference...

    I'm quite serious, you can take _exactly_ the terms you've used to describe MDA, plug in an old term like CASE, and find that your argument matches precisely the arguments made in the 80s by CASE advocates.

    All the old technologies fell apart because visual models don't convey enough semantic content to write a real app with all of its behavior. What you see is pretty until you realize that major characteristics of the application do not appear at all in the model.

        -Mike
  45. Seeing the bigger picture[ Go to top ]

    ...visual models don't convey enough semantic content to write a real app with all of its behavior.

    If you're referring to the current version of UML, then you're right. It is possible that OMG's quest for action semantics leads to completely visual programming, which has been done before. The language Prograph proved years ago that every level of detail of object orientation is amenable to entirely visual programming.
  46. Seeing the bigger picture[ Go to top ]

    <brian miller>
    It is possible that OMG's quest for action semantics leads to completely visual programing, which has been done before.
    </brian miller>

    Is visual programming better than textual programming? Is OCL better than Java? Or the action semantics language they are planning to add?
  47. unbelievable[ Go to top ]

    "While MDA is a much more mature technology than Cugnot's automobile (there are viable implementations out there now, including OptimalJ), it is still maturing. For example, the round tripping issue needs to be addressed. Nevertheless, looking at the bigger picture, I repeat my conviction that MDA has an important future in software development."

    You're using a classic underhanded argumentation trick: make the opponents of idea A look like idiots because idea A is really a lot like idea B, which we all know was successful. Right.

    MDA has absolutely nothing to do with steam-powered road vehicles. It's not a tremendously innovative idea - products have been attempting to do this kind of thing for years. MDA's one claim to innovation is that it is a _standard_ approach to platform-independent round-tripping. Otherwise, this is yet another run at a very (very) old fence.

    Note I'm not saying anything particularly bad about the fence. Visual tool round-tripping can improve productivity. By how much? And with what applicability? We have a fair amount of evidence about the productivity of CASE tools in an appropriate environment, and prior attempts should be an indicator. There will be some teams that will benefit from it, but in a big-picture sense, the oral and ad-hoc communication media of small development teams will most likely remain the productivity champions.

    And don't get me started about "portability across architectures". For Hello, World! maybe.
  48. How to estimate labor of effort?[ Go to top ]

    How do you guys estimate the number of hours needed for a project of this size before starting to work? Based on the number of requirements?

    How many screens does each team implement?